Energy harvesting (EH) technology may provide a perpetual power supply to energy-constrained wireless systems such as sensor networks. To maximize the system performance, the EH transmitters should adapt their output powers based on the energy harvested up to the time of transmission (causality constraint), whereas theoretical performance bounds are often determined for non-causally adapted output powers [lita1]-[lit5]. EH from the environment (e.g., solar or wind) is an intermittent process, which can be mitigated by using wireless power transfer (WPT) based on far-field radio frequency (RF) radiation from a distant energy source. If the same signal is used for simultaneous energy and information transfer, a fundamental tradeoff exists between energy transfer and the achievable rate, as has been shown, e.g., for the noisy channel [lita3], the multiple-input multiple-output (MIMO) broadcast channel [lita4], and the multiple-access channel (MAC) [lita5].

In this paper, we study an EH network that consists of a base station (BS) and multiple EH users (EHUs), where the BS broadcasts RF energy to the EHUs over the downlink, and the EHUs send information simultaneously to the BS over the uplink MAC (denoted here as the EH MAC). Such a scenario is relevant, e.g., for sensor networks operating in hostile or inaccessible environments. Only a few existing works consider similar systems [litb2], [litb3], and only [litb3] studies the achievable rates of the EHUs in block fading channels. In [litb3], the EHUs send their information to the BS using time-division multiple access (TDMA), and, for each fading state, the EHUs' sum rate is maximized by jointly optimizing the duration of the downlink interval allocated to the BS and the durations of the uplink intervals allocated to each of the EHUs. The system model considered in this paper is more general: (a) the EHUs are allowed to send their information simultaneously to the BS, instead of via TDMA; (b) the EHUs are equipped with EH batteries that can store energy over multiple fading states (slots); (c) the BS transmit power may change from slot to slot subject to a long-term average power constraint. Assuming only availability of causal channel state information (CSI), we propose online rate and power allocation strategies for the EHUs and the BS so as to maximize the average achievable rates in the EH MAC.

The network consists of one BS and multiple EHUs. The time is divided into slots of equal duration, whose total number is assumed to grow without bound. Each EHU's uplink and downlink channels are affected by independent channel fading and additive white Gaussian noise (AWGN) of a given power. The fading in each channel is a stationary and ergodic random process and follows the block fading model (i.e., the gains are constant within each slot but change from one slot to the next). In each slot, each EHU's uplink and downlink channels are characterized by their respective fading power gains.
For convenience, these gains are normalized by the AWGN power. We assume that the BS and the EHUs operate in half-duplex mode. Two multiplexing schemes for energy and information transfer are possible:

(1) Time-division information and power transfer (TDT): here, the uplink and downlink transmissions occur in the same frequency band but in different time slots. Thus, each time slot is dedicated either to uplink or to downlink transmission. The uplink and downlink channels are assumed reciprocal, i.e., their power gains coincide. To mathematically model TDT, we introduce a binary "scheduling" variable $a_i \in \{0,1\}$ that indicates whether slot $i$ is used for downlink energy broadcast or for uplink information transmission.

(2) Frequency-division information and power transfer (FDT): here, the uplink and downlink transmissions occur simultaneously but in two different frequency bands. Thus, in each time slot, information and energy are transmitted simultaneously over the uplink and the downlink, respectively. In this case, the uplink and downlink power gains are assumed to be independent. The downlink may require negligible spectrum resources (e.g., a "power carrier" sinusoid may be used).

The BS output power in slot $i$ is denoted by $p_B(i)$. Two types of constraints are imposed on $p_B(i)$: (a) a peak power constraint, $p_B(i) \leq P_{max}$, and (b) an average power constraint, $\mathbb{E}[p_B(i)] \leq P_{avg}$. For both TDT and FDT, the long-term average output power of each EHU is set equal to its average harvested power, which is feasible since the batteries can store energy across slots.

The space of all possible channel-gain vectors is divided into disjoint regions, such that each region is associated with a binary expansion in which a 1 implies an active user and a 0 implies a silent user. If a set of indices denotes the positions of the 1s in this binary expansion, then the remaining EHUs are silent, whereas the EHUs with those indices transmit with the optimal powers of [lit2, Eq. (10)]. The constants appearing there are Lagrange multipliers, which are determined from the average power constraints with "$\leq$" replaced by "$=$".

Although ([rav17]) is a non-convex optimization problem, we can still apply the Lagrange duality method to solve it, because it satisfies the _time-sharing_ condition of [lit3] and has zero duality gap. To see this, we transform the inner sum of the cost function of ([rav17]) into an equivalent form and note that its terms are increasing functions of the allocated powers. Thus, similarly to [lit4, (P4)], we can conclude that the maximum value of the optimization problem ([rav17]) is concave in the constraint parameters, and thus ([rav17]) has zero duality gap. In order to apply the Lagrange duality method, the Boolean constraint $a_i \in \{0,1\}$ is relaxed to a linear constraint $0 \leq a_i \leq 1$. In the following, we show that the optimal solution is attained at the boundaries of $[0,1]$, thus exactly satisfying the original Boolean constraint.

The Lagrangian augments the cost function of ([rav17]) with the terms $\tau_{1i}\,a_i + \tau_{2i}\,(1 - a_i)$, where $\tau_{1i}$ and $\tau_{2i}$ are the non-negative Lagrange multipliers representing the constraints $a_i \geq 0$ and $a_i \leq 1$, respectively; further multipliers correspond to the peak and average power constraints. By differentiating ([rav19]) with respect to (w.r.t.) the transmit powers and $a_i$, and then setting both derivatives to zero, we obtain the optimality conditions. According to the Karush-Kuhn-Tucker (KKT) conditions, complementary slackness must hold, i.e., $\tau_{1i}\,a_i = \tau_{2i}\,(1 - a_i) = 0$.
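As a concrete, hedged illustration of the online strategy, the following Python sketch simulates per-slot power decisions for two EHUs. Everything in it is an assumption chosen for illustration: the exponential fading and harvesting models, the numerical values, and the simple per-user water-filling rule, which stands in for the joint allocation of [lit2, Eq. (10)]; in the paper, the multipliers follow from the average-power constraints (e.g., via bisection) rather than being fixed by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

K, N = 2, 10_000            # EHUs, time slots (hypothetical sizes)
lam = np.array([2.0, 3.0])  # per-user Lagrange multipliers, assumed given here;
                            # in the paper they enforce E[p_k] = avg. harvested power
battery = np.zeros(K)
rates = []

for _ in range(N):
    x = rng.exponential(1.0, size=K)       # normalized uplink power gains
    battery += rng.exponential(0.05, K)    # energy harvested this slot
    # Simplified per-user water-filling in lieu of the paper's joint rule:
    p = np.minimum(battery, np.maximum(1.0 / lam - 1.0 / x, 0.0))
    battery -= p
    rates.append(np.log2(1.0 + np.sum(p * x)))  # MAC sum rate this slot

print(f"average sum rate: {np.mean(rates):.3f} bits/channel use")
```

The key design point survives the simplification: decisions in slot $i$ use only causally available quantities (current gains and battery states), while the multipliers encode the long-term constraints.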
The average and the maximum output powers of the BS are set to $P_{avg}$ and $P_{max}$, respectively, and the remaining system parameters are chosen accordingly. Because the average power harvested by each EHU is very low, due to the severe attenuation of the wireless power transfer, both the FDT and TDT schemes allocate very few time slots to uplink data transmissions, thus achieving comparatively low rates. Although FDT transfers power and information simultaneously, under TDT only a few time slots are lost to wireless power transfer compared to FDT. Thus, FDT outperforms TDT only by a small margin.

References

[lita1] V. Sharma, U. Mukherji, V. Joseph, and S. Gupta, "Optimal energy management policies for energy harvesting sensor nodes," IEEE Trans. Wireless Commun., vol. 9, pp. 1326-1336, Apr. 2010.

[lita2] O. Ozel, K. Tutuncuoglu, J. Yang, S. Ulukus, and A. Yener, "Transmission with energy harvesting nodes in fading wireless channels: optimal policies," IEEE J. Sel. Areas Commun., vol. 29, no. 8, pp. 1732-1743, Sep. 2011.

[litb2] K. Huang and V. K. N. Lau, "Enabling wireless power transfer in cellular networks: architecture, modeling and deployment," IEEE Trans. Wireless Commun., pp. 902-912, Feb. 2014.

[lit1] D. N. Tse and S. Hanly, "Multi-access fading channels - Part I: Polymatroid structure, optimal resource allocation and throughput capacities," IEEE Trans. Inf. Theory, pp. 2796-2815, Oct. 1998.
Abstract:
We consider the achievable average rates of a multiple-access system, which consists of energy-harvesting users (EHUs) that transmit information over a block fading multiple-access channel (MAC) and a base station (BS) that broadcasts radio frequency (RF) energy to the EHUs for wireless power transfer. The information (over the uplink) and the power (over the downlink) can be transmitted either in time-division duplex or in frequency-division duplex. For the case when the EHUs' battery capacities and the number of transmission slots are both infinite, we determine the optimal power allocation for the BS and the optimal rates and power allocations for the EHUs that maximize the achievable rate region of the MAC. The resulting online solution is asymptotically optimal, and is also applicable for a finite number of transmission slots and finite battery capacities.

Index terms: multiuser channels, energy harvesting, wireless power transfer, fading channels, multiplexing.
* * *
High-dimensional multivariate data are becoming increasingly prevalent, and the estimation of the covariance matrix for such data sets is an important fundamental problem. The classical estimator, i.e. the sample covariance matrix, though, is known to be highly non-robust under longer-tailed alternatives to the multivariate normal distribution, as well as highly non-resistant to outliers in the data. Consequently, there have been numerous proposals for robust alternatives to the sample covariance matrix, one of the earliest being the M-estimators of multivariate scatter. As with the multivariate M-estimators of scatter, most of the subsequent proposals for robust estimators of multivariate scatter are affine equivariant. However, for sparse multivariate data, that is, when the sample size is less than or not much larger than the dimension of the data, such estimators of scatter do not differ greatly from the sample covariance matrix, and when the sample size is below the dimension they are simply proportional to the sample covariance.

Even when the distribution is normal and there are no outliers in the data set, the sample covariance matrix can still be unreliable for sparse data sets due to the large number of parameters being estimated. Consequently, one may wish to model the covariance matrix using fewer parameters, or one may wish to give preference to certain covariance structures and pull the estimator towards such structures via penalization or regularization techniques. Traditionally, research on robust estimators of multivariate scatter has not taken these concerns into account, and the statistics literature has focused primarily on the unrestricted robust estimation of the scatter matrix. Within the signal processing community, though, there has been an increasing interest in the M-estimators of multivariate scatter and, more recently, an interest in developing regularized versions of them. An important mathematical contribution arising from the area of signal processing is the realization that treating the multivariate scatter matrices as elements of a Riemannian manifold and using the notion of geodesic convexity can be very useful, leading to elegant theory as well as new results. These concepts had been applied previously within the statistics literature, but only for the specific case of the distribution-free M-estimator of multivariate scatter. More recently they have been used implicitly in the survey paper on M-functionals of multivariate scatter.

The purpose of the present paper is threefold. We first review the standard Riemannian geometry on the space of symmetric positive definite matrices and the notion of geodesic convexity in Section [sec:g-convexity]. In particular, we introduce and utilize first and second order Taylor expansions of such functions with respect to geodesic parametrizations. Such expansions allow us to introduce sufficient conditions for a function to be geodesically convex. In addition, we introduce the concept of geodesic coercivity, which is important in establishing the existence of both the M-estimators of scatter and their regularized versions.
As in classical convex analysis, a real-valued function on the space of symmetric positive definite matrices which is continuous, strictly geodesically convex and coercive has a unique minimizer. Our second contribution is a general analysis of regularized M-estimators of multivariate scatter with respect to geodesic convexity and coercivity in Section [sec:regularized.scatter]. Our starting point are results showing that the log-likelihood-type functions underlying M-estimators of multivariate scatter are geodesically convex under rather general conditions. We show that various penalty functions favoring matrices which are close to the identity matrix, or to multiples of the identity matrix, are geodesically convex. This leads to a rather complete picture concerning existence and uniqueness of regularized M-functionals of scatter. It also provides new results on regularized sample covariance matrices when using penalty functions which are geodesically convex but not convex in the inverse of the covariance matrix. Furthermore, we propose a cross-validation method for choosing a scaling parameter for the penalty function. Finally, we present a general partial Newton algorithm to minimize a smooth and strictly geodesically convex function in Section [sec:algorithm]. This algorithm is a generalization of an earlier partial Newton method, with guaranteed convergence. We illustrate this method with a numerical example in Section [sec:example]. All proofs and some auxiliary results are deferred to Section [sec:proofs] and to a supplement [sec:auxiliary].

We begin with some notation and a brief background review. Let the space of symmetric matrices in $\mathbb{R}^{q\times q}$ be denoted by $\mathbb{R}^{q\times q}_{\rm sym}$, and let $\mathbb{R}^{q\times q}_{{\rm sym},+}$ stand for its subset of positive definite matrices, i.e. symmetric matrices with eigenvalues in $(0,\infty)$. For a distribution $Q$ on $\mathbb{R}^q$ with given center $0$ and a function $\rho$, an M-functional of multivariate scatter can be defined as a matrix which minimizes the objective function
$$L(\Sigma, Q) \;=\; \int \rho\big(x^\top \Sigma^{-1} x\big)\, Q(dx) \;+\; \log\det(\Sigma)$$
over $\Sigma \in \mathbb{R}^{q\times q}_{{\rm sym},+}$. When $Q$ represents an empirical distribution, the minimizer defines an M-estimator of scatter, and the objective function can be viewed as a generalization of the negative log-likelihood function arising from an elliptical distribution. A centering term is not needed when working with empirical distributions; in general, though, such a term allows one to consider distributions for which the integral above would otherwise be infinite. For continuous $\rho$ with sill, defined below, a minimizer is known to exist provided no subspace contains too many data points, or specifically if the following condition holds.

Condition 1. For all linear subspaces $\mathbb{V}$ with $\dim(\mathbb{V}) < q$, the mass $Q(\mathbb{V})$ is strictly below a bound depending on $\dim(\mathbb{V})$ and the sill of $\rho$. (Note that the function $\rho$ in the present paper corresponds to a differently scaled function in other publications.)

If $\rho$ is differentiable, then the critical points, and hence any minimizer, of the objective satisfy the M-estimating equations
$$\Sigma \;=\; \int u\big(x^\top \Sigma^{-1} x\big)\, x x^\top\, Q(dx),$$
where $u = \rho'$. Furthermore, if we define $\psi(s) = s\,\rho'(s)$, then the sill equals the limit $\lim_{s\to\infty}\psi(s)$ whenever the latter exists.
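To make the estimating equations concrete, here is a minimal sketch of the classical fixed-point iteration for the distribution-free (Tyler's) M-estimator mentioned above, which corresponds to the weight $u(s) = q/s$. The sample size, the multivariate Cauchy test data and the trace normalization are illustrative choices, not the paper's.

```python
import numpy as np

def tyler_scatter(X, n_iter=200, tol=1e-10):
    """Fixed-point iteration Sigma <- (q/n) sum_i x_i x_i' / (x_i' Sigma^{-1} x_i),
    i.e. Tyler's M-estimating equation, normalized to trace(Sigma) = q."""
    n, q = X.shape
    Sigma = np.eye(q)
    for _ in range(n_iter):
        s = np.einsum('ij,jk,ik->i', X, np.linalg.inv(Sigma), X)  # x' Sigma^{-1} x
        Sigma_new = (q / n) * (X / s[:, None]).T @ X
        Sigma_new *= q / np.trace(Sigma_new)
        if np.linalg.norm(Sigma_new - Sigma) < tol:
            return Sigma_new
        Sigma = Sigma_new
    return Sigma

rng = np.random.default_rng(1)
A = np.diag([1.0, 1.0, 2.0, 3.0])
Z = rng.standard_normal((500, 4)) @ A.T
X = Z / np.abs(rng.standard_normal((500, 1)))  # multivariate Cauchy, scatter A A'
print(np.round(tyler_scatter(X), 2))           # recovers A A' up to scale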
To assure the uniqueness of a minimizer, or a unique solution to the M-estimating equations, further conditions on the function $\rho$ are needed. It has been known since the introduction of the M-estimators of scatter that one such sufficient condition is the following.

Condition 2. The function $\rho$ is differentiable, with $u = \rho'$ being non-increasing and $\psi(s) = s\,\rho'(s)$ being non-decreasing, and strictly increasing on a suitable interval.

The original proof of uniqueness assumes more restrictive conditions on the distribution than those given by Condition 1, although it has been shown that Conditions 1 and 2 are sufficient for the existence of a unique solution to the estimating equations, i.e. for the existence and uniqueness of the M-estimator of scatter. Some common examples of M-estimators satisfying Condition 2 are Huber's M-estimator, whose weight function is constant up to a threshold and decays like $1/s$ beyond it (with suitable tuning constants), and the maximum likelihood estimators derived from an elliptical t-distribution with $\nu$ degrees of freedom, for which $u(s) = (\nu + q)/(\nu + s)$.

The above conditions lack some intuition as to why the objective has a unique minimum. The earlier proofs of uniqueness are based on a study of the M-estimating equations. Recall that in the classical case, when $\rho$ corresponds to the negative log-likelihood under a $q$-dimensional normal distribution with mean zero and covariance $\Sigma$, i.e. when $\rho(s) = s$, the objective is strictly convex in $\Sigma^{-1}$ and hence has a unique minimizer, namely the sample covariance matrix. For general $\rho$, however, the objective tends not to be convex in $\Sigma$ or in $\Sigma^{-1}$.

Important insight into the objective function has recently been given within the area of signal processing. In particular, it has been shown that if the function $s \mapsto \rho(e^s)$ is convex, then the objective is geodesically convex in $\Sigma$, and that if $s \mapsto \rho(e^s)$ is strictly convex, then the objective is strictly geodesically convex provided the data span $\mathbb{R}^q$. Consequently, when Condition 1 holds, the minimizer set is a geodesically convex set when $\rho(e^s)$ is convex, and the minimizer is unique when $\rho(e^s)$ is strictly convex. The results on geodesic convexity, or g-convexity, not only give a mathematically elegant insight into uniqueness, but they also yield more general results. For example, $\rho$ need not be differentiable. Also, when $\rho$ is differentiable, then $\rho(e^s)$ is (strictly) convex if and only if $\psi$ is (strictly) increasing, with no additional conditions on $u$ being needed, i.e. $u$ need not be non-increasing. The notion of g-convexity also allows for the development of new results regarding minimizing the objective over a g-convex subset of $\mathbb{R}^{q\times q}_{{\rm sym},+}$, as well as minimizing a penalized objective function when the penalty function is also g-convex. Before addressing these problems, though, we provide a thorough review and present some new results on the notion of geodesic convexity.

Note that our objective function assumes $0$ to be the center of the distribution. In various applications in signal processing the center of $Q$ is often known or hypothesized, and consequently all the aforementioned signal processing references presume a known center. In more traditional location-scatter problems, one could embed the location-scatter problem in dimension $q$ into a scatter-only problem in dimension $q+1$, but regularization in this setting is less clear. If the location parameter is merely a nuisance parameter, then one can first center the data using an auxiliary estimate of location. Alternatively, the location parameter can be removed by symmetrization, i.e. instead of $Q$ one considers the symmetrized distribution of the difference of two independent random vectors with distribution $Q$.

We collect a few basic ideas about positive definite matrices and their geometry; for a full treatment we refer to the literature. The Euclidean norm of a vector is denoted by $\|\cdot\|$.
For matrices $A, B$ with identical dimensions we write $\langle A, B \rangle = \mathrm{tr}(A^\top B)$, so $\|A\| = \langle A, A \rangle^{1/2}$ is the Frobenius norm of $A$. Equipped with this inner product and norm, the matrix space $\mathbb{R}^{q\times q}_{\rm sym}$ is a Euclidean space of dimension $q(q+1)/2$, and $\mathbb{R}^{q\times q}_{{\rm sym},+}$ is an open subset thereof. But in the context of scatter estimation an alternative geometry turns out to be useful.

Let $\hat\Sigma$ be the sample covariance matrix of $n$ independent random vectors with distribution $Q$, center $0$ and covariance $\Sigma$. It is well known that $\hat\Sigma = \Sigma^{1/2}(I_q + E)\Sigma^{1/2}$ with $I_q$ the identity matrix and $E$ a random matrix. The distribution of $E$ depends only on the sample size and is invariant under transformations $E \mapsto U E U^\top$ with $U$ in the set of orthogonal matrices in $\mathbb{R}^{q\times q}$. Moreover, $E \to 0$ as $n \to \infty$. Thus one could measure the distance between $\hat\Sigma$ and $\Sigma$ by $\|\Sigma^{-1/2}\hat\Sigma\,\Sigma^{-1/2} - I_q\|$, with the local norm corresponding to the local inner product of matrices.

To define a distance between two arbitrary matrices $\Sigma_0, \Sigma_1 \in \mathbb{R}^{q\times q}_{{\rm sym},+}$, we consider a smooth path connecting them, that is, a mapping $[0,1] \to \mathbb{R}^{q\times q}_{{\rm sym},+}$ with endpoints $\Sigma_0$ and $\Sigma_1$. The length of such a path, measured with the local norms, is bounded below by the geodesic distance, with equality if, and only if, the path equals the geodesic up to a non-decreasing, piecewise continuously differentiable time change $[0,1] \to \mathbb{R}$. The minimal length does not depend on the parametrization but equals the geodesic distance, attained by the geodesic path
$$\Sigma_t \;=\; \Sigma_0^{1/2}\exp\!\big(t \log(\Sigma_0^{-1/2}\Sigma_1\Sigma_0^{-1/2})\big)\,\Sigma_0^{1/2}, \qquad t \in [0,1].$$
Indeed, this path has constant geodesic speed in the sense that its local speed is the same for all $t \in [0,1]$.

Consider now paths of the form $\Sigma_t = B\exp(tA)B^\top$ with $\Sigma_0 = BB^\top$ and a fixed direction $A \in \mathbb{R}^{q\times q}_{\rm sym}$. A proposition, whose precise statement involves the ordered eigenvalues $\gamma_1 \le \cdots \le \gamma_q$ of $A$, describes the limiting behavior of the objective function along such paths as $t \to \infty$; parts (a) and (b) specialize it to the two families of functions $\rho$ considered below. This proposition will be used later in connection with regularized scatter functionals. In the present context it implies necessary and sufficient conditions for g-coercivity in the following two settings:

Setting 0. The distribution-free case, with the objective considered modulo scale.

Setting 1. A general function $\rho$ satisfying the convexity condition above.

Theorem [thm:g-coercivity]. (a) In Setting 1, the objective is geodesically coercive if, and only if, the strict mass condition holds for all linear subspaces $\mathbb{V}$ with $0 \le \dim(\mathbb{V}) < q$. If in addition $\psi$ is strictly increasing on the relevant interval, then the objective has a unique minimizer. (b) In Setting 0, the objective is geodesically coercive modulo scale if, and only if, $Q(\mathbb{V}) < \dim(\mathbb{V})/q$ for all linear subspaces $\mathbb{V}$ with $0 < \dim(\mathbb{V}) < q$. In this case, the objective has a unique minimizer modulo scale.

Note that the condition in part (a) of Theorem [thm:g-coercivity] is precisely Condition 1 mentioned in Section [sec:background]. The additional assumption for uniqueness of the minimizer covers M-estimators of scatter with functions $\rho$ which are not strictly g-convex on the whole positive half-line. In part (b) the condition can be eliminated by a suitable reparametrization of the objective. The conclusion of part (b) is well known. In connection with the algorithms introduced later we need objective functions which are twice continuously differentiable. In Setting 0 this is the case, but Setting 1 will be replaced with the following one:

Setting 2. $\rho$ is twice continuously differentiable on $(0,\infty)$ such that $\psi(s) = s\,\rho'(s)$ is strictly increasing in $s$, with the appropriate limits at $0$ and $\infty$ and with the usual conventions in degenerate cases. In particular, the relevant derivative is positive whenever applicable. Moreover, if $\Sigma_t = B\exp(tA)B^\top$ with eigenvalue vector $\gamma$ of $A$ such that the path leaves every compact set, then the limit in question equals
$$\begin{cases} \infty - \sum_{i=1}^q \gamma_i & \text{if } k = 1, \\[2pt] \infty & \text{if } k = 2. \end{cases}$$

This lemma and Theorem [thm:mfunc] together show that using any of the penalties together with a g-convex function $\rho$ yields an objective function which is strictly g-convex. In particular, by Corollary [cor:uniqueness.minimizer], the penalized objective has a unique minimizer or no minimizer.
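The next sketch numerically illustrates two ingredients used above: the geodesic parametrization of the positive definite cone and geodesic (midpoint) convexity of a penalty shrinking towards the identity. The specific penalty $\|\log\Sigma\|_F^2$, the squared geodesic distance to $I_q$, is one convenient g-convex example chosen for brevity; it is not claimed to be one of the paper's penalties.

```python
import numpy as np
from scipy.linalg import expm, logm, sqrtm

def geodesic(S0, S1, t):
    """Sigma_t = S0^{1/2} exp(t log(S0^{-1/2} S1 S0^{-1/2})) S0^{1/2}."""
    R = np.real(sqrtm(S0))          # real part guards against numerical residue
    Rinv = np.linalg.inv(R)
    return R @ np.real(expm(t * logm(Rinv @ S1 @ Rinv))) @ R

def pen(S):
    """Example penalty: squared geodesic distance to I_q, ||log S||_F^2."""
    return np.linalg.norm(np.real(logm(S)), 'fro') ** 2

rng = np.random.default_rng(5)
def random_spd(q):
    A = rng.standard_normal((q, q))
    return A @ A.T + 0.1 * np.eye(q)

# midpoint g-convexity check: pen(Sigma_{1/2}) <= (pen(S0) + pen(S1)) / 2
for _ in range(5):
    S0, S1 = random_spd(4), random_spd(4)
    mid = pen(geodesic(S0, S1, 0.5))
    print(mid <= (pen(S0) + pen(S1)) / 2 + 1e-9)
```

The check succeeds because the space of positive definite matrices with this metric has nonpositive curvature, so squared geodesic distances to a fixed point are geodesically convex.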
With penalties of the first or second kind, g-coercivity, and thus the existence of a unique minimizer, is guaranteed regardless of the distribution $Q$. This is in contrast to the non-regularized case, for which conditions on $Q$ are needed to ensure the existence of a minimizer. Shrinkage towards a different given matrix is obtained by replacing the argument of the penalty with a correspondingly transformed matrix.

Shrinkage towards multiples of $I_q$. Functions which penalize large condition numbers of $\Sigma$ are given by three scale-invariant penalties. All three are minimized if, and only if, $\Sigma$ is a positive multiple of $I_q$. Moreover, two of them satisfy a symmetry relation under matrix inversion, whereas the third penalizes relatively small eigenvalues more severely than relatively large ones. Here are the main facts:

Lemma [lem:all.about.pi]. Each of these penalty functions is scale-invariant, twice continuously differentiable and geodesically convex. Modulo scale it is strictly geodesically convex, with a unique minimum at multiples of $I_q$. Precisely, second order geodesic Taylor expansions hold, with coefficients given in the corresponding table.

For paths $\Sigma_t = B\exp(tA)B^\top$ leaving every compact set, the growth rate of the penalized objective yields the following dichotomy. With penalties of the first kind, the penalized objective is g-coercive on $\mathbb{R}^{q\times q}_{{\rm sym},+}$ if, and only if, the mass condition holds for every linear subspace $\mathbb{V}$ with $0 < \dim(\mathbb{V}) < q$; with penalties of the second kind, the analogous statement holds modulo scale. In case of the third penalty, for any fixed scaling parameter the penalized objective is g-coercive on $\mathbb{R}^{q\times q}_{{\rm sym},+}$ without further constraints on $Q$. This is the case, for instance, for weight functions derived from a non-decreasing convex function growing to infinity; explicit examples of such functions are easily constructed.

Rather than choose the scaling parameter $\alpha$ of the penalty beforehand, one can use data-dependent methods for selecting it. One possible approach is to use an oracle-type estimator for $\alpha$. Such an approach is based upon minimizing the mean squared error under a specific distribution, the method being dependent on the choice of the penalty and of the $\rho$-function. A more universal approach is cross-validation. Here we propose a leave-one-out cross-validation approach for the current problem as follows. Let $Q_{-i}$ denote the empirical distribution when the $i$-th data point is removed, and for a given $\alpha$ define $\hat\Sigma_{-i}(\alpha)$ as the corresponding minimizer of the penalized objective, the minimum being taken over $\mathbb{R}^{q\times q}_{{\rm sym},+}$. Next, define an aggregate robust measure of how well $\hat\Sigma_{-i}(\alpha)$ reflects the left-out observation, e.g. by summing the per-observation losses $\rho\big(x_i^\top \hat\Sigma_{-i}(\alpha)^{-1} x_i\big) + \log\det\hat\Sigma_{-i}(\alpha)$ over $i$. The objective is then to minimize this criterion over $\alpha$; in practice, this would be done over some finite set of candidate values. Some examples are given in Section [sec:example].
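A hedged sketch of this leave-one-out selection of $\alpha$ follows. To keep it short it uses the Gaussian case $\rho(s) = s$ with the penalty $\alpha\,\mathrm{tr}(\Sigma^{-1})$, for which the penalized minimizer has the closed form $S + \alpha I_q$ (with $S$ the sample covariance about the assumed center $0$); for a general $\rho$ one would plug in an iterative solver in place of `fit`.

```python
import numpy as np

def fit(X, alpha):
    # Minimizer of trace(Sigma^{-1}(S + alpha I)) + log det Sigma is S + alpha I.
    n, q = X.shape
    return X.T @ X / n + alpha * np.eye(q)

def loo_cv(X, alphas):
    """Leave-one-out criterion: mean of x'Sigma^{-1}x + log det Sigma
    over left-out points (rho(s) = s)."""
    n, q = X.shape
    scores = []
    for a in alphas:
        s = 0.0
        for i in range(n):
            Sig = fit(np.delete(X, i, axis=0), a)
            x = X[i]
            s += x @ np.linalg.solve(Sig, x) + np.linalg.slogdet(Sig)[1]
        scores.append(s / n)
    return np.array(scores)

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 10))            # sparse regime: n close to q
alphas = np.geomspace(1e-3, 10, 9)
print(alphas[np.argmin(loo_cv(X, alphas))])  # data-driven choice of alpha
```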
Since the cross-validation approach can be computationally intensive, we first discuss algorithms for computing the regularized M-estimators of scatter. There is a rich literature on optimization on Riemannian manifolds; see the references therein. For the special case of functions on $\mathbb{R}^{q\times q}_{{\rm sym},+}$, various fixed-point and gradient descent methods have been proposed. Newton-Raphson algorithms would be another possibility but may be inefficient due to the high dimension of the Hessian operators. For the minimization of a smooth and g-convex function we propose a partial Newton-Raphson algorithm which is similar to an earlier method for pure M-functionals of scatter. While the latter method has been designed for special settings in which a certain fixed-point algorithm serves as a fallback option with guaranteed convergence, the present approach is more general.

We consider a twice continuously differentiable function $f$ which is, in particular, strictly g-convex. Furthermore we assume that $f$ is g-coercive, so its minimizer $\Sigma_*$ exists. Finally we assume that the first and second derivatives in the geodesic parametrization are continuous in $\Sigma$ for any fixed direction. Under these conditions on $f$ one can devise an iterative algorithm to compute the minimizer. According to Lemma [lem:minimizer.g-convex], this is equivalent to finding a matrix at which the geodesic gradient of $f$ vanishes.

Algorithmic mappings. To compute $\Sigma_*$ we iterate a certain mapping $\Phi$ such that $\Phi(\Sigma_*) = \Sigma_*$ and $f(\Phi(\Sigma)) < f(\Sigma)$ whenever $\Sigma \ne \Sigma_*$. If we replace the latter condition by a somewhat stronger constraint, iterating the mapping yields sequences with guaranteed convergence to $\Sigma_*$.

Lemma [lem:algorithm]. Suppose that $\Phi$ satisfies the strengthened descent condition, let $\Sigma_0$ be an arbitrary starting point, and define inductively $\Sigma_{k+1} = \Phi(\Sigma_k)$. Then the sequence converges to $\Sigma_*$. This lemma belongs to the folklore of optimization theory. For the reader's convenience we provide its short proof in Section [sec:auxiliary].

Construction of $\Phi$. Let $\Sigma = BB^\top$ be our current candidate for $\Sigma_*$. Note that the quadratic term of the local expansion of $f$ may be rewritten as $\langle A, H(A)\rangle$ for a self-adjoint linear operator $H$ with strictly positive eigenvalues. Thus a promising new candidate for $\Sigma_*$ would be obtained with a full Newton step in local geodesic coordinates. Computing the full Newton step would require substantial memory and computation time, though. Alternatively one could try a gradient descent step. As a compromise between a full Newton and a mere gradient step we propose a partial Newton step: to this end we consider a spectral decomposition of the gradient matrix with an orthogonal matrix and a vector of eigenvalues. We then define the step along the corresponding spectral directions only, rescaling each direction by the associated second derivative; this may be computed explicitly, since the new candidate may be written as $\Sigma = \tilde B \tilde B^\top$ for a certain matrix $\tilde B$ replacing the factor $B$.

If $\Sigma$ is far from $\Sigma_*$, the new candidate need not be better than $\Sigma$ itself. To avoid poor steps we introduce a simple step size correction and define $\Phi(\Sigma)$ with the step halved the smallest number of times needed to achieve a sufficient decrease of $f$, for a given tolerance. The rationale behind this definition is that the directional derivative at the candidate direction is strictly negative, and the correction is inactive whenever the full step already decreases $f$ sufficiently. This algorithmic mapping has the desired properties, no matter how the factor $B$ of $\Sigma$ and the orthogonal matrix in the spectral decomposition are chosen.

Theorem [thm:phi]. The algorithmic mapping just defined has the properties described in Lemma [lem:algorithm]. Moreover, if $\Sigma$ is sufficiently close to $\Sigma_*$, then the number of halvings in the step size correction equals zero, whence the full partial Newton step is taken.

Pseudo-code for $\Phi(\cdot)$. One may interpret our algorithmic mapping such that the factor $B$ of our current candidate $\Sigma = BB^\top$ is replaced with a new matrix. The corresponding pseudo-code for the computation of $\Phi(\Sigma)$ follows.
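In place of the original pseudo-code, here is a runnable sketch of a geodesic descent step of this kind: the local parametrization $A \mapsto B\exp(A)B^\top$ with $B = \Sigma^{1/2}$, the gradient matrix in these coordinates, and the step-size halving safeguard. For brevity it takes a plain gradient direction rather than the paper's partial Newton rescaling of the spectral directions, and it uses the t-likelihood $\rho(s) = (\nu+q)\log(\nu+s)$ with $\nu = 3$ as an illustrative smooth, g-convex example.

```python
import numpy as np
from scipy.linalg import expm, sqrtm

nu = 3.0  # t-distribution degrees of freedom (illustrative choice)

def u(s, q):
    return (nu + q) / (nu + s)  # u = rho' for rho(s) = (nu+q) log(nu+s)

def objective(X, Sigma):
    n, q = X.shape
    s = np.einsum('ij,jk,ik->i', X, np.linalg.inv(Sigma), X)
    return ((nu + q) * np.log(nu + s)).mean() + np.linalg.slogdet(Sigma)[1]

def minimize(X, n_iter=100, tol=1e-8):
    n, q = X.shape
    Sigma = np.cov(X.T)                    # starting point
    for _ in range(n_iter):
        B = np.real(sqrtm(Sigma))
        Y = X @ np.linalg.inv(B)           # rows y_i = B^{-1} x_i (B symmetric)
        s = (Y ** 2).sum(axis=1)
        # gradient of f(A) = mean rho(y' exp(-A) y) + log det(B exp(A) B') at A = 0:
        G = np.eye(q) - (u(s, q)[:, None] * Y).T @ Y / n
        if np.linalg.norm(G) < tol:
            break
        t, f0 = 1.0, objective(X, Sigma)
        while True:                        # step-size halving as a safeguard
            cand = B @ expm(-t * G) @ B.T
            if objective(X, cand) < f0 or t < 1e-6:
                break
            t /= 2
        Sigma = cand
    return Sigma

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 3)) @ np.diag([1.0, 2.0, 3.0])
print(np.round(minimize(X), 2))
```

The design point carried over from the text: each update stays inside the positive definite cone by construction, since $B\exp(-tG)B^\top$ is automatically symmetric and positive definite.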
We illustrate the proposed methods for a penalized M-functional whose resulting objective is strictly g-convex and g-coercive on $\mathbb{R}^{q\times q}_{{\rm sym},+}$ for any value of the tuning parameter $\alpha$. Precisely, we fixed the dimension and simulated a random sample from the multivariate Cauchy distribution with center $0$ and a given scatter matrix. Then we computed the minimizer of the penalized objective, with $Q$ the empirical distribution of this sample, for $\alpha$ on a finite grid. Table [tab:cv.errors] shows the resulting cross-validation values and three estimation errors, based on the Frobenius norm, the geodesic distance, and the vector of ordered eigenvalues of a symmetric matrix. Note that our cross-validation criterion selects a value of $\alpha$ which is a reasonable choice in view of the estimation errors.

Figure [fig0]: bar plot of the log-transformed eigenvalues of the estimate (green) and of the true scatter matrix (blue).

This simulation was repeated 100 times, and in all cases the minimizer of the cross-validation criterion on the given grid turned out to be the same value of $\alpha$.

Figure [fig1]: box plots of the selected $\alpha$ (upper left) and of the estimation errors (upper right, lower left, lower right) versus $\alpha$ for these simulations.

Proofs. For a fixed direction $A$, define the path $\Sigma_t = B\exp(tA)B^\top$. Then the objective along the path decomposes accordingly, and this implies a lower bound with some constant; hence the normalized objective diverges as $t \to \infty$ whenever the corresponding mass condition holds. Thus we have to show that the objective is g-coercive if, and only if, this condition holds for every proper subspace. Suppose that the objective is not g-coercive. Then there exists a sequence in $\mathbb{R}^{q\times q}_{{\rm sym},+}$ leaving every compact set but with objective values bounded by some real constant. Writing the leading direction as a matrix with norm one, we may even assume the sequence is of the form $B\exp(t_k A_k)B^\top$ with $t_k \to \infty$ and $A_k \to A$. Now, for any fixed $t$, a chain of inequalities applies: the first and third steps use convexity of the objective along geodesics, while the second and last steps rely on continuity of the objective and the choice of the sequence. These considerations show that the limiting condition fails. On the other hand, if the objective is g-coercive, then for any bound and sufficiently large argument the objective exceeds the bound, which yields the converse implication.

By continuity of the objective, its sublevel sets are closed, and by g-convexity they are g-convex. Obviously, the minimal sublevel set is identical with the set of minimizers on the closed set in question. If the objective is also g-coercive, the sublevel sets are even compact, and the set of minimizers is a nonvoid closed subset, so it is compact itself.

Now suppose that the objective has a minimizer but is not g-coercive. Note that g-coercivity is equivalent to a growth condition along geodesics; this follows from an inequality which will be proved later. Non-coercivity means there exists a sequence leaving every compact set with bounded objective values; writing its leading direction as a matrix with norm one, we may again pass to a limit. Since the objective is convex along geodesics, we may conclude that for any fixed time the values stay bounded along the whole ray, so the objective is geodesically unbounded below its coercivity rate, a contradiction.

It remains to prove the inequality, which is related to geodesic distances. On the one hand, the geodesic distance dominates a norm difference; on the other hand, it is dominated by a corresponding expression; in the last step we utilize that $B^\top B$ and $BB^\top$ have the same eigenvalues, which follows from the singular value decomposition of $B$.

The convexity criterion follows from the fact that the relevant second derivative along geodesics is non-negative. By means of Lemma [lem:convexity] in Supplement [sec:auxiliary], this shows that the function is convex along geodesics provided the weight is non-negative throughout; this convexity is strict if the weight is strictly positive throughout. If the weight is negative somewhere, then for sufficiently small perturbations the function fails to be convex along some geodesic, so it is not geodesically convex.

That the multiple of the identity is the unique minimizer of the penalty follows from the corresponding strict inequalities. Note first that the penalty satisfies a second order expansion; this and Remark [rem:inversion2] imply the expansions for the related penalties, the relevant coefficients being given explicitly. The inequality for the third penalty can be proved similarly to the inequality in Example [ex2]. In the case of a spectral decomposition with an orthogonal matrix and a vector with non-decreasing components, the limit diverges unless the components vanish. As to the remaining penalty, it follows from the previous considerations and Example [ex0] that analogous expansions hold, again with strictly positive coefficients.
Moreover, if, as before, $\Sigma_t = B\exp(tA)B^\top$, then, as $t \to \infty$, the normalized limit equals $\infty - \sum_{i=1}^q \gamma_i$. For the remaining penalty the expansion is a consequence of Corollary [cor:penalties] in Supplement [sec:auxiliary]; just note that we may write the penalty in terms of the log-eigenvalues, with the relevant coefficients strictly positive.

Elementary considerations reveal that all penalty functions are scale-invariant. Next we show that a matrix with eigenvalues $\sigma_1, \ldots, \sigma_q$ minimizes a given penalty if, and only if, all eigenvalues are identical. On the one hand, an elementary bound holds, with equality if, and only if, it holds for all indices; this follows from a scalar inequality valid for arbitrary positive arguments. In the case of the second penalty, note that by Jensen's inequality and strict concavity of the logarithm on $(0,\infty)$, the bound holds with strict inequality unless all $\sigma_i$ are identical. Finally, the third bound holds with equality if, and only if, all $\sigma_i$ are identical.

Next we verify the geodesic second order Taylor expansions of the penalties. It follows from Examples [ex0] and [ex2] and Remark [rem:inversion2] that the first two expansions hold with the stated coefficients. The considerations in Example [ex2] reveal that both coefficients are strictly positive whenever the argument is not a multiple of the identity. The expansion for the third penalty follows from Corollary [cor:penalties] with the same arguments as in the proof of Lemma [lem:all.about.pi].

Concerning coercivity, let $\Sigma_t = B\exp(tA)B^\top$ with $\|A\| = 1$ and $t \to \infty$. Then the penalties diverge at the asserted rates, which implies the asserted limits; the remaining claim follows analogously.

Our proof of Theorem [thm:phi] is based on two elementary inequalities for the accuracy of Taylor expansions of $f$, which are derived in Supplement [sec:auxiliary]:

Lemma [lem:remainders]. For $\Sigma = BB^\top$ and directions $A$ of bounded norm, the first and second order Taylor remainders of $f$ along the geodesic are bounded by explicit moduli of continuity. One can deduce, from continuity of the derivatives of $f$ in $\Sigma$ for fixed directions and from $\mathbb{R}^{q\times q}_{\rm sym}$ being finite-dimensional, that both moduli are continuous in $\Sigma$. Additional quantities we shall use repeatedly are the extreme eigenvalues of the local Hessian operator; both are continuous in $\Sigma$.

For arbitrary $\Sigma \ne \Sigma_*$ we can say that the achieved decrease of the objective is at least a fixed fraction of the predicted decrease, because the directional derivative is strictly negative while the remainder is controlled; hence it follows from Lemma [lem:remainders] that for any fixed number of halvings the decrease condition is met. Note that the relevant bound is continuous in $\Sigma$, and for any fixed $\Sigma$ there is an integer beyond which the bound applies. Consequently, if $\Sigma$ is sufficiently close to $\Sigma_*$, then the integer in the step size correction satisfies the claim, and for $\Sigma$ close to $\Sigma_*$ we only consider the full step and utilize the second bound in Lemma [lem:remainders]: the predicted and the actual decrease agree up to higher order terms, so the full step is accepted if $\Sigma$ is sufficiently close to $\Sigma_*$.

The next three lemmas provide expansions and inequalities for matrix exponentials and logarithms. They involve an auxiliary function which may also be written as an expectation involving a random variable uniformly distributed on $[0,1]$. Again one can deduce the required bounds from convexity of the exponential function and Jensen's inequality. Another useful identity, to be used later, concerns the derivative of the exponential map.

Corollary [cor:penalties]. For arbitrary vectors with ordered components and symmetric matrices, a second order expansion holds as the perturbation tends to zero, with explicit first and second order terms.

It is well known that the mapping $\exp : \mathbb{R}^{q\times q}_{\rm sym} \to \mathbb{R}^{q\times q}_{{\rm sym},+}$ is bijective, with inverse function $\log$. Moreover, the exponential mapping is continuously differentiable, its derivative at a point being a certain linear mapping. By means of the spectral representation one may write this derivative explicitly; since the corresponding scalar factors are strictly positive for arbitrary arguments, this representation shows that the derivative is a non-singular linear transformation of $\mathbb{R}^{q\times q}_{\rm sym}$ with an explicit inverse. By the inverse function theorem, the function $\log$ is also continuously differentiable, with the corresponding derivatives. We first prove the inequalities for the exponential function. It follows from Lemma [lem:derivatives.exp.log] and its proof that, writing the argument with a vector of eigenvalues and an orthogonal matrix in $\mathbb{R}^{q\times q}$, the claimed bounds hold.
Since $f$ is continuously differentiable by assumption, we may rephrase this as an expansion with some bounded function $[0,1] \to [0,\infty)$ vanishing at $0$. To verify the bounds, we write the argument with an orthogonal matrix in $\mathbb{R}^{q\times q}$ and ordered eigenvalues; elementary calculations then show the claim, whence the lemma follows.

Lemma [lem:2nd.order.taylor]. Let $\Omega$ be an open subset of a Euclidean space, and let $f$ have the following property: for each point there exist a vector and a symmetric matrix such that a local second order expansion holds. Further suppose that these data are continuous in the point. Then $f$ is twice continuously differentiable, with gradient and Hessian given by these data.

We start with dimension one. For a point and a radius, let the infimum and the supremum of the second order coefficient be taken over the corresponding interval intersected with $\Omega$; choosing the radius suitably and letting it shrink, the claim follows. In higher dimensions, choose an orthonormal basis such that the corresponding vectors form a basis of the relevant subspaces, and reduce to the one-dimensional case; the resulting expression is well-defined and non-decreasing in the radius. Hence, by dominated convergence and monotone convergence, the limit exists.

Now we partition the index set. For indices $j$ with $\gamma_j = 0$ the corresponding terms are bounded, while for $\gamma_j < 0$ they vanish in the limit; hence, all in all, we obtain the asserted limit. With this notation we may write the objective as a sum of non-negative summands. Hence, in the first special case the limit equals the first expression, and in the special case of the distribution-free functional it equals the second.

We start with part (a). According to Lemma [lem:g-coercivity] and Proposition [prop:g-coercivity](a), the objective is g-coercive on $\mathbb{R}^{q\times q}_{{\rm sym},+}$ if, and only if, it satisfies the following inequalities: for any orthogonal matrix in $\mathbb{R}^{q\times q}$ and any nonzero vector $\gamma$ with non-decreasing components, the limiting slope along the corresponding path is strictly positive. If $\gamma$ has exactly one jump at a fixed index, then the left hand side of the inequality reduces to a single term, which is positive if, and only if, the corresponding mass condition holds. Note also that all differences of components are non-negative. This shows that the inequality is satisfied for arbitrary nonzero vectors with non-decreasing components if, and only if, it is satisfied for these extremal vectors. But since the basis is an arbitrary orthonormal basis of $\mathbb{R}^q$, these considerations show that g-coercivity of the objective is equivalent to the mass condition for arbitrary linear subspaces $\mathbb{V}$ with $0 \le \dim(\mathbb{V}) < q$. By virtue of Lemma [lem:existence.minimizers], g-coercivity guarantees the existence of a minimizer.

It remains to be shown that this minimizer is unique in case of $\psi$ being strictly increasing on the relevant interval. If the latter interval equals the whole half-line, then $\rho(e^s)$ is strictly convex in $s$, so it follows from Theorem [thm:mfunc] and Condition 1 for arbitrary linear subspaces that the objective is strictly g-convex; hence the minimizer is unique, see Corollary [cor:uniqueness.minimizer]. Now suppose that strict monotonicity holds only up to some point. Writing the minimizer as $BB^\top$, it suffices to show that for any fixed direction $A$, the function $t \mapsto f(B\exp(tA)B^\top)$ has a unique minimum at $t = 0$. As shown in the proof of Theorem [thm:mfunc], this function is convex, and optimality implies that its derivative vanishes at $0$. It remains to be shown that it is not constant in a neighbourhood of $0$. Recall that it can be written as
$$t \;\mapsto\; \int [\,\cdots\,]\, Q_B(dx) \;-\; t \sum_{i=1}^q \gamma_i,$$
with $Q_B$ the pushforward of $Q$ under $B^{-1}$ and $\gamma$ the eigenvalue vector of $A$. Since the integrand is convex and strictly increasing in $t$ for all relevant $x$, the whole function inherits these properties there; moreover, the integrand is strictly convex unless $x$ is an eigenvector of $A$. Thus the function is strictly convex unless $Q_B$ concentrates on the corresponding eigenspaces. Since strict convexity would finish the proof, suppose that the degenerate case occurs. Then we may write the function as a sum of convex functions,
$$t \;\mapsto\; \int [\,\cdots\,]\, Q_B(dx) \;-\; \dim(\mathbb{V}(\gamma_o))\,u\,t,$$
so it suffices to show that one summand is strictly convex near $0$. Note that constancy for some $t$ would imply an identity of real numbers; but then
$$\int [\,\cdots\,]\, Q_B(dx)$$
would be constant, and the strict monotonicity property of $\psi$ would imply that the integrand is constant for $Q_B$-almost all $x$, a contradiction to Condition 1. In the latter display we used the monotonicity in the second and the convexity in the third step.
Concerning part (b), Lemma [lem:g-coercivity], with the modifications mentioned in Section [subsec:scale-invariance], and Proposition [prop:g-coercivity](b) imply that the objective is g-coercive modulo scale if, and only if, it satisfies the following inequalities: for any orthogonal matrix in $\mathbb{R}^{q\times q}$ and any nonzero vector with non-decreasing components summing to zero, the limiting slope is strictly positive. If the vector has exactly one jump, then the left hand side reduces to a single term. Note also that all differences of components are non-negative. Thus the inequality holds for arbitrary vectors with non-decreasing components summing to zero if, and only if, it holds for the extremal vectors. Hence g-coercivity modulo scale is equivalent to the mass condition for arbitrary linear subspaces $\mathbb{V}$ with $0 < \dim(\mathbb{V}) < q$. Note that g-convexity of the original objective would be equivalent to g-convexity of its scale-standardized version.

Now consider the sequence of iterates. By definition, the sequence of objective values is non-increasing, and the iterates stay in a compact set. Suppose the sequence does not converge to $\Sigma_*$. Then there exists a subsequence with a limit different from $\Sigma_*$. It follows from continuity of $\Phi$ and $f$ and monotonicity of the values that the limit of the values equals the value of $f$ at that limit point; but this contradicts our assumption, because applying $\Phi$ at the limit point would strictly decrease the value.

Finally, recall that for any suitable function the required expansion holds, with the relevant derivatives existing and continuous. The argument may be written with some orthogonal matrix, and the two factors involved have the same eigenvalues, so the claims carry over; see also Remark [rem:orthogonal.transformations]. This completes the proof.
Abstract:
As observed by Auderset et al. (2005) and Wiesel (2012), viewing covariance matrices as elements of a Riemannian manifold and using the concept of geodesic convexity provide useful tools for studying M-estimators of multivariate scatter. In this paper, we begin with a mathematically rigorous, self-contained overview of the Riemannian geometry on the space of symmetric positive definite matrices and of the notion of geodesic convexity. The overview contains both a review as well as new results. In particular, we introduce and utilize first and second order Taylor expansions with respect to geodesic parametrizations. This enables us to give sufficient conditions for a function to be geodesically convex. In addition, we introduce the concept of geodesic coercivity, which is important in establishing the existence of a minimum of a geodesically convex function. We also develop a general partial Newton algorithm for minimizing smooth and strictly geodesically convex functions. We then use these results to generate a fairly complete picture of the existence, uniqueness and computation of regularized M-estimators of scatter defined using additive geodesically convex penalty terms. Various such penalties are demonstrated which shrink an estimator towards the identity matrix or multiples of the identity matrix. Finally, we propose a cross-validation method for choosing the scaling parameter of the penalty function, and illustrate our results using a numerical example.

AMS subject classifications: 62H12, 65C60, 90C53.

Key words: matrix exponential function, matrix logarithm, Newton-Raphson algorithm, penalization, Riemannian geometry, scale invariance, Taylor expansion.
* * *
Neurons in primary visual cortex show a large increase in input conductance during visual activation: in vivo recordings show that the conductance can rise to more than three times that of the resting state. Such _high conductance states_ lead to faster neuronal dynamics than would be expected from the value of the passive membrane time constant, as pointed out by Shelley et al. We use mean field theory to study the firing statistics of a model network with balanced excitation and inhibition, and we consistently observe such high conductance states during stimulation.

In our study, we classify the irregularity of firing with the Fano factor $F$, defined as the ratio of the variance of the spike count to its mean. For temporally uncorrelated spike trains (i.e., Poisson processes) $F = 1$, while $F > 1$ indicates a tendency for spike clustering (bursts), and $F < 1$ points to more regular firing with well-separated spikes. Observed Fano factors for spike trains of primary cortical neurons during stimulation are usually greater than 1 and vary within an entire order of magnitude. We find the same dynamics in our model and are able to pinpoint the relevant mechanisms: synaptic filtering leads to spike clustering in states of high conductance (thus $F > 1$), and Fano factors depend sensitively on variations in both the threshold and the synaptic time constants.

We investigate a cortical network model that exhibits self-consistently balanced excitation and inhibition. The model consists of two populations of neurons, an excitatory and an inhibitory one, with dilute random connectivity. The model neurons are governed by leaky integrate-and-fire subthreshold dynamics with conductance-based synapses. The membrane potential of a neuron in a given population (excitatory or inhibitory) obeys a first-order equation in which the first sum runs over all populations, including the excitatory input population representing input from the LGN, and the second sum runs over all neurons in the respective presynaptic population. The reversal potential for the excitatory inputs is higher than the firing threshold; the one for the inhibitory inputs is below the reset value. The constant leakage conductance is the inverse of the membrane time constant. The time-dependent conductance from one neuron to another is taken as a sum of exponentially decaying kernels, one per presynaptic spike, if there is a connection between those two neurons, and zero otherwise. Each synapse type (excitatory or inhibitory) has its own synaptic time constant, and causality is enforced by the Heaviside step function.
The normalization in the conductance involves the average number of presynaptic neurons per population. We followed van Vreeswijk and Sompolinsky by scaling the conductances with the inverse square root of this number, so that their fluctuations are of order one, independent of network size.

We use mean field theory to reduce the full network problem to two neurons: one for each population. This method is exact in the limit of large populations with homogeneous connection probabilities. The neurons receive self-consistent inputs from their cortical environment, exploiting the fact that all neurons within a population exhibit the same firing statistics due to homogeneity. The time-dependent conductance described in ([eq:conductance]) can then be replaced by a realization of a Gaussian distributed random variable with appropriate mean and covariance; the mean involves the firing rate of the presynaptic neuron, and the covariance involves the autocorrelation function of its spike train. A simple approximation of the autocorrelation is to assume the spike train to be temporally uncorrelated (i.e., white noise), in which case the covariance simplifies to a delta function scaled by the rate. An additional term is a correction for the finite connection concentration and can be derived using standard methods. The self-consistent balance condition is obtained by setting the net current in ([eq:dufull]) to zero when the membrane potential is at threshold and the conductances have their mean values ([eq:mean]).

The distribution of the conductance variables can be calculated numerically using an iterative approach. One starts with a guess based on the balance equation ([eq:balance]) for the means and covariances and generates a large sample of specific conductance realizations, which are used to integrate ([eq:dufull]) and generate a large sample of spike trains. The latter can then be used to calculate new estimates of the means and covariances by applying ([eq:mean]) and ([eq:covar]) and to correct the initial guess towards the new values. These steps are repeated until convergence.

For the above described model, we chose parameters corresponding to population sizes of 16,000 excitatory neurons and 4,000 inhibitory neurons, representing a small patch of layer IV of cat visual cortex. The neurons were connected randomly, with 10% connection probability between any two neurons. The firing threshold was fixed to 1, the excitatory and inhibitory reversal potentials were set above threshold and below reset, respectively, and the membrane time constant was on the order of tens of milliseconds. For the results presented here, the integration time step was 0.5 ms.

Figure [fig:autocorr] illustrates the importance of coloring the noise produced by intra-cortical activity. The white noise approximation underestimates both the correlation times and the strength of the correlations in the neuron's firing: its autocorrelation (blue) is both narrower and weaker than the one for colored noise (red). Fano factors vary systematically with both the distance between reset and threshold and the synaptic time constant. Non-zero synaptic time constants consistently produced Fano factors greater than one. We varied the reset between 0.8 and 0.94 and the synaptic time constant between 0 and 6 ms, which resulted in values for $F$ that span an entire order of magnitude, from slightly above 1 to approximately 10 for the longest synaptic time constants (see Figure [fig:fanos]).
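To make the high-conductance mechanism analyzed next tangible, here is a self-contained sketch of a single conductance-based integrate-and-fire neuron driven by Poisson excitation and inhibition, from which the Fano factor of the spike count is estimated. The threshold (1), reset (0.94) and the 0.5 ms time step are the paper's values; the 20 ms membrane time constant, reversal potentials, input rates and synaptic weights are assumptions chosen only to place the neuron in a high-conductance regime, so the printed numbers are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)

dt, T = 0.5, 100_000.0          # ms (T is simulation length)
theta, reset = 1.0, 0.94        # paper's threshold and (one) reset value
tau_m, tau_e, tau_i = 20.0, 4.0, 4.0   # ms (tau_m assumed; synapses slower than tau_eff)
V_e, V_i = 14.0 / 3.0, -2.0 / 3.0      # reversal potentials (assumed, V_e > theta > reset > V_i)
rate_e, rate_i = 4.0, 2.0       # summed presynaptic rates per ms (assumed)
w_e, w_i = 0.019, 0.08          # synaptic strengths (hand-tuned assumptions)

u, g_e, g_i, spikes = 0.0, 0.0, 0.0, []
for k in range(int(T / dt)):
    # exponentially filtered synaptic conductances driven by Poisson spikes
    g_e += -dt * g_e / tau_e + w_e * rng.poisson(rate_e * dt)
    g_i += -dt * g_i / tau_i + w_i * rng.poisson(rate_i * dt)
    g_T = 1.0 / tau_m + g_e + g_i                 # total conductance
    V_eff = (g_e * V_e + g_i * V_i) / g_T         # effective reversal potential
    u += dt * g_T * (V_eff - u)                   # du/dt = -g_T (u - V_eff)
    if u >= theta:
        u = reset
        spikes.append(k * dt)

# Fano factor: variance/mean of spike counts in 100 ms windows
counts, _ = np.histogram(spikes, bins=np.arange(0.0, T, 100.0))
print("rate [Hz]:", 1000 * len(spikes) / T, " Fano:", counts.var() / counts.mean())
```

With these parameters the effective time constant $1/g_T$ is around 1 ms, an order of magnitude below the membrane time constant, so the membrane potential tracks $V_{\rm eff}$ and fires in bursts while $V_{\rm eff}$ dwells above threshold, pushing the Fano factor above 1.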
Figure [fig:fanos]: longer synaptic time constants lead to increased clustering (bursts) of spikes, which is reflected in higher Fano factors.

In all our simulations, we observed that the membrane potential changed on a considerably faster time scale than the membrane time constant. This behavior is only observed if conductance-based synapses are included in the integrate-and-fire neuron model. To understand this phenomenon, it is convenient to follow the notation of Shelley et al. and rewrite the equation for the membrane potential dynamics ([eq:dufull]) in the form
$$\frac{du}{dt} \;=\; -\,g_T(t)\,\big(u - V_{\rm eff}(t)\big),$$
with the _total conductance_ $g_T$ (the sum of the leak and all synaptic conductances) and the _effective reversal potential_ $V_{\rm eff}$ (the conductance-weighted average of the corresponding reversal potentials). The membrane potential follows the effective reversal potential with the input-dependent _effective membrane time constant_ $\tau_{\rm eff} = 1/g_T$. The effective reversal potential changes on the time scale of the synaptic time constants, which are up to five times shorter than the membrane time constant in our simulations. However, if the effective membrane time constant is shorter than the synaptic time constant due to a large enough total conductance, then $u$ can follow $V_{\rm eff}$ closely, as observed in our simulations (see Figure [fig:vvs]).

Figure [fig:vvs]: the membrane potential (red) follows the effective reversal potential (blue) closely, except for detours when the neuron is reset due to firing. The membrane potential recovers fast enough to spike several times while $V_{\rm eff}$ stays above threshold, thus producing bursts of spikes. Here, the threshold is set to 1 and the reset to 0.94.

In high conductance states, the firing statistics are strongly influenced by synaptic dynamics (see Figure [fig:fanos]). This is in contrast to strictly current-based models, where the neuron reacts too slowly to reflect fast synaptic dynamics in its firing. The "synaptic filtering" of arriving spikes leads to temporal correlations in $V_{\rm eff}$ and thus to temporal correlations (by way of spike clustering) in the firing. Therefore, the model neurons receive temporally correlated input rather than white noise. For this reason, in mean field models dealing with conductance-based dynamics, coloring the noise is important to arrive at the full amount of temporal correlation in the firing statistics (see Figure [fig:autocorr]). We confirmed these considerations by running simulations without synaptic filtering (zero synaptic time constants). As expected, intra-cortical activity became uncorrelated and the white noise approximation produced the same result as coloring the noise correctly. In that case, Fano factors stayed close to 1 (see Figure [fig:fanos]), i.e., no tendency of spike clustering was observed.

Previous investigations showed that varying the distance between threshold and reset in balanced integrate-and-fire networks has a strong effect on the irregularity of the firing. By including a conductance-based description of synapses, we were now able to show the importance of synaptic time constants for the firing statistics, even if they are several times smaller than the passive membrane time constant: synaptic filtering facilitates clustering of spikes in states of high conductance.
Abstract:
Measured responses from visual cortical neurons show that spike times tend to be correlated rather than exactly Poisson distributed. Fano factors vary and are usually greater than 1 due to the tendency of spikes to cluster into bursts. We show that this behavior emerges naturally in a balanced cortical network model with random connectivity and conductance-based synapses. We employ mean field theory with correctly colored noise to describe temporal correlations in the neuronal activity. Our results illuminate the connection between two independent experimental findings: high conductance states of cortical neurons in their natural environment, and variable non-Poissonian spike statistics with Fano factors greater than 1.

Keywords: synaptic conductances, response variability, cortical dynamics.
* * *
On February 11 the LIGO-Virgo collaboration announced the detection of gravitational waves (GW). They were emitted about one billion years ago by a binary black hole (BBH) merger and reached Earth on September 14, 2015. The claim, as it appears in the "discovery paper" and as stressed in press releases and seminars, was based on "significance." Ironically, shortly after, on March 7 the American Statistical Association (ASA) came out (independently) with a strong statement warning scientists about the interpretation and misuse of p-values. As promptly reported by Nature, "this is the first time that the 177-year-old ASA has made explicit recommendations on such a foundational matter in statistics, says executive director Ron Wasserstein. The society's members had become increasingly concerned that the p value was being misapplied in ways that cast doubt on statistics generally, he adds."

In June we have finally learned that another one _and a half_ gravitational waves from binary black hole mergers were also observed in 2015, where by the "half" I refer to the October 12 event, highly _believed_ by the collaboration to be a gravitational wave, although having only 1.7σ _significance_ and therefore classified just as LVT (LIGO-Virgo trigger) instead of GW. However, another figure of merit has been provided by the collaboration for each event, a number based on probability theory that tells how much we modify the relative beliefs of two alternative hypotheses in the light of the experimental information. This number, to my knowledge never even mentioned in press releases or seminars to large audiences, is the Bayes factor (BF), whose meaning is easily explained: if you considered a priori two alternative hypotheses equally likely, a BF of 100 changes your odds to 100 to 1; if instead you considered one hypothesis rather unlikely, let us say your odds were 1 to 100, a BF of $10^4$ turns them the other way around, that is 100 to 1. You will be amazed to learn that even the "1.7 sigma" LVT151012 has a BF of the order of $10^{10}$, considered a _very strong_ evidence in favor of the hypothesis "binary black hole merger" against the alternative hypothesis "noise."

Alan Turing would have called the evidence provided by such a huge Bayes factor — or what I. J. Good would have preferred to call the "Bayes-Turing factor" — 100 _deciban_, well above the 17 deciban threshold considered by the team at Bletchley Park during World War II to be reasonably confident of having cracked the daily Enigma key. (A side remark: one of the probabilities appearing in Eq. (1) of the companion paper makes no sense in that equation and should refer instead to the "complementary" — formally "exhaustive, mutually exclusive" — hypothesis. The equation should then read
$$\frac{P(H_1\,|\,D)}{P(H_2\,|\,D)} \;=\; \frac{P(D\,|\,H_1)}{P(D\,|\,H_2)}\times\frac{P(H_1)}{P(H_2)},$$
where the left and right ratios are the _posterior_ and _prior_ odds, respectively, and the middle term is the Bayes factor. For the log representation of odds and Bayes factors — Turing's "bans" and "decibans" — see the references therein; Turing arrived at 1 deciban as a rough estimate of the _human resolution_ to _judgement leaning_ and _weight of evidence_.)
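For concreteness, here is a small sketch of the odds arithmetic just quoted: converting a Bayes factor into Turing's decibans and updating prior odds. The numbers are those given above; the skeptical 1:100 prior is purely illustrative.

```python
import math

def decibans(bf):
    """Turing's 'deciban': 10 * log10(BF)."""
    return 10 * math.log10(bf)

def posterior_odds(prior_odds, bf):
    """O(H1:H2 | data) = BF * O(H1:H2)."""
    return prior_odds * bf

bf = 1e10                           # order of magnitude quoted for LVT151012
print(decibans(bf))                 # 100 deciban, far above Bletchley's ~17
print(posterior_odds(1 / 100, bf))  # skeptical 1:100 prior -> 1e8 : 1
```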
In the past I have been writing quite a bit on how "statistical" considerations based on p-values tend to create wrong expectations in frontier physics (see, e.g., earlier references). The main purpose of this paper is the opposite, i.e. to show how p-values might relegate to the role of a possible fluke what is most likely a genuine finding. In particular, the solution of the apparent paradox of how a marginal "1.7 sigma effect" could have a huge BF such as $10^{10}$ (and virtually even much more!) is explained in a didactic way. Since this paper can be seen as the sequel of earlier papers, with the basic considerations already expounded there, for the convenience of the reader I shortly summarize the main points maintained there.

* The "essential problem of the experimental method" is nothing but solving "a problem in the probability of causes," i.e. ranking in credibility the hypotheses that are considered possibly responsible for the observations (quotes by Poincaré). There is indeed no conceptual difference between "comparing hypotheses" and "inferring the value" of a physical quantity, the two problems only differing in the numerosity of hypotheses, virtually infinite in the latter case, when the physical quantity is _assumed_, for mathematical convenience, to take values with continuity. The basic rule is Bayes' theorem; considering two hypotheses and taking the ratio of the two _posterior probabilities_,
$$\frac{P(H_1\,|\,D,I)}{P(H_2\,|\,D,I)} \;=\; \frac{P(D\,|\,H_1,I)}{P(D\,|\,H_2,I)}\times\frac{P(H_1\,|\,I)}{P(H_2\,|\,I)},$$
where $I$ stands for the _background information_, sometimes implicitly assumed.

* Important consequences of this rule — I like to call them Laplace's teachings, because they stem from his "_fundamental principle_ of that branch of the analysis of chance that consists of reasoning a posteriori from events to causes" — are:

** It makes no sense to speak about how the probability of a cause changes if: 1. there is no alternative cause; 2. the way the cause might produce the observed effect is not properly modelled, i.e. if the corresponding probability has not been _somehow_ assessed. (If you cannot state what the alternative cause could be, and how much it is more or less believable, you cannot modify your "confidence" in the hypothesis of interest, as will be further reminded in Section [sec:p-values_bf].)

** The updating of the probability ratio depends only on the so-called _Bayes factor_,
$$\mathrm{BF}_{1,2} \;=\; \frac{P(D\,|\,H_1,I)}{P(D\,|\,H_2,I)},$$
the ratio of the probabilities of the observed data given either hypothesis. This ratio is also known as the "_likelihood_ ratio," but I avoid and discourage the use of the "L-word," it being a major source of misunderstanding among practitioners, who regularly use the "L-function" as a pdf of the unknown quantity, taking then (also in virtue of an unneeded "principle") its argmax as the _most believable value_ and sticking to it in further "propagations." (A recent, important example comes from two reports of the same organization, each using the "L-word" with two different meanings.) In particular, the updating does _not_ depend on the probability of other events that have not been observed and that are even less probable than the observed one (upon which p-values are instead calculated).

** One should be careful not to confuse $P(D\,|\,H)$ with $P(H\,|\,D)$, and in general, moving to continuous variables, $f(x\,|\,\mu)$ with $f(\mu\,|\,x)$, where "$f$" stands here, depending on the context, for a _probability function_ or for a _probability density function_ (pdf); $x$ and $\mu$ are symbols for the observed quantity and the "true" value, respectively, the latter being in fact just the _parameter of the model we use to describe the physical world_.

** A cause is _falsified_ by the observation of an event _only if_ the cause cannot produce it, and not because of the smallness of the probability it assigns to it.
** extending the reasoning to continuous observables (generically called $x$) characterized by a pdf $f(x\,|\,h_i)$, the probability to observe a value in the _small_ interval $[x, x+\mathrm{d}x]$ is $f(x\,|\,h_i)\,\mathrm{d}x$. what matters, for the comparison of two hypotheses in the light of the observation, is therefore the ratio of the pdf's, $f(x\,|\,h_i)/f(x\,|\,h_j)$, and not the smallness of $f(x\,|\,h_i)\,\mathrm{d}x$, which tends to zero as $\mathrm{d}x \to 0$. therefore _a hypothesis is_, strictly speaking, _falsified_, in the light of the observed $x$, _only_ if $f(x\,|\,h_i) = 0$.
* finally, i would like to stress that _falsifiability is not a strict requirement for a theory to be accepted as `scientific'_. in fact, in my opinion a weaker condition is sufficient, which i called _testability_ in an earlier paper: given a theory and possible observational data, it should be possible to model $p(\mathrm{data}\,|\,\mathrm{theory})$ in order to compare it with an alternative theory characterized by $p(\mathrm{data}\,|\,\mathrm{theory}')$. (supporters of theories claimed to be beyond testing should at least tell us in what $p(\mathrm{data}\,|\,\mathrm{theory})$ differs from $p(\mathrm{data}\,|\,\mathrm{standard\ model})$, with `data' being past, present or future _observational data_.) this will allow theories to be ranked in probability in the light of empirical data and of any other criteria, like simplicity or aesthetics, without the requirement of falsification, which, logically speaking, cannot be achieved in most cases. (think of two gaussian models, centred at different values, that might have produced the observation. since, strictly speaking, any gaussian might produce any real value, it follows that none of the models can be falsified. nevertheless, everyone will agree that the observation is _more likely_ to be attributed to the model whose centre lies closer to it. but you cannot say that the observation falsifies the other model! [fn:falsification_gaussians])

the statement of the american statistical association of march this year did not arrive completely unexpected. many scientists were in fact aware, and worried, of the ``science's dirtiest secret,'' i.e. that ``the `scientific method' of testing hypotheses by statistical analysis stands on a flimsy foundation.'' indeed, as allen caldwell of mpi munich eloquently puts it, ``the real problem is not that people have difficulties in understanding bayesian reasoning. the problem is that they do not understand the frequentist approach and what can be concluded from a frequentist analysis. what is not understood, or forgotten, is that the frequentist analysis relates only to possible data outcomes within a model context, and not probabilities of a model being correct. this misunderstanding leads to faulty conclusions.'' faulty conclusions based on p-values are countless in all fields of research, and frankly i am personally much more worried when they might affect our health and security, or the future of our planet, than when they spread around unjustified claims of revolutionary discoveries or of possible failures of the so-called standard model of particle physics. (my worries mainly concern the negative reputation the field risks to gain and, perhaps even more, the bad education provided to young people, most of whom will leave pure research and will try to apply elsewhere the analysis methods they learned in searching for new particles and new phenomena.)
for instance, ``a lot of what is published is incorrect,'' reported last year _the lancet_'s editor-in-chief richard horton. this could be because, _looking around more or less `at random'_, statistically `significant results' will sooner or later show up (as that of the last frame of an _xkcd_ cartoon shown in fig. [fig:xkcd-significant]; see the cartoon itself for the full story); or because dishonest researchers (or researchers driven by wishful thinking, which in science is more or less the same) might do some _p-hacking_ (see e.g. the essays by nelson and by charpentier cited in the bibliography) in order to make `significant effects' appear; remember that ``if you torture the data long enough, it will confess to anything.'' a special mention deserves the february 2014 editorial of david trafimow, director of _basic and applied social psychology_ (basp), in which he takes a strong position against the ``null hypothesis significance testing procedure (nhstp),'' because it ``has been shown to be logically invalid and to provide little information about the actual likelihood of either the null or experimental hypothesis.'' a large echo had last year a second editorial, signed together with his associate director michael marks and published on february 15, 2015, in which they announce that, after ``a grace period allowed to authors,'' ``from now on, basp is banning the nhstp.''

moving finally to the content of the asa statement: after a short introduction, in which it is recognized that ``informally, a p-value is the probability under a specified statistical model that a statistical summary of the data would be equal to or more extreme than its observed value,'' a list of six items, indicated as ``principles,'' follows (the highlighting is original).

1. *p-values can indicate how incompatible the data are with a specified statistical model.*
a p-value provides one approach to summarizing the incompatibility between a particular set of data and a proposed model for the data. the most common context is a model, constructed under a set of assumptions, together with a so-called ``null hypothesis.'' often the null hypothesis postulates the absence of an effect, such as no difference between two groups, or the absence of a relationship between a factor and an outcome. the smaller the p-value, the greater the statistical incompatibility of the data with the null hypothesis, if the underlying assumptions used to calculate the p-value hold. this incompatibility can be interpreted as casting doubt on or providing evidence against the null hypothesis or the underlying assumptions.
2. *p-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.*
researchers often wish to turn a p-value into a statement about the truth of a null hypothesis, or about the probability that random chance produced the observed data. the p-value is neither. it is a statement about data in relation to a specified hypothetical explanation, and is not a statement about the explanation itself.
3. *scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.*
practices that reduce data analysis or scientific inference to mechanical ``bright-line'' rules (such as ``$p < 0.05$'') for justifying scientific claims or conclusions can lead to erroneous beliefs and poor decision making. a conclusion does not immediately become ``true'' on one side of the divide and ``false'' on the other. researchers should bring many contextual factors into play to derive scientific inferences, including the design of a study, the quality of the measurements, the external evidence for the phenomenon under study, and the validity of assumptions that underlie the data analysis. pragmatic considerations often require binary, ``yes-no'' decisions, but this does not mean that p-values alone can ensure that a decision is correct or incorrect. the widespread use of ``statistical significance'' (generally interpreted as ``$p \le 0.05$'') as a license for making a claim of a scientific finding (or implied truth) leads to considerable distortion of the scientific process.
4. *proper inference requires full reporting and transparency.*
p-values and related analyses should not be reported selectively. conducting multiple analyses of the data and reporting only those with certain p-values (typically those passing a significance threshold) renders the reported p-values essentially uninterpretable. cherry-picking promising findings, also known by such terms as data dredging, significance chasing, significance questing, selective inference, and ``p-hacking,'' leads to a spurious excess of statistically significant results in the published literature and should be vigorously avoided. one need not formally carry out multiple statistical tests for this problem to arise: whenever a researcher chooses what to present based on statistical results, valid interpretation of those results is severely compromised if the reader is not informed of the choice and its basis. researchers should disclose the number of hypotheses explored during the study, all data collection decisions, all statistical analyses conducted, and all p-values computed. valid scientific conclusions based on p-values and related statistics cannot be drawn without at least knowing how many and which analyses were conducted, and how those analyses (including p-values) were selected for reporting.
5. *a p-value, or statistical significance, does not measure the size of an effect or the importance of a result.*
statistical significance is not equivalent to scientific, human, or economic significance. smaller p-values do not necessarily imply the presence of larger or more important effects, and larger p-values do not imply a lack of importance or even lack of effect. any effect, no matter how tiny, can produce a small p-value if the sample size or measurement precision is high enough, and large effects may produce unimpressive p-values if the sample size is small or measurements are imprecise. similarly, identical estimated effects will have different p-values if the precision of the estimates differs.
6. *by itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.*
researchers should recognize that a p-value without context or other evidence provides limited information. for example, a p-value near 0.05 taken by itself offers only weak evidence against the null hypothesis. likewise, a relatively large p-value does not imply evidence in favor of the null hypothesis; many other hypotheses may be equally or more consistent with the observed data.
for these reasons, data analysis should not end with the calculation of a p-value when other approaches are appropriate and feasible.

these words sound as an admission of failure of much of the statistics teaching and practice of the past _many_ decades. but yet i find this courageous statement still somehow unsatisfactory and, in particular, the first principle is in my opinion still affected by the kind of `original sin' at the basis of p-value misinterpretations and misuse. many practitioners consider in fact a value occurring several (but often just a few) standard deviations from the `expected value' (in the probabilistic sense) to be a deviance from the model, which is clearly absurd: no value a model can yield can be considered an _exception_ from the model itself (see also footnote [fn:falsification_gaussians]; the reason why ``p-values _often work_'' will be discussed in section [sec:p-values_bf]). then, moving to principle 2, it is not that ``researchers _often wish_ to turn a p-value into a statement about the truth of a null hypothesis'' (italics mine), as if this would be an extravagant fantasy: reasoning in terms of degrees of belief about whatever is uncertain is connatural to the `human understanding': _all methods that do not tackle head-on the fundamental issue of the probability of hypotheses, in the problems in which this is the crucial question, are destined to fail, and to perpetuate misunderstanding and misuse_.

rumors that the ligo interferometers had most likely detected a gravitational wave (gw) were circulating in autumn last year. personally, the direct information i got quite late, at the beginning of december, was ``we have seen a _monster_,'' without further detail. therefore, when a few days before february 11 quantitative rumors talked of 5.1 sigmas, i was disappointed and highly puzzled. how could a monster have _only_ just a bit more than five sigmas? indeed in the past decades we have seen in particle physics several effects of similar statistical significance come and go, as alvaro de rujula depicted already in 1985 in his famous _cemetery of physics_ of fig. [fig:derujula_cemetry]. [multiblock footnote omitted] therefore for many of us a five-sigma effect would have been something worth discussions, or perhaps further investigations, but certainly not a monster. (think of those who got excited about the 750 gev excess at lhc, and some even about the opera superluminal neutrinos! i hope they will learn from the double/triple lesson. [fn:5sigmahiggs]) this impression was very evident from the reaction many people had _after_ seeing the wave form. ``come on, this is not a five-sigma effect,'' commented several colleagues, more or less using the same words, ``these are _hundreds_ of sigmas!'', a colored expression to say that just by eye the hypothesis `noise' was beyond any imagination. the reason for the `monstrosity' of gw150914 was indeed in table 1 of the accompanying paper on _properties of the binary black hole merger gw150914_: a bayes factor ``bbh merger'' vs ``noise'' of about $5\times 10^{125}$ (yes, _five times ten to one-hundred-twenty-five_). this means that, no matter how small the odds in favor of a bbh merger initially were, and even casting doubt on the evaluation of the bayes factor, the posterior odds would remain overwhelming. indicating the two hypotheses by $h_1$ and $h_0$ and the experimental information by `data,' the bayes factor of $h_1$ vs $h_0$ is $$\mathrm{bf}_{1,0} = \frac{p(\mathrm{data}\,|\,h_1)}{p(\mathrm{data}\,|\,h_0)}\,,$$ where for the sake of simplicity we identify $h_1$ with ``bbh merger'' and $h_0$ with ``noise.'' now the question is that _there is not a single, precisely defined, hypothesis ``bbh merger''_, and the same is true also for the `null hypothesis' ``noise.'' this is because each hypothesis comes with free parameters. for example, in the case of ``bbh merger,'' the conditional probability of the data depends on the masses of the two black holes, on their distance from earth and so on. the same holds for the noise, because there is no such thing as ``the noise,'' but rather a noise model with many parameters obtained by monitoring the detectors. so in general, for the generic hypothesis $h$, we have $p(\mathrm{data}\,|\,h, \boldsymbol{\theta})$, in which $\boldsymbol{\theta}$ stands for the set of parameters of the hypothesis. but what matters for the calculation of the bayes factor is $p(\mathrm{data}\,|\,h)$, and this can be evaluated from probability theory taking into account all possible values of the set of parameters, weighting them by the pdf $f(\boldsymbol{\theta}\,|\,h)$, i.e.
`simply' as $$p(\mathrm{data}\,|\,h) = \int p(\mathrm{data}\,|\,h, \boldsymbol{\theta})\, f(\boldsymbol{\theta}\,|\,h)\,\mathrm{d}\boldsymbol{\theta}\,.$$ but the game can be not simple at all, because i) this integral can be very difficult to calculate; ii) the result, and then the bf, depends on the priors on the parameters, which have to be properly modeled from the physics case. (a rather simple example, also related to gravitational waves, is the bayesian model comparison applied to the explorer-nautilus 2001 coincidence data cited in the bibliography; it helped damp down claims of gw detection based on p-values, resulting in fact in ineffective bayes factors signal vs noise of the order of unity, with values depending on the model considered. the calculations of the bf's published by the ligo-virgo collaboration are _much_ more complicated than those, and they have highly benefited from skilling's _nested sampling_ algorithm. and, for the little i can understand of bbh mergers, the priors on the parameters appear to have been chosen safely, so that the resulting bf's seem very reliable. [fn:nota_bf]) the posterior odds would then be extraordinarily large, the probability of noise being smaller than that of shakespeare's drop of water identically recovered from the sea. [multiblock footnote omitted]

the results of the full observing run of the advanced ligo detectors (september 12, 2015, to january 19, 2016) have been presented on june 8, slightly updating some of february's digits. figure [fig:fig1_ligo-june2016] summarizes detector performances and results, with some important numbers (within this context) reminded in the caption. the busy plot on the left side shows the sensitivity curves of the two interferometers (red and blue curves, with plenty of resonant peaks) and how the three signals fall inside them (bands with colors matching the wave forms of the right plot). in short, the two curves tell us that a signal of a given frequency can be distinguished from the noise if its amplitude is above them. therefore all initial parts of the waves, when the black holes begin to spiral around each other at low frequency, are unobservable, and the bands in that region are extrapolations from the physical models. later, when the frequency increases, the wave enters the sensitivity range (which extends up to a given frequency, after which we `lose' it; the lower and upper boundary frequencies depend on the amplitude of the signal, as also happens in acoustics). the plot on the right finally shows the `waves' from the instant they enter the optimal 30 hz sensitivity region (the acoustic analogy depicted in footnote [fn:acustic_analogy] might help):

* the wave indicated by gw150914 (the `monster,' with gw standing for gravitational wave and 150914 for the detection date, september 14, 2015) is characterized by high amplitude, but short duration in the sensitivity region, because it fades out at a few hundred hertz.
* gw151226, instead, although of smaller intensity, has a longer `life' (about 1.7 seconds) in the `audible' spectrum, and therefore the signature of a bbh merger is also very recognizable.
* then there is the october 12 event, lvt151012, which has an amplitude comparable to that of gw151226, but smaller duration. it has, nevertheless, about 20 oscillations in the sensitivity region, an information that, combined with the peculiar shape of the signal (remarkably, the crests get closer as time passes, while the amplitude increases, until something `catastrophic' seems to happen) and with the fact that two practically `identical' and `simultaneous' signals have been observed by the two interferometers 3000 km apart, makes the experts highly confident that this too is a gravitational wave.

however, even if at first sight it does not look dissimilar from gw151226 (but remember that the waves in fig. [fig:fig1_ligo-june2016] do not show raw data!), the october 12 event, hereafter referred to as _cinderella_, is not ranked as gw but, more modestly, as lvt, for ligo-virgo trigger. the reason for the downgrading is that _`she' cannot wear a ``five sigma dress'' to go together with the `sisters' to the `sumptuous ball of the establishment.'_ in fact chance has assigned `her' only a poor, unpresentable ``1.7 sigma'' ranking, usually considered in the particle physics community not even worth a mention in a parallel session of a minor conference by an undergraduate student. (i hope it is so just by chance and that no sigma-threshold requirement was applied to the data, then filtering out other possible good signals.) but, despite the modest `statistical significance,' experts are highly confident, because of physics reasons. so what matters cannot be the smallness of the probability of what has actually been observed (remember that whatever we observe in real life, if seen with high enough resolution in the $n$-dimensional phase space, had a very small probability to occur! imagine, as a _simplified_ example, the pixel content of any picture you take walking on the road, in which $n$ is equal to five, i.e. the two coordinates plus the rgb code of each pixel), or the size of the area left or right of it.

* the reason why p-values `often work' (and can then be useful _alarm bells_ when getting experiments running, or validating freshly collected data) is quite simple:
** small p-values are normally associated to small values of the pdf $f(x\,|\,h_0)$, as shown in the upper plot of fig. [fig:xm_p-value].
** it is then _conceivable_ an alternative hypothesis $h_1$ such that $f(x\,|\,h_1) \gg f(x\,|\,h_0)$, as shown in the bottom plot of fig. [fig:xm_p-value].
** then, *if* this is the case, the observed $x$ _would push our beliefs towards_ $h_1$, in the sense that $p(h_1\,|\,x) > p(h_1)$.
** *but* we need to take into account also the prior odds.
** in the extreme case no such conceivable $h_1$ exists; or it could be a priori highly improbable; or it could be just fantasy, as happens in recent years with a plethora of `theorists' who give credit to any fluctuation. if this is the case, as it is often the case in frontier physics, then _the smallness of the p-value is irrelevant!_ (note that if, instead of the smallness of the value of the pdf, the rationale were really the smallness of the area below the pdf, then the absurd situation might arise in which one could choose a ``rejection area'' anywhere, as shown in chapter 1 of an earlier work of mine.)
* finally, in order to understand the apparent paradox of a large p-value and an indeed very large bf, think of a very predictive model $h_1$, whose pdf of the observable overlaps with that of $h_0$, like in the upper plot of fig. [fig:high_p-value_high_bf]. we clearly see that $f(x\,|\,h_1) \gg f(x\,|\,h_0)$ around the most probable values, thus resulting in a bayes factor highly in favor of $h_1$, although the p-value calculated from the null hypothesis would be absolutely _insignificant_.
* and `paradoxically' (this is just a colloquial term, since there is no paradox at all), large deviations from the expected value of $x$ given $h_0$, corresponding to small p-values, are those which favor $h_0$, if $h_0$ and $h_1$ are the only hypotheses in hand, as shown in the bottom plot of the same figure.
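before coming back to the gravitational wave analysis, the integral written above can be made concrete with a minimal r sketch, in the same spirit of the script presented later in the paper. all the numbers, the gaussian likelihood and the flat prior are illustrative assumptions (this is emphatically not the ligo-virgo computation); the sketch also previews the `bayesian occam's razor' mentioned below: widening the prior range of the free parameter dilutes the marginal likelihood and hence the bayes factor.

....
# toy bayes factor via marginal likelihoods (illustrative numbers only).
# h0: pure noise, x ~ N(0,1).
# h1: a signal of unknown amplitude a on top of the noise, x ~ N(a,1),
#     with a flat prior on a in (0, a.max) -- the 'free parameter' of h1.
x.obs  <- 4.2                        # hypothetical observed value
p.x.h0 <- dnorm(x.obs, 0, 1)         # p(data|h0): no free parameters
for (a.max in c(10, 100, 1000)) {
  # p(data|h1) = integral of p(data|h1,a) * f(a|h1) da; the gaussian
  # support lies well inside (0, a.max), so we restrict the numerical range
  p.x.h1 <- integrate(function(a) dnorm(x.obs, a, 1) / a.max,
                      lower = max(0, x.obs - 10),
                      upper = min(a.max, x.obs + 10))$value
  cat(sprintf("a.max = %6g  ->  bf(h1 vs h0) = %.3g\n", a.max, p.x.h1 / p.x.h0))
}
....

the p-value attached to x.obs is the same in the three cases, while the bayes factor shrinks as the prior volume of $h_1$ grows: the priors are an unavoidable part of the answer.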
something _like that_ occurs in the analysis of the gravitational wave data, the case of cinderella being the most striking one. (practitioners of goodness-of-fit tests must have already realized that the example does not apply _tout court_ to what they do, because in that case $h_1$ is usually `richer' than $h_0$, and it has then a higher level of adaptability. therefore the observed value of the test statistic decreases (with a `penalty' that frequentists quantify with a reduced number of degrees of freedom). as a consequence, the measured value of the test variable is different under the two hypotheses and, in order to distinguish them, let us indicate the first by $x_0$ and the second by $x_1$. what instead still holds, of the example sketched in the text, is that the adaptability of $h_1$ makes the p-value calculated from $x_1$ larger than that calculated from $x_0$, and therefore $h_1$ `gets preferred' to $h_0$. but, as stated in the text, the alternative hypothesis could be hardly believable, and therefore its `nice' p-value will not affect the credibility of $h_0$. this almost regularly happens when suspicions against $h_0$ only arise from _event counting_ in a particular variable, _without any specific physical signature_.) as a side remark, i would like to point out, or to remind, that one of the nice features of the bayes factor calculated integrating over the prior parameters of the model, as sketched in footnote [fn:nota_bf], is that models which have a large number of parameters, whose possible values _a priori_ extend over a large (hyper-)volume, are suppressed by the integral with respect to `simpler' models. this effect is known as the _bayesian occam's razor_ and is independent of other considerations which might enter in the choice of the priors. those interested in the subject are invited to read chapter 28 of david mackay's great book. now, in the light of these examples, i simply re-propose the following sentence from the first principle of the asa statement: ``the smaller the p-value, the greater the statistical incompatibility of the data with the null hypothesis, if the underlying assumptions used to calculate the p-value hold.'' as you can now understand, it is not a matter of assumptions concerning $h_0$, but rather of whether alternative hypotheses to it are conceivable and, more important, believable! i hope it is now clear why p-values and bayes factors have in principle nothing to do with each other, and why p-values are not only responsible for unjustified claims of discoveries, but might also relegate genuine signals to the level of flukes, or reduce their `significance,' the word being used here as normally understood and not with the `technical meaning' of statisticians. but since i know that many might not be used to the reasoning just shown, i made a little r script, so that those who are still sceptical can run it and get a feeling of what is going on.

....
# initialization
mu.h0 <- 0; sigma.h0 <- 1      # h0: 'wide' null model
mu.h1 <- 0; sigma.h1 <- 1e-3   # h1: very predictive ('narrow') model
p.h1  <- 1/2                   # prior probability of h1
mu    <- c(mu.h0, mu.h1)
sigma <- c(sigma.h0, sigma.h1)

# simulation function
simulate <- function() {
  m     <- rbinom(1, 1, p.h1)              # true model (0 or 1)
  x     <- rnorm(1, mu[m+1], sigma[m+1])   # observation generated by it
  p.val <- 2 * pnorm(mu[1] - abs(x - mu[1]), mu[1], sigma[1])  # two-sided p-value from h0
  bf    <- dnorm(x, mu[2], sigma[2]) / dnorm(x, mu[1], sigma[1])
  lbf   <- dnorm(x, mu[2], sigma[2], log = TRUE) - dnorm(x, mu[1], sigma[1], log = TRUE)
  cat(sprintf("x = %.5f => p.val = %.2e, bf = %.2e [log(bf) = %.2e]\n",
              x, p.val, bf, lbf))
  return(m)
}
....

by default $h_0$ is simply a standard gaussian distribution ($\mu_0 = 0$ and $\sigma_0 = 1$), while $h_1$ is still a gaussian centered in 0, but with a very narrow width ($\sigma_1 = 10^{-3}$). the prior odds are set at 1 to 1, i.e. p.h1 = 1/2. each call to the function simulate() prints the values that we would get in a real experiment (x, p-value, bayes factor and its log) and returns the true model (0 or 1), which can be stored in a vector variable for a later check. in this way you can try to infer what the real cause of x was before knowing the `truth' (in simulations we can, in physics we cannot!). here are the results of a small run, with the number of calls chosen in order to fill the page, thus postponing the solution to the next one. [output of the run, and the list of true models revealing _the winners_, omitted in this version] it should no longer be a surprise that the best figure of merit to discriminate between the two models is the bayes factor and not the p-value. you can now play with the simulations, varying the parameters. if you want to get a situation yielding bayes factors as huge as those of the gravitational wave events, you can keep the standard parameters of $h_0$, fixing instead the center and the width of $h_1$ at suitable values. then you can choose p.h1 at wish and run the simulation. (you also need to change the number of digits of x, replacing ``%.5f'' by ``%.11f'' inside sprintf().)

uncritical or wishful use of p-values can be dangerous, not to speak of unscrupulous p-hacking. while years ago these criticisms were raised by a minority of thorny bayesians, now the effect on the results in several fields of science and technology is felt as a primary issue. [multiblock footnote omitted] the statement of the american statistical association is certainly commendable in addressing the issue, but it is in my opinion unsatisfactory in not admitting that the question is inherent to all statistical methods that refuse the very idea of probability of hypotheses, or of ``probability of causes,'' i.e. what poincaré used to call ``the essential problem of the experimental method.''
while i had experienced several times in the past, including this winter, claims of possible breakthrough discoveries in particle physics simply due to misinterpretations of p-values, for the first time i have come across a case in which judgements based on p-values strongly reduce the `significance' of important results. this happens with the gravitational wave events reported this year by the ligo-virgo collaboration, and in particular with the october 12 event, timidly reported as a ligo-virgo trigger (`cinderella'), because of its 1.7 sigmas, in spite of the huge bayes factor of the order of $10^{10}$, which should instead convince any hesitating physicist of its nature as a gravitational wave radiated by a binary black hole merger, especially in the light of the other two, more solid events (`the two sisters'). [multiblock footnote omitted] i hope then that lvt151012 will be upgraded to gw151012 and that in future searches the bayes factor will become the principal figure of merit to rank gravitational wave candidates.

* _which bayes factor would characterize the 750 gev excess?_
the result depends on the model invoked to explain the excess, and an answer came the week after maxent 2016 by andrew fowlie. for the model considered he got a bf of _around_ 10, the exact value being irrelevant: a weak indication, but nothing striking to force sceptics to change substantially their opinion.
* _could cdf at fermilab have claimed to have observed the higgs boson if they had done a bayesian analysis?_
i am quite positive they could have, also because the prior on the possible values of the higgs mass was not so vague and was well matching the value found later, and therefore the bayes factor would have been rather high (and the prior probability of a possible manifestation of the boson in that final state was high too).

*acknowledgements*
this work was partially supported by a grant from the simons foundation, which allowed me a stimulating working environment during my visit at the isaac newton institute of cambridge, uk. the understanding and/or presentation of several things in this paper has benefited from interactions with pia astone, ariel caticha, kyle cranmer, walter del pozzo, norman fenton, enrico franco, gianluca gemme, stefano giagu, massimo giovannini, keith inman, gianluca lamanna, paola leaci, marco nardecchia, aleandro nisati, and cristiano palomba. i am particularly indebted to allen caldwell, alvaro de rujula and john skilling for many discussions on physics, probability, epistemology and sociology of scientific communities, as well as for valuable comments on the manuscript, which has also benefited from an accurate reading by christian durante and dino esposito.

b. p. abbott et al. (ligo scientific collaboration and virgo collaboration), _observation of gravitational waves from a binary black hole merger_, prl *116*, 061102 (2016), https://dcc.ligo.org/public/0122/p150914/014/ligo-p150914_detection_of_gw150914.pdf
r. l. wasserstein and n. a. lazar, _the asa's statement on p-values: context, process, and purpose_, the american statistician *70:2* (2016) 129-133, doi: 10.1080/00031305.2016.1154108, http://dx.doi.org/10.1080/00031305.2016.1154108
i. j. good, _a list of properties of bayes-turing factors_, document declassified by nsa in 2011, https://www.nsa.gov/news-features/declassified-documents/tech-journals/assets/files/list-of-properties.pdf
s. b. mcgrayne, _the theory that would not die: how bayes' rule cracked the enigma code, hunted down russian submarines, and emerged triumphant from two centuries of controversy_, yale university press 2012. (video of the presentation of the book at google available at https://www.youtube.com/watch?v=8od6ebkjf9o.)
g. d'agostini, _from observations to hypotheses: probabilistic reasoning versus falsificationism and its statistical variations_, 2004 vulcano workshop on frontier objects in astrophysics and particle physics, vulcano (italy), http://arxiv.org/abs/physics/0412148
d. overbye, _physicists in europe find tantalizing hints of a mysterious new particle_, the new york times, december 15, 2015, http://www.nytimes.com/2015/12/16/science/physicists-in-europe-find-tantalizing-hints-of-a-mysterious-new-particle.html?_r=0
j. parsons, _cern announces potential discovery of a new higgs boson particle at the large hadron collider_, mirror, december 17, 2015, http://www.mirror.co.uk/news/technology-science/science/cern-announces-potential-discovery-new-7027421
b. crew, _evidence of a new particle that could break the standard model of physics is mounting_, science alert, march 21, 2016, http://www.sciencealert.com/evidence-of-a-new-particle-that-could-break-the-standard-model-of-physics-is-mounting
p. s. laplace, _essai philosophique sur les probabilités_, 1814, http://books.google.it/books?id=jrewaaaaqaaj (the english quotes in this paper are taken from a. i. dale's translation, springer-verlag, 1995).
n. fenton, d. berger, d. lagnado, m. neil and a. hsu, _when `neutral' evidence still has probative value (with implications from the barry george case)_, science and justice *54* (2014) 274, http://www.scienceandjusticejournal.com/article/s1355-0306(13)00059-2/abstract
european network of forensic science institutes, _enfsi guideline for evaluative reporting in forensic science_, march 8, 2015, http://www.enfsi.eu/sites/default/files/documents/external_publications/m1_guideline.pdf
european network of forensic science institutes, _best practice manual for the forensic examination of digital technology_, enfsi-bpm-fit-01, november 2015.
e. iorns, _is medical science built on shaky foundations?_, new scientist, 12 september 2012, https://www.newscientist.com/article/mg21528826-000-is-medical-science-built-on-shaky-foundations/
p. jump, _more than half of psychology papers are not reproducible_, times higher education, august 27, 2015, https://www.timeshighereducation.com/news/more-half-psychology-papers-are-not-reproducible
p. jump, _reproducing results: how big is the problem?_, times higher education, september 3, 2015, https://www.timeshighereducation.com/features/reproducing-results-how-big-is-the-problem
l. d. nelson, _false-positives, p-hacking, statistical power, and evidential value_, bitss 2014 summer institute, june 2014, https://bitssblog.files.wordpress.com/2014/02/nelson-presentation.pdf
a. charpentier, _p-hacking, or cheating on a p-value_, r-bloggers, june 2015, https://www.r-bloggers.com/p-hacking-or-cheating-on-a-p-value/
a. gelman, _psych journal bans significance tests; stat blogger inundated with emails_, february 26, 2015, http://andrewgelman.com/2015/02/26/psych-journal-bans-significance-tests-stat-blogger-inundated-with-emails/
j. berger, ph. dawid, j. kadane, t. o'hagan, l. pericchi, ch. p. robert and d. szucs, contributions to _banning null hypothesis significance testing_, isba bulletin 22, march 2015, 5, https://bayesian.org/sites/default/files/fm/bulletins/1503.pdf
d. hume, _a treatise of human nature_, 1739; _an enquiry concerning human understanding_, 1748. (also available as audiobooks at librivox, with links to the online texts: https://librivox.org/treatise-of-human-nature-vol-1-by-david-hume/; https://librivox.org/an-enquiry-concerning-human-understanding-by-david-hume/.)
g. d'agostini and g. degrassi, _constraints on the higgs boson mass from direct searches and precision measurements_, eur. phys. j. *c10* (1999) 663, http://link.springer.com/article/10.1007%2fs100529900171 (arxiv: hep-ph/9902226, http://arxiv.org/abs/hep-ph/9902226)
g. d'agostini and g. degrassi, _constraining the higgs boson mass through the combination of direct search and precision measurement results_, arxiv: hep-ph/0001269, http://arxiv.org/abs/hep-ph/0001269
p. astone, g. d'agostini and s. d'antonio, _bayesian model comparison applied to the explorer-nautilus 2001 coincidence data_, class. quant. grav. *20* (2003) s769-s784 (arxiv: gr-qc/0304096, http://xxx.lanl.gov/abs/gr-qc/0304096)
j. skilling, _nested sampling for general bayesian computation_, bayesian analysis *1* (2006) 833, http://www.mrao.cam.ac.uk/~steve/maxent2009/images/skilling.pdf, https://en.wikipedia.org/wiki/nested_sampling_algorithm
r core team (2016), _r: a language and environment for statistical computing_, r foundation for statistical computing, vienna, austria, https://www.r-project.org/ (script at http://www.roma1.infn.it/~dagos/prob+stat.html)
corriere della sera, _``trovata la particella di dio'' una caccia lunga mezzo secolo_, july 3, 2012, http://www.corriere.it/scienze/12_luglio_03/trovata-particella-di-dio-caccia-lunga-mezzo-secolo-giovanni-caprara_b967689e-c4d0-11e1-a141-5df29481da70.shtml
repubblica, _da un laboratorio ungherese spunta la quinta forza_, may 25, 2016, http://www.repubblica.it/scienze/2016/05/25/news/modello_standard_forze_fondamentali_cern_lhc_particelle_fondamentali_materia_oscura_bosone-140567449/
d. ghosh, m. nardecchia and s. a. renner, _hint of lepton flavour non-universality in b meson decays_, j. high energ. phys. (2014) 131, http://link.springer.com/article/10.1007/jhep12(2014)131, http://arxiv.org/pdf/1408.4097.pdf
|
this paper shows how p-values not only create, as is well known, wrong expectations in the case of flukes, but might also dramatically diminish the `significance' of _most likely_ genuine signals. as real-life examples, the 2015 first detections of gravitational waves are discussed. the march 2016 statement of the american statistical association, warning scientists about interpretation and misuse of p-values, is also recalled and commented upon. (the paper is complemented with some remarks on past, recent and future claims of discoveries _based on sigmas_ in particle physics.)
|
how much can you tell about the past of a quantum system from its present state? this is the problem of retrodiction. one is often concerned with prediction, for example, describing the results of measurements made on the system at a later time. retrodiction is concerned with the past of the system. an example of a situation that encompasses both prediction and retrodiction is given by the standard setup in quantum communication theory. alice chooses a state from a set of states known to alice and bob, and sends it to bob, who then measures the state. alice would like to predict the result of bob's measurement based on which state she sent, and bob would like to retrodict which state alice sent. the theory of quantum retrodiction allows one to define a retrodictive state, which can be used to make predictions about the past. this point of view has been used to analyze a number of systems in quantum optics, including a beam splitter, amplifiers and attenuators, and a driven atom. it can be applied to both closed and open systems. here we shall be interested in the retrodiction of measurement results. suppose a quantum system has been prepared in a quantum state and then subjected to a series of measurements. different measurement results will lead to different final states of the system. we assume that all we have access to is the final state of the system, and not the results of the measurements, and we would like to gain information about those results. the set of measurement results can be viewed as a trajectory of the quantum system, and we will explore what can be learned about that trajectory from the final state of the system. we may be interested in only part of the trajectory, the entire trajectory, or determining whether a particular trajectory did not occur. the problem studied here is closely related to that of sequential measurements on the same quantum system. rather surprisingly, it has been shown that one can gain information about the initial state of a system even though a measurement has intervened and changed the state of the system. in our case, for the measurements determining the trajectory, subsequent measurements can disturb the quantum state resulting from a previous one, thereby complicating the task of determining the trajectory. we shall approach the problem of retrodicting measurement results from the final state in two ways. after a short discussion of some simple cases, we will see what can be done when a quantum system is subjected to two two-outcome measurements, the second of which is a projective measurement. next, we will study a simple model that will allow us to look at more general types of measurements. the picture behind the model is that of a photon going through a sequence of interferometers, where in each interferometer there is a detector that gives us information about which path the photon took through that interferometer. we would like to find out what we can infer about the photon's path, i.e. the results of the path detectors, from its state when it emerges from the final interferometer. we will make use of a qubit instead of a photon, and instead of measuring paths, our detectors will tell us whether the qubit is in the state $|0\rangle$ or $|1\rangle$. a measurement is described by a positive operator valued measure (povm), which is a set of positive operators $\{e_{j}\}$ such that $\sum_{j} e_{j} = i$. if the state being measured is $\rho$, then the probability of obtaining the result $j$ is $p_{j} = \mathrm{tr}(e_{j}\rho)$, and if the result $j$ is obtained, the state after the measurement is $a_{j}\rho a_{j}^{\dagger}/\mathrm{tr}(e_{j}\rho)$, where the operators $a_{j}$ satisfy $e_{j} = a_{j}^{\dagger}a_{j}$.
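the rules just stated are easily made concrete. here is a minimal r sketch for a qubit; the particular diagonal, `unsharp' detection operators are an illustrative assumption (any pair satisfying the completeness relation would do):

....
# povm measurement on a qubit: p_j = tr(e_j rho), with e_j = a_j^dagger a_j,
# and post-measurement state a_j rho a_j^dagger / p_j.
theta <- pi/8                                # 'sharpness' of the measurement
a.plus  <- diag(c(cos(theta), sin(theta)))   # a_+ : mostly detects |0>
a.minus <- diag(c(sin(theta), cos(theta)))   # a_- : mostly detects |1>
e.plus  <- t(a.plus)  %*% a.plus
e.minus <- t(a.minus) %*% a.minus
print(e.plus + e.minus)                      # completeness check: identity

rho <- matrix(1, 2, 2) / 2                   # rho = |+x><+x|
p.plus <- sum(diag(e.plus %*% rho))          # probability of outcome '+'
rho.after <- a.plus %*% rho %*% t(a.plus) / p.plus
cat("p(+) =", p.plus, "\n"); print(rho.after)
....

for theta = pi/4 the two operators coincide, so no information is gained and the state is undisturbed, while for theta = 0 they become the projectors onto $|0\rangle$ and $|1\rangle$.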
while this is not the most general measurement model possible, it will suffice for our purposes here. we can obtain an idea of the range of possible relations between a sequence of measurement results and the final state of a system by considering some simple examples. at one extreme, there are cases in which we learn nothing about the measurement results from the final state of the system. let us consider making two measurements on a qubit, the first measurement described by operators $\{a_{\pm}\}$ and the second by $\{b_{\pm}\}$. we will consider the case in which all of these operators are diagonal in the computational basis $\{|0\rangle, |1\rangle\}$. now suppose we start the qubit in the state $|0\rangle$. the probability that we obtain $j_{1}$ for the first measurement and $j_{2}$ for the second is just the product of the squared moduli of the corresponding diagonal entries acting on $|0\rangle$, but in all cases the final state of the system is $|0\rangle$. therefore, in this case, we learn nothing about the results of the measurements from the final state of the system. we also note that each measurement is independent of the ones before it. a less extreme case is when the measurement operators are one-dimensional projections. then the final state of the system is determined only by the final measurement result, and so it would seem to carry no information about the previous ones. however, the probability that a particular final state occurs does depend on the results of the previous measurements, so we can infer some information about those measurements from the final state. a measurement sequence of this type can be described as a markov chain: the probability of a measurement result only depends on the result of the previous measurement, because that measurement determines the state that is being measured. finally, suppose our system consists of two qubits, and the measurement operators are given by $a_{\pm} = p_{\pm}\otimes i$ and $b_{\pm} = i\otimes q_{\pm}$, where $p_{\pm}$ and $q_{\pm}$ are orthogonal one-dimensional projections. the first measurement only measures the first qubit, and the second measures the second. in this case, different final states are correlated with different sequences of measurement results, and these states are orthogonal. therefore, by measuring the final state of the system we will know what both measurement results were. what we can conclude from these examples is that there is a wide range of behaviors possible: correlations between final states and measurement results can range from nonexistent to perfect. in order to further examine what is possible, let us first look at the case of two two-outcome measurements. we start with the system in the state $|\psi\rangle$ and perform two two-outcome measurements on it. we denote the outcomes of the measurements by $\pm$. the first measurement is described by a povm $\{e_{+}, e_{-}\}$, where $e_{\pm} = a_{\pm}^{\dagger}a_{\pm}$. if the measurement result was $+$ the post-measurement state is $a_{+}|\psi\rangle/\|a_{+}|\psi\rangle\|$, and if the result was $-$ it is $a_{-}|\psi\rangle/\|a_{-}|\psi\rangle\|$. we shall assume for now that the second measurement is described by the projections $q_{\pm}$, where $q_{+} + q_{-} = i$. now suppose we have been given the system after the measurements have been made, and we would like to determine the result of the first measurement.
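before carrying out the general analysis, it may help to see the scenario numerically. the following sketch (illustrative operators again: the unsharp first measurement reuses the diagonal operators of the previous snippet, and the second measurement is projective) enumerates the four possible trajectories and prints the probability and final state of each:

....
# two sequential two-outcome measurements on a qubit:
# an unsharp first measurement a_{+/-} followed by projectors q_{+/-}.
theta <- pi/8
a <- list(`+` = diag(c(cos(theta), sin(theta))),
          `-` = diag(c(sin(theta), cos(theta))))
q <- list(`+` = matrix(c(1, 0, 0, 0), 2, 2),   # q_+ = |0><0|
          `-` = matrix(c(0, 0, 0, 1), 2, 2))   # q_- = |1><1|
psi <- c(1, 1) / sqrt(2)                       # initial state |+x>

for (r1 in c("+", "-")) for (r2 in c("+", "-")) {
  phi <- q[[r2]] %*% a[[r1]] %*% psi           # unnormalized final state
  p   <- sum(phi^2)                            # probability of the pair (r1, r2)
  cat(sprintf("trajectory (%s,%s): p = %.3f, final state = (%+.3f, %+.3f)\n",
              r1, r2, p, phi[1] / sqrt(p), phi[2] / sqrt(p)))
}
....

since the second measurement is projective, the final state alone reveals only its result; the result of the first measurement is encoded solely in the probabilities with which the final states occur, which is exactly the situation analyzed next.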
this can be viewed as a problem of discriminating between two density matrices. the first density matrix is the one that results at the output if the result of the first measurement was $+$, which is given by $$\rho_{+} = \frac{1}{p_{+}}\left[\,q_{+}a_{+}|\psi\rangle\langle\psi|a_{+}^{\dagger}q_{+} + q_{-}a_{+}|\psi\rangle\langle\psi|a_{+}^{\dagger}q_{-}\,\right],$$ and it occurs with a probability of $p_{+} = \|a_{+}|\psi\rangle\|^{2}$. the second density matrix is the one that results if the result of the first measurement is $-$, $$\rho_{-} = \frac{1}{p_{-}}\left[\,q_{+}a_{-}|\psi\rangle\langle\psi|a_{-}^{\dagger}q_{+} + q_{-}a_{-}|\psi\rangle\langle\psi|a_{-}^{\dagger}q_{-}\,\right],$$ and it occurs with a probability of $p_{-} = \|a_{-}|\psi\rangle\|^{2}$. these density matrices cannot, in general, be perfectly distinguished, so we need to turn to a strategy that will give us some information about which one we have. the minimum-error strategy minimizes the probability of making a mistake. suppose we are trying to discriminate between two density matrices, $\rho_{1}$, which occurs with probability $p_{1}$, and $\rho_{2}$, which occurs with probability $p_{2}$. minimum-error discrimination gives us a two-element povm, $\{\pi_{1}, \pi_{2}\}$, where $\pi_{1}$ corresponds to detecting $\rho_{1}$ and $\pi_{2}$ corresponds to detecting $\rho_{2}$. the probability of successfully identifying the state is $p_{s} = p_{1}\mathrm{tr}(\pi_{1}\rho_{1}) + p_{2}\mathrm{tr}(\pi_{2}\rho_{2})$, and for the optimal povm, that is, for the one that minimizes the probability of making a mistake, it is given by $$p_{s} = \frac{1}{2}\left(1 + \|p_{1}\rho_{1} - p_{2}\rho_{2}\|_{1}\right),$$ where the norm in the above equation is the trace norm. setting $\gamma = p_{1}\rho_{1} - p_{2}\rho_{2}$, the povm element corresponding to detecting $\rho_{1}$, $\pi_{1}$, is the projection onto the subspace spanned by the eigenvectors of $\gamma$ with positive eigenvalues, and the povm element corresponding to detecting $\rho_{2}$, $\pi_{2}$, is the projection onto the subspace spanned by the eigenvectors of $\gamma$ with either negative or zero eigenvalues (the states with eigenvalue zero can be placed in either povm element; we have chosen to include them in the one corresponding to $\rho_{2}$). in our case, we can evaluate the trace norm. note that $$p_{+}\rho_{+} - p_{-}\rho_{-} = \sum_{j=\pm} q_{j}\left(a_{+}|\psi\rangle\langle\psi|a_{+}^{\dagger} - a_{-}|\psi\rangle\langle\psi|a_{-}^{\dagger}\right)q_{j}\,,$$ so the trace norm can be split into two parts, because the two terms ($j = +$ and $j = -$) have orthogonal supports. in each of the parts, the problem is reduced to finding the trace norm of a two-dimensional matrix. in the first term, the support of the operator is the subspace spanned by the vectors $q_{+}a_{+}|\psi\rangle$ and $q_{+}a_{-}|\psi\rangle$, and for the second term the support lies in the subspace spanned by the vectors $q_{-}a_{+}|\psi\rangle$ and $q_{-}a_{-}|\psi\rangle$. we then find that $$\lambda = \left[\left(\|q_{+}a_{+}\psi\|^{2} + \|q_{+}a_{-}\psi\|^{2}\right)^{2} - 4\,|\langle\psi|a_{-}^{\dagger}q_{+}a_{+}\psi\rangle|^{2}\right]^{1/2} + \left[\left(\|q_{-}a_{+}\psi\|^{2} + \|q_{-}a_{-}\psi\|^{2}\right)^{2} - 4\,|\langle\psi|a_{-}^{\dagger}q_{-}a_{+}\psi\rangle|^{2}\right]^{1/2},$$ where $\lambda = \|p_{+}\rho_{+} - p_{-}\rho_{-}\|_{1}$. now $\lambda$ is between $0$ and $1$, with $1$ corresponding to perfectly distinguishable states and $0$ corresponding to states that cannot be distinguished. in our case, if $q_{-}a_{+}|\psi\rangle = 0$ and $q_{+}a_{-}|\psi\rangle = 0$, then we will have $\lambda = 1$. in this case, the result of the first measurement determines the result of the second measurement. in order for $\lambda = 1$ to hold, it must be the case that $q_{j}a_{+}|\psi\rangle$ and $q_{j}a_{-}|\psi\rangle$ are orthogonal for $j = \pm$. for the case of a qubit, this can occur when, e.g., $a_{+}|\psi\rangle$ lies in the support of $q_{+}$ and $a_{-}|\psi\rangle$ lies in the support of $q_{-}$. for qubits, we can go further. assuming that $q_{\pm}$ are rank-one projections, the inner products in eq. ([lambda]) factorize, and we have that $$\lambda = \sum_{j=\pm}\big|\,\|q_{j}a_{+}\psi\|^{2} - \|q_{j}a_{-}\psi\|^{2}\,\big| = \sum_{j=\pm}\big|\,p(j,+) - p(j,-)\,\big|,$$ where $p(j,k) = \|q_{j}a_{k}\psi\|^{2}$, for $j, k \in \{+,-\}$, is the probability that the first measurement gives the result $k$ and the second gives $j$. from this, we see that if the probabilities of the different measurement outcomes are close to the same, it will be difficult to distinguish the output states corresponding to different values of the first measurement.
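the minimum-error rule just described is a one-liner numerically: the trace norm of a hermitian matrix is the sum of the absolute values of its eigenvalues. here is a small r sketch with two illustrative pure states (not the $\rho_{\pm}$ of this section), checked against the textbook closed form for pure states:

....
# minimum-error (helstrom) discrimination between two density matrices:
# p_success = (1 + || p1*rho1 - p2*rho2 ||_1) / 2.
helstrom <- function(rho1, rho2, p1) {
  lam <- eigen(p1 * rho1 - (1 - p1) * rho2, symmetric = TRUE)$values
  (1 + sum(abs(lam))) / 2
}
proj <- function(v) (v %*% t(v)) / sum(v^2)   # projector onto a real vector

psi1 <- c(1, 0); psi2 <- c(1, 1) / sqrt(2)    # overlap <psi1|psi2> = 1/sqrt(2)
cat("numerical  :", helstrom(proj(psi1), proj(psi2), 1/2), "\n")
# for equal priors and pure states: (1 + sqrt(1 - |<psi1|psi2>|^2)) / 2
cat("closed form:", (1 + sqrt(1 - sum(psi1 * psi2)^2)) / 2, "\n")
....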
for qudits, the expression on the right-hand side of eq. ([qubit-lambda]) is a lower bound for $\lambda$, so its value gives a worst case for one's ability to determine the result of the first measurement. now let us look at determining the results of both measurements. the case we have been considering so far, in which the second measurement is a projective one, is straightforward, because the projections $q_{+}$ and $q_{-}$ have orthogonal supports, which implies that the states that result from different outcomes of the second measurement are perfectly distinguishable. this also makes it simple to determine the results of both measurements. first we measure the output state in order to determine whether it is in the support of $q_{+}$ or of $q_{-}$. that reduces the problem to one of distinguishing between two states; for example, if the output state was found to be in the support of $q_{+}$, then we would need to discriminate between $q_{+}a_{+}|\psi\rangle$ and $q_{+}a_{-}|\psi\rangle$ (suitably normalized). this can then be accomplished by using minimum-error discrimination. the success probability for determining both measurements using this procedure is the same as that of determining the result of the first measurement, $(1+\lambda)/2$, where $\lambda$ is given by eq. ([lambda]). this is shown in greater detail in appendix a, and it is also shown there that this procedure is optimal. in the next section, we will look at the case when both measurements are povm's for a simple example, a double qubit interferometer. we will see what one can learn about the path taken through the interferometer, which is specified by the results of two measurements, by measuring the final state of the qubit.

can one learn more about the path of the qubit if the measurements are less disturbing and, therefore, interfere with each other less? in particular, if the second measurement is not a projection, one would expect more information about the result of the first measurement to make it through to the final state. our model allows us to examine this idea. we shall consider a qubit double interferometer based on the qubit single interferometer used by englert to derive a visibility-path-information duality relation. this will allow us to consider measurements other than projective measurements. we start the qubit in the state $|0\rangle$ and it then passes through a hadamard gate, which puts it in the state $|+x\rangle = (|0\rangle + |1\rangle)/\sqrt{2}$, where we have denoted the operator corresponding to the hadamard gate by $h$. note that $h|1\rangle = |-x\rangle = (|0\rangle - |1\rangle)/\sqrt{2}$ and $h^{2} = i$. we then measure which path the qubit took, by which we mean whether it is in the state $|0\rangle$ or $|1\rangle$. the qubit then passes through a second hadamard gate, and we again measure whether it is in the state $|0\rangle$ or $|1\rangle$. the qubit then passes through a final hadamard gate. we can view the measurement results as defining a trajectory that the qubit follows through the interferometer, and we are interested in determining what information we can gain about the trajectory by measuring the state of the qubit when it emerges from the interferometer. the measurements will not necessarily extract all of the information about the qubit's state, so that we can examine the relation between how much path information is extracted and the final state of the qubit. to measure the qubit going through the interferometer (qubit $a$) we couple it first to a second qubit (qubit $b$), which is initially in the state $|0\rangle_{b}$, using a unitary operation that rotates the detector qubit in opposite directions according to the path, by an amount set by a coupling angle $\theta$. the parameter $\theta$ controls how much information the measurement extracts about the path: if $\theta = 0$ no path information is extracted, while if $\theta = \pi/4$ the maximum amount of information is extracted.
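one convenient way to realize such a coupling (an illustrative reconstruction, consistent with the measurement operators and probabilities given below, though not necessarily the exact convention of the original) is a rotation of the detector qubit by $\pm\theta$ conditioned on the path, followed by a read-out in the $|\pm x\rangle_{b}$ basis. the r sketch below builds it and extracts the effective measurement operators on qubit $a$:

....
# coupling of qubit a (system) to qubit b (detector), as an assumed
# controlled rotation: u = |0><0| (x) r(theta) + |1><1| (x) r(-theta).
theta <- pi/8
p0 <- matrix(c(1, 0, 0, 0), 2, 2); p1 <- matrix(c(0, 0, 0, 1), 2, 2)
r  <- function(t) matrix(c(cos(t), sin(t), -sin(t), cos(t)), 2, 2)
u  <- kronecker(p0, r(theta)) + kronecker(p1, r(-theta))

# read the detector out in the x basis; the labelling below makes '+'
# correspond to the detector state (|0> - |1>)/sqrt(2).
bra <- list(`+` = matrix(c(1, -1), 1, 2) / sqrt(2),
            `-` = matrix(c(1,  1), 1, 2) / sqrt(2))
ket0 <- matrix(c(1, 0), 2, 1)                  # detector starts in |0>_b

m <- lapply(bra, function(b)                   # m_r = (i (x) <b_r|) u (i (x) |0>_b)
  kronecker(diag(2), b) %*% u %*% kronecker(diag(2), ket0))
print(m[["+"]] * sqrt(2))                      # diag(cos - sin, cos + sin)
print(t(m[["+"]]) %*% m[["+"]] + t(m[["-"]]) %*% m[["-"]])   # completeness: identity
....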
when we measure the auxiliary qubit, we perform the optimal minimum-error measurement to distinguish the two detector states associated with the two paths; that means we measure qubit $b$ in the $|\pm x\rangle_{b}$ basis. we shall interpret the result $+$ as corresponding to qubit $a$ being in the state $|1\rangle$, and $-$ as corresponding to qubit $a$ being in the state $|0\rangle$. let us now find the measurement operators corresponding to this procedure. if the pre-measurement state is $|\psi\rangle$ and we obtain $\pm$ as the measurement result, the post-measurement state is $m_{\pm}|\psi\rangle/\|m_{\pm}|\psi\rangle\|$. in terms of matrices in the basis $\{|0\rangle, |1\rangle\}$ we have (in a form consistent with all the formulas below) $$m_{+} = \frac{1}{\sqrt{2}}\begin{pmatrix}\cos\theta - \sin\theta & 0\\ 0 & \cos\theta + \sin\theta\end{pmatrix}, \qquad m_{-} = \frac{1}{\sqrt{2}}\begin{pmatrix}\cos\theta + \sin\theta & 0\\ 0 & \cos\theta - \sin\theta\end{pmatrix}.$$ the corresponding povm operators are $$e_{\pm} = m_{\pm}^{\dagger}m_{\pm} = \frac{1}{2}\left[(1 \mp \sin 2\theta)\,|0\rangle\langle 0| + (1 \pm \sin 2\theta)\,|1\rangle\langle 1|\right].$$ the final states, up to normalization, are given by applying hadamard operators and the measurement operators to the initial state. in particular, if both measurements yielded $+$, then the final state is proportional to $h m_{+} h m_{+} h |0\rangle$ (we shall henceforth drop the subscript $a$ on the qubit). after the first hadamard, the state is $|+x\rangle$, and the probabilities of the two results of the first measurement are $\langle +x|e_{\pm}|+x\rangle = 1/2$. the joint probabilities for the two measurements are given by $$p(+,+) = p(+,-) = \frac{1}{4}\left[1 - \sin(2\theta)\cos(2\theta)\right],$$ $$p(-,+) = p(-,-) = \frac{1}{4}\left[1 + \sin(2\theta)\cos(2\theta)\right],$$ where the first argument in the probability corresponds to the second measurement and the second argument corresponds to the first measurement, i.e. $p(+,-)$ is the probability of first getting $-$ and then getting $+$ for the measurement results. this corresponds to the order in which the measurement operators are applied to the state. the resulting normalized output states, with the same convention for the ordering of the measurement results, are $$|\psi_{out}^{++}\rangle = \frac{\cos\theta(\cos\theta - \sin\theta)\,|+x\rangle - \sin\theta(\sin\theta + \cos\theta)\,|-x\rangle}{\sqrt{1 - \sin(2\theta)\cos(2\theta)}}\,,$$ $$|\psi_{out}^{+-}\rangle = \frac{\cos\theta(\cos\theta - \sin\theta)\,|+x\rangle + \sin\theta(\sin\theta + \cos\theta)\,|-x\rangle}{\sqrt{1 - \sin(2\theta)\cos(2\theta)}}\,,$$ $$|\psi_{out}^{-+}\rangle = \frac{\cos\theta(\cos\theta + \sin\theta)\,|+x\rangle + \sin\theta(\sin\theta - \cos\theta)\,|-x\rangle}{\sqrt{1 + \sin(2\theta)\cos(2\theta)}}\,,$$ $$|\psi_{out}^{--}\rangle = \frac{\cos\theta(\cos\theta + \sin\theta)\,|+x\rangle - \sin\theta(\sin\theta - \cos\theta)\,|-x\rangle}{\sqrt{1 + \sin(2\theta)\cos(2\theta)}}\,.$$ note that when $\theta = 0$, in which case the measurement extracts no path information, all of these vectors become $|+x\rangle$, and there is no correlation between the final state and the measurement results. when $\theta = \pi/4$, then $|\psi_{out}^{++}\rangle$ and $|\psi_{out}^{+-}\rangle$ are parallel to $|-x\rangle$, and $|\psi_{out}^{-+}\rangle$ and $|\psi_{out}^{--}\rangle$ are parallel to $|+x\rangle$; then we can only distinguish between the two sets $\{|\psi_{out}^{++}\rangle, |\psi_{out}^{+-}\rangle\}$ and $\{|\psi_{out}^{-+}\rangle, |\psi_{out}^{--}\rangle\}$. if we represent the four output states as vectors in the plane, with $|+x\rangle$ being the horizontal direction and $|-x\rangle$ the vertical, we find the following. the states $|\psi_{out}^{++}\rangle$ and $|\psi_{out}^{+-}\rangle$ make angles of $-\phi_{1}$ and $\phi_{1}$, respectively, with the horizontal axis, where $$\tan\phi_{1} = \frac{\sin\theta(\sin\theta + \cos\theta)}{\cos\theta(\cos\theta - \sin\theta)}\,,$$ and $|\psi_{out}^{-+}\rangle$ and $|\psi_{out}^{--}\rangle$ make angles of $\phi_{2}$ and $-\phi_{2}$, respectively, with the horizontal axis, where $$\tan\phi_{2} = \frac{\sin\theta(\sin\theta - \cos\theta)}{\cos\theta(\cos\theta + \sin\theta)}\,.$$ note that $\phi_{1}$ is an increasing function of $\theta$, which goes from $0$ at $\theta = 0$ to $\pi/2$ at $\theta = \pi/4$. the behavior of $\phi_{2}$ is a bit more complicated: its magnitude is $0$ at $\theta = 0$, increases and then decreases again, becoming $0$ at $\theta = \pi/4$. both $\phi_{1}$ and $\phi_{2}$ are plotted as functions of $\theta$ in fig. 1. [phi12: $\phi_{1}$ and $\phi_{2}$ plotted as functions of $\theta$; one curve ($\phi_{1}$) rises monotonically to $\pi/2$, the other ($\phi_{2}$) rises and then falls back to zero.] now that we have the final states, we can ask what kind of information we can learn by measuring them. there are a number of possibilities. one is to determine, as best we can, the result of either the first or the second measurement. another possibility is to perform a four-outcome measurement that maximizes our probability of finding both measurement results. a final possibility is to perform a measurement that eliminates some of the possible trajectories. we shall look at each of these possibilities in turn. suppose we only wish to determine the result of the second measurement.
the density matrix corresponding to the result , if we ignore the result of the first measurement , is } [ p(+,+ ) |\psi_{out}^{++}\rangle\langle\psi_{out}^{++}| \nonumber \\ & & + p(+,-)|\psi_{out}^{+-}\rangle\langle \psi_{out}^{+-}| ] .\end{aligned}\ ] ] and the density matrix corresponding to is } [ p(-,+ ) |\psi_{out}^{-+}\rangle\langle \psi_{out}^{-+}| \nonumber \\ & & + p(-,-)|\psi_{out}^{--}\rangle\langle \psi_{out}^{--}| ] .\end{aligned}\ ] ] our problem in determining the result of the second measurement is reduced to discriminating between these two density matrices .this can be done using minimum - error state discrimination . in this case , choosing as , which occurs with a probability of , and , which occurs with a probability of , ( see eq .( [ min - err ] ) ) we find that this implies that the povm element corresponding to detecting is , and the povm element corresponding to is .we shall denote the results of the output measurement as , corresponding to the detection of , and , corresponding to the detection of . finding the trace norm of gives us that .\ ] ] this result is not surprising in that the density matrices become more distinguishable as goes from to .at they are identical and equally probable , so guessing is the best we can do . at are also equally probable , but they are now orthogonal and , therefore , perfectly distinguishable . using bayes theoremwe can see what is the effect of updating the probabilities for the occurrence of the two density matrices , and , which is also the same as updating the probabilities for the result of the second measurement .let us denote by and the probabilities of obtaining and at the output , respectively .these are given by \nonumber \\p(\rho_{2- } ) & = & p(-,+ ) + p(-,- ) = \frac{1}{2 } [ 1 + \sin ( 2\theta ) \cos ( 2\theta ) ] .\nonumber \\\end{aligned}\ ] ] what we would like to find are the probabilities of the occurrence of and conditioned on the result of the measurement on the output state .this is the same as finding the probabilities of the result of the second measurement conditioned on the measurement of the output state. we shall denote these probabilities by , where , , i.e. the probability of occurring if the measurement of the output state is .bayes theorem tells us that where , and from this we find that \nonumber \\ p(\rho_{2-}|m_{out}=+ ) & = & \frac{1}{2 } [ 1- \sin ( 2\theta ) ] \nonumber \\p(\rho_{2+}|m_{out}=- ) & = & \frac{1}{2 } [ 1 - \sin ( 2\theta ) ] \nonumber \\p(\rho_{2-}|m_{out}=- ) & = & \frac{1}{2 } [ 1 + \sin ( 2\theta ) ] .\end{aligned}\ ] ] now suppose we measured the output state and obtained . 
before the measurement the probability of the output state being being was , while after the measurement it is , and a comparison of eqs .( [ bef - meas2],[aft - meas2 ] ) shows that .therefore , the result of the measurement on the output state has increased the probability that the result of the second measurement was indeed , and the difference between and is an increasing function of .the situation becomes more interesting if we wish to determine only the result of the first measurement .ignoring the result of the second measurement , the output density matrix corresponding to the result for the first measurement is } [ p(+,+ ) |\psi_{out}^{++}\rangle\langle \psi_{out}^{++}| \nonumber \\ & & + p(-,+ ) |\psi_{out}^{-+}\rangle\langle \psi_{out}^{-+}| ] , \end{aligned}\ ] ] and the output density matrix corresponding to is } [ p(+,-)|\psi_{out}^{+-}\rangle\langle \psi_{out}^{+-}| \nonumber \\ & & + p(-,-)|\psi_{out}^{--}\rangle\langle \psi_{out}^{--}| ] , \end{aligned}\ ] ] and the probabilities of these output density matrices occurring are .we can now find the optimal minimum - error discrimination for this situation , and we find that the povm elements are and , and the success probability is now this has a different behavior than the success probability for the second measurement .it is at since , again , the states are identical and equally probable , and then increases reaching a maximum value of at .it then decreases back to at .the reason for the decrease is that the states are equally probable for the entire range of , and as approaches , the second measurement becomes closer to a projective measurement , and this eliminates the correlation between the first measurement and the final state .the success probability in discriminating the two output states resulting from either of the measurements ( in this case the first or the second ) serves as a useful measure or the influence of the measurement on the output state . in the case of the first measurement ,the influence for small is small , then grows , but subsequently declines as the second measurement forces and to become less distinguishable . as before , we can use bayes theorem to find the probabilities of , that is the probabilities of the results of the first measurement , conditioned on a result of the measurement of the output state .we now let the output state measurement result correspond to and correspond to . in analogy with what we did before, we find that \nonumber \\ p(\rho_{1-}|m_{out}=+ ) & = & \frac{1}{2 } [ 1- \sin ( 2\theta ) \cos ( 2\theta ) ] \nonumber \\p(\rho_{1+}|m_{out}=- ) & = & \frac{1}{2 } [ 1 - \sin ( 2\theta ) \cos ( 2\theta ) ] \nonumber \\p(\rho_{1-}|m_{out}=- ) & = & \frac{1}{2 } [ 1 + \sin ( 2\theta ) \cos ( 2\theta ) ] .\end{aligned}\ ] ] note that in this case , the difference between , for example , and first increases with as the measurements extract more information , but then decreases as the second measurement interferes with the first . 
instead of trying to determine the result of either the first or second measurement, one can try to determine both .we then need a measurement that will discriminate among the four output states in eq .( [ output - states ] ) .unfortunately an explicit form for the optimal minimum - error measurement is only known for two states , so we will have to proceed in a different manner than we have so far .first , we will use a pretty good discrimination measurement , the square - root measurement .next we will numerically find an optimal discrimination measurement , and compare its success probability to that of the square - root measurement .suppose we want to discriminate among the states , where occurs with probability .the povm elements for the square root measurement are given by where , and the inverse is take on the span of the vectors . in our case we find that so that defining the states the povm elements for the square - root measurement are where .the probability of successfully identifying the state is .\end{aligned}\ ] ] it is useful to compare this to the optimal minimum - error measurement for these states , which we shall find numerically .the set of four states we are trying to discriminate is invariant under a reflection about the axis , so the povm elements should also have this property .consequently , we choose , , , and , where is just with replaced by and is just with replaced by .we also have that and are between and .the requirement that the povm elements sum to the identity gives us that adding these equations we find that and subtracting them gives .these equations will have a solution in the range if either and , or and .we will choose and , which guarantees that this condition is satisified .solving these equations for and , we find that in our case , the states and occur with a probability and and occur with a probability of , where .the success probability is now for each value of , which determines the values of and , we can do a search in the allowed ranges of and in order to find values that maximize the above expression .the results are shown in fig . 2 .these results are surprising . both the square - root measurement andthe numerical results show that the success probability is greatest at , where we can determine with certainty the result of the second measurement , but lose all information about the first . one might have thought that an intermediate value of would give the greatest value , because in that case the final state would depend on the results of both measurements .as one can see , however , that is not the case .[ double - inf ] for a two - loop interferometer .the dashed line corresponds to the square - root measurement , and the solid line to the numerically optimized measurement.,title="fig : " ] so far , all of the information we have gained about possible trajectories is probabilistic , we can identify likely trajectories , but we can not say that one definitely occurred .is there a measurement we can make that will allow us to say something definite about a trajectory ?the answer to this question is yes if instead of asking which trajectory occurred , we ask if there is one that did not occur .measurements can be used to identify states , but they can also be used to eliminate states from a known set . this type of measurement has proven useful in quantum digital signature schemes . 
herewe would like to develop a measurement that eliminates one of the four possible trajectories .each povm element will be a projection onto a vector orthogonal to one of the four output states , that is , when acting on one of the output states the result is zero .if we obtain the measurement result corresponding to that povm element , then the output state can not be the state that is annihilated by that element .since the set of states we are considering is invariant under a reflection about the axis , we can construct the povm elements from the vectors , for from the previous section .we again choose , , , , and .the conditions that guarantee that the povm elements sum to the identity are given in eq .( [ id - cond ] ) .we will consider in the range , which implies that is between and , and is between and .define the vectors for .now we can set and , which leads to the identification , , and .we have that and , so the conditions for the povm elements to sum to the identity are fulfilled .the povm elements are where if we measure the states with this povm and obtain the result corresponding to , where , then that means the output state was not .this type of measurement can be used to generate a guess for the trajectory that is guaranteed to have at least one of the measurement results correct .if the party measuring the final state obtains the result corresponding to , which means that the trajectory did not occur ( is the result of the first measurement , the result of the second ) , then the guess for the trajectory should be , where the bar indicates taking the opposite sign , e.g. if , then . to see how this works ,suppose we find that the trajectory did not occur , so we guess .now since did not occur , the possibilities are , , and .the guess , matches the first two possibilities in one place and matches the third possibility completely .a similar situation arises when trying to find the state of two qubits each of which is in one of two nonorthogonal states ( see ) .it is useful to extend the interferometer from two loops to three in order to see how our ability to retrodict trajectories changes as the trajectories become longer .we will explore a measurement derived from the square - root measurement and one derived numerically . in this case, we have eight , instead of four , possible output states .these states are derived from the ones in eq .( [ output - states ] ) by applying either or to them .non - normalized versions of these states are given in the appendix .in particular , the state , where , is given by with explicit expressions given in eq .( [ 3-loop - out ] ) .the density matrix , , that appears in the square root measurement is , in this case , , \end{aligned}\ ] ] so that the povm elements are given by once one has the povm , calculation of the success probability of the measurement , , is straightforward , and a plot of versus is give for the three - loop case in fig .[ triple - inf ] for a three - loop interferometer .the dashed line corresponds to the square - root measurement , and the solid line to the numerically optimized measurement.,title="fig : " ] we also used a numerical approach to optimize the povm . 
in this casewe note that the set of output states is invariant under reflections about the state , for example , and are taken into each other by this reflection .therefore , our povm elements will also have this symmetry , so we have where , , , and correspond to , , , and respectively , with all the of s going to - s .it is important to note here that in the previous case , our condition that povms sum to identity reduced our number of free parameters from 4 to 2 , leaving only and . here, the same condition reduces them from 8 to 6 , requiring that in addition to the 4 s we must have 2 of the s be free parameters as well . choosing to eliminate and , we find \nonumber \\ & & + c_{2}[\cos(2\mu_{4 } ) - \cos(2\mu_{2 } ) ] - \cos(2\mu_{4})\ } \nonumber \\c_{4 } & = & \frac{1}{\cos(2\mu_{3 } ) - \cos(2\mu_{4 } ) } \{-c_{1}[\cos(2\mu_{3 } ) - \cos(2\mu_{1 } ) ] \nonumber \\ & & - c_{2}[\cos(2\mu_{3 } ) - \cos(2\mu_{2 } ) ] - \cos(2\mu_{3})\ } \end{aligned}\ ] ] one then optimizes over the remaining parameters in order to find .the result is shown in fig .2 . as expected , the success probability is lower than in the two - loop case , but , more interestingly , the behavior is quite different as well . instead of approaching a plateau ,the success probability reaches a maximum and then decreases .the success probability goes to at , because at that value of , the eight possible output states collapse down to two , , so that each output state corresponds to four different trajectories .the fact that the maximum success probability occurs at an intermediate value of , where the final state depends on all of the measurement results , is more in line with one s expectations than the result in the two - loop case where was a maximum when it depended only on the result of the second measurement .we have studied a number of instances of the effect of measurements on the final state of a quantum system , and our ability to use that state to retrodict the results of the measurements .this ability can range from none to perfect , depending on the measurement and the initial state of the quantum system . using a qubit interferometer , we examined the retrodiction of a sequence of measurements for which we could vary the strength of the measurements .the measurement we make on the final state of the quantum system depends on what we want to find out about the sequence of previous measurements .we may want to find out the result of only one of the measurements , all of them , or find a measurement sequence that was not realized . in our study of the two loop interferometer, we found that the highest success probability for determining the result of the first measurement occurred when both measurements were weaker than full projective measurements .if the second measurement is a projective one , it erases the information about the first measurement .surprisingly , however , if we are trying to determine the results of both measurements , we found that the case with the highest success probability was when both measurements were projective .we were also able to construct a measurement that would conclusively eliminate one of the trajectories . in the case of a three loop interferometer , when determining the entire trajectory the highest success probability occurred when the measurements were weaker than projective measurements .there are issues that could benefit from further study . 
in all of the cases we examined , the party making the final state measurement and at least one of the parties making the earlier measurements share information , which means that this process can be viewed as a kind of communication channel .this is the case , because the final state of a quantum system usually does carry information about the history of measurements on the system , and it is possible to gain access to this information by making measurements on the final state .this suggests that the application of information measures to this problem would be a fruitful .a second topic , which was not addressed here , is the role of the initial state .some initial states will prove better than others in transmitting the information about the measurement results to the final state .we hope to make both of these issues the subject of future work .this research was supported by a grant from the john templeton foundation .here we will look in more detail at the case of two two - outcome measurements when the second measurement is a projective one , which was discussed in section iii .the strategy for determining the outcome of both measurements was first to measure which of the two subspaces , the one corresponding to or the one corresponding to , the final state of the system is in .since it is definitely in one of these subspaces , and the subspaces are orthogonal , this measurement is deterministic .one then performs one of two minimum - error measurements , which one depends on which subspace the state is in , in order to determine the result of the first measurement .we want to determine the overall success probability of this procedure .the probability that the final state is in the subspace corresponding to is and the probability that it is in the subspace corresponding to is where , and .if we find that the state is in the subspace corresponding to then we are faced with discriminating between two states , , which occurs with a probability of and , which occurs with probability the success probability for this problem is given by , where similarly , if one finds the final state in the support of , one wants to discriminate between and , and this can be done with a success probability of , where the overall success probability is which is the same as eq .( [ lambda1 ] ) .now we would like to show that the optimal four - element povm for determining the final state in this case splits into a two - element povm on the support of and a two - element povm on the support of .this implies that the optimal povm is the one discussed above , where we determine which subspace , support of or support of , the final state is in and then apply the optimal two - element povm to distinguish between the two possible final states in that subspace .now suppose that the optimal povm is .the success probability for this measurement is we first note that which implies that now define a new povm this is a povm , because its elements sum to the identity , see eq .( [ id - sum ] ) , and all of the operators are positive. 
it also has the property that its success probability , satisfies .for example , looking at the first terms in the sums for the two probabilities , we see that where the second term on the right - hand side is clearly nonnegative .so each term in the sum for is greater than or equal to the corresponding term in the sum for .since we can take any povm and create another one , which has the property that two of its elements have support in the support of and two have support in the support of , and this second povm has a greater than or equal success probability , the optimal povm will have and with support in the support of and and with support in the support of .the output states for the three - loop interferometer , in a form that is not normalized , are latexmath:[\[\begin{aligned } \label{3-loop - out } 99 d. t. pegg and s. m. barnett , j. mod .opt . * 47 * , 1779 ( 2000 ) .s. m. barnett , d. t. pegg , and j. jeffers , j. opt .b : quantum semiclass . opt . * 1 * , 442 ( 1999 ) .s. m. barnett , d. t. pegg , j. jeffers , o. jedrkiewicz , and r. loudon , phys .a * 62 * , 022313 ( 2000 ) .d. t. pegg , s. m. barnett , and j. jeffers , phys .a * 66 * , 022106 ( 2002 ) . for a review see s. barnett in _ quantum information and coherence _ edited by e. andersson and p. berg ( springer , heidelberg 2014 ) .p. rapan , j. calsamiglia , r. muoz - tapia , e. bagan , and v. buek , phys .a * 84 * , 032326 ( 2011 ) .j. bergou , e. feldman , and m. hillery , phys .111 * , 100501 ( 2013 ) .t. heinosaari and t. miyadera , phys .rev . a * 91 * , 022110 ( 2015 ) .t. heinosaari and m. ziman , _ the mathematical language of quantum theory _( cambridge university press , cambridge , 2012 ) .c. w. helstrom , _ quantum detection and estimation theory _( academic , new york , 1976 ) .b. -g .englert , phys .lett . * 77 * , 2154 ( 1996 ) .s. m. barnett and s. croke , advances in optics and photonics * 1 * , 238 ( 2009 ) and arxiv:0810.1970 .e. andersson , s. m. barnett , c. r. gilson , and k. hunter , phys .a * 65 * , 052308 ( 2002 ) .s. barnett , _ quantum information _ ( oxford university press , oxford , 2009 ). s. bandyopahdyay , r. jain , j. oppenheim , and c. perry , phys .a * 89 * , 022336 ( 2014 ) .r. j. collins , r. j. donaldson , v. dunjko , p. walden , p. j. clarke , e. andersson , j. jeffers , and g. s. buller , phys . rev . lett .* 113 * , 040502 ( 2014 ) .p. walden , v. dunjko , and e. andersson , j. phys .a * 47 * , 125303 ( 2014 ) .
|
we study how well we can retrodict results of measurements made on a quantum system if we can make measurements on its final state . we know what measurements were made , but not their results . an initial examination shows that we can gain anywhere from no information to perfect information about the results of previous measurements , depending on the measurements and the initial state of the system . the case of two two - outcome measurements , the second of which is a projective measurement , is examined in some detail . we then look at a model of a qubit interferometer in which measurements are made in order to determine the path the qubit followed . the measurement made on the final state of the qubit depends on the information about previous measurement results that we are trying to determine . one can attempt to find the result of just one of the measurements , all of them , or find a measurement sequence that was not realized . we study all three possibilities .
|
the reduction of friction in practical applications has been studied since antiquity .pictographs found in uruk , located in modern day iraq , have been dated to ca .3000 b.c . andillustrate the transition from sleds to wheels ( see hamrock & dowson ( 1981 ) for a historical overview ) .while this advance certainly reduced friction , further reductions were possible upon the introduction of a viscous lubricating fluid in the axle joints .the theoretical underpinings of fluid lubrication in such geometries can be traced back to the work of reynolds ( 1886 ) , who studied the mechanics of fluid flow through a thin gap using an approximation to the stokes equations , now known as lubrication theory .recent efforts in this technologically important problem have focused on modifications of reynolds lubrication theory to account for elastohydrodynamic effects ( elastic surface deformation due to fluid pressure ) , piezoviscous behavior ( lubricant viscosity change due to high pressure ) , and thermoviscous behavior ( lubricant viscosity change due to frictional heating ) ( dowson & higginson 1959 ; odonoghue , brighton & hooke 1967 ; conway & lee 1975 ; hamrock & dowson 1981 ) .inspired by a host of applications in physical chemistry , polymer physics and biolubrication , in this paper we focus on the elastohydrodynamics of soft interfaces , which deform easily thereby precluding piezoviscous and thermoviscous effects .there have been a number of works in these areas in the context of specific problems such as cartilage biomechanics ( grodzinsky , lipshitz & glimcher 1978 ; mow , holmes & lai 1984 ; mow & guo 2002 ) , the motion of red blood cells in capillaries ( lighthill 1968 ; fitz - gerald 1969 ; tzeren & skalak 1978 ; secomb _ et al ._ 1986 ; damiano _ et al ._ 1996 ; secomb , hsu & pries 1998 ; feng & weinbaum 2000 ; weinbaum _ et al ._ 2003 ) , the elastohydrodynamics of rubber - like elastomers ( tanner 1966 ; martin _ et al ._ 2002 ) , polymer brushes ( klein , perahia & warburg 1991 ; sekimoto & leibler 1993 ) and vesicles ( abkarian , lartigue & viallat 2002 ; beaucourt , biben & misbah 2004 ) .another related phenomenon is that of a bubble rising slowly near a wall ; the bubble s surface deformation then leads to a lift force ( leal 1980 ; takemura _ et al ._ 2002 ) . instead of focusing on specific applications , herewe address a slightly different set of questions : how can one generate lift between soft sliding surfaces to increase separation and reduce wear ? what is the role of geometry in determining the behavior of such systems ? how do material properties influence the elastohydrodynamics ? are there optimal combinations of the geometry and material properties that maximize the lift force ? and finally , can the study of soft lubrication lead to improved engineering designs and be of relevance to real biological systems ? to address some of these questions we start with the simple case of two fluid - lubricated rigid non - conforming surfaces sliding past one another at a velocity as shown in figure [ schematic1 ] . the viscous stresses and pressure gradient due to flow in the narrow contact zone are dominant . for a newtonian fluid ,the stokes equations ( valid in the gap ) are reversible in time , , so that the transformation implies the transformations of the velocity and the normal force . 
in the vicinity of the contact regionnon - conforming surfaces are symmetric which implies that these flows are identical and therefore .elastohydrodynamics alters this picture qualitatively . in front of the sliderthe pressure is positive and pushes down the substrate , while behind the slider the pressure is negative and pulls up the substrate . as the solid deforms , the symmetry of the gap profile is broken leading to a normal force which pushes the cylinder away from the substrate .this picture applies naturally to soft interfaces which arise either due to the properties of the material involved , as in the case of gels , or the underlying geometry , as in the case of thin shells . in [ thine ] we study the normal - tangential coupling of a non - conforming contact coated with a thin compressible elastic layer .if the gap profile prior to elastic deformation is parabolic in the vicinity of the contact , the contact is non - conforming .however , if a parabolic description prior to deformation is insufficient we refer to the contact as conforming ; _e.g. _ the degenerate case considered in [ dsec ] . [ 6 ] treats normal - tangential coupling of non - conforming contacts coated with a thick compressible elastic layer . in[ 4 ] we consider the normal - tangential coupling of non - conforming contacts coated with an incompressible elastic layer . in [ 5 ] we treat the normal - tangential coupling of non - conforming contacts coated with a thin compressible poroelastic layer which describes a biphasic material composed of an elastic solid matrix and a viscous fluid ( biot 1941 ) . in [ 7 ] we study the normal - tangential coupling of a non - conforming contact where one solid is rigid and the other is a deformable cylindrical shell . in [ 8 ] we study a conforming contact : a journal bearing coated with a thin compressible elastic layer ; finally , in [ 10 ] we treat the elastohydrodynamics of 3-dimensional flows using scaling analysis . figure [ summary ] provides an overview of the different geometries and elastic materials considered .our detailed study of a variety of seemingly distinct physical systems allows us clearly observe their similarities and to outline a robust set of features we expect to see in any soft contact . in all the casesstudied the normal force = contact area characteristic hydrodynamic pressure , where is the dimensionless lift and the softness parameter .tables [ t.1 ] and [ t.2 ] summarize our results for . increasing increases the asymmetry of the gap profile which results in a repulsive elastohydrodynamic force , _i.e. _ in the generation of lift forces .however , increasing also decreases the magnitude of the pressure distribution .the competition between symmetry breaking , which dominates for small , and decreasing pressure , dominant at large , produces an optimal combination of geometric and material parameters , , that maximize the dimensionless lift , . 
whether or not the normal force has a maximum depends on the control parameter : the normal force increases monotonically with the velocity , but has a maximum as a function of the effective elastic modulus of the system .this suggests that a judicious choice of material may aid in the generation of repulsive elastohydrodynamic forces thereby reducing friction and wear .we consider a cylinder of radius moving at a velocity and rotating with angular frequency and immersed completely in a fluid of viscosity as shown in figure [ schematic1 ] .the surfaces are separated by a distance , the gap profile , where the is parallel to the solid surface and the is perpendicular to it .we assume that the velocity and pressure field are two - dimensional and in the region of contact we use a parabolic approximation , valid for all non - conforming contacts , for the shape of the cylindrical surface in the absence of any elastic deformation .then the total gap between the cylinder and the solid is given by with begin the additional elastic deformation and the characteristic gap thickness in the absence of solid deformation .the size of the contact zone , characterizes the horizontal size over which the lubrication forces are important .consistent with the parabolic approximation in ( [ 1 ] ) .if , the gap reynolds number re re , the nominal reynolds number. then , if re we can neglect the inertial terms and use the lubrication approximation ( reynolds 1886 ) to describe the hydrodynamics . for a 2-dimensional velocity field and a pressure field the fluid stress tensor is stress balance in the fluid , , yields mass conservation implies the associated boundary conditions are where we have chosen to work in a reference frame translating with the cylinder .we make the variables dimensionless with the following definitions here , is the characteristic scale of the deflection , where is the effective elastic modulus of the medium and are the length scales of the system . the pressure scaling follows from ( [ s1 ] ) and the fact that , after dropping the primes the dimensionless versions of equations ( [ s1])-([sbc ] ) are here characterizes the ratio of rolling to sliding , the softness parameter characterizes the scale of the elastic deformation relative to the gap thickness , which is related to the compliance of the elastic material .the dimensionless version of the fluid stress tensor ( [ fst1 ] ) is where . solving ( [ ssystem ] ) gives the reynolds equation ( batchelor 1967 ) subject to note that is scaled away . here, is the gap profile given by ( [ 1 ] ) in dimensionless terms to close the system we need to determine , the elastic response to the hydrodynamic forces .this depends on the detailed geometry and constitutive behavior of the cylindrical contact . in the following sections we explore various configurations that allow us to explicitly calculate , thus allowing us to calculate the normal force on the cylinder and determine the elastohydrodynamic tangential - normal coupling .we note that when , the contact is symmetric ( [ h(x ) ] ) so that the form of ( [ req ] ) implies that and .in our first case , we consider a thin elastic layer of thickness coating the cylinder or the rigid wall or both , all of which are mathematically equivalent ( figure [ cases ] ) .we first turn our attention to determining the surface deflection of the layer for an arbitrary applied traction . 
throughout the analysiswe assume that the surface deflection so that a linear elastic theory suffices to describe the material response .the stress tensor , , for a linearly elastic isotropic material with lam coefficients and is where is the displacement field , and is the identity tensor .stress balance in the solid implies we make the equations dimensionless using we note that the length scale in the is the depth of the layer ; the length scale in the is the length scale of the hydrodynamic contact zone , ; the displacements and have been scaled with the characteristic gap thickness ; and the stress has been scaled using the hydrodynamic pressure scale following ( [ fscale ] ) .we take the thickness of the solid layer to be small compared to the length scale of the contact zone with , and restrict our attention to compressible elastic materials , where . then , after dropping primes , the dimensionless 2-dimensional form of the stress tensor ( [ stresst ] ) is here is the softness parameter , a dimensionless number governing the relative size of the surface deflection to the undeformed gap thickness .stress balance ( [ fbalance ] ) yields so that to the leading order balance is the normal unit vector to the soft interface is , which in dimensionless form is where .the balance of normal traction on the solid - fluid interface yields that at the interface between the soft film and the rigid substrate , the no slip condition yields solving ( [ vert ] ) , ( [ uzsurf ] ) and ( [ vertbc ] ) gives us the displacement of the solid - fluid interface this linear relationship between the normal displacement and fluid pressure is known as the winkler or mattress elastic foundation model ( johnson 1985 ) . in light of ( [ locald ] ) we may write the gap profile ( [ h(x ) ] ) as equations ( [ req ] ) , ( [ reqbc ] ) and ( [ limit ] ) form a closed system for the elastohydrodynamic response of a thin elastic layer coating a rigid cylinder . we note that lighthill ( 1968 ) found a similar set of equations while studying the flow of a red blood cell through a capillary . however , his model s axisymmetry proscribed the existence of a force normal to the flow .when we can employ a perturbation analysis to find the lift force experienced by the cylinder .we use an expansion of the form to find ^ 3\partial_xp^{(0 ) } \}=0,\\ \label{eta11 } \eta^1 : ~\partial_x \ { 6 h^{(1 ) } + 3 [ h^{(0)}]^2 h^{(1 ) } \partial_xp^{(0 ) } + [ h^{(0)}]^3\partial_xp^{(1 ) } \}=0,\end{aligned}\ ] ] where subject to the boundary conditions solving ( [ eta00])-([pexpbc ] ) yields so that in dimensional form the lift force per unit length is as reported in skotheim & mahadevan ( 2004b ) .the same scaling was found by sekimoto & leibler ( 1993 ) , but with a different prefactor owing to a typographical error .when , the system ( [ req ] ) , ( [ reqbc ] ) and ( [ limit ] ) is solved numerically using a continuation method ( doedel _ et al ._ 2004 ) with as the continuation parameter . in figure [ bbb ]we show the pressure distribution , and gap as a function of . for , . as increases , increases and the asymmetric gap profile begins to resemble that of a tilted slider bearing , which is well known to generate lift forces .however , an increase in the gap thickness also decreases the peak pressure ( see figure [ bbb]a ) .the competition between symmetry breaking , dominant for , and decreasing pressure , dominant for , produces a maximum scaled lift force at . 
in dimensional terms, this implies that the lift as a function of the effective modulus will have a maximum , however , the lift as a function of the relative motion between the two surfaces increases monotonically ( see figure [ elastic]b ) .in fact , ( [ dimlift ] ) shows that the dimensional lift increases as for .in this section we consider the case where the parabolic approximation in the vicinity of the contact breaks down .since rotation changes the nature of the contact region for such interfaces , we consider only a purely sliding motion with .we assume that the gap thickness is described by where characterizes the geometric nature of the contact and the contact length is .we note that we always focus on symmetric contacts .we make the variables dimensionless using the following scalings in [ thine ] we have seen that for a thin compressible soft layer the pressure and and surface deflection can be linearly related by ( [ locald ] ) so that the scale of the deflection is . to find the scale of the deflection for the degenerate contact described by ( [ hdeg ] ) , we replace with the appropriate pressure scale so that and the size of the deformation relative to the gap size is governed by the dimensionless group then the dimensionless version of the gap thickness profile ( [ hdeg ] ) is as in [ thine ] ,we employ a perturbation expansion for the pressure field in the parameter ( ) to solve ( [ req ] ) , ( [ reqbc ] ) and ( [ ndegh ] ) and find the pressure field and the lift for small .this yields and gives the following dimensionless result where we have not shown the pressure distribution due to its unwieldy size .in dimensional terms , the normal force reads as the results for and are shown in figures [ hhh ] and [ ccc ]. one noteworthy feature of a degenerate contact is that the torque experienced by the slider arises from the fluid pressure rather that the shear force because the normal to the surface no longer passes through the center of the object as it would for a cylinder or sphere .the ratio of the torque due to shear to the torque due to the pressure is hence , the dominant contribution to the torque is due to the pressure and for , so that contrast our result for a thin layer with that for a soft slider , we consider the case where the entire cylindrical slider of length and radius is made of a soft material with lam coefficients and .equivalently , we could have a rigid slider moving above a soft semi - infinite half space . since the deformation is no longer locally determined by the pressure we use a green s function approach to determine the response to the hydrodynamic pressure .following davis , serayssol & hinch ( 1986 ) , we use the green s function for a point force on a half space since the scale of the contact zone , , the cylinder radius . following landau & lifshitz ( 1970 ) we writethe deformation at the surface due to a pressure field as with as defined in figure [ schematic ] . neglecting end effects so the pressure is a function of only , we integrate ( [ gfcn ] ) over to get ' \nonumber \\ \label{dh2 } = & \frac{\lambda + 2g}{4\pi g(\lambda+g ) } \int_{-\infty}^\infty \log\left[\frac{4(l^2-y^2)}{(x - x')^2}\right ] p(x ' ) dx ' , \end{aligned}\ ] ] to make the equations dimensionless we employ the following scalings so that the dimensionless gap thickness ( [ h(x ) ] ) now reads \ ] ] where comparing with our softness parameter for a thin section we see that _ i.e. 
_ a thin layer is stiffer than a half space made from the same material by the geometric factor . for small write where as in ( [ p00 ] ) . substituting into ( [ dh ] ) yields to order , ( [ req ] ) and ( [ reqbc ] ) yield the equations for the perturbation pressure , : , \nonumber \\p_1(-\infty)=p_1(\infty ) = 0,\end{aligned}\ ] ] which has the solution hence the dimensionless lift force is in dimensional terms the lift force is comparing this expression with that for the case of a thin elastic layer , equation ( [ dimlift ] ) , we see that confinement acts to reduce deformation and hence reduce the lift in the small deflection , , regime .when we solve ( [ req ] ) , ( [ reqbc ] ) and ( [ dh ] ) for using an iterative procedure .first , we guess an initial gap profile and use a shooting algorithm to calculate the pressure distribution .the new pressure distribution is then used in equation ( [ dh ] ) with ( corresponding to a very long cylinder ) to calculate a new gap profile , .if the calculation is stopped , else we set and iterate .the results are shown in figures [ softh ] and [ softlift ] , and not surprisingly they have the same qualitative features discussed previously , _i.e. _ for , ; shows a maximum at and decreases when .the reasons for this are the same as before , _i.e. _ the competing effects of an increase in the gap thickness and the increased asymmetry of the contact zone .in contrast to compressible layers , an incompressible layer ( _ e.g. _ one made of an elastomer ) can deform only via shear . for thin layers , incompressibility leads to a geometric stiffening that qualitatively changes the nature of the elastohydrodynamic problem ( johnson 1985 ) . to address this problem in the most general case , we use a green s function approach .the constitutive behavior for an incompressible linearly elastic solid is where is the displacement and is the pressure in the solid .mechanical equilibrium in the solid implies , _i.e. _ incompressibility of the solid implies for the green s function associated with a point force , where is a delta function , the boundary conditions are we solve the boundary value problem ( [ inc1])-([inc3 ] ) by using a 2-d fourier transforms defined as then equations ( [ inc1])-([inc3 ] ) in fourier space are subject to the boundary conditions solving ( [ hateq ] ) and ( [ hatbc ] ) for we find since this corresponds to a radially symmetric integral kernel we can take and use the hankel transform , , to find the surface displacement ( gladwell 1980 ) : where and is a bessel function of the first kind . to see the form of the surface displacement , we integrate ( [ dimw ] ) numerically andshow the dimensionless function in figure [ icgf]a .the surface displacement for a pressure distribution is found using the green s function for a point force ( [ dimw ] ) , so that dy^ * \right\ } dx^*,\ ] ] where . in terms of dimensionless variables with the following definitions equation ( [ incomph ] ) may be rewritten ( after dropping primes ) as dy^ * \right\ } dx^*,\ ] ] where , and the dimensionless group is the ratio of the layer thickness to the size of the contact zone . to understand the form of ( [ ich ] ) we consider the response of a line force and show the results graphically in figure [ icgf]b . 
moving on to the case at hand , that of a parabolic contact of a cylinder of length , we write the dimensionless gap thickness as for we can expand the pressure in powers of and carry out a perturbation analysis as in [ thine ] .this yields a linear relation between the dimensionless lift , , and the scale of the deformation : , where is shown in figure [ icf]d .the dimensional lift per unit length is as , we approach the limit of an infinitely thick layer , in which case there is no stiffening due to incompressibility , so that , which is the result for an infinitely thick layer . to study the effects of confinement on a thick layer , _i.e. _ , we approximate ( [ ich ] ) as dq \right\ } dy^ * dx^*,\end{aligned}\ ] ] where we are now integrating over the dimensionless length of the cylinder , since the interactions for deep layers are not limited by confinement effects . evaluating ( [ ih1 ] ) yields dy^ * \right\ } dx^*.\ ] ] integrating ( [ ih2 ] ) with respect to and keeping the leading order terms in yields + 2\log\left[\frac{(x - x^*)^2 + 4\zeta^2}{4(l^{'2}+y^2)}\right ] \right .- \frac{8\zeta^2[(x - x^*)^2 + 12\zeta^2]}{[(x - x^*)^2 + 4\zeta^2]^2 } \right\ } dx^*,\end{aligned}\ ] ] where we have used the leading order pressure ( [ p00 ] ) to evaluate . we integrate ( [ ih3 ] ) with respect to to find }{[x^2+(1 + 2\zeta)^2]^3}\ ] ] as in [ thine ] , equations ( [ eta11 ] ) and ( [ pexpbc ] ) yields the system of equations to be solved for the pressure perturbation : = 0 \nonumber \\\label{ih5 } p^{(1)}(-\infty ) = p^{(1)}(\infty ) = 0.\end{aligned}\ ] ] solving ( [ ih4])-([ih5 ] ) yields } { 2 \zeta^4(1+x^2)^3},\ ] ] which we integrate with respect to to find the dimensionless lift on the other hand , as we approach the limit where the contact length is much larger than the layer thickness leading to geometric stiffening . due to solid incompressibility the layer may only deform via shear and the dominant term of the strain is . since see that .balancing the strain energy with the work done by the pressure yields so that .for a thin incompressible layer the characteristic deflection is reduced by an amount so that . is displayed in figure [ icf]a . for intermediate values of , we computed the results numerically .the nonlinear problem arising for is solved using ( [ req ] ) , ( [ reqbc ] ) , ( [ ich ] ) and ( [ icg ] ) .we first guess an initial gap profile , then a shooting algorithm is employed to calculate the pressure distribution .the new pressure distribution is then used in ( [ ich ] ) and ( [ icg ] ) to calculate a new gap profile , . if then the calculation is stopped , else we iterate with . for results are shown in figure [ icf]b - d .not surprisingly , we find the same qualitative features discussed previously : has a maximum due to the competing effects of an increase in the gap thickness and the increased asymmetry of the contact zone .motivated in part by applications to the mechanics of cartilagenous joints , we now turn to the case of a cylinder moving above a fluid filled gel layer .this entails a different model for the constitutive behavior of the gel accounting for both the deformation of an elastic network and the fluid flowing through it . 
to describe the mechanical properties of a fluid filled gel we use poroelasticity , the continuum description of a material composed of an elastic solid skeleton and an interstitial fluid ( biot ( 1941 ) ; for a review of the literature see cederbaum , li & schulgasser ( 2000 ) or wang ( 2000 ) ) .our choice of poroelasticity to model the gel is motivated by the following scaling argument ( skotheim & mahadevan 2004a ) .let and denote gradients on the system scale and the pore scale respectively ; is the pressure varying on the system scale due to boundary conditions driving the flow , while is the pressure varying on the microscopic scale due to pore geometry .fluid stress balance on the pore scale implies that the sum of the macroscopic pressure gradient driving the flow , , and the microscopic pressure gradient , , is balanced by the viscous resistance of the fluid having viscosity and velocity , , so that the momentum balance in the fluid yields when the pore scale , , and system size , , are well separated , _ i.e. _ , equation ( [ balape ] ) yields the following scaling relations from which we conclude that the dominant contribution to the fluid stress tensor comes from the pressure .the simplest stress - strain law for the composite medium , proposed by biot ( 1941 ) , is found by considering the linear superposition of the dominant components of the fluid and solid stress tensor .if strains are small , the elastic behaviour of the solid skeleton is well characterized by isotropic hookean elasticity . for a poroelastic material composed of a solid skeleton with lam coefficients and drained and a fluid volume fraction , the stress tensor is given by the constitutive equation the equations of equilibrium are where we have neglected inertial effects .mass conservation requires that the rate of dilatation of a solid skeleton having a bulk modulus is balanced by the fluid entering the material element : since the lam coefficients and are for the composite material and take into account the microstructure , while is independent of the microstructure ; for cartilage , while . equations ( [ pece ] ) , ( [ pef ] ) and ( [ pecont ] ) subject to appropriate boundary conditions describe the evolution of displacements and fluid pressure in a poroelastic medium .although there is flux through the gel - fluid interface , the reynolds equation ( [ req ] ) for the fluid pressure will remain valid if the fluid flux through the gel is much less than the flux through the thin gap .a fluid of viscosity flows with a velocity through a porous medium of isotropic permeability according to darcy s law , so that .hence , the total flux through a porous medium of thickness is , which will scale as . comparing this with the flux throughthe thin gap leads to the dimensionless group .if we can neglect the flow through the porous medium . for cartilage , and , and flow through the porous medium can be neglected if .this implies that the reynolds lubrication approximation embodied in ( [ req ] ) remains valid in the gap for situations of biological interest . in response to forcing, a poroelastic material can behave in three different ways depending on the relative magnitude of the time scale of the motion and the poroelastic time scale . 
if the fluid in the gel is always in equilibrium with the surrounding fluid and a purely elastic theory for the deformation of the gel suffices ; if the gel will behave as a material with a memory ; if the fluid has no time to move relative to the matrix and the poroelastic material will again behave as a solid albeit with a higher elastic modulus . in the physiological case of a cartilage layer in a rotational joint the poroelastic time scale for bovine articular cartilageis reported to be seconds by grodzinsky , lipshitz & glimcher ( 1978 ) , and seconds by mow , holmes & lai ( 1984 ) . for time scales on the order of 1 second, the cartilage should behave as a solid , but with an elastic modulus greater then that measured by equilibrium studies .we consider three different cases corresponding to : when the cylinder moves slowly , , the time scale of the motion is much larger than the time scale over which the pressure diffuses ( _ i.e. _ ) so that ( [ 17 ] ) becomes solving ( [ 7.9 ] ) subject to ( [ diff ] ) yields _ i.e. _ at low speeds the fluid pressure in the gel is the same as the fluid pressure outside the gel. equations ( [ pevertbc ] ) and ( [ peu_z ] ) can be solved to yield we see that this limit gives a local relationship between the displacement of the gel surface and the fluid pressure in the gap , exactly as in the case of a purely elastic layer treated in [ thine ] .when the time scale of the motion is much smaller than the time scale over which the pressure diffuses , _ i.e. _ , ( [ 17 ] ) becomes the gel is at equilibrium with the external fluid before the cylinder passes over it , , and equation ( [ fast ] ) yields inserting ( [ pg ] ) into ( [ peu_z ] ) and integrating yields in this limit the fluid has no time to flow through the pores and the only compression is due to bulk compressibility of the composite gel , which now behaves much more rigidly .the effective elastic modulus of the solid layer is now rather than .however , the relationship between the pressure and displacement remains local as in [ thine ] .we note that if , ( [ pecont ] ) has no forcing term and .poroelastic theory does not take into account shear deformations since these involve no local change in fluid volume fraction in the gel . in this caseall the load will be borne by the elastic skeleton .however , shear deformation in a thin layer will involve geometric stiffening due to incompressibility so that the effective modulus will be ( skotheim & mahadevan 2004b ) .hence , if the deformation should be treated as an incompressible layer as in [ 4 ] . if , the layer should be treated as in [ thine ] with an effective modulus .when rewriting ( [ 17 ] ) for the difference between the fluid pressure inside and outside the gel , , yields with the boundary conditions we expand in terms of the solution of the homogeneous part of ( [ above])-([above2 ] ) : inserting the expansion into ( [ above ] ) we find \sin \pi(n+\frac{1}{2})z = ( \delta-1)\partial_tp.\ ] ] multiplying ( [ 7.18 ] ) with and integrating over the thickness yields solving ( [ 7.19 ] ) for yields substituting ( [ 7.20 ] ) into ( [ homog ] ) yields the fluid pressure in the gel finally , ( [ pevertbc ] ) , ( [ peu_z ] ) and ( [ pgel ] ) yield .\ ] ] since the higher order diffusive modes ( ) decay more rapidly than the leading order diffusive mode ( ) , a good approximation to ( [ 25 ] ) is .\ ] ] this approximation is similar to that used in skotheim & mahadevan ( 2004a ) . 
to simplify ( [ 7.23 ] ) for the case of interest we define so that in the reference frame of the steadily moving cylinder so that integrating the above yields consequently , the distance between the gel and cylinder is found from ( [ h(x ) ] ) , ( [ 7.23 ] ) , ( [ pe102 ] ) and ( [ pe105 ] ) to be \ ] ] where where we are considering the case where the bulk modulus of the skeletal material is much larger than the modulus of the elastic matrix , so that ( [ pegamma ] ) implies .this leaves us a system of equations ( [ req ] ) , ( [ reqbc ] ) and ( [ h1 ] ) for the pressure with 2 parameters : characterizes the deformation ( softness ) ; and is the ratio of translational to diffusive timescales . the two limits and of ( [ h1 ] ) can both be treated using asymptotic methods . for , ( [ h1 ] ) yields and we recover the limit of a thin compressible elastic layer treated in [ thine ] with . for ,( [ h1 ] ) yields which is the result for a thin compressible layer with .when we expand the pressure field as in ( [ expansion ] ) writing , where as in ( [ p00 ] ) . inserting this expression into ( [ h1 ] ) yields \partial_{x'}p_0\,dx ' \right\ } + o(\gamma \eta)\ ] ] which can be integrated to give .\ ] ] we see that increasing increases the gap thickness and lowers the pressure without increasing the asymmetry , thus decreasing the lift . in the small deflection limit , ,the dimensionless lift force is and the lift force in dimensional terms is where is a function of and shown graphically in figure [ maxporo]a .when , we use a numerical method to solve ( [ req ] ) , ( [ reqbc ] ) and ( [ h1 ] ) on a finite domain using the continuation software auto ( doedel _ et al ._ 2004 ) with and as the continuation parameters .the initial solution from which the continuation begins is with , corresponding to , and .the form of the lift force as a function of , for various can be almost perfectly collapsed onto a single curve after appropriately scaling the axes using the position of the maximum ; _ i.e. _ }{l_{max}(\gamma)} ] where and are shown in figure [ asymptotic]c , d .so far , with the exception of [ dsec ] , we have dealt only with non - conforming contacts . in this sectionwe consider an elastohydrodynamic journal bearing : a geometry consisting of a cylinder rotating within a larger cylinder that is coated with a soft solid .the journal bearing is a conforming contact and is a better representation of bio - lubrication in mammalian joints in which synovial fluid lubricates bone coated with thin soft cartilage layers .previous analyses of the elastohydrodynamic journal bearing have focused on situations where fluid cavitation needs to be taken into account ( odonoghue , brighton & hooke 1967 ; conway & lee 1975 ) . as before , we restrict our attention the case where the surface deforms appreciably before the cavitation threshold is reached so that the gap remains fully flooded .a schematic diagram is shown in figure [ jbschematic ] .we take the center of the inner cylinder , , to be the origin ; the center of the outer cylinder , , is located at , .the inner cylinder of radius rotates with angular velocity ; the stationary outer cylinder of radius is coated with a soft solid of thickness and lam coefficients and . here, is the average distance between the inner cylinder and the soft solid . 
following leal ( 1992 ), we use cartesian coordinates to describe the eccentric geometry and applied forces , but use polar coordinates to describe the fluid motion .when , the lubrication approximation reduces the stokes equations in a cylindrical geometry for a fluid of viscosity , pressure , and velocity field , to ( leal 1992 ) subject to the boundary conditions since , the continuity equation simplifies to ( leal 1992 ) and the gap thickness profile simplifies to where is the elastic interface displacement due to the fluid forces . as in [ thine ], so that ( [ jb3 ] ) yields using the following primed dimensionless variables , we write ( [ jb1])-([jb2 ] ) , ( [ jbndim2 ] ) , after dropping the primes , as subject to where , is the softness parameter . as in [ ltheory ] , we use ( [ prere ] ) to derive the system of equations for the fluid pressure in addition , fluid incompressibility implies that the average deflection must vanish : the forces on the inner cylinder are here , is the vertical force and is the horizontal force .we begin with the classical solution for a rigid journal bearing ( leal 1992 ) : .the following brief symmetry argument shows that when .since stokes equations for viscous flow are reversible in time , the transformation implies that .however , due to symmetry we expect the solution to be a reflection about the -axis and we conclude that . as in previous sectionswe investigate how elastohydrodynamics alters this picture by specifying the eccentricity and calculating the forces as a function of the softness parameter .solutions to ( [ jbre])-([jbbc ] ) are computed numerically using the continuation software auto 2000 ( doedel _ et al ._ 2004 ) with as the continuation parameter and the solution for as the initial guess . just as for different geometries analyzed in previous sections , the deflection of the surface of the soft solid breaks the symmetry and leads to the generation of a horizontal force in the -direction : . for small deformations ( ) the dimensionless horizontal force , where the coefficient is shown in figure [ jblc ] . in dimensional terms , the horizontal force per unit length for small deformations is for nearly concentric cylinders , , . for large eccentricities , ,the lubrication pressure diverges and .for we show , and in figure [ jblift ] .the analysis of the 3-dimensional problem of a sphere moving close to a soft substrate is considerably more involved . stone _( _ in preparation _ )are currently engaged in using perturbation methods to calculate the elastohydrodynamic lift for the case of a sphere translating above a thin elastic layer . here , we restrict ourselves to the use scaling arguments to generalize the quantitative results of previous sections to spherical sliders .the results are tabulated in table 2 . in the fluid layerseparating the solids , balancing the pressure gradient with the viscous stresses yields where is the size of the contact zone . substituting with find that the lubrication pressure is the reversibility of stokes equations and the symmetry of paraboloidal contacts implies that the lift force when . 
for , we expand the pressure as . since will not generate vertical forces , the lift on a spherical slider , , will scale as to compute , we need a prescription for the softness and the contact radius for each configuration . ( a ) for a thin compressible elastic layer ( [ thine ] ) , we substitute and into ( [ slift ] ) to find ( b ) for a thin elastic layer with a degenerate axisymmetric conforming contact ( [ dsec ] ) , and so that ( [ slift ] ) yields ( c ) for a soft spherical slider ( or thick layer ; [ 6 ] ) , the deflection is given by ( [ gfcn ] ) so that , so that the size of the contact zone , so that ( [ slift ] ) and ( [ dh ] ) yield ( d ) for an incompressible layer ( [ 4 ] ) we have two cases depending on the thickness of the substrate relative to the contact zone , characterized by the parameter . for , and so that ( [ slift ] ) yields for the case the proximity of the undeformed substrate substantially stiffens the layer . in sharp contrast to a compressible layer , a thin incompressible layer will deform via shear with an effective shear strain . an incompressible solid must satisfy the continuity equation , which implies that . consequently , . balancing the elastic energy with the work done by the pressure yields since , ( [ pscale ] ) , ( [ slift ] ) and ( [ iceta ] ) yield ( e ) for a thin poroelastic layer ( [ 5 ] ) , and so that ( [ slift ] ) yields where is the ratio of the poroelastic time scale to the time scale of the motion . ( f ) for a spherical shell slider ( [ 7 ] ) there are two cases : the thickness of the shell , , is smaller than the gap thickness , _ i.e. _ , and all the elastic energy is stored in stretching ; or , and bending and stretching energies are of the same order of magnitude ( landau & lifshitz 1970 ) . for a localized force the deformation will be restricted to a region of area . the stretching energy per unit area scales as , while the bending energy scales as . the total elastic energy , , of the deformation is then given by which has a minimum at . comparing with , we see that the hydrodynamic pressure is localized if . for a localized force , while for a non - localized force . the elastic energy of a localized deformation , , and a non - localized deformation , , are given by the moment exerted by the hydrodynamic pressure on the spherical shell slider is which is independent of . the work done by the moment ( [ 3dmom ] ) , which acts through an angle , is balancing the work done by the fluid ( [ fwork ] ) with the stored elastic energy ( [ ele ] ) for both nonlocal and local deformations yields so that ( [ slift ] ) and ( [ hs1 ] ) yield the lift force on the sphere for the two cases ( g ) for the ball and socket configuration , roughly the 3-dimensional analog of the journal bearing ( [ 8 ] ) , and , so that the horizontal force is given by ( [ slift ] ) : the various combinations of geometry and material properties considered in this paper yield some simple results of great generality : the elastohydrodynamic interaction between soft surfaces immersed in a viscous fluid leads generically to a coupling between tangential and normal forces regardless of specific material properties or geometrical configurations , _ i.e. _ a lift force that arises because the asymmetric fluid pressure deforms the soft solid , which breaks the symmetry of the gap profile . for small surface deformations , , the dimensionless normal force is linear in . increasing ( _ i.e.
_ softening the material ) increases the asymmetry but decreases the magnitude of the pressure . the competition between symmetry breaking , which dominates for small , and decreasing pressure , which dominates for large , produces a maximum in the lift force as a function of , the material 's softness . additional complications such as nonlinearities and anisotropy in both the fluid and solid , streaming potentials and current - generated stresses ( frank & grodzinsky 1987a , b ) would clearly change some of our conclusions . however , the robust nature of the coupling between the tangential and normal forces illustrated in this paper should persist , and suggests both experiments and design principles for soft lubrication . the authors thank mederic argentina for assistance using the auto 2000 software package , and both howard stone and tim pedley for their thoughtful comments . we acknowledge support via the norwegian research council ( js ) , the us office of naval research young investigator program ( lm ) and the us national institutes of health ( lm ) . m. abkarian , c. lartigue & a. viallat , `` tank treading and unbinding of deformable vesicles in shear flow : determination of the lift force , '' phys . rev . lett . * 88 * , 068103 ( 2002 ) . barry & m. holmes , `` asymptotic behaviors of thin poroelastic layers , '' ima journal of applied mathematics * 66 * , 175 - 194 ( 2001 ) . g.k . batchelor , _ an introduction to fluid dynamics _ ( cambridge university press , cambridge , uk , 1967 ) . j. beaucourt , t. biben & c. misbah , `` optimal lift force on vesicles near a compressible substrate , '' europhys . lett . * 67 * , 676 - 682 ( 2004 ) . biot , `` general theory of three - dimensional consolidation , '' journal of applied physics * 12 * , 155 - 165 ( 1941 ) . g. cederbaum , l.p . li & k. schulgasser , _ poroelastic structures _ ( elsevier , oxford , 2000 ) . h.d . conway & h.c . lee , `` the analysis of the lubrication of a flexible journal bearing , '' trans . asme j. lub . tech . * 97 * , 599 - 604 ( 1975 ) . damiano , b.r . duling , k. ley & t.c . skalak , `` axisymmetric pressure - driven flow of rigid pellets through a cylindrical tube lined with a deformable porous wall layer , '' j. fluid mech . * 314 * , 163 - 189 ( 1996 ) . r. h. davis , j - m serayssol & e. j. hinch , `` the elastohydrodynamic collision of two spheres , '' j. fluid mech . * 163 * , 479 - 497 ( 1986 ) . doedel , r.c . paffenroth , a.r . champneys , t.f . fairfrieve , y.a . kuznetsov , b.e . oldeman , b. sandstede & x. wang , _ auto 2000 : continuation and bifurcation software for ordinary differential equations _ ( 2004 ) . d. dowson & g.r . higginson , `` a numerical solution to the elasto - hydrodynamic problem , '' j. mech . eng . sci . , 7 - 15 ( 1959 ) . j. feng & s. weinbaum , `` lubrication theory in highly compressible porous media : the mechanics of skiing , from red cells to humans , '' j. fluid mech . * 422 * , 281 - 317 ( 2000 ) . fitz - gerald , `` mechanics of red - cell motion through very narrow capillaries , '' proc . r . soc . lond . b * 174 * , 193 - 227 ( 1969 ) . frank & a.j . grodzinsky , `` cartilage electromechanics . i. electrokinetic transduction and the effects of electrolyte ph and ionic strength , '' j. biomech . * 20 * , 615 - 27 ( 1987a ) . e.h . frank & a.j . grodzinsky , `` cartilage electromechanics . ii . a continuum model of cartilage electrokinetics and correlation with experiments , '' j. biomech .
* 20 * , 629 - 39 ( 1987b ) g.m.l .gladwell , _ contact problems in the classical theory of elasticity _ ( sijthoff and noordhoff , alphen aan den rijn , the netherlands , 1980 ) a.j .grodzinsky , h. lipshitz & m.j .glimcher , electromechanical properties of articular cartilage during compression and stress relaxation , " nature * 275 * , 448 - 450 ( 1978 ) .hamrock , _ fundamentals of fluid film lubrication _ ( mcgraw - hill , new york , 1994 ) .hamrock & d. dowson , _ ball bearing lubrication : the elastohydrodynamics of elliptical contacts _( john wiley & sons , new york , 1981 ) .j. happel & h. brenner , _ low reynolds number hydrodynamics _( kluwer boston inc . ,hingham ma , 1983 ) . d.j .jeffrey & y. onishi , the slow motion of a cylinder next to a plane wall , " q. j. appl . math . * 34 * pt . 2, 129 - 137 ( 1981 ) .johnson , _ contact mechanics _( cambridge university press , cambridge uk , 1985 ) .j. klein , d. perahia & s. warburg , forces between polymer - bearing surfaces undergoing shear , " nature * 352 * 143 , 1991 . l.d .landau & e.m .lifshitz , _ theory of elasticity _( pergamon press , oxford , uk , 1970 ) .leal , particle motions in a viscous fluid , " annu .fluid mech .* 12 * , 435 - 476 ( 1980 ) .leal , _ laminar flow and convective transport processes : scaling principles and asymptotic analysis _ ( butterworth - heineman , newton , ma , 1992 ) .lighthill , pressure - forcing of tightly fitting pellets along fluid - filled elastic tubes , " j. fluid mech . * 34 * , 113 - 143 ( 1968 ) .love , _ a treatise on the mathematical theory of elasticity _( fourth ed .dover 1944 ) .a. martin , j. clain , a. buguin & f. brochard - wyart , wetting transitions at soft , sliding interfaces , " phys .e * 65 * 031605 ( 2002 ) .mow & x.e .guo , mechano - electrochemical properties of articular cartilage : their inhomogeneities and anisotropies , " annu . rev .eng . * 4 * , 175 - 209 ( 2002 ) .mow , m.h .holmes & w.m .lai , fluid transport and mechanical properties of articular cartilage : a review , " j. biomechanics * 17 * , 377 - 394 ( 1984 ) .j. odonoghue , d.k .brighton & c.j.k .hooke , the effect of elastic distortions on journal bearing performance , " trans .asme j. lub. tech . * 89 * , 409 - 417 ( 1967 ) .o. reynolds , on the theory of lubrication and its application to mr .beauchamp tower s experiments , including an experimental determination of the viscosity of olive oil , " philos .london , ser .a * 177 * 157 - 234 ( 1886 ) .secomb , r. hsu & a.r .pries , a model for red blood cell motion in glycocalyx - lined capillaries , " am. j. physiol . * 274 * , h1016 - 1022 ( 1998 ) .secomb , r. skalak , n. zkaya & j.f .gross , flow of axisymmetric red blood cells in narrow capillaries , " j. fluid mech .* 163 * , 405 - 423 ( 1986 ) .k. sekimoto & l. leibler , a mechanism for shear thickening of polymer - bearing surfaces : elasto - hydrodynamic coupling , " europhys . lett .* 23 * 113 - 117 ( 1993 ) .selvadurai ( ed . ) , _ mechanics of poroelastic media _ , solid mechanics and its application series , vol.35 , wolters kluwer academic publishers ( 1996 ) .skotheim & l. mahadevan , dynamics of poroelastic filaments , " proc .london ser .a * 460 * , 1995 - 2020 ( 2004a ) . j.m .skotheim & l. mahadevan , soft lubrication , " phys .lett . * 92 * , 245509 ( 2004b ) .h.a . stone __ , _ in preparation_. f. takemura , s. takagi , j. magnaudet & y. matsumoto , drag and lift forces on a bubble rising near a vertical wall in a viscous fluid , " j. fluid mech . 
* 461 * , 277 - 300 ( 2002 ) . tanner , `` an alternative mechanism for the lubrication of synovial joints , '' phys . med . biol . * 11 * , 119 - 127 ( 1966 ) . h. tozeren & r. skalak , `` the steady flow of closely fitting incompressible elastic spheres in a tube , '' j. fluid mech . * 87 * , 1 - 16 ( 1978 ) . wang , _ theory of linear poroelasticity with applications to geomechanics and hydrogeology _ ( princeton university press , 2000 ) . w. wang & k.h . parker , `` the effect of deformable porous surface layers on the motion of a sphere in a narrow cylindrical tube , '' j. fluid mech . * 283 * , 287 - 305 ( 1995 ) . s. weinbaum , x. zhang , y. han , h. vink & s. cowin , `` mechanotransduction and flow across the endothelial glycocalyx , '' pnas * 100 * , 7988 - 7995 ( 2003 ) . wolfram research , inc . , _ mathematica _ , version 5.0 , champaign , il ( 2003 ) .

figure captions :

above a thin gel layer of thickness that covers a rigid solid substrate . the asymmetric pressure distribution pushes down on the gel when the fluid pressure in the gap is positive while pulling up the gel when the pressure is negative . the asymmetric traction breaks the symmetry of the gap thickness profile , , thus giving rise to a repulsive force of hydrodynamic origin . the pressure profile and gap thickness shown here are calculated for a thin elastic layer ( [ thine ] ) for a dimensionless deflection .

and are the lamé coefficients of the linear elastic material , where corresponds to an incompressible material , is the depth of the elastic layer coating a rigid surface , is the contact length , is the time scale over which stress relaxes in a poroelastic medium ( a material composed of an elastic solid skeleton and an interstitial viscous fluid ) , and is the time scale of the motion . [ thine ] treats normal - tangential coupling of non - conforming contacts coated with a thin compressible elastic layer . [ dsec ] treats normal - tangential coupling of higher order degenerate contacts coated with a thin compressible elastic layer . [ 6 ] treats normal - tangential coupling of non - conforming contacts coated with a thick compressible elastic layer . [ 4 ] treats normal - tangential coupling of non - conforming contacts coated with an incompressible elastic layer . [ 5 ] treats normal - tangential coupling of non - conforming contacts coated with a thin compressible poroelastic layer . [ 7 ] treats normal - tangential coupling of non - conforming contacts between a rigid solid and a cylindrical shell . [ 8 ] treats elastohydrodynamic effects due to coating a journal bearing with a thin compressible elastic layer . [ 10 ] treats elastohydrodynamic effects for 3-dimensional flows using scaling analysis .

as a function of . ( b ) gap thickness profile , as a function of . the initially parabolic gap thickness profile is broken and the maximum value of the pressure decreases as increases .

, plotted against , the softness parameter . has a maximum at , which is the result of a competition between symmetry breaking ( dominant for ) and decreasing pressure ( dominant for ) due to increasing the gap thickness . for small , asymptotic analysis yields , which matches the numerical solution . ( b ) the dimensional lift force , , is quadratic in the velocity for small velocities while being roughly linear for large velocities .
is the velocity and rate of rotation at which , and is the corresponding lift .

with ( a ) , ( c ) , and ( b ) , ( d ) .

where . the curves are similar ; however , they can not be rescaled to a universal curve .

, and gives rise to a repulsive force of elastohydrodynamic origin . the dashed line in the lower right hand corner denotes the undeformed location of the gel cylinder .

and pressure as a function of for a soft cylindrical gel slider . we note that while the pressure distribution is localized to the region near the point of closest contact , the change in gap thickness is spread out due to the logarithmic nature of the green 's function of a line contact .

, the curves can be almost perfectly collapsed onto a single curve .

, thickness and lamé coefficients and , moving at a velocity while completely immersed in a fluid of viscosity . the edges of the half - cylinder are clamped at a distance from the surface of an undeformed solid . denotes the angle between the tangent to the surface and the . are the laboratory frame coordinates of the half - cylinder as a function of the arc - length coordinate .

and lamé coefficients and , subject to a traction applied by a viscous fluid . and are coordinates in the reference frame of the rigid solid , while is the arc - length coordinate in the shell . is the bending moment .

, as a function of the softness , . as increases , the asymmetry of the pressure distribution increases and the maximum pressure decreases . ( c),(d ) shape of the sheet , where and are the coordinates of the center line in the laboratory frame . we see that the point of nearest contact is pulled back and the symmetry of the profile is broken by the forces exerted by the fluid on the cylindrical shell .

. for . , where is shown in ( b ) . the form of can be almost perfectly collapsed onto a single curve after appropriately scaling the , axes , _ i.e. _ plotting the rescaled lift , where and are shown in ( c ) and ( d ) respectively .

has been coated by a soft solid of thickness having lamé coefficients and . the larger cylinder 's axis is located a distance in the and a distance in the from the axis of the inner cylinder of radius . the average gap thickness is .

the dimensionless horizontal force , , where the coefficient is plotted above .

, acting on the inner cylinder as a function of , a measure of the surface deflection ; ( b ) the corresponding gap thickness profiles ; ( c ) the corresponding pressure profiles . , .

table caption : summary of results for small surface deflections . the upper row corresponds to , while the lower row corresponds to , and the undeformed dimensionless gap thickness profile is .
|
we study the lubrication of fluid - immersed soft interfaces and show that elastic deformation couples tangential and normal forces and thus generates lift . we consider materials that deform easily , due to either geometry ( _ e.g. _ a shell ) or constitutive properties ( _ e.g. _ a gel or a rubber ) , so that the effects of pressure and temperature on the fluid properties may be neglected . four different system geometries are considered : a rigid cylinder moving parallel to a soft layer coating a rigid substrate ; a soft cylinder moving parallel to a rigid substrate ; a cylindrical shell moving parallel to a rigid substrate ; and finally a cylindrical conforming journal bearing coated with a thin soft layer . in addition , for the particular case of a soft layer coating a rigid substrate we consider both elastic and poroelastic material responses . for all these cases we find the same generic behavior : there is an optimal combination of geometric and material parameters that maximizes the dimensionless normal force as a function of the softness parameter which characterizes the fluid - induced deformation of the interface . the corresponding cases for a spherical slider are treated using scaling concepts .
|
wavelength multi / demultiplexers are central components in optical telecommunication networks , subject to demanding requirements , both in terms of performance and manufacturing . photonic integration is usually the basis for large count wdm multiplexers . the cost of an integrated circuit is fundamentally related to its footprint . in terms of footprint , reflective multiplexers such as the echelle diffraction grating ( edg ) achieve considerable size reduction . one issue with edgs is maximizing the reflection at the grating , in order to minimize the overall insertion losses , either by depositing metal layers at the edge of the grating or by adding other structures , such as bragg reflectors , to the grating . while the deposition of metals supplies broadband reflectors , it requires resorting to additional fabrication steps . conversely , bragg reflectors can be manufactured in the same steps as the edg , but it is well known that the reflection bandwidth is inversely proportional to their strength . similarly , awg layouts with reflective structures midway in the array , i.e. reflective awgs ( r - awg ) , are possible as well . the reflectors can be implemented in ways analogous to the ones for the edgs , as reflective coatings on a facet of the chip , photonic crystals , external reflectors and bragg reflectors . a common issue of all the described approaches for the reflector is that broadband full reflectivity requires additional fabrication steps , and therefore increases the final cost of the multiplexer . a configuration for an r - awg , where the well known sagnac loop reflectors ( slr ) are used as reflective elements at the end of the arrayed waveguides , was demonstrated in silica technology . a slr is composed of an optical coupler with two output waveguides , that are connected to each other forming a loop . these reflectors are broadband , can supply total reflection , and can be fabricated in the same lithographic process as the rest of the awg . moreover , the reflection of a slr depends on the coupling constant of the coupler . hence , it can be different for each of the waveguides in the array . the modification of the field pattern in the arrayed waveguides of an awg allows for spectral response shaping , as for example box - like transfer functions and multi - channel coherent operations , among others . in this paper we report on the experimental demonstration of a silicon - on - insulator r - awg based on slrs with gaussian pass - band response . this opens the door to further research on non gaussian response r - awgs , using mmis with coupling ratios other than 50:50 as we propose in . following the model and methodology from , regular and reflective awgs were designed having as target polarization te , on silicon - on - insulator ( soi ) substrates consisting of a 3 µm thick buried oxide layer and a 220 nm thick si layer , with no cladding . the effective indexes , calculated using commercial software , are 2.67 in the arrayed waveguides ( ) - waveguide width of 0.8 µm to minimize phase errors , see - and 2.83 in the slab coupler ( ) . the r - awg parameters are the following : the center wavelength is 1550 nm , using 7 channels with a spacing of 1.6 nm and a free spectral range ( fsr ) of 22.4 nm . the calculated focal length is 189.32 µm , the incremental length between aws is 36.03 µm and the number of aws is 49 .
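as a sanity check on these numbers , the sketch below is our own illustration , not the authors' design code ; it uses two textbook relations , and the group index it reports is a value implied by the quoted fsr rather than a figure stated in the text . ( 1 ) a lossless sagnac loop reflector built on a coupler with power coupling ratio kappa has power reflectance r = 4 kappa ( 1 - kappa ) and transmittance ( 1 - 2 kappa )^2 , so a 50:50 split gives total reflection , while other ratios give the partial reflectors proposed for non gaussian r - awgs . ( 2 ) the grating order follows from m ~ n_eff dl / lambda0 and the fsr from fsr ~ lambda0^2 / ( n_g dl ) :

```python
import numpy as np

def sagnac_reflectance(kappa):
    # lossless sagnac loop mirror: R = 4*kappa*(1 - kappa)
    return 4.0 * kappa * (1.0 - kappa)

for kappa in [0.5, 0.4, 0.3]:
    print(f"kappa = {kappa:.2f}: R = {sagnac_reflectance(kappa):.2f}")

# consistency of the quoted awg design values
lam0, dL, n_eff, fsr = 1550e-9, 36.03e-6, 2.67, 22.4e-9
m = n_eff * dL / lam0                 # grating order
n_g = lam0**2 / (fsr * dL)            # group index implied by the fsr
print(f"grating order m ~ {m:.1f} (i.e. m = {round(m)})")
print(f"implied group index n_g ~ {n_g:.2f}")
```

the quoted incremental length and fsr are mutually consistent for a grating order of about 62 and a group index near 3 , which is the kind of value expected for the wide , weakly dispersive waveguides chosen here to minimize phase errors .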
the bend radius was set to 5 µm . the r - awg has a footprint of 350x950 µm ( width x height ) , following an orthogonal layout . the fabricated devices are shown in fig . [ fig : awg]-(a ) . each waveguide in the r - awg array is terminated by a slr built with a 1x2 multimode interference coupler , with 50:50 splitting ratio for ideally full reflectivity . the input / output waveguides are equipped with focusing grating couplers ( fgcs ) . the waveguides are fabricated on the soi substrates by electron beam lithography ( ebl ) and dry etching in a two - step process . first , using hydrogen silsesquioxane negative tone resist in combination with a high contrast development process , all device features are defined and fully etched to the buried oxide using an hbr - based icp - rie process . in a second step , a positive tone zep resist mask is carefully aligned to those features , exposed and used to define the shallow etched parts of the devices using a c4f8/sf6 - based dry etching process . for both process steps a multi - pass exposure approach is used to further reduce the sidewall roughness of the photonic devices , hence minimizing scattering losses in those devices . furthermore , special care is taken to guarantee accurate cd of all parts of the device by applying a very accurate proximity effect correction in combination with a well - balanced exposure dose . for spectral characterization , a broadband source was employed in the range of 1525 - 1575 nm , and traces were recorded using an optical spectrum analyzer with 10 pm resolution . all the traces were normalized with respect to a straight waveguide . the results are shown in fig . [ fig : awg ] . panel ( b ) shows the spectra for the seven channels of the awg , from the central input . peak insertion loss is approximately 3 db . note this value is subject to small variations in the performance of the fgcs ( expected 0.4 db ) . the highest side lobe level is 12 db below the pass band maximum . panel ( c ) shows the spectra for the three inner channels from the central input , for the r - awg . the other three channels were not designed to be measured , as they end on the same side of the chip . finally , panel ( d ) shows the comparison of both awg and r - awg . two main differences between the awg and the r - awg are clearly visible in the figure . these can be appreciated in panel ( d ) comparing for instance traces a0 and r0 . first , the shape of the pass band is slightly degraded towards longer wavelengths for the r - awg , where broadening happens at 6 db below maximum . second , the side lobe level is increased by 4 db in the r - awg as compared to the awg . since the only difference between both devices is the presence of slrs , these degradations are likely due to phase / amplitude imperfections in the reflectors . in conclusion , we have reported the experimental demonstration of a soi reflective arrayed waveguide grating , with sagnac mirrors and gaussian spectral response . the performance of this first prototype is comparable to that of a regular twin awg in the same die . differences , likely due to dissimilarities between reflectors , are the subject of ongoing research . the demonstration is the first step towards arbitrary pass band response r - awgs employing slrs with different reflection coefficients in each arm . the authors acknowledge the spanish micinn tec2010 - 21337 , mineco tec2013 - 42332-p , feder upvov 10 - 3e-492 , feder upvov 08 - 3e-008 and fpi bes-2011 - 046100 . d. feng , w. qian , h. liang , c. kung , j.
fong , b.j .luff , and m. asghari , `` fabrication insensitive echelle grating in silicon - on - insulator platform , '' ieee photon .. lett . * 23*(5 ) , 284286 ( 2011 ) .e. ryckeboer , a. gassenq , m. muneeb , n. hattasan , s. pathak , l. cerutti , j.b .rodriguez , e. tourni , w. bogaerts , r. baets , and g. roelkens , `` silicon - on - insulator spectrometers with integrated gainassb photodiodes for wide - band spectroscopy from 1510 to 2300 nm , '' opt .express * 21*(5 ) , 61016108 ( 2013 ) .soole , m.r .amersfoort , h.p .leblanc , a. rajhel , c. caneau , c. youtsey , and i. adesida , `` compact polarization independent inp reflective arrayed waveguide grating filter , '' electron .32*(19 ) , 17691771 ( 1996 ) .d. dai , x. fu , y. shi , and s. he , `` experimental demonstration of an ultracompact si - nanowire - based reflective arrayed - waveguide grating ( de)multiplexer with photonic crystal reflectors , '' opt .lett . * 35*(15 ) , 25942596 ( 2010 ) .y. ikuma , m. yasumoto , d. miyamoto , j. ito , t. jiro ; h. tsuda , `` small helical reflective arrayed - waveguide grating with integrated loop mirrors , '' in proc .european conference on optical communications ( ecoc ) , ( 2007 ) .w. bogaerts , p. dumon , d. van thourhout , d. taillaert , p. jaenen , j. wouters , s. beckx , v. wiaux , and r.g .baets , `` compact wavelength - selective functions in silicon - on - insulator photonic wires , '' ieee j. sel .topics quantum electron . *12*(6 ) , 13941401 ( 2006 ) .lemme , t. mollenhauer , h.d.g .gottlob , w. henschel , j. efavi , c. welch and h. kurz , `` highly selective hbr etch process for fabrication of triple - gate nano - scale soi - mosfets '' , microelec .73 346 - 350 ( 2004 ) j. bolten , t. wahlbrink , n. koo , h. kurz , s. stammberger , u. hofmann and n. nal , `` improved cd control and line edge roughness in e - beam lithography through combining proximity effect correction with gray scale techniques '' , microelec .87 10411043 ( 2010 )
|
in this paper the experimental demonstration of a silicon - on - insulator reflective arrayed waveguide grating ( r - awg ) is reported . the device employs one sagnac loop mirror per arm in the array , built with a 1x2 multimode interference coupler with 50:50 splitting ratio , for total reflection . the spectral responses obtained are compared to those of regular awgs fabricated in the same die .
|
large scale data storage systems that are employed in social networks , video streaming websites and cloud storage are becoming increasingly popular . in these systems the integrity of the stored data and the speed of data access need to be maintained even in the presence of unreliable storage nodes . this issue is typically handled by introducing redundancy in the storage system , through the usage of replication and/or erasure coding . however , the large scale , distributed nature of the systems under consideration introduces another issue . namely , if a given storage node fails , it needs to be regenerated so that the new system continues to have the properties of the original system . it is of course desirable to perform this regeneration in a distributed manner and optimize performance metrics associated with the regeneration process . in recent years , regenerating codes have been the subject of much investigation ( see and its references ) . the principal idea of regenerating codes is to use subpacketization . in particular , one treats a given physical block as consisting of multiple packets ( unlike the mds code that stores exactly one packet in each node ) . coding is now performed across the packets such that the file can be recovered by contacting a certain minimum number of nodes . a distributed storage system ( henceforth abbreviated to dss ) consists of storage nodes , each of which stores packets . in our discussion , we will treat these packets as elements from a finite field . thus , we will equivalently say that each storage node contains symbols ( we use symbols and packets interchangeably throughout our discussion ) . a given user , also referred to as the data collector , needs to have the ability to reconstruct the stored file by contacting any nodes ; this is referred to as the maximum distance separability ( mds ) property of the system . suppose that a given node fails . the dss needs to be repaired by introducing a new node . this node should be able to contact any surviving nodes and download packets from each of them for a total repair bandwidth of packets . thus , the system has a repair degree of , normalized repair bandwidth and total repair bandwidth . the new dss should continue to have the mds property . a large body of prior work ( see for instance for a representative set ) has considered constructions for functional and exact repair at both the minimum bandwidth regenerating ( mbr ) point , where the repair bandwidth is minimum , and the minimum storage regenerating ( msr ) point , where the storage capacity is minimum . however , repair bandwidth is not the only metric for evaluating the repair process . it has been observed that the number of nodes that the new node needs to contact for the purposes of repair is also an important metric that needs to be considered .
for either functional or exact repair ( discussed above ) the repair degree needs to be at least . the notion of local repair , introduced by , considers the design of dss where the repair degree is strictly smaller than . this is reasonable since contacting nodes allows the new node to reconstruct the entire file , assuming that the amount of data downloaded does not matter . much of the existing work in this broad area considers _ coded _ repair , where the surviving nodes and the new node need to compute linear combinations for regeneration . it is well recognized that the read / write bandwidth of machines is comparable to the network bandwidth . thus , this process induces additional undesirable delays in the repair process . the process can also be potentially memory intensive since the packets comprising the file are often very large ( of the order of gb ) . in this work we consider the design of dss that can be repaired in a local manner by simply downloading packets from the surviving nodes , i.e. , dss that have the exact and uncoded repair property . the problem of local repair was first considered in references . tradeoffs between locality and minimum distance , and corresponding code constructions , were proposed in for the case of scalar codes ( ) and extended to the case of vector codes ( ) in . the design of dss that have exact and uncoded repair and operate at the mbr point was first considered in the work of , and further constructions appeared in . codes for these systems are a concatenation of an outer mds code and an inner fractional repetition code that specifies the placement of the encoded symbols on the storage nodes . in this work we consider the design of such codes that allow for local repair in the presence of one or more failures . the work of considers vector codes that allow local recovery in the presence of more than one failure . in their setting each storage node participates in a local code that has minimum distance greater than two . they present minimum distance bounds and corresponding code constructions that meet these bounds . the work of on the design of mbr repair - by - transfer codes is most closely related to our work . however , as we shall see , our constructions in section [ sec : code_cons_bounds ] are quite different from those that appear in and allow for a larger range of code parameters . moreover , as we focus on fractional repetition codes , our minimum distance bound is much tighter than the general case treated in . the dss is specified by parameters , where is the number of storage nodes , is the number of nodes to be contacted for recovering the entire file and is the local repair degree , i.e. , the number of nodes that an incoming node connects to for regenerating a failed node . the repair is performed by simply downloading packets from the existing nodes and is symmetric , i.e. , the same number of packets is downloaded from each surviving node that is contacted . it follows that we download packets from the surviving nodes .
the proposed architecture for the system consists of an outer mds code followed by an inner fractional repetition code . specifically , let the file that needs to be stored consist of symbols . suppose that these symbols are encoded using a -mds code to obtain encoded symbols . the symbols are placed on the storage nodes , such that each symbol appears exactly times in the dss . an example is illustrated in fig . [ dss-(15,4,4,2 ) ] . a node can be repaired locally by contacting the other two nodes in the same column . the system is resilient up to node failures . on the other hand , local fr codes are resilient to only a single node failure . hence and . moreover , contacting any four nodes recovers at least distinct symbols , so that the file size is . therefore , the minimum distance of the code is for the filesize . let [ \theta ] = \{1 , 2 , \dots , \theta \} . an fr code is a collection of subsets of [ \theta ] with the following properties . * the cardinality of each is . * each element of belongs to sets in . * let denote any sized subset of and . each is -recoverable from some -sized subset of . note that we only consider fr codes without repeated storage nodes to avoid trivialities . it can be observed that is a measure of the resilience of the system to node failures , while still allowing exact and uncoded repair . we define the code rate of the system as . for a fr code we define , where is the set of all -sized subsets of , i.e. , is the minimum number of symbols accumulated when the union of storage nodes from is considered . we say that nodes cover at least symbols if . a fr code is in one - to - one correspondence with a 0 - 1 matrix of dimension ( called the incidence matrix ) , where the -th entry of the matrix is 1 if the -th storage node contains the -th symbol . note that in an fr code we do not have any restriction on the repair degree . _ locally recoverable fractional repetition code . _ let be a fr code for a dss , with repetition degree and normalized repair bandwidth . let denote the local repair degree , where and . a node of is said to be locally recoverable if there exists a set such that and is -recoverable from . we call the local structure associated with node . the fr code is locally recoverable if all nodes in belong to at least one local structure . let denote the maximum number of node failures such that each failed node has at least one local structure in the set of surviving nodes . we call the local failure resilience of the dss . we note that it is possible that the local structures themselves are fr codes ; in this case we call them local fr codes . _ minimum distance . _ the minimum distance of a dss , denoted , is defined to be the size of the smallest subset of storage nodes whose failure guarantees that the file is not recoverable from the surviving nodes . it is evident that .
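the definitions above are easy to exercise on a toy example . the sketch below is our own illustration , not taken from the paper : it builds the fr code obtained from the complete graph k4 ( n = 4 nodes , theta = 6 symbols , node size alpha = 3 , repetition degree rho = 2 ) , constructs its incidence matrix , and computes by brute force the minimum number of symbols covered by any k nodes :

```python
import itertools
import numpy as np

# toy fr code from the complete graph k4: label the 6 edges as symbols and
# let each vertex (storage node) hold the labels of its 3 incident edges,
# so every symbol is stored on exactly rho = 2 nodes.
edges = list(itertools.combinations(range(4), 2))          # theta = 6 symbols
nodes = [{i for i, e in enumerate(edges) if v in e} for v in range(4)]

incidence = np.zeros((len(nodes), len(edges)), dtype=int)  # n x theta 0-1 matrix
for v, symbols in enumerate(nodes):
    incidence[v, sorted(symbols)] = 1
print(incidence)

def min_coverage(k):
    """minimum number of distinct symbols held by any k storage nodes."""
    return min(len(set.union(*(nodes[v] for v in combo)))
               for combo in itertools.combinations(range(len(nodes)), k))

for k in range(1, 5):
    print(f"any {k} node(s) cover at least {min_coverage(k)} symbols")
# prints 3, 5, 6, 6 -- e.g. an outer mds code with file size at most 5
# can be decoded from any k = 2 nodes of this toy system.
```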
in our constructions in section [ sec : code_cons_bounds ] , we will evaluate the different code designs on these parameters . bounds on the minimum distance of locally recoverable codes have been investigated in prior work . specifically , considers the case of scalar ( ) storage nodes and considers vector ( ) storage nodes . [ local bound ] consider a locally recoverable dss with parameters , file size and minimum distance . then , note that if a code is optimal with respect to lemma , then the file can be recovered from node erasures . this implies that . therefore , a code is equivalently optimal if and every set of nodes can reconstruct the file . the minimum distance bound was tightened by when each storage node participates in a local code with minimum distance at least two ; this allows for local recovery when there is more than one failure . however , for the class of codes that we consider , our bound ( see section [ sec : code_cons_bounds ] ) is tighter . in this section , we present several code constructions and a minimum distance bound for a specific system architecture where the local structures are also fr codes . our first construction is a class of codes which is optimal with respect to the bound provided in lemma [ local bound ] and allows local recovery in the presence of a single failure . our construction leverages the properties of graphs with large girth . an undirected graph is called an -graph if each vertex has degree , and the length of the shortest cycle in is . [ grcons1 ] let be a -graph with . * arbitrarily index the edges of from 1 to . * each vertex of corresponds to a storage node and stores the symbols incident on it . it can be observed that the above procedure yields an fr code with storage nodes , parameters , and . upon a single failure , the failed node can be regenerated by downloading one symbol each from the storage nodes corresponding to the vertices adjacent to it in ( i.e. , ) ; thus , . we note that the work of also used the above construction for mbr codes , where the file size was guaranteed to be at least ; however , they did not have the restriction that is a -graph . as we discuss next , -graphs allow us to construct locally recoverable codes and provide a better bound on the file size when . we allow the system parameter to be greater than ; however , in the work of , they consider only the case . [ lemma : grcons1_coverage ] let be a fr code constructed by construction [ grcons1 ] . if , and , we have for any . let and be any nodes in our dss , where . we argue inductively . note that . suppose that for , where is the number of connected components formed by the nodes in . now consider , where . note that , since there can be no cycle in . thus , is connected at most once to each connected component in . suppose that is connected to existing connected components in , where . then , the number of connected components in is and the number of new symbols that it introduces is . therefore . this proves the induction step . thus , , where is the number of connected components formed by . now consider . note that there can be a cycle introduced at this step if . now , if , it can be seen that can only connect to each of the connected components once , otherwise it would imply the existence of a cycle of length strictly less than in . thus , in this case . on the other hand , if , then can connect at most twice to this connected component . in this case again we can observe that . let be a -graph with and .
if such that , then obtained from by construction [ grcons1 ] is optimal with respect to the bound in lemma [ local bound ] when the file size . from lemma [ lemma : grcons1_coverage ] , any nodes cover at least symbols . thus , the code is optimal when the following holds . we have since , the following holds . and [ corollary_constr_1 ] let be a -graph with and . if , then obtained from by construction [ grcons1 ] is optimal with respect to the bound in lemma [ local bound ] for file size . it can be observed that in the specific case of , applying construction [ grcons1 ] results in a dss where the union of any nodes has at least symbols . we now discuss some examples of codes that can be obtained from our constructions . sachs provided a construction which shows that for all , there exists a -regular graph of girth . also , explicit constructions of graphs with arbitrarily large girth are known . using these , we can construct infinite families of optimal locally recoverable codes . the petersen graph on 10 vertices and 15 edges can be shown to be a -graph . we label the edges and in fig . [ ptrgraph ] . let the filesize ; we use an outer mds code . applying construction [ grcons1 ] , we obtain a dss with parameters . from corollary [ corollary_constr_1 ] , we observe that the dss meets the minimum distance bound . each vertex acts as a storage node and stores the symbols incident on it . an -graph with the fewest possible number of vertices among all -graphs is called an -cage and will result in the maximum code rate for our construction . for instance , the -cage is the petersen graph . we note here that bipartite cages of girth 6 were used to construct fr codes in , though these were not in the context of locally recoverable codes . it can be seen that construction [ grcons1 ] can be extended in a straightforward way to larger filesizes . our second class of codes is such that the local structures are also fr codes . the primary motivation for considering this class of codes is that they naturally allow for local recovery in the presence of more than one failure as long as the local fr code has a repetition degree greater than two . thus , in these codes , each storage node participates in one or more local fr codes that allow local recovery in the presence of failures . for these classes of codes , we can derive the following tighter upper bound on the minimum distance ( the proof appears in the appendix ) when the file size is larger than the number of symbols in one local structure . [ minimum distance ] let be a locally recoverable fr code with parameters , where each node belongs to a local fr code with parameters . suppose that the file size . then , the following corollary can also be established ( see appendix ) . [ mincor ] let be a locally recoverable fr code with parameters , where each node belongs to a local fr code with parameters . furthermore , suppose that can be partitioned as the union of disjoint local fr codes . if the file size for some integer and , we have [ design ] let be a fr code with parameters such that any + 1 nodes in cover symbols and for , we have when .
we construct a locally recoverable fr code by considering the disjoint union of copies of . thus , has parameters . we call the local fr code of . [ lemma : cond_kron_opt ] let be a code constructed by construction [ design ] for some , such that the parameters of the local fr code satisfy . let the file size be for some . then is optimal with respect to corollary [ mincor ] . it is evident that is the disjoint union of local fr codes . thus , the minimum distance bound here is the code is optimal when any nodes in cover at least symbols . we show that this is the case below . let be the number of nodes that are chosen from the -th local fr code and be the symbols covered by these nodes . note that for any , if , then ( the maximum possible ) . suppose there are local fr codes that cover symbols . it can be seen that in this case it suffices to show that nodes cover at least symbols . here we can omit the case of , since our claim clearly holds in this situation . suppose that these nodes belong to local fr codes , where . by applying corradi 's lemma we obtain this implies that the above lemma can be used to generate several examples of locally recoverable codes with . we discuss two examples below . [ affine resolvable ] in our previous work we used affine resolvable designs for the construction of fr codes that operate at the mbr point . let be a prime power . these codes have parameters and . moreover , the code is resolvable , i.e. , we can vary the repetition degree by choosing an appropriate number of parallel classes . suppose we choose the local fr code by including parallel classes . thus , the parameters of the local fr code are . for this code it can be shown that and that . it can be observed that this local fr code satisfies the conditions of lemma [ lemma : cond_kron_opt ] when . we construct a locally recoverable fr code by taking the disjoint union of of the above local fr codes . thus , has parameters . it can be seen that the code allows for local recovery in the presence of at most failures , i.e. , . let the file size be for some . then is optimal with respect to corollary [ mincor ] . [ projective plane ] a projective plane of order also forms a fr code , where and . furthermore , if , and each pair of symbols appears in exactly one node ; this further implies that . a simple counting argument shows that and . it can be shown that satisfies the conditions of lemma [ lemma : cond_kron_opt ] with , since any nodes cover symbols . we construct a locally recoverable fr code by taking copies of the code . so the code has parameters . let the file size be for some . then , is optimal with respect to lemma [ minimum distance ] and has . an example is illustrated in fig . [ fano ] . it is worth noting that one can also obtain codes using the technique presented above by choosing the local fr code from several other structures , including complete graphs and cycle graphs . owing to space limitations , we can not discuss all these examples here . each local fr code ( the rows in the figure ) is a projective plane of order , which is also known as a fano plane . here any set of nodes covers at least symbols . thus , the minimum distance of the code is for the filesize .
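both example constructions can be checked by brute force . the sketch below is our own illustration ; the edge labelling of the petersen graph and the particular fano - plane line set are standard choices and not taken from the paper . it builds the petersen - graph code of construction [ grcons1 ] and a disjoint union of fano planes as in construction [ design ] , and reports the minimum number of symbols covered by any k nodes :

```python
import itertools

def min_coverage(nodes, k):
    """minimum number of distinct symbols covered by any k storage nodes."""
    return min(len(frozenset().union(*combo))
               for combo in itertools.combinations(nodes, k))

# construction [grcons1] on the petersen graph (3-regular, girth 5):
# outer 5-cycle, inner pentagram and five spokes give the 15 edges (symbols);
# each of the 10 vertices stores its 3 incident edge labels.
pet_edges = ([(i, (i + 1) % 5) for i in range(5)] +
             [(5 + i, 5 + (i + 2) % 5) for i in range(5)] +
             [(i, i + 5) for i in range(5)])
pet_nodes = [frozenset(j for j, e in enumerate(pet_edges) if v in e)
             for v in range(10)]
for k in range(1, 6):
    print(f"petersen code: any {k} nodes cover >= {min_coverage(pet_nodes, k)} symbols")

# construction [design] with the fano plane (projective plane of order 2) as
# the local fr code: 7 points, 7 lines, each line stored on one node;
# t disjoint copies form the overall code.
fano_lines = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
              (1, 4, 6), (2, 3, 6), (2, 4, 5)]
t = 2
code = [frozenset(7 * c + p for p in line)
        for c in range(t) for line in fano_lines]
for k in (1, 2, 3):
    print(f"fano x {t}: any {k} nodes cover >= {min_coverage(code, k)} symbols")
```

in the fano case any two nodes ( lines ) intersect in exactly one point , so two nodes from the same copy cover 3 + 3 - 1 = 5 symbols , matching the pairwise - intersection property used in the example above .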
ramchandran , `` network coding for distributed storage systems , '' _ ieee trans . on info ._ , vol .56 , no . 9 , pp . 4539 4551 , sept .k. rashmi , n. shah , p. kumar , and k. ramchandran , `` explicit construction of optimal exact regenerating codes for distributed storage , '' in _47th annual allerton conference on communication , control , and computing _ , 2009 , pp .1243 1249 . c. suh and k. ramchandran , `` exact - repair mds code construction using interference alignment , '' _ ieee trans . on info ._ , vol .57 , no . 3 ,1425 1442 , 2011 .p. gopalan , c. huang , h. simitci , and s. yekhanin , `` on the locality of codeword symbols , '' _ ieee trans . on info ._ , vol .58 , no .11 , pp . 6925 6934 , 2012 .d. papailiopoulos and a. dimakis , `` locally repairable codes , '' in _ ieee intl .symposium on info .th . _ , 2012 ,2771 2775 .f. oggier and a. datta , `` self - repairing homomorphic codes for distributed storage systems , '' in _ infocom , 2011 proceedings ieee _ , april 2011 , pp .1215 1223 .`` wikipedia : list of device bit rates , available at http://en.wikipedia.org/wiki/listof device bandwidths . ''s. jiekak , a .- m .kermarrec , n. l. scouarnec , g. straub , and a. v. kempen , `` regenerating codes : a system perspective , '' 2012 [ online ] available : http://arxiv.org/abs/1204.5028 .g. m. kamath , n. prakash , v. lalitha , and p. v. kumar , `` codes with local regeneration , '' 2012 [ online ] available : http://arxiv.org/abs/1211.1932 .a. k. rawat , o. o. koyluoglu , n. silberstein , and s. vishwanath , `` optimal locally repairable and secure codes for distributed storage systems , '' 2012 [ online ] available : http://arxiv.org/abs/1210.6954 .s. e. rouayheb and k. ramchandran , `` fractional repetition codes for repair in distributed storage systems , '' in _ 48th annual allerton conference on communication , control , and computing _ , 2010 ,1510 1517 .j. koo and j. gill , `` scalable constructions of fractional repetition codes in distributed storage systems , '' in _49th annual allerton conference on communication , control , and computing _ , 2011 , pp .1366 1373 .o. olmez and a. ramamoorthy , `` repairable replication - based storage systems using resolvable designs , '' in _50th annual allerton conference on communication , control , and computing _ , 2012 .h. sachs , `` regular graphs with given girth and restricted circuits , '' _ journal of the london mathematical society - second series _s1 - 38 , no . 1 ,pp . 423429 , 1963 .f. lazebnik and v. a. ustimenko , `` explicit construction of graphs with an arbitrary large girth and of large size , '' _discrete applied mathematics _60 , no . 1 ,pp . 275284 , 1995 .s. jukna , _ extremal combinatorics : with applications in computer science_.1em plus 0.5em minus 0.4emspringer , 2011 . , for each node , identify an fr code ( if it exists ) such that . if no such fr code exists , find an fr code that has no intersection with and set equal to it. we will apply an algorithmic approach here ( inspired by the one used in ) .namely , we iteratively construct a set so that .the minimum distance bound is then given by .our algorithm is presented in fig .[ min_dist_algorithm ] . towards this end , let and represent the number of nodes and the number of symbols included at the end of the -th iteration . furthermore , let and , represent the corresponding increments between the -th and the -the iteration .we divide the analysis into two cases .* case 1 : [ the algorithm exits without ever entering line 8 . 
] note that we have and where is the minimum number of symbols covered by nodes in the local fr code and hence the minimum size of . by considering the bipartite graph representing the local fr codeit can be seen that .thus , we have suppose that the algorithm runs for iterations and exits on the iteration .then since the algorithm exits without ever entering line , it is unable to accumulate even one additional node .hence thus , the bound on the minimum distance becomes * case 2 : [ the algorithm exits after entering line 8 .] note that by assumption , .suppose that the algorithm enters line , times .now we have , otherwise we could include another local structure . hence we need to add nodes so that strictly less than symbols are covered . it can be seen that we can include at least more nodes .therefore , the total number of nodes therefore , we have the following minimum distance bound the final bound is obtained by taking the maximum of the two bounds obtained above .the proof of corollary [ mincor ] follows by observing that when the code consists of disjoint local fr codes and the file size , where , the algorithm in fig .[ min_dist_algorithm ] never enters line 8 .
|
we consider the design of regenerating codes for distributed storage systems that enjoy the property of local , exact and uncoded repair , i.e. , ( a ) upon failure , a node can be regenerated by simply downloading packets from the surviving nodes and ( b ) the number of surviving nodes contacted is strictly smaller than the number of nodes that need to be contacted for reconstructing the stored file . our codes consist of an outer mds code and an inner fractional repetition code that specifies the placement of the encoded symbols on the storage nodes . for our class of codes , we identify the tradeoff between the local repair property and the minimum distance . we present codes based on graphs of high girth , affine resolvable designs and projective planes that meet the minimum distance bound for specific choices of file sizes .
|
in recent years , the literature has presented a large number of applications of fractal theory to the solution of problems from distinct areas . as examples we may cite applications in botany , medicine and geology . particularly , in physics , we may find applications of fractal theory in optics , materials science and electromagnetism , among many other areas . such a large amount of works exploring tools from fractal theory is fully justified by an interesting observation already pointed out in . this observation states that systems observed in nature generally may be modelled by fractal measures rather than by classical formalisms . among the applications of fractal theory , most of them aim at using fractal modeling in order to extract features from objects of interest according to the problem domain , like textures , contours , surfaces , etc . such features are then provided as input data , for example , to methods for segmentation , classification and description of objects . a classical example of such a fractal feature is the fractal dimension . as in most cases the simple use of the fractal dimension is still not sufficient to represent well the complexity of an object or scenario from the real world , the literature developed techniques for the extraction of a set of features based on the fractal dimension . examples of such approaches are multifractal theory , multiscale fractal dimension ( mfd ) and fractal descriptors . here , we are focused on the fractal descriptors approach . several authors , like in , obtained interesting results in different applications of the fractal descriptors technique to texture and shape analysis , mainly in the description of natural objects . particularly , here we are focused on an approach developed in which uses the volumetric bouligand - minkowski fractal dimension to generate a set of descriptors . such descriptors obtained a high performance in an application to a task of plant leaf classification based on texture . nevertheless , an important drawback of the fractal descriptors technique , particularly that based on bouligand - minkowski , is that the curve formed by the set of descriptors presents a high correlation , that is , each descriptor is strongly dependent on the others . this correlation makes their performance decrease drastically in problems of classification and segmentation with a high number of samples and classes . in such situations , volumetric bouligand - minkowski descriptors have severe limitations . aiming at enhancing bouligand - minkowski descriptors while preserving the reliability of the results , this work proposes the development and use of the functional data analysis ( fda ) transform concept . functional data analysis is a powerful statistical tool developed in . it represents an alternative to the traditional multivariate approach and deals with complex data as being a simple analytical function : the functional data . the fda approach presents certain advantages in this kind of application , like the easy handling of data in nonlinear domains ( as is the case for bouligand - minkowski descriptors ) and the intuitive notion of functional operations , like derivatives and smoothing , employed in the definition of fractal descriptors . to our knowledge , florindo et al . is the first work to apply the fda approach to fractal descriptors . in that work , the functional data representation is used for reducing the dimensionality of the descriptors set in shape recognition problems . here , we propose a different paradigm for fda use , by defining the concept of the fda transform . the fda transform is defined as the operation which changes the original data set ( in this case , descriptors ) space into the space of coefficients of the functional data . the transform still presents two variants : the first uses the coefficients directly , the second performs an additional algebraic transform , described in . the relevance of the fda transform is verified in experiments of classification of two well known datasets , that is , brodatz and outex . the results are compared in terms of classification correctness rate . two variants of the fda transform were considered and compared using three classifiers well known in the literature : linear discriminant analysis ( lda ) , k - nearest neighbors ( knn ) and bayesian . this work is divided into seven sections , including this introduction . the following section explains the concepts of fractal theory , fractal dimension and fractal descriptors . the third introduces the functional data analysis theory and definitions . the fourth shows the proposed method . the fifth describes the experiments . the sixth section shows the results and the last section concludes the work .
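although the method itself is detailed in later sections , the core idea of the fda transform — mapping a descriptor curve to the coefficients of its functional representation — can be sketched in a few lines . in the sketch below ( our illustration ) a legendre polynomial basis stands in for the functional basis ; the basis family and its order are assumptions made for the example , not choices stated here :

```python
import numpy as np

# sketch of the fda transform idea: represent a sampled descriptor curve as
# an analytical function expanded on a smooth basis and use the expansion
# coefficients (a much shorter, decorrelated vector) as the new features.
# the legendre basis and its order are illustrative assumptions.

def fda_transform(descriptors, n_coeffs=10):
    y = np.asarray(descriptors, dtype=float)
    x = np.linspace(-1.0, 1.0, y.size)             # rescaled abscissa
    return np.polynomial.legendre.legfit(x, y, deg=n_coeffs - 1)

# example: a smooth synthetic "descriptor curve" with a little noise
rng = np.random.default_rng(0)
curve = np.log1p(np.linspace(0, 50, 200)) + 0.01 * rng.standard_normal(200)
features = fda_transform(curve, n_coeffs=8)
print(features.shape)                               # (8,) instead of (200,)
```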
here, we propose a different paradigm for fda use , by defining the concept of fda transform .the fda transform is defined as the operation which changes the original data set ( in this case , descriptors ) space into the space of coefficients of functional data .the transform still presents two variants : the first uses the coefficient directly , the second performs a second algebraic transform , described in .the relevance of the fda transform is verified in experiments of classification of two well known datasets , that is , brodatz and outex .the results are compared in terms of classification correctness rate .it was considered two variants of the fda transform and it was compared through three classifiers very well known in the literature : linear discriminant analysis ( lda ) , k - nearest neighbors ( knn ) and bayesian .this work is divided into seven sections , including this introduction .the following explains the concepts of fractal theory , fractal dimension and fractal descriptors .the third introduces the functional data analysis theory and definitions .the fourth shows the proposed method .the fifth describes the experiments .the sixth section shows the results and the last section concludes the work .the literature shows a lot of applications of fractal geometry involving the characterization of natural objects and scenarios .examples of such applications may be found in .most of these works use the fractal dimension as a metric for describe the object .this strategy is justified by the fact that fractal dimension measures the complexity of a structure .physically , the complexity corresponds to the irregularity or to the spatial occupation . these properties are tightly related to constitution aspects which allow the identification of such objects .an important drawback of using only fractal dimension is that it is a unique global value and is not capable of extract information about intricate details of a structure . with the aim of exploring fully the potential of fractal theory, the literature shows the development of techniques which provide not only a unique value but a set of values capable of describing in a richer way an object , based on the fractal theory . among these techniques, we have the multifractal , the multiscale fractal dimension and the fractal descriptors .multifractal theory replaces the fractal dimension analysis by the concept of fractal spectrum , capable of modeling objects which can not be represented by a single fractal measure .multifractal demonstrates to be an interesting tool to capture the different power - law scaling present in a system .the literature still shows an alternative technique for the modeling of objects with fractal theory .this approach is the multiscale fractal dimension ( mfd ) . in mfd approach , instead of simply calculate the fractal dimension from interest objects , a set of features is extracted from the derivative of the whole power - law curve used to provide the fractal dimension .an extension of mfd are the fractal descriptors . 
In this case, we extract features (descriptors) from an object through the computation of the fractal dimension, taking the object under different observation scales. These descriptors are used to compose a feature vector that can be understood as a "signature" characterizing the object. Particularly, fractal descriptors have proved to be an efficient tool for the discrimination of natural textures like those analyzed in the present work. Figure [fig:mfd1] illustrates the discrimination power of fractal descriptors by showing two distinct textures whose fractal dimensions are identical but whose curves of fractal descriptors are visually distinct. The following sections describe in more detail the aspects involved in the fractal descriptors technique, starting from the definition of fractal dimension.

The fractal dimension is a positive real number constituting the main measure extracted from a fractal object. There is no absolute definition for the concept of fractal dimension; the most classical and widely used one is the Hausdorff-Besicovitch dimension. The Hausdorff-Besicovitch dimension is a concept derived from measure theory and is defined over a set $X$ as
\[ \dim_H(X) = \inf\{s : H^s(X) = 0\} = \sup\{s : H^s(X) = \infty\}, \]
where $H^s$ is the Hausdorff-Besicovitch measure, defined by
\[ H^s(X) = \lim_{\delta \to 0}\, \inf\Big\{\sum_i |U_i|^s : X \subset \bigcup_i U_i,\ |U_i| < \delta\Big\}, \]
in which $|U|$ expresses the diameter of $U$, that is, $|U| = \sup\{\|x - y\| : x, y \in U\}$. In many situations, the calculation of the Hausdorff-Besicovitch dimension is very complex and even impracticable. In such cases, we can calculate it by generalizing the concept of the classical Euclidean dimension. In this way, we obtain the expression
\[ D = \lim_{\epsilon \to 0} \frac{\log N(\epsilon)}{\log(1/\epsilon)}, \label{eq:hb} \]
where $N(\epsilon)$ is the minimum number of objects with linear size $\epsilon$ needed to cover $X$. Most definitions of fractal dimension are based on a generalization of equation [eq:hb], expressed through
\[ D = \lim_{\epsilon \to 0} \frac{\log M(\epsilon)}{\log(1/\epsilon)}, \]
where $M$ is a set measure depending on the specific fractal dimension method and $\epsilon$ is the scale parameter. As examples of fractal dimensions defined from the previous expression we can cite box-counting, the packing dimension, the Renyi dimension, etc. Particularly, here we focus on the Bouligand-Minkowski fractal dimension. Like the Hausdorff-Besicovitch dimension, the Bouligand-Minkowski dimension is also based on a topological measure, in this case the Bouligand-Minkowski measure, computed from the volume $V(X \oplus S_r)$ of the dilation of the boundary of the object (set) of interest $X$ by a structuring element $S_r$ with radius $r$. The Bouligand-Minkowski dimension itself is given by
\[ D_{BM} = n - \lim_{r \to 0} \frac{\log V(X \oplus S_r)}{\log r}. \]
For an application to discrete objects represented in a digital image, the calculation is significantly simplified through the use of neighborhood techniques. In this way, the above expression becomes
\[ D_{BM} = n - \lim_{r \to 0} \frac{\log Q(r)}{\log r}, \]
in which $S_r$ is a disk with diameter $r$ (also called the dilation radius), $Q(r)$ is the number of points pertaining to the dilation region and $n$ is the topological dimension of the space in which $X$ is immersed.

Although the fractal dimension is an important measure, it is insufficient for a good representation of complex systems which present different fractal dimensions depending on the observation scale taken into account. In order to provide richer fractal-based information from an object, the literature shows the multiscale fractal dimension (MFD). MFD consists in the application of a multiscale transform to the fractal dimension. The multiscale transform of a signal $u(t)$ is the function $U(b, a)$, where $b$ is directly associated with $t$ and $a$ is the scale variable. Essentially, the multiscale transform is performed through three approaches: scale-space, time-frequency and time-scale.
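Before describing the scale-space machinery, the discrete Bouligand-Minkowski estimate above can be illustrated in code. The following is a minimal sketch for a binary 2D shape using a distance-transform realization of the dilation; the function name, parameter choices and toy example are ours, not the implementation used in this work.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def minkowski_dimension(mask, radii):
    """Estimate the Bouligand-Minkowski dimension of a binary 2D shape.

    mask  : boolean array, True on the object of interest.
    radii : increasing dilation radii (in pixels).
    """
    # Exact Euclidean distance of every pixel to the object: dilating the
    # object by a disk of radius r is equivalent to thresholding this map.
    dist = distance_transform_edt(~mask)
    q = np.array([(dist <= r).sum() for r in radii])  # Q(r), dilation area
    # Power law Q(r) ~ r^(n - D) with n = 2, so D = 2 - slope of the
    # log-log curve.
    slope, _ = np.polyfit(np.log(radii), np.log(q), 1)
    return 2.0 - slope

if __name__ == "__main__":
    # Toy example: a thin circular contour (a smooth curve, expected D ~ 1).
    yy, xx = np.mgrid[:256, :256]
    rr = np.sqrt((xx - 128.0) ** 2 + (yy - 128.0) ** 2)
    contour = np.abs(rr - 60.0) <= 0.5
    print(minkowski_dimension(contour, radii=np.arange(1, 16)))
```

The distance transform is computed once, so sweeping many radii costs only one threshold-and-count per radius, which is why this realization is standard for Bouligand-Minkowski descriptors.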
In the following, we describe the approach used in MFD, namely scale-space; more details are found in the literature. Scale-space is a particular case of the multiscale transform. It is based on the derivative of the signal followed by a convolution with a smoothing Gaussian filter:
\[ U(b, a) = \{(b, a) : (u * g'_a)(b) = 0\}, \]
where the set of zero-crossings is taken over the scales $a$, and $(u * g'_a)(b)$ represents the convolution of the original signal with the first derivative of a Gaussian $g_a$ of standard deviation $a$, that is,
\[ (u * g'_a)(b) = \int u(t)\, g'_a(b - t)\, dt. \]
In the MFD approach, the multiscale fractal dimension is obtained from the Bouligand-Minkowski fractal dimension in the following manner:
\[ MFD(r) = n - \frac{d \log A(r)}{d \log r}, \]
where $A(r)$ is the dilation area for each dilation radius $r$. In the MFD technique, some characteristics of the MFD curve, like the maximum, the minimum and the area below the curve graph, are extracted to compose a feature vector for the analyzed object.

Fractal descriptors are an extension of the MFD concept in which a feature vector is extracted from the fractal dimension calculated over a whole interval of scales. Generally speaking, fractal descriptors are obtained from the function
\[ u : \log(\epsilon) \to \log(M(\epsilon)), \]
where $M$ is a measure depending on the fractal dimension estimation method and $\epsilon$ is the scale parameter. The function $u$ may be used directly, or it may be subjected to a particular transform. For instance, the descriptors may be extracted from the Fourier derivative of $u$:
\[ D_F(u) = F^{-1}\{ i\omega\, F(u)(\omega) \}, \]
where $F$ is the Fourier transform of $u$, $\omega$ is the frequency variable and $i$ is the imaginary unit. In order to attenuate the noise inherent to the derivative operation, one may still apply a convolution with a Gaussian filter embedded in the Fourier derivative. Thus, the above expression becomes
\[ D_F(u) = F^{-1}\{ F(u)(\omega)\, \hat{g}'(\omega) \}, \qquad \hat{g}'(\omega) = i\omega\, e^{-\sigma^2 \omega^2 / 2}, \]
where $\hat{g}'$ is the derivative of the Gaussian in the Fourier domain. Figure [fig:mfd3] shows the aspect of the descriptors curve of an object. The descriptors may also be obtained from the Fourier derivative followed by a principal component analysis (PCA) transform, aiming at reducing the correlation among descriptors; in this way, a more reliable and consistent set of descriptors is provided to characterize, for example, the plant leaf shapes analyzed in earlier work. Here, we propose the application of functional data analysis, described in the following, as a transform applied to $u$, in order to generate more robust and precise fractal descriptors.

In this work we focus on a specific fractal descriptors approach called volumetric Bouligand-Minkowski fractal descriptors (VBFD). The main idea is the computation of the Bouligand-Minkowski fractal dimension of a 3D surface taken under a range of observation scales. These descriptors are employed to describe texture images, that is, to analyze images based on the spatial and color arrangement of pixels.
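As an aside, the Fourier derivative with Gaussian attenuation just described can be sketched in a few lines; the value of sigma and the toy signal below are our choices, and the FFT implicitly assumes a periodic signal, so boundary effects are ignored in this illustration.

```python
import numpy as np

def fourier_derivative(u, sigma=2.0):
    """Differentiate a sampled signal via the Fourier derivative property,
    attenuating high-frequency noise with a Gaussian in the Fourier domain."""
    n = len(u)
    omega = 2.0 * np.pi * np.fft.fftfreq(n)       # angular frequency per sample
    gauss = np.exp(-0.5 * (sigma * omega) ** 2)   # Gaussian low-pass filter
    return np.real(np.fft.ifft(1j * omega * gauss * np.fft.fft(u)))

# Usage: derivative of a smooth, monotone log-log-like descriptor curve.
t = np.linspace(0.0, 1.0, 512)
u = np.log(1.0 + 5.0 * t)
du = fourier_derivative(u) / (t[1] - t[0])        # rescale samples to d/dt
```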
In the first step, we map the intensity image $I : [1:M] \times [1:N] \rightarrow \Re$ onto a surface $S = \{(x, y, I(x, y))\}$ in three-dimensional space; each point of $S$ is then dilated by a sphere of radius $r$, and the descriptors are drawn from the dilation volume computed over a range of radii. For the functional representation of the descriptors, the domain interval $[a, b]$ is divided into "knots" as
\[ a = t_0 < t_1 < \dots < t_k = b, \]
and in each subinterval $[t_i, t_{i+1}]$ the spline is given by a polynomial $P_i$. The order of the spline corresponds to the highest order of the polynomials. Each polynomial is called a basis of the spline function. A B-spline is a particular category of splines characterized by minimum support (the number of points where the function has a value different from zero). Each B-spline basis is defined through the recursion
\[ B_{i,0}(t) = \begin{cases} 1 & t_i \le t < t_{i+1} \\ 0 & \text{otherwise} \end{cases}, \qquad B_{i,d}(t) = \frac{t - t_i}{t_{i+d} - t_i}\, B_{i,d-1}(t) + \frac{t_{i+d+1} - t}{t_{i+d+1} - t_{i+1}}\, B_{i+1,d-1}(t). \]
Finally, the B-spline curve is given by
\[ S(t) = \sum_i c_i\, B_{i,d}(t), \]
where $d$ is the degree of the basis and the $t_i$ correspond to the knots.

Beyond its importance as a statistical analysis tool, FDA has demonstrated to be an efficient technique to extract relevant information from a large data set. For example, a large amount of data relating to water quality collected at a specific location can be handled by obtaining the FDA approximating function for each curve of observed values; the coefficients are then used to extract important characteristics from the data. FDA has also been used to reduce the dimensionality of, and extract useful information from, fractal descriptors in a task of shape analysis. In that case, instead of using the coefficients directly, a transform of those coefficients is employed which takes into account the contribution of the function basis space used. This transform is performed by the canonical transform matrix
\[ W_{ij} = \int \phi_i(t)\, \phi_j(t)\, dt, \]
where the $\phi_i$ are the basis functions. Besides, to simplify the notation, we use $c$, corresponding to the set of coefficients of the basis functions. Thus, the transformed coefficients are given through
\[ \tilde{c} = L^{\ast} c, \]
where $L$ results from the matrix $W$ decomposed by the Choleski method, that is, $L$ is the unique lower triangular decomposition matrix of $W$ such that $W = L L^{\ast}$, where $L^{\ast}$ is the conjugate transpose of $L$. Figure [fig:fda3] shows the discriminative power of FDA: we observe data represented by two curves with a similar visual aspect, and the discrimination achieved by the FDA coefficients without and with the transform.

The present work extends this application to the analysis of volumetric Bouligand-Minkowski descriptors, applied to texture classification. The motivation for this application comes from the fact that volumetric Bouligand-Minkowski descriptors correspond to a typical case of data whose FDA representation is interesting. In fact, the descriptors present a globally smooth aspect, being therefore analytical. Besides, they are extracted from a nonlinear space (a log-log curve) and are thus provided over an irregularly spaced domain. Another motivation is the fact that fractal descriptors may involve a derivative operation, which becomes more intuitive when handling the descriptors as a function and not only as a simple set of unrelated values. Unlike the earlier shape-analysis application, the objective of using FDA here is not simple dimensionality reduction, especially because volumetric descriptors are easily treated by traditional classification methods. A problem with volumetric descriptors is that, although they allow for the achievement of good results, they present a high level of correlation, that is, the original descriptors have a high dependence among themselves. This fact leads to difficulties in discrimination tasks involving a large number of samples and classes. Our purpose is to make evident patterns in the global structure of the descriptors which make possible the enhancement of the discrimination power of the original volumetric descriptors.
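The construction just described, least-squares B-spline coefficients plus the Choleski-based canonical transform, can be sketched as follows. This is a minimal illustration under our own choices (uniform clamped knots, trapezoidal quadrature for $W$, a small ridge for numerical positive definiteness), not the exact implementation used in the experiments.

```python
import numpy as np
from scipy.interpolate import BSpline

def fda_transform(x, y, n_basis=10, degree=3):
    """Return (c, c_tilde): the B-spline coefficients of y(x) and their
    Choleski-transformed counterparts, with W = L L^T the Gram matrix of
    the basis, so that ||L^T c||^2 equals the squared L2 norm of the
    functional datum."""
    # Clamped uniform knot vector: len(t) = n_basis + degree + 1.
    inner = np.linspace(x[0], x[-1], n_basis - degree + 1)
    t = np.r_[[x[0]] * degree, inner, [x[-1]] * degree]

    def basis_matrix(pts):
        # Nudge points inside the support (the last knot span follows an
        # open-interval convention at its right endpoint).
        pts = np.clip(pts, t[degree], t[-degree - 1] - 1e-9)
        eye = np.eye(n_basis)
        return np.column_stack(
            [BSpline(t, eye[j], degree)(pts) for j in range(n_basis)])

    c, *_ = np.linalg.lstsq(basis_matrix(x), y, rcond=None)
    # Gram matrix W_ij = integral of phi_i * phi_j, on a fine grid.
    xf = np.linspace(x[0], x[-1], 2000)
    Bf = basis_matrix(xf)
    W = np.trapz(Bf[:, :, None] * Bf[:, None, :], xf, axis=0)
    L = np.linalg.cholesky(W + 1e-12 * np.eye(n_basis))
    return c, L.T @ c
```

Because the Gram matrix accounts for the overlap of the basis functions, the transformed coefficients carry the geometry of the function space, which is the point of the canonical transform described above.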
For this goal, we propose the use of the direct coefficients $c$ or the transformed coefficients $\tilde{c}$ in place of the conventional fractal descriptors in a classification method. We refer to this representation as the FDA transform. In fact, we have a typical transform, in which the data is mapped from the log-log space of volumetric Bouligand-Minkowski descriptors onto the space of coefficients of the functional data. Figure [fig:fda] illustrates the steps of the FDA transform.

The performance of the FDA transform on volumetric Bouligand-Minkowski fractal descriptors is tested in an application to the classification of textures from two different datasets. The first is the classical Brodatz texture dataset, composed of 111 classes, each one with 10 samples (images) corresponding to photographs of real-world textures. The second analyzed dataset is the also well-known OuTeX dataset, composed of 68 classes with 20 images in each class. The steps of the experiments may be summarized through the following items: 1. extraction of the volumetric Bouligand-Minkowski descriptors from each image in the dataset; 2. computation of the coefficients $c$ of the approximating analytical function (functional data); 3. computation of the coefficients $\tilde{c}$ obtained after the transform described in section [sec:method]; 4. use of the coefficients $c$ and $\tilde{c}$ as input to different classification methods; 5. comparison, in terms of classification performance, between the proposed approach and the direct use of volumetric Bouligand-Minkowski descriptors. The performance of the FDA transform is verified in the direct approach (using $c$) and in the transformed approach (using $\tilde{c}$). The basis used was the B-spline. For the classification process we use classical methods from the literature, namely the Bayesian, k-nearest neighbor (KNN) and linear discriminant analysis (LDA) classifiers.

The results are shown in graphs and tables which represent the different ways of using the FDA transform combined with Bouligand-Minkowski descriptors on the analyzed datasets. Empirically, we found an optimal interval for the number of descriptors used, namely between 60 and 100 for direct FDA coefficients and between 10 and 50 for transformed FDA fractal descriptors. Firstly, figure [fig:resultbrod] shows the correctness rate for the use of FDA fractal descriptors in the classification of the Brodatz dataset. On the left, we show the results for direct FDA; on the right, for transformed FDA. From top to bottom, we use the Bayesian, KNN and LDA classifiers. Initially, we cannot notice any direct relation among the number of descriptors, the basis order and the correctness rate. The exception occurs with the use of LDA with transformed FDA descriptors; in this case, it is clear that the correctness rate increases with the number of descriptors.
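As an aside before continuing with the discussion of the results, the comparison protocol of steps 4-5 above can be sketched with scikit-learn stand-ins for the three classifiers. A Gaussian naive Bayes is our assumption for the "Bayesian" classifier, and the cross-validation scheme and neighbor count are ours; the text does not specify them at this point.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def compare_descriptors(X_raw, X_fda, labels):
    """Print the cross-validated correctness rate of raw versus
    FDA-transformed descriptors for the three classifiers."""
    classifiers = {
        "lda": LinearDiscriminantAnalysis(),
        "knn": KNeighborsClassifier(n_neighbors=1),
        "bayes": GaussianNB(),
    }
    for name, clf in classifiers.items():
        for tag, X in (("raw", X_raw), ("fda", X_fda)):
            rate = cross_val_score(clf, X, labels, cv=10).mean()
            print(f"{name:5s} {tag:3s} correctness = {100 * rate:.1f}%")
```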
In most cases, however, it is noticeable that higher-order bases yield greater correctness. This is explained by the fact that such bases are capable of capturing more details from the conventional VBFD descriptors. Relative to the number of descriptors, the graph shows that each specific combination of FDA descriptors and classifier provides a different pattern in the correctness-rate results. This is also expected, due to the fact that each classifier has a particular way of dealing with correlation and irregularity information. Figure [tab:resultoutex] shows the correctness rate on the OuTeX dataset. The graphs are organized in the same way as in figure [fig:resultbrod]. The observations made for the Brodatz dataset are also valid in this case. Particularly, an interesting observation is that the aspect of the graph for each combination descriptor/classifier is similar in both datasets. The unique significant difference is the global correctness, which is smaller on OuTeX due to its greater difficulty level when compared to the Brodatz dataset.

Now we show the best results achieved by each combination of descriptors and classifiers, together with the number of descriptors used. In table [tab:resultbrod] we show the correctness rate for the Brodatz dataset. We observe that even using a reduced set of descriptors, FDA achieved results noticeably more accurate than the classical VB fractal descriptors. This advantage is more notable for the KNN and Bayesian classifiers: for the Bayesian classifier, FDA presented an advantage of 42%, while for KNN this advantage was 27%. Another important observation from the table is that, in this specific application, the use of direct FDA coefficients proved to be the best solution. This is very encouraging, since this FDA approach is computationally simple and allows an easy statistical interpretation of the analyzed data.

Table: best correctness rates achieved by each combination of descriptors and classifier.

From the previous results, we observe that we cannot extract an exact relation between the number of FDA bases (and, consequently, descriptors), the order of the bases used and the correctness-rate results. However, analyzing less strictly, we observe that in general the use of higher-order bases increases the classification performance. Nevertheless, it is always important to verify the combinations for each different application. Analyzing the results more globally, we observe initially that the FDA transform provides a significant increase in the performance of volumetric Bouligand-Minkowski descriptors, mainly when the KNN and Bayesian classifiers are used. This fact attests that the FDA transform is capable of extracting relevant features from the set of descriptors, allowing the classifiers to provide a more precise classification result. As discussed in section [sec:method], the good performance of the FDA transform was expected due to the smooth, analytical and irregularly spaced nature of the Bouligand-Minkowski descriptors.
The smaller efficiency with the LDA classifier is easily explained by the fact that one step of the LDA method involves a correlation space transform (principal component analysis). The features extracted by the FDA transform do not necessarily have the same correlation properties as the original descriptors, and this fact degrades the performance of the whole classification process.

This work proposed and analyzed the use of the FDA transform aiming at enhancing the performance of volumetric Bouligand-Minkowski fractal descriptors applied to texture classification. The transform consists in the use of the coefficients of the functional data representation in place of the original descriptors. The results demonstrated that the FDA transform significantly increased the accuracy of the classification process, mainly when using the Bayesian and KNN classifiers. The results confirmed what is expected from the theory, since FDA is a powerful statistical tool for the representation of smooth analytical data, like fractal descriptors. The FDA transform extracts important features and patterns from the original descriptor set, yielding a better classification performance. The results strongly suggest that FDA should be considered as an auxiliary tool for other methods in the literature for obtaining fractal descriptors, or even for other techniques in texture analysis that generate a set of values which may be handled as an analytical function.

Odemir M. Bruno gratefully acknowledges the financial support of CNPq (National Council for Scientific and Technological Development, Brazil) (grants #308449/2010-0 and #473893/2010-0) and FAPESP (the State of São Paulo Research Foundation) (grant #2011/01523-1). João B. Florindo is grateful to CNPq (National Council for Scientific and Technological Development, Brazil) for his doctorate grant.

R. Quevedo, F. Mendoza, J. M. Aguilera, J. Chanona, G. Gutierrez-Lopez, Determination of senescent spotting in banana (Musa cavendish) using fractal texture Fourier image, Journal of Food Engineering 84 (4) (2008) 509-515.
R. Quevedo, M. Jaramillo, O. Diaz, F. Pedreschi, J. Miguel Aguilera, Quantification of enzymatic browning in apple slices applying the fractal texture Fourier image, Journal of Food Engineering 95 (2) (2009) 285-290.
R. Lopes, M. Steinling, W. Szurhaj, S. Maouche, P. Dubois, N. Betrouni, Fractal features for localization of temporal lobe epileptic foci using SPECT imaging, Computers in Biology and Medicine 40 (5) (2010) 469-477.
N. Bird, M. Diaz, A. Saa, A. Tarquis, Fractal and multifractal analysis of pore-scale images of soil, Journal of Hydrology 322 (1-4) (2006) 211-219, International Conference on Fractals in the Hydrosciences (HydroFractals 03), Ascona, Switzerland, Aug 2003.
M. Hiltunen, L. Dal Negro, N.-N. Feng, L. C. Kimerling, J. Michel, Modeling of aperiodic fractal waveguide structures for multifrequency light transport, Journal of Lightwave Technology 25 (7) (2007) 1841-1847.
D. Chappard, I. Degasne, G. Hure, E. Legrand, M. Audran, M. Basle, Image analysis measurements of roughness by texture and fractal analysis correlate with contact profilometry, Biomaterials 24 (8) (2003) 1399-1407.
R. P. Wool, Twinkling fractal theory of the glass transition, Journal of Polymer Science Part B: Polymer Physics 46 (24) (2008) 2765-2778, Annual Meeting of the American Physical Society, New Orleans, LA, Mar 10, 2008.
I. Das, N. R. Agrawal, S. K. Gupta, S. K. Gupta, R. P. Rastogi, Fractal growth kinetics and electric potential oscillations during electropolymerization of pyrrole, Journal of Physical Chemistry A 113 (18) (2009) 5296-5301.
W. Y. Chen, S. J. Chang, M. H. Weng, C. Y. Hung, Design of the fractal-based dual-mode bandpass filter on ultra thin liquid-crystal-polymer substrate, Journal of Electromagnetic Waves and Applications 24 (2-3) (2010) 391-399.
K. Vinoy, J. Abraham, V. Varadan, On the relationship between fractal dimension and the performance of multi-resonant dipole antennas using Koch curves, IEEE Transactions on Antennas and Propagation 51 (9) (2003) 2296-2303.
S. Lovejoy, P. Garrido, D. Schertzer, Multifractal absolute galactic luminosity distributions and the multifractal Hubble 3/2 law, Physica A: Statistical Mechanics and its Applications 287 (1-2) (2000) 49-82.
E. T. M. Manoel, L. da Fontoura Costa, J. Streicher, G. B. Müller, Multiscale fractal characterization of three-dimensional gene expression data, in: SIBGRAPI, IEEE Computer Society, 2002, pp. 269-274.
A. R. Backes, D. Casanova, O. M. Bruno, Plant leaf identification based on volumetric fractal dimension, International Journal of Pattern Recognition and Artificial Intelligence (IJPRAI) 23 (6) (2009) 1145-1160.
R. O. Plotze, J. G. Padua, M. Falvo, M. L. C. Vieira, G. C. X. Oliveira, O. M. Bruno, Leaf shape analysis by the multiscale Minkowski fractal dimension, a new morphometric method: a study in Passiflora L. (Passifloraceae), Canadian Journal of Botany / Revue Canadienne de Botanique 83 (3) (2005) 287-301.
T. Ojala, T. Maenpaa, M. Pietikainen, J. Viertola, J. Kyllonen, S. Huovinen, Outex: new framework for empirical evaluation of texture analysis algorithms, in: R. Kasturi, D. Laurendeau, C. Suen (Eds.), 16th International Conference on Pattern Recognition, Vol. I, Proceedings, IEEE Computer Society, Los Alamitos, CA, USA, 2002, pp. 701-706.
A. P. Witkin, Scale space filtering: a new approach to multi-scale descriptions, in: Proceedings of the ICASSP IEEE International Conference on Acoustics, Speech, and Signal Processing, GRETSI, Saint Martin d'Heres, France, 2003, pp.
|
This work proposes and studies the concept of the functional data analysis transform, applying it to the performance improvement of volumetric Bouligand-Minkowski fractal descriptors. The proposed transform consists essentially in changing the descriptors, originally defined in the space of the fractal dimension computation, into the space of coefficients used in the functional data representation of these descriptors. The transformed descriptors are used here in texture classification problems. The enhancement provided by the FDA transform is measured by comparing the transformed to the original descriptors in terms of the correctness rate in the classification of well-known datasets.
|
The emergence of coordinated behavior between humans is a common phenomenon in many areas of human endeavor. Examples include improvisation theater, group dance, music playing, team sports and parade marching. At the core of the interaction between the players lies a fundamental feedback mechanism whereby each player adapts his/her motion in response to the observed movement of the other. To study this intriguing phenomenon, the mirror game has recently been proposed as a simple, yet effective paradigm. In its simplest formulation, the mirror game features two people imitating each other's movements at high temporal and spatial resolution. The game can be played in different experimental conditions: the former where one of the players leads and the other has to follow the leader's movement (leader-follower condition); the latter where the two players create joint synchronized movement (joint improvisation condition). The theory of similarity in social psychology suggests that people prefer to team up with others possessing similar morphological and behavioral features, and that interpersonal coordination is enhanced if their movement shares similar kinematic features. Further evidence suggests that the motor processes involved in interpersonal coordination are strictly related to mental connectedness; to be specific, motor coordination between two people contributes to social attachment. As suggested in the literature, coordination games can therefore be used to help people suffering from social disorders to improve their social skills. They can also be effectively exploited in social robotics to enhance attachment, coordination and rehabilitation during human-robot interactions. For this reason, it has been proposed that creating a VP, or avatar, able to coordinate its motion with that of a HP can be extremely useful to study the onset of coordination and how it is affected by similarity/dissimilarity between the players' motion characteristics. A VP can also be used for the diagnosis and rehabilitation of patients suffering from social disorders, as recently proposed in the literature.

The aim of this paper is the design of a novel interactive cognitive architecture (ICA) based on nonlinear control theory, able to drive a VP to play the mirror game with a human either as a leader or as a follower. Specifically, the goal is to design a cognitive architecture able to drive the motion of the VP interacting with a HP in real time while exhibiting certain desired kinematic features. When playing as a follower, the ICA needs to guarantee that, while exhibiting the desired movement properties, the VP tracks as closely as possible the motion of the human leader. When playing as the leader, the ICA needs instead to generate new, interesting motion. In both cases, it is crucial for the VP to engage with the HP by producing human-like responses in terms of kinematics (maximum acceleration, velocity profile, etc.) and delay times.
In this paper we take the view that the design of such an architecture is fundamentally a nonlinear control design problem where, given some reference input, the architecture has to drive the VP onto a desired motion which is a function of the movement of the human player sensed during the game. In particular, the ICA can be integrated into humanoid robots to achieve desired coordination tasks such as dual-arm coordination. We explore two different approaches, one based on adaptive control, the other on optimal control. Our control architecture mimics the two fundamental actions which have been suggested to be at the core of the emergence of motor coordination between two or more effectors in biological systems: feedback and feedforward. Specifically, the motor system is able to correct deviations from the desired movement with the aid of feedback control, whilst feedforward control allows it to reconcile the interdependency of the involved effectors and to preplan the response to the incoming sensory information. It is shown experimentally that the proposed control architectures are able to effectively drive the VP to play the mirror game while generating motion with desired kinematic properties. In particular, we use the concept of individual motor signature (IMS), recently proposed in the literature, to characterize the motion of an individual player and to evaluate how similar/dissimilar the motion of two different individuals is. Following our approach, we are able to show that the VP driven by the cognitive architecture presented in the rest of this paper can play the mirror game either as a leader or as a follower while exhibiting a desired IMS. Relevant previous work in the literature includes the generation of human-like movement, the development of a mathematical model to explain the coordination dynamics observed experimentally in the mirror game, and the human dynamic clamp paradigm, in which the use of a virtual partner driven by appropriate mathematical models is proposed to study human motor coordination. These previous approaches will be used to investigate and compare the performance of the novel strategy presented in this paper. We wish to emphasize that the control algorithms developed and validated in what follows can also be effectively used for trajectory planning to enhance human-robot coordination in joint interactive tasks.

The rest of the paper is organized as follows. The mirror game set-up, problem statement and motor signature are discussed in section [sec:problem] before presenting the schematic of the proposed cognitive architecture in section [sec:ca]. The feedback control strategies at the core of the ICA are developed and analyzed in sections [sec:adaptive] and [sec:optimal]. The experimental validation of the control algorithms is presented in section [sec:validation], where experimental results are discussed showing the effectiveness of the proposed strategies. A comparison with other existing approaches is also carried out. Finally, conclusions and suggestions for future work are drawn in section [sec:conclusions].

The investigation of interpersonal coordination requires appropriate experimental paradigms. A typical paradigm recently proposed in the literature is the mirror game, which involves two people imitating each other's movements at high temporal and spatial resolution. It can be played in two different conditions: the _leader-follower condition_, where the follower attempts to track the leader's motion as accurately as possible,
and the _joint improvisation condition_, where the players jointly coordinate and synchronize their movements without either of the two being designated as leader or follower. Our set-up is inspired by the one used in earlier mirror-game studies. Specifically, a small orange ball is mounted onto a string, which the HP can move back and forth along the string itself. In the meanwhile, the VP on the opposite screen moves its own ball on a parallel string of the same length (see fig. [setup]). In this implementation of the mirror game, the two players (a HP and a VP) are required to move their respective balls back and forth and synchronize their movement. Here, we assume that the game is played in the leader-follower condition, where the HP is the leader and the VP (robot or computer avatar) is the follower trying to track the leader's movement. However, the VP can opt to act as the leader as well.

Fig. [setup]: experimental set-up of the mirror game between a VP and a HP at the University of Montpellier, France (see the cited work for further details).

The position of the ball moved by the HP is detected by a camera. A feedback control strategy then needs to be designed in order to generate the trajectory of the ball moved by the VP so as to track the movement of the ball controlled by the HP. Such a trajectory can then be provided to the on-board controllers of the VP (robot or computer avatar) as the desired trajectory for its end effector. To solve this control problem so that the VP motion presents features similar to the motion of a human player, we need to choose an appropriate model of the VP motion that can then be controlled using a nonlinear feedback strategy. To this purpose, here we use the Haken-Kelso-Bunz (HKB) oscillator, which was first proposed as a model able to capture the observations made in experiments on human bimanual coordination. The model consists of two nonlinearly coupled nonlinear oscillators described by
\[ \ddot{z} + (\alpha z^{2} + \beta \dot{z}^{2} - \gamma)\dot{z} + \omega^{2} z = \left[a + b(z - w)^{2}\right](\dot{z} - \dot{w}), \]
where $z, \dot{z}$ represent the position and velocity of one finger, and $w, \dot{w}$ the position and velocity of the other finger (modeled by a replica of the equation above obtained by swapping $z$ with $w$); $a$ and $b$ are the coupling parameters, while $\alpha$, $\beta$, $\gamma$ and $\omega$ characterize the response of each uncoupled finger when subject to some reference signal. It is worth pointing out that, other than describing intrapersonal motor coordination, the HKB model has also been used to describe interpersonal motor coordination involving two different people. In particular, the HKB oscillator has been suggested in the literature as a paradigmatic example of human motor coordination.

Solving the mirror game can then be formulated as the following control problem. Given a nonlinear HKB oscillator of the form
\[ \ddot{x} + (\alpha x^{2} + \beta \dot{x}^{2} - \gamma)\dot{x} + \omega^{2} x = u, \label{system} \]
where $x$ and $\dot{x}$ refer to the position and velocity of the end effector of the VP, respectively, and $u$ is an external control input, the problem is to design a feedback controller $u$ such that $x$ achieves bounded asymptotic tracking of the position $r_p$ of the HP, while expressing some desired kinematic features. As a metric to characterize the kinematic properties of the motion of an individual playing the game, we use the concept of individual motor signature (IMS). It has been shown that the IMS is time invariant and unique for each player. It is defined in terms of the velocity profile (or distribution) of the player's motion in solo trials.
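Before turning to the quantification of motor signatures, a minimal numerical sketch of the coupled HKB model reconstructed above is given below. The parameter values are illustrative only (not fitted to mirror-game data), and the function and variable names are ours.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: uncoupled-oscillator terms and coupling gains.
ALPHA, BETA, GAMMA, OMEGA = 1.0, 1.0, 1.0, 2.0 * np.pi
A, B = -0.2, 0.2

def hkb(t, s):
    """Two coupled HKB oscillators; state s = (z, dz, w, dw)."""
    z, dz, w, dw = s
    ddz = -(ALPHA * z**2 + BETA * dz**2 - GAMMA) * dz - OMEGA**2 * z \
          + (A + B * (z - w)**2) * (dz - dw)
    ddw = -(ALPHA * w**2 + BETA * dw**2 - GAMMA) * dw - OMEGA**2 * w \
          + (A + B * (w - z)**2) * (dw - dz)
    return [dz, ddz, dw, ddw]

sol = solve_ivp(hkb, (0.0, 20.0), [0.1, 0.0, -0.1, 0.0], max_step=1e-2)
z, w = sol.y[0], sol.y[2]   # positions of the two "fingers"
```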
To quantify how similar or dissimilar the signatures of two different players are, we use the earth mover's distance (EMD) between any two probability distribution functions (PDFs) of their velocity time series. The EMD between two PDFs $p$ and $q$ can be computed as
\[ EMD(p, q) = \int_{D} \lvert P(x) - Q(x) \rvert\, dx, \]
where $D$ denotes the integration domain, and $P$ (resp. $Q$) denotes the cumulative distribution function of the distribution $p$ (resp. $q$). Fig. [pdf_vel](a) shows the position time series of the same HP and the corresponding PDFs of velocity in two solo trials. It is visible that the two PDFs of the velocity time series resemble each other in terms of their shape, and the EMD between them is small. In contrast, the two PDFs of the velocity time series in fig. [pdf_vel](b) differ remarkably from each other, yielding a considerably larger EMD, which confirms the qualification of the PDF of the velocity time series in solo trials as an individual motor signature.

Fig.: motor coordination between two players in the mirror game.

We design the cognitive architecture of the VP so as to replicate the main processes involved in making a human being play the mirror game (see fig. [human]). The visual system detects the ball's position on the string and generates visual signals, which are then transmitted to the central nervous system (CNS, including brain and spinal cord). Several parts of the CNS (such as the ventral horn, cerebellum and motor cortex) use an internal model to predict the kinematic characteristics of the other player's motion and generate the neural impulses that control the extension and contraction of muscles. Finally, the neuromuscular system activates and coordinates the muscles involved in generating the hand movements. This architecture is mapped onto the real-time control schematics shown in fig. [sig], whose blocks are briefly described below.
* A _camera_ is used to detect the position of the HP, say $r_p$;
* A _filtering and velocity estimation block_ is used to filter the position data acquired by the camera via a low-pass filter and to estimate the velocity of the HP (reference) via the simple formula
\[ \hat{r}_v(t_k) = \frac{r_p(t_k) - r_p(t_{k-1})}{\Delta T}, \]
where $t_k = k \Delta T$ and $\Delta T$ denotes the sampling period of the camera. The estimated velocity is then used to predict the HP position over the next interval by using the expression
\[ r_p(t_k + \Delta T) \approx r_p(t_k) + \hat{r}_v(t_k)\, \Delta T. \]
As an alternative, we could adopt a nonlinear observer to provide a better prediction of the reference velocity, for example a nonlinear extended observer. Here we find that such a complication is unnecessary to solve the problem of interest and therefore choose to use the simple yet effective estimation strategy discussed above.
* At the core of the architecture lie the two blocks _temporal correspondence control_ and _signature control_.
The former is designed to regulate the end-effector model so that its motion tracks that of the HP with varying degrees of dynamic similarity. Specifically, it aims at minimizing the position error between the time series of the HP and that of the VP. The latter block uses the prerecorded velocity time series of a reference HP with the desired IMS (velocity profile) in order to generate the avatar trajectory with the desired kinematic features. In particular, the aim of the signature controller is to reduce the distance (computed in terms of EMD) between the velocity distribution of the VP and that of some reference HP whose motion characteristics it aims to replicate.
* The prerecorded velocity trajectory of a reference HP playing solo, representing the desired IMS, is stored in the _signature generator block_, while the signature of the avatar motion is estimated by the _signature estimation block_.
* The _end effector model_ is used to generate the avatar motion via an appropriate feedback control scheme. As mentioned before, we use the HKB oscillator to describe the dynamics of the end effector.
* Finally, the output of the cognitive architecture (position $x$ and velocity $\dot{x}$) is used as the reference motion for the VP.

In what follows we focus on the design of the feedback control strategies that drive the cognitive architecture. We derive and compare two different types of controllers. First, we develop an adaptive algorithm able to control the temporal correspondence between the VP and the HP during the game (green blocks in fig. [sig]). Then, we consider an optimal controller to solve simultaneously the multi-objective control problem of tracking the trajectory of the HP while preserving the features of the desired IMS of interest (both green and blue blocks in fig. [sig]). For both strategies a proof of convergence is given before presenting numerical and experimental investigations of their performance.

To solve the control problem of temporal correspondence, we propose an adaptive controller based on the end effector model shown in ([system]).
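Before giving the control law, the real-time loop implemented by the architecture can be sketched as follows, together with the EMD used by the signature estimation block. Here `get_ball_position` stands in for the camera block, `control_law` is a placeholder for the controllers derived next, the HKB parameters are illustrative, and a forward-Euler step replaces whatever integrator is used in practice; scipy.stats.wasserstein_distance offers a library alternative for the 1D EMD.

```python
import numpy as np

def emd_1d(v1, v2, n_bins=50):
    """EMD between two velocity distributions, as the integrated absolute
    difference of their cumulative distribution functions."""
    edges = np.linspace(min(v1.min(), v2.min()),
                        max(v1.max(), v2.max()), n_bins + 1)
    dx = edges[1] - edges[0]
    p1, _ = np.histogram(v1, bins=edges, density=True)
    p2, _ = np.histogram(v2, bins=edges, density=True)
    # cumsum(p)*dx gives the CDFs; one more factor dx integrates |P - Q|.
    return np.sum(np.abs(np.cumsum(p1) - np.cumsum(p2))) * dx * dx

def run_ica(get_ball_position, control_law, x0=(0.0, 0.0), dt=0.01, steps=6000):
    """Real-time loop of the architecture (sketch): sample, estimate,
    predict, control, and integrate the HKB end-effector model."""
    ALPHA, BETA, GAMMA, OMEGA = 1.0, 1.0, 1.0, 2.0 * np.pi  # illustrative
    x, dx = x0
    r_prev = get_ball_position()
    vp_velocity = []
    for _ in range(steps):
        r_p = get_ball_position()               # camera sample
        rv_hat = (r_p - r_prev) / dt            # finite-difference velocity
        r_pred = r_p + rv_hat * dt              # one-step-ahead prediction
        u = control_law(x, dx, r_pred, rv_hat)  # feedback control input
        # Forward-Euler step of the controlled HKB end-effector model.
        ddx = -(ALPHA * x**2 + BETA * dx**2 - GAMMA) * dx - OMEGA**2 * x + u
        dx += ddx * dt
        x += dx * dt
        r_prev = r_p
        vp_velocity.append(dx)
    return np.array(vp_velocity)

# The VP's signature can then be compared, via emd_1d, with the velocity
# distribution of a prerecorded reference HP (signature estimation block).
```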
Specifically, we choose the nonlinear controller given by
\[ u = \underbrace{\left[a + b\,(x - r_p)^{2}\right](\dot{x} - \hat{r}_v)}_{\text{coordination}} - \underbrace{c_p\, e^{-\delta (\dot{x} - \hat{r}_v)^{2}}\,(x - r_p)}_{\text{temporal correspondence}}, \label{eq:u} \]
where $r_p$ is the position of the HP, $\hat{r}_v$ is the estimated velocity, and $c_p$ and $\delta$ are constant parameters, while the coupling parameters $a$ and $b$ are updated according to adaptive laws whose right-hand sides include a leakage term proportional to $-\eta_a$, where $\eta_a$ is a positive constant. Note that the control law ([eq:u]) consists of two complementary terms. The first has the same structure as the coupling proposed in the HKB model to describe the interaction between two HPs, albeit with the introduction of adaptive parameters to account for variability between different HPs. The second term, depending on the fixed parameters $c_p$ and $\delta$, deals with the position error when the velocity mismatch approaches zero and hence the first term decays to zero. When the velocity mismatch is relatively large, the coupling term of the HKB equation instead dominates, and motor coordination between the two players becomes more pronounced during the mirror game. A theoretical analysis of the adaptive control algorithm in table [table_ac] is given in what follows.

Table [table_ac]: adaptive control algorithm.

Fig.: PDF of velocity time series for different VP models.

We presented the novel design of an interactive cognitive architecture able to drive a virtual player to play the mirror game against a human player. Two strategies were developed. The first, based on adaptive control, was shown to be effective in achieving temporal correspondence between the motion of the virtual player and that of the human individual. Convergence of the algorithm was proved. It was noticed that the adaptive control strategy does not allow the VP to exhibit the desired kinematic features (individual motor signature) of a given human player. To overcome this limitation, a different strategy based on the iterative solution of an appropriate optimal control problem was proposed. After proving boundedness and convergence of this additional approach, its effectiveness was tested experimentally. It was shown that the proposed strategy is able to drive the VP so as to play the game both as leader and as follower while matching well the individual motor signature of a given individual. Finally, a comparison with other existing models was carried out, confirming the effectiveness of the proposed approach. We wish to emphasize that our approach opens the possibility of making VPs, each modeling a different individual, play against each other and produce in-silico experiments. This can reduce the cost and time of carrying out joint-action experiments and can be effectively used to test different human-machine interaction scenarios via the mirror game.

This work was funded by the European project AlterEgo, FP7 ICT 2.9 (Cognitive Sciences and Robotics), grant number 600610. The authors wish to thank Prof. Benoit Bardy, Dr. Ludovic Marin and Dr. Robin Salesse at EuroMov, University of Montpellier, France, for all the insightful discussions and for collecting some of the experimental data used to validate the approach presented in this paper.

Consider the solution of the optimal control problem in each time interval. Let the polynomial approximation of the optimal state, together with the optimal costate equation and its terminal condition, denote the approximation of the optimal solution; it is then feasible to estimate the position error between the VP and the HP based on the collocation method.
Notice that the collocation residual is negligible due to the high approximation accuracy of numerical methods. In particular, considering that the exact optimal solution is normally not available, the approximate solution corresponds exactly to the position of the VP in the simulation, so we mainly focus on estimating the error of the approximate solution itself. For simplicity, the approximate solution is written as a polynomial expansion in time with unknown constant coefficients over each sampling interval. Substituting this expansion into the optimal state equation and the costate equation at the boundary points yields a linear matrix equation; solving it determines the vector of unknown constants, and thus we obtain the approximate solution in closed form. Its expression involves terms such as
\[ -\eta_m\Big(\frac{T^2\omega^2}{2} + \alpha T^2 x(t_k) y(t_k) + \alpha T x(t_k)^2 + 3\beta T y(t_k)^2 - \gamma T + 2\Big)\big[(\alpha x(t_k)^2 + \beta y(t_k)^2 - \gamma)\, y(t_k) + \omega^2 x(t_k)\big], \]
where $T$ is the length of the interval and $(x(t_k), y(t_k))$ is the state at its left endpoint. Since the states, the model parameters and the horizon are all bounded, it follows from inequality ([error]) that the bound on the tracking error vanishes as the interval length shrinks. Similarly, we can estimate the velocity error between the VP and the reference signal encoding the desired signature; according to inequality ([error_vel]), the bound on the velocity error likewise vanishes under the same conditions.

Noy, L., Dekel, E., Alon, U. (2011). The mirror game as a paradigm for studying the dynamics of two people improvising motion together. Proceedings of the National Academy of Sciences, 108(52), 20947-20952.
Liu, Z., Chen, C., Zhang, Y., Chen, C. L. P. (2015). Adaptive neural control for dual-arm coordination of humanoid robot with unknown nonlinearities in output mechanism. IEEE Transactions on Cybernetics, 45(3), 521-532.
Drop, F. M., Pool, D. M., Damveld, H. J., van Paassen, M. M., Mulder, M. (2013). Identification of the feedforward component in manual control with predictable target signals. IEEE Transactions on Cybernetics, 43(6), 1936-1949.
Laurense, V., Pool, D. M., Damveld, H. J., van Paassen, M. R. M., Mulder, M. (2015). Effects of controlled element dynamics on human feedforward behavior in ramp-tracking tasks. IEEE Transactions on Cybernetics, 45(2), 253-265.
Slowinski, P., Rooke, E., di Bernardo, M., Tsaneva-Atanasova, K. (2014). Kinematic characteristics of motion in the mirror game. Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, San Diego, California, USA, 748-753, October 5-8, 2014.
Slowinski, P., Zhai, C., Alderisio, F., Salesse, R. N., Gueugnon, M., Marin, L., Bardy, B. G., di Bernardo, M., Tsaneva-Atanasova, K. (2015). Dynamic similarity promotes interpersonal coordination in joint-action. Available as arXiv preprint: arxiv.org/abs/1507.00368; see also euromov.eu/alterego.
Zhang, Z., Beck, A., Magnenat-Thalmann, N. (2014). Human-like behavior generation based on head-arms model for robot tracking external targets and body parts. IEEE Transactions on Cybernetics, 45(8), 1390-1400.
Levina, E., Bickel, P. (2001). The earth mover's distance is the Mallows distance: some insights from statistics. Proceedings of the 8th IEEE International Conference on Computer Vision, pp. 251-256, 2001.
Kreuz, T., Mormann, F., Andrzejak, R. G., Kraskov, A., Lehnertz, K., Grassberger, P. (2007). Measuring synchronization in coupled model systems: a comparison of different approaches. Physica D: Nonlinear Phenomena, 225(1), 29-42.
|
The mirror game has recently been proposed as a simple, yet powerful paradigm for studying interpersonal interactions. It has been suggested that a virtual partner able to play the game with human subjects can be an effective tool to affect the underlying neural processes needed to establish the necessary connections between the players, and also to provide new clinical interventions for the rehabilitation of patients suffering from social disorders. Inspired by the motor processes of the central nervous system (CNS) and the musculoskeletal system in the human body, in this paper we develop a novel interactive cognitive architecture based on nonlinear control theory to drive a virtual player (VP) to play the mirror game with a human player (HP) in different configurations. Specifically, we consider two cases: the former where the VP acts as leader and the latter where it acts as follower. The crucial problem is to design a feedback control architecture capable of imitating and following, or leading, a human player (HP) in a joint action task. The movement of the end effector of the VP is modeled by means of a feedback-controlled Haken-Kelso-Bunz (HKB) oscillator, which is coupled with the observed motion of the HP measured in real time. To this aim, two types of control algorithms (adaptive control and optimal control) are used and implemented on the HKB model so that the VP can generate human-like motion while satisfying certain kinematic constraints. A proof of convergence of the control algorithms is presented in the paper together with an extensive numerical and experimental validation of their effectiveness. A comparison with other existing designs is also discussed, showing the flexibility and the advantages of our control-based approach.
|
One focus of soil mechanics experiments is the stress evolution for given strain rate and density. Three striking characteristics observed at slow, elasto-plastic rates are (1) rate independence, (2) the existence of a critical state, and (3) proportional paths, as summed up by the Goldscheider rule (GR). Rate independence means that if the given strain rate is constant, the stress is a function of the strain and does not depend on the actual rate. The critical state is an expression of "ideal plasticity." Starting from an isotropic stress, and applying a constant deviatoric shear rate (the traceless part of the strain rate) while maintaining a vanishing trace to keep the density constant, a granular system will always go into an asymptotic, stationary state, in which the stress no longer changes with time, although the shear rate goes on providing a constant rate of deformation. We shall call this asymptotic state, characterized by the direction of the rate and by the density, the _critical state_. (The asymptotic state is more typically arrived at for a given shear rate at constant pressure, or at a constant value of one of the principal stresses, rather than at constant density. And there are some in the engineering community who insist on restricting the term _critical state_ to the results of this second type of approach. The narrower definition would be sensible if the respective asymptotic states were different. We do not believe this to be the case, for rather basic reasons, as will become clear in section [sec1b].)

The Goldscheider rule, or GR, is a generalization of the critical state. First, it states that a granular system will converge onto the critical state associated with the given rate direction and density starting from any initial stress, not only an isotropic one. And second, it postulates the existence of asymptotic states also for cases of changing density, a point that we believe may be understood as follows. In the principal strain axes, a constant strain rate means the system moves with a constant rate along a fixed direction; this circumstance is referred to as _a proportional strain path_. In the stress space, the associated critical stress is a stationary dot and does not move. Now, adding a constant compressional rate to the isochoric strain path, one again obtains a proportional path, along which the density varies.

For granular materials, the following expression for the elastic energy is appropriate in many respects:
\[ w = B \sqrt{\Delta}\left( \frac{2}{5}\,\Delta^{2} + \frac{u_s^{2}}{\xi} \right), \label{w} \]
where $\Delta$ denotes the elastic volumetric compression, $u_s$ the magnitude of the deviatoric elastic strain, and $\xi$ a material coefficient. This energy was instrumental in achieving the agreement with all the granular phenomena mentioned above, especially the static stress distribution, the incremental stress-strain relation, and elastic waves. Varying the coefficient $\xi$, the yield surface changes to resemble different yield laws, including Drucker-Prager, Lade-Duncan, Coulomb, and Matsuoka-Nakai. For qualitative considerations, however, it is frequently sufficient to fix $\xi$ at a constant value of order unity. We then find the elastic energy convex only for
\[ u_s/\Delta \le \sqrt{2\xi}; \]
the energy turns concave if this condition is violated. We keep $\xi$ finite for the rest of this paper, taking it, along with $B_0$, as density independent. But $B$ is specified as
\[ B = B_0 \left[ \frac{\rho - \bar{\rho}_{\ell p}}{\rho_{cp} - \rho} \right]^{0.15}, \]
with $\bar{\rho}_{\ell p} = (20\rho_{\ell p} - 11\rho_{cp})/9$, and $\rho_{\ell p}$, $\rho_{cp}$ the random loose and random close packing densities. This expression accomplishes three things. The energy is concave for any density smaller than the random loose one, implying no elastic solution exists there. The energy is convex between the random loose density and the random close one, ensuring the stability of any elastic solution in this region. In addition, the density dependence of the sound velocities as measured by Harding and Richart is well rendered by this expression. The elastic energy diverges, slowly, at $\rho \to \rho_{cp}$, approximating the observation that the system becomes orders of magnitude stiffer there.
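A small numerical sketch of the reconstructed energy and of its convexity condition follows; the default parameter values are placeholders, not calibrated constants.

```python
import numpy as np

def elastic_energy(delta, us, B=1.0, xi=5.0 / 3.0):
    """Granular elastic energy w = B*sqrt(delta)*(2*delta**2/5 + us**2/xi):
    delta >= 0 is the elastic volumetric compression, us the magnitude of
    the deviatoric elastic strain (form as reconstructed in the text)."""
    return B * np.sqrt(delta) * (0.4 * delta**2 + us**2 / xi)

def is_convex(delta, us, xi=5.0 / 3.0):
    """Convexity holds for us/delta <= sqrt(2*xi); beyond this ratio the
    energy turns concave and no stable elastic solution exists."""
    return us <= np.sqrt(2.0 * xi) * delta
```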
The elastic stress may be written as
\[ \pi_{ij} = \frac{\partial w}{\partial u_{ij}}, \]
calculated employing eq.([w]). Using eqs.([gsh-13]), it can also be shown that for any isotropic energy, the stress and elastic strain tensors have the same principal directions. And since the critical elastic strain is colinear with the strain rate, all three have the same principal axes asymptotically. The stress eigenvalues then follow directly from the eigenvalues of the elastic strain tensor. From ([gsh-15]), relations between the triplet of strain invariants and the triplet of stress invariants (including the cubic invariant $\sqrt[3]{\sigma^{\ast}_{ik}\sigma^{\ast}_{kj}\sigma^{\ast}_{ji}}$) follow. Moreover, since the rotation matrix of the stress tensor equals the unit matrix for the asymptotic states in both GSH and barodesy, the invariant part needs to be expressed by two angles only. We take them as the stress Lode angle and the friction angle, as defined in the appendix, eqs.([120417-1],[120417-2]); a proportional strain path then implies time-independent angles. (The relation between the angles and the stress invariants is given in the appendix.) The association between the strain and stress paths may thus be given as a map between the corresponding pairs of angles, eq.([120418-11]).

We now calculate the stress evolution for proportional strain paths, obtained by inserting ([120418-2],[120418-3],[120418-4]) into ([120415-1]), taking the strain rate constant. Both GSH and barodesy deliver analytical expressions. The GSH equations can be solved as follows. First, inserting the strain rate ([120418-13]) into eq.([120409-3]), the granular temperature is found to relax exponentially onto its saturation value from the assumed initial condition; the associated relaxation time, or equivalently the strain scale (with the magnitude of the total strain as independent variable), sets how quickly the saturation (asymptotic) value is approached. Inserting this solution into eq.([120409-1]), we obtain the deviatoric elastic strain: the initial strain decays exponentially, and since the relevant dimensionless decay parameter is typically large for an intermediate void ratio of 0.65, the decay is fast, ending at a small strain magnitude. Inserting the same solution into ([120409-2]), we find that the initial bulk strain decays more slowly. For large times, the strains ([120418-15],[120418-16]) become stationary; the associated three invariants follow, the third expression being obtained with eq.([120418-5]). Inserting these into ([gsh-15]), we obtain the principal stresses of the critical state. In the $\pi$-plane coordinates (defined in the appendix), the critical stress has more compact expressions, eqs.([gsh-22],[gsh-23]). As the Lode angle varies over its full range, the loci given by eqs.([gsh-22],[gsh-23]) trace a triangle-like curve, as shown by the full line in fig.[stationary-surface-in-paiplane]. The curve is determined by three material parameters and reduces to a circle in a limiting case.

Fig. [stationary-surface-in-paiplane]: critical surfaces in the $\pi$-plane, as obtained, respectively, from GSH (full line) and barodesy (dots); the dashed curve is the static yield surface, i.e. the convexity surface of the energy, eq.([w]).

The critical surface of barodesy is obtained by taking the isochoric, asymptotic limit.
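Before turning to barodesy's asymptotic form, the stress-from-energy derivation can be checked symbolically. The sketch below differentiates the reconstructed energy with respect to its two invariants, yielding the isotropic and deviatoric stress components; the symbol names are ours.

```python
import sympy as sp

Delta, us, B, xi = sp.symbols("Delta u_s B xi", positive=True)
w = B * sp.sqrt(Delta) * (sp.Rational(2, 5) * Delta**2 + us**2 / xi)

P = sp.simplify(sp.diff(w, Delta))   # pressure conjugate to Delta:
                                     #   B*sqrt(Delta)*(Delta + u_s**2/(2*xi*Delta))
pi_s = sp.simplify(sp.diff(w, us))   # shear stress conjugate to u_s:
                                     #   2*B*sqrt(Delta)*u_s/xi
print(P, pi_s)
```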
In this case, eq.([barodesy-1]) reduces to its asymptotic form, and using the strain rate eq.([120418-13]) in the principal axes we have
\[ \sigma_{1} = \left( \sigma_{1}\sigma_{2}\sigma_{3}\right)^{1/3} \exp\left[ c_{2}\sqrt{\tfrac{2}{3}} \sin\left( L_{\varepsilon} - \tfrac{\pi}{3} \right) \right], \label{barodesy-10} \]
\[ \sigma_{2} = \left( \sigma_{1}\sigma_{2}\sigma_{3}\right)^{1/3} \exp\left( -c_{2}\sqrt{\tfrac{2}{3}} \sin L_{\varepsilon} \right), \label{barodesy-11} \]
\[ \sigma_{3} = \left( \sigma_{1}\sigma_{2}\sigma_{3}\right)^{1/3} \exp\left[ c_{2}\sqrt{\tfrac{2}{3}} \sin\left( L_{\varepsilon} + \tfrac{\pi}{3} \right) \right], \label{barodesy-12} \]
or, in the $\pi$-coordinates,
\[ \pi_{1} = \frac{3}{\sqrt{2}}\, \frac{\exp\left[ \sqrt{2}\lvert c_{2}\rvert \cos\left( L_{\varepsilon} - \frac{\pi}{3} \right) \right] - 1}{\exp\left( \sqrt{2}\lvert c_{2}\rvert \cos L_{\varepsilon} \right) + \exp\left[ \sqrt{2}\lvert c_{2}\rvert \cos\left( L_{\varepsilon} - \frac{\pi}{3} \right) \right] + 1}, \label{barodesy-20} \]
\[ \pi_{2} = \frac{3}{\sqrt{6}}\, \frac{2\exp\left( \sqrt{2}\lvert c_{2}\rvert \cos L_{\varepsilon} \right) - \exp\left[ \sqrt{2}\lvert c_{2}\rvert \cos\left( L_{\varepsilon} - \frac{\pi}{3} \right) \right] - 1}{\exp\left( \sqrt{2}\lvert c_{2}\rvert \cos L_{\varepsilon} \right) + \exp\left[ \sqrt{2}\lvert c_{2}\rvert \cos\left( L_{\varepsilon} - \frac{\pi}{3} \right) \right] + 1}. \label{barodesy-21} \]
In barodesy, the critical surface is thus determined by the single parameter $c_{2}$, and is triangle-like in the $\pi$-plane, see fig.[stationary-surface-in-paiplane]; it is the same curve as in fig. 1 of the second barodesy reference. Transforming both the GSH expressions, eqs.([gsh-22],[gsh-23]), and the barodesy ones, eqs.([barodesy-20],[barodesy-21]), into the angle variables using the formulas given in the appendix, we retrieve the association of eq.([120418-11]), as shown in fig.[fig2], again with great similarity between the two theories.

In contrast to the last two figures, which contain only asymptotic information, fig.[fig3] shows the evolution of the three stress eigenvalues, starting from an initially isotropic stress state. The numerical calculation employs GSH, eqs.([120409-1],[120409-2],[120409-3]), and barodesy, eq.([barodesy-1]), for the same proportional path. The transient behavior is clearly somewhat different: it contains an oscillation in GSH (full lines), but is monotonic in barodesy (dashed lines). The discrepancy is probably due to the (correct) nonmonotonic behavior of the pressure and shear stress in GSH.

Fig.[fig3]: (a) evolution of the stress eigenvalues with the total strain, as obtained with GSH (full) and barodesy (dashed); (b) evolution of the pressure (normalized by the initial pressure) with the total strain. The barodesy curve is monotonic, the GSH one is not; this explains the difference in the transient regime.

Although the expressions from barodesy, eqs.([barodesy-20],[barodesy-21]), and GSH, eqs.([gsh-22],[gsh-23]), are rather different, the relevant plots are not. Yet, to achieve this agreement, hardly any fiddling with the parameters was necessary. The barodesy parameters were simply taken from eq.([barodesy-parameters]); the GSH parameters are essentially the same as we employed before: the energy coefficients are static parameters, taken as in our previous work, with one slight change that perfected the agreement of fig. 1, a change we could not resist. The remaining coefficients were likewise taken over, equivalently to the values used separately in the earlier papers.

Fig.[fig6]: void ratio and stress magnitude of the asymptotic state (normalized by its initial value), as given respectively by GSH and barodesy. The two quantities change in tandem, according to either of these two curves.
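For reference, the barodesy critical locus, eqs.([barodesy-20])-([barodesy-21]), is evaluated numerically below; the default value of $c_2$ is a placeholder, not the calibrated constant of eq.([barodesy-parameters]).

```python
import numpy as np

def barodesy_critical_surface(c2=-1.0, n=361):
    """Locus of the barodesy critical state in the pi-plane, swept over
    the Lode angle L in [0, 2*pi)."""
    L = np.linspace(0.0, 2.0 * np.pi, n)
    ea = np.exp(np.sqrt(2.0) * abs(c2) * np.cos(L))
    eb = np.exp(np.sqrt(2.0) * abs(c2) * np.cos(L - np.pi / 3.0))
    s = ea + eb + 1.0
    pi1 = 3.0 / np.sqrt(2.0) * (eb - 1.0) / s             # eq. (barodesy-20)
    pi2 = 3.0 / np.sqrt(6.0) * (2.0 * ea - eb - 1.0) / s  # eq. (barodesy-21)
    return pi1, pi2

pi1, pi2 = barodesy_critical_surface()   # traces a triangle-like closed curve
```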
If the strain path contains a small compressional component, the density and void ratio will change, as will the magnitude of the stress, according to eq.([gsh-dichte]) in GSH and to eq.([barodesy-3]) in barodesy. Again, in spite of the different expressions, the curves are similar, at least qualitatively, see fig.([fig6]). The convergence onto the asymptotic state is depicted in fig.([fig7]). Following Kolymbas' papers, we have also computed four figures each for (drained) triaxial and oedometric tests employing GSH, see figs.[fig9] and [fig8].

Fig.[fig7]: convergence onto the asymptotic state for a path along which the density varies linearly, as given in the inset of (b); only the GSH curve is displayed, not the barodesy one (because, given figs.[fig3] and [fig6], they cannot be that different). Convergence of the stress ratios takes place quickly, from which point on the stress path is proportional. The first part of the pressure change occurs during the convergence; the second part, where the pressure changes associated with positive and negative rates diverge, belongs to the asymptotic state.

Fig.[fig9]: (drained) triaxial test, holding the lateral stress constant (the case with an initially higher density is rendered in solid lines, the looser one in dashed lines): (a) deviatoric stress; (b) void ratio; (c) volumetric strain; (d) the friction angle.

Fig.[fig8]: oedometric test, holding the lateral strain fixed: (a) axial stress-strain curve; (b) stress path; (c) stress versus time; (d) void ratio versus time. Note that moving upwards implies a compaction of the system.

In comparing GSH to barodesy, we set out to achieve two goals: to validate GSH, and to provide a transparent, sound understanding of GR. Both goals were reached. GSH is validated because it yields results for various key quantities similar to those of barodesy, achieving better agreement than could reasonably be expected, without much fiddling with parameters. Conversely, the understanding of the Goldscheider rule and of barodesy comes from the physics of GSH. The theory has (for the range of shear rates typical of soil mechanics experiments) three state variables and two constituent parts. The state variables are the density, the granular temperature (quantifying granular jiggling), and the elastic strain tensor (accounting for the coarse-grained deformation of the grains). The two constituent parts are, first, the explicit expression for the stress tensor, a function of the state variables that is obtained from the elastic energy; and second, a rate-independent relaxation equation for the elastic strain, derived from the notion of variable transient elasticity. Given any initial elastic strain, the system will always converge onto the stationary solution prescribed by the relaxation equation. This stationary value is a function of the constant strain rate or, equivalently, of the proportional strain path's direction, and may be identified as the asymptotic, critical state for isochoric paths. This convergence, a consequence of the relaxation equation, is closely related to variable transient elasticity, and hence a generic aspect of granular behavior.
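The convergence mechanism just described can be caricatured by a single relaxation equation. In GSH proper the relaxation rate is set by the granular temperature, but a constant time scale tau already reproduces the exponential approach to the critical value; all numbers below are illustrative.

```python
import numpy as np

def relax_to_critical(v_star, tau=0.05, dt=1e-3, steps=4000, u0=0.3):
    """du*/dt = v* - u*/tau: any initial deviatoric elastic strain decays
    and u* converges onto tau*v*, the value set by the strain-rate
    direction -- the essence of the convergence onto the critical state."""
    u = np.empty(steps)
    u[0] = u0
    for k in range(1, steps):
        u[k] = u[k - 1] + (v_star - u[k - 1] / tau) * dt
    return u

u_hist = relax_to_critical(v_star=1.0)   # approaches tau*v* = 0.05
```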
given and the density , the stress is also fixed . its form , however , depends on the expression for the elastic energy , which is material dependent and less robust . if the strain path is isochoric , with , the asymptotic stress state is a constant of time , but a function of , or equivalently , of the strain path s direction . as the path varies , the associated stress states lie within a triangle , as depicted in fig.[stationary - surface - in - paiplane ] . if the shear rate is a sum of and a small , the asymptotic state is ( cum grano salis ) still given by the associated with , though the density will now change . the asymptotic stress is therefore a function of the same and a changing density , hence no longer a constant . that the stress path is also proportional , that only the magnitude of the stress changes with time , not the ratios of its eigenvalues , is the least robust part of gr , because it depends on the density dependence of certain coefficients canceling . constructing a constitutive relation , specifying , is possible only for someone with vast experience with granular media and deep knowledge of how they behave . that gsh , derived from two simple notions of what the two basic elements of granular physics are , yields an equivalent account is eye - opening , and the presented agreement is the actually amazing fact . a symmetric tensor , e.g. the stress tensor , can be decomposed into two parts : a spatial rotation and a part which is invariant under any rotation . in most analyses we are interested mainly in the three invariants . there are usually various ways to represent the invariant triplet , one of which is \sqrt[3]{\sigma _ { ik}^{\ast } \sigma _ { kj}^{\ast } \sigma _ { ji}^{\ast } } . \label{120417 - 0b}\end{aligned}\ ] ] another is , where are two angle variables ( ) . in soil mechanics , is usually called the lode angle of stress . the angle can be interpreted as a `` friction angle '' ( because it represents the ratio between shear force and pressure ) . moreover , we can also use the three eigenvalues of the stress tensor as an invariant triplet , which are related to by , where is given by ( [ 120417 - 1 ] ) . in soil mechanics , it is also usual to define the two coordinates in the so - called -plane , . inserting ( [ 120417 - 3],[120417 - 4 ] ) into ( [ 120417 - 6a],[120417 - 6b ] ) , we have . with the help of eqs.([120417 - 0]-[120417 - 7 ] ) we can readily transform among the invariant triplets : , , , . similar decompositions apply for the elastic strain , the total strain , the strain rate tensor etc . ; only note that the first invariant is frequently defined with a factor different from that of . kolymbas d. barodesy : a new constitutive frame for soils . geotechnique letters 2 , 17 - 23 , ( 2012 ) , http://dx.doi.org/10.1680/geolett.12.00004 ; barodesy : a new hypoplastic approach . international journal for numerical and analytical methods in geomechanics ( 2011 ) . doi:10.1002/nag.1051 ; sand as an archetypical natural solid . in mechanics of natural solids , kolymbas d , viggiani g ( eds . ) . springer : berlin , ( 2009 ) ; 1 - 26 ; d. kolymbas . . balkema , rotterdam , 2000 . w. wu and d. kolymbas . . springer , berlin , 2000 . jiang and m. liu , granular solid hydrodynamics . granular matter , * 11 * , 139 ( 2009 ) ; y.m . jiang and m. liu , the physics of granular mechanics . _ mechanics of natural solids _ , edited by d. kolymbas and g. viggiani , springer , pp . 27 - 46 ( 2009 ) ; g. gudehus , y.m . jiang , and m. liu , seismo- and thermodynamics of granular solids . granular matter , * 13 * , 319 ( 2011 ) . ezaoui , a .
and di benedetto , h. experimental measurements of the global anisotropic elastic behaviour of dry hostun sand during triaxial tests , andeffect of sample preparation .gotechnique , 59(7):621 - 635 , 2009 .thornton , c. and anthony , s. j. quasistatic deformation of particulate media .philosophical transactions of the royal society of london .series a : mathematical , physical and engineering sciences , 356(1747 ) : 2763 - 2782 , 1998 .
|
_ propotional paths _ as summed up by the _ goldscheider rule _ ( gr ) stating that given a constant strain rate , the evolution of the stress maintains the ratios of its components is a characteristics of elasto - plastic motion in granular media . _ barodesy _ , a constitutive relation proposed recently by kolymbas , is a model that , with gr as input , successfully accounts for data from soil mechanical experiments . _ granular solid hydrodynamics _ ( gsh ) , a theory derived from general principles of physics and two assumptions about the basic behavior of granular media , is constructed to qualitatively account for a wide range of observation from elastic waves over elasto - plastic motion to rapid dense flow . in this paper , showing the close resemblance of results from barodesy and gsh , we further validate gsh and provide an understanding for gr .
|
it is very common in practice , for a model , to measure the regressors ( covariates ) but , for varied reasons , it is sometimes impossible to have all values of the response variable . the most widely used idea is to remove or model the observations with missing data . an alternative solution is to consider the empirical likelihood ( el ) method , which is a powerful nonparametric method for constructing confidence regions of parameters . + to our knowledge , previous theoretical and numerical investigations in the literature have focused , for models with missing response , only on the linear case . the el method , proposed by , does not need the asymptotic variance of the estimator , and it outperforms the normal approximation method in terms of coverage probability for linear models . develops el inferences for the mean of a response variable under regression imputation of missing responses for a linear regression model and random covariates . construct an el statistic on the parameter when the regressors are deterministic , and also when the regressors are random , based on the least squares ( ls ) method for the linear model . consider the general linear model with a known vector function and investigate a hypothesis test on the response variable . these last three papers impose the condition that the conditional expectation of the error with respect to the covariate is zero . if this hypothesis is not satisfied , in order to reconstitute the response variable , the least absolute deviations ( lad ) method can be used . one advantage of least absolute deviations estimation is that it does not require any moment condition on the errors to obtain asymptotic normality . + it is also well known that one outlier may cause a large error in a least squares estimator . this occurs in the case of fatter - tailed distributions of the error term . on the other hand , as and indicate , for heavy - tailed distributions the lad estimator is more efficient than the ls estimator . + concerning the lad estimator in a complete nonlinear model , we can refer to the following papers : shows conditions for its consistency , proves that this estimator is consistent and asymptotically normal in a dynamic nonlinear model with neither independent nor identically distributed errors , and gives the convergence rate , where is a monotone positive sequence such that and for . it is well known that confidence regions based on asymptotic normality can encounter large coverage errors for small sample sizes or if the error distribution has outliers . the lad technique was already used in a censored median linear regression model with missing data : see e.g. . + for other relevant papers ( not an exhaustive list ) on the el method for missing data in linear models see , , , , . + in this paper , for a nonlinear random model with missing responses , empirical likelihood ratios are constructed by using complete - case or imputed values . the nonparametric version of the wilks theorem is proved for two cases : the parameters are estimated by least squares or by least absolute deviations on the complete data . the limiting distribution of the el statistic is , a result which can be used to construct confidence regions for the parameter .
in order to complete the data , the imputed value for a missing response is obtained by generalising the idea of for the linear model , using a semiparametric technique : the regression parameters are estimated by the ls method and the missing probability by a nonparametric method . we show that the empirical log - likelihood ratio on the parameter based on the improved data is asymptotically chi - squared . the numerical simulations prove that the el methods outperform the normal approximation method in terms of coverage probability , up to and including on the reconstituted data . if the distribution of the errors presents outliers , the lad method generally gives better results than the ls method for the coverage probability and for the efficiency of the parameter estimators . in addition , if the expectation of the error does not exist , as is the case for the cauchy distribution , the normal approximation of the ls estimator cannot hold . on the real data we also obtain that our semiparametric method gives more precise results for reconstituting the response variable than the classic parametric ls method . + the rest of this paper is organised as follows . in section 2 we introduce the model , the assumptions and some notation . in section 3 the wilks theorem for the el statistic , and also for its approximation , is given when the parameters are estimated using the ls or lad method on complete data . the ls case is developed further in section 4 : a reconstituted value for the response variable is introduced and the asymptotic distribution of the el statistic for the response variable is obtained . section 5 illustrates by simulation results that the el methods for the nonlinear random model outperform the normal method and give very competitive coverage probabilities . an application to real data is presented in section 6 . finally , section 7 contains the proofs of the lemmas and of the theorems . let us consider the following random nonlinear model : where is a sequence of continuous independent random vectors with the same joint distribution as and is a vector of unknown regression parameters . more precisely , is a random variable and a random vector of covariates . let denote the true ( unknown ) value of the parameter . + with regard to the random variable we make one of the following suppositions : + * ( h1 ) * =0 ] , . + * ( h1bis ) * =0 ] for any positive sequence as . + * ( h4 ) * , are bounded for any and in a neighborhood of . + the sets and are compact . thus , assumptions ( h2 ) and ( h4 ) are commonly used in nonlinear modelling and are necessary for the consistency and the asymptotic normality of the ls or lad estimator . assume also that the model is identifiable : if with probability one , then . + for model ( [ e1 ] ) , all the s are observed , whereas the response variable can be missing . let be the sequence of random variables defined by : if is missing and if is observed . we suppose that is missing at random ( mar ) : =\ep[\delta_i=1|\ex_i] ] , . this supposition is a common assumption in the literature . the parameter is estimated on the completely observed data by two methods : the least squares ( ls ) method : ^ 2\ ] ] and the least absolute deviations ( lad ) method : to build the el statistic , let us consider the following functions , for : \ef(\ex_i,\eb),\ ] ] and let also be : with either or . the two estimators are a solution of the system of equations : .
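to make the two complete - case estimators concrete , the following python sketch fits a nonlinear regression by ls and by lad on the observed pairs only ; the exponential regression function , the logistic mar mechanism and all numerical values are assumptions chosen for the example :

```python
import numpy as np
from scipy.optimize import minimize

# Complete-case LS and LAD estimation for y_i = f(x_i; b) + eps_i with
# responses missing at random; the model f and all numbers are illustrative.
def f(x, b):
    return b[0] * np.exp(b[1] * x)

rng = np.random.default_rng(0)
n, b_true = 200, np.array([2.0, 0.5])
x = rng.uniform(0.0, 2.0, n)
y = f(x, b_true) + rng.standard_t(df=2, size=n)        # heavy-tailed errors
delta = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-1.0 - x))  # MAR indicator

def ls_loss(b):    # least squares on the delta_i = 1 cases only
    r = y[delta] - f(x[delta], b)
    return np.sum(r ** 2)

def lad_loss(b):   # least absolute deviations on the complete cases
    return np.sum(np.abs(y[delta] - f(x[delta], b)))

b_ls = minimize(ls_loss, x0=np.ones(2), method="Nelder-Mead").x
b_lad = minimize(lad_loss, x0=np.ones(2), method="Nelder-Mead").x
print(b_ls, b_lad)  # under t(2) errors, LAD is typically closer to b_true
```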
under ( h1 ) , respectively ( h1bis ) , we have : =\textbf{0} ] , where satisfies the equation : thus , can be written : in order to study the asymptotic properties of the el statistic given by ( [ e7 ] ) , let us consider the following matrix : and , under assumption ( h1 ) , : . for the first matrix we suppose : + * ( ha ) * is a positive definite matrix . + let us notice that is the fisher information matrix on the complete data . we first give a classical result for a mar model , a lemma that turns out to be useful in the proofs of the main results . [ lemma2 ] under assumptions ( h1 ) , respectively ( h1bis ) , and ( h2 ) , ( h4 ) , ( ha ) , we have : + ( i ) , + ( ii ) , + ( iii ) , + with for the ls and for the lad method . let us consider the following matrix : the following theorem gives the asymptotic distribution of the empirical log - likelihood statistic ( [ e7 ] ) , evaluated at the true value . [ theorem1 ] under assumptions ( h1 ) , respectively ( h1bis ) , ( h2 ) , ( h4 ) , ( ha ) , . we can use theorem [ theorem1 ] to get an approximate confidence region for , or for testing the hypothesis : . since , by the proof of theorem [ theorem1 ] ( see appendix ) , for the el statistic we have : + , in order to calculate numerically we can use the approximation : we state this as a corollary . [ corollaire1 ] under the same assumptions as in theorem [ theorem1 ] , the asymptotic distribution of is . thus , an asymptotic confidence region for , based on the el statistic on complete data , is : where is the quantile of the chi - squared distribution with degrees of freedom . it is very interesting to note that , to construct the confidence region for , it is not necessary to calculate the lagrange multiplier which intervenes in ( [ e7 ] ) ; the observed data are enough . + the asymptotic normality of the ls and lad estimators calculated on complete data is given by the following result . [ theorem2 ] ( i ) under assumptions ( h1 ) , ( h2 ) , ( h4 ) , ( ha ) we have : + . + ( ii ) under assumptions ( h1bis ) , ( h2 ) , ( h4 ) , ( ha ) we have : + . this theorem allows us to give the normal approximation based confidence region , an expression which will be specified in section 5 . then , on complete data , we have the choice between four statistics ( for ls , for lad , , ) to test hypotheses or to build the asymptotic confidence region of the model parameter . we see in sections 5 and 6 , by simulations and a model on real data , that the approximated el statistics are sharply superior to the normal approximation given by theorem [ theorem2 ] . if the error distribution presents outliers , then for the lad method is recommended ; otherwise it is better to consider for the ls method . this last one will be developed further in the following section . the missing probabilities are estimated by a nonparametric method , which will allow us to reconstitute the missing responses . on the observed and the reconstituted observations one defines a new el statistic , which also satisfies a wilks theorem . besides , numerically , it gives very competitive results ( see sections 5 and 6 ) . following e.g.
, for linear model , we shall introduce the forecast of , constructed by using ls estimator for parameter and a nonparametric estimator for probability : with a nonlinear estimator for , as in the linear regression : where is a positive sequence tending to 0 as and is a kernel function defined in .the bandwidth satisfies : + * ( h5 ) * and , as , with the sequence given in assumption ( h3 ) .+ the kernel function , satisfies the classical condition ( imposed also for the linear model using the ls method of ) : + * ( h6 ) * there exist positive constants , and such that : .+ concerning the selection probability function let us make following regularity hypothesis necessary in the study of its nonparametric estimator . +* ( h7 ) * has bounded partial derivatives up to order almost everywhere .+ conditions ( h5)-(h7 ) are usual assumptions for convergent rates of kernel estimating method .let us denote ] error variance .+ following lemma gives the asymptotic normality for the sequence and other two similar results of lemma [ lemma2 ] .[ lemma3 ] under assumptions ( h1)-(h6 ) andif < \infty ] .using similar arguments as for theorem [ theorem1 ] we obtain the following result .we hence omit its proof .[ theorem3 ] suppose that assumptions ( h1)-(h6 ) hold , then for empirical log - likelihood for : we have : .this result can be used to make test of hypothesis or to construct asymptotic confidence region for the response variable . constructs a weight - corrected empirical log - likelihood ratio for which is also asymptotically chi - squared .+ let be now following functions constructed using the reconstituted response : \ef(\ex_i;\eb ) , \qquad i=1 , \cdots , n.\ ] ] consider also the empirical log - likelihood associated at : then , the equivalent of ( [ e7 ] ) is : consider following lemma needed for theorem [ theorem5 ] .[ lemma4 ] under assumptions ( h1)-(h7 ) , we have : + ( i ) .+ ( ii ) . + ( iii ) .following result shows that the empirical log - likelihood ratio on based on the reconstituted data converges to towards .this theorem shows in a similar way as theorem [ theorem1 ] , then the proof will be omitted .[ theorem5 ] under assumptions ( h1)-(h7 ) we have : . from theorem [ theorem5 ], one can construct an asymptotic -level confidence region for using all available values for and the reconstituted values for . in a similar way in the complete data , corollary [ corollaire1 ] ,the statistic may be approximated by : with .the asymptotic distribution of is .this implies that for testing hypothesis we can use the statistic with asymptotic reject region where is the quantile of the chi - squared distribution with degrees of freedom . since the convergence rate of to can be slower than ( see ) , then lemma [ lemma4 ] can not be true and the analogue of the theorem [ theorem5 ] can not be consider for lad estimator .we can minimise and we obtain another estimator of , called the maximum empirical likelihood estimator ( mele ) . 
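a minimal sketch of this semiparametric reconstitution is given below ; the biweight kernel , the bandwidth , and the inverse - probability form of the imputed value are assumptions in the spirit of this section ( f , x , y , delta and b_ls are reused from the previous sketch ) :

```python
import numpy as np

def K(u):  # biweight kernel: bounded with compact support, cf. (h6)
    return np.where(np.abs(u) <= 1.0, 0.9375 * (1.0 - u**2) ** 2, 0.0)

def pi_hat(x0, x, delta, h):
    """Nadaraya-Watson estimate of the selection probability at x0."""
    w = K((x0 - x) / h)
    return np.sum(w * delta) / np.maximum(np.sum(w), 1e-12)

def reconstitute(x, y, delta, b, h=0.3):
    """Inverse-probability reconstitution of the responses (assumed form)."""
    p = np.clip([pi_hat(xi, x, delta, h) for xi in x], 0.05, 1.0)
    fit = f(x, b)                  # f and b_ls come from the previous sketch
    return delta * y / p + (1.0 - delta / p) * fit

y_rec = reconstitute(x, y, delta.astype(float), b_ls)
```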
using the same arguments as used in the proof of theorem 1 in , we obtain : [ theorem4 ] under assumptions ( h1 ) , ( h2 ) , ( h4 ) , ( ha ) and : * ] is , * is continuous in a neighborhood of the true value , is bounded by some integrable function in this neighborhood then + ( i ) convergence rate of is : .+ ( ii ) , with , -\ee[\pi(\ex ) \ef(\ex;\ebo ) \ef^t(\ex;\ebo ) \ef(\ex;\ebo ) \ef^t(\ex;\ebo)] ] .this last relation implies : =o_{\ep}(n^{-1/2}).\ ] ] on the other hand , using taylor expansion : + -\delta_i \ef(\ex_i;\ebo)[y_i - f(\ex_i;\ebo ) ] ] .+ under assumptions ( h1 ) and ( h4 ) : + . taking into account relation ( [ e14 ] ) , we have : combining these last two results , we obtain : and claim follows .+ ( ii ) we apply . + * proof of lemma [ lemma3 ] * _ ( i ) _ let us consider following decomposition : + , with : + , + , + =var[\frac{\delta_i \varepsilon_i}{\pi(\ex_i)}]+var[f(\ex_i;\ebo)]+2cov(f(\ex_i;\ebo),\frac{\delta_i \varepsilon_i}{\pi(\ex_i)}) ]. since are independent for different , then the random variables are also independent .then , we can apply the central limit theorem : .we make a limited development for until order 2 around and taking into account relation ( [ e15 ] ) , hypothesis and ( h4 ) : + . by lemma 3 of have , then the claim ( i ) is proved .the proof of ( ii ) and ( iii ) is similar . + * proof of lemma [ lemma4 ] * _ ( i ) _ function can be written : \ef(\ex_i;\eb).\ ] ] for the first term of the right - hand side of ( [ e16 ] ) we have : on the other hand , using a limited development , relation ( [ e15 ] ) and assumption ( h4 ) : \ef(\ex_i;\ebo) ] , + ] + ^ 2 | \ex_i } } ] for , then . in a similar way : + \leq \frac{c}{n^2 } \sum^n_{i=1 } \cro{\ee \pth{\frac{\delta_i}{\pi^2(\ex_i ) } \| \ef(\ex_i;\ebo ) \|^2 |\ex_i } \pth{1-\sum^n_{j=1 } w_{nj}(\ex_i ) } } \rightarrow 0 ] and with the same arguments as for : \rightarrow 0 $ ] for , then . with all this ,relation ( [ * 0 ] ) is proved .+ the proof of ( ii ) and ( iii ) is similar . + 3 bai , j. , 1998 , estimation of multiple - regime regressions with least absolute deviation . , * 74 * , pp .103 - 134 .ciuperca , g. , 2010 , .estimating nonlinear regression with and without change - points by the lad - method ., doi : 10.1007/s10463 - 009 - 0256-y .kim , h.k . , choi , s.h ., 1995 , asymptotic properties of non - linear least absolute deviation estimators . , * 24 * , pp. 127 - 139 .liang h. , qin y. , zhang x. , ruppert d. , 2009 , empirical likelihood - based inferences for generalized partially linear models ., * 36*(3 ) , 433 - 443 .oberhofer , w. , 1982 , the consistency of nonlinear regression minimizing the -norm ., * 10 * , no .316 - 319 .owen a. , 1990 , empirical likelihood ratio confidence regions , ( 1 ) , 90 - 120 .qin y. , li l. , lei q. , 2009 , empirical likelihood for linear regression models with missing responses , ( 11 ) , 1391 - 1396 .qin j. , lawless j. , 1994 , empirical likelihood and general estimating equations , ( 1 ) , 300 - 325 .seber g.a.f ., wild c.j . , 2003 , nonlinear regression , wiley series in probability and mathematical statistics , john wiley sons , inc . , hoboken , new jersey .sun z. , wang , q. , dai p. , 2009, model checking for partially linear models with missing responses at random . , * 100*(4 ) , 636 - 651 .sun z. , wang , q. , 2009 , checking the adequacy of a general linear model with responses missing at random . , * 139*(10 ) , 3588 - 3604 .wang , q. , rao j.n.k . 
, 2002 , empirical likelihood - based inference in linear models with missing data ., * 29*(3 ) , 563 - 576 .wang , q. , linton o. , hrdle w. , 2004 , semiparametric regression analysis with missing response at random . , * 99*(466 ) , 334 - 345 .wang , q. , sun z. , 2007 , estimation in partially linear models with missing responses at random ., * 98*(7 ) , 1470 - 1493 .weiss , a.a ., 1991 , estimating nonlinear dynamic models using least absolute error estimation . , * 7 * , 46 - 68 .xue l. , 2009 , empirical likelihood for linear models with missing responses , , 1353 - 1366 .xue l. , 2009 , empirical likelihood confidence intervals for response mean with data missing at random , ( 4 ) , 671 - 685 .yang y. , xue l. , cheng w. , 2009 , empirical likelihood for a partially linear model with covariate data missing at random . , * 139*(12 ) , 4143 - 4153 .zhao y. , chen f. 2008 , empirical likelihood inference for censored median regression model via nonparametric kernel estimation , ( 2 ) , 215 - 231 .
|
a nonlinear model with response variable missing at random is studied . in order to improve the coverage accuracy , the empirical likelihood ratio ( el ) method is considered . the asymptotic distribution of el statistic and also of its approximation is if the parameters are estimated using least squares(ls ) or least absolute deviation(lad ) method on complete data . when the response are reconstituted using a semiparametric method , the empirical log - likelihood associated on imputed data is also asymptotically . the wilk s theorem for el for parameter on response variable is also satisfied . it is shown via monte carlo simulations that the el methods outperform the normal approximation based method in terms of coverage probability up to and including on the reconstituted data . the advantages of the proposed method are exemplified on the real data . + _ keywords : _ random nonlinear model ; response missing at random ; empirical likelihood ; semi - parametric estimation ;
|
the concept of space solar power ( ssp ) is based on the conversion of solar energy into electricity on orbit and the transmission of the collected energy to earth through wireless power transmission . the concept was first proposed in ref . and has continuously attracted interest from researchers and government agencies . several system - level studies established the technological feasibility of ssp ( e.g. , the iaa study , ref . ) and its potential economic viability ( e.g. , ref . ) . multiple subsystem technology verifications and demonstrations have been carried out ( for an overview see refs . and ) , and first prototype integration studies including space environment simulation were performed at the naval research laboratory , usa . space solar power represents a candidate large - scale renewable energy source in the context of rising global energy demand , which is expected to increase by until . compared to energy production from non - renewable energies , e.g. fossil fuels , electricity from ssp causes lower emissions of greenhouse gases , which are connected to man - made climate change . this paper is based in parts on research work reported in an entry to the space solar power international student and young professional design competition of the space generation advisory council / international astronautical federation , which was selected as the winning contribution and which will be published in the international astronautical congress proceedings . while the eventual ssp constellation architecture is entirely different , similarities between ref . and the design presented in this paper will be explicitly pointed out . the goal of this paper is the same as in ref . : to contribute to enabling mid - term ssp deployment . to achieve this , the latest experimental ssp system integration results are combined with aspects of previous ssp system architecture work to formulate a scalable ssp design , with special attention to independence from in - space assembly and transportation infrastructure . the design avoids such auxiliary systems since their development is likely to delay or even prevent realization of utility - scale ssp as a whole , due to the larger amount of resources required to achieve first power . thus the design is named hypermodular self - assembling space solar power ( hsa - ssp ) . the ssp design presented in this paper , similarly to ref . , is also based heavily on hypermodularity to reduce total system cost through mass production and to increase system reliability due to fewer points of failure ( as recently shown in , e.g. , ref . ) . another feature of ssp designs first introduced in ref . is the limitation of ssp satellite size to the capacities of near - term launch vehicles , such as the spacex falcon heavy launcher , to ensure realistic self - assembly . this requirement allows insect - like formation of large structures or constellations by identical , yet independently fully functional sps , similar to but different from the ideas presented in ref . . in addition , the concept of such fragmentation of large structures into fully functional units enables the start of power production already after the delivery of a certain smaller threshold number of units , which allows revenue income from early on in the deployment time frame and shortens the time to amortization of the invested cost . similarly to the analysis performed in ref . , the design of hsa - ssp utilizes photovoltaic cells for electricity generation and microwave power transmission at ghz using solid state power amplifiers for simplicity . along the descriptions in ref .
, the design of hsa - ssp utilizes photovoltaic cells for electricity generation and microwave power transmission at ghz using solid state power amplifiers for simplicity .along the descriptions in ref . , each hsa - sps consists of a main planar platform composed of several hexagonally - shaped sandwich structures where the power generation and power transmission surfaces are located close to each other in a parallel , sandwich - like layering , similar in design to the hexbus sandwich structures proposed in ref . .the main platform also contains all other systems necessary to operate the satellite in space , such as thermal management systems , guidance , navigation and control , command and data systems , and communication systems .the hsa - sps power transmission surface of the sandwich modules is pointed towards earth at all times . in order to achieve continuous illumination of the power generating surface throughout the entire orbit , each hsa - sps platformis associated with a free - flying associated reflector or concentrator structure .the reflector structure is of sufficient size to allow three - sun concentration on the photovoltaic sandwich surface and can consist of individually controlled movable reflector elements , but will otherwise not be specified in more detail in this paper .the reflectors are not structurally connected to the sandwich platform to reduce self - assembly complexity and both launch mass and volume .similarly to the design described in ref . , mid - term technology advancement is expected to improve current sandwich performance parameters reported in ref . by about .this yields an area - specific mass of about kg / m including all power transmission electronic elements , an effective sandwich height , including protective packaging , of about cm , and a sunlight - to - rf sandwich module efficiency of about .in addition , the thermal properties of mid - term hsa - sps are expected to allow three - sun concentration on the photovoltaic surface of the sandwich modules . as described above, the size of a complete hsa - sps system is limited to the payload volume and payload mass of the spacex falcon heavy vehicle .similar to the study in ref . , about are subtracted from the total payload volume of about m for protective packaging against shock and vibration during launch . in the present study , however , sandwich modules are more stringently limited to occupy of the remaining volume or about m , to leave sufficient volume ( about m ) for the reflector array and all other hsa - sps subsystems .this is assumed to improve the realistic value of the hsa - sps design .figure [ fig : payloadintegration ] shows a sketch of the payload integration of a complete hsa - sps in the payload fairing of a falcon heavy vehicle . illustrating payload integration scheme of a complete hsa - sps in a falcon heavy vehicle.,scaledwidth=40.0%] considering the expected mid - term effective sandwich module height of about cm , about hexagonal sandwich modules with base areas of about m stacked along the vertical axis of the falcon heavy payload volume fulfill the volume constraints . assuming that of the total , about m power generating surface area of the sandwich modules are used to generate electricity at -sun concentration with an incident energy density of about w / m , a single falcon heavy can launch a hsa - sps with a nameplate rf power generation capacity of about kw . similarly to the studies in ref . 
similarly to the studies in ref . , the sandwich platform of each hsa - sps will self - assemble on orbit through , e.g. , spring - loaded interconnects between sandwich modules or other low - complexity technology options . as an illustration , table [ tab : midtermcomp ] compares the specifications of sandwich platforms performing at current sandwich integration results with sandwich platforms performing along assumed mid - term technology advances . .comparison between sandwich platforms consisting of sandwiches with currently achieved module parameters and expected mid - term improvements of sandwich parameters for utility - scale mid - term hsa - sps . resulting nameplate rf power levels for complete hsa - sps from single falcon heavy launches are given as well . current sandwich performance parameters are taken from refs . and . [ cols="^,^,^",options="header " , ] by a similar reasoning as in ref . , the estimated development , production and launch cost of a gw rf power hsa - sps constellation would amortize within about years , if the rectenna could be located in new england and power could be sold to utility companies for cents / kwh , i.e. , cheaper than the average retail price for all end - use sectors . again , earlier break - even could be reached if power is sold already before all hsa - sps are delivered to glpo . the main design advantages of hsa - ssp are shared with the design proposed in ref . : independence of costly and yet - to - be developed in - space assembly and orbital transfer infrastructure ; sale of utility - scale power to utility companies within probably less than years of production start ( assuming falcon heavy launches per year ) ; and reduction of development time due to the limited size of complete hsa - sps systems . also , damaged hsa - sps can be removed from the constellation , repaired in a repair orbit and then re - inserted into the constellation , while new hsa - sps can be added to the constellation at any time via falcon heavy launches . critically , the hsa - sps design at the present level of study does not pose orbital dynamic or rf beam forming difficulties that are inherent and , with current or mid - term technology , insurmountable , in contrast to the design shown in ref . . the hsa - sps design is fairly straightforward in that it consists of only two large - scale elements on the architecture level , which are the sandwich platform and the reflector array . the distance between these two elements can be chosen such that the difference in orbital velocity and period can be remedied via electric thrusters with a moderate propellant requirement . placement of the elements in glpo should largely reduce the two necessary station - keeping efforts for pointing the rf transmission surfaces at earth at all times and directing concentrated sunlight on the photovoltaic surfaces from the reflector arrays . additional orbital dynamic studies should be performed to verify the validity and low technological complexity of the design , and that system lifetimes of the order of years are realistic . the thermal performance of hsa - sps can be further enhanced , as mentioned above , by introducing gaps between adjacent single hsa - sps and utilizing the gaps for radiator material to decrease the operating temperature of the sandwich modules . in addition , similar to the design shown in ref .
, hsa - sps are sufficiently small such that any sandwich module is fairly close to potential radiators on the outsides of the sps .this would enable fairly efficient transport of excess heat from any sandwich module via heat pipes to the outside radiators , which could allow higher concentration factors on the pv modules .another possible opportunity for the hsa - sps design is that electrical power could be shared among neighboring sandwich modules or among modules in the entire hsa - sps .in order to optimize economics of sps launch , it is generally accepted that area - specific mass should be decreased and efficiency increased . however , in the context of pre - fabricated and self - deploying sps designs , sandwich volume , or effective module height for a given base surface area , is identified as a critical parameter . to mosteconomically use e.g. the spacex falcon heavy vehicle capacities , an average effective payload density of about tons / m has to be achieved , if ssp remains the only large - scale buyer on the launch market . for an assumed mid - term / m area - specific mass of sandwich modules , a corresponding effective module height of about cm is necessary to attain the optimal module mass density , and thus to maximize the number of modules transported per launch . considering that the mass of about sandwich modules with parameters given in table [ tab : midtermcomp ] could be launched by a single falcon heavy vehicle , but the volume of only modules , module height can be regarded as a more limiting aspect in sandwich payload integration than area - specific mass at this time .if in - space assembly infrastructure is not developed in the next decades , dedicated r effort should be focused in the near - term on reducing the effective height of sps sandwich modules , e.g. to less than cm .this could reduce sps launch costs by a factor or more .a possible solution to this problem could be attempting to integrate pmad and rf power electronics into a heat - distributing substrate attached to high - efficiency thin - film pv , combined with thin rf antennas .planarity could be supported by , e.g. , an aluminum mesh .the thermal properties of such a system remain to be investigated .an update of the ssp design presented in ref . results in the formulation of a new ssp concept , hsa - ssp .the design is scalable to utility - scale power and does not rely on in - space assembly or transportation infrastructure .extrapolations of current technology suggests a realistic design for mid - term sps deployment . a cost estimation analysis for a gwrf power hsa - sps constellation yields a total cost of about b usd , utilizing a learning curve approach similar to the one described in ref .the advantages of the presented design will be solidified and extended in further research .i would like to thank mr .ian mcnally from the university of glasgow , scotland , uk and mr .paul jaffe from the naval research laboratory , usa for helpful discussions and suggestions .g. r. schmidt , m. j. patterson , and s. w. benson , proceedings to international astronautical congress , iac--c.. .see http://en.wikipedia.org/wiki/thinned-array_curse , accessed .
|
this paper presents a design for scaleable space solar power systems based on free - flying reflectors and module self - assembly . lower system cost of utility - scale space solar power is achieved by design independence of yet - to - be - built in - space assembly or transportation infrastructure . using current and expected near - term technology , this study describe a design for mid - term utility - scale power plants in geosynchronous orbits . high - level economic considerations in the context of current and expected future launch costs are given as well .
|
the study of obliquely incident plane waves upon planar interfaces is of fundamental interest to electromagnetic ( em ) wave propagation . it underlies snell s law of refraction and leads to important concepts such as total reflection and brewster s angle. one can easily relate to the concept of an obliquely incident plane wave by the daily experience of looking into a mirror . in practice , oblique incidence is widely applied in em related applications such as fiber optics, underground object detection, and rf - human body interaction. in the growing field of nanoplasmonics, oblique incidence finds applications particularly in exciting surface plasmon polaritons ( spps ) , exemplified by the common experimental setup in which subwavelength defects or attenuated total reflection are utilized to couple the obliquely incident plane wave into propagating spps. by taking advantage of the incident angle degree of freedom , several experiments have demonstrated spp near - field manipulation, which has been proposed as a direct approach to measuring spp generation efficiency. most recently , it has been shown that spps can be directly generated on a planar metal surface by interfering incoming light beams with different incident angles in a four - wave mixing scheme. the effects of light incident at an oblique angle on sub - wavelength defects in metallic layered media have been studied by frequency - domain calculations based on either coupled wave analysis or a semi - analytical model . these references have explored the obliquely incident light transmission through a single defect, and the spp generation efficiency. our work is largely motivated by recent experimental studies of spp dynamics excited or controlled by a femto - second ( fs ) laser pulse obliquely incident on a spp propagating interface. to describe such experiments , a time - domain method is desirable because of the ultrafast nature of the exciting or controlling laser pulse . the major challenge in developing such a method is to accurately treat the oblique incidence as well as the material dispersiveness . this poses a special challenge for mesh - based propagation methods ( such as the finite - difference time domain method ) because , even in the absence of the inhomogeneous media , the wave front not aligned with the cartesian mesh is required to be uniform and to have arbitrary incident angle and time profile . in this paper , we develop a numerical method to rigorously treat obliquely incident plane wave scattering at embedded scatterers in layered dielectric and dispersive media . to the best of our knowledge , such a method has not been published as yet. targeted mainly at time - domain studies of em wave phenomena that involve spp excitation and propagation in metallic films , the developed method is formulated within the framework of the finite - difference time - domain ( fdtd ) method . this method has enjoyed a wide range of applications in the field of nanoplasmonics, and its time - domain nature makes it particularly well suited to ultrafast phenomena . our treatment of the oblique plane wave is an extension of the total field / scattered field ( tf / sf ) technique to describe media characterized by the combination of debye- , drude- and lorentz - type poles. the tf / sf technique has been applied successfully in the fdtd study of free - standing scatterers , layered dielectrics and dispersive media describable by a single debye pole.
it is based on the linearity of maxwell s equations and decomposes the total field into an incident and a scattered field components, by setting up an artificial boundary between the tf and sf regions in the fdtd simulations , a plane wave of arbitrary time profile and incident angle can be achieved by matching the known incident field at the tf and sf boundary . in presenting our method ,we will focus on the derivation of an equivalent one - dimensional ( 1d ) wave equations for the tf / sf boundary condition , suitable for various types of material dispersiveness , and explain in detail the numerical considerations involved .this will be followed by extensive numerical tests of the convergence properties of the method . for clarity, several important concepts from the previous literature are reemphasized .this paper is organized as follows : in section [ sec : sec2 ] , we derive the equivalent 1d wave equations , show the numerical flow chart for matching the tf / sf boundary condition , and discuss several practical simulation details , including stability , interface treatment , and leakage .section [ sec : sec3 ] tests our approach by comparison of numerical with analytical results for model problems . finally , concluding remarks are provided in section [ sec : sec4 ] .in the following , we provide the equations and numerical method for solving the transverse magnetic ( tm ) mode in two dimensions ( magnetic field perpendicular to the two - dimensional plane ) . special emphasis is placed on the tm mode because of its relevance to spp excitation. the numerical approach for solving the transverse electric ( te ) mode equations is similar to that for the tm mode , and the corresponding equations are given in appendix [ appx1 ] .the media considered are vacuum , linear dielectric media ( characterized by a dielectric constant ) , and linear dispersive media ( characterized by a finite sum of debye , lorentz , and drude types of poles ) .our starting point is maxwell s equations in the frequency domain for the tm mode , where the coordinate system is defined in fig .[ fig : f1 ] , is the free space permitivity , is the free space permeability , and is the dielectric function for a dispersive media , which reduces to a constant for vacuum and dielectric media . in the case studies below , we assume a dispersive medium with a single ( non - zero ) drude pole and provide two separate sets of equations for solving eqs .( [ eqn : eqn1]-[eqn : eqn3 ] ) .the first set of equations is based on the auxiliary differential equation ( ade ) approach with polarization currents to account for the dispersiveness . in this case, we further assume that media other than vacuum are not extended into the absorbing boundary , which allows us to use berenger s pml absorbing boundary condition. the second set of equations is formulated within the general context of the uniaxial perfectly matched layers ( upml ) absorbing boundary conditions, and involves a different approach to treat the dispersiveness . in this case, we can effectively absorb the outgoing waves exiting the simulation domain in the dielectric and dispersive media .separate tests have been done to ensure that the two approaches provide the same solution. in the following , we assume that the 2d electric and magnetic fields propagate on the yee mesh with the dependence . 
is the size of a unit cell , and is unit time step .details on the fdtd equations in both ade and upml approaches are given in appendix [ appx2 ] .if we now consider tm mode wave propagation with obliquely incident plane wave on layered media with translational invariance , the 2d equations of motion can be reduced to an equivalent 1d wave propagation problem along the direction perpendicular to the interfaces between the media. we proceed to derive the equivalent 1d wave equation for the tm mode , which will serve as a means of introducing incident fields along the tf / sf boundary . the corresponding derivation for the te modeis provided in appendix [ appx1 ] .substituting eq .( [ eqn : eqn3 ] ) into eq .( [ eqn : eqn1 ] ) yields , because of the translational invariance and phase matching across the interfaces between different layers , , with being a wavevector that is identical for waves in different layers. if we further assume that an oblique plane wave is incident from a dielectric medium with relative permitivity , then , which can be substituted into eq .( [ eqn : eqn4 ] ) to give , .\label{eqn : eqn5}\ ] ] equations ( [ eqn : eqn2 ] ) and ( [ eqn : eqn5 ] ) constitute a system of equations for 1d tm wave propagation across the interfaces between the media . to translate those equations into fdtd equations , jiang et al .introduced a convenient method to overcome the difficulty of time - domain convolution between the term in the square bracket and in eq .( [ eqn : eqn5 ] ) . in this method, eq .( [ eqn : eqn5 ] ) is first split into a pair of equations as , equations ( [ eqn : eqn2 ] ) , ( [ eqn : eqn6 ] ) and ( [ eqn : eqn7 ] ) then lead to the following set of fdtd equations , in obtaining eq .( [ eqn : eqn11 ] ) , we have multiplied both sides of eq .( [ eqn : eqn7 ] ) by and fourier transformed the result into the time domain .we have also made the assumption that a drude model is used , .the updating coefficients in eq .( [ eqn : eqn11 ] ) are \label{eqn : eqn12}\\ \end{cases}\ ] ] in vacuum ( ) and dielectric media ( constant ) , and }{(2 + \gamma_d\delta t)\left[\epsilon(\infty)-\epsilon_{1r}\sin^2(\theta)\right]+\omega_d^2\delta t^2},\\ b_{y4}=-\frac{(2-\gamma_d\delta t)\left[\epsilon(\infty)-\epsilon_{1r}\sin^2(\theta)\right ] + \omega_d^2\delta t^2 } { ( 2+\gamma_d\delta t)\left[\epsilon(\infty)-\epsilon_{1r}\sin^2(\theta)\right ] + \omega_d^2\delta t^2},\\ b_{y5}=\frac{(2+\gamma_d\delta t)\epsilon(\infty)+\omega_d^2\delta t^2 } { ( 2+\gamma_d\delta t)\left[\epsilon(\infty)-\epsilon_{1r}\sin^2(\theta)\right ] + \omega_d^2\delta t^2},\\b_{y6}=\frac{-4\epsilon(\infty ) } { ( 2+\gamma_d\delta t)\left[\epsilon(\infty)-\epsilon_{1r}\sin^2(\theta)\right ] + \omega_d^2\delta t^2},\\ b_{y7}=\frac{(2-\gamma_d\delta t)\epsilon(\infty)+\omega_d^2\delta t^2 } { ( 2+\gamma_d\delta t)\left[\epsilon(\infty)-\epsilon_{1r}\sin^2(\theta)\right ] + \omega_d^2\delta t^2}\label{eqn : eqn13}\\ \end{cases}\ ] ] in drude media. here , we note the similarity between the updates of the pair and the pair in the upml formulation , which results from the fact that both pairs involves updating an auxiliary variable before the treatment of the material dispersiveness . in the case that contains a linear sum of different types of poles ( e.g. , to accurately describe metals near inter - band transition energies ) , direct fourier transform may not be as efficient because of higher - order derivatives with respect to time . 
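before addressing that multi - pole situation , it is useful to make the single - pole update concrete . the following python sketch implements a plain 1d fdtd loop with one drude pole handled through an auxiliary polarization current ( ade ) ; for brevity it treats normal incidence , so the -dependent factors of eqs.([eqn : eqn12],[eqn : eqn13 ] ) are absent , and the grid , slab location and drude parameters are illustrative assumptions rather than the silver fit used later :

```python
import numpy as np

# Schematic 1-D FDTD loop with a single Drude pole via an auxiliary
# polarization current J (ADE); normal incidence, illustrative numbers.
c0, eps0, mu0 = 2.998e8, 8.854e-12, 4e-7 * np.pi
ny, dy = 2000, 5e-9
dt = 0.5 * dy / c0                          # reduced Courant number
wd, gd, eps_inf = 1.39e16, 3.1e13, 3.7      # assumed Drude parameters

ez, hx, jz = np.zeros(ny), np.zeros(ny - 1), np.zeros(ny)
metal = slice(1200, 1400)                   # assumed slab location
eps = np.full(ny, eps0)
eps[metal] = eps0 * eps_inf

ca = (2 - gd * dt) / (2 + gd * dt)          # ADE coefficients for J
cb = eps0 * wd**2 * dt / (1 + gd * dt / 2)

for n in range(4000):
    hx += dt / (mu0 * dy) * (ez[1:] - ez[:-1])
    curl = np.zeros(ny)
    curl[1:-1] = (hx[1:] - hx[:-1]) / dy
    ez += dt / eps * (curl - jz)            # jz is nonzero only in the metal
    jz[metal] = ca * jz[metal] + cb * ez[metal]
    ez[500] += np.sin(2 * np.pi * c0 / 800e-9 * n * dt)   # soft source
```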
for a systematic treatment of this situation ,interested readers are referred to appendix [ appx3 ] .the updating coefficients in eqs .( [ eqn : eqn8 ] ) to ( [ eqn : eqn10 ] ) are identical to those in eqs .( [ eqn : a ] ) , ( [ eqn : b ] ) and ( [ eqn : c ] ) , which are given in eqs .( [ eqn : d]-[eqn : h ] ) .we note that the updating coefficients corresponding to berenger s pml formulation can be used here provided that the two end media in the layers are vacuum . in the case of non - vacuum semi - infinite media at the two ends ,1d upml can be used to effectively absorb the outgoing waves , for example , equations similar to eqs .( [ eqn : l]-[eqn : k ] , [ eqn : i0 ] , [ eqn : i ] ) can be used by setting and in eqs .( [ eqn : j1 ] ) and ( [ eqn : k2 ] ) .the main panel of fig .[ fig : f1 ] illustrates the geometry of the fdtd simulation region .the layered media are denoted by , , etc . and are stacked along the direction .the thick , dashed ( thin , dotted ) lines denote the tf / sf boundaries along which the incident -field ( -field ) is calculated .incident field alignments on the boundaries are shown more explicitly in the zoom - in panels to the left and below the main panel .in this work , we assume that the oblique incidence field is introduced from the lower left corner ( crossing point between lines and in the main panel of fig . [fig : f1 ] ) with incident angle to the normal of the media interfaces ( direction ) .we further assume that the two end media in the layers are vacuum . consequently , so that field propagation along the horizontal boundaries through can be readily calculated by a delay of the free - space propagation time .in addition , the 1d field propagation along the vertical lines can be terminated by berenger s pml formulation .the perfectly matched layers absorbing boundaries are not shown in fig .[ fig : f1 ] . they will be further illustrated and explained when we consider specific examples in section [ sec : sec3 ] .the lower left panel in fig .[ fig : f1 ] shows the field alignment along line for the 1d wave propagation .the same setup applies to lines , and .importantly , a 1d total field / scattered field approach is used here ( the boundary points are highlighted in the shaded rectangle ) because we must allow the wave from the multiple interface reflection to exit the 1d simulation and be absorbed at the bottom on the 1d simulation line. our simulation follows the flow chart shown in fig .[ fig : f2 ] . the procedures belonging to 1d and 2d field updatesare highlighted in the shaded rounded rectangles . in each iteration ,the code updates the 1d -field , 2d -field , 1d -field , and 2d -field in a sequence .the order of 1d field storage and its matching to 2d simulation are important to ensure correct implementation of the 2d tf / sf scheme . before updating the 1d field, the code needs to store at each time instant the 1d field values at the crossing points between line and lines , , , and .the field matching at the tf / sf boundary is performed differently in accordance with the different updating schemes introduced in section [ sec2:subsec1 ] . in the ade approach , the tf / sf boundary matching equations on lines , , read , these updates are performed immediately after eqs .( [ eqn : a ] ) , ( [ eqn : c ] ) , and ( [ eqn : b1 ] ) . for the -field update on lines and , because depends on the updated value of in eq . 
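the 1d tf / sf injection on line a can be sketched as follows ; the source envelope , the cos ( theta ) factor in the paired magnetic field , and the signs of the two corrections are written for one particular field - orientation convention and are assumptions to be adapted , not the exact expressions of eqs.([eqn : eqn20],[eqn : eqn21 ] ) :

```python
import numpy as np

# Sketch of the 1-D total-field/scattered-field corrections at the boundary
# between SF and TF regions; constants and staggering are illustrative.
c0, eps0 = 2.998e8, 8.854e-12
mu0 = 4e-7 * np.pi
eta0 = np.sqrt(mu0 / eps0)                 # free-space impedance
dy = 5e-9
dt = 0.5 * dy / c0
omega = 2 * np.pi * c0 / 800e-9
theta = np.deg2rad(30.0)                   # incidence angle in vacuum
j_tf = 600                                 # first node inside the TF region

def g_ramp(t, t0=20e-15, w=8e-15):
    """Gaussian ramp to CW, cf. the hard-source time profile above."""
    return np.exp(-((t - t0) / w) ** 2) if t < t0 else 1.0

def inject(n, ez, hx):
    """Apply the TF/SF corrections at time step n (signs convention-bound)."""
    t = n * dt
    e_inc = g_ramp(t) * np.sin(omega * t)
    th = t + dt / 2                        # H staggered by half a time step
    h_inc = -np.cos(theta) / eta0 * g_ramp(th) * np.sin(
        omega * th - omega / c0 * np.cos(theta) * dy / 2)
    ez[j_tf] -= dt / (eps0 * dy) * h_inc   # E-node correction
    hx[j_tf - 1] -= dt / (mu0 * dy) * e_inc  # H-node correction
```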
([ eqn : b3 ] ) , the boundary matching is performed in between eqs .( [ eqn : b2 ] ) and ( [ eqn : b3 ] ) , for example , on line , in the upml formulation , the tf / sf boundary matching is carried out immediately after the and updates ( before updating and ) in eqs .( [ eqn : l]-[eqn : i ] ) , for example , the updates on lines and read because the above updates are performed between the updates of and or and , they are indicated in the flow chart ( fig .[ fig : f2 ] ) by the upward arrows on the right .we note that if the same type of pml absorbing boundary condition is used to terminate both the 1d and the 2d field propagation , one can allow them to have the same updating coefficients in the pml region and therefore remove the procedures of saving and matching the field components on lines and [ and . this particular setup is useful in the description of a very thick bottom layer ( semi - infinite in the positive direction ) . in the case of normal incidence ,the code simplifies in two ways .first , in eq .( [ eqn : eqn7 ] ) , , and therefore eq . ( [ eqn : eqn11 ] ) is removed from the 1d -field update procedure .second , it is not necessary to store and interpolate the field values , , , and , because field excitation is synchronized along lines , , and , respectively .the incident field values on lines , , and are calculated from eqs .( [ eqn : eqn8]-[eqn : eqn11 ] ) using a 1d tf / sf scheme that allows fields reflected from the interfaces to exit the 1d simulation domain . based on the geometry shown in the lower left panel in fig .[ fig : f1 ] , we assume that the incoming -field with time - dependence excites the 1d field at point . paired with this excitation is an -field of the form , exciting the 1d field at point .for example , on line in fig .[ fig : f1 ] , the 1d tf / sf boundary matching equations read , the values of and are calculated from the known expressions of and using time - domain interpolation when necessary .these values are stored at each instant to generate the excitation fields for lines , , and by introducing a time delay . the field values on the horizontal lines , , and are obtained in a similar fashion .one can also store the field values at each point on line , and save the computation along lines , and by introducing a proper time delay .this scheme reduces the computation time for the cost of larger memory requirement .finally , the 2d tf / sf boundary values [ along lines ( ) are readily calculated from the -field values on lines and ( and ) using eqs .( [ eqn : b2 ] ) and ( [ eqn : b3 ] ) .in addition , we note that the excitation and pml absorbing boundary conditions are enforced on in the 1d field updates .several practical issues should be considered .first , in vacuum , the projection of the phase velocity of the oblique incident field on the -axis is .as the incident angle increases , the phase velocity can be very large and cause numerical instability if a fixed courant criterium is enforced ( e.g. , ) . based on this observation ,we vary the courant number to ensure stability .when the incident angle is small , a small courant number is used to ensure resolution of the time domain interpolation along the horizontal boundaries . in our simulation ,the same courant number is used for both 1d and 2d wave propagations , while an interpolation scheme to match different courant numbers in 1d and 2d wave propagations is explained in ref . 
.second , as the dielectric function is discontinuous across the interface between layers of different media , we have used an average dielectric function for updating the fields at the interface. for example , in the left panel of fig . [fig : f1 ] , we use the dielectric function for the -field updates at the interface . in section[ sec : sec3 ] , we will show that this scheme leads to faster convergence and/or higher accuracy as compared to the standard step - like change of .finally , we use a gaussian ramping in the hard source time - response in eqs .( [ eqn : eqn20 ] ) and ( [ eqn : eqn21 ] ) to slowly ramp the field to continuous wave so as to avoid high - frequency component leakage out of the tf / sf domain . specifically , , fs , fs , for the ramping phase .to test the accuracy of the tf / sf scheme , we compare our simulation results to analytical results by considering the oblique tm wave incident upon a slab sandwiched between two vacuum media .the analytical results for the reflection and transmission coefficients are given by for tm wave , here , and denote , respectively , the reflection and transmission coefficients at the interface between media and , and denotes the refractive index of media .we assume that medium is a slab of thickness .the waves at the input side of the slab , where the incident and reflected waves propagate , and at the output side , where the transmitted wave propagates , can then be expressed as , +r\exp[i(k_x x - k_y y)],\label{eqn : eqn28}\\ \psi^{\rm output}&=&t\exp[i(k_x x+k_y y)].\label{eqn : eqn29}\end{aligned}\ ] ] the above expressions indicate that the maximum field amplitude on the input and output sides are and , respectively .these quantities can be obtained along a -direction detection line in the tf / sf scheme for layered media without placing any scatterer inside the tf region . using this scheme, we also test the leakage , defined as the ratio of the maximum field magnitude in the scattered field region to the maximum field magnitude in the total field region : , where refers to , or , or . in the ideal case , , whereas in practice , ( or db ) is desirable. specific numerical examples of the tests are provided in section [ sec : sec3 ] , where `` leakage '' refers to the largest leakage among , or , or .in addition , we have tested the accuracy of the wave propagation in the layered media by inspecting the and projections of the wavelength [ where , for instance , the -projection is and . in the case of a dispersive slab , we have also tested the skin depth ( the distance where the field decays to of its value at the surface , ca . 30 nm for the drude model and parameters in our calculation ) , by considering a slab with thickness larger than nm .these tests all show an error within compared to analytical results .to illustrate the generality of our formulation , we first consider the simple case of plane wave propagation in vacuum , illustrated in fig .[ fig : fig1 ] . in panels( a - c ) , the plane wave ( wavelength nm ) is injected from the lower left corner into the tf region ( bounded by the thick , dashed lines ) with incident angle . in panel( d ) , the plane wave propagates in the positive direction .it is shown that as the field penetrates into the berenger pml located at the top of the simulation domain , it is efficiently absorbed .negligible leakage is introduced at the pml boundary as the 1d field updating equations acquire the same coefficients as the 2d equations [ see section [ sec : sec2 ] ] . 
the dashed oval in panel ( a ) indicates considerable leakage ( ) outside the tf region because in this case the incident continuous wave ( cw ) field is turned on instantaneously .consequently , the high frequency components in the leading wave front are not well matched at the tf / sf boundary , resulting in the leakage .as shown by panels ( b ) and ( c ) , the leakage can be reduced by one order of magnitude by slow ( gaussian ) ramping of the incident field to steady - state cw oscillations . in light of this , hereafter we use gaussian ramping prior to cw in the excitation hard source and .the maximum leakage in the calculations of panels ( b d ) is and the relative error in the vacuum wave impedance ( ) is . as a second example, we study a plane wave obliquely incident on a dielectric slab. . in fig .[ fig : fig2 ] , we plot snapshots of the magnetic field of a plane wave ( wavelength nm ) incident at an angle on a -nm thick dielectric slab ( dielectric constant ) . in both panels ,the solid rectangle indicates the location of the slab , while the thick , dashed rectangle shows the tf / sf boundary .the plane wave is injected from the lower left corner and first impinges on the lower vacuum / dielectric interface . in panel( a ) , we observe the interference patterns of the reflected wave with the incident wave below the lower vacuum / dielectric interface while the refracted wave front propagates in the slab .the faint wave front in the dielectric slab is due to the slow gaussian ramping of the incident field . after a steady stateis established [ fig .[ fig : fig2](b ) ] , the magnetic field pattern clearly reveals the interference between the reflected and incident waves , the interference within the dielectric slab , and the final transmission through the slab . in fig .[ fig : fig2 ] ( b ) , it is observed that the final transmitted wave maintains the same propagation direction as the incident wave ( to direction ) because the media below and above the slab are both vacuum .we proceed to examine the convergence of the magnitude of the reflection ( ) and transmission ( ) coefficients to the analytical results given by eqs .( [ eqn : eqn22 ] ) and ( [ eqn : eqn23 ] ) . in fig .[ fig : fig3 ] , we plot the relative errors in ( a ) and ( b ) with respect to analytical results as a function of the mesh size . the red , solid ( blue , dashed ) curve in fig .[ fig : fig3 ] shows the convergence result without ( with ) the interface averaging of the dielectric constant . from the comparison ,it is clear that calculations with interface correction lead to uniformly smaller error than that without the interface correction .the slope of each line in the log - log plot obtained by the least - square fit indicates that second order accuracy of yee s algorithm is maintained with the interface correction , while the accuracy degrades to first order without the interface correction .similar effects have been reported in previous studies on the accuracy of fdtd results with dielectric interfaces, while here we observe such effects within the tf / sf formulation in the context of layered media .we note that the interface correction scheme does not entail additional computational and memory requirements and is thus always recommended . in fig .[ fig : fig3 ] ( c ) , we show that the maximum leakage with interface averaging is uniformly smaller than that without the interface averaging for different mesh sizes . 
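the order - of - accuracy estimate quoted for fig . [ fig : fig3 ] can be reproduced by a least - squares fit of log ( error ) against log ( mesh size ) ; the error values below are placeholders , not the paper 's data .

```python
import numpy as np

h   = np.array([8e-9, 4e-9, 2e-9, 1e-9])        # mesh sizes (placeholders)
err = np.array([4e-2, 1.1e-2, 2.9e-3, 7.5e-4])  # relative error in |r| (placeholders)
order, _ = np.polyfit(np.log(h), np.log(err), 1)
print(f"observed convergence order ~ {order:.2f}")  # ~2 with interface averaging
```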
throughout, the maximum leakage is below , substantiating our confidence in the tf / sf scheme. to further test the accuracy of the dielectric slab reflection and transmission upon oblique incidence of a plane wave , we compare the analytical results with fdtd calculated results at different incident wavelengths in table [ tab : tab1 ] and at different incident angles in table [ tab : tab2 ] .as shown , the relative error ( given in parentheses ) is uniformly below , except for incident angle , where is below .it is interesting to note that the relative error diminishes with increasing wavelength , while a non - monotonic trend is seen in the errors of both reflection and transmission coefficients for an increasing incident angle .next we apply the tf / sf method to study the reflection and refraction of a plane wave obliquely incident upon a dispersive metal slab .snapshots of the magnetic field are shown in fig .[ fig : fig4 ] as the plane wave passes through the metal slab .specifically , we consider an incident plane wave with wavelength nm and injected from the lower left corner of the tf region upon an nm thick dispersive metal slab described by the drude model , with , rad / s , and rad / s .this set of parameters is optimized to fit the dielectric data reported in ref . for bulk silver in the spectral range from to nm . figures [ fig : fig4 ] ( a ) and ( b ) illustrate the magnetic field distribution before and after reaching a steady state , respectively . in fig .[ fig : fig4 ] ( b ) , the large curvatures at the interference minima between the incident and reflected fields below the lower interface indicate a large reflection coefficient ( ) . inside the metal , because of the complex dielectric function of the slab , the wave front is no longer a plane wave , as is clearly discernable in fig . [fig : fig4 ] .however , the final transmitted wave exiting from the upper interface recovers a plane wave front and the same propagation constant as the incident wave , because the media below and above the dispersive slab are both vacuum . from figs .[ fig : fig2](b ) and [ fig : fig4](b ) , it is seen that the periodicity in the direction of the fields below , inside , and above the slab is the same . by further observing the field propagation after reaching the steady state in both cases (not shown ) , it is clear that the phase of the total field in the direction is matched .this observation confirms the phase matching condition parallel to the interface ( same across the interfaces ) , which is critical to the derivation of the 1d field propagation , eq .( [ eqn : eqn5 ] ) . to examine the convergence of our results in the case of the metal slab, we use the same incident field condition as that in fig . [fig : fig4 ] and plot the relative error of the reflection and transmission coefficients as a function of the mesh size in figs .[ fig : fig5 ] ( a ) and ( b ) , respectively .it is seen that the results with interface averaging ( blue , dashed curves ) of the dielectric function yield uniformly lower error than the results without the interface averaging ( red , solid curves ) .a first - order power law is seen in the error of the transmission coefficient as a function of the mesh size without interface averaging , all other errors are near and below , illustrating the convergence of the fdtd results .fdtd simulations on similar dispersive systems have been reported by mohammodi et al . 
, who suggested that the dispersive contour - path method is able to achieve smaller error even for a relatively large step - size ( ). fig .[ fig : fig5](c ) shows that the leakage decreases with a decreasing mesh size , albeit in this case the leakage with the interface averaging of the dielectric function is slightly larger than that without the averaging [ cf .[ fig : fig3](c ) ] .in tables [ tab : tab3 ] and [ tab : tab4 ] , we compare between the analytical and the fdtd calculated reflection and transmission coefficient magnitudes at various incident wavelengths and incident angles for the metal slab studied in fig .[ fig : fig4 ] .the fdtd results are obtained after steady state is reached under cw incident plane wave illumination . in the frequency domain, this corresponds to a fixed incident wavelength , and the drude model provides a constant complex value of dielectric function , which can be used in eqs .( [ eqn : eqn22 ] ) and ( [ eqn : eqn23 ] ) to obtain the reflection and transmission coefficients . in table[ tab : tab3 ] , we list the free space wavelength in the to nm range , to which the fitted drude model is applicable .the small relative errors ( ) shown in the parentheses in tables [ tab : tab3 ] and [ tab : tab4 ] illustrates the reliability of our calculations using the tf / sf formulation in the case of layered dispersive media .the maximum leakage found in obtaining the data in tables [ tab : tab3 ] and [ tab : tab4 ] is , which occurs at .panels in the left column of fig .[ fig : fig6 ] illustrate snapshots of the magnetic field as the wave propagates through two - layer media consisting of a lower layer of -nm thick dispersive material and an upper layer of -nm thick dielectric material under oblique plane wave incidence .the material parameters are given in the caption of fig .[ fig : fig6 ] . in these panels ,the solid horizontal lines define the boundaries between different layers , which are extended into the upml in the direction .the dashed box denotes the tf / sf boundary .the incident plane wave with wavelength nm and is injected from the lower left corner of the tf region .the magnetic field snapshots in the first column of fig . [ fig : fig6 ] show that snell s law is obeyed when the field passes through the two layers of materials .in particular , the propagation direction in the high - index dielectric material exhibits a smaller angle to the normal than the incident wave , whereas the final transmitted wave propagates along the direction of the incident wave .more importantly , we observe that the phase of the waves across the different layers is matched in the direction after a steady state is established ( bottom panel in the left column ) , which is again consistent with eq .( [ eqn : eqn5 ] ) . in this case , the magnitude of the reflection and transmission coefficients calculated by fdtd is and , respectively .the bottom panel in the left column also shows non - negligible leakage penetrating through the tf / sf box and propagating into the lower right corner of the simulation domain , nevertheless , the maximum value of the leakage in is , which is insignificant in practice .panels in the second column of fig .[ fig : fig6 ] are obtained under the same conditions as those in the first column except that a slit of nm width ( in the direction ) and nm depth ( in the direction ) is placed in the middle of the simulation domain . 
in the tf region ,the slit causes strong scattering of the injected plane wave , which results in the observed interference patterns .the slit introduces entirely new physics : outside the tf / sf boundary , the purely scattered wave distribution is reminiscent of a dipole radiation pattern .closer inspection reveals that the field distribution is asymmetric with respect to the slit center ( nm ) .the scattered field is strongest near the lower surface of the dispersive slab and to the right of the tf / sf box and weakest above the upper surface of the dielectric slab and to the left of the tf / sf box .the asymmetric angular distribution is a clear signature of the oblique incidence of the exciting plane wave . by enlarging the sf region size , we find that the purely scattered wave along the lower surface of the metal thin film consists mainly of surface plasmon polariton ( spp ) waves propagating away from the slit .these are identified by their wavelength - nm in the direction compared with the analytical result for the wavelength of spp at the interface between vacuum and metal , which is given by nm . for the simulations in fig . [fig : fig6 ] , we have updated the field at the horizontal and vertical interfaces using the averaging scheme discussed above , and have tested the convergence of the fields in the tf and sf regions with respect to mesh size ( ) , tf box size , and physical size of the simulation region .we note that upml termination of the simulation domain is important because the scattered field due to the slit is significant .our tests show that the maximum scattered field in the sf region is only one order of magnitude less than the maximum field in the tf region .additionally , the upml can effectively absorb the outgoing wave in the dispersive and dielectric layers .furthermore , the boxed tf / sf boundary has advantage over the -shaped boundary considered previously , particularly when one is interested in the full angular distribution of the scattered field in the far - field zone .using maxwell s equations for the transverse magnetic wave , along with translational invariance and phase matching principles , we derived an equivalent one - dimensional wave propagation equation along the direction perpendicular to the interfaces between layered media .we then derived the corresponding finite - difference time - domain equations for layered dielectric media and dispersive media with a drude pole pair . to utilize these equations for a plane wave with oblique incidence, we discussed the simulation setup and procedure in the framework of the total field / scattered field formulation with a special emphasis on techniques to match the fields at the total field / scattered field boundary .we have performed tests on vacuum propagation and on the reflection and refraction at a dielectric and a dispersive slab . converged simulation results for various incident angles and wavelengths reveal that the errors in the reflection and refraction coefficients are uniformly below compared to analytic results .the numerical example of scattering at a nano - scale ( sub - wavelength ) slit in a dispersive medium invites interesting applications of our formulation to time - dependent studies of electromagnetic wave scattering at surface or embedded scatters in dispersive media , for example , the coupling of incident oblique plane wave into surface plasmon polaritons . 
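the spp identification above follows from the standard flat - interface dispersion relation ; a minimal sketch is given below . the drude parameters are placeholders standing in for the silver fit quoted earlier ( the fitted values are not reproduced here ) , and the sign of the damping term assumes an exp(-iwt) time convention .

```python
import numpy as np

def eps_drude(omega, eps_inf=3.7, wd=1.4e16, gd=3.2e13):
    """Drude permittivity eps_inf - wd^2/(w^2 + i*gd*w); placeholder parameters."""
    return eps_inf - wd ** 2 / (omega ** 2 + 1j * gd * omega)

lam0 = 650e-9                                   # illustrative wavelength
omega = 2.0 * np.pi * 3e8 / lam0
eps_m, eps_d = eps_drude(omega), 1.0            # metal / vacuum
n_spp = np.sqrt(eps_m * eps_d / (eps_m + eps_d))
print(f"lambda_spp ~ {lam0 / n_spp.real * 1e9:.0f} nm")
# feeding n2 = sqrt(eps_drude(omega)) into slab_rt() above generates the
# frequency-domain reference values compared in tables 3 and 4
```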
for this purpose of analyzing surface waves launched by an obliquely incident plane wave , the developed method offers the flexibility of choosing the total field region , inside which the near - field exhibits the interference pattern between incident and scattered fields , while outside it the scattered far - field can be detected at all angles . this work is supported by the w. m. keck foundation ( grant number 0008269 ) and by the national science foundation ( grant number esi-0426328 ) . the authors thank hrvoje petek and atsushi kubo for the communications of corresponding experimental data , and maxim sukharev , gilbert chang , jeffrey mcmahon , stephen gray and allen taflove for insightful discussions . they are particularly thankful to ilker apolu for discussions on the total field / scattered field formalism . maxwell 's equations in 2d in the frequency domain for the te mode read as eqs . ( [ eqn : eqn30]-[eqn : eqn32 ] ) . substituting eq . ( [ eqn : eqn32 ] ) into eq . ( [ eqn : eqn30 ] ) yields eq . ( [ eqn : eqn33 ] ) , where denotes the relative permittivity of the first medium ( see fig . [ fig : f1 ] ) . equations ( [ eqn : eqn31 ] ) and ( [ eqn : eqn32 ] ) are used for 1d te mode wave propagation . these equations can be readily solved using the same fdtd procedure as for eqs . ( [ eqn : eqn1 ] ) and ( [ eqn : eqn2 ] ) . unlike the tm case , however , the material dispersiveness now enters the time - domain solution through a factor that entails more difficulty for the fdtd solution . the simulation setup and flow chart in section [ sec : sec2 ] can be used for the te mode by exchanging the roles of the and fields . the fdtd equations based on the auxiliary differential equation ( ade ) approach follow , and the coefficients in the -field updating equations are medium dependent . in vacuum ( ) and dielectric media ( constant ) the dispersive terms vanish , while in drude media
$$a_{x3 } = a_{y3 } = -\frac{\delta t}{\epsilon_0\epsilon(\infty ) } , \qquad a_{x4 } = a_{y4 } = \frac{1-\gamma_d\delta t}{1+\gamma_d\delta t } , \qquad a_{x5 } = a_{y5 } = \frac{\epsilon_0\omega_d^2\delta t}{1+\gamma_d\delta t } ,$$
and in the pml region
$$a_{x1 } = \exp(-\sigma_y\delta t/\epsilon_0 ) , \qquad a_{x2 } = \frac{1-\exp(-\sigma_y\delta t/\epsilon_0)}{\delta x\,\sigma_y } , \qquad a_{x3 } = a_{x4 } = a_{x5 } = 0 ,$$
$$a_{y1 } = \exp(-\sigma_x\delta t/\epsilon_0 ) , \qquad a_{y2 } = -\frac{1-\exp(-\sigma_x\delta t/\epsilon_0)}{\delta x\,\sigma_x } , \qquad a_{y3 } = a_{y4 } = a_{y5 } = 0 .$$
the coefficients in the -field updating equations outside the pml regions take their standard vacuum values , whereas in the pml regions they read
$$b_{x1 } = \exp(-\sigma^*_x\delta t/\mu_0 ) , \qquad b_{x2 } = \frac{1-\exp(-\sigma^*_x\delta t/\mu_0)}{\delta x\,\sigma^*_x } ,$$
$$b_{y1 } = \exp(-\sigma^*_y\delta t/\mu_0 ) , \qquad b_{y2 } = \frac{1-\exp(-\sigma^*_y\delta t/\mu_0)}{\delta x\,\sigma^*_y } .$$
here we assume a polynomial grading of the pml parameters , $\sigma(x ) = \sigma_{\max}\,(x/d)^{m}$ , where $\sigma_{\max}$ is the maximum conductance in the pml , $x$ is the distance into the pml , and $d$ is the thickness of the pml region . in this paper , we use a power and the maximum conductance
is optimized to give a maximum reflection error on the order of . the fdtd equations based on the upml formulation read analogously . the coefficients in the -field updating equations are medium dependent : they take one form in vacuum ( ) and dielectric media ( constant ) , another in drude media , and separate forms outside and inside the upml regions ; the coefficients in the -field updating equations likewise differ outside and inside the upml regions . here we again assume a polynomial grading of the pml parameters , where and denote the maxima of the upml parameters , is the distance into the pml , and is the thickness of the pml . in this paper , we use fixed powers , and the maxima of the upml parameters are optimized to give a maximum reflection error on the order of for a simulation region consisting of vacuum , and on the order of for a simulation region consisting of drude dispersive media . in this appendix we provide a systematic solution for eq . ( [ eqn : eqn7 ] ) when consists of a linear superposition of debye , drude , and lorentz types of poles . in this case , we first rearrange eq . ( [ eqn : eqn7 ] ) as eq . ( [ eqn : eqn34 ] ) and introduce auxiliary variables to rewrite it as a system of equations . equations ( [ eqn : eqn36 ] ) and ( [ eqn : eqn37 ] ) correspond to the set of fdtd equations , while the translation of eq . ( [ eqn : eqn38 ] ) into a set of fdtd equations depends on the type of pole(s ) considered [ see eq . ( [ eqn : eqn35a ] ) ] : one form holds for a single debye pole ( ) , another for a drude pole pair ( ) , and another for a lorentz pole pair ( ) . equations ( [ eqn : eqn39 ] ) through ( [ eqn : eqn43 ] ) form a linear system of equations for the unknowns and ( ) , which can be solved by existing numerical solvers for linear systems . the solution is then used in place of eq . ( [ eqn : eqn11 ] ) to advance the 1d wave propagation for the tm mode . compared to a direct fourier transform of eq . ( [ eqn : eqn5 ] ) , the above procedure only requires the storage of the quantities at the previous two time instants and thus avoids the complexity of numerical high - order derivatives with respect to time . this procedure can be extended systematically to multiple poles in the material dispersiveness , although it involves solving a linear system of equations . f. lópez - tejeira , sergio g. rodrigo , l. martín - moreno , f. j. garcía - vidal , e. devaux , t. w. ebbesen , j. r. krenn , i. p. radko , s. i. bozhevolnyi , m. u. gonzález , j. c. weeber , and a. dereux , nat . phys . * 3 * , 324 ( 2007 ) . we note that an obliquely incident beam in the finite - difference time - domain method has been developed previously . see , e.g. , t .- w . lee and s. k. gray , appl . phys . lett . * 86 * , 141105 ( 2005 ) ; k. j. willis , j. b. schneider , and s. c. hagness , opt . exp . * 16 * , 1903 ( 2008 ) . however , we stress that in these formulations the incident beam wave front is spatially non - uniform , in contrast to the formulation in this paper . we note that , when incorporated with the convolutional perfectly matched layers absorbing boundary conditions , the ade approach can systematically treat a general dispersive medium with a finite sum of debye- , lorentz- and drude - type poles , while the upml approach with higher - order poles becomes increasingly difficult because of higher - order derivatives with respect to time . however , for the single drude pole considered in this paper , the upml approach is numerically tractable . if , total reflection occurs , and the solution of eqs . ( [ eqn : eqn7 ] ) and ( [ eqn : eqn33 ] ) becomes unstable .
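before turning to the figures , a minimal sketch of two ingredients from the appendices : the single - drude - pole ade current update implied by the coefficient forms quoted above , and the polynomially graded pml conductivity . the exact spatial and temporal staggering of the paper 's scheme may differ , and the parameter values are placeholders .

```python
import numpy as np

def drude_j_step(J, E, dt, eps0, wd, gd):
    """Advance the Drude polarisation current dJ/dt + gd*J = eps0*wd^2*E,
    using the coefficient forms a4, a5 quoted in the appendix."""
    a4 = (1.0 - gd * dt) / (1.0 + gd * dt)
    a5 = eps0 * wd ** 2 * dt / (1.0 + gd * dt)
    return a4 * J + a5 * E

def e_step(E, J_old, J_new, curlH, dt, eps0, eps_inf):
    """Semi-implicit E update eps0*eps_inf*dE/dt = curlH - J (J time-averaged)."""
    return E + dt / (eps0 * eps_inf) * (curlH - 0.5 * (J_old + J_new))

def pml_sigma(depth, d_pml, sigma_max, m=3):
    """Polynomially graded PML conductivity sigma(x) = sigma_max*(x/d)^m."""
    return sigma_max * (depth / d_pml) ** m
```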
in the examples provided in this paper , this total - reflection situation is not allowed . interested readers are referred to ref . for a detailed discussion of the fdtd formulation and solution in this case .
[ fig : f1 caption : simulation geometry . thick , dashed ( thin , dotted ) lines denote the boundaries to which the respective field components are assigned ; the left , lower left , and lower right panels show the specific field point assignment at the interface , along the reference line , and at the horizontal boundaries . ]
[ fig : f2 caption : flow chart of the simulation . 1d field updates use an auxiliary differential equation ( ade ) approach , while 2d field updates use either the ade approach or the equations consistent with the uniaxial perfectly matched layers ( upml ) formulation . ]
[ fig : fig1 caption : ( a ) magnetic field snapshot for a plane wave propagating in vacuum with instantaneous turn - on ; the dashed oval indicates the leakage outside the tf / sf boundary . ( b , c ) snapshots with gaussian ramping , during ramping and after steady state . ( d ) snapshot for a plane wave propagating in the positive direction . a log color scale is used ; the thick , dashed ( thin , dotted ) rectangle indicates the tf / sf ( inner pml ) boundary . ]
[ fig : fig2 caption : magnetic field snapshots ( a ) before and ( b ) after steady state for a plane wave obliquely incident on a dielectric slab ; the media above and below the slab are vacuum , and the slab does not penetrate into the pml region . ]
[ fig : fig3 caption : relative error in ( a ) the reflection and ( b ) the transmission coefficient , and ( c ) maximum leakage , as functions of the mesh size ; solid ( dashed ) curves show results without ( with ) interface averaging of the dielectric constants . ]
[ fig : fig4 caption : magnetic field snapshots ( a ) before and ( b ) after steady state for a plane wave obliquely incident on a drude metal slab ; the media above and below the slab are vacuum , and the slab does not penetrate into the pml region . ]
[ fig : fig5 caption : relative error in ( a ) the reflection and ( b ) the transmission coefficient , and ( c ) maximum leakage , as functions of the mesh size for the metal slab of fig . fig4 . ]
[ fig : fig6 caption : field snapshots for a two - layer structure ( drude metal below , dielectric above ) , without ( left column ) and with ( right column ) a sub - wavelength slit ; rows 1 - 3 show times before , and row 4 after , steady state . the slabs extend into the upml region . ]
[ tab : tab1 caption : comparison of the magnitudes of the reflection and transmission coefficients between analytical and numerical results for different incidence wavelengths ; percentages in brackets denote the relative errors of the numerical results . ]
|
we formulate a finite - difference time - domain ( fdtd ) approach to simulate electromagnetic wave scattering from scatterers embedded in layered dielectric or dispersive media . at the heart of our approach is a derivation of an equivalent one - dimensional wave propagation equation for dispersive media characterized by a linear sum of debye- , drude- and lorentz - type poles . the derivation is followed by a detailed discussion of the simulation setup and numerical issues . the developed methodology is tested by comparison with analytical reflection and transmission coefficients for scattering from a slab , illustrating good convergence behavior . the case of scattering from a sub - wavelength slit in a dispersive thin film is explored to demonstrate the applicability of our formulation to time- and incident angle - dependent analysis of surface waves generated by an obliquely incident plane wave .
|
the feature of quantum mechanics which most distinguishes it from classical mechanics is the coherent superposition of distinct physical states , usually referred to as quantum coherence .it embraces also entanglement , i.e. non - local quantum correlations arising in composite systems .quantum coherence results rather fragile against environment effects and this fact has boosted the development of a quantum control theory . just like the classical one ,quantum control theory includes open - loop control and closed - loop control according to the principle of controllers design .feedback is a paradigm of closed loop control , in that it involves gathering information about the system state and then according to that actuate a corrective action on its dynamics .it has been shown that quantum feedback is superior to open - loop control in dealing with uncertainties in initial states .moreover , it has been proven that it works better than open - loop control when it aims at restoring quantum coherence . in the presence of feedback, suitable quantum operations are added to the bare dynamical map ( resulting from the environment action ) of a quantum system .these quantum operations should be determined according to the desired target state .this is like to say that one optimizes the _actuation_. besides , it is known that there is a correspondence between measurement on the environment and the representation of the map .therefore , it is clear that one has to optimize the _measurement _ overall possible representations of the map in order to to extract the maximum information with the minimum disturbance .altogether , it can be said that feedback implies in the quantum realm a double optimization , over the measurement and over the actuation process .this makes designing the optimal feedback control a daunting task for quantum systems , especially composite ones and hence entanglement control ( we refer here to _ local _control , i.e. measurement and actuation are both local operations ) . in linear bosonic systemsthe pursued strategy was to steer a system towards a stationary state entangled as much as possible . dealing with the inherent nonlinearity of qubitsmakes this strategy very challenging and no progresses have been made since the seminal work of ref. .hence , we shall consider here a feedback control whose aim is to preserve as much as possible an initial maximally entangled states for two - qubit dissipating into their own environments .actually we shall employ maps and corrective actions much in the spirit of , without analyzing continuous time evolution .optimal control is found by first gaining insights from the subsystem purity and then by numerical analysis on the concurrence .repeated feedback action is also investigated , thus paving the way for a continuous time formulation and solution of the problem .the layout of the paper is as follows .we start by introducing the model in sec.[sec : model ] .then we discuss the feedback action in sec.[sec : fb ] and subsequently address its optimality in sec.[sec : opt ] .sec.[sec : repeat ] is devoted to repeated applications of the dynamical map .finally , sec.[sec : conclu ] is for conclusion .we consider two qubits ( distinguished whenever necessary by labels and ) undergoing the effect of local amplitude damping , so that their initial state changes according to the following quantum channel map where are the kraus operators ( satisfying ) constructed from those of local ( single qubit ) amplitude damping channels here ( resp . 
) is the ground ( resp .excited ) qubit state and $ ] is the single qubit damping rate .the map implies the probability for each qubit of losing independently the excitation into its own environment .suppose that the two qubits are initially prepared in a maximally entangled states , e.g. with in the computational basis , it has the following matrix representation : from here on we assume the freedom to perform local operations ( and eventually classical communication ) , i.e. they are costless .hence the above assumption of the initial state is equivalent to any other maximally entangled state . in the computational basis ,the state resulting from eq .reads now consider the subsystem purity as measure of entanglement .although it is only valid for pure states , it can give us some insights also for mixed states entanglement .thanks to it is straightforward to show that the minimum is achieved for , i.e. when the channel reduces to the identity map .a faithful measure of entanglement is the concurrence defined as where are , in decreasing order , the nonnegative square roots of the moduli of the eigenvalues of with denoting the complex conjugate of . usingwe can show that fig .[ fig1 ] illustrates the subsystem purity as well as concurrence resulting from state as a function of .we can see that they behave opposite one to another .hence we can argue that parameters minimizing the subsystem purity would also maximizing the concurrence . for the state .,width=302,height=226 ]the map in eq . can be regarded as the effect of a measurement process described by a probability operator valued measure ( povm ) whose elements are and whoseoutcomes are labelled by the values of .notice that the elements are local , hence we consider _ local feedback _actions su(2)(2 ) to be applied in correspondence of the outcomes .that is , in the presence of feedback the dynamical map changes into due to the symmetry of the initial state and of the action of the dissipative map , the unitary operators can be taken as : where are generic elements of su(2 ) with , , the euler angles .this model makes fully sense because now once we are given an entangled state the feedback operations are completely local and the aim is to restore as much as possible entanglement ( degraded by the local dissipation ) .so the goal is to find the euler angles that maximizes the amount of entanglement of .applying , with and , to gives whose matrix elements in the basis are : {11 } & = \frac{1}{8 } \bigg\{\left(1-\eta^2\right)\left ( 1 + \cos\beta_v\right)^2+\left ( 1 + \cos\beta_u\right)^2 \nonumber\\ & + 8\eta(1-\eta)\cos ^2\left(\beta_v/2\right)\sin ^2\left(\beta_u/2\right ) + 4\eta^2\sin ^4\left(\beta_u/2\right ) \nonumber\\ & + 4\eta\left ( 1 + \cos\beta_u\right)\cos ( 2 \gamma_u)\sin ^2\left(\beta_u/2\right)\bigg\ } , \label{eq : r11 } \\\left[\rho''\right]_{12}&=\frac{1}{8 } \bigg\{e^{-i \alpha_v}(1-\eta ) \sin \beta_v\big [ ( 1-\eta ) \cos \beta_v+1-\eta\cos\beta_u \big ] \nonumber \\ & + e^{-i \alpha_u}\sin\beta_u\big[(1-\eta)(1-\eta \cos \beta_v ) + 2 i \eta \sin ( 2\gamma_u ) \nonumber\\ & + \cos \beta_u\left(1+\eta ^2 - 2 \eta \cos ( 2 \gamma_u)\right ) \big ] \bigg\ } , \label{eq : r12}\end{aligned}\ ] ] {13}&= \left[\rho''\right]_{12 } , \\\nonumber\\ \left[\rho''\right]_{14}&=\frac{1}{8}e^{-2 i ( \alpha_u+\gamma_u)}\bigg\ { \eta(1 + \cos \beta_u)^2 + 4\eta e^{4 i\gamma_u}\sin ^2\left(\beta_u/2\right)\nonumber \\ & + 2e^{2 i\gamma_u}(1+\eta^2)(1+\cos \beta_u ) \sin ^2\left(\beta_u/2\right ) \nonumber \\ & + 2e^{2 i ( 
\alpha_u-\alpha_v+\gamma_u)}(1-\eta)^2(1+\cos \beta_v ) \sin ^2\left(\beta_v/2\right ) \nonumber \\ & -2e^ { i ( \alpha_u-\alpha_v+2\gamma_u)}\eta(1-\eta)\sin \beta_u\sin \beta_v \bigg\},\end{aligned}\ ] ] {22}&= \frac{1}{8 } \bigg\{4 ( 1-\eta ) \eta \cos ^2\left(\beta_u/2\right ) \cos ^2\left(\beta_v/2\right ) \nonumber\\ & + \left(1+\eta ^2 - 2 \eta \cos ( 2 \gamma_u)\right)\sin ^2\beta_u \nonumber \\ & + 2 ( 1-\eta ) \big[1-\eta \cos\beta_u+(1-\eta ) \cos \beta_v\big]\sin ^2\left(\beta_v/2\right ) \bigg\ } , \label{eq : r22 } \\ \nonumber \\\left[\rho''\right]_{23}&=\frac{1}{8}\bigg\{\left(1+\eta ^2 - 2 \eta \cos ( 2 \gamma_u)\right)\sin ^2\beta_u\nonumber\\ & -(1-\eta)\sin\beta_v \big[2\eta\cos(\alpha_u-\alpha_v)\sin \beta_u \nonumber\\ & -(1-\eta)\sin\beta_v \big]\bigg\ } , \label{eq : r23}\end{aligned}\ ] ] {24}&=\frac{1}{8 } \bigg\ { e^{-i \alpha_u}\sin\beta_u\big[(1-\eta)(1+\eta \cos \beta_v ) \nonumber\\ & -\cos \beta_u\left(1+\eta ^2 - 2 \eta \cos ( 2 \gamma_u)\right ) -2 i \eta \sin ( 2\gamma_u ) \big ] \nonumber\\ & + e^{-i \alpha_v}(1-\eta ) \big[1+\eta\cos\beta_u-(1-\eta ) \cos \beta_v\big]\sin \beta_v \bigg\ } , \label{eq : r24}\\ \nonumber\\ \left[\rho''\right]_{33}&= \left[\rho''\right]_{22},\end{aligned}\ ] ] {34}&= \left[\rho''\right]_{24 } , \\ \nonumber\\ \left[\rho''\right]_{44}&=\frac{1}{8 } \bigg\ { \eta^2\left ( 1 + \cos\beta_u\right)^2 + 4(1-\eta)^2\sin^4\left(\beta_v/2\right ) \nonumber\\ & + 4\sin^4\left(\beta_u/2\right)+8 ( 1-\eta ) \eta \cos ^2\left(\beta_u/2\right ) \sin ^2\left(\beta_v/2\right ) \nonumber\\ & + 4\eta(1+\cos\beta_u)\cos(2\gamma_u)\sin ^2\left(\beta_u/2\right)\bigg\}. \label{eq : r44}\end{aligned}\ ] ] the subsystem purity for the state reads \bigg\},\end{aligned}\ ] ] where , taking the partial derivatives of with respect to and and setting them equal to zero , we arrive at the following equations : they have a set of solutions which leads to the same amount of without feedback .the other set of solutions of leads to constant subsystem purity equal to ( minimum obtainable value ) for any value of ( and arbitrary value of ) .all the values in give the following density operator whose concurrence results the results for the subsystem purity and concurrence are displayed in figs .[ fig2 ] and [ fig3 ] .they show that the behaviour of purity and concurrence versus are consistent . without feedback action ( dot red line ) and in the presence of feedback action with ( solid blue line ) ., width=302,height=226 ] without feedback action ( dot red line ) and in the presence of feedback action with ( solid blue line).,width=302,height=226 ]it is known that the same quantum channel can have many ( actually infinite many ) kraus decompositions and each one can be interpreted as a given measurement performed on the environment to gain information about the system . hence , in this section , we will check the optimality of feedback action on unitarily equivalent kraus representation of map . to this end , first notice that the kraus representation provided in is canonical , i.e. . 
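before turning to non - canonical decompositions , the model and the feedback optimization described so far can be checked numerically . the sketch below assumes the basis ordering |00>,|01>,|10>,|11> , the initial bell state (|00>+|11>)/sqrt(2) ( immaterial up to local operations , as noted in the text ) , and a common corrective unitary applied on both qubits only for the decay outcome ; the angle grid is much coarser than the 11- and 61-point grids used in the text .

```python
import numpy as np
from itertools import product

def kraus_ad(eta):
    """Local amplitude-damping Kraus pair (excited state decays with prob. eta)."""
    A0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - eta)]])
    A1 = np.array([[0.0, np.sqrt(eta)], [0.0, 0.0]])
    return [A0, A1]

def euler_u(alpha, beta, gamma):
    """SU(2) element from Euler angles (z-y-z convention assumed)."""
    c, s = np.cos(beta / 2), np.sin(beta / 2)
    return np.array(
        [[np.exp(-0.5j * (alpha + gamma)) * c, -np.exp(-0.5j * (alpha - gamma)) * s],
         [np.exp( 0.5j * (alpha - gamma)) * s,  np.exp( 0.5j * (alpha + gamma)) * c]])

def channel(rho, eta, U_fb=None):
    """One application of the local damping map, optionally followed by the
    local feedback unitary U_fb (x) U_fb on the decay outcomes."""
    Us = [np.eye(2), np.eye(2) if U_fb is None else U_fb]
    out = np.zeros_like(rho, dtype=complex)
    for i, j in product(range(2), repeat=2):
        K = np.kron(Us[i] @ kraus_ad(eta)[i], Us[j] @ kraus_ad(eta)[j])
        out += K @ rho @ K.conj().T
    return out

def concurrence(rho):
    sy = np.array([[0, -1j], [1j, 0]])
    R = rho @ np.kron(sy, sy) @ rho.conj() @ np.kron(sy, sy)
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)       # (|00> + |11>)/sqrt(2)
rho0 = np.outer(psi, psi.conj())
eta = 0.3
best = max(((concurrence(channel(rho0, eta, euler_u(a, b, g))), (a, b, g))
            for a, b, g in product(np.linspace(0, 2 * np.pi, 9), repeat=3)),
           key=lambda x: x[0])
print(concurrence(channel(rho0, eta)), best[0])  # without vs with feedback
```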
then , restricting to canonical kraus operators , we should consider new kraus operators obtainable by linear combination of the old ones through a unitary matrix , in which su(2 ) and similarly to can be parametrized as with .explicitly we have this means we can now describe the dynamics of the density matrix in the presence of feedback as the expression of is too cumbersome to be reported here .however , computing its subsystem purity , the surprising aspect is that it becomes function of only .actually it reads in which it is obvious that the minimum of is achieved when the quantity vanishes when and , which leads to and . with these values, the quantity vanishes with where and [ alphabeta ] with .therefore , in this case having fixed and , the concurrence remains function of four parameters , i.e. . in order to find the maximum of concurrence over these four parameters and give a comparison with the concurrence of canonical kraus operators , we perform a numerical maximization over , , and .this is done by choosing 11 values for and for ( varying them from 0 to 1 with step ) , as well as 61 values for and for ( varying them from 0 to with step ) . for any values of , we obtain the maximum of concurrence over other points .the numerical results show that the optimal concurrence is exactly the same as the one obtained in the canonical scenario , i.e. for .examples of numerical results are reported in fig .[ fig : conpur ] . taking into account the results of this and the previous section ( i.e. optimal feedback achieved for ) we end up with the following optimal local unitaries characterizing the feedback action in with arbitrary and back to eq . we may observe that the matrix representation of has nonzero entries where also of eq . has .hence we may argue that the devised feedback action is optimal also starting from .then we repeat the analysis of sections [ sec : fb ] and [ sec : opt ] starting from a state where is a generic complex number such that . in the computational basis and in the absence of feedback action , the state resulting from eq . reads its subsystem purity is the same as eq.([p1 ] ) but its concurrence now depends on . 
on the other hand , the matrix elements of in the basis applying , with and , on the initial state ( [ inirhoq ] ) result : {{11 } } & = \frac{1}{2 } \bigg\{\left(1-\eta^2\right)\cos^2(\beta_v/2)+\eta^2 \sin^4(\beta_u/2 ) \nonumber\\ & + 2\eta(1-\eta)\cos ^2\left(\beta_v/2\right)\sin ^2\left(\beta_u/2\right ) + \cos^2(\beta_u/2 ) \nonumber\\ & + \eta\left ( 1 + \cos\beta_u\right)\re\left(q e^{-2i\gamma_u}\right ) \sin ^2\left(\beta_u/2\right)\bigg\ } , \label{eq : rq11 } \\ \nonumber \\\left[\rho_q''\right]_{12}&=\frac{1}{8 } \bigg\{e^{-i \alpha_v}(1-\eta)^2 \sin \beta_v \bigg [ \cos \beta_v+\frac{1-\eta\cos\beta_u}{1-\eta}\bigg ] \nonumber \\ & + e^{-i \alpha_u}\sin\beta_u\big[(1-\eta)(1-\eta \cos \beta_v ) + ( 1+\eta ^2)\cos \beta_u \nonumber\\ & -2\eta\cos\beta_u\re\left(q e^{-2i\gamma_u}\right ) -2i\eta\im\left(q e^{-2i\gamma_u}\right ) \big ] \bigg\ } , \label{eq : rq12}\end{aligned}\ ] ] {13}&= \left[\rho_q''\right]_{12 } , \\\nonumber\\ \left[\rho_q''\right]_{14}&=\frac{1}{8}e^{-2 i ( \alpha_u+\gamma_u)}\bigg\ { \eta q(1 + \cos \beta_u)^2 \nonumber \\ & + 4e^{2 i\gamma_u}\left[q^*\eta e^{2i\gamma_u } + ( 1+\eta^2)\cos^2(\beta_u/2)\right ] \sin ^2\left(\beta_u/2\right ) \nonumber \\ & + 2e^{2 i ( \alpha_u-\alpha_v+\gamma_u)}(1-\eta)^2(1+\cos \beta_v ) \sin ^2\left(\beta_v/2\right ) \nonumber \\ & -2e^ { i ( \alpha_u-\alpha_v+2\gamma_u)}\eta(1-\eta)\sin \beta_u\sin \beta_v \bigg\},\end{aligned}\ ] ] {22}&= \frac{1}{8 } \bigg\{4 ( 1-\eta ) \eta \cos ^2\left(\beta_u/2\right ) \cos ^2\left(\beta_v/2\right ) \nonumber\\ & + \left(1+\eta ^2 - 2 \eta \re\left(q e^{-2i\gamma_u}\right)\right)\sin ^2\beta_u \nonumber \\ & + 2 ( 1-\eta ) \big[1-\eta \cos\beta_u+(1-\eta ) \cos \beta_v\big]\sin ^2\left(\beta_v/2\right ) \bigg\ } , \label{eq : rq22 } \\ \nonumber \\\left[\rho_q''\right]_{23}&=\frac{1}{8}\bigg\{\left(1+\eta ^2 - 2 \eta \re\left(qe^{-2i\gamma_u}\right ) \right)\sin ^2\beta_u\nonumber\\ & -(1-\eta)\sin\beta_v \big[2\eta\cos(\alpha_u-\alpha_v)\sin \beta_u \nonumber\\ & -(1-\eta)\sin\beta_v \big]\bigg\ } , \label{eq : rq23}\end{aligned}\ ] ] {24}&=\frac{1}{8 } \bigg\ { e^{-i \alpha_u}\sin\beta_u\big[(1-\eta)(1+\eta \cos \beta_v ) \nonumber\\ & -\left(1+\eta^2 -2\eta\re\left(q e^{-2i\gamma_u}\right)\right)\cos\beta_u + 2i\eta\im\left(q e^{-2i\gamma_u}\right ) \big ] \nonumber\\ & + e^{-i \alpha_v}(1-\eta ) \big[1+\eta\cos\beta_u-(1-\eta ) \cos \beta_v\big]\sin \beta_v \bigg\ } , \label{eq : rq24}\\ \nonumber\\ \left[\rho_q''\right]_{33}&= \left[\rho_q''\right]_{22},\end{aligned}\ ] ] {34}&= \left[\rho_q''\right]_{24 } , \\ \nonumber\\ \left[\rho_q''\right]_{44}&=\frac{1}{8 } \bigg\ { \eta^2\left ( 1 + \cos\beta_u\right)^2 + 4(1-\eta)^2\sin^4\left(\beta_v/2\right ) \nonumber\\ & + 4\sin^4\left(\beta_u/2\right)+8 ( 1-\eta ) \eta \cos ^2\left(\beta_u/2\right ) \sin ^2\left(\beta_v/2\right ) \nonumber\\ & + 4\eta(1-\cos^2\beta_u)\re\left(q e^{-2i\gamma_u}\right)\bigg\}. \label{eq : rq44}\end{aligned}\ ] ] for the state , the subsystem purity turns out to be the same of , i.e. not depending on .this leads us to conclude that also for the optimal feedback is achieved by and hence . 
with this, the state after feedback action reads , in the basis , its concurrence results the optimality of this result is confirmed by numerical investigations over non - canonical kraus decompositions .similarly to sec .[ sec : opt ] we have maximized the concurrence over parameters , and , this time for each pair of values of and .this has been done by choosing 11 values for , for and for ( varying them between 0 and 1 with step ) , as well as 61 values for and for ( varying them from 0 to with step ) . for any pair of and the maximum concurrence has been obtained over other points .the numerical results show that the optimal concurrence is exactly , i.e. the one obtained in the canonical scenario ( ) .thanks to the above results , we can consider repeated applications of the map without feedback , giving where is the number of map s applications , as well as repeated applications of the map with feedback giving the corresponding concurrences , \end{aligned}\ ] ] and are reported in fig.[num_feed_n ] . therewe can see that the advantage of feedback tends to persist only at sufficiently high values of , by increasing . for different number of applications of the amplitude damping map , without feedback action ( dashed line ) and with feedback action ( solid line).,width=302,height=226 ]in conclusion , we have addressed the problem of correcting errors intervening in two - qubit dissipating into their own environments by resorting to local feedback actions with the aim of preserving as much as possible the initial amount of entanglement .optimal control is found by first gaining insights from the subsystem purity and then by numerical analysis on the concurrence .this is tantamount to a double optimization , on the actuation and on the measurement precesses .the results are obtained for single shot .the results , although obtained with the help of numerics , are analytically clear and can be summarized by eqs . and with .our results could be helpful in designing experiments where entanglement control is required , particularly in settings like cavity qed , superconducting qubits , optomechanical systems .it remains open the problem of steering the system towards a desired target ( entangled ) state ; to this end we need to consider repeated map s applications for which we paved the way in section [ sec : repeat ] .the feedback strategy employed along this line is in the same spirit of _ direct feedback _ , in that it does not involve processing the information obtained from the system in order to estimate its state . on the other hand in the context of repeated map s applications , and particularly in the continuous time analysis of the problem , optimization of feedback action should also involve bayesian ( state estimation based ) strategies and an extension to two qubits of the analysis for single qubit control performed in ref. would be very welcome .m. gregoratti and r. f. werner journal of modern optics * 50 * , 915 ( 2003 ) ; l. memarzadeh , c. cafaro and s. mancini journal of physics a : mathematical and theoretical * 44 * , 045304 ( 2011 ) ; l. memarzadeh , c. macchiavello , and s. mancini , new journal of physics , * 13*(10 ) , 103031 ( 2011 ) .
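as a numerical companion to the repeated - application comparison of fig . [ num_feed_n ] , reusing the functions from the sketch above ; the rotation used as the corrective unitary here is illustrative , not necessarily the optimum found in the text .

```python
# repeated applications of the dissipative map, with and without feedback
U_fb = euler_u(0.0, np.pi, 0.0)                   # illustrative feedback unitary
rho_a = rho_b = rho0
for n in range(1, 6):
    rho_a = channel(rho_a, eta=0.1)               # bare map
    rho_b = channel(rho_b, eta=0.1, U_fb=U_fb)    # map + feedback
    print(n, round(concurrence(rho_a), 4), round(concurrence(rho_b), 4))
```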
|
we study the correction of errors intervening in two qubits dissipating into their own environments . this is done by resorting to local feedback actions with the aim of preserving as much as possible the initial amount of entanglement . optimal control is found by first gaining insights from the subsystem purity and then by numerical analysis of the concurrence . this is tantamount to a double optimization , over the actuation and the measurement processes . repeated feedback action is also investigated , thus paving the way for a continuous - time formulation and solution of the problem .
|
wireless communication using visible light wavelengths ( 400 to 700 nm ) in indoor local area network environments is emerging as a promising area of research .visible light communication ( vlc ) is evolving as an appealing complementary technology to radio frequency ( rf ) communication technology . in vlc ,simple and inexpensive light emitting diodes ( led ) and photo diodes ( pd ) act as signal transmitters and receptors , respectively , replacing more complex and expensive transmit / receive rf hardware and antennas in rf wireless communication systems . other favorable features in vlc include availability of abundant visible light spectrum at no cost , no licensing / rf radiation issues , and inherent security in closed - room applications .the possibility of using the same leds to simultaneously provide both energy - efficient lighting as well as high - speed short - range communication is another attractive feature .the potential to use multiple leds and pds in multiple - input multiple - output ( mimo ) array configurations has enthused mimo wireless researchers to take special interest in vlc - .signaling schemes considered in multiple - led vlc include space shift keying ( ssk ) and its generalization ( gssk ) , where the on / off status of the leds and the indices of the leds which are on convey information bits , . other multiple - led signaling schemes considered in the literature include spatial multiplexing ( smp ) , spatial modulation ( sm ) , and generalized spatial modulation ( gsm ) ,,, .these works have considered real signal sets like -ary pulse amplitude modulation ( pam ) with positive - valued signal points in line with the need for the transmit signal in vlc to be positive and real - valued to intensity modulate the leds . the vlc channel between an led and a photo detector in indoor environments can be a multipath channel .the multipath effects can be mitigated by using orthogonal frequency division multiplexing ( ofdm ) .the use of complex signal sets like -ary quadrature amplitude modulation ( qam ) along with ofdm in vlc is studied extensively in the literature - .techniques reported in these works include dc - biased optical ( dco ) ofdm , asymmetrically clipped optical ( aco ) ofdm - , flip ofdm , , non - dc biased ( ndc ) ofdm , and index modulation for ndc ofdm .a key constraint in the above techniques , however , is that they perform hermitian symmetry operation on the qam symbol vector at the ifft input so that the ifft output would be positive and real - valued .a consequence of this is that channel uses are needed to send symbols . in this paper , we propose two simple and novel complex modulation techniques for vlc using multiple leds , which do not need hermitian symmetry operation .the proposed schemes exploit the spatial dimension to convey complex - valued modulation symbols .* the first proposed idea is to use four leds to form a single modulation unit that simultaneously conveys the real and imaginary parts of a complex modulation symbol and their sign information .while the magnitudes of the real and imaginary parts are conveyed through intensity modulation ( i m ) of leds , the sign information is conveyed through spatial indexing of leds . since four ledsform one complex modulation unit , we term this as _ quad - led complex modulation ( qcm ) _ . * the second idea is to exploit the representation of a complex symbol in polar coordinates . 
instead of conveying the real and imaginary parts of a complex symbol and their sign information using four leds in qcm , we can convey only the magnitude and phase of a complex symbol .we need only two leds for this purpose and there is no sign information to convey in this representation .so we use only two leds to form a single modulation unit in this case .we term this scheme as _ dual - led complex modulation ( dcm ) _ since two leds constitute one complex modulator . *the third proposed idea is to bring in the advantages of spatial modulation to the dcm scheme . instead of using all the four leds to transmit one complex symbol ( as in qcm ), we choose two out of four leds to transmit the magnitude and phase of a complex symbol as in dcm scheme . since we have to choose one pair of leds ( one block ) out of two and each pair will perform the same operation as in dcm scheme , we term this scheme as _ spatial modulation - dcm ( sm - dcm ) _ .we investigate the bit error performance of the proposed qcm , dcm , and sm - dcm schemes through analysis and simulations .we obtain upper bounds on the bit error rate ( ber ) of qcm , dcm , and sm - dcm .these analytical bounds are very tight at high signal - to - noise ratios ( snr ) .therefore , these bounds enable us to easily compute and plot the achievable rate contours for a desired a target ber ( e.g. , ber ) in qcm , dcm , and sm - dcm .the analytical and simulation results show that the qcm , dcm , sm - dcm schemes achieve good ber performance .dcm has the advantage of fewer leds ( 2 leds ) per complex modulator and better performance compared to qcm for small - sized modulation alphabets ( e.g. , 8-qam ) . on the other hand, qcm has the advantage of additional degrees of freedom ( 4 leds ) compared to dcm , because of which it achieves better performance compared to dcm for large alphabet sizes ( e.g. , 16-qam , 32-qam , 64-qam ) .sm - dcm achieves better performance compared to dcm and qcm for small - sized modulation alphabets ( e.g. , 16-qam ) since it requires smaller modulation size . on the other hand , for large alphabet sizes, sm - dcm performs better compared to qcm at low values due to lower order modulation size , whereas at high values , sm - dcm performance degrades because of the reduced average relative distance between transmit vectors compared to qcm .since qcm and dcm can directly handle complex symbols in vlc , techniques which are applied to complex modulation schemes to improve performance in rf wireless channels can be applied to vlc as well .for example , it is known that rotation of complex modulation symbols can improve ber performance in rf wireless communication . motivated by this observation, we explore the possibility of achieving performance improvement in vlc through phase rotation of complex modulation symbols prior to mapping the signals to the leds in qcm .we term this scheme as qcm with phase rotation ( qcm - pr ) .results show that phase rotation of modulation symbols indeed can improve the ber performance of qcm in vlc .we also study the proposed qcm and dcm schemes when used along with ofdm ; we refer to these schemes as qcm - ofdm and dcm - ofdm. we present zero - forcing and minimum distance detectors and their performance for qcm - ofdm and dcm - ofdm .the rest of this paper is organized as follows .the indoor vlc system model is presented in section [ sec2 ] .the proposed qcm , qcm - pr , and qcm - ofdm schemes and their performance are presented in section [ sec3 ] . 
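a minimal sketch of the two mappings just introduced . the qcm rule follows directly from the scheme 's description ( the sign of each quadrature selects the led , the magnitude sets its intensity ) ; for dcm the phase must be carried as a non - negative intensity , and the arg(s)/2*pi scaling used here is an illustrative choice rather than the paper 's exact convention .

```python
import numpy as np

def qcm_map(s):
    """Map complex symbol s to the four LED intensities of one QCM unit."""
    re, im = s.real, s.imag
    out = np.zeros(4)
    out[0 if re >= 0 else 1] = abs(re)    # LED1 / LED2 carry Re(s)
    out[2 if im >= 0 else 3] = abs(im)    # LED3 / LED4 carry Im(s)
    return out

def dcm_map(s):
    """Map complex symbol s to the two LED intensities of one DCM unit."""
    phase = np.angle(s) % (2 * np.pi)
    return np.array([abs(s), phase / (2 * np.pi)])

print(qcm_map(-3 + 1j))   # -> [0, 3, 1, 0]
print(dcm_map(1 + 1j))
```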
section [ sec4 ] presents the proposed dcm and dcm - ofdm schemes and their performance .section [ sec5 ] presents the proposed sm - dcm scheme and it performance .section [ sec6 ] presents the spatial distribution of the received snrs and the rate contours achieved in qcm , dcm , and sm - dcm .conclusions are presented in section [ sec7 ] .consider an indoor vlc system with leds ( transmitter ) and photo detectors ( receiver ) .assume that the leds have a lambertian radiation pattern , .in a given channel use , each led is either off or emits light with some intensity which is the magnitude of either the real part or imaginary part of a complex modulation symbol .an led which is off implies a light intensity of zero .let ^t ] , and is the row of .the proposed qcm scheme uses four leds at the transmitter .figure [ fig3 ] shows the block diagram of a qcm transmitter .let denote the complex modulation alphabet used ( e.g. , qam ) . in each channel use , one complex symbol from ( chosen based on information bits )is signaled by the four leds as described below .each complex modulation symbol can have a positive or negative real part , and a positive or negative imaginary part .for example , the signal set for 16-qam is .let be the complex symbol to be signaled in a given channel use .let where and are the real and imaginary parts of , respectively .two leds ( say , led1 and led2 ) are used to convey the magnitude and sign of as follows .led1 will emit with intensity if is positive , whereas led2 will emit with the same intensity if is negative .note that , since is either or , only any one of led1 and led2 will be on in a given channel use and the other will be off . in a similar way , the remaining two leds ( i.e. , led3 and led4 ) will convey the magnitude and sign of in such a way that led3 will emit intensity if is , whereas led4 will emit with the same intensity if is .therefore , qcm sends one complex symbol in one channel use .the mapping of the magnitudes and signs of and to the activity of leds in a given channel use is summarized in table [ tab ] ..[tab ] mapping of complex symbol ( with real part and imaginary part ) to leds activity in qcm .[ cols="^,<,^ , < " , ] we consider a target ber of .note that figs .[ fig17 ] , [ fig18 ] , and [ fig19 ] demonstrated the tightness of the ber upper bounds obtained in secs .[ sec3c ] , [ sec4a ] , and [ sec5d ] for qcm , dcm , and sm - dcm , respectively . indeed, the upper bounds and simulation results almost match for bers below .therefore , these bounds can be used to accurately map the spatial distribution of the snrs to achievable rate contours for the considered target ber of .this is done as follows . using the average received snr at a given spatial position of the receiver and the ber vs snr relation given by the ber upper bound expression , determine the maximum qam size ( among 2- , 4- , 8- , 16- , 32- , 64-qam ) that meets the ber target .this determination is made for all spatial positions of the receiver at a spatial resolution of 2.5 cm .the resulting spatial map of the maximum qam size possible gives the achievable rate in bpcu at various spatial positions of the receiver ._ results and discussions : _ we computed the spatial performance measures discussed above for qcm , dcm , and sm - dcm with m and . figures [ fig_a](a),(b ) , and ( c ) show these performance plots for qcm , dcm , and sm - dcm , respectively .it can be observed that the maximum rate achieved by qcm and sm - dcm while meeting the ber target is 5 bpcu ( i.e. 
, maximum supported qam size is 32-qam and 16-qam for qcm and sm - dcm , respectively ) and the maximum rate achieved by dcm is 4 bpcu ( i.e. , maximum supported qam size is 16-qam ) .this is due to the observation we made in fig .[ dcmber1 ] and [ fig20 ] , where we saw that qcm had a larger average relative distance between the transmit vectors compared to dcm and sm - dcm for large qam sizes and this resulted in a favorable performance for qcm over dcm and sm - dcm .this is found to result in qcm achieving a larger percentage area of the room covered by 4 bpcu ( covering 70% area ) and 5 bpcu ( covering 45% area ) rates than dcm .similarly , qcm achieves a larger percentage area of the room covered by 5 bpcu ( covering 70% area ) than sm - dcm .dcm shows a performance advantage over qcm for 8-qam ; this can be seen by observing that dcm supports 8-qam in more than 90% of the room while qcm covers a lesser area for 8-qam .similarly , sm - dcm shows a performance advantage over qcm and dcm for bpcu and bpcu , respectively .this can be seen by observing that sm - dcm covers more than 90% of the room while qcm and dcm covers a lesser area for bpcu and bpcu , respectively .we proposed three simple and novel complex modulation schemes that avoided the hermitian symmetry operation to generate led compatible positive real signals encountered in vlc .this was achieved through the exploitation of the spatial dimension for the purpose of complex symbol modulation . in the proposed qcm scheme ,four leds were used to convey the real and imaginary parts of a complex symbol and their sign information . while intensity modulation of leds was employed to convey the magnitudes of the real and imaginary parts , spatial index modulation of ledswas used to convey their sign information . the proposed dcm scheme , on the other hand , exploited the polar representation of complex symbols to use only two leds to convey the magnitude and phase information of a complex symbol .the proposed sm - dcm scheme exploited the use of spatial modulation in dcm .analytical upper bounds and simulation results showed that the proposed qcm , dcm , and sm - dcm achieve good ber performance .phase rotation of modulation symbols was shown to improve the ber performance in qcm .zero - forcing and minimum distance detectors for qcm and dcm when used along with ofdm showed good performance for these qcm - ofdm and dcm - ofdm schemes .the analytical ber upper bounds were shown to be very tight at high snrs , and this enabled us to easily compute and plot the achievable rate contours for a given target ber ( e.g. , ber ) in qcm , dcm , and sm - dcm .t. q. wang , y. a. sekercioglu , and j. armstrong , `` analysis of an optical wireless receiver using a hemispherical lens with application in mimo visible light communications , '' _ j. lightwave tech .1744 - 1754 , jun . 2013 .j. barry , j. kahn , w. krause , e. lee , and d. messerschmitt , `` simulation of multipath impulse response for indoor wireless optical channels , '' _ ieee j. sel .areas in commun .367 - 379 , apr . 1993 .l. zeng , d. obrien , h. le minh , k. lee , d. jung , and y. oh , `` improvement of date rate by using equalization in an indoor visible light communication system , '' _ proc .ieee iccsc 2008 _ , pp .678 - 682 , may 2008 .
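two pieces of the evaluation described above , sketched minimally : maximum - likelihood ( minimum - distance ) detection over the finite set of transmit vectors , and the per - position rate map . the function ber_bound below is a hypothetical placeholder for the analytical upper bound , whose closed form is not reproduced here .

```python
import numpy as np

def min_distance_detect(y, H, candidates):
    """argmin_x || y - H x || over the finite transmit-vector set."""
    d = [np.linalg.norm(y - H @ x) for x in candidates]
    return int(np.argmin(d))

def rate_map(snr_grid, ber_bound, target=1e-4, sizes=(2, 4, 8, 16, 32, 64)):
    """Largest QAM size meeting the BER target at each spatial position,
    reported as bits per channel use (log2 of the constellation size)."""
    rate = np.zeros(snr_grid.shape)
    for idx, snr in np.ndenumerate(snr_grid):
        ok = [m for m in sizes if ber_bound(snr, m) <= target]
        rate[idx] = np.log2(max(ok)) if ok else 0.0
    return rate
```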
|
in this paper , we propose simple and novel complex modulation techniques that exploit the spatial domain to transmit complex - valued modulation symbols in visible light wireless communication . the idea is to use multiple light emitting diodes ( leds ) to convey the real and imaginary parts of a complex modulation symbol and their sign information , or , alternately , to convey the magnitude and phase of a complex symbol . the proposed techniques are termed _ quad - led complex modulation ( qcm ) _ and _ dual - led complex modulation ( dcm ) _ . the proposed qcm scheme uses four leds ( hence the name ` quad - led ' ) ; while the magnitudes of the real and imaginary parts are conveyed through intensity modulation of leds , the sign information is conveyed through spatial indexing of leds . the proposed dcm scheme , on the other hand , exploits the polar representation of a complex symbol ; it uses only two leds ( hence the name ` dual - led ' ) , one led to map the magnitude and another led to map the phase of a complex modulation symbol . these techniques do not need the hermitian symmetry operation to generate led - compatible positive real transmit signals . we present zero - forcing and minimum distance detectors and their performance for qcm - ofdm and dcm - ofdm . we further propose another modulation scheme , termed the _ sm - dcm ( spatial modulation - dcm ) _ scheme , which brings the advantage of spatial modulation ( sm ) to dcm . the proposed sm - dcm scheme uses two dcm blocks with two leds in each block , and an index bit decides which of the two blocks will be used in a given channel use . we study the bit error rate ( ber ) performance of the proposed schemes through analysis and simulations . using tight analytical ber upper bounds and the spatial distribution of the received signal - to - noise ratios , we compute and plot the achievable rate contours for a given target ber in qcm , dcm , and sm - dcm .
|
high resolution optical imaging instruments based on aperture synthesis have been developed over the last decades with the aim of reaching angular resolutions in the nanoradian range . these different instruments use the van cittert - zernike theorem to recover the intensity distribution of the object by means of spatial coherence analysis . with this method , the instrument can never select the light coming from only one of the pixels composing the full object , because the measurements are carried out in the fourier spectral domain . for high - dynamics objects , such as a star + exoplanet system , this technique is limited since the information on the faint object is always mixed with the light emitted by the main source . consequently , direct imaging is to be preferred , and the analysis of the object is easier in the image domain than in the fourier spectrum one . since the beginning of high resolution imaging , measurements have never been achieved both with a very high resolution in the range of nanoradians and a very high dynamics in the range of . in order to meet this challenge , a. labeyrie has proposed a solution which is known as the hypertelescope . this new type of instrument solves the problem of the highly structured point spread function ( psf ) of a diluted array thanks to a pupil densification process . the psf of a hypertelescope being sharp and smooth , it is possible to use the instrument for direct imaging . the image , which equals the convolution of the object with the psf , looks like the object but with a limited resolution . different versions of hypertelescopes have been proposed using field combination in the pupil plane or pupil densification thanks to the use of monomode optical fibres . parallel to the hypertelescope studies promoted by a. labeyrie , we have proposed a temporal alternative to the initial design that used classical spatial optics ( cf fig . [ fig : spathyp ] ) . the main purpose of this new concept is to answer some technical difficulties met with classical hypertelescopes and to propose new functionalities . + in we theoretically demonstrated the possibility to design a hypertelescope by using temporal optical path modulation . in the next paragraphs , we briefly recall the principle of a classical and a temporal hypertelescope . figure [ fig : spathyp ] recalls the structure of a hypertelescope as proposed by a. labeyrie ( in the figure , the image is displayed as a function of spatial coordinates in the image plane ) . this simplified drawing does not detail the reconfiguration and densification process . this technique makes it necessary to remap the input pupil , taking care to apply a homothetic contraction of the beam center distribution in the output pupil according to the `` golden rule of imaging interferometry '' . denoting the pupil reconfiguration ratio , the distance between telescope and the input pupil center is related to the position of the sub pupil in the output pupil by : where o is the output pupil origin . after passing through the focusing lens , the beam in the image plane generates a limited plane wave : with where is the focal length of the focusing lens , and are respectively the modulus and input phase of beam . is the position vector in the image plane with two coordinates and ( ) .
in the labeyrie configuration , the field envelope resulting from the diffraction of each sub pupil is supposed to be identical for all beams . the last term describes the linear variation of the phase in the image plane , with and slope coefficients along the and axes , where and represent the projections of vector along the and axes . this way , the position of each telescope drives the slope of the corresponding wave front reaching the image plane . the phase evolution can be analysed along different axes parallel to the axis . the phase variation is linear with an offset depending on the coordinate : with as demonstrated by a. labeyrie , the possibility to retrieve the image results from the coherent superposition of these different limited plane waves and the convolution of the resulting point spread function ( psf ) with the object intensity distribution . the psf is given by the intensity distribution corresponding to the coherent superposition of the fields in the image plane when the input pupil is illuminated by a plane wave with a wave vector perpendicular to the input pupil plane . under such conditions , the coefficients are identical ( set to one for example ) and the phase terms in equation [ field ] are equal to 0 . whereas in the labeyrie configuration corresponds to the sub - pupil far field , in the iran ( interferometric remapped array nulling ) proposal this term is replaced by the sub - pupil function itself . in both cases this envelope term fixes the phase modulation span in the recombining plane . this phase range is directly related to the dimensions of the optical components and can not be easily adjusted in a real experiment . for a tilted point - like source illuminating the telescope array with an obliquity , the amplitudes remain constant but the phase ( as defined in eq . ( [ field ] ) ) becomes : the total phase of the field in the image plane can then be written as : this additional term induces a spatial shift of the corresponding intensity distribution : for an extended object , the incoherent superposition of the different contributions of the object leads to the following image intensity distribution : of course , any process able to achieve such phase modulations will provide an image with the same basic properties . the next paragraph deals with the possibility to obtain such a result in the time domain . as mentioned above , in a spatial hypertelescope the linear phase modulations are related to the and tilts of the optical fields reaching the observation plane . in the temporal case , the phase modulation is generated thanks to an optical path variation induced by optical path modulators linearly actuated as a function of time , as shown in figure [ fig : phvar ] . using a temporal frequency scaled on the slope allows one to propose a temporal configuration equivalent to the spatial concept . for this purpose , the frequency has to be proportional to , with an arbitrary coefficient . according to this spatial - to - temporal transposition , the spatial optical field : is replaced by its temporal counterpart : where the function expresses the span of the phase variation . in the classical concept , the image is analyzed along the axis . the factor represents the relationship between the spatial parameter and time .
in the time domain , the image is obtained by using the temporal psf given by : as seen in the previous case , for a tilted point - like source with an obliquity , the total phase of the field becomes : the corresponding intensity distribution is a shifted temporal psf : and the incoherent superposition of the different contributions of the object leads to the temporal image intensity distribution : this result demonstrates the full equivalence between the spatial and temporal displays , as illustrated in figure [ fig : tempdisp ] . in order to analyze a two - dimensional image , the phase has to be modified step by step in order to display successively the different rows of the image using a set of parallel scans . the information provided along the axis at a given position is fully equivalent to the one observed as a function of time for the scan related to the same value . the image is temporally multiplexed as for a video raster scan . it can be noticed that in the spatial domain , the span is determined by the diffraction field distribution related to the sub - pupil geometry . in the temporal domain , the image field is determined by the extension of the phase modulation , directly driven by the span of the optical path modulator . it can be adjusted very easily and reduced down to , contrary to the spatial configuration in which the limitation results from the beam dimensions . the different advantages of temporal hypertelescopes are discussed in . figure [ fig : temppropal ] shows a generic representation of a temporal hypertelescope . this sketch uses optical fibres and an optical coupler , but classical mirror trains , air delay lines and classical beam splitters could be used to design such an instrument . + let us consider a telescope array pointing at a scientific target . the light is picked up at the n telescope foci and passes through optical path modulators before being recombined using an n - to - one beam combining coupler in a coaxial configuration . the required optical path modulations of the n interferometric arms are generated using optical fibre stretchers . they induce an optical path variation in order to generate the convenient phase modulations , as previously mentioned . for this purpose , the optical path modulation can be expressed as : these optical path modulations can be servo controlled using opto - electronic systems previously developed for such kinds of applications ( ; ) . this allows monitoring , with a nanometric accuracy , the linearity of the optical path variation as a function of time . for each scan , the offset is set in order to display the signal that would be observed at position for a classical spatial hypertelescope .
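a minimal numerical sketch of the temporal display described above is given below : each telescope contributes a phase ramp whose frequency is proportional to its position in the array , and the detected intensity is the squared modulus of the coherent sum . the unit pitch , uniform amplitudes and frequency scaling are illustrative assumptions , not the parameters of the real bench .

```python
import numpy as np

positions = np.arange(8)         # telescope positions x_k (unit pitch assumed)
amplitudes = np.ones(8)          # field amplitudes a_k (no apodization here)
alpha = 1.0                      # frequency per unit position (assumption)
t = np.linspace(0.0, 1.0, 2048)  # one temporal scan

def temporal_psf(phase_offsets):
    """Intensity |sum_k a_k exp(i(2*pi*alpha*x_k*t + phi_k))|^2,
    the temporal analogue of the image-plane superposition."""
    field = sum(a * np.exp(1j * (2 * np.pi * alpha * x * t + phi))
                for a, x, phi in zip(amplitudes, positions, phase_offsets))
    return np.abs(field) ** 2

psf0 = temporal_psf(np.zeros(8))          # cophased array: on-axis PSF
shifted = temporal_psf(0.3 * positions)   # tilted source: same PSF, shifted in time
```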
to properly operate a temporal hypertelescope , the optical path modulators are driven by an 8 - channel function generator and related high voltage electronics ( not drawn in the picture ) . the output voltage drives the optical path modulators with a full span in the range of tens of and a typical nanometric sensitivity . the electronic gain and the voltage generator slopes allow setting the frequencies to the proper values . in such a configuration , we can theoretically get the same imaging properties as for the first classical design using spatial pupil densification . the breadboard described in this paper and the related experimental results reported in the next paragraphs aim to demonstrate the validity of this new concept . our experimental set - up ( see figure [ schema_complet ] ) has been designed and implemented thanks to the different skills developed in our team over two decades ( ; ; ; ; ) . consequently , our tht experimental test bench uses optical fibres and couplers for the different optical functions to be implemented . however , we would like to stress that the use of guided optics components is not mandatory for the implementation of a tht . a classical design with classical components could be chosen if preferred . this point will remain a minor one as long as we focus more on the demonstration of the tht principle than on the technological aspects . the following items give the general framework of our experimental study . * the operating wavelength , all over this instrument , is , to take advantage of the mature and available technologies of telecom components . * light propagation between the entrance pupil and the detector is achieved through monomode polarization maintaining fibres ( panda fibres ) . * all connections between the different components use fc / apc connectors to avoid parasitic back reflections . * the tht configuration has been optimized for the imaging of an unbalanced binary star with a high dynamics , such as an exoplanet - star couple . for this purpose we use the theoretical results reported in , which imply a redundant spacing of the array and an optimized power distribution over the different telescopes to apodize the point spread function . * our telescope array includes 8 apertures equivalent to a linear redundant configuration : the corresponding spatial frequency sampling enables a convenient analysis of a complex linear object for a realistic experimental demonstration . the following sections summarize the tht bench structure . it consists of three main parts ( cf fig . [ schema_complet ] ) : a star simulator , a telescope array and a combining interferometer . the calibrated object is the first subsystem required for testing the imaging capability of a tht . for this first experimental demonstration , the selected astronomical target is a binary star with a convenient angular separation and adjustable dynamics . + for this purpose , the object consists of two tips of monomode panda fibres glued on a v - groove . these monomode waveguides are fed by two independent distributed feedback lasers ( dfb ) with the same emitting wavelength and act as two incoherent point - like sources . this way the object is spatially incoherent , and the dynamics is controlled by adjusting the laser driving currents . a set of doublets and collimating lenses provides an angular intensity distribution compatible with the spatial frequencies sampled by our telescope array .
in our experiment , the angular separation , as seen by the telescope array , is . as our instrument is designed for a linear input polarization , a polarizing cube is inserted in the doublet spacing in order to select and fix a linear vertical input polarization ( not drawn on fig . [ schema_complet ] ) . the experimental setup can be seen in fig . [ photo_objet ] . the telescope array arrangement has to be carefully selected to fit the sampling criteria for a proper image analysis . as previously demonstrated , high dynamics imaging capability requires a redundant array configuration . consequently , our telescope array must periodically sample the spatial frequency domain . the object dimension and the focal length of the collimator have to be determined by comparing the object spectrum and the spatial frequencies sampled by our instrument . the intensity observed in the image plane of the instrument is given by : where denotes the pseudo - convolution operator , the point spread function of the instrument and the object intensity distribution . in the fourier domain , this relationship becomes a simple product : where is the object intensity spectrum and the input pupil autocorrelation function . as shown in fig . [ array_design ] , the periodicity of the spatial frequencies sampled by the telescope array has to be compliant with the classical shannon sampling criterion . the smallest sampled frequency ( ) is related to the instrument field of view , and the largest one ( ) determines the instrument resolution : the telescope array resolution allows discriminating the sharpest details to be observed on the object , and the field of view has to be adapted to the object 's overall size . + these design trade - offs lead to the following tht bench characteristics : these characteristics have been chosen to be compatible with the observation of our laboratory binary star , characterized by a angular separation . notice that a redundant array , as proposed in this paper , induces a periodic psf , as reported in figure [ redond_champ ] . equation [ fv ] gives the field - of - view limits required to avoid aliasing around the zero interferometric order . the area between two consecutive main lobes is called the clear field ( cf fig . [ redond_champ ] ) . in the case of polychromatic sources , only the zero interferometric order is achromatic and consequently appropriate for direct imaging . during the implementation of our breadboard , we faced a compactness constraint due to the mechanical dimensions of the launching assemblies . the focusing lenses are small enough to allow a compact linear array , but the 3 - axis nanopositioning mountings , required for the fine adjustment of the fibre inputs , can not be aligned with a pitch smaller than 25 mm . consequently , the real tht telescope array has been designed in a two - dimensional configuration , but dedicated to observing a one - dimensional object along the vertical axis . the projection of the telescope positions on the vertical axis matches the expected linear array design , as shown in fig . [ array_telescope ] . + in order to image an unbalanced binary system with a very high dynamics , the use of a set of suitable apodization coefficients will optimize the psf dynamics with a low decay of the resolution ( cf fig . [ psf_ideal ] ) .
on each aperture , the control of the intensity is achieved by means of mobile shutters ( cf fig . [ photo_reseau ] ) actuated with a high position accuracy according to the theoretical optimum distribution . the psf dynamics is defined as the ratio between the maximum intensity on the psf and the highest intensity over the clear field ( cf fig . [ def_dynamique ] ) . the last part of the system is the optical field combiner mixing the contributions of the 8 telescopes ( cf fig . [ photo_interferometre ] ) . each interferometric arm includes a fibre delay line and an optical path modulator . the 8 fibre arms have been cut with a few mm accuracy in order to reduce the optical path differences as much as possible . the fibre delay lines allow adjusting the optical path with an accuracy of a few . the fibre optical path modulators temporally reproduce the linear phase variation observed in the image plane of `` classical '' hypertelescopes ( cf eq . [ phase_equa ] ) and allow the fine compensation of the residual optical path . a national instruments virtual instrument and a voltage generator have been developed to drive the piezoelectric actuators of the fibre optical path modulators . a set of high voltage amplifiers allows reaching a proper range of command voltages . experimentally , the optical path control is achieved with an optical path sensitivity over . + the optical fields emerging from the 8 interferometric arms reach an 8 - to - one polarization maintaining ( pm ) coupler to achieve the interferometric mixing . at the output , an ingaas photodiode detects the interferometric signal , which is recorded through a standard 12 - bit adc voltage acquisition system . in order to calibrate the imaging properties of our instrument , the first investigation has been to characterize its point spread function . for this purpose , the telescope array has been illuminated by a plane wave using only a single point - like source ( i.e. , switching on only 1 fibre of the object ) . a servo control system being not available during this experiment , the cophasing of the 8 telescopes has been achieved manually , taking advantage of the relative stability of our instrument . the voltage offsets of the electronic commands driving the optical path modulators have been adjusted in order to increase the dynamics as much as possible . this is possible only if any perturbation of the instrument during the cophasing process is carefully avoided . for this purpose , the instrument has been acoustically and thermally baffled . the first experimental results are shown in fig . [ experimental_psf ] . they are consistent with the theoretical psf simulation proposed in fig . [ psf_ideal ] . the second main feature to be analyzed is the influence of the apodization process , as shown in fig . [ experimental_psf ] . using the apodization coefficients computed in allows a considerable reduction of the ripple in the clear field all around the psf peak . the best dynamics experimentally observed with our breadboard is in the range of 300 , but as shown in figure [ dyn_evol ] , these results are not stable over time , probably due to environmental perturbations such as acoustic waves , vibrations or temperature fluctuations . nevertheless , such results have been obtained reliably and give a first indication of the tht imaging potential capabilities .
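the dynamics figure quoted above can be evaluated directly on a sampled psf . the sketch below builds an idealized 8 - beam psf over one period , using the amplitude coefficients of table [ tableau_apodisation ] below , and compares it with the uniform ( non - apodized ) case ; the width of the exclusion window around the main lobe is an assumption .

```python
import numpy as np

t = np.linspace(0.0, 1.0, 4096, endpoint=False)  # one period of the PSF
x = np.arange(8)
apod = np.array([0.104, 0.364, 0.724, 1.0,       # amplitude coefficients of
                 1.0, 0.724, 0.364, 0.104])      # table [tableau_apodisation]

def psf(amplitudes):
    return np.abs(sum(a * np.exp(2j * np.pi * k * t)
                      for a, k in zip(amplitudes, x))) ** 2

def dynamics(intensity, lobe_width=0.05):
    """PSF maximum over the highest intensity in the clear field;
    the main lobes (peaked at t = 0 and t = 1 here) are excluded
    over an assumed window of +/- lobe_width."""
    clear = intensity[(t > lobe_width) & (t < 1.0 - lobe_width)]
    return intensity.max() / clear.max()

print(dynamics(psf(np.ones(8))), dynamics(psf(apod)))  # uniform vs apodized
```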
as we will demonstrate in the next paragraph , the major limitation on the dynamics results from the manual cophasing , which remains of poor quality when compared with a servo controlled system . the main effect is a reduction of the dynamics , which is particularly clear in the apodized configuration . these first tests demonstrate the possibility to image a point - like source on axis . the phenomena able to reduce the psf dynamics can be listed as follows : * the residual optical path difference between the 8 interferometric arms . * the differential polarization properties between the 8 fields to be mixed , due to differential birefringent properties of the interferometric arms . * the difference between the theoretical results reported in and the apodization really applied in the experiment . * the differential dispersion properties between the 8 interferometric arms . this last point is not relevant in our experiment due to the use of a quasi - monochromatic source . + the influence of the third point has been analyzed by modeling the effect of the experimental uncertainties on the apodization coefficients . according to the errors experimentally observed , the coefficients are randomly changed in a numerical simulation and the final psf is computed . table [ tableau_apodisation ] summarizes the experimental uncertainties observed in our test bench . this statistical distribution of the coefficients has been used as input for our numerical simulations . a set of psfs has been computed by varying the coefficients , allowing a histogram of the rejection ratio to be plotted ( fig . [ erreur_apod ] ) . this curve demonstrates that the probability to get a dynamics lower than 1500 is practically zero . this dynamics is far higher than the observed value . accordingly , the experimental uncertainty on the apodization coefficients can not explain the observed experimental rejection ratio , and can not be identified as responsible for the 300 experimental dynamics .

table [ tableau_apodisation ] :
i     | coeff | normalized intensity | intensity range
1 - 8 | 0.104 | 0.011                | [ 0.005 ; 0.016 ]
2 - 7 | 0.364 | 0.133                | [ 0.122 ; 0.143 ]
3 - 6 | 0.724 | 0.525                | [ 0.496 ; 0.553 ]
4 - 5 | 1     | 1.000                | [ 0.985 ; 1.015 ]

in order to carry on our investigation of the experimental origin of the limited dynamics , the next point under study is the polarization behaviour . the optical fibre arms of the interferometer are made of polarization - maintaining monomode fibres . this choice results from the necessity to avoid any loss of the polarization coherence between the optical fields to be interferometrically mixed . there can be two problems : * the input linear polarization can be misaligned with the neutral axis used at the input of the waveguides . * crosstalk between the two polarization modes corresponding to the two neutral axes . in both cases , it results in an incoherent background intensity propagating along the undesirable polarization axis . this parasitic intensity leads to a background limit reducing the dynamics to be reached by the instrument . in our experiment , care has been taken to limit this problem by means of polarizers placed at the entrance pupil and after the 8 - to - 1 optical coupler . our breadboard has been characterized by measuring the contrast for each telescope couple . all the contrasts remain better than 0.999 , as reported in table [ tableau1 ] .
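the monte - carlo procedure described above ( and reused below for the polarization crosstalk ) can be sketched as follows , reusing the psf and dynamics helpers of the previous sketch : each normalized intensity is drawn uniformly within its measured range from table [ tableau_apodisation ] , converted to an amplitude , and the resulting dynamics is histogrammed ( cf fig . [ erreur_apod ] ) . the uniform draw and the sample count are assumptions of this sketch .

```python
import numpy as np

rng = np.random.default_rng(0)
lo = np.array([0.005, 0.122, 0.496, 0.985])   # measured intensity ranges
hi = np.array([0.016, 0.143, 0.553, 1.015])   # from table [tableau_apodisation]

samples = []
for _ in range(10000):
    inten = rng.uniform(lo, hi)               # perturbed normalized intensities
    amp = np.sqrt(np.r_[inten, inten[::-1]])  # symmetric amplitude coefficients
    samples.append(dynamics(psf(amp)))        # helpers defined in the sketch above

hist, edges = np.histogram(samples, bins=50)  # rejection-ratio histogram
```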
according to these experimental data , it is possible to compute a global psf introducing a random polarization crosstalk for each interferometric arm . this results in a histogram of the rejection ratio to be reached under such a perturbation ( fig . [ erreur_contrast ] ) . this simulation shows that the probability to get a rejection ratio lower than 500 is practically zero . accordingly , the polarization behaviour can not be identified as responsible for the 300 dynamics observed in our experiment . consequently , the 300 dynamics can only be explained by the residual differential optical path fluctuations between the 8 interferometric arms .

table [ tableau1 ] :
frequencies | contrast
1 | 0.9996 ( )
2 | 0.9995 ( )
3 | 0.9993 ( )
4 | 0.9994 ( )
5 | 0.9992 ( )
6 | 0.9992 ( )
7 | 0.9991 ( )

the second stage of our experimental investigation has been to check the invariance of this point spread function over the imaging field of view . as a preliminary test , we moved the point - like source in the object plane by shifting the optical fibre tip using manual translations . as shown in figure [ psf_shift ] , the psf was simply shifted , with a constant width of the main lobe . the intensity in the clear field shows some fluctuations , leading to dynamics losses . these imperfections are due to environmental parameter fluctuations resulting from the lack of a real time cophasing system . this demonstrates that our instrument is linear and translation invariant , and therefore suitable for imaging . + in order to illustrate the imaging capabilities of our instrument , we have used a binary star as source . the two fibre tips of the laboratory object receive the light from two different lasers operating at the same wavelength . adjusting the driving currents allows simulating any unbalanced binary source .
in the absence of a real time cophasing system , we used a secondary star intensity higher than the clear field intensity fluctuations . the experimental result is shown in fig . [ binaire ] and exhibits a main lobe surrounded by two side lobes . the bright lobe is related to the main star and the two others to its companion , with the zero interference order on the right and the first order on the left of this figure . notice that in the case of polychromatic sources , only the zero interferometric order would be used , as it is achromatic . the source characteristics , with a separation and an intensity ratio , are correctly recovered in the image . the results are consistent with the theoretical convolution between the point spread function and the object intensity distribution according to eq . [ equa_psf ] . consequently , the imaging capabilities of the instrument have been demonstrated over its clear field . this first experimental study was mainly dedicated to the demonstration of the imaging capabilities of a temporal hypertelescope on a one - dimensional object . due to mechanical constraints , the telescope array has been designed in a 2d configuration . even if this array is not absolutely optimized for two - dimensional imaging , it is possible to test the 2d psf to demonstrate the operation of the 2d imaging process . this picture can be obtained row by row , like in a tv raster scan , using the phase modulations reported in equation [ phase_equa_temporal ] . a set of phase shifts is sent through the driving voltage electronics , taking into account the projection of the telescope baseline along the direction : this equation describes the phase shift that could be observed for telescope in the image plane along the vertical axis in a spatial configuration . as demonstrated in , it is possible this way to get 2d information using a temporal hypertelescope . we tested this process by recording the 2d psf using our experimental test bench . the theoretical and experimental patterns are reported in fig . [ psf_2d ] . the array cophasing was manually achieved during the first scan on the central row ( ) . after this initialization process , it is possible to scan the whole field of view , taking advantage of the relative stability of the telescope array . this full operation lasts about 20 seconds to get 50 rows over the 2d psf . in order to quantify the discrepancies between the ideal ( figure [ psf_2d ] top ) and experimental ( figure [ psf_2d ] middle ) 2d psfs , figure [ psf_2d ] bottom reports the absolute value of the difference between the two images . the significant fluctuations are probably due to the lack of an active phase control system during the 2d scan . a fine study of this limitation will be achieved after the implementation of a servo control system on our test bench . nevertheless , these results are promising and could be enhanced with a convenient cophasing system . the aim of this study was to experimentally demonstrate the operation of a temporal hypertelescope . this experimental demonstration was achieved by testing the instrument point spread function as a first stage . the second experiment demonstrated the possibility to observe an unbalanced binary star with a 20 flux ratio .
during this first test , the psf dynamics was limited to 300 due to the lack of a servo control system . the next stage of this study will be the design and implementation of a servo control system to accurately cophase our telescope array , in order to enhance the dynamics of our instrument . this study has been financially supported by cnes , insu and thales alenia space in the frame of different contracts . our thanks go to emmanuelle abbott for her help in writing this paper and alain dexet for the fabrication of the mechanical parts of our experiment . alleman j.j . , reynaud f. , connes p. , 1995 , applied optics , 34 , 2284 armand p. , benoist j. , bousquet e. , delage l. , olivier s. , reynaud f. , 2009 , european j. of operational res . , 195 , 519 bracewell r.n . , 1978 , nature , 274 , 780 delage l. , reynaud f. , lannes a. , 2000 , applied optics , 39 , 6421 huss g. , reynaud f. , delage l. , 2001 , optics communications , 196 , 55 labeyrie a. , 1996 , a&as , 118 , 517 lawson r. , 1997 , long baseline stellar interferometry , spie milestone series , vol ms 139 , bellingham leger a. , mariotti j.m . , mennesson b. , ollivier m. , puget j.l . , rouan d. , schneider j. , 1996 , icarus , 123 , 249 olivier s. , delage l. , reynaud f. , et al . , 2007 , applied optics , 46 , 834 olivier s. , delage l. , reynaud f. , collomb v. , persegol d. , 2005 , j. opt . a : pure appl . opt . , 7 , 660 patru f. , mourard d. , clausse j.m . , et al . , 2008 , a&a , 477 , 345 perrin g. , woillez j. , lai o. , et al . , 2006 , science , 311 , 194 reynaud f. , delage l. , 2007 , a&a , 465 , 1093 simohamed l.m . , reynaud f. , 1997 , pure and applied optics , 6 , 37 vakili f. , aristidi e. , abe l. , lopez b. , 2004 , a&a , 421 , 147
|
in this paper , we report the first experimental demonstration of a temporal hypertelescope ( tht ) . our breadboard , including 8 telescopes , is first tested in a manual cophasing configuration on a 1d object . the point spread function ( psf ) is measured and exhibits a dynamics in the range of 300 . a quantitative analysis of the potential biases demonstrates that this limitation is related to the residual phase fluctuations on each interferometric arm . secondly , an unbalanced binary star is imaged , demonstrating the imaging capability of the tht . in addition , a 2d psf is recorded , even if the telescope array is not optimized for this purpose . instrumentation : high angular resolution - instrumentation : interferometers
|
the analysis of human epithelial type 2 cells processed by the indirect immunofluorescence protocol is the standard method of identifying antinuclear autoantibodies ( ana ) , and consequently of detecting autoimmune diseases such as systemic lupus erythematosus ( sle ) , rheumatoid arthritis , multiple sclerosis and diabetes . however , current methods require at least one expert to visually analyze the distributions of antibodies across multiple images . usually this analysis is performed through a microscope and comprises three steps : i ) detection of at least one mitotic cell , ii ) evaluation of the fluorescence signal intensity ( negative in the absence of fluorescence , else intermediate or positive ) , iii ) determination of the cells ' classification according to the auto - antibody type distribution . these multi - step manual analyses are tedious , time consuming , subjective and have a high inter-/intra - observer variability ( up to , as reported in ) . moreover , the increasing number of patients and the limited number of experts make this impractical to scale to a large number of clinics . therefore , a stable and effective automatic computer - aided diagnosis ( cad ) system is needed . + fortunately , cell classification is now a well - established task , as the advent of high - throughput imaging techniques has introduced the need for a robust system to automatically analyze thousands of cell images . typically , most classification systems consist of two cascaded modules : one module that extracts useful features from a cell or a group of cells , followed by a second module that classifies the cells or the group using the extracted features . unfortunately , the range of image qualities as well as of classes to predict ( see figs . [ figure_cells0 ] , [ figure_cells1 ] and [ figure_cells2 ] ) makes cell classification a particularly complicated task . + in this paper , we address these imaging issues by introducing new texture feature extraction methods . these methods are robust to quality variations ( particularly noise ) and able to efficiently describe a wide variety of classes . this is accomplished by introducing fuzzy logic before the filling of the statistical matrices . in order to demonstrate that our work can be used for different cytology purposes , we use three datasets composed of iff images , which contain different image qualities as well as classes to predict . + before delving into the paper , we first describe the three representative tasks from quantitative image - based cell biology . next we outline a typical cell classification system ( section [ section_classification ] ) and present a review of the different statistical matrices ( section [ section_statisticalmatrices ] ) . then we present our work : a fuzzy generalization of existing statistical matrices ( section [ subsection_previousfuzzy ] ) , as well as the fuzzy zone definition and computation ( section [ subsection_fuzzyzones ] ) . finally , the proposed matrices are evaluated on three tasks for classifying cells and their structures ( section [ section_results ] ) .
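as a concrete illustration of the two cascaded modules mentioned above , a minimal pipeline is sketched below : a crisp gray - level size zone matrix ( the szm reviewed later in the paper ) serves as the feature extractor and feeds an off - the - shelf classifier . the quantization , the per - gray - level zone - count summary and the choice of classifier are illustrative assumptions , not the descriptors actually used in the paper .

```python
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def szm(img, n_levels=8):
    """Module 1 (crisp variant): entry (g, s-1) of the size zone
    matrix counts the flat connected zones of gray level g of size s.
    Assumes an integer-valued (e.g. uint8) image."""
    q = (img.astype(np.int64) * n_levels) // (int(img.max()) + 1)
    M = np.zeros((n_levels, img.size))
    for g in range(n_levels):
        mask = q == g
        labels, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, index=list(range(1, n + 1)))
        for s in np.asarray(sizes, dtype=int):
            M[g, s - 1] += 1
    return M

def features(img):
    return szm(img).sum(axis=1)  # toy summary: zone count per gray level

clf = RandomForestClassifier()   # module 2: any standard classifier
# clf.fit(np.stack([features(im) for im in train_images]), train_labels)
```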
+ * icpr 2012 hep-2 cells classification contest - * this widely used dataset is composed of cells manually segmented from iff images and annotated by experts . each image contains many cells ( min , max , with average dimensions of about pixels ) of a unique type , which can be one of six imbalanced classes ( see fig . [ figure_cells0 ] ) : centromere ( ce ) , uniform discrete speckles located throughout the entire nucleus ; homogeneous ( ho ) , diffuse staining in the entire nucleus ; coarse speckles ( cs ) , densely distributed , variously sized speckles , generally associated with larger speckles ; fine speckles ( fs ) , fine speckled staining in a uniform distribution , sometimes very dense and almost homogeneous ; nucleolar ( nu ) , less than six large coarse speckled stainings within the nucleus ; cytoplasmic ( cy ) , fine dense granular to homogeneous staining or cloudy pattern , covering part or the whole of the cytoplasm . all the fuzzy zones can be characterized and used to fill an szm ( or an rlm ) : for an image and a fuzzy zone , the size is computed and the matrix entry is increased by . the fuzzy zone computation allows introducing the fuzziness at the image level instead of at the matrix filling level . such new fuzzy szm and rlm are denoted _ fuzzyszm _ and _ fuzzyrlm _ , respectively . it is no longer required to reduce the number of gray levels , and therefore the matrix 's height is equal to the number of gray levels in the image . the algorithm required to find the fuzzy zones has a non - linear complexity that depends on the fuzzy parameter , and consequently the fuzzyszm / fuzzyrlm filling is much more time consuming ( by at least a factor of ) than for a classical szm / rlm . + this fuzzy version using fuzzy zones fills a matrix with a fixed height equal to the number of gray levels in the image . therefore , the multiple gray levels principle described at the end of section [ section_statisticalmatrices ] no longer makes sense . however , the fuzzyszm requires a fixed fuzzy parameter , so a multiple fuzzy szm can be created : the same matrix is filled using different fuzzy parameters . this section presents the results obtained on the three different datasets introduced in section [ sec_intro ] . all the classic statistical matrices are used with two gray level reduction algorithms ( linear and histogram ) and six quantizations ( dyadic values from to ) , and our new fuzzy statistical matrices were tested with a linear membership function and for different fuzzy parameters . for each method , only the best result is reported . the blue numbers indicate that the fuzzy version improves the performances of the corresponding basic algorithm ( com , rlm , szm ) , and the red numbers point out the optimal performance for each class . + in this section , the two classifiers used are : 1 ) a neural network of type perceptron , with one hidden layer containing neurons ( the best configuration experimentally found ) , trained with back - propagation , using individual adaptive learning rates and double momentums ; 2 ) random forests with times more trees than features .
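for reference , the two classifiers just described can be configured with scikit - learn roughly as below . the hidden - layer width and the trees - per - feature multiplier are elided in the text , so the values here are placeholders , and sklearn 's global adaptive learning rate is only a stand - in for the per - weight adaptive scheme with double momentums used in the paper .

```python
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

n_features = 32  # placeholder: length of the statistical-matrix feature vector

mlp = MLPClassifier(hidden_layer_sizes=(64,),   # one hidden layer, width assumed
                    solver="sgd", learning_rate="adaptive", momentum=0.9)
forest = RandomForestClassifier(n_estimators=10 * n_features)  # multiplier assumed

# validation, as described next:
# scores = cross_val_score(forest, X, y, cv=LeaveOneOut())
```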
each classifier is then validated using leave - one - out or k - fold cross validation . + in section [ subsection_previousfuzzy ] , we presented the frlm and fszm , extensions of the rlm and szm according to the com fuzzy principle described in . unfortunately , among the three datasets used in this paper , the frlm and fszm never improve the rlm and szm performances . moreover , the fcom slightly improved the com performances only once , with a gap for the nucleolar class in the icip 2013 contest dataset , using the random forests . consequently , the fcom , frlm and fszm results are not presented in this section , because of their lack of efficacy . the highly reliable and widely used leave - one - out cross - validation was performed over all images . as each image contains only one type of cell , two different result levels were reported : at the cell level the results try to predict each cell 's class , and at the image level the results try to predict the most frequently assigned cell class within that image . a six - class classifier was built using a neural network ( lower results were obtained with random forests and are therefore not reported ) , and the results are displayed in tables [ table_resultsicpr2012comp ] and [ table_resultsicpr2012 ] . the fuzzy versions ' results were compared with those obtained from the original versions , and with many methods from the state of the art ( table [ table_resultsicpr2012comp ] ) . these methods used different features ( such as local binary patterns , morphological , statistical , fisher tensors , moments , etc . ) and classifiers ( mainly support vector machines , but neural networks and random forests as well ) . contrary to our fuzzy matrices , which provide around features , all these methods use a huge number of features and often require a feature selection . but the results show that a few highly relevant features can outperform other methods using a large number of features , which demonstrates the efficacy of our fuzzy versions . in table [ table_resultsicpr2012 ] , the fuzzy versions provide high prediction rates for each class . the methods produce efficient features describing each class without any ambiguity . this result is confirmed at the image level in table [ table_resultsicpr2012comp ] , where we can observe that our classification is highly accurate . moreover , the regular versions ( rlm and szm ) provide results comparable to , but the fuzzy versions outperform for most of the classes at both the cell level and the image level . from the same tables , we can confirm that the prediction rates for the cytoplasmic and nucleolar classes are higher than for the other classes . this is due to these classes having typical textures different from the others : cytoplasmic cells are highly heterogeneous with a dark nucleus , and nucleolar cells have big homogeneous bright patterns . consequently , they appear atypical and are easier to classify . for the same reasons , the fine speckled class has among the lowest prediction rates , because slightly speckled cells may appear homogeneous and more speckled cells may appear coarse speckled . table [ table_resultsicip ] shows the results on the icip contest dataset , which contains highly noisy images . we can observe that the fuzzy versions using the fuzzy zones significantly improve the performances for most classes .
indeed , the fuzzyrlm systematically surpasses the rlm , and the fuzzyszm surpasses the szm in of the cases , at both the cell and image levels . moreover , except in only one case , the best result is provided by a fuzzy version . tables [ table_resultsrf_hpa ] and [ table_resultsmlp_hpa ] present the results obtained on the hpa dataset , which contains high quality ( staining , illumination , contrast , etc . ) images . the results are less dramatic , because the fuzzyrlm does not improve the performances in most cases . however , the fuzzyszm still performs as well as the szm , if not better . this paper presents different versions of fuzzy statistical matrices . the first version is a generalization of an existing technique , and introduces the fuzzification at the matrix filling level by spreading the information . the results presented in section [ section_results ] show that this method never improved the results for the three datasets used in this paper . even though this method was introduced to reduce the noise sensitivity , its results are lower than those of the classical algorithms . + next we defined the original fuzzy zone , which is not flat but has fuzzy values . the fuzzy zones are used to fill statistical matrices , and then to create fuzzy statistical matrices . these new matrices are powerful descriptors , particularly effective at characterizing highly noisy images . the efficiency is particularly significant for the fuzzy run length matrix , which systematically outperforms the regular run length matrix , on both noisy datasets and using different classification methods . moreover , the fuzzy size zone matrix using fuzzy zones also provides good characteristics on high quality images . in order to validate the results , we performed a comparison with the best methods from the state of the art , which provide results comparable to the regular matrices but are outperformed by the new fuzzy versions . + as a result , this paper demonstrates that the new fuzzy version using fuzzy zones generates reliable and effective fuzzy statistical matrices , and provides better results than the original fuzzy version . moreover , the new fuzzy statistical matrices systematically provide better results than the widely used co - occurrence matrix . therefore our methods can be used to improve the characterization of images , for example in medical imaging and for the delicate issue of describing cancerous cells or tumors . + the classic statistical matrices and the new fuzzy statistical matrices use different gray level reduction algorithms and quantizations . unfortunately , no fine - tuning method exists to automatically determine the optimal configuration . moreover , the experiments performed in this paper have shown that the performances vary greatly according to the dataset : no gray level reduction algorithm or quantization has proven to be more likely to provide better results . consequently , it is necessary to test as many configurations as possible in order to find the best results . this work was funded by nsf award 1027834 . any opinions , findings , conclusions or recommendations expressed in this publication are those of the authors and do not reflect the views of the nsf . karl egerer , dirk roggenbuck , rico hiemann , max - georg weyer , thomas büttner , boris radau , rosemarie krause , barbara lehmann , eugen feist , and gerd - rüdiger burmester . automated evaluation of autoantibodies on human epithelial-2 cells as an approach to standardize cell - based immunofluorescence tests . , 12(r40 ) , 2010 .
santa di cataldo , andrea bottino , ihtesham ul islam , tiago figueiredo vieira , and elisa ficarra . subclass discriminant analysis of morphological and textural features for hep-2 staining pattern classification . , 47:2389 - 2399 , 2014 . rico hiemann , thomas büttner , thorsten krieger , dirk roggenbuck , ulrich sack , and karsten conrad . challenges of automated screening and differentiation of non - organ specific autoantibodies on hep-2 cells . , 9(1):17 - 22 , september 2009 . nicola bizzaro , renato tozzoli , elio tonutti , anna piazza , fabio manoni , anna ghirardello , danila bassetti , danilo villalta , marco pradella , and paolo rizzotti . variability between methods to determine ana , anti - dsdna and anti - ena autoantibodies : a collaborative study with the biomedical industry . , 219(1 - 2):99 - 107 , october 1998 . anne e. carpenter , thouis r. jones , michael r. lamprecht , colin clarke , in h. kang , ola friman , david a. guertin , joo h. chang , robert a. lindquist , jason moffat , polina golland , and david m. sabatini . cellprofiler : image analysis software for identifying and quantifying cell phenotypes . , 7:r100 , 2006 . p. perner , h. perner , and b. müller . texture classification based on random sets and its application to hep-2 cells . in _ ieee international conference on image processing ( icip ) _ , volume 2 , pages 406 - 411 , 2002 . yan yang , arnold wiliem , azadeh alavi , brian c. lovell , and peter hobson . visual learning and classification of human epithelial type 2 cell images through spontaneous activity patterns . , 47:2325 - 2337 , 2014 . mathias uhlén , per oksvold , linn fagerberg , emma lundberg , kalle jonasson , mattias forsberg , martin zwahlen , caroline kampf , kenneth wester , sophia hober , henrik wernerus , lisa björling , and fredrik pontén . towards a knowledge - based human protein atlas . , 28:1248 - 1250 , 2010 . jieyue li , justin y. newberg , mathias uhlén , emma lundberg , and robert f. murphy . automated analysis and reannotation of subcellular locations in confocal images from the human protein atlas . , 7(11 ) , november 2012 . xu - ying liu , jianxin wu , and zhi - hua zhou . exploratory under - sampling for class - imbalance learning . in _ ieee international conference on data mining _ , pages 965 - 969 , washington , dc , usa , december 2006 . ieee computer society . simon marcellin , djamel - abdelkader zighed , and gilbert ritschard . an asymmetric entropy measure for decision trees . in _ information processing and management of uncertainty in knowledge - based systems ( ipmu ) _ , pages 1292 - 1299 , 2006 . jun wang , xiaobo zhou , pamela l. bradley , shih - fu chang , norbert perrimon , and steffen t.c . wong . cellular phenotype recognition for high - content rnai genome - wide screening . , 13(1):29 - 39 , january 2008 . larry s. davis , m. clearman , and j.k . aggarwal . a comparative texture classification study based on generalized cooccurrence matrices . in _ ieee conference on decision and control _ , miami fl , december 1979 . h. schulerud , jens michael carstensen , and h.e . multiresolution texture analysis of four classes of mice liver cells using different cell cluster representations . in _ the 9th scandinavian conference on image analysis _ , pages 121 - 129 , uppsala , sweden , 1995 . guillaume thibault , bernard fertil , claire navarro , sandrine pereira , pierre cau , nicolas levy , jean sequeira , and jean - luc mari . texture indexes and gray level size zone matrix . application to cell nuclei classification .
in _ pattern recognition and information processing ( prip ) _ , pages 140 - 145 , minsk , belarus , may 2009 . guillaume thibault , bernard fertil , claire navarro , sandrine pereira , pierre cau , nicolas levy , jean sequeira , and jean - luc mari . shape and texture indexes : application to cell nuclei classification . , 27(1 ) , 2013 . guillaume thibault , jesus angulo , and fernand meyer . advanced statistical matrices for texture characterization : application to dna chromatin and microtubule network classification . in _ ieee international conference on image processing ( icip ) _ , pages 53 - 56 , september 2011 . plagianakos , g.d . magoulas , and m.n . vrahatis . learning rate adaptation in stochastic gradient descent . in pardalos ( ed . ) , _ advances in convex analysis and global optimisation _ , pages 433 - 444 . kluwer academic publishers , 2001 . tomoharu kiyuna , akira saito , elizabeth kerr , and wendy bickmore . characterization of chromatin texture by contour complexity for cancer cell classification . in _ ieee international conference on bioinformatics and bioengineering _ , pages 1 - 6 , october 2008 .
|
in this paper , we generalize image ( texture ) statistical descriptors and propose algorithms that improve their efficacy . recently , a new method showed how the popular co - occurrence matrix ( com ) can be modified into a fuzzy version ( fcom ) which is more effective and robust to noise . here , we introduce new fuzzy versions of two additional higher order statistical matrices : the run length matrix ( rlm ) and the size zone matrix ( szm ) . we define the fuzzy zones and propose an efficient algorithm to compute the descriptors . we demonstrate the advantage of the proposed improvements over several state - of - the - art methods on three tasks from quantitative cell biology : analyzing and classifying human epithelial type 2 ( hep-2 ) cells using indirect immunofluorescence protocol ( iff ) . cell texture characterization and classification , structural statistical matrices , gray level size zone matrix ( szm ) , fuzzy statistical matrices , quantitative cytology .
|
mobile multi - hop ad hoc networks play a crucial role in setting up a network on the fly , in times of utmost urgency , where the deployment of a network infrastructure is not practical due to both time and economic constraints . industrial instrumentation , personal communication , inter - vehicular networking , law enforcement operations , battlefield communications , disaster recovery situations and mobile internet access are a few examples to cite . in a mobile ad hoc network ( manet ) , communication between nodes situated beyond their radio range is also possible . for this type of communication , the nodes have to take help from other relay nodes which have overlapping radio coverage . here the communication is made possible by knowing a path or route between the source and destination nodes . a transmission schedule should also be known for the route . finding the optimal route and transmission schedule shall be referred to as ` route discovery ' . in a static scenario , route discovery is to be initiated only at the beginning . in a mobile scenario , links may break or be created ( as nodes move into and out of communication range ) . we are motivated by the question : when to initiate route and schedule discovery in a manet ? a discovery entails some cost ; so one would not like to initiate discovery too often . on the other hand , not discovering reasonably often entails the risk of being stuck with a suboptimal route and/or schedule , which hurts end - to - end throughput . our interest in this question stems from the need to assess how policies based on simple heuristics perform in comparison with policies that are optimal in some precisely defined sense . if it turns out that the simple heuristic is far from optimal , then the search for improved heuristics must continue . else , it is reassuring to know that the heuristic performs nearly as well as it can . in our earlier work , we had studied this problem in the framework of markov decision theory . a simple one - dimensional network was considered , a simple mobility model led to a controlled markov chain , and our interest was in obtaining the best route and schedule discovery policy . the resulting problem was solved numerically , using the _ value iteration algorithm _ ( via ) . however , as pointed out in the earlier work , the via approach led to a huge computational burden . computing the optimal policy required knowing the present ` state ' ( often impossible in practice ) , as well as significant computation . therefore , a simple and suboptimal policy was considered : the _ threshold policy _ . whenever the end - to - end throughput dropped below a threshold , route and schedule discovery was initiated . while the idea of a threshold policy is straightforward , the issue was the threshold value to use . in the earlier work , the best threshold was obtained by an exhaustive search within a finite set of possible thresholds : the one resulting in the best performance was found in this way . in this paper , we address this specific question : can we arrive at a simple rule for setting the threshold , given the parameters of the system ( number of relay nodes , number of positions , cost parameter , mobility parameters ) ? even though the literature on manets is extensive , the issue of capturing the cost of route discovery in a formal framework does not seem to have received much attention . in this paper our contributions are : + i.
providing a rule that yields the threshold value for use in the threshold policy when deciding whether or not to do route and schedule discovery : the threshold value is computed using the configuration information and ideal scheduling ; + ii . a study of the scheduling and end - to - end throughput characteristics , which provides many insights into a linear ad hoc network . we also point out that our results remain valid in a slightly different mobility model ; this model is a first step towards an ` open ' network in which existing relay nodes can leave and/or new relay nodes can join the network . + the boundary condition is relaxed and modeled as a wrap - around condition , making an open - ended network . the mobility here need not be symmetrical . it is shown that the characteristics of the network do not change . our results indicate that the performance of the proposed rule is no worse than 7% below that of the best possible threshold policy , and no worse than 15% below the optimal , when the route discovery cost is low . in the following section [ sec : relatedwork ] , the related work for this paper is discussed . in section [ sec : system - model ] , the system model is described in detail . section [ sec : recapitulation ] discusses our previous work on finding the long - run average throughput , studied in the framework of markov decision theory . section [ sec : threshold - value ] discusses the derivation of the threshold value for the simple threshold - based heuristic and compares its performance with respect to the throughput - optimal policy . in section [ sec : openendedboundary ] , we discuss the relaxation of the boundary conditions to make an open network . this network may cater to a scenario which can be seen as a small area of concern in a large linear system . we conclude in section [ sec : conclusions ] . gupta and kumar studied the throughput of static wireless networks , . they considered the protocol model and the physical model for studying the impact of interfering transmissions on the snr . they observed that in a network comprising identical nodes , each of which communicates with another node , the throughput per node under the protocol model is of order if the placement of nodes is random . the throughput per node becomes if the node placement and communication patterns are optimal . the latter result is also valid for the physical model , as explained intuitively by . while the overall one - hop throughput of the network grows as , the average path length grows as , which makes the throughput per node vary as . jain et al . used a linear programming approach to characterize networks with interference , . they used a conflict graph to model constraints on simultaneous transmissions . in that paper , approximation algorithms that solve both the end - to - end flow routing problem and the link scheduling problem near - optimally are proposed . in , it is shown that the problem of finding the optimal schedule , given the concurrency constraints , so as to maximize the network throughput is np - hard . in , grossglauser and tse introduced the mobility of nodes into the static model presented by . many authors have discussed the route discovery process , , but none suggested when to initiate route discovery , as their work relates to the static case , nor did they suggest how frequently to initiate the discovery process in the case of a mobile network .
In one work, the authors proposed a modified AODV that uses the concept of a reliability distance that changes dynamically. Peng Fu et al. suggested a distributed route discovery method that uses reinforcement learning. Another work uses a fuzzy controller in every node: when the route-request packet reaches its destination, the destination evaluates the performance of all candidate routes and arranges them in order of preference. Another paper discusses route discovery initiation aimed at reducing the frequency of flooded requests by elongating the link duration of the selected paths. Yet another suggests storing multiple paths as a route rather than a single path. In this paper it is assumed that such techniques for reducing the route discovery cost are applied first; the remaining discovery cost is then represented as a fraction of the discrete time slot, as explained in Section [sec:system-model]. We suggest a simple rule for setting the threshold value of the threshold policy, given the parameters of the system (number of relay nodes, number of positions, route discovery cost, mobility level), since the threshold policy is easy to implement. We consider the same network as in our earlier work, shown in Fig. [fig:model]: a graph $G = (V, E)$, where $V$ is the set of vertices and $E$ is the set of links. While the source node $S$ and destination node $D$ are assumed to be fixed at the two ends of a linear grid, the relay nodes are mobile and can occupy any position between the source and destination. The number of possible positions that the relay nodes can occupy is $N$. We consider a bounded area, i.e., the number of relay nodes is $M$, assumed constant over time. In Fig. [fig:model], the values of $N$ and $M$ are 4 and 3, respectively. The number of relay nodes can be larger or smaller than the number of grid positions, and we consider different node densities expressed as the ratio of $M$ to $N$. We consider a discrete-time slotted system. At the beginning of a time slot, a node can move one grid position to the left or to the right with probability $p$ and $q$, respectively, or stay at the same position with probability $1 - p - q$; a node does not change position during a time slot. If a node finds itself at a boundary at the beginning of a time slot, it waits at the boundary. We model the mobility of the network by specifying the duration of each time slot and the probabilities with which a node moves left or right; note that short (long) time slots correspond to a network with high (low) mobility. All nodes transmit and receive over a common channel. The transmission range is assumed to be two unit lengths of the linear grid. The link capacity, or data rate, is 1 (normalized) if the nodes are at neighboring positions; the rate reduces to 1/2 if there is one vacant position in between; and the rate is 0 if there are at least two consecutive vacant positions in between. We assume that the interference range exceeds the transmission range: if a node transmits, it interferes with any other node trying to transmit during the same time slot whose separation from it is less than the interference range.
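As an illustration of the mobility and rate model just described, the following Python sketch simulates the slotted random walk with stuck-at-boundary behavior and evaluates per-hop link rates along the straight route through all relays. This is a minimal sketch under the stated assumptions, not the authors' simulation code; the helper `link_rate` encodes the 1, 1/2, 0 rate rule, and the parameter names (N, M, p, q) follow the notation above.

```python
import random

# Grid positions 1..N for relays; source at 0, destination at N + 1.
N, M = 4, 3          # number of positions and relay nodes (as in Fig. [fig:model])
p, q = 0.2, 0.2      # per-slot probabilities of moving left / right

def step(positions):
    """Advance all relay nodes by one slot (stuck-at-boundary model)."""
    new = []
    for x in positions:
        r = random.random()
        if r < p:
            x = max(1, x - 1)      # wait at the left boundary
        elif r < p + q:
            x = min(N, x + 1)      # wait at the right boundary
        new.append(x)
    return new

def link_rate(d):
    """Rate of a hop spanning distance d grid units (transmission range 2)."""
    if d == 0:
        return 1.0                 # co-located nodes
    if d == 1:
        return 1.0
    if d == 2:
        return 0.5
    return 0.0

positions = [random.randint(1, N) for _ in range(M)]
for t in range(5):
    route = [0] + sorted(positions) + [N + 1]
    hops = [b - a for a, b in zip(route, route[1:])]
    print(t, sorted(positions), [link_rate(d) for d in hops])
    positions = step(positions)
```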
Moreover, since any communication between two nodes requires an exchange of packets by both the transmitter and the receiver to set up the link, neither node of a link should be within the interference range of another active communication link at the same time. Among a set of links in which at least one endpoint of each link is within the interference range of the others, only one link can be active at a time. Hence, links whose two endpoints are all farther than the interference range from the endpoints of other links can be active simultaneously. For end-to-end communication through these links, the links have to be scheduled, i.e., it must be decided when and for what fraction of time each link is active while satisfying the criterion just discussed. We model the cost associated with route discovery as follows. In every slot in which route discovery is initiated, we assume that no data can be transmitted for a fraction $\phi$ of the slot. Suppose that route and schedule discovery takes no more than $\delta$ time units, where $\delta$ is less than a slot duration; then $\phi$ is the ratio of $\delta$ to the slot duration. Clearly, as $\phi$ moves closer to 1, the mobility level and the cost of route and schedule discovery increase. Correspondingly, as $\phi$ becomes smaller, the network is more and more static, and the cost of route and schedule discovery can be amortized by sending more data over the slot. In the limit as $\phi$ goes to zero, we have a static network where route and schedule discovery is done once at the beginning and data can be transferred forever; this reduces to the model considered in the static-network literature. Just as $\phi$ is treated as a cost, the number of bits transferred over the slot duration behaves like a reward. Suppose that an end-to-end transmission rate $\Theta$ can be supported over the duration of the slot for the chosen route and transmission schedule. Then, taking the slot duration as the unit of time, the net reward over the slot is $(1-\phi)\Theta$ if route discovery is done; when route discovery is not done, the net reward is simply $\Theta$. Clearly, the net reward corresponds to the number of data bits transmitted from the source to the destination during the slot. A route is defined as a sequence of grid positions, where positions 0 and $N+1$ indicate the positions of $S$ and $D$ respectively, and the intermediate entries indicate relay positions on the line, increasing from source to destination in this one-dimensional network. Given a route, it is possible that there is no node at a particular position; we still consider this a valid route, although the rate that can be supported on it is clearly zero. Similarly, it is possible that there are multiple nodes at a particular position; in this case, any of the nodes at that position can act as the relay node. Because our performance criterion depends on the transmission rate that corresponds to a route, the individual node identities do not matter. The problem of route and schedule discovery was solved in the framework of a Markov decision process (MDP) in our earlier work, specified by its five elements: (a) the state space, (b) the action space, (c) the conditional transition probability given the current state and action, (d) the one-step expected cost, and (e) the total cost criterion over a finite or infinite time horizon.
The details of each element can be found in that paper. Some of the results are reproduced in Fig. [fig:opt-thru-with-phi]: two networks, with different combinations of the number of relay nodes and positions, are considered for the optimal net throughput as a function of the cost parameter $\phi$. It is known that computing the optimal policy using VIA is a significant computational burden. Here we discuss a simplified policy used for obtaining a high net end-to-end throughput. The motivation for this policy is as follows: if the observed throughput in a slot is small, then the current route is likely to be poor. The policy is: if the observed throughput is smaller than the threshold, then perform route and schedule discovery; else continue with the currently known route and schedule. This is discussed in our earlier paper, and some of the results are reproduced here in Fig. [fig:efct-avg-tput-of-best-tput-pol-with-phik5n10-05]. The figure indicates that, with a proper choice of threshold value, there is an advantage in implementing the (best) threshold policy at very low implementation cost. In this paper our objective is to find a threshold value, computable in a simple manner, that is close to the best threshold value and gives throughput nearly as good. Given a configuration, and ignoring any discovery cost, we can ask: what is the best possible throughput in this configuration? We call this end-to-end throughput the 'raw' throughput of the configuration. Allowing the configuration to vary over all possibilities, we can compute an expected raw throughput; this is possible because we can find the steady-state probability of each configuration, as given in Section [sec:ssprobability-configuration]. No discovery cost means $\phi = 0$. Finally, we incorporate the role of the discovery cost $\phi$ in setting the threshold as follows: for higher $\phi$, other conditions being the same, the tendency should be to perform route discovery less often, which implies that as $\phi$ increases the threshold value should decrease. In other words, the threshold value is a decreasing function of $\phi$. We therefore propose the threshold rule of Eq. [eqn:threshold-value]: a multiplying factor, decreasing in $\phi$, times the expected raw throughput. It can easily be shown that the position of a single node performing a random walk with 'pause-and-restart (stuck-at-boundary)' boundary behavior in one dimension is uniformly distributed over all movement positions; this follows from the doubly stochastic nature of the state transition probability matrix. The probability that a node is at any given movement position is therefore $1/N$, and when $M$ such independent nodes are present, the steady-state probability of any ordered configuration of nodes is $1/N^M$. Since the first part of the state counts only 'how many nodes are at each movement position', we have to find how many ordered placements of the nodes correspond to one single unordered configuration. This reduces to the following combinatorial problem: let $(x_1, \ldots, x_N)$ with $\sum_i x_i = M$ be the numbers of nodes at the $N$ positions; the number of ordered placements is the multinomial coefficient $M!/(x_1! \cdots x_N!)$, so the steady-state probability of the configuration is $M!/(x_1! \cdots x_N!)\, N^{-M}$. For any configuration, there will be at most a certain number of routes possible according to the node positions and the transmission range. For example, with $N = 4$ as in Fig. [fig:model], if a route is represented by its sequence of relay positions, the possible routes are all increasing position sequences between $S$ and $D$, together with the null route.
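The steady-state configuration probabilities just derived are easy to compute and check numerically. The sketch below applies the multinomial rule above, enumerates all configurations, and verifies that the probabilities sum to one; the raw-throughput function passed to `exp_raw_throughput` is a stand-in for the paper's optimal-scheduling computation, not the actual one.

```python
from itertools import product
from math import factorial

N, M = 4, 3  # positions and relay nodes

def config_prob(x):
    """Steady-state probability of the unordered configuration x = (x_1..x_N)."""
    coef = factorial(M)
    for xi in x:
        coef //= factorial(xi)
    return coef / N**M

configs = [x for x in product(range(M + 1), repeat=N) if sum(x) == M]
assert abs(sum(config_prob(x) for x in configs) - 1.0) < 1e-12

def exp_raw_throughput(raw):
    """Expectation of a raw-throughput function over all configurations."""
    return sum(config_prob(x) * raw(x) for x in configs)

# Placeholder raw throughput: 1 only when every position is occupied.
print(exp_raw_throughput(lambda x: 1.0 if all(xi > 0 for xi in x) else 0.0))
```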
For each route, the best throughput can be computed using optimal scheduling, as discussed in Section [sec:opt-sch-ind-set]. The null route is added here just to cover the situation in which the system has no route at the beginning. This part of our derivation is similar to the corresponding derivation for a static network. Any communication between two nodes contends with any other node within the interference range of either endpoint if both are active simultaneously. This problem is approached using a 'conflict graph' whose vertices correspond to the links of the transmission graph $G$ of the network; an edge from a vertex to itself is not drawn. If an edge between two vertices exists, then the corresponding links in the transmission graph interfere with each other and hence cannot be active simultaneously. Links belonging to an independent set in the conflict graph can be scheduled simultaneously. Using maximal independent sets, the optimal scheduling problem can be expressed as a linear program; solving the linear program yields the link schedule, i.e., the fraction of time each link is active (a numerical sketch of this linear program is given after this section). The expected raw throughput computed using the above method is taken as the basis for the threshold value of the threshold policy, and simulations were performed for different system parameters. From the simulations it is observed that the proposed multiplying factor gives a good approximation in most cases; the related plots are shown in Figs. [fig:grexth5c1k5n9] and [fig:grexth5jk5n9_wrtoptim]. It can be observed from the plots that for most configurations, with small $\phi$, the performance is no worse than 7% below the best possible threshold value and no worse than 15% below the average throughput obtained by the optimal policy. As $\phi$ grows, it is observed that the threshold policy reduces to the route-break policy. (Figure captions: expected throughput when the threshold is set as per the rule of Eq. [eqn:threshold-value].) The boundary condition explained earlier yields a closed system: the stuck-at-boundary model necessarily keeps the number of relay nodes in the area of concern constant, since nodes are neither allowed to leave nor to join the network. But that model is meaningful only when the mobility is symmetric, i.e., $p = q$; otherwise, eventually all nodes drift to the leftmost or rightmost position with probability 1. To allow an asymmetric mobility model ($p \neq q$), another boundary model, the wrap-around model, is considered. To keep the number of relay nodes constant, an admittedly somewhat artificial assumption is made: if a node moves out of (into) the area at one end, then another node moves into (out of) the area at the other end. In this model, whenever a node reaches a boundary, instead of leaving the bounded area it is transferred to the other end for that time slot. The fraction of time a single node spends at any of the movement positions in a one-dimensional random walk with wrap-around boundary conditions is uniform even when the probabilities of moving left and right are unequal: since the state transition matrix is doubly stochastic, the steady-state distribution is $\pi = (1/N, \ldots, 1/N)$ even for non-uniform mobility.
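The optimal-scheduling linear program referred to above can be sketched as follows: enumerate the independent sets of the conflict graph, then maximize the end-to-end flow subject to each link carrying at least that flow while the set activations share one unit of time. This is a minimal sketch using scipy; the conflict graph and link rates below are illustrative inputs, not taken from the paper.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

rates = [1.0, 0.5, 1.0]                 # per-link rates along the route
conflicts = {(0, 1), (1, 2), (0, 2)}    # conflict-graph edges (pairs of links)

links = range(len(rates))
indep = [set(s) for r in range(1, len(rates) + 1)
         for s in combinations(links, r)
         if not any((a, b) in conflicts or (b, a) in conflicts
                    for a, b in combinations(s, 2))]

# Variables: activation times t_I for each independent set I, plus the flow f.
# Maximize f  s.t.  f - sum_{I: l in I} rate_l * t_I <= 0  for every link l,
#                   sum_I t_I <= 1,  t_I >= 0.
n = len(indep)
c = np.zeros(n + 1); c[-1] = -1.0                     # linprog minimizes, so use -f
A_ub = np.zeros((len(rates) + 1, n + 1))
for l in links:
    for j, I in enumerate(indep):
        if l in I:
            A_ub[l, j] = -rates[l]
    A_ub[l, -1] = 1.0
A_ub[-1, :n] = 1.0                                    # total time budget
b_ub = np.r_[np.zeros(len(rates)), 1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
print("optimal end-to-end throughput:", res.x[-1])
```

With the fully conflicting three-link example above, the optimum splits time in inverse proportion to the link rates, giving a throughput of 0.25.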
So the earlier analysis applies. The threshold policy is a practical method: it is both a simplified, less computationally intensive approach and one that can be implemented by measuring the throughput instead of knowing the state. A policy that measures the throughput systematically, together with occasional random measurements of changes in throughput, is moreover relieved of the requirement to know the mobility conditions (the time slot depends upon the mobility). The expected throughput is derived analytically in this paper, and the proposed multiplying factor is observed to be a good approximation in most cases. An analytical method is obtained for finding the steady-state probability of the different configurations based on the number of relay nodes at the different positions. It is observed from the simulations that, for most configurations, under the stated rule the performance of the proposed rule is no worse than 7% below that of the best threshold policy, and no worse than 15% below the optimal. As $\phi$ grows, the threshold policy reduces to the route-break policy. The boundary condition is modified to model an open network in which nodes can leave/join; this is modeled as a wrap-around boundary condition, under the assumption that whenever a node leaves (joins), another node simultaneously joins (leaves) at the other end of the boundary, keeping the number of nodes in the network constant. The behavior in this case is shown to be the same as under the stuck-at-boundary condition. Withdrawing this assumption, i.e., letting the number of relay nodes vary randomly so that the analysis can be modeled as a birth-death process, is future work that we are currently pursuing.

T. K. Patra, J. Kuri, and P. Nuggehalli, "On optimal performance in mobile ad hoc networks," 2nd International Conference on Communication Systems Software and Middleware (COMSWARE), pp. 1-8, January 2007.
Z. Qiang and Z. Hongbo, "An optimized AODV protocol in mobile ad hoc network," 4th International Conference on Wireless Communications, Networking and Mobile Computing (WiCOM '08), pp. 1-4, 2008.
P. Fu, J. Li, and D. Zhang, "Heuristic and distributed QoS route discovery for mobile ad hoc network," Proceedings of the Fifth International Conference on Computer and Information Technology (CIT'05), 2005.
E. Sakhaee, T. Taleb, A. Jamalipour, N. Kato, and Y. Nemoto, "A novel scheme to reduce control overhead and increase link duration in highly mobile ad hoc networks," Wireless Communications and Networking Conference, pp. 3975-3980, 2007.
F. L. Presti, "Joint congestion control, routing and media access control optimization via dual decomposition for ad hoc wireless network," ACM International Symposium on Modeling and Simulation of Wireless and Mobile Systems (MSWiM), October 2005.
|
Achieving optimal transmission throughput in multi-hop wireless data networks is a fundamental but hard problem. The situation is aggravated when nodes are mobile, and multi-rate operation further complicates the throughput analysis. In a mobile scenario, links may break or be created as nodes move within communication range. 'Route discovery', that is, finding the optimal route and transmission schedule, is therefore an important issue. Route discovery entails some cost, so one would not like to initiate discovery too often; on the other hand, not discovering reasonably often entails the risk of being stuck with a suboptimal route and/or schedule, which hurts end-to-end throughput. The formulation of this routing-decision problem in a one-dimensional mobile ad hoc network as a Markov decision process has been discussed in an earlier paper, where a threshold-policy heuristic was also considered but without a way to find the threshold. In this paper, we suggest a rule for setting the threshold, given the parameters of the system. We also point out that our results remain valid in a slightly different mobility model; this model is a first step towards an 'open' network in which existing relay nodes can leave and/or new relay nodes can join the network. Keywords: one-dimensional mobile ad hoc network (MANET), route discovery initiation, multi-rate system, optimal policy, threshold policy, combinatorial problem, optimal throughput.
|
Quantum mechanics allows two parties, Alice and Bob, to grow a random, secret bit string at a distance. In theory, quantum key distribution (QKD) is secure even if an eavesdropper Eve can do anything allowed by the currently known laws of nature. In practical QKD systems there will always be imperfections, and the security of QKD systems with a large variety of imperfections has been proved. Device-independent QKD tries to minimize the number of assumptions on the system, but unfortunately the few assumptions in its security proofs seem too strict to allow useful implementations with current technology. Several security loopholes caused by imperfections have been identified, and attacks have been proposed and in some cases implemented. With notable exceptions, most of the loopholes are caused by an insufficient model of the detectors. While several detection schemes exist, most implementations use avalanche photodiodes (APDs) gated in the time domain to avoid a high rate of dark counts. Gated means that the APD is single-photon sensitive only in a time window when a photon is expected to arrive, called the detector gate. Attacks on these detection schemes are based on exploiting the classical photodiode mode of the APD, or the detector response at the beginning/end of the detector gate. In the attacks based on the classical photodiode mode of the APD, the detectors are triggered by bright pulses. If necessary, the APDs can be kept in the classical photodiode mode, in a so-called blind state, using additional bright background illumination. When the detectors are blind, they are no longer single-photon sensitive and respond only to bright optical trigger pulses. In most gated systems, blinding is not necessary because the APDs are in the classical photodiode mode outside the gates; therefore, in the after-gate attack, the trigger pulses are simply placed after the gate. Several attacks are based on detector efficiency mismatch (DEM). If Bob's apparatus has DEM, Eve can control the efficiencies of Bob's detectors individually by choosing a parameter in some external domain; examples of such domains are the timing, polarization, or frequency of the photons. As an example, consider DEM in the time domain. Usually Bob's apparatus contains two single-photon detectors to detect the incoming photons, one for each bit value. Due to different optical path lengths, inaccuracies in the electronics, and finite precision in detector manufacturing, the detection windows, and hence the efficiency curves of the two detectors, are slightly shifted with respect to each other, as seen in Fig. [fig:time-diagram-2](a). Several attacks exploit DEM in various protocols, some of which are implementable with current technology. The time-shift attack has been used to gain an information-theoretic advantage for Eve when applied to a commercially available QKD system: in the experiment, Eve captured partial information about the key in 4% of her attempts, enough to improve her search over possible keys. After each loophole has been identified, effort has been made to restore the security of the detection schemes. DEM is now included in the receiver model of several security proofs through an efficiency-mismatch or blinding parameter $\eta$, defined differently according to the generality of the proof.
For arbitrary systems that can be described with linear optics, $\eta$ is defined in terms of the detection efficiencies $\eta_{0,k}$ and $\eta_{1,k}$ of the two detectors, where $k$ labels the different optical modes; in the special case without mode coupling, $k$ labels the different temporal modes. An example is given in Fig. [fig:time-diagram-2](a). In the most general case, $\eta$ is given by the lowest probability that a non-vacuum state incident on Bob is detected. For either definition of $\eta$, the secret key rate is expressed in terms of $\eta$, the quantum bit error rate (QBER) $Q$ measured by Alice and Bob, and the binary Shannon entropy function $h(\cdot)$. Here we have assumed symmetry between the bases in the protocol; in addition, we have ignored any basis leakage from Alice and back-reflection from Bob (the most general expression is given in the original reference). Unfortunately, in practical systems the rate will usually be zero, since $\eta \approx 0$ due to the edges of the detector gates. For the commercial QKD system subject to the time-shift attack, $\eta$ is close to zero (estimated from the published efficiency curves using the definition above). As noted previously, one way of obtaining a better $\eta$ would be to discard pulses near the edge of the detector gate; then $\eta$ could be calculated including only the modes which are accepted as valid detections. However, this is highly non-trivial: the avalanche in an APD is a random process, and the jitter in the photon-timing resolution is of the same order of magnitude as the duration of the detector gate (a good photon-timing-resolving detector still has 27 ps of jitter). Furthermore, the unavoidable difference between the acceptance windows of the two detectors would itself contribute to DEM (one detector accepts clicks while the other discards them). A frequently mentioned countermeasure for systems with DEM is called four-state Bob: Bob uses a random detector-bit mapping, randomly assigning the bit values 0 and 1 to the detectors for each gate. In a phase-encoded QKD system, this can be implemented by Bob choosing from four different phase settings instead of only two. Then Eve does not know which detector characteristics correspond to which bit value. However, as mentioned previously, this patch opens a different security loophole: Eve may use a Trojan-horse attack to read Bob's phase-modulator settings, so additional hardware modifications are required. Note also that the four-state Bob patch does not protect against the after-gate attack nor against any of the detector-control attacks. Here we present a novel way of securing Bob's receiver, called bit-mapped gating (Section [sec:basis_gating]). It secures the system against all kinds of pulses outside the central part of the detector gate in the Bennett-Brassard 1984 (BB84) and related protocols. The technique is compatible with the existing security proofs and makes it simple to find $\eta$. In general, it represents a useful concept in which parameters characterizing the QKD hardware are coupled to the parameters estimated by the protocol.
In this case $\eta$ becomes coupled to the QBER. Subsequently we analyze the security of bit-mapped gating (Section [sec:security_analysis]), discuss how to characterize detectors and how to implement a guarantee of single-photon sensitivity (Section [sec:detector-design]), and finally conclude (Section [sec:discussion]).

[Fig. [fig:time-diagram-2]: (a) the efficiencies (blue, dashed) and (red, solid) of the two detectors. (b),(c) Possible optical bit-mapping (purple) for the two values of the software bit-mapping; in a phase-encoded system the two levels would correspond to 0 and $\pi$ phase shift in one basis, and $\pi/2$ and $3\pi/2$ phase shift in the opposite basis. Note that the software bit-mapping and the optical bit-mapping coincide in the bit-mapped gate, which lies well within the detector gates. (d) The resulting minimum QBER (green) with the bit-mapped gate shown in (b) and (c).]

Let us start with two definitions. The software bit-mapping determines how the signals from the two detectors are mapped onto the logical bits 0 and 1. Similarly, the optical bit-mapping, which can be implemented by generalizing the basis selector, maps quantum states with bit values 0 and 1 (for instance, in the Z basis) to the detectors. Note that if the software bit-mapping and the optical bit-mapping do not coincide, a bit value sent by Alice will be detected as the opposite bit value by Bob. Bit-mapped gating works as follows: * Somewhere in between the detector gates, Bob randomly selects the software bit-mapping, assigning the detectors to the bit values 0 and 1. * Likewise, the basis is selected randomly, along with a random optical bit-mapping; since this happens between the detector gates, jitter is not critical. * Inside the detector gate, the optical bit-mapping is matched to the software bit-mapping. The period with matching optical and software bit-mapping is the bit-mapped gate. Note that the optical bit-mapping can be equal on both sides of the bit-mapped gate to minimize the need for random numbers. Fig. [fig:time-diagram-2] shows a typical time diagram. As an example, consider a phase-encoded implementation of the BB84 protocol, where the basis selector at Bob is usually a phase modulator: a 0 phase shift corresponds to one basis and a $\pi/2$ phase shift to the other, and the optical bit-mapping can be selected by adding either 0 or $\pi$ to the phase shift. Hence, in this implementation the bit-mapped gating patch could work as follows: Bob randomly selects the software bit-mapping somewhere between the gates; furthermore, he selects a random basis, i.e., a 0 or $\pi/2$ phase shift, between the gates, and adds either 0 or $\pi$ to the phase shift to apply the random optical bit-mapping.
During the gate, the software and optical bit-mappings coincide. All states received and detected outside the bit-mapped gate cause random detection results (due to the random optical and software bit-mappings) and thus introduce a QBER of 50%. The measured QBER can therefore be used to estimate the fraction of detections that must have happened in the center of the gate (in Fig. [fig:time-diagram-2]: a QBER close to zero would mean that most detection events must have passed the basis selector, and thus hit the detector, in the middle of the gate). This can be used to limit the DEM, because considering only the modes in the center of the detector gate gives less DEM than considering all modes. The goal of this section is to derive an expression for the minimum QBER introduced by any state received by Bob during the transition to and from the bit-mapped gate. Ideally, the minimum QBER is 0 inside the bit-mapped gate and 1/2 outside it. The input of Bob's detection system consists of many optical modes, for instance corresponding to different arrival times at Bob's system. Each mode may contain a mixture of different number states. Note that Bob could have measured the photon number in each mode without disturbing the later measurement; thus it suffices to consider specific number states. We use the usual assumption that each photon in an $n$-photon state is detected individually. Under these assumptions, we first calculate the minimum QBER caused by a single photon arriving in a single mode at Bob; then, in Appendix [sec:multiphotons], we show that multiple photons in this mode, or photons in other modes, can only increase the minimum QBER. Consider a single photon arriving at Bob in a given mode. Since the BB84 protocol is symmetric with respect to the bit values and the bases, we may assume without loss of generality that Alice sent bit 0 and that Bob measures in the matching basis. Outside the bit-mapped gate, Bob performs four different measurements depending on the software and optical bit-mappings. For each measurement, Bob obtains one of three outcomes: bit 0, bit 1, or vacuum (denoted by the subscript vac). Let $\eta_A$ and $\eta_B$ be the efficiencies of the two detectors A and B. During a bit-mapped gate, the optical bit-mapping is varied from one setting to the other, and at each point of the transition Bob performs one of the four measurements of Eq. [eq:single_m]. If Bob uses the four measurements with equal probabilities, the statistics are given by the measurement operators of Eq. [eq:single_e]. Note that the sum of the detection operators is proportional to the identity, so the detection probability is independent of the photon state: $\mathrm{tr}[\rho(F_0 + F_1)] = (\eta_A + \eta_B)/2$. The eigenvalues of the operators $F_0$ and $F_1$ determine the minimum and maximum probabilities $p_{\min}$ and $p_{\max}$ of detecting the bit values 1 and 0 for any single photon sent by Eve. Since Alice sent bit 0, the minimum QBER introduced by a single photon is $E_{\min} = p_{\min}/(p_{\min} + p_{\max})$; as expected, this vanishes inside the bit-mapped gate and approaches 1/2 well outside it. For multiphoton events, a random bit value is assigned to double clicks. Appendix [sec:multiphotons] shows that sending multiple photons can only increase the QBER caused by detection events; hence the expression above gives the minimum QBER for any photonic state sent by Eve.
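The state-independence of the detection probability and the 50% error floor outside the bit-mapped gate can be checked numerically. The sketch below models a single photon in a qubit mode, two threshold detectors with efficiencies eta_A and eta_B, and averages over the four software/optical bit-mapping combinations; it is an illustrative model consistent with the operator description above, not the paper's exact measurement operators.

```python
eta_A, eta_B = 0.10, 0.06      # detector efficiencies (illustrative values)

def click_probs(c):
    """c = |<0|psi>|^2 of the incoming photon in the measurement basis.
    Returns (p_bit0, p_bit1) averaged over the four equally likely
    software/optical bit-mapping settings, i.e., outside the bit-mapped gate."""
    p0 = p1 = 0.0
    for opt in (0, 1):          # optical mapping: which state reaches detector A
        pA = eta_A * (c if opt == 0 else 1 - c)
        pB = eta_B * ((1 - c) if opt == 0 else c)
        for soft in (0, 1):     # software mapping: detector A -> bit 'soft'
            b0 = pA if soft == 0 else pB
            b1 = pB if soft == 0 else pA
            p0 += b0 / 4
            p1 += b1 / 4
    return p0, p1

for c in (0.0, 0.5, 1.0):
    p0, p1 = click_probs(c)
    print(f"c={c:.1f}  p_det={p0 + p1:.4f}  QBER={p1 / (p0 + p1):.3f}")
# The detection probability is always (eta_A + eta_B)/2 and the QBER is
# pinned at 0.5, independent of the incoming state, as claimed in the text.
```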
In the proofs, Bob's prediction consists of a filter followed by an X-basis measurement. When nothing is known about the distribution of the detection events within the gate, the worst-case assumption is that all the detection events occur with maximum DEM; therefore, the best filter one can construct can only guarantee that a fraction $\eta$ of the inputs successfully pass the filter. With our patch, we may use the QBER to determine a lower bound on the number of detection events that must have happened in the central part of the detector gate. Assuming that $k$ labels temporal modes, consider the detection events that occurred in the range of modes where the minimum QBER lies below a threshold $Q_\perp$ selected by Bob (see Fig. [fig:time-diagram-3]). Let $\eta_\perp$ be the blinding parameter for this restricted range of modes; it can be calculated as before, but with $k$ running only over this range. If the measured QBER is $Q$, a fraction of at least $1 - Q/Q_\perp$ of the detections must have occurred in the modes below the threshold. Note that increasing $Q_\perp$ increases this guaranteed fraction but may decrease $\eta_\perp$ (see Fig. [fig:time-diagram-3]); as will become apparent below, $Q_\perp$ should be selected to maximize the resulting bound. For decoy protocols, $Q$ should be replaced with the QBER estimated for single-photon states; this improves the estimate of the fraction, especially at large distances where dark counts become a major part of the total QBER.

[Fig. [fig:time-diagram-3]: minimum QBER across the gate; the dashed line shows how a threshold $Q_\perp$ can be used to limit the range of modes used to calculate or bound $\eta_\perp$.]

In the worst case, the remaining fraction $Q/Q_\perp$ of the detections experienced the full DEM. Therefore, the filters in the security proofs can be replaced as follows: the new filter discards pulses in the modes for which the minimum QBER exceeds $Q_\perp$; for the modes inside the bit-mapped gate, the new filter reverts the quantum operation of the receiver in the opposite basis in the same way that the old filter reverted it for all modes, but now with success rate $\eta_\perp$. Since we can guarantee that a fraction $1 - Q/Q_\perp$ of the detections are in the bit-mapped gate, at least a fraction $\kappa = \eta_\perp (1 - Q/Q_\perp)$ of the pulses will successfully pass the new filter. Therefore the parameter $\eta$ in all the proofs can be replaced with $\kappa$, and the rate of Eq. [eq:new_rate] follows when one assumes symmetry between the bases and no source errors. Without symmetry between the bases, all parameters become basis-dependent, and the rate is the sum of the rates in each basis. Let us see how bit-mapped gating could improve the secure key rate for the commercial QKD system discussed above, for which $\eta \approx 0$. In the same experiment, the QBER was measured to be 5.68%. With suitable choices of the threshold $Q_\perp$ and the associated $\eta_\perp$, $\kappa$ becomes substantial: in fact, the rate obtained without the patch is 0, while the rate obtained from Eq. [eq:new_rate] is 0.227, so clearly the patch can be used to re-secure an insecure implementation. When designing Bob's system, one should ensure that the bit-mapped gate is well within the detector gate, i.e., that the detector efficiencies are approximately equal within the bit-mapped gate. It should then be possible to measure or bound the detector efficiencies and the basis-selector response in the temporal domain; in a phase-encoded system this corresponds to measuring the detector efficiencies and the phase modulation as functions of time, over the range of wavelengths and polarizations accepted by Bob. With these data, the minimum QBER as a function of time can be calculated, and a diagram similar to Fig. [fig:time-diagram-3] can be obtained.
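As a numerical illustration of this bound, the following sketch scans candidate thresholds and computes the guaranteed filter pass fraction $\kappa = \eta_\perp(1 - Q/Q_\perp)$ as reconstructed above. The efficiency-versus-threshold profile `eta_perp` is invented purely for illustration; it is not the characterization of any real detector, and the optimal threshold and $\kappa$ it yields are not the paper's numbers.

```python
import numpy as np

Q = 0.0568                          # measured QBER (5.68%, as in the text)

def eta_perp(Q_perp):
    """Illustrative blinding parameter of the mode range with E_min < Q_perp:
    widening the accepted range (larger Q_perp) lowers the worst-case bound."""
    return np.clip(0.95 - 1.5 * Q_perp, 0.0, 1.0)

best = max(((eta_perp(qp) * (1 - Q / qp), qp)
            for qp in np.linspace(0.06, 0.5, 100)), key=lambda t: t[0])
kappa, qp = best
print(f"best threshold Q_perp = {qp:.3f}, guaranteed fraction kappa = {kappa:.3f}")
```

The trade-off is visible directly: a threshold just above the measured QBER guarantees almost nothing, a very loose threshold degrades $\eta_\perp$, and the maximum lies in between.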
After selecting an appropriate threshold, $\eta_\perp$ can be calculated as before, but with the mode index running only over the modes below the threshold rather than over all available modes. In general there might be coupling between the different temporal modes due to misalignments and multiple reflections. The bit-mapped gate ensures that the pulse passed the basis selector inside the temporal detector gate, but does not guarantee the actual detection time: for example, a pulse could pass in the center of the bit-mapped gate but afterwards take a multiple-reflection path such that it hits the detector outside the detector gate. This can be handled by characterizing the worst-case mode coupling as described previously. Let $\epsilon$ be the worst-case (power) coupling from modes inside the bit-mapped gate to modes outside it; this will typically be set by the worst-case multiple-reflection path after the basis selector and should be boundable from component characteristics. The parameter can then be interpreted as follows: in the worst case, a fraction $\epsilon$ of the detection events might have happened outside the central part of the detector gate, and the guaranteed fraction must be reduced accordingly. Finally, one must guarantee that the detectors are not blind within the gate and fulfill the assumptions of Section [sec:security_analysis] during the transition of the optical bit-mapping. Note that the transition ends when there is no longer any correlation between the software bit-mapping and the optical bit-mapping; if a significant correlation persisted after the detector gate, it could be exploited in the after-gate attack. Although it is tempting to place an optical watchdog detector at the entrance of Bob, the absence of bright illumination does not necessarily mean that the detectors are single-photon sensitive: for instance, due to the thermal inertia of the APD, it can remain blind for a long time after the bright illumination is turned off. A cheap way to guarantee single-photon sensitivity is to monitor all detector parameters, such as the APD bias voltage, current, and temperature. It seems difficult to monitor the temperature of the APD chip itself, but monitoring the bias voltage and current should make it possible to predict the heat generated by the APD and thus prevent thermal blinding. The ultimate way of guaranteeing single-photon sensitivity is to measure it directly. This can be done by placing a calibrated light source inside Bob that emits faint pulses at random times (see Fig. [fig:light-source-inside-bob]); the absence of detection events caused by this source would indicate that the detector is blind. Such a calibrated light source inside Bob could be useful in more ways, for instance to characterize and calibrate detector performance in deployed systems.

[Fig. [fig:light-source-inside-bob]: a calibrated light source at Bob's input guarantees that Eve cannot interfere with the detector operation based on whether the source is activated or not. PBS: polarizing beam splitter; Att.: optical attenuator; PM: phase modulator; C: 50/50% fiber-optic coupler.]

The patch could cause a minor reduction in QKD performance compared to running an (insecure) system without it. In particular, the detector gates might have to be longer to contain the basis-selector gate, which would increase the dark-count rate and thus limit the maximum transmission distance. A calibrated light source inside Bob would also cause a minor reduction in performance, since the gates used for testing the detector sensitivity likely cannot be used to extract secret key.
However, both these effects are minor and easily justified by the restoration of security. In this work, we have presented a technique called 'bit-mapped gating' to secure gated single-photon detectors in QKD systems. It is based on a general concept in which hardware imperfections are coupled to the parameters estimated by the protocol: bit-mapped gating causes all detection events outside the central part of the detector gate to produce a high QBER. Bit-mapped gating is compatible with the current security proofs for QKD systems with detector efficiency mismatch; in particular, it provides a simple way of measuring the detector blinding parameter. A secure gated detection scheme is obtained if bit-mapped gating is combined with detectors guaranteed to be single-photon sensitive. Financial support is acknowledged from the Research Council of Norway (grant no. 180439/V30).

Appendix: here we prove that the minimum QBER can only increase when the number of photons sent to Bob is increased. As noted previously, we use the usual assumption that each photon in an $n$-photon state is detected individually; this means that each photon hits a separate set of detectors, and the detection results are then merged to give the detection results of threshold detectors. Let us first consider the case where Bob receives a large number of two-photon states, with the two photons labeled 1 and 2. Individually, each of the two photons would have caused at least the single-photon minimum QBER derived in Section [sec:security_analysis]. Again we assume, without loss of generality, that Alice sends the bit value 0. For two-photon states there are three cases of detected events: either only photon 1 is detected, only photon 2 is detected, or both photons are detected (in our model, the latter corresponds to both sets of detectors registering a click). Let there be $n_1$ events where only photon 1 was detected, $n_2$ events where only photon 2 was detected, and $n_{12}$ events where both photons were detected. For photon $j$, out of the $n_j$ single-detection events, $n_j^{(0)}$ and $n_j^{(1)}$ were detected as the bit values 0 and 1, respectively; likewise, out of the $n_{12}$ events where both photons were detected, the corresponding counts for photon $j$ were detected as the bit values 0 and 1 (recall that in this model each photon hits a separate set of detectors). When only one of the photons is detected, the situation is identical to the single-photon case treated in Section [sec:security_analysis]; hence states saturating the single-photon bound give the lowest possible QBER. For the events where both photons are detected, the detections can have any correlation, but for each photon the fraction detected as the bit value 1 is at least the single-photon minimum, since that expression represents the lowest possible fraction of erroneous clicks regardless of the correlation with any other photon. The total QBER is found by merging the detections from the two sets of detectors; double clicks are assigned a random bit value, so half of the double clicks get the bit value 1. This gives a total QBER at least as large as the single-photon minimum. By repeating the argument above, replacing the detection of photon 1 with the detection of the first $n-1$ photons, it is easy to see that the same bound holds for $n$ photons. Hence, by induction, any detection event caused by more than one photon can only cause a higher QBER than the single-photon case.
|
Several attacks have been proposed on quantum key distribution systems with gated single-photon detectors. The attacks involve triggering the detectors outside the center of the detector gate and/or using bright illumination to exploit the classical photodiode mode of the detectors. A secure detection scheme therefore requires two features: the detection events must take place in the middle of the gate, and the detector must be single-photon sensitive. Here we present a technique called bit-mapped gating, which is an elegant way to force the detections into the middle of the detector gate by coupling the detection time to the quantum bit error rate. We also discuss how to guarantee single-photon sensitivity by directly measuring detector parameters. Bit-mapped gating also provides a simple way to measure the detector blinding parameter in security proofs for quantum key distribution systems with detector efficiency mismatch, which until now has remained a theoretical, unmeasurable quantity. Thus, if single-photon sensitivity can be guaranteed within the gates, a detection scheme with bit-mapped gating satisfies the assumptions of the current security proofs.
|
The fundamental agents of biological or socio-economic systems, from genes in gene-regulatory networks to stockholders in financial markets, act under complex interactions, and in general we do not know how to derive their dynamics from first-principle potentials. These interactions are typically described by pair-wise relations through a matrix (the adjacency matrix) that regulates, generally in a non-linear way, the effect of the interactions on the dynamics of each single component. In particular, there is rising interest in assessing how interactions determine the stability (or resilience) of dynamical attractors, i.e., the ability of a system to return to its original equilibrium state after a perturbation. Cell biology, ecology, environmental science, and food security are just some of the many areas of investigation where the relation between interaction properties and stability is, although deeply studied, a central open question. Understanding the role of system topology in resilience theory for multi-dimensional systems is therefore an important challenge, on which rests our ability to prevent the collapse of ecological and economic systems, as well as to design resilient systems. Existing methods are suitable only for low-dimensional systems, and in general it is not possible to assume that the dynamics of a complex system can be approximated by a one-dimensional non-linear equation of the type $\dot{x} = f(\beta, x)$, where the 'control' parameter $\beta$ describes the endogenous effects on the system dynamics. Recently, Gao et al. developed a theoretical framework that collapses the multi-dimensional dynamical behavior onto a one-dimensional effective equation, which in turn can be solved analytically. They considered a class of equations describing the dynamics of several types of multi-dimensional systems (ranging from cellular to ecological and social systems) with pair-wise interactions. In this paper, we show under which assumptions the proposed method works, we propose new insights on the validity of their framework, and we generalize their previous results. Our work is organized as follows. In the next section we summarize the core of the Gao et al. framework, highlighting the assumptions behind their method. In Section III we find that a more general condition poses effective limitations on the validity of the multi-dimensional reduction, and we provide quantitative analytical predictions of the quality of the one-dimensional approximation as a function of the properties of the interaction networks and dynamics. In Section IV we then show that the multi-dimensional reduction may work beyond the assumption of strictly mutualistic interactions, thus extending the validity of the Gao et al. framework. We prove our results analytically for generalized Lotka-Volterra dynamics and test our conclusions by numerical simulations also for more general dynamics. We start by giving a short summary of the multi-dimensional reduction approach to the study of resilience in complex interacting systems. Gao et al. consider a class of equations describing the dynamics of several types of multi-dimensional systems with two-body interactions, $\dot{x}_i = F(x_i) + \sum_{j=1}^{S} A_{ij}\, G(x_i, x_j)$, where the functions $F$ and $G$ represent the self-dynamics and the interaction dynamics, respectively, and the weight matrix $A_{ij}$ specifies the interaction between nodes.
In particular, they limit their study to interaction networks that have negligible degree correlations and all positive entries ($A_{ij} \geq 0$). Moreover, they assume that the node activities are uniform across nodes in both the drift and pair-wise interaction functions. The resilience of a given fixed point of a system driven by the dynamics of Eq. (gendyn) is given by the maximum real eigenvalue of the Jacobian matrix characterizing the linearized dynamics around the fixed point. Gao et al. characterize the effective state of the system using the average nearest-neighbor activity $x_{\mathrm{eff}}$ (see Appendix [app:efffunder]) and an effective control parameter $\beta_{\mathrm{eff}}$ that depends on the whole network topology, i.e., the average over the products of the outgoing and incoming degrees of all nodes, Eq. (betaeffdef). Finally, they propose that the dynamics of Eq. (gendyn) can be mapped, independently of $F$ and $G$, to the one-dimensional effective equation $\dot{x}_{\mathrm{eff}} = F(x_{\mathrm{eff}}) + \beta_{\mathrm{eff}}\, G(x_{\mathrm{eff}}, x_{\mathrm{eff}})$, where $\beta_{\mathrm{eff}}$ is the control parameter. In this work we show that: (i) the conditions above are neither sufficient nor necessary to guarantee that the collapse works in general; (ii) the validity of their results is not independent of the model chosen within the class of dynamics they considered, i.e., it does depend on $F$ and $G$; (iii) the restriction $A_{ij} \geq 0$ can be omitted. We highlight that in this framework the system is assumed to be in one of the stable fixed points $x^*$ of the effective equation, satisfying $F(x^*) + \beta_{\mathrm{eff}} G(x^*, x^*) = 0$ and the corresponding stability condition. In other words, for the one-dimensional effective system we can calculate analytically the resilience function, uniquely determined by $F$ and $G$, which represents the possible states of the system as a function of the parameter $\beta_{\mathrm{eff}}$. Therefore, in order to study the stability or the existence of critical transitions in the complex multi-dimensional system, one simply calculates $\beta_{\mathrm{eff}}$ from the network and analyzes the corresponding one-dimensional resilience function. If the collapse works, then $(\beta_{\mathrm{eff}}, x_{\mathrm{eff}})$ is a point on the curve given by the resilience function (see Fig. [fig:diagram] and Appendix [app:efffunder] for mathematical details). Clearly, this is a powerful result, as the properties of the one-dimensional non-linear equation are easy to study. Our framework is therefore not specific to the theory of Gao et al. (which yields a definite value for $\beta_{\mathrm{eff}}$ according to Eq. (betaeffdef)), but explores the validity of the one-dimensional reduction for any possible value of $\beta_{\mathrm{eff}}$. In order to better understand the relevance of these conditions for the validity of the results of Gao et al., we consider a simplified setting in which both conditions are satisfied. By taking $F(x_i) = \alpha x_i$ and $G(x_i, x_j) = x_i x_j$, the uniformity condition holds by definition. In this case the dynamics is defined by the generalized Lotka-Volterra (GLV) equations, $\dot{x}_i = x_i\,(\alpha + \sum_{j} A_{ij} x_j)$, where $\alpha$ is the intrinsic growth rate and $S$ is the number of species in the community. The interaction matrix is taken to be a random matrix, so that the negligible-degree-correlation condition is always satisfied. The advantage of using GLV dynamics is that we have an analytical solution for the stationary state as a function of the interaction network; moreover, this solution is globally stable in the positive orthant if $A$ is negative definite. Finally, the corresponding one-dimensional analytical effective equation for GLV dynamics reads $\dot{x}_{\mathrm{eff}} = x_{\mathrm{eff}}\,(\alpha + \beta_{\mathrm{eff}} x_{\mathrm{eff}})$, whose feasible ($x_{\mathrm{eff}} > 0$) stationary solution is $x^* = -\alpha/\beta_{\mathrm{eff}}$, with $\alpha > 0$ and $\beta_{\mathrm{eff}} < 0$. For values of $\beta_{\mathrm{eff}} > 0$, the solution exists but is not meaningful.
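The quality of the reduction is straightforward to test numerically. The following sketch is our own illustration of the equations as reconstructed above: it builds a random interaction matrix with self-regulation, computes the exact GLV fixed point, verifies feasibility and local stability, and compares the measured effective state with the one-dimensional prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
S, alpha = 100, 1.0
mu, sigma, d = -0.01, 0.002, -1.0          # off-diagonal statistics, self-regulation

A = rng.normal(mu, sigma, (S, S))
np.fill_diagonal(A, d)

x_star = np.linalg.solve(A, -alpha * np.ones(S))   # GLV fixed point: A x = -alpha 1
assert (x_star > 0).all()                          # feasibility
J = np.diag(x_star) @ A                            # Jacobian at the fixed point
assert np.max(np.linalg.eigvals(J).real) < 0       # local stability

s_in, s_out = A.sum(axis=1), A.sum(axis=0)
beta_eff = s_in @ s_out / A.sum()                  # effective control parameter
x_eff = s_out @ x_star / A.sum()                   # weighted nearest-neighbor activity

print("x_eff          :", x_eff)
print("-alpha/beta_eff:", -alpha / beta_eff)
print("relative error :", abs(1 + beta_eff * x_eff / alpha))
```

With these parameters the relative error is small, illustrating a regime in which the collapse works well.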
For each realization of the stochastic interaction matrix, we can define two errors (see Fig. [fig:diagram]) measuring the vertical and horizontal distance between the point $(\beta_{\mathrm{eff}}, x_{\mathrm{eff}})$ and the stationary solution of the one-dimensional resilience function. For GLV dynamics, both relative errors reduce to the same expression, $\eta = |1 + \beta_{\mathrm{eff}}\, x_{\mathrm{eff}}/\alpha|$, where $\beta_{\mathrm{eff}}$ and $x_{\mathrm{eff}}$ are computed from the entries $a_{ij}$ of the interaction matrix (a numerical check follows below).

[Fig. [fig:diagram]: the point $(\beta_{\mathrm{eff}}, x_{\mathrm{eff}})$ corresponding to the stationary state variables of Eq. (gendyn) for a given network. The blue curve is the analytical stationary solution of the one-dimensional effective equation. The vertical and horizontal distances ($\eta_v$ and $\eta_h$) between the point and the curve represent the error of the analytical approximation.]

By taking $A$ to be a random matrix, the error itself becomes a random variable whose probability distribution is inherited from the distribution of the random matrix. We can calculate its expected value and variance analytically under the assumption that the expected values of the numerator and denominator in the terms above can be taken independently of each other. After making this approximation, the expected value and variance of the error reduce to ratios of moments of linear and quadratic forms in the entries: we need to calculate terms of the type $E[\sum_{ij} a_{ij}]$ and $E[\sum a_{ij} a_{kl}]$, where all indices are iterated over. In full generality, we assume that all pairs of off-diagonal elements $(a_{ij}, a_{ji})$ are drawn from a bivariate distribution with mean $\mu$, standard deviation $\sigma$, and correlation coefficient $\rho$. The diagonal elements are either drawn from a univariate distribution following the same statistics as the unconditional off-diagonal elements, or kept fixed and constant by setting $a_{ii} = -d$. Under this setting one can generate both directed and undirected networks, with tunable interaction properties; for the different cases we can then quantitatively predict the errors of the Gao et al. framework with respect to the actual quantities measured directly from the network. We now discuss a subtle but important issue related to the existence of a reachable stable fixed point in the multi-dimensional GLV dynamics. Indeed, depending on the parametrization of the adjacency matrix, Eq. (glvdyn) may not have any stable stationary solution; the width of this parameter region was also discussed recently. However, we find that if we apply the multi-dimensional reduction to these unstable systems, we still find an effective one-dimensional equation with feasible and stable solutions. In other words, the feasibility and stability of the solution of the effective equation do not imply that the corresponding solution of the full system is feasible and stable. The map in this case is not well defined, as the effective state cannot be reached by the full dynamics. Therefore, in order to have a meaningful multi-dimensional reduction, we must restrict our analysis to those random matrices that ensure stability (and feasibility) of the complete GLV dynamics (an issue not discussed by Gao et al.). By combining our framework of Section [glvsec] with results on the D-stability of random matrices, we can achieve this goal: if the off-diagonal elements of $A$ are given by a distribution with mean $\mu$, standard deviation $\sigma$, and correlation coefficient $\rho$, and the diagonal elements are all fixed to a constant $-d$, we can set $d$ above a critical value so that the analytic solution of the multi-dimensional GLV dynamics is stable and feasible (see Appendix [app:stabilitycriteria]).
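A minimal sketch of the error computation just defined: it draws correlated off-diagonal pairs with correlation rho, keeps only realizations whose GLV fixed point is feasible and stable, evaluates the error as reconstructed above, and reports its empirical mean and standard deviation. The parameter values are our own illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
S, alpha, mu, sigma, rho, d = 100, 1.0, -0.01, 0.002, 0.5, 1.0

def sample_A():
    """Interaction matrix with correlated pairs (a_ij, a_ji) and fixed diagonal -d."""
    cov = sigma**2 * np.array([[1, rho], [rho, 1]])
    A = np.zeros((S, S))
    for i in range(S):
        for j in range(i + 1, S):
            A[i, j], A[j, i] = rng.multivariate_normal([mu, mu], cov)
    np.fill_diagonal(A, -d)
    return A

errors = []
while len(errors) < 50:
    A = sample_A()
    x = np.linalg.solve(A, -alpha * np.ones(S))
    if (x <= 0).any():                                  # keep only feasible...
        continue
    if np.linalg.eigvals(np.diag(x) @ A).real.max() >= 0:
        continue                                        # ...and stable realizations
    s_in, s_out = A.sum(axis=1), A.sum(axis=0)
    beta_eff = s_in @ s_out / A.sum()
    x_eff = s_out @ x / A.sum()
    errors.append(abs(1 + beta_eff * x_eff / alpha))

print("mean eta:", np.mean(errors), " std eta:", np.std(errors))
```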
For uncorrelated pairs, the critical value of $d$ required for stable GLV dynamics is of order $\sqrt{S}$ (rows 5-7 in Table [tab:results]); for correlated pairs, stable GLV dynamics are assured above a somewhat larger threshold, again of order $\sqrt{S}$ (rows 8-10 in Table [tab:results]). Note that for the case of a constant diagonal close to the critical value, shown in Fig. [fig:error_glv_panel](d), the theoretical value is not expected to give a good approximation to the empirical average: in this case the expected value of the denominator becomes zero, so the approximation of taking numerator and denominator separately is not justified; furthermore, sampling becomes difficult, as outliers may govern the empirical mean and standard deviation. The analytic derivation is complicated and tedious: even in the simplest version of the random matrix, with all entries i.i.d., we need to separate out index tuples with repeated indices, as they lead to contributions different from the generic ones (and similarly for higher-order tuples). In order to do so, we devised an algorithm that performs this bookkeeping. The analytical expressions for the expected value and variance of the error for the different cases of interaction matrices, approximated to the highest order in the network size $S$, are listed in Table [tab:results]. (Table [tab:results]: resulting analytical expressions for the mean and variance of the error, approximated to the highest order in $S$, for the different matrix ensembles.) The results in Table [tab:results] can be summarized as follows: * In all cases, the error (or its fluctuations) grows without bound if the ratio $\mu/\sigma$ goes to zero for a given network size. * The order of the fluctuations remains the same for all cases, while the order of the expected value changes. In particular, for interaction matrices without correlation ($\rho = 0$), the term dominating the error for large $S$ is the fluctuations, while the mean value is either zero (for i.i.d. entries) or subleading (in the case of a constant diagonal); on the other hand, for networks with non-zero correlation, the mean becomes the dominating term. * If the diagonal is of the same scale as the critical value, the error may explode; this happens as $d \to d_c$, where $d_c$ corresponds to the value of $d$ at which the interaction matrix becomes stable and non-reactive for positive $d$. We note that, differently from what is predicted by Gao et al., the approximation does not work for every positive interaction matrix. In fact, on the one hand, our condition extends the validity of the Gao et al. framework to matrices with an asymmetric mixture of positive and negative interactions, as long as the mean interaction is not close to zero. At the same time, our results highlight that if the matrix $A$ has a very large variance with respect to its mean and $S$ is not large enough, then the collapse will fail. For example, if the interaction strengths are very heterogeneous (e.g., power-law distributed), although mutualistic (positive), the system resilience cannot be described by the one-dimensional analytical resilience function. In order to test these analytical results, we sampled the interaction matrix with the corresponding statistics numerically and compared the empirical mean and standard deviation with the theoretical predictions; the results can be observed in Fig. [fig:error_glv_panel].
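The critical diagonal strength can also be located numerically. The sketch below bisects on $d$ using the largest real eigenvalue of the interaction matrix and compares the result with the elliptic-law bulk edge $\sigma\sqrt{S}(1+\rho)$, a standard random-matrix estimate quoted here as an assumption about the scaling rather than the paper's exact table entry.

```python
import numpy as np

rng = np.random.default_rng(2)
S, sigma, rho = 200, 0.05, 0.5

cov = sigma**2 * np.array([[1, rho], [rho, 1]])
A = np.zeros((S, S))
for i in range(S):
    for j in range(i + 1, S):
        A[i, j], A[j, i] = rng.multivariate_normal([0.0, 0.0], cov)

def max_re(d):
    """Largest real eigenvalue of the matrix with diagonal fixed to -d."""
    B = A.copy()
    np.fill_diagonal(B, -d)
    return np.linalg.eigvals(B).real.max()

# Bisection for the smallest d that stabilizes the interaction matrix.
lo, hi = 0.0, 10.0
while hi - lo > 1e-3:
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if max_re(mid) >= 0 else (lo, mid)
print("empirical d_c:", hi, " vs sigma*sqrt(S)*(1+rho):", sigma * np.sqrt(S) * (1 + rho))
```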
In all cases the theoretical predictions are met very well. There is a notable but small deviation at small network sizes, namely a slight underestimation of the mean for the correlated case, cf. panel (b) of Fig. [fig:error_glv_panel].

[Fig. [fig:error_glv_panel]: empirical means and standard deviations (plotted as black dots and bars, respectively); the theoretical mean is plotted as an orange line, and the shaded area indicates the predicted standard deviation. For all panels, the entries of $A$ are drawn from a normal distribution. The upper plots show the effect of network size on the error when (a) all elements are drawn i.i.d. or (b) pairs have positive correlation; the lower plots show the error for networks of fixed size under (c) varying correlation or (d) a constant diagonal relative to the critical value.]

In our discussion, we set the connectivity (the fraction of non-zero elements) to one, $C = 1$. Generalizing our results to networks that are not fully connected is straightforward. We model sparsely connected networks by drawing a mask $M$ with entries drawn from a Bernoulli distribution with parameter $C$, independently drawing another matrix $B$ with the specified statistics as before, and finally setting $A = M \circ B$, where $\circ$ denotes the Hadamard (entry-wise) product. Since $M$ and $B$ are independent, it suffices to insert the moments of the masked entries, mean $C\mu$ and variance $C\sigma^2 + C(1-C)\mu^2$, into the calculation of the error and its variance, cf. Eqs. (errexp) and (errvar). For the case of correlated pairs discussed above, the expression for the expected value remains the same, while the variance is increased; this is to be expected, as for non-zero mean the sparse mask contributes to the variance (a numerical check follows below). Finally, our results are robust with respect to other definitions of the error: in Appendix [app:anotherdef] we also provide the analytical expressions for another error definition, namely the distance from the mean point. In the most general setting, the stationary solution of the effective equation is determined by $F(x_{\mathrm{eff}}) + \beta_{\mathrm{eff}} G(x_{\mathrm{eff}}, x_{\mathrm{eff}}) = 0$. Using this alternative error definition (see Appendix [app:anotherdef] for details), we can gain qualitative insight into the conditions under which the multi-dimensional collapse is expected to work also for dynamics more general than the GLV case discussed above. In fact, for GLV dynamics, the key quantity determining the feasibility of the multi-dimensional reduction is a simple function of the ratio between $\sigma$ and $\mu$ compared to the system size; we thus conjecture that this quantity is crucial in determining the quality of the collapse also for different types of dynamics. If the random matrix is generated by i.i.d. random variables ($\rho = 0$) and the condition of Eq. (condforanotherdef) holds, then we find through Eq. (betaeffdef) that $\beta_{\mathrm{eff}}$ does not depend on the specific dynamics (see Appendix C). Therefore, we obtain an expression for the error, Eq. (errbetagen), whose only dependence on the dynamics enters through the functions $F$ and $G$.
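Both the sparse-mask moment substitution and the concentration of $\beta_{\mathrm{eff}}$ for i.i.d. ensembles, used in the last step above, are easy to verify numerically. In the sketch below (our own illustration, with arbitrary parameter values), the masked entries reproduce the moments $C\mu$ and $C\sigma^2 + C(1-C)\mu^2$, and $\beta_{\mathrm{eff}}$ concentrates around a value fixed by those moments alone, independent of any dynamics.

```python
import numpy as np

rng = np.random.default_rng(3)
S, C, mu, sigma = 400, 0.3, -0.02, 0.01

M = rng.random((S, S)) < C                 # Bernoulli(C) mask
B = rng.normal(mu, sigma, (S, S))
A = M * B                                  # Hadamard (entry-wise) product

off = A[~np.eye(S, dtype=bool)]            # off-diagonal masked entries
print("masked mean    :", off.mean(), " expected:", C * mu)
print("masked variance:", off.var(), " expected:", C * sigma**2 + C * (1 - C) * mu**2)

s_in, s_out = A.sum(axis=1), A.sum(axis=0)
beta_eff = s_in @ s_out / A.sum()
print("beta_eff       :", beta_eff, " ~ S * C * mu =", S * C * mu)
```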
on the other hand, figure [fig:error_gen_lognormal] confirms that the condition and the positivity of the interactions of are not sufficient to guarantee the validity of the one-dimensional approximation for dynamics beyond glv: if the matrix has a very large variance, the collapse fails also for the specific dynamics used by gao et al. in this paper we have shown under which conditions a large dynamical system can be effectively approximated with a one-dimensional equation. the order parameter that appears as a variable in the effective equation can be obtained from a simple expression of the local variables. under this approximation, it becomes clear which properties of the interactions determine the state of the system, and it turns out to be possible to quantify their effects. we explored which properties of the interactions determine the accuracy of the approximation. in general, the form of the multi-dimensional equations and how their non-linearities are introduced will influence the possibility of approximating the original set of equations with the corresponding one-dimensional equation. in order to focus on the effect of the interactions, we therefore first considered a simple idealized scenario, the generalized lotka-volterra equations, where the interactions are linear. in this context, the accuracy of the approximation is determined only by the interaction matrix. the criterion we obtained relates the variability of the interactions between the agents / nodes to the number of agents. in particular, for the approximation to work, the size of the system has to be larger than a critical value proportional to the coefficient of variation of the interaction strengths. the reciprocity of interactions also plays an important role: the approximation is expected to work for any interaction strengths if there is no correlation in the activity between each pair of nodes in the network, and the larger the correlation between reciprocal interactions, the larger the size of the system must be to guarantee the accuracy of the approximation. finally, we have shown that the approximation also works for interaction matrices with a mixture of positive and negative signs and that it can be extended to more complicated and non-linear dynamics. these results open up possible applications of the framework to food webs, neuronal networks and social / economic interactions. for glv dynamics the analytical solution for the equilibrium state is , where is a vector whose components are all equal to the constant , so . according to the definitions of and , we obtain the following equations: and a ratio with denominator $\sum_i (-d_i) + s(s-1)\mu$. * off-diagonal drawn from a bivariate distribution and diagonal elements drawn from a univariate distribution. * if the diagonal elements are i.i.d. random variables with a given distribution of mean and standard deviation , then and a ratio with denominator $\mu_d + (s-1)\mu$. finally, . if the following condition holds, the collapse will work (i.e. ); otherwise, the collapse will fail.
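the moment substitution used above for sparse networks is also easy to verify numerically. in the sketch below (python; variable names and parameter values are ours), a bernoulli mask is applied entry-wise to an i.i.d. matrix, and the empirical moments of the masked matrix are compared with the values that follow from the independence of mask and matrix, namely mean c*mu and variance c*sigma^2 + c*(1-c)*mu^2.

import numpy as np

rng = np.random.default_rng(1)
S, C, mu, sigma = 500, 0.3, 0.2, 0.1

M = rng.random((S, S)) < C               # bernoulli(C) connectivity mask
B = rng.normal(mu, sigma, size=(S, S))   # i.i.d. interaction strengths
A = M * B                                # hadamard (entry-wise) product

off = ~np.eye(S, dtype=bool)             # off-diagonal entries only
print("empirical mean:", A[off].mean(), "  theory:", C * mu)
print("empirical var :", A[off].var(),
      "  theory:", C * sigma**2 + C * (1 - C) * mu**2)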
|
recently, a theoretical framework aimed at separating the roles of dynamics and topology in multi-dimensional systems has been developed (gao et al., _nature_, vol. 530:307 (2016)). the validity of their method is assumed to rest on two main hypotheses: the network determined by the interaction between pairs of nodes has negligible degree correlations, and the node activities are uniform across nodes in both the drift and pair-wise interaction functions. moreover, the authors consider only positive (mutualistic) interactions. here we show that the conditions proposed by gao and collaborators are neither sufficient nor necessary to guarantee that their method works in general, and that the validity of their results is not independent of the model chosen within the class of dynamics they considered. indeed, we find that a new condition poses effective limitations on their framework, and we provide quantitative predictions of the quality of the one-dimensional collapse as a function of the properties of the interaction networks and of the stable dynamics, using results from random matrix theory. we also find that multi-dimensional reduction may work for interaction matrices with a mixture of positive and negative signs, opening up applications of the framework to food webs, neuronal networks and social / economic interactions.
|
the synchronization of complex dynamical systems is one of the most intriguing problems and has roots in physics, biology, and engineering. the problem of synchronization in a network of interacting nodes was first brought to attention by wiener and later pursued by winfree, who modeled biological oscillators as phase oscillators, neglecting the amplitude. following winfree's pioneering work, there has been considerable effort to study the structural and dynamical effects of the network and its nodes on the state of synchrony. one important problem encountered in these studies is the possibility of separating the global (network) and local (node) characteristics to allow a general assessment of the stability of the synchronous state. in the seminal work by pecora and carroll, this issue was addressed by introducing the concept of the master stability equation. this equation provides a condition on the stability of the network, known as the master stability condition (msc). the msc also leads to useful bounds on structural properties of the network, such as the eigenvalues of the laplacian matrix of the network. following this important result, most of the focus has been directed toward interpreting the bounds on the eigenvalues of the laplacian matrix implied by the msc as bounds on the degrees of nodes in the network. in and , the effects of heterogeneity and _smallness_ of the network on synchronization stability have been investigated, and it has been shown that if the network is more homogeneous, the stability of synchrony is easier to achieve. though most efforts have focused on unweighted and undirected graphs, and investigate the master stability equation in directed and weighted structures, respectively. since the synchronization manifold can be considered as a fixed point of a reduced (induced) space, the vast majority of the existing literature on synchronization in networks considers the largest transversal lyapunov exponent as a measure of the stability of the synchronization. of course, as noted in , the negativity of lyapunov exponents is neither a necessary nor a sufficient condition for the stability of the manifold itself, and it does not stop the manifold from bubbling and bursting. in this paper, we use an alternative master stability equation derived from lyapunov's direct method to obtain a condition on the stability of the whole network, based on the eigenvalues of the symmetric parts of the local and coupling dynamics. since this condition encompasses the lyapunov spectrum, it is a sufficient condition for stability. furthermore, we generalize the conventional setup, where the linkage matrices are diagonal (often with binary components), to the case of an arbitrary linkage matrix, allowing multi-state and cross-state linkages, possibly with different strengths, which is now receiving more attention. we then use the derived msc to calculate a lower bound on the probability of stability for large erdős-rényi networks. we relate the condition of stability to dynamical characteristics of the individual nodes and their coupling, and to structural properties of the erdős-rényi networks, namely the network size and the randomness parameter. consider a network of identical nodes with identical coupling dynamics where denotes the state vector of node , and and denote the node and coupling dynamics, respectively.
is the error vector with respect to , where is an vector; the s are selected uniformly from the interval , and all the corresponding eigenvalues are calculated as an average over cycles of initiated trajectories. fig. [fig:pnp] shows the probability of stability of the network as a function of the network randomness, _p_, for different network sizes, _n_, and with coupling strength . as can be seen, the probabilities of ([eq:stabcond1]) and ([eq:instabcond1]) are close to that of ([eq:msc2]). moreover, we observe that the approximate probabilities provided by wigner's distribution of the eigenvalues of the network are also reasonably close. as fig. [fig:pnp] shows, for positive definite in a large network, if the average degree, , is above some _threshold_, (in this example approximately ), the network becomes stable. note that in this particular numerical example, due to the positive definiteness of , only the transition from asynchrony to synchrony is observed. the behavior of the network, in the sense of its stability versus network size for several values of , is shown in fig. [fig:ppn]. once again we observe that the probability of stability suddenly increases as crosses the threshold above. fig. [fig:pnc] shows that the stability of the network grows as the network size and the coupling factor increase. of course, this is due to our choice of coupling dynamics, which is positive definite: by increasing the coupling strength, it provides stronger negative feedback to stabilize the network. in conclusion, considering an alternative master stability condition, we have derived a sufficient condition for stability which is a function of the eigenvalues of the network structure and of the symmetric parts of the linearized local and coupling dynamics. our condition relates the largest eigenvalues of the symmetric coupling and the symmetric local dynamics to stability conditions of the networks. for erdős-rényi networks we have calculated a lower bound on the probability of stability. we have then proceeded to calculate the associated threshold value of the randomness , such that the system starts to become stable as increases beyond this threshold. the reason for this phenomenon is that below the threshold, all or some of the nodes can not achieve sufficient information exchange. as a result, those nodes can not synchronize themselves with the rest of the network. since is real and symmetric, it is unitarily and orthogonally diagonalizable (ch. 5.4 of [15]). that is, , where is unitary and is diagonal. define . since is mapped to by a non-singular linear transformation ( is unitary), their stabilities are equivalent. we have . using the properties of the kronecker product, we have . the matrix is block diagonal. thus the resultant block diagonal matrix has as its diagonal blocks. in other words, the stability of is equivalent to the stability of for .
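the kronecker-product step of this argument can be checked numerically in a few lines. in the sketch below (python / numpy, with small random matrices standing in for the network matrix and the linkage matrix), conjugation by the orthogonal matrix that diagonalizes the symmetric network matrix produces a block diagonal matrix whose blocks are the network eigenvalues times the linkage matrix, so that the spectrum of the full coupling operator is the set of pairwise products of eigenvalues.

import numpy as np

rng = np.random.default_rng(2)
n, m = 5, 3
L = rng.normal(size=(n, n)); L = (L + L.T) / 2   # symmetric 'network' matrix
H = rng.normal(size=(m, m))                      # arbitrary linkage matrix

lam, U = np.linalg.eigh(L)                       # L = U diag(lam) U^T
big = np.kron(L, H)
T = np.kron(U.T, np.eye(m)) @ big @ np.kron(U, np.eye(m))

# the transformed matrix is block diagonal with blocks lam[i] * H
blocks = [T[i*m:(i+1)*m, i*m:(i+1)*m] for i in range(n)]
print(max(abs(blocks[i] - lam[i] * H).max() for i in range(n)))   # ~ 1e-15

ev_full = np.sort_complex(np.linalg.eigvals(big))
ev_blocks = np.sort_complex(np.concatenate(
    [np.linalg.eigvals(l * H) for l in lam]))
print(abs(ev_full - ev_blocks).max())            # agrees up to round-off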
|
in a generalized framework, where multi-state and inter-state linkages are allowed, we derive a sufficient condition for the stability of synchronization in a network of chaotic attractors. this condition explicitly relates the network structure and the local and coupling dynamics to synchronization stability. for large erdős-rényi networks, the obtained condition is translated into a lower bound on the probability of stability of synchrony. our results show that the probability of stability quickly increases as the randomness crosses a threshold which, for large networks, is inversely proportional to the network size.
|
one of the most fundamental observations is that most processes we experience daily are intrinsically irreversible (`one can not make grain from beer'). on the other hand, the fundamental laws governing the physics of the building blocks of our world (most importantly, gravity and quantum electrodynamics) feature time reversal symmetry. so, how can reversible microscopic behaviour give rise to irreversible collective macrophysical phenomena? when discussing this question, the situation is actually obscured by the interplay of a number of very different facets: for one, we know that the standard model of particle physics contains exotic cp-violating processes which (due to the cpt-theorem) must also violate time reversal symmetry, most notably the decay of neutral k-mesons. while these processes must have played an important role in the early universe, in particular in establishing the matter / antimatter asymmetry, if god were to switch them off right at this instant by pressing a button, we would not expect this to have any consequence whatsoever anymore for the observed asymmetry between `forward in time' and `backward in time'. another important aspect is that the expanding universe has a dense, hot past, reaching back to the big bang, while it does not seem to have a corresponding fate in the future. important as it is to consider cosmological and field theoretic facets of the question of the nature of time asymmetry, this must not obscure that a number of crucial insights can already be obtained by studying properties of simple physical toy models: if some general observation can be shown to already follow from very simplistic assumptions, then the corresponding mechanisms also have to be taken into account when discussing irreversibility in the context of more realistic physical models. furthermore, misconceptions about fundamental issues can easily give rise to serious misunderstandings in more involved situations. a toy model used to study aspects of irreversibility in systems with reversible microscopic physics should have a number of highly desirable characteristics: * it should be governed by very simple fundamental processes that are _manifestly_ time-reversal-symmetric. * it should be possible to easily simulate the evolution of the system on a computer at no loss of accuracy due to discretisation errors or similar technological restrictions. * it should be feasible to simulate a potentially large number of time steps with constant memory requirements. * the number of states of the system should be large, but finite, and ideally, entropy computations should not involve any mathematics more advanced than high school level. * the system must provide some model for the identification of macroscopic properties from microscopic configurations. one system that nicely satisfies all these conditions is a lattice gas model of discernible particles in which the fundamental interaction is scattering between particles. this model furthermore is made exactly tractable numerically (i.e. free of accumulating rounding errors) by taking particle positions and velocities not to be real numbers, but elements of the finite field , with p being a prime. for all the examples in the rest of this work, we will specifically choose p = 19.
while this trick simultaneously makes the number of microstates of the system finite, the resulting model then unfortunately only retains a formal resemblance to real world physics. still, once the important insights have been established utilizing this model as a tool, one easily convinces oneself that many relevant properties can be lifted directly, e.g. to a model of the dynamics of hard spheres. major technical restrictions are that numerical computations then have to be done with ridiculously high precision, which also will depend on the length of the interval of time to be simulated. on the conceptual side, going to hard spheres will require replacing the simple counting of states (i.e. combinatorics) with more advanced measure theory. these three rules provide an axiomatic specification of the behaviour of the `algebraic mechanics' model: * (arithmetics) all arithmetics is to be done in the field , i.e. modulo the prime p. (since p is prime, division is a well-defined operation: for every non-zero divisor there is precisely one quotient satisfying the defining product equation.) in the following, these arithmetic operations will be denoted by , etc. * (world) the system consists of finitely many labeled (i.e. discernible) particles living on the cells of a two-dimensional board. their physical degrees of freedom hence consist of two position coordinates as well as two velocity coordinates. multiple particles may occupy the same site on the lattice. * (dynamics) time advances in discrete steps. a single step consists of p subsequent stages, where each stage consists of three subsequent phases: motion, scattering, motion. in a `motion' phase, every particle's position coordinates are incremented by its velocity coordinates (mod p). in a `scattering' phase, particles' positions are not updated, but whenever multiple particles occupy the same cell, their average velocity is determined (using mod-p arithmetics), and the velocity of every particle in that cell is then replaced by its reflection about that average, i.e. by twice the average velocity minus the particle's old velocity. if p or more particles occupy the same cell, no scattering happens in that cell (i.e. velocities are not changed). one immediately notices that: * a scattering phase does not change the average velocity of all particles occupying the same cell, hence `total momentum is conserved'. * a stage is made up of phases in such a way that it (a) is inherently time-reversal-symmetric, and (b) involves changes to both positions and velocities. * the rule to exclude scattering for cells containing p or more particles manages to make the dynamics well-defined in every situation and retains interesting nontrivial scattering properties as long as the number of particles is not far larger than the number of cells. * systems which have been set up in such a way that scattering events do not take place (e.g. one particle per row, all of them moving horizontally) only return to their initial configuration after p stages, i.e. one time step. hence, under these rules, the time evolution of the system is governed by two- and multi-particle scattering processes. the rules given here are simple enough to be easily implemented in emacs lisp, so that everybody's favorite text editor can be used to study the behaviour of the system.
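for readers who prefer a sketch outside the editor, the following python transcription of a single stage mirrors the elisp in appendix [elisp] (variable names are ours; the scattering rule, which replaces each velocity v in a cell by 2*v_avg - v mod p, i.e. by its reflection about the cell average, is the one implemented by the appendix code):

from collections import defaultdict

P = 19  # the prime used throughout this text

def motion(particles):
    # particles: list of [x, y, vx, vy], all mod-P integers
    for s in particles:
        s[0] = (s[0] + s[2]) % P
        s[1] = (s[1] + s[3]) % P

def scatter(particles):
    cells = defaultdict(list)
    for s in particles:
        cells[(s[0], s[1])].append(s)
    for group in cells.values():
        n = len(group)
        if n >= P:
            continue                 # no well-defined average: skip scattering
        inv_n = pow(n, -1, P)        # modular inverse of the cell population
        ax = sum(s[2] for s in group) * inv_n % P
        ay = sum(s[3] for s in group) * inv_n % P
        for s in group:
            s[2] = (2 * ax - s[2]) % P   # reflect velocity about the average
            s[3] = (2 * ay - s[3]) % P

def stage(particles):
    motion(particles); scatter(particles); motion(particles)

running p stages forward, negating all velocities, and running another p stages returns the initial configuration exactly, which makes the manifest reversibility of the rules directly checkable.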
a short piece of program code that implements a complete simulation framework is shown in appendix [elisp]. when starting from a very specific initial condition, such as a number of particles arranged as a compact block, with random initial velocities, one finds that for small block sizes, scattering processes are so rare that not much interesting happens on reasonable time scales. for our choice p = 19, a 9 x 9 block appears to be just of the right size to show interesting dynamics, as demonstrated in figure [ex:dynamics9x9]. [figure ex:dynamics9x9: text renderings of the 19 x 19 board at t = 0, 1, 2, 5, 10 and 20, showing how the initially compact 9 x 9 block of particles disperses over the lattice under the scattering dynamics.] in systems in which the _ergodic hypothesis_ holds (or at least a weakened version thereof, which claims that the system will trace out a substantial fraction of the accessible configuration space in an `effectively chaotic' manner), asking the question what microstate the system is in at individual points in `macroscopic time' (i.e.
the scale of time differences is considerably larger than the time scale of microscopic processes) is equivalent to obtaining data from an uncorrelated source of randomness (such as a perfect die). then, the shannon entropy of such a random process just corresponds to the boltzmann entropy of the physical system (up to a dimensionful proportionality constant required to match the statistical interpretation with the phenomenology of macroscopic thermodynamics). unfortunately, in a number of situations, the ergodic hypothesis is much more attractive than justifiable. in particular, we can consider ourselves lucky that the solar system does not seem to trace out all the mechanically possible configurations that would be allowed taking only the classical conservation laws into account. at first, this observation seems to be a major hurdle for the construction of a general theory explaining macroscopic phenomena in terms of microscopic processes. we will see, however, that the desirable link between shannon and boltzmann entropy can still be maintained even without invoking the ergodic hypothesis, if one is willing to pay a price in the form of a modified interpretation of macroscopic entropy. if one had to define entropy in but a single sentence, then the statement that _`entropy is a linearly additive measure of the size of a space of possibilities'_ presumably would be a strong contender: while being simple enough to be directly applied to a number of systems that can even be studied at school level (such as casting a die), it still contains all the relevant essence necessary to develop the analytic formula for entropy both in coding theory and statistical mechanics, applying not much more than simple consistency considerations as well as quite elementary mathematics. in particular, shannon's entropy formula is easily derived from consistent application of the three ideas that `rolling two perfect dice produces twice as much randomness in every step as rolling just one die', `rolling five perfect dice produces slightly less randomness than throwing thirteen perfect coins', and `an imperfect die that only rolls 1 and 2, with 50% probability each, is as a source of randomness equivalent to a perfect coin'. in particular, the entropy associated to some specific outcome is proportional to the logarithm of its rarity (inverse probability), and has to be weighted with the probability. here, the logarithm ensures that entropy is additive when composing two independent systems, where the space of possible configurations grows multiplicatively. a useful choice of normalization is to associate an entropy of 1 to a perfect coin, denoting this amount of entropy a `bit'. this boils down to using the _logarithmus dualis_ (base-2 logarithm) when defining entropy: $s = -\sum_i p_i \log_2 p_i$. one of the beauties of the `algebraic mechanics' model is that we can easily compute the entropy as the logarithm of the number of microstate configurations that belong to a macrostate. considering a collection of labeled (i.e.
discernible) particles moving on a lattice, the most generic macrostate description, which does not provide any additional constraints beyond this, can be realized through different microstates, as every particle can have arbitrary position and momentum, both being a pair of mod-p integers. the base-2 logarithm of this number gives the entropy of this macrostate, which is for p = 19 just 4 * 81 * log2(19), i.e. about 1376.3 bits. it is extremely important to note here that every constraint on the configuration of the particles can be translated to a set of microstates satisfying that constraint, so _entropy is a property of a macroscopic description of a system, not of the system itself!_ this means that different observers, who speak about the _same_ system (i.e. microstate), but have a different degree of information about it, will associate different entropy to it. to make this point explicitly clear, let us consider the simple geometric pattern underlying the `initial' configuration in figure [ex:dynamics9x9]. we will call this configuration . descriptions of of different levels of detail correspond to different macrostates, hence different associated entropies: * (ms1) a `blind' observer who does not know anything about this configuration except that `it contains 81 labeled particles' will associate an entropy of to it. * (ms2) an observer who describes this configuration as `81 particles arranged in a regular pattern in the top left corner of the lattice, with unspecified velocities' will associate to it an entropy of . * (ms3) an observer describing the configuration as `all 81 particles being located somewhere in the top left corner of the lattice, with unspecified velocities' would associate to this description an entropy of . * (ms4) an observer possessing detailed knowledge that `the first particle goes into the top-left corner, the second into column 2 in the first row, etc., but with unspecified velocities' would associate to his description of the system the entropy . * (ms5) an observer having `detailed knowledge of the position and velocity (i.e. `as specified in the example program') of each individual particle' would associate to this description an entropy of . * (ms6) an observer using data-reducing measuring devices that probe spatially averaged properties, such as in particular cumulative particle numbers in 3 x 3 blocks (resp. 3 x 1, 1 x 3 and 1 x 1 blocks for the last row and column of the lattice), would see `nine particles in each of the nine top-left blocks, and none in other positions'. such a description would be associated to an entropy of . it is especially this last case we will from now on be most concerned with. starting from a configuration such as the one named `config-a0' in the code example in the appendix, scattering processes will soon eradicate all visible structure. however, the fundamental laws being explicitly time symmetric, we can always `respool' the dynamics by just reversing all velocities.
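the elided numerical entropy values for descriptions of the (ms6) kind can be recomputed explicitly. the sketch below (python; it mirrors the counting performed by `config-entropy' in the appendix, with all names chosen by us) evaluates the entropy of the `nine particles in each of the nine top-left blocks' description: the microstate count is the multinomial number of ways to distribute the 81 labels over the blocks, times the positions inside each block, times the unconstrained velocities.

from math import factorial, log2

P, N = 19, 81   # lattice size (prime) and number of particles

def block_sizes():
    # 3x3 blocks over a 19x19 board: six full blocks per row plus a
    # one-cell-wide remainder, giving real-cell counts 9, 3 or 1
    widths = [3] * 6 + [1]
    return [w * h for h in widths for w in widths]   # 7 x 7 = 49 blocks

def device_entropy(counts):
    # counts: the 49 block populations reported by the device
    sizes = block_sizes()
    assert sum(counts) == N
    s = log2(factorial(N))              # distinguishable labels
    for n_k, b_k in zip(counts, sizes):
        s -= log2(factorial(n_k))       # labels within one block are not resolved
        s += n_k * log2(b_k)            # positions inside the block
    return s + N * 2 * log2(P)          # two unconstrained velocity coordinates

counts = [0] * 49
for j in range(3):
    for k in range(3):
        counts[7 * j + k] = 9           # nine particles in each top-left block
print(device_entropy(counts))           # ~ 1179.9 bits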
as there is a mapping between microstates at and microstates at any other time provided by time evolution, all that happens in this model is that easily visible spatial correlation is shifted to and mixed into more complicated correlations which are completely non-obvious to the human eye (not to speak of the fact that half the relevant information is missing in plots that do not show velocities). experimenting with algebraic mechanics, one finds that, while we see perfect reversibility of time evolution, recurrence phenomena that reproduce initial configurations after an unexpectedly small number of time evolution steps (say, ) nevertheless do not seem to occur (according to computer experiments). when going from the time to any other time , fully reversible microscopic dynamics guarantees that the macrostate (ms2), as a set of microstates, evolves into another set of precisely the same number of microstates. hence, entropy does not change with time in this process. this situation is completely analogous to the situation in classical mechanics, where liouville's theorem ensures the conservation of phase space volume. as there is a very terse textual description of the macrostate (ms2) at , is there a similarly compact `articulated' description of the macrostate which we get from (ms2) by time evolution? the best we can do is: (ms2) _a configuration which evolved out of a configuration that contained 81 particles in a regular pattern in the top left corner of the lattice, with unspecified velocities, by going from to ._ this linguistic trick demonstrates the actual conceptual idea underlying the mathematical proofs of entropy conservation in both classical as well as quantum systems whose dynamics is given by the liouville-von neumann equation. it should be noted that, so far, our reasoning did not depend on whether lies in the future or in the past of . the kind of partial information which we have about any specific macroscopic system depends on the way our measuring devices work. from this perspective, we will almost exclusively encounter macrostates such as (ms6) when studying real systems: _the way our measuring devices work strongly favors some macrostate descriptions over others_.
it is useful to introduce a special notion here: we want to call a macrostate whose description corresponds to information obtained (or in principle obtainable) about a set of microstates by applying some measuring apparatus an _-observed (or -observable) macrostate_. this idea allows us to define an entropy function that maps information obtained with the apparatus (say, a digital camera, or a thermometer, or both combined, a pair of human eyes, etc.) to the entropy of the macrostate description of all the microstates compatible with that observation. sticking with the example (ms6), the (specific) measuring apparatus that produces data of such form would embed the 19 x 19 lattice into a 21 x 21 lattice by adding two extra rows and columns to the right and bottom for bookkeeping purposes only (i.e. particles are forbidden to go there), and measure the number of particles in each of the 49 blocks of size 3 x 3 in the enlarged lattice. all information about particle identity as well as velocity is omitted; only locally averaged information about the spatial distribution is determined. explicitly, the number of microstates corresponding to a measurement that gave particles in the block is: where is the number of real cells in the block, normally 9, but for blocks that contain `padding cells' this may be 3 or 1. time evolution from to maps the configuration (microstate) to some specific other configuration. the evolution of the entropy as measured by the apparatus with time is displayed in figure [entropy-evolution], both for evolution towards the future as well as evolution towards the past. [figure entropy-evolution: time evolution of the -observed macrostate entropy.] here, it must be pointed out that, while is a very special configuration amongst all those microstates that belong to the macrostate (ms6), the behaviour observed is essentially the same for a generic microstate that belongs to (ms6): entropy increases both towards the past as well as the future, and also fluctuates. this claim is easily checked by using the function `random-microstate-compatible-with-macrostate' from the code given in the appendix. this produces a random, hence (usually) generic, microstate corresponding to a given -observed macrostate. the three crucial ingredients that produce the behaviour shown in figure [entropy-evolution] are: * a collection of macrostates that are linked to the behaviour of a `data-reducing' measuring device, which reports some kind of reduced information about the microstate configuration the system is in. * microscopic dynamics that is not aligned with (i.e. does not respect) the data reduction associated with the measuring device. * an initial state which, from the perspective of the measuring device, belongs to a macrostate which is special in the sense that its observed properties constrain the number of associated microstates relative to the number of microstates associated to a generic (again with respect to the measuring device) macrostate.
in real systems, where the number of degrees of freedom usually is ridiculously large, the first condition is satisfied automatically by any conceivable measuring device. usually, both omission of and averaging over degrees of freedom are involved here. the second condition also is rather generic. so, whenever reversible microscopic dynamics does not care about the macroscopic notions we use to describe the processes it causes at macroscopic, `averaged' scales (which practically always is the case), we expect to encounter the situation that _time evolution does not change the entropy of the macrostate, but finding a new description of the evolved macrostate in terms that are discernible by the measuring device does._ it is this process of `re-articulation' in which information about the macrostate is lost, and hence entropy increases. the information lost is precisely that part of the original knowledge about the system (in this example: averaged positional information) which by the dynamics was mixed into more complicated correlations not detected by the measuring apparatus. simply stated, entropy increases whenever we apply dynamics to partial knowledge about a system and eventually project back onto the same class of partial system descriptions used initially, unless these descriptions are compatible with the microscopic dynamics. as there is no intrinsic reason why they should be, this is normally the case; but with this criterion in mind, it also is easy to construct counter-examples (a trivial one being the translational motion of a single solid body). in particular, entropy also increases when we go from the `present' to the `past': just as it is difficult to draw conclusions about the future by studying the present, it is also difficult to draw conclusions about the past. generally speaking, if the situation were so simple that the idea that `entropy increases with time' described everything there is to know, we should not have much difficulty answering a question such as `where does the moon come from?'. while a description of a system's past may well correspond to a low-entropy macrostate, we can still easily encounter the situation that, for a given system, the totality of all conceivable pasts (each of which may well have low entropy) that are compatible with our knowledge is a macrostate of higher entropy than the one describing the present situation.
in this toy system, the magnitude of the entropy increase is just a measure of the amount of information lost by going from a description such as `evolved out of a system characterized by macroscopic parameters at time ' to a description `characterized by macroscopic parameters at time '. evidently, the latter macrostate can not contain fewer microstates than the former: perfect traceability would mean that we can identify the image of each microstate under the one-to-one mapping of time evolution. less-than-perfect traceability means that the number of possibilities increases. this is analogous to the very basic observation that, when adding the numbers 3 and 5, and keeping the result while forgetting the summands, the number 8 `does not know how it was produced': it may also have been the result of adding 6 to 2. so, in this `irreversible addition', we `produced entropy'. the observed `fluctuations' on top of the gradual increase in entropy (which may lead to transient decreases in entropy) correspond to those situations where the association of macrostates to subsequent microstates happened to produce a `comparatively small' macrostate following a larger one. when investigating the dynamics of a single microstate only, such processes are _a priori_ not excluded, and certainly expected to determine the behaviour of the system at times far away from . if we started instead from a uniformly weighted collection of all microstates that represent a given macrostate, then re-articulation after time evolution would produce a weighted collection of new -observed macrostates. as a weighted collection of macrostates again is a macrostate (but usually not an -observable one), and as this re-articulated description contains extra microstates in addition to the time-evolution images of the microstates in , the entropy of this less stringent description is larger than the entropy of : the re-articulation projection loses information about the time-evolved system. in this sense, the increase in entropy is inevitable. if we started from any macrostate that is projected onto a specific -observable macrostate, but contains fewer microstates (e.g. only a single one), then there is _hidden information_ about the system: its state could be known more accurately than how it is described by the corresponding -observable macrostate. the dynamics will mix this hidden information to a varying degree with those parameters the measuring device is sensitive to, hence giving rise to entropy fluctuations. considerations on `maxwell's daemon' show that the magnitude of these fluctuations gives a lower bound on the minimal effort any physical realization of the measuring device must make. putting the insights gained by studying the `algebraic mechanics' toy model into proper physical context requires a few additional remarks. computing the rate of change of entropy by applying the liouville-von neumann equation to the quantum mechanical expression for entropy ( ) gives the result that entropy is conserved. this situation is analogous to the situation encountered in the toy model when omitting the step of `re-articulation'. one easily convinces oneself (e.g. by means of an example, such as the sketch below) that the quantum mechanical density matrix reduction associated with the measurement process changes (increases) entropy. this again is associated to an information-lossy projection, usually called a `quantum jump'.
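the kind of example alluded to here fits into a few lines. the sketch below (python / numpy, a minimal construction of our own) shows that unitary (liouville-von neumann) evolution leaves the von neumann entropy of a density matrix unchanged, while deleting the off-diagonal coherences, as the density matrix reduction of a measurement does, increases it.

import numpy as np

def von_neumann_entropy_bits(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

psi = np.array([1.0, 1.0]) / np.sqrt(2)      # pure superposition state
rho = np.outer(psi, psi.conj())              # entropy 0

theta = 0.7                                  # any unitary conjugation
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(von_neumann_entropy_bits(U @ rho @ U.conj().T))   # 0.0: conserved

reduced = np.diag(np.diag(rho))              # drop phase correlations
print(von_neumann_entropy_bits(reduced))     # 1.0 bit: the 'quantum jump' generated entropy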
just as with the question about entropy generation, there is substantial confusion about the question `when (and how) the wave function collapses' (e.g. whether this is a `faster-than-light' process). as `quantum jumps' lead to an increase of entropy, some relevant aspects are actually linked, and so are their explanations. considering a combined system consisting of a `quantum device' q (e.g. an excited atom) and a `measuring device' d (e.g. a detector), the quantum states of the combined system are elements of the tensor product hilbert space . due to the interaction between q and d, the `initial time' quantum states of the form evolve into entangled linear combinations. in other words, the final quantum states we would like to use to describe the system after the interaction give us a different basis (in the heisenberg picture) than that of the initial quantum states. as one all too often thinks of these quantum states in terms of position space probability amplitudes, where they look the same, it is deceptively easy to get confused by failing to discern between elements of the `initial' hilbert space and the `final' hilbert space (which are isomorphic). evidently, every description of a measurement process will at some point have to make the transition from using quantum states in to using quantum states in . it is precisely at this point _in the description of the process_ where this transition happens that `the wave function collapses', if it is done in such a way that phase correlation information is lost through projection. regardless of whether this step is consciously articulated or not, it will have to happen somewhere in every description of a measurement process. consequently, the confusion of conceptual levels associated with the question `whether the collapse of the wave function is a faster-than-light process' is of the same kind as the confusion demonstrated by the question-answer combination `why is the maid in shakespeare's poem _a lover's complaint_ pale?' / `because it rhymes with _tale_': here, the question was posed at the level of content, while the answer was given at the level of description. this has to be taken into account when using quantum jumps and fermi's golden rule to justify a markov model as a basis for a proof of the quantum mechanical version of boltzmann's `h-theorem'. (essentially, this then amounts to proving entropy generation by assuming entropy-generating irreversible fundamental processes.) the key property of the `algebraic mechanics' model is its perfect computational traceability, without losses related to numerical limitations.
while this helps to simplify a number of arguments, the relevant reasoning can be lifted naturally to more realistic descriptions of microscopic physics that can not avoid the problem of limited numerical precision. this basically means wrapping up most statements in constructions such as `if one demands numerical precision x on initial states, then for the given amount of time to pass, the following holds within numerical precision y:', where the precision-related errors can be made small. apart from complicating the discussion, this does not introduce qualitatively new features. therefore, the value of the toy model lies in helping to isolate the relevant aspects that lead to important insights from the irrelevant ones. additionally, while this is not the subject of this work, there are situations where information about the behaviour of a `continuous' system can be gained by studying its behaviour in modulo-p arithmetics. presumably the most famous example of such a situation is given by the grothendieck-katz conjecture. the `algebraic mechanics' toy model introduced here is both simple and powerful enough to elucidate some key aspects of the phenomenon of the thermodynamic arrow of time in terms accessible to a broad audience. other toy models that demonstrate entropy generation based on reversible microscopic dynamics exist, such as the kac ring model (also see ). the conceptual advantage of the very simple `algebraic mechanics' toy model is that the model unifies exact computational traceability with formal similarity to mechanics. it demonstrates entropy generation with respect to time evolution towards the future _as well as_ towards the past, and gives interpretations of this phenomenon through the notion of `re-articulation'. any `measurement' involves data reduction. in fact, considering the situation in quantum computing, it may be appropriate to _define_ `a measurement' as `a data-reducing projection'. if the microscopic dynamics does not respect (i.e. is agnostic about) the eigenspaces of this projection, then time evolution followed by re-articulation will lead to an inevitable loss of information contained in the system's description. hence, as soon as we use dynamics to extract useful information about a system at any other time than the present, we see an increase in entropy, regardless of whether time evolution was performed towards the future or towards the past. when asking questions about the future, we will hence observe entropy to increase towards the future. when doing forensic analysis, we observe the opposite phenomenon: extracting information about how precisely an accident happened gets increasingly difficult the more time has passed since the event. when discussing entropy generation (and also the quantum mechanical measurement process), great care has to be taken to discern between concepts that refer to two different levels: the `fundamental dynamics level' and the `description level'. while the conclusions that can be obtained by studying this toy model leave open many important questions about the physical `arrow of time' (e.g. why the universe has a hot past), it is important to first understand what aspects of entropy generation already follow from very basic general features before more advanced physics can be discussed. strictly speaking, this work says nothing new about physical processes (e.g. fundamental dynamics). furthermore, it does not offer new descriptions of physical processes (i.e.
thermodynamics). it does, however, address some occasionally discussed issues concerning the descriptions of descriptions of physical processes. philosophical aspects. one important source of confusion in the discussion of the `arrow of time' is the philosophical question of determinism: is the future determined by the past? stated differently, if all information about the world were contained in a spatial slice, and `dynamics' were nothing else but some invented funny mathematics on top of such an initial configuration that meaninglessly maps it to other configurations, why should e.g. some form of `dynamical laws' be more `real' than another possible imagined choice? physically speaking, asking `whether the world is deterministic or not' _a priori_ is as much a non-question as is the question `why anything exists at all': as it is impossible to perform experiments on `the whole world', the toolbox of physics can not give an answer; the only way to come up with an answer is to first find out how the question one had in mind should have been phrased accurately. evidently, we can ask whether some specific process in a system we can isolate and experiment with is `deterministic'. if a key property of all experiments is the separation of the system measurements are performed on from `something else', then the question whether `the world is deterministic' is nonphysical in precisely the same way as the question `what happens if an unstoppable force meets an unmoveable object', due to a fundamental contradiction in the assumptions. certainly, as abolishing relevant prejudice is an important prerequisite for gaining insight by means of the scientific method, discussing the `arrow of time' mandates overcoming all prejudice on determinism first. while this work demonstrates that the phenomenon of `observed entropy generation' already happens in an extremely simple, completely deterministic toy model, and reasons that the underlying mechanisms are basic enough to generalize to virtually all more realistic physical models, this does not at all touch the question whether some particular fundamental physics actually follows deterministic laws or not. acknowledgments. the original incentive for this work came from a request to explain in more detail the physical aspects of a popular talk on the `thermodynamic arrow of time' given by the author at the interdisciplinary mind congress on `time' in nuremberg (nürnberg), germany, 02.10.
the ` algebraic mechanics ' model was developed subsequently in an effort to construct a maximally simple easily traceable quantitative explanation of a key phenomenon underlying entropy generation .it hence is a pleasure to thank martin dresler for asking the right question namely for asking for a simple quantitative explanation of entropy generation also accessible to non - physicists .this piece of code , when loaded into the ( x)emacs editor with : + ` ( byte - compile - and - load - file " amech.el " ) ` + allows the simulation of the ` algebraic mechanics ' toy system : ( setf amech - div - table ( v - init amech - prime ( xlambda ( n ) ( let ( ( vn ( v - init amech - prime ( xlambda ( x ) ( mod ( * x n ) amech - prime ) ) ) ) ) ( position 1 vn ) ) ) ) ) ( defun a+ ( x y ) ( mod ( + x y ) amech - prime ) ) ( defun a- ( x y ) ( mod ( - x y ) amech - prime ) ) ( defun a * ( x y ) ( mod ( * x y ) amech - prime ) ) ( defun a/ ( x y ) ( mod ( * x ( aref amech - div - table y ) ) amech - prime ) ) ( defun va+ ( vx vy ) ( cons ( a+ ( car vx ) ( car vy ) ) ( a+ ( cdr vx ) ( cdr vy ) ) ) ) ( defun va- ( vx vy ) ( cons ( a- ( car vx ) ( car vy ) ) ( a- ( cdr vx ) ( cdr vy ) ) ) ) ( defun va/ ( vx n ) ( cons ( a/ ( car vx ) n ) ( a/ ( cdr vx ) n ) ) ) ( defun advance - time ( config & optional nr - steps ) ( dotimes ( step ( or nr - steps 1 ) config ) ( let * ( ( v00 ' ( 0 . 0 ) ) ( advance ( lambda ( p ) ( cons ( va+ ( car p ) ( cdr p ) ) ( cdr p ) ) ) ) ( new - config-1 ( let * ( ( c ( mapcar advance config ) ) ( ht - by - pos ( make - hash - table : test ' equal ) ) ) ( dolist ( np c ) ( push np ( gethash ( car np ) ht - by - pos nil ) ) ) ( maphash ( lambda ( pos particles ) ( when ( < ( length particles ) amech - prime ) ( let * ( ( v - avg ( va/ ( reduce # ' va+ particles : initial - value v00 : key # ' cdr ) ( length particles ) ) ) ) ( dolist ( p particles ) ( setf ( cdr p ) ( va- v - avg ( va- ( cdr p ) v - avg ) ) ) ) ) ) ) ht - by - pos ) c ) ) ( new - config-2 ( mapcar advance new - config-1 ) ) ) ( setf config new - config-2 ) ) ) ) ( defun display - config ( config ) ( macrolet ( ( ic ( spec ) ` ( let ( ( x ? . ) ? 1 ( int - char ( + 1 ( char - int $ x ) ) ) ) ) ) ) ) ( let ( ( board ( v - init amech - prime ( lambda ( n ) ( make - string amech - prime ? . 
) ) ) ) ) ( dolist ( particle config ) ( ic ( aref ( aref board ( cdar particle ) ) ( caar particle ) ) ) ) ( dotimes ( j amech - prime ) ( insert ( format " \n%s " ( aref board j ) ) ) ) ) ) ( insert " \n " ) t ) ( defun config - entropy ( config ) ; ; " measuring " entropy with the device model described in the main text ( let * ( ( nr - blocks ( ceiling ( / amech - prime 3.0 ) ) ) ( boundary ( mod amech - prime 3 ) ) ; never = 0 !( b - sizes ( let ( ( v ( make - vector ( * nr - blocks nr - blocks ) 9 ) ) ) ( dotimes ( j nr - blocks ) ( setf ( aref v ( + ( * nr - blocks j ) nr - blocks -1 ) ) ( * boundary ( / ( aref v ( + ( * nr - blocks j ) nr - blocks -1 ) ) 3 ) ) ) ) v ) ) ( counts ( make - vector 49 0 ) ) ) ( dolist ( p config ) ( let * ( ( xpos ( floor ( caar p ) 3 ) ) ( ypos ( floor ( cdar p ) 3 ) ) ( nr - cell ( + ( * ypos 7 ) xpos ) ) ) ( incf ( aref counts nr - cell ) ) ) ) ( labels ( ( entropy ( nr - cell sum ) ( if (= nr - cell 49 ) ( reduce ( lambda ( sf x ) ( - sf ( log2-fakt x ) ) ) counts : initial - value ( + sum ( log2-fakt 81 ) ) ) ( entropy ( + 1 nr - cell ) ( + sum ( * ( aref counts nr - cell ) ( / ( log ( aref b - sizes nr - cell ) ) ( log 2.0 ) ) ) ) ) ) ) ) ( + ( entropy 0 0.0 ) ( * 81 2 ( / ( log 19 ) ( log 2 ) ) ) ) ) ) ) ( defun random - microstate - compatible - with - macrostate ( n - per-3x3 ) ( let ( ( config nil ) ( size ( length n - per-3x3 ) ) ) ( dotimes ( j size ) ( dotimes ( k size ) ( let ( ( n ( aref ( aref n - per-3x3 j ) k ) ) ) ( dotimes ( m n ) ( push ` ( ( , ( + ( * 3 j ) ( random ( min 3 ( - amech - prime ( * 3 j ) ) ) ) ) . , ( + ( * 3 k ) ( random ( min 3 ( - amech - prime ( * 3 k ) ) ) ) ) ) .( , ( random amech - prime ) ., ( random amech - prime ) ) ) config ) ) ) ) ) config ) ) ( defconst config - a0 ( if nil ; ; roll the die to produce a random configuration : ( let ( ( k 9 ) ) ( coerce ( v - init ( * k k ) ( lambda ( n ) ` ( ( , ( floor ( / n k ) ) . , ( mod n k ) ) .( , ( random amech - prime ) . , ( random amech - prime ) ) ) ) ) ' list ) ); ; use a definite initial configuration : ' ( ( ( 0 . 0 ) 14 .15 ) ( ( 0 .1 ) 0 . 10 ) ( ( 0 . 2 ) 6 . 8) ( ( 0 . 3 ) 0 . 7 ) ( ( 0 .. 18 ) ( ( 0 . 5 ) 16 .4 ) ( ( 0 . 6 ) 13 . 7 ) ( ( 0 . 7 ) 12 . 13 )1 ) ( ( 1 . 0 ) 8 . 9 ) ( ( 1 . 1 ) 9 . 8) ( ( 1 . 2 ) 4 .17 ) ( ( 1 .5 ) ( ( 1 .9 ) ( ( 1 . 5 ) 16 .15 ) ( ( 1 . 6 ) 9 . 12 ) ( ( 1 . 7 ) 11 .10 ) ( ( 1 .17 ) ( ( 2 .3 ) ( ( 2 . 1 ) 6 . 10 ) ( ( 2 . 2 ) 2 .4 ) ( ( 2 .16 ) ( ( 2 . 4 ) 11 .10 ) ( ( 2 . 6 ) 10 . 18 ) ( ( 2 . 7 ) 1 . 0 ) ( ( 2 . 8) 8 . 6 ) ( ( 3 .9 ) ( ( 3 . 1 ) 11 . 13 ) ( ( 3 . 2 ) 18 .9 ) ( ( 3 .9 ) ( ( 3 . 4 ) 2 .2 ) ( ( 3 . 5 ) 0 .5 ) ( ( 3 . 6 ) 0 . 0 )( ( 3 . 7 ) 9 . 7 )11 ) ( ( 4 . 0 ) 11 . 6 ) ( ( 4 . 1 ) 9 .4 ) ( ( 4 .1 ) ( ( 4 . 3 ) 14 .6 ) ( ( 4 . 4 ) 0 .15 ) ( ( 4 . 5 ) 6 . 9 ) ( ( 4 . 6 ) 2 . 6 ) ( ( 4 . 7 ) 18 .14 ) ( ( 4 .. 18 ) ( ( 5 . 0 ) 4. 10 ) ( ( 5 . 1 ) 9 .6 ) ( ( 5 . 2 ) 13 .10 ) ( ( 5 . 3 ) 11 . 14 ) ( ( 5 . 4 ) 10 .1 ) ( ( 5 . 5 ) 2 .2 ) ( ( 5 . 6 ) 13 . 14 ) ( ( 5 . 7 ) 8 .4 ) ( ( 5 . 8) 18 .5 ) ( ( 6 . 0 ) 5 . 13 ) ( ( 6 . 1 ) 11 .5 ) ( ( 6 .17 ) ( ( 6 .. 13 ) ( ( 6 . 4 ) 5 .15 ) ( ( 6 . 5 )6 ) ( ( 6 . 6 ) 14 . 12 ) ( ( 6 . 7 ) 17 .5 ) ( ( 6 . 8) 0 . 11 ) ( ( 7 . 0 ) 15 .12 ) ( ( 7 . 1 ) 6 . 7 ) ( ( 7 . 2 ) 14 .9 ) ( ( 7 . 3 ) 9 . 8) ( ( 7 . 4 ) 4 . 18 ) ( ( 7 . 5 ) 12 .4 ) ( ( 7 . 6 ) 4. 17 ) ( ( 7 . 7 ) 17 .15 ) ( ( 7 .8) 4 . 9 )0 ) 14 . 0 ) ( ( 8 . 1 ) 3 . 0 ) ( ( 8 . 2 ) 16 . 11 ) ( ( 8 . 3 ) 7 .11 ) ( ( 8 . 4 ) 5 .5 ) ( ( 8 . 5 ) 17 .6 ) ( ( 8 . 6 ) 16. 13 ) ( ( 8 . 7 ) 18 .4 ) ( ( 8 .8) 2 . 
13 ) ) ) ) ( defconst config - a1 ; ; use this to check the claim of the genericity of entropy evolution ; ; over time ( if nil ( random - microstate - compatible - with - macrostate [ [ 9 9 9 0 0 0 0 ] [ 9 9 9 0 0 0 0 ] [ 9 9 9 0 0 0 0 ] [ 0 0 0 0 0 0 0 ] [ 0 0 0 0 0 0 0 ] [ 0 0 0 0 0 0 0 ] [ 0 0 0 0 0 0 0 ] ] ) ' ( ( ( 8 . 6 ) 11 .1 ) ( ( 6 . 6 ) 8 . 12 ) ( ( 6 .6 ) 17 . 0 ) ( ( 6 . 6 ) 11 .3 ) ( ( 8 .( ( 8 . 6 ) 16. 11 ) ( ( 8 .5 ) ( ( 6 . 6 ) 16 . 6 ) ( ( 7 .. 18 ) ( ( 8 . 5 ) 3 . 8) ( ( 6 . 5 ) 11 . 14 ) ( ( 7 . 3 ) 0 . 6 ) ( ( 8 .4 ) 1 . 10 ) ( ( 7 . 3 ) 12. 11 ) ( ( 6 . 4 ) 16 .2 ) ( ( 7 . 4 ) 12 .17 ) ( ( 7 . 4 ) 11 .3 ) ( ( 8 . 5 ) 6 . 7 ) ( ( 8 . 2 ) 17 .4 ) ( ( 8 . 2 ) 3 .8) ( ( 6. 1 ) 18 . 8) ( ( 8 . 2 ) 6 .10 ) ( ( 7 . 2 ) 18 . 18 ) ( ( 7 . 2 ) 14 .8) ( ( 74 ) ( ( 7 .1 ) ( ( 7 . 2 ) 11 .8) ( ( 3 . 7 ) 0 .13 ) ( ( 4 . 7 ) 2 . 7 ) ( ( 4 . 6 ) 15 .7 ) ( ( 5 . 7 ) 2 . 9 ) ( ( 4 .9 ) ( ( 4 . 7 ) 10 .2 ) ( ( 5 . 7 ) 7 .11 ) ( ( 4 . 6 ) 15 .16 ) ( ( 5 . 8) 4 .11 ) ( ( 5 . 4 ) 12 .14 ) ( ( 4 . 5 ) 17 . 6 ) ( ( 5 . 5 ). 11 ) ( ( 4 . 5 ) 9 .10 ) ( ( 5 . 5 ) 4 .17 ) ( ( 4 . 5 ) 16 .10 ) ( ( 5 .3 ) ( ( 5 . 3 ) 2 . 9 ) ( ( 5 . 5 ) 12 .5 ) ( ( 4 . 2 ) 3 . 6 ) ( ( 3 . 0 ) 5 .4 ) ( ( 3 . 1 ) 13 . 10 ) ( ( 3 . 0 ) 1 . 13 ) ( ( 4 .. 12 ) ( ( 4 . 1 ) 9 . 7 ) ( ( 4 . 0 ) 10 .16 ) ( ( 5 . 2 ) 13 .5 ) ( ( 3 . 1 ) 10 .17 ) ( ( 1 . 7 ) 5 . 14 ) ( ( 1 . 6 ) 10 . 0 ) ( ( 0 . 7 ) 4 . 7 ) ( ( 2 . 8) 3 . 7 ) ( ( 0 . 7 ) 2 .12 ) ( ( 1 . 8) 5 .4 ) ( ( 1 . 6 ) 4 . 14 ) ( ( 1 . 8) 6. 15 ) ( ( 1 . 6 ) 15 .5 ) ( ( 0 . 3 ) 17 .3 ) ( ( 1 . 4 ) 4 . 15 ) ( ( 1 . 3 ) 11 .9 ) ( ( 1 . 4 ) 5 .16 ) ( ( 0 .10 ) ( ( 1 . 5 ) 13 . 8) ( ( 2 . 3 ) 0 .1 ) ( ( 0 . 4 ) 5 . 12 ) ( ( 0 . 4 ) 16 .1 ) ( ( 0 .. 13 ) ( ( 0 .0 ) 16 . 13 ) ( ( 1 . 0 ) 14 . 14 ) ( ( 2 . 1 ) 17 .2 ) ( ( 0 . 0 ) 13 . 12 ) ( ( 1 .. 7 ) ( ( 2 . 1 ) 9 .3 ) ( ( 1 .2 ) ( ( 1 . 2 ) 11 .5 ) ) ) ) ( defun show - evolution ( config & optional n tag offset ) ( dotimes ( j ( or n 10 ) t ) ( insert ( format " \n====== % 3s % 3d % 8.3f = = = = = = " ( or tag " t " ) j ( config - entropy config ) ) ) ( display - config config ) ( setf config ( advance - time config amech - prime ) ) ) config ) ( defun amech - demo ( & optional config n ) ( let ( ( start - config ( or config config - a0 ) ) ( nsteps ( or n 50 ) ) ( config - at nil ) ( config - rat nil ) ( config - a00 nil ) ) ( setf config - at ( show - evolution start - config nsteps " t+ " ) ) ( setf config - rat ( reverse - velocities config - at ) ) ( insert ( format " \n [ * * * * * * * * * * * * * * * * * ] " ) ) ( display - config config - rat ) ( insert ( format " [ * * * * * * * * * * * * * * * * * ] \n " ) ) ( setf config - a00 ( show - evolution config - rat ( + 1 nsteps ) " t- " 1 ) ) ( equal config - a00 start - config ) ) )
|
one observes that a considerable level of confusion remains about some of those aspects of irreversibility, entropy generation and `the arrow of time' which actually are well understood. this demands that great care be taken in any discussion of irreversibility to use clear-cut notions and precise language, in order to be definite about which property follows from which assumption. in this work, a novel toy model of `algebraic mechanics' is presented that elucidates specific key aspects of entropy generation in a system with extremely simple reversible fundamental dynamics. it is argued why insights gained through a detailed quantitative study of this toy model also have to be taken into account in any realistic model of microscopic dynamics, classical or quantum alike. as irreversibility also touches upon the quantum mechanical measurement process (through the `proof' of the `h-theorem'), a simple way to address the tenacious question of when (and how) the wave function collapses is offered. * algebraic mechanics as an accessible toy model demonstrating entropy generation from reversible microscopic dynamics * * thomas fischbacher *
|
the hapmap project is an international effort to identify the genetic variation in the human population. this includes the identification of all single nucleotide polymorphisms (snps) arising in human populations. in the first major published study, approximately ten million snps are described throughout the human genome, derived from the genotypes of 269 individuals from four populations. a crucial component of the project has involved the comprehensive detection of all snps in ten 500-kb regions of the human genome. the ten regions were selected from targets studied by the encode project, whose goal is to annotate 1% of the human genome. thus, while the haplotype map of the human genome is not complete, substantial progress has been made to date, and there is every reason to expect a completed map in the near future. the characterization of genetic variation in the human population is only a first step towards the fundamental goal of relating phenotypes to genotypes. while in principle every snp may contribute individually to a phenotype, interactions among loci are common. furthermore, the problem of identifying relevant snps, and understanding the interactions among them, is confounded by the vast number of snps in the genome. these issues make it non-trivial to perform _association mapping_, in which phenotypes are mapped by analyzing genotypes of cases and controls. fortunately, this problem is ameliorated by two factors. first, the number of individuals in the human population is far smaller than the number of possible haplotypes, so that even if every individual on earth were genotyped, the description of the data would not involve all possible haplotypes. secondly, there is a lot of _linkage disequilibrium_ (ld), which describes the situation in which some combinations of alleles occur more or less frequently in a population than would be expected by the overall frequency of the alleles in the population. thus, "informative" snps, also known as tag snps, can be identified and used to simplify the measurement of variation, and also to reduce the number of loci that need to be considered for association mapping. the data produced by the hapmap project are useful to understand these issues. by way of example, we consider encode region enr131. this is a 500,000-base region from chromosome 2. the hapmap project has identified the snps in the region, meaning that even though there are 500,000 bases, the genomes of any two individuals differ in at most as many sites as there are snps. these sites typically contain one of two possible alleles, so a human haplotype is described by a binary vector whose length is the number of snps, and a genotype by a vector of the same length whose entries are 0, 1, or 2. a 0 or a 2 indicates that the two haplotypes agree (homozygous), and specifies the allele, and a 1 indicates disagreement (heterozygous). for our analysis, it is essential that 0 and 2 are the homozygotes. this encoding differs from the standard encoding where 0 and 1 are homozygotes, and 2 is the heterozygote (see, e.g., ). in general, a _genotype_ is a vector whose elements are from the alphabet $\{0,1,2\}$. the genotypes for a population of individuals form a matrix with one row per individual and one column per snp.
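to make the encoding concrete, here is a minimal python sketch of this mapping, using the lexicographically smaller observed nucleotide as the reference allele (as in section 3); the function name and the toy calls are illustrative, not part of the hapmap tools:

import numpy as np

def encode_locus(column):
    """column: one nucleotide pair per individual at a single snp,
    e.g. [("a","a"), ("a","g"), ("g","g")]. returns one {0,1,2} code per
    individual; the reference allele is the lexicographically smaller
    nucleotide observed at this locus."""
    ref = min(min(pair) for pair in column)
    codes = []
    for a, b in column:
        if a != b:
            codes.append(1)   # heterozygous
        elif a == ref:
            codes.append(0)   # homozygous, reference allele
        else:
            codes.append(2)   # homozygous, alternate allele
    return codes

print(encode_locus([("a", "a"), ("a", "g"), ("g", "g")]))  # -> [0, 1, 2]

stacking such columns yields the 0/1/2 genotype matrix, one row per individual, used throughout the rest of the paper.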
in the case of enr131, the hapmap project genotyped a subset of the snps in 269 individuals, resulting in a matrix with one row per individual. in fact, it is possible to reduce the number of snps that need to be considered in an association mapping study by a factor of 10 by selecting tag snps. the notion of a _genotope_ was introduced in and further developed in . a genotope is the convex hull of all possible genotypes in a population. the regular triangulations of the genotope describe the possible epistatic interactions among the loci. these objects are fundamental for analyzing linkage disequilibrium. for example, the sign of the standard measure of ld for a pair of loci corresponds to one of the two triangulations of the respective genotope (the square). for the data to be examined in this paper, the genotope is the convex hull of 269 points, so it is a subpolytope of a high-dimensional cube with side length two. in section 2 we review the relevant mathematical theory and we discuss the meaning of the genotope for population genetics. the _human genotope_ is the convex hull of about $6.5 \times 10^9$ points, one for each individual in the human population, and the ambient dimension is bounded above by the number of all snps. what can currently be derived from the hapmap data is a subpolytope of the human genotope which is the convex hull of only 269 points, one for each sequenced individual. in this paper we also restrict our attention to two specific encode regions, so that the number of informative snps is on the order of hundreds. we refer to the resulting subpolytopes as the _encode genotopes_. in sections 4-6 we study several different low-dimensional projections of the encode genotopes. these projections are chosen in a statistically meaningful manner, and we argue that the low-dimensional polytopes are a useful geometric representation of the data. in section 4 we apply principal component analysis (pca) to determine the most significant projections of our data. we compute the image of the encode genotopes under projection into the six most significant pca directions. these projections reveal the population structure of the hapmap genotypes in a manner consistent with . we then contrast pca projections with low-dimensional projections based on tag snps. the resulting polytope data are presented in section 5. this geometric analysis suggests a new statistical test, the _volume criterion_, for identifying informative snps. in section 6 we apply a statistical method which is less well-known than pca but possibly more informative in the population genetics context. _archetypal analysis_ was introduced by cutler and breiman for identifying a small collection of archetypes from given data points. we apply this method to our genotypes. the archetypes are either genotypes or mixtures of genotypes, so their convex hull is a polytope with $k$ vertices inside an encode genotope. we call it the _$k$-th archetope_. its defining property is that the total least squares error is minimal when each genotype is replaced by its nearest mixture of archetypes. we compute various archetopes and explain how these may be useful for designing genetic studies. our studies are preliminary and merely foreshadow the possibilities for a geometric organization of the large amount of genotype data that are currently being produced.
we believe that low-dimensional projections of genotopes will be useful for correctly quantifying population structure variation, and also for studying interaction. our results demonstrate the feasibility of computing low-dimensional projections of genotopes, and our analysis of the hapmap data provides a first step towards the construction of the human genotope. the geometric concept of a genotope was introduced in for studying epistasis and shapes of fitness landscapes. this model was applied in to fitness data in _e. coli_, and its relevance for human genetics was demonstrated in . we briefly review the mathematical setup, but with emphasis on diploids rather than haploids. we consider $n$ genetic loci, each of which is a diploid locus with two alleles. a 0 or a 2 indicates that the two haplotypes agree, and specifies the allele, and a 1 indicates disagreement. the connection to the classical genetics notation, used to discuss diploids in (example 2.5), is as follows: there are $3^n$ genotypes, one for each element of the set $\{0,1,2\}^n$. a population is a list of such genotypes. it determines an empirical probability distribution on $\{0,1,2\}^n$. the set of all probability distributions on $\{0,1,2\}^n$ is a simplex of dimension $3^n - 1$. it is called the _population simplex_. the vector which represents a given genotype records the allele at each of the $n$ sites. if $p$ is a population then $p_g$ is a number between 0 and 1 which indicates the fraction of the population which has genotype $g$. the _allele frequency vector_ of the population is a vector which lies in $[0,2]^n$. the $i$-th coordinate of this vector indicates the average number of occurrences of the lower case letter at the $i$-th site in the population. a _genotype space_ is any subset $\mathcal{g}$ of $\{0,1,2\}^n$. the elements of $\mathcal{g}$ are the genotypes that actually occur in some population. the genotype space will always be a very small subset of $\{0,1,2\}^n$ because the cardinality of $\mathcal{g}$ is bounded above by the number of individuals, which is usually much smaller than $3^n$. for example, the size of the human population, about $6.5 \times 10^9$, is less than $3^n$ as soon as the number of sites $n$ exceeds twenty. we define the _genotope_ to be the convex hull of the given genotype space. equivalently, it is the polytope in $[0,2]^n$ which consists of the allele frequency vectors of all possible populations with individuals in $\mathcal{g}$. we illustrate the concept of a genotope for the case of $n = 3$ loci. here $\mathcal{g}$ is any subset of the 27 genotypes in $\{0,1,2\}^3$, and the genotope is a convex polytope of dimension at most three. a basic invariant of such a polytope is the triple $(v, e, f)$ where $v$ is the number of vertices, $e$ is the number of edges and $f$ is the number of facets. figure 1 shows three concrete examples: * if $\mathcal{g} = \{0,1,2\}^3$ then the genotope is the three-dimensional cube with side lengths two. it has $(v,e,f) = (8,12,6)$. the eight vertices correspond to the genotypes that are homozygous at all three sites. * if the eight purely homozygous genotypes can not occur in a population then $\mathcal{g}$ consists of the remaining 19 genotypes. now the genotope is a _cuboctahedron_, with $(v,e,f) = (12,24,14)$. its twelve vertices correspond to genotypes with precisely two homozygous sites. * if the alleles at all three sites must be distinct then $\mathcal{g}$ consists of the six permutations of $(0,1,2)$ and the dimension of the genotope drops to two. it is a regular hexagon with six vertices and six edges. the theory developed in concerns gene interactions and the shapes of fitness landscapes. by definition, a _fitness landscape_ is any function $w : \mathcal{g} \to \mathbb{r}$. in population genetics, $w(g)$ measures the expected number of offspring of an individual with genotype $g$. the regular triangulations of the genotope describe epistatic interactions among the genotypes.
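the three-locus examples above are small enough to verify directly. a sketch in python, assuming numpy and scipy are available (facets are counted by merging coplanar qhull simplices, mirroring the facet-merging step described in section 4):

import itertools
import numpy as np
from scipy.spatial import ConvexHull

cube27 = list(itertools.product((0, 1, 2), repeat=3))

spaces = {
    "full cube": cube27,
    "no pure homozygotes": [g for g in cube27
                            if not all(c in (0, 2) for c in g)],  # 19 genotypes
}
for name, pts in spaces.items():
    hull = ConvexHull(np.asarray(pts, dtype=float))
    # qhull triangulates facets; coplanar simplices share one hyperplane
    facets = len(set(map(tuple, np.round(hull.equations, 8))))
    print(name, "vertices:", len(hull.vertices), "facets:", facets)

# the hexagon lies in the plane x + y + z = 3, so drop one coordinate first:
hexagon = [g[:2] for g in cube27 if len(set(g)) == 3]
print("hexagon vertices:", len(ConvexHull(np.asarray(hexagon, dtype=float)).vertices))

running this reports (8, 6), (12, 14), and 6 vertices, matching the cube, cuboctahedron, and hexagon described above.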
in the context of human genetics, it makes sense to replace the notion of fitness by penetrance values for a disease or by expression levels of a gene. a classification of two-dimensional genotopes and their triangulations from this perspective is presented in . we can not construct the human genotope at this time because the hapmap project is not complete. however, in section 3 we explain how the existing preliminary hapmap data can already be used to reveal something about the human genotope. by restricting our attention to the encode regions, and by taking advantage of linkage disequilibrium, we are able to compute biologically meaningful low-dimensional projections of the human genotope. in this section we explain the data we used, how they were obtained, and how they were prepared. our data consist of 269 genotypes over the dbsnp loci in the two encode regions enr131 and enm014 sampled by the hapmap project. these are the two regions listed in (table 8). in our study, we used the non-redundant version of the dataset which is available for download at www.hapmap.org/genotypes/latest_ncbi_build34/encode/ the data are grouped into four populations: utah-european (ceu), han-chinese (chb), japanese (jpt) and yoruba (yri). the region enm014 has 2315 snps, of which 1968 were sampled in all four populations. only one of the snps was triallelic in the hapmap data, but many snps had incomplete data due to sequencing errors. in fact the number of snps in enm014 successfully sequenced in all 269 individuals is 790, about a third of the total number of snps. it seems that a disproportionate number of loci in region enm014 had missing data, evidently due to unusual sequencing problems in several individuals. for simplicity we restricted our attention to the portion of snp loci that had two or fewer observed alleles in the hapmap data, and which were successfully sampled in all 269 hapmap individuals. this implies that our projections of the human genotope are all representable as subpolytopes of a standard hypercube. after so restricting the set of loci, our data comprise 269 diploid genotypes, over 1154 loci from region enr131 and 790 loci from region enm014.
for each snp locus, the lexicographically smaller nucleotide was used as the reference allele, and each of the observed 269 diploid genotypes was encoded as a numerical genotype over $\{0,1,2\}$. for example, for each snp locus with nucleotide a or g we have aa $\mapsto$ 0, ag $\mapsto$ 1, and gg $\mapsto$ 2. we thereby encoded the 269 genotypes as a $269 \times 1154$ matrix over $\{0,1,2\}$ for region enr131 and a $269 \times 790$ matrix for region enm014. the convex hull of the rows of one of our matrices gives a projection of a subpolytope of the human genotope. as an illustration of our data, below is the upper left $10 \times 15$ submatrix of the matrix for region enr131. it encodes the first 15 out of the 1154 biallelic snps, which were sampled in our first 10 hapmap individuals in the ceu population. already in the above submatrix we see likely linkage between the 5th and 6th columns, and also between the 12th, 13th, and 14th columns. such covariance among snps will allow us to work with various projections of genotopes, which we discuss in the following sections. we also note that the individuals in rows 6 and 7 as well as those in rows 9 and 10 have identical sequences in this matrix. the eight distinct rows in this example are affinely independent, so this projection of the human genotope into $\mathbb{r}^{15}$ is a 7-dimensional simplex. all of our data, along with the linux utility used to convert the hapmap data files into matrices, can be downloaded at our supplementary website bio.math.berkeley.edu/humangenotope/ the utility takes the four population files for a particular encode region, downloaded from the above hapmap site, and it converts each file into the corresponding matrix over $\{0,1,2\}$, in both maple and matlab format. principal component analysis (pca) is a standard statistical technique for reducing the dimension of high-dimensional data. in our study, the data is a $269 \times n$ matrix with entries in $\{0,1,2\}$, where $n$ is the number of loci. the genotope under consideration is the convex hull of the row vectors of the matrix. our mathematical problem consists of applying pca to our data in a manner that is consistent with the affine geometry of convex polyhedra. for this reason, we augment the matrix to a $269 \times (n+1)$ matrix $a$ by adding a column of 1s. in our situation, we have $269 < n+1$, and pca amounts to computing the singular value decomposition $a = u \sigma v^t$, where $\sigma$ is a real diagonal matrix whose main diagonal entries satisfy $\sigma_1 \geq \sigma_2 \geq \cdots \geq 0$. the columns of the matrix $u$ are the left singular vectors of $a$; they form an optimal orthonormal basis for the column space of $a$. likewise, the rows of the matrix $v^t$ are the right singular vectors of $a$; they are an optimal orthonormal basis of the row space of $a$. here optimality means that the matrix obtained from $a$ by setting $\sigma_{k+1} = \sigma_{k+2} = \cdots = 0$ is closest (in the euclidean norm) to $a$ among all matrices of rank at most $k$. consider any positive integer $k$. let $v_k$ denote the submatrix of $v$ consisting of the first $k$ columns. we define the _$k$-th pca projection_ of the genotope represented by $a$ to be the convex hull in $\mathbb{r}^k$ of the row vectors of the matrix $a v_k$. this polytope can be regarded as the statistically most significant orthogonal projection into $\mathbb{r}^k$ of the given genotope. the numerical computation of pca projections is straightforward, equivalent to computing the svd of the data matrix. using matlab on a pentium 4 pc, we can compute pca projections of a hapmap encode region in a matter of seconds.
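a minimal sketch of the $k$-th pca projection, under the svd convention reconstructed above (numpy only; the random matrix stands in for real genotype data):

import numpy as np

def pca_projection(G, k):
    """k-th pca projection of the genotope spanned by the rows of the
    0/1/2 genotype matrix G (one row per individual): returns the points
    in R^k whose convex hull is the projected genotope."""
    m = G.shape[0]
    A = np.hstack([G, np.ones((m, 1))])          # append a column of ones
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return A @ Vt[:k].T                          # rows of A V_k

rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(269, 50))           # stand-in genotype matrix
pts3 = pca_projection(G, 3)                      # feed to a convex hull code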
however, there are non-trivial numerical issues with these computations which make them more difficult in our case. what we are seeking is the correct combinatorial structure of the genotopes and their projections. however, arbitrarily small round-off errors can change the combinatorics of a polytope. an example is the standard 3-dimensional cube: if we slightly perturb one vertex, then at least one square face will be broken into two triangles. to solve this round-off problem, we computed each pca projection of our encode genotopes using very high precision arithmetic. whenever we observed a facet of a projected genotope nearly parallel to one of its neighboring facets, we merged the two facets together. this process gives the correct facets. (table 1: singular values and f-vectors of the $k$-th pca projections of the encode genotopes for regions enr131 and enm014.) we computed the $k$-th pca projection of the two encode genotopes for $k$ up to 6. for each projection we give the f-vector, which records the number of faces of each dimension. the results are shown in table 1. we note that the first few principal components explain most of the variation in the data, and that the f-vectors of the $k$-th pca projections are quite similar between the two regions. figure 2 shows the 3-dimensional pca projection of the enr131 genotope. its f-vector records the numbers of vertices, edges, and facets of the polytope. each vertex of the polytope is a projection of one of the genotypes in the four populations, which are indicated by colors. we found that the projected genotopes give an excellent representation of the four different populations, in the sense that they correspond to four distinct regions on the polytope boundary. the only exception in figure 2 is one of the japanese genotypes that occurs among the utah genotypes. while this may be a coincidence, it is noteworthy that only the identification of the japanese individuals in the genotyping process was based solely on self-reporting. in this section we discuss coordinate projections of the encode genotopes. such projections are relevant because of the commonplace practice of selecting _tag snps_ for genetics studies. such snps are subsets of the available snps that are, as much as possible, in pairwise linkage disequilibrium. thus, despite the fact that tag snps capture less of the variation in the data than the pca projection, they are useful in re-sequencing applications where it is desirable to sample as few snps as possible. the polyhedral analog of restricting analysis to tag snps is the projection of the genotope onto the tag snp coordinates. each individual projection onto $k$ snps is easy to compute, provided $k$ is not too big. what makes the computation of all coordinate projections challenging is the combinatorial explosion in the number of $k$-element subsets of the snps. we did not attempt to exhaustively compute all of these projections. instead we computed the projections of the two encode genotopes onto tag snps selected for further study.
using the software hclust.r, we chose 35 tags out of the 790 snps for region enm014, and 109 tags out of the 1154 snps for region enr131. these rather small sets of tag snps still capture most of the variation in the data: for region enm014, the sum of estimated variances of the original 790 columns is 108, and after projecting all columns onto the span of the 35 tag columns (and an added column of ones), the sum of the 790 estimated variances is 95. similarly for region enr131, the original sum of estimated variances is 288, and becomes 274 after projecting. many of the snps not chosen as tags were monomorphic in the hapmap data, or had low observed minor allele frequencies. for example, in region enm014 there were 206 monomorphic sites and 331 with low minor allele frequency. hclust.r reported that 253 candidate snps were considered for region enm014, and 726 candidate snps were considered for region enr131. we then investigated random samples of coordinate projections of the encode genotopes onto $k$ tag snps for small $k$. this computation of tens of thousands of polytopes was accomplished by automating polyhedral software. the packages we used are polymake and ib4e. our geometric analysis suggests a new test for identifying informative snps. for each coordinate projection, we compute the volume of the resulting projected genotope. since points in a genotope correspond to allele frequency vectors which can be realized by populations over the genotypes, there is a natural probabilistic interpretation of such volumes of genotopes. the _volume criterion_ seeks to identify the subset of snps which maximizes this polytope volume. as an example of our coordinate projections data, figure 3 shows the empirical distribution of the volumes of the tag snp projections of the enm014 genotope. in our random sample of 2000 6d projections onto tag snps for region enm014, the largest volume we observed was 23.36. the projection attaining this maximal volume is a genotope with 84 vertices and 377 facets, which is high compared to randomly chosen 6d projections onto tag snps. moreover, this particular 6d projection explains almost half of the variation in our data matrix for region enm014. out of the random sample of 2000 6d tag snp projections, only 13 produced a higher sum of variances of the projected columns. we take this as strong empirical support for our proposed volume criterion. archetypal analysis was introduced by cutler and breiman as an alternative to pca. its aim is to find low-dimensional projections of the data points onto meaningful mixtures of the high-dimensional points. our data points in this section are the rows of our data matrix, i.e., the genotypes. they represent the individuals in the four populations. archetypal analysis finds _archetypes_ that have the property that when each genotype is replaced by its nearest mixture of archetypes, the total least squares error is minimal. more precisely, if $k$ is the number of archetypes to be found (specified by the user), then the goal is to find archetypes $z_1, \ldots, z_k$ together with coefficients $\alpha_{ij} \geq 0$ and $\sum_j \alpha_{ij} = 1$ (and the $z_j$ themselves mixtures of the data points) such that the residual $\sum_i \| x_i - \sum_j \alpha_{ij} z_j \|^2$ is minimized. the benefit of archetypal analysis is that the archetypes have a useful and meaningful interpretation.
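returning to the volume criterion described above, a minimal sketch of the random-sampling computation (python with numpy and a recent scipy; the paper handled the polytopes with polymake and ib4e, so this is only a stand-in, and all names are illustrative):

import numpy as np
from scipy.spatial import ConvexHull, QhullError

def volume_criterion(G, tag_cols, k=6, n_samples=2000, seed=0):
    """sample random k-subsets of the tag snp columns of the 0/1/2
    genotype matrix G and return the subset maximizing the volume of the
    projected genotope, together with that volume."""
    rng = np.random.default_rng(seed)
    best_vol, best_subset = -1.0, None
    for _ in range(n_samples):
        cols = rng.choice(tag_cols, size=k, replace=False)
        pts = np.asarray(G[:, cols], dtype=float)
        try:
            vol = ConvexHull(pts).volume
        except QhullError:   # degenerate (flat) projections have volume 0
            vol = 0.0
        if vol > best_vol:
            best_vol, best_subset = vol, sorted(cols.tolist())
    return best_subset, best_vol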
for the data studied here, the archetypes are mixtures of genotypes. thus the inferred archetypes can be interpreted as representative populations for the measured genotypes. no efficient algorithm is known that guarantees finding the optimal archetypes, but the alternating optimization procedure suggested in appears to perform well in practice. in our view, computing archetypes from human variation data may be useful for designing population based genetic studies. in particular, the allele frequencies of the archetype populations suggest sampling strategies for controls in case-control studies, where it may be useful to sample a small number of groups of controls whose allele frequencies match the archetype populations. we now explain our implementation of archetypal analysis. the function to be minimized is a large biquadratic polynomial, that is, a polynomial which is separately quadratic in two groups of unknowns. this biquadratic polynomial is the _residual sum of squares_ (rss) whose derivation is given in . our problem is to minimize the residual sum of squares subject to non-negativity constraints. this optimization problem can have many local minima, and, in general, we can not hope to find the global minimum. the heuristic of alternating optimization for computing local minima works as follows. if we keep one of the two sets of variables fixed, then the objective function is just quadratic in the other set of unknowns and we can easily find the global minimum. we then fix those values and we allow the other set of unknowns to vary, solving again a quadratic optimization problem. iterating this procedure leads to a local optimum. this process can be repeated with many different starting values to reach a local optimum that is eventually satisfactory. we implemented this alternating optimization algorithm in matlab, using the high-performance optimization package sedumi to solve the arising quadratic optimization subproblems. as a heuristic to speed up computations, we first restricted our attention to tag snps and computed archetypes for these much smaller data sets. we then used the obtained archetypes as an initial guess for the archetypes in the full-dimensional data. we computed sets of three archetypes for enr131 and enm014. the number three is of particular interest in our study since we advocate that archetypes should be interpreted as representative populations, and the hapmap data is derived from three main populations (ceu, jpt+chb, and yri). figure 4 depicts the three archetypes computed for region enr131. by definition, each archetype $a$ can be expressed as a mixture $a = \sum_j \beta_j g_j$, where the $g_j$ are the genotypes of the 269 hapmap individuals, and all $\beta_j \geq 0$, with $\sum_j \beta_j = 1$.
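a simplified sketch of this alternating scheme (the paper's matlab/sedumi implementation solves each constrained quadratic subproblem exactly; here scipy's nnls with a heavily weighted sum-to-one row approximates each simplex-constrained least squares step):

import numpy as np
from scipy.optimize import nnls

def simplex_lsq(B, y, big=200.0):
    """min ||B w - y|| subject to w >= 0 and sum(w) = 1, the sum constraint
    enforced approximately by appending a heavily weighted row of ones."""
    B_aug = np.vstack([B, big * np.ones(B.shape[1])])
    y_aug = np.append(y, big)
    w, _ = nnls(B_aug, y_aug)
    return w

def archetypes(X, k, n_iter=50, seed=0):
    """alternating optimization for k archetypes of the rows of X, in the
    spirit of cutler and breiman: alternately fit the mixture weights alpha
    (one simplex-constrained least squares per data point), then refit each
    archetype and project it back into the convex hull of the data."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    Z = X[rng.choice(m, size=k, replace=False)].astype(float)  # init
    alpha = None
    for _ in range(n_iter):
        alpha = np.array([simplex_lsq(Z.T, x) for x in X])      # m x k
        Zt, *_ = np.linalg.lstsq(alpha, X, rcond=None)          # unconstrained
        for j in range(k):
            beta_j = simplex_lsq(X.T, Zt[j])                    # back to conv(X)
            Z[j] = beta_j @ X
    return Z, alpha

the rows of the returned alpha are the per-individual mixture coefficients plotted in figure 4; restricting X to the tag snp columns first reproduces the speed-up heuristic described above.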
for each archetype, we plot the 269 coefficients as 269 bars in the figure. we plotted all three archetypes on the same figure, using a different bar color for each archetype. we confirm that the three archetypes correspond to the three main geographic populations ceu, jpt+chb, and yri. although there is some mixing between ceu and jpt+chb in two of the archetypes, the third archetype is purely yri and it is the only archetype containing yri. this is consistent with the fact that yri is an outgroup among the three populations. as the amount of human variation data continues to increase, it is becoming imperative to find representations of the data that are informative and convenient for analysis. the human genotope offers one such representation: a geometric description of the data that is useful for studying population structure and epistasis. in this paper, we have computed three low-dimensional projections of a subpolytope of the human genotope. the projections are all compatible with the geometric structure of the genotope, and are useful in different ways. in terms of population structure, we believe that the results using archetypal analysis are particularly interesting, and we were surprised at the natural separation of the populations that emerged in the archetypes (figure 4). in our computations we picked the number of archetypes to be three, based on information we had about the data. in general, it is an interesting problem to determine, in a statistically meaningful way, the "optimal" number of archetypes to use for an analysis. we believe that information theoretic measures, such as the jensen-shannon divergence, may be useful for this problem. the principal component projections in section 4 demonstrate that the genotope is a representation that is compatible with existing approaches to dimensionality reduction. in particular, we have shown that principal component projections can be applied to polytopes, and not just points. our results in figure 2 offer a geometric analog of cavalli-sforza's gene-based population analyses. we conclude by reiterating that our results are an initial first step towards the construction of the human genotope. next steps include the extension to more snps, association of phenotype data with the genotypes so that epistasis can be studied, and an analysis of how the genotope changes over time. this work was supported in part by the defense advanced research projects agency (darpa) under grant hr0011-05-1-0057. chesler, l. lu, s. shou, y. qu, j. gu, j. wang, h.c. hsu, j.d. mountz, n.e. baldwin, m.a. langston et al., complex trait analysis of gene expression uncovers polygenic and pleiotropic networks that modulate nervous system function, _nature genetics_ *37* (2005) 233-242. i. grosse, p. bernaola-galvan, p. carpena, r. roman-roldan, j. oliver and h.e. stanley, analysis of symbolic sequences using the jensen-shannon divergence, _physical review e_ *65* 041904 (2002). a.l. price, n.j. patterson, r.m. plenge, m.e. weinblatt, n.a. shadick, d. reich, principal components analysis corrects for stratification in genome-wide association studies, _nature genetics_ *38* (2006) 904-909.
|
the human genotope is the convex hull of all allele frequency vectors that can be obtained from the genotypes present in the human population . in this paper we take a few initial steps towards a description of this object , which may be fundamental for future population based genetics studies . here we use data from the hapmap project , restricted to two encode regions , to study a subpolytope of the human genotope . we study three different approaches for obtaining informative low - dimensional projections of this subpolytope . the projections are specified by projection onto few tag snps , principal component analysis , and archetypal analysis . we describe the application of our geometric approach to identifying structure in populations based on single nucleotide polymorphisms .
|
human response time (rt) is defined as the time delay between a signal and the starting point of human action. for example, one can measure the time interval from a word appearing on a computer screen to when a participant pushes a keyboard button to indicate his or her response. two well established empirical facts of rt are the power law tails of rt distributions and the 1/f noise of rt time series, to which any theoretical description must conform. the generalized inverse gamma (giga) distribution (appendix [giga_scale]) belongs to a family of distributions (appendix [giga_ln]), which includes inverse gamma (iga), lognormal (ln), gamma (ga) and generalized gamma (gga). the remarkable property of giga is its power-law tail; for the general three-parameter case, the power-law exponent is given by the negative of $1+\gamma\alpha$, so that $f(x) \propto x^{-(1+\gamma\alpha)}$ for $x \gg \beta$. giga emerges as a steady state distribution in a number of systems, from a network model of economy, to ontogenetic mass growth, to stock volatility. this common feature can be traced to a birth-death phenomenological model subject to stochastic perturbations (appendix [birth_death]). here we argue that among closed form distributions the giga best describes the rt distribution. giga has a natural scale parameter $\beta$, which determines the onset of the power law tail, and two shape parameters $\alpha$ and $\gamma$, which determine the exponent of the tail. as such, our argument is an extension of previous approaches, such as the "cocktail" model, which effectively contains shape and scale parameters as well. furthermore, we speculate that the difficulty of a cognitive task tracks the half-width of the rt distribution and discuss it within the giga framework. our numerical analysis is performed on the following data (explained in the text): elp (english lexicon project), he (hick's experiments) and ldt (lexical decision time). two key features distinguish our approach. first, in addition to the usual individual participant fitting, we perform distribution fitting on combined participants data. while in line with individual fitting, this creates considerably less noisy sets of data. second, we develop a procedure for fitting the tails of the distribution directly (appendix [tail_power]), which unequivocally proves the existence of power law tails. this paper is organized as follows. in section ii, we provide a description of the experimental setup and data acquisition. in section iii, we conduct log-log tail fitting and rt distribution fitting with giga. in section iv, we conclude with a discussion of task difficulty. elp data is from the english lexicon project. he and ldt data was collected under the supervision of j. g. holden. elp (english lexicon project) studies pronunciation latencies to visually presented words; participants were sampled from six different universities.
data: two sessions, 470 participants each: session 1 (elp1), 1500 trials; session 2 (elp2), 1030 trials. he (hick's choice rt experiment): given a stimulus selected from a finite set of stimuli, participants try to respond with an action from a set of actions corresponding to this set of stimuli. the original he is described in . data: 11 participants completed 1440 trials of 2, 4, 6, 8, and 10 options, approximately 16 000 combined datapoints for each condition. ldt (lexical decision time). data: three groups of 60 participants completed 100 word and 100 nonword trials of 1, 2, and 4 word ldt respectively; only the correct word trials are depicted, approximately 6000 datapoints for each group. to enhance our efforts to understand the distributions' tail behavior, we combined all participants' data from each experiment into a single distribution. the log-log plot of power law tail fitting is discussed in appendix [tail_power]. in figs. [rt:fig:loglog_elp], [rt:fig:loglog_hick], [rt:fig:loglog_ldt], we show the results for the rt experiments. with the exception of ldt, trials for most of the tasks timed out by 4 or 5 seconds. this requirement has the potential to distort rt distributions, especially their slow tails, as the log-log plot bends downward when rt is close to 4 seconds. (in the future, the requirement of a maximum time limit should be dropped or, at least, the limiting time cutoff must increase to reflect the natural rt distribution.) in contrast, the maximum rt for ldt is approximately 10 seconds and, as seen in fig. [rt:fig:loglog_ldt] and fig. [rt:fig:giga_ldt], the log-log plots are closer to straight lines and the giga fit is good. (figure captions: giga fits of elp1 and elp2, p-values both 0; giga fits of one-, two-, and four-word ldt, p-values 0.97, 0.82, and 0.87 respectively; giga fits of the he conditions, with tail exponents 2.5, 3.9, 5.0, and 8.6 respectively, p-values all 0.) in figs. [rt:fig:giga_elp], [rt:fig:giga_ldt], and [rt:fig:giga_hick], we show the giga distribution (appendix [giga_scale]) fitting of rt. in the figures, the distance from the origin to the blue dot is the rightward shift of the giga distribution. the rts to the left of the red lines are cut in the fitting of the giga distribution. the giga parameters, together with the cut and shift parameters, are all found by minimizing the chi-squared test statistic as follows. we choose the cut and shift parameters, find the remaining parameters through maximum likelihood estimation, and compute the chi-squared test statistic. we repeat this process for another group of cut and shift parameters.
in the end, we obtain the parameters that minimize the chi-squared test statistic. (figure caption: giga tail exponent versus log-log fitted tail exponent; triangles: elp, squares: ldt, diamonds: he.) visually, the giga fitting is good, yet the p-values are all zero with the exception of ldt. as discussed above, a possible explanation is that the participants are not given enough time to respond, which distorts the rt distributions. also, ref. argues that the chi-squared statistic yields poor results for goodness-of-fit testing; we used the chi-squared statistic because, due to the cut parameter, the total number of rts is not fixed in our parameter fitting. lastly, in fig. [rt:alphagamma_loglog] we show the relationship between the tail exponent parameter and the log-log fitted exponent parameter. with the exception of 4 ldt (which is one of the hardest tasks, see below), the correspondence is quite good. (figure caption: giga tail exponent versus its half-width; triangles: elp, squares: ldt, diamonds: he.) in fig. [rt:half_width_alphagamma], we plot the power law exponent from the best fit giga above as a function of the half-width. with the exception of hick 6, there is a clear tracking between the two (notice that by eye the he pdfs seemingly show a decrease of the modal pdf and an increase of the pdf half-width with the increase of hick's number). we speculate that the half width of the distribution would be a natural measure of a task difficulty. this is easily analyzed in terms of the giga distribution, which we believe is well suited to the description of rt distributions. in appendix [giga_scale], it is explained that due to giga's scaling property, it is sufficient to consider the case $\gamma = 1$, that is iga. furthermore, we can eliminate one more parameter by setting the mean to unity. in some cognitive tasks, the mean may not be a good indicator of difficulty since an easy cognitive task may require a more idiosyncratic response and vice versa. for such an iga, a single parameter $\alpha$ then defines both scale and shape, that is, the half width directly relates to the exponent of the power law tail. as seen in fig. [iga_halfwidth], it has a maximum as a function of this parameter, which also marks a crossover between iga limiting behaviors. this opens up an interesting possibility that, depending on the magnitude of $\alpha$, an increase in the task difficulty may either increase or decrease the magnitude of the power law exponent. this subject, including sufficient data to analyze the aforementioned scaling property, requires further investigation. holden's work was supported by the national science foundation award bcs-0642718. * we repeat a number of appendices verbatim, given that the expected audiences for these two papers are vastly different. * we begin with the $\gamma = 1$ limit of giga, namely the iga distribution pdf $f(x) = \frac{1}{\beta\,\Gamma(\alpha)}\,e^{-\beta/x}\left(\frac{\beta}{x}\right)^{1+\alpha}$. setting the mean to unity gives $\beta = \alpha - 1$, and the scaled distribution is $f(x) = \frac{1}{(\alpha-1)\,\Gamma(\alpha)}\,e^{-(\alpha-1)/x}\left(\frac{\alpha-1}{x}\right)^{1+\alpha}$. the mode of the above distribution is $(\alpha-1)/(\alpha+1)$. the modal pdf has a minimum as shown in fig. [st:fig:iga_pdf_mode]. the change in pdf behavior on transition through this value is clearly observed in fig. [st:fig:iga_pdf_list]. also plotted in fig. [st:fig:iga_pdf_mode] is the half-width of the distribution. clearly, it highly correlates with the pdf maximum above.
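the mode, modal pdf and half-width of the scaled iga are straightforward to tabulate numerically; a sketch, assuming the unit-mean iga density written above (brentq root-finding locates the two half-maximum points):

import numpy as np
from scipy.special import gammaln
from scipy.optimize import brentq

def iga_scaled_pdf(x, alpha):
    """unit-mean inverse gamma density, beta = alpha - 1 (requires alpha > 1)."""
    b = alpha - 1.0
    return np.exp(-b / x - gammaln(alpha)) * (b / x) ** (1 + alpha) / b

def half_width(alpha):
    """distance between the two points where the pdf falls to half its
    modal value."""
    mode = (alpha - 1.0) / (alpha + 1.0)
    half = iga_scaled_pdf(mode, alpha) / 2.0
    f = lambda x: iga_scaled_pdf(x, alpha) - half
    return brentq(f, mode, 1e6) - brentq(f, 1e-9, mode)

for alpha in (1.5, 2.0, 3.0, 5.0, 9.0):
    mode = (alpha - 1) / (alpha + 1)
    print(alpha, mode, iga_scaled_pdf(mode, alpha), half_width(alpha))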
,title="fig:",scaledwidth=23.0% ] , title="fig:",scaledwidth=23.0% ] [ iga_halfwidth ] , and , corresponding to red , magenta , orange , green , cyan , blue , and purple lines.,scaledwidth=34.5% ] both minimum and maximum above clearly separate the regime of small : , where the approximate form of the scaled pdf is }{x^2}\ ] ] whose mode is and the magnitude of the maximum is /({\alpha}-1 ) \approx 4/({\alpha}-1 ) $ ] , from the regime of large , , where we now turn to giga distribution and the effect of parameter . in fig .[ st : fig : giga_contour ] we give the contour plots of modal pdf and total half - widths in the plane , where and is the exponent of the power law tail .we observe an interesting _ scaling property _ of giga : for , the dependence of the pdf on is very weak , as demonstrated in fig .[ st : fig : giga_pdf_list_overlay ] , where it is plotted for integer from 2 to 7 .an alternative way to illustrate this is to plot pdf for a fixed and variable , as shown in fig .[ st : fig : giga_pdf_list ] .following the thick line we notice that , for , mode and half - width change very little with .the key implication of the scaling property is that iga contains all essential features pertinent to giga . .thin lines : contours of modal pdf .thick line : .bottom : contours of total half - widths of giga distributions with mean .thick line : ., title="fig:",scaledwidth=34.5% ] .thin lines : contours of modal pdf .thick line : .bottom : contours of total half - widths of giga distributions with mean .thick line : ., title="fig:",scaledwidth=34.5% ] . in the plots , .six lines correspond to ,scaledwidth=34.5% ] . in each subplot with constant , from left to right , , and , corresponding to red , magenta , orange , green , cyan , blue , and purple lines . , title="fig:",scaledwidth=23.0% ] . in each subplot with constant , from leftto right , , and , corresponding to red , magenta , orange , green , cyan , blue , and purple lines ., title="fig:",scaledwidth=23.0% ] + . in each subplot with constant , from left to right , , and , corresponding to red , magenta , orange , green , cyan , blue , and purple lines . ,title="fig:",scaledwidth=23.0% ] . in each subplot with constant , from left to right , , and , corresponding to red , magenta , orange , green , cyan , blue , and purple lines ., title="fig:",scaledwidth=23.0% ] + . in each subplot with constant , from left to right , , and , corresponding to red , magenta , orange , green , cyan , blue , and purple lines . ,title="fig:",scaledwidth=23.0% ] . in each subplot with constant , from left to right , , and , corresponding to red , magenta , orange , green , cyan , blue , and purple lines . , title="fig:",scaledwidth=23.0% ]this appendix is a self - contained re - derivation of a ln limit of giga . the three - parameter giga distribution is given by for and 0 otherwise .we require that .iga is the the case of giga : note that giga and iga have power - law tails and respectively for .we proceed to rewrite giga in the following form : .\end{split}\ ] ] a re - parameterization with and , allows to express the old parameters in terms of the new : leading , in turn , to and where we have used the taylor expansion of the term in eq .( [ st : eq : exp_expansion ] ) , which depends on we can also prove that = { \frac}{1}{\sqrt{2\pi}{\sigma } } , \ ] ] based on the stirling s approximation when we let . 
upon substitution of eqs. ([st:eq:gigaln1]) and ([st:eq:gigaln2]) into eq. ([st:eq:pdf_giga]), we obtain the ln distribution $f(x) = \frac{1}{x\,\sigma\sqrt{2\pi}}\,\exp\left[-\frac{(\ln x - \mu)^2}{2\sigma^2}\right]$. in conclusion, giga has the limit of ln when $\alpha$ tends to infinity, with the two remaining parameters scaled with $\alpha$ so that $\mu$ and $\sigma$ stay fixed. giga (iga) is also transparently related to the gga (ga) distribution: if $x$ is giga (iga) distributed, then $1/x$ is gga (ga) distributed. note, finally, that lawless derived the ln limit of gga in a manner similar to ours, which solidifies the concept of the "family" that unites these distributions. many natural and social phenomena fall into a stochastic "birth-death" model, described by an equation of the form $\dot{x} = a\,x^{1-\gamma} - b\,x + x\,\xi(t)$, where $\xi(t)$ is a stochastic perturbation and $x$ can alternatively stand for such additive quantities as wealth, mass of a species, volatility variance, etc., and cognitive response times here. the second term in the rhs describes an exponentially fast decay, such as the loss of wealth and mass due to the use of one's own resources, or the reduction of volatility in the absence of competing inputs and of response times due to learning. the first rhs term may alternatively describe metabolic consumption, acquisition of wealth in economic exchange, a plethora of market signals, and the variability of cognitive inputs. the third, stochastic term is the one that changes the otherwise deterministic approach, characterized by the saturation to a final value of the quantity, into a probabilistic distribution of the values - as it were, giga in the steady-state limit. the exponent of a power law tail can be easily calculated once we notice that if $p(x) \propto x^{-s}$ with $s > 1$, then $\ln p(x) \approx -s \ln x + \mathrm{const}$ for large $x$, so the tail of the log-log plot of the pdf approaches a straight line of slope $-s$. in figs. [st:fig:ln_simulation_loglogplot] and [st:fig:iga_simulation_loglogplot], we show the log-log plot of the tail of the ln and iga distributions respectively. clearly, a straight line fit is considerably better for the latter, even though the fitted slope does not coincide with the theoretical value. towards this end, in fig. [st:fig:giga_ga_half_simulation_loglogplot], we show log-log plots of the tail of giga distributions for two values of $\gamma$. the empirical trend emerging from the iga and giga plots is that the straight line fits of the log-log plots become progressively better as $\gamma$ gets larger. (figure captions: top: pdfs of ln and iga distributions with mean 1 for three parameter values, shown as red, green, and blue curves; bottom: log-log plots of simulated data sampled from these distributions, with dashed lines fitted to $\ln(\mathrm{pdf})$ vs. $\ln x$ in a range of cdf from 0.9 to 0.99.)
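two quick numeric checks of the statements above, under the giga parameterization assumed in this appendix (a sketch rather than the paper's exact computation): the first compares giga against its ln limit by mapping $\ln x$ through the underlying gamma variable; the second fits a log-log tail slope over the cdf range 0.9 to 0.99 to simulated iga data, illustrating how the fitted slope undershoots the asymptotic value:

import numpy as np
from scipy.special import gammaln, digamma, polygamma

def giga_logpdf(x, alpha, beta, gam):
    # assumed giga density in log form, to stay finite at large alpha
    return (np.log(gam / beta) - gammaln(alpha)
            + (1 + gam * alpha) * np.log(beta / x) - (beta / x) ** gam)

def ln_logpdf(x, mu, sigma):
    return (-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)
            - np.log(x * sigma * np.sqrt(2 * np.pi)))

# check 1: if u = (beta/x)**gam is gamma(alpha)-distributed, then
# ln x = ln beta - ln(u)/gam, so ln x tends to a normal law with the mean
# and standard deviation below; the discrepancy shrinks as alpha grows.
for alpha in (10.0, 100.0, 1000.0):
    beta, gam = 1.0, 2.0
    mu = np.log(beta) - digamma(alpha) / gam
    sigma = np.sqrt(polygamma(1, alpha)) / gam
    x = np.exp(mu + sigma * np.linspace(-3.0, 3.0, 13))
    print(alpha, np.max(np.abs(giga_logpdf(x, alpha, beta, gam)
                               - ln_logpdf(x, mu, sigma))))

# check 2: log-log tail slope fitted over the cdf range 0.9 to 0.99 for
# simulated iga samples; it undershoots the asymptotic value -(1 + alpha)
# because of the local-slope correction discussed below.
rng = np.random.default_rng(1)
alpha = 3.0
samples = 1.0 / rng.gamma(alpha, 1.0, size=200_000)   # iga with beta = 1
x_lo, x_hi = np.quantile(samples, [0.90, 0.99])
edges = np.logspace(np.log10(x_lo), np.log10(x_hi), 31)
hist, _ = np.histogram(samples, bins=edges, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])              # geometric bin centers
keep = hist > 0
slope, _ = np.polyfit(np.log(centers[keep]), np.log(hist[keep]), 1)
print(slope, -(1 + alpha))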
, title="fig:",scaledwidth=34.5% ] with mean 1 .the left red , middle green , and right blue curves correspond to , and respectively .bottom : log - log plots of simulated data sampled from the iga distributions .below of the y - axis , the left blue , middle green , and right red curves correspond to , and respectively .the dashed lines with slopes , and respectively are fitting of vs. in a range of cdf from 0.9 to 0.99 . , title="fig:",scaledwidth=34.5% ] with mean 1 .the left red , middle green , and right blue curves correspond to , and respectively .bottom : log - log plots of simulated data sampled from the iga distributions .below of the y - axis , the left blue , middle green , and right red curves correspond to , and respectively .the dashed lines with slopes , and respectively are fitting of vs. in a range of cdf from 0.9 to 0.99 . , title="fig:",scaledwidth=34.5% ] and ( bottom ) with mean 1 .below of the y - axis , the left blue , middle green , and right red curves correspond to , and respectively .the dashed lines with slopes , and respectively ( top ) and , and ( bottom ) are fitting of vs. in a range of cdf from 0.9 to 0.99 . , title="fig:",scaledwidth=34.5% ] and ( bottom ) with mean 1 .below of the y - axis , the left blue , middle green , and right red curves correspond to , and respectively .the dashed lines with slopes , and respectively ( top ) and , and ( bottom ) are fitting of vs. in a range of cdf from 0.9 to 0.99 . , title="fig:",scaledwidth=34.5% ] to understand this -dependence the difference between the theoretical and fitted slope , we consider the local slope of the log - log plot , for giga ( and iga , ) , the local slope is given by with the regularized gamma function , where is the incomplete gamma function .the local slopes are shown , as function of in figs .[ rt : fig : local_slope_iga ] and [ rt : fig : local_slope_giga ] respectively .it is clear that the local slope can differ substantially from its limiting ( saturation ) value .as becomes larger , the local slope tends closer to its limiting value . with mean 1 ( ) .the left column is the log - log plot and the right one is the local slope of the log - log plot from eq .( [ rt : eq : local_slope_giga ] ) . is 2 , 3 , 5 , and 7 for the first , second , third , and fourth rows respectively .the red lines are : the limit of the local slope when .,scaledwidth=45.0% ] with mean 1 ( ) .the left column is the log - log plot and the right one is the local slope of the log - log plot from eq .( [ rt : eq : local_slope_giga ] ) . is , , , and for the first , second , third , and fourth rows respectively .the red lines are : the limit of the local slope when .,scaledwidth=45.0% ] for the ln distribution , the local slope is given by which slowly decreases with . butas is clear from ( [ rt : eq : local_scope_ln ] ) and fig .[ rt : fig : local_slope_ln ] , the local slope does not saturate when . .the left column is the log - log plot and the right one is the local slope of the log - log plot in eq .( [ rt : eq : local_scope_ln ] ) . is 0.2 , 0.5 , 1 , and 2 for the first , second , third , and fourth row respectively .the jagged part of the top right plot is due to computational precision.,scaledwidth=45.0% ]
|
we demonstrate that distributions of human response times have power - law tails and , among closed - form distributions , are best fit by the generalized inverse gamma distribution . we speculate that the task difficulty tracks the half - width of the distribution and show that it is related to the exponent of the power - law tail .
|
with the rapid growth in the availability and size of digital health data and wearable sensors, along with the rise of newer machine learning methods, health care analytics has become a hot area of research today. the main bottlenecks for solving a healthcare data analytics problem are: a) the effort required to build good models in terms of time, money and expertise, and b) interpreting model features so that a healthcare expert can do a causality analysis and take preventive measures or derive meaningful insights backed by domain knowledge. a typical analytics solution requires a) pre-processing, b) feature extraction, c) feature selection, and d) modeling such as classification or regression. among these steps, feature extraction and feature selection together form feature engineering (fe) and are the most time consuming and human expertise demanding of them all. feature engineering can be broadly carried out in four ways: (a) manually selecting features guided by domain knowledge, (b) recommending features by automated analysis - the proposed method, (c) feature transforms like principal component analysis (pca), and (d) representation learning using deep architectures such as the deep multi-layered perceptron (mlp) and convolutional neural network (cnn). through experiments on 3 different types of healthcare datasets, including a recent challenge dataset, and a comparison of the approaches, the utility of our proposed method (b) has been shown. interpretation of features is not supported by deep learning and feature transform methods. but manual feature engineering and our proposed method yield interpretable features, which is very helpful in prognostic domains like healthcare. while in deep architectures the different activation functions can be hierarchically stacked to form new structures, in our approach this does not hold true. for example, wavelet transforms applied on fourier transforms do not make sense. hence, here the emphasis is on creating a wide architecture with meaningful hierarchies, so that the lowest layer contains basic feature extraction techniques, and as we move up we keep adding more meaningful layers on top of what was extracted. this helps in deriving a physical interpretation of features (from bottom to top). the dataset is partitioned into p folds of training, evaluation and testing sets (the range of p is 5 to 10). the performance is reported on the hidden testing set. the proposed method consists of 3 steps: _1. feature listing_: we have organized commonly reported features (in the literature of sensor data analytics) in a hierarchical manner as shown in figure 1. the basic features (level 0) can be mainly categorized as: (i) time domain features (td), (ii) fourier transformation based features (fd) like the short-time fourier transform (stft), and (iii) discrete wavelet transformation based features (dwt). one major challenge of using dwt features is the selection of a suitable mother wavelet, as more than 100 different types of mother wavelets have been reported in different papers. the automated mother wavelet selection is done by measuring the energy to entropy ratio [1], as sketched below.
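a sketch of the automated mother wavelet selection, following the energy-to-entropy criterion of [1] (python with pywt; the candidate shortlist and decomposition level are illustrative, not the system's actual configuration):

import numpy as np
import pywt

def best_mother_wavelet(signal, candidates=("db4", "sym5", "coif3", "haar")):
    """pick the mother wavelet maximizing the energy-to-(shannon-)entropy
    ratio of the detail coefficients, a common automation heuristic."""
    best, best_ratio = None, -np.inf
    for name in candidates:
        coeffs = np.concatenate(pywt.wavedec(signal, name, level=4)[1:])
        energy = np.sum(coeffs ** 2)
        p = coeffs ** 2 / energy
        entropy = -np.sum(p * np.log(np.clip(p, 1e-12, None)))
        ratio = energy / entropy
        if ratio > best_ratio:
            best, best_ratio = name, ratio
    return best

sig = np.random.default_rng(0).normal(size=1024)   # stand-in sensor signal
print(best_mother_wavelet(sig))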
in level 1, spectral, statistical and peak-trough features are extracted. level 2 includes different ratios and derivatives of the level 1 features. the system has the capability of easily plugging in new feature extraction algorithms, which will lead to a collaborative ecosystem. hence, it is possible to get a huge number (say, $n$) of features (including the transform domain coefficients) from the sensor signals. this results in $2^n$ possible combinations of features, whose exploration is practically infeasible, thereby demanding the usage of feature selection. _2. feature selection_: in our method, we followed an iterative feature selection where $k$ features are selected ($k \ll n$) at each iteration and the system performance (e.g. classification accuracy) is checked for this feature set. if the selected feature set results in the _expected_ performance, we return the feature set as the recommended one. otherwise, another $k$ features are chosen in the next iteration and the same steps are repeated. for checking the classification accuracy, we choose svm (support vector machine) based classification with different kernels. svm was selected as a classifier as it generalizes well and converges fast. several values of $k$ are tried to choose an optimal value. for a given value of $k$, features are selected using two techniques, namely mrmr [2] and mrms [3], described below (a greedy sketch of the mrmr step is given after this section): _minimum redundancy and maximum relevance_ (mrmr): in order to select effective features, mrmr optimizes an objective function, either mutual information difference (mid) or mutual information quotient (miq), by minimizing the redundancy and maximizing the relevance of the features. mid (additive) and miq (multiplicative) are defined as $\mathrm{mid} = \max(v - w)$ and $\mathrm{miq} = \max(v / w)$, where $v$ maximizes relevance, computed via the f-statistic between a feature and the class label, and $w$ minimizes redundancy, computed via the correlation between a pair of features. _maximal relevance maximum significance_ (mrms): this technique uses fuzzy-rough set selection criteria to select relevant and non-redundant (significant) features. the objective function is a weighted combination $j = \omega\,r + (1-\omega)\,s$, where $r$ computes the relevance of a recommended feature with respect to a class label, $s$ computes the significance of a pair of recommended features via their correlation, and $\omega$ is the weight parameter. let $r_1$ and $r_2$ be the sets of features recommended by mrmr and mrms, respectively. then the recommended set of features $r$ is obtained by combining $r_1$ and $r_2$. note that mrmr and mrms cover different aspects of feature selection. for instance, mrmr is classifier independent, whereas mrms is effective in reducing real valued noisy features, which are likely to occur in large feature sets. _3. feature recommendation_: the system finds 2 feature sets for a particular performance metric (such as accuracy, sensitivity, specificity, precision, f-score): a) fe1 - the set that produces the highest metric in any fold of cross-validation, and b) fe2 - the set that is most consistent and performs well across all folds. the above step of feature selection is done hierarchically - if layer 0 does not produce the expected results, set by a pre-set threshold or the maximum possible value of a selected metric, then layer 1 is invoked. similarly, if layer 1 does not yield the expected results, layer 2 is invoked (see the sketch below).
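a sketch of the selection step and the recommendation loop (python with scikit-learn; mutual-information estimates stand in for the exact statistics of [2], and every name and grid below is illustrative rather than the system's actual interface):

import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_mid(X, y, k):
    """greedy mrmr with the additive (mid) criterion: at each step add the
    feature maximizing relevance to the class minus mean redundancy with
    the features already selected (use the ratio instead for miq)."""
    n = X.shape[1]
    relevance = mutual_info_classif(X, y)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(n):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info_regression(X[:, [j]], X[:, s])[0]
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

def recommend_features(extract_level, select_k, evaluate, threshold,
                       max_level=2, k_grid=(5, 10, 20)):
    """hierarchical recommendation loop: try feature levels 0..max_level in
    order and, within a level, increasing k; stop as soon as the
    cross-validated metric reaches the preset threshold.
    extract_level(l) -> feature matrix for level l; select_k(F, k) ->
    selected columns (e.g. a combination of mrmr and mrms picks);
    evaluate(F, cols) -> cross-validated svm accuracy."""
    for level in range(max_level + 1):
        F = extract_level(level)
        for k in k_grid:
            cols = select_k(F, k)
            score = evaluate(F, cols)
            if score >= threshold:
                return level, cols, score
    return None   # no configuration reached the expected performance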
this follows the principle that if simple features can do the task, there is no need for complex features. 'c' is a regularizer for 'k' and is dependent on the hardware capabilities of the system. the intuition is that on a high-end machine (having a higher valued 'c'), more feature combinations can be tried in acceptable time. using the recommended feature sets, any classifier like svm or random forest can be trained to see the results obtained. also, by looking up the recommended features in the feature listing database, an interpretation of the features can easily be obtained by a domain expert. experiments were carried out on 3 datasets: d1, d2, d3, in order to provide a comparison among the feature engineering approaches (proposed method, manual, dimension reduction and deep learning). _d1:_ the physionet 2016 challenge dataset [4] consists of 3153 heart sounds, including 2488 normal and 665 abnormal recordings. the ground truth label (normal or abnormal heart sound) of each record is manually annotated by expert doctors. the raw pcg (phonocardiogram) is further down-sampled to 1 khz from 2 khz, in order to segregate the four cardiac states (s1, systole, s2 and diastole) using the logistic regression based hsmm approach [5]. the winner [6] of the challenge used 124 features and used deep learning for classification. the challenge used its own modified metric for ranking participants; however, for consistency of results across datasets, we have used the accuracy score as the performance metric. we participated in the challenge using manual features and got only a 1% increase in performance compared to the proposed automated method. _d2:_ the second dataset is derived from the mimic-ii patients dataset [7]. a subset of the dataset containing ppg (photoplethysmogram) data was created after noise cleaning, and the ground truth blood pressure (bp) was obtained from the simultaneously recorded arterial bp waveform, resulting in an equally balanced 36 high (> 140 mmhg reading) and 36 low bp patient waveform data instances. _d3:_ the third dataset (used to classify the emotion into happy and sad) records the fingertip pulse oximeter ppg data of 33 healthy subjects (female: 13 and male: 20) with average age 27 years. we used standard video stimuli as ground-truth, and time synchronization errors were minimized. table 1 lists the obtained result for each dataset along with the corresponding configuration and effort for each of the feature engineering approaches. experiments have been carried out using a theano based multi-layer perceptron with _dropout_ and a varying number of layers, to see if features can be automatically learned on the datasets under experimentation. different numbers of epochs (5 to 15) have been tried to see how the learning rate affects performance. different activation functions like the rectified linear unit (relu), tanh, softmax, sigmoid, etc.
have been tried at different layers to arrive at an ideal architecture for the classification tasks of the given problems . table 1 shows that the mlp based techniques fail when compared to the state of the art and the proposed method . the problem with mlp and newer deep learning techniques like cnn is that they need a lot of data to train and there is no way to interpret the features . principal component analysis ( pca ) is a statistical procedure that uses an orthogonal transformation to derive principal components representative of the features under consideration . experiments have been carried out with the aforementioned datasets , and a gaussian kernel is used for the svm based classification . the different dimension reduction techniques used are singular value decomposition ( svd ) , eigenvalue decomposition ( eig ) and alternating least squares ( als ) . a varying number of principal components ( like 5 , 10 , 15 ) is also tried . table 1 shows that the pca based methods are outperformed by our proposed method . another drawback of pca and similar feature reduction techniques is that the derived features are not interpretable . it is seen that for the 2nd and 3rd datasets the proposed approach outperforms the state of the art ( soa ) methods , and for the 1st dataset 94.38% of the accuracy level of the winner was reached by this method . in terms of the effort taken to build the solution , the proposed method clearly beats the others . ( table 1 : comparison of the results , configuration and effort for the different feature engineering approaches . ) interpretable feature engineering has been found to be the most demanding task among all the subtasks of health data analytics . hence , a system was built to automate this part of the process . the system has been tested on three healthcare datasets and was found to give good results when compared to the state of the art . apart from manual feature engineering , a comparison has been made with mlp and pca , which are feature engineering approaches from different directions . interpretation of features is one of the strong points of the proposed method . another strong point of the proposed method is the huge reduction in the effort needed to develop a typical analytics solution . integration of knowledge bases for ease of interpreting features , as well as automated causality analysis , is also planned . the work will be extended to other domains such as machine prognostics . [ 1 ] ngui , w. k. et al . ( 2013 ) wavelet analysis : mother wavelet selection methods , _ applied mechanics and materials _ , vol . , 953 - 958 . [ 2 ] peng , h. et al . ( 2005 ) feature selection based on mutual information criteria of max - dependency , max - relevance , and min - redundancy . _ ieee transactions on pattern analysis and machine intelligence _ , 27.8 : 1226 - 1238 . [ 3 ] maji , p. et al . ( 2012 ) fuzzy - rough mrms method for relevant and significant attribute selection , _ advances on computational intelligence : 14th international conference on information processing and management of uncertainty in knowledge - based systems _ .
|
in this paper , a wide learning architecture is proposed that attempts to automate the feature engineering portion of the machine learning ( ml ) pipeline . feature engineering is widely considered the most time - consuming and expert - knowledge - demanding portion of any ml task . the proposed feature recommendation approach is tested on 3 healthcare datasets : a ) the physionet challenge 2016 dataset of phonocardiogram ( pcg ) signals , b ) the mimic ii blood pressure classification dataset of photoplethysmogram ( ppg ) signals and c ) an emotion classification dataset of ppg signals . while the proposed method beats the state of the art techniques for the 2nd and 3rd datasets , it reaches 94.38% of the accuracy level of the winner of the physionet challenge 2016 . in all cases , the effort needed to reach a satisfactory performance was drastically lower ( a few days ) than for manual feature engineering .
|
in this paper we consider finite element approximations of the following linear elliptic pde in non - divergence form : [ problem ] here , is an open bounded domain with boundary , is given , and ^{n\times n} and . in addition , the existence of a strong solution to in two dimensions and on convex domains is proved in . due to the non - divergence structure , designing convergent numerical methods , in particular galerkin - type methods , for problem has proven to be difficult . very few such results are known in the literature . nevertheless , even though problem does not naturally fit within the standard galerkin framework , several finite element methods have recently been proposed . in , the authors considered mixed finite element methods using lagrange finite element spaces for problem . an analogous discontinuous galerkin ( dg ) method was proposed in . the convergence analysis of these methods for non - smooth remains open . a least - squares - type discontinuous galerkin method for problem with coefficients satisfying the cordes condition was proposed and analyzed in . here , the authors established optimal order estimates in with respect to a -type norm . the primary goal of this paper is to develop a structurally simple and computationally easy finite element method for problem . our method is a primal method using lagrange finite element spaces . the method is well defined for all polynomial degrees greater than one and can be easily implemented in current finite element software . moreover , our finite element method resembles interior penalty discontinuous galerkin ( dg ) methods in its formulation and its bilinear form , which contains an interior penalty term penalizing the jumps of the fluxes across the element edges / faces . hence , it is a dg finite element method . in addition , we prove that the proposed method is stable and converges with optimal order in a discrete -type norm on quasi - uniform meshes provided that the polynomial degree of the finite element space is greater than or equal to two . ( figure [ prooffig ] : outline of the convergence proof : i. global stability estimate for pdes with constant coefficients ; ii . local stability estimate for pdes with constant coefficients ; iii . local stability estimate for pdes in non - divergence form ; iv . global gårding - type inequality for pdes in non - divergence form ; v. global stability estimate for pdes in non - divergence form . ) while the formulation and implementation of the finite element method are relatively simple , the convergence analysis is quite involved and requires several nonstandard arguments and techniques . the overall strategy in the convergence analysis is to mimic , at the discrete level , the stability analysis of strong solutions of pdes in non - divergence form ( see section 9.5 of ) .
namely , we exploit the fact that , locally , the finite element discretization is a perturbation of a discrete elliptic operator in divergence form with constant coefficients ; see lemma [ operatordiffform ] . the first step of the stability argument is to establish a discrete calderón - zygmund - type estimate for the lagrange finite element discretization of the elliptic operator in with constant coefficients , which is equivalent to a global inf - sup condition for the discrete operator . the second step is to prove a local version of the global estimate and inf - sup condition . with these results in hand , a local stability estimate for the proposed dg discretization of can be easily obtained . we then glue these local stability estimates together to obtain a global gårding - type inequality . finally , to circumvent the lack of a ( discrete ) maximum principle , which is often used in the pde analysis , we use a nonstandard duality argument to obtain a global inf - sup condition for the proposed dg discretization of problem . see figure [ prooffig ] for an outline of the convergence proof . since the method is linear and consistent , the stability estimate naturally leads to the well - posedness of the method and the energy norm error estimate . the organization of the paper is as follows . in section [ sec-2 ] the notation is set and some preliminary results are given ; discrete stability properties , including a discrete calderón - zygmund - type estimate , of finite element discretizations of pdes with constant coefficients are established . in section [ sec-3 ] , we present the motivation for and formulation of our discontinuous finite element method for problem . mimicking the pde analysis from at the discrete level , we prove a discrete stability estimate for the discretization operator . in addition , we derive an optimal order error estimate in a discrete -norm . finally , in section [ sec-4 ] , we give several numerical experiments which test the performance of the proposed dg finite element method and validate the convergence theory . let be a bounded open domain . we shall use to denote a generic subdomain of , and denotes its boundary . denotes the standard sobolev spaces for and , and denotes the subspace of consisting of functions whose traces vanish up to order on . denotes the standard inner product on , and . to avoid the proliferation of constants , we shall use the notation to represent the relation for some constant independent of the mesh size . let be a quasi - uniform , simplicial , and conforming triangulation of the domain . denote by the set of interior edges in , by the set of boundary edges in , and by the set of all edges in . we define the jump and average of a vector function on an interior edge as follows : [[w]]|_e = w^+ · n_+ |_e + w^- · n_- |_e , {{w}}|_e = (1/2) ( w^+ · n_+ |_e - w^- · n_- |_e ) , where and is the outward unit normal of . for a normed linear space , we denote by its dual and by the pairing between and . the lagrange finite element space with respect to the triangulation is given by , where denotes the set of polynomials of total degree not exceeding on . we also define the piecewise sobolev space with respect to the mesh by . for a given subdomain , we also define and as the subspaces that vanish outside of : . we note that is non - trivial for .
associated with , we define a semi - norm on : for ,

‖v‖_{w^{2,p}_h(d)} := ( ‖d²_h v‖^p_{l^p(d)} + Σ_{e∈ℰ_h^i} h_e^{1-p} ‖[[∇v]]‖^p_{l^p(e ∩ d̄)} )^{1/p} .

here , denotes the piecewise hessian matrix of , i.e. , for all . let be the projection defined by ; it is well known that satisfies for any . for any domain and any , we also introduce the following mesh - dependent semi - norm : . by , it is easy to see that is a norm on ; moreover , by , . in this subsection we cite or prove some basic properties of the broken sobolev functions in , and in particular of piecewise polynomial functions . these results , which are of independent interest in themselves , will be used repeatedly in the later sections . we begin by citing a familiar trace inequality , followed by proving an inverse inequality . [ tracelemma ] for any , there holds for any ; therefore , by scaling , there holds . [ inverselem ] for any , and , there holds , where . by , , and inverse estimates , we have

‖v_h‖_{w^{2,p}_h(d)} ≲ ‖d²_h v_h‖_{l^p(d)} + ( Σ_{e∈ℰ_h^i} h_e^{1-p} ‖[[∇v_h]]‖^p_{l^p(e ∩ d̄)} )^{1/p} ≲ ‖d²_h v_h‖_{l^p(d)} + Σ_{t ⊂ d_h} ( h_t^{1-p} ( h_t^{p-1} ‖d² v_h‖^p_{l^p(t)} + h_t^{-1} ‖∇v_h‖^p_{l^p(t)} ) )^{1/p} ≲ h^{-1} ‖v_h‖_{w^{1,p}(d_h)} .

the next lemma states a very simple fact about the discrete norm on . [ discretenormestimates ] for any , there holds . next , we state some super - approximation results for the nodal interpolant with respect to the discrete semi - norm . the derivation of the following results is standard , but for completeness we give the proof in appendix [ appendixa ] . [ superlem ] denote by the nodal interpolant onto . let with for . then for each with , there holds ; moreover , if , there holds . here , satisfy the conditions in lemma [ inverselem ] . to conclude this subsection , we state and prove a discrete sobolev interpolation estimate . [ discreteinterp ] there holds for all . writing and integrating by parts , we find

≲ Σ_{t∈𝒯_h} ∫_t |∇w|^{p-2} |d²_h w| |w| dx + Σ_{e∈ℰ_h^i} ∫_e [[ |∇w|^{p-2} ∇w ]] w ds .

to bound the first term we apply hölder's inequality , obtaining . likewise , by lemma [ tracelemma ] we have

Σ_{e∈ℰ_h^i} ∫_e [[ |∇w|^{p-2} ∇w ]] w ds ≤ Σ_{e∈ℰ_h^i} ( h_e^{1/p} ‖[[∇w]]‖_{l^p(e)} )^{p-2} ( h_e^{(1-p)/p} ‖[[∇w]]‖_{l^p(e)} ) ( h_e^{1/p} ‖w‖_{l^p(e)} ) ≲ ‖∇w‖^{p-2}_{l^p(Ω)} ‖w‖_{l^p(Ω)} ( Σ_{e∈ℰ_h^i} h_e^{1-p} ‖[[∇w]]‖^p_{l^p(e)} )^{1/p} ≲ ‖∇w‖^{p-2}_{l^p(Ω)} ‖w‖_{l^p(Ω)} ‖w‖_{w^{2,p}_h(Ω)} .

combining these estimates we obtain the desired result . the proof is complete .
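as a quick numerical illustration of this mesh - dependent quantity , the following sketch ( ours , with illustrative names ) evaluates the one - dimensional analogue of the semi - norm for a continuous piecewise - linear function ; in this case the elementwise second derivative vanishes , so only the h_e^{1-p}-weighted gradient jumps at the interior nodes contribute :

    import numpy as np

    def w2ph_seminorm_1d(u, x, p=2):
        # 1d analogue of the mesh-dependent semi-norm for a continuous
        # piecewise-linear function with nodal values u on the grid x
        slopes = np.diff(u) / np.diff(x)               # elementwise gradient
        jumps = np.abs(np.diff(slopes))                # |[[u']]| at interior nodes
        h = 0.5 * (np.diff(x)[:-1] + np.diff(x)[1:])   # local mesh size at each node
        return (np.sum(h ** (1 - p) * jumps ** p)) ** (1.0 / p)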
in this subsection , we consider the special case of in which the coefficient matrix is a constant matrix . we introduce the finite element approximation ( or projection ) of on and extend to the broken sobolev space . we then establish some stability results for the operator . these stability results will play an important role in the convergence analysis of the proposed dg finite element method in section [ sec-3 ] . let be a positive definite matrix and set . the operator induces the following bilinear form : , and the lax - milgram theorem ( cf . ) implies that exists and is bounded . moreover , if , the calderón - zygmund theory ( cf . chapter 9 of ) implies that exists and there holds ; equivalently , . the bilinear form naturally leads to a finite element approximation ( or projection ) of on ; that is , we define the operator by . when is the identity matrix , is exactly the discrete laplacian , that is , . by finite element theory , we know that is one - to - one and onto , and therefore exists . recall the following dg integration by parts formula :

= Σ_e ( ∫_e [[τ]] {{v}} ds + ∫_e {{τ}} · [[v]] ds ) + Σ_{e∈ℰ_h^b} ∫_e ( τ · n_e ) v ds ,

which holds for any piecewise scalar - valued function and vector - valued function . here , is defined piecewise , i.e. , for all . for any , using with , we obtain

= - Σ_{e∈ℰ_h^i} ∫_e [[ ]] v_h ds .

we note that the above new form of is not well defined on ; however , it is well defined on with . hence , we can easily extend the domain of the operator to the broken sobolev space . precisely , ( abusing the notation ) we define to be the operator induced by the bilinear form on , namely . a key ingredient in the convergence analysis of our finite element methods for pdes in non - divergence form is to establish global and local discrete calderón - zygmund - type estimates for , similar to . these results are presented in the following two lemmas . [ stabilitylem ] there exists such that for all there holds . first note that is equivalent to . for any fixed , let and ; therefore and , respectively , are the solutions of the following two problems : and . thus , is the elliptic projection of . by we have . using well - known finite element estimates ( theorem 8.5.3 of ) , finite element interpolation theory , and , we obtain that there exists such that for all , . it follows from the triangle inequality , an inverse inequality ( see lemma [ inverselem ] ) , the stability of , and that . thus , which yields , and hence . [ localstabilitylem ] for and , define . let with . then there holds . to ease notation , set and . recalling , we have by lemma [ stabilitylem ] . set , so that . denote by the indicator function of . since on , we have . moreover , we have , and consequently . to make the presentation clear , we state the precise assumptions on the non - divergence form pde problem : let ^{n\times n} ; is positive definite , satisfying ; and problem has a unique strong solution which satisfies the calderón - zygmund estimate . the formulation of our dg finite element method for non - divergence form pdes is relatively simple ; it is inspired by the finite element method for divergence form pdes and relies only on an unorthodox integration by parts .
to motivate its derivation , we first look at how one would construct standard finite element methods for problem when the coefficient matrix belongs to ^{n\times n} . in our setting , this formulation is no longer viable because does not exist as a function . to circumvent this issue , we apply the dg integration by parts formula to the first term on the left - hand side of , with and with in understood piecewise , and we get

∫_Ω ( a : d²_h u_h ) v_h dx - Σ_{e∈ℰ_h^i} ∫_e [[ a ∇ u_h ]] v_h ds = ∫_Ω f v_h dx for all v_h ∈ v_h .

here we have used the fact that [[ ]] = 0 . ( a ) the dg finite element method is well defined for any ^{n\times n} , and no _ a priori _ knowledge of the location of the singularities of is required in the meshing procedure . ( b ) the dg finite element method is a primal method with the single unknown . it can be implemented in current finite element software supporting element boundary integration . ( c ) from its derivation we see that it is equivalent to the standard finite element method provided is smooth . in addition , if is constant then reduces to ; this feature will be crucially used in the convergence analysis later . ( d ) in the one - dimensional and piecewise linear case ( i.e. , and ) , the method on a uniform mesh is equivalent to , where and represents the nodal basis for . as in section [ sec-2.3 ] , using the bilinear form we can define the finite element approximation ( or projection ) of on ; that is , we define by . trivially , can be rewritten as : find such that . similarly to the argument for , we can extend the domain of to the broken sobolev space ; that is , ( abusing the notation ) we define by . the main objective of this subsection is to establish a stability estimate for the operator on the finite element space . from this result , the existence , uniqueness and an error estimate for will follow naturally . the stability proof relies on several technical estimates , which we derive below . essentially , the underlying strategy , known as a perturbation argument in the pde literature , is to treat the operator locally as a perturbation of a stable operator with constant coefficients . the following lemma quantifies this statement . [ operatordiffform ] for any , there exists and such that for any with
for to be determined below , let as in lemma [ operatordiffform ] .let and set .then by lemmas [ localstabilitylem ] and [ operatordiffform ] with and , we have for any for sufficiently small ( depending only on ) , we can kick back the first term on the right - hand side .this completes the proof .[ lcontlemma ] let and be as in lemma [ localnondivlemma ] .for any , there holds set . by the definition of , , and , we have for any \hspace{-0.075cm}\bigr ] } v_h\ , ds\\ & { \lesssim}\|d^2 _ h w\|_{l^p(b_1 ) } \|v_h\|_{l^{p^\prime}(b_1 ) } \\ & \qquad + \bigl(\sum_{e\in { \mathcal{e}_h}^i } h_e^{1-p } \big\|{\bigl[\hspace{-0.075cm}\bigl[{\nabla}w\bigr]\hspace{-0.075cm}\bigr]}\big\|_{l^p(e\cap \bar{b}_1)}^p\bigr)^{\frac{1}{p } } \bigl(\sum_{e\in { \mathcal{e}_h}^i } h_e \|v_h\|_{l^{p^\prime}(e\cap \bar{b}_1)}^{p^\prime}\bigr)^{\frac{1}{p^\prime}}\\ & { \lesssim}\bigl ( \|d^2 _ h w\|_{l^p(b_1 ) } + \big(\sum_{e\in { \mathcal{e}_h}^i } h_e^{1-p } \big\|{\bigl[\hspace{-0.075cm}\bigl[{\nabla}w\bigr]\hspace{-0.075cm}\bigr]}\big\|_{l^p(e)}^p\big)^{\frac{1}{p}}\bigr ) \|v_h\|_{l^{p^\prime}(b_1)}\\ & { \lesssim}\|w\|_{w^{2,p}_h(b_1 ) } \|v_h\|_{l^{p^\prime}(b_1)}.\end{aligned}\ ] ] the desired inequality now follows from the definition of .[ localstabilitylemma ] let be as in lemma [ localnondivlemma ] .then there holds for we divide the proof into two steps ._ step 1 _ : for any , let and be as in lemma [ localnondivlemma ] , let , , and set for .let be a cut - off function satisfying we first note that and for any .therefore , by lemmas [ superlem ] ( with ) and [ localnondivlemma ] , we have applying lemmas [ lcontlemma ] and [ superlem ] , we obtain to derive an upper bound of the last term in , we write for , \hspace{-0.075cm}\bigr ] } v_h\ , ds\\ & = -\int_{b_3 } \bigl ( \eta a : d_h^2 w_h + 2a\nabla\eta\cdot \nabla w_h + w_h a : d_h^2 \eta \bigr ) v_h \ , dx + \sum_{e\in { \mathcal{e}_h}^i } \int_{e\cap \bar{b}_3 } { \bigl[\hspace{-0.075cm}\bigl[a{\nabla}w_h\bigr]\hspace{-0.075cm}\bigr ] } \eta v_h\ , ds \nonumber \\ & = \bigl ( { \mathcal{l}_h}w_h , i_h ( \eta v_h)\bigr ) - \int_{b_3 } \bigl ( 2a\nabla\eta\cdot \nabla w_h + w_h a : d_h^2 \eta \bigr ) v_h \ , dx \nonumber \\ & \qquad\nonumber -\int_{b_3 } ( a : d^2_h w_h)(\eta v_h- i_h ( \eta v_h))\ , dx + \sum_{e\in { \mathcal{e}_h}^i } \int_{e\cap \bar{b}_3 } { \bigl[\hspace{-0.075cm}\bigl[a{\nabla}w_h\bigr]\hspace{-0.075cm}\bigr ] } ( \eta v_h - i_h ( \eta v_h))\ , ds.\end{aligned}\ ] ] by hlder s inequality , lemmas [ tracelemma][inverselem ] , [ superlem ] , and we obtain which implies that applying this upper bound to yields : we now use a covering argument to obtain the global estimate . to this end , let with sufficiently large ( but independent of ) such that . setting and , we have by since , we have consequently , since is independent of , we have finally , an application of lemma [ discreteinterp ] yields applying the cauchy - schwarz inequality to the last term completes the proof . using arguments analogous to those in lemma [ localstabilitylemma ], we also have the following stability estimate for the formal adjoint operator . 
due to its length and technical nature , we give the proof in the appendix . [ dualgardingestimatelemma ] there exists an such that , provided and . denote by the formal adjoint operator of . then inequality is equivalent to the stability estimate . thus , the adjoint operator is injective on . since is finite dimensional , on is an isomorphism . this implies that is also an isomorphism on . the stability of the operator is addressed in the next theorem , the main result of this section . [ maintheorem ] suppose that , and . then there holds the following stability estimate : . consequently , there exists a unique solution to satisfying . for a given , lemma [ dualgardingestimatelemma ] guarantees the existence of a unique satisfying . by we have ; the last inequality is an easy consequence of hölder's inequality , lemma [ discreteinterp ] and the poincaré - friedrichs inequality . taking in , we have , and therefore . applying this estimate in proves . finally , to show the existence and uniqueness of the finite element method it suffices to show the estimate . this immediately follows from and hölder's inequality : . the stability estimate in theorem [ maintheorem ] immediately gives us the following error estimate in the semi - norm . [ errorestimatethm1 ] assume that the hypotheses of theorem [ maintheorem ] are satisfied . let and denote the solutions to and , respectively . then there holds . consequently , if for some , there holds , where . by theorem [ maintheorem ] and the consistency of the method , we have ; applying the triangle inequality yields . in this section we present several numerical experiments to test the efficacy of the finite element method , as well as to validate the convergence theory . in addition , we perform numerical experiments where the coefficient matrix is not continuous and/or degenerate . while these situations violate some of the assumptions given in section [ sec-3.1 ] , the tests show that the finite element method is effective in these cases as well . in this test we take , the coefficient matrix to be , and we choose such that is the exact solution . the resulting and piecewise errors for various values of the polynomial degree and the discretization parameter are depicted in figure [ figuretest1 ] . the figure clearly indicates that the errors behave as follows : . in addition , the numerical experiments suggest that ( i ) the method converges with optimal order in the -norm and ( ii ) the method is convergent in the piecewise linear case ( ) . ( figure [ figuretest1 ] : ( left ) and piecewise ( right ) errors for test problem 1 with polynomial degree ; the error converges with order , whereas the piecewise error converges with order . ) for the second set of numerical experiments , we take the domain to be the square , and we take the coefficient matrix to be ; we choose the data such that the exact solution is given by . we note that for ; in particular , for and for . in order to apply theorem [ errorestimatethm1 ] to this test problem , we recall that the degree nodal interpolant of with satisfies for . since for , theorem [ errorestimatethm1 ] then predicts the convergence rate for any .
note that a slight modification of these arguments also shows that . the errors of the finite element method for test 2 using piecewise linear , quadratic and cubic polynomials are depicted in figure [ figuretest2 ] . as predicted by the theory , the error converges with order if the polynomial degree is greater than or equal to two . similar to the first test problem , the numerical experiments also show that the error converges with optimal order , i.e. , . ( figure [ figuretest2 ] : ( left ) and piecewise ( right ) errors for test problem 2 with polynomial degree ; the error converges with order , whereas the piecewise error converges with order . ) for the third and final set of test problems , we take , and exact solution . we remark that the choice of the matrix and solution is motivated by aronson's example for the infinity - laplace equation . in particular , the function satisfies the quasi - linear pde , where . noting that , we see that . unlike the first two test problems , the matrix is not uniformly elliptic , as for all . therefore the theory given in the previous sections does not apply . we also note that the exact solution satisfies the regularity for , and therefore for . the resulting errors of the finite element method using piecewise linear and quadratic polynomials are plotted in figure [ figuretest3 ] . in addition , we plot the computed solution and error in figure [ figuretestpicture ] with and . while this problem is outside the scope of the theory , the experiments show that the method converges , and the following rates are observed : . ( figure [ figuretest3 ] : ( left ) and piecewise ( right ) errors for the degenerate test problem 3 with polynomial degree and ; the error converges with order and the error converges with order . figure [ figuretestpicture ] : computed solution and error . ) _ the tuba family of plate elements for the matrix displacement _ , aero . , 72:701-709 , 1968 . _ special finite element methods for a class of second order elliptic problems with rough coefficients _ , siam j. numer . anal . , 31(4):945-981 , 1994 . _ sur la généralisation du problème de dirichlet _ , math . ann . , 69(1):82-136 , 1910 . _ the mathematical theory of finite element methods _ , third edition , springer , 2008 . s.c . brenner and l .- y . sung , _ interior penalty methods for fourth order elliptic boundary value problems on polygonal domains _ , 22/23:83-118 , 2005 . _ properties of the solutions of the linearized monge - ampère equation _ , amer . j. math . , 119(2):423-465 , 1997 . _ the finite element method for elliptic problems _ , north - holland , amsterdam , 1978 . _ the stability in and of the -projection onto finite element function spaces _ , math . comp . , 48:521-532 , 1987 . _ discontinuous galerkin methods for non - variational problems _ , arxiv:1304.2265v1 [ math.na ] . _ partial differential equations _ , graduate studies in mathematics , volume 19 , ams , providence , 2002 .
_ mixed finite element methods for the fully nonlinear monge - ampère equation based on the vanishing moment method _ , siam j. numer . anal . , 47(2):1226-1250 , 2009 . _ controlled markov processes and viscosity solutions _ , second edition , springer , 2006 . _ potential space estimates for green potentials in convex domains _ , proc . amer . math . soc . , 119(1):225-233 , 1993 . _ elliptic partial differential equations of second order _ , springer - verlag , berlin , 2001 . _ elliptic problems on nonsmooth domains _ , pitman publishing inc . , 1985 . _ elliptic partial differential equations _ , courant lecture notes in mathematics , 1 , new york university , courant institute of mathematical sciences , new york ; ams , providence , ri , 1997 . m. jensen and i. smears , _ on the convergence of finite element methods for hamilton - jacobi - bellman equations _ , siam j. numer . anal . , 51(1):137-162 , 2013 . _ linear and quasilinear elliptic equations _ , translated from the russian by scripta technica , inc . , translation editor : leon ehrenpreis , academic press , new york - london , 1968 . _ a finite element method for second order nonvariational elliptic problems _ , siam j. sci . comput . , 33(2):786-801 , 2011 . _ quadratic finite element approximations of the monge - ampère equation _ , j. sci . comput . , 54(1):200-226 , 2013 . _ pointwise error estimates and asymptotic error expansion inequalities for the finite element method on irregular grids : part i. global estimates _ , math . comp . , 67(223):877-899 , 1998 . _ some new error estimates for ritz - galerkin methods with minimal regularity assumptions _ , math . comp . , 65(213):19-27 , 1996 . _ discontinuous galerkin finite element approximation of non - divergence form elliptic equations with cordes coefficients _ , siam j. numer . anal . , 51(4):2088-2106 , 2013 . _ polynomial approximation on tetrahedrons in the finite element method _ , j. approximation theory , 7:334-351 , 1973 . _ a family of 3d continuously differentiable finite elements on tetrahedral grids _ , appl . numer . math . , 59(1):219-233 , 2009 . here , we provide the proof of lemma [ superlem ] . as a first step , we use standard interpolation estimates to obtain , for , . since and , we find , where an inverse estimate was applied to derive the last inequality . combining with and using the hypothesis then gives us . therefore , for we have , and thus is satisfied . to obtain the second estimate , we first use and an inverse estimate to get . by similar arguments we find . therefore , by lemma [ traceline ] and , we obtain

Σ_{e} h_e^{1-p} ‖[[ ]]‖^p_{l^p(e ∩ d̄)} ≲ Σ_{t ∩ d ≠ ∅} [ ‖d²( η v_h - i_h( η v_h ) )‖^p_{l^p(t)} + h^{-p} ‖∇( η v_h - i_h( η v_h ) )‖^p_{l^p(t)} ] ≲ Σ_{t ∩ d ≠ ∅} (1/d^{2p}) ‖v_h‖^p_{w^{1,p}(t)} ≤ (1/d^{2p}) ‖v_h‖^p_{w^{1,p}(d_h)} .

taking the -th root of this last expression yields the estimate . the proof of uses the exact same arguments and is therefore omitted . to prove lemma [ dualgardingestimatelemma ] we introduce the discrete -type norm and the norm ( defined for functions ) ; the desired estimate is then equivalent to , where we recall that is the adjoint operator of . due to its length , we break up the proof of into three steps . _ step 1 : a local estimate . _ the first step in the derivation of ( equivalently , ) is to prove a local version of this estimate , analogous to lemma [ localnondivlemma ] .
to this end , for fixed , let , , and be as in lemmas [ operatordiffform][localnondivlemma ] , with to be determined . fora fixed , let satisfy in with multiplying the pde by , integrating over , and using the consistency of yields therefore , for any , there holds where is given by with . now take to be the elliptic projection of with respect to , i.e. , lemma [ stabilitylem ] ensures that is well defined and satisfies the estimate combining lemma [ operatordiffform ] , and , we have taking sufficiently small and rearranging terms gives the local stability estimate for finite element functions with compact support : _ step 2 : a global grding - type inequality . _ + we now follow the proof of lemma [ localstabilitylemma ] to derive a global grding - type inequality for the adjoint problem .let be given in the first step of the proof , , and .let satisfy the conditions in lemma [ localstabilitylemma ] ( cf . ) . by the triangle inequalityand we have for any applying lemmas [ lcontlemma ] , lemma [ superlem ] ( with ) and an inverse estimate yields the goal now is to replace appearing in the right hand side of by plus low order terms .to this end , we write for ( cf . [ add9c ] ) , \\ & \quad\nonumber= a_h(i_h(w_h \eta),v_h)+a_h(w_h\eta -i_h(w_h\eta),v_h)+ \big[a_h(w_h,\eta v_h)-a_h(w_h\eta , v_h)\big]\\ & \quad{\nonumber}= : i_1+i_2+i_3.\end{aligned}\ ] ] to derive an upper bound of , we use and properties of the interpolant and cut off function : next , we apply lemmas [ lcontlemma ] , [ superlem ] and an inverse estimate to bound : to estimate , we add and subtract and expand terms to obtain \\ & \nonumber = -\int_{b_3 } \big(w_h a_0:d^2 \eta + 2 a_0{\nabla}\eta \cdot { \nabla}w_h\big ) v_h\ , dx\\ & \nonumber \qquad -\int_{b_3 } \big(w_h ( a - a_0):d^2 \eta + 2 ( a - a_0){\nabla}\eta \cdot { \nabla}w_h\big ) v_h\ , dx = : k_1+k_2.\end{aligned}\ ] ] applying hlder s inequality and lemmas [ discretepoincare][discreteholder ] yields similarly , by lemma [ discretepoincare ] and , we obtain combining results in the following upper bound of : applying the estimates to , to results in and therefore by , combining and yields finally , we use the exact same covering argument in the proof of lemma [ localstabilitylemma ] to obtain taking sufficiently small and kicking back the last term then yields the grding - type estimate _ step 3 : a duality argument _ + in the last step of the proof , we shall combine a duality argument and to obtain the desired result . define the set since is precompact in , and due to the elliptic regularity estimate , the set is precompact in .therefore by ( * ? ? ?* lemma 5 ) , for every , there exists a such that for each and there exists satisfying note that implies . for we shall use to denote the solution to .we then have by lemma [ lcontlemma ] , for any and , choosing so that is satisfied ( with ) and using the definition of the norm results in finally we apply this last estimate in to obtain taking sufficiently small and kicking back a term to the left hand side yields .this completes the proof .[ discretepoincare ] there holds for any with , denote by the argyris finite element space , and let be the enriching operator constructed in by averaging .the arguments in and scaling show that , for , where is given by . 
since and , the usual poincaré inequality gives . therefore , by adding and subtracting terms , we obtain for , where again we have used the assumption . the proof is complete . [ discreteholder ] for any smooth function , and , there holds . let be the enriching operator in lemma [ discretepoincare ] satisfying . since , we have . combining this estimate with the triangle inequality , , and an inverse estimate gives the desired estimate .
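as a concrete complement to remark ( d ) of section [ sec-3 ] , the following minimal sketch ( ours ; the reduced scheme and the sign convention -a(x)u'' = f with homogeneous boundary values are assumptions consistent with the piecewise - linear case described there , where the elementwise hessian vanishes and only the flux - jump terms survive ) assembles and solves the one - dimensional method on a uniform mesh :

    import numpy as np

    def dg_fem_1d(a, f, n):
        # p1 version of the method in one dimension: on a uniform mesh the
        # scheme reduces (our reading of remark (d)) to
        #   a(x_j) * (2*u_j - u_{j-1} - u_{j+1}) / h = h * f(x_j)
        h = 1.0 / n
        x = np.linspace(0.0, 1.0, n + 1)
        xi = x[1:-1]                       # interior nodes
        A = (np.diag(2.0 * a(xi) / h)
             + np.diag(-a(xi)[:-1] / h, 1)     # row j couples to node j+1
             + np.diag(-a(xi)[1:] / h, -1))    # row j couples to node j-1
        u = np.zeros(n + 1)
        u[1:-1] = np.linalg.solve(A, h * f(xi))
        return x, u

note that for variable coefficients the resulting matrix is nonsymmetric , mirroring the nonsymmetric bilinear form of the method ; for constant this reproduces the standard second - order finite difference solution .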
|
this paper is concerned with finite element approximations of strong solutions of second - order linear elliptic partial differential equations ( pdes ) in non - divergence form with continuous coefficients . a nonstandard ( primal ) finite element method , which uses finite - dimensional subspaces consisting of globally continuous piecewise polynomial functions , is proposed and analyzed . the main novelty of the finite element method is to introduce an interior penalty term , which penalizes the jump of the flux across the interior element edges / faces , to augment a nonsymmetric piecewise defined and pde - induced bilinear form . existence , uniqueness and an error estimate in a discrete energy norm are proved for the proposed finite element method . this is achieved by establishing a discrete calderón - zygmund - type estimate and mimicking strong solution pde techniques at the discrete level . numerical experiments are provided to test the performance of the proposed finite element method and to validate the convergence theory .
|
the first generation of gravitational wave detectors is either already online and gathering scientific data ( ligo , geo600 , tama ) or about to start taking data ( virgo ) . ligo and geo600 have successfully completed several short data taking runs ( so - called science runs ) in coincidence . tama has accumulated over 2000 hours of data , and a large portion of this data was taken in coincidence with ligo and geo600 . all detectors are currently in the commissioning stage and are steadily approaching their design sensitivities . improvements in the performance of the detectors are carried out in several directions : ( i ) sensitivity improvements ( tracing and reducing noise levels from different subsystems ) , ( ii ) increasing the duty cycle ( the time spent acquiring data suitable for astrophysical analysis as a fraction of the total operational time ) , and ( iii ) improving the data quality ( stationarity ) . however , at present the data is neither stationary nor gaussian over time scales greater than a few minutes . the detector output contains various spurious transient events . unfortunately , the output of an optimal filter reflects these events , especially various glitches . by a glitch here we mean a short - duration spurious transient ( of almost delta - function shape ) with a broad - band spectrum that leads to a high signal - to - noise ratio ( snr ) at the output of matched filtering . distinguishing these events from real events of astrophysical origin and dropping them out of consideration is called vetoing . in addition to the main gravitational wave channel , interferometers record a large volume of auxiliary data from environmental monitors and various signals from the many detector subsystems . these monitors help to find correlations between abnormalities in environmental or instrumental behaviour and events in the strain channel with high snr . the transients which correlate in both the strain and auxiliary channels ( occur in both within a coincidence window ) can be discarded on the grounds of noise coupling between the strain channel and the detector's subsystems ( provided we understand the physical reasons for such a coupling mechanism ) . this is what is regarded as instrumental vetoes . the instrumental vetoes are helpful for removing some fake events ; however , they are not enough . we have other events which are of an artificial nature , but the information which would help us to remove these events either was not recorded or is not recognised . so , in addition to instrumental vetoes , we need to apply signal based vetoes : vetoes which are based on our knowledge of a signal's shape in the frequency and/or time domain . for signal based vetoes , we need to construct a statistic which helps us to discriminate false signals from the true ones . the time - frequency discriminator suggested in is an example of such a statistic . this vetoing statistic is used in a search for gravitational waves from binary systems consisting of two compact objects ( neutron stars ( ns ) , black holes ( bh ) , ... ) orbiting each other on an inspiralling trajectory due to the loss of orbital energy and angular momentum through gravitational radiation . a lot of effort has been put into modelling the waveform from coalescing binaries . the waveforms ( often referred to as chirps ) are modelled with reasonable accuracy , so that matched filtering can be employed to search the data for these signals .
in the case of the discriminator , we use the time - frequency properties of the chirp in order to discard ( to veto out ) any spurious event which produces an snr above a preset threshold at the matched filter output . the performance of might depend on the number of bins used in computing the statistic . in this paper we suggest a possible way to optimize the discriminator with respect to the number of bins . we use software injections ( adding simulated signals ) into data taken by the geo600 detector during the first science run ( s1 ) in order to study the distribution of the statistic for simulated signals and for noise - generated events . the optimal number of bins is the one which maximizes the detection probability for a given false alarm rate . this method is quite generic and can be used for tuning any vetoing statistic which depends on one or several parameters . though the discriminator works reasonably well , it is still desirable to have additional independent signal based vetoes , which would either increase our confidence or improve our ability to separate genuine events from spurious ones . some investigations have already been made in this direction . in addition to signal based vetoes , a heuristic veto method was suggested in ; it is based on counting the number of snr threshold crossings within a short time window . in this paper we suggest several new signal based statistics which can complement or enhance its performance . we introduce a statistic inspired by the kolmogorov - smirnov ``goodness - of - fit'' test , which we call the -statistic . we derive its probability distribution function in the case of signals buried in gaussian noise . we also suggest a few other -like and -like statistics and show that their combination could increase the vetoing efficiency even further . throughout the paper we have used the following assumptions and simplifications . we shall assume that the waveforms used in our simulations , the ``taylor'' approximants ( t1 ) at second post - newtonian order in the notation used in , are an exact representation of the astrophysical signal . the study performed in this paper is not restricted by the waveform model and could be repeated for any other model at the desired post - newtonian order . the waveforms depend on several parameters : some of these parameters are intrinsic to the system , like the masses and spins , while others are extrinsic , like the time and phase of arrival of the gravitational wave signal . to search for such signals we use a bank of templates , which can be seen as a grid in the parameter space . the separation of templates in the parameter space is defined by the allowed loss in snr ( or , equivalently , by a loss in detection probability ) . the detector output is usually filtered through a bank of templates for parameter estimation . for the sake of simplicity , we have used a single template with parameters identical , or very close , to those of the signal used in the monte - carlo simulation described in section [ iii ] . this paper is structured as follows . we start in section [ ii ] by recalling the widely used time - frequency discriminator . in section [ iii ] , we describe the method to optimize the veto with respect to the number of bins . though we show its performance for the optimization of , the method is applicable to any discriminator which depends on some free parameters . section [ iv ] is dedicated to alternative vetoing statistics .
therewe start with the -statistic , then we show few more examples of - and -like statistics ( and correspondingly ) which can potentially increase the vetoing efficiency further . for instance we show that the combination of and statistics ( namely their product ) give the best performance for a day s worth geo600 data .we summarize main results in the concluding section [ v ] and some detailed derivations are given in appendix [ app1 ] .in this section we introduce the notation which will be used throughout the paper and we reformulate the discriminator using new notations . this should be useful in the following sections where we discuss optimization and alternative signal - based vetoing statistics . throughout this paperwe assume that the signal is of a known phase with known time of arrival without loss of generality .indeed , we can use phase and time of arrival taken from the maximization of snr .alternatively , one can extend the derivations below in a manner similar to to deal with the unknown phase . the detector output sampled at denoted by , where is noise and is a signal , which corresponds to the gravitational wave of amplitude .since we will be working mainly in the frequency domain , we use tilde - notation for a fourier image of the time series : .the discrete fourier transform is defined as where , and is a number of points . in order to introduce the discriminator we need to define the following quantities s_i & = & 2_f_k = f_i-1^f_i f , i= 1 , , n ; s = 2_f_k = f_0^f_n f , + q_i & = & _ f_k = f_i-1^f_i ( + c.c.)f , i= 1 , ,f_k = f_0^f_n ( + c.c.)f . note that this notation is different from that used in . herethe one - sided noise power spectral density ( psd ) , , defined as is assumed to be known , as well as `` * '' mean complex conjugate and . we have chosen to work with discrete time and frequency series to be close to reality . here and after we use for the average over ensemble and for the second moment of the distribution .the frequency boundaries correspond to the frequency at which the gravitational wave signal enters the sensitivity band of the instrument and the frequency at the last stable orbit , is beyond the nyquist frequency , , should be taken as . ]( sometimes it is also referred to as the frequency at the innermost stable circular orbit ) . in this notations corresponds to the snr ( up to a numerical factor which does not play any role in the further analysis ) and is a part of the total snr accumulated in the frequency band between and .we choose a normalization for the templates so that .let us emphasize again , that we have assumed that we know the phase and time of arrival , so they are incorporated in the definition of the waveform . for the discriminator , we choose the frequency bands ( _ bins _ ) , so that there is an equal power of signal in each band : .then the discriminator can be written in the notations adopted here as follows ^2 = n_k=1^n ( q_i - q / n)^2 . [ chi2 ] if the detector noise is gaussian , then the above statistic obeys a distribution with degrees of freedom .the main idea behind the discriminator is to split the template into sub - templates defined in different frequency bands , so that if the data contains the genuine gravitational wave signal , the contributions ( ) from each sub - template to the total snr ( ) are equal ( ) . 
in the presence of a chirp in the data orif the data is pure gaussian noise , the value of is low .however , if the data contains a glitch which is not consistent with the inspiral signal , then the value of is large .this statistic is very efficient in vetoing all spurious events that cause large snr in the matched filter output .it was used in the search pipeline for setting an upper limit on the rate of coalescing ns binaries .if we want to apply this vetoing statistic in a binary bh search we should do some modifications of the discriminator eq .( [ chi2 ] ) to increase its efficiency . in practice, it might be difficult to split in bands of exactly the same power for signals from high mass systems , in other words it might be difficult to achieve exactly . indeed , the bandwidth of the signal from binary bh decreases with increasing total mass , , where we used , and is the total mass .in addition we work with a finite frequency resolution , which we might want to decrease to save computational time .finally , the accuracy of splitting the total frequency band depends on the number of bins . based on thiswe suggest a modification of the discriminator , which does not change it statistical properties , but enhances its performance .we introduce which is close to , but not exactly equal to it. then we should redefine statistic according to ^2 = _ i=1^n [ chi2mod ] we refer to for more details on this modification and its properties .in this section we would like to present a method for optimizing parameter - based vetoing statistics .this method also helps to tune the veto threshold for a signal based statistic .though the main focus in this section will be on the optimization of the statistic with respect to the number of bins , this method can also be applied to a general case ( see section [ iv ] ) .first we need to define playground data .playground data is a small subset of the available data chosen to represent the statistical properties of the whole data set .the main idea is to use software injections of the chirps ( adding simulated signals ) into playground data and compare the distribution of for the injected signals and spurious events .there is a trade off between the number of software injections : on the one hand we should not populate the data stream with too many chirps as it will corrupt the estimation of psd , on the other hand the number of injections should not be too small , so that we can accumulate sufficiently large number of samples ( `` sufficiently large '' should be quantified , see ) .another issue is the amplitude of injected signals : the amplitude should be realistic , which means close to the snr threshold used for the search .parameters of the injected chirps ( such as masses , spins , etc . )should be either fixed ( optimization with respect to the particular signal ) or correspond to the range of parameters used for templates in the bank .a generalization could be optimization with respect to several ( group of ) signals and the use of different number of bins for different ( set of ) parameters .that could happen in reality : the search for binary ns and binary bh might have different optimal number of bins . 
to ease our way through , we give an example of the optimization of for signals from the system . we injected a waveform with mass parameters and snr=13 into every 5th segment of the analyzed data . each segment was 16 seconds long . then 2.5 hours of geo600 s1 data were filtered through the template taylort1 ( at second post - newtonian order ) with mass parameters . the template taylort1 corresponds to ``t1'' in . by introducing a slight mismatch in the masses of the system , we have tried to mimic a possible mismatch due to the coarseness of the template bank . we have separated the triggers which correspond to the injected signals from the spurious events by using a 5 msec window around the time of injection . the snr threshold was chosen to be 6 . we then produced histograms of the χ² distribution for injected / detected signals and for spurious events . this procedure was performed for different numbers of bins of the χ² statistic . one can see the results in figure [ chi21 ] : the solid - line histogram shows the distribution of χ² for signals , and the shaded histogram corresponds to the distribution of χ² for spurious events with . we want the distribution of χ² for injected signals to be separated as much as possible from the distribution of χ² for the spurious events . the optimal number of bins is the one which corresponds to the minimum overlap between these two distributions . one can see that for the case considered above the optimal number lies somewhere close to 20 . we need a more rigorous way to define the optimal number of bins ; in other words , we need to quantify the overlap between the two distributions . here we will apply the standard detection technique . first we need to normalize the distributions ( which corresponds to the distribution of χ² for the injected signals ) and ( which corresponds to the distribution of χ² for the spurious events ) so that ∫_0^{+∞} p_1(χ²) dχ² = 1 , ∫_0^{+∞} p_2(χ²) dχ² = 1 . then defines the false alarm probability distribution function , so that we can fix the false alarm probability according to ∫_0^{ (n)} p_2(χ², n) dχ² = . by fixing the false alarm probability , we are essentially fixing the threshold , , on χ² . note that the threshold is a function of the number of bins and of the false alarm probability . for real data , cannot be computed analytically , since it depends on the spurious events or , rather , on the similarity of the spurious events to the chirp signal . thus , the purpose of the playground data is to characterize the non - stationarities in the data . _ we will call the number of bins optimal if for a given it maximizes the detection probability _ ∫_0^{ (n, )} p_1(χ², n) dχ² = p_d . in other words , n_opt = arg max_n ( ∫_0^{ (n, )} p_1(χ², n) dχ² ) . [ opt ] note that we know only for chirps in gaussian noise . the detector's noise , however , is not gaussian over long time scales , so that is also , strictly speaking , unknown to us . this is why we have used software injections . as a bonus , we also derive a threshold on χ² , , which should be used in the analysis of the full data set . as one can see , this method can be applied to _ any _ signal based vetoing statistic . in section [ iv ] we will apply this method to determine the efficiency of other statistics .
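a compact sketch of this selection procedure ( ours ; it assumes the χ² samples have already been collected for each candidate bin count , and that an event survives the veto when its χ² lies below the threshold ) :

    import numpy as np

    def optimal_bins(chi2_injected, chi2_spurious, false_alarm=0.01):
        # chi2_injected[n] / chi2_spurious[n]: arrays of chi^2 values computed
        # with n bins for software injections and for spurious triggers
        best = (None, -1.0, None)
        for n, inj in chi2_injected.items():
            # threshold such that only a fraction `false_alarm` of spurious
            # events has chi^2 below it and therefore survives the veto
            thr = np.quantile(chi2_spurious[n], false_alarm)
            pd = np.mean(inj <= thr)      # detection probability, cf. eq. [opt]
            if pd > best[1]:
                best = (n, pd, thr)
        return best  # (n_opt, detection probability, chi^2 threshold)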
as an example , we can apply eq. ( [ opt ] ) to the simulation described above and quantify the results presented in fig. [ chi21 ] . ( table [ tab1 ] : optimization of χ² ; detection probability and threshold on χ² for various numbers of bins ; the false alarm probability in all cases was 1% . fig. [ splin ] : detection probability versus the number of bins ; the solid line is a cubic spline interpolation . ) the results given in table [ tab1 ] ( especially ) should be taken with caution . we have injected only 214 signals , and this might not be enough to make a definite statement . however , it is a very good indication of the optimal number of bins . we have quite a large number ( 2550 ) of spurious events with , so that the statement about the threshold for a given false alarm probability is quite solid . it should also be mentioned that we have truncated the tail of the distribution for spurious events by neglecting the 5% of all events with the largest χ² ( we keep this 5% truncation for the false alarm distribution in section [ iv ] as well ) . we have also performed a cubic spline interpolation between these points ( see fig. [ splin ] ) to show that the optimal number of bins indeed lies somewhere close to 20 . at the end of this section we would like to mention that an optimal parameter might not exist , or it might not be the obvious one . in this section we will consider other signal based veto statistics . we start with a statistic that was inspired by the kolmogorov - smirnov ``goodness - of - fit'' test . we will show its statistical properties in the case of gaussian noise . then we will consider some possible modifications of that statistic and another χ²-like statistic , which we will call . we show their performance using geo600 s1 data . the original kolmogorov - smirnov ``goodness - of - fit'' test compares two cumulative probability distributions and ( see figure [ ksexampl ] ) , and the test statistic is the maximum distance between the curves and ; is a theoretical distribution and is an observed one . ( figure [ ksexampl ] : the kolmogorov - smirnov statistic is the maximum distance between the two cumulative distributions . ) here we suggest a vetoing statistic which is somewhat similar to the kolmogorov - smirnov one ; or , better said , the new statistic was inspired by the kolmogorov - smirnov test . we start by defining a few more quantities : σ_i = 2 Σ_{f_k=f_0}^{f_i} Δf , i = 1 , ... , m , f_m = f_lso , σ_m = 1 ; q_i = Σ_{f_k=f_0}^{f_i} ( + c.c. ) Δf ; q_m = q , y_k = Δf , where m is defined by the frequency resolution . the main idea is to compare two cumulative functions : the cumulative signal power within the signal's frequency band and the cumulative snr , which is essentially the correlation between the detector output and a template within the same frequency band . we introduce the vetoing statistic according to d = max_i | q_i - σ_i q | , i = 1 , ...
,m-1 [ d ] and let us call it -statistic .however , we have found that , in practice , another statistic , : performs better .nevertheless we start with -statistic and postpone consideration of to the next subsection .the main question which we want to address is what is the probability of in the presence of a true chirp in gaussian noise .although we know that the detector s noise is not gaussian , we can treat it as gaussian noise plus non - stationarities ( spurious transient events ) , and we try to discriminate those non - stationarities from the genuine gravitational wave signals .we refer the reader to appendix [ app1 ] for detailed calculations and we quote here only the final results .if we introduce ( so that ) , then the probability distribution function is the multivariate gaussian probability distribution function and pr(d > d ) = 1 - _ -d^d ( - ) , where the covariance matrix , , is defined in eq .( [ cov ] ) . to show the performance of the -test , we have computed for a glitch that produced snr=16 at the output of the matched filtering and for the simulated chirp added to the data .the result is presented in the fig .[ d_test ] .the upper two panels show : the top graph is plotted for a true chirp , and the middle graph is for a spurious event .the dashed line corresponds to the expected cumulative snr ( ) and the solid line is the actual accumulation ( ) .the lower panel shows the distance ( ) as a function of frequency .the solid line here corresponds to the injected signal and the dashed line is for a spurious event. -test .comparison of the cumulative snr versus expected ( solid and dashed line correspondingly ) for injected chirp ( the top graph ) and for a spurious event ( middle graph ) .the bottom plot shows distance as a function of frequency ( the solid line is for injected chirp and the dashed line is for spurious event).,width=377,height=302 ] as one can see , this test works in practice .however we have found that the -statistic , defined above , performs better .one reason for this is that for the loud gravitational wave signals , we might have large due to slight mismatch in parameters caused by the coarseness of the template bank .we start with another -like discriminator .the suggested statistic is ^2 = n _ i=1^n .the interesting fact is that the tama group is using a similar ( related to the inverse of this quantity ) statistic for the purpose of detection . in the following considerationwe will omit the number of bins as it is just an overall scaling factor which does not affect vetoing .one can see that introduced in eq .( [ chi2mod ] ) is related to the new statistic according to .it is possible to derive the probability distribution function for for gaussian noise following the same line as described in appendix [ app1 ] .unfortunately , the expression is quite messy , especially for the large number of bins and it is not very useful in practice . to check the performance of this statistic we have conducted simulations similar to the ones described in section [ iii ] .namely , we have injected a chirp signal into a day s worth of s1 geo600 data and plotted the two distributions in the upper half of fig .[ altrd1 ] .the shaded histogram in the upper plot is a distribution of for spurious events with and the solid line curve is a distribution of for injected chirp signals .we have chosen 20 bins to compute . 
applying the scheme defined in the section [ iii ] , we find that the detection probability is 95.9% and threshold is 16.47 for a false alarm probability of 1% .note that we did not use playground data for these simulations , so that our result might be biased by the choice of a particular data set .next , we will modify -statistic according to = max_i | - _ i| .define .we will skip the derivation of the probability distribution function in gaussian noise .as in the case of the statistic , the probability could not be expressed in the nice close form , and , therefore , is not useful in practical applications . the performance of statistic is also shown in the fig . [ altrd1 ] ( lower graph ) . to produce this picturewe have used the same simulation as for .the detection probability for the -test is 94.3% and the threshold is 0.21 for a false alarm probability of 1% .and vetoing statistics are presented on the upper and lower plot correspondingly .the shaded histogram corresponds to spurious events , and the solid line histogram is distribution of vetoing statistic for injected signals .we have used one day s worth of s1 geo600 data to conduct these simulations.,width=377,height=340 ] another possible modification of the -statistic is choosing not the largest distance , but the percentile value , in other words , the maximum distance after throwing away , say , 3% of the largest distances .the percentile value could be considered as a parameter for the -statistic , and could be optimized for .to finish with -like statistic , let us give a few other possibilities : d^ * & = & max_i | |,[ad ] + v = d_+ + d_- & = & max_i ( -_i ) + max_i ( _ i - ) . [ kuiper ] the first one , defined by eq .( [ ad ] ) , is the analogue of anderson - darling statistic and the second one , eq .( [ kuiper ] ) , is the analogue of kuiper statistic .the interesting fact is that the product of statistics works even better than each of them separately and one can see this in the figure [ prodct ] .the detection probability in this case is 98.3% and the threshold is 4.7 for a false alarm probability of 1% . for injected signals ( solid line histogram ) and for spurious events ( shaded histogram ) .we have used the same day - long geo600 data as for producing results presented in the fig .[ altrd1 ] ., width=491,height=340 ] the reason that the product of two statistics works even better than each of them separately could be because and might be better suited for different types of spurious events , and equally good for the true signals .the statistics in the product supplement each other to veto larger number of spurious events .we have tried to optimize with respect to the number of bins , following the same line and conducting similar simulations as described in the section [ iii ] .however , we have not found the obvious choice for the optimal number of bins .this is because the detection probability as a function of the number of bins for fluctuates slightly about a constant value for the number of bins between 18 and 40 .in this paper we have considered several signal based vetoes . 
those are various statistics based on our knowledge of the signal we search for , which help us in discriminating genuine gravitational wave signal from spurious events of instrumental or environmental origin .we have outlined the method to optimize -like statistic for the number of bins .this method is based on adding simulated signals to real data and studying the distribution of the vetoing statistic for injected signals and spurious events .the optimal number of bins is the one which maximizes the detection probability for a fixed false alarm probablity .this method also automatically provides us with the vetoing threshold .we have considered two other very promising signal based vetoes : the -like discriminator , and the statistic which was inspired by the kolmogorov - smirnov `` goodness - of - fit '' test .using again simulated injections into geo600 s1 data we have shown that both those statistics could give a very high detection probability ( % ) for a given false alarm probability ( 1% ) .we have also pointed out that we can achieve even better performance if we take the product of the two statistics as a new veto .finally , let us emphasize , that the results of the simulations presented here are data dependent , and the exact numbers for efficiency may vary for different detectors and/or for different data sets of the same detector .however , as it follows from the analytical evaluations and indicated from the conducted simulations , we should expect good performance for all signal based vetoes considered in this paper .the research of s. babak was supported by pparc grant ppa / g / o/2001/00485 .s. babak would like to thank bruce allen and r. balasubramanian for helpful and stimulating discussions .the authors would like to thank b.s .sathyaprakash for helping to clarify the manuscript and for very useful suggestions and comments. finally , the authors are also grateful to the geo600 collaboration for making available the data taken by geo600 detector during s1 run .this appendix is dedicated to deriving the probability that the -statistic , introduced in ( [ d ] ) , is larger than a chosen value .the derivations presented here are conducted along the line similar to the one described in appendix a of .we assume that the detector s noise is gaussian .introduce , then .the main question we want to address is what is the probability of in the presence of a true chirp : pr(d >d ) & = & pr(max_i \{|y_i| } > d ) = 1 - pr(max_i \{|y_i| } < d ) = 1-pr(|y_1|<d, ... ,|y_m-1|<d ) + & = & 1 -_-d^d ..._-d^dp(y_1, ... ,y_m-1)dy_1 ... dy_m-1 . we need to find the probability distribution and we start with statistical properties of : we know that are independent gaussian random variables. we can find their mean and variance , e(y_k ) & = & 2a = a(_k - _ k-1 ) a_k , + var(y_k ) & = & _ k , _ k = 2f . taking into account the fact that are independent and have normal distribution , , we can write p(y_1, ...,y_m ) = _ i=1^m 1 . &dx_1 ... dx_m-1 & |p(x_1, ...,x_m-1)f(x_1, ... ,x_m-1 ) = + & dy_1 ... dy_m & p(y_1, ... ,y_m ) f(y_1 - _ 1_k=1^my_k , ... , _ k=1^m-1y_k - _ m-1_k=1^my_k ) and choose .this yields p(y_1, ... ,y_m-1 ) = dy_1 ... dy_m p(y_1, ...,y_m)(y_1 - _ 1_k=1^my_k -y_1 ) ... + ( _ k=1^m-1y_k - _ m-1_k=1^my_k -y_m-1).[int1 ] under the following change of variables of integration y_1 & = & z_1 + _ 1w ; w= _ k=1^m y_k + y_i & = & z_i - z_i-1 + _ iw , i=2, ... ,m-1 + y_m & = & -z_m-1 + _ m w + j & = & det = _ k=1^m_k=1,the integral ( [ int1 ] ) takes the form p(y_1, ... 
,y_m-1 ) = dz_1 ... dz_m-1dw ( z_1-y_1) ...(z_m-1-y_m-1 ) .the argument of the exponent can be expressed in term of new variables according to _i=1^m + ( w - a)^2= * zc*^-1*z*^t + ( w - a)^2 , where in the expression above we used , * z * is a vector column , and is inverse of the covariance matrix , * c * , _ ij = ( 1_i + 1_i+1)_ij - 1_j_i+1 j -1_i_i j+1.[cov ] note that taking all above into account and performing integration over we arrive at the required probability distribution function pr(d >d ) = 1 - _ -d^d ( - ) one also can compute mean and variance for each : e(y_i ) & = & 0 , + cov_ij(y_iy_j ) & = & _ i(1-_j ) .we would like to emphasize , that like in the case or discriminator , and , correspondingly , do not depend on the signal amplitude .d. sigg , _ class .quantum grav . _* 21 * , s409 ( 2004 ) b. wilke _et al _ , _ class .quantum grav . _* 21 * , s417 ( 2004 ) r. takahashi _ et al _ , _ class .quantum grav . _* 21 * , s697 ( 2004 ) f. acernese _et al _ , _ class .quantum grav . _ * 21 * , s385 ( 2004 ) blanchet l. , faye g. , iyer b. , and joguet b. , _ phys. rev . _ * d65 * , 061501(r ) , see also 064005 ( 2002 ) buonanno a. , and damour t. , _ phys rev . _ * d62 * , 064015 ( 2000 ) damour t. , _ phys ._ * d64 * , 124013 ( 2001 ) damour t. , iyer b and sathyaprakash b. , _ phys ._ * d63 * , 044023 ( 2001 ) smirnov n.v . , _ bull .( 1939 ) press w.h ., et al , _ numerical recipes in c _ , cambridge univ . press , ( 1992 ) anderson t.w . and darling d.a . , _ anal . of math .* 23 * , 193 ( 1952 ) kuiper n.h . , _ proceedings of koninklijke nederlandse akademic van wetenschappen _ , ser a , * 63 * , 38 ( 1962 )
|
the matched filtering technique is used to search for gravitational wave signals of a known form in the data taken by ground - based detectors . however , the analyzed data contain a number of artifacts arising from various broad - band transients ( glitches ) of instrumental or environmental origin which can appear with high signal - to - noise ratio in the matched filtering output . this paper describes several techniques to discriminate genuine events from false ones , based on our knowledge of the signals we look for . starting with the $\chi^2$ discriminator , we show how it may be optimized over its free parameters . we then introduce several alternative vetoing statistics and discuss their performance using data from the geo600 detector .
|
many inference problems take the form of model selection .the example we focus on here is using radial velocity ( rv ) data to determine how many planets orbit a given star .a -planet fit is a representation a more precise version of this formula is ( [ eq : vt ] ) below .we wish to infer the `` parameter '' from rv data .the bayesian approach does not select a specific , but gives posterior probabilities .bayesian model selection hinges on the _ fully marginalized likelihood _ integral ( fml ) , also called the _bayesian evidence _ integral . in the abstract formulation, there is data , , and a family of models .the prior probability of model is .the probability of observing data is the fml , denoted .the posterior probability of model is in general , model has parameters . in our planet - selection problem , there are two overall parameters per data source and 5 parameters describing the orbit of each planet , so , where is the number of data sources .the prior probability density for is .the probability density for the data in model with parameters is the _ likelihood function _ , .the overall probability of is the integral over all possible parameter values this paper suggests a way to compute this challenging integral .other approaches include reversible jump mcmc , parallel tempering , nested sampling , diffusive nested sampling , and population monte carlo .our method has computational advantages over these in the case in which the data are good enough such that the posterior parameter distribution , is much more localized than the prior distribution , .the generic evidence integral fml problem is to evaluate integrals of the form we assume it is possible to evaluate and .we find an approximation of the integrand and consider a _ geometric path _ , .the corresponding integrals are the starting integral can be known in closed form or easy to evaluate .the desired integral is .our method is motivated by the multi - canonical monte carlo approach used in statistical physics to evaluate the partition function as a function of temperature , .we are also inspired by to use geometric path , but we differ from them that we do not sample paths . for our problems, we use a gaussian approximation to the posterior as . 
in this case, is what the _ bayesian information criterion _( bic ) uses as an estimate of the desired as a model selection criterion .variance estimation and error bars are an essential part of any monte carlo computation .our algorithm chooses steps using explicit mcmc variance estimates that use estimated auto - correlation times .this leads to a robust algorithm and estimates of with explicit uncertainty estimates or error bars .the estimated error bars agree with error bars from experiments with multiple independent evaluations .the basic geometric path idea goes back at least to the multi - histogram thermodynamic integration method of , which was our motivation .adaptions for computing the fml were developed by several authors , see , , and references there .we differ from them in certain technical but important ways .we choose the steps ( see below for notation and definitions ) adaptively during the computation using on - the - fly estimates of the variance as a function of .this leads to efficient and robust evaluations with specified error bars .also , we choose the starting distribution as multivariate gaussian whose covariance matches the estimated covariance matrix of the posterior .it seems to make a large difference for our applications , in which the components of are highly correlated in the posterior , to get the covariance structure of the posterior right from the beginning .we apply our algorithm to evaluating the fml of multi - companion models fitting for rv data from hip 88048 ( oph ) taken as part of the lick k - giant search .we choose hip 88048 to study because it has two confirmed brown - dwarf companions of approximately 530-d period and 3210-d period , but may hide additional companions .also the noise level of hip 88048 is low , so over - fitting should be more obvious when excessive amount of companions are added to the model .we evaluate the fml of 1 , 2 , 3 and 4-companion models based on the data .the result shows that 2 companion model has the largest fml among them given various prior distributions .we also apply the algorithm to evaluate the fml of multi - planet models for gliese 581 .we use the combination of both harps data and hires data .we evaluate the fml of 3 , 4 , 5 , and 6-planet models .we find that the 5 and 6-planet models have the largest fmls among the four models .but the fml difference between 5 and 6-planet model is much smaller than reported in .we point out two philosophical issues : the criteria for model selection , and the role of the prior .there are ( at least ) two reasons to do model selection .one is to estimate the number of planets or companions of a given star above a threshold size .another is to accurately predict future measurements .these can lead to different model selection criteria and different results .a model with fewer than the correct number of planets has fewer parameters and therefore may suffer less from over - fitting .cross validation is a model selection criterion based on measuring predictive power , while bayesian fml criteria are based on bayesian estimates of the number of planets .the need to specify somewhat subjective priors is a weakness of bayesian estimation .priors contain normalization factors that depend on things such as the range of allowed amplitudes . in cases where the prior is uncertain, bayesians hope that the posterior is robust with respect to details of the prior .the present model selection results are less robust with respect to the prior , see fig . ( [ fig:282lnzk ] ) . 
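whatever priors are adopted , the final model - selection arithmetic — normalizing the products of the fmls with the model priors , as in the posterior model probabilities defined at the start of this paper — is cheap once the fmls are computed . a small helper ( the function name and the example numbers are ours , purely illustrative ) keeps this stable in the log domain :

```python
import numpy as np

def model_posteriors(log_z, log_prior=None):
    """Posterior model probabilities from log fully marginalized
    likelihoods log_z[k] = ln Z(D|M_k); uniform model prior if None."""
    log_z = np.asarray(log_z, dtype=float)
    w = log_z if log_prior is None else log_z + np.asarray(log_prior)
    w = w - w.max()              # log-sum-exp guard against underflow
    p = np.exp(w)
    return p / p.sum()

# e.g. three models whose ln Z values differ by a few units:
print(model_posteriors([-850.0, -847.7, -849.1]))
```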
as a simple illustration ,suppose each planet has a parameter whose prior is uniformly distributed in a range ] .then the posterior is almost the same if we assume a smaller range ] .see for a different thoughtful discussion of priors in model selection .there are many aspects of the multi - planet priors that need to be examined . in the present study of hip 88048, we focus only on the possibility of small planets .all stars may have some satellites .it may be a more interesting question to ask how many planets there are above a given observable size .the example of fig .( [ fig:282lnzk ] ) shows that the data slightly favor two planets to three with no lower constraint on the size .but with a constraint of at least , the relative weights differ by a factor of .section [ sec : m - cmc ] explains the geometric - path monte carlo algorithm that we use .section [ sec : rosenbrock ] describes a simple model computation that allows us to validate the algorithm , both the computed answer and the estimated variance .section [ sec : tszyj ] presents fml computations for the interesting star hip 88048 .we are able to conclude that it probably has two significant planets even though there are suggestions of a third .any planets beyond the two confirmed ones are too small to be identified using the rv data we have . finally , in section[ sec : discussion ] , we discuss our findings and future projects .to implement the geometric - path monte carlo algorithm , we first find a distribution , which is an approximation to the posterior distribution and the normalization of which is known exactly .we then define a mixture of and the posterior distribution according to the geometric path , where is a normalization factor , and labels the geometric path and ranges between 0 and 1 .( [ eq : p - beta ] ) is similar to the geometric path used in bridge sampling or path sampling .it is easy to see that and where is , the fml .note that is known exactly .( [ fig:282m3gmc ] ) illustrates the change of with .we estimate for an increasing sequence , starting with , and continuing to .if is known , then is found from using the integral in ( [ eq : epk ] ) expresses as an expectation under .we write this as \ , , \ ] ] where the representations ( [ eq : z - beta_2 ] ) may be combined to yield the product formula we estimate the factors using mcmc sampling of then multiply these estimates to estimate . in our method , we choose a gaussian distribution as , and it can be expressed as where is the dimension , the mean vector and the inverse of the covariance of the gaussian .there are various ways to find a suitable and . one could take to be the global maximum of and to be the hessian matrix of at .this corresponds to using the laplace integration approximation to the evidence integral ( [ eq : evidence - integral ] ) , see e.g. , , which is the basis of the _ bic _ , or _bayesian information criterion _ , see e.g. , .our approach uses less computational infrastructure .we use an mcmc sampler to sample the posterior , then take to be the empirical mean and the inverse of the empirical covariance matrix of these samples. the mcmc sampler that we use to sample the posterior and is the affine invariant ensemble sampler by stretch move from the emcee package .this algorithm has the advantage of being able to sample highly anisotropic distributions without problem - dependent tuning .we use the mcmc estimator of , which is where are samples from . 
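a compact sketch of this product estimator follows ( its variance is taken up next ) . it is our own illustration , not the authors ' implementation : `sample_at` stands for whatever mcmc machinery is used ( e.g. the stretch - move ensemble sampler mentioned above ) , `log_f` is $\ln [ p(\theta) l(\theta) ]$ , `log_g` is the log of the normalized gaussian , and the step control uses a naive iid error estimate in place of the autocorrelation - aware one described below :

```python
import numpy as np

def log_step_ratio(log_f, log_g, dbeta):
    """ln of the MCMC estimate of Z(beta+dbeta)/Z(beta) from samples
    of p_beta, using y_j = [f(theta_j)/g(theta_j)]**dbeta."""
    d = dbeta * (np.asarray(log_f) - np.asarray(log_g))
    m = d.max()
    return m + np.log(np.mean(np.exp(d - m)))

def log_evidence(sample_at, log_f, log_g, n=20000, r0=0.1, dbeta_min=1e-4):
    """Walk beta from 0 to 1; each step is halved until the (naive)
    relative error of the ratio estimate drops below r0."""
    log_z, beta = 0.0, 0.0          # Z(0) = 1 because g is normalized
    while beta < 1.0:
        theta = sample_at(beta, n)  # MCMC samples from p_beta
        lf, lg = log_f(theta), log_g(theta)
        step = 1.0 - beta
        while step > dbeta_min:
            d = step * (lf - lg)
            y = np.exp(d - d.max())
            if y.std() / (y.mean() * np.sqrt(n)) <= r0:
                break
            step *= 0.5
        log_z += log_step_ratio(lf, lg, step)
        beta += step
    return log_z
```

the rosenbrock integral of the next section is a convenient end - to - end test for a sketch like this .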
define as the variance of .we use the variance estimator described in , which is where is an estimate of the auto - correlation time of the chain of , estimated using the self - consistent window method . in principle( see ) , we should use the autocorrelation time of .but this is more expensive to compute .we hope is not a strong function of the exponent .the estimation error , , is likely to be of the order of its standard deviation , .the _ relative error _ is the estimation error normalized by the quantity being estimated .a standard estimate of the relative error is we choose to be the largest with , where is a pre - set value .it is possible to do this because while as long as there are no 0 s in .the situation that the chain of contains 0 usually happens when and one is sampling . because there may be samples from which have zero prior or likelihood , the numerator in eqn .( [ eq : y - sample ] ) is then zero . in such cases, there is a non - zero lower limit of , which we denote as .if , one can take a small enough as , so that achieves the lower limit .both and are tuning parameters of this algorithm . how to set them depends on specific problems .one may need to do a few trial runs to set and values that are sensible .it is convenient to describe the above mcmc estimation errors in the following way . given enough burn - in , the mcmc estimates are nearly unbiased .for large , they are approximately normal .we choose so that the error standard deviation is roughly .therefore , we may use the approximate error expression , where the are `` standard '' independent gaussians with mean zero and variance one .we use the natural estimator of the product ( [ eq : zpr ] ) , which is with the above error description , this may be approximated as this leads directly to an estimate of the standard error of , which is at the end of a run we have a good simple error bar for the computed evidence integral , because everything on the right side of ( [ eq : z - var - est ] ) is known or estimated .we apply the algorithm to an integral involving the rosenbrock function , because the two variables in the rosenbrock functions are highly correlated and their distributions deviate from gaussian distribution greatly , which we hope will mimic some properties of the posterior of real problems .specifically , we will try to evaluate the following integral where }(\theta_1)\ , \mathbbm{1}_{[-5,\,5]}(\theta_2)\ , , \label{eq : rpi}\ ] ] and the contour plot of is shown in fig .( [ fig : rosen ] ) .a direct quadrature with points in the square \times [ -5,5]$ ] gives .we ran the above algorithm with samples for each and variance control parameter .the mean vector and the inverse of the covariance matrix are obtained from sampling the posterior : we repeat the evaluation times to get multiple independent results .the algorithm turned out to use values of .the mean of the estimates was this agrees well with answer computed by quadrature .the standard deviation of the evaluations is this agrees reasonably well with the standard error predicted by ( [ eq : z - var - est ] ) , which is .hip 88048 ( ophiuchi ) is a k0iii star , and has mass and radius .we have in total 131 radial velocity data from hip 88048 .we estimated orbital parameters for two brown - dwarf companions using the method of .the parameter means and standard errors are listed in tab .( [ tab:282param ] ) .gliese 581 ( hip 74995 ) is a m3v star , and has mass and radius .with all the available harps data , reported a total of four planets around 
the star . by combining both haprs and hires data , reported two additional planets . did a bayesian analysis on harps data from gliese 581 and reported that 5-planet model has the largest marginalized likelihood .the 5 orbital parameters per companion are the velocity amplitude , the mean angular speed , the longitude of ascending node , the eccentricity , and the longitude of periastron .there are two additional overall fitting parameters , the velocity offset and the jitter .there are parameters for a -companion model for hip 88048 and parameters for gliese 581 .we use standard prior distributions as described in and . for amplitude , we have where we set . for gliese 581 ,we use and . for hip 88048, we use in order to include most substellar companions . and we choose three different lower bounds on for hip 88048 : , and . for angular speed , we have where we set and . for gliese 581 , we use . for hip 88048, we use . for eccentricity ,we use a beta distribution with one of the hyper - parameters 1 and the other one 5 , so that the distribution has more weight at around 0 , and also goes all the way to 1 .the prior for is for both and , we simply use uniform distribution between and as their priors . for the velocity offset , we use uniform distribution between and as its prior . for jitter , we have where we choose , , and .we also include a factor in the prior forbidding the orbits of the companions from crossing each other .we require that the radius of a companion in an inner orbit at apoastron is smaller than the radius of a companion in an outer orbit at periastron .so for -companion model , the overall prior is \ , , \ ] ] where is an indicator function , if orbits not crossed is true for , is 1 ; otherwise , 0 .we have a factorial in the formula , because we require the periods of the companions to be in a monotonic order , for the sake of sampling .the coefficient is an overall normalization . to estimate the bayesian evidence , which is the purpose of this paper , we need these normalization factors to be correct .we have confirmed that our priors are correctly normalized by using a likelihood and the geometric - path monte carlo described in this paper for integration .the likelihood function for rv data from a single source is \ ; , \label{eq : mpl}\ ] ] where are the data , and is the number of rv measurements . the model radial velocity is given by , \ , , \label{eq : vt}\ ] ] where the true anomaly is a function of .it is found by solving first for the mean anomoly in then solving we have omitted the companion indexes in eqn .( [ eq : true - anomaly ] ) and eqn .( [ eq : mean - anomaly ] ) .if there are multiple data sources , there will be corresponding number of and .we fit the rv data from hip 88048 with 1 , 2 , 3 and 4-companion model .the fits and residuals from the four models are shown in fig .( [ fig:282-all - fits ] ) .the 2-companion model gives a better fit than 1-companion .but the 2 and 3-companion fits are nearly indistinguishable .the histograms of some of the parameters are shown in fig .( [ fig:282 - 2-hist ] ) , fig .( [ fig:282 - 3-hist ] ) , and fig .( [ fig:282 - 4-hist ] ) . for 3 and 4-companionmodel , the histograms of the two large companions ( large ) are very similar to the histograms from the 2-companion model . 
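the likelihood evaluations behind these fits reduce to a kepler solve plus a gaussian residual term with the jitter added in quadrature , which we believe is the standard form that eqs . ( [ eq : mpl])–([eq : mean - anomaly ] ) take . the sketch below is ours , not the authors ' code , and uses a plain newton iteration for the eccentric anomaly :

```python
import numpy as np

def eccentric_anomaly(M, e, tol=1e-12, itmax=100):
    """Solve Kepler's equation M = E - e sin(E) by Newton iteration."""
    E = np.array(M, dtype=float)          # E = M is a good starting point
    for _ in range(itmax):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def radial_velocity(t, v0, K, n, phi0, e, w):
    """One-companion model: v(t) = v0 + K [cos(nu + w) + e cos(w)],
    with mean angular speed n = 2*pi/P and phase offset phi0."""
    M = np.mod(n * np.asarray(t) + phi0, 2.0 * np.pi)
    E = eccentric_anomaly(M, e)
    nu = 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(E / 2.0),
                          np.sqrt(1.0 - e) * np.cos(E / 2.0))
    return v0 + K * (np.cos(nu + w) + e * np.cos(w))

def log_likelihood(v_obs, v_mod, sigma, jitter):
    """Gaussian log-likelihood with the jitter in quadrature."""
    var = np.asarray(sigma) ** 2 + jitter ** 2
    r = np.asarray(v_obs) - v_mod
    return -0.5 * np.sum(r * r / var + np.log(2.0 * np.pi * var))
```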
note that the periods of the small companions ( small ) in fig .( [ fig:282 - 3-hist ] ) and fig .( [ fig:282 - 4-hist ] ) are all badly constrained .but they do show some significant peaks .the histograms of small companions eccentricities in fig .( [ fig:282 - 3-hist ] ) and fig .( [ fig:282 - 4-hist ] ) are very similar to the prior for eccentricity given in eqn .( [ eq : prior - e ] ) .the fully marginalized likelihoods of the 4 models given different are shown in tab .( [ tab:282zk ] ) .the logarithm of the fml of the 2 , 3 and 4-companion models given different are shown in fig .( [ fig:282lnzk ] ) .we set and use samples to determine each for -companion model .the error bars are obtained via eqn .( [ eq : z - var - est ] ) .let represent the fully marginalized likelihood of -companion model given some prior distribution .we verified the error bar for given using independent evaluations .this gave , which is consistent with the in tab .( [ tab:282zk ] ) .all these results have the same unit , where is the number of observations .the units are omitted to make the presentation clean .we fit the combination of harps and hires data from gliese 581 with 3 , 4 , 5 and 6-planet model .the histograms of the posterior of some of the parameters in the 6-planet model are shown in fig .( [ fig : g581 - 6 - 2ls ] ) .the fml of these four models are we set and use samples to determine each for -planet model , but we allow the number of samples to increase until we have a valid estimation of the auto - correlation time. our estimated bayes factor between between 5-planet and 6-planet model is far larger than reported in .we can not confidently decide between 5 and 6-planet models solely based on eqn .( [ eq : g581z ] ) . for any bayesian approach ,the outcomes depend more or less on the choice of prior distributions .this is especially true in the case of the fml .the results in tab .( [ tab:282zk ] ) shows that the 2-companion model has the largest fml for all three .but for different , the bayes factors of the 2-companion model against other models are different .when , is only a little more than twice .but when , is more than 10 times , and when , is more than 1000 times .so we can not rule out or confirm the existence of a 3rd companion in the system , but we can confidently rule out a 3rd companion which has a radial velocity contribution larger than . as for gliese 581 , eqn .( [ eq : g581z ] ) shows that the bayes factor between 5-planet and 6-planet model is about .we can then use this number and estimate the bayes factor between these two models for priors with different .since the maximum radial velocity from gliese 581 in either harps or hires data set is less than , the likelihood , eqn .( [ eq : mpl ] ) , will be almost zero if the amplitude of any planet is larger than .so by changing to any value larger than , the only thing that will affect the fml is the normalization term in eqn .( [ eq : prior - k ] ) . for example , if we had chosen , the bayes factor between 5-planet and 6-planet model would become . 
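this prior - rescaling argument is itself a one - liner . assuming the amplitude prior of eqn . ( [ eq : prior - k ] ) is $p(k) \propto 1/(k + k_0)$ on $[ 0 , k_{\max} ]$ ( the value of $k_0$ below is a placeholder , since the constant is not recoverable from the text ) , and that the likelihood vanishes above both cutoffs , only the per - companion normalization changes :

```python
import numpy as np

def log_k_norm(k_max, k0=1.0):
    """ln of the normalization constant of p(K) ~ 1/(K + k0) on [0, k_max]."""
    return -np.log(np.log(1.0 + k_max / k0))

def rescale_log_bayes(log_b12, n1, n2, kmax_old, kmax_new, k0=1.0):
    """Move ln B = ln(Z1/Z2) from one amplitude cutoff to another;
    n1, n2 are the numbers of companions in the two models."""
    return log_b12 + (n1 - n2) * (log_k_norm(kmax_new, k0)
                                  - log_k_norm(kmax_old, k0))
```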
on the other hand , if , the bayes factor between the two models would become .which model is favored by the fml is very sensitive to the choice of prior .in this paper , we implement geometric - path monte carlo to evaluate the fully marginalized likelihood of various models .we find that the estimator of the fml in geometric - path monte carlo is nearly unbiased , and the estimation of an uncertainty or error bar is straightforward and reasonable .we apply the algorithm to a rosenbrock trial problem , our estimates of the fully marginalized likelihood match the value from direct quadrature and the error bar match the results from repeated runs .we are also able to obtain the fml of multi - companion models for radial velocity data from hip 88048 and gliese 581 given various prior distributions .the geometric - path monte carlo algorithm is a very fast method considering the challenging nature of the fully marginalized likelihood . for hip 88048 ,the computation time for and is within hours .the computation of can be finished within a day , and a little more than a day .all the above computations were done using a present - day workstation machine with a single core .there are two important tuning parameters in geometric - path monte carlo , and . currently , our choice of them is somewhat arbitrary. it may be possible to find optimal and so that it takes the minimum computation time to achieve a desired uncertainty .but this is not trivial . for example , having a smaller and keeping unchanged does not guarantee a smaller uncertainty , because it may take more steps which increases the uncertainty and more steps also mean more burn - in . in practice , one may need to do a few trial runs to find a reasonable and .the performance of the geometric - path monte carlo is constrained by the performance of the mcmc sampler one uses . when sampling difficult , it may be very challenging to get a good estimation of the auto - correlation time .in such cases , decreasing step size may improve the performance .currently we do not have a good understanding of the dependence of on .from what we have observed , of the chain of is generally larger than that of the chain of , and sometimes by a large amount .because is unknown and to be obtained from the procedure , we need to make sure that we have a valid estimation of of for a wide range of .our results for gliese 581 are different from previous ones. this could be due to several reasons .first of all , we have different priors . from section ( [ sec : sp ] ), we can see that the fml is very sensitive to choice of priors .second , the fml results in are evaluated using only harps data , while we use the combination of both harps and hires data .third , there could be sampling issues with either of our evaluations that we are not aware of . for our method, potential sampling problems usually reveal themselves when can not be confidently estimated .but we have reasonable estimations of for all our evaluations. 
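for completeness , here is a sketch of the self - consistent window estimate of the autocorrelation time in the spirit of the emcee package ( sokal 's criterion ; the constant c = 5 and the fft padding are conventional choices , not necessarily those used for the evaluations above ) :

```python
import numpy as np

def integrated_autocorr_time(x, c=5.0):
    """tau estimated with a self-consistent window: the window W is the
    smallest lag with W >= c * tau(W)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    f = np.fft.rfft(x, 2 * n)                 # zero-pad to kill wrap-around
    rho = np.fft.irfft(f * np.conjugate(f))[:n]
    rho /= rho[0]
    tau = 2.0 * np.cumsum(rho) - 1.0          # tau(W) = 1 + 2 sum_{t<=W} rho_t
    ok = np.arange(n) >= c * tau
    return tau[np.argmax(ok)] if ok.any() else tau[-1]
```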
a detailed look at the computations for hip 88048 makes it clear that there may be drawbacks to the bayesian fml approach to estimating the number of planets or companions . fig . ( [ fig:282 - 3-hist - zi ] ) shows a histogram of the orbital angular speed of a possible third planet / companion . this shows a very strong signal with an approximately 54-day period . fig . ( [ fig:282 - 3-hist - zi ] ) also shows that there is considerable probability in this narrow peak . we have conducted computational experiments with fake data that contains signals only from the two `` confirmed '' companions and additive gaussian independent `` measurement '' error . fig . ( [ fig:282 - 3-hist - zi - f ] ) shows surprisingly narrow peaks in the posterior period for a non - existent third companion , but these peaks contain very little probability . it is possible that a frequentist , hypothesis - testing approach would find the 54-day period statistically significant even though the bayesian approach , with the priors we used , gives the 3-companion model less evidence than the 2-companion model .
we thank brendon brewer ( university of auckland ) , daniel foreman - mackey ( nyu ) , and ross fadely ( nyu ) for valuable discussions . we want to particularly thank andreas quirrenbach ( university of heidelberg ) and christian schwab ( yale ) for generously sharing data with us . we would also like to thank ewan cameron ( oxford university ) for helpful comments . partial support for fh and dwh was provided by nasa grant nnx12ai50 g and nsf grant iis-1124794 . partial support for jg was provided by the doe office of advanced scientific computing under grant de - fg02 - 88er25053 .
[ fig . [ fig:282-all - fits ] : fits and residuals of the 1- , 2- , 3- and 4-companion models ( upper left , upper right , lower left , lower right ) ; 100 fits drawn from the posterior are plotted for each model , and the residual panels show the optimum fit . ]
[ figure caption : the histogram of the amplitude of the 1st companion indicates that a small object is favored ; its period is poorly constrained , with many peaks ; its eccentricity posterior is very close to the prior of eqn . ( [ eq : prior - e ] ) . ]
[ figure caption : the histograms of the amplitudes of the 1st and 2nd companions indicate that small objects are favored ; their periods are poorly constrained ; their eccentricity posteriors are very similar to the prior of eqn . ( [ eq : prior - e ] ) . ]
[ fig . [ fig:282 - 3-hist - zi ] : histogram of the period of the possible third companion in the 3-companion model for the real rv data , on linear and log scales ; the top - right panel details the highest peak at around a 54-day period , and the cumulative - probability panel shows that this peak carries a considerable amount of probability . ]
[ fig . [ fig:282 - 3-hist - zi - f ] : the same histogram for fake rv data ; the highest peak , at around an 8-day period , carries very little probability . the fake data contain two companions with parameters taken from the optimum 2-companion fit to the hip 88048 data , plus independent gaussian noise with standard deviation equal to that of the real rv residuals of that fit . ]
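the peak - mass diagnostic used in the two preceding figures amounts to counting equally weighted posterior samples inside the peak ; the naive bootstrap error bar attached below ignores sample autocorrelation , so treat it as a lower bound ( all names ours ) :

```python
import numpy as np

def peak_mass(periods, p_lo, p_hi):
    """Posterior probability carried by the period peak [p_lo, p_hi]."""
    periods = np.asarray(periods)
    return np.mean((periods >= p_lo) & (periods <= p_hi))

def peak_mass_err(periods, p_lo, p_hi, n_boot=1000, seed=0):
    """Bootstrap standard deviation of the peak mass estimate."""
    rng = np.random.default_rng(seed)
    draws = [peak_mass(rng.choice(periods, len(periods)), p_lo, p_hi)
             for _ in range(n_boot)]
    return float(np.std(draws))
```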
|
the fully marginalized likelihood , or bayesian evidence , is of great importance in probabilistic data analysis , because it is involved in calculating the posterior probability of a model or re - weighting a mixture of models conditioned on data . it is , however , extremely challenging to compute . this paper presents a geometric - path monte carlo method , inspired by multi - canonical monte carlo , to evaluate the fully marginalized likelihood . we show that the algorithm is fast and easy to implement and produces a justified uncertainty estimate of the fully marginalized likelihood . the algorithm performs efficiently on a trial problem and on multi - companion model fitting for radial velocity data . for the trial problem , the algorithm returns the correct fully marginalized likelihood , and the estimated uncertainty is also consistent with the standard deviation of results from multiple runs . we apply the algorithm to the problem of fitting radial velocity data from hip 88048 ( oph ) and gliese 581 . we evaluate the fully marginalized likelihood of 1 , 2 , 3 , and 4-companion models given data from hip 88048 and various choices of prior distributions . we consider prior distributions with three different minimum radial velocity amplitudes . under all three priors , the 2-companion model has the largest marginalized likelihood , but the detailed values depend strongly on the chosen minimum amplitude . we also evaluate the fully marginalized likelihood of 3 , 4 , 5 , and 6-planet models given data from gliese 581 and find that the fully marginalized likelihood of the 5-planet model is too close to that of the 6-planet model for us to confidently decide between them .
|
models in complex networks science aim to reproduce some common empirical statistical features observed across many different real systems , from the internet to society .many of those models are able to recreate prominent recurrent attributes , such as the small - world property and scale - free degree distributions with characteristic exponents between and as measured for networks in the real world .other characteristics , such as the presence , the shape , and the intensity of correlations , are also unavoidable in models intending to help us to understand how these complex systems self - organize and evolve . the first reference to correlations in networks appearing in the literature is the clustering coefficient , which refers correlations among three vertices .the clustering is a measure of transitivity which quantifies the likelihood that two neighbors of a vertex are neighbors themselves .then , it is a measure of the number of triangles present in a graph .in addition to the empirical evidence that the vast majority of real networks display a high density of triangles , the concept of clustering is also relevant due to the fact that triangles are together with edges the most common building blocks taking part in more complex but elementary recurring subgraphs , the so - called motifs .it has been argued that networks large - scale topological organization is closely related to their local motifs structure so that these subgraphs could be related to the functionality of the network and can be fundamental in determining its community structure .all these mean that a correct quantification and modeling of the clustering properties of networks is a matter of great importance .however , most modeling efforts beyond the degree distribution have focused in the reproduction of two point correlations patterns , typified by the average nearest neighbors degree , so that clustering is just obtained as a byproduct . in most synthetic networks, it vanishes in the thermodynamic limit , but , as to many other respects , scale - free networks with divergent second moment stand as a special case .the decay of their clustering with the increase of the network size is so slow that relatively large networks with an appreciable high cohesiveness can be obtained .nevertheless , it remains to be an indirect effect and no control over its intensity or shape is practicable .therefore , an independent modeling of clustering is required and a few growing linear preferential attachment mechanism have been suggested . 
one of the proposed models reproduces a large clustering coefficient by adding nodes which connect to the two extremities of a randomly chosen network edge , thus forming a triangle .the resulting network has the power - law degree distribution of the barabsi - albert model , with , and since each new vertex induces the creation of at least one triangle , the model generate networks with finite clustering coefficient .a generalization on this model which allows to tune the average degree to , with an even integer , considers new nodes connected to the ends of randomly selected edges .two vertices and three vertices correlations can be calculated analytically through a rate equation formalism .the clustering spectrum is here finite in the infinite size limit and scales as .those models do not allow much freedom in the form of the resulting clustering coefficient , neither in the ensuing degree distribution , so that , although a valuable first approach , they constitute a timid attempt as clustering generators . in this paper, we make headway by introducing a generator of random networks where both the degree - dependent clustering coefficient and the degree distribution are tunable . after a brief review of several clustering measures in sectionii , the algorithm is presented in section iii . in section iv, we check the validity of the algorithm using numerical simulations .section v is devoted to the theoretical explanation of the constraints that degree - degree correlations impose in the clustering .we find that assortativity allows higher levels of clustering , whereas disassortativity imposes tighter bounds . as a particular case, we analyze this effect for the class of scale - free networks .we end the section by examining some empirical networks , finding a good agreement with our calculations .finally , conclusions are drawn in section vi .several alternative definitions have been proposed over time to quantify clustering in networks .the simplest measure is defined as this scalar quantity does not give much information about local properties of different vertices because it just counts the overall number of triangles regardless of how these triangles are placed among the different vertices of the network . the clustering coefficient , first introduced by watts and strogatz ,provides instead local information and is calculated as where is the number of triangles passing through vertex and is its degree .the average of the local clustering coefficients over the set of vertices of the network , , is usually known in the literature as the clustering coefficient .watts and strogatz were also the first in pointing out that real networks display a level of clustering typically much larger than in a classical random network of comparable size , , with the average degree and the number of nodes in the network . 
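both the local coefficient and its degree - class average $\bar{c}(k)$ — the quantity our generator takes as input — are a few lines to measure . a sketch for an undirected graph stored as an adjacency dictionary ( function and variable names ours ) :

```python
import numpy as np
from collections import defaultdict

def clustering_spectrum(adj):
    """adj: dict mapping vertex -> set of neighbours (symmetric).
    Returns local c_i = 2 T_i / (k_i (k_i - 1)) and the average over
    each degree class, c(k)."""
    local = {}
    for i, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        # T_i: edges among the neighbours of i, each counted once
        t = sum(1 for a in nbrs for b in adj[a] if b in nbrs and a < b)
        local[i] = 2.0 * t / (k * (k - 1))
    spectrum = defaultdict(list)
    for i, c in local.items():
        spectrum[len(adj[i])].append(c)
    return local, {k: float(np.mean(v)) for k, v in sorted(spectrum.items())}
```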
although and are sometimes taken as equivalent , they may be very different , even though both measures are defined in the interval ] , generating more assortative networks as approaches .to check the feasibility and reliability of the algorithm , we have performed extensive numerical simulations , generating networks with different types of degree distributions and different levels of clustering .the chosen forms for the degree distribuion are poisson , exponential , and scale - free .the degree dependent clustering coefficient is chosen to be .the numerical prefactor is set to and the exponent takes values , , and .the size of the generated networks is and each curve is an average over three different realizations .simulation results are shown in figs .[ fig6 ] , [ fig7 ] , and [ fig8 ] , which correspond to poisson and exponential degree distribution with average degree , and scale - free degree distributions with exponent , respectively .as it can be seen , the degree dependent clustering coefficient is well reproduced in all cases just by decreasing the value of if necessary ( the values of used in each simulation are specified in the caption of the corresponding figure ) . the standard procedurewe follow is to start with and to check whether the tail of is well reproduced .if not , we decrease its value until the entire curve fits the expected shape .[ p_k ] shows the degree distributions generated by the algorithm for the simulations of the previous figures , confirming that , indeed , the generated degree distributions match the expected ones .as we advanced in the previous section , degree - degree correlations constraint the maximum level of clustering a network can reach .a naive explanation for this is that , if the neighbors of a given node have all of them a small degree , the number of connected neighbors ( and hence , the clustering of such node ) will be bounded .this is the main idea behind the new measure of clustering introduced in .however , we can make a step forward and quantify analytically this effect .to do so , we need to define new quantities which take into account the properties of vertices that belong to the same triangle .let us define the multiplicity of an edge , , as the number of triangles in which the edge connecting vertices and participates .this quantity is the analog to the number of triangles attached to a vertex , .these two quantities are related through the trivial identity which is valid for any network configuration .the matrix is the adjacency matrix , giving the value if there is an edge between vertices and and otherwise .it is possible to find a relation between multiplicity , degree distributions and clustering .summing the above equation for all vertices of a given degree class we get now , there are some key relations which can be used where is the average multiplicity of the edges connecting the classes and , and is the number of edges between those degree classes . finally , taking into account eq .( [ c(k ) ] ) and the fact that the joint degree distribution satisfies , we obtain the following closure condition for the network let us emphasize that this equation is , in fact , an identity fulfilled by any network and , thus , it is , for instance , at the same level as the degree detailed balance condition derived in .these identities are important because , given their universal nature , they can be used to derive properties of networks regardless their specific details . 
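the closure condition is straightforward to verify numerically on any graph , which makes a useful sanity check on a generated network : with a 0/1 symmetric adjacency matrix , the multiplicity of an existing edge is the number of common neighbours of its endpoints , $m_{ij} = (a^2)_{ij}$ , and summing over the neighbours of $i$ gives $\sum_j m_{ij} a_{ij} = (a^3)_{ii} = 2 t_i$ . a minimal sketch :

```python
import numpy as np

def edge_multiplicity(A):
    """m_ij: number of triangles the edge (i, j) takes part in."""
    A = np.asarray(A)
    return (A @ A) * A            # common neighbours, masked to real edges

def closure_holds(A):
    """Check sum_j m_ij A_ij = 2 T_i: row sums of m equal diag(A^3)."""
    A = np.asarray(A)
    m = edge_multiplicity(A)
    return np.allclose(m.sum(axis=1), np.diag(A @ A @ A))
```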
as an example , in we used the detailed balance condition to prove the divergence of the maximum eigenvalue of the connectivity matrix that rules the epidemic spreading in scale - free networks , which , in turn , implies the absence of epidemic threshold in this type of networks .the multiplicity matrix is , _ per se _ , a very interesting object that gives a more detailed description on how triangles are shared by vertices of different degrees . in principle , does not factorize and , therefore , non trivial correlations can be found . the global average multiplicity of the network , , can be computed as values of close to zero mean that there are no triangles .when , triangles are mostly disjoint and their number can be approximated as , and , when , triangles jam into the edges , that is , many triangles share common edges . we are now equipped with the necessary tools to analyze the interplay between degree - degree correlations and clustering .the key point is to realize that the multiplicity matrix satisfies the inequality which comes from the fact that the degrees of the nodes at the ends of an edge determine the maximum number of triangles this edge can hold . multiplying this inequality by and summing over get where we have used the identity eq .( [ db ] ) .this inequality , in turn , can be rewritten as notice that is always in the interval ] and its domain extends beyond values that scale as , disassortative correlations are unavoidable for high degrees .almost all real scale - free networks fulfill these conditions and , hence , it is important to analyze how these negative correlations constraint the behavior of the clustering coefficient .let us assume a power law decay of the average nearest neighbors degree of the form .one can prove that this function diverges in the limit of very large networks as , where is the maximum degree of the network .then , the prefactor must scale in the same way which , in turn , implies that the reduced average nearest neighbors degree behaves as then , from eq .( [ inequality_approx ] ) the exponent of the degree dependent clustering coefficient , , must verify the following inequality just as an example , in the case of the internet at the autonomous system level , the reported values for these three exponents ( , , and ) satisfy this inequality close to the limit ( ) .the interplay between degree correlations and clustering can also be observed in real networks .we have measured the functions and for several empirical data sets , finding that the inequality eq .( [ inequality ] ) is always satisfied .the analyzed networks are the internet at the autonomous system level ( as ) , the protein interaction network of the yeast _ s. cerevisiae _ ( pin ) , an intra - university e - mail network , the web of trust of pgp , the network of coauthorships among academics , and the world trade web ( wtw ) of trade relationships among countries . in fig .[ inequality2 ] we plot the clustering coefficient as a function of .each dot in these figures correspond to a different degree class .as clearly seen , in all cases the empirical measures lie below the diagonal line , which indicates that the inequality eq .( [ inequality ] ) is always preserved . 
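the same machinery gives an empirical test of the bound for any network : compute $\bar{c}(k)$ and the average nearest - neighbours degree and check the inequality class by class . in the sketch below , `cbar` can come from the `clustering_spectrum` helper above , and the explicit form of the bound , $\bar{c}(k) \le (\bar{k}_{nn}(k) - 1)/(k - 1)$ , is our reading of eq . ( [ inequality ] ) :

```python
import numpy as np
from collections import defaultdict

def knn_spectrum(adj):
    """Average nearest-neighbours degree per degree class, k_nn(k)."""
    acc = defaultdict(list)
    for i, nbrs in adj.items():
        if nbrs:
            acc[len(nbrs)].append(np.mean([len(adj[j]) for j in nbrs]))
    return {k: float(np.mean(v)) for k, v in acc.items()}

def bound_respected(adj, cbar, eps=1e-12):
    """True if c(k) <= (k_nn(k) - 1)/(k - 1) for every class with k >= 2."""
    knn = knn_spectrum(adj)
    return all(cbar[k] <= (knn[k] - 1.0) / (k - 1.0) + eps
               for k in cbar if k >= 2)
```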
in fig . [ constraint_realnets ] we show the ratio . the rate of variation of this fraction is small and , thus , the degree dependent clustering coefficient can be computed as , where is a slowly varying function of that , in many cases , can be fitted by a logarithmic function . we have introduced and tested a new algorithm that generates _ ad hoc _ clustered networks with given degree distribution and degree dependent clustering coefficient . this algorithm will be useful for analyzing , in a controlled way , the role that clustering plays in many dynamical processes that take place on top of networks . we have also introduced a new formalism which backs our algorithm and allows us to quantify clustering in a more rigorous manner . in particular , a universal closure condition for networks is found to relate the degree dependent clustering coefficient , degree - degree correlations , and the number of triangles passing through edges connecting vertices of different degree classes . using this relation , we have found how the correlation pattern of the network constrains the function $\bar{c}(k)$ . in particular , assortative networks are allowed to have high levels of clustering whereas disassortative ones are more limited . overall , we hope that a more accurate shaping of synthetic networks will improve our understanding of real ones . in this respect , we believe our algorithm will be useful for the community working on complex networks science . we acknowledge a. vespignani , r. pastor - satorras and a. arenas for valuable suggestions . this work has been partially supported by dges of the spanish government , fis2004 - 05923-co2 - 02 , and ec - fet open project cosin ist-2001 - 33555 . m. b. acknowledges financial support from the mcyt ( spain ) through its ramón y cajal program .
|
we present a generator of random networks where both the degree - dependent clustering coefficient and the degree distribution are tunable . following the same philosophy as in the configuration model , the degree distribution and the clustering coefficient for each class of nodes of degree $k$ are fixed _ ad hoc _ and _ a priori _ . the algorithm generates corresponding topologies by applying first a closure of triangles and secondly the classical closure of the remaining free stubs . the procedure unveils a universal relation between clustering and degree - degree correlations for all networks , in which the level of assortativity establishes an upper limit to the level of clustering . maximum assortativity ensures no restriction on the decay of the clustering coefficient , whereas disassortativity sets a stronger constraint on its behavior . correlation measures in real networks are seen to observe this structural bound .
|
the study of photometrically or astrometrically varying sources has led to many important discoveries in astronomy , e.g. stellar masses from eclipsing binaries ; distance scales from stellar parallaxes and pulsating stars ; the theory of gravity from planetary motions and the physics of accretion disks from observations of cataclysmic variables. with new wide field imagers on survey telescopes , large surveys of variable objects have become possible . over the next few yearsall sky variability surveys will start with pan - starrs ( kaiser 2007 ) and lsst ( walker 2003 ) .near infra - red technology has also improved and the uk infra - red telescope wide field camera ( ukirt - wfcam ) and the visible and infra - red survey telescope for astronomy ( vista ; emerson et al .2004 ) are the first near infrared cameras capable of undertaking large synoptic surveys . herewe discuss the design and implementation of a dynamical relational database science archive for archiving synoptic data observed by ukirt - wfcam - wfcam science archive ( wsa ; hambly et al .2008 ) - and by vista - vista science archive ( vsa ) .the basic design of the wsa and vsa is described in detail in hambly et al .this work is an extension to incorporate new tables related to using multi - epoch data and is an improvement to our initial design ( cross et al .figure [ fig : synopticerm ] shows the new tables and their relationship to existing tables in two entity - relation models ( erms ) .the left - hand erm is for single or uncorrelated multi - wavelength observations and the right - hand side is for correlated multi - wavelength observations .there are five new tables in the new schema .two of them only appear in the correlated multi - epoch erm and the other three appear in both erms .the new tables are : * the * synopticmergelog * table links the different bandpass frames that are taken within a short time of each other in a correlated multi - epoch survey .it is much like the * mergelog * , but has the additional primary key attribute of _ meanmjdobs_. * the * synopticsource * table is the merged source catalogue created by merging the detection catalogues from the frames in * synopticmergelog*. this has similar attributes to the * source * table , but with a primary key similar to the * detection * table .this table makes it easier to get colour information at each epoch and makes the * bestmatch * table smaller . * the * bestmatch * table links the * source * table - the unique list of merged bandpass sources from deep stacks - to each observation : either in * synopticsource * for a correlated bandpass data set or in * detection * for an uncorrelated or single passband data set . *the * variability * table contains the statistical analysis of the multi - epoch data for each source in the * bestmatch * table .this includes both astrometric and photometric analysis and classifications . *the * varframesetinfo * table contains the fits for data across a whole frameset ( as defined in * mergelog * ) .this is useful for understanding the limits of the data in each frame set .the model for the astrometric fit is also recorded here .these tables can be used together to find and categorise objects .the * variability * table can be used to find objects which have interesting statistical properties .linking this to the * source * table and to neighbour tables of external surveys can select objects with very specific morphologies , variable properties , and colours . 
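as a concrete example of such a selection ( see also the sql examples referenced below ) , the join pattern is source → bestmatch → detection , filtered on variability attributes . the table names follow the schema above ; the column names in this python / sql sketch are illustrative guesses rather than the exact wsa attributes :

```python
import sqlite3

LIGHTCURVE_SQL = """
SELECT d.mjdObs, d.aperMag, d.aperMagErr
FROM   Source      AS s
JOIN   Variability AS v ON v.sourceID    = s.sourceID
JOIN   BestMatch   AS b ON b.sourceID    = s.sourceID
JOIN   Detection   AS d ON d.detectionID = b.detectionID
WHERE  v.probVar > 0.9          -- keep statistically variable candidates
  AND  s.sourceID = :source_id
ORDER  BY d.mjdObs
"""

def fetch_lightcurve(conn: sqlite3.Connection, source_id: int):
    """Return (epoch, magnitude, error) rows for one source."""
    return conn.execute(LIGHTCURVE_SQL, {"source_id": source_id}).fetchall()
```

variants of this pattern — swapping detection for synopticsource , or cutting on colours from source — give the very specific selections discussed next .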
only with very specific selections can a scientist hope to find useful targets for follow - up in a database of objects . the * varframesetinfo * table can help define the noise properties of each frame set in each selection . the * bestmatch * table can then be used with either the * detection * or the * synopticsource * table to display the light curve of an object . the attributes in each of these tables are described in detail in the wsa schema browser . once new data has been ingested and quality controlled , curation proceeds as described in hambly et al . ( 2008 ) and collins et al . ( 2009 ) :

* production of deep stacks and catalogues .
* deep stacks in different filters merged into the * source * table .
* intermediate stacks recalibrated against deep stacks .
* creation of * synopticsource * and * synopticmergelog * .
* neighbour tables created .
* creation of * bestmatch * .
* creation of * variability * and * varframesetinfo * .

these steps are controlled at each stage by the properties in four curation tables : * requiredstack * , * requiredfilters * , * requiredneighbours * and * requiredsynoptic * . the first three of these are discussed in collins et al . ( 2009 ) . * requiredsynoptic * specifies whether a survey has correlated passbands and the correlation timescale for the * synopticmergelog * . the new steps that have been added are the recalibration of intermediate stacks , which typically improves the zeropoints by mag , and the synoptic table creation . the recalibration is done by comparing bright stars in each intermediate stack with ones in the deep stack . frames which show large changes in zeropoint are deprecated at this stage . the matching of intermediate stacks to each unique source is the most important step . the * sourcexdetection * neighbour table is the starting point for the creation of the * bestmatch * table . the neighbour table links the source to all detections within a set radius . the * bestmatch * table links sources to the nearest match within a smaller radius . if there is no match , and the position lies within an observed frame , then a default row is inserted . flags are used to alert users to duplicate matches or non - detections close to the edge of a frame . the * bestmatch * table is a link between sources and observations , and allows users to track objects that disappear ( an eclipse , or fading after an outburst ) . the statistics in the * variability * table are based on good matches in the * bestmatch * table . the details of all these steps are in cross et al . ( 2009 ) . at each stage there are tight constraints on speed , since surveys such as the vista variables in via lactea ( vvv ) survey , a synoptic survey of the galactic plane and bulge , will have sources , each with 100 observations . we show some example data from two large data sets , the ukidss ( dye et al . 2006 ) deep extragalactic survey ( dxs ) and the ukirt standard star calibration data ( cal ) . fig . [ fig : examples ] shows an example magnitude - rms plot ( useful for determining which objects are variables ) and a set of correlated light curves for a single star . this star was selected from the wsa for its variable characteristics . these plots can be produced using simple sql queries that are described in the wsa sql cookbook ; the sketch below illustrates the underlying per - source statistic .
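a minimal sketch of the magnitude - rms statistic behind such a plot , assuming a toy light curve rather than real archive rows ; treating None as a default ( non - detection ) row is our simplification , not the archive 's actual algorithm .

```python
import math

def mag_rms(light_curve):
    """mean magnitude and rms of a list of (mjd, mag) pairs, ignoring None mags."""
    mags = [m for _, m in light_curve if m is not None]  # drop default (non-detection) rows
    mean = sum(mags) / len(mags)
    rms = math.sqrt(sum((m - mean) ** 2 for m in mags) / (len(mags) - 1))
    return mean, rms

lc = [(55197.1, 15.02), (55198.1, 15.41), (55199.2, None), (55201.3, 14.63)]
print(mag_rms(lc))  # a large rms relative to stars of similar magnitude flags a variable
```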
collins , r. s. , et al . 2009 , in asp conf . ser . xxx , adass xviii , ed . d. a. bohlender , d. durand & p. dowler ( san francisco : asp ) , [ a.09 ]
cross , n. j. g. , et al . 2009 , in preparation
cross , n. j. g. , et al . 2007 , p54
dye , s. , et al . 2006 , mnras , 372 , 1227
emerson , j. p. , & sutherland , w. j. 2004 , eso messenger , 117 , 27
hambly , n. c. , et al . 2008 , mnras , 384 , 637
kaiser , n. 2007 , `` proceedings of the advanced maui optical and space surveillance conference '' , ed . s. ryan , the maui economic development board , p9
walker , a. 2003 , mmsai , 74 , 999
|
the vista data flow system comprises the nightly pipeline processing and archiving of near infra - red data from ukirt - wfcam and vista . this includes multi - epoch data which can be used to find moving and variable objects . we have developed a new model for archiving these data which gives the user an extremely flexible and reliable data set that is easy to query through an sql interface . we have introduced several new database tables into our schema for deep / synoptic data sets . we have also developed a set of curation procedures , which give additional quality control and automation . we discuss the methods used and show some example data . our design is particularly effective on correlated data sets , where the observations in different filters are synchronised . it is scalable to the large vista data sets which will be observed in the next few years and to future surveys such as pan - starrs and lsst .
|
the lower bound technique is a useful tool in the ergodic theory of markov processes . it has been used by doeblin ( see ) to show mixing of a markov chain whose transition probabilities possess a uniform lower bound . a somewhat different approach , relying on the analysis of the operator dual to the transition probability , has been applied by lasota and yorke ; see , for instance , . for example , in , they show that the existence of a lower bound for the iterates of the frobenius - perron operator ( which corresponds to a piecewise monotonic transformation on the unit interval ) implies the existence of a stationary distribution for the deterministic markov chain describing the iterates of the transformation . in fact , the invariant measure is then unique in the class of measures that are absolutely continuous with respect to one - dimensional lebesgue measure . it is also statistically stable , that is , the law of the chain , starting from any initial distribution that is absolutely continuous , converges to the invariant measure in the total variation metric . this technique has been extended to more general markov chains , including those which correspond to iterated function systems ; see , for example , . however , most of the existing results are formulated for markov chains taking values in finite - dimensional spaces ; see , for example , for a review of the topic . generally speaking , the lower bound technique which we have in mind involves deriving ergodic properties of the markov process from the fact that there exists a `` small '' set in the state space , for instance a compact one , such that the time averages of the mass of the process are concentrated over that set for all sufficiently large times . if this set is compact , then one can deduce the existence of an invariant probability measure without much difficulty . the question of extending the lower bound technique to markov processes taking values in polish spaces that are not locally compact is quite a delicate matter . this situation typically occurs for processes that are solutions of stochastic partial differential equations ( spdes ) . the value of the process is then usually an element of an infinite - dimensional hilbert or banach space . we stress here that to prove the existence of a stationary measure , it is not enough to ensure a lower bound on the transition probability over some `` thin '' set . one can show ( see the counterexample provided in ) that even if the mass of the process contained in any neighborhood of a given point is separated from zero for all times , an invariant measure may fail to exist . in fact , some general results concerning the existence of an invariant measure and its statistical stability for a discrete - time markov chain have been formulated in ; see theorems 3.1 - 3.3 .
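for orientation , the classical doeblin condition referred to above is a uniform minorization of the transition probability ; the display below is the standard textbook form ( the actual formulas are stripped from this copy , so the symbols $\varepsilon$ , $\nu$ and $p$ are our own notation ) .

```latex
% doeblin's condition: one transition kernel dominates a fixed measure, uniformly in x
\exists\, \varepsilon > 0 \ \text{and a probability measure } \nu \ \text{such that}\quad
p(x, a) \,\ge\, \varepsilon\, \nu(a)
\qquad \text{for all states } x \ \text{and all borel sets } a .
```

under such a bound the chain forgets its initial condition at a geometric rate in total variation , which is the mixing statement attributed to doeblin above .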
in the present paper , we are concerned with the question of finding a criterion for the existence of a unique , invariant , ergodic probability measure for a continuous - time feller markov process taking values in a polish space ; see theorems [ mtheorem ] and [ theorem ] below . suppose that is its transition probability semigroup . in our first result ( see theorem [ mtheorem ] ) , we show that there exists a unique , invariant probability measure for the process , provided that for any lipschitz , bounded function , the family of functions is uniformly continuous at any point of ( we call this the _ e - property _ of the semigroup ) and there exists such that for any , . here , denotes the ball in centered at with radius . observe that , in contrast to the doeblin condition , we do not require that the lower bound in ( [ intro-1 ] ) is uniform in the state variable . if some conditions on uniformity over bounded sets are added [ see ( [ th1 ] ) and ( [ th2 ] ) below ] , then one can also deduce the stability of the ergodic averages corresponding to ; see theorem [ theorem ] . we call this , after , _ weak-* mean ergodicity _ . this general result is applied to solutions of stochastic evolution equations in hilbert spaces . in theorem [ tgeneral ] , we show the uniqueness and ergodicity of an invariant measure , provided that the transition semigroup has the e - property and the ( deterministic ) semi - dynamical system corresponding to the equation without the noise has an attractor which admits a unique invariant measure . this is a natural generalization of the results known for so - called _ dissipative systems _ ; see , for example , . a different approach to proving the uniqueness of an invariant measure for a stochastic evolution equation is based on the strong feller property of the transition semigroup ( see and ) or , in a more refined form , on _ the asymptotic strong feller property _ ( see ) . in our theorem [ tgeneral ] , we do not require either of these properties of the corresponding semigroup . roughly speaking , we assume : ( 1 ) the existence of a global compact attractor for the system without the noise [ hypothesis ( i ) ] ; ( 2 ) the existence of a lyapunov function [ hypothesis ( ii ) ] ; ( 3 ) some form of stochastic stability of the system after the noise is added [ hypothesis ( iii ) ] ; ( 4 ) the e - property ( see section [ sec2 ] ) . this allows us to show lower bounds for the transition probabilities and then use theorems [ mtheorem ] and [ theorem ] . as an application of theorem [ tgeneral ] , we consider , in sections [ observable - section ] and [ sec5 ] , the lagrangian observation process corresponding to the passive tracer model , where is a time - space stationary random , gaussian and markovian velocity field . one can show that when the field is sufficiently regular [ see ( [ h1 ] ) ] , the process is a solution of a certain evolution equation in a hilbert space ; see ( [ zet ] ) below . with the help of the technique developed by hairer and mattingly ( see also and ) , we verify the assumptions of theorem [ tgeneral ] when is periodic in the variable and satisfies a mixing hypothesis in the temporal variable ; see ( [ h2 ] ) . the latter reflects , physically , quite a natural assumption that the mixing time for the velocity field gets shorter on smaller spatial scales .
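since the displayed mathematics has been stripped from this copy , the two assumptions of this first result are restated below in the standard form used in the ergodic - theory literature ; the notation ( $p_t$ for the semigroup , $q^t$ for its time averages , $b(z,\varepsilon)$ for a ball ) is our reconstruction , not a quotation of the original formulas .

```latex
% e-property: equicontinuity, uniformly in t, of the maps x -> p_t\psi(x),
% for every bounded lipschitz function \psi and every point x of the state space:
\lim_{y \to x} \, \sup_{t \ge 0} \, \bigl| p_t \psi(y) - p_t \psi(x) \bigr| = 0 .

% lower bound for the time-averaged transition probabilities
% q^t(x, \cdot) = \frac{1}{t} \int_0^t p_s(x, \cdot)\, ds :
\exists\, z :\qquad
\liminf_{t \to \infty} q^t\bigl( x , b(z,\varepsilon) \bigr) > 0
\qquad \text{for every } \varepsilon > 0 \text{ and every state } x .
```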
as a consequence of the statistical stability property of the ergodic invariant measure for the lagrangian velocity , we obtain the weak law of large numbers for the passive tracer model in a compressible environment ; see theorem [ ttracer ] . it generalizes the corresponding result that holds in the incompressible case , which can be easily deduced due to the fact that the invariant measure is known explicitly in that situation ; see . let be a polish metric space . let be the space of all borel subsets of and let [ resp . ] be the banach space of all bounded , measurable ( resp . , continuous ) functions on , equipped with the supremum norm . we denote by the space of all bounded lipschitz continuous functions on . denote by the smallest lipschitz constant of . let be the transition semigroup of a markov family taking values in . throughout this paper , we shall assume that the semigroup is _ feller _ , that is , . we shall also assume that the markov family is stochastically continuous , which implies that for all and . [ def3.1 ] we say that a transition semigroup has the _ e - property _ if the family of functions is equicontinuous at every point of for any bounded and lipschitz continuous function , that is , if such that . the process is then called an _ e - process _ . an e - process is an extension to continuous time of the notion of an e - chain introduced in section 6.4 of . given , we denote by the space of all borel probability measures on . for brevity , we write instead of . let be the dual semigroup defined on by the formula for . recall that is _ invariant _ for the semigroup [ or the markov family ] if for all . for a given and , define . we write in the particular case when . we denote by the ball in with center at and radius , and by `` - '' the limit in the sense of weak convergence of measures . the proof of the following result is given in section [ sec2.3 ] . [ mtheorem ] assume that has the e - property and that there exists such that for every and , . the semigroup then admits a unique , invariant probability measure . moreover , for any that is supported in . we remark here that the set may not be the entire space . this issue is investigated more closely in . among other results , it is shown there that if the semigroup satisfies the assumptions of theorem [ mtheorem ] , then the set is closed . below , we present an elementary example of a semigroup satisfying the assumptions of the above theorem , for which . let $\cdots \cup [ 1 , + \infty )$ be the state space and let be the transition function defined by the formula . define the markov operator corresponding to , that is , . finally , let be the semigroup given by the formula . it is obvious that the semigroup is feller . we check that satisfies the assumptions of theorem [ mtheorem ] and that . here , we interpret as equal to . after straightforward calculations , we obtain that for , we have , and are uniformly convergent on . finally , it can be seen from ( [ p - t - e ] ) that for any and , we have , which proves that . following , page 95 , we introduce the notion of weak-* mean ergodicity . a semigroup is called _ weak-* mean ergodic _ if there exists a measure such that . in some important cases , it is easy to show that . for example , if is given by a stochastic evolution equation in a hilbert space , then it is enough to show that there exist a compactly embedded space and a locally bounded , measurable function that satisfies such that . clearly , if , then the assumptions of theorem [ mtheorem ] guarantee weak-* mean ergodicity .
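weak-* mean ergodicity is easy to observe numerically when the invariant measure is known in closed form . the sketch below ( purely illustrative , not taken from the paper ) time - averages an euler - maruyama discretization of the ornstein - uhlenbeck equation $dx = -x\,dt + dw$ , whose invariant law is $n(0,1/2)$ , and checks the ergodic average of the test function $\psi(x) = x^2$ against its invariant - measure value $1/2$ .

```python
import math
import random

random.seed(0)
dt, t_end = 1e-3, 500.0
steps = int(t_end / dt)

x, acc = 0.0, 0.0
for _ in range(steps):
    # euler-maruyama step for dX = -X dt + dW; the invariant law is N(0, 1/2)
    x += -x * dt + math.sqrt(dt) * random.gauss(0.0, 1.0)
    acc += x * x * dt  # accumulate the time integral of psi(X_s) = X_s^2

print(acc / t_end)  # ergodic average; should be close to 1/2
```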
in theorem [ theorem ] below , the weak-* mean ergodicity is deduced from a version of ( [ mt1 ] ) that holds uniformly on bounded sets . of course , ( [ add-1 ] ) implies uniqueness of the invariant measure for . moreover , for any stochastically continuous feller semigroup , its weak-* mean ergodicity also implies ergodicity of , that is , that any borel set which satisfies , -a.s . for all , must be -trivial . this can be seen from , for instance , part ( iv ) of theorem 3.2.4 of . note that condition ( [ add-1 ] ) is equivalent to _ every point of being generic _ , in the sense of , that is , . indeed , ( [ add-1 ] ) obviously implies ( [ add-2 ] ) since it suffices to take , . conversely , assuming ( [ add-2 ] ) , we can write , for any and , , and ( [ add-1 ] ) follows . the proof of the following result is given in section [ sec2.4 ] . [ theorem ] let satisfy the assumptions of theorem [ mtheorem ] . assume , also , that there exists such that for every bounded set and , we have . suppose , further , that for every and , there exists a bounded borel set such that . then , besides the existence of a unique invariant measure for , the following are true : ( 1 ) the semigroup is weak-* mean ergodic ; ( 2 ) for any and , the weak law of large numbers holds , that is , . here , is the markov process that corresponds to the given semigroup , whose initial distribution is and whose path measure is . the convergence takes place in probability . using theorems [ mtheorem ] and [ theorem ] , we establish the weak-* mean ergodicity for the family defined by the stochastic evolution equation . here , is a real , separable hilbert space , is the generator of a -semigroup acting on , maps ( not necessarily continuously ) into , is a bounded linear operator from another hilbert space to , and is a cylindrical wiener process on defined over a certain filtered probability space . let be an -measurable random variable . by a solution of ( [ e1 ] ) starting from , we mean a solution to the stochastic integral equation ( the so - called _ mild solution _ ) ( see , e.g. , ) , where the stochastic integral appearing on the right - hand side is understood in the sense of itô . we suppose that for every , there is a unique mild solution of ( [ e1 ] ) starting from and that ( [ e1 ] ) defines a markov family in that way . we assume that for any , the process is stochastically continuous . the corresponding transition semigroup is given by , , and we assume that it is feller . is called a _ lyapunov function _ if it is measurable , bounded on bounded sets and . we shall assume that the deterministic equation defines a continuous semi - dynamical system , that is , for each , there exists a unique continuous solution to ( [ 082102 ] ) , which we denote by , and for a given , the mapping is measurable . furthermore , we have for all and . [ def4.2 ] a set is called _ a global attractor _ for the semi - dynamical system if : 1 . it is invariant under the semi - dynamical system , that is , for any and ; 2 . for any , there exists such that for and . [ def4.3 ] the family , is _ stochastically stable _ if . in section [ spdes ] , from theorems [ mtheorem ] and [ theorem ] , we derive the following result concerning ergodicity of . [ tgeneral ] assume that : ( i ) the semi - dynamical system defined by ( [ 082102 ] ) has a compact , global attractor ; ( ii ) admits a lyapunov function , that is , ; ( iii ) the family , is stochastically stable and , where ; ( iv ) its transition semigroup has the e - property .
then admits a unique , invariant measure and is weak-* mean ergodic . moreover , for any bounded , lipschitz observable , the weak law of large numbers holds : . observe that condition ( [ condiii ] ) in theorem [ tgeneral ] is trivially satisfied if is a singleton . also , this condition holds if the semi - dynamical system , obtained after removing the noise , admits a global attractor that is contained in the support of the transition probability function of the solutions of ( [ e1 ] ) corresponding to a starting point at the attractor ( this situation occurs , e.g. , if the noise is nondegenerate ) . another situation when ( [ condiii ] ) can be guaranteed occurs if we assume ( [ et240 ] ) and uniqueness of an invariant probability measure for . from the stochastic stability condition ( [ et240 ] ) , it is clear that the support of such a measure is contained in any for . we do not know , however , whether there exists an example of a semi - dynamical system corresponding to ( [ 082102 ] ) with an attractor that is not a single point and such that it admits a unique invariant measure . the e - property used in theorem [ tgeneral ] can be understood as an intermediary between the strong dissipativity property of and the asymptotic strong feller property ( see ) . a trivial example of a transition probability semigroup that is neither dissipative ( in the sense of ) nor asymptotic strong feller , but satisfies the e - property , is furnished by the dynamical system on a unit circle given by , where is an irrational real number . for more examples of markov processes that have the e - property , but are neither dissipative nor have the asymptotic strong feller property , see . a careful analysis of the current proof shows that the e - property could be viewed as a consequence of a certain version of the asymptotic strong feller property concerning time averages of the transition operators . we shall investigate this point in more detail in a forthcoming paper . our last result follows from an application of the above theorem and concerns the weak law of large numbers for the passive tracer in a compressible random flow . the trajectory of a particle is then described by the solution of an ordinary differential equation , where is a -dimensional random vector field . this is a simple model used in statistical hydrodynamics that describes the transport of matter in a turbulent flow . we assume that is a mean zero , stationary , spatially periodic , gaussian and markov in time random field . its covariance matrix is given by its fourier coefficients , . here , , the _ energy spectrum _ .
this , of course , implies ( [ h2 ] ) .for the proof of the following lemma the reader is referred to ; see the argument given on pages 517 and 518 .[ szarek - lasota ] suppose that is not tight .there then exist an , a sequence of compact sets and an increasing sequence of positive integers satisfying and recall that is defined by ( [ pr1 ] ) .[ prop ] suppose that has the e - property and admits an invariant probability measure .then .let be the invariant measure in question .assume , contrary to our claim , that is not tight for some .then , according to lemma [ szarek - lasota ] , there exist a strictly increasing sequence of positive numbers , a positive number and a sequence of compact sets such that and ( [ a2 ] ) holds .we will derive the assertion from the claim that there exist sequences , and an increasing sequence of integers such that for any , and here , , with , denotes the -neighborhood of .moreover , and and , . temporarily admitting the above claim , we show how to complete the proof of the proposition .first , observe that ( [ a2 ] ) and condition ( [ a3 ] ) together imply that the series is uniformly convergent and . also , note that for such that , we have , or , for at most one .therefore , for such points , .this , in particular , implies that . from ( [ a1 ] ) and ( [ a3])([a5 ] ) , it follows that by virtue of ( [ a1 ] ) , the first term on the right - hand side of ( [ add5 ] ) is greater than or equal to . combining the second and the third terms, we obtain that their absolute value equals \nu_n({{d}}y)\,{{d}}s \biggr|\stackrel{(\fontsize{8.36}{10}\selectfont{\mbox{\ref{a5}}})}{\le}\frac{\varepsilon}{4}.\ ] ] the fourth term is less than or equal to , by virtue of ( [ a4 ] ) . summarizing , we have shown that \nu_n({{d}}y ) > \frac{\varepsilon}{2}\end{aligned}\ ] ] for every positive integer .hence , there must be a sequence such that ] would satisfy ( [ u3 ] ) , with in place of , admitted them instead of and , respectively . condition ( [ u2 ] ) need not , however , hold in such case . to remedy this , we average over a long time , using the operator corresponding to a sufficiently large , and use lemma [ lemma ] . more precisely , since [ thus , also , for any ] , by lemma [ lemma ], we can choose such that let and for , .by virtue of ( [ 073101 ] ) , we immediately see that furthermore , from ( [ u5b ] ) , positivity of and the definitions of and measures , we obtain that when is chosen sufficiently large .to verify ( [ u4 ] ) , note that from ( [ u6b ] ) , it follows that for all .denote the integrals appearing in the first and the second terms on the right - hand side of ( [ 073102 ] ) by and , respectively .condition ( [ u4 ] ) will follow if we could demonstrate that the upper limits , as , of both of these terms are smaller than .to estimate , we use lemma [ lemma ] and condition ( [ u4 ] ) , which holds for , .we then obtain on the other hand , since , , we obtain , from equicontinuity condition ( [ u4b ] ) , hence , ( [ u4 ] ) holds for , , and function . summarizing, we have shown that .however , we also have , which is clearly impossible . 
therefore , we conclude that . taking theorem [ mtheorem ] into account , the proof of the first part of the theorem will be completed as soon as we can show that . note that condition ( [ th1 ] ) implies that supp . indeed , let be a bounded set such that . we can then write , for any and , . according to proposition [ prop ] , the above implies that . now , fix an arbitrary . let be the family of all closed sets which possess a finite -net , that is , there exists a finite set , say , for which . to prove that the family is tight , it suffices to show that for every , there exists such that . for more details , see , for example , pages 517 and 518 of . in light of lemma [ lemma ] , this condition would follow if we could prove that for given , and , one can find and such that . fix an . since , we can find such that ( [ th3 ] ) holds with in place of and . let be the -neighborhood of . [ support ] there exists such that . in addition , if is as above , then for any and , we can choose such that . the claim made in ( [ 081901 ] ) follows if we can show that there exists such that . to prove ( [ th4 ] ) , suppose that is a lipschitz function such that . since is equicontinuous at , we can find such that for all . we then have and , using ( [ th3 ] ) , we conclude that . estimate ( [ 081902 ] ) follows directly from ( [ 081901 ] ) and lemma [ lemma ] . let us return to the proof of theorem [ theorem ] . let be as in the above lemma and let denote the supremum of all sums such that there exist and for some . in light of lemma [ support ] , to deduce ( [ th3b ] ) , it is enough to show that . assume , therefore , that . let be a bounded subset of , let be such that and let . let , and be such that and , for a given , we let . by virtue of lemma [ lemma ] , we can choose such that for . thus , from ( [ th888 ] ) , we obtain that for such , . however , this means that for , . choose such that . let . of course , . from ( [ th999 ] ) and the definitions of , , we obtain , however , that for as above , . hence , , which clearly contradicts ( [ 081904 ] ) . recall that is the path measure corresponding to , the initial distribution of . let be the corresponding expectation and . it then suffices to show that $= d_*$ and $= d_*^2$ . equality ( [ 090602 ] ) is an obvious consequence of weak-* mean ergodicity . to show ( [ 090603 ] ) , observe that the expression under the limit equals $\frac{2}{t^2}\int_0^t ( t - s ) \bigl( \int_{\mathcal x } p_s ( \psi \psi_{t - s } ) \,{d}\mu \bigr) \,{d}s$ , where . the following lemma then holds . [ lm090601 ] for any and a compact set , there exists such that . it suffices to show equicontinuity of on any compact set . the proof then follows from pointwise convergence of to as and the arzela - ascoli theorem . the equicontinuity of the above family of functions is a direct consequence of the e - property and a simple covering argument . now , suppose that . one can find a compact set such that . then , where and . according to lemma [ lm090601 ] , we can find such that ( [ 090606 ] ) holds with the compact set and . we then obtain . also , note that the limit on the right - hand side of ( [ 090603 ] ) therefore equals . in what follows , we are going to verify the assumptions of theorem [ theorem ] .
first , observe that ( [ th2 ] ) follows from ( ii ) and chebyshev 's inequality . the e - property implies equicontinuity of at any point for any bounded , lipschitz function . what remains to be shown , therefore , is condition ( [ th1 ] ) . the rest of the proof is devoted to that objective . it will be given in five steps . [ stepi ] we show that we can find a bounded borel set and a positive constant such that . to prove this , observe , by ( ii ) and chebyshev 's inequality , that for every , there exists a bounded borel set such that . let be a bounded , open set such that and let be such that . since is equicontinuous at , we can find such that for all and . therefore , we have . since the attractor is compact , we can find a finite covering , , of . the claim made in ( [ tw1 ] ) therefore holds for and sufficiently small so that . [ stepii ] let be as in step [ stepi ] . we prove that for every bounded borel set , there exists a such that . from the fact that is a global attractor for ( [ 082102 ] ) , for any and a bounded borel set , there exists an such that for all . by ( [ et240 ] ) , we have . we therefore obtain that . let be the constant given in step [ stepi ] . then

$$\liminf_{t\uparrow\infty} \frac{1}{t}\int_0^t p_{s+l}\mathbf{1}_{b}(x)\,{d}s = \liminf_{t\uparrow\infty} \frac{1}{t}\int_0^t p_{s+l}^*\delta_{x}(b)\,{d}s = \liminf_{t\uparrow\infty} \frac{1}{t}\int_0^t \int_{\mathcal x} p_s\mathbf{1}_{b}(z)\,p_{l}^*\delta_{x}({d}z)\,{d}s \ge \liminf_{t\uparrow\infty} \frac{1}{t}\int_0^t \int_{\mathcal k+r^*b(0,1)} p_s\mathbf{1}_{b}(z)\,p_{l}^*\delta_{x}({d}z)\,{d}s \stackrel{\mathrm{fubini\ and\ fatou}}{\ge} \int_{\mathcal k+r^*b(0,1)} \liminf_{t\uparrow\infty} q^t(z,b)\,p_{l}^*\delta_{x}({d}z) \stackrel{(\ref{tw1})}{\ge} \frac{1}{2}\int_{\mathcal x}\mathbf{1}_{\mathcal k+r^*b(0,1)}(z)\,p_{l}^*\delta_{x}({d}z) = \frac{1}{2}\,p_{l}\mathbf{1}_{\mathcal k+r^*b(0,1)}(x) \stackrel{(\ref{082201})}{\ge} \gamma := \frac{p(r^*,d)}{2} \qquad \forall x\in d .$$

[ stepiii ] we show here that for every bounded borel set and any radius , there exists a such that . we therefore fix and . from step [ stepii ] , we know that there exist a bounded set and a positive constant such that ( [ tw3 ] ) holds . by ( [ et240 ] ) , we have , as in ( [ tw7 ] ) , . using ( [ 082201 ] ) , we can further estimate the last right - hand side of ( [ tw7b ] ) from below by . we therefore obtain ( [ tw8 ] ) with . [ stepiv ] choose . we are going to show that for every , there exist a finite set of positive numbers and a positive constant satisfying . let for be such that .
by the feller property of , we may find , for any , a positive constant such that . since is compact , we may choose such that , where for . choose such that . [ stepv ] fix a bounded borel subset , and . let a positive constant and a finite set be such that ( [ tw2 ] ) holds . set . from step [ stepiii ] , it follows that there exists such that ( [ tw8 ] ) holds for . denote by the cardinality of . we can easily check that the left - hand side equals $\# s \, \liminf_{t\uparrow\infty} q^t ( x , b(z,\delta) )$ for all $x \in d$ . on the other hand , we have $\ge \int_{\mathcal k+\tilde{r}b(0,1)} \sum_{q\in s} p_{q}\mathbf{1}_{b(z,\delta)}(y)\,q^t(x,{d}y) \stackrel{(\ref{082401})}{\ge} u\,q^t\bigl(x,\mathcal k+\tilde{r}b(0,1)\bigr)$ for all $x\in d$ . combining ( [ tw8 ] ) with ( [ tw5 ] ) , we obtain , and , finally , by ( [ tw4 ] ) , . this shows that condition ( [ th1 ] ) is satisfied with . with no loss of generality , we will assume that the initial position of the tracer . by definition , , where is the observation process . recall that is a stationary solution to ( [ zet ] ) . obviously , uniqueness and the law of a stationary solution do not depend on the particular choice of the wiener process . therefore , and , where , as before , stands for the law of a random element and is , by theorem [ pttheorem ] , a unique ( in law ) stationary solution of the equation $\cdots \,{d}t + {q}^{1/2}\,{d}w(t)$ . let be given by . the proof of the first part of the theorem will be completed as soon as we can show that the limit ( in probability ) exists and is equal to , where is the unique invariant measure for the markov family defined by ( [ ew2 ] ) . since the semigroup satisfies the e - property and is weak-* mean ergodic , part ( 2 ) of theorem [ theorem ] implies that for any bounded lipschitz continuous function , . since is embedded in the space of bounded continuous functions , is lipschitz . the theorem then follows by an easy truncation argument . this section is in preparation for the proof of theorem [ ttracer ] . given an , we denote by the sobolev space which is the completion of with respect to the norm , where are the fourier coefficients of . note that if . let be an operator on defined by with the domain . since the operator is self - adjoint , it generates a -semigroup on . moreover , for , is the restriction of and is the restriction of . from now on , we will omit the subscript when it causes no confusion , writing and instead of and , respectively . let be a symmetric positive definite bounded linear operator on given by . let be the constant appearing in ( [ h1 ] ) and let and . note that , by sobolev embedding ( see , e.g. , theorem 7.10 , page 155 of ) , , and hence there exists a constant such that for any , the operator is bounded from any to . its norm can be easily estimated by . let , . the hilbert - schmidt norm of the operator ( see appendix c of ) is given by . taking into account assumptions ( [ h1 ] ) and ( [ h2 ] ) , we easily obtain the following lemma . [ l1 ] for each , the operator is hilbert - schmidt from to and there exists such that for any and , the operator is bounded from into and . let be a cylindrical wiener process in defined on a filtered probability space .
by lemma [ l1 ] ( i ) and theorem 5.9 , page 127 of , for any , there exists a unique , continuous in , -valued process solving , in the mild sense , the ornstein - uhlenbeck equation . moreover , ( [ eou ] ) defines a markov family on ( see section 9.2 of ) and the law of on is its unique invariant probability measure ( see theorem 11.7 of ) . note that , since , for any fixed , the realization of is lipschitz in the variable . if the filtered probability space is sufficiently rich , that is , if there exists an -measurable random variable with law , then the stationary solution to ( [ eou ] ) can be found as a stochastic process over . its law on the space of trajectories coincides with the law of . since the realizations of are lipschitz in the spatial variable , equation ( [ et1 ] ) , with in place of , has a unique solution , , for given initial data . in fact , with no loss of generality , we may , and shall , assume that . in what follows , we will also denote by the solution of ( [ et1 ] ) corresponding to the stationary right - hand side . let be the _ lagrangian observation of the environment process _ or , in short , _ the observation process _ . it is known ( see and ) that solves the equations $\cdots \,{d}t + q^{1/2}\,{d}\tilde w(t)$ , $\mathcal z(0,\cdot) = v\bigl(0 , \mathbf x(0)+\cdot\bigr)$ , where is a certain cylindrical wiener process on the original probability space and , for $\psi , \phi \in \mathcal x$ , $\xi \in \mathbb{t}^d$ . by ( [ ea1 ] ) , is a continuous bilinear form mapping from into . for a given -measurable random variable which is square - integrable in and a cylindrical wiener process in , consider the spde $\cdots \,{d}t + {q}^{1/2}\,{d}w(t) , \qquad z(0)=z_0$ . taking into account lemma [ l1 ] ( ii ) , the local existence and uniqueness of a mild solution follow by a standard banach fixed point argument . for a different type of argument , based on the euler approximation scheme , see section 4.2 of . global existence also follows ; see the proof of the moment estimates in section [ sec5.3.2 ] below . given , let denote the value at of a solution to ( [ ew2 ] ) satisfying . since the existence of a solution follows from the banach fixed point argument , is a stochastically continuous markov family and its transition semigroup is feller ; for details see , for example , or . note that the following result on the ergodicity of the observation process , besides being of independent interest , will be crucial for the proof of theorem [ ttracer ] . [ pttheorem ] under assumptions ( [ h1 ] ) and ( [ h2 ] ) , the transition semigroup for the family is weak-* mean ergodic . to prove the above theorem , we verify the hypotheses of theorem [ tgeneral ] . note that is the global attractor for the semi - dynamical system defined by the deterministic problem . clearly , this guarantees the uniqueness of an invariant measure for the corresponding semi - dynamical system ; see definition [ def4.2 ] . our claim follows from the exponential stability of , namely , , where is strictly positive by ( [ h2 ] ) . indeed , differentiating over , we obtain . the last term on the right - hand side vanishes , while the first one can be estimated from above by .
combining these observations with gronwall 's inequality , we obtain ( [ et22 ] ) . let be the ball in with center at and radius . we will show that for any and any integer , . recall that is the solution to ( [ eou ] ) satisfying . let solve the problem . we then obtain , where the first equality means equality in law . since is gaussian , there is a constant such that . hence , there is a constant such that for , . note that there is a constant such that , where appears in ( [ h1 ] ) , and ( [ et23 ] ) indeed follows . define . this satisfies equation ( [ zet ] ) and so the laws of and are identical . on the other hand , for and we have that satisfies ( [ et21 ] ) . to show stochastic stability , it suffices to prove that . let be the stochastic convolution process . it is a centered , gaussian , random element in the banach space $c([0,t],\mathcal x)$ . note that . since is a centered , gaussian , random element in the banach space $c([0,t],\mathcal x)$ . since on , we can choose sufficiently small so that , where is chosen in such a way that on . hence , ( [ et24 ] ) follows . it suffices to show that for any and , there exists a positive constant such that . here , denotes the fréchet derivative of a given function . indeed , let be supported in the ball of radius , centered at and such that . suppose that is an orthonormal base in and is the orthonormal projection onto . define . one can deduce ( see part 2 of the proof of theorem 1.2 , pages 164 and 165 in ) that for any , the sequence satisfies and pointwise . in addition , and . let be arbitrary and . we can write $\cdots \le \cdots \,\| x - y \|_{\mathcal x}$ . this shows equicontinuity of for an arbitrary lipschitz function in the neighborhood of any , and the e - property follows . to prove ( [ et26 ] ) , we adopt the method from . first , note that , where . then

$$\cdots = \mathbb{e}\{ d\psi(z^x(t))[\omega_t(x)] \} + \mathbb{e}\{ d\psi(z^x(t))[\rho_t(v,x)] \} = \mathbb{e}\{ {\mathcal d}_g\psi(z^x(t)) \} + \mathbb{e}\{ d\psi(z^x(t))[\rho_t(v,x)] \} \stackrel{(\ref{083003})}{=} \mathbb{e}\biggl\{ \psi(z^\xi(t)) \int_0^t \langle g(s) , q^{1/2}\,{d}w(s) \rangle_{\mathcal x} \biggr\} + \mathbb{e}\{ d\psi(z^x(t))[\rho_t(v,x)] \} .$$

we have and $| \cdots | \le \|\psi\|_{c_b^1({\mathcal x})} \, \mathbb{e}\,\|\rho_t(v,x)\|_{\mathcal x}$ . hence , by ( [ et29 ] ) and ( [ et210 ] ) , we derive the desired estimate ( [ et26 ] ) with $\cdots - {\mathcal d}_g z^x(t)\|_{\mathcal x}$ . therefore , the e - process property would be shown if we could prove proposition [ m - lm ] . let us denote by the orthogonal projection onto . write . given an integer , let be the solution of the problem . we adopt the convention that . let , where $\cdots + \tfrac12\,\pi_{<n}\zeta^n(v,x)(t)\,\|\pi_{<n}\zeta^n(v,x)(t)\|_{\mathcal x}^{-1}$ , and where will be specified later . note that takes values in a finite - dimensional space , where is invertible , by the definition of the space . recall that $\cdots - {\mathcal d}_g z^x(t)$ and obey equations ( [ et27 ] ) and ( [ et28 ] ) , respectively . hence , satisfies . since , we conclude that and solve the same linear evolution equation with the same initial value . thus , the assertion of the lemma follows . [ l3 ] for each , we have for all . applying to both sides of ( [ et211 ] ) , we obtain , with $\zeta^n(v,x)(0) = v$ . multiplying both sides of ( [ et216 ] ) by , we obtain that satisfies . since , .
therefore , ( [ pitu ] ) is a direct consequence of the fernique theorem ( see , e.g. , ) .to prove the second part of the lemma , first observe that for any , multiplying both sides of ( [ et211 ] ) by and remembering that for , we obtain that , for those times , define note that there exists a constant such that therefore , using gronwall s inequality , we obtain , for , where .we have , therefore , by virtue of the cauchy schwarz inequality , where is given by ( [ 082903 ] ) .write the proof of part ( ii ) of the lemma will be completed as soon as we can show that there exists an such that , for all , to do this , it is enough to show that to do this , note that for any , is a strong solution to the equation therefore , we can apply the it formula to and the function as a result , we obtain taking into account the spectral gap property of , we obtain therefore , where and since there exists a constant such that for all , and , for all .let .we can choose sufficiently large such that for , since is a martingale , we have shown , therefore , that for , letting , we obtain ( [ 083110 ] ) .the authors wish to express their gratitude to an anonymous referee for thorough reading of the manuscript and valuable remarks .we also would like to express our thanks to z. brzeniak for many enlightening discussions on the subject of the article .
|
we formulate a criterion for the existence and uniqueness of an invariant measure for a markov process taking values in a polish phase space . in addition , weak-* ergodicity , that is , the weak convergence of the ergodic averages of the laws of the process starting from any initial distribution , is established . the principal assumptions are the existence of a lower bound for the ergodic averages of the transition probability function and its local uniform continuity . the latter is called the _ e - property _ . the general result is applied to solutions of some stochastic evolution equations in hilbert spaces . as an example , we consider an evolution equation whose solution describes the lagrangian observations of the velocity field in the passive tracer model . the weak-* mean ergodicity of the corresponding invariant measure is used to derive the law of large numbers for the trajectory of a tracer .
|
recently , an interesting new approach for physical security in massive multiple - input multiple - output ( mimo ) communication systems was introduced by dean and goldsmith and called `` physical layer cryptography '' , or a massive physical layer cryptosystem ( mplc ) . in this scenario , the channel state information ( csi ) is known at the legitimate transmitter as well as at all the other adversaries and legitimate receivers . the eavesdropper also has knowledge of the csi between the legitimate users . the idea is to replace the information - theoretic security guarantees of previous physical layer security methods with the weaker complexity - based security guarantees used in cryptography . more precisely , the idea is to precode the information data at the transmitter , based on the known csi between the legitimate users , so that the decoding of the received vector would be computationally easy for the legitimate user but computationally hard for the adversary . the goal of this approach is to trade off a weaker , but still practical , complexity - based security guarantee in order to avoid the less practical additional assumptions required by existing information - theoretic techniques , such as a higher noise level and/or fewer antennas for the adversary than for the legitimate parties , while still retaining the `` no secret key '' location - based decryption feature of physical - layer security methods . in , a scheme is presented that is claimed to achieve the above goal of the complexity - based approach , using a singular value decomposition ( svd ) precoding technique and pam constellations at the transmitter . namely , it is claimed that , under a certain condition on the number of the legitimate sender 's transmit antennas and the noise level in the adversary 's channel ( which we call the _ hardness condition _ ) , the message decoding problem for the adversary ( eavesdropper ) , termed the problem in , is as hard to solve on average as it is to solve a standard conjectured hard lattice problem in dimension in the worst case , in particular , the variant of the approximate shortest vector problem in arbitrary lattices of dimension , with approximation factor polynomial in . for these problems , no polynomial - time algorithm is known , and the best known algorithms run in time exponential in the number of transmit antennas , which is typically infeasible when is in the range of a few hundred ( as in the case of massive mimo ) . significantly , this computational hardness is claimed to hold even if the adversary is allowed to use a number of receive antennas _ polynomially larger _ than the numbers and used by the legitimate parties , and with the same noise level as the legitimate receiver . consequently , under the widely believed conjecture that no polynomial - time algorithms for in dimension exist and under the hardness condition , the authors of conclude that their mplc and the corresponding decoding problem are secure against adversaries with run - time polynomial in . _ our contribution . _ in this contribution , we further analyse the complexity - based mplc initiated in , to improve the understanding of its potential and limitations . our contributions are summarized below :

* we show , using a linear receiver known as zero - forcing ( zf ) , an algorithm with run - time polynomial in for the problem faced by an adversary against the mplc in . we analyze the decoding success probability of this algorithm and prove that it is even if the _ hardness condition _ is satisfied , if the ratio exceeds a small factor at most logarithmic in .
this contradicts the hardness of the problem conjectured in to hold for much larger polynomial ratios . moreover , we show that the decoding success probability of an adversary against the mplc using the zf decoder is approximately the same as ( or greater than ) the decoding success probability of the legitimate receiver if is approximately greater than or equal to , assuming an equal noise level for the adversary and the legitimate receivers . our first contribution implies that the svd precoder - based mplc in still requires for security an undesirable assumption limiting the number of adversary antennas to be less than that of the legitimate receiver , similar to previous information - theoretic techniques .

* as our second contribution , we investigate the potential of the general approach of , assuming zf decoding by both the adversary and the legitimate receiver , by studying the generalized scenario where one allows arbitrary precoding matrices at the legitimate transmitter in place of the svd precoder of the scheme in . to do so , we define a decoding advantage ratio for the legitimate user over the adversary , which is approximately the ratio of the maximum noise power tolerated by the legitimate user 's decoder to the maximum noise power tolerated by the adversary 's decoder ( for the same `` high '' success probability ) . we derive a general upper bound on this advantage ratio , and show that , even in the general scenario , the advantage ratio tends to 1 ( implying no advantage ) if the ratio exceeds a small constant factor . thus a linear limitation ( in the number of legitimate user antennas ) on the number of adversary antennas seems inherent to the security of this approach . on the positive side , we show that , in the case when the legitimate parties and the adversary all have the same number of antennas , the upper bound on the advantage ratio is quadratic in , and we give experimental evidence that this upper bound can be approximately achieved using an inverse precoder .

* notation . * the notation denotes that the real number is much greater than . we let denote the absolute value of . vectors will be column - wise and denoted by bold small letters . let be a vector ; then its -th entry is represented by . a matrix . denotes the probability of the event `` '' . the standard gaussian distribution on with zero mean and variance is denoted by . we denote by the assignment to random variable of a sample from the probability distribution . we first summarize the notion of real lattices and the svd ( of a matrix ) , which are essential for the rest of the paper . a -dimensional _ lattice _ with a basis set is the set of all integer linear combinations of the basis vectors . every matrix admits a singular value decomposition ( svd ) , where the matrices and are two orthogonal matrices and is a rectangular diagonal matrix with non - negative diagonal elements . by abusing notation , we denote the moore - penrose pseudo - inverse of by , that is , , where the pseudo - inverse of is denoted by and can be obtained by taking the reciprocal of each non - zero entry on the diagonal of and finally transposing the matrix . we consider a slow - fading wiretap channel model . the real - valued channel from user to user is denoted by . we also denote the channel from to the adversary by an matrix . the entries of and are identically and independently distributed ( i.i.d . ) based on a gaussian distribution . these channel matrices are assumed to be constant for a long time as we employ precoders at the transmitter . this model can be written as : . the entries of , for , are drawn from a constellation for an integer . the components of the noise vectors and are i.i.d . based on gaussian distributions and , respectively . we assume , to evaluate the potential of the dean - goldsmith model to provide security based on computational complexity assumptions , without a `` degraded noise '' assumption on the eavesdropper . in this communication setup , the csi is available at the transmitter and all receivers . in fact , users and know the channel matrix ( via some channel identification process ) , while the adversary has knowledge of both channel matrices and . this knowledge allows a linear precoding to be applied to the message before transmission .
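the svd and pseudo - inverse defined above can be checked numerically ; the following sketch ( dimensions chosen arbitrarily ) verifies with numpy that the factorization reconstructs the matrix and that the pseudo - inverse built from the svd agrees with numpy 's built - in routine .

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(6, 4))          # arbitrary rectangular matrix

u, s, vt = np.linalg.svd(a, full_matrices=True)
# rebuild the rectangular diagonal factor and check a = u @ sigma @ vt
sigma = np.zeros((6, 4))
sigma[:4, :4] = np.diag(s)
assert np.allclose(a, u @ sigma @ vt)

# moore-penrose pseudo-inverse from the svd: invert the non-zero singular
# values, transpose the diagonal factor, and recombine
sigma_pinv = np.zeros((4, 6))
sigma_pinv[:4, :4] = np.diag(1.0 / s)
a_pinv = vt.T @ sigma_pinv @ u.T
assert np.allclose(a_pinv, np.linalg.pinv(a))
```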
) based on a gaussian distribution .these channel matrices are assumed to be constant for long time as we employ precoders at the transmitter .this model can be written as : the entries of , for , are drawn from a constellation for an integer .the components of the noise vectors and are i.i.d . based on gaussian distributions and , respectively .we assume to evaluate the potential of the dean - goldsmith model to provide security based on computational complexity assumptions , without a `` degraded noise '' assumption on the eavesdropper . in this communication setup, the is available at all the transmitter and receivers .in fact , users and know the channel matrix ( via some channel identification process ) , while adversary has the knowledge of both channel matrices and .the knowledge of allows to perform a linear precoding to the message before transmission .more specifically , in , to send a message to , user performs an precoding as follows .let of be given as .the user transmits instead of and applies a filter matrix to the received vector . with this , the received vectors at and are as follows : where .note that since and are both orthogonal matrices , the vector and the matrix continue to be i.i.d .gaussian vector and matrix , with components of zero mean and variances and , respectively .although dean - goldsmith do not provide a correctness analysis , we provide one here for completeness .since is diagonal , user recovers an estimate of the -th coordinate / layer of , by performing two operations dividing and rounding as follows : .it is now easy to see that the decoding process succeeds if for all .since each is distributed as , the decoding error probability , that incorrectly decodes , is , by a union bound , upper bounded by times the probability of decoding error at the worst layer : where we have used the bound on the tail of the standard gaussian distribution . by choosing parameters such that , one can ensure that s error probability is less than any .unlike decoding by user , for decoding by the adversary , the authors of claimed that the complexity of a problem called in the `` search '' variant of the `` decoding problem '' ( to be called from here on ) , namely recovering from and , with non - negligible probability , under certain parameter settings , upon using massive systems with large number of transmit antennas , is as hard as solving standard lattice problems in the worst - case .more precisely , it was claimed in that , upon considering above conditions , user will face an exponential complexity in decoding the message .the above cryptosystem is called the _ massive physical layer cryptosystem ( ) _ , and the above problem of recovering from is called in the `` search '' variant of the `` decoding problem '' . 
for our security analysis , we focus here for simplicity on this variant . we say that the problem is _ hard _ ( and the mplc is _ secure _ in the sense of `` one - wayness '' ) if any attack algorithm against it with run - time has negligible success probability . more precisely , in theorem 1 of , a polynomial - time complexity reduction is claimed from worst - case instances of the problem in arbitrary lattices of dimension to the problem with transmit antennas , noise parameter and constellation size , assuming the following minimum noise level for the equivalent channel between and holds : . the reduction is quantum when and classical when , and is claimed to hold for _ any polynomial number of receive antennas _ . we show in the next section , however , that in fact , for above some constant , there exists an efficient algorithm for the problem . since this constant is independent of the number of receive antennas , the condition turns out to be not sufficient to provide security of the mplc . we will provide our detailed analysis in the next section . in this section , we introduce a simple and efficient attack based on linear receivers . we first introduce the attack and analyze its components . the eavesdropper receives . let be the svd of the equivalent channel . thus , we get , where both and are orthogonal matrices and equals , where the last equality holds since the singular values of and are the same . note that knows and its svd from the assumption that (s)he knows the channel between and . at this point , user performs a zf attack . (s)he computes , where . user is now able to recover an estimate of the -th coordinate of by rounding : . we now investigate the distribution of in . [ lem : noiseateve ] the components of in are distributed as with . note that has the same distribution as since is orthogonal . hence , , the -th coordinate of the vector , is distributed as , for all . we also note that the components are independent with different variances . now let denote the -th row of . we find the distribution of . since the linear combination at is distributed as a linear combination of independent gaussian distributions , is distributed as . since , for all , the random variable is distributed as with , where the last equality holds because is orthogonal . the above attack succeeds if for all . let denote the decoding error probability that incorrectly recovers using the zf attack . based on lemma [ lem : noiseateve ] , we have . by comparing and , we see that the noise conditions for decoding by users and are the same if both users have the same number of receive antennas and the distributions of the channels and are the same . this implies that user is able to decode under the same constraints / conditions as . moreover , if , then the adversary is capable of decoding under higher noise . before starting this section , we mention a theorem from regarding the least / largest singular value of the matrix variate gaussian distribution . this theorem relates the least / largest singular value of a gaussian matrix to the number of its columns and rows asymptotically . [ th : sv ] let be an matrix with i.i.d . entries distributed as .
if and tend to infinity in such a way that tends to a limit , and suppose that as . then , for all sufficiently large , the probability that incorrectly decodes the message using a zf decoder is upper bounded by , if . let be the set of all channel matrices such that . note that with vanishing probability as , by theorem [ th : sv ] . we have : , where the first inequality is due to the facts that and , the second inequality is true based on and theorem [ th : sv ] , the third inequality uses the well - known upper bound for the tail of a gaussian distribution , and the last inequality follows from the definition of . by letting , the sufficient condition can be obtained . comparing conditions and , we conclude that if exceeds a small factor at most logarithmic in , we can have both conditions satisfied and yet theorem [ th : pezf ] shows that the problem can be efficiently solved . this contradicts the hardness of the problem conjectured in to hold for much larger polynomial ratios . to analytically investigate the advantage of decoding at over , we define the following advantage ratio . [ definition : advratio ] for fixed channel matrices and , the ratio is called the advantage of over . we note from and that is the ratio of the maximum noise power tolerated by 's zf decoder to the maximum noise power tolerated by 's zf decoder , for the same decoding error probability in both cases . first , we study this advantage ratio asymptotically . we use theorem [ th : sv ] to obtain the following result . [ prop : rectangularmmplcdisadv ] let be the channel between and and be the channel between and , both with i.i.d . elements each with distribution . fix real , and suppose that and as , so that . then , using a general precoding matrix in , we have almost surely as . hence , in the case and , we have . moreover , if for some , then . it is easy to see the two inequalities below hold for every , , and : . hence , the advantage ratio can be upper bounded as . using theorem [ th : sv ] for the numerator and the denominator of the rhs of , respectively , and , we get . in the case and , the latter inequality gives . also , the inequality implies ( using ) that , and the rhs of the latter is for all , which implies . [ anupperboundonadvantageratio ] the above analysis shows that one cannot hope to achieve an advantage ratio greater than 1 if the adversary uses a number of antennas significantly larger than that used by the legitimate parties ( by more than a constant factor ) . we now explore what advantage ratio can be achieved if we add a new constraint , namely that the number of adversary antennas is limited to be the same as the number of legitimate transmit and receive antennas . that is , we study the advantage ratio when the channel matrices and are square matrices and not rectangular . we show that under this simple constraint , the advantage ratio is capable of getting larger than and as big as . we employ the following result in our analysis . [ th : squarelsv ] let be a matrix with i.i.d . entries distributed as . the least singular value of satisfies $\cdots = \exp\left( -x^2/2 - x \right)$ .
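the scaling in this result is easy to probe by simulation . the sketch below assumes the standard form of the limit ( the stripped left - hand side reads $\lim_{n\to\infty}\pr [ \sigma_{\min}\sqrt{n} \ge x ]$ in edelman - type statements ) and compares the empirical tail of $\sigma_{\min}\sqrt{n}$ for a square gaussian matrix with $\exp ( -x^2/2 - x )$ ; the matrix size and trial count are arbitrary .

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 100, 2000
vals = np.empty(trials)
for t in range(trials):
    a = rng.normal(size=(n, n))
    # smallest singular value, rescaled by sqrt(n)
    vals[t] = np.linalg.svd(a, compute_uv=False)[-1] * np.sqrt(n)

for x in (0.5, 1.0, 2.0):
    emp = np.mean(vals >= x)  # empirical tail P[sigma_min * sqrt(n) >= x]
    print(x, round(float(emp), 3), round(float(np.exp(-x**2 / 2 - x)), 3))
```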
using the above theorem along with theorem [ th : sv ] , one can further upper bound and estimate the advantage ratio .more precisely , we have where is obtained based on .as , based on theorem [ th : squarelsv ] , the denominator of the rhs of is except with probability for any fixed , and thus is with the same probability .the following proposition is now outstanding .[ prop : squareub ] let be fixed , and be matrices as in proposition [ prop : rectangularmmplcdisadv ] with . using a general precoder to send the plain text , the maximum possible that can achieve over , is of order , except with probability .the above proposition implies that user _ may _ be able to decode the message , with noise power up to times greater than is able to handle .such an advantage was not available in scheme proposed in due to the lack of constraint on the number of receive antennas for and the use of svd precoder .we present below experimental evidence that this upper bound can be approached using an _ inverse _ precoder . this inverse precoder may not be power efficient as it may need a lot of power enhancement at , however it gives us a benchmark on the achievable advantage ratio . in this framework , the equivalent channel between legitimate users is the identity matrix and the channel between users and is . in fig .[ fig : advratioinverse ] , we have shown the value of for square channel matrices of size .for refrence , we also plot the mean value along with .clearly , in most cases the advantage ratio is within a small factor ( compared to ) of . the advantage ratio for square channels of size using inverse precoder.,title="fig:",width=321 ][ sec7 ] our results suggest several natural open problems for future work .the implied contradiction between our first contribution and the conjectured hardness of in for implies either a polynomial - time algorithm for worst - case or that the complexity reduction of ( theorem 1 of ) between and does not hold under the hardness condition of .we believe the second possibility is the correct one , and that there is a gap in the proof of theorem 1 of .we do not yet know if the gap can be filled to give a worst - case to average - case reduction under a revised hardness condition .this is left for future work .our generalized upper bound on legitimate user to adversary decoding advantage suggests the complexity - based approach does not remove the needed linear limitation on the number of adversary antennas versus the number of legitimate party antennas , that is also suffered by previous information - theoretic methods .can a more general complexity - based approach to physical - layer security avoid this limitation ?finally , our positive result for the inverse precoder suggests that if the adversary is limited to have the same number of antennas as the legitimate parties , the complexity - based approach may provide practical security .this suggests the following questions : how secure is this inverse precoding scheme against more general decoding attacks ( other than ) ? can a security reduction from a worst - case standard lattice problem be given for this case ?how does the practicality of the resulting scheme compare to existing physical - layer security schemes based on information - theoretic security arguments ? can the efficiency of those schemes be improved by the complexity - based approach ?9 t. dean and a. goldsmith , `` physical - layer cryptography through massive , '' _ information theory workshop ( itw ) , 2013 ieee , _ pp . 
1-5, 9-13 sept. 2013. extended version is also available online at: http://arxiv.org/abs/1310.1861. j. wang, j. lee, f. wang, and t. quek, ``secure communication via jamming in massive mimo rician channels,'' _globecom workshops (gc wkshps), 2013 ieee_, pp. 340-345, 8-12 dec. 2013. a. d. wyner, ``the wire-tap channel,'' _bell system technical journal_, vol. 54, no. 8, pp. 1355-1387, oct. 1975.
|
in this paper, we present a zero-forcing (zf) attack on the physical layer cryptography scheme based on massive multiple-input multiple-output (mimo). the scheme uses a singular value decomposition (svd) precoder. we show that the eavesdropper can decrypt/decode the information data under the same condition as the legitimate receiver. we then study the advantage for decoding by the legitimate user over the eavesdropper in a generalized scheme using an arbitrary precoder at the transmitter. on the negative side, we show that if the eavesdropper uses a number of receive antennas much larger than the number of legitimate user antennas, then there is no advantage, independent of the precoding scheme employed at the transmitter. on the positive side, for the case where the adversary is limited to have the same number of antennas as the legitimate users, we give an upper bound on the advantage and show that this bound can be approached using an inverse precoder. physical layer cryptography, massive mimo, zero-forcing, singular value decomposition, precoding.
|
the stochastic resonance ( sr ) constitutes a cooperative phenomenon wherein the addition of noise to the information carrying signal can improve in a paradoxical manner the detection and transduction of signals in nonlinear systems ( see , e.g. , for an introductory overview and for a comprehensive survey and references ) .clearly , this effect could play a prominent role for the function of sensory biology . as such ,the beneficial role of ambient and external noises has been addressed not only theoretically ( see , e.g. , ) , but also has been manifested experimentally on different levels of biological organization e.g. , in the human visual perception and tactile sensation , in the cricket cercal sensory systems , or also in the mammalian neuronal networks and even earlier for the mechanoreceptive system in crayfish .presumably , the molecular mechanisms of the biological sr have their roots in stochastic properties of the ion channel arrays of the receptor cell membranes .this stimulates the interest to study sr in biological ion channels .one of the outstanding challenges in sr - research therefore is the quest to answer whether and how sr occurs in single and/or coupled ion channels .these channels are the evolution s solution of enabling membranes made out of fat to participate in electrical signaling .they are formed of special membrane proteins . in spite of the great diversity , these natural occurring nanotubesshare some common features .most importantly , the channels are functionally bistable , i.e. they are either _ open _ , allowing specific ions to cross the membrane , or are _ closed_ .the regulation of the ion flow is achieved by means of the so - called gating dynamics , i.e. , those intrinsic stochastic transitions occurring inside the ion channel that regulate the dynamics of open and closed states .the key feature of gating dynamics is that the opening - closing transition rates depend strongly on external factors such as the membrane potential ( voltage - gated ion channels ) , membrane tension ( mechanosensitive ion channels ) , or presence of chemical ligands ( ligand - gated ion channels ) .this sensitivity allows one to look upon the corresponding ion channels as a kind of single - molecular sensors which transmits an input information to the signal - modulated ion current response .recently , it has been demonstrated experimentally by bezrukov and vodyanoy that a parallel ensemble of independent , although _ artificial _ ( alamethicin ) voltage - gated ion channels does exhibit sr behavior , when the information - carrying voltage signal is perturbed by a noisy component .these authors have put forward the so - called _ non - dynamical model _ of sr .it is based on the statistical analysis of the `` doubly stochastic '' , periodically driven poisson process with corresponding voltage - dependent spiking rate .conceptually , such a model can be adequate to those situations only where the channel is closed on average with openings constituting relatively rare events .an experimental challenge is to verify whether the sr effect persists for _ single _ natural biological ion channels under realistic conditions .moreover , a second challenge is to extend the theoretical description in to account properly for the distribution of dwell times spent by the channel in the conducting state .the previous research on sr in ion channels has exclusively been restricted to the case of conventional sr , i.e. , sr with a periodic input signal . 
in a more general situation , however , input aperiodic signals can be drawn from some statistical distribution .this case of the so - termed _ aperiodic _ sr has recently been put forward for neuronal systems .note that the important assumption of dealing with a signal realization that is taken from a stationary process has been made in all previous studies . in practice, however , one frequently meets a situation where this stationarity assumption is not rigorously valid , because the signal has a finite duration on the time - scale set by observation . in this_ nonstationary _ situation , both the spectral and the cross - correlation sr measures are inadequate .a preferable approach is then to look for sr from the perspective of statistical information transduction .as is elucidated with this work , information theory can indeed provide a _ unified _ framework to address different types of sr , including _nonstationary _ sr .it is the main purpose of this work to investigate the possibility to enhance the transmission of information in a _ion channel in presence of a dose of noise .this task will be accomplished within a simplistic two - state markovian model for the ion channel conductance .already within such an idealization , our analysis in terms of information theory measures turns out to be rather involved .in principle , the microscopic description of the gating dynamics should be based upon the detailed understanding of the structure of channel s `` gating dynamics '' .present state of the art assumes that the voltage - sensitive gates are represented by mobile charged - helix fragments of the channel protein which can dynamically block the ion conducting pathway .therefore , the gating dynamics can be described by diffusive motion of gating `` particles '' in an effective potential .then , kramers diffusion theory and its extension to the realm of _ fluctuating barriers _ ( see , e.g. , for a review and further references ) can be utilized to describe the gating dynamics .such a type of procedure , however , is still in its infancy . for our purpose , it suffices to follow a well - established phenomenological road provided by a discrete phenomenological modeling . the simplest two - state model of this kind reflects the functional bistability of ion channels . the dichotomous fluctuations between the conducting and nonconducting conformations of _ single _ ion channels are clearly seen in the patch clamp experiments . 
the statistical distributions of sojourn times of the open channel state and the closed channel state ,respectively , are generically not exponentially distributed .however , one can characterize these time distributions by its average , , to dwell the open ( o ) state , and by its corresponding average , , to stay in the closed ( c ) state .these two averages depend on the transmembrane voltage .then , the actual multistate gating dynamics can be approximately mapped onto the effective two - state dynamics described by the simple kinetic scheme \longrightarrow \\[-1.5em ] \longleftarrow \\[-1.9em ] k_c(v ) \end{array } o \\\end{aligned}\ ] ] with corresponding voltage - dependent effective transition rates and , respectively .although such a two - state markov description presents a rather crude approximation , it captures the main features of gating dynamics of the voltage - sensitive ion channels the dichotomous nature and the voltage - dependence of transition rates .moreover , this model yields by construction the correct mean open ( closed ) dwell times , and the stationary probability for the channel to stay open , i.e. .an example for the experimental dependence of the transition rates on voltage can be found for a channel in ref . and is depicted in fig .we note that in contrast to the closing rate the _ effective _ opening rate has _ no exponential _ dependence on the voltage .thus , these two rates are not symmetric ( with respect to dependence on , cf .1 ) . the reason being that the two - state description results as the _ reduction _ of an intrinsic multistate ( or multi - well ) gating dynamics and thus presents only a shadow of real behavior . in this sense , the markovian approximation models the true non - markovian dynamics on a coarse grained time scale . to proceed, one has to generalize this working model to the case with time - dependent voltages .here we distinguish among three components of the voltage : ( i ) the constant bias voltage , ( ii ) some time - dependent , unbiased signal , and ( iii ) a noisy component voltage .the noisy voltage is assumed to be a stationary gaussian markovian noise with zero average and root mean squared ( r.m.s . )amplitude . moreover , it possesses a frequency bandwidth .let us restrict our treatment to the situation where _ both _ the signal and the external noise are slowly varying on the time - scale set by diffusive motions occurring within the open ( or closed ) conformation .this time - scale typically lies in the range as manifested experimentally by the fast events in channel activation .we thus can apply the _ fluctuating rate _model assuming that the transition rates ] is typically of the order of milliseconds .then , the choice of a noise bandwidth satisfying , i.e. , , presents a consistent specification for the fluctuating rate description .the role of external noise is thus reduced within the same two - state approximation merely to forming new , noise - dressed time - dependent transition rates \rangle_n ] , where is the conductivity of the open channel and is the `` reversal '' potential ( nernst potential ) for ion flow . 
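to make the fluctuating-rate picture concrete, here is a minimal simulation sketch of the dichotomous gating process with voltage-dependent rates. the rate laws and all constants below are illustrative stand-ins (an exponential closing rate and a threshold-like opening rate, mimicking the qualitative shapes in fig. 1), not the fitted shaker ir values of appendix a; redrawing the gaussian noise at every time step is a shortcut for the band-limited noise assumed in the text.

```python
# a minimal sketch of the noise-dressed two-state gating dynamics; all rate
# constants are assumptions chosen only to mimic fig. 1 qualitatively.
import numpy as np

rng = np.random.default_rng(0)

def k_c(v):                                   # closing rate: exponential in v
    return 0.1 * np.exp(-v / 40.0)            # [1/ms], v in mV (assumed)

def k_o(v):                                   # opening rate: threshold-like
    return 0.03 / (1.0 + np.exp(-(v + 40.0) / 5.0))   # [1/ms] (assumed)

def simulate_gating(v_of_t, t_end, dt=0.01):
    """bernoulli-step simulation of the dichotomous c <-> o process."""
    t, state, flips, t_open = 0.0, 0, 0, 0.0  # state: 0 = closed, 1 = open
    while t < t_end:
        v = v_of_t(t)
        rate = k_c(v) if state == 1 else k_o(v)
        if rng.random() < rate * dt:          # flip with probability k*dt
            state = 1 - state
            flips += 1
        t_open += state * dt
        t += dt
    return flips, t_open / t_end

# constant bias, a slow weak sinusoidal signal, and gaussian voltage noise
sigma, omega = 5.0, 2 * np.pi / 100.0         # r.m.s. in mV, angular freq 1/ms
v = lambda t: -50.0 + 3.0 * np.sin(omega * t) + sigma * rng.standard_normal()
flips, open_frac = simulate_gating(v, t_end=5000.0)
print("flips:", flips, " fraction of time open: %.3f" % open_frac)
```

within the two-state approximation, the printed open fraction should track the stationary ratio of the noise-averaged opening rate to the sum of both averaged rates, which can serve as a sanity check of the simulation.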
when the channel is closed , the ion flow is negligible and the current is zero .we recall that the current passing through the open channel is generally time - dependent in accordance with the externally applied signal .however , we will assume that the information about signal is encoded in the switching events of current between zero and , and _ not _ in the additional modulation of .in other words , the information is assumed to be encoded in the signal - modulated _ conductance _ fluctuations between and zero .moreover , one can describe the resulting current fluctuations in terms of conductance fluctuations , i.e. , \ ] ] wherein as a two - state random point process .the sample space of within the time interval ] given that this state was occupied with probability one at .analogous expressions , with indices changed from to , hold obviously also for the complementary quantities and .then , the multi - time probability densities emerge as for a given even number of flips , and for odd number of flips , respectively .the probability densities for the other subspace ending in the open state ( labeled with ) can be written down by use of a simple interchange of the indices and in eqs .( [ q0 ] ) - ( [ qodd ] ) .the above reasoning yields a _complete _ probabilistic description of the stochastic switching process that is related to the conductance fluctuations . in terms of the stochastic path description , the probability that the channel is open at the instant time therefore given by an analogous expression holds also for the probability of the closed conformation . upon differentiating and with respect to time can check that these time - dependent probabilities indeed satisfy the kinetic equations ( [ balance ] ) .in the following we derive the general theory for various information measures that can be used to quantify the information gain obtained from an input signal being transduced by the ion channel current realizations when is switched on , versus the case with being switched off .intuitively , this information describes the difference in uncertainty about the current realizations in the absence and in the presence of the signal .we start out by reviewing the necessary background .let us first consider a _ discrete random variable _ . as demonstrated by k. shannon in 1948 ( his expression was discovered independently by n. wiener ) , the information entropy provides a measure for the uncertainty about a particular realization of . in eq .( [ sh1 ] ) , the set denotes the normalized probabilities for the realizations to occur , .the positive constant in ( [ sh1 ] ) defines the unit used in measurement .if the information entropy is measured in binary units , then , natural units yield , and digits give .this measure attains a minimum ( being zero ) if and only if one for a particular value of , and all others satisfying .it reaches a maximum if . the information entropy for a probability distribution is therefore a measure of how strongly it is peaked about a given alternative .uncertainty _ is consequently large for spread out distributions and small for concentrated ones .the application of an external signal ( perturbation ) results in a change of probabilities and consequently in entropy .the gained information is then defined by the corresponding change in entropy , i.e. . the generalization of the information concept onto the case of continuous variable presents no principal difficulties . 
in this casea proper definition of entropy reads \nonumber \\ & \equiv & -\kappa\int p(x)\ln [ p(x)]dx-\kappa\ln \delta x;\end{aligned}\ ] ] wherein is the probability density and denotes the precision with which the variable can be measured ( coarse graining of cell size ) . as is clearly seen from eq .( [ sh2 ] ) , the _ absolute _ entropy of a continuous variable is not well defined since it diverges in the limit . nevertheless , the _ entropy difference : = information _ is well - defined and _ does not _ depend on the precision .the generalization of information theory onto the case of stochastic processes is not trivial . in our case, the proper definition of entropy of the switch - point process , considered on the time interval ] in eq .( [ tau - s ] ) with respect to time we obtain after some involved algebra ( cf .appendix b ) the result }{dt}=-\kappa\sum_{\alpha = o , c}\bar k_{\alpha}(t)\ln \big(\bar k_{\alpha}(t)\delta\tau / e \big)p_{\bar\alpha}(t)\;,\end{aligned}\ ] ] where , if and _ vice versa_. together with eq . ( [ balance ] ) and the definition ( [ t - inf ] ) the prominent result in eq .( [ result1 ] ) allows one to express the -information for arbitrary signal through straightforward quadratures .the -information concept has been used in fact to analyze the information transfer in neuronal systems in ref . . however , the strong dependence of -information on the time precision presents surely an undesirable _ subjective _ feature . in search for _ objective _information measures we consider the information transfer in terms of the mutual information measure . to introduce the reader to the mutual information concept, we follow the reasoning of shannon : the signals are drawn from some statistical distribution characterized by the probability density functional ] for the corresponding stochastic processes and .moreover , one can define the averaged probability densities for the process _ in the presence of the process _ , where the path integral ] is the relative entropy of the _ averaged _ process defined similarly to eq .( [ kul ] ) , but with the averaged multi - time probability densities .the averaged information gain provides thus an upper bound for the mutual information .moreover , applying a weak gaussian signal in the limit one can show that the difference between the mutual information and the averaged information gain in ( [ ineq ] ) is of order , where denotes the r.m.s .amplitude of signals . on the other hand, it is shown below that the averaged information gain per unit time is of the order and does not depend , within the given lowest order approximation , on other statistical parameters of signal .thus , the upper bound for mutual information in eq .( [ ineq ] ) can indeed be achieved with an accuracy of .this fact opens a way to calculate the informational capacity for weak signals .the information gain can be evaluated from eq .( [ kul ] ) without further problems . by differentiating ] , and the noise averaged rates are given in the appendix a for a k channel in eqs .( [ r1 ] ) and ( [ r2 ] ) . in the case of stationary stochastic signals or for a periodic driving , eq .( [ res1 ] ) provides after stochastic averaging , or averaging over the driving period of applied voltage , respectively , the stationary rate of information gain . 
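the quadrature for the rate of information gain can be organized numerically as follows. the sketch integrates the kinetic equations ( [ balance ] ) with and without the signal and accumulates a relative-entropy rate; the kl-rate expression used is the standard girsanov-type formula for markov jump processes, assumed here to match the structure of eq. ( [ eq1 ] ), and the rate laws are the illustrative ones from the gating sketch above.

```python
# a sketch of the information-gain quadrature: master equation plus the
# accumulated relative-entropy rate between the driven and undriven point
# processes.  the girsanov-type kl rate below is an assumption standing in
# for the exact right-hand side of eq. (eq1); rate laws as assumed above.
import numpy as np

ko = lambda v: 0.03 / (1.0 + np.exp(-(v + 40.0) / 5.0))  # assumed forms
kc = lambda v: 0.1 * np.exp(-v / 40.0)

def info_gain(v_sig, v0, t_end=200.0, dt=1e-3):
    """total information gain (nats) of the signal v_sig(t) over constant v0."""
    p, K = 0.0, 0.0                     # p = P[open], start from the closed state
    k0_o, k0_c = ko(v0), kc(v0)         # rates without signal
    for i in range(int(t_end / dt)):
        v = v_sig(i * dt)
        ks_o, ks_c = ko(v), kc(v)       # rates with signal
        K += dt * ((1 - p) * (ks_o * np.log(ks_o / k0_o) - ks_o + k0_o)
                   + p * (ks_c * np.log(ks_c / k0_c) - ks_c + k0_c))
        p += dt * (ks_o * (1 - p) - ks_c * p)   # kinetic equation (balance)
    return K

v0, a, om = -55.0, 1.0, 2 * np.pi / 50.0        # weak sinusoidal signal (toy)
K = info_gain(lambda t: v0 + a * np.sin(om * t), v0)
print("information gain over the record: %.5f nats" % K)
```

note that only the probabilities and the rates enter, in line with the analytical result above; for a weak signal the printed gain should scale with the squared signal amplitude.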
for signals of finite durationthe total information gain is directly proportional to the total intensity of signal , as a result we find that weak signals of the the same intensity produce equal information gains .the occurrence of three different kinds of sr behavior , i.e. , periodic , aperiodic , and nonstationary sr clearly depends on the behavior of the form function _ vs. _ the r.m.s .noise amplitude .we recall that the static voltage ( membrane potential ) controls whether the ion channel is on average open or closed , cf .fig.[fig1 ] . in fig .[ fig2 ] , we depict the behavior of the function _ vs. _ the r.m.s . noise amplitude for different values of the applied static voltage .if the k ion channel is closed on average we observe that the information gain becomes strongly be amplified by noise , and even can pass through a maximum , i.e. sr occurs , cf .in contrast , when the stationary probability for an open channel becomes appreciably large , the addition of an additional dose of noise can only deteriorate the detection of signal . as a result ,the information gain decreases monotonically with increasing noise amplitude , cf .2b . this _no_-sr behavior occurs already at a static bias of mv yielding .note also , if the channel is predominantly open , the information gain becomes practically insensitive to the external noise , cf .the bottom curve in fig . 2b .although the information gain can slightly be increased versus the noisy intensity in this case ( ) , this effect is hardly of importance because the overall information gain diminishes drastically with increasing the static voltage ( see fig .the occurrence of sr in the considered single ion channel thus requires that the channel is predominantly resting in its closed state .let us now summarize the main results of this work .we have studied an illustrative two - state model for a single ion channel gating dynamics from an information theoretic point of view .the channel serves as an information channel transducing information from the applied time - dependent voltage signal to the ion current fluctuations .three different information theory measures have been developed to characterize stochastic resonance . from our viewpointit is advantageous to use an information measure which is independent of time resolution .we argued that the rate of information gain constitutes a unified characteristic measure for periodic ( conventional ) , aperiodic and nonstationary stochastic resonance . for conventional ( periodic ) sr and aperiodic srthis measure yields the averaged information gain per unit time .moreover , for weak stochastic signals it gives also the informational capacity , i.e. , the maximal mutual information which can be transferred per unit time for random signals with a fixed r.m.s .the concept of information gain can also be applied to the case of _ nonstationary _ deterministic signals with finite duration , i.e. , nonstationary sr , cf .( [ res2 ] ) , ( [ res3 ] ) .our main result is the closed formula for the rate of information gain in ( [ eq1 ] ) : it can be evaluated in a straightforward manner by using the corresponding probabilities of the two - state gating dynamics in ( [ balance ] ) .the information gain itself follows upon a time integration . in presence of weak drivingwe derived handy analytical results given in eqs .( [ res1 ] ) , ( [ res2 ] ) , and ( [ res3 ] ) . 
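the qualitative content of fig. 2 can be probed with the same machinery by dressing the rates with gaussian noise of r.m.s. amplitude sigma (via gauss-hermite quadrature) and scanning sigma at different static biases. whether a genuine maximum appears at intermediate noise depends on the assumed rate laws, so the sketch below is only a qualitative probe, not a reproduction of fig. 2.

```python
# noise scan sketch: information gain vs r.m.s. noise amplitude at a bias
# where the channel is mostly closed and at one where it is mostly open.
# rate laws and all numbers are the illustrative assumptions used above.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

nodes, w = hermegauss(41)
w = w / w.sum()                         # gaussian probability weights

ko = lambda v: 0.03 / (1.0 + np.exp(-(v + 40.0) / 5.0))
kc = lambda v: 0.1 * np.exp(-v / 40.0)

def avg(rate, v, s):                    # <rate(v + xi)>, xi ~ N(0, s^2)
    return float(np.sum(w * rate(v + s * nodes)))

def gain(v0, s, a=1.0, om=2 * np.pi / 50.0, t_end=200.0, dt=1e-2):
    p, K = 0.0, 0.0
    k0_o, k0_c = avg(ko, v0, s), avg(kc, v0, s)     # noise-dressed, signal off
    for i in range(int(t_end / dt)):
        v = v0 + a * np.sin(om * i * dt)
        ks_o, ks_c = avg(ko, v, s), avg(kc, v, s)   # noise-dressed, signal on
        K += dt * ((1 - p) * (ks_o * np.log(ks_o / k0_o) - ks_o + k0_o)
                   + p * (ks_c * np.log(ks_c / k0_c) - ks_c + k0_c))
        p += dt * (ks_o * (1 - p) - ks_c * p)
    return K

for v0 in (-60.0, -30.0):               # mostly closed vs mostly open (assumed)
    print("v0 = %5.1f mV:" % v0,
          " ".join("%.5f" % gain(v0, s) for s in (0.0, 2.0, 5.0, 10.0, 20.0)))
```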
for voltage input signals referring to a stationary process the averaged rate of the information gain is determined by the r.m.s .amplitude of the signal input and by the form factor . in the case of a nonstationary signal of finite duration , the total information gain is the product of this very form function and the integrated signal intensity .the experimental procedure of determining the rate of information gain can be formulated along the lines used for the -entropy in ref .first , one finds the corresponding probability histograms in the presence and in the absence of signal and then evaluates the information gain for the related binary stochastic chains .naturally , this so obtained information gain will still depend on the time resolution .however , in contrast to the -information , this experimentally determined information gain should exhibit a much weaker dependence on the time resolution . by using increasingly smaller time grids , the experimentally obtained rate of information gain will approach a definite value .our theoretical results have been applied to investigate the phenomenon of stochastic resonance in a potassium - selective _ shaker ir _ion channel , as depicted with in fig .interestingly enough , we find that periodic , aperiodic or nonstationary sr for this sort of ion channel , as quantified by the rate of information gain , is exhibited only for a situation in which the channel resides on average in the closed state .this type of behavior is rooted in the asymmetry of two rates and ; with depicting a characteristic steep , threshold - like behavior , cf .fig . 1 .our sr - feature is similar to the study of parallel sr in an array of alamethicin channels , although the two situations are , however , not directly comparable .we note that the amount of transmitted information crucially depends on the membrane potential . for the studied model ,the information transfer is optimized at zero noise level near mv when the opening probability becomes appreciable ( note the upper curve in fig .however , under such optimal conditions the addition of external noise has the effect of only further deteriorating the rate of information transfer ( fig .2b ) . upon further increasing the static bias the ion channel probability to stay open increases .the rate of information transfer then diminishes and becomes practically insensitive to the input noise level .these results hopefully will motivate researchers to measure the predicted sr behavior in single potassium ion channels . ever since the discovery of the sr phenomenon , the quest to use noise to optimize and control the transduction and relay of biological information has been one of the holy grails of sr research .given this challenge , such and related experiments are much needed in order to settle the issue in question .the authors gratefully acknowledge the support of this work by the german - israel - foundation g.i.f . 
g-411 - 018.05/95 , as well as by the deutsche forschungsgemeinschaft ( sfb 486 and ha1517/13 - 2 ) .the opening and closing rates for the effective two - state model can be found from the voltage - dependent average dwell times .the latter can be determined from the experimental recordings .the experimental dependence of the effective transition rates on voltage for the potassium - selective channel _ shaker ir _ embedded in the membrane of a _ xenopus _ oocyte at _ fixed _ temperature have been fitted by a hodgkin - huxley type of data parameterization .this corresponding fitting procedure yields which are depicted in fig .[ fig1 ] . note that we replace the original fit of the closing rate in by a new expression in eq .( a1 ) . unlike to ,our fit of experimental data in is valid now also for positive voltages .one should emphasize , that the two rates in eq .( a1 ) are strongly asymmetric with respect to their dependence on voltage . in particular , the opening rate depicts a steep , threshold - like behavior , see fig .1 . in this work we explicitly use these experimental findings in our calculations . the rates in eq .( [ rates ] ) are measured in and the voltage in mv . according to our model study, the input voltage reads when no additional signal is applied .these eqs .( [ rates ] ) must be averaged over the realizations of to obtain the noise averaged rates and . for a gaussian voltage noise averaging of the exponential in the first equation in ( [ rates ] ) is governed by the second cumulant , yielding where .the averaged opening rate unfortunately can not be analytically simplified further . however, this rate along with its derivative can readily be evaluated numerically from eq .( [ r2 ] ) .the purpose of this appendix to provide the readers with some details of calculation of the entropic measures for the continuous time random point two - state process considered in this paper .first , we note two useful properties of the multi - time probability densities which can be established from eqs .( [ qeven ] ) , ( [ qodd ] ) .namely , and the index in eqs . ( [ a1 ] ) , ( [ a2 ] ) takes the values ; and the index takes the value , if , and _vice versa_. using eqs .( [ a1 ] ) , ( [ a2 ] ) one can check that given in eq .( [ solution ] ) do satisfy eq .( [ balance ] ) .furthermore , let us consider the -entropy in eq .( [ tau - s ] ) as a sum of two contributions , =\kappa\sum_{\alpha = o , c}s_{\alpha}(t) ] in eq .( [ kul ] ) into the sum of two contributions , = \kappa\sum_{\alpha = o , c } k_{\alpha}(t) ] , with ( a. feinstein , _ foundations of information theory _ , ( mc graw hill , new york , 1958 ) ) determine then the shannon entropy uniquely .p. gaspard and x .- j .wang , phys .rep . * 235 * , 292 ( 1993 ) .f. rieke , d. warland , r. de ruyter van steveninck , and w. bialek , _ spikes : exploring the neural code _( mit press , cambridge , ma , 1997 ) .strong , r. koberle , r. r. de ruyter van steveninck , and w. bialek , phys .lett . * 80 * , 197 ( 1997 ) .the many facets of entropy are beautifully outlined in : a. wherl , rep .phys . * 30*,119 ( 1991 ) ; see also : c. beck and f. schlgl , _ thermodynamics of chaotic systems : an introduction _ , ( cambridge university press , cambridge , 1993 ) . the informational capacity of an informational channel is defined as the maximal rate of mutual information obtained for all possible statistical distributions of input signals withamplitude .a. l. hodgkin and a. f. huxley , j. physiol .* 117 * , 500 ( 1952 ) .
|
we identify a unifying measure for stochastic resonance (sr) in voltage dependent ion channels which comprises periodic (conventional), aperiodic and nonstationary sr. within the simplest setting, the gating dynamics is governed by two-state conductance fluctuations, which switch at random time points between two values. the corresponding continuous time point process is analyzed by virtue of information theory. in pursuing this goal we evaluate for our dynamics the τ-information, the mutual information and the rate of information gain. as a main result we find an analytical formula for the rate of information gain that solely involves the probability of the two channel states and their noise-averaged rates. for small voltage signals it simplifies to a handy expression. our findings are applied to study sr in a potassium channel. we find that sr occurs only when the closed state is predominantly dwelled. upon increasing the probability for the open channel state, the application of an extra dose of noise monotonically deteriorates the rate of information gain, i.e., no sr behavior occurs.
|
consider the distribution function of a beta random variable given by ^{-1 } \int _ { 0}^{y}t^{\alpha -1 } ( 1-t)^{\beta -1 } dt , \label{cs}\ ] ] for where and the beta function ] for in ( [ cs1 ] ) , respectively .the underlying distribution of income in thurow ( 1970 ) with _a=1 _ in ( [ tp1 ] ) is therefore also a special case with a distribution function _ f _ of a uniform distribution over the interval .the singh - maddala distribution with a density function of ^{\beta+1 } ] which is the distribution function of the scaled student t distribution on 2 degrees of freedom , with scaling factor .the skewed - t reduces to the symmetric student s t distribution when _ a = b _ and becomes skewed when _a. it is unimodal and heavy tailed , and the skewness measured based on the third moment is a monotone increasing function of _ a _ for fixed _ b _ and a monotone decreasing function of _ b _ for fixed _a_. the log - f distribution is a special case of family ( [ cs1 ] ) with the standard logistic distribution _f(x)= _ which can also be other types of generalized logistic distributions .et al_. ( 2002 ) presented examples of application areas including survival analyses in which log - normal , weibull , log - logistic , and generalized gamma was shown to be special cases of the log - f model ; see kalbfleish and prentice ( 1980 ) .the log - f is unimodal and can be symmetric , or skewed - to the left or to the right . a generalized four - parameter version of log - f with location parameter _ a _ , scale parameter _ b _, shape parameters replacing _ x _ by in the distribution function _f(x ) _ will be fit to the income data in this paper .nadarajan ( 2004)derived the moment generating function , skewness , kurtosis and other properties for the beta - exponential ( be ) distribution with an exponential _ f_. both measures of skewness and kurtosis are shown to decrease monotonically with the parameters and famoye , etc .( 2005 ) studied the beta - weibull distribution with _ _f(x)=__1-exp(- ) .the beta - weibull is unimodal , and the mode is at the point of 0 when _ b 1_. that is , beta - weibull distribution ( bw ) has a inversed - j shape when _b 1_. note that the exponential distribution is a special case of weibull distribution .nadarajan and ktoz ( 2004 ) investigated the unimodal beta - gumbel distribution in the hope of attracting wider applicability in engineering due the wide applications of the gumbel distribution in the field and showed that it has a single mode and an increasing hazard function . 
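the composition in ( [ cs ] )-( [ cs1 ] ) is straightforward to evaluate with the regularized incomplete beta function. the following sketch does so for an illustrative parent cdf _f_ (a log-logistic choice, made up here purely for demonstration and not one of the fitted models above).

```python
# a minimal sketch of the generalized beta family G(F(x)) of (cs)/(cs1);
# the log-logistic parent F below is an illustrative assumption.
import numpy as np
from scipy.stats import beta

def beta_f_cdf(x, a, b, F):
    """cdf of the family: regularized incomplete beta evaluated at F(x)."""
    return beta.cdf(F(x), a, b)

def beta_f_pdf(x, a, b, F, f):
    """density: f(x) * F(x)^(a-1) * (1 - F(x))^(b-1) / B(a, b)."""
    return beta.pdf(F(x), a, b) * f(x)

F = lambda x: x**2 / (1.0 + x**2)             # log-logistic cdf (shape 2)
f = lambda x: 2.0 * x / (1.0 + x**2) ** 2     # its density

xs = np.linspace(1e-3, 200.0, 400000)
dx = xs[1] - xs[0]
print("cdf at a few points:", beta_f_cdf(np.array([0.5, 1.0, 2.0]), 2.0, 3.0, F))
print("density integrates to", (beta_f_pdf(xs, 2.0, 3.0, F, f) * dx).sum())
```

the last line is a sanity check that the composed density integrates to one, which holds for any continuous parent cdf by the chain rule.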
* * the following table 1 lists moments and means for various beta-_f _ distributions to be fit to the size distribution of income .the means in the table will be calculated as a check of the validity of the parameters produced from computer algorithms in the next section .let be the distribution function of a normal random variable with mean and standard deviation and digamma the euler s psi function ; see gradshteyn and ryzhik ( 2000 ) define & mean + gb1 & & & + gb2 & & & + beta - normal(bn ) & & & + skew - t & & & + log - f & & & + beta - exponential(be ) & 1-exp(- ) & & + beta - weibull(bw ) & 1-exp(- ) & & +the income data were in a grouped format with only the frequency and mean income of each group given .g _ and _ g _ be the respective cumulative distribution and probability density function of a beta random variable in ( [ cs ] ) .let = = the column vectors of parameters associated with the beta distribution _g _ and the distribution function _ f _ in ( [ cs ] ) and ( [ cs1 ] ) , respectively . define the probability ^{-1 } \int _ { f(x_{i-1 } ) } ^{f(x_{i } ) } t^{\alpha -1 } ( 1-t)^{\beta -1 } dt .\label{hp1}\ ] ] it is the proportion of the population in the _ _ i__th of the _ r _ income groups defined by the interval .the likelihood function for the data is therefore given by ^{n_{i } } } { n_{i } ! } \ ] ] where , is the frequency of the _ _ i__th group and .the maximum log - likelihood estimators are obtained by maximizing it is well known that the resulting estimators by maximizing the multinomial likelihood function in ( [ hp2 ] ) is less efficient than the ones based on individual observation , it is asymptotically efficient . note that the group probability in ( [ hp1 ] ) can be obtained by first evaluating the cdf of a beta random variable at and then computing the difference between the two values .this reduces the complexity of programming required to calculate the integrations , because algorithms for evaluations of cdf are available readily in most statistical software .next , the first derivative will be presented .let (,)_t_. the first derivative of respect to note that the parameter vector are the parameters involved in the function _the derivatives of in ( [ hp1 ] ) with respect to and are given by ;\ ] ] ;\ ] ] \frac{df(x_{i } ; \theta _ { f } ) } { d\theta _ { f } } -g\left[f(x_{i-1 } ; \theta _ { f } ) ; \theta _ { g } , \theta _ { f } \right]\frac{df(x_{i-1 } ; \theta _ { f } ) } { d\theta _ { f } } ; \label{hp4}\ ] ] and = the nonlinear optimization subroutines in sas can be employed by specifying the equation in ( [ hp2])to be maximized and the gradient function in ( [ hp3 ] ) .both the likelihood function to be maximized and the gradient function vary with the distribution functions under consideration , and the resulting functional forms of ( [ hp4 ] ) can be tedious and therefore not presented for any consideration here .the nonlinear newton - raphson method in sas was employed with the specification of the function to be minimized and the corresponding gradient function .the income data were in a group format and can be found on the census bureau s web site .the first group consists of families making less than ,000 , and the last group of more than ,000 . 
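as a numerical cross-check of the mean column in table 1, the sketch below compares the standard gb2 mean formula, b B(p+1/a, q-1/a)/B(p, q) for aq > 1 (assumed here to coincide with the table entry), against direct integration of the gb2 density; the parameter values are made up for illustration.

```python
# sanity check for the mean column of table 1, gb2 case: closed-form mean
# vs numerical integration of the gb2 density
#   gb2(x) = a x^(a p - 1) / ( b^(a p) B(p, q) (1 + (x/b)^a)^(p + q) ).
import numpy as np
from scipy.special import beta as B
from scipy.integrate import quad

a, b, p, q = 2.0, 50.0, 1.5, 2.5        # illustrative values (need a*q > 1)

def gb2_pdf(x):
    return a * x**(a*p - 1) / (b**(a*p) * B(p, q) * (1 + (x/b)**a)**(p + q))

closed_form = b * B(p + 1/a, q - 1/a) / B(p, q)
numeric, _ = quad(lambda x: x * gb2_pdf(x), 0.0, np.inf)
print("closed form: %.4f   numeric: %.4f" % (closed_form, numeric))
```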
in the evaluation of( [ hp1 ] ) and ( [ hp2 ] ) , the value of the cdf set to be 0 at the lower boundary of the first class and 1 at the upper boundary of the last class in our sas programs .the results for years 2003 , 2004 and 2005 are reported in tables 2 , 3 and 4 .the mean ( ) and frequency ( ) for each group are reported on the census bureau s web site and the approximated mean income for each year can be calculated by .the approximated sample mean incomes ( in ,000 ) for 2003 , 2004 and 2005 are 6.598 , 6.140 and 7.040 , respectively . note that year 2004 has the lowest means among the three years .the estimated means in the following tables are calculated using the estimated parameter values in the mean expressions given table 1 .the resulting estimated means using the skewed - t appear to overestimate .the sum of squared errors ( sse ) between the relative frequency the estimated frequency or the absolute errors ( sae ) , and chi - square are also reported .the generalized four - parameter log - f distribution appears to yield the best fit in terms of chi - squares and sae , and the generalized beta of the second type ( gb2 ) in terms of sse .overall , the log - f performs well which is consistent with jones belief that log - f provides the most tractable instances of families with power and exponential tails .the two - parameter skew-_t _ performs relatively poor in the results . as in mcdonald ( 1984 ), the generalized beta of the second type provides better fit than the generalized beta of the first type ( gb1 ) . trailing behind the log - f and gb2is the beta - weibull .the three - parameter beta - exponential and beta - weibull provide better fit than the gb1 in terms of all measures of goodness fit .thought the skew - t has second worst performance , it appears to perform much better than beta - normal .the beta - normal distribution noticeably performs the worst .note that the normal distribution itself is a poor fit for skewed data .next , in order to have a better picture on how the tail for each distribution fitted to the data , the estimated density functions based on the 2005 income data are presented in the following graph . the skewed - t appears to result a thicker tail than others . in summary, the log - f provides the best relative fit and then followed by the generalized beta of the second type . among other distributions in the family of the generalized beta distribution that were fit to the data , the beta - normal appears to perform poorly .the two - parameter skew - t distribution can probably extended to four - parameter one whose mathematical properties including moments and shapes needs further studied .jones , m. c. ( 2001 ) . a skew - t distribution , in country - regionc.a .charalambides , m. v. kourtras , and placen .balarishnan , eds , ._ probability and statistical models with applications _ , pp .269 - 278 .chapman and hall , london .
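as a compact numerical companion to the grouped-data estimation described above, here is a hedged sketch for the beta-exponential member of the family: group probabilities are cdf differences of the composed law, as in ( [ hp1 ] ), and the multinomial log-likelihood of ( [ hp2 ] ) is maximized numerically. scipy's bfgs stands in for the sas newton-raphson routine, and the group bounds and counts are toy numbers, not the census data.

```python
# toy grouped-data mle for the beta-exponential member of the family;
# bounds/counts are illustrative, not the census figures.
import numpy as np
from scipy.stats import beta
from scipy.optimize import minimize

bounds = np.array([0., 10., 20., 35., 50., 75., 100., np.inf])  # $1000s (toy)
counts = np.array([5., 18., 27., 22., 15., 8., 5.])             # toy n_i

def F(x, lam):                            # exponential parent cdf
    return np.where(np.isinf(x), 1.0, 1.0 - np.exp(-lam * np.minimum(x, 1e12)))

def neg_loglik(theta):
    a, b_, lam = np.exp(theta)            # log-parameterization keeps all > 0
    cdf = beta.cdf(F(bounds, lam), a, b_) # cdf = 0 at x_0 and 1 at x_r
    pgrp = np.clip(np.diff(cdf), 1e-300, 1.0)
    return -np.sum(counts * np.log(pgrp))

res = minimize(neg_loglik, x0=np.log([1.0, 1.0, 0.05]), method="BFGS")
print("fitted (alpha, beta, lambda):", np.exp(res.x))
```

the boundary convention matches the text: the composed cdf is forced to 0 at the lower bound of the first class and to 1 at the upper bound of the last class.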
|
the mathematical properties of a family of generalized beta distributions, including the beta-normal, skewed-t, log-f, beta-exponential and beta-weibull distributions, have recently been studied in several publications. this paper applies these distributions to the modeling of the size distribution of income and computes the maximum likelihood estimates of their parameters. their performances are compared to the widely used generalized beta distributions of the first and second types in terms of measures of goodness of fit.
|
in this paper we consider the following system of degenerate parabolic equations { \partial}_th=&{\partial}_x\left ( f{\partial}_xf\right)+r_\mu{\partial}_x\left [ ( h - f ) { \partial}_xh\right]+r { \partial}_x\left ( f{\partial}_xh\right ) , \end{array } \right .{ \quad ( t , x)\in ( 0,\infty)\times ( 0,l),}\ ] ] which models two - phase flows in porous media under the assumption that the thickness of the two fluid layers is small .indeed , the system has been obtained in by passing to the limit of small layer thickness in the muskat problem studied in ( with homogeneous neumann boundary condition ) .similar methods to those presented in have been used in and , where it is rigorously shown that , in the absence of gravity , appropriate scaled classical solutions of the stokes and one - phase hele - shaw problems with surface tension converge to solutions of thin film equations with for stoke s problem and for the hele - shaw problem . in our setting a nonnegative function expressing the height of the interface between the fluids while is the height of the interface separating the fluid located in the upper part of the porous medium from air , cf .figure [ f:1 ] .{figure.eps}\ ] ] we assume that the bottom of the porous medium , which is located at is impermeable and that the air is at constant pressure normalised to be zero .the parameters and are given by where , [ resp . , denote the density and viscosity of the fluid located below [ resp . above ] in the porous medium .of course , we have to supplement system with initial conditions and we impose no - flux boundary conditions at and : it turns out that the system is parabolic if we assume that and that is , the denser fluid lies beneath .existence and uniqueness of classical solutions to have been established in this parabolic setting in .furthermore , it is also shown that the steady states of are flat and that they attract at an exponential rate in solutions which are initially close by . in this paperwe are interested in the degenerate case which appears when we allow and on some subset of owing to the loss of uniform parabolicity , existence of classical solutions can no longer be established by using parabolic theory and we have to work within an appropriate weak setting . furthermore , the system is quasilinear and , as a further difficulty , each equation contains highest order derivatives of both unknowns and , i.e. it is strongly coupled . in order to study the problem we shall employ some of the methods used in to investigate the spreading of insoluble surfactant .however , in our case the situation is more involved since we have two sources of degeneracy , namely when and become zero .it turns out that by choosing as unknowns , the system is more symmetric : { \partial}_tg=&r_\mu{\partial}_x\left ( g{\partial}_xf\right)+r_\mu { \partial}_x\left(g{\partial}_xg\right ) , \end{array } \right . { \quad ( t , x)\in ( 0,\infty)\times ( 0,l),}\ ] ] since, up to multiplicative constants , the first equation can be obtained from the second by simply interchanging and .corresponding to we introduce the following energy functionals : \ , dx\ ] ] and \ , dx.\ ] ] it is not difficult to see that both energy functionals and dissipate along classical solutions of . 
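for readers who wish to experiment numerically, the following is a hedged finite-difference sketch of the symmetric (f, g) system. the f-equation is written here in the conservative gradient-flow form consistent with the dissipation identity for the quadratic energy (an assumption on our part), namely f_t = ( f ((1+R) f + R g)_x )_x together with the displayed g_t = R_mu ( g (f + g)_x )_x; the grid, parameters and initial data are illustrative.

```python
# explicit finite-volume sketch of  f_t = ( f ((1+R) f + R g)_x )_x ,
# g_t = R_mu ( g (f + g)_x )_x  with no-flux boundaries; the form of the
# f-equation is inferred from the dissipation identity (an assumption).
# mass is conserved exactly by construction; small undershoots below zero
# are possible for this naive scheme near the degenerate fronts.
import numpy as np

L, N, R, Rmu = 1.0, 200, 1.0, 1.0
dx = L / N
x = (np.arange(N) + 0.5) * dx                   # cell centers
f = np.maximum(0.0, 0.3 - np.abs(x - 0.3))      # compactly supported bumps,
g = np.maximum(0.0, 0.3 - np.abs(x - 0.7))      # so both degeneracies occur
dt = 0.2 * dx**2                                # safe: mobility*(1+R) <~ 0.6

def div_flux(mob, pot):
    """returns d/dx of the flux -mob * d(pot)/dx, with zero flux at both ends."""
    q = np.zeros(N + 1)
    mob_face = 0.5 * (mob[1:] + mob[:-1])       # arithmetic face average
    q[1:-1] = -mob_face * np.diff(pot) / dx
    return np.diff(q) / dx

for _ in range(50000):
    f_new = f - dt * div_flux(f, (1.0 + R) * f + R * g)
    g_new = g - dt * div_flux(Rmu * g, f + g)
    f, g = f_new, g_new

print("masses:", f.sum() * dx, g.sum() * dx)    # should equal the initial ones
print("minima:", f.min(), g.min())              # nonnegativity up to undershoot
```

setting g identically zero reduces the update to a porous-medium-type equation for f, in line with remark [ rem:1 ].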
while in the classical setting the functional plays an important role in the study of the stability properties of equilibria , in the weak setting we strongly rely on the weaker energy which , nevertheless , provides us with suitable estimates for solutions of a regularised problem and enables us to pass to the limit to obtain weak solutions .note also that appears quite natural in the context of , while , when considering , one would not expect to have an energy functional of this form .our main results read as follows : [ t:1 ] assume that given with and there exists a global weak solution of satisfying * , in * for all and & \int_0^l g(t)\psi\ , dx-\int_0^l g_0\psi\ , dx= { - } r_\mu\int_0^t\int_0^l \left(g{\partial}_xf+ g{\partial}_xg\right){\partial}_x\psi\ , dx\ , dt\end{aligned}\ ] ] for all moreover , the weak solutions satisfy ( b ) \quad & { { \mathcal e}}_1(f(t),g(t))+\int_0^t\int_0^l \left [ \frac{1}{2 } |{\partial}_xf|^2 + \frac{r}{1 + 2r } |{\partial}_xg|^2 \right]\ , dx\ , dt\leq { { { \mathcal e}}_1(f_0,g_0),}\\[1ex ] ( c)\quad & { { \mathcal e}}_2(f(t ) , g(t))+\int_0^t\int_0^l \left [ f\left((1+r){\partial}_xf+r{\partial}_xg\right)^2+rr_\mu g({\partial}_xf+{\partial}_xg)^2 \right]\ , dx\ , dt\leq { { \mathcal e}}_2(f_{0},g_{0})\end{aligned}\ ] ] for almost all . [ rem:1 ] if for instance , a solution to is where solves the classical porous medium equation in with homogeneous neumann boundary conditions and initial condition . additionally to the existence result, we show that the weak solutions constructed in theorem [ t:1 ] converge at an exponential rate towards the unique flat equilibrium ( which is determined by mass conservation ) in the : [ t:2 ] under the assumptions of theorem [ t:1 ] , there exist positive constants and such that [ rem:2 ] theorem [ t:2 ] suggests that degenerate solutions become classical after evolving over a certain finite period of time , and therefore would converge in the towards the corresponding equilibrium , cf . .the outline of the paper is as follows . in section [ sec:1 ]we regularise the system and prove that the regularised system has global classical solutions , the global existence being a consequence of their boundedness away from zero and in the purpose of the regularisation is twofold : on the one hand , the regularised system is expected to be uniformly parabolic and this is achieved by modifying and the initial data such that the comparison principle applied to each equation separately guarantees that and for some . on the other hand , the regularised system is expected to be weakly coupled , a property which is satisfied by a suitable mollification of in the first equation of and in the second equation of . the energy functional turns out to provide useful estimates for the regularised system as well . in section [ sec:2 ]we show that the classical global solutions of the regularised problem converge , in appropriate norms , towards a weak solution of , and that they satisfy similar energy inequalities as the classical solutions of for both energy functionals and finally , we give a detailed proof of theorem [ t:2 ] . throughout the paper, we set and for ] .we also denote positive constants that may vary from line to line and depend only on , , , and by or , .the dependence of such constants upon additional parameters will be indicated explicitly .in this section we introduce a regularised system which possesses global solutions provided they are bounded in and also bounded away from zero . 
in section [ sec:2 ] we show that these solutions converge towards weak solutions of .we fix two nonnegative functions and in ( the initial data of system ) and first introduce the space we note that for each the elliptic operator is an isomorphism . this property is preserved when considering the restriction )\,:\,{\partial}_x f(0)={\partial}_xf(l)=0\ } \to c^{\alpha}([0,l ] ) , \qquad \alpha\in ( 0,1).\ ] ] given we let then and consider the following regularised problem [ eq : rs ] { \partial}_tg_{\varepsilon}=&r_\mu{\partial}_x\left ( ( g_{\varepsilon}-{\varepsilon}){\partial}_x{f_{\varepsilon}}\right)+r_\mu { \partial}_x\left(g_{\varepsilon}{\partial}_x{g_{\varepsilon}}\right ) , \end{array } \right . { \quad( t , x)\in ( 0,\infty)\times ( 0,l),}\ ] ] supplemented with homogeneous neumann boundary conditions and with regularised initial data note that the regularised initial data and , invoking the elliptic maximum principle , we have letting and we obtain by multiplying the relation by and integrating over the following relation which gives a uniform -bound for the regularised initial data : concerning the solvability of problem , we use quasilinear parabolic theory , as presented in , to prove the following result : [ t:3 ] for each problem possesses a unique global nonnegative solution with moreover , we have in order to prove this global result , we establish the following lemma which gives a criterion for global existence of classical solutions of : [ l:1 ] given , the problem possesses a unique maximal strong solution on a maximal interval satisfying moreover , if for every there exists such that }\|x_{\varepsilon}(t)\|_{h^1}\le c({\varepsilon},t),\ ] ] then the solution is globally defined , i.e. let be fixed . to lighten our notationwe omit the subscript in the remainder of this proof .note first that problem has a quasilinear structure , in the sense that is equivalent to the system of equations : { \mathcal{b}}x&=&0&\text{on } & ( 0,\infty)\times \{0,l\},\\[1ex ] x(0)&=&x_{0}&\text{on } & ( 0,l ) , \end{array } \right.\ ] ] where the new variable is with and the operators and are respectively given by r_\mu ( g-{\varepsilon}){\partial}_x{f } \end{array } \right).\ ] ] letting 0&r_\mu g \end{array } \right),\ ] ] the operator is defined by the relation we shall prove first that has a weak solution defined on a maximal time interval , for which we have a weak criterion for global existence .we then improve in successive steps the regularity of the solution to show that it is actually a classical solution , so that this criterion guarantees also global existence of classical solutions .given ] is known to satisfy , cf . , \{f\in h^{2\alpha}\ , : \,\text { for }\}&,&\alpha>3/4 .\end{array } \right.\ ] ] furthermore , for each ] meaning that the elements of belong to ) ] is bounded in and bounded away from for all then which yields the desired criterion .we show now that this weak solution has even more regularity , to conclude in the end that the existence time of the strong solution of coincides with that of the weak solution ( of course they are identical on each interval where they are defined ) .indeed , given it holds that hence , if we conclude from ( * ? ? ?* theorem 7.2 ) and ( * ? ? ?* proposition 1.1.5 ) that is actually hlder continuous choosing and we have that ), ] for all defining , we observe that and finally , with , our choice for implies and the assertions of ( * ? ? 
?* theorem 11.3 ) are all fulfilled .whence , the linear problem { \mathcal{b}}y&=&0&\text{on } & ( \delta , { t_+})\times \{0,l\},\\[1ex ] y(\delta)&=&x(\delta)&\text{on } & ( 0,l ) , \end{array } \right.\ ] ] possesses a unique strong , that is in view of ( * ? ? ?* remark 11.1 ) we conclude that both and are weak of , whence we infer from ( * ? ? ? * theorem 11.2 ) that and so interpolating as we did previously and taking into account that was arbitrarily chosen , we have )) ] and , using a similar argument for , we conclude that next , owing to and the nonnegativity of and , it readily follows from that the of and is conserved in time , that is , for all .the next step is to improve the previous to an as required by lemma [ l:1 ] . to this end , we shall use the energy for the regularised problem , see below . as a preliminary step ,we collect some properties of the functions defined in in the next lemma .[ l:2.5 ] for all the proof of is similar to that of .we next multiply the equation by , integrate over and use the cauchy - schwarz inequality to estimate the right - hand side and obtain and . [ l:3 ] given we have that using and hlder s inequality , we get = & -\int_0^l\left((1+r)|{\partial}_xf_{\varepsilon}|^2+r\frac{f_{\varepsilon}-{\varepsilon}}{f_{\varepsilon}}{\partial}_xf_{\varepsilon}{\partial}_xg_{\varepsilon}+r\frac{g_{\varepsilon}-{\varepsilon}}{g_{\varepsilon}}{\partial}_xg_{\varepsilon}{\partial}_xf_{\varepsilon}+r|{\partial}_xg_{\varepsilon}|^2\right)\ , dx\\[1ex ] \leq & { - ( 1+r)\|{\partial}_xf_{\varepsilon}\|_2 ^ 2 + r \|{\partial}_x f_{\varepsilon}\|_2\|{\partial}_x g_{\varepsilon}\|_2 + r \|{\partial}_x g_{\varepsilon}\|_2 \|{\partial}_x f_{\varepsilon}\|_2 - r \|{\partial}_x g_{\varepsilon}\|_2 ^ 2}\\[1ex ] \leq & -\frac{1}{2}\|{\partial}_xf_{\varepsilon}\|_2 ^ 2 - \left ( \frac{1 + 2r}{2 } \|{\partial}_xf_{\varepsilon}\|_2 ^ 2 - 2r\|{\partial}_xf_{\varepsilon}\|_2\|{\partial}_xg_{\varepsilon}\|_2+r\|{\partial}_xg_{\varepsilon}\|_2 ^ 2\right)\\[1ex ] \leq&-\frac{1}{2}\|{\partial}_xf_{\varepsilon}\|_2 ^ 2-\frac{r}{1 + 2r}\|{\partial}_xg_{\varepsilon}\|_2 ^ 2.\end{aligned}\ ] ] integrating with respect to time , we obtain the desired assertion . since for all relation gives a uniform estimate in of in in dependence only of the initial condition . indeed , on the one hand , since for all , we have so that for all by . on the other hand , owing to the poincar - wirtinger inequality and , we have a similar bound being available for , we infer from , , and the nonnegativity of that , for , \le c_2(t ) . \label{spip}\end{aligned}\ ] ] we next use this estimate to prove that the solution of is bounded in for all . while the estimates were independent of up to now , the next ones have a strong dependence upon which explains the need of a regularisation of the original system . 
[ l:4 ]given there exists a constant such that the solution of fulfills ..}\ ] ] using and the poincar - wirtinger inequality , we finally obtain that ] , we let furthermore , given we let be the global strong solution of the regularised problem constructed in theorem [ t:3 ] .we shall prove that converges , in appropriate function spaces over , towards a pair of functions which turns out to be a weak solution of in the sense of theorem [ t:1 ] .recall that , by , , , and , satisfies the following estimates ( b)&\quad \|f_{0{\varepsilon}}\|_{2}+\|g_{0{\varepsilon}}\|_{2}\leq\|f_{0}\|_{2}+\|g_0\|_{2}+2 { \sqrt{l}},\\[1ex ] ( c)&\quad \|f_{\varepsilon}(t)\|_1=\|f_0\|_1+{\varepsilon}l , \qquad \|g_{\varepsilon}(t)\|_1=\|g_0\|_1+{\varepsilon}l , \\[1ex ] ( d)&\quad { { \mathcal e}}_1(f_{\varepsilon}(t ) , g_{\varepsilon}(t))+\int_0^t \left ( \frac{1}{2 } \|{\partial}_xf_{\varepsilon}\|_2 ^ 2+\frac{r}{1 + 2r } \|{\partial}_x g_{\varepsilon}\|_2 ^ 2 \right)\ , ds\leq { { \mathcal e}}_1(f_{\varepsilon}(0 ) , g_{\varepsilon}(0 ) ) , \end{aligned}\ ] ] for . using , we show that : [ l:5 ] let there exists a positive constant such that , for all we have \label{eq : ue2 } ( ii)\quad&\int_0^t\|{\partial}_t { h_{\varepsilon}(t)}\|_{(w^1_6)'}^{6/5}\ , dt\leq { c_4(t)}.\end{aligned}\ ] ] the estimate for in is obtained from the energy estimate , by taking also into account relations , , , , the nonnegativity of , and the poincar - wirtinger inequality as in the proof of . in order to prove the second estimate of , we note that , since is continuously embedded in and is bounded in , is uniformly bounded with respect to in for all the claimed then follows from the inequality .next , an obvious consequence of the definition of is that for ] provided that this will allow us to identify a limit point for each of these sequences , and find in this way a candidate for solving .indeed , we have : invoking the rellich - kondrachov theorem , we have the following sequence of embeddings )\hookrightarrow \left(w^1_6\right)',\qquad \alpha<1/2,\ ] ] with compact embedding ). ] using the uniform estimates deduced at the beginning of this section , we now establish the existence of a weak solution of .owing to lemma [ l:6 ] , there are )) } \end{aligned}\ ] ] furthermore , by lemma [ l:5 ] , the subsequences and are uniformly bounded in the hilbert space hence , we may extract further subsequences ( denoted again by and ) which converge weakly : in fact , we have that indeed , follows by multiplying the relation by a test function in , integrating by parts , and letting then with the help of and . 
in view of - , we then have & g_{{\varepsilon}_k}{\partial}_x f_{{\varepsilon}_k}\rightharpoonup g{\partial}_xf ,\quad g_{{\varepsilon}_k}{\partial}_xg_{{\varepsilon}_k}\rightharpoonup g{\partial}_x g\qquad \text{in } \end{aligned}\ ] ] using the fact that are strong solutions of , we obtain by integration with respect to space and time that \ { \partial}_x\psi\ , dx\ , dt,\\[1ex ] \int_0^l g_{{\varepsilon}_k}(t)\psi\ , dx-\int_0^lg_{0{\varepsilon}_k}\psi\ , dx= & { - } r_\mu\int_{q_t } \left [ ( g_{{\varepsilon}_k}-{\varepsilon}_k ) { \partial}_x{f_{{\varepsilon}_k } } + g_{{\varepsilon}_k } { \partial}_x{g_{{\varepsilon}_k } } \right]\ { \partial}_x\psi\ , dx\ , dt , \end{aligned}\ ] ] for all and since by classical arguments , we may pass to the limit as in and use , , , and to conclude that is a weak solution of in the sense of theorem [ t:1 ] .the fact that can be defined globally follows by using a standard cantor s diagonal argument ( using a sequence ) .we show now that the weak solution found above satisfies the energy estimate for recall that , by lemma [ l:3 ] , we have for all on the one hand , note that and fatou s lemma ensure that while implies \int_{q_t}|{\partial}_xg|^2\ , dx\ , dt&\leq\liminf_{k\to\infty}\int_{q_t}|{\partial}_xg_{{\varepsilon}_k}|^2\ , dx\ , dt .\end{aligned}\ ] ] we still have to pass to the limit in the right - hand side of . by, we may assume that and converge almost everywhere towards and respectively .furthermore , since for we have for all and all measurable subsets of meaning that the family is uniformly integrable . clearly , the same is true also for we infer then from vitali s convergence theorem , cf .* theorem 2.24 ) , that the limit of the right - hand side of exists and whence , passing to the limit in , we obtain in view of and the desired estimate .finally , we show that weak solutions of satisfy \ , dx\ , dt\leq { { \mathcal e}}_2(f_{0},g_{0})\ ] ] for in virtue of , , and we have which implies that and in consequently , and , by a similar argument , now , owing to , the sequence defined in lemma [ l:4.5 ] is bounded in and we then infer from lemma [ l:4.5 ] that both and are bounded in .the previous weak convergences in may then be improved to weak convergence in ( upon extracting a further subsequence if necessary ) and we can then pass to the limit in to conclude that holds true , using weak lower semicontinuity arguments in the left - hand side and the property in in the right - hand side .in this last part of the paper we prove our second main result , theorem [ t:2 ] . the proof is based on the interplay between estimates for the two energy functionals and with the specification that we use to estimate the time derivative of the stronger energy functional , and obtain exponential decay of weak solutions in the .recall from that for all and introducing \ , dx\\[1ex ] & + \frac{1}{2}\int_0^l \left\ { ( f_{{\varepsilon}_k}-a_k)^2+r\left[(f_{{\varepsilon}_k}-a_k)^2+(g_{{\varepsilon}_k}-b_k)^2+(g_{{\varepsilon}_k}-b_k)(f_{{\varepsilon}_k}-a_k)\right . 
\right.\\[1ex ] & \hspace{3cm}\left .\left.+(f_{{\varepsilon}_k}-a_k)(g_{{\varepsilon}_k}-b_k)\right ] \right\}\ , dx,\end{aligned}\ ] ] we infer from ( c ) and the proofs of lemma [ l:3 ] and [ l:4.5 ] that \ , dx\\[1ex ] & + \frac{1}{2 } \frac{d}{dt}\int_0^l \left\ { ( 1+r)f_{{\varepsilon}_k}^2+r\left[g_{{\varepsilon}_k}^2+f_{{\varepsilon}_k}g_{{\varepsilon}_k}+g_{{\varepsilon}_k}f_{{\varepsilon}_k}\right ] \right\}\ , dx\\[1ex ] \leq&-\frac{1}{2}\|{\partial}_xf_{{\varepsilon}_k}\|_2 ^ 2-\frac{r}{1 + 2r}\|{\partial}_x g_{{\varepsilon}_k}\|_2 ^ 2 + { \varepsilon}_k c_3 \left ( \|{\partial}_xf_{{\varepsilon}_k}\|_2 ^ 2+\|{\partial}_xg_{{\varepsilon}_k}\|_2 ^ 2 \right)\\[1ex ] \leq & -\frac{1}{3 } \|{\partial}_xf_{{\varepsilon}_k}\|_2 ^ 2-\frac{r}{2 + 2r}\|{\partial}_x g_{{\varepsilon}_k}\|_2 ^ 2 \end{aligned}\ ] ] provided is large enough . using the poincar - wirtinger inequality, we find a positive constant such that for large and all we show now that the right - hand side of can be bounded by for some small positive number . indeed , arguing as in lemma [ l:2.5 ] , we find that \ , dx \right|\nonumber\\[1ex ] & & \phantom{space } { \leq \|g_{{\varepsilon}_k}-b_k\|_2 \|f_{{\varepsilon}_k}-a_k\|_2 + \|f_{{\varepsilon}_k}-a_k\|_2 \|g_{{\varepsilon}_k}-b_k\|_2 } \nonumber\\[1ex ] & & \phantom{space } { \leq 2\|g_{{\varepsilon}_k}-b_k\|_2 \|f_{{\varepsilon}_k}-a_k\|_2 } \leq \|f_{{\varepsilon}_k}-a_k\|_2 ^ 2+\| g_{{\varepsilon}_k}-b_k\|_2 ^ 2.\label{eq : pp1 } \end{aligned}\ ] ] recalling , we end up with & \phantom{space}\geq \int_0^l \left [ a_k^2\left(\frac{f_{{\varepsilon}_k}}{a_k}\ln\left ( \frac{f_{{\varepsilon}_k}}{a_k } \right)-\frac{f_{{\varepsilon}_k}}{a_k}+1\right)+b_k^2\left(\frac{g_{{\varepsilon}_k}}{b_k}\ln\left ( \frac{g_{{\varepsilon}_k}}{b_k } \right ) -\frac{g_{{\varepsilon}_k}}{b_k}+1\right ) \right]\ , dx\\[1ex ] & \phantom{space}\geq \ ! \min_k\left\{a_k,\frac{r_\mu b_k}{r}\right\}\int_0^l \left [ \left(f_{{\varepsilon}_k}\ln\left ( \frac{f_{{\varepsilon}_k}}{a_k } \right ) -f_{{\varepsilon}_k}+a_k\right)+\frac{r}{r_\mu } \left(g_{{\varepsilon}_k}\ln\left ( \frac{g_{{\varepsilon}_k}}{b_k } \right ) - g_{{\varepsilon}_k}+b_k\right ) \right]\ , dx . \end{aligned}\ ] ] combining , , and , we conclude that if and then for some positive constant and sufficiently large .whence , which yields , for , the desired estimate by and , as stated in theorem [ t:2 ] . if [ resp . , then [ resp . , while [ resp . ] is a weak solution of the one - dimensional porous medium equation and converges therefore even in the to flat equilibria ( if , then uniqueness of solutions to implies that and ) , cf .* theorem 20.16 ) .convergence in this stronger norm is due to the fact that comparison methods may be used for the one - dimensional porous media equation , while for our system they fail because of the structure of the system .+ 9999 : _ nonhomogeneous linear and quasilinear elliptic and parabolic boundary value problems , _ in : h. schmeisser , h. triebel ( eds . ) , function spaces , differential operators and nonlinear analysis . teubner - texte zur math .133 , 9126 , teubner , stuttgart , leipzig 1993 .
|
we prove global existence of nonnegative weak solutions to a degenerate parabolic system which models the interaction of two thin fluid films in a porous medium . furthermore , we show that these weak solutions converge at an exponential rate towards flat equilibria .
|
random matrix theory ( rmt ) , which is capable of eliminating random properties from financial time series , has been previously introduced and applied in the field of finance . the rmt employs the eigenvalues and eigenvectors of a correlation matrix to generate time series data with various properties . it has been verified that the eigenvalues which lie beyond the range predicted for a random matrix bear certain economic implications , such as the market factor and industrial factors . meanwhile , many studies that have employed the rmt in econophysics are quite similar to studies addressing the deterministic factors of the stock pricing mechanism in the financial field . these studies are also reminiscent of principal component analysis , a multivariate statistical method used to examine deterministic factors . in the field of finance , such studies have been combined to develop pricing mechanism models , including the one- , three- , and multi - factor models . the deterministic factors utilized in each model are the market , industrial , macro - economic , and company factors ; these do not differ from the results confirmed by the rmt . identifying the factors that affect the value of the eigenvalue has been an interesting research topic , because the eigenvalue is a crucial parameter not only in finance studies based on multivariate statistics , but also in econophysics studies based on the rmt . as was the case in previous studies , the values of eigenvalues elicited from the financial time series data of various countries differ , and clear differences were determined to exist in the largest eigenvalue . among the influential factors mentioned thus far in studies involving the rmt , the length of the time series data and the number of stocks influence the eigenvalue probability density function of the correlation matrix . the findings of finance studies suggested that the largest eigenvalue contributes a large fraction to the variance of returns , and that its relative importance increases with the number of stocks more dramatically than the others . that is to say , the value of the eigenvalue is clearly affected by the number of stocks . these studies employed multivariate statistics techniques ( the approximate factor model ) . in this study , we investigate empirically the relationship between the eigenvalues obtained via the rmt and the number of stocks comprising the correlation matrix , as the number of stocks increases . also , unlike previous studies , we reinforce these results by assessing whether the properties of the eigenvalues change as a function of the numbers and types of stocks within the correlation matrix . we determined that the eigenvalue elicited via the rmt method is directly affected by the variation in the number of stocks in the correlation matrix . on the other hand , the largest eigenvalue maintains its properties regardless of changes in the numbers and types of stocks in the correlation matrix , whereas other eigenvalues that exceed the range of the random matrix evidence different properties when the number and types of stocks change . these results suggest that although the largest eigenvalue is affected directly by the number of stocks in the correlation matrix , its properties do not change . this paper is constructed as follows . after the introduction , chapter ii describes the data and methods employed in this study .
in chapter iii , we show the results obtained in relation to our established research aims . finally , we summarize the findings and conclusions of this study . we evaluated the daily data of stock prices on the korean and japanese markets ( from datastream ) . the stocks were selected via the following process . first , we selected stocks with consecutive daily stock prices for the 18 years from january 1990 to december 2007 . second , the stocks in industry sectors with four or fewer stocks were excluded . third , the stocks with extreme outliers , in terms of the descriptive statistics of stock returns , skewness , and kurtosis , were also excluded . the data selected were 358 stocks from the korean kospi and 1,099 stocks from the japanese topix . the stock returns , $r_i(t)=\ln p_i(t)-\ln p_i(t-1)$ , were calculated as the logarithmic changes of the prices , in which $p_i(t)$ represents the price of stock $i$ on day $t$ . the number of stocks was determined as follows . the minimum number of stocks in the correlation matrix is set at 50 , with an increment of 10 . for the korean market , the number begins at the minimum value of 50 , and was increased in increments of 10 for 16 rounds , up to 200 . for the japanese stocks , the number was increased for 36 rounds , up to 400 . in order to minimize the selection bias , 100 iterations were conducted for each number of stocks , and the types of stocks in each iteration are not identical . the rmt was introduced as a method for the control and adjustment of a correlation matrix with measurement errors in a financial time series . according to the statistical properties of a correlation matrix created by purely random interactions , if the length of the time series , $T$ , and the number of stocks , $N$ , tend to infinity with $Q=T/N$ fixed , the probability density function $\rho(\lambda)$ of the eigenvalues $\lambda$ of the correlation matrix is given by
\[
\rho(\lambda)=\frac{Q}{2\pi}\,\frac{\sqrt{(\lambda_{\max}-\lambda)(\lambda-\lambda_{\min})}}{\lambda},\qquad
\lambda_{\min}^{\max}=1+\frac{1}{Q}\pm2\sqrt{\frac{1}{Q}},
\]
where $\lambda_{\max}$ and $\lambda_{\min}$ correspond to the maximum and minimum eigenvalues , respectively . we employ eigenvalues in the range beyond the maximum eigenvalue , $\lambda>\lambda_{\max}$ , on the basis of the eigenvalue range of random matrices . in this study , 13 eigenvalues deviated from the random matrix for the korean stocks , and 19 deviated from the random matrix for the japanese stocks . additionally , in order to determine whether the properties of the eigenvalues change according to changes occurring in the numbers and types of stocks of the correlation matrix , we utilize time series data reflective of the properties of each eigenvalue , created using the following equation :
\[
L^{k}(t)=\sum_{i=1}^{N}u_{i}^{k}\,r_{i}(t),
\]
where $u_{i}^{k}$ is the $i$ - th component of the eigenvector associated with the $k$ - th eigenvalue , and $r_{i}(t)$ is the return of stock $i$ at time $t$ . from the correlation matrix of each set of stocks , the time series data of each eigenvalue beyond the random range was created from eq . [ eq:2 ] . then , via correlation analysis among the created time series data , we attempted to determine whether there was any change in the properties of the eigenvalues , both between and within the numbers of stocks . first of all , we conducted an empirical examination of the economic meanings of the eigenvalues that deviated from the range of the random matrix . according to previous studies , these eigenvalue properties have economic meaning , and can function as market , industrial , and macro - economic factors .
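for concreteness , the two computational steps just described ( the random - matrix band and the eigen - mode time series of eq . [ eq:2 ] ) can be sketched in a few lines of python . this is an illustration written for this summary rather than the authors' code ; the function names and the ( T , N ) array layout are assumptions .

```python
import numpy as np

def rmt_bounds(T, N):
    # random-matrix band edges lambda_{min/max} = 1 + 1/Q -/+ 2*sqrt(1/Q), Q = T/N
    q = N / T
    edge = 2.0 * np.sqrt(q)
    return 1.0 + q - edge, 1.0 + q + edge

def deviating_modes(returns):
    # returns: (T, N) array of log-returns; columns are standardized so that
    # the correlation matrix is C = Z^T Z / T
    T, N = returns.shape
    z = (returns - returns.mean(axis=0)) / returns.std(axis=0)
    corr = z.T @ z / T
    lam, vec = np.linalg.eigh(corr)      # eigenvalues in ascending order
    _, lam_max = rmt_bounds(T, N)
    keep = lam > lam_max                 # modes beyond the random band
    # eigen-mode time series of eq. [eq:2]: L^k(t) = sum_i u_i^k r_i(t)
    return lam[keep], z @ vec[:, keep]
```

the last returned column then carries the largest eigenvalue , whose time series is the market mode examined below .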
because our objective is to determine the effects of eigenvalue properties in accordance with the change in the number of stocks in the correlation matrix , it is necessary to assess whether each eigenvalue does have economic meaning .we created time series data with economic meaning based on the method extensively utilized in finance and econophysics studies , and then examined the relationship between created time series data with economic meanings and those from eq .( 2 ) in order to reflect the properties of each eigenvalue .we created the time series data with economic meaning via two methods : equal - weighted returns , and factor scores , via factor analysis in multivariate statistics .first , the equal - weighted return is the average return for stocks : , where represents the number of stocks in the industry .the overall average return , , is time series data with market properties , , and the average return for each industry , , has industrial attributes .there are 14 types of equal - weighted returns , , , for the korean data and 18 types for the japanese data , including the time series data with market factors , , respectively .second , in the field of finance , the time series data of deterministic factors of the multi - factor model were created by factor analysis in multivariate statistics .factor analysis , a method that is extensively utilized in the field of social science , can reduce the many variables in the given data set to just a few factors . via factor analysis , we selected significant factors that are regarded as having economic significance , and created the time series data having the properties of significant factors , which are called factor scores in statistics .we rendered factor scores identical to the number of eigenvalues beyond the range of the random matrix .in other words , because 13 eigenvalues in the korean data deviated from the random matrix , 13 factor scores , were ultimately created . for the japanese data ,there were 19 factor scores .[ fig:1 ] presents our findings .the x - axis shows the eigenvalues elicited via the rmt , 358 for korea and 1,099 for japan ; the y - axis represents the correlation .[ fig:1](a ) and ( c ) display the correlation of the maximum values , ] , after measuring the correlation between the factor scores created by factor analysis and the time series data from eq .2 , whereas the value varies from 1 to 13 for the korean data and 1 to 19 for the japanese data . in the figure ,the vertical dot - lines denote the maximum eigenvalue , in the range of the random matrix , and the horizontal dot - lines represent the benchmark correlation value , based on previous studies .[ fig:1](a ) and ( b ) correspond to the korean data , and fig .[ fig:1](c ) and ( d ) are representative of the japanese data . according to our findings , the eigenvalues beyond the range of the rmt evidence relatively high correlations for equal - weighted return and factor scores , whereas they have very low correlations for other eigenvalues . as in previous studies , we confirmed empirically that the properties of eigenvalues that deviated from the range of the random matrix had economic implications , including market and industrial factors . 
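the equal - weighted benchmark series used in this test can be sketched as follows , continuing the illustration above ( again an assumed data layout , with the sector labels supplied separately ) :

```python
def equal_weighted_series(returns, sectors):
    # returns: (T, N) array; sectors: length-N list of industry labels.
    # the market series averages over all stocks; each industry series
    # averages over the stocks of one sector, as described above.
    market = returns.mean(axis=1)
    industry = {s: returns[:, [i for i, lab in enumerate(sectors) if lab == s]].mean(axis=1)
                for s in set(sectors)}
    return market, industry
```

correlating these series ( and the factor scores from a standard factor - analysis routine ) with the columns returned by deviating_modes reproduces the test behind fig . [ fig:1 ] .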
in this section, we evaluated the effects in the eigenvalues beyond the random range as the number of stocks in the correlation matrix increased .the results are provided in fig .[ fig:2 ] .the x - axis reflects the number of stocks within the correlation matrix : for korea and for japan .the y - axis represents the eigenvalue . in order to avoid selection bias ,100 iterations were conducted for each number of stocks , and the types of stocks selected in each iteration were not identical . in the figure ,the results are shown in the error - bar in order to represent effectively the observed results of 100 iterations .[ fig:2](a ) corresponds to the korean data , and fig .[ fig:2](b ) represents the japanese data .we determined that as the number of stocks in the correlation matrix increases , the value of the eigenvalue increases proportionally .moreover , we observed from this figure that the largest eigenvalue is significantly greater than the other eigenvalues that deviated from the random matrix .using these results , we confirmed that the eigenvalues beyond the random range of the rmt were a function of the number of stocks .unlike the case in previous studies , we reinforced the observed results by assessing whether the properties of the eigenvalues can be influenced by changes in the numbers and types of stocks in the correlation matrix . in order to investigate this objective, we categorize the relationship between the eigenvalue time series using different numbers of stocks , ] , , respectively . in cases in which there is no change in the eigenvalue properties , the degree of correlation will converge to .otherwise , the degree of correlation will approach zero .first of all , the findings of the relationship between the eigenvalue time series data from _ different _ number of stocks are shown in fig .[ fig:3 ] . with the total number of stocks and the number of specific stocks within a correlation matrix, we selected 100 cases from the possible stock combinations , without identical types of stocks that comprise the correlation matrix .accordingly , correlations were calculated , and we measure the mean and the standard deviation ^ 2 / ( 10000 - 1)}$ ] .the number of cases for the calculation of and were 120 ( = ) for korea and 630 ( = ) for japan , because the measurements were calculated for every number of stocks , from the minimum to the maximum of korea , and of japan .because 13(19 ) eigenvalues were beyond the random matrices from the korean ( japanese ) data , the aforementioned testing process was repeated for each of the eigenvalues . in fig .[ fig:3 ] , the measured mean and standard deviation are indicated in box - plots .[ fig:3](a ) and ( c ) correspond to the means of the correlation , and fig .[ fig:3](b ) and ( d ) represent the standard deviations , and in fig .[ fig:3](a ) and ( b ) show the results from korean and fig .[ fig:3](c ) and ( d ) from japanese .it was interesting to note that the properties of the largest eigenvalue do not change with the number and types of stocks in a correlation matrix .the mean with the properties of the largest eigenvalue was quite high , [ fig .[ fig:3](a ) & ( c ) ] , but the standard deviation was quite small , [ fig .[ fig:3](b ) & ( d ) ] . 
on the other hand , other eigenvalues that deviated from the random matrix have very small mean values with high standard deviation values .this indicates that the change in the eigenvalue properties is extremely sensitive to changes in the numbers and types of stocks .next , the findings of the relationship between the eigenvalue time series data from _ identical _ number of stocks are shown in fig .[ fig:4 ] .we also selected cases from the possible stock combinations .accordingly , 4,950(= ) correlations were calculated , and we measured the mean , and the standard deviation , .the number of cases used to calculate the mean and standard deviation were for korea and for japan ; additionally , the aforementioned testing process was repeated for each eigenvalue . in fig .[ fig:4 ] , the measured mean and standard deviation are shown in box - plots .[ fig:4](a ) and ( c ) are box - plots of the mean , , and fig .4(b ) and ( d ) for the standard deviation , . fig .[ fig:4](a ) and ( b ) correspond to korea and fig .[ fig:4](c ) and ( d ) are representative of japan . according to the observed results, we determined that the properties of the largest eigenvalue did not change with the type of stocks within a correlation matrix with an identical number of stocks .in other words , the mean correlation among the time series data of the largest eigenvalue is quite high , [ fig . [fig:4](a ) & ( c ) ] , but the standard deviation is quite small , [ fig .[ fig:3](b ) & ( d ) ] , regardless of the types of stocks in a correlation matrix .on the other hand , other eigenvalues beyond the random range evidence small means and high standard deviation values .this indicates that the eigenvalue properties are sensitive to changes in the type of stocks. to summarize , we determined herein that even if the value of the eigenvalue elicited via the rmt increases in proportion with the number of stocks in the correlation matrix , the largest eigenvalue maintains its identical properties , regardless of the number and types of stocks in the dataset .however , other eigenvalues evidence different features .the reason for this is as follows .the primary common factor in the field of finance is the market factor that is included in every stock , and the largest eigenvalue has the properties of market factors . because every stock incorporates market factors regardless of the number and types of stocks , the properties of the largest eigenvalue are not influenced by changes in the number and type of stockshowever , others , including industrial factors , are limited to the stocks in particular industries .because other eigenvalues have industrial factors , they are extremely sensitive to the numbers and types of stocks . 
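the two resampling experiments summarized above can be sketched as follows , reusing the deviating_modes helper ; the subset sizes , the iteration count and the seed are placeholders , and the absolute value removes the sign ambiguity of eigenvectors .

```python
def resampling_experiment(returns, sizes, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    T, N = returns.shape
    growth, similarity = {}, {}
    for n in sizes:
        largest, market = [], []
        for _ in range(n_iter):
            cols = rng.choice(N, size=n, replace=False)   # random stock subset
            lam, modes = deviating_modes(returns[:, cols])
            largest.append(lam[-1])                       # largest eigenvalue
            market.append(modes[:, -1])                   # its time series
        growth[n] = (np.mean(largest), np.std(largest))   # error bars of fig. 2
        corrs = [abs(np.corrcoef(a, b)[0, 1])             # pairwise |rho| between
                 for i, a in enumerate(market)            # market-mode series
                 for b in market[i + 1:]]                 # from different subsets
        similarity[n] = (np.mean(corrs), np.std(corrs))   # box-plots of fig. 4
    return growth, similarity
```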
finally , these findings suggest that studies in which the properties of eigenvalues elicited via the rmt are employed should consider that eigenvalue properties can vary in accordance with the data for eigenvalues other than the largest eigenvalue . in the fields of finance and econophysics , the extraction of significant information from the correlation matrix is a fascinating research topic . the field of finance has previously employed multivariate statistics , including principal component analysis , and the rmt was introduced in the field of econophysics . we conducted an empirical study as to how the value of the eigenvalue elicited via the rmt is influenced by the number of stocks in the correlation matrix . additionally , we reinforced the observed result by assessing whether the properties of the eigenvalues change with the number and types of stocks comprising the correlation matrix . we determined that the value of the eigenvalue increases in proportion to the number of stocks in the correlation matrix . in particular , the largest eigenvalue increases to a greater degree than the other eigenvalues that deviate from the random matrix . furthermore , we determined that the largest eigenvalue maintains its identical properties , regardless of the numbers and types of stocks in the correlation matrix . this is attributable to the fact that the properties of the largest eigenvalue are concerned with the market factors incorporated in every stock . however , the properties of other eigenvalues beyond the random range have industrial factors limited to specific stock groups . in this case , the numbers and types of stocks can influence the attributes of each eigenvalue elicited via the rmt . this work was supported by the korea science and engineering foundation ( kosef ) grant funded by the korea government ( mest ) ( no . r01 - 2008 - 000 - 21065 - 0 ) , and for two years by pusan national university research grant .
[ figure 1 caption : correlation between the benchmark series with economic meaning and the time series created from eq . [ eq:2 ] , which is reflective of the properties of each eigenvalue . the x - axis indicates the eigenvalues elicited via the rmt method , and the y - axis represents the correlation . panels ( a ) and ( c ) display the correlation of the maximum values with equal - weighted returns , and panels ( b ) and ( d ) show the maximum correlation with factor scores ; panels ( a ) and ( b ) depict the results from the korean data and panels ( c ) and ( d ) from the japanese data . ]
|
in this study , we attempted to determine how eigenvalues change , according to random matrix theory ( rmt ) , in stock market data as the number of stocks comprising the correlation matrix changes . specifically , we tested for changes in the eigenvalue properties as a function of the number and type of stocks in the correlation matrix . we determined that the value of the eigenvalue increases in proportion with the number of stocks . furthermore , we noted that the largest eigenvalue maintains its identical properties , regardless of the number and type , whereas other eigenvalues evidence different features .
|
kac proposed in 1954 a random process to model the dynamics of a dilute gas .the process models the velocities of particles in as they evolve under elastic collisions .the case is of main interest , but we will allow any .since no account is taken of particle positions , any physical justification for the model relies on assumptions of spatial homogeneity and rapid mixing .it is thus impossible to give a physical meaning to the number of particles . yet , on the mathematical side , we have to make a choice .hence it is of interest to show consistency for sufficiently large values of .kac s process depends on a choice of collision kernel .this is a finite measurable kernel on which is chosen to model physical characteristics of the gas .the collision kernel specifies the rate for collisions of pairs of particles with incoming relative velocity and outgoing direction of separation .since collisions are assumed to conserve momentum and energy , for a pair of particles with pre - collision velocities and , and hence relative velocity , the post - collision velocities and are determined by the direction of separation through we will often write for the direction of approach , given by . we assume throughout that , for all , is a probability measure , supported on , and that the following standard scaling and symmetry properties hold . for and , and for any isometry of , we have our main results require further that the map is lipschitz on for the total variation norm on measures on .then there is a constant such that , for all , here and throughout , we denote the total variation norm by .the boltzmann sphere is the set of probability measures on such that to denote the identity function on . ] for , write for the subset of of normalized empirical measures of the form .the kac process with collision kernel and particle number is the markov chain in with generator given on bounded measurable functions by where the choice of state - space is possible because in each collision the number of particles , the momentum and the energy are conserved .there is no kac process on because this set is empty . for ,the transition rates of the kac process are bounded by on .hence , by the elementary theory of markov chains , given any initial state , there exists a kac process in starting from , the law of this process is unique , and almost surely it takes only finitely many values in any compact time interval .it is of special interest to model particles colliding as hard spheres . under plausible physical assumptions, this leads , by a well - known calculation , to the choice of kernel , where ] , , and . 
then there exist constants and with the following property .let with , and let and be kac processes in and such that then , with probability exceeding , for all ] and all with ,there is a constant such that , for all with and any kac processes in and in , with probability exceeding , for all ] during which does not jump , there is no contribution to the left - hand side of ( [ mdv ] ) from , so the same calculation yields the following pathwise estimate : \times\mathbb{r}^d}\bigl(1+|v|^2\bigr)\bigl { \vert}m^n(dr , dv)\bigr{\vert}\le12\int_s^{s ' } \bigl\langle1+|v|^3,\mu_r^n\bigr\rangle \,dr.\ ] ]we derive some moment inequalities for the kac process , which we shall use later .the basic arguments are standard for the boltzmann equation and are applied to the kac process in , lemma 5.4 .we have quantified the moment - improving property and added some maximal inequalities .we begin with the povzner inequality . for all ,there is a constant such that , for all and for , \label{pov } \\[-8pt ] \nonumber & & \qquad\le-\beta \bigl({\vert}v{\vert}^p+{\vert}v_*{\vert}^p\bigr)+\beta^{-1}\bigl({\vert}v{\vert}{\vert}v _ * { \vert}^{p-1}+{\vert}v{\vert}^{p-1}{\vert}v_*{\vert}\bigr).\end{aligned}\ ] ] here is a proof for the class of collision kernels we consider .note first that \label{cpp } \\[-8pt ] \nonumber & \le & { \vert}v { \vert}^p+{\vert}v_*{\vert}^p+c(p ) \bigl({\vert}v { \vert}{\vert}v_*{\vert}^{p-1}+|v|^{p-1}{\vert}v_*{\vert}\bigr).\end{aligned}\ ] ] it suffices by symmetry to consider the case .set , then and , where .note that for -almost all .we use the inequalities and to see that , for all ] with , we have proof of proposition [ jpe ] consider first the case and .fix , and consider the branching particle system starting from at time and run up to explosion . note that , at a branching event with colliding particle velocity , the total number of particles in the system increases by , and the total kinetic energy increases by .hence makes jumps of size at rate .set , and set note that .we use the estimate to see that where .hence , by optional stopping , the process is a supermartingale . on taking expectations, we obtain so and then the right - hand side does not depend on , so we must have almost surely as .hence almost surely , and the claimed estimate follows by monotone convergence . for , there is a constant such that \label{pof } \\[-8pt ] \nonumber & \le & c(p ) \bigl({\vert}v { \vert}^{p-2}{\vert}v_*{\vert}^2+{\vert}v_*{\vert}^p\bigr)\end{aligned}\ ] ] and then , for another constant , the argument used for then gives the desired estimate in the case .the argument is the same for .we now describe a coupling of linearized kac processes starting from different initial velocities , constructed to branch at the same times and with the same sampled velocities and angles , as far as possible . 
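for ease of reference in the coupling construction that follows , recall the collision rule in its standard parametrisation by the direction of separation $\sigma$ :
\[
v'=\frac{v+v_*}{2}+\frac{|v-v_*|}{2}\,\sigma,\qquad
v_*'=\frac{v+v_*}{2}-\frac{|v-v_*|}{2}\,\sigma,
\]
so that $v'+v_*'=v+v_*$ and $|v'|^2+|v_*'|^2=|v|^2+|v_*|^2$ , that is , momentum and kinetic energy are conserved in every collision .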
to simplify , we begin without the signs .define sets which we treat as disjoint .consider the continuous - time branching process in with the following branching mechanism .for each particle ( of type ) , there are three possible transitions .first , at rate for and , the particle dies and is replaced by three particles , and in .here we are writing for and for for short .call this a coupled transition .second , at rate , the particle dies and is replaced by four particles , and in and in .third , at rate , the particle dies and is replaced by , and in and in .the second and third will be called decoupling transitions .finally , for , each particle in , at rate , dies and is replaced by three particles , , and in .it is easy to check , by the triangle inequality , that in each coupled transition , we have and .fix , and suppose we start with one particle at time .write for the empirical process of particle types on . then , inductively , is supported on pairs with . for , write for the projection to the component , and write for the bijection .define a measure on by it is straightforward to check that and are copies of the markov process starting from and , respectively . for ,consider the signed space .the process lifts in an obvious way to a branching process in starting from in , where the `` '' offspring switch signs , just as in . by liftwe mean that for the projection .we write for the expectation over this process . for , set then is a linearized kac process with environment starting from .[ ghu ] assume condition ( [ lcp ] ) .then moreover , for all , there is a constant such that the decoupling transition occurs at rate and increases by .on adding the rate for the other decoupling transition , we see that a decoupling transition which increases by occurs at total rate by condition ( [ lcp ] ) , for all pairs in the support of for all .hence the drift of due to decoupling transitions is no greater than on the other hand , by the same estimates used in proposition [ jpe ] , the drift of due to branching of uncoupled particles is no greater than hence the following process is a supermartingale : set . since , by proposition [ jpe ] and the first of the claimed estimates follows by gronwall s lemma . for , a straightforward modification of this argument , using and ( [ poe ] ) , leads to the second estimate .proof of proposition [ fse ] for all and all , we have to see the second inequality , note that and then symmetrize .we write the proof for the case and .set . by proposition [ jpe ] , for all , we have so , since for all , by proposition [ jpe ] , we combine this with lemma [ ghu ] to obtain which implies that and in conjunction with ( [ fhg ] ) gives the claimed estimate .proof of proposition [ fsf ] it will suffice to consider the case . write .let and be independent linearized kac processes starting from at times and , respectively . write for the first branch time of and for the velocities of the new particles formed in at time . by the markov property of the branching process and using proposition [ jpe ] , on the event , while now , so and , using the inequality , we have so proof of proposition [ fmu ] recall that now and . in particular for all . set , and write for the positive and negative parts of the signed measure on . 
consider a branching particle system in , with the same branching rules as above , but where , instead of starting with just one particle at time , we initiate particles randomly in the system according to a poisson random measure on of intensity we use the same notation as above for the empirical measures associated to the branching process , and signify the new rule for initiating particles by writing now for the expectation .define , for , a signed measure on by \times v}e_{(s , v)}(\tilde \lambda _ t ) \theta(ds , dv).\ ] ] then , by proposition [ jpe ] , \times\mathbb{r } ^d}\bigl(1+{\vert}v{\vert}^2 \bigr)\bigl{\vert}\theta(ds , dv)\bigr{\vert}\ ] ] and , by estimate ( [ mdv ] ) , \times\mathbb{r}^d}\bigl(1+{\vert}v{\vert}^2\bigr)\bigl { \vert}\theta(ds , dv)\bigr{\vert}\\ & & \qquad\le\bigl\langle1+{\vert}v{\vert}^2,\mu_0^n+\mu_0^{n ' } \bigr\rangle+\int_{[0,t]\times \mathbb{r } ^d}\bigl(1+{\vert}v{\vert}^2 \bigr)\bigl{\vert}m(ds , dv)\bigr{\vert}<\infty.\end{aligned}\ ] ] we see in particular that is bounded on compacts in . under , the pair of empirical processes of positive and negative particles evolves as a markov chain , which makes jumps at rate and makes jumps at rate .using proposition [ jpe ] for integrability , under , for any bounded measurable function , the following process is a martingale : taking expectations and setting , we obtain then \times v}e_{(s , v ) } \langle f,\tilde\lambda_t\rangle \theta(ds , dv ) \\ & = & \bigl\langle f_{0t},\mu_0^n- \mu_0^{n'}\bigr\rangle+\int_0^t \langle f_{st},dm_s\rangle \\ & = & \bigl\langle f,\mu_0^n-\mu_0^{n ' } \bigr\rangle+\langle f , m_t\rangle+\int_0^t \bigl\langle f,2q(\rho_r,{\lambda } _ r)\bigr\rangle \,dr.\end{aligned}\ ] ] here , we used ( [ defl ] ) for the first equality , and for the third we substituted for and using ( [ fst ] ) and then rearranged the integrals using fubini to make , as given by ( [ defl ] ) , appear on the inside . since is an arbitrary bounded measurable function , we have shown that note the estimate of total variation , for the second inequality , we used ] when neither nor jump , the process of signed measures is thus locally bounded and right continuous in total variation .hence the measure is finite , and is absolutely continuous with respect to this measure for all ] and a function .define a random function on \times\mathbb{r}^d ] and . we will derive estimates for the second term , which then apply also to the third , because .the notation conceals the fact that the integrand depends on the terminal time .worse , depends on , so is anticipating , and martingale estimates can not be applied directly even at the individual time . for , set and }m_3(t) ] with , where for , set , and set }m_3(t)+\sum _ { \ell\in\mathbb{n}}2^{(\beta -1)\ell+1}\beta ^{-1}\sup _ { t\in[2^{-\ell},2^{-\ell+1}]}m_3(t).\ ] ] by proposition [ me ] , there is a constant such that , for , }m_3(s )\bigr)\le c \bigl(t^{p-3}\vee t\bigr){\lambda}\ ] ] so note that for all , so for with , hence , by proposition [ fsf ] , ( [ lte ] ) remains valid for , provided and and have their new meanings . fix ] .there exist and such that \times b(r) ] .write where is the average value of on and . 
then where now , by ( [ le ] ) , for all , we have and , for , by ( [ lte ] ) , we have , for , where is the average value of on \times\ { v\} ] , for some constant , depending only on and .so , by doob s -inequality , }(s)\bar m(dv , dv_*,d\sigma , ds ) \\\nonumber & \le&\frac{cr^{(5-q)^+}}n\mathbb{e}\int_0^t\bigl \langle{\vert}v{\vert}^q,\mu ^n_s\bigr \rangle \,ds.\end{aligned}\ ] ] on the other hand and \label{ae } \\[-8pt ] \nonumber & & \hspace*{22pt}\qquad\qquad{}\times1_{(0,t]}(s)\bar m(dv , dv_*,d \sigma , ds ) \\\nonumber & & \qquad \le c\mathbb{e}\int_0^t\bigl\langle{\vert}v{\vert}^q,\mu^n_s\bigr\rangle \,ds.\end{aligned}\ ] ] we combine ( [ fmue ] ) , ( [ hfma ] ) , ( [ fma ] ) , ( [ iv ] ) , ( [ fmap ] ) , ( [ fmaq ] ) , ( [ if ] ) , ( [ ig ] ) , ( [ qe ] ) , ( [ ih ] ) and ( [ ae ] ) to see that , for all ] , an optimization over , and now shows the existence of an for which the estimate claimed in theorem [ mr ] holds . for large , the reader may check the optimization yields a value for close to . the proof givencan be varied by replacing the one - step discrete approximation by a chaining argument .see the proof of proposition [ dat ] for this idea in a simple context .this gives for sufficiently large .we omit the details because theorem [ mr10 ] gives a stronger result . here is the dimension of space time , reflecting the fact that we maximize over a class of functions on \times\mathbb{r}^d ] associated to ] to obtain an improved bound .weshowed in propositions [ fse ] and [ fsf ] that the linearized kac process is continuous in its initial data . for the proof of our main estimate with optimal rate , we will need also continuity in the environment. the following notation will be convenient . for and a function on ,we will write for the reweighted function and write for the smallest constant such that , for all , we have denote by the set of all functions on with .we earlier wrote for and for .we will use the cases and .suppose that and are processes of measures on , both satisfying ( [ skp ] ) .given and a function of quadratic growth on , define for ] , we have where is the average value of on and where ^d ] , there is a constant such that , for all , we have \label{feta } \\[-8pt ] \nonumber & & \qquad\le c\kappa n^{-1/d}e^{cm^*(p+3+\delta)t } \biggl(\mathbb{e}\int _ 0^t\bigl\langle{\vert}v{\vert}^{2p+5 + 2\delta } , \mu_s^n\bigr\rangle \,ds \biggr)^{1/2}.\end{aligned}\ ] ] the same inequality holds for if we replace by .here we have written for the norm in .this estimate will be applied in the next section , using the moment estimates derived in section [ mom ] to control the right - hand side .we will use also the following comparison estimate for two nonrandom processes and satisfying ( [ skp ] ) . fix .write we assume that . for and , define for with where is a linearized kac process in environment starting from .[ wmmf ] for all , and all ] .set , and for , set . for and any integer , there is a unique way to partition by a set of translates of .also , there is a unique way to partition by a set of translates of . fix and , for all , we have for , set , and note that . for and , set , where is the unique element of containing , and note that .set , then for all , for all and all .fix ] , we have we use cauchy schwarz in ( [ fee ] ) to obtain set note that for all and all . 
set and .then so by proposition [ jpe ] , note that , so hence , for some constant , we have then , by doob s -inequality , on the other hand , we have where the measures and are as defined in section [ martk ] .we split the integral using and use the -isometry for integrals with respect to the compensated measure to obtain where the constant varies from line to line . in the final inequality , we dealt with the second term on the right by writing , applying cauchy schwarz and then using the fact that recall that , , and .note that and .hence , on letting , we deduce that , for some constant , \label{fet } \\[-8pt ] \nonumber & & \qquad\le cn^{-1/d}e^{cm^*(p+2+\delta)t } \biggl(\mathbb{e}\int _ 0^t\bigl\langle { \vert}v{\vert}^{2p+3 + 2\delta},\mu _ s^n\bigr\rangle \,ds \biggr)^{1/2}.\end{aligned}\ ] ] this is not the inequality ( [ feta ] ) we seek because rather than appears on the right - hand side .however , it will prove to be a useful first step .we now turn to the proof of ( [ feta ] ) . it will suffice to deal with the case where for some .set .then , for all ] , and hence \label{hee } \\[-8pt ] \nonumber & & \qquad\quad{}+\sum_{j = j_0 + 1}^j\biggl{\vert}\sup _ { t\le t}\sup_{f\in\mathcal { f}(p)}\int_0^t \bigl\langle f_{s\tau_j(t)}-f_{s\tau_{j-1}(t)},dm^n_s \bigr\rangle\biggr{\vert}_2\\ \nonumber & & \qquad\quad{}+\biggl{\vert}\sup_{t\le t } \sup_{f\in\mathcal{f}(p)}\int_0^t\bigl \langle f_{st}-f_{s\tau _ j(t)},dm^n_s\bigr \rangle\biggr{\vert}_2.\end{aligned}\ ] ] fix and , for , set . note that , for ] , set .write }\sup_{f\in\mathcal{f}(p)}\int _0^t\bigl\langle f^{(i)}_s , dm^n_s \bigr\rangle\ ] ] and note that set where is the constant from ( [ fet ] ) .we replace by , by and by in ( [ fet ] ) to see that then so there is a constant such that \label{dee } \\[-8pt ] \nonumber & & \qquad\qquad \le cm^*(p+1)t\kappa n^{-1/d}e^{cm^*(p+3+\delta)t } \biggl(\mathbb{e}\int _ 0^t\bigl\langle { \vert}v{\vert}^{2p+5 + 2\delta},\mu_s^n\bigr\rangle \,ds \biggr)^{1/2}.\end{aligned}\ ] ] finally , we can take , and for all in proposition [ fsg ] to obtain for all .hence so estimate ( [ gee ] ) shows that , as , hence ( [ feta ] ) follows from ( [ fet ] ) , ( [ hee ] ) and ( [ dee ] ) .the proof is the same for except that we get in place of in ( [ fet ] ) and ( [ dee ] ) .proof of proposition [ wmmf ] fix and .we follow the preceding proof to obtain , for , \\[-8pt ] \nonumber & & \qquad=\sum _ { \ell=2}^l\sum_{k=0}^k \sum_{b\in\mathcal{p}_{k,\ell}}2^{-(1+\delta)k}a_b\int _ 0^t\bigl\langle \tilde h^b_{st},dm^n_s \bigr\rangle+\int_0^t\bigl\langle\tilde g_{st},dm^n_s\bigr\rangle,\end{aligned}\ ] ] where , and . by proposition[ fsg ] , we have where and .note that so by lemma [ dcl ] , where and . we continue to follow the steps of the preceding proof to arrive at , , and by their values and let to deduce that , for some constant , now \label{fetu } \\[-8pt ] \nonumber & & \qquad\quad{}+\sum_{j = j_0 + 1}^j\biggl{\vert}\sup _ {t\le t}\sup_{f\in\mathcal { f}(p)}\int_0^t \bigl\langle \tilde f_{s\tau_j(t)}-\tilde f_{s\tau_{j-1}(t)},dm^n_s \bigr\rangle\biggr{\vert}_2 \\ \nonumber & & \quad\qquad { } + \biggl{\vert}\sup_{t\le t } \sup_{f\in\mathcal{f}(p)}\int_0^t\bigl \langle\tilde f_{st}-\tilde f_{s\tau_j(t)},dm^n_s \bigr\rangle\biggr{\vert}_2,\end{aligned}\ ] ] and the final term tends to as .we consider the case where and for all , from which the general case follows by the triangle inequality .then for all and , so . 
we then use ( [ fett ] ) for the first term on the right in ( [ fetu ] ) , use ( [ dee ] ) for the sum over and let to obtain the claimed estimate .we seek to show that , for and , for and any two kac processes and with collision kernel , which are adapted to a common filtration , with probability exceeding , for all ] and therefore are anticipating. it will suffice to consider the case where ] .we apply propositions [ wmme ] and [ wmmf ] conditionally on to obtain , for some constant , and }\sup_{f\in\mathcal{f}}\int _ { t_i}^t\bigl\langle \bigl(e_{st}^j - e_{st}^{j-1 } \bigr)f , dm^n_s\bigr\rangle1_{\{\langle1+|v|^{5+\delta } , \rho_{t_i}\rangle\le a\}}\biggr{\vert}_2 \\ & & \qquad \leca2^{-j}\kappa e^{ca2^{-j}}n^{-1/d } \biggl ( \mathbb{e}\int_{t_i}^{t_{i+1}}\bigl\langle \biggr)^{1/2}.\end{aligned}\ ] ] by proposition [ me ] , there is a constant such that hence , for constants , here , we absorbed into in the second inequality by changing the constant . by proposition [ fsg ] , there is an absolute constant such that , for all and , so , on , we have and so as , \label{ec } \\[-8pt ] \nonumber & & \qquad\le ca\kappa e^{cat}2^{-j}\biggl{\vert}\int_0^t \bigl\langle1+|v|^3,\bigl{\vert}dm_s^n\bigr { \vert}\bigr\rangle\biggr{\vert}_2\to0.\end{aligned}\ ] ] finally , we use estimates ( [ ea ] ) , ( [ eb ] ) and ( [ ec ] ) in ( [ ebit ] ) and let to obtain a constant such that an analogous estimate holds for , and theorem [ mr10 ] then follows by chebyshev s inequality .recall that ] . for all ,there is a constant such that , for all , we have here , denotes the wasserstein- distance for the euclidean metric on . for completeness , and since it may be read as a warm - up for the proof of proposition [ wmme ] , we give a proof .fix . for ,we can partition as a set of translates of ^d ] , and consider the event note that . on , by some simple estimation , we have , so . hence , in particular , there is a constant such that now , for all , we have and so hence as .since and , we have as for all by the weak law of large numbers . for , there is a constant , depending only on and , such that for , since , may be chosen so that also and so for ] , now , from ( [ f1 ] ) , ( [ f2 ] ) and ( [ f3 ] ) , for all , all ] , and .then there exists a constant with the following property . for all and any kac process in with , with probability exceeding , for all ] and in such that and . by proposition [ dap ] and theorem [ mr ] , there exists an increasing sequence in such that , for all , with probability exceeding , and then for all by borel cantelli , almost surely , these inequalities hold for all sufficiently large , so the sequence is cauchy in the skorohod space , and hence converges , with limit say , since is complete . by fatou s lemma and the moment estimate ( [ nme ] ) , is locally bounded in almost surely .fix a function on satisfying and for all . from ( [ ucp ] ) , since , we see that uniformly on compact time intervals almost surely .consider the equation with in the limit .estimate ( [ dme ] ) implies that uniformly on compact time intervals in probability .moreover , where and , by some straightforward estimation , for all .hence , we can pass to the limit uniformly on compact time intervals in probability to obtain for all , almost surely .a separability argument shows that almost surely , this equation holds for all such functions and all .so , almost surely , is a solution , and in particular , a locally bounded solution in exists. 
now let be any locally bounded solution in starting from , and let be any kac process in . then where now . the argument of section [ bpr ] applies without essential change to show that , for all and all functions on , we have where and where is a linearized kac process in environment .next , the argument of section [ pmr ] applies to show that , for all \langle be proved for by checking that the arguments leading to the estimate for apply also when is replaced by .alternatively , we can find so that and , by proposition [ dap ] , with probability exceeding . then , by theorem [ mr ] and ( [ wnm ] ) , with probability exceeding , for all , we have finally , we can take and let to see that for all , so is the only solution which is locally bounded in . we can combine theorem [ cbe ] with proposition [ dap ] to obtain the following stochastic approximation for solutions to the spatially homogeneous boltzmann equation .[ cor ] assume that the collision kernel satisfies conditions ( [ lbp ] ) and ( [ lcp ] ) .let for some , and let be the unique locally bounded solution to ( [ wb ] ) in starting from . write for the random variable in constructed by sampling from as in proposition [ dap ] , and conditioning on , let be a kac process starting from .then , for all {\lambda}\ge\langle constants and , such that with probability exceeding , for all , for , we can take when , and the estimate holds with in place of when .on the other hand , if one views the spatially homogeneous boltzmann equation as a means to compute approximations to the kac process , the following estimate provides a measure of accuracy for this procedure .[ cor2 ] assume that the collision kernel satisfies conditions ( [ lbp ] ) and ( [ lcp ] ) .fix , ] , we have .the same holds for if we replace by .use ( [ nmg ] ) to find a constant such that with probability exceeding .then apply theorem [ cbe ] with in place of to find the desired constant .we state and prove a basic lemma on the time - evolution of signed measures , which allows us to control the evolution of the total variation when the signed measures are given by an integral over time .let be a measurable space .write ( resp ., ) for the set of finite measures ( resp . , signed measures of finite total variation ) on . for , write for the associated total variation measure and for the total variation .[ bml ] assume that is separable .let .let and be given , along with a measurable map \to\mathcal{m} ] and .set then there exists a measurable map \times e\to\{-1,0,1\} ] , we have and a version of the lemma , without the hypothesis of separability and for the case where \to\mathcal{m} ] is continuous in total variation , has been proved by lu and mouhot , lemma 5.1 .we will use a substantially different argument , which allows us to replace this hypothesis of continuity with the existence of a reference measure .proof of lemma [ bml ] there exists an increasing sequence of finite -algebras generating .write for the partition of generating .consider the finite measure on . by scaling wereduce to the case where is a probability measure .for each ] and for each ] with and any function ,x\in e) ] and all .write for the set of finite subsets of \cap ( t\mathbb{q}) ] has total variation bounded by .hence , for , we can define a cdlg map \to[-1,1] ] .we have as we showed above . for and , we have in the limit so .define \times e\to\ { -1,0,1\} ] . 
for any function on ] and all .since is absolutely continuous with respect to for all ] .since is absolutely continuous with respect to , we have as almost everywhere for .hence , on letting , we obtain on for all ] . by dominated convergence , for all and all $ ] , we have and hence , on taking above and letting , we obtain the desired identity .i am grateful to clment mouhot and richard nickl for several discussions in the course of this work , and to a helpful referee whose comments led to improvements in the paper .
|
an explicit estimate is derived for kac s mean - field model of colliding hard spheres , which compares , in a wasserstein distance , the empirical velocity distributions for two versions of the model based on different numbers of particles . for suitable initial data , with high probability , the two processes agree to within a tolerance of order , where is the smaller particle number and is the dimension , provided that . from this estimate we can deduce that the spatially homogeneous boltzmann equation is well posed in a class of measure - valued processes and provides a good approximation to the kac process when the number of particles is large . we also prove in an appendix a basic lemma on the total variation of time - integrals of time - dependent signed measures .
|
first of all , i would like to apologize for not covering a number of items , often very interesting , which were discussed during this conference . i certainly do not feel competent to address most of the theoretical issues , so this talk will be entirely devoted to experimental results . furthermore , a number of reviews on specific subjects were presented , which it makes no sense to try to summarize : high energy cosmic neutrinos , rare kaon decays , polarized structure functions . there will be ample opportunities in forthcoming rencontres de moriond to come back to future projects such as b - factories , lhc , long baseline neutrino experiments , ams , or to ongoing experiments which are a bit too young this year to deliver results , such as ktev , na48 or the neutron electric dipole moment measurement at the ill . these topics will therefore not be covered either . even with these restrictions , it will be impossible to do justice to the vast amount of material which has been presented in the past week , and i can only reiterate my apologies to those who may feel that their contribution is not adequately referred to in the following . this presentation will be divided into three ( unequal ) chapters : tests and measurements within the standard model , searches and hints beyond the standard model , and finally neutrino oscillations . most of the results presented at this conference were stamped as preliminary ; therefore the original contributions should be checked in addition to this summary before quoting any results . for the written version of this talk , the figures have not been incorporated since they can be found easily in these proceedings , with the exception of those belonging to contributions not available to the author at the time of writing . _ contributions by d.w . gerdes _ _ and r. raja _ top quarks are produced at the tevatron in collisions at tev , where both cdf and d0 accumulated . with such statistics , the main goals of the experiments are , in this field , the measurements of the top quark mass and of its pair production cross section . top quarks decay according to , so that pair production leads to three final state topologies , depending on whether both , one or none of the ws decay leptonically ( ) : dileptons , lepton plus jets , all hadronic . leptons are selected as isolated electrons or muons with large transverse energy . the presence of neutrinos is inferred from a large amount of missing transverse energy . jets are required to carry substantial , and multijet events exhibit a spherical pattern . finally , b - jets are tagged by soft leptons or , in the case of cdf , by secondary vertices . in the dilepton topology , the two leptons should not be compatible with a decay ; there should be substantial and two additional jets should be detected . cdf select nine events over a background of 2.1 , and d0 five over 1.4 . four of the cdf events , all in the channel , have a in excess of 100 gev , which is larger than the typical expectation from pairs . in the lepton plus jet topology , at least three jets , and a b - tag are required . cdf select 34 events over a background of 9.3 , and d0 11 over 2.4 . cdf use this sample for the cross - section measurement , and supplement it with untagged four - jet events to reconstruct the top - quark mass ( fig . 2 of ) .
a topological analysis is also performed by d0 , requiring at least four jets but not imposing any b - tag . the aplanarity and the sum of the jet transverse energies are used by means either of cuts , to select 19 events over a background of 8.7 ( fig . 1 of ) from which a cross - section measurement is inferred , or of a maximum likelihood fit , to extract a measurement of the top quark mass ( fig . 3 of ) . the production cross - section is determined to be pb and pb by cdf and by d0 , respectively , from the dilepton and lepton plus jet samples . for a top mass of 175 , the theoretical expectations are around 5 pb . using the lepton plus jet topology , cdf and d0 measure masses of and . _ contributions by a. gordon _ _ and d. wood _ w bosons are produced via the drell - yan process in collisions . the measurement of the w mass is performed through a fit to the reconstructed transverse mass of the decay . the transverse mass is calculated as $m_{\rm t}=\sqrt{2\,p_{\rm t}^{\,\ell}\,p_{\rm t}^{\,\nu}\,(1-\cos\Delta\phi_{\ell\nu})}$ , the neutrino transverse momentum being inferred as $\vec p_{\rm t}^{\;\nu}=-(\vec p_{\rm t}^{\;\ell}+\vec u)$ , where $\vec u$ is the transverse momentum of the recoiling hadronic system . these measurements are now limited by systematic errors . the scale of the lepton energy is calibrated using events containing decays ; the resolution on the energy of the hadronic system is determined using minimum bias events ; the model for the transverse momentum distribution of the produced w bosons is controlled with events containing decays instead . the cdf results are obtained using decays ( fig . 5 of ) , while d0 use the channel instead ( fig . 5 of ) . averaging with the results obtained from run 1a , cdf measure a w mass of . the d0 result of has since then been updated to , as quoted in . _ contributions by a. valassi _ _ and m. a. thomson _ there are two very different methods to measure the w mass in collisions at lep 2 . the one relies on the behaviour of the w pair production cross section near threshold . the other explicitly reconstructs the mass of the final state w bosons from their decay products . luminosities of about 10 were collected in 1996 by each of the lep experiments both at 161 and 172 gev . the measurement at threshold was performed at a centre - of - mass energy of 161.33 gev , which maximizes the sensitivity of the cross section to the value of the w mass . ( a single measurement at this optimal energy has been shown to be more efficient than a more detailed scan of the threshold region . ) depending on whether both , one or none of the produced ws decay leptonically , the final state arising from w pair production consists of _ i ) _ an acoplanar pair of leptons , _ ii ) _ an isolated lepton , missing energy and two hadronic jets , or _ iii ) _ four jets . the first two topologies , which account for 11% and 44% of the final states , respectively , are rather easy to select since they do not suffer from any significant standard model background . the four - jet topology , on the other hand , is more difficult to disentangle from the large qcd background , and multivariate analyses are therefore used to retain sensitivity . from the cross section measurement of pb , averaged over the four lep experiments , a w mass value of is inferred . the cross section measurement was repeated at 172 gev and the result is well compatible with the standard model expectation ( fig .
3 of ) .while the sensitivity to the w mass is reduced , the larger statistics allow a direct measurement of the w hadronic branching ratio , obtained from the comparison of the cross sections in the various topologies .this measurement does not compete yet with the indirect determination performed at the tevatron using the ratio of the production cross sections for to ( , as reported in , hence ) .it relies however on fewer theoretical inputs .the direct reconstruction of the w mass has been performed at 172 gev where the statistics is largest .typically , in the lepton plus two - jet topology where a neutrino escapes detection , a 2c - fit is performed , imposing equality of the two w masses ; in the four - jet topology , a 5c - fit is performed in a similar fashion , or a 4c - fit supplemented by a rescaling of the two dijet energies to the beam energy .there are a number of subtleties such as the choice of jet pairing in the four - jet topology , the type of functions fitted to the resulting mass distributions , the bias corrections .clear mass peaks are observed ( fig . 3 of ) , and an average w mass of is determined .this measurement is still limited by statistical errors , but theoretical issues such as the effect of colour reconnection in the four - jet topology and technical challenges such as the precise beam energy calibration will become relevant very soon .the average of the top quark mass measurements at the tevatron is .the tevatron ( plus ua2 ) average for the w mass is .the impact of these results , which tend to favour a light higgs boson , can be seen in fig .6 of .the average w mass resulting from the measurements performed at lep 2 is , well consistent with the value from hadron colliders given above .the grand average is . _contributions by d. wood _ _ and s. mele _ the search for an anomalous coupling has been pursued at colliders since many years in final states involving a w boson and a high photon .the results are traditionally expressed in terms of the and parameters , which have zero value in the standard model .recent d0 results are shown in fig . 2 of .they are perfectly compatible with the standard model and exclude a theory involving only electromagnetism . with the increased statistics , the search for anomalous couplings has now been extended to ww , wz and production . at lep 2 ,w pair production involves , in addition to -channel neutrino exchange , -channel z and photon exchange .it is therefore possible to test the wwz and couplings , but the two are hard to disentangle and thus a direct comparison with the results from the tevatron is not easy .moreover , there are strong indirect constraints on anomalous couplings resulting from the precision measurements at lep 1 , except for some specific parameter combinations called `` blind directions '' .the analysis is therefore restricted to such combinations , _e.g. _ the parameter .both the total cross section and the angular distribution of w pair production provide constraints on the triple gauge boson couplings , as can be seen in fig . 2 of .here too , a theory with no wwz vertex is excluded at more than 95% cl . in principle , single w production at lep 2 , through the reaction which proceeds dominantly via the fusion mechanism , could give access to the vertex with no contamination from the wwz coupling . such an analysis has been attempted , but the results are still far from competing with those from the tevatron . _contributions by a. bhm _ _ and p. 
rowson _ over four million hadronic events have been collected by each of the four lep experiments in the vicinity of the z peak . much lower statistics were accumulated by sld at the slc , but with the outstanding specificity of a large polarization of the electron beam . about 150 k events were collected with an average polarization of 77% ( to which 50 k with from earlier runs can be added ) . the combined results from the lep scan of the z resonance are : and for the z mass and width , and . this last quantity , the ratio of the hadronic to the leptonic widths , seems a bit high with respect to the standard model expectation ( given ) . the electroweak mixing angle is determined from a number of independent asymmetry measurements , the consistency of the results providing a strong test of the standard model . as can be seen in fig . 9 of , the of the various determinations is 15.1 for 6 dof , with the lep asymmetry and the left - right polarized asymmetry from sld contributing most to this large value . the remark can be made that the error on the measurement of the left - right asymmetry is dominated by the systematic uncertainty on the beam polarization . bringing the sld measurement of into agreement with the lep average would require a 5% mismeasurement of the average value of the polarization , which seems a bit hard to swallow compared to the quoted systematic error of less than 1% . it should also be remembered that the delicate measurements of the asymmetry and of the polarization at lep are not yet final . with these restrictions in mind , the grand average is at the moment . _ contribution by j. steinberger _ another controversial precision measurement at the z peak is that of , the fraction of hadronic z decays into . the interest of this quantity is that it is sensitive to contributions of heavy particles through corrections to the zbb vertex ( from standard or non - standard processes ) . for instance , the contribution of the top quark reduces the expected value by 1.2% . last year , the measurement of , together with , had been said to exclude the standard model at more than 99% cl . such a statement simply ignored that systematic errors are often of a highly non - gaussian nature , and indeed a lot of effort went , in the past year , into the understanding and the control of these systematic errors . the most precise measurements of rely on the technique of hemisphere tagging , in which the lifetime and the mass play the major roles . at sld , the small beam spot characteristic of linear colliders and the availability of a vertex detector located at only 3 cm from the beam axis and with three - dimensional readout allow b purities of 98% to be achieved with an efficiency of 35% , after a simple mass cut as shown in fig . 2 of . the hemisphere tagging technique allows and the b tagging efficiency to be determined simultaneously from the data , using the total number of tagged hemispheres and of events in which both hemispheres are tagged : to solve these equations for and , the hemisphere correlation and the efficiency for charm have to be taken from monte carlo , and these are responsible for the largest systematic uncertainties . ( the uncertainties on the other correlations and on translate into a very small systematic error on . ) the value of is taken from the standard model , or the dependence of the result on is explicitly stated .
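the stripped relations above are the usual single - tag / double - tag counting system . as a minimal sketch ( assuming , for illustration only , that hemisphere correlations , the charm tagging efficiency and other backgrounds are negligible , whereas the text notes these are in fact taken from monte carlo ) , both the b fraction and the b - tagging efficiency follow from the two tag counts alone ; the numbers below are purely illustrative :

```python
def rb_from_double_tag(f_single, f_double):
    # idealized hemisphere-tag counting rates:
    #   f_single = R_b * eps_b        (fraction of tagged hemispheres)
    #   f_double = R_b * eps_b ** 2   (fraction of doubly tagged events)
    # solving the two equations gives both unknowns from data alone
    eps_b = f_double / f_single
    r_b = f_single ** 2 / f_double
    return r_b, eps_b

# illustrative inputs generated from R_b = 0.2158 and eps_b = 0.35
print(rb_from_double_tag(0.2158 * 0.35, 0.2158 * 0.35 ** 2))
```

in this idealization the method is self - calibrating , which is why the residual systematics come almost entirely from the neglected correlation and charm terms .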
thanks to more detailed assessments of track reconstruction defects , to a reduction of hemisphere correlations using techniques such as the reconstruction of separate primary vertices in both hemispheres , to a more thorough evaluation of physical effects such as gluon splitting into , the systematic uncertainties seem to be under a much better control in the recent measurements than in the earlier ones . taking only the most recent aleph , delphi , opal and sld results leads to `` jack s average '' of , in excellent agreement with the standard model expectation of .this agreement is somewhat spoilt if all existing measurements are introduced in the average , , but the discrepancy with the standard model expectation is now reduced to the 1.8 level . _contribution by a. bhm _ a global fit to all lep data leads to an indirect determination of the top mass , , in agreement with the direct measurement at the tevatron .the fact that this value is on the low side is related to the difficulties with and discussed above .the tendency , as can be seen in fig .10 of , is to favour a light higgs boson .taking into account the direct top and w mass measurements from the tevatron , a higgs mass is predicted , with at 95% cl . _contribution by d. gel _ cross section measurements in collisions were also performed at lep 2 , up to a centre - of - mass energy of 172 gev .the agreement with the standard model is as good as statistics allow , as can be seen in fig .9 of .these measurements constrain the interference term which is normally set to its standard model value in the fits to the z peak data . if this constraint is not imposed , the precision on the z mass is only 6.1 from lep 1 data , and becomes 3.1 using the lep 2 ( and topaz ) data in addition .this is not very far from the 1.9 precision achieved when setting the interference term to its standard model value . here ,only a few highlights in and b physics will be sketched , for completeness .a factor of four improvement in the precision on michel parameters in decays has been achieved by cleo .lepton universality is now tested at the 0.3% level in decays .the comparison of the branching ratios for and provides a test of e universality at that level .the compatibility of the decay leptonic branching ratio , for a massless fermion , with the lifetime , fs , provides a test at the same level , given the value of the mass measured at bes .the vector and axial - vector structure functions in hadronic decays have been measured separately , allowing an improvement of 30% on the theoretical error on , which is of interest in view of the forthcoming measurement of that quantity at brookhaven .a collection of rare b decays , mediated by penguin diagrams , has been investigated by cleo .mostly , limits have been set , but the process was observed , with a branching ratio of .surprisingly enough , a measurement of was performed at lep by aleph , .the value of is determined to be , using the decays . all exclusive bhadron lifetimes are measured , and their ratios are found to be compatible with expectation , except for the lifetime which remains low . 
a new measurement of the lifetime was performed with an accuracy of 56 fs by delphi , using the signature of the slow pion from in the decay .a similar precision was reached by cdf .b - mixing has been studied at lep , sld and cdf .a variety of methods is used to measure , leading to the lep average of .( the breakdown of systematic errors , necessary for a proper averaging , was not available from cdf and sld at the time of the conference . )the aleph and delphi combined lower limit on is 9.2 ps , which becomes interesting not only from a technical point of view ._ contributions by p. gay _ _ and s. rosier - lees _ the search for the `` standard model higgs boson '' really belongs to this section on supersymmetry .this is because , in the minimal standard model , there is essentially no room , given the large top quark mass , for a higgs boson light enough to be discovered at lep 2 , the only place where this search can be conducted efficiently these days .moreover , in large regions of the parameter space of the mssm ( the minimal supersymmetric extension of the standard model ) , the properties of the lightest higgs boson make it indistinguishable in practice from its standard model equivalent .the main production mechanism is the higgsstrahlung process , , which leads to various topologies , depending on the h and z decay modes , of which three are the most important .acoplanar jets result from the and decays ; the decay leads to two isolated energetic leptons in a hadronic environment instead ; a four - jet topology is reached when both the higgs and the z decay into hadrons .a crucial feature affecting the searches in the various topologies is the large decay branching ratio of the higgs into , 85% .although most of the signal ends up in a four - jet final state , this topology had not been considered at lep 1 because of the overwhelming background from hadronic z decays , with a typical signal to background ratio of . at lep 2 on the contrary ,this ratio is of order , which renders worthwhile the search in this channel .an efficient b - tagging is the key to the reduction of both the qcd and the ww backgrounds .this tool is also instrumental in the acoplanar jet topology to eliminate the backgrounds from w pairs ( with one w decaying into hadrons , the other into ) , and from single w production in the reaction , where the spectator electron remains undetected in the beam pipe . in all channels , the constraint that the decay products of the z should have a mass compatible with is also highly discriminating , a feature which could not be used at lep 1 where the final state z was produced off - shell . for a 70 higgs boson mass ,about 10 events would have been produced in each of the experiments .typically , efficiencies of are achieved for a background expectation of one event .no signal was observed , resulting in the case of aleph in a mass lower limit of 70.7 , as shown in fig .1a of .when the results of the four experiments are combined , a sensitivity in excess of 75 should be reached . _contributions by p. gay _ _ and s. rosier - lees _ in the mssm , two higgs doublets are needed , which leads to three neutral higgs bosons , the cp - even h and h , and the cp - odd a , and to a pair of charged higgs bosons . 
while h and are expected to be out of the lep 2 reach , h should be fairly light .in addition to , the ratio of the two higgs field vacuum expectation values is the important parameter for phenomenology .compared to the standard model case , the higgsstrahlung production cross section is reduced by the factor , where is the mixing angle in the cp - even higgs sector .the results from the standard model higgs searches reported above can therefore be turned into limits on as a function of .these limits are most constraining for low values of ( _ i.e. _ for close to unity ) , as can be seen in fig .1b of . for large values of, the complementary process becomes dominant , with a cross section proportional to , in which case h and a are almost mass degenerate . since both h and a decay predominantly into , the main final state consists in four b quark jets .the topology has also been addressed , adding some sensitivity to the search .the result obtained by aleph combining the searches for both the hz and ha final state is shown in fig .1b of .it can be seen that , although the boundaries of the region theoretically allowed depend on additional parameters of the model , in particular on the mixing in the top squark sector , the experimental result shows hardly any dependence on those . a lower mass limit of 62.5 holds for both h and a , for any . _contribution by s. rosier - lees _ in the standard scenario , r - parity is conserved and the lightest supersymmetric particle ( lsp ) is a neutralino , .a variety of searches for supersymmetric particles have been performed at lep 2 , as reported in detail in . herethe example of chargino pair production , , will be sketched for illustration . in most of the parameter space ,the decay modes of charginos in the mass range relevant at lep 2 are and .the topologies arising from pair production are therefore _ i ) _ an acoplanar lepton pair , _ ii ) _ an isolated lepton in a hadronic environment or _iii ) _ multijets . in all cases , there should be substantial missing energy .the analyses addressing these various final states are further split according to the mass difference : for small mass differences , the main background comes from interactions , while for very large mass differences , the signal resembles w pair production .no signal was detected above background in any of those searches . since the production cross section and the decay branching ratios are quite model dependent , it is difficult to derive a hard limit for the chargino mass . for gaugino - like charginos ,the mass is about half the chargino mass , and the cross section is largest if sneutrinos are heavy ; in that case , the kinematic limit of 86 is reached .the limit is lowered to 72 , allowing for any .for higgsino - like charginos , the mass difference tends to be small but the cross section does not depend on ; in that case , the mass limit is 80 for mass differences in excess of 5 . in the mssm , the results on chargino searches at lep 2 can be combined with the lep 1 constraints on neutralinos to set limits on the mass of the lightest neutralino .assuming heavy sneutrinos , a lower limit of 25 is obtained , irrespective of .the limit increases with as shown in fig .[ fig : mchi ] . 
assuming unification of all gaugino masses and of all squark and slepton masses at the gut scale , the constraints from lep 2 can be compared with those inferred at the tevatron from the absence of any squark or gluino signal . this has been done by opal , as shown in fig . [ fig : opal ] . it can be seen that , for large squark masses , the indirect lep 2 limit on the gluino mass is almost 280 , well above the direct limit of 160 . ( the exact values depend on and ; the cdf parameter choice has been used for comparison . ) other lep 2 mass limits are 67 for and 75 for ( assuming = 2 ) , both for ; the stop mass limit varies between 69 and 75 , depending on and on the mixing angle in the stop sector . _ contributions by p. azzi _ _ , j.d . _ and j.h . dann _ in 1995 , cdf reported the observation of a spectacular event containing two electrons and two photons , all with in excess of 30 gev , and with a missing transverse energy of 53 gev . there is no convincing explanation of this event within the standard model , which triggered a lot of theoretical activity . two classes of supersymmetric interpretation were proposed , both advocating that this event originates from selectron pair production . in the light gravitino scenario , the usual decay takes place , followed by . in the almost standard mssm scenario , the decay takes place instead , with . here , the gut relation among gaugino masses has to be dropped , and is an almost pure photino while is an almost pure higgsino . if one of these interpretations is correct , there should be many other channels leading to final states containing two photons and missing . this signature has been searched for inclusively by both cdf and d0 , and no signal was observed ( other than the previously reported event ) . as can be seen in fig . 2 of and in fig . 1 of , the sensitivity of these searches is not sufficient to completely rule out either of the two proposed scenarios . the usual phenomenology of supersymmetry at lep is also deeply modified in these two models , the clearest signature becoming a pair of acoplanar photons with missing energy . in the light gravitino scenario , this final state results from , with , while in the higgsino lsp scenario , it is reached through , followed by . both reactions proceed dominantly via t - channel selectron exchange . the signatures are somewhat different in the two scenarios because the light gravitino is practically massless , in contrast to the higgsino lsp . requiring two energetic photons at large angle with respect to the beam eliminates the standard model background from . the absence of any signal allows mass limits in excess of 70 to be set in the light gravitino scenario , for selectron masses around 100 . this excludes half of the domain compatible with the kinematics of the cdf event , as can be seen in fig . [ fig : dann ] . the constraints on the higgsino lsp scenario are much milder . single photons could also arise from the processes or . no significant effect was observed beyond the standard model expectation from . _ contribution by g.w . last year at moriond , the aleph collaboration reported the observation of an excess of four - jet events in the data collected at 130 - 136 gev .
choosing the pairing of jets such that the dijet mass difference is smallest, nine events were found to cluster close to 105 in the dijet mass sum , while less than one were expected .the other collaborations did not observe any similar effect .a working group was set up by the lepc , involving members from the four collaborations , in order to study if this discrepancy could be explained by some experimental artefact .the conclusion is twofold : the aleph events are real and do cluster in the dijet mass sum as reported ; the other collaborations have similar sensitivity to such events and would have seen them if they had been present in their data samples . with the additional data collected at 161 and 172 gev , the effect appears enhanced in the aleph data and still does not show up elsewhere , as can be seen in fig . 3 of . in the mass window from 102 to 110, aleph find altogether 18 events for a background expectation of 3.1 , while delphi , l3 and opal together find nine events for an expectation of 9.2 .the probability that the aleph observation arises from a fluctuation of the standard model is extremely low , and so is the probability that the three other collaborations see nothing if the aleph signal is real ( precise numbers are of little interest at this point ) .only additional data will allow this issue to be settled unambiguously . _contributions by e. perez _ _ and b. straub _ at hera , 27.5 gev positrons collide on 820 gev protons , the centre - of - mass energy thus being gev . the following results are based on 14 and 20 of data collected by the h1 and zeus experiments , respectively . the kinematics of deep inelastic electron - proton scattering is sketched in fig . 1 of .the relevant variables are , and , the three being related by .the centre - of - mass energy in the electron - quark collision is related to by , and the scattering angle in that frame is such that .four independent quantities can be measured in the laboratory : the electron energy and angle with respect to the beam , and ; the same variables for the hadronic jet , and .two of these quantities are sufficient to reconstruct the kinematics of the event .h1 choose the electron variables and , while zeus rather use the two angles , and .both methods have their virtues and defects .the electron energy is sensitive to the absolute calibration of the calorimeter ; the measurement using the angles is more affected by initial state radiation .both collaborations therefore use an alternative method as a check .they claim that their systematic errors are controlled at a level of 8.5% at high and that the backgrounds are negligible in that regime .the observation of zeus is summarized in fig .[ fig : zeus ] .the number of events observed for gev is 191 , for an expectation of 196 .however , for gev , only 0.145 events are expected while two are observed ; and four events are found with and , to be compared to an expectation of 0.91 .similar information from h1 is available in fig . 
3 of , where the variable is used rather than .specific projections with additional cuts are displayed in fig .5 of , from which it can be inferred that an excess compared to the standard model expectation is observed for gev .this excess is more apparent for large values of and of , and the effect is maximal for in a 25 window around 200 and for where seven events are observed while only one is expected .even if it seems clear that both experiments find more events at high than they had expected , the question of the compatibility of their observations can be raised .indeed , the largest effects seen by the two experiments take place in disconnected regions . in the high high corner where zeus count four events , h1 find none . in the mass window where h1 count seven events ,zeus find two .the obvious conclusion in this somewhat confused situation is : `` wait ( for more data ) and see ... '' _ contributions by j.h . _ and s. rosier - lees _ if the accumulation of the h1 high events in a narrow electron - quark mass window is not due to a statistical fluctuation , it could be interpreted as a signal of the resonant production of a first generation leptoquark or of a squark with r - parity violation . in the latter case , the coupling at the ed vertex would be of the order of 0.03/ . with such a large coupling value ,the limits on neutrinoless double beta decay exclude a interpretation , and those on the decay almost rule out a , leaving a as sole candidate. such a squark or leptoquark could be pair produced in collisions , with a cross section independent of the value of the coupling .there are three possible final states to consider , depending on whether both , one or none of the two squarks / leptoquarks decay into eq , the alternative decay mode being .thus , the final state may consist of two high electrons and two jets , of one high electron , and two jets , or of two jets and large .only d0 reported on a search for first generation leptoquarks , the result of which is shown in fig . 4 of .this search has no sensitivity to masses as high as 200 , but it was not optimized for that mass range and progress is to be expected soon .the same interpretation of the high events in terms of squarks or leptoquarks leads to the prediction that the cross section for should be distorted by a contribution from the -channel exchange of such an object .this has been investigated by opal at lep 2 , but the present sensitivity is an order of magnitude larger than what would be needed .before summarizing the neutrino oscillation results presented at this conference , it may be worth recalling a few basics . in a two - flavour oscillation scheme , the probability for a neutrino born as to be detectedas reads where is the distance of the detector to the source and is the neutrino energy , in m / mev or km / gev ; is measured in ev .if a given experiment reaches a sensitivity for the oscillation probability , this translates into a limit of on at large , and of on for .the reach in is therefore characterized by the value of . as can be seen in table [ tab : nuosc ] ,the various experiments cover a huge range of values , with only little overlap , a feature which renders cross checks difficult ..typical values for the various kinds of neutrino oscillation experiments .also indicated are the relevant types of oscillation . 
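the two - flavour probability quoted above is the standard expression , with the mass splitting in ev and l / e in km / gev ( equivalently m / mev ) . a minimal sketch , with purely illustrative parameter values rather than any measured ones :

```python
import numpy as np

def p_appearance(sin2_2theta, dm2_ev2, l_km, e_gev):
    # two-flavour appearance probability:
    #   p(nu_a -> nu_b) = sin^2(2 theta) * sin^2(1.27 * dm^2 * l / e)
    return sin2_2theta * np.sin(1.27 * dm2_ev2 * l_km / e_gev) ** 2

# an lsnd-like configuration (illustrative): l ~ 30 m, e ~ 40 mev
print(p_appearance(sin2_2theta=6e-3, dm2_ev2=1.0, l_km=0.03, e_gev=0.04))
```

the argument 1.27 dm^2 l / e makes explicit why each class of experiment probes its own range of mass splittings : the sensitivity peaks where this phase is of order one .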
there are at the moment three independent indications for neutrino oscillations : * solar neutrinos , with and in the msw interpretation ; * atmospheric neutrinos , with and ; * lsnd , with and . _ contributions by c. galbiati _ _ and y. takeushi _ the radiochemical experiments using gallium are the only ones with a threshold low enough to be sensitive to the pp neutrinos . the final results from gallex were reported . the measurement is solar neutrino units , to be compared with expectations in the range from the solar models . first results from super - kamiokande , based on 102 days of data taking , were presented . super - kamiokande is a huge water detector with a fiducial mass of 22,000 tons , sensitive only to the boron neutrinos . the early observation by kamiokande of neutrinos coming from the direction of the sun is beautifully confirmed , as can be seen in fig . 2 of . the deficit by a factor 0.44 with respect to expectation remains and , as shown in fig . 4 of , there is no conspicuous energy modulation . no results on atmospheric neutrinos were reported . taking these results together with those from the homestake chlorine experiment , there is essentially no room left for the beryllium neutrinos , which may be accommodated by the msw effect . crucial tests will be provided by borexino , which is aimed at the real time detection of beryllium neutrinos , by super - kamiokande when the statistics are sufficient to allow fine distortions of the energy spectrum to be detected , and by sudbury , which should be able to measure the neutral to charged current ratio . _ contributions by k. eitel _ _ and d.h . the principle of a neutrino oscillation experiment at a beam stop is very simple . a high intensity proton beam is absorbed in a target . the positive pions produced come to rest and decay into , while the negative ones are absorbed by nuclear capture . the decay muons also come to rest and decay into . the only particles reaching the detector , located at a few tens of metres from the target , are the neutrinos , of which all species are present except for . both karmen and lsnd search for the appearance of this forbidden neutrino , which could originate from a oscillation . the reaction used for detection is . the positron should be detected with an energy limited to mev . the signature of the neutron is a delayed 2 mev - ray from the neutron capture . the advantage of karmen is the sharp time structure of the proton beam . lsnd benefits from a larger detector mass and from particle identification . the by now well known signal of oscillation observed by lsnd is shown in fig . 2 of . it corresponds to an oscillation probability of . the new information reported in is that a very preliminary 3 signal of 11 events over a background of 11 is also observed in the in - flight decays . these come from an imperfect containment of the produced positive pions in the water target . here , a oscillation signal is searched for , with very different signatures ( harder electron energy spectrum , no neutron ) and systematics . it will be interesting to see what happens to this new signal ; the results from the upgraded karmen experiment , where the veto against cosmic muons has been greatly improved , are also eagerly awaited . _ contributions by m. vander donckt _ _ , b.a . _ and a.
de santo _ the two cern neutrino experiments , chorus and nomad , are aimed at the detection of oscillations in the cosmologically relevant mass domain of a few ev .this mass range can also be expected , given the indications from the solar neutrino experiments and invoking the see - saw mechanism .the principle of chorus is the direct observation of the decay vertex , hence the use of the emulsion technique .considerable r&d efforts went into the automation of the scanning of those emulsions .a limit of on has been achieved for large , based on the pilot analysis of a small fraction of the data and using the channel only . in the case of nomad , the goal is to fully reconstruct the event kinematics to identify the presence of a by the missing momentum carried away by its decay neutrino(s ) , hence the need for a low density target and for excellent momentum and energy resolutions .the limit on presently achieved is , at the level of the best previous result .when they have collected and analysed their full statistics , both experiments should reach the level .projects to increase the sensitivity by an order of magnitude are submitted both at fermilab and at cern , as reviewed in .a search for oscillations was also performed by nomad .in contrast to the case , this search is not background free since it suffers from the contamination in the beam .the analysis therefore relies on a precise knowledge of the component of the beam .detailed monte carlo calculations of the cern neutrino beam are available , but the flux can also be determined from the data itself by a reconstruction of the various sources of electron neutrinos .this is shown in fig . 2 of : using the appropriate charged current interactions, the dominant component can be monitored using the high energy tail of the spectrum ; similarly , the component can be derived from the spectrum .it can be seen in fig . 4 of that the set of and values giving the best fit to the lsnd data is clearly excluded .altogether , as shown in fig . 5 of , this analysis excludes the large domain allowed by lsnd .accessing the lower region would need a medium baseline experiment , as discussed in .is it useful to conclude a summary ? it can certainly be said that this has been one of the very good rencontres de moriond .snow was excellent , and the weather superb ( too bad for the summary speaker ) .many enthusiastic young and sometimes older speakers were given an opportunity to defend their work in front of a demanding audience . thanks to you all , a lot of fresh high quality results were presented .last but not least , tran s hospitality has been , as usual , perfect .we all look forward to moriond 98 .gerdes , _ `` top quark production and decay at the tevatron''_. r. raja , _ `` top quark mass measurements from the tevatron''_. a. gordon , _ `` preliminary measurement of the w mass in the muon channel at cdf with run 1b data''_. d. wood , _ `` electroweak physics from d0''_. a. valassi , _ `` measurement of the w mass at lep2 from the ww cross - section''_. m.a .thomson , _ `` direct reconstruction of at lep''_. s. mele , _`` anomalous couplings at lep 2''_. s. mele , _ `` observation of single w production at lep''_. a. bhm , _ `` results from the measurements of electroweak processes at lep 1''_. p. rowson , _ `` new electroweak results from sld''_. j. steinberger , _ a brief history of the running of . d. gel , _ `` electroweak measurements at lep 2''_. c. jessop , _ `` new results in and charm physics from cleo''_. s. 
gentile , _`` physics of the lepton''_. r. alemany , _ `` new evaluation of ( g-2 ) of the muon and ''_. c. ogrady , _ `` new rare b decay results from cleo''_. f. parodi , _ `` b physics at lep''_. t. usher , _ `` time dependent mixing at sld''_. j.f. de trocniz , _ `` beauty physics at the tevatron''_. p. gay , _ `` searching for higgs bosons at lep 2 with aleph detector''_. s. rosier - lees , _ `` higgs and susy at lep 2''_. p. azzi , _ `` search for new phenomena with the cdf detector''_. j.d .`` beyond the standard model : new particle searches at d0''_. j.h .dann , _ `` search for susy with photons at lep 2''_. g.w .wilson , _ `` searches for new particles at lep 2 and 4-jet status''_. e. perez , _`` observation of events at very high in ep collisions at hera''_. b. straub , _ `` search for a deviation from the standard model in scattering at high and with the zeus detector at hera''_. j. bouchez , _`` future oscillation experiments at accelerators''_. c. galbiati , _`` gallex results , status of solar neutrinos , and the future experiment borexino''_. y. takeushi , _ `` the first results from super - kamiokande''_. k. eitel , _ `` the karmen upgrade and first results''_. d.h .`` lsnd neutrino oscillation results''_. m. vander donckt ,_ `` results of the chorus oscillation experiment''_. b.a .`` first results from the nomad experiment at cern''_. a. de santo , _ `` first results from the oscillations search in the nomad experiment''_.
|
a brief summary of the experimental results presented at this conference is given .
|
we consider in this paper numerical computation of highly oscillatory integrals defined on a bounded interval whose integrands have the form , where the wave number is large , the amplitude function may have weak singularities , and the oscillator has stationary points of certain order .computing highly oscillatory integrals is of importance in wide application areas ranging from quantum chemistry , computerized tomography , electrodynamics and fluid mechanics . for a large wave number , the integrands oscillate rapidly and cancel themselves over most of the range .cancelation dose not occur in the neighborhoods of critical points of the integrand ( the endpoints of the integration domain and the stationary points of the oscillator ) .efficiency of a quadrature of highly oscillatory integrals depends on the behavior of functions and near the critical points .traditional methods for evaluating oscillatory integrals become expensive when the wave number is large , since the number of the evaluations of the integrand used grows linearly with the wave number in order to obtain certain order of accuracy .the calculation of the integrals is widely perceived as a _issue . calculating oscillatory integralsrequires special effort .the interest in the highly oscillatory integrals has led to much progress in developing numerical quadrature formulas for computing these integrals . in the literature , there are mainly four classes of methods for the computation : asymptotic methods , filon - type methods , levin - type methods and numerical steepest descent methods .the basis for convergence analysis of these quadrature rules is the asymptotic expansion of the oscillatory integral , an asymptotic expansion in negative powers of the wave number .the leading terms in the asymptotic expansion may be derived from integration by parts for the case when the oscillator has no stationary point . for the case when the oscillator has stationary points ,the main tool is the method of stationary phase . for a fixed wave number, the convergence order of the asymptotic method is rather low . to overcome this weakness ,the filon - type methods were proposed , which replace the amplitude function by a suitable interpolating function . in many situationsthe convergence order of the filon - type methods is significantly higher than that of the asymptotic methods. a thorough qualitative understanding of these methods and the analysis of their convergence order may be found in for the univariate case and in for the multivariate case . in these methods ,interpolation at the chebyshev points ensures convergence .a drawback of the filon - type methods is that they require to compute the moments , which themselves are oscillatory integrals . for the cases having nonlinear oscillators , it is not always possible to compute the moments exactly . 
in ,the moment - free filon - type methods were developed .an entirely different approach without computing the moments is the levin collocation method .the levin - type methods reduce computation of the oscillatory integral to a simple problem of finding the antiderivative of the integrand , where satisfies the differential equation .the filon - type methods and the levin - type methods with polynomials bases are identical for the cases having the linear oscillator but not for the cases having the nonlinear oscillator .numerical steepest descent methods for removing the oscillation converts the real integration interval into a path in the complex plane , with a standard quadrature method used to calculate the resulting complex integral .although many methods were proposed for computing oscillatory integrals in the literatures , there are still a big room for improving their approximation accuracy and computational efficiency . the filon - type andlevin - type methods require interpolating the derivatives of the amplitude at critical points in order to achieve a higher convergence order .even though computing derivatives can be avoided by allowing the interpolation points to approach the critical points as the wave number increases for the formula proposed in , the moments can not always be explicitly computed . in particular, certain special functions were used for calculating the oscillatory integrals in the case when has singularities and has stationary points .the formulas proposed in do not require computing the special functions and the moments of these formulas can be computed exactly , at the expenses of computing the inverse of the oscillator , which takes up much computing time .numerical steepest descent methods also require computing the inverse of the oscillator or high order derivatives of the integrand .the purpose of this paper is to develop efficient composite quadrature rules for computing highly oscillatory integrals with singularities and stationary points .the methods to be described in this paper require neither computing the moments of the integrand , inverting the oscillator , nor calculating the derivatives of and those of .the main idea used here is to divide the integration interval into subintervals according to the wave number and the singularity of the integrand . to avoid using the special functions , we first split the integration interval into the subintervals according to the singularity of and the stationary points of such that the integrand on the subintervals either has a weak singularity but no oscillation , or has oscillation but no singularity or stationary point .the weakly singular integrals are calculated by the classical quadratures using graded points . to avoid using the derivatives of and those of and avoid computing the inverse of the oscillator , we design a composite quadrature formulas using a partition of the subinterval formed according to the wave number and the property of the oscillator for the oscillatory integrals .these formulas can improve the approximation accuracy effectively , since the convergence order of the formulas computing the oscillatory integrals with smooth integrand and without stationary point may be increased by adding more internal interpolation nodes .specifically , we develop two classes of composite moment - free quadrature formulas for the highly oscillatory integrals . 
class one uses a fixed number of quadrature nodes in each subinterval and has a polynomial order of convergence .this class of formulas are stable and easy to implement .class two uses variate numbers of quadrature nodes in the subintervals and achieves an exponential order of convergence .convergence order of this class of formulas is higher than that of the first class .the quadrature formulas proposed in this paper have the following advantages . comparing with the existing formulas ,the proposed formulas need not computing the inverse of the nonlinear oscillator , or utilizing the incomplete gamma function for the oscillator with stationary points .these formulas not only reduce the computational complexity , but also enhance the approximation accuracy . the approximation accuracy of these formulas is higher than that of the existing formulas for the case when the oscillator integral has stationary points and the oscillator is not easy to invert . we organize this paper in seven sections . in section 2 , we present an improved moment - free filon - type method for the oscillatory integrals developed in . in section 3 , we design a partition of the integration interval and propose composite moment - free filon - type methods for the oscillatory integrals with smoothing integrand and without a stationary point . in sections 4 and 5 , we develop the composite moment - free quadratures defined on a mesh according to the wave number and the properties of the integrand for the oscillatory integrals with both singularities and stationary points .the formulas proposed in section 4 have a polynomial order of convergence , and those in section 5 have an exponential order of convergence .numerical experiments are presented in section 6 to confirm the theoretical estimates on the accuracy of the proposed formulas .moreover , we compare the numerical performance of the proposed formulas with that of those recently proposed in .we summarize our conclusions in section 7 .the goal of this paper is to develop quadrature methods for evaluating oscillatory integrals in the form :=\int_if(x){\rm e}^{{\rm i}\kappa g(x)}{\rm d } x,\ ] ] where ] of whose integrand has no singularities or stationary points .we then design an appropriate partition of according to the wave number , the singularities of and the stationary points of and employ the basic quadrature formula for each of the integrals defined on the subintervals ] , is continuous on ] and has no stationary point in ] . for a fixed positive integer we approximate by its lagrange interpolation polynomial of degree . since is differentiable on ] . letting , we choose interpolation nodes in ] and , where the coefficients , , are the divided differences of , that is , ] , to approximate the integral . in formula , the integrals }_\kappa[w_j,\widetilde{g}] ] or ] and . according to , we have the following error estimate }[f , g]\leq \dfrac{3(m+1)}{m!\kappa^2}{\left\vert\psi^{(m+1)}\right\vert}_{\infty}\sigma^m(b - a)^m.\ ] ]estimate demonstrates that the decay of the error of the filon - type method is of order .the decay of is also for the linear oscillator ( see , ) . for a quadrature formula that approximates integral , we use to denote the number of evaluations of the integrand used in the formula . according to , we have that }_{\kappa , m}[f , g]\right)\leq m+1 ] is affected by . when , the error bound in is not affected by at all . 
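to convey the flavour of such a moment - free rule , here is a sketch ( an assumption - laden simplification , not the authors' exact formula , whose details were lost with the stripped equations ) : on a panel where g is smooth and strictly monotone , substitute t = g(x) , interpolate psi(t) = f(x) / g'(x) at the transformed nodes t_k = g(x_k) in newton form with divided differences , and integrate each newton basis polynomial against e^{i kappa t} exactly by repeated integration by parts . placing the x_k uniformly in x is a simplification ( the paper chooses nodes according to its error analysis ) , and the sketch does evaluate g' at the nodes , which the paper's construction may avoid :

```python
import numpy as np

def poly_osc_moment(c, kappa, a, b):
    # exact value of \int_a^b p(t) e^{i kappa t} dt for the polynomial p with
    # coefficients c (highest degree first), by repeated integration by parts:
    #   antiderivative = e^{i kappa t} * sum_l (-1)^l p^(l)(t) / (i kappa)^(l+1)
    ik = 1j * kappa
    def antideriv(t):
        s, p, sign = 0.0 + 0.0j, np.poly1d(c), 1.0
        for l in range(len(c)):
            s += sign * p(t) / ik ** (l + 1)
            p = p.deriv() if p.order > 0 else np.poly1d([0.0])
            sign = -sign
        return np.exp(ik * t) * s
    return antideriv(b) - antideriv(a)

def filon_moment_free(f, g, dg, a, b, kappa, m):
    # sketch: substitute t = g(x); interpolate psi(t) = f(x)/g'(x) in newton
    # form at the transformed nodes t_k = g(x_k); integrate each newton basis
    # polynomial w_j(t) = prod_{k<j} (t - t_k) against e^{i kappa t} exactly
    x = np.linspace(a, b, m + 1)      # nodes uniform in x (a simplification)
    t = g(x)                          # g assumed smooth, strictly monotone here
    dd = (f(x) / dg(x)).astype(complex)
    for j in range(1, m + 1):         # in-place divided differences of psi
        dd[j:] = (dd[j:] - dd[j - 1:-1]) / (t[j:] - t[:-j])
    q, w = 0.0 + 0.0j, np.poly1d([1.0])
    for j in range(m + 1):
        q += dd[j] * poly_osc_moment(w.coeffs, kappa, t[0], t[-1])
        w = w * np.poly1d([1.0, -t[j]])
    return q

# e.g. f(x) = cos(x), g(x) = x + x**2 on [1, 2] with kappa = 200, m = 6
val = filon_moment_free(np.cos, lambda x: x + x * x, lambda x: 1 + 2 * x,
                        1.0, 2.0, kappa=200.0, m=6)
print(val)
```

for the linear oscillator g(x) = x this reduces to a classical filon rule with exact moments ; no inverse of g , no special functions and no derivatives of f are needed .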
when , the error bound in decreases exponentially with respect to as .when , the error bound in grows exponentially with respect to as .for the case to accelerate convergence we subdivide ] and }[f , g] ] for .this leads to the quadrature formula }[f , g] ] .this quadrature formula will be used in the following sections for computing the oscillatory integrals with the integrand without a singularity or a stationary point . in the following theorem, we analyze the error }[f , g ] : = { \left\vert\mathcal{i}_\kappa^{[a , b]}[f , g]-\mathcal{q}^{[a , b]}_{n,\kappa , m}[f , g]\right\vert} ] , then for }[f , g]\leq \dfrac{3(m+1)}{m!\kappa^2n^{m-1}}{\left\vert\psi^{(m+1)}\right\vert}_{\infty}\sigma^{m}(b - a)^m,\ ] ] and }[f , g]\right)\leq nm+1. ] by first employing estimate with ] and then summing up both sides of the resulting inequality over . noting , this leads to estimate .according to the algorithm , noting that the nodes for are used twice in the algorithm , we conclude that }[f , g]\right)\leq n\mathcal{n}\left(\mathcal{q}_{1,\kappa , m}^{[a , b]}[f , g]\right)-(n-1)\leq nm+1 ] .formula will serve as a basic quadrature formula in this paper for developing sophisticated formulas for computing singular oscillatory integrals . in the remaining sections of this paper, we shall consider the following three cases : 1 .when and are smooth and has no stationary point or inflection point in , according to the wave number we design a partition , and write =\sum_{j\in\mathbb{z}_{n}^+}\mathcal{i}_\kappa^{[x_{j-1},x_{j}]}[f , g] ] for .when has a weak singularity only at the origin and is smooth without a stationary point or an inflection point in , we first divide into two subintervals ] such that the integrand of }[f , g] ] does not have singularity .the integral }[f , g] ] is computed by the method described in item ( i ) .3 . when has a weak singularity only at the origin and is smooth with one stationary point at the origin and has no inflection point in , we first divide into two subintervals ] does not rapidly oscillate and that of ] and ] .however , traditional quadratures for computing the second integral on the right hand side of lead to prohibitive costs for a large . inspired by the quadratures for singular integrals using graded points proposed in , for with we suggest the partition of ] .computing the integral ] for .we shall use formula to calculate these integrals . for this purpose, we define the quantities since is monotonically increasing on , we have for that \right\}}~\text{and}~ m_j\leq\sigma.\ ] ] we shall develop two quadrature methods .method one uses a fixed number of quadrature points in each of the subintervals and has a polynomial order ( in terms of the wave number ) of convergence .method two uses variable number of quadrature points in the subintervals and achieves an exponential order ( in terms of the wave number ) of convergence .we first describe the method having a polynomial convergence order .we choose a fixed positive integer . for each , we use }[f , g] ] .integral ] and the number \right) ] .[ prop : sec3p ] for with , let . if ] . the proof is done by applying theorem [ thm : sec2filonm ] on each of the subintervals ] . for , we apply with being replaced by and by to conclude that \leq\dfrac{3(m+1)}{m!\kappa^2}{\left\vert\psi^{(m+1)}\right\vert}_{\infty}m_jh_j^{m} ] . 
using theorem [ thm : sec2filonm ] again yields that \right)\leq\sum_{j\in\mathbb{z}_n^+}(n_jm+1)-(n-1 ) \leq \left\lceil\sigma\right\rceil nm+1 ] and \right)\leq nm+1 ] to approximate }[f , g] ] defined by is then approximated by the quadrature formula :=\sum_{j\in\mathbb{z}_n^+}\mathcal{q}^{[x_{j-1},x_j]}_{n_j,\kappa , m_j}[f , g].\ ] ] we next study the error :={\left\vert\mathcal{i}_\kappa[f , g]-\mathcal{q}_{\kappa , n}[f , g]\right\vert} ] .to this end , we first establish two technical lemmas .[ lem : sec3orderp ] there exists a positive constant such that for all with , .we prove this result by estimating the lower bound of the set . for , we have that .thus , we obtain that , which is bounded by a constant .[ lem : sec3pre ] there exists a positive constant such that for all , that satisfy and with , by the definition of , we see that for .condition implies that since .we observe from the stirling formula for using inequality with , we conclude that there exists a positive constant such that for all with and with , . on the other hand, condition implies that .this together with the above inequality ensures that there exists a positive constant such that for all , satisfying and with , .this concludes the desired result . for a function , we let for .we are now ready to establish the estimate for ] , then there exists a positive constant such that for all and satisfying , \leq c\sigma ( n-2)^{-1/2}\kappa^{-n-1}{\left\vert\psi\right\vert}_{(n-1)n+1}.\ ] ] for with , there holds the estimate \right)\leq\left\lceil\sigma\right\rceil \left(n(n-1)\ln{n}+n^2\right)+1 ] for , and then sum them over . by employing with being replaced by and by and , we obtain for that \leq\dfrac{3\sigma(m_j+1)}{m_j!\kappa^2 } h_j^{m_j}{\left\vert\psi^{(m_j+1)}\right\vert}_{\infty}.\ ] ] for , we have that \leq\dfrac{6\sigma \kappa^{-n-1}}{(n-2)!}{\left\vert\psi^{(n)}\right\vert}_{\infty} ] . since for with , , we conclude that there exists a positive constant such that for all , satisfying , and with , \leq c\sigma\dfrac{(n-2)^{-1/2}}{m_j-1}\kappa^{-n-1}{\left\vert\psi^{(m_j+1)}\right\vert}_{\infty} ] over and applying lemma [ lem : sec3orderp ] , we observe that there exists a positive constant such that for all and satisfying & \leq&\dfrac{6\sigma}{(n-2)!}\kappa^{-n-1}{\left\vert\psi^{(n)}\right\vert}_{\infty}+c\sigma\kappa^{-n-1 } { \left\{\sum_{j=2}^{n}\dfrac{(n-2)^{-{1}/{2}}}{m_j-1}\right\}}{\left\vert\psi\right\vert}_{(n-1)n+1}\\ & \leq&c\sigma(n-2)^{-1/2}\kappa^{-n-1}{\left\vert\psi\right\vert}_{(n-1)n+1}.\end{aligned}\ ] ] it remains to estimate the number of functional evaluations used in the quadrature formula . to this end, we note that theorem [ thm : sec2filonm ] yields \right ) & \leq&\sum_{j\in\mathbb{z}_n^+}\big{\{}\left\lceil\sigma\right\rceil \left(n(n-1)/(n+1-j)+1\right)+1\big{\}}-(n-1)\\ & \leq&\left\lceil\sigma\right\rceil\left\{n+n(n-1 ) \sum_{j\in\mathbb{z}_n^+}1/(n+1-j)\right\}+1.\end{aligned}\ ] ] for by using we have that \right ) \leq\left\lceil\sigma\right\rceil\left\{n+n(n-1)(\ln(n)+1)\right\}+1 = \left\lceil\sigma\right\rceil \left(n(n-1)\ln{n}+n^2\right)+1,\end{aligned}\ ] ] which completes the proof . as a direct consequence of theorem [ thm : sec3e ] , we have the next estimates for the case having the linear oscillator . 
if , then there exists a positive constant such that for all and satisfying , \leq c ( n-2)^{-{1}/{2}}\kappa^{-n-1}{\left\vertf\right\vert}_{(n-1)n+1} ] .the filon - type methods and the levin - type methods achieving a convergence order higher than require the evaluation of derivatives of and .the filon - clenshaw - curtis rules for the oscillatory integrals with nonlinear oscillator requires the evaluation of .quadrature methods developed in this section do not require computing derivatives of or those of , nor evaluating .moreover , since condition implies that , theorem [ thm : sec3e ] demonstrates that the quadrature ] . for the case , the oscillator does not have a stationary point .we now write the integral as the sum of two integrals : a weakly singular integral without rapid oscillation and an oscillatory integral without a singularity or a stationary point .according to the assumption on , by the taylor theorem , for each there exists a constant ] and ] and for we set :=\int_0 ^ 1\phi(x){\rm d}x ] , the integral is rewritten as =\mathcal{i}^{\mu}[f , g]+\mathcal{i}_{\kappa}^{\lambda}[f , g] ] that appears in .the integrand defined by does not oscillate rapidly .the classical quadrature rules for weakly singular integrals developed in can then be used to treat the singularity .below , we briefly review the quadrature rules .we begin with describing the gauss - legendre quadrature rule for integral }[\psi]:=\int_{a}^{b}\psi(x){\rm d}x ] . given ,we denote by the zeros of the legendre polynomial of degree and by ^{-2} ] .there is a constant ] to }[\psi] ] .we now recall the integral method proposed in . given , let .for with , according to the parameter we choose points given by for . the quadrature rule for ] by zero and using }[\varphi_\kappa] ] for .integral ] .we need a lemma that estimates the norm of , for , where for .this requires the use of the fa di bruno formula for derivatives of the composition of two functions . for a fixed ,if the derivatives of order of two functions and are defined , then where for , where the sum is taken over all -tuples , , satisfying the constraints and . for and for , we let for brevity .note that are the stirling numbers of the second kind .they have the property that for with and satisfying , . for , the bell number has the bound that . [lem : sec4singularity ] if for some satisfying assumption , then there exists a positive constant such that for all , . for , by using, we obtain that from the definition of , we have two inequalities below . if with , we have that for with , by assumption [ sec4:assume ] there exists a constant ] , . applying the leibniz formula to the function yields .\end{aligned}\ ] ] from the assumption on , there exists a positive constant such that for all , and ] , using lemma [ lem : sec4singularity ] and formula in the inequality above , we obtain the desired estimate .we need a technical result for the integral of a function of for some .[ lem : sec4s ] if is of for some , then there exists a positive constant such that for all and , .we prove this result by bounding by and computing the resulting integral exactly . for a quadrature formula for approximation of, we use to denote the number of evaluations of the integrand used in the formula . 
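as a sketch of the graded - mesh treatment of the weakly singular , non - oscillatory part , the following composite m - point gauss - legendre rule uses points graded toward the singularity ; the grading exponent q = 2m / mu is an assumption chosen to recover the o(s^{-2m}) rate of theorem [ thm : sec4sp ] for an integrand behaving like x^{mu - 1} , since the exact partition used in the paper did not survive extraction :

```python
import numpy as np

def graded_mesh(a, b, s, q):
    # s + 1 points on [a, b], graded toward a: x_j = a + (b - a) * (j / s)**q
    j = np.arange(s + 1)
    return a + (b - a) * (j / s) ** q

def composite_gauss(phi, mesh, m):
    # m-point gauss-legendre rule applied on every panel of `mesh`
    nodes, weights = np.polynomial.legendre.leggauss(m)
    total = 0.0
    for a, b in zip(mesh[:-1], mesh[1:]):
        x = 0.5 * (b - a) * nodes + 0.5 * (a + b)
        total += 0.5 * (b - a) * np.dot(weights, phi(x))
    return total

# model problem: \int_0^1 x^(mu - 1) dx = 1 / mu, singular at x = 0
mu, m = 0.6, 3
mesh = graded_mesh(0.0, 1.0, s=16, q=2 * m / mu)  # q = 2m / mu (assumed grading)
approx = composite_gauss(lambda x: x ** (mu - 1.0), mesh, m)
print(approx, abs(approx - 1.0 / mu))
```

because the gauss nodes are interior to each panel , the singular endpoint x = 0 is never evaluated , which is what allows a fixed - order rule to be reused unchanged on the graded panels .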
with the above preparation, we estimate the error ] in the following theorem .[ thm : sec4sp]let .if is of for some and satisfies assumption , then there exists a positive constant such that for all , with \leq c\kappa_{\sigma(r)}^{-(1+\mu)/(r+1)}s^{-2m},\ ] ] and \right)\leq ( s-1)m ] and :={\left\vert\mathcal{q}_{m}^{[x_j , x_{j+1}]}[\varphi_{\kappa } ] -\mathcal{i}^{[x_j , x_{j+1}]}[\varphi_{\kappa}]\right\vert} ] and then sum them over .we first consider ] such that \leq h_j^{2m+1}{\left\vert\varphi_\kappa^{(2m)}(\xi_j)\right\vert}/(2^{2m}(2m+1)!) ] . summing up the bound of errors ] .for each , we use quadrature formula to compute an approximation }[f , g] ] , where is a positive integer to be specified later .we estimate the error :={\left\vert\mathcal{i}_{\kappa}^{[x_{j-1},x_{j}]}[f , g ] -\mathcal{q}_{n_j,\kappa , m}^{[x_{j-1},x_{j}]}[f , g]\right\vert} ] for . from , we have for all that \leq\dfrac{3(m+1)}{m!\kappa^2n_j^{m-1}}{\left\vert\psi^{(m+1)}\right\vert}_{\infty}m_j^mh_j^m ] with the assumption on , we observe that there exists a positive constant such that for all , and , \leq c\dfrac{{\left\vertg(1)\right\vert}^{-\alpha}}{(m-1)!\kappa^2n_j^{m-1}}q_j^m{(g(x_{j-1}))^{\alpha-1}}x_{j-1}^{-m}h_j^m.\end{aligned}\ ] ] note that and for .substituting these relations into the inequality above yields the desired result .we estimate an upper bound of the quantity .[ lem : sec4bound ] if satisfies assumption , then for all and , . using the definition of and , we have that and for .thus , and . for , we obtain for and that .we now discuss the choice of .lemma [ lem : sec4bound ] demonstrates that the upper bound of is independent of for the case and it depends on for the case .therefore , we need to consider these two cases separately . for the case , we choose for the case , we choose so that for all are independent of . with this choice of , we use }[f , g] ] for . thus ,integral ] and \right) ] .we prove by estimating ] . since , we see that . substituting this inequality into the inequality above yields that there exists a positive constant such that for all , and , \leq c { \left\vertg(1)\right\vert}^{-\alpha}\delta_0^{\alpha-2}\sigma_0\kappa^{-2}\kappa_{\sigma_0}^{1-\alpha}\left(\beta_n^0\right)^m ] is obtained by using formula , the choice of and lemma [ lem : sec4bound ] with . in passing , we comment on the hypothesis imposed to in the last theorem . if for some and satisfies assumption [ sec4:assume ] , then is of .a proof of this conclusion may be found in . as a direct consequence of theorem [ thm : sec4p0 ] , we have the next estimates for the case with the linear oscillator for , where we have that and .[ cor : sec4linearp ] if is of for some and some , then there exists a positive constant such that for all and satisfying , \leq c{\kappa^{-\alpha-1}}{\ln{\kappa}}\theta_n^{m-1} ] .we now consider the case .[ thm : sec4p ] let .if is of for some and some and satisfies assumption , then there exists a positive constant such that for all and satisfying , \leq c ( r+1 ) { \left\vertg(1)\right\vert}^{-\alpha}(\delta(r))^{\alpha-1}\kappa^{-2 } \kappa_{\sigma(r)}^{1-\alpha}{\ln{\kappa_{\sigma(r)}}}(\beta_n(r))^{m-1}.\ ] ] there holds \right ) \leq\left\lceil\big{(}2(r+1)\sigma(r)/\delta(r)\big{)}^{m/(m-1)}\right\rceil nm+1 ] . 
according to formula and lemma [ lem : sec4bound ] ,we obtain that \right ) \leq\sum_{j\in\mathbb{z}_n^+}{\left\{n_jm+1\right\}}-(n-1 ) \leq\left\lceil\left((r+1)\kappa_{\sigma(r)}^{1/n}\sigma(r)/\delta(r)\right)^{m/(m-1)}\right\rceil nm+1 ] , with two fixed positive integers and .in this section , we develop a quadrature method for computing the oscillatory integral having an exponential order of convergence . as in section 4, we shall write integral as the sum of a weakly singular integral and an oscillatory integral , and treat them separately .specifically , we shall develop a quadrature rule for computing having an exponential order in terms of a constant of convergence and a quadrature rule for computing having an exponential order in terms of the wave number of convergence .we now describe the quadrature rule for the integral ] we borrow an idea from ( see also ) .we begin with describing a partition of . for and with , let be a partition of with nodes defined by and for . for , we choose for . the quadrature rule for computing ] by 0 and using }[\varphi_\kappa] ] , for .integral ] . to this end , we impose the following hypothesis on .[ sec5:assume ] there exists a positive constant such that for all , .we first estimate the norm of with for as section 4 .[ lem : sec5wnorm ] if satisfies assumption , then for all and , .the proof of this lemma is similar to that of lemma [ lem : sec4singularity ] . using assumption [ sec5:assume ] and applying with andwe observe that for all , , , and , this together with yields that for all , and , since .note that for from the recurrence relation of the bell number involving binomial coefficients .this together with the inequality above yields the conclusion . in the next lemma ,we study a property of the derivatives of defined as in .[ lem : sec5sder ] if is of for some and satisfies assumption , then there exist two positive constants and such that for all , and ] and . by the assumption on ,there exists a positive constant such that for all , and ] , .the desired estimate of this lemma is then obtained from this inequality with .we need two lemmas regarding the parameters for .[ lem : sec5orderse1 ] let with and .if , , , then . moreover , there holds for .we prove the first result by contradiction .assume to the contrary that there exist , with such that .without loss of generality , we assume . then , there exists a positive integer such that .we then observe that , which leads to .this contradicts the assumption and thus .the second inequality follows from the assumptions that and , from which we conclude that .[ lem : sec5orderse2 ] if , and , then there exists a positive constant such that for all with and , , where .the proof is similar to that of lemma 4.1 in . using the second result of lemma [ lem : sec5orderse1 ] and the fact , we obtain that .applying inequality with , we find that .using the bound of bell number with , we have that . combining these three inequalities yields , where .it suffices to prove that there exists a positive constant such that for all with and , . according to the definition of and , we observe that .it follows for satisfying that .on the other hand , for satisfying , we have that , which is a constant .this proves the desired inequality .we are now ready to estimate the error ] of the quadrature rule ] . 
in particular , if for , then the upper bound in reduces to .we prove by estimating ] , for , and then summing them over .we first consider ] such that \leq h_j^{2m_j+1}/(2^{2m_j}(2m_j+1)!){\left\vert\varphi_\kappa^{(2m_j)}(\xi_j)\right\vert} ] .summing up the bound of errors ] . according to for with in lemma [ lem : sec5orderse1 ] , we observe that .it follows that . substituting this result andthe definition of into the inequality above yields the estimate .the bound of the number of functional evaluations used in quadrature may be obtained directly . when , we substitute and into estimate to yield the special result .we next develop a quadrature rule for the oscillatory integrals .this is done by choosing variable numbers of quadrature nodes in the subintervals of .specifically , for and for the partition of with nodes defined by , we let and for .for each , we use }[f , g] ] .integral ] for .[ lem : sec5sube ] if is of for some and satisfies assumption , then there exists a positive constant such that for all , and \leq c \dfrac{{\left\vertg(1)\right\vert}^{-\alpha}(\delta(r))^{\alpha-1}}{(m_j-1)!\kappa^2n_j^{m_j-1}}q_j^{m_j}x_{j-1}^{(\alpha-1)(r+1)}\left(\beta_n(r)\right)^{m_j}.\end{aligned}\ ] ] this lemma may be proved in the same way as lemma [ lem : sec4sube ] with .we next estimate the error :={\left\vert\mathcal{i}_\kappa^{\lambda}[f , g ] -\mathcal{q}_{\kappa , n}^{\lambda}[f , g]\right\vert} ] and \right) ] .we prove by estimating ] . applying with the definition of , we observe that .combining these two inequalities yields a positive constant such that for all , and , \leq c \dfrac{{\left\vertg(1)\right\vert}^{-\alpha}\sigma_0\delta_0^{\alpha-2}}{(m_j-1)\kappa^2 } \dfrac{\left(\tau_n^0\right)^{m_j - n}}{(m_j-2)!}\lambda_0^{-1}\left(\rho_n^0\right)^n ] .summing up the inequality above over , we obtain that there exists a positive constant such that for all and satisfying , \leq c { \left\vertg(1)\right\vert}^{-\alpha } \sigma_0\delta_0^{\alpha-2}(n-1)^{-1/2}\kappa^{-2}\lambda_0^{-1}\left(\rho_n^0\right)^n\sum_{j\in\mathbb{z}_{n}^+}(m_j-1)^{-1}.\end{aligned}\ ] ] this together with lemma [ lem : sec5orderp ] and leads to the estimate .according to formula and lemma [ lem : sec4bound ] with , we obtain that for , \right ) \leq\sum_{j\in\mathbb{z}_n^+}\big{\{}\left\lceil\sigma_0/\delta_0\right\rceil m_j+1\big{\}}-(n-1 ) \leq\left\lceil\sigma_0/\delta_0\right\rceil\big{(}2n^2+\lceil1-\alpha\rceil(n^2+n)\big{)}/2 + 1 ] , where . for , there holds the estimate \right ) \leq \big{(}2n^2+\lceil1-\mu\rceil(n^2+n)\big{)}/2 + 1 ] . the proof is handled in the same way as that of theorem [ thm : sec5e0 ] . using and the definition of , we obtain that , for . 
by employing lemmas [ lem : sec4bound ] and [ lem : sec5sube ] , the choice of and the inequality above, there exists a positive constant such that for all , and , \leq c \dfrac{(r+1){\left\vertg(1)\right\vert}^{-\alpha}\sigma(r)(\delta(r))^{\alpha-2}\kappa_{\sigma(r)}^{1/n}}{(m_j-1)\kappa^2 } \dfrac{\left(\tau_n(r)\right)^{m_j - n}}{(m_j-2)!}\lambda_r^{-1}(\rho_n(r))^n.\end{aligned}\ ] ] using the first result of lemma [ lem : sec5pre ] , we obtain that there exists a positive constant such that for all , and satisfying , \leq c ( r+1){\left\vertg(1)\right\vert}^{-\alpha}\sigma(r)(\delta(r))^{\alpha-2}\kappa_{\sigma(r)}^{1/n}(n-1)^{-1/2}\kappa^{-2 } \lambda_r^{-1}(\rho_n(r))^n/(m_j-1).\end{aligned}\ ] ] this together with the second result of lemma [ lem : sec5pre ] and ensures that there exists a positive constant such that for all , and satisfying , \leq c ( r+1){\left\vertg(1)\right\vert}^{-\alpha}\sigma(r)(\delta(r))^{\alpha-2}\kappa^{-2}\kappa_{\sigma(r)}^{1/(r+1)}(\rho_n(r))^n/(m_j-1).\ ] ] summing up the inequality above over and applying lemma [ lem : sec5orderp ] , we obtain the estimate . it remains to estimate \right) ] .this yields the last conclusion .note that the decay of the bound of ] .we next estimate the error ] under reasonable hypotheses .note that for implies that .if and , then there exists a positive constant such that for all and that satisfy and , \leq c \kappa^{-(1+\mu)/(r+1 ) } \gamma^{(1+\mu)(n-1)} ] . in the case , there exists a positive constant such that for all and satisfying and , \leq c \kappa^{-1}\gamma^{(1+\mu)(n-1)} ] . in this example, we consider the functions and for .the exact value of the corresponding integral can be computed exactly , and thus , the errors presented below are computed using the exact value . captypetable captypefigure + a : re of ] we present in table [ sec6:tab1egp1 ] the relative error ( re ) of ] .we plot the error ] is faster than , but is slower than .captypefigure + a : ] we next compare the performance of the proposed quadrature formulas with that of the existing formulas described in . to this end , we recall the quadrature formulas of . following , by we denote the index of singularity of , denote the degree of the polynomial interpolant , denote the grading parameter and denote the number of the subintervals .the filon - clenshaw - curtis ( fcc ) rule proposed in for the integral }[f , g] ] , where for ] ,the integral }[f , g] ] . while for with ,the integrand of on ] for .let }(f) ] .the cfcc rule is formed by }(f):=\widetilde{\mathcal{i}_{\kappa}}^{[x_0,x_1]}(f)+ \sum_{j\in\mathbb{z}_{m-1}^+}h_j{\rme}^{{\rm i}\kappa d_j}\mathcal{i}_{h_j\kappa}^{[-1,1]}[\mathcal{q}_n(f_j),g] ] for . 
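the relative errors reported in these experiments can be reproduced with a small harness . the sketch below is illustrative : quad_rule stands for whichever formula is under test ( none of the cmfe , fcc or cfcc rules is implemented here ) , and the reference value is built by brute force with a phase - resolving composite gauss rule , which is adequate for smooth f ; a singular f would additionally need a graded reference mesh near the singularity .

    import numpy as np

    def reference(f, g, kappa, cells=2000, m=10):
        # brute - force reference for int_0^1 f(x) * exp(1j*kappa*g(x)) dx :
        # with many cells each cell sees only a fraction of a period , so a
        # plain gauss - legendre rule converges quickly on every cell .
        t, w = np.polynomial.legendre.leggauss(m)
        xs = np.linspace(0.0, 1.0, cells + 1)
        val = 0j
        for a, b in zip(xs[:-1], xs[1:]):
            x = 0.5 * (b - a) * t + 0.5 * (a + b)
            val += 0.5 * (b - a) * np.sum(w * f(x) * np.exp(1j * kappa * g(x)))
        return val

    def relative_error(quad_rule, f, g, kappa):
        exact = reference(f, g, kappa)
        return abs(quad_rule(f, g, kappa) - exact) / abs(exact)

    # usage sketch , with a placeholder standing in for a tested formula :
    # re = relative_error(my_rule, np.sin, lambda x: x**3, kappa=1000.0)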
when is a strictly increasing nonlinear function and has a single stationary point at zero , we calculate the integral instead by a change of variables and mapping ] and :=\int_0 ^ 1x^{\alpha}{\rm e}^{{\rm i}\kappa x}{\rm d } x ] and =\left(\gamma(1+\alpha){\rm e}^{{\rm i}\pi(1+\alpha)}({\rm i}\kappa)^{-(1+\alpha)}+ { \rm e}^{{\rm i}\kappa}/({\rm i}\kappa)\right)(1+\mathcal{o}(1/\kappa)) ] and :=\gamma(1+\alpha){\rm e}^{{\rm i}\pi(1+\alpha)}({\rm i}\kappa)^{-(1+\alpha)}+ { \rm e}^{{\rm i}\kappa}/({\rm i}\kappa) ] and ] , which approximates the integral ] are computed by using ] and the cfcc formula }(f_1) ] and the cfcc formula }(f_2) ] and the cfcc formula }(f_3) ] , which approximates ] .captypefigure + a : re of ] captypefigure + a : ) ] numerical results of this example are presented in figure [ sec6:fig1egp3][sec6:fig4egp3 ] and table [ sec6:tab1egp3 ] .we plot in figure [ sec6:fig1egp3 ] the re values of the formula ] ( a ) and the re values ( b ) , and those of the formula ] and those of the cfcc formula }\left((f / g')\circ g^{(-1)}\right) ] b : re of ] .we consider in this example the functions and for , the same as those in example [ egp:1 ] .captypetable numerical results of this example are reported in table [ sec6:tab1ege1 ] and figure [ sec6:fig1ege1 ] .we list in table [ sec6:tab1ege1 ] the re values of the formula ] in figure [ sec6:fig1ege1 ] ( a ) and the error ] increases as grows for a fixed and the asymptotic order of convergence is for .it concurs with the theoretical estimate .comparing figure [ sec6:fig1ege1 ] ( b ) with figure [ sec6:fig2egp1 ] in example [ egp:1 ] , we can obtain an order of convergence faster than by using variable number of quadrature nodes in the subintervals .captypefigure + a : re of ] in the following two examples , we validate the efficiency of the quadrature rule proposed in section 5 for calculating the oscillatory integrals with a stationary point , without / with singular . [ ege:2]this example is to verify the estimates established in theorems [ thm : sec5se ] and [ thm : sec5e ] for the cmfe formula ] .we consider in this example the functions and for . when we compute the errors of the quadrature formulas , the true value of this integral is computed by using /3 ] and those of the cfcc formula }_{\kappa,4,m,16}\left((f / g')\circ g^{(-1)}\right) ] , which are listed in row 2 of table [ sec6:tab1ege2 ] and the re values obtained from the cfcc formula }_{\kappa,4,m,16}\left((f / g')\circ g^{(-1)}\right) ] with with and .captypefigure + a : re of ] captypetable we conclude from the results in table [ sec6:tab1ege2 ] and figure [ sec6:fig1ege2 ] ( a ) that for a fixed the approximation accuracy of the formula ] increases slow as grows .furthermore , from the results presented in table [ sec6:tab1ege2 ] , we observe that for , and large , quadrature formula ) ] , which uses more number of functional evaluations .in fact , from figure [ sec6:fig1ege2 ] ( b ) , we see that the number of functional evaluations used in formula ] , which approximates the integral ] b : ) ] and those of the cfcc formula }_{\kappa,4,m,21}\left((f / g')\circ g^{(-1)}\right) ] , which are listed in row 2 of table [ sec6:tab1ege3 ] and the re values of the cfcc formula }_{\kappa,4,m,21}\left((f / g')\circ g^{(-1)}\right) ] with with and . 
from figure [ sec6:fig1ege3 ] ( a ) and table [ sec6:tab1ege3 ] , we conclude that the accuracy of ] by using the cmfe formulas , the fcc and cfcc formulas . we consider the functions , and for . table [ sec6:tab1ege4 ] lists approximation values produced by the cmfe formula ] and the computing time they consume , while table [ sec6:tab2ege4 ] lists approximation values produced by the cmfe formula ] and the computing time they consume . both tables show that the cmfe formula consumes significantly less cpu time than the fcc and cfcc formulas while producing comparable approximation results , even though the cmfe formula uses a larger number of functional evaluations . we develop in this paper composite quadrature formulas for computing highly oscillatory integrals defined on a finite interval with both singularities and stationary points . the partitions of the integration interval used in the composite quadrature formulas are designed according to the degree of oscillation and the singularity . in each of the subintervals , we use piecewise polynomial interpolants to approximate the integrand , forming two classes of formulas having polynomial ( resp . exponential ) order of convergence by using a fixed ( resp . variable ) number of interpolation nodes . numerical experiments are carried out to confirm the theoretical results on the accuracy of the proposed formulas and to compare them with existing methods . numerical results show that the proposed formulas outperform the existing methods in both approximation accuracy and computational efficiency .
v. domínguez , i. g. graham and t. kim , filon - clenshaw - curtis rules for highly - oscillatory integrals with algebraic singularities and stationary points , siam j. numer . anal . , * 51 * ( 2013 ) , 1542 - 1566 .
v. domínguez , public domain code , http://www.unavarra.es/personal/victor_dominguez/clenshawcurtisrule .
l. filon , on a quadrature formula for trigonometric integrals , proc . roy . soc . edinburgh , * 49 * ( 1928 ) , 38 - 47 .
e. a. flinn , a modification of filon s method of numerical integration , j. acm , * 7 * ( 1960 ) , 181 - 184 .
a. iserles , on the numerical quadrature of highly oscillating integrals ii : irregular oscillators , ima j. numer . anal . , * 25 * ( 2005 ) , 25 - 44 .
a. iserles and s. p. nørsett , efficient quadrature of highly - oscillatory integrals using derivatives , proc . r . soc . a , * 461 * ( 2005 ) , 1383 - 1399 .
|
we develop two classes of composite moment - free numerical quadratures for computing highly oscillatory integrals having integrable singularities and stationary points . the first class of the quadrature rules has a polynomial order of convergence and the second class has an exponential order of convergence . we first modify the moment - free filon - type method for the oscillatory integrals without a singularity or a stationary point to accelerate their convergence . we then extend the treatment to the oscillatory integrals with singularities and stationary points . the composite quadrature rules are developed based on partitioning the integration domain according to the wave number and the singularity of the integrand , so that the integral defined on each subinterval has either a weak singularity without rapid oscillation or rapid oscillation without a singularity . the classical quadrature rules for weakly singular integrals using graded points are employed for the singular integrals without rapid oscillation , and the modified moment - free filon - type method is used for the oscillatory integrals without a singularity . unlike the existing methods , the proposed methods do not have to compute the inverse of the oscillator . numerical experiments are presented to demonstrate the approximation accuracy and the computational efficiency of the proposed methods ; the results show that the proposed methods outperform recently published methods . key words : oscillatory integrals , algebraic singularities , stationary points , moment - free filon - type method , graded points .
|
the application of general relativity ( gr ) to the large length scales relevant in cosmology , necessarily requires an averaging operation to be performed on the einstein equations ( ellis ) .the nonlinearity of gr then implies that such an averaging will modify the einstein equations on these large scales .symbolically , this happens since \neq\langle e[g]\rangle ] arises from a product of the normalisation of the initial power spectrum , and the factor which arises in the transfer function integral , where is the wavenumber corresponding to the radiation - matter equality scale . ] .this indicates that at least for epochs around the last scattering epoch , the backreaction due to averaging was negligible .the real situation is somewhat more complex than this simple calculation indicates . on the one hand ,the time evolution of is needed in order to solve the equations satisfied by the perturbations , as we effectively did above by assuming a form for . on the other hand ,the evolution of the _ perturbations _ is needed to compute the correction terms .until these corrections are known , the evolution of the scale factor can not be determined ; and until we know this evolution , we can not solve for the perturbations . to break this circle , we will adopt an iterative procedure .we first compute a `` zeroth iteration '' estimate for the backreaction , by assuming a fixed standard background such as scdm , evolve the perturbations and compute the time dependence of the objects , denoted .now , using these _ known _ functions of time , we form a new estimate for the background using the modified equations , and hence calculate the `` first iteration '' estimate . this process can then be repeated , and is expected to converge as long as perturbation theory in the metric remains a valid approximation . with these ideas in mind , we can go ahead and compute the `` zeroth iteration '' estimate for the backreaction in the zalaletdinov framework .this is given by the following equations ( paranjape ) \ , , \label{eq4}\ ] ] \ , , \label{eq5}\ ] ] where we have defined . , and are negative definite and their magnitudes have been plotted .the vertical line marks the epoch of matter radiation equality .,width=264 ] here , the prime denotes a derivative wrt .conformal time ( ) , with , is the fourier space transfer function defined by , and is the initial power spectrum of .the results of a numerical calculation are shown in figure [ fig1 ] , where all functions are normalised by the hubble parameter , and confirm that this zeroth iteration estimate in fact gives a negligible contribution .further , were we to carry out the next iteration , we would essentially obtain no difference between the zeroth and first iteration scale factors upto the accuracy of the calculation , and hence this iteration has effectively converged at the first step itself .we see that the dominant contribution to the backreaction at late times , is due to a curvature - like term , as expected from our simple estimate above . 
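the iterative procedure used above is a plain fixed - point loop , which the following python skeleton makes explicit . it is purely structural : evolve_perturbations , backreaction_terms and solve_background are hypothetical placeholders for the actual integrations behind eqs . ( [ eq4 ] ) and ( [ eq5 ] ) , and the relative tolerance test on the scale factor history is our assumption rather than a stopping rule taken from the text .

    import numpy as np

    def iterate_backreaction(a_scdm, evolve_perturbations, backreaction_terms,
                             solve_background, tol=1e-6, max_iter=10):
        # a_scdm : zeroth - iteration scale factor history on a fixed time grid
        a_hist = a_scdm
        for it in range(max_iter):
            phi = evolve_perturbations(a_hist)       # metric perturbations
            corr = backreaction_terms(a_hist, phi)   # correlation corrections
            a_new = solve_background(corr)           # corrected background
            if np.max(np.abs(a_new - a_hist)) <= tol * np.max(np.abs(a_hist)):
                return a_new, corr, it + 1           # iteration has converged
            a_hist = a_new
        return a_hist, corr, max_iter

in the linear regime described above the loop effectively converges at the first step , since the zeroth - iteration corrections are already negligible and the dominant late - time piece retains its curvature - like scaling .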
in order to obtain a correction which grows faster than this , we need a nonstandard evolution of the metric potential , which can only happen if the _ scale factor _ evolves very differently from the scdm model , which in turn would require a significant contribution from the backreaction .the same circle of dependencies as before , now implies that _ as long as the metric is perturbed flrw _ , the backreaction appears to be dynamically suppressed .secondly , as figures [ fig2 ] and [ fig3 ] show , scales which are approaching nonlinearity , _ do not _ contribute significantly to the backreaction , which is a consequence of the suppression of small scale power by the transfer function .we will return to this point when discussing the backreaction during epochs of nonlinear structure formation ., namely the function , at three sample values of the scale factor .the function dies down rapidly for large , with the value at some being progressively smaller with increasing scale factor .the declining behaviour of the curves for and extrapolates to large .,width=264 ] this shows that nonlinear scales do not impact the backreaction integrals significantly ., width=264 ]let us now ask whether one can make meaningful statements concerning the backreaction during epochs of nonlinear structure formation , when matter density contrasts become very large and perturbation theory in the matter variables has broken down .we begin by considering some order of magnitude estimates .let us start with the assumption that although the matter perturbations are large , one can still expand the _metric _ as a perturbation around flrw .we are looking for either self - consistent solutions using this assumption , or any indication that this assumption is not valid .given that the metric has the form ( [ eq3 ] ) ( and further assuming as before ) , the relevant gravitational equation at late times and at length scales small comparable to , is the poisson equation given by where is the density contrast of cdm .as before , we can estimate the dominant backreaction component to be .now , for an over / under - density of physical size , treating and on dimensional grounds , we have for voids , we can set , and then , since we have assumed .this shows that sub - hubble underdense voids are expected to give a negligible backreaction . for overdense regions we need to be more careful ,since here can grow very large . in a typical spherical collapse scenario ,the following relations hold , which lead to \ , .\label{eq14}\ ] ] it would therefore appear that at late enough times , the perturbative expansion in the metric breaks down with , and the backreaction grows large .however , the crucial question one needs to answer is the following : is this situation actually realised in the universe , or are we simply taking these models too far ?we make the claim that perturbation theory in the metric _ does not _break down at late times , since _ observed peculiar velocities remain small_. the spherical collapse model is not a good approximation when _ model _ peculiar velocities in the collapsing phase grow large . 
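the smallness of the metric potential for realistic structures can be checked with a standard order - of - magnitude estimate ; the numbers below are illustrative choices for a cluster - scale overdensity , not values taken from the text .

    % dimensional estimate \nabla^2\Phi \sim \Phi / L^2 in the poisson
    % equation for a contrast \delta over a comoving scale L :
    \Phi \;\sim\; \tfrac{3}{2}\,\Omega_m \left(\frac{a H L}{c}\right)^{2} \delta .
    % with L \sim 2\,\mathrm{Mpc} , \; c/H_0 \simeq 4.3\times 10^{3}\,\mathrm{Mpc} ,
    % \Omega_m \simeq 0.3 and a virialised contrast \delta \sim 200 ,
    \Phi \;\sim\; \tfrac{3}{2}\,(0.3)\left(\tfrac{2}{4300}\right)^{2}(200)
        \;\approx\; 2\times 10^{-5} \;\ll\; 1 ,
    % so the metric stays perturbatively close to flrw even though \delta
    % itself is highly nonlinear , consistent with the claim above .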
to support this claim , we will work with an exact toy model of spherical collapse .the model we consider was presented by paranjape & singh ( ) , and can be summarized as follows .the matter content of the model is spherically symmetric pressureless `` dust '' , and hence the relevant exact solution is the lematre - tolman - bondi ( ltb ) metric given by here is the proper time measured by observers with fixed coordinate , which is comoving with the dust . is the physical area radius of the dust shell labelled by , and satisfies the equation . here is the mass contained inside each comoving shell , and a dot denotes a derivative with respect to the proper time . the energy density of dust measured by an observer comoving with it satisfies the equation , where the prime now denotes a derivative with respect to the ltb radius .initial conditions are set at a scale factor value of , and are chosen such that the initial situation describes an flrw expansion with a perturbative central overdensity out to radius , surrounded by a perturbative underdensity out to radius , with appropriately chosen values for the various parameters in the model ( see table 1 of paranjape & singh ) .figure [ fig4 ] shows the evolution of the overdensity contrast in the central region . clearly , at late times the situation is completely nonlinearnevertheless , it can be shown that a coordinate transformation in this model can bring its metric to the form ( [ eq3 ] ) , _ provided _ one has where .physically is the `` comoving '' peculiar velocity. the metric potentials ( which are actually equal at the leading order , see van acoleyen ) have the expressions , , where and are obtained by integrating and .a numerical calculation shows that and hence the metric potentials do in fact remain small for the entire evolution , for _ this _ model .further , the infall peculiar velocity can only become large if the true infall velocity is large , in which case the specific background chosen to define the peculiar velocity , becomes irrelevant ( since ) .hence , the fact that relativistic infall velocities are _ not _ observed in real clusters etc . , leads us to expect very generally that the perturbed flrw form for the metric should in fact be recoverable even at late times .finally , figure [ fig5 ] shows the dominant contribution to the backreaction in the toy model ( paranjape & singh ) . there is a significant departure from a curvature - like behaviour , due to evolution of the metric potentials .more importantly , the maximum value of the backreaction here is , as opposed to as seen in the linear theory .this can be understood by noting that the inhomogeneity of our toy model is only on relatively small , nonlinear scales ( ) , and the value of the backreaction is therefore consistent with our earlier observation that nonlinear scales contribute negligibly to the total backreaction . to conclude, we have seen that as long as the metric has the perturbed flrw form , the backreaction remains small .further , there are strong reasons to expect that the metric remains a perturbation around flrw even at late times during nonlinear structure formation , a claim that is supported by our toy model calculation .it should be possible to test this claim in -body simulations as well .it appears therefore , that backreaction can not explain the observed acceleration of the universe .99 ellis , g. f. r. 1984 , in _ general relativity and gravitation _( d. reidel publishing co. , dordrecht ) , eds .b. bertotti buchert , t. 
2000 , gen . rel . grav .
hirata , c. m. and seljak , u. 2005 , phys . rev . d72 , 083501 .
ishibashi , a. and wald , r. 2006 , class . quant . grav . , 23 , 235 .
kolb , e. w. , matarrese , s. and riotto , a. 2006 , new j. phys . , 8 , 322 .
paranjape , a. 2008 , phys . rev . d78 , 063522 .
paranjape , a. and singh , t. p. 2007 , phys . rev . d75 , 064004 .
paranjape , a. and singh , t. p. 2008a , jcap , 0803 , 023 .
paranjape , a. and singh , t. p. 2008b , phys . rev . lett . , 101 , 181101 .
räsänen , s. 2006 , class . quant . grav . , 23 , 1823 .
van acoleyen , k. 2008 , jcap , 0810 , 028 .
zalaletdinov , r. 1992 , gen . rel . grav . , 24 , 1015 .
zalaletdinov , r. 1993 , gen . rel . grav . , 25 , 673 .
|
there is an ongoing debate in the literature concerning the effects of averaging out inhomogeneities ( `` backreaction '' ) in cosmology . in particular , some simple models of structure formation studied in the literature seem to indicate that the backreaction can play a significant role at late times , and it has also been suggested that the standard perturbed flrw framework is no longer a good approximation during structure formation , when the density contrast becomes nonlinear . in this work we use zalaletdinov s covariant averaging scheme ( macroscopic gravity or mg ) to show that as long as the metric of the universe can be described by the perturbed flrw form , the corrections due to averaging remain negligibly small . further , using a fully relativistic and reasonably generic model of pressureless spherical collapse , we show that as long as matter velocities remain small ( which is true in our model ) , the perturbed flrw form of the metric can be explicitly recovered . together , these results imply that the backreaction remains small even during nonlinear structure formation , and we confirm this within the toy model with a numerical calculation .
|
atmospheric blocking is a fundamental large - scale weather phenomena in mid - high latitudes in the atmosphere that has a profound effect on local and regional climates .the life cycle of an atmospheric blocking always brings about large - scale weather or short terms climate anomalies .therefore , it is rather important to predict an atmospheric blocking in regional midterm weather forecast and short - term climate trend prediction .there are three types of patterns for an atmospheric blocking anticyclone , i. e. , monopole type blocking ( or omega type blocking ) , dipole type blocking , and multi - pole type blocking . during the past many years , the dipole type blocking has been studied a lot since its first discovery by rex .malguzzi and malanotte - rizzoli first used the korteweg de - vries ( kdv ) rossby soliton theory to study dipole type blocking . while unfortunately their analytical results failed to describe the onset , developing , and decay of a blocking system .in fact , the important atmospheric blocking can be explained by many different theories except for kdv type equations .recently , luo et al . proposed the envelope rossby soliton theory based on the deduced nonlinear schrdinger ( nls ) type equations and successfully explained the blocking life cycle numerically .more recently , we have found that variable coefficient kdv equation can analytically features the life cycle of a dipole blocking if introducing a time - dependent background field .in addition , it has been revealed in ref . that none zero boundary values and time - dependent background westerly are vital factors to explain a life cycle of a dipole type blocking . in this paper , we are motivated to introduce time into the background flow and boundary conditions to derive new types of equations from a two - layered fluid to investigate atmospheric blocking systems .the paper is organized as follows . in section 2 ,a type of coupled variable coefficient modified kdv type system is derived from a two - layered fluid model .then we give a special analytical solution with many arbitrary functions and constants in section 3 . in section 4 ,we assume some special values of the parameters in the analytical solution to obtain an approximate analytical expression for the stream functions which can describe a typical monopole type blocking event .the whole life cycle of the monopole blocking is graphically displayed , which really captures the feature of a real observational monopole blocking case happened in the 2008 snow storm in china .last section is a short summary and discussion .the starting two - layered fluid model is where and . in eqs . , is a weak coupling constant between two layers of the fluid and , , where is the earth s radius , is the angular frequency of the earth s rotation and is the latitude , is the characteristic velocity scale .the derivation of the dimensionless equations and is based on the characteristic horizontal length scale m and the characteristic horizontal velocity scale m/s .a type of coupled kdv equations has been derived from the system - , and its painlev property and soliton solutions have also been discussed .it is a common treatment to rewrite the stream function by two parts , namely , with being the background flow term , introduce the stretched variables , ( is a constant ) , and take the background field only as a linear function of , and then expand the stream function as . 
recently ,considering the fact that background flow and the shear of the flow both could be time dependent , a new treatment was adopted in refs . to view background field as a function arbitrarily depending on and then further expand it as .below , a new type of coupled variable coefficient modified kdv type system is derived in a different way from that used in ref .first , we introduce a variable transformation , and denote .then , we rewrite the stream functions with the stretched variables , ( is a constant ) , and finally make the expansions , and .in addition , we introduce .then we substitute all the expansions into eqs . and with eqs . and ,and then vanish all the coefficients of each order of . for notation simplicity ,the primes are dropped out in the following . in the first order of , we obtain where and satisfy and respectively .the general solutions of eqs . and read and where are arbitrary integration functions of . in the second order of , we have where are determined by a system of equations presented in appendix a. in the third order of ,if we assume and requiring satisfy a system of equations ( we do not write them down here for they are too long while can be easily retrieved following the above procedures ) , then we arrive at a coupled variable coefficient modified kdv type system with being arbitrary functions of the indicated variable .since equations - constitute a coupled variable coefficient nonlinear system , it is not easy to obtain its general solution . herewe present a quite special solution of the system .it is easy to see that if we suppose with constant , then eqs . anddegenerate to one modified kdv type equation where are given by when further , it can be verified that the modified kdv equation can be transformed to the standard one if where , is given by and are arbitrary functions , with conditions hence , it is easy to obtain exact solutions of eq .based on the solutions of eq . through eq . with eq . .one classical soliton solution of the mkdv equation is with a constant .now just using this typical solution , we can easily write down a special solution of the original system - as \right\},\label{repsi1}\\ & & \psi_2\approx u_0(a_{22}y,\tau)+\epsilon u_1(a_{22}y,\tau)+\epsilon b_2(a_{22}y,\tau)\left\{c_1\epsilon(a_{11}x+a_{12}y - c_0 t ) -\frac{m_6f_1f_2 ^ 2}{2a^2f_{4\tau}}\right.\nonumber\\ & & \qquad \left.+f_2k{\rm sech}\left[\frac{6kf_1\epsilon}{\beta_0 ^ 2a^2}(a_{11}x+a_{12}y - c_0t)-\frac{3kf_3}{2a^4 } -k^3f_4\right]\right\},\label{repsi2}\end{aligned}\ ] ] with , and determined by eqs . and , respectively . from the analytical solution of stream functions and , it is easy to derive the corresponding background westerly flows , and , which are all left as arbitrary functions of the indicated variables .by selecting the arbitrary functions and constants appropriately , the approximate analytical solution - can be responsible for different kinds of atmospheric blocking phenomenon . herewe give a typical example when the functions and constants are taken as in this case , the solution can describe a monopole type blocking event as depicted in figure [ fig ] .considering the fact that the basic flow , mainly the basic westerly and the shear of the basic westerly , plays a significant role on the blocking developing process , it is hence reasonable to introduce the related term . 
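for reference , the standard focusing mkdv equation and its one - soliton solution , which is the generic form behind the sech profile entering ( [ repsi1 ] ) and ( [ repsi2 ] ) above ; the paper's own coefficient normalisation is not recoverable from the text , so the canonical normalisation is used here :

    u_{\tau} + 6\,u^{2}u_{\xi} + u_{\xi\xi\xi} = 0 ,
    \qquad
    u(\xi,\tau) = k\,\operatorname{sech}\!\left(k\,\xi - k^{3}\tau\right) .
    % a one - parameter family : the amplitude k also fixes the width 1/k
    % and the speed k^2 , which is why the phase of the stream functions
    % above carries the combination k(\cdot) - k^{3} f_4 .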
, title="fig:",width=170 ] , title="fig:",width=170 ] , title="fig:",width=170 ] + , title="fig:",width=170 ] , title="fig:",width=170 ] , title="fig:",width=170 ] + , title="fig:",width=170 ] , title="fig:",width=170 ]evidently , a whole life cycle of a monopole type blocking event , namely , the onset , development , maintenance , and decay processes , are clearly presented in figure [ fig ] .the streamlines are gradually deformed , and the anticyclonic high in the north develops at the second day ( fig .it is strengthened daily . at around the fourth day ( fig .1e ) , it is at its strongest stage and then become weaker and eventually disappear after the seventh day ( figs .1e - h ) . obviously , fig .1 possesses the phenomenon s salient features including their spatial - scale and structure , amplitude , life cycle , and duration .therefore , fig . 1 is a very typical monopole type blocking episode in one layer of the fluid .a real observational blocking case happened during 19 feb 2008 to 26 feb 2008 is shown in fig .[ fig2 ] appearing a monopole pattern .it is easily found that the life cycle of this blocking lasts about eight days experiencing three stages : onset ( 19 - 20 feb , 2008 ) , mature ( 21 - 22 feb , 2008 ) and decay ( 23 - 26 feb , 2008 ) periods , which resembles those displayed in figure 1 .-axis is longitude , and the -axis is latitude .contour interval is 6 gpdm.[fig2],width=453 ]considering a time - dependent basic westerly , we have derived a type of coupled variable coefficient modified kdv type system from a two - layered fluid model , with the time dependent coefficients resulted from the time dependent basic flows and time dependent boundary conditions .instead of possessing a linear meridional shear , the mean flow is assumed to be a combination of a quadratic function of and .the boundary conditions remain as unknown functions with some complicated relations . under a set of parameters , our analytical solution nicely described a real observational blocking life cycle from 19 feb to 26 feb 2008 , indicating the onset , mature and decay phases in its developing process in one of the layers of the fluid . in the other layer ,similar or different types of blocking can be found when setting the unknown parameters .therefore , it is revealed that blocking can also be governed by modified kdv type system and its analytical solutions can also features the life cycle of a blocking .it is worth further investigations on how variations of the time - dependent background westerlies influence the type of a blocking and the evolution of a blocking during its life period since the role of the weak westerlies on a blocking is still unclear .the work was supported by the national natural science foundation of china ( no . 10735030 , no .10547124 , no .90503006 , and no . 40305009 ) , national basic research program of china ( 973 program ) ( no .2007cb814800 and no .2005cb422301),specialized research fund for the doctoral program of higher education ( 20070248120 ) , program for changjiang scholars and innovative research team in university ( pcsirt no .irt0734 ) , and program for new century excellent talents in university ( ncet ) .999 rex d f , tellus , 2 ( 1950a ) 196 + rex d f , tellus , 2 ( 1950b ) 275 pedlosky j , geophysical fluid dynamics , 1979 ( new york : springer ) huang f , tang x y and lou sy , j. atmos .64 ( 2007 ) 52 + tang x y , huang f and lou s y , chin .phys . lett .23 ( 2006 ) 887 lou s y , tong b , hu h c and tang x y , j. phys . 
a : math . gen . 39 ( 2006 ) 513
tong b , man j and lou s y , commun . theor . phys . 45 ( 2006 ) 965
malguzzi p and malanotte - rizzoli p , j. atmos . sci . ( 1984 ) 2620
luo d h , huang f and diao y , j. geophys . res . 106 ( 2001 ) 31795 + luo d h , j. atmos . sci . 62 ( 2005 ) 5 ; 62 ( 2005 ) 22
are determined by the following equations
|
a type of coupled variable coefficient modified korteweg - de vries system is derived from a two - layered fluid system . it is known that the formation , maintenance , and collapse of an atmospheric blocking are always related to large - scale weather or short - term climate anomalies . one special analytical solution of the obtained system successfully features the evolution cycle of an atmospheric monopole type blocking event . in particular , a real monopole type blocking case that happened during 19 feb 2008 to 26 feb 2008 is well described by our analytical solution .
|
many studies have focused on increasing the performance of machine - based computational systems over the last decades . as a result , much progress has been made allowing increasingly complex problems to be efficiently solved . however , despite these advances , there are still tasks that can not be accurately and efficiently performed even when the most sophisticated algorithms and computing architectures are used .examples of such tasks are those related to natural language processing , image understanding and creativity .a common factor in these kinds of tasks is their suitability to human abilities ; human beings can solve them with high efficiency and accuracy . in the last years, there has emerged a new computing approach that takes advantage of human abilities to execute these kinds of tasks .such approach has been named _ human computation _ . applications designed to execute on human computation systems may encompass one or multiple tasks .they are called _ distributed human computation applications _ when they are composed of multiple tasks , and each individual task can be performed by a different human being , called _worker_. in the last years , distributed computing systems have been developed to support the execution of this type of application .they gather a crowd of workers connected to the internet and manage them to execute application tasks .the precursor of such systems is recaptcha .currently , there is a broad diversity of distributed human computation applications and distributed systems devoted to execute them , such as : games with a purpose , contests sites , online labor markets , and volunteer thinking systems . in this paper , we focus on online labor markets and volunteer thinking systems .online labor markets gather a crowd of workers that have a financial motivation .the precursor of this type of system is the amazon mechanical turk platform ( mturk.com ) .such plaform reports to have more than registered workers , and receives between and new tasks to be executed per day ( mturk-tracker.com ) at the time of writing .volunteer thinking systems , in turn , gather a crowd of workers willing to execute tasks without any financial compensation .one of the precursors of this type of system is the zooniverse citizen - science platform ( zooniverse.org ) .currently , zooniverse hosts scientific projects and has over one million registered workers .only galaxy zoo , the largest project at zooniverse , had million tasks performed by workers in a year of operation .thus , both labor markets and volunteer thinking are large - scale distributed human computation systems . because the computing units in human computation systems are human beings ,both the design and management of applications tap into concepts and theories from multiple disciplines .quinn and bederson conducted one of the first efforts to delimit such concepts and theories .they present a taxonomy for human computation , highlighting differences and similarities to related concepts , such as collective intelligence and crowdsourcing .yuen et al . , in turn , focus on distinguishing different types of human computation systems and platforms .more recently , kittur et al . 
built a theoretical framework to analyze future perspectives in developing online labor markets that are attractive and fair to workers .differently from previous efforts , in this study , we analyze human computation under the perspective of _ programmers seeking to improve the design of distributed human computation applications _ and _ managers seeking to increase the effectiveness of distributed human computation systems_. to conduct this study , we propose a theoretical framework that integrates theories about human aspects , design and management ( d&m ) strategies , and quality of service ( qos ) requirements .human aspects include characteristics that impact workers ability to perform tasks ( e.g. , cognitive system and emotion ) , their interests ( e.g. , motivation and preferences ) , and their differences and relations ( e.g. , individual differences and social behavior ) . qos requirements , in turn , are metrics directly related to how application owners measure applications and systems effectiveness . these metrics are typically defined in terms of time , cost , fidelity , and security .finally , d&m strategies consist of strategies related to how the application is designed and managed . they involve activities such as application composition , task assignment , dependency management , and fault prevention and tolerance .this framework allows us to perform a literature review that expands previous literature reviews to build a vision of human computation focused on distributed systems issues .we emphasize our analysis on three perspectives : findings on relevant human aspects which impact d&m decisions ; major d&m strategies that have been proposed to deal with human aspects ; and open challenges and how they relate to other disciplines . besides providing a distributed systems viewpoint of this new kind of computational system, our analysis also puts into perspective the fact that human computation introduces new challenges in terms of effective d&m strategies .although these challenges are essentially distributed systems challenges , some of them do not exist in machine - based distributed systems , as they are related to human aspects .these challenges call for combining distributed systems design with theories and mechanisms used in other areas of knowledge where there is extensive theory on treating human aspects , such as cognitive science , behavioral sciences , and management sciences . in the following ,we briefly describe the human computation ecosystem addressed in this paper .then , we present our theoretical framework . after that , we analyze the literature in the light of our framework .this is followed by the discussion of challenges and perspectives for future research .finally , we present our conclusions .the core agents in a distributed human computation ecosystem are : requesters , workers , and platform . _ requesters _ act in the system by submitting human computation _applications_. an application is a set of _ tasks _ with or without dependencies among them . typically , a human computation task consists of some input data ( e.g. , image , text ) and a set of instructions .there are several types of instructions , such as : transcription of an item content ( e.g. , recaptcha tasks ) , classification of an item ( e.g. 
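since the framework is referred to throughout the rest of the paper , a compact machine - readable summary of its three dimensions may help as a map ; the encoding below is ours and simply transcribes the factor lists detailed in the framework section that follows .

    # the three dimensions of the framework and the factors in each ,
    # as enumerated in the text ( names transcribed , structure ours )
    FRAMEWORK = {
        "qos_requirements": [
            "time", "cost", "fidelity", "reproducibility", "security",
        ],
        "dm_strategies": [
            "application composition", "incentives and rewards",
            "dependency management", "task assignment",
            "output aggregation", "fault prevention and tolerance",
        ],
        "human_aspects": [
            "cognitive system", "motivation", "preferences",
            "social behavior", "emotion", "individual differences",
            "intraindividual variability and change",
        ],
    }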
, galaxy zoo tasks ) , generation of creative content about an item , ranking and matching items , etc ._ workers _ are the human beings who act as human computers in the system executing one or more tasks .they generate the task output by performing the instructions upon the received items . after executing a task, the worker provides its _output_. the application output is an aggregation of the outputs of all their tasks . in paid systems , when a task is performed , the requester may accept the solution if the task was satisfactorily performed ; otherwise , she / he can reject it .workers are paid only when their solutions are accepted .the receiving of tasks to be executed , the provision of their outputs , and the receiving of the payment for the performed tasks occur via a human computation platform .the _ platform _ is a distributed system that acts as a middleware receiving requester applications and managing the execution of their tasks by the workers .platforms manage several aspects of tasks execution , such as : providing an interface and language for tasks specification , performing task replication , maintaining a job board with a set of tasks waiting to be performed , controlling the state of each task from the submission up to its completion .examples of platforms with such characteristics are the online labor markets amazon mechanical turk ( mturk.com ) and crowdflower ( crowdflower.com ) , and the volunteer thinking systems zooniverse ( zooniverse.org ) and crowdcrafting ( crowdcrafting.org ) .online labor markets also implement functionalities that allow requesters to communicate with workers that performed their tasks , provide feedback about their outputs , and pay for the tasks performed by them .some studies have analyzed human computation ecosystem . in general , they focus mainly on proposing a taxonomy for the area and discussing platform issues .quinn et al . propose a taxonomy for human computation that delimits its similarities and differences when compared to others fields based on human work , such as : crowdsourcing , social computing , and collective intelligence .vukovic and yuen et al .focus on classifying human computation platforms based on their function ( e.g. , design and innovation ) , mode ( e.g. , competition and marketplace platforms ) , or algorithms .dustdar and truong and crouser and chang focus on hybrid platforms based on the collaboration between machine and human computing units .dustdar and truong focused on strategies to provide machine computation and human computation as a service , using a single interface .crouser and chang propose a framework of affordances , i.e. , properties that are inherent to human and properties that are inherent to machine , so that they complement each other . 
differently from these previous efforts , in the present work we focus on analyzing strategies for designing and managing distributed applications onto human computation platforms .our main focus is not to survey existing human computation platforms , but to analyze d&m strategies that have been proposed to be used in these kind of platforms .our analysis is based on a theoretical framework built upon theories and concepts from multiple disciplines dealing with ( i ) human aspects , such as : motivation theory , self - determination theory , sense of community theory , human error theory , coordination theory and human - in - the - loop literature , and ( ii ) applications design , applications management and qos aspects , such as : the great principles of computing ; application design methodologies ; taxonomies for application management in grid computing , web services , and organizations .theoretical frameworks have several distinct roles . most important for us , they allow researchers to look for patterns in observations and to inform designers of relevant aspects for a situation . our framework is designed to assist the analysis of the diverse aspects related to human computation applications .it is organized in three dimensions which represent different perspectives in which it is possible to approach human computation : _ qos requirements _ , _ strategies _ , and _ human aspects_. each dimension is closely connected to an agent in the human computation ecosystem : qos requirements are requesters effectiveness measures ; d&m strategies are mainly related to how platforms manage application execution ; and human aspects are worker characteristics .each dimension is composed of a set of factors .figure [ fig1 ] provides an overview of the framework . considering their relations ,it is clear that the dimensions are not independent .the definition of d&m strategies is affected by both the qos requirements and the human aspects .qos requirements reflect requester objectives that should guide the design of suitable d&m strategies .human aspects , in turn , consist in workers characteristics that delimit a restriction space where d&m strategies may act aiming at optimizing qos requirements .d&m strategies generate a final output whose quality is a measure of their capacity of optimizing qos requirements taking into account human aspects . in the following we detail our framework by discussing the theories that support its dimensions and their factors .the qos requirements dimension encompasses a set of quantitative characteristics that indicate requesters objectives and how they evaluate application effectiveness .qos requirements have been mainly addressed in two distinct areas : process management for organizations , and qos for software systems .based on the literature from these areas , we define qos requirements in terms of the following factors : * _ time _ refers to the urgency to transform an input into an output .it includes response time , and delays on work queues , and a time limit ( deadline ) for generating an output ; * _ cost _ refers to expenditure on the application execution .it is usually divided into enactment and realization cost .enactment cost concerns expenditure on d&m strategies , and realization cost , expenditure on application execution . 
* _ fidelity _ reflects how well a task is executed , according to its instructions .fidelity can not be described in a universal formula and it is related to specific properties and characteristics that define the meaning of `` well executed '' .it is a quantitative indicative of accuracy and quality . *_ reproducibility _ refers to obtain similar output when the application is executed at different times and/or by different group of workers taken from the same population . *_ security _ relates to the confidentiality of application tasks and the trustworthiness of resources that execute them . based on application design and management methodologies in machine - based computation and in organizations , we define five factors for the d&m strategies dimension : application composition , incentives and rewards , dependency management , task assignment , output aggregation , and fault prevention and tolerance ._ application composition ._ it consists of two major activities : problem decomposition and application structuring .problem decomposition includes the following decisions : tasks granularity , e.g. , generating fewer task that require more effort to be executed ( coarse - grained ) or generating many small tasks that require less effort to be executed ( fine - grained ) ; worker interfaces for the tasks , i.e. , the interface design of the web page that shows the instructions of the work to be done .application structuring consists in how to compose the application considering possible dependencies between its tasks . as exemplified in figures[ fig2 ] and [ fig3 ] , the two major application structuring patterns are _ bag - of - tasks _ and _ workflow_. bag - of - tasks applications are composed of a set of independent tasks . for example , a group of human intelligence tasks ( hits ) in mturk platform .workflow applications , in turn , are composed of a set of tasks organized in a sequence of connected steps . each step is composed by one or more tasks .independent tasks are usually grouped in the same workflow step .interdependent tasks , in turn , constitute different workflow steps ._ incentives and rewards ._ incentive are put in place when the participants exhibit distinct objectives and the information about them are decentralized . in human computation systems , requesters andworkers may have different interests .for example , some workers may be interested in increasing their earnings , while requesters are interested in which tasks are performed with greater accuracy .incentive strategies are used to align the interests of requesters and workers .they are usually put in place to incentivize workers to exhibit a behavior and achieve a performance level desired by the requester , which includes executing more tasks , improving accuracy and staying longer in the system .incentives can be broadly divided into non - monetary and monetary .examples of non - monetary incentives are badges provided as a recognition for workers achievements , and rank leaderboard for the workers to gauge themselves against peers .monetary incentives are usually associated with a reward scheme , which defines the conditions for a worker to be rewarded , e.g. , providing an output identical to the majority of other workers who perform the task .game theory is an important theoretical guide to incentivize workers engagament and effort in human computation systems ._ dependency management ._ it focuses on the coordination between interdependent tasks . 
a framework of dependencies between tasks in human computation is presented by minder et al .it is mainly based on the coordination theory .dependencies among tasks can be broadly divided into four levels : serialization , visibility , cooperation , and temporal .serialization dependencies specify whether tasks in the application require a serial execution .such dependencies are usually defined in application structure by routing operations , such as : sequence , parallelism , choice and loops .visibility dependencies define whether the work performed in a task must be visible to the other tasks ( e.g. , when a task updates a global variable ) .cooperation dependencies , in turn , define which tasks hold a shared object at each time and can perform operations on it without restriction . finally , temporal dependencies specify whether a set of tasks must perform operations in a particular temporal dependency . _ task assignment ._ it defines how to choose which worker will execute a task . the strategies can be broadly divided into scheduling , job board , and recommendation .scheduling is a push assignment ; workers receive and execute tasks assigned to them . scheduling strategiesassign tasks to workers trying to optimize one or more qos requirements .it is usually based on application and/or workers information .job board , in turn , is a pull assignment ; workers use search and browser functionalities to choose the tasks they want to execute .it allows workers to select those tasks they expect to enable them to maximize their metrics , such as : earnings , preferences , and enjoyment .finally , recommendation is a hybrid assignment ; workers receive a set of tasks and they choose which of them they want to perform .recommendation is mapped into scheduling when the amount of tasks recommended is , and it is mapped into job board when all tasks are recommended . _ output aggregation ._ it is concerned with aggregating sets of individual task outputs to identify the correct output or to obtain a better output .it is interchangeably called judgment aggregation and crowd consensus .an aggregation function may be implemented in several ways and it may be executed by a machine or a human .a simple example is that of different task outputs that constitute different parts of the application output ; thus , the aggregation is only a merge of task outputs .a more sophisticated aggregation function may ask workers to improve available task outputs and generate an application output . note that output aggregation is an _ offline _ procedure , i.e., it is executed after the outputs of all application tasks have already been obtained .there are also _ online _ procedures which involve failure detection in each task output , as well as strategies to detect and manage cheating workers .we discuss online procedures in fault tolerance strategies ._ fault prevention and tolerance ._ faults are events which potentially may cause a failure .a failure is a malfunction or incorrect output .thus , fault prevention consists in avoiding events which may cause failures and fault tolerance consists in identifying and managing failures after they occur . to analyze human error in human computation systems, we join together human error concepts from human error theory and concepts related to the implementation of fault prevention and tolerance in computing system from human - in - the - loop and machine - based distributed systems literatures . 
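a minimal concrete realisation ties the last two factors together : plurality ( majority ) voting over replicated executions of one task serves both as the simplest output - aggregation scheme mentioned above and as an online failure detector . the sketch below is one common arrangement rather than a canonical algorithm ; it assumes a fixed agreement quorum , hashable task outputs and an assign_worker callback , and it anticipates the four fault - tolerance phases detailed next : detection by lack of quorum , confinement by withholding non - agreed outputs , recovery by re - issuing the task to fresh workers , and treatment by flagging dissenting workers .

    from collections import Counter

    def replicated_execute(task, assign_worker, quorum=3, max_workers=5):
        # run replicas of one task until some output gathers `quorum` votes ;
        # returns (output, flagged_workers) , where output is None if no
        # agreement was reached , in which case nothing is released downstream
        answers = []                                   # (worker, output) pairs
        for _ in range(max_workers):
            worker = assign_worker(task)
            answers.append((worker, worker.execute(task)))
            counts = Counter(out for _, out in answers)
            best, n_best = counts.most_common(1)[0]    # plurality consensus
            if n_best >= quorum:
                flagged = [w for w, out in answers if out != best]
                return best, flagged
        return None, [w for w, _ in answers]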
_Fault prevention and tolerance._ Faults are events that may potentially cause a failure; a failure is a malfunction or an incorrect output. Thus, fault prevention consists in avoiding events that may cause failures, and fault tolerance consists in identifying and managing failures after they occur. To analyze human error in human computation systems, we join human error concepts from human error theory with concepts related to the implementation of fault prevention and tolerance in computing systems from the human-in-the-loop and machine-based distributed systems literatures.

To execute a task, a human first constructs a mental plan of a sequence of actions that ends with the conclusion of the task. Three types of failures may occur in this process: _mistakes_, _lapses_, and _slips_. A mistake is a failure in constructing a suitable plan to execute the task: the plan itself is not correct. Lapses and slips are failures in the execution of a correct plan: a lapse occurs when the worker forgets to perform one action of the plan, while a slip occurs when the worker performs an action of the plan incorrectly. A diversity of faults can generate such failures, for example, lack of knowledge or time to execute the task, and stochastic cognitive variability such as variability of attention. Fault prevention strategies usually focus on methodologies for the design, testing, and validation of task instructions, and on testing the capabilities of resources. Fault tolerance, in turn, consists of four phases: failure detection, damage confinement, failure recovery, and fault treatment. Failure detection consists in identifying the presence of a failure, e.g., identifying that a task output is incorrect. Damage confinement aims at determining the boundaries of failures and preventing their propagation, e.g., identifying which task outputs are incorrect and preventing other tasks from making use of these outputs. Failure recovery tries to bring the flow of execution back to a consistent state, e.g., re-executing tasks that produced incorrect outputs. Finally, fault treatment involves treating faults to prevent the occurrence of new failures.

In our context, human aspects are characteristics of human beings that determine the way they perform tasks. These aspects have been widely addressed in psychology studies. They can be broadly divided into the following factors: cognitive system, motivation, preferences, social behavior, emotion, individual differences, and intraindividual variability and changes.

_Cognitive system._ Its functions include several processes involved in task execution, such as information processing, understanding, and learning. It organizes these processes over long-term and short-term memory. Long-term memory is where knowledge is stored; short-term memory is a working memory used to process information in the sense of organizing, contrasting, and comparing. Humans are able to deal with only a few items of information simultaneously in their working memory, and any interactions between items held in working memory also consume working memory capacity, reducing the number of items that can be dealt with simultaneously. Cognitive overload occurs when task processing exceeds working memory capacity.

_Motivation._ From the viewpoint of motivation theory, humans are guided by motivational impulses or goals, i.e., the desire to do or obtain new things and to achieve new conditions. Incentive studies explore the way such goals influence human behavior. According to self-determination theory, motivation is broadly divided into intrinsic and extrinsic.
In task execution, intrinsic motivation may consist of a worker's internal desire to perform a particular task because it gives her pleasure or allows her to develop a particular skill. Extrinsic motivation, in turn, consists of factors external to the worker and unrelated to the task being executed.

_Preferences._ Humans exhibit personal preferences. Such preferences are explained on the basis of two types of influence: one's own past experiences and the directly observable experiences of others. As an example of worker preferences over tasks, consider the case where, after feeling bored several times when executing highly time-consuming tasks, workers always choose to execute only low-time-consuming tasks.

_Social behavior._ Sociality means group/community organization to perform activities. In general, communities form and persist because individuals take advantage of them and thereby serve their interests. Sense-of-community theory suggests that members develop a sense of community based on membership, influence, integration and fulfillment of needs, and shared emotional connection. This behavior may influence the way community members behave and execute tasks in the system.

_Emotion._ Emotion can be defined as a complex psychological and physiological state that allows humans to sense whether a given event in their environment is more or less desirable. Emotion concerns, for instance, mood, affection, feeling, and opinion. It interacts with and influences other human aspects relevant to task execution effectiveness; for example, it influences cognitive system functions related to perception, learning, and reasoning.

_Intraindividual variability and change._ Humans show intraindividual variability and change. Intraindividual variability is a short-term or transient fluctuation characterized by factors such as wobble, inconsistency, and noise. Intraindividual change is a long-term, lasting change resulting from learning, development, or aging.

_Individual differences._ Humans vary among themselves in several factors, such as decision making and performance. In this study we focus mainly on individual differences in terms of three human competencies: knowledge, skills, and abilities. Knowledge refers to an organized body of information applied directly in the execution of a task. Skill refers to proficiency in task execution, usually measured qualitatively and quantitatively. Abilities are the appropriate on-the-job behaviors needed to bring both knowledge and skills to bear in task execution.

Now we turn to mapping the literature on human computation using our theoretical framework. This is done through a literature review focused on analyzing: (a) how human computation studies on D&M strategies have dealt with the human aspects discussed in our framework to satisfy the QoS requirements of requesters; and (b) what the relevant results regarding these human aspects are, upon which future D&M strategy decisions can be based. Throughout this section, for each D&M factor, we discuss the human computation studies and extract the major implications for system design related to human factors. Application composition consists of two major activities: task design and application structuring.
_Task design_ impacts the ability of workers to complete tasks quickly, effectively, and accurately. Poorly designed tasks, which impose a high cognitive load, can cause worker fatigue, compromising their understanding of instructions, decreasing their productivity, and increasing their errors. This usually occurs because of the limitations of human working memory. This is the case of tasks where humans are asked to choose the best item among several options: to perform such a task, humans compare the items and choose the best of them in their working memory. This kind of task generates a cognitive load that grows in proportion to the number of items to be compared; the higher the cognitive load, the higher the chance of error. Tasks can also be designed to motivate workers to put more effort into generating correct outputs. Huang et al. show that one way to achieve this is to tell workers explicitly that their work will be used to evaluate the output provided by other workers.

_Application structuring_ studies can be broadly divided into static workflows and dynamic workflows. An example of static workflow composition is that used in Soylent. Soylent is a word processor add-in that uses MTurk workers (mturk.com) to perform text shortening. Soylent implements the find-fix-verify workflow: find areas of the text that can be shortened, fix and shorten the highlighted area without changing the meaning of the text, and verify that the new text retains its meaning. This task distinction captures human individual differences mainly in terms of the type of tasks workers want to perform, i.e., find, fix, or verify tasks. Static workflows can be optimized; Cascade and Deluge are examples of optimized workflows for taxonomy creation tasks. An example of dynamic workflow composition is that used in Turkomatic. In Turkomatic, workers compose the workflow dynamically in a collaborative planning process: when a worker receives a task, she performs it only if it is simple enough to be executed; otherwise, the worker subdivides it into two other tasks. A problem occurs when workers generate unclear tasks that will be executed by other workers. Another approach is proposed by Lin et al. They assume that a workflow composition results in different output quality when executed by different groups, and they propose making multiple workflow compositions of independent difficulty levels available and dynamically switching between them to obtain higher accuracy with a given group of workers. Finally, Bozzon et al. propose a system that dynamically controls the composition and execution of the application and reacts to specific situations, such as the achievement of a suitable output or the identification of a spammer. This allows the system to adapt the application to changes in worker characteristics and behavior.

*Implications for systems design.*
We extract two major guidelines from this discussion: application designers must avoid cognitive task overload, to which requiring only a small amount of specialized ability, skill, and knowledge in any one task can contribute; and, given that workers in a platform display individual differences, application designers can take advantage of worker diversity by developing applications with different types of tasks, each type requiring a different skill. This may be done either by defining a static composition of tasks that require different skills or by using dynamic composition to adapt to different groups of workers.

Incentive and reward schemes have been designed to incentivize specific behaviors and maximize requesters' QoS requirements. Unfortunately, there is no consensus in the literature on the effect of incentives on worker behavior; the effect seems to vary with the type of task. In some tasks, increasing financial incentives increases the number of workers interested in executing the task, but not necessarily the quality of the outputs. In other tasks, quality may be improved by increasing intrinsic motivation. Incentives also relate to other aspects of task design; for example, some studies show that incentives work better when they are placed in a focal position in the task worker interface, and task fidelity is improved when financial incentives are combined with a task design that asks workers to think about the output provided by other workers for the same task. Besides defining the right incentives, requesters must also define a suitable reward scheme. Witkowski et al. analyze a scheme that pays workers only if their output agrees with those provided by others, and that penalizes them with a negative payment if their output disagrees. They show that such a scheme acts as a self-selection mechanism that makes workers of lower quality choose not to participate. Rao et al. show that a reward scheme that informs workers that they will be paid if their outputs are similar to the majority motivates workers to reflect more on what other workers would choose; this generates a higher percentage of correct outputs, and the obtained outputs are closer to the correct one. Huang et al. analyze three schemes in a group of workers: (i) individual, in which the reward depends only on the worker's performance; (ii) teamwork, in which workers in the group are paid similarly, based on their average performance; and (iii) competition, in which workers in the group are opponents and are paid based on their performance differences. The effectiveness of these schemes tends to vary with other application settings, such as social transparency.
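A minimal sketch of an agreement-based reward scheme in the spirit of the studies above follows; the reward and penalty values are arbitrary illustrations, and the scheme is a simplification, not the exact mechanism analyzed by Witkowski et al. or Rao et al.

```python
from collections import Counter

def settle_rewards(outputs_by_worker, reward=0.05, penalty=0.0):
    """Pay a worker only if her output agrees with the majority output;
    optionally apply a negative payment on disagreement."""
    majority, _ = Counter(outputs_by_worker.values()).most_common(1)[0]
    return {worker: (reward if output == majority else -penalty)
            for worker, output in outputs_by_worker.items()}

payments = settle_rewards({"w1": "A", "w2": "A", "w3": "B"},
                          reward=0.05, penalty=0.02)
# -> {'w1': 0.05, 'w2': 0.05, 'w3': -0.02}
```

Announcing such a rule up front is what, according to Rao et al., nudges workers to reason about the output other workers would provide.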
*Implications for systems design.* We extract two major guidelines from this discussion: given that the effect of incentives on intrinsic and extrinsic motivation appears to differ across types of tasks, designers must test how combinations of such incentives contribute to the desired quality in their specific context; and task performance can be improved by using incentives that motivate workers to reflect more on what output other workers would provide for the same task.

We broadly divide task assignment strategies into scheduling, job board, and recommendation. _Task scheduling_ strategies try to optimize some QoS requirement by exploiting information about the affinity between tasks and workers. Scheduling strategies in the human computation literature have considered several human aspects. Some strategies consider workers' emotional states, allocating tasks that are appropriate to a worker's current mood. Heidari and Kearns take into account task difficulty and worker abilities. They analyze the case in which workers can decide between executing a task and forwarding it; when a worker decides to forward a task, it is scheduled to a more qualified worker. This generates a forwarding structure in task scheduling that improves output quality. Waterhouse proposes a strategy based on a probabilistic metric of mutual information between workers; the strategy tunes the assignment of tasks toward the workers that will increase the information gain. Another thread of scheduling strategies is inspired by the hierarchical structure of today's organizations, exploring individual differences. These consider working teams and roles, such as supervisors and workers: tasks are first assigned to supervisors, who assign them to workers in their team taking into account the qualifications and skills of each worker. Skill-based scheduling considers different worker qualification levels, where a worker's qualification level increases as she adequately performs more tasks. There are also approaches that use information and content liked by the worker on social networks to automatically match worker preferences to task requirements. A minimal sketch of a skill-based scheduler is given below.
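The following sketch illustrates the push style of assignment: each task goes to the idle worker with the highest score for the skill the task requires. The greedy policy and the data layout are illustrative assumptions; published schedulers optimize richer QoS objectives.

```python
def schedule(tasks, workers):
    """tasks: list of dicts with 'id' and required 'skill';
    workers: worker id -> dict of skill scores in [0, 1]."""
    assignment, busy = {}, set()
    for task in tasks:
        candidates = [(scores.get(task["skill"], 0.0), worker)
                      for worker, scores in workers.items() if worker not in busy]
        if not candidates:
            break                          # no idle worker left; remaining tasks wait
        _, best = max(candidates)          # highest skill score wins
        assignment[task["id"]] = best
        busy.add(best)
    return assignment

workers = {"ann": {"translation": 0.9}, "bob": {"translation": 0.4, "audio": 0.8}}
tasks = [{"id": "t1", "skill": "translation"}, {"id": "t2", "skill": "audio"}]
print(schedule(tasks, workers))            # {'t1': 'ann', 't2': 'bob'}
```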
_Job board_ strategies are used mainly in online labor market platforms, where tasks are usually made available to workers on boards together with their rewards. Job boards allow workers to choose tasks that fit their preferences; thus, task instructions must be clear and attractive to workers. Requesters define job parameters so as to address workers' interests and make their tasks attractive. For example, AutoMan adjusts task price and task allocation time to motivate more workers to try executing the tasks. By adjusting these parameters, AutoMan tries to optimize QoS requirements while addressing workers' time and financial incentives. Toomim et al. propose a method to characterize worker preferences for certain interfaces and tasks; this information is used in future task design.

_Task recommendation_ strategies recommend tasks to workers according to some affinity criterion. The platforms oDesk (odesk.com) and Elance (elance.com) are based on job boards, but they also make use of a recommendation system to inform workers about new jobs that match their skills. We are not aware of studies that evaluate the effectiveness of these functionalities on those platforms. Yi et al. propose a matrix completion approach to infer worker preferences in pairwise comparison tasks; the method may be useful for improving task recommendation strategies.

In job board and task recommendation approaches, a mechanism is also required that allows requesters to choose which of the candidate workers will perform the task, or which solution will be paid for. In research and practice, three strategies explore these dimensions: _auction_, in which the task is allocated to the worker who charges the lowest value (e.g., odesk.com); _challenge_, in which all workers perform the available tasks but only the best solution is paid for (e.g., topcoder.com); and _registration order_, in which the task is allocated to the first worker who signed up to run it (e.g., mturk.com). To the best of our knowledge, no study has been conducted to compare the performance of these approaches and to indicate in which situations each should be used.

*Implications for systems design.* Two basic guidelines to highlight in this context are: given that most platforms are based on job boards, and that in this environment the effectiveness of a task relies on its ability to gain the attention of suitable workers, requesters must provide task descriptions with information that allows workers to easily identify whether the task matches their skills, preferences, and interests; and requesters must avoid generating overly restrictive tasks, in order to take advantage of the diversity and larger numbers of workers that job boards and recommendation strategies give access to.

We focus on analyzing the dependency management strategies that take human aspects into account while ensuring temporal, serialization, visibility, and cooperation dependencies. Most studies address only temporal dependencies, in which a set of tasks must be performed in a particular order or in a synchronous way.

_Serialization_ dependency studies in human computation have focused mainly on applications with or without loops. Examples of human computation applications without loops are those that deal with planning activities. Such applications usually include a sequence of steps such as decomposition of the problem into small tasks, execution of each task, and aggregation of the partial task outputs to obtain the final output as the answer to the problem. This is the case of Turkomatic, CrowdForge, CrowdPlan, and the combination of creative design. Applications with loops, in turn, include some iterative process, as in find-fix-verify and iterative improvement.

_Visibility_ dependency is common in group work. It usually requires a shared environment that unobtrusively offers up-to-date group context and explicit notification of each user's actions when appropriate. Mao et al. and Zhang et al.
address visibility dependencies in human computation applications. In their studies, workers try to achieve a global goal; this goal can be, for example, a collaborative itinerary plan or a collaborative graph coloring. In these cases, workers can see the outputs generated for correlated tasks. This type of task is usually related to agreement or consensus, and the visibility decision may impact both worker behavior and the time required to obtain an output.

_Cooperation_ dependencies are also related to group work. Mobi and TurkServer allow one to implement applications that contain cooperative tasks. Zhang et al. show that a unified view of the task status allows workers to coordinate and communicate more effectively with one another, enabling them to view and build upon each other's ideas. Platforms such as MTurk keep workers invisible and unable to communicate with each other; Turkopticon is a tool that interrupts such invisibility, making it possible for workers to communicate among themselves.

*Implications for systems design.* The major guideline regarding dependency management is that designers must consider that some degree of visibility and communication between workers may be positive for application performance in terms of time and accuracy. It seems that workers should be allowed to see the status of tasks that are interdependent with their own, in order to synchronize execution with any global task constraint, and to communicate with the workers executing those interdependent tasks, in order to improve cooperation.

There is a range of output aggregation strategies in human computation, most of which are already discussed by Law and von Ahn and by Nguyen et al.; here we focus on the studies that account for human aspects. An example of an aggregation strategy for comparable outputs is majority vote, in which the output of the application is the most frequent task output. This strategy assumes that the majority of the workers assigned to any task are reliable, and it does not perform properly when the chance of error in task execution is high. Sheng et al. investigate the impact of the number of task executions on output accuracy and show that the quality of the output is improved by using additional workers only when worker accuracy is higher than 0.5. They propose a repeated-labeling technique that selects the data points for which application quality should be improved by the acquisition of multiple task outputs. Diverse studies have been devoted to aggregating a set of outputs into an accurate output by taking worker expertise and task characteristics into account. Whitehill et al. consider that output accuracy depends on the difficulty of the task and the expertise of the worker; they propose the generative model of labels, abilities, and difficulties (GLAD), which estimates these parameters using expectation-maximization (EM) and weights task outputs so that experts' outputs count more. Hovy et al. propose multi-annotator competence estimation (MACE), which uses EM to identify which annotators are trustworthy and uses this information to predict outputs. Wang et al. propose a recursive algorithm to improve the efficiency of computing EM models in these contexts. Dalvi et al. propose a technique, based on measuring the agreement between pairs of workers, to estimate worker reliabilities and improve output aggregation.
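To give the flavor of these EM-based approaches, here is a minimal sketch of a one-parameter-per-worker model for binary tasks. It is a deliberately simplified relative of GLAD and MACE (no task difficulty, uniform label prior), not an implementation of any of the cited algorithms: the E-step computes the posterior of each task's true label given the current worker accuracies, and the M-step re-estimates each accuracy as the expected fraction of that worker's votes that match.

```python
def em_aggregate(votes, n_iter=50):
    """votes[t] is a dict {worker: label in {0, 1}} for task t."""
    workers = sorted({w for v in votes for w in v})
    acc = {w: 0.8 for w in workers}            # optimistic initial accuracies
    post = [0.5] * len(votes)                  # P(true label of task t is 1)
    for _ in range(n_iter):
        for t, v in enumerate(votes):          # E-step
            like1 = like0 = 1.0
            for w, lab in v.items():
                like1 *= acc[w] if lab == 1 else 1 - acc[w]
                like0 *= acc[w] if lab == 0 else 1 - acc[w]
            post[t] = like1 / (like1 + like0)
        for w in workers:                      # M-step
            num = den = 1e-9
            for t, v in enumerate(votes):
                if w in v:
                    num += post[t] if v[w] == 1 else 1 - post[t]
                    den += 1
            acc[w] = num / den
    return [int(p > 0.5) for p in post], acc

votes = [{"w1": 1, "w2": 1, "w3": 0}, {"w1": 0, "w2": 0, "w3": 0}]
labels, accuracies = em_aggregate(votes)       # w3's estimated accuracy ends lowest
```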
Another output aggregation challenge arises with unstructured outputs, e.g., in open-ended and image annotation tasks. In this case, a way to find the best output is to apply a vote-on-the-best strategy, in which workers evaluate the quality of each output or choose which of them exhibits the highest quality. This exploits individual differences, given that some workers are better at identifying correct outputs than at producing them themselves. When the set of options is too large, it may be difficult for workers to choose the best item; an alternative in this case is to develop a second human computation application in which few items are compared in each task and the best item is chosen by tournament. Another peculiarity of unstructured outputs is that even poor outputs may be useful; for example, other workers can aggregate such poor outputs and generate a new, better output. The quality of the aggregation can also be improved by using estimates of the difficulty level of tasks and the skills of workers.

*Implications for systems design.* When developing output aggregation strategies, designers must weigh at least three parameters that impact the quality of the final output: the task's cognitive load; the number of different workers that provided task outputs, i.e., the redundancy degree; and the accuracy of each worker that provided the outputs. As in the literature, the value of each of these parameters can be obtained by statistical estimation, considering that the accuracy of the final output tends to be higher with more accurate estimation of these parameters.

The prevention of faults in task instructions can be done by using offline and/or online pilot tests. Offline tests are conducted with accessible people who can provide feedback on issues and improvements in the task instructions. Online tests, in turn, are run on a platform and are more realistic than offline tests; in this case, workers may not be accessible to provide feedback about the task instructions, but their outputs can be analyzed to identify problems. The prevention of undesired workers is usually done by using qualification tests. These consist in requiring the execution of a gold-standard test that certifies whether the worker has the skills and qualifications required to perform the application's tasks; only workers who perform accurately are considered qualified (a minimal sketch is given below). A downside of this approach is that it does not consider changes in worker behavior after the test is executed: malicious workers usually change their behavior over time. CrowdScape is a system that allows requesters to select workers based on both task output quality and workers' behavioral changes.
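A minimal sketch of such a gold-standard qualification test follows; the threshold and the minimum number of scored tasks are illustrative parameters, not values prescribed in the literature.

```python
def qualify(answers, gold, threshold=0.8, min_tasks=5):
    """answers: worker's task_id -> output; gold: task_id -> known correct output.
    Admit the worker only if her accuracy on the gold tasks reaches the threshold."""
    scored = [(task, out) for task, out in answers.items() if task in gold]
    if len(scored) < min_tasks:
        return False                    # not enough evidence to qualify the worker
    accuracy = sum(out == gold[task] for task, out in scored) / len(scored)
    return accuracy >= threshold
```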
Studies have also been devoted to fault tolerance, which consists of four phases: failure detection, damage confinement, failure recovery, and fault treatment. _Failure detection_ has been performed by using conformance voting, which allows one to detect poorly executed tasks, and timing, which allows one to detect worker evasion, i.e., the situation where a worker is assigned a task but gives up executing it without deallocating it. In conformance voting, one worker or a group of workers evaluates whether a task output is correct; when the output is not correct, the task must be re-executed by another worker. Timing, in turn, defines a maximum time that a task can remain allocated to a worker, within which the worker is expected to provide an output; if that time expires without an output being provided, the task is deallocated and made available to another worker.

_Damage confinement_ is usually achieved by using error recovery techniques in each task or workflow step. It prevents damage from propagating to the next workflow step. Such propagation occurs, for example, in workflow derailment problems, which arise when an error in task execution prevents the workflow from concluding.

_Failure recovery_ has been performed by using majority voting, alternative strategies, and human assessment. These strategies exploit human individual differences by using worker redundancy: if different and independent workers provide the same output for a task, this increases the confidence that the task is being performed in accordance with its instructions. In majority voting, several workers perform the same task in parallel and the most frequent output is considered correct. In alternative strategies, a worker executes the task and, if an error occurs, the task is executed again by another worker. In these redundancy-based strategies, the impact of the redundancy degree on output accuracy is highly dependent on the type of task, and increasing worker redundancy does not necessarily increase the confidence that the correct output will be obtained. Furthermore, the perception of redundancy by the workers may have a negative effect on their motivation and work quality: the more co-workers a worker perceives working on the same task, the lower her work quality. This occurs because workers become demotivated, thinking that their effort does not count for much. Finally, in human assessment strategies, the outputs generated by a worker are evaluated by others. This can be implemented in two ways: arbitration and peer review. In arbitration, two workers independently execute the task and a third worker evaluates their outputs and resolves disagreements. In peer review, the output provided by each worker is reviewed by another worker. Hansen et al. show that, for text transcription tasks, the peer review strategy is significantly more efficient, but not as accurate for certain tasks as the arbitration strategy.

_Fault treatment_ has been performed by fixing problems in task design and by eliminating, or reducing the reputation of, unskilled or malicious workers. For example, TopCoder maintains a historical track of the number of tasks each worker chose to execute but did not conclude; this track is used to estimate the probability that a worker chooses tasks and does not execute them. Ipeirotis et al. propose separating systematic errors from bias due to, for example, intraindividual variability such as distraction. This distinction also allows a better estimation of the accuracy and reputation of the worker; such estimates may be used to avoid assigning to a worker tasks for which she is not qualified or whose execution she will not complete. Another important aspect of fault treatment is providing workers with feedback about their work: it helps workers learn how to execute the task accurately (intraindividual change) and avoid errors due to lapses (intraindividual variability).
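As a concrete illustration of the timing mechanism described above, the sketch below leases a task to a worker for a bounded time and releases it for re-assignment once the lease expires; the timeout value and the in-memory bookkeeping are illustrative simplifications.

```python
import time

class TaskLease:
    """Timing-based failure detection: a task stays leased to a worker for at
    most `timeout` seconds; expired tasks are recovered by re-assignment."""
    def __init__(self, timeout=600):
        self.timeout = timeout
        self.leases = {}                       # task_id -> (worker, lease start)

    def assign(self, task_id, worker):
        self.leases[task_id] = (worker, time.monotonic())

    def expired(self):
        """Detect worker evasion: leases whose holder never delivered an output."""
        now = time.monotonic()
        return [task for task, (_, start) in self.leases.items()
                if now - start > self.timeout]

    def reassign(self, task_id, new_worker):
        self.assign(task_id, new_worker)       # failure recovery for the evaded task
```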
*Implications for systems design.* The three major guidelines extracted from this discussion are: designers must test the task worker interface and check workers' skills and reputation, to which end pilot tests and qualification tests can be applied; redundancy is the basis of fault tolerance strategies, but requesters must generate tasks that maximize the number of workers capable of executing them, increasing the potential redundancy of each task; and requesters must provide workers with assessment and feedback, in order to allow them to learn from the tasks they perform incorrectly.

In the last section, we analyzed the human computation literature and its implications for design in the light of our theoretical framework. Now we turn to challenges and perspectives in D&M strategies. Although our list is by no means exhaustive, it offers examples of topics in need of further work and of directions that appear promising. Table [tab1] synthesizes the contributions on the relationships between D&M strategies and human aspects identified in the last section. As shown, there are several relationships for which we could not find any study. This state of affairs indicates a large amount of research still to be conducted in mapping the impact of human aspects on D&M effectiveness. Two other issues that still require further understanding are adequate combinations of D&M strategies, and the impact of D&M strategies on workers' cognition and behavior.

[Table 1. Relationships between D&M strategy factors (columns: application composition, incentives and rewards, dependency management, task assignment, output aggregation, fault tolerance) and human aspects (rows: cognitive system, motivation, preferences, social behavior, emotion, individual differences, intraindividual variability, intraindividual changes).]

It is intuitive that one D&M strategy may impact the effectiveness of another. For example, by generating a fine-grained application composition to accommodate the human cognitive system, one may produce undesired effects: designing tasks that are too susceptible to cheating workers, which reduces the effectiveness of fault tolerance strategies; or generating a large number of tasks with too many dependencies among them, which may reduce parallelism in task execution and burden dependency management. More empirical research on how to adequately combine D&M strategies in distributed human computation is still required. Besides the requester's perspective, which tries to understand how to take advantage of human aspects to achieve QoS requirements, studies must also identify possible side effects of the strategies on workers' cognition and behavior. Two cognitive effects that may be relevant to consider are the _framing effect_, whereby workers may generate different outputs depending on how a task is presented, and the _Hawthorne effect_, whereby workers may alter their behavior when they know they are being observed. Two behavioral effects are collusion, an agreement between workers to act similarly (e.g., planning collusion against requesters who submit poorly designed tasks), and sabotage, in which workers change their behavior to take advantage of the system (e.g., inhibiting competitors in a "maximum observability" audition).
Also, there is room for studies focused on workers and on a fair relationship between workers and requesters.

_Application composition._ The main human aspect factors that have been addressed in application composition are the cognitive system and motivation/incentives. By taking such factors into account in the context of task execution, human computation application composition is clearly related to the disciplines of _ecological interface_ design and _goal setting_. Ecological interface principles are grounded in how the human cognitive system works and its effects on information understanding and processing; such principles may support the development of task designs that avoid cognitive overload and improve task execution effectiveness. Goal setting studies, in turn, may help to better define both task instructions and the way their outputs will be evaluated by the requester. Knowledge of these topics, and reasoning about their relationship to human computation, can help in the formulation of new strategies.

_Task assignment._ Studies on task assignment have mainly taken into account preferences and individual differences. Two other disciplines that take these aspects into account in task assignment are _person-job fit_ and _achievement motivation_. The domain of person-job fit research comprises task characteristics, worker characteristics, and required outcomes; it emphasizes the fit and matching of workers and tasks in the prediction of both worker and organizational outcomes. Achievement motivation is the motivation to demonstrate high rather than low ability. This motivation influences the tasks a human chooses to perform, i.e., her preferences: according to this concept, individuals select the tasks they expect will maximize their chances of demonstrating high ability and avoiding demonstrating low ability. These concepts may inspire task scheduling and task recommendation strategies in human computation.

_Dependency management._ Ensuring task dependencies while still extracting the greatest potential from a crowd of workers (optimizing QoS requirements) is one of the main challenges of dependency management strategies. Similar challenges have been addressed in at least two other disciplines: _work teams_ and _groupware_. Both disciplines focus on group behavior and performance. Work team studies usually focus on group work in an organization, not necessarily performed through a computer system; groupware is generally associated with small groups working collaboratively through a computer system. Experience of how human aspects are addressed in these disciplines may inspire solutions that consider these factors in human computation.

_Output aggregation._ Two important areas related to output aggregation are _judgment aggregation_ and _social choice theory_. Judgment aggregation is the subject of a growing body of work in economics, political science, philosophy, and related disciplines. It aims at aggregating consistent individual judgments on logically interconnected propositions into a collective judgment on those propositions; in such situations, majority voting cannot ensure an equally consistent collective conclusion. Social choice theory, in turn, is a theoretical framework for the analysis of combining individual preferences and interests to reach a collective decision or social welfare.
According to this theory, any choice for the entire group should reflect the desires and options of the individuals to the extent possible. The studies conducted in these disciplines seem closely related to output aggregation in human computation; a better mapping of their similarities and differences may help in the reuse and development of new output/judgment aggregation strategies.

_Fault prevention and tolerance._ Besides preventing and tolerating faults, one should also consider how to evaluate system QoS in the presence of human faults. For example, fault tolerance is mainly based on task redundancy, but defining the appropriate level of redundancy is challenging: maintaining a low level of redundancy may not recover failures, while maintaining a high level of redundancy can lead to a high financial cost, or to a large volunteer effort to run the entire application. This kind of study has been conducted in other disciplines such as _human aspects evaluation_ and _performability_. Human aspects evaluation is the assessment of the conformity between the performance of a worker and the desired performance. Performability, in turn, focuses on modeling and measuring system QoS degradation in the presence of faults. Experience with performability and human aspects evaluation may be useful for addressing QoS requirements in the presence of worker faults.

In this paper, we analyzed the design and management of distributed human computation applications. Our contribution is threefold: we integrated a set of theories into a theoretical framework for analyzing distributed human computation applications; using this framework, we analyzed the human computation literature, putting into perspective its results on how to leverage the human aspects of workers in D&M strategies so as to satisfy the QoS requirements of requesters; and we highlighted open challenges in human computation and discussed their relationship with other disciplines from a distributed systems viewpoint. Our framework builds on studies in different disciplines to discuss advances and perspectives on a variety of immediate practical needs in distributed human computation systems. Our literature analysis shows that D&M strategies have accounted for some human aspects in order to achieve QoS requirements; however, it also shows that there are still many unexplored aspects and open challenges. Inevitably, a better understanding of how humans behave in human computation systems, and a proper delimitation of the human aspects involved, is essential to overcome these challenges. We hope our study inspires both discussion and further research in this direction.

The authors declare that they have no competing interests. LP, FB, NA, and LS jointly designed the theoretical framework used to contextualize and discuss the literature in this survey. LP drafted most of the manuscript and conducted the bulk of the review of the literature. FB, NA, and LS did a smaller portion of the review of the literature and revised the manuscript in several iterations. All authors read and approved the final manuscript. Lesandro Ponciano thanks the support provided by CAPES/Brazil in all aspects of this research. Francisco Brasileiro acknowledges the support received from CNPq/Brazil in all aspects of this research.

Quinn AJ, Bederson BB (2011) Human computation: a survey and taxonomy of a growing field.
In: Proceedings of the SIGCHI conference on human factors in computing systems (CHI). ACM, New York, pp 1403-1412
Yuen M-C, Chen L-J, King I (2009) A survey of human computation systems. In: Proceedings of the international conference on computational science and engineering (CSE), vol 4. IEEE Computer Society, Washington, DC, pp 723-728
Yuen M-C, King I, Leung K-S (2011) A survey of crowdsourcing systems. In: Proceedings of the international conference on privacy, security, risk and trust (PASSAT). IEEE Computer Society, Washington, DC, pp 766-773
Archak N (2010) Money, glory and cheap talk: analyzing strategic behavior of contestants in simultaneous crowdsourcing contests on TopCoder.com. In: Proceedings of the international World Wide Web conference (WWW). ACM, New York, pp 21-30
Ross J, Irani L, Silberman M, Zaldivar A, Tomlinson B (2010) Who are the crowdworkers?: shifting demographics in Mechanical Turk. In: Proceedings of the ACM SIGCHI conference on human factors in computing systems, extended abstracts (EA). ACM, New York, pp 2863-2872
Ball NM (2013) CANFAR+Skytree: mining massive datasets as an essential part of the future of astronomy. In: American Astronomical Society meeting abstracts. American Astronomical Society, Washington, DC
Kittur A, Nickerson JV, Bernstein M, Gerber E, Shaw A, Zimmerman J, Lease M, Horton J (2013) The future of crowd work. In: Proceedings of the ACM conference on computer-supported cooperative work and social computing (CSCW). ACM, New York, pp 1301-1318
Lintott CJ, Schawinski K, Slosar A, Land K, Bamford S, Thomas D, Raddick MJ, Nichol RC, Szalay A, Andreescu D, Murray P, Vandenberg J (2008) Galaxy Zoo: morphologies derived from visual inspection of galaxies from the Sloan Digital Sky Survey. Mon Not R Astron Soc 389:1179-1189
de Araújo RM (2013) 99designs: an analysis of creative competition in crowdsourced design. In: Proceedings of the first AAAI conference on human computation and crowdsourcing (HCOMP). AAAI, Palo Alto, pp 17-24
Cirne W, Paranhos D, Costa L, Santos-Neto E, Brasileiro F, Sauvé J, Silva FA, Barros CO, Silveira C (2003) Running bag-of-tasks applications on computational grids: the MyGrid approach. In: Proceedings of the international conference on parallel processing. IEEE Computer Society, Washington, DC, pp 407-416
Little G, Chilton LB, Goldman M, Miller RC (2010) TurKit: human computation algorithms on Mechanical Turk. In: Proceedings of the ACM symposium on user interface software and technology (UIST). ACM, New York, pp 57-66
Khanna S, Ratan A, Davis J, Thies W (2010) Evaluating and improving the usability of Mechanical Turk for low-income workers in India. In: Proceedings of the ACM annual symposium on computing for development (ACM DEV). ACM, New York, pp 1-10
Kulkarni A, Can M, Hartmann B (2012) Collaboratively crowdsourcing workflows with Turkomatic. In: Proceedings of the ACM conference on computer-supported cooperative work and social computing (CSCW). ACM, New York, pp 1003-1012
Sun Y-A, Roy S, Little G (2011) Beyond independent agreement: a tournament selection approach for quality assurance of human computation tasks. In: Proceedings of the AAAI workshop on human computation (HCOMP). AAAI, Palo Alto, pp 113-118
Huang S-W, Fu W-T (2013) Enhancing reliability using peer consistency evaluation in human computation.
In: Proceedings of the ACM conference on computer-supported cooperative work and social computing (CSCW). ACM, New York, pp 639-648
Bernstein MS, Little G, Miller RC, Hartmann B, Ackerman MS, Karger DR, Crowell D, Panovich K (2010) Soylent: a word processor with a crowd inside. In: Proceedings of the ACM symposium on user interface software and technology (UIST). ACM, New York, pp 313-322
Chilton LB, Little G, Edge D, Weld DS, Landay JA (2013) Cascade: crowdsourcing taxonomy creation. In: Proceedings of the SIGCHI conference on human factors in computing systems (CHI). ACM, New York, pp 1999-2008
Bragg J, Mausam, Weld DS (2013) Crowdsourcing multi-label classification for taxonomy creation. In: Proceedings of the first AAAI conference on human computation and crowdsourcing (HCOMP). AAAI, Palo Alto, pp 25-33
Lin CH, Mausam, Weld DS (2012) Dynamically switching between synergistic workflows for crowdsourcing. In: Proceedings of the AAAI conference on artificial intelligence (AAAI). AAAI, Palo Alto, pp 87-93
Bozzon A, Brambilla M, Ceri S, Mauri A (2013) Reactive crowdsourcing. In: Proceedings of the international World Wide Web conference (WWW). International World Wide Web Conferences Steering Committee (IW3C2), Geneva, pp 153-164
Singla A, Krause A (2013) Truthful incentives in crowdsourcing tasks using regret minimization mechanisms. In: Proceedings of the international World Wide Web conference (WWW). International World Wide Web Conferences Steering Committee (IW3C2), Geneva, pp 1167-1177
Singer Y, Mittal M (2013) Pricing mechanisms for crowdsourcing markets. In: Proceedings of the international World Wide Web conference (WWW). International World Wide Web Conferences Steering Committee (IW3C2), Geneva, pp 1157-1166
Rogstadius J, Kostakos V, Kittur A, Smus B, Laredo J, Vukovic M (2011) An assessment of intrinsic and extrinsic motivation on task performance in crowdsourcing markets. In: Proceedings of the international conference on weblogs and social media (ICWSM). AAAI, Palo Alto, pp 321-328
Chandler D, Horton JJ (2011) Labor allocation in paid crowdsourcing: experimental evidence on positioning, nudges and prices. In: Proceedings of the AAAI workshop on human computation (HCOMP). AAAI, Palo Alto, pp 14-19
Shaw AD, Horton JJ, Chen DL (2011) Designing incentives for inexpert human raters. In: Proceedings of the ACM conference on computer-supported cooperative work and social computing (CSCW). ACM, New York, pp 275-284
Witkowski J, Bachrach Y, Key P, Parkes DC (2013) Dwelling on the negative: incentivizing effort in peer prediction. In: Proceedings of the first AAAI conference on human computation and crowdsourcing (HCOMP). AAAI, Palo Alto, pp 190-197
Rao H, Huang S-W, Fu W-T (2013) What will others choose? How a majority vote reward scheme can improve human computation in a spatial location identification task. In: Proceedings of the first AAAI conference on human computation and crowdsourcing (HCOMP). AAAI, Palo Alto, pp 130-137
Huang S-W, Fu W-T (2013) Don't hide in the crowd!: increasing social transparency between peer workers improves crowdsourcing outcomes. In: Proceedings of the SIGCHI conference on human factors in computing systems (CHI). ACM, New York, pp 621-630
Waterhouse TP (2013) Pay by the bit: an information-theoretic metric for collective human judgment.
In: Proceedings of the ACM conference on computer-supported cooperative work and social computing (CSCW). ACM, New York, pp 623-638
Noronha J, Hysen E, Zhang H, Gajos KZ (2011) PlateMate: crowdsourcing nutritional analysis from food photographs. In: Proceedings of the ACM symposium on user interface software and technology (UIST). ACM, New York, pp 1-12
Difallah DE, Demartini G, Cudré-Mauroux P (2013) Pick-a-crowd: tell me what you like, and I'll tell you what to do. In: Proceedings of the international World Wide Web conference (WWW). International World Wide Web Conferences Steering Committee (IW3C2), Geneva, pp 367-377
Lee U, Kim J, Yi E, Sung J, Gerla M (2013) Analyzing crowd workers in mobile pay-for-answer QA. In: Proceedings of the SIGCHI conference on human factors in computing systems (CHI). ACM, New York, pp 533-542
Jacques JT, Kristensson PO (2013) Crowdsourcing a HIT: measuring workers' pre-task interactions on microtask markets. In: Proceedings of the first AAAI conference on human computation and crowdsourcing (HCOMP). AAAI, Palo Alto, pp 86-93
Toomim M, Kriplean T, Pörtner C, Landay J (2011) Utility of human-computer interactions: toward a science of preference measurement. In: Proceedings of the SIGCHI conference on human factors in computing systems (CHI). ACM, New York, pp 2275-2284
Yi J, Jin R, Jain S, Jain AK (2013) Inferring users' preferences from crowdsourced pairwise comparisons: a matrix completion approach. In: Proceedings of the first AAAI conference on human computation and crowdsourcing (HCOMP). AAAI, Palo Alto, pp 207-215
Chen JJ, Menezes NJ, Bradley AD, North T (2011) Opportunities for crowdsourcing research on Amazon Mechanical Turk. In: Proceedings of the CHI workshop on crowdsourcing and human computation. ACM, New York, pp 1-4
Mao A, Parkes DC, Procaccia AD, Zhang H (2011) Human computation and multiagent systems: an algorithmic perspective. In: Proceedings of the AAAI conference on artificial intelligence (AAAI). AAAI, Palo Alto, pp 1-6
Law E, Zhang H (2011) Towards large-scale collaborative planning: answering high-level search queries using human computation. In: Proceedings of the AAAI conference on artificial intelligence (AAAI). AAAI, Palo Alto, pp 1210-1215
Zhang H, Law E, Miller R, Gajos K, Parkes D, Horvitz E (2012) Human computation tasks with global constraints. In: Proceedings of the SIGCHI conference on human factors in computing systems (CHI). ACM, New York, pp 217-226
Irani LC, Silberman MS (2013) Turkopticon: interrupting worker invisibility in Amazon Mechanical Turk. In: Proceedings of the SIGCHI conference on human factors in computing systems (CHI). ACM, New York, pp 611-620
Law E, von Ahn L (2011) Human computation: an integrated approach to learning from the crowd. In: Synthesis lectures on artificial intelligence and machine learning series. Morgan & Claypool, San Rafael, CA, United States
Nguyen QVH, Nguyen Thanh T, Lam Ngoc T, Aberer K (2013) An evaluation of aggregation techniques in crowdsourcing. In: Proceedings of the international conference on web information systems engineering (WISE). Springer, New York, pp 1-15
Sheng VS, Provost F, Ipeirotis PG (2008) Get another label? Improving data quality and data mining using multiple, noisy labelers.
In: Proceedings of the ACM SIGKDD international conference on knowledge discovery and data mining (KDD). ACM, New York, pp 614-622
Whitehill J, Ruvolo P, Wu T-F, Bergsma J, Movellan J (2009) Whose vote should count more: optimal integration of labels from labelers of unknown expertise. In: Advances in neural information processing systems. Curran Associates, Inc., Red Hook, pp 2035-2043
Hovy D, Berg-Kirkpatrick T, Vaswani A, Hovy E (2013) Learning whom to trust with MACE. In: Proceedings of the conference of the North American chapter of the Association for Computational Linguistics, human language technologies (NAACL-HLT). Association for Computational Linguistics, Stroudsburg, pp 1120-1130
Wang D, Abdelzaher T, Kaplan L, Aggarwal CC (2013) Recursive fact-finding: a streaming approach to truth estimation in crowdsourcing applications. In: Proceedings of the international conference on distributed computing systems (ICDCS). IEEE Computer Society, Washington, DC, pp 530-539
Dalvi N, Dasgupta A, Kumar R, Rastogi V (2013) Aggregating crowdsourced binary ratings. In: Proceedings of the international World Wide Web conference (WWW). International World Wide Web Conferences Steering Committee (IW3C2), Geneva, pp 285-294
Salek M, Bachrach Y, Key P (2013) Hotspotting: a probabilistic graphical model for image object localization through crowdsourcing. In: Proceedings of the AAAI conference on artificial intelligence (AAAI). AAAI, Palo Alto, pp 1156-1162
Vuurens J, de Vries AP, Eickhoff C (2011) How much spam can you take? An analysis of crowdsourcing results to increase accuracy. In: Proceedings of the ACM SIGIR workshop on crowdsourcing for information retrieval (CIR). ACM, New York, pp 48-55
Rzeszotarski J, Kittur A (2012) CrowdScape: interactively visualizing user behavior and output. In: Proceedings of the ACM symposium on user interface software and technology (UIST). ACM, New York, pp 55-62
Amir O, Shahar Y, Gal Y, Ilani L (2013) On the verification complexity of group decision-making tasks. In: Proceedings of the first AAAI conference on human computation and crowdsourcing (HCOMP). AAAI, Palo Alto, pp 2-8
Kinnaird P, Dabbish L, Kiesler S, Faste H (2013) Co-worker transparency in a microtask marketplace. In: Proceedings of the ACM conference on computer-supported cooperative work and social computing (CSCW). ACM, New York, pp 1285-1290
Hansen DL, Schone PJ, Corey D, Reid M, Gehring J (2013) Quality control mechanisms for crowdsourcing: peer review, arbitration, and expertise at FamilySearch indexing. In: Proceedings of the ACM conference on computer-supported cooperative work and social computing (CSCW). ACM, New York, pp 649-660
Dow S, Kulkarni A, Klemmer S, Hartmann B (2012) Shepherding the crowd yields better work. In: Proceedings of the ACM conference on computer-supported cooperative work and social computing (CSCW). ACM, New York, pp 1013-1022
Karsenty A, Beaudouin-Lafon M (1993) An algorithm for distributed groupware applications. In: Proceedings of the international conference on distributed computing systems (ICDCS). IEEE Computer Society, New York, pp 195-202
|
A human computation system can be viewed as a distributed system in which the processors are humans, called workers. Such systems harness the cognitive power of a group of workers connected to the Internet to execute relatively simple tasks whose solutions, once grouped, solve a problem that systems equipped only with machines could not solve satisfactorily. Examples of such systems are Amazon Mechanical Turk and the Zooniverse platform. A human computation application comprises a group of tasks, each of which can be performed by one worker; tasks may have dependencies among each other. In this study, we propose a theoretical framework for analyzing this type of application from a distributed systems point of view. Our framework is established on three dimensions that represent different perspectives from which human computation applications can be approached: quality-of-service requirements, design and management strategies, and human aspects. Using this framework, we review human computation from the perspective of programmers seeking to improve the design of human computation applications and of managers seeking to increase the effectiveness of human computation infrastructures in running such applications. In doing so, besides integrating and organizing what has been done in this direction, we also put into perspective the fact that the human aspects of the workers in such systems introduce new challenges in terms of, for example, task assignment, dependency management, and fault prevention and tolerance. We discuss how these challenges relate to distributed systems and to other areas of knowledge. Journal of Internet Services and Applications 2014, 5:10, Springer London, ISSN: 1867-4828. http://dx.doi.org/10.1186/s13174-014-0010-4
|
One important aspect of the study of dynamical systems is the effect of noise on the underlying deterministic dynamics. Although one might expect the deterministic dynamics to be only slightly perturbed in the presence of small noise, there are now many examples where noise causes a dramatic, measurable change in behavior, such as noise-induced switching between attractors in continuous systems and noise-induced extinction in finite-size systems. In systems transitioning between coexisting stable states, much research has been done, primarily because switching can now be investigated for a large variety of well-controlled micro- and mesoscopic systems, such as trapped electrons and atoms, Josephson junctions, and nano- and micro-mechanical oscillators. In these systems, the observed fluctuations are usually due to thermal or externally applied noise. However, as systems become smaller, an increasingly important role may also be played by non-Gaussian noise. It may come, for example, from one or a few two-state fluctuators hopping at random between the states, in which case the noise may often be described as telegraph noise; it may also be induced by Poisson noise.

In finite-size populations or systems, extinction occurs in discrete, finite populations undergoing stochastic effects due to random transitions or perturbations. The origins of the stochasticity may be internal to the system or may arise from the external environment, and in most cases it is non-Gaussian. Extinction depends on the nature and strength of the noise, the outbreak amplitude, and the seasonal phase of occurrence. For large populations, the intensity of internal population noise is generally small. However, a rare, large fluctuation can occur with non-zero probability, and the system may be able to reach the extinct state. Since the extinct state is absorbing due to effective stochastic forces, eventual extinction is guaranteed when there is no source of reintroduction. Models of finite populations that include extinction processes are effectively described using the master equation formalism, and predict the probabilities of rare events. For many problems involving extinction in large populations, if the probability distribution of the population is quasi-stationary, the probability of extinction decreases exponentially with increasing population size, and the exponent in this function scales as a deterministic quantity called the action. It can be shown that a trajectory that brings the system to extinction is very likely to lie along a most probable path, called the optimal path. It is a major property that a deterministic quantity such as the action can predict the probability of extinction, which is inherently a stochastic process; the same formulation also applies to continuous systems driven by noise. Locating the optimal path is important since the quantity of interest, whether it is the switching or the extinction rate, depends on the probability to traverse this path. Therefore, a stochastic control strategy based on the switching or extinction rates can be determined through its effect on the optimal path.
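As a concrete illustration of noise-induced switching as a rare event, the sketch below integrates the overdamped double-well system $\dot{x} = x - x^3 + \sqrt{2D}\,\xi(t)$ with the Euler-Maruyama scheme; the double-well drift and all parameter values are illustrative choices and do not come from the systems discussed in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama(f, x0, D, dt, n_steps):
    """Integrate dx = f(x) dt + sqrt(2 D) dW and return the sample path."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    noise_amp = np.sqrt(2.0 * D * dt)
    for k in range(n_steps):
        x[k + 1] = x[k] + f(x[k]) * dt + noise_amp * rng.standard_normal()
    return x

f = lambda x: x - x**3                 # attractors at x = -1, +1; saddle at x = 0
path = euler_maruyama(f, x0=-1.0, D=0.05, dt=1e-3, n_steps=500_000)
crossings = np.sum(np.diff(np.sign(path)) != 0)   # crude count of saddle crossings
```

For small D these crossings become exponentially rare, which is exactly the regime in which the optimal-path machinery below applies.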
in the case of continuous stochastic models, the dimension of the hamiltonian system is twice that of the original stochastic problem. the additional dimensions are conjugate momenta, and they typically represent the physical force of the noise that induces escape from a basin of attraction, whether to switch or to go extinct. finally, due to the symplectic structure of the resulting hamiltonian system, it can be shown that both attractors and saddles of the original system become saddles of the hamiltonian system.

one of the main obstacles to finding the optimal path is that it is an inherently unstable object. that is, if one starts near the path described by the hamiltonian system, then after a short time the dynamics leaves the neighborhood of the path. in addition, although the path may be hyperbolic near the saddle points, it may not be hyperbolic along the rest of the path. solving such problems using shooting methods for simple epidemic models, or mixed shooting using forward and backward iteration, will in general be inadequate to handle even the simplest unstable paths in higher dimensions. therefore, the goal of this paper is to exemplify a robust numerical method for computing the optimal path, using a general, accurate discrete formulation applied to the hamiltonian two-point boundary value problem.

the method we employ here to compute optimal paths is similar to the generalized minimum action method (gmam), which is a blend of the string method and the minimum action method; both are iterative methods that globally minimize the action along the path. these techniques differ from ours primarily in that our formulation of the problem allows a direct, fully explicit iterative scheme, while the gmam, in particular, employs a semi-implicit scheme. the numerical scheme presented here should provide an easy-to-employ alternative to the methods discussed above for stochastic optimization problems that are formulated as hamiltonian two-point boundary value problems.

the paper is organized as follows. we first briefly present the general sde problem and the formulation of the corresponding deterministic hamiltonian system, treating the switching as a rare event. we then present the details of the numerical approximation technique we will use to find the path that maximizes the probability of switching. using this technique, we demonstrate finding the optimal path from a stable focus to the unstable saddle for the unforced duffing equation, then compute optimal extinction pathways for a simple epidemic model, and finally adapt our method to find optimal transition paths in stochastic delay differential systems.

consider a general stochastic differential equation of the form , where represents the physical quantity in state space, and the matrix is given by , where the s are general nonlinear functions. we suppose the noise is a vector having a gaussian distribution with intensity and independent components. it is characterized by its probability density functional, whose exponent is given by $\frac{1}{2}\int dt\,dt'\,{\bm \xi}(t)\,{\bm \xi}(t')$. we wish to determine the path with the maximum probability of traveling from the initial state to the final state, where the initial and final states are equilibria of the noise-free (i.e.
) version of eq. [general:ode] given by . these states typically characterize a generic problem of study in stochastic systems, such as switching between attractors, escape from a basin of attraction, or extinction of a population. we assume the noise intensity is sufficiently small, so that in our analysis sample paths will limit on an optimal path as the intensity tends to zero. we also remark that the noise is formally the time derivative of a brownian motion, sometimes referred to as white noise.

for sufficiently small noise intensity, and examining the tail of the distribution for a large fluctuation (which is assumed to be a rare event), the probability of observing such a large fluctuation scales exponentially (eq. [rare:event]), with an exponent given by the integral in eq. [exponent], where the lagrange multipliers also correspond to the conjugate momenta of the equivalent hamilton-jacobi formulation of this problem. the exponent of eq. [rare:event] is called the action, and it corresponds to the minimizer of the action in the hamilton-jacobi formulation, which occurs along the optimal path. this path minimizes the integral of eq. [exponent] and is found by setting the variations along the path to zero. the resulting equations of motion for the states and lagrange multipliers form a hamiltonian system; here $\left[\frac{\partial \bm{g}}{\partial \bm{x}}(\bm{x})\,{\boldsymbol{p}}\,{\boldsymbol{p}}\right]_i=\left[\frac{\partial \bm{g}}{\partial \bm{x}}(\bm{x})\right]_{ijk}[{\boldsymbol{p}}]_j[{\boldsymbol{p}}]_k$. similarly, if , then , which implies that is an eigenvalue of with eigenvector .

numerically, the infinite time interval is truncated to a finite computational domain, which, unless otherwise specified, is picked large enough that the steady states are obtained at the endpoints up to machine precision for double-precision numbers, and this domain is divided into equal segments. in practice, however, a simple uniform step size is not always the best choice, since the optimal path tends to stay very near the stable points throughout most of the domain and sometimes makes a relatively sharp transition near the center of the domain. in this case, it is helpful to use a nonuniform grid that resolves the sharp transition region with a fine mesh and uses a coarse mesh near the edges, where the solution is mostly flat. thus, for a nonuniform time step, yielding the time series and corresponding function values, the derivative is approximated by a finite-difference operator on the nonuniform grid. at this point, we can write the generic system of nonlinear algebraic equations, eq. [discrete:equations], and solve this system using a general newton's method for nonlinear systems of equations. to properly apply newton's method here, let the extended vector contain the j-th newton iterate (recalling that the unknowns are defined on the time series), with the initial guess as the starting iterate, and let the residual function be defined by eq. [discrete:equations] acting on this vector. to find the zeros of this function, we employ a newton scheme: a new newton iterate is obtained by solving a linear system involving the jacobian, using any one of a variety of methods, such as lu decomposition or the generalized minimal residual method (gmres) with appropriate preconditioners. throughout this paper we will use lu decomposition with partial pivots, optimized for a sparse linear system.
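as a concrete illustration of the scheme just described, the following is a minimal python sketch of the discrete boundary value solve: a finite-difference derivative operator on a possibly nonuniform grid, the residual of eq. [discrete:equations] with pinned endpoints, and a newton iteration whose jacobian is formed by central differences. the names (`rhs`, `iamm_solve`) and the dense linear solve are illustrative assumptions, not the authors' code.

```python
# minimal iamm sketch: rhs(z) is the hamiltonian vector field acting on the
# stacked unknowns z of shape (n, dim), returning an array of the same shape.
import numpy as np

def derivative_matrix(t):
    """first-derivative operator on a (possibly nonuniform) grid: second-order
    three-point stencils in the interior, one-sided differences at the ends."""
    n = len(t)
    d = np.zeros((n, n))
    for i in range(1, n - 1):
        h1, h2 = t[i] - t[i - 1], t[i + 1] - t[i]
        d[i, i - 1] = -h2 / (h1 * (h1 + h2))
        d[i, i] = (h2 - h1) / (h1 * h2)
        d[i, i + 1] = h1 / (h2 * (h1 + h2))
    d[0, :2] = [-1.0 / (t[1] - t[0]), 1.0 / (t[1] - t[0])]
    d[-1, -2:] = [-1.0 / (t[-1] - t[-2]), 1.0 / (t[-1] - t[-2])]
    return d

def iamm_solve(rhs, z0, t, tol=1e-10, max_iter=50):
    """drive d @ z - rhs(z) to zero by newton iteration; the first and last
    rows of the initial guess z0 hold the fixed (steady-state) endpoints."""
    d = derivative_matrix(t)
    z = z0.copy()
    n, dim = z.shape

    def residual(zflat):
        zz = zflat.reshape(n, dim)
        r = d @ zz - rhs(zz)
        r[0], r[-1] = zz[0] - z0[0], zz[-1] - z0[-1]   # pin boundary values
        return r.ravel()

    for _ in range(max_iter):
        f = residual(z.ravel())
        if np.linalg.norm(f, np.inf) < tol:
            break
        # jacobian approximated by central differences, as in the text
        h, zflat = 1e-6, z.ravel()
        jac = np.zeros((n * dim, n * dim))
        for k in range(n * dim):
            e = np.zeros(n * dim); e[k] = h
            jac[:, k] = (residual(zflat + e) - residual(zflat - e)) / (2 * h)
        z = (zflat - np.linalg.solve(jac, f)).reshape(n, dim)
    return z
```

a sparse factorization, as the text suggests, would replace the dense `np.linalg.solve` for larger grids; the dense solve keeps the sketch self-contained.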
here the jacobian is computed approximately using a central difference scheme. formally, this method is second order with respect to the grid spacing.

the initial guess for this algorithm is constructed from the knowledge that the optimal path spends most of its time near the stable equilibria and has a brief but sometimes sharp transition between the two states. one choice that has worked in practice is to use functions like , with (where the parameter adjusts the sharpness of the jump), which have horizontal asymptotes at the appropriate critical values. usually, though not always, is set so that for , . note that the zero-energy surface constraint is not imposed. rather, the initial guess starts out near (at ) and (at ), which lie asymptotically close to the zero-energy surface, and the final solution will therefore have to lie on this surface, since such a solution is time invariant (i.e., ). at each iterate, both the residual error and the hamiltonian are checked at each point, and both must reach a desired tolerance in the norm before the procedure is completed. once the optimal path is computed, the action (i.e., the exponent) along the optimal path may be obtained with a simple integral, eq. [action:integral].

exhaustive convergence tests on the residual error for a variety of test problems, on both uniform and nonuniform grids, have demonstrated the second-order convergence of this method. we have noted some dependency on the initial guess in terms of the overall speed of convergence (or divergence for a particularly bad guess). the method generally produces a unique solution (up to possibly a horizontal shift in the time series, seen in a few examples, which does not affect the path integrals of interest). this method has been reliable over a wide parameter regime for each of the test problems, with its limitations discussed below on a case-by-case basis. this method, which we will henceforth refer to as the iterative action minimizing method (iamm), has several distinct advantages over other methods, the foremost of which is straightforward scalability to higher dimensions. for very high dimensional problems, the systems will eventually become too large to treat easily with a single processor, but this algorithm has proven efficient for problems of up to six dimensions on a single processor. further, this method lends itself to infinite-dimensional problems, such as time-delay stochastic differential equations, as will be demonstrated below.

we demonstrate the numerical techniques by examining several bistable dynamical systems. using the methods discussed above, we explicitly approximate the optimal path between the two states and then numerically integrate along the path directly to compute the action. one of the standard nonlinear dynamical systems that exhibits bistability is duffing's equation. this equation is used to model certain types of nonlinear damped oscillators, and here we consider the singularly perturbed and unforced version, where and control the size and nonlinear response of the restoring force, while controls the friction or damping of the system. the terms and are uncorrelated white noise sources applied to the acceleration and velocity, respectively. when , the fast and slow manifolds of the perturbation can be identified, while gives the unconstrained case.
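for reference, a hedged sketch of the zero-noise duffing vector field and its critical points follows; the parameter names alpha, beta, gamma, and eps stand in for the restoring-force and damping constants described above, and the sign conventions are an assumption consistent with a saddle at the origin and two stable foci.

```python
# deterministic drift of the unforced duffing oscillator, written as a
# first-order system; parameter names are illustrative assumptions.
import numpy as np

def duffing_field(state, alpha=1.0, beta=1.0, gamma=0.5, eps=1.0):
    """x' = v,  eps * v' = -gamma * v + alpha * x - beta * x**3."""
    x, v = state
    return np.array([v, (-gamma * v + alpha * x - beta * x**3) / eps])

# the three zero-noise critical points: a saddle at the origin and the
# centers of the two stable foci at (+/- sqrt(alpha / beta), 0)
alpha, beta = 1.0, 1.0
x_star = np.sqrt(alpha / beta)
fixed_points = [(0.0, 0.0), (x_star, 0.0), (-x_star, 0.0)]
```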
rescaling time, applying eq. [hamiltonian], and following the methodology above, we can write the system in the following general form, with corresponding equations of motion. we will restrict our focus to the additive white noise case, . note that the hamiltonian system admits three known steady states, and , all of which are saddle points. these steady states correspond to the zero-noise critical points of eq. [2dduff]: , a saddle point, and , the centers of the stable foci. the path from to can be found deterministically when , as any solution perturbed from the saddle point will move along the solution curves and end up at either stable focus. a more interesting case is the optimal path from one of the focus points to the saddle point, which requires non-trivial momentum. such momenta model the small noise effects that organize to force the trajectory across the basin of attraction, thus escaping from one attractor to the other.

figure [duff_bapts] shows both the deterministic and optimal paths as computed using the iamm developed above. for the deterministic path, the algorithm correctly predicts that the noise will be zero along the path and that the action will be zero, as the probability of going from to is one, and thus not a rare event. on the other hand, the path from to involves nontrivial action, and the effect of the noise along the optimal path is shown. this path lies on the zero-energy surface and maximizes the probability of traveling from to for arbitrarily small noise intensities, . figure [duff_converge] shows the residual error at each iterate used to generate the data for figure [duff_bapts], until the convergence criterion is reached.

when , we can derive an analytical formulation for the action by performing a center manifold analysis on eq. [duffham], following the work of . here the center manifold is given by , and we approximate it as . by substituting equation [centermani] into equation [2dduff] and equating like powers of , we arrive at a one-dimensional form of equation [2dduff] for the lowest-order terms involving ; here the contribution of the uncorrelated noise terms is contained in a single noise source. the hamiltonian form of [1dduff] is . we can find the nontrivial relationship ( ) between and on the zero-energy surface directly, , and integrate along this path to predict the action along the optimal path. since one may be interested in how the action scales relative to the distance between the two critical points, we substitute the values and to better illustrate the scaling with respect to . thus, with this substitution, we seek the action from to . from equation [action:integral], this is a simple integral. thus, we can predict, for example, that the action from to should scale like the square of near the center manifold. indeed, varying the parameter for the four-dimensional system, eq. [4dduff], yields the same order of scaling, as seen below in fig. [duff_center_scale]. we also consider the scaling with respect to the damping parameter , and again, near the center manifold, the predictions are borne out by integrating along the optimal path.
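the quadrature in eq. [action:integral] is straightforward once a discrete path is available. a minimal sketch, assuming the path is stored as arrays x and p of shape (n, d) on a possibly nonuniform time grid t, is:

```python
# trapezoidal estimate of the action s = \int p . (dx/dt) dt along a path
import numpy as np

def action_along_path(t, x, p):
    xdot = np.gradient(x, t, axis=0)        # velocities on the nonuniform grid
    integrand = np.sum(p * xdot, axis=1)    # p . xdot at each grid point
    dt = np.diff(t)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dt))
```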
indeed, the scaling for the action predicted from the lowest-order terms in the center manifold analysis seems to persist in the two-dimensional model. since it is not clear whether the same relationship will hold further away from the center manifold, we check the scaling with respect to and when in eq. [4dduff], in figure [duff_scale]. interestingly, the leading-order scaling of the action with respect to is still , just as it was near the center manifold. meanwhile, the scaling with respect to is markedly different. thus, we can predict, both near and away from the center manifold, how the action will scale as a function of the distance between the two equilibrium points.

next we consider a simple susceptible-infected-susceptible (sis) epidemic model in the general noise case, as considered by . this model is defined by a master equation that describes the probability of fluctuations between the susceptible and infected categories in a population of individuals. assuming the population is sufficiently large, we can write a mean-field system of equations for the change of the population fractions of susceptible and infected individuals, denoted and respectively. for simplicity, the population in the mean field is assumed constant, i.e., births and deaths are equal. here is the natural birth and death rate of both the susceptible and infected populations, is the contact rate, and is the natural recovery rate of the infected population. since we have assumed that the population fractions of susceptible and infected individuals are conserved, we have . under this constraint, assuming a small random fluctuation of only the infected individuals, we can use a methodology similar to the one described above (and worked out in detail in ) to write a hamiltonian system; we shall refer to equation [1dsi] as the 1d sis model equation. the optimal path extends from the endemic state at to the extinct state , where . using the methods introduced above, the optimal path (at typical parameter values) is given below. integrating the momentum along this path gives the action, which sets the exponential scaling of the probability of extinction.

in this simple 1d sis model, the predicted action scaling as a function of is obtained by solving the hamiltonian directly for and evaluating the integral in eq. [action:integral]. since this is one of the rare cases in which the optimal path can be found analytically, it is a good test case for our method. a comparison of the analytical action to the numerical action is shown in figure [1dsisscale]; there, and are held fixed while is varied.

in the case of an sis epidemic model with independent fluctuations on both the susceptible ( ) and infected ( ) populations, the hamiltonian form of the equations is given by , and we refer to this form as the 2d sis model. the two states of interest for this system are the endemic state and the nontrivial extinct state . using the procedure discussed above, we compute the optimal path from the endemic to the extinct state and show a typical result in figure [sis_path]. instead of comparing the action scaling predicted here to an analytical formulation, we compare it to a monte-carlo simulation of the master equation for the initial system with a fixed population size of 20,000 individuals.
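the next paragraph describes this simulation; as a concrete illustration, here is a minimal gillespie sketch for an sis birth-death chain. the parameter names (beta, gamma_r, mu) are illustrative assumptions standing in for the contact, recovery, and birth/death rates above, and the default population is deliberately small so the runs finish quickly (the comparison in the text uses 20,000 individuals).

```python
# gillespie simulation of the sis master equation until extinction
import numpy as np

def sis_extinction_time(n=100, beta=1.3, gamma_r=0.9, mu=0.1, rng=None):
    """simulate the infected count i until extinction and return the time.
    transitions: i -> i+1 at rate beta*i*(n-i)/n, i -> i-1 at rate (mu+gamma_r)*i."""
    rng = np.random.default_rng() if rng is None else rng
    i = max(1, int(n * (1.0 - (mu + gamma_r) / beta)))  # near the endemic state
    t = 0.0
    while i > 0:
        up = beta * i * (n - i) / n
        down = (mu + gamma_r) * i
        total = up + down
        t += rng.exponential(1.0 / total)
        i += 1 if rng.random() * total < up else -1
    return t

# the log of the mean extinction time over many runs should scale like the action
mean_time = np.mean([sis_extinction_time() for _ in range(200)])
```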
in the monte-carlo simulation, a gillespie algorithm is employed on the sis model, and from all the simulations a probability of extinction is computed for several values of . from these, a mean extinction time is derived, and the log of this mean extinction time should scale like the action predicted from the action integral, eq. [action:integral], computed using our approach. figure [sis_action] shows a comparison of the two approaches, and good agreement is seen between these two independent methods; there, is varied while and are held fixed.

as demonstrated in , the optimal path to extinction coincides with ridges, i.e., maximal values, of the finite-time lyapunov exponents (ftles). forgoston and others propose finding ftle ridges as a method of computing optimal paths. here, we demonstrate that our approximation to the optimal path does indeed locally maximize the ftle. we proceed by using the methods outlined in to approximate the ftles at points on our optimal path and at nearby points transverse to the optimal path. we begin by picking a point on the optimal path (generated by our method) and nearby points some small distance away from the path. for the given vector field, we assume we have a flow passing through the initial point, such that . the local linear variation at is defined by . using a fourth-order runge-kutta method, we can integrate all the initial points forward in time over a fixed interval and compute the finite-time deformation rate of the local coordinates (i.e., the right cauchy-green tensor). the maximal eigenvalue of this tensor gives the ftle.

consider the case of the 1d sis extinction model from above. here, we use a path computed above and compare the values near this path to the local ftles. in figure [lyap1]a we plot the ftles over a square domain and show that a local maximum (ridge) is attained precisely where the computed optimal path predicts. in higher dimensions, the maximal lyapunov exponent is still exhibited along the optimal path. for the 2d sis model, note that the optimal path exists in four dimensions. thus, the transverse direction is the set of points obtained by rotating a normal vector to the path around two euler angles (which forms a 3d sphere for a given radius). to illustrate ftles in this higher-dimensional framework, we must consider the "shell" around the initial starting point (and orthogonal to the path) at a fixed radius, and then compute ftles on this shell all along the optimal path. we expect that the maximal ftle will occur along the optimal path, relative to nearby points on the transverse sphere. to illustrate this, we define as a unit vector orthogonal to the tangent vector of the optimal path at a given time, i.e., , and then examine the ftle as a function of the two euler angles for a given . figure [lyap1]b shows just one cross section (over a short time interval) of this high-dimensional object, obtained by setting both rotation angles to either or the antipodal angle, as a function of , and demonstrates that the maximal ftle is, indeed, along the optimal path.

one advantage of the iamm is that it allows the solution of stochastic delay-differential equations of the form . schwartz et al. have demonstrated that the methodology introduced in section 2 can be adapted to write this system as a hamiltonian system, where . the equations of motion are given by eq. [delay:eom]; note the appearance of both delay and advance terms in [delay:eom].
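before turning to the delay case, the ftle computation described above can be sketched concretely: approximate the flow map with rk4, form the flow-map jacobian from nearby initial points, build the right cauchy-green tensor, and take the logarithm of its largest eigenvalue. all names below are illustrative, and the vector field f is any drift of interest (e.g., the sis mean field).

```python
# finite-time lyapunov exponent via the right cauchy-green tensor
import numpy as np

def rk4_flow(f, x0, t_final, steps=200):
    """fourth-order runge-kutta approximation of the flow map x0 -> x(t_final)."""
    x, h = np.array(x0, dtype=float), t_final / steps
    for _ in range(steps):
        k1 = f(x)
        k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2)
        k4 = f(x + h * k3)
        x = x + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return x

def ftle(f, x0, t_final, delta=1e-6):
    """ftle at x0 over [0, t_final], from nearby points a distance delta away."""
    d = len(x0)
    dphi = np.zeros((d, d))                 # flow-map jacobian, column by column
    for j in range(d):
        e = np.zeros(d); e[j] = delta
        dphi[:, j] = (rk4_flow(f, np.array(x0) + e, t_final)
                      - rk4_flow(f, np.array(x0) - e, t_final)) / (2 * delta)
    cg = dphi.T @ dphi                      # right cauchy-green tensor
    lam_max = np.linalg.eigvalsh(cg)[-1]    # eigenvalues in ascending order
    return np.log(lam_max) / (2.0 * abs(t_final))
```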
because of the appearance of the delay term, the hamiltonian is no longer time invariant, and unlike in the previous examples, where , the zero-energy condition is not conserved. we shall consider a one-dimensional test case where , with steady states given by and for . again, we assume additive noise and derive the hamiltonian system and the corresponding equations of motion.

the iamm needs only a few minor adjustments to compute these paths numerically. primarily, the presence of the delay and advance terms adds entries to our linear system, eq. [discrete:equations]. our method uses a nonuniform time step, so the delayed time may not coincide exactly with one of our grid points. to overcome this, we can use lagrange interpolation on the closest four points such that . since we keep the time domain fixed, the points needed for each delayed or advanced value can easily be computed before the iterative scheme is started, and fewer or more terms can be used in the lagrange interpolation scheme depending upon the desired accuracy. further, if or , i.e., the delay or advance terms fall outside of our numerical domain, then we can set or , respectively. to demonstrate the effectiveness of the iamm in solving these stochastic delay problems, we show sample paths in figure [delaypath] and compare the scaling of the action against a monte-carlo simulation of the stochastic delay difference equation in figure [delayscaling], noting the good agreement; in figure [delayscaling], the solid line indicates the iamm prediction, and the circles represent a monte-carlo simulation of 1000 runs.

we have considered the problem of finding the trajectory in stochastic dynamical systems that optimizes the probability of switching between two states, or that causes one or more components to go extinct. in computing such a trajectory, called the optimal path, we needed a numerical technique that could solve a hamiltonian system with asymptotic boundary conditions in time. we have developed such a numerical method, which we call the iterative action minimizing method (iamm), for finding the optimal path of transition between two steady states in stochastic dynamical systems. this method is ideal for systems that can be written as two-point boundary value problems governed by hamiltonian systems. we have validated the iamm on a variety of problems of interest and have compared the numerical results with either analytic results or monte-carlo simulations of the full stochastic systems. as demonstrated here, the iamm is robust enough to be applicable to a variety of different types of problems, including continuous sde systems, such as the duffing equation; discrete epidemic models of finite population size, such as the sis model; and stochastic delay differential equations, in which the deterministic problem is infinite dimensional. the methodology is straightforward to generalize to higher dimensions, in contrast to other commonly used methods such as the shooting method, which is a major advantage of the iamm. the primary limitations of this method are scaling issues in very high state-space dimensions and the finesse required in picking an initial guess that guarantees convergence, both of which are typical of iterative methods of quasi-newton type.
in the limit of small noise or large system size, however, due to its robustness and ease of generalization to complex and high-dimensional dynamical systems, the method offers a considerable advantage over simulating large systems, or systems that require many monte carlo runs to generate statistics of the transition paths. as a result, we expect this method will be useful in efficiently solving a large variety of optimal transition problems in the field of stochastic dynamical systems.

the authors gratefully acknowledge the office of naval research for their support under n0001412wx20083, and the support of the nrl base research program n0001412wx30002. brandon lindley is currently an nrc postdoctoral fellow. we thank lora billings for providing the monte carlo data used in figures [sis_action] and [delayscaling], and eric forgoston for a preliminary reading of this manuscript.
|
we present a numerical method for computing optimal transition pathways and transition rates in systems of stochastic differential equations ( sdes ) . in particular , we compute the most probable transition path of stochastic equations by minimizing the effective action in a corresponding deterministic hamiltonian system . the numerical method presented here involves using an iterative scheme for solving a two - point boundary value problem for the hamiltonian system . we validate our method by applying it to both continuous stochastic systems , such as nonlinear oscillators governed by the duffing equation , and finite discrete systems , such as epidemic problems , which are governed by a set of master equations . furthermore , we demonstrate that this method is capable of dealing with stochastic systems of delay differential equations .
|
humans and other animals often show cooperation in social dilemma situations, in which defection apparently seems more lucrative than cooperation. a main mechanism governing cooperation in such situations is direct reciprocity, in which the same pairs of players repeatedly interact to realize mutual cooperation. in fact, individuals who do not repeatedly interact also cooperate with others. in this situation, reputation-based indirect reciprocity, also known as downstream reciprocity, is a viable mechanism for cooperation. in this mechanism, which i refer to as indirect reciprocity for simplicity, individuals carry their own reputation scores, which represent an evaluation of their past actions toward others. individuals are motivated to cooperate to gain good reputations, so that they are helped by others in the future, or to reward (punish) good (bad) others. indirect reciprocity facilitates cooperation in a larger population than direct reciprocity does, because unacquainted players can cooperate with each other. although evidence of indirect reciprocity is relatively scarce for nonhumans (but see ), it is widely accepted as an explanation for cooperation in humans.

humans, in particular, belong to groups identified by traits such as age, ethnicity, and culture, and individuals presumably interact more frequently with ingroup than outgroup members. group structure has been a main topic of research in social psychology and sociology for many decades, and also in network science. experimental evidence suggests that, when the population of players has group structure, two phenomena take place that are not captured by existing models of indirect reciprocity.

first, in group-structured populations, humans and even insect larvae show various forms of ingroup favoritism. in social dilemma games, individuals behave more cooperatively toward ingroup than outgroup members (e.g., ). ingroup favoritism in social dilemma situations may occur as a result of indirect reciprocity confined to the group. however, ingroup favoritism in social dilemma games is not pareto efficient, because individuals would receive larger payoffs if they also cooperated across groups. under what conditions are ingroup favoritism and intergroup cooperation sustained by indirect reciprocity? can they be bistable? ingroup favoritism, which has also been analyzed in the context of tag-based cooperation, the green beard effect, and the armpit effect, has been considered a theoretical challenge (e.g., ). nevertheless, recent research has revealed possible mechanisms, including the loose coupling of altruistic trait and tag in inheritance, a relatively fast mutation that simultaneously changes strategy and tag, a tag's relatively fast mutation as compared to the strategy's mutation, conflicts between groups, partial knowledge of others' strategies, and gene-culture coevolution. however, an indirect reciprocity account of ingroup favoritism, as is relevant to the previous experiments, is lacking.

second, in a population with group structure, individuals tend to approximate outgroup individuals' characteristics by a single value attached to the group.
this type of stereotype is known as outgroup homogeneity in social psychology, and it posits that outgroup members tend to be regarded as resembling each other more than they actually do. given the cognitive burden of remembering each individual's properties, it is also reasonable that humans generally resort to outgroup homogeneity. therefore, in indirect reciprocity games in group-structured populations, it seems natural to assume outgroup homogeneity. in other words, individuals may not care about, or have access to, the personal reputations of those in different groups, and may approximate an outgroup individual's reputation by a group reputation. some previous models analyzed situations in which players do not have access to individuals' reputations, simply because it may be difficult for an individual in a large population to separately keep track of other people's reputations, even if gossiping helps the dissemination of information. this case of incomplete information has been theoretically modeled by introducing the probability that an individual sees others' reputations in each interaction. however, these studies do not concern the approximation of individuals' personal reputations by group reputations. by analyzing a model of an indirect reciprocity game based on group reputation, i provide an indirect reciprocity account of ingroup favoritism for the first time. in addition, through an exhaustive search, i identify all the different types of stable homogeneous populations that yield full cooperation (intragroup and intergroup cooperation) or ingroup favoritism.

i assume that the population is composed of infinitely many groups, each of which is of infinite size. each player belongs to one group. players are involved in a series of donation games, the donation game being essentially a type of prisoner's dilemma game. in each round, a donor and a recipient are selected from the population in a completely random manner; each player is equally likely to be selected as donor or recipient. the donor may refer to the recipient's reputation and select one of two actions, cooperation (c) or defection (d). if the donor cooperates, the donor pays cost and the recipient receives benefit . if the donor defects, the payoffs to the donor and recipient are equal to 0. because the roles are asymmetric in a single game, the present game differs from the one-shot or standard iterated versions of the prisoner's dilemma game. this game is widely used for studying mechanisms of cooperation, including indirect reciprocity. rounds are repeated a sufficient number of times with different pairs of donors and recipients. because the population is infinite, no pair of players meets more than once, thereby excluding the possibility of direct reciprocity (e.g., ). the payoff to each player is defined as the average payoff per round.

the groups to which the donor and recipient belong are denoted by and , respectively. the simultaneously selected donor and recipient belong to the same group with probability (i.e., ; fig. [fig:ingroup outgroup observers]a) and to different groups with probability (i.e., ; fig. [fig:ingroup outgroup observers]b). at the end of each round, observers assign binary reputations, good (g) or bad (b), to the donor and the donor's group ( ) according to a given social norm. i consider up to so-called second-order social norms, with which the observers assign g or b as a function of the donor's action and the reputation (i.e.
, g or b) of the recipient or the recipient's group ( ). representative second-order social norms are shown in fig. [fig:norms]. under image scoring (``scoring'' in fig. [fig:norms]), an observer regards a donor's action c or d to be g or b, respectively, regardless of the recipient's reputation. in the absence of a group-structured population, scoring does not realize cooperation based on indirect reciprocity unless certain specific conditions are met. simple standing (``standing'' in fig. [fig:norms]) and stern judging (``judging'' in fig. [fig:norms]; also known as kandori) enable full cooperation. shunning also enables full cooperation if the players' reputations are initially g and the number of rounds is finite, or if the players' reputations are partially invisible.

in the presence of group structure, the four possible locations of the observer are schematically shown in fig. [fig:ingroup outgroup observers]. i call an observer belonging to an ``ingroup'' observer; otherwise, the observer is called an ``outgroup'' observer. the observers can adopt different social norms for the four cases, as summarized in fig. [fig:ingroup outgroup observers]. when the donor and recipient belong to the same group (fig. [fig:ingroup outgroup observers]a), the ingroup observer uses the norm denoted by to update the donor's personal reputation; in this situation, the outgroup observer does not update the donor's or 's reputation (but see appendix [sec:appendix variant]). when the donor and recipient belong to different groups (fig. [fig:ingroup outgroup observers]b), the ingroup observer uses the norm denoted by to update the donor's personal reputation; in this situation, the outgroup observer uses the norm denoted by to update 's reputation. these four cases are explained in more detail in sec. [sub:update].

the distinction between and allows the ingroup observer to use a double standard for assessing donors. for example, a donor defecting against an ingroup g recipient may be regarded to be b, whereas a defection against an outgroup g recipient may be regarded as g. such different assessments would not be allowed if and were not distinguished. i call , , and subnorms. all the players are assumed to share the subnorms. the typical norms shown in fig. [fig:norms] can be used as subnorms. a subnorm is specified by assigning g or b to each combination of the donor's action (i.e., c or d) and the recipient's reputation (i.e., g or b); therefore, there are subnorms. an entire social norm of a population consists of a combination of the three subnorms, and there are social norms.

the action rule refers to the mapping from the recipient's reputation (i.e., g or b) to the donor's action (i.e., c or d). the allc and alld donors cooperate and defect, respectively, regardless of the recipient's reputation.
a discriminator (disc) donor cooperates or defects when the recipient's reputation is g or b, respectively. an anti-discriminator (antidisc) donor cooperates or defects when the recipient's reputation is b or g, respectively. the donor is allowed to use different action rules toward ingroup and outgroup recipients. for example, a donor who adopts allc and alld toward ingroup and outgroup recipients, respectively, implements reputation-independent ingroup favoritism. there are action rules.

a donor refers to the recipient's personal reputation when (fig. [fig:ingroup outgroup observers]a) and to 's group reputation when (fig. [fig:ingroup outgroup observers]b). in each round, the ingroup and outgroup observers update the donor's and 's reputations, respectively. if , the donor is assumed to recognize the recipient's personal reputation (fig. [fig:ingroup outgroup observers]a). an ingroup observer in this situation updates the donor's personal reputation on the basis of the donor's action, the recipient's personal reputation, and subnorm . an outgroup observer in this situation is assumed not to update 's reputation, because such an observer does not know the recipient's personal reputation, although the donor does; the outgroup observer may then want to refrain from evaluating the donor, because the donor and the observer use different information about the recipient. i also analyzed a variant of the model in which the outgroup observer updates 's reputation in this situation; the results are roughly the same as those obtained for the original model (appendix [sec:appendix variant]).

if , the donor is assumed to recognize 's reputation, but not the recipient's personal reputation (fig. [fig:ingroup outgroup observers]b). an ingroup observer in this situation updates the donor's personal reputation on the basis of the donor's action, 's reputation, and subnorm ; both the donor and the observer refer to 's reputation and not to the recipient's personal reputation. an outgroup observer in this situation updates 's reputation on the basis of the donor's action, 's reputation, and subnorm . an outgroup observer knows the recipient's personal reputation if the observer and recipient are in the same group. however, the observer is assumed to ignore this information, for two reasons. first, it is evident to the observer that the donor does not have access to the recipient's personal reputation. to explain the second reason, let us consider an outgroup observer who belongs to in a certain round, and assume that this observer assigns a new reputation to according to a subnorm different from the one used when the observer does not belong to . the same observer does not belong to the next time the observer updates 's group reputation, because the probability that the observer belongs to is infinitesimally small under the assumption of infinitely many groups. therefore, the subnorm used when the observer belongs to is rarely used and is immaterial in the present model.

finally, observers commit reputation assessment errors. with probability , ingroup and outgroup observers independently assign the reputation opposite to the intended one to the donor and , respectively. i introduce this error because g and b players must coexist in the population to distinguish the payoff values for different pairs of action rule and social norm (action norm pair); such a distinction is necessary for the stability analysis in the following discussion.
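a minimal encoding of the ingredients just defined may help fix ideas: subnorms as maps from (donor action, recipient reputation) to a new reputation, the four action rules, and the noisy assessment step. the identifiers below (norm_from_shorthand, ACTION_RULES, play_round, eps) are illustrative, not from the original model.

```python
# building blocks of the reputation game: subnorms, action rules, noisy update
import numpy as np

def norm_from_shorthand(code):
    """four-letter shorthand lists the images of (c,g), (d,g), (c,b), (d,b);
    e.g. 'gbgg' is simple standing and 'gbbg' is stern judging."""
    keys = [("c", "g"), ("d", "g"), ("c", "b"), ("d", "b")]
    return dict(zip(keys, code))

SCORING, STANDING = norm_from_shorthand("gbgb"), norm_from_shorthand("gbgg")
JUDGING, SHUNNING = norm_from_shorthand("gbbg"), norm_from_shorthand("gbbb")

ACTION_RULES = {              # reputation seen -> action taken
    "allc": {"g": "c", "b": "c"},
    "alld": {"g": "d", "b": "d"},
    "disc": {"g": "c", "b": "d"},
    "antidisc": {"g": "d", "b": "c"},
}

def play_round(rule, subnorm, recipient_rep, eps, rng):
    """donor acts on the reputation it can see; the relevant observer applies
    the subnorm and flips the outcome with assessment-error probability eps."""
    action = ACTION_RULES[rule][recipient_rep]
    new_rep = subnorm[(action, recipient_rep)]
    if rng.random() < eps:
        new_rep = "g" if new_rep == "b" else "b"
    return action, new_rep

rng = np.random.default_rng(0)
action, rep = play_round("disc", STANDING, "g", eps=0.01, rng=rng)
```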
for simplicity, i neglect other types of error.

to examine the stability of an action rule under a given social norm, i consider two types of mutants. the first is a single mutant that invades a group; there are types of single mutants. a single mutant does not affect the action rule, norm, or reputation of the group that the mutant belongs to, because of the assumption of infinite group size. the second type is a group mutant. a homogeneous group composed of mutants may make the mutant type stronger than the resident type. for example, a group composed of players who cooperate with ingroup recipients and defect against outgroup recipients may invade a fully cooperative population if any intergroup interaction (i.e., c or d) is regarded to be g under . by definition, a group mutant is a homogeneous group of mutants that differs from the resident players in either the action rule or the social norm. i consider two varieties of group mutants, as described in sec. [sec:results].

consider a homogeneous resident population in which all players share an action norm pair. i will examine the stability of this population against invasion by single and group mutants. for this purpose, i calculate the fraction of players with a g reputation, the probability of cooperation, and the payoff after infinitely many rounds. denote by $p^*$ and $p_{\rm g}^*$ the equilibrium probabilities that a player's and a group's reputations are g, respectively. the self-consistent equation for $p^*$ is given by
$$p^* = r^{\rm in}\left[ p^* \phi_{\rm g}^{\rm in}(\sigma^{\rm in}) + (1-p^*)\,\phi_{\rm b}^{\rm in}(\sigma^{\rm in}) \right] + r^{\rm out}\left[ p_{\rm g}^* \phi_{\rm g}^{\rm in}(\sigma^{\rm out}) + (1-p_{\rm g}^*)\,\phi_{\rm b}^{\rm in}(\sigma^{\rm out}) \right], \label{eq:p equilibrium}$$
where $\sigma^{\rm in}$ and $\sigma^{\rm out}$ are the action rules (i.e., allc, disc, antidisc, or alld) that the donor adopts toward ingroup and outgroup recipients, respectively. $\phi_{\rm g}^{\rm in}(\sigma^{\rm in})$ and $\phi_{\rm b}^{\rm in}(\sigma^{\rm in})$ are the probabilities that the ingroup observer, based on , assigns reputation g to a donor who has played with a g or b ingroup recipient (i.e., ), respectively (fig. [fig:ingroup outgroup observers]a). similarly, $\phi_{\rm g}^{\rm in}(\sigma^{\rm out})$ and $\phi_{\rm b}^{\rm in}(\sigma^{\rm out})$ apply when the recipient is in a different group (i.e., ) and the observer uses (fig. [fig:ingroup outgroup observers]b). it should be noted that $\phi_{\rm g}^{\rm in}(\sigma^{\rm in})$ and $\phi_{\rm g}^{\rm in}(\sigma^{\rm out})$, for example, may differ from each other even if $\sigma^{\rm in}=\sigma^{\rm out}$. owing to the reputation assignment error, each $\phi$ lies between $\epsilon$ and $1-\epsilon$. for example, if the donor is disc toward ingroup recipients and subnorm is scoring, $\phi_{\rm g}^{\rm in}(\sigma^{\rm in})=1-\epsilon$ and $\phi_{\rm b}^{\rm in}(\sigma^{\rm in})=\epsilon$.

the self-consistent equation for $p_{\rm g}^*$ is given by
$$p_{\rm g}^* = r^{\rm in} p_{\rm g}^* + r^{\rm out}\left[ p_{\rm g}^* \phi_{\rm g}^{\rm out}(\sigma^{\rm out}) + (1-p_{\rm g}^*)\,\phi_{\rm b}^{\rm out}(\sigma^{\rm out}) \right], \label{eq:pg equilibrium case a}$$
where $\phi_{\rm g}^{\rm out}(\sigma^{\rm out})$ and $\phi_{\rm b}^{\rm out}(\sigma^{\rm out})$ are the probabilities that the outgroup observer, based on , assigns reputation g to the donor's group when the donor has played with a g or b outgroup recipient (i.e., ), respectively (fig. [fig:ingroup outgroup observers]b). the first term on the right-hand side of eq. [eq:pg equilibrium case a] corresponds to the fact that 's reputation is not updated in the situation illustrated in fig. [fig:ingroup outgroup observers]a. equations [eq:p equilibrium] and [eq:pg equilibrium case a] lead to
$$p^* = \frac{r^{\rm in}\phi_{\rm b}^{\rm in}(\sigma^{\rm in}) + r^{\rm out}\left[ p_{\rm g}^*\phi_{\rm g}^{\rm in}(\sigma^{\rm out}) + (1-p_{\rm g}^*)\phi_{\rm b}^{\rm in}(\sigma^{\rm out})\right]}{1-r^{\rm in}\phi_{\rm g}^{\rm in}(\sigma^{\rm in})+r^{\rm in}\phi_{\rm b}^{\rm in}(\sigma^{\rm in})}$$
and
$$p_{\rm g}^* = \frac{\phi_{\rm b}^{\rm out}(\sigma^{\rm out})}{1-\phi_{\rm g}^{\rm out}(\sigma^{\rm out})+\phi_{\rm b}^{\rm out}(\sigma^{\rm out})}.$$

to examine the stability of the action rule ( , ) against invasion by single mutants under a given social norm, i consider a single mutant with action rule ( , ).
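before turning to mutants, the resident equilibrium just derived can be evaluated numerically. a hedged sketch, reusing ACTION_RULES and the norm encoding from the earlier block (all names are illustrative), is:

```python
# closed-form evaluation of the resident equilibrium reputation frequencies
def phi(rule, subnorm, eps):
    """(phi_g, phi_b): probabilities that an observer assigns g after the donor,
    following `rule`, meets a g or b recipient under `subnorm` (error included)."""
    def assign(rep):
        action = ACTION_RULES[rule][rep]
        return 1.0 - eps if subnorm[(action, rep)] == "g" else eps
    return assign("g"), assign("b")

def equilibrium(rule_in, rule_out, s_in, s_out, s_group, r_in, eps):
    r_out = 1.0 - r_in
    # group reputation: p_g* = phi_b / (1 - phi_g + phi_b) under the group subnorm
    fg, fb = phi(rule_out, s_group, eps)
    p_group = fb / (1.0 - fg + fb)
    # personal reputation given p_group
    ig, ib = phi(rule_in, s_in, eps)
    og, ob = phi(rule_out, s_out, eps)
    num = r_in * ib + r_out * (p_group * og + (1.0 - p_group) * ob)
    den = 1.0 - r_in * ig + r_in * ib
    return num / den, p_group

# sanity check: a fully discriminating resident under standing-type subnorms
# should be almost surely g when eps is small
p_star, pg_star = equilibrium("disc", "disc", STANDING, STANDING, STANDING,
                              r_in=0.5, eps=0.01)
```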
because the group is assumed to be infinitely large, a single mutant does not change the reputation of the invaded group. the equilibrium probability that a mutant receives personal reputation g is given by
$$p^{\prime *} = r^{\rm in}\left[ p^*\phi_{\rm g}^{\rm in}(\sigma^{\rm in\prime}) + (1-p^*)\phi_{\rm b}^{\rm in}(\sigma^{\rm in\prime})\right] + r^{\rm out}\left[ p_{\rm g}^*\phi_{\rm g}^{\rm in}(\sigma^{\rm out\prime}) + (1-p_{\rm g}^*)\phi_{\rm b}^{\rm in}(\sigma^{\rm out\prime})\right]. \label{eq:p'*}$$
when the probabilities that the recipient and have a g reputation are equal to and , respectively, the resident donor cooperates with probability $r^{\rm in}\psi(\sigma^{\rm in}, p^*) + r^{\rm out}\psi(\sigma^{\rm out}, p_{\rm g}^*)$, where $\psi(\sigma, p)$ is the probability that a donor with action rule $\sigma$ cooperates when the recipient's personal or group reputation is g with probability $p$; and (or ) are the probabilities that a donor cooperates with a g and a b recipient, respectively, and allc, disc, antidisc, and alld correspond to , and , respectively. the payoff to a resident ( , ) player is given by
$$\pi = -c\left[r^{\rm in}\psi(\sigma^{\rm in}, p^*) + r^{\rm out}\psi(\sigma^{\rm out}, p_{\rm g}^*)\right] + b\left[r^{\rm in}\psi(\sigma^{\rm in}, p^*) + r^{\rm out}\psi(\sigma^{\rm out}, p_{\rm g}^*)\right]. \label{eq:pi}$$
the payoff to a ( , ) mutant invading the homogeneous population of the resident action norm pair is given by
$$\pi' = -c\left[r^{\rm in}\psi(\sigma^{\rm in\prime}, p^*) + r^{\rm out}\psi(\sigma^{\rm out\prime}, p_{\rm g}^*)\right] + b\left[r^{\rm in}\psi(\sigma^{\rm in}, p^{\prime *}) + r^{\rm out}\psi(\sigma^{\rm out}, p_{\rm g}^*)\right]. \label{eq:pi'}$$
if $\pi' < \pi$ for any mutant, the pair of the action rule ( , ) and social norm ( , , ) is stable against invasion by single mutants.

for a mutant group composed of players sharing an action norm pair, let $p_{\rm g}^{\prime *}$ denote the equilibrium probability that the mutant group has group reputation g. i obtain
$$p^{\prime *} = r^{\rm in}\left[ p^{\prime *}\phi_{\rm g}^{\rm in\prime}(\sigma^{\rm in\prime}) + (1-p^{\prime *})\phi_{\rm b}^{\rm in\prime}(\sigma^{\rm in\prime})\right] + r^{\rm out}\left[ p_{\rm g}^*\phi_{\rm g}^{\rm in\prime}(\sigma^{\rm out\prime}) + (1-p_{\rm g}^*)\phi_{\rm b}^{\rm in\prime}(\sigma^{\rm out\prime})\right] \label{eq:p'* group mutation}$$
and
$$p_{\rm g}^{\prime *} = r^{\rm in} p_{\rm g}^{\prime *} + r^{\rm out}\left[ p_{\rm g}^*\phi_{\rm g}^{\rm out\prime}(\sigma^{\rm out\prime}) + (1-p_{\rm g}^*)\phi_{\rm b}^{\rm out\prime}(\sigma^{\rm out\prime})\right], \label{eq:pg'* group mutation case a}$$
where $\phi_{\rm g}^{\rm in\prime}(\sigma^{\rm in\prime})$ (or $\phi_{\rm b}^{\rm in\prime}(\sigma^{\rm in\prime})$) is the probability that an ingroup observer assigns reputation g to a mutant donor who has played with a g (or b) ingroup recipient. even if $\sigma^{\rm in}$ and $\sigma^{\rm in\prime}$ are the same, $\phi^{\rm in\prime}$ will generally differ from $\phi^{\rm in}$, because the ingroup observer in the mutant group may use a subnorm different from the one used in the resident population. parallel definitions apply to $\phi_{\rm g}^{\rm out\prime}$ and $\phi_{\rm b}^{\rm out\prime}$. equations [eq:p'* group mutation] and [eq:pg'* group mutation case a] yield
$$p^{\prime *} = \frac{r^{\rm in}\phi_{\rm b}^{\rm in\prime}(\sigma^{\rm in\prime}) + r^{\rm out}\left[ p_{\rm g}^*\phi_{\rm g}^{\rm in\prime}(\sigma^{\rm out\prime}) + (1-p_{\rm g}^*)\phi_{\rm b}^{\rm in\prime}(\sigma^{\rm out\prime})\right]}{1-r^{\rm in}\phi_{\rm g}^{\rm in\prime}(\sigma^{\rm in\prime})+r^{\rm in}\phi_{\rm b}^{\rm in\prime}(\sigma^{\rm in\prime})} \label{eq:p'* group mutation final}$$
and
$$p_{\rm g}^{\prime *} = p_{\rm g}^*\phi_{\rm g}^{\rm out\prime}(\sigma^{\rm out\prime}) + (1-p_{\rm g}^*)\phi_{\rm b}^{\rm out\prime}(\sigma^{\rm out\prime}),$$
respectively. the payoff to a mutant player in the mutant group is given by
$$\pi' = -c\left[r^{\rm in}\psi(\sigma^{\rm in\prime}, p^{\prime *}) + r^{\rm out}\psi(\sigma^{\rm out\prime}, p_{\rm g}^*)\right] + b\left[r^{\rm in}\psi(\sigma^{\rm in\prime}, p^{\prime *}) + r^{\rm out}\psi(\sigma^{\rm out}, p_{\rm g}^{\prime *})\right]. \label{eq:pi g'}$$
if $\pi' < \pi$ holds true for any group mutant player, the resident population is stable against invasion by group mutants.
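putting the pieces together, a hedged scan over the single-mutant action rules considered above (mutants share the resident social norm) can be sketched as follows, using eqs. [eq:pi] and [eq:pi'] and the helpers from the earlier blocks; b and c are the donation benefit and cost.

```python
# single-mutant stability check built on phi, equilibrium, and ACTION_RULES
def psi(rule, p_good):
    """cooperation probability of `rule` toward a recipient who is g w.p. p_good."""
    tg = 1.0 if ACTION_RULES[rule]["g"] == "c" else 0.0
    tb = 1.0 if ACTION_RULES[rule]["b"] == "c" else 0.0
    return p_good * tg + (1.0 - p_good) * tb

def stable_against_single_mutants(rule_in, rule_out, s_in, s_out, s_group,
                                  r_in, eps, b, c):
    r_out = 1.0 - r_in
    p, pg = equilibrium(rule_in, rule_out, s_in, s_out, s_group, r_in, eps)
    pi_res = (b - c) * (r_in * psi(rule_in, p) + r_out * psi(rule_out, pg))
    for m_in in ACTION_RULES:
        for m_out in ACTION_RULES:
            if (m_in, m_out) == (rule_in, rule_out):
                continue
            # mutant's personal reputation, judged under the resident subnorms
            ig, ib = phi(m_in, s_in, eps)
            og, ob = phi(m_out, s_out, eps)
            p_mut = (r_in * (p * ig + (1.0 - p) * ib)
                     + r_out * (pg * og + (1.0 - pg) * ob))
            pi_mut = (-c * (r_in * psi(m_in, p) + r_out * psi(m_out, pg))
                      + b * (r_in * psi(rule_in, p_mut)
                             + r_out * psi(rule_out, pg)))
            if pi_mut >= pi_res:
                return False
    return True
```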
this symmetry consideration leaves action norm pairs ( fig .[ fig : equilibrium selection ] ) .i exhaustively examined the stability of all action norm pairs .a similar exhaustive search was first conducted in for an indirect reciprocity game without group structure in the population . in the following , ( eq . ) mentions the player s payoff in the resident population in the limit of no reputation assignment error , i.e. , .i first describe action rules that are stable against invasion by single mutants under a given social norm .i identified them using eqs .. under any given social norm , action rule ( , ) ( alld , alld ) is stable and yields .other action norm pairs also yield , but there are 588 stable action norm pairs with ( fig .[ fig : equilibrium selection ] ) . for a given social norm , at most one action rule that yieldsa positive payoff is stable .for all 588 solutions , the condition for stability against invasion by single mutants ( i.e. , , where and are given by eqs . and , respectively ) is given by equation implies that cooperation is likely when the benefit - to - cost ratio is large , which is a standard result for different mechanisms of cooperation in social dilemma games .cooperation is also likely when intragroup interaction is relatively more frequent than intergroup interaction ( i.e. , large ) .the stability of these 588 action norm pairs against invasion by group mutants was also examined based on eqs . .properly setting the variety of group mutants is not a trivial issue . at most , types of group mutants that differ from the resident population in either action rule or social norm are possible. however , an arbitrarily selected homogeneous mutant group may be fragile to invasion by different single mutants into the mutant group .although i do not model evolutionary dynamics , evolution would not allow the emergence and maintenance of such weak mutant groups . with this in mind , i consider two group mutation scenarios. single mutants may invade the resident population when eq .is violated . in this scenario 1 ,the mutants are assumed to differ from the resident population in the action rule , but not the social norm , for simplicity .there are such mutants , and some of them , including ( , ) ( alld , alld ) , can invade the resident population when .such mutant action rules may spread to occupy a single group when eq .is violated .i consider the stability of the resident population against the homogeneous groups of mutants that invade the resident population as single mutants when . among the 588 action norm pairs that yield , 440 pairs are stable against group mutation . among these 440 pairs ,i focus on those yielding perfect intragroup cooperation , i.e. , those yielding , where and are given in sec .[ sec : analysis ] .for the other stable pairs , see appendix [ sec : outgroup favoritism ] .this criterion is satisfied by 270 pairs ( fig .[ fig : equilibrium selection ] ) .for all 270 pairs , every player obtains personal reputation g ( i.e. , ) , and the donor cooperates with ingroup recipients because the recipients have reputation g ( i.e. , disc ) . in all 270 pairs , is either standing ( gbgg in shorthand notation ) , judging ( gbbg ) , or shunning ( gbbb ) ( refer to fig .[ fig : norms ] for definitions of these norms ) . 
in the shorthand notation ,the first , second , third , and fourth letters ( either g or b ) indicate the donor s or s new reputation when the donor cooperates with a g recipient , the donor defects against a g recipient , the donor cooperates with a b recipient , and the donor defects against a b recipient , respectively . standing , judging , and shunning in are exchangeable for any fixed combination of disc , , , and . therefore , there are combinations of , , and , which are summarized in table [ tab : stable pairs 1 ] .an asterisk indicates an entry that can be either g or b. for example , gbg indicates standing ( gbgg ) or judging ( gbbg ) .the probability of cooperation toward outgroup recipients , payoff ( ; eq . ) , and the probability that a group has a g reputation ( ; eq . )are also shown in table [ tab : stable pairs 1 ] . the stable action norm pairs can be classified into three categories . *full cooperation : donors behave as disc toward outgroup recipients , i.e. , disc and cooperate with both ingroup and outgroup recipients with probability 1 .accordingly , and .+ in this case , indirect reciprocity among different groups as well as that within single groups is realized .action rule disc is stable if is either standing ( gbgg ) , judging ( gbbg ) , or shunning ( gbbb ) and is either standing or judging .the condition for stability against group mutation is the mildest one ( i.e. , ) for each action norm pair . + under full cooperation , and must be one that stabilizes cooperation in the standard indirect reciprocity game without a group - structured population .the ingroup observer monitors donors actions toward outgroup recipients through the use of standing , judging , or shunning , even though ingroup players are not directly harmed if donors defect against outgroup recipients .the ingroup observer does so because donors defection against outgroup recipients would negatively affect the group s reputation . * partial ingroup favoritism : donors adopt disc and cooperate with ingroup recipients with probability 1 and outgroup recipients with probability .accordingly , and .+ in this case , action rule disc is stable if is either standing ( gbgg ) or judging ( gbbg ) , and is either scoring ( gbgb ) or shunning ( gbbb ) .the condition for stability against group mutation is shown in table [ tab : condition partial ingroup favoritism ] .* perfect ingroup favoritism : donors adopt alld and always cooperate with ingroup recipients and never with outgroup recipients regardless of the recipient s group reputation .accordingly , .+ table [ tab : stable pairs 1 ] suggests that action rule ( , ) ( disc , alld ) can be stable for any subnorm .this is true because the group reputation , whose update rule is given by , is irrelevant in the current situation ; the donor anyways defects against outgroup recipients .nevertheless , determines that is consistent with ingroup cooperation through the probability of a g group reputation .+ when gg , the outgroup observer evaluates defection against outgroup recipients to be g ( fig .[ fig : ingroup outgroup observers]b ) .therefore , . in this case , gbb , gbg , and ggg stabilize perfect ingroup favoritism . 
under any of these ,the ingroup observer assigns g to a donor that defects against a recipient in a g outgroup because the second entry of is equal to g in each case .therefore , , and full ingroup cooperation is stable .+ when gb or bg , the outgroup observer evaluates defection against outgroup recipients to be g with probability .therefore , . in this case , gg stabilizes perfect ingroup favoritism .under such an , the ingroup observer assigns g to a donor that defects against a recipient in a g outgroup because the second and fourth entries of are equal to g. + when bb , the outgroup observer evaluates defection against outgroup recipients to be b. therefore , .in this case , bbg , bgg , and ggg stabilize perfect ingroup favoritism .under such an , the ingroup observer assigns g to a donor that defects against a recipient in a g outgroup because the fourth entry of is equal to g. + in all the cases , the stability against invasion by group mutants requires . in scenario 2 of group mutation, it is hypothesized that a group of mutants immigrates from a different population that is stable against invasion by single mutants .such a group mutant may appear owing to the encounter of different stable cultures ( i.e. , action norm pairs ) .the pairs that are stable against invasion by single mutants and yield zero payoff , such as the population of alld players , must be also included in the group mutant list .it should be noted that a mutant group may have a different social norm from that for the resident population . among the 588 action norm pairs that are stable against single mutation , no pair is stable against group mutation .however , 140 pairs are stable against group mutation for any in a relaxed sense that the resident player s payoff is not smaller than the group mutant s payoff , i.e. , ( fig .[ fig : equilibrium selection ] ) .the homogeneous population of each pair is neutrally invaded by some group mutants , i.e. , .therefore , i examine the evolutionary stability ( e.g. , ) against group mutation . in other words , for the group mutants yielding , i require when the resident players are replaced by group mutants .all 140 action norm pairs are evolutionarily stable except that each pair is still neutrally invaded by their cousins .for example , four action norm pairs specified by disc , gbg , gbg , gbgg neutrally invade each other .these pairs yield the same payoff and are evolutionarily stable against invasion by the other group mutants .therefore , i conclude that the four pairs collectively form a set of stable solutions. other sets of stable solutions consist of four or eight neutrally invadable action norm pairs that yield the same payoff and differ only in and . all 140 pairs realize perfect intragroup cooperation such that the players have g personal reputations and disc ( fig .[ fig : equilibrium selection ] ) .subnorm gbgg ( i.e. , standing ) or gbbg ( i.e. , judging ) is exchangeable for any fixed combination of disc , , , and . 
therefore , there are possible combinations of , , and , which are listed in table [ tab : stable pairs 2 ] .the 140 pairs are a subset of the 270 pairs stable under scenario 1 .the stable sets of action norm pairs can be classified into three categories .( 1 ) full cooperation occurs if all the subnorms are standing or judging .as already mentioned as an example , under gbgg , the four action norm pairs ( disc , disc , gbgg , gbgg ) , ( disc , disc , gbgg , gbbg ) , ( disc , disc , gbbg , gbgg ) , and ( disc , disc , gbbg , gbbg ) can neutrally invade each other . similarly ,if gbbg , the same four action norm pairs constitute a set realizing stable full cooperation .these two sets of four pairs are evolutionarily stable against invasion by each other . in total , there are eight pairs that realize full cooperation .( 2 ) partial ingroup favoritism occurs for a set of four action norm pairs .( 3 ) perfect ingroup favoritism occurs under the same subnorms as those for scenario 1 . for a fixed , the same eight action norm pairs ( disc , alld , gbg , gg ) yield the same payoff , can neutrally invade each other , and are evolutionarily stable against the other group mutants .in fact , players may not differentiate between the three subnorms .players may use a common norm for assessing ingroup donors irrespective of the location of recipients .table [ tab : stable pairs 1 ] indicates that , if is imposed for the resident population , but not for mutants , perfect ingroup favoritism is excluded . under scenario 1 ,full cooperation is stable when standing , judging , or shunning and standing or judging .partial ingroup favoritism is stable when standing or judging and scoring or shunning . under scenario 2 ,full cooperation is stable when standing or judging and standing or judging .partial ingroup favoritism is stable when standing or judging and shunning .alternatively , players may use a common norm for assessing donors playing with outgroup recipients irrespective of the location of donors . if is allowed and is imposed , partial ingroup favoritism is excluded . under scenario 1 ,full cooperation is stable when standing , judging , or shunning and standing or judging .perfect ingroup favoritism is stable when standing , judging , or shunning and gg .the results under scenario 2 differ from those under scenario 1 only in that shunning is disallowed .finally , if all the three subnorms are forced to be equal , only full cooperation is stable , and the norm is standing or judging .this holds true for both scenarios 1 and 2 .i identified the pairs of action rule and social norm that are stable against invasion by single and group mutants in the game of group - structured indirect reciprocity . full cooperation ( i.e. , cooperation within and across groups )based on personal and group reputations , partial ingroup favoritism , and perfect ingroup favoritism are stable under different social norms .perfect ingroup favoritism is attained only when the donor defects against outgroup recipients regardless of their reputation ( i.e. , alld ) .perfect ingroup favoritism does not occur with the combination of a donor that is ready to cooperate with g outgroup recipients ( i.e. , disc ) and a b group reputation .the mechanism for ingroup favoritism revealed in this study is distinct from those proposed previously ( see sec . 
[sec : introduction ] ) . the major condition for full cooperation , partial ingroup favoritism , or perfect ingroup favoritism , depending on the assumed social norm , is given by . in only 3 out of 270 social norms in scenario 1 , an additional condition on is imposed ( sec . [ sub : scenario 1 ] ) . in general , different mechanisms of cooperation can be understood in a unified manner such that cooperation occurs if and only if is larger than a threshold value . for example , must be larger than the inverse of the relatedness parameter and the inverse of the discount factor in kin selection and direct reciprocity , respectively . the present result also fits this view ; corresponds to in the case of kin selection . i assumed that players approximate personal reputations of individuals in other groups by group reputations ( i.e. , outgroup homogeneity ) . adoption of outgroup homogeneity may be evolutionarily beneficial for players owing to the reduction in the cognitive burden of recognizing others personal reputations . instead , the players pay the potential cost of not being able to know the personal reputations of individuals in other groups . to explore evolutionary origins of group reputation , one has to examine competition between players using the group reputation and players not using it . it would also be necessary to introduce a parameter representing the cost of obtaining personal reputations of outgroup individuals . such an analysis is warranted for future work . all the players are assumed to use the same social norm . this assumption may be justified for well - mixed populations but less so for populations with group structure because group structure implies relatively little intergroup communication . it seems more natural to assume that subnorms and , which are used to evaluate actions of ingroup donors , depend on groups . under scenario 2 ( sec . [ sub : scenario 2 ] ) , any stable action norm pair is neutrally invaded by its cousins that differ only in and . this result implies that different groups can use different norms . for example , for all the solutions shown in table [ tab : stable pairs 2 ] , some groups can use gbgg ( i.e. , standing ) , while other groups in the same population can use gbbg ( i.e. , judging ) .
to better understand the possibility of heterogeneous social norms , analyzing a population composed of a small number of groups , probably by different methods , would be helpful . indirect reciprocity based on group reputation is distinct from any type of group selection . this is true for both full cooperation and ingroup favoritism . there are two dominant variants of group selection that serve as mechanisms for cooperation in social dilemma games . the first type is group competition , in which selection pressure acts on groups such that a group with a large mean payoff would replace one with a small mean payoff . models with group competition induce ingroup favoritism , altruistic punishment , and evolution of the judging social norm in the standard game of indirect reciprocity whereby players interact within each group . in contrast , the present study is not concerned with evolutionary dynamics including group competition . the group mutant is assumed to statically compare the payoff to the resident group with that to the mutant group . the second type of group selection requires assortative reproduction in the sense that the offspring have a higher probability of belonging to specific groups than to other groups depending on the offspring s genotype . it is mathematically identical to kin selection . this variant of group selection is also irrelevant to the present model , which is not concerned with the reproduction process . the analysis in this study is purely static . i avoided examining evolutionary dynamics for two reasons . first , the discovered mechanism for cooperation may be confused with group selection in the presence of evolutionary dynamics . second , the model becomes needlessly complicated . introducing evolutionary dynamics implies that one specifies a rule for reproduction . offspring may be assumed to belong to the parent s group or to migrate to another group . it may then be necessary to consider the treatment of , for example , heterogeneous group sizes . because evolutionary dynamics are neglected , the present model explains neither the emergence of full cooperation and ingroup favoritism nor the likelihood of different solutions , which is a main limitation of the present study . i stress that the concept of group mutants is introduced to sift the set of stable action norm pairs . unless group competition is assumed , the concept of group mutants does not particularly promote cooperation in evolutionary dynamics . under a proper social norm , full cooperation or ingroup favoritism is stable if ( i.e. , eq . is satisfied ) in most cases . with probability , the donor , recipient , and observer are engaged in the standard ( i.e. , no group structure ) indirect reciprocity game limited to a single group ( fig . [ fig : ingroup outgroup observers]a ) . in the standard indirect reciprocity game under incomplete information , is quite often the condition for cooperation , where is the probability that the recipient s reputation is observed . this holds true when indicates the observation probability for the donor or that for both the donor and observer . because is also equal to the probability that the donor sees the recipient s personal reputation , resembles . in fact , replacing by in eq . yields .
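the inequalities quoted in this paragraph and earlier in this section were lost in extraction ; as a hedged reconstruction using the standard symbols of the cooperation literature ( benefit b , cost c , relatedness r , continuation probability w and observation probability q , none of which survive in the text ) , the threshold conditions read

\frac{b}{c} > \frac{1}{r} \quad (\text{kin selection}), \qquad \frac{b}{c} > \frac{1}{w} \quad (\text{direct reciprocity}), \qquad \frac{b}{c} > \frac{1}{q} \quad (\text{standard indirect reciprocity}),

and a crude mean - field restatement of the last condition in python ( an illustration , not a computation performed in the paper ) is :

def disc_resists_alld(b, c, q):
    # the reputation-mediated return of cooperating roughly scales as q*b,
    # to be weighed against the direct cost c (mean-field estimate)
    return q * b > c

for q in (0.2, 0.5, 0.9):
    print(f"q={q}: disc stable against alld? {disc_resists_alld(3.0, 1.0, q)}")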
if a player is capable of recognizing the personal reputation of a fixed number of others , the maximum population size for which indirect reciprocity is possible in the standard indirect reciprocity game scales as .the consistency between eq .and implies that the concept of group reputation does not increase the maximum population size for which indirect reciprocity occurs .however , under group competition ( sec .[ sub : group selection ] ) , full cooperation and ingroup favoritism can be stable even if the restriction imposed by eq .is removed .to explain this point , assume that the population is subjected to evolutionary dynamics such that players with relatively large payoffs would bear more offspring in the same group and group competition occurs .the rate of group competition is denoted by , where is the mean time interval between successive group competition events .emergence of a single mutant occurs with rate .selection and reproduction of single players occur with rate . if eq .is violated , single mutants emerge in time .then , some types of mutants , including the alld mutant , spread in the invaded group in time under scenario 1 of group mutation .the invaded group presumably possesses a smaller group - averaged payoff than other resident groups because the resident population is stable against invasion by group mutants as long as , in all but three of 270 action norm pairs ( table [ tab : condition partial ingroup favoritism ] ) . if , such an invaded group is likely to be eradicated by group competition because group competition occurs much faster than the emergence of single mutants . in this case , full cooperation or ingroup favoritism , depending on the given social norm , can be maintained in the absence of eq . .this discussion does not involve timescale .group competition is needed to remove eq . .if eq . is imposed , cooperation occurs without group competition . in this section ,i discuss possible linkages between the present model and the previous experiments examining indirect reciprocity and third - party punishments .yamagishi and colleagues conducted a series of laboratory experiments to show that ingroup favoritism is induced by a group heuristic . with a group heuristic, donors cooperate with ingroup recipients because the donors expect repayment from other ingroup players .donors do not use the information about others reputations in these experiments .in contrast , players use personal reputations of ingroup members in the present model .nevertheless , the previous experiments and the current model do not contradict each other . in another laboratory experiment ,mifune et al . showed that presentation of eye - like painting promotes donor s cooperation toward ingroup recipients in the dictator game .for expository purposes , i define serious subnorm to be either standing , judging , or shunning .if the eye - like painting approximates an ingroup observer obeying a serious subnorm , this experimental result is consistent with the present theory because ingroup cooperation is theoretically stable when the ingroup observer adopts a serious subnorm . because the painting does not increase the cooperation toward outgroup recipients , it may not turn to a serious subnorm for some psychological reason .humans may use double standards , i.e. , , which favor ingroup favoritism in my model .other behavioral experiments have addressed the relationship between third - party altruistic punishments and ingroup favoritism . 
in precise terms , third - party punishments and reputation - based indirect reciprocity are distinct mechanisms for cooperation . nevertheless , below i discuss possible linkages between these experiments and my model . in indigenous communities in papua new guinea , the amount of punishment is larger if the punisher belongs to the donor s group than to a different group ( compare the abc and ab cases in their fig . 1 ) . their results suggest that the ingroup observer may use a serious subnorm and the outgroup observer may not . furthermore , given that the punisher is in the donor s group , the amount of punishment is larger if the donor and recipient belong to the same group ( fig . [ fig : ingroup outgroup observers]a , if the punisher is identified with the ingroup observer ) than if they belong to different groups ( fig . [ fig : ingroup outgroup observers]b ; compare the abc and ac cases in fig . 1 of ) . in this situation , the ingroup observer may use a serious subnorm when the donor plays with ingroup recipients ( fig . [ fig : ingroup outgroup observers]a ) and use a nonserious subnorm when the donor plays with outgroup recipients ( fig . [ fig : ingroup outgroup observers]b ) . my model reproduces ingroup favoritism under these conditions . however , my model and others are not concerned with a main finding in , namely that the amount of punishment is larger when the punisher and recipient belong to the same group . for the reasons stated in sec . [ sub : update ] , i did not assume that observers make their judgments differently when they belong to the recipient s group and to a different group . to theoretically explain the main finding in , one should explicitly analyze the case of a finite number of groups . in different laboratory experiments , the amount of punishment is larger for an ingroup donor s defection than for an outgroup donor s defection . my results are consistent with their results in that , for ingroup favoritism , the donor s action must be seriously evaluated by the ingroup observer using and not seriously by the outgroup observer using . although ingroup favoritism seems to be a canonical behavior of humans , reduction of ingroup bias would induce intergroup cooperation and is socially preferable . full cooperation is pareto efficient , whereas ingroup favoritism is not . various psychological and sociological mechanisms for reducing the ingroup bias , such as guilt , `` auto - motive '' control , retraining , empathy , and decategorization , have been proposed . my results provide theory - based possibilities of reducing ingroup bias . first , if the social norm is fixed , conversion from ingroup favoritism to full cooperation is theoretically impossible because full cooperation and ingroup favoritism do not coexist under a given social norm . therefore , advising players to change their behavior toward outgroup recipients from alld to disc is not recommended unless the social norm is also altered . conversion from ingroup favoritism to full cooperation requires a change in the social norm such that players as observers seriously assess ingroup donors actions toward outgroup recipients ( with ) and outgroup outgroup interactions ( with ) . in particular , if is a serious subnorm , perfect ingroup favoritism with no intergroup cooperation disappears ( sec .
[sub : simpler norms ] ) .second , if the three subnorms are the same , the perfect and partial ingroup favoritism is eradicated .the coincidence of only two subnorms is insufficient to induce full cooperation ( sec .[ sub : simpler norms ] ) .the subnorms that exclude the ingroup bias and realize full cooperation are standing or judging .therefore , without speaking of serious subnorms , forcing players to use the same subnorms consistently in assessing donors in different situations may be also effective in inducing full cooperation .ingroup favoritism has been mostly an experimental question except for some recent theoretical studies .this study is a first step toward understanding and even manipulating the dichotomy between full cooperation and ingroup favoritism in the context of indirect reciprocity .[ sec : appendix variant ] in this section , i analyze a variant of the model in which outgroup observers update the group reputation of donors involved in ingroup interaction ( i.e. , ) .i assume that the outgroup observer uses the donor s action , the recipient s personal reputation , and , to update s ( not the donor s personal ) reputation . the equivalent of eq . under this reputation update ruleis given by + r^{\rm out } \left [ p_{\rm g}^ * \phi_{\rm g}^{\rm out}(\sigma^{\rm out } ) + ( 1-p_{\rm g}^ * ) \phi_{\rm b}^{\rm out}(\sigma^{\rm out } ) \right ] .\label{eq : p_g equilibrium case b}\ ] ] i obtain and by solving the set of linear equations and .equations , and are unchanged . as compared to the case of the original reputation update rule ( original case for short ) , eq .is replaced by \notag\\ & + r^{\rm out}\left [ p_{\rm g}^*\phi_{\rm g}^{\rm out}(\sigma^{\rm out\prime } ) + ( 1-p_{\rm g}^*)\phi_{\rm b}^{\rm out}(\sigma^{\rm out\prime } ) \right].\label{eq : pg ' * group mutation case b}\end{aligned}\ ] ] the equivalent of eq .is obtained by substituting eq . in eq . .because of the symmetry with respect to g and b , i exclude action rules having antidisc from the exhaustive search , as i did in the original case ( sec . [ sub : against single mutants ] ) . it should be noted that one can not eliminate action norm pairs with antidisc on the basis of symmetry consideration , which is different from the original case .this is because a player s personal and group reputations are interrelated through the behavior of the outgroup observer when . under the modified reputation update rule , there are 725 action norm pairs that are stable against invasion by single mutants and yield . under scenario 1 ,507 out of the 725 pairs are stable against group mutation , and 324 out of the 507 pairs yield perfect ingroup cooperation . the 324 action norm pairs are classified as follows .first , 68 pairs yield full cooperation with either ( disc , disc ) or ( disc , antidisc ) .second , 14 pairs yield partial ingroup favoritism with ( disc , antidisc ) .third , 236 pairs yield perfect ingroup favoritism with ( disc , alld ) .fourth , 6 pairs yield perfect ingroup favoritism with ( disc , antidisc ) . as in the original case , disc , and either standing , judging , or shunning for these pairs .in contrast to the original case , ( disc , antidisc ) can be stable , yield perfect ingroup cooperation , and even yield outgroup cooperation , under some social norms .in such a situation , the values of the personal and group reputations ( i.e. , g and b ) have opposite meanings . 
in other words , a g but not b personal reputation elicits intragroup cooperation , while a b but not g group reputation elicits intergroup cooperation . therefore , action rule ( disc , antidisc ) in this situation can be regarded as a relative of ( disc , disc ) in the situation in which the values of the personal and group reputations have the same meaning . on this basis , i consider that the present results are similar to those obtained for the original case ( table [ tab : stable pairs 1 ] ) . in particular , only full cooperation is stable under standing or judging if , , and are assumed to be the same . under scenario 2 , 144 out of 725 pairs are stable against group mutation , and all of them yield perfect ingroup cooperation . the 140 pairs that survive in the original case ( sec . [ sub : scenario 2 ] ) also survive under the modified reputation update rule . the action rule in the additional four ( ) pairs is ( disc , antidisc ) . another difference from the original case is that the action norm pairs that yield partial ingroup favoritism in table [ tab : stable pairs 2 ] realize full cooperation in the present case . otherwise , the results are the same as those in the original case . in summary , 16 pairs realize full cooperation , and 128 pairs realize perfect ingroup favoritism . as is the case for scenario 1 , only full cooperation is stable with standing or judging if the three subnorms are assumed to be the same . [ sec : outgroup favoritism ] under scenario 1 in the original case , 270 out of 440 stable action norm pairs with a positive payoff realize perfect intragroup cooperation ( sec . [ sub : scenario 1 ] ) . the other 170 stable action norm pairs yielding are summarized in table [ tab : stable pairs 1 imperfect ] . for all the stable action norm pairs shown , disc . table [ tab : stable pairs 1 imperfect ] indicates that outgroup favoritism does not occur . there are 18 rows in table [ tab : stable pairs 1 imperfect ] . for the two action norm pairs shown in the first row , the stability condition is given by and . for the two action norm pairs shown in the sixth row , the stability condition is given by and . for the four action norm pairs shown in the sixteenth row , the stability condition is given by . for all the other action norm pairs , the stability condition is given by . i thank mitsuhiro nakamura and hisashi ohtsuki for valuable discussions and acknowledge the support provided through grants - in - aid for scientific research ( nos . 20760258 and 23681033 , and innovative areas `` systems molecular ethology '' ( no . 20115009 ) ) from mext , japan . jones , e. e. , wood , g. c. , quattrone , g. a. , 1981 . perceived variability of personal characteristics in in - groups and out - groups : the role of knowledge and evaluation . psychol . bull . 7 , 523 - 528 . rand , d. g. , pfeiffer , t. , dreber , a. , sheketoff , r. w. , wernerfelt , n. c. , benkler , y. , 2009 . dynamic remodeling of in - group bias during the 2008 presidential election . usa 106 , 6187 - 6191 . stable action norm pairs with perfect ingroup cooperation under scenario 1 . the probability of cooperation with outgroup recipients , , and are the values in the limit . gbgg ( standing ) , gbbg ( judging ) , or gbbb ( shunning ) . norm pairs only different in were distinguished when counting the number of stable action norm pairs . an asterisk indicates that both g and b apply .
|
indirect reciprocity in which players cooperate with unacquainted other players having good reputations is a mechanism for cooperation in relatively large populations subjected to social dilemma situations . when the population has group structure , as is often found in social networks , players in experiments are considered to show behavior that deviates from existing theoretical models of indirect reciprocity . first , players often show ingroup favoritism ( i.e. , cooperation only within the group ) rather than full cooperation ( i.e. , cooperation within and across groups ) , even though the latter is pareto efficient . second , in general , humans approximate outgroup members personal characteristics , presumably including the reputation used for indirect reciprocity , by a single value attached to the group . humans use such a stereotypic approximation , a phenomenon known as outgroup homogeneity in social psychology . i propose a model of indirect reciprocity in populations with group structure to examine the possibility of ingroup favoritism and full cooperation . in accordance with outgroup homogeneity , i assume that players approximate outgroup members personal reputations by a single reputation value attached to the group . i show that ingroup favoritism and full cooperation are stable under different social norms ( i.e. , rules for assigning reputations ) such that they do not coexist in a single model . if players are forced to consistently use the same social norm for assessing different types of interactions ( i.e. , ingroup versus outgroup interactions ) , only full cooperation survives . the discovered mechanism is distinct from any form of group selection . the results also suggest potential methods for reducing ingroup bias to shift the equilibrium from ingroup favoritism to full cooperation .
|
the korteweg de vries equation appears as a model for the propagation of weakly nonlinear dispersive waves in several fields . among them there are gravity driven waves on a surface of an incompressible irrotational inviscid fluid , ion acoustic waves in plasma , impulse propagation in electric circuits , and so on . in the shallow water wave problem the kdv equation corresponds to the case when the bottom is even . there have been many attempts to study nonlinear waves in the case of an uneven bottom because of its significance , for instance in such phenomena as tsunamis . among the first papers dealing with a slowly varying bottom are papers of mei and le mehaute and grimshaw . when taking an appropriate average of vertical variables one arrives at green - naghdi type equations . van groesen and pudjaprasetya studied uni - directional waves over a slowly varying bottom within the hamiltonian approach , obtaining a forced kdv - type equation . an extensive study of wave propagation over an uneven bottom conducted before 2000 is summarized in dingemans s monograph . the papers are examples of approaches that combine linear and nonlinear theories . the gardner equation and the forced kdv equation were also extensively investigated in this context , see , e.g. , . in previous papers , we derived a new kdv - type equation containing terms which come directly from an uneven bottom . these terms , however , appear naturally only if the euler equations for the fluid motion are considered up to second order in small parameters , whereas the kdv equation is obtained in a first order approximation . there are no analytic solutions for the above equation . in , we presented several cases of numerical simulations for that equation , obtained using the finite difference method ( fdm ) with periodic boundary conditions . it was demonstrated in that the finite element method ( fem ) properly describes the dynamics of the kdv equation ( [ kdvm ] ) , which is the equation in a moving frame of reference . the first aim of this paper is to construct an effective fem scheme for solving higher order kdv equations , both with an even bottom and an uneven bottom . the second goal is to compare the results obtained in this numerical scheme with some of the results obtained earlier using the finite difference method in and in . the paper is organized as follows : in section [ prel ] we review the kdv equation ( [ kdv1 ] ) , the extended kdv equation ( [ etaab ] ) and the kdv - type equation containing direct terms from bottom variation ( [ etaabd ] ) , all expressed in scaled dimensionless variables . in section [ numm ] the construction of the numerical method for solving these equations within the fem is described . coupled sets of nonlinear equations for the coefficients of the expansion of solutions to these equations in a basis of piecewise linear functions are obtained . in section [ nsym ] several examples of numerical simulations are presented . extended kdv type equations , derived by some of the authors in , second order in small parameters , have the following form ( written in scaled dimensionless coordinates , in a fixed coordinate system ) . for the case with an uneven bottom , details of the derivation of the second order equation ( [ etaabd ] ) from the set of euler equations with appropriate boundary conditions can be found in .
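since the displayed equations were stripped in extraction , the following hedged reconstruction is quoted from the cited literature ( marchant and smyth s extended kdv and the authors earlier papers ) rather than recovered from this text . in scaled variables , with alpha and beta the usual amplitude and dispersion parameters , the first order kdv equation ( [ kdv1 ] ) reads

\eta_t + \eta_x + \frac{3}{2}\,\alpha\,\eta\eta_x + \frac{1}{6}\,\beta\,\eta_{3x} = 0 ,

and the second order extended kdv equation ( [ etaab ] ) reads

\eta_t + \eta_x + \frac{3}{2}\,\alpha\,\eta\eta_x + \frac{1}{6}\,\beta\,\eta_{3x} - \frac{3}{8}\,\alpha^{2}\eta^{2}\eta_x + \alpha\beta\left( \frac{23}{24}\,\eta_x\eta_{2x} + \frac{5}{12}\,\eta\eta_{3x} \right) + \frac{19}{360}\,\beta^{2}\eta_{5x} = 0 ,

with the bottom terms of ( [ etaabd ] ) entering at the same second order ; the coefficients should be checked against the original sources .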
in ( [ etaabd ] ) , stands for a wave profile and denotes a bottom profile . subscripts are used for notation of partial derivatives , that is , for instance , and so on . small parameters are defined by ratios of the amplitude of the wave profile , the depth of undisturbed water , the average wavelength and the amplitude of the bottom changes . for details of the transformation of the original dimensional variables to the nondimensional , scaled ones used here , see , e.g. , . it should be emphasized that in equation ( [ etaabd ] ) all three terms originating from an uneven bottom are second order in small parameters . these terms appear from the boundary condition at the bottom , which is already in second order with coefficient , see equation ( 5 ) in or equation ( 10 ) in . then in the final second order equation ( [ etaabd ] ) we write them in the form in order to emphasize that they all come from the second order perturbation approach . for details we refer to the mentioned papers . in the case of an even bottom ( ) equation ( [ etaabd ] ) reduces to the second order kdv type equation , and when it becomes identical to eq . ( 21 ) in . equation ( [ etaab ] ) was obtained earlier by marchant and smyth and called the _ extended kdv _ equation . limitation to a first order approximation in small parameters gives the kdv equation in a fixed system of coordinates . the standard , mathematical form of the kdv equation is obtained from ( [ kdv1 ] ) by transformation to a moving reference frame . substituting , one obtains from ( [ etaab ] ) the equation , or finally , when , . in this paper we attempt to solve numerically the equation ( [ etaabd ] ) for several cases of bottom topography and different initial conditions . in several points we follow the method applied by debussche and printems . however , the method is extended to higher order kdv type equations with a plain bottom ( [ etaab ] ) and with bottom fluctuations ( [ etaabd ] ) . for both cases we work in a fixed reference system , necessary for a bottom profile depending on the position . the emergence of soliton solutions to the kdv equation was observed in numerics fifty years ago by . several numerical methods used for solving the kdv equation are discussed in . among them are the finite difference explicit method , the finite difference implicit method and several versions of the pseudospectral method , as in . it is also worth mentioning papers using the fem and galerkin methods . most numerical applications use periodic boundary conditions , but there also exist works that apply dirichlet boundary conditions on a finite interval . the authors are trying to construct a method which will be applicable not only for the numerical simulation of an evolution of nonlinear waves governed by equations ( [ etaabd ] ) or ( [ etaab ] ) but also for their stochastic versions . such stochastic equations will be studied in the next paper . since stochastic noise is irregular , solutions are not necessarily smooth , neither in time nor in space .
a finite element method ( fem ) seems to be suitable for such a case . we have adapted the crank - nicolson scheme for time evolution , beginning with the kdv equation ( [ kdv1 ] ) in a fixed coordinate system . note that . denote also and . let us choose time step . then the kdv equation ( [ kdv1 ] ) in the crank - nicolson scheme can be written as a set of coupled first order differential equations , where . for the second order equations ( [ etaabd ] ) or ( [ etaab ] ) we need to introduce two new auxiliary variables : and . note that , . moreover , and . this setting allows us to write the crank - nicolson scheme for ( [ etaab ] ) as the following set of first order equations \begin{aligned} & = 0 , \\ \frac{\partial}{\partial x}\eta^{n+\frac{1}{2}} - v^{n+\frac{1}{2}} & = 0 , \\ \frac{\partial}{\partial x}\, v^{n+\frac{1}{2}} - w^{n+\frac{1}{2}} & = 0 , \\ \frac{\partial}{\partial x}\, w^{n+\frac{1}{2}} - p^{n+\frac{1}{2}} & = 0 , \\ \frac{\partial}{\partial x}\, p^{n+\frac{1}{2}} - q^{n+\frac{1}{2}} & = 0 , \end{aligned} where . for the second order kdv type equation with an uneven bottom ( [ etaabd ] ) , the first equation in the set ( [ crab ] ) has to be supplemented by terms originating from bottom variations , yielding \begin{aligned} & = 0 , \end{aligned} where . below we focus on the second order equations ( [ etaab ] ) and ( [ crab ] ) , pointing out contributions from bottom variation later . following the arguments given by debussche and printems we apply the petrov - galerkin discretization and the finite element method . we use piecewise linear shape functions and piecewise constant test functions . we consider wave motion on the interval ] . note that , since we use scaled variables and definition ( [ smallp ] ) , the amplitude of the soliton is equal to 1 . in figs . [ gauss]-[well ] we use the same initial conditions . the soliton motion shown in fig . 1 is in agreement with the numerical results obtained with the finite difference method in . with parameters , the resulting distortion of the kdv soliton due to second order terms in ( [ etaab ] ) , ( [ matr3 ] ) takes the form of a small amplitude wavetrain created behind the main wave . we may question whether the fem numerical approach to the extended kdv ( [ matr4 ] ) is precise enough to reveal the details of soliton distortion caused by a varying bottom . the examples plotted in figs . [ gauss]-[well ] show that it is indeed the case . in all the presented calculations the amplitude of the bottom variations is . the bottom profile is plotted as a black line below zero on a different scale than the wave profile . in fig . [ gauss ] the motion of the kdv soliton over a wide bottom hump of gaussian shape is presented . here , the bottom function is . in the scaled variables the undisturbed surface of the water ( dashed lines ) is at . the soliton profiles shown in fig . [ gauss ] are almost the same as the profiles obtained with the finite difference method ( fdm ) used in . there are small differences due to the lower precision of our fem calculations . the fem allows for the use of larger time steps than the fdm . however , in the fem the computing time grows rapidly with the increase in the number of mesh points , since calculation of the inverse of the jacobian matrices becomes time consuming .
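the following is a minimal runnable python sketch of the crank - nicolson stepping described above , applied to the first order kdv equation on a periodic grid with centred finite differences in place of the paper s petrov - galerkin elements ; the equation form , the sech - squared soliton and its velocity are the hedged reconstructions given earlier , so this is a sanity - check baseline rather than the authors scheme :

import numpy as np
from scipy.linalg import lu_factor, lu_solve

alpha, beta = 0.1, 0.1            # illustrative small parameters
N, L = 512, 80.0
dx = L / N
x = np.arange(N) * dx
dt, nsteps = 0.01, 1000

I = np.eye(N)
Sp, Sm = np.roll(I, -1, axis=0), np.roll(I, 1, axis=0)   # periodic shifts
Sp2, Sm2 = np.roll(I, -2, axis=0), np.roll(I, 2, axis=0)
D1 = (Sp - Sm) / (2 * dx)                                # d/dx
D3 = (Sp2 - 2 * Sp + 2 * Sm - Sm2) / (2 * dx ** 3)       # d^3/dx^3
Lop = D1 + (beta / 6) * D3                               # linear part
A = I + 0.5 * dt * Lop                                   # crank-nicolson pair
B = I - 0.5 * dt * Lop
lu = lu_factor(A)                                        # factorize once

def nonlin(u):
    return 1.5 * alpha * u * (D1 @ u)                    # (3/2) alpha eta eta_x

def soliton(xx, x0):
    k = np.sqrt(3 * alpha / (4 * beta))                  # amplitude-1 soliton
    return 1.0 / np.cosh(k * (xx - x0)) ** 2

u = soliton(x, L / 4)
for _ in range(nsteps):
    unew = u.copy()
    for _ in range(3):                                   # picard on the midpoint
        umid = 0.5 * (u + unew)
        unew = lu_solve(lu, B @ u - dt * nonlin(umid))
    u = unew

v = 1 + alpha / 2                                        # fixed-frame soliton speed
exact = soliton(x, L / 4 + v * dt * nsteps)
rms = np.sqrt(np.mean((u - exact) ** 2))
print(f"rms deviation after t={dt * nsteps:.0f}: {rms:.2e}")
# scipy.special.ellipj would supply the cn^2 profiles needed to mimic the
# cnoidal initial conditions discussed below.

the printed rms value implements the test of eq . ( [ var ] ) discussed below .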
fig . [ 2gauss ] displays the motion of the kdv soliton above a double humped gaussian shaped bottom defined by . here both gaussians are rather narrow and therefore distortions of the wave shape from the ideal soliton are smaller than those in fig . 2 . in fig . [ well ] we see the influence of a bottom well with horizontal size exceeding the soliton s wavelength . the bottom function is chosen as ] . the parameter governs the shape of the wave . for the cnoidal solution converges to a cosine function . for the cnoidal wave forms peaked crests and flat troughs , such that for the distance between crests increases to infinity and the cnoidal wave converges to a soliton solution . fig . [ cn1 ] shows the time evolution of the cnoidal wave according to the extended kdv equation ( [ etaab ] ) , that is , the second order kdv equation with a flat bottom . the parameters of the simulation are : , . with this value of , the wavelength of the cnoidal wave is equal to dimensionless units , and calculations were performed on the interval of that length , ] . the evolution was calculated according to equation ( [ etaabd ] ) and numerical scheme ( [ matr4 ] ) . profiles of the wave are plotted at time instants , where . fig . [ cn2 ] shows that during the wave motion over the obstacle a kind of slower wave with smaller amplitude is created following the main peak . in fig . [ cn3 ] we present the initially cnoidal wave moving over an extended , almost flat hump . in this simulation . the initial condition is given by with . because is smaller than in the previous cases , the wavelength of the cnoidal wave is also smaller , . calculations were made on the interval ] . the precision of the fem method in the fixed frame can be tested by calculation of a root mean square ( rms ) of deviations of the wave profile obtained numerically from that obtained from the analytic solution . denote by and the values of the solutions at a given mesh point and time instant , analytic and numerical , respectively . then the rms is expressed as . we checked our implementation of the fem on the interval $ ] using several different sizes of the mesh and several time values . fig . [ ff8 ] displays the rms ( [ var ] ) values for . it shows that deviations from the analytic solution decrease substantially with decreasing . small assures a very high precision in numerical simulations , however , at the expense of large computation time . other tests ( not shown here ) , in which was fixed and the rms was calculated as a function of time , showed that for the rms increases with time linearly and very slowly . when the bottom is not flat , simulations _ have to be done in the fixed reference frame _ . for our purposes we needed to choose intervals of the order of 70 or 80 . even for , the size of the jacobian matrices ( [ j2 ] ) reaches ( 4000 ) and their inversion is time consuming . in a compromise between numerical precision and reasonable computing times we made our simulations with . this choice resulted in about one week of computing time for a single run on the cluster . in spite of the insufficient precision , the results presented in figs . 1 - 7 reproduce details of the evolution known from our previous studies , obtained with the finite difference method . these details , resulting from second order terms in the extended kdv ( [ etaab ] ) , are seen in fig . [ plaskie ] as a wavetrain of small amplitude created behind the main one ( compare with fig . 2 in ) . a similar wavetrain behind the main one was observed in numerical simulations by , see e.g. fig .
2 therein . for waves moving in the presence of a bottom obstacle , these secondary waves behind the main one are amplified by interaction with the bottom and new faster secondary waves appear ( see , e.g. , figs . 2 - 4 ) . these effects were already observed by us , see figs . 6 and 7 in . * a weak formulation of the finite element method ( fem ) for the extended kdv equation ( [ etaab ] ) can be effectively used for numerical calculations of the time evolution of both soliton and cnoidal waves when calculations are done in a moving frame . * since numerical calculations for equation ( [ etaabd ] ) have to be performed in a fixed frame , the presented fem method is not as effective as the fdm method used by us in previous papers , because the computer time necessary for obtaining sufficiently high precision becomes impractical . on the other hand , the presented results ( though not as precise as the fdm ones ) exhibit all secondary structures generated by higher order terms of the equations . * first tests of numerical solutions to second order kdv type equations with a stochastic term seem to be very promising . miura , r. m. , gardner , c. s. and kruskal , m. d. ( 1968 ) . korteweg de vries equation and generalizations ii . existence of conservation laws and constants of motion , journal of mathematical physics 9(8 ) : 1204 - 1209 . pelinovsky , e. , choi , b. , talipova , t. , woo , s. and kim , d. ( 2010 ) . solitary wave transformation on the underwater step : theory and numerical experiments , applied mathematics and computation 217(4 ) : 1704 - 1718 . taha , t. r. and ablowitz , m. j. ( 1984 ) . analytical and numerical aspects of certain nonlinear evolution equations iii . numerical , the korteweg de vries equation , journal of computational physics 55(2 ) : 231 - 253 . yi , n. , huang , y. and liu , h. ( 2013 ) . a direct discontinuous galerkin method for the generalized korteweg - de vries equation : energy conservation and boundary effect , journal of computational physics 242 : 351 - 366 .
|
the finite element method ( fem ) is applied to obtain numerical solutions to a recently derived nonlinear equation for the shallow water wave problem . a weak formulation and the petrov - galerkin method are used . it is shown that the fem gives a reasonable description of the wave dynamics of soliton waves governed by extended kdv equations . some new results for several cases of bottom shapes are presented . the numerical scheme presented here is suitable for taking into account stochastic effects , which will be discussed in a subsequent paper .
|
an increasing number of space missions of astrophysical interest are avoiding orbits around the earth , to improve their environmental conditions . in particular , orbits far from the earth around the lagrangian point of the earth - sun system ( ) are currently being selected especially by infrared and microwave missions , like by nasa ( bennett et al . 1996 ) and planck ( tauber 2000 ) , herschel ( pilbratt 2000 ) and gaia ( perryman 2001 ) by esa . while this solution is often essential for the successful scientific return of the missions , non - trivial practical problems need to be solved ; among these , the visibility of the spacecraft from the ground station . if the spacecraft is not visible all the time , it needs to have some built - in autonomy to perform its functions independently from ground control . in particular , both house - keeping and scientific data need to be stored on - board , to be subsequently down - linked to the ground station during the next period of visibility . the data have to be `` safely '' transmitted to earth , with minimal loss of data during the communications period , due to the high cost of each telemetry ( tm ) packet for space missions and to the wealth of scientific information encoded in each packet . however , some information will eventually be lost , since the cost of guaranteeing completely faultless communications and ground systems would be unbearable , if ever feasible . therefore , great attention has to be devoted to assessing the total amount of tm that it is possible to lose without affecting the scientific return of the considered space mission . in this paper we want to address this issue and we adopt a monte - carlo ( mc ) approach to take more easily and faithfully into account the properties of the observing strategy of the experiment under consideration . as a working case we consider the impact of tm losses for the planck low frequency instrument ( lfi , see mandolesi et al . 1998 ) , designed to map the whole sky in temperature and polarization at frequencies between 30 and 100 ghz and observe the cmb anisotropy with an angular ( fwhm ) resolution from to and a sensitivity per ( fwhm ) resolution element from to 13 in the measure of the antenna temperature fluctuations ( worse in the measure of fluctuations of the stokes polarization parameters and ) . it is , however , worth noting that the formalism and approach developed here are quite general and applicable in practice to any kind of experiment with a redundant observing strategy ( like ) , _ i.e. _ where the same sky region is observed on several different time scales . for the specific working case adopted , a total of 100,000 simulations representing real cases of lost tm have been considered and analyzed in terms of the probability of not observing sky regions and of the dimension of unobserved regions . our approach works once details on the observing strategy of the mission under consideration are available and properly coded .
in our working case the orbit selected for the planck satellite is a tight lissajous orbit around the lagrangian point of the sun - earth system . the spacecraft spins at 1 rpm and , in the simplest scanning strategy , the spin axis is kept in the anti - solar direction at constant solar aspect angle by a re - pointing of 2.5 every hour . the two instruments ( lfi and the high frequency instrument , see puget et al . 1998 ) on the focal plane of an aplanatic telescope of 1.5 meter aperture have a field of view at from the spin - axis direction . they therefore trace large circles in the sky and the 1-hour averaged circle is the basic planck scan circle . in the nominal 14 months mission , 10,200 basic circles will be considered , covering twice nearly the whole sky , 5,100 circles for each sky coverage . data continuously acquired are packed into tm packets and sent to a single ground station ( located in perth , australia ) during the connection period ( 23 hours a day ) . in case of failure of communications with the ground station , data can be stored on - board for a maximum amount of 48 hours of data . after this period data are progressively deleted and lost . as for other missions , at least 95% of the total tm is guaranteed to be finally available for further analysis . higher percentages of received tm , for example up to 98% , may require another operating ground antenna , and/or have other large additional costs . we observe that losing 5% ( 2% ) of the 5,100 scan circles of a single sky coverage means losing 10 ( 4 ) 24-hour tm blocks , corresponding to a set of unobserved `` stripes '' with a global width of 10 ( 4 ) at low ecliptic latitudes . it is therefore of paramount importance to evaluate the impact of the lost tm on the effective sky coverage . in the specific case of planck - lfi , the antenna beams corresponding to the various feed horns arranged in the focal plane are located on a ring subtending an angular radius of about on the telescope field of view about the telescope optical axis . the focal plane arrangement of the feed horns at different frequencies shows potentially dangerous situations for the 30 ghz channels ( only 2 , placed along the same scan direction ) and the 70 ghz channels ( placed along a small arc with an extension of only degree in the direction orthogonal to the scan direction ) . in these cases there is no possibility to compensate for the loss of a given sky area with retained tm observed by other detectors at the same frequency . the situation is different , for example , for the 100 ghz channels , which span an angle of about 3 - 4 degrees : there , sky is effectively lost only if it is not possible to communicate with the satellite for more than 3 days , or if the data stored on - board are not downlinked in time . the first is a very unlikely case , while to cope with the second , an appropriate downlinking strategy shall be devised . it is worth mentioning that a similar situation also holds for , _ e.g. _ , and gaia . however , to properly address the issue of losing tm packets for these missions , details on their observing strategy as well as on - board data storage capabilities are required .
when coding the properties of the planck observing strategy , we made some simplifying assumptions for the only purpose of computing the percentage of lost tm : * each sky coverage ( 7 months long ) is composed of an integer number of scan circles ; * the number of scan circles is the same for each sky coverage ; * scan circles from subsequent sky coverages overlap exactly ; * tm is lost in chunks 48 or 24 hours long ( the first is the total amount of data that can be stored on - board and implies having lost 2 or 1 complete days of data ) . furthermore , the pointing stability of the spacecraft assures that the real situation will not be much different from the case considered here . given the mission duration and the percentage of lost tm , we randomly extract from the full tm stream the lost scan circles . we then overlap the different sky coverages to form a single tm stream that refers to the whole sky . in this stream we consider the total number of scan circles lost , their mean and their maximum dimension . we ran over 100,000 mc simulations to derive the probability distribution function of the lost tm . we assume two different mission durations implying 2 and 3 complete sky coverages . each single sky coverage is composed of a total of 5,100 individual 1 hour scan circles ( corresponding to 7 months ) . two total amounts of lost tm are considered : 5 and 2% . in table i we report results from our simulations , respectively for 48 and 24 hour blocks of lost tm : is the percentage of simulations with no loss of scan circles at all after coadding a given number of sky coverages ( sc ) , while is the percentage of simulations for which at least one scan circle is lost at the end of the coadding procedure . the other two columns report the mean and maximum number of scan circles lost in our 100,000 simulations . table i : results from 100,000 simulations with realistic loss of tm . inspection of table ii shows that the area of lost sky depends on the pixel size of the map only weakly : when the pixel size decreases by half , the area of the lost sky increases only by a tiny fraction . the improvement represented by the case of 5% lost tm with 3 sky coverages with respect to the case with 2% lost tm but only 2 sky coverages is quite clear . we have derived through mc simulations the probability of losing tm packets for a space - borne instrument performing a full - sky survey . of course , the best results for a sky coverage in terms of completeness are obtained when the largest fraction of tm is retained and the number of repeated full sky observations is increased . however , our analysis clearly shows that it is better to cover the sky more times with a lower fraction of tm retained than fewer times with a higher tm fraction . in this respect we note that the 5 year full sky mission gaia assumes 5 repetitions of essentially the same scanning strategy , year by year .
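a hedged monte - carlo sketch of the procedure just described ( the placement rule for lost blocks , whole days drawn without replacement , is an assumption consistent with the simplifying hypotheses listed above ; the leftover circles beyond an integer number of days are never lost in this toy ) :

import numpy as np

rng = np.random.default_rng(0)

def one_run(n_cov, frac, block=24, n_circ=5100):
    n_days = n_circ // block                    # 1-hour circles grouped in blocks
    n_lost = int(frac * n_circ / block)         # e.g. 10 (4) day blocks for 5% (2%)
    lost = np.ones(n_circ, dtype=bool)
    for _ in range(n_cov):                      # coverages overlap exactly
        mask = np.zeros(n_circ, dtype=bool)
        for d in rng.choice(n_days, n_lost, replace=False):
            mask[d * block:(d + 1) * block] = True
        lost &= mask                            # a circle survives if observed once
    return lost

def stats(n_cov, frac, trials=2000):
    counts = np.array([one_run(n_cov, frac).sum() for _ in range(trials)])
    return (counts == 0).mean(), counts.mean(), counts.max()

for n_cov in (2, 3):
    for frac in (0.05, 0.02):
        p0, mean, mx = stats(n_cov, frac)
        print(f"{n_cov} sc, {frac:.0%} lost: p(no loss)={p0:.3f}, "
              f"mean={mean:.2f}, max={mx}")

statistics of the contiguous runs of lost circles ( the stripe dimensions discussed above ) can be extracted from the same boolean array .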
even in the most conservative case in which a given sky region is observed only once per year , and considering that : _ i ) _ the total capacity of the on - board data recorder is about 1 day , _ ii ) _ one day is also the time scale for the re - pointing of the symmetry axis of the gaia scanning strategy , and _ iii ) _ the field of view is about ( not far from the planck beam size at lower frequencies ) , the probability of losing a given sky region with 95% of guaranteed tm is less than % . this is another remarkable example in which the increase in the number of full sky surveys of the mission allows one to significantly reduce the probability of losing scientific information . in a space mission , trading off the percentage of guaranteed tm delivered to the ground versus the number of full sky surveys has an impact in terms of costs : the setup needed to guarantee a higher tm fraction may imply more ground stations to follow the satellite and more ground personnel . cost - wise , it could be preferable to make an extension of mission time that implies , _ e.g. _ in the case of planck , only another seven months of operations . in any case , a careful costs - to - benefits analysis needs to be carried out .
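a hedged back - of - the - envelope version of the gaia estimate above ( assuming , unlike the mc machinery , that losses in different annual surveys are statistically independent ) : with a lost - tm fraction f and k annual surveys , the probability that a fixed region is lost in all of them is roughly

p \simeq f^{k} = ( 0.05 )^{5} \approx 3\times 10^{-7} ,

i.e. many orders of magnitude below a percent , in line with the conclusion drawn in the text .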
|
in this paper we discuss the issue of losing telemetry ( tm ) data due to different reasons ( _ e.g. _ spacecraft - ground transmissions ) while performing a full - sky survey with space - borne instrumentation . this is a particularly important issue considering the current and future space missions ( like planck from esa and from nasa ) operating from an orbit far from earth with short periods of visibility from ground stations . we consider , as a working case , the low frequency instrument ( lfi ) on - board the planck satellite , although the approach developed here can be easily applied to any kind of experiment that makes use of an observing ( scanning ) strategy which assumes repeated pointings of the same region of the sky on different time scales . the issue is addressed by means of a monte carlo approach . our analysis clearly shows that , under quite general conditions , it is better to cover the sky more times with a lower fraction of tm retained than fewer times with a higher guaranteed tm fraction . in the case of planck , an extension of mission time to allow a third sky coverage with 95% of the total tm guaranteed provides a significant reduction of the probability of losing scientific information with respect to an increase of the total guaranteed tm to 98% with the two nominal sky coverages . astronomical and space - research instrumentation , astronomical observations : radio , microwave , and submillimeter 95.55.n , 95.85.bh , 96.30.ys
|
interface dynamics in a hele - shaw cell where the motion of the viscous fluids is confined to the narrow gap between two closely spaced parallel glass plates is a problem of considerable interest both from a theoretical standpoint , as a moving free boundary problem , and from a practical perspective , in view of its connection to other important physical systems , such as flows in porous media , dendritic crystal growth and directional solidification . the hele - shaw system is particularly interesting when one fluid is much less viscous than the other and surface tension effects are neglected , for in this case the problem becomes quite tractable mathematically and many steady and time - dependent solutions have been found since the pioneering work by . recently , deep mathematical connections have been discovered between hele - shaw flows and other problems in mathematical physics , such as integrable systems , random matrix theory and quantum gravity . there is also a close relation between interface dynamics in a hele - shaw cell and an important growth model known as loewner evolution . motivated by these findings , interest in hele - shaw flows ( or laplacian growth , as it is also known ) has grown well beyond its original hydrodynamics context and there is now an extensive literature on the subject and its mathematical ramifications ; for an overview , see e.g. the recent monograph by . yet , despite all these developments , the hele - shaw system continues to surprise and reveal more mathematical structures underlying its dynamics . recent investigations by , motivated in part by the problem of interface dynamics in a hele - shaw cell , led to the discovery of a new class of special functions ( called the secondary schottky - klein prime functions ) associated with planar multiply connected domains . these functions are particularly useful to tackle potential - theory problems involving multiply connected domains with _ mixed boundary conditions _ . one such problem , and the main theme of the present paper , is the motion of bubbles in a hele - shaw channel , in which case the velocity potential satisfies dirichlet boundary conditions on the bubble interfaces and neumann conditions on the channel walls . in this paper , the formalism of the secondary prime functions is used to construct exact solutions for the problem of multiple bubbles steadily translating in a hele - shaw channel , both for a finite assembly of bubbles and for a periodic stream of bubbles with an arbitrary number of bubbles per period cell . the problem of multiple fingers penetrating into the channel and moving together with an assembly of bubbles is also analysed as a particular case of the multi - bubble solutions ( when some of the bubbles become infinitely elongated ) . in all cases , the solutions are given in terms of a conformal mapping from a multiply connected circular domain in an auxiliary complex -plane to the flow region exterior to the bubbles in the complex -plane . this mapping is written as the sum of two analytic functions , corresponding to the complex potentials in the laboratory and co - moving frames , that map the circular domain onto slit domains . analytical formulae for these slit maps are obtained in terms of the secondary schottky - klein ( s - k ) prime functions , which then allows us to obtain an explicit solution for the desired mapping .
in the case of a finite assembly of bubbles , a generalised method of images is used at first to construct the relevant complex potentials , and then the resulting expressions ( containing an infinite product of terms ) are recast in terms of the secondary prime functions . this function - theoretic formulation is more advantageous in that not only does it have a firmer mathematical basis ( the theory of compact riemann surfaces and their associated prime functions ) but it can also handle more general cases , such as periodic solutions , that are not easily tackled by the heuristic method of images . solutions for multiple steady bubbles in a hele - shaw channel were first obtained by the author for the cases when the bubbles either are symmetrical about the channel centreline or have fore - and - aft symmetry . in such cases , the fluid region can be reduced by virtue of symmetry to a simply connected domain , whereby the desired mappings can be constructed via the schwarz - christoffel formula . solutions for an arbitrary number of steady bubbles in an _ unbounded _ cell were obtained by in terms of the ( primary ) schottky - klein prime function . also considered the case of a finite assembly of steady bubbles in a channel with no assumed symmetry , but it was subsequently found that the second family of prime functions used in his approach was not correctly defined ( dg crowdy 2011 , personal communication ) . exact solutions for this problem were later obtained by using an alternative method based on the generalised schwarz - christoffel mapping for multiply connected domains . their solution is expressed in the form of an indefinite integral whose integrand consists of products of ( primary ) s - k prime functions and which contains several accessory parameters that need to be determined numerically . the solutions reported here for multiple bubbles in a channel are based on an entirely different approach and have the advantage that they are given by an explicit analytical formula in terms of the secondary prime functions , with no accessory parameters whatsoever . furthermore , they can be used to generate new solutions for multiple fingers moving together with an assembly of bubbles , as will be seen later . in the case of a periodic array of bubbles , solutions were first obtained by for a single stream of symmetrical bubbles . this class of solutions was later extended to include the case of multiple bubbles per period cell under certain symmetry assumptions . the new family of periodic solutions reported here is much more general in that it describes an arbitrary stream of _ groups of bubbles _ , with no symmetry restriction on the geometrical arrangement of the bubbles within a period cell . here again the solutions are given in analytical form in terms of the secondary prime functions , making the computation of the bubble shapes a rather simple task , once the preimage domain in the -plane is specified .
in light of the existence of this large class of exact solutions for multiple bubbles , it can be argued that the variety of forms observed by , in his experiments on bubbles rising in an inclined hele - shaw cell , is in part a manifestation of this multitude of solutions . further studies would of course be required for a more direct comparison between theory and experiments , but it is worth noting that a good agreement was already obtained for the case of a small bubble at the nose of a larger bubble . it is to be noted , however , that not all exact solutions reported here are expected to have experimental counterparts , since they correspond to an idealised model where surface tension and three - dimensional thin - film effects are neglected . the analysis presented here for hele - shaw bubbles might find applications in other related problems , such as hollow vortices and streamer discharges in a strong electric field . a hollow vortex is a vortex whose fluid in the interior is at rest ( and hence at constant pressure ) , and so it can be viewed as a bubble with non - zero circulation . the formalism of the primary s - k prime functions has been used to find solutions for a pair of translating hollow vortices as well as for a von kármán street of hollow vortices . it is thus possible that the present method of analysis , involving multiple bubbles , may be adapted to study more general configurations of hollow vortices . in the case of steady streamers in strong electric fields , the governing equations are identical to those for hele - shaw flows , with a streamer corresponding to a bubble or finger , and so the solutions reported herein are likely to be relevant for the problem of multiple interacting streamers . the paper is organized as follows . in [ sec : pf ] , the problem of an assembly of a finite number of steady bubbles in a hele - shaw channel is formulated in terms of a conformal mapping from a circular domain in an auxiliary complex -plane to the fluid region in the physical -plane . the schottky groups associated with this circular domain and their corresponding schottky - klein prime functions are discussed in [ sec : sk ] . the formalism of the secondary prime functions is then used , in [ sec : gs ] , to obtain an analytical solution for the mapping . for ease of presentation , here the problem is first solved by the method of images and then the results are recast in terms of the prime functions . configurations with multiple fingers moving together with a group of bubbles are discussed in [ sec : mf ] , as a particular case of the general solution for an assembly of bubbles . in [ sec : ps ] , the case of a periodic array of steady bubbles in a hele - shaw channel is considered . here a fully fledged function - theoretic approach is used to construct an explicit solution for the corresponding mapping in terms of the secondary prime functions . we conclude the paper by briefly discussing , in [ sec : dis ] , the main features of our results as well as other possible applications of the analysis presented herein .
to avoid a proliferation of factors of $\pi$ in our expressions , it is assumed that the channel has width equal to $\pi$ . far from the bubbles , the fluid flow is supposed to be uniform with speed . it is also assumed that the pressure inside each bubble is constant and surface tension effects are neglected , so that the viscous fluid pressure on each bubble boundary is taken to be constant ( i.e. equal to the pressure inside the bubble ) . as is well known , the motion of a viscous fluid in a hele - shaw cell can be described by a complex potential $ w(z)=\phi+\mathrm{i}\psi$ , where $ z = x+\mathrm{i}y$ , and the velocity potential $\phi$ is given by darcy s law : $\phi=-\frac{b^2}{12\mu}\,p$ . here $ b$ is the cell gap , $\mu$ is the fluid viscosity , $ p$ is the pressure , and $\psi$ is the stream function conjugate to $\phi$ . as the far - field flow is uniform with unity velocity , it follows that $ w(z)\sim z$ as $ |x|\to\infty$ . now let denote the fluid region in the -plane exterior to the bubbles , and denote by , for , the boundary of the -th bubble ; see figure [ fig:1a ] . the complex potential must then be analytic in and satisfy the following boundary conditions : \mathrm{im}[w(z)]=0\qquad\mbox{on}\qquad y=0 , \label{eq : bc1a}\ ] \mathrm{im}[w(z)]=\pi \qquad\mbox{on}\qquad y= \pi , \label{eq : bc1b}\ ] \mathrm{re}[w(z)]=\textrm{constant}\qquad \mbox{for}\qquad z\in\partial d_{j}. \label{eq : bc2}\ ] conditions ( [ eq : bc1a ] ) and ( [ eq : bc1b ] ) simply state that the channel walls at and are streamlines of the flow , whereas ( [ eq : bc2 ] ) follows from the fact that the pressure is constant on each bubble boundary . from ( [ eq : bc1a])([eq : bc2 ] ) one then concludes that the flow domain , , in the -plane is a horizontal strip , of width $\pi$ , containing vertical slits in its interior , where each slit corresponds to a bubble in the -plane ; see figure [ fig:1b ] . it is convenient to introduce a second complex potential , $\tau(z)$ , defined by $\tau(z)=w(z)-u z , \label{eq : tau}$ which describes the flow in a frame of reference co - travelling with the bubbles . from ( [ eq : winfty ] ) and ( [ eq : tau ] ) it follows that $\tau(z)\sim(1-u)z$ as $ |x|\to\infty$ , which implies in turn that \mathrm{im}[\tau(z)]=0\qquad\mbox{on}\qquad y=0 , \label{eq : bc3a}\ ] and \mathrm{im}[\tau(z)]=(1-u)\pi\qquad\mbox{on}\qquad y= \pi . \label{eq : bc3b}\ ] as the bubble boundaries are streamlines of the flow in the co - moving frame , it also follows that \mathrm{im}[\tau(z)]=\textrm{constant}\qquad \mbox{for}\qquad z\in\partial d_{j}. \label{eq : bc4}\ ] from ( [ eq : bc3a])([eq : bc4 ] ) one readily sees that the flow domain , , in the -plane is a strip of width , with horizontal slits in its interior ; see figure [ fig:1c ] . we shall seek a solution for the free boundary problem defined in [ sec : cp ] in terms of a conformal mapping from a bounded circular domain in an auxiliary complex -plane to the fluid region . to be specific , let be the domain obtained from the unit disk by excising non - overlapping smaller disks . a schematic of is shown in figure [ fig:1d ] for the triply connected case . label the unit circle by and the inner circular boundaries by ; and let and denote respectively the centres and radii of the circles , . ( note that and . ) the mapping is chosen such that the unit circle maps to the channel walls , whilst the inner circles map to the bubble boundaries . this implies that will necessarily have two logarithmic singularities , denoted by and , on the unit circle , which map in the -plane to the two end points of the channel , , respectively . by the degrees of freedom afforded by the riemann - koebe mapping theorem , we can set and .
with this choice , the upper unit semicircle , , maps to the upper channel wall ( ) and the lower unit semicircle , , maps to the lower wall ( ) ; see figure [ fig:1 ] . it is expedient to augment the flow region by reflecting the original channel in its lower wall , thus generating an extended channel defined by , where a bar denotes complex conjugation . ( note that this extended channel contains bubbles , where each bubble in the lower half - channel is the mirror image of a corresponding bubble in the upper half - channel . ) accordingly , the extended flow domain , denoted by , in the auxiliary -plane is obtained by adding to its reflection in : where defines reflection in . in addition , a branch cut must be inserted along so that the lower and upper sides of this cut map to the upper and lower walls of the extended channel , , respectively . a schematic of is shown in figure [ fig : f0 ] for the case . let us denote by the reflection of the circle in , i.e. . the region defined in ( [ eq : f0 ] ) thus corresponds to the exterior of the circles and , for . for convenience of notation , define the following set of labels and let denote the set of circles bounding , that is , next , introduce the functions and through the following compositions : the mapping conformally maps onto a slit strip domain in the -plane defined by . similarly , the function maps onto the slit strip domain in the -plane . note , in particular , that both and must have logarithmic singularities at . these singularities act as a sink and a source , respectively , for the flows generated by these complex potentials , a fact that will be exploited in [ sec : gs ] to compute explicit formulae for and via the method of images . once these functions are known , the desired mapping function that describes the bubble shapes is then given by , \label{eq : z1}\ ] ] as follows from ( [ eq : tau ] ) . for any circular region as defined in ( [ eq : f0 ] ) , one can define a so - called schottky group generated by the möbius transformations that ` pair ' the circles and . associated with this schottky group and its subgroups there can be defined special transcendental functions , called primary and secondary schottky - klein prime functions . these functions naturally appear in the context of hele - shaw flows with multiple bubbles , as will be seen in [ sec : gs][sec : ps ] , and so it was thought desirable to present here a brief introduction to the s - k prime functions . consider the circular domain defined in ( [ eq : f0 ] ) . for , denote by the reflection map in the circle , which is defined by now , introduce the following möbius maps note that consists of the composition of a reflection in followed by a reflection in , i.e. . alternatively , reflection in can be expressed in terms of as , or more explicitly : one important property of the maps is the following relation : where denotes the inverse of . this relation can be derived using geometrical arguments , or it can be verified directly . now it may be verified that , for , maps the interior of onto the exterior of . conversely , the map maps the exterior of onto the interior of . the set consisting of all compositions of the maps , , defines what is called a classical schottky group . the region ( which we recall consists of the exterior of the circles ) is called a _ fundamental region _ of the group and the maps are called the _ fundamental generators _ of the group .
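for concreteness , the reflection and möbius maps just introduced can be written down explicitly . the expressions below , using $\delta_j$ and $ q_j$ for the centre and radius of the circle $ c_j$ consistently with the notation above , are a standard reconstruction rather than a quotation of the elided formulas :
$$
\phi_j(\zeta)=\delta_j+\frac{q_j^2}{\bar{\zeta}-\bar{\delta}_j},
\qquad
\theta_j(\zeta)=\phi_j\!\left(\frac{1}{\bar{\zeta}}\right)
=\delta_j+\frac{q_j^2\,\zeta}{1-\bar{\delta}_j\,\zeta}.
$$
the first map is the ( anti - conformal ) reflection in $ c_j$ , while the second , being the composition of the reflection in the unit circle $ c_0$ followed by the reflection in $ c_j$ , is a genuine möbius map .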
for any given schottky group and fundamental region , the s - k prime function , , can be defined for any two points . the s - k prime function admits the following infinite product representation : where is the set such that for all , excluding the identity , either or ( but not both ) is contained in . for example , if is included in , then must be excluded . the s - k prime function is intimately connected with the theory of compact riemann surfaces , but for the present purposes it suffices to think of it as a special computable function . efficient algorithms for computing the s - k prime function ( which do not rely on the infinite product representation ) have been developed by and . for later use , it is convenient to quote here the following relation : where the prefactor depends on the values of and ( but not on the point ) . note in particular that the product in ( [ eq : skratio ] ) is over the entire group . for a derivation of this formula , see , e.g. , . given a schottky group as defined above , a family of schottky subgroups , for , can be defined , and prime functions can naturally be associated with them . these so - called secondary prime functions were introduced by , as building blocks for constructing conformal mappings for mixed slit domains , and are briefly reviewed here . for a given , with , define to be the set of all elements such that contains only combinations of an _ even _ number of the maps , , , but that can contain any number ( even or odd ) of the other maps , i.e. for . for example , for the case and , one can show that the group is generated by the following maps : ; it can indeed be verified that these maps and their inverses generate only ( and all ) combinations of an even number of the maps and , but where any number of the maps can appear . now it may be verified that the set defined above is itself a schottky group , which is obviously a subgroup of the original group . associated with the group one can define a corresponding prime function . to avoid confusion with the primary s - k prime function introduced in [ sec : t ] ( and associated with the original group ) , this _ secondary _ s - k prime function associated with is denoted by . this function admits a product representation as in ( [ eq : sk ] ) , with the only difference that the product is over the set , whose definition mirrors that of the set . ( a sketch of how such prime functions can be evaluated numerically from the truncated product is given below . ) now fix an integer such that . from the definition of , one can verify that any element of the original group is either an element of or a composite map of the form , for some . for example , for the case , and , we have that , but , where .
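as a concrete illustration of how the infinite product ( [ eq : sk ] ) can be evaluated , the following python sketch enumerates the reduced words of a schottky group up to a prescribed level and forms the truncated product for the primary prime function . the centres and radii below are illustrative placeholders , not data from the paper , and the brute - force product is used only for simplicity ; the laurent - series schemes mentioned later converge much faster .

```python
import numpy as np
from itertools import product

# illustrative circular domain: centres delta_j and radii q_j of C_1..C_M
delta = np.array([0.5 + 0.0j, -0.5 + 0.0j])
q = np.array([0.15, 0.15])

def theta(j, z):
    """Mobius map theta_j: reflection in C_0 followed by reflection in C_j."""
    return delta[j] + q[j] ** 2 * z / (1.0 - np.conj(delta[j]) * z)

def theta_inv(j, z):
    """Inverse map, obtained by solving theta_j(w) = z for w."""
    return (z - delta[j]) / (q[j] ** 2 + np.conj(delta[j]) * (z - delta[j]))

def reduced_words(level):
    """All reduced words in the generators and their inverses, of length
    1..level (the identity is excluded, as in the set Lambda'')."""
    letters = [(j, s) for j in range(len(delta)) for s in (+1, -1)]
    words = []
    for n in range(1, level + 1):
        for w in product(letters, repeat=n):
            if any(a[0] == b[0] and a[1] == -b[1] for a, b in zip(w, w[1:])):
                continue  # contains theta_j theta_j^{-1}: not reduced
            words.append(w)
    return words

def apply_word(w, z):
    for j, s in reversed(w):  # first letter of the word acts last
        z = theta(j, z) if s > 0 else theta_inv(j, z)
    return z

def omega(z, g, level=4):
    """Truncated product for the S-K prime function omega(z, g); only one
    of each pair {map, inverse map} is kept, as in the set Lambda''."""
    val = z - g
    kept = set()
    for w in reduced_words(level):
        w_inv = tuple((j, -s) for j, s in reversed(w))
        if w_inv in kept:
            continue  # its inverse is already in the product
        kept.add(tuple(w))
        tz, tg = apply_word(w, z), apply_word(w, g)
        val *= (tz - g) * (tg - z) / ((tz - z) * (tg - g))
    return val

print(omega(0.9 + 0.1j, -0.9 + 0.1j))  # sample evaluation for two points in F
```

the same enumeration immediately yields the image systems used in the method of images below : the images of a source at a point are simply `apply_word(w, a)` over the enumerated words , and restricting the words to even combinations of selected generators gives the subgroups and hence the secondary prime functions .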
using this decomposition property of the group , together with identity ( [ eq : skratio ] ) , one can establish the following relation between the primary and secondary prime functions : where the prefactor depends only on and . note , in particular , that for the subgroup consists of all _ even _ combinations of the maps , ; here the generators of the group are the möbius maps ( excluding the identity ) which map the interior of the circles onto the exterior of their images in . this subgroup and its associated s - k prime function , , play a crucial role in constructing solutions for a finite assembly of bubbles in a hele - shaw channel , as discussed in [ sec : gs ] . the function , on the other hand , appears in the problem of periodic arrays of bubbles to be discussed in [ sec : ps ] . in this section the formalism of the s - k prime functions is used to construct explicit formulae for the complex potentials and introduced in [ sec : cm ] . a crucial step in this task is the computation of the infinite sets of images ( associated with the sink and source at ) which are necessary to enforce the appropriate boundary conditions on the circles bounding the flow region in the -plane . the location of these images can be expressed in terms of the action of one of the groups on the positions of the original source and sink . the specific group required ( i.e. the value of ) depends on whether the flow is described in the laboratory frame or in the co - moving frame . we begin by considering the complex potential in the co - moving frame . recall that in the co - moving frame each bubble is a streamline of the complex potential , thus implying that the circles are streamlines of the flow generated by in the -plane . these boundary conditions can be satisfied with a judicious choice of images , as discussed below . consider first the source at . reflection of this source in each of the circles yields a set of image sources at the positions , ; see ( [ eq : vartheta ] ) . here we used the fact that the image of a point source with respect to a streamline circle is a source of the same strength located at the corresponding reflection point . now , for any given image source , , for some , its subsequent reflection in a circle , , yields another source at the point , where we used ( [ eq : vartheta ] ) and ( [ eq : theta_inv ] ) . generalising this argument , one can show that reflection in of the first set of images generates second - level image sources at the points . more generally , it may be verified that after reflections of the original source in , one obtains a set of sources whose locations are given by .
continuing this procedure _ ad infinitum _ , one obtains an infinite set of image sources located at the following points : . a similar procedure for the sink at yields a system of image sinks at the points . the velocity potential produced by the set of sources and sinks computed above is given by where the prefactor was determined from the requirement that the logarithmic singularities of at have the appropriate strength . in other words , when going around either one of these singularities ( from one side of the cut to the other ) the jump in must equal , which corresponds to the width of the extended strip domain in the -plane . using ( [ eq : skratio ] ) , one can rewrite ( [ eq : tprod ] ) in terms of the s - k prime functions : where the value of the constant is not relevant . alternative derivations of this formula directly from the properties of the s - k prime function were given by and . the derivation presented above is arguably more intuitive in that it is based on the well - known method of images . for later use , it is convenient to make use of ( [ eq : skratio2 ] ) and rewrite ( [ eq : tsk ] ) in terms of the secondary prime functions : where and is an unimportant constant . here we start by recalling that the bubble boundaries are equipotentials of the complex potential in the laboratory frame , and so the circles must be equipotentials of the flow described by . using a similar approach as in [ sec : t ] , one can readily compute the system of images required to satisfy these boundary conditions . the only difference to bear in mind is that the image of a _ source _ with respect to an equipotential circle is a _ sink _ , and vice versa . consider the source at . its first set of images with respect to reflections in the circles corresponds to sinks at the positions . reflection of these sinks in yields sources at the points . upon further reflections of these sources in one gets a third - level set of sinks , and so on , where at each successive level of reflection sources generate sinks and vice versa . in other words , after a sequence of an even number of reflections of the original source one gets back a source , whereas an odd number of such reflections produces a sink . the system of images associated with the primary source at thus consists of the following two infinite sets : a. sources at the points , b. sinks at the points , where we recall that is the set of all _ even _ combinations of the maps , , and is an arbitrary integer . similarly , associated with the sink at one finds the following system of images : a. sinks at the points , b. sources at the points . note that in writing down the locations of the images in sets ii ) and iv ) above , use was made of the fact that any combination of an _ odd _ number of the maps , , can be written as for some , as discussed in [ sec : sg ] . ( here is an integer that can be chosen arbitrarily ; in specific computations it is convenient to set . ) given the sets of sources and sinks in i)iv ) above , it then follows that the resulting complex potential is where the prefactor ( unity ) was chosen so that the width of the extended channel in the -plane is equal to . now using ( [ eq : skratio ] ) , one can rewrite ( [ eq : wprod ] ) in terms of the secondary s - k prime functions : where the constant is immaterial for the flow field . once the complex potentials and have been obtained , the mapping function immediately follows from ( [ eq : z1 ] ) , as discussed next .
after substituting ( [ eq : t ] ) and ( [ eq : wsk ] ) into ( [ eq : z1 ] ) and performing some simplification , one finds where an overall additive constant ( needed to fix the origin in the -plane ) was omitted . now recall that the choice of the points and as the preimages of the channel end points was entirely arbitrary ( as allowed by the riemann mapping theorem ) . thus , the solution ( [ eq : z2 ] ) can be written in the more general form where with . ( although and can be any two distinct points on , it is convenient to choose and , as already anticipated . ) in ( [ eq : zu ] ) , denotes a solution for an arbitrary velocity and represents the corresponding solution ( i.e. for the same choice of domain ) for . one then sees that knowledge of the solutions with determines all other solutions for any . this property was first noticed by in the context of the taylor - saffman solution for a single bubble ; it was later shown to hold for any number of bubbles . equation ( [ eq : zu ] ) gives an explicit representation of this result . the coordinates of each bubble interface , , are given in parametric form by with as in ( [ eq : zu ] ) . note that all the geometrical information about the bubble configuration described by the solution above is encapsulated in the prescription of the preimage domain . this domain is characterised by its conformal moduli , which correspond physically to the areas and centroids of the bubbles . thus , once the conformal moduli of are prescribed , a solution for a specific assembly of bubbles is obtained , examples of which are given next . as already mentioned in the introduction , solutions for multiple steady bubbles in a hele - shaw channel were recently found by in terms of an indefinite integral whose integrand consists of a product of primary s - k prime functions and which contains several accessory parameters that need to be determined numerically . the solution ( [ eq : zu ] ) , in contradistinction , is expressed as an explicit analytical formula in terms of the secondary prime functions , with no accessory parameters . given a domain , the corresponding bubble shapes can be readily obtained upon computation of the relevant prime functions . in this context , it is important to point out that the numerical scheme developed by and for the computation of the primary s - k function , one that avoids the infinite product and relies on a more rapidly convergent laurent series , can be easily adapted for the evaluation of the secondary prime functions ; see for details . the numerical computation of can thus be performed in an efficient manner for domains of arbitrary connectivity . using this method , we have reproduced at considerably less computational cost the specific solutions reported by . other examples of multi - bubble configurations are discussed below , where it is assumed that .
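once the prime functions are available , plotting a bubble is straightforward : by ( [ eq : xy ] ) , the -th interface is just the image of the circle under the conformal mapping . the sketch below assumes a callable `z_map` implementing ( [ eq : zu ] ) has already been built , e.g. from the prime - function routine sketched earlier ; it is a usage pattern , not the paper s explicit formula .

```python
import numpy as np

def bubble_boundary(z_map, delta_j, q_j, n=400):
    """Return (x, y) samples of the j-th bubble interface, obtained as the
    image of the circle C_j under the conformal map z_map (cf. eq. (eq:xy))."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    zeta = delta_j + q_j * np.exp(1j * t)      # points on C_j
    z = np.array([z_map(p) for p in zeta])     # images in the physical plane
    return z.real, z.imag
```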
in the particular case that the bubbles either are symmetrical about the centreline or have fore - and - aft symmetry , solutions can be obtained by reducing the flow region to a simply connected domain and then applying the standard schwarz - christoffel formula . these symmetrical solutions can be recovered from our formula by simply prescribing a domain with the appropriate symmetry . more precisely , centreline symmetry is enforced by choosing the centres of all circles on the real axis , whereas bubbles with fore - and - aft symmetry are obtained by placing the centres of on the imaginary axis . two examples of assemblies of symmetrical bubbles are shown in figure [ fig:3cent ] . more generally , if is reflectionally symmetric about the real ( imaginary ) axis , then the resulting bubble configuration has centreline ( fore - and - aft ) symmetry , but not all bubbles will necessarily have the symmetry of the overall solution ( this happens only in the two cases just mentioned ) . for instance , in figure [ fig : f0below ] we show examples of three - bubble assemblies in which the configuration as a whole has either centreline or fore - and - aft symmetry , but where only one of the bubbles ( the largest one in each case ) possesses the overall symmetry of the solution , whilst the other two bubbles are totally asymmetric . a domain that entails no symmetry yields , of course , a completely asymmetric bubble configuration . an example of this case is given in figure [ fig:3asym ] for an assembly of three asymmetric bubbles . solutions for a higher number of bubbles can be handled in a similar manner . in the instance that some of the bubbles within a multi - bubble solution become infinitely elongated , whilst the other bubbles remain of finite area , one obtains a situation where multiple fingers penetrate into the channel with an assembly of bubbles moving ahead of the fingers . to be specific , consider a situation with fingers and bubbles , where the -th finger has a width and is separated from the finger to its right by a fluid gap of width . a schematic of the flow domain in the -plane is shown in figure [ fig : fbz ] for the case and . as before , it is convenient to work in an extended channel , , containing fingers and bubbles , where each additional interface is the reflection in the real axis of an interface in the original channel . the solution to this multifinger problem can be obtained from the multi - bubble solution given in [ sec : gs ] by starting with an assembly of bubbles in the extended channel and then taking the limit in which bubbles become infinitely elongated so as to yield the desired fingers . this limit can be easily obtained in the -plane , as follows . the pairs of circles and , for , corresponding to the bubbles that will become fingers should coalesce into a single circle that _ orthogonally _ intersects the unit circle . the other circles , , remain as they are ( and so they will map to bubbles of finite area ) . the resulting flow domain , denoted by , is shown in figure [ fig : f0tilde ] for the case and . before proceeding further , let us establish some notation . label by the circle orthogonal to , and let and denote its centre and radius , respectively . from the orthogonality condition one has ( without loss of generality one can set and ; this choice will be implied in the remainder of this section .
) the möbius map associated with reflection in , see ( [ eq : theta ] ) , is given by it can be verified that this map is of order two : . ( here 1 denotes the identity map . ) one may also verify that is invariant under in the following sense : where and denote the segments of that are inside and outside , respectively . in fact , one can show that maps the interior of onto its exterior . in analogy with the group introduced in [ sec : sg ] , we define the schottky group as the set consisting of all even combinations of the maps . ( the generators of this group are the maps , which send the interior of onto the exterior of their images in . ) as discussed above , the orthogonal circle is to be mapped to the fingers , whereas the circles , , map to the bubbles . this implies , in particular , that the function must have logarithmic branch points on , corresponding to the left end points of the fingers ( i.e. ) . let us denote by the singularities that lie on and by those lying on , where note , in particular , that and are the points of intersection between and , and so and , as these are the two fixed points of the map . thus , the limit in which a multi - bubble solution generates a multifinger configuration is accomplished by replacing the original logarithmic singularity at with logarithmic singularities at the points . it then follows that in this case the solution ( for ) given in ( [ eq : zu2 ] ) becomes ^{\alpha_j/2}}{\tilde\omega_m(\zeta , 1)}\right ) , \label{eq : zfb}\end{aligned}\ ] ] where is the s - k prime function defined over the group and the parameters must satisfy the condition to ensure single - valuedness of in . note in particular that the gap separation between two adjacent fingers is given by . notice furthermore that in this case ( i.e. ) the combined width of the fingers is , as required by fluid mass conservation . once the parameters and the conformal moduli of the domain are prescribed , a specific solution for fingers moving together with an assembly of bubbles in a hele - shaw channel is obtained . the shapes of the different interfaces correspond to the images under the mapping ( [ eq : zfb ] ) of the circles , . for example , the coordinates of the -th finger are given by where . a similar expression to ( [ eq : xy ] ) is obtained for the bubble coordinates . a related solution for one symmetrical finger with an assembly of symmetrical bubbles in front of it was obtained before using schwarz - christoffel methods . the present solutions are much more general in that they describe any number of fingers and bubbles , with no symmetry assumption , and have furthermore the advantage of being given by an explicit mapping function from which the shapes of the interfaces can be easily computed . an example with one asymmetric finger and one asymmetric bubble ( ) is shown in figure [ fig:1fb ] , whereas figure [ fig : f0fb ] shows a configuration with two fingers and one bubble ( , ) . before leaving this section , it is perhaps worth mentioning that in the case when no bubble is present ( ) , the s - k prime function becomes a monomial , i.e. , yielding a solution ( for fingers only ) of the form ^{\alpha_j/2}}{\zeta- 1}\right ) . \label{eq : zmf}\end{aligned}\ ] ] this solution describes ( in a different representation ) the multifinger solutions obtained by .
of particular interest is the case of a single finger ( ) for which the expression above reduces to where and . this recovers ( in a different representation ) the asymmetric finger obtained by as a generalisation of their solution for a symmetrical finger ( ) . in this section we consider the case of a periodic array of bubbles steadily moving in a hele - shaw channel . the problem is formulated in a general setup where it is supposed that there is an arbitrary number of bubbles per period cell and no assumption is made as to the geometrical arrangement of the bubbles within a period cell ; see figure [ fig : zp1 ] for a schematic . this contrasts with all previous periodic solutions , where symmetry requirements are imposed _ a priori _ . as in the case of a finite assembly of bubbles discussed in [ sec : gs ] , exact solutions for the present problem are obtained in terms of a conformal mapping from a circular domain to the flow region exterior to the bubbles ( within a period cell ) . here , however , a fully fledged function theoretic approach is used , whereby the desired mapping functions are obtained by directly exploiting the properties of the secondary prime functions . consider a periodic assembly of bubbles moving with speed in a hele - shaw channel . here there are no factors of $\pi$ to worry about and so we set the width of the channel to unity . the average fluid velocity , , across the channel in the -direction is also normalised to unity , i.e. . the streamwise period is denoted by , and it is assumed that there are bubbles per period cell . because of the flow periodicity , the problem can be restricted to the fluid region , , within one period cell . a schematic of is shown in figure [ fig:1ap ] for the case . now let be our usual circular domain with inner circles ; see figure [ fig:1bp ] . we shall seek a conformal mapping from to , such that the unit circle maps to the lower edge of the period cell ( ) , the inner circle maps to the upper edge ( ) , and the other inner circles , , map to the bubble boundaries . in addition , we insert a branch cut from a point on to a point on such that the two sides of this cut map to the left and right edges of the period cell , respectively ; see figure [ fig:1bp ] . the specific ` path ' of the branch cut is not relevant as it only affects the choice of period cell in the -plane . ( in the examples discussed below , the branch cut is placed on the positive real axis for convenience . ) as before , let and denote the complex potentials in the laboratory and co - moving frames , respectively . carrying out an analysis analogous to that presented in [ sec : cm ] , one can show that maps onto a `` rectangular '' region , , in the -plane which is bounded by two horizontal edges located at and ( the images of and ) and by two ` curved ' lateral edges ( the image of the branch cut from to ) , and which contains slits in its interior ( the images of , ) ; see figure [ fig:1cp ] . in the same vein , one may verify that maps onto a `` rectangular '' domain , , in the -plane with horizontal slits ; see figure [ fig:1dp ] . to obtain , let us first introduce the following transformation which maps onto a domain in a subsidiary -plane which consists of a concentric annulus with radial slits ; see figure [ fig : radial ] . the real parameter allows us the freedom to vary the modulus of the annulus in the -plane . now , it was shown by that the function where and , maps the circular domain conformally onto a concentric annulus with radial slits .
here is mapped to the outer circumference of the annulus and maps to the inner circumference , whereas , , map to the slits . using ( [ eq : ew ] ) and ( [ eq : s ] ) to solve for then yields ,\label{eq : wp}\end{aligned}\ ] ] where is a real constant . the value of is determined from the condition that the `` rectangular '' domain has a height equal to , that is , =1,\end{aligned}\ ] ] where the points and correspond to the intersections of the branch cut with the circles and , respectively ; see figures [ fig:1bp ] and [ fig:1cp ] . similarly , to obtain the mapping one first applies an exponential transformation which maps onto a domain consisting of an annulus with concentric circular - arc slits ; see figure [ fig : circ ] . next , we recall that , as shown by , the function where , maps onto the desired annular slit domain . ( here maps to the outer circumference of the annulus , maps to the inner circumference , and , , map to the circular slits . ) using ( [ eq : et ] ) and ( [ eq : etat ] ) , one then finds that , \label{eq : tp}\end{aligned}\ ] ] where the prefactor is determined from the requirement that the height of the domain is equal to : =u-1.\end{aligned}\ ] ] in view of ( [ eq : skratio2 ] ) , one can rewrite ( [ eq : tp ] ) in terms of the secondary prime functions : .\label{eq : tp2}\end{aligned}\ ] ] inserting ( [ eq : wp ] ) and ( [ eq : tp2 ] ) into ( [ eq : z1 ] ) , and performing some straightforward rearrangements , then yields the desired mapping : + \frac{\mathrm{i}k_+}{u}\log\left[\frac{\omega_{m-1}(\zeta , \theta_l(\alpha ) ) } { \omega_{m-1}(\zeta , \theta_l(\theta_m(\alpha))}\right ] , \label{eq : zp}\end{aligned}\ ] ] where . ( figure [ fig:1p ] caption : the lateral edges of the period cell , shown as dashed lines , are equipotentials of the flow ; the conformal moduli of the preimage domain are omitted in this copy . ) using the degrees of freedom allowed by the riemann - koebe mapping theorem , we can place the centre of the circle at the origin , i.e. , . its radius , , is then a free parameter that essentially controls the period . the remaining parameters , corresponding to the conformal moduli of , determine the centroid and area of the bubbles in a period cell . thus , once the domain is prescribed , a specific solution for a periodic assembly of bubbles can be readily computed from ( [ eq : zp ] ) . as seen above , obtaining specific solutions for a periodic array of hele - shaw bubbles requires computation of the secondary prime functions for . this can be done by using the infinite product ( [ eq : sk ] ) defined over the appropriate group . ( alternative numerical schemes to compute are known only for the cases and ; see . ) for the present purposes , it suffices to truncate the infinite product at the fourth - level maps ( i.e. maps involving up to four generators of the group ) . including higher order maps makes no discernible difference within the scale of the figures . ( figure caption : the dashed lines indicate the lateral edges of the period cell as given by the image of the branch cut inserted along the real axis in the domain ; the conformal moduli are omitted in this copy . ) an example of a periodic array of bubbles with only one bubble per period cell is shown in figure [ fig:1p ] . it follows from symmetry considerations that in this case the bubble has fore - and - aft symmetry , so that the vertical lines at are equipotentials of the flow . in particular , it is worth noting that when the bubble is also symmetrical about the centreline , our solution recovers the stream of symmetrical bubbles obtained by .
as discussed before , periodic solutions with more general symmetrical arrangements , such as those found by , can easily be reproduced in our formalism by simply choosing with the appropriate symmetry . furthermore , the analytical solution given in ( [ eq : zp ] ) can handle asymmetric configurations with equal ease , as illustrated below . an example of a staggered two - file array of unequal bubbles is shown in figure [ fig : f0p ] . note that in this case the lateral edges of the period cell , indicated as dashed lines in the figure , are _ not _ equipotentials . bubble configurations with a higher number of bubbles per period cell can be computed in a similar manner . an example with three asymmetric bubbles per unit cell is shown in figure [ fig:3p ] . ( in that figure the dashed lines again indicate the edges of the period cell ; the parameter values are omitted in this copy . ) the motion of an assembly of bubbles in a hele - shaw channel is a free boundary problem which is made more difficult by the fact that the relevant field ( the velocity potential ) is defined over a multiply connected domain on whose boundaries it satisfies mixed boundary conditions . this means that in the complex potential plane the flow region is a slit strip domain of mixed type : the bubbles are described by _ vertical _ slits , whilst the channel walls correspond to _ horizontal _ lines . recently , a formalism based on the secondary s - k prime functions was developed to construct a large class of such mixed slit maps . here the formalism of the secondary prime functions was used to compute exact solutions for multiple bubbles steadily translating in a hele - shaw channel in various configurations : i ) a finite assembly of bubbles ; ii ) multiple fingers moving together with an assembly of bubbles ; and iii ) a periodic array of bubbles . in all cases considered , analytical formulae in terms of the secondary prime functions were obtained for the conformal mapping from a circular domain to the corresponding flow region in the physical plane . several examples of specific solutions for these distinct arrangements were given . it is important to emphasise that , taken together , the results reported here represent the complete set of solutions for multiple steady bubbles and fingers in a horizontal hele - shaw channel when surface tension is neglected . a variant of the hele - shaw problem that has received less attention is the case when the cell is rotated about the centreline away from the horizontal .
to the best of our knowledge , the only known solution for this situation was obtained by for the case of a non - symmetric finger . it would be interesting to investigate whether this solution can be extended to the case of multiple bubbles in a rotated cell . the additional complication here is that the flow region in the complex potential plane consists of a strip with _ slanted _ slits ( rather than vertical ones ) , and conformal mappings to this type of slit domains are not yet known . another possible extension of the present research would be to consider time - evolving bubbles in a hele - shaw cell . recently , a general class of time - dependent solutions for a single bubble in a hele - shaw channel was obtained by in terms of a conformal mapping from an annulus to the fluid region outside the bubble . it is thus natural to expect that with the help of the secondary prime functions this result can be extended to include time - dependent solutions for an arbitrary number of bubbles . work in this direction is currently underway . it is also hoped that our methods can be adapted to study other related physical systems , such as equilibrium configurations of multiple hollow vortices and the formation of multiple streamers in strong electric fields . the author is appreciative of the hospitality of the department of mathematics at imperial college london ( icl ) where this work was carried out . he acknowledges financial support from a scholarship from the conselho nacional de desenvolvimento cientifico e tecnologico ( cnpq , brazil ) under the science without borders program for a sabbatical stay at icl . helpful discussions with darren crowdy are gratefully acknowledged . green , c. c. & vasconcelos , g. l. 2014 multiple steadily translating bubbles in a hele - shaw channel . _ proc . r. soc . a _ * 470 * , 20130698 . gubiec , t. & szymczak , p. 2008 fingered growth in channel geometry : a loewner - equation approach . _ phys . rev . e _ * 77 * , 041602 . silva , a. m. p. & vasconcelos , g. l. 2011 doubly periodic array of bubbles in a hele - shaw cell . _ proc . r. soc . a _ * 467 * , 346360 . silva , a. m. p. & vasconcelos , g. l. 2013 stream of asymmetric bubbles in a hele - shaw channel . _ phys . rev . e _ * 87 * , 055001 . taylor , g. i. & saffman , p. g. 1959 a note on the motion of bubbles in a hele - shaw cell and porous medium . _ q. j. mech . appl . maths _ * 12 * , 265279 . vasconcelos , g. l. 1994 multiple bubbles in a hele - shaw cell . _ phys . rev . e _ * 50 * , r3306r3309 . vasconcelos , g. l. 1998 exact solutions for steady fingers in a hele - shaw cell . _ phys . rev . e _ * 58 * , 68586860 . vasconcelos , g. l. 2001 exact solutions for steady bubbles in a hele - shaw cell with rectangular geometry . _ j. fluid mech . _ * 444 * , 175198 . vasconcelos , g. l. , marshall , j. s. & crowdy , d. g. 2014 secondary schottky - klein prime functions associated with planar multiply connected domains . _ proc . r. soc . a _ * 471 * , 20140688 .
analytical solutions for both a finite assembly and a periodic array of bubbles steadily moving in a hele - shaw channel are presented . the particular case of multiple fingers penetrating into the channel and moving jointly with an assembly of bubbles is also analysed . the solutions are given by a conformal mapping from a multiply connected circular domain in an auxiliary complex plane to the fluid region exterior to the bubbles . in all cases the desired mapping is written explicitly in terms of certain special transcendental functions , known as the secondary schottky - klein prime functions . taken together , the solutions reported here represent the complete set of solutions for steady bubbles ( and fingers ) in a horizontal hele - shaw channel when surface tension is neglected . all previous solutions under these assumptions are particular cases of the general solutions reported here . other possible applications of the formalism described here are also discussed .
hotelling s spatial model of competition , first introduced in 1929 , has had a large and varied influence on a number of fields . it has been applied not only in the original context of firms selecting geographic locations along `` main street '' so as to maximise their share of the market , but also to that of producers deciding on how much variety to incorporate into their products . downs has adapted it with minor modifications to model an election : in particular , the ideological position - taking behaviour of political candidates in their effort to win votes . the model in its simplest form features a number of candidates ( firms ) adopting positions on a one - dimensional manifold , usually taken to be the interval ] , the issue space , on which candidates adopt positions . since the two - and three - candidate cases are well known ( in the first we have the classical median voter result , and in the latter case no ncne exist ; see the end of this section ) , we assume there are candidates . candidate position is denoted and a strategy profile ^m ] and if \subseteq [ 0,1] ] are indifferent between candidates and , but prefer them both to . the voters in the interval ] , the second summand represents the contribution to the score of voters from the interval ] we have if and only if . this means that after this move only candidates previously at the same location as change their positions in the voters rankings . as tends to zero , the measure of voters between and tends to zero . given a scoring rule , sometimes we will consider _ subrules _ of . a subrule of is a vector where and . thus , if , a subrule is itself a scoring rule corresponding to an election with candidates . if , then does not define a scoring rule and we say is a _ constant subrule _ of . an important parameter of the rule will be the average score , denoted . our equilibrium concept is the standard nash equilibrium in pure strategies . to define it we introduce the following standard notation . let be a strategy profile . then by we mean the profile where candidate adopts position instead of , whereas all other candidates do not change their strategies . [ nasheq ] a strategy profile is in nash equilibrium if for all ] . a nash equilibrium ( ne ) is said to be a _ convergent nash equilibrium _ ( cne ) if all candidates adopt the same position , i.e. , . if , in a nash equilibrium , at least two candidates adopt distinct positions , we say it is a _ nonconvergent nash equilibrium _ ( ncne ) . if a strategy profile is an ne , then we will say that is its _ type _ . for example , the type of a cne is . before we begin presenting our results , we restate cox s characterisation of cne for arbitrary scoring rules . [ cne ] given a scoring rule , the profile is a cne if and only if where for the inequality in theorem [ cne ] to be satisfied for some , it must be that . the number encodes important information about the competing incentives characteristic of a given scoring rule : in particular , it measures the first - to - average drop in the value of the points relative to the first - to - last drop . motivated by the above theorem , cox defined a _ best - rewarding rule _ to be a rule with . here , the incentive for candidates to receive first place in a voter s ranking outweighs the incentive to receive only an average one . on the other hand , if the rule is said to be _ worst - punishing _ .
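the classification just described is easy to compute . in the sketch below , c is taken to be the first - to - average drop divided by the first - to - last drop , which is a reconstruction based on the verbal description above ( the paper s formal definition of c ( s , r ) is garbled in this copy ) ; the 1/2 threshold separating best - rewarding from worst - punishing rules is likewise assumed .

```python
from statistics import mean

def c_value(s):
    """(s_1 - average score) / (s_1 - s_m): reconstruction of c from the
    verbal description in the text; an assumption, not a quotation."""
    if s[0] == s[-1]:
        raise ValueError("constant score vectors do not define a rule")
    return (s[0] - mean(s)) / (s[0] - s[-1])

def classify(s):
    c = c_value(s)
    if c > 0.5:
        return "best-rewarding (no CNE, by theorem [cne])"
    if c < 0.5:
        return "worst-punishing (CNE exist)"
    return "intermediate (CNE exist)"

m = 5
plurality = [1] + [0] * (m - 1)       # c = (m-1)/m, the maximum
antiplurality = [1] * (m - 1) + [0]   # the veto rule, the minimum
borda = list(range(m - 1, -1, -1))    # c = 1/2: intermediate
for s in (plurality, antiplurality, borda):
    print(s, round(c_value(s), 3), classify(s))
```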
for worst - punishing rules , receiving a first - place ranking is not much better than an average one ; the main thing is to avoid being ranked last . rules satisfying are called _ intermediate _ . hence , cox s result says that a rule has cne if and only if it is worst - punishing or intermediate , and the possible equilibrium positions are in some interval of sufficiently centrist points . the parameter attains its maximum value of for plurality rule , given by . its minimum value of , on the other hand , occurs for the antiplurality ( veto ) rule , given by . it is closely related to myerson s `` cox threshold '' , which in our notation is . in addition , cox also observed that in any ncne the most extreme positions , and , must be occupied by at least two candidates and , hence , in the case of a three - candidate election , no ncne exist . suppose in a profile there are candidates in positions occupying locations , and suppose candidate is not alone at the location she occupies . we will now investigate how the score of candidate changes when changes position from to ] and ] is and the contribution from the interval is for some , since candidate 1 is tied in the rankings of all voters in . if 1 moves infinitesimally to the right , then these contributions to become and , respectively . indeed , is still ranked at worst by voters in and hence loses nothing . also , rises in all other voters rankings . if for at least one , this move is strictly beneficial . this infinitesimal `` move '' is not a real move . however , if it is strictly beneficial , then by proposition [ insidetheinterval2 ] we may conclude that a sufficiently small move to the right will also be beneficial . if for all , then we consider the move by candidate 1 to the right , to a position . the intervals where voters rank candidate 1 similarly will then be .\ ] ] we see that candidate 1 has increased the length of the first interval , from which she receives , at the expense of the far - right interval , from which she receives , while keeping the lengths of all other intervals unchanged . so this move is beneficial . so for ncne we must have . similarly , . this already allows us to make the following observation , which rules out ncne for a class of scoring rules . [ generalprops2 ] if is a scoring rule such that for some , then allows no ncne . by lemma [ generalprops1 ] , . hence , , a contradiction . the rules specified in corollary [ generalprops2 ] are usually worst - punishing . however , there are some that are slightly best - rewarding , as in the following example . this shows that there exist best - rewarding rules which , unlike plurality , do not allow ncne . by theorem [ cne ] , they have no cne either ; thus they have no nash equilibria whatsoever . if is odd , consider -approval with . that is , , where the first positions are ones . then so the rule is best - rewarding , but by corollary [ generalprops2 ] it has no ncne . note that starting with this rule we can produce a family of best - rewarding rules that have no ncne . clearly , the scores could have been , instead of zeros , arbitrary scores with the property that , and the rule would still have been best - rewarding , but without ncne . the following lemma says that in an ncne no candidates may occupy the most extreme positions on the issue space , namely 0 or 1 . this will allow us to always assume that it is possible for a candidate to make a move into the end intervals and ] .
now suppose candidate 1 moves infinitesimally to the right . then , in the limit , 1 s score is where is the new contribution to 1 s score from the interval . since 1 has moved up in the ranking of all the voters in , . also , since we have by proposition [ insidetheinterval2 ] , . hence , candidate 1 benefits from moving to the right , and so is not in ncne . thus , . similarly , . the next lemma places an upper bound on the length of the interval between two occupied positions , or between the boundary of the issue space and the nearest occupied position . from this , we will be able to derive a lower bound on the number of occupied positions for a given scoring rule . [ upperboundintervals ] given a scoring rule , if is in ncne , then both of the following conditions must be satisfied : 1 . and ; 2 . for any such that . since , there exists a real number with the property that and . rearranging this equation , one verifies that . at any profile , there will be at least one candidate who garners a total score . hence , if is an end interval , namely ] , then we must have . to show this we assume without loss of generality that for ] , together with intervals of the form for . so , using lemma [ upperboundintervals ] , we see that whence the result . we round up since is an integer . thus , the amount of dispersion observed is increasing in . when increases above , the number of occupied positions required for ncne increases to at least three . when exceeds , the number of occupied positions must be at least four . the maximum value of is attained for plurality , so there must be at least occupied positions . we will return to consider these bounds in section [ highlybrsection ] where , with the help of a few more lemmas , we will find that for most rules that are sufficiently best - rewarding , there are no ncne at all . [ 45cand2 ] suppose at profile candidate is at and . then . in particular , when is in ncne , . again , the issue space can be divided into subintervals of voters who all rank in the same position . the immediate interval around is , where : ] if ; and , ] if . the contribution to from the interval is the contribution to from this interval is then and the contribution to is the contribution to from any interval to the left of , consisting of voters who all rank similarly , is for some . the contribution to is , since when candidate moves infinitesimally to the left she rises one place in the rankings of these voters . the contribution to is , since this move causes to fall one place in these voters rankings . in the same way , the contribution to from any interval to the right of , consisting of voters who all rank identically , is for some , while the contribution to is and the contribution to is . hence , since for any subinterval , or , the sum of the contributions to and is twice the contribution to from the same subinterval . for to be in ncne we need both and . this is only possible when . [ 45cand3 ] if or then a necessary condition for ncne is . let . by lemma [ 45cand2 ] we have . hence , if 1 moves to a position then for ncne we need . hence , the slope of the linear function is nonpositive . by proposition [ insidetheinterval2 ] we then have , which can happen only if .
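corollary [ lowerboundq ] can be turned into a one - line computation . the closed form used below , q at least 1/(2(1-c)) rounded up , is reconstructed from the proof sketch above and is consistent with the plurality case quoted in the text ( c = ( m - 1 )/ m forcing at least m /2 occupied positions ) ; where the displayed formula is garbled in this copy , treat the code as an assumption .

```python
from math import ceil
from statistics import mean

def c_value(s):
    return (s[0] - mean(s)) / (s[0] - s[-1])

def min_occupied_positions(s):
    """Reconstructed bound q >= 1/(2(1-c)) from corollary [lowerboundq]."""
    return ceil(1.0 / (2.0 * (1.0 - c_value(s))))

m = 10
print(min_occupied_positions([1] + [0] * (m - 1)))         # plurality: m/2 = 5
print(min_occupied_positions(list(range(m - 1, -1, -1))))  # borda: trivial bound 1
```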
propositions [ insidetheinterval1 ] and [ insidetheinterval2 ] allow us to conclude that unpaired candidates are actually quite rare . in particular , if is even and , there can be none whatsoever . if is odd and then the only candidate that could possibly be unpaired is the median candidate . lemmas [ 45cand2 ] and [ 45cand3 ] tell us the only rules that allow paired candidates at the end positions are the rules of the form . the consequences for elections with a small number of candidates are as follows : if , since the only possible profile for ncne is the one with two distinct positions occupied by two candidates apiece , we must have ; for , all possible partitions of the candidates ( 2 - 1 - 2 and 3 - 2 ) involve end positions occupied by exactly two candidates , hence , for ncne we need . for , too , we can conclude that the partitions 2 - 1 - 1 - 2 , 2 - 2 - 2 and 2 - 4 are possibilities only for rules satisfying ( such as plurality , for which the first two of these three partitions allow ncne , see or ) . this leaves only the partition 3 - 3 as a possible ncne for rules which are not of this kind . we will elaborate on these results in section [ specialcases ] . in this section we will identify three quite broad classes of scoring rules for which ncne either do not exist or exist only with a few well - defined exceptions . the first class is all scoring rules with convex scores . these are best - rewarding rules and hence do not allow cne . we show that such rules do not have ncne ( in fact , they have no ne whatsoever ) , with the exception of some derivatives of the borda rule . the second class consists of rules that satisfy a certain condition on the speed with which the scores are decreasing ; we call such rules weakly concave . rules that have concave scores or symmetric scores ( we will explain what this means later ) belong to this class . these , in contrast , are worst - punishing or intermediate and , hence , allow cne by theorem [ cne ] . we show that weakly concave rules with a mild additional condition do not have ncne ; consequently , if a weakly concave rule has an ncne , then the rule must be highly nonsymmetric . we give an example of such an ncne . we do not know , however , whether or not rules with concave scores may have ncne . we leave this question open . the third class consists of rules that are highly best - rewarding . we say that the rule is _ convex _ if we note that as soon as for some , all the subsequent scores must also be equal for convexity to be satisfied . we aim to show that such rules , with one class of possible exceptions , have no ncne ; moreover , they have no nash equilibria at all . firstly we show that a convex scoring rule is either best - rewarding or intermediate . in fact , we show a bit more . [ convexscores1 ] let be a scoring rule . then is convex if and only if every nonconstant -candidate subrule , where , has . suppose satisfies . it suffices to show the rule itself is best - rewarding or intermediate , since any nonconstant subrule also satisfies . we have , for any , in particular , all we need is suppose is even . equation implies then suppose is odd . then implies then , letting , we have so in both cases , which is equivalent to . conversely , suppose every nonconstant subrule is best - rewarding or intermediate . then any 3-candidate subrule has , which is equivalent to , so is satisfied . [ convexscores2 ] let be a convex scoring rule . then both of the following conditions : 1 . all inequalities in are equalities , 2 .
satisfies , are equivalent to being a borda rule . ( a ) let be the common value of all the differences in . then . subtracting from all scores does not change the rule . dividing all the scores by after that does not change it either . but then we will get the canonical borda score vector with . ( b ) the condition implies from which . an equality here is possible only if we had all equalities in , and this is possible only if we had equalities in . now the result follows from ( a ) . recall that by theorem [ cne ] , there can be no cne for a best - rewarding rule : there is always an incentive to deviate to one side , as capturing the first - place ranking of the voters on one side is more valuable than sharing the total score from all the voters . in the case of ncne under a rule with convex scores , this phenomenon repeats itself at a local level . from the point of view of a candidate at some occupied position , the issue space can be partitioned into a number of subintervals . each subinterval corresponds to a subset of voters who all rank the candidate in the same way , and is associated with some subrule of the original rule . since this subrule is best - rewarding by proposition [ convexscores1 ] , there is an incentive to capture the maximum possible ranking from some fraction of the voters in the given subinterval rather than share the votes from the subinterval . the combined effect of these local incentives produces an overall incentive to deviate . now we can prove the main theorem of this section . [ convexscores2 ] let be a scoring rule with convex scores and let be such that . then there are no ncne , unless the subrule is borda and ( i.e. , more than half the scores are constant ) . let be a profile . consider candidate 1 at . without loss of generality , assume , since at least one of the two end positions has less than half the candidates . let ] . the rest of the issue space to the right of can be partitioned into subintervals ,\ ] ] where voters in each of these intervals rank candidate 1 in the same way . more specifically , candidate 1 shares -th through to -th place in the rankings of all voters in , for some such that . then 1 s score is if candidate 1 moves infinitesimally to the left , then similarly , if she moves infinitesimally to the right , then let be in ncne . then and . this implies that . that is , which implies we know that the convexity of the scores implies for all such that . thus , each term on the left - hand side of is nonnegative . if one or more of these terms is positive , then we have a contradiction and hence no ncne exist . the only other possibility is that all these terms are equal to zero , which by proposition [ convexscores2 ] implies each of the subrules appearing in the expression is equal to borda or is constant ( in particular , the rule must be borda , since by lemma [ generalprops1 ] ) . in particular we get . if this is the case then , so for to be in ncne we must have . then , for to be in ncne we must have for any , that is , the score cannot increase as 1 moves to the right from . by proposition [ insidetheinterval2 ] , the slope of the linear function for is , and since it is nonincreasing we have . on the other hand , since . we conclude therefore that . this means that the scores have stabilised on or earlier than , whence . as , we must now conclude that . hence , there are no ncne unless the subrule is borda and .
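the two score - vector properties used in this and the next section are simple to test mechanically . the predicates below encode assumed readings of the definitions : convexity as nonincreasing consecutive drops ( which forces equal scores to persist , as noted above ) , and weak concavity as each drop near the top being no larger than its mirror - image drop near the bottom , per the verbal description given a little further below .

```python
def drops(s):
    return [s[j] - s[j + 1] for j in range(len(s) - 1)]

def is_convex(s):
    """Consecutive drops are nonincreasing (assumed reading of convexity)."""
    d = drops(s)
    return all(d[j] >= d[j + 1] for j in range(len(d) - 1))

def is_weakly_concave(s):
    """Top-end drop <= mirror-image bottom-end drop (assumed reading)."""
    d = drops(s)
    return all(d[j] <= d[len(d) - 1 - j] for j in range(len(d) // 2))

print(is_convex([8, 4, 2, 1, 0.5]))        # geometric scores: True
print(is_convex([4, 3, 2, 1, 0]))          # borda: True (all drops equal)
print(is_weakly_concave([4, 3, 2, 1, 0]))  # borda is symmetric: True
print(is_weakly_concave([1, 0, 0, 0, 0]))  # plurality: False
```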
for the special case where has convex scores , the subrule is borda and the scores from through are constant , theorem [ convexscores2 ] says nothing , and for good reason , since here ncne can actually exist . this will follow from theorem [ multipositional ] and example [ exceptionexamp ] . rules satisfying the conditions of theorem [ convexscores2 ] include borda ( for which nonexistence of ncne also follows from theorem [ wpnoncne2 ] ) as well as the following examples . [ convexscoresexample ] the following rules have convex scores and , hence , no ncne . 1 . given , define a scoring rule by for , where . that is , multiply the previous score by the same factor each time . 2 . given , define a rule by for , where . that is , add an increasing amount each time . 3 . the rule , for any . the significance of this is that even a slight deviation from plurality destroys the ncne which plurality is known to possess . we say that the rule is _ concave _ if most of our positive results are , however , applicable to a larger class of rules which we call weakly concave . we say that a scoring rule is _ weakly concave _ if it obeys the following property : for all . that is , the difference between consecutive scores at the top end must not be larger than the corresponding difference at the bottom end . if we always have an equality in we say that the rule is _ symmetric _ . [ wpnoncne1 ] a weakly concave rule is either worst - punishing or intermediate . that is , . note that is condition with the inequalities reversed . hence , reversing all the inequalities in proposition [ convexscores1 ] , we obtain . before we can prove the main result of this section , we will need one more lemma . [ wpnoncnelemma ] if is a weakly concave rule , then for all . moreover , if satisfies for some , then inequality holds for all . let . equation implies that whence which , on dividing by , gives . this proves the first part of the lemma . now suppose that holds for some . the statement will be proved by induction if we can prove that holds for . if , the statement follows from the first part of the lemma . so assume . we have this rearranges to give since , we have and hence by we conclude . putting things together , we get thus , equation holds for , which proves the induction step . next , we show a weakly concave rule has no ncne in which each of the end positions is occupied by less than half the candidates . if also holds for , which is condition below , then we can rule out ncne altogether . [ wpnoncne2 ] any weakly concave scoring rule has no ncne in which . if , in addition , satisfies then no ncne exist . suppose . consider candidate 1 at position , which is occupied by candidates . consider intervals ] . if 1 makes an infinitesimal move to the right of , then in the rankings of voters in she falls behind the other candidates originally at . on the other hand , 1 rises ahead of these candidates in the rankings of all other voters . then the score candidate 1 loses by making this move , , is on the other hand , 1 s gain from this move , , is _ at least _ the gain from : where we have used . for this profile to be an ncne , we need this move not to be beneficial for candidate 1 . that is , we need , or since by lemma [ generalprops1 ] we know that , and hence the common multiple on both sides of the inequality is nonzero , this implies since is also assumed to be not greater than , similar considerations with respect to candidate give that )\geq \ell([0,(x^1+x^q)/2 ] ) ] and ] , 2 .
] , be the full - electorates " corresponding each occupied position .for each ] .the following theorem provides a method of constructing rules for which multipositional ncne exist .[ multipositional ] let be a composite number with .consider an -candidate scoring rule .then the profile given by is in ncne if and only if the following two conditions hold : 1 .}\max\{\ell(i_i^l),\ell(i_i^r)\ } \leq ( 1-c(s',r))\min_{i \in [ q]}\{\ell(i_i)\} ] , where .the idea is that for this kind of scoring rule , each occupied position is isolated " from the rest of the issue space , since a candidate at this position receives nothing from voters who rank her or worse .so the candidates have to compete `` locally '' .note that condition ( a ) can only be satisfied if , since it implies }\max\{\ell(i_i^l),\ell(i_i^r)\ } & \leq ( 1-c(s',r))\min_{i \in [ q]}\{\ell(i_i)\ } \\ & \leq 2(1-c(s',r))\max_{i \in [ q]}\max\{\ell(i_i^l),\ell(i_i^r)\},\end{aligned}\ ] ] from which follows .that is , though the scoring rule is best - rewarding , the subrule , for to be in ncne , must be worst - punishing or intermediate .hence , comparing this with theorem [ cne ] , we see that locally each occupied position behaves with respect to the rule in a similar way to a cne on the whole issue space. consider candidate at position .since all of s score is garnered from the immediate full - electorate , s score is suppose that moves to some position between two occupied positions or between an occupied position and the boundary of the issue space . in the latter case, is now ranked first by , at best , all voters in the intervals or . in the former case , when for some , candidate is ranked first by voters in the interval ] , is in ncne if and only if , where . since each is the same length , condition ( b ) of theorem [ multipositional ] is satisfied .condition ( a ) reduces to , whence the requirement that .now let us look at some examples .consider -candidate -approval rule with .the condition holds if and only if , suppose this is true . by appending zeros to the end of , we can extend to -approval with candidates for any .then theorem [ multipositional ] implies there exist ncne in which candidates position themselves at each of the distinct locations . as a special case ,consider 1-approval , which is just plurality : . for any even ,if we set then we obtain with .so the profile where two candidates locate at each position so as to divide the space into equally sized intervals is an ncne , and it is the only one in which there are two candidates at each position .we can not have , as then we would have .so plurality has no equilibria in which more than two candidates locate at each position , in agreement with the well - known results of eaton and lipsey and denzau et al . .[ exceptionexamp ] let , that is , is borda .let , of length , , be the rule resulting from appending zeros to .then , so there exists an ncne in which candidates position themselves at the halfway points of equally sized full - electorates that partition the issue space .recall that theorem [ convexscores2 ] stated that a rule with convex scores has no ncne , unless the nonconstant part of the scoring rule is exactly borda and is shorter than the constant part .the rule in example [ exceptionexamp ] is precisely such a rule .hence , the exception in theorem [ convexscores2 ] does indeed need to be made . for a given scoring rule , ncne with different partitions of the candidates can exist simultaneously .consider 3-approval , , with . 
then, if we have, so there are ncne with five distinct positions occupied by four candidates apiece. at the same time, if we have, so there are also ncne with four distinct positions occupied by five candidates apiece. a cne is the simplest kind of nash equilibrium that may exist. we now turn our attention to what would be the next simplest kind: an ncne in which there are only two occupied positions. here we restrict ourselves to the case where is even and the equilibrium positions are symmetric. we saw in example [8-4ncne] that bipositional equilibria are not necessarily symmetric. at the end of this section we will give an example of a nonsymmetric bipositional equilibrium for. later it will become clear that this is the smallest value of for which nonsymmetric bipositional equilibria exist. [bipositional] suppose is even. then the profile, with and, is in ncne if and only if both and. if in addition, then the profile is in ncne whenever. moreover, can always be satisfied. by the symmetry of the positions, and. at, all candidates receive -th of the points, so for all. note that it is necessary that, since otherwise there would need to be more than candidates at each position by lemma [generalprops1]. by symmetry, for ncne it is enough to require that candidate 1 not be able to deviate profitably, and there are only three moves to consider: a move to, which is always better than a move to, since 1 is ranked one place higher by half of the voters in the middle interval; a move to, which is the best of any move into the middle interval, since the slope of in that interval is nonnegative by proposition [insidetheinterval2]; and, finally, a move to. for the first one we have. for ncne, it must be that, which yields the requirement. for the second move we have. the fact that yields. finally, since in an ncne, it must be that. there are no other moves to consider, so if the position is valid, that is, satisfies and is in the range, then we have an ncne. the condition combined with implies the strict inequality in. the condition means that we need the right-hand side of to be strictly greater than zero, which is always true. finally, note that the requirement that is implied by, since if then we have a contradiction. to prove the second statement, suppose a scoring rule satisfies both and. since is equivalent to, the right-hand side of always satisfies. similarly, the left-hand side satisfies. putting together and, we see that it will always be possible to find valid values of in the desired range. again consider -approval with. clearly, is satisfied. then we have symmetric bipositional ncne whenever, which is valid whenever. thus, as decreases and the rule becomes more best-rewarding, more extreme positions become possible, until we reach the point where a bipositional equilibrium is no longer viable. ncne with more than two positions then become possible, as can be seen from theorem [multipositional] and as one would expect from corollary [lowerboundq]. theorem [bipositional] allows us to conclude that bipositional ncne may exist for both best-rewarding and worst-punishing rules, as we will see in the examples below. [bipositionalexample] let. consider the following rules: 1. we have, so is satisfied. equation reduces to, so the profile is in ncne for any. note that, so this rule is worst-punishing. 2. we have, so is satisfied. by equation, ncne occurs whenever. this rule is intermediate since. 3.
.we have , so is satisfied and we have ncne when .so this rule allows only one symmetric bipositional ncne .here we have , so is best - rewarding .the first two rules also allow cne , so we see that cne and ncne can coexist for the same rule .the third rule , on the other hand , has no cne .finally , we give an example of a bipositional equilibrium in which the number of candidates is different at the two positions , though the positions themselves turn out to be symmetrically located .[ nonsymbipositionalexample ] let .consider the rule .then the profile with and , is an ncne .we omit the details .in the special cases of and we can provide a complete characterisation of the rules allowing ncne . for we can identify all types of possible equilibria . [ 4cand ] given and scoring rule , ncne exist if and only if both the following conditions are satisfied : 1 . , 2 . . moreover , the ncne is unique and symmetric , with equilibrium profile , where by lemma [ generalprops1 ] , an ncne with must have exactly two distinct positions , , with .hence , by lemma [ 45cand3 ] , it is necessary that . by lemma [ generalprops1 ], we also need .hence , ( b ) is necessary . by lemma [ noextremepositions ], we have and . by lemma [ 45cand2 ] , in ncne we have .that is , which implies .this is only possible if .considering the symmetric moves by candidate 4 gives or .hence , . then , since , we have from which , after substituting for , equation follows .for this to be a valid position , we need , from which it follows that .this is equivalent to , so ( a ) is necessary . for sufficiency ,notice that a rule satisfying conditions ( a ) and ( b ) also satisfies the conditions of theorem [ bipositional ] , from which we conclude that the profile given by is actually in ncne .for the five - candidate case , there are two ways to partition the candidates that might result in an ncne .as it turns out , one of them is not possible .[ 5cand1 ] for , there are no ncne of the form .first note that , without loss of generality , we can assume , since it is easy to see that subtracting from each score does not change the rule .by lemma [ 45cand3 ] we have and by lemma [ generalprops1 ] we have .hence our rule is one of those studied in subsection [ rulesabbb0 ] .but then by lemma [ n1=2 ] we can not have three candidates at position .[ 5cand2 ] given and scoring rule , ncne exist if and only if both the following conditions are satisfied : 1 . , 2 . .moreover , the ncne is unique and symmetric , with equilibrium profile , where as in lemma [ 5cand1 ] , we lose no generality in assuming . by lemma [ 5cand1 ] and [ generalprops1 ] ,the profile must be of the form .also , by lemma [ 45cand3 ] and by lemma [ generalprops1 ] , so condition ( b ) is necessary . by lemma [ noextremepositions ] , the end points of the issue space are not occupied . as in the proof of theorem [ 4cand ] , considering moves by candidate 1 to and , together with lemma [ 45cand2 ] ,gives .similar considerations for candidate 5 give , hence .let and ( all positions in these intervals yield the same score by proposition [ insidetheinterval2 ] ) .again by lemma [ 45cand2 ] , we need which implies and hence . the same considerations with respect to candidate give that .so we have equality and , consequently , . we knowthat .this yields from which , after substituting and , equation follows . 
for this to be a valid position ,we need .this gives , which is equivalent to , so condition ( a ) is necessary .now sufficiency .suppose ( a ) and ( b ) are satisfied .note that neither candidate nor candidate can move into , , or $ ] beneficially .again , this follows by symmetry and by lemma [ 45cand2 ] .we check the remaining possibilities .as the move by candidate 1 to is not beneficial .also so there is no reason for candidate 1 to move to . by symmetry , then , no moves by candidate 5 are beneficial . finally , consider moves by candidate 3 . any move inside the interval does not change her score . a move to a position infinitesimally to the left of gives the requirement which is satisfied . finally ,if candidate 3 moves to her score is by symmetry , then , no moves are beneficial for this candidate .there are no more moves to consider , hence this is an ncne .thus , in both the four- and five - candidate cases , we see that ncne exist only for a subset of best - rewarding rules those for which all scores except first and last are equally valuable . in both cases ,the amount of dispersion observed in the candidates positions depends on the difference between and and is maximal when , that is , when the rule is plurality .as grows towards , the positions of candidates become less extreme , converging at the median voter position when . as increases beyond this point , by theorem [ cne ] we know that infinitely many cne are possible in an interval that becomes increasingly wide .hence , there is a bifurcation point that divides cne from ncne when or .as we move away from this point , more extreme positions are possible on one side they take the form of cne , and on the other side they are ncne .is large ) , they observe ncne where the candidates are configured as in our ncne . as the level of uncertainty increases from zero ( the value of decreases ) , they also observe the candidates positions becoming less extreme . beyond a certain point ,only convergent equilibria are observed .we note , however , that in addition to these , they also observe other kinds of equilibria that do not arise in our model . ] since for the equilibria are no longer unique even for plurality , it makes sense to describe only their types . [ 6cand2 ]given and scoring rule .then there are four possible types of equilibria split in two groups : the equilibria of the first group occur for rules that satisfy 1 . , 2 . 
. the equilibria within each group can coexist. no equilibrium of the first group can coexist with an equilibrium of the second group. we have to show that equilibria of types do not exist. what they all have in common is that they have two candidates at one of the extreme positions. suppose such an equilibrium exists. as before, we note that, without loss of generality, we can assume. by lemma [45cand3] we have, and by lemma [generalprops1] we have. hence, our rule is one of those studied in subsection [rulesabbb0]. but then by theorem [nomorethantwo] we cannot have three or more candidates at any given position. this rules out all such equilibria and shows that equilibria of the first group are incompatible with equilibria of the second. example [bipositionalexample](i) demonstrates that equilibria of the third type exist and may coexist with cne, as the rule in this case can be worst-punishing. equilibria of the second type are shown to exist for plurality in. we have developed an algorithm to determine whether an ncne exists for a given -candidate scoring rule. the algorithm works by generating a list of all possible clusterings of the candidates. for each clustering it produces a linear program (lp). the variables of the lp are the political positions of the clusters. the lp has the basic constraints to ensure that all of the political positions are in order and between and. note that a standard lp does not have strict inequalities, but we can maximise the minimum distance between the positions, so that we will find a solution in which the clusters have distinct political positions, if such a solution exists. the non-trivial constraints are those used to ensure that the candidates cannot improve their score by switching to a different position. we expand the reals with political positions and that occur immediately before and after each variable. for each pair of variables, we have three constraints requiring that the score a candidate would gain from moving from to, , or must be at most zero. we know that if a candidate can improve their score, they can improve it by moving to one of these positions. while (or) is not a real number, we know that if a candidate can improve their score by moving to, they can also improve their score by moving to for a sufficiently small real number. this algorithm is not polynomial: although lp solving can be polynomial, the number of clusterings considered is not. in spite of this, for the small sample problems considered in this paper the performance of the algorithm is close to instantaneous. (a brute-force numerical stand-in for this procedure is sketched below.) the algorithm only demonstrates whether a particular scoring rule admits an ncne. we intend to adapt the algorithm to use a quadratic constraint solver in place of the lp, to allow us to determine whether a class of rules has an ncne; the class of concave rules will be the prime target. the distinguishing features of this paper are the use of scoring rules and the focus on multicandidate nonconvergent equilibria. we now briefly describe the related literature with respect to these aspects. in light of the probabilistic interpretation of the scoring rule mentioned in the introduction, we feel the need to emphasise the differences between our approach and the probabilistic voting literature, which is a huge subject in its own right. for an introduction to the field, see coughlin or, for a survey, duggan.
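before continuing with this comparison, here is the brute-force stand-in for the equilibrium check promised above. it is our sketch, not the authors' implementation: voters are approximated by a uniform grid on [0,1], co-located candidates share the average of the scores for the places they jointly occupy, and a profile is accepted only if no candidate can raise her score by relocating to any grid point, so it certifies equilibria only up to the grid resolution.

def scores_at(positions, s, voters):
    # expected score of each candidate; tied (co-located) candidates share
    n = len(positions)
    total = [0.0] * n
    for v in voters:
        order = sorted(range(n), key=lambda i: abs(v - positions[i]))
        j = 0
        while j < n:
            k = j
            while (k + 1 < n and
                   abs(v - positions[order[k + 1]]) -
                   abs(v - positions[order[j]]) < 1e-12):
                k += 1
            share = sum(s[j:k + 1]) / (k - j + 1)
            for i in order[j:k + 1]:
                total[i] += share / len(voters)
            j = k + 1
    return total

def is_ncne(positions, s):
    voters = [(i + 0.5) / 1000 for i in range(1000)]
    grid = [i / 100 for i in range(101)]
    base = scores_at(positions, s, voters)
    for i in range(len(positions)):
        for x in grid:
            trial = list(positions)
            trial[i] = x
            if scores_at(trial, s, voters)[i] > base[i] + 1e-6:
                return False
    return True

# plurality with four candidates: the classical two-pairs equilibrium
print(is_ncne([0.25, 0.25, 0.75, 0.75], [1, 0, 0, 0]))   # True
print(is_ncne([0.25, 0.50, 0.50, 0.75], [1, 0, 0, 0]))   # False

returning now to the comparison with the probabilistic voting literature.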
to the best of our knowledge, these works usually involve some distance dependent function to give the probabilities as , for example , in multinomial logit choice models .these functions lead to an expected vote share that depends on distance and does not have discontinuities when candidates positions coincide .a scoring rule , on the other hand , is somewhat simpler : it behaves like a step function that only depends on ordinal information . whether the second ranked candidate is just beyond the first ranked candidate or on the other side of the issue space has no bearing on the probability of the voter voting for the more distant candidate .moreover , the discontinuities associated with the deterministic model persist in our model .other models include the stochastic model of anderson et al . and enelow et al . .the former again involves a function depending on distance , and the latter involves a finite number of voters in a multidimensional space .de palma et al . look at a model incorporating uncertainty and numerically calculate equilibria for up to six candidates , comparing them with the deterministic model .interestingly , some of the observed equilibria for four and five candidates show similarities to those we find in our model ( see the footnote following theorem [ 5cand2 ] ) .scoring rules and similar voting systems have appeared in spatial models before .however , apart from cox , all the work has been done in somewhat different contexts .myerson looks at the incentives inherent in different scoring rules and the political implications for such matters as corruption , barriers to entry and strategic voting .his model consists of a simpler issue space , with candidates deciding between two policy positions yes " or no " . in , myerson compares various scoring rules with respect to the campaign promises they encourage candidates to make and , in , he investigates scoring rules from the voter s perspective in poisson voting games .laslier and maniquet look at multicandidate elections under approval voting when the voters are strategic .myerson and weber introduce the concept of a voting equilibrium " , where voters take into account not only their personal preferences but also whether contenders are serious , and compare plurality and approval voting in a three - candidate positioning game similar to ours .we focus only on candidate strategies . as for the multicandidate aspect ,this paper is most closely related to denzau et al . and , before them , eaton and lipsey .the latter consider plurality rule , and the former extend these results to `` generalised rank functions '' .that is , the candidates objectives depend to some degree both on market share and rank ( say , the number of candidates with a larger market share ) . for a review emphasising multicandidate competition , see shepsle .many other more realistic refinements of the hotelling model have been constructed , incorporating uncertainty , incomplete information , incumbency and underdog effects , and so on .however , the price of added realism is that they are more complicated , and considering more than two candidates is often intractable . for a survey focusing on variations of the two candidate case ,see duggan or osborne . as for the economic interpretation ,our results are related only to those models in which price competition and transport costs do not come into play , such as in , again , eaton and lipsey and denzau et al . 
.the most basic model incorporating price competition is a game of two interdependent stages , the location selecting stage and the price setting stage .the details can be found in many standard economics textbooks , such as vega - redondo .in this paper , we have investigated how the particular scoring rule in use influences the candidates position - taking behaviour .we have looked at the equilibrium properties of a number of different classes of scoring rules .we were able to identify several broad classes of scoring rules disallowing ncne completely .for other large classes , we found that ncne can exist and we calculated a number of them .as cox and myerson found previously , the parameter plays a prominent role in determining what kind of equilibria are possible as increases , the amount of dispersion tends to increase also though usually the value of is not the only factor of importance .the manner in which the scores in the score vector are decreasing is pivotal we saw , for example , that all rules with entirely convex scores and many with entirely concave scores fail to possess ncne .the conditions under which ncne do exist can at times be quite stringent , as is made clear by the four- and five - candidate cases . in his investigation of cne , cox found that the value appears as a cut - off point between existence and nonexistence of cne : they exist if and only if , i.e. , when the rule is worst - punishing . when there are four or five candidates , a similar phenomenon occurs with respect to ncne :only when may ncne exist , though , as mentioned above , it is not guaranteed .thus , the two kinds of equilibrium are mutually exclusive . when there are six or more candidates , however , this transition from cne to ncne is no longer clear - cut .there exist worst - punishing and intermediate rules that allow both types of equilibria , as seen in theorem [ bipositional ] .a number of questions remain open .though we have investigated a wide variety of scoring rules , these are by no means all of them .the equilibrium behaviour of many rules most notably concave ones remains unknown . on the other hand , many of the rules most frequently appearing in the literature plurality , borda , -approval and so on fall nicely into the cases we have considered .another point of interest : most ncne discovered in this paper by theoretical considerations have been ones in which the same number of candidates locate at each position .the mechanics behind less regular or asymmetric equilibria obtained as a result of computational experiments remains unclear .it is interesting that a class of weakly concave rules can have only highly asymmetric equilibria .hence a concave rule may have only asymmetric equilibria or none at all . at this pointit is unknown whether any concave rules actually permit such asymmetric ncne .of course , there are a number of simplifications in our framework in comparison to cox s one .the main one is the assumption that the voters are uniformly distributed along the issue space , though this was partially justified in the first footnote of section [ themodel ] . characterisation of cne holds for an arbitrary nonatomic distribution of voter ideal points .the existence of ncne , however , is much more vulnerable to changes in the distribution . 
in a similar framework ,osborne finds that , for plurality rule , the uniform distribution is a special casealmost all " other distributions exclude the possibility of ncne .it would be interesting to see whether our results can be adapted to other distributions .another limiting assumption is the unidimensionality of the issue space .cox provides a version of theorem [ cne ] for a multidimensional space . whether our results on ncne can be extended to this situationis also not known .anderson , s. , kats , a. and j .- f thisse ( 1994 ) ` probabilistic voting and platform selection in multi - party elections ' , _ social choice and welfare _ 11 : 305322 .aragons , e. and d. xefteris ( 2012 ) ` candidate quality in a downsian model with a continuous policy space ' , _ games and economic behavior _ 75:464 - 480 .chamberlin , e. ( 1933 ) _ the theory of monopolistic competition_. cambridge : harvard university press .coughlin , p. j. ( 1992 ) _ probabilistic voting theory_. cambridge england ; new york : cambridge university press .cox , g. w. ( 1987 ) ` electoral equilibrium under alternative voting institutions ' , _ american journal of political science _ 31 : 82108 .cox , g. w. ( 1990 ) ` multicandidate spatial competition ' , in _ advances in the spatial theory of voting _enelow , j. m. and m. j. hinich ) .cambridge : cambridge university press , 179198 .de palma , a. , hong , g. and j .- f thisse ( 1990 ) ` equilibria in multi - party competition under uncertainty ' , _ social choice and welfare _ 7 : 247259 .denzau , a. , kats , a. and s. slutsky ( 1985 ) ` multi - agent equilibria with market share and ranking objectives ' , _ social choice and welfare _ 2 : 96117 .downs , a. ( 1957 ) an economic theory of political action in a democracy , _ the journal of political economy _ 65 : 135150 .duggan , j. ( 2005 ) ` a survey of equilibrium analysis in spatial models of elections ' .unpublished manuscript available at http://www.rochester.edu/college/psc/duggan/papers/existsurvey4.pdf .eaton , c. b. and r. g. lipsey ( 1975 ) ` the principle of minimum differentiation reconsidered : some new developments in the theory of spatial competition ' , _ the review of economic studies _ 42 : 2749 .hotelling , h. ( 1929 ) ` stability in competition ' , _ economic journal _ 39 : 4159 .laslier , j .- f and f. maniquet ( 2010 ) ` classical electoral competition under approval voting ' , in _ handbook on approval voting _laslier , j .- f and m. remzi sanver ) .new york : springer , 415430 .lin , t. , enelow , j. m. and dorrusen h. ( 1999 ) ` equilibrium in multicandidate probabilistic spatial voting ' , _ public choice _ 98 : 5982 .myerson , r. b. and r. j. weber ( 1993 ) ` a theory of voting equilibria ' , _ american political science review _ 87 : 102114 .myerson , r. b. ( 1993 ) ` incentives to cultivate favored minorities under alternative electoral systems ' , _ american political science review _ 87 : 856869 .myerson , r. b. ( 1999 ) ` theoretical comparisons of electoral systems ' , _european economic review _ 43 : 671697 .myerson , r. b. ( 2002 ) ` comparison of scoring rules in poisson voting games ' , _ journal of economic theory _ 103 : 219251 .osborne , m. ( 1993 ) ` candidate positioning and entry in a political competition ' , _ games and economic behavior _ 5 : 133151 .osborne , m. 
( 1995 ) ` spatial models of political competition under plurality rule : a survey of some explanations of the number of candidates and the positions they take ' , _ canadian journal of economics _28 : 261301 .shepsle , k. a. ( 2001 ) _ models of multiparty electoral competition_. london : routledge .stigler , g. j. ( 1972 ) ` economic competition and political competition ' , _ public choice _ 13 : 91106 .vega - redondo , f. ( 2003 ) _ economics and the theory of games_. new york : cambridge university press .
|
we use hotelling s spatial model of competition to investigate the position - taking behaviour of political candidates under a class of electoral systems known as scoring rules . in a scoring rule election , voters rank all the candidates running for office , following which the candidates are assigned points according to a vector of nonincreasing scores . convergent nash equilibria in which all candidates adopt the same policy were characterised by cox . here , we investigate nonconvergent equilibria , where candidates adopt divergent policies . we identify a number of classes of scoring rules exhibiting a range of different equilibrium properties . for some of these , nonconvergent equilibria do not exist . for others , nonconvergent equilibria in which candidates cluster at positions spread across the issue space are observed . in particular , we prove that the class of convex rules does not have nash equilibria ( convergent or nonconvergent ) with the exception of some derivatives of borda rule . finally , we examine the special cases of four- , five- and six- candidate elections . in the former two cases , we provide a complete characterisation of nonconvergent equilibria .
|
security protocols, in particular those for anonymity and fair exchange, often use randomization to achieve their targets. since they usually involve more than one agent, they also give rise to concurrent and interactive activities that are best modeled by nondeterminism. thus it is convenient to specify them using a formalism which is able to represent both _probabilistic_ and _nondeterministic_ behavior. formalisms of this kind have been explored in both automata theory and process algebra. see also for comparative and more inclusive overviews. due to the presence of nondeterminism, in such formalisms it is not possible to define the probability of events in _absolute_ terms. we first need to decide how each nondeterministic choice arising during the execution will be resolved. this decision function is called a _scheduler_. once the scheduler is fixed, the behavior of the system (_relative_ to the given scheduler) becomes fully probabilistic, and a probability measure can be defined following standard techniques. it has been observed by several researchers that in security the notion of scheduler needs to be restricted, for otherwise any secret choice of the protocol could be revealed by making the choice of the scheduler depend on it. this issue was, for instance, one of the main topics of discussion at the panel of csfw 2006. we illustrate it here with an example concerning anonymity. we use the standard ccs notation, plus a construct of probabilistic choice representing a process that evolves into with probability and into with probability. the following system _sys_ consists of one receiver and two senders, which communicate via private channels, respectively. which of the two senders is successful is decided probabilistically by. after reception, sends a signal _ok_.
the signal _ ok _ is not private , but since it is the same in both cases , in principle an external observer should not be able to infer from it the identity of the sender ( or ) .so the system should be anonymous .however , consider a team of two attackers and defined as and consider the parallel composition .we have that , under certain schedulers , the system is no longer anonymous .more precisely , a scheduler could leak the identity of the sender via the channels by forcing to synchronize with on _ ok _ if has chosen the first alternative , and with otherwise .this is because in general a scheduler can see the whole history of the computation , in particular the random choices , even those which are supposed to be private .note that the visibility of the synchronization channels to the scheduler is not crucial for this example : we would have the same problem , for instance , if , were both defined as , as , and as .the above example demonstrates that , with the standard definition of scheduler , it is not possible to represent a truly private random choice ( or a truly private nondeterministic choice , for the matter ) with the current probabilistic process calculi .this is a clear shortcoming when we want to use these formalisms for the specification and verification of security protocols .there is another issue related to verification : a private choice has certain algebraic properties that would be useful in proving equivalences between processes .in fact , if the outcome of a choice remains private , then it should not matter at which point of the execution the process makes such choice , until it actually uses it .consider for instance and defined as follows [ cols="^,^ " , ] process receives a value and then decides randomly whether it will accept the value or .process does exactly the same thing except that the choice is performed before the reception of the value .if the random choices in and are private , intuitively we should have that and are equivalent ( ) .this is because it should not matter whether the choice is done before or after receiving a message , as long as the outcome of the choice is completely invisible to any other process or observer .however , consider the parallel context . under any scheduler probability at most to perform . with , on the other hand , the scheduler can choose between and based on the outcome of the probabilistic choice , thus making the maximum probability of equal to .the execution trees of and are shown in figure [ fig : exectrees ] .+ in general when represents a private choice we would like to have \approx c[\tau.p ] + _ p c[\tau.q ] \label{eq : equivcontext}\ ] ] for all processes and all contexts _ not containing replication ( or recursion)_. in the case of replication the above can not hold since makes available each time the choice between and , while chooses once and for all which of the two ( or ) should be replicated . similarly for recursion .the reason why we need a is explained in section [ sec : testing ] .the algebraic property ( [ eq : equivcontext ] ) expresses in an abstract way the privacy of the probabilistic choice . moreover , this property is also useful for the verification of security properties .the interested reader can find in an example of application to a fair exchange protocol . 
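the gap between the two processes of the example above (the one that flips its coin after the input and the one that flips before) is easy to reproduce outside the calculus. the python sketch below is a toy abstraction of ours, not the calculus semantics: both processes reduce to the question of whether the message the scheduler makes the context send matches the process's fair coin, the only difference being whether the coin is flipped before or after the scheduler commits.

import itertools

# toy model: the test succeeds iff the value sent by the context equals
# the process's fair coin.  in the first process the input happens before
# the coin is flipped, so the scheduler must commit to a message m in
# advance; in the second the coin comes first, so an omniscient scheduler
# may choose the message as a function f of the coin.

def max_success_input_first():
    return max(sum(0.5 for coin in (0, 1) if m == coin) for m in (0, 1))

def max_success_coin_first():
    strategies = itertools.product((0, 1), repeat=2)   # m = f(coin)
    return max(sum(0.5 for coin in (0, 1) if f[coin] == coin)
               for f in strategies)

print(max_success_input_first())   # 0.5: the choice stays effectively private
print(max_success_coin_first())    # 1.0: the scheduler exploits the outcome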
in principle, ([eq:equivcontext]) should be useful for any kind of verification in the process algebra style. we propose a process-algebraic approach to the problem of hiding the outcome of random choices. our framework is based on a calculus obtained by adding to ccs an internal probabilistic choice construct. this calculus, to which we refer as ccs, is a variant of the one studied in, the main differences being that we use replication instead of recursion, and that we lift some restrictions that were imposed in to obtain a complete axiomatization. the semantics of ccs is given in terms of segala's _simple probabilistic automata_. in order to limit the power of the scheduler, we extend ccs with terms representing explicitly the notion of scheduler. the latter interact with the original processes via a labeling system. this allows us to specify at the syntactic level (by a suitable labeling) which choices should be visible to schedulers, and which ones should not. the main contributions of this paper are: * a process calculus ccs in which the scheduler is represented as a process, and whose power can therefore be controlled at the syntactic level. * an application of ccs to an extended anonymity example (the dining cryptographers protocol, dcp). we also briefly outline how to extend ccs so as to allow the definition of private nondeterministic choice, and we apply it to the dcp with nondeterministic master. to our knowledge this is the first formal treatment of the scheduling problem in the dcp and the first formalization of a nondeterministic master for the (probabilistic) dcp. * the adaptation of the standard notions of probabilistic testing preorders to ccs, and the ``sanity check'' that they are still precongruences with respect to all the operators except the nondeterministic sum. for the latter we have the problem that and are must equivalent, but and are not. this is typical for ccs: the nondeterministic sum usually does not preserve weak equivalences. * the proof that, under suitable conditions on the labelings of, and, ccs satisfies the property expressed by ([eq:equivcontext]), where is probabilistic testing equivalence. the works that are most closely related to ours are. in those papers the authors consider probabilistic automata and introduce a restriction on the scheduler for the purpose of making them suitable for applications to security protocols. their approach is based on dividing the actions of each component of the system into equivalence classes (_tasks_). the order of execution of different tasks is decided in advance by a so-called _task scheduler_.
the remaining nondeterminism within a task is solved by a second scheduler, which models the standard _adversarial scheduler_ of the cryptographic community. this second entity has limited knowledge about the other components: it sees only the information that they communicate during execution. in contrast to the above approach, our definition of scheduler is based on a labeling system, and the same action can receive different labels during the execution, so our ``equivalence classes'' (schedulable actions with the same label) can change dynamically. however, we do not know at the moment whether this difference determines a separation in expressive power. the main difference, in any case, is that our framework is process-algebraic, and that we focus on testing preorders, their congruence properties, and the conditions under which certain equivalences hold. another work along these lines is, which uses partitions on the state space to obtain partial-information schedulers. however, in that paper the authors consider a synchronous parallel composition, so the setting is rather different. in the next section we briefly recall some basic notions. in section [sec:ccss] we define a preliminary version of the language ccs and of the corresponding notion of scheduler. in section [sec:expressiveness] we compare our notion of scheduler with the more standard ``semantic'' notion, and we improve the definition of ccs so as to retrieve the full expressive power of the semantic schedulers. in section [sec:testing] we study the probabilistic testing preorders, their compositionality properties, and the conditions under which ([eq:equivcontext]) holds. section [sec:application] presents an application to security. section [sec:conclusion] concludes. in this section we briefly recall some preliminary notions about simple probabilistic automata and ccs. a _discrete probability measure_ over a set is a function; we write for the probability measure obtained as a convex sum of the measures. a _simple probabilistic automaton_ is a tuple where is a set of states, is the _initial state_, is a set of actions and is a _transition relation_. intuitively, if then there is a transition from the state performing the action and leading to a distribution over the states of the automaton. the idea is that the choice of transition among the ones available in is performed nondeterministically, and the choice of the target state among the ones allowed by (i.e. those states such that) is performed probabilistically. a probabilistic automaton is _fully probabilistic_ if from each state of there is at most one transition available. an execution of a probabilistic automaton is a (possibly infinite) sequence of alternating states and actions, such that, and such that and hold for each. we will use to denote the last state of a finite execution, and and to represent the sets of all finite executions and of all executions of, respectively.
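these definitions translate almost verbatim into code. the sketch below (our encoding) represents a simple probabilistic automaton as a map from states to the list of available transitions; a history-dependent choice function, playing the role of the scheduler defined next, resolves the nondeterminism, and we compute the probability that the resulting execution tree reaches a target state.

from fractions import Fraction as F

# trans maps a state to its available transitions (action, distribution),
# where a distribution maps target states to probabilities.  a scheduler
# is encoded here as a function from the execution so far (a tuple of
# alternating states and actions) to the index of the transition to fire.

def reach(trans, sched, exe, target, depth):
    state = exe[-1]
    if state == target:
        return F(1)
    options = trans.get(state, [])
    if depth == 0 or not options:
        return F(0)
    action, dist = options[sched(exe)]
    return sum(p * reach(trans, sched, exe + (action, t), target, depth - 1)
               for t, p in dist.items())

# s0 offers a nondeterministic choice between two probabilistic steps
trans = {
    "s0": [("a", {"s1": F(1, 2), "s2": F(1, 2)}),
           ("b", {"s1": F(1, 3), "s3": F(2, 3)})],
    "s1": [("c", {"s3": F(1)})],
}

def pick_a(exe):
    return 0                               # always the first transition

def pick_b(exe):
    return 1 if exe[-1] == "s0" else 0     # "b" at s0, then the default

print(reach(trans, pick_a, ("s0",), "s3", 10))   # 1/2
print(reach(trans, pick_b, ("s0",), "s3", 10))   # 1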
a _scheduler_ of a probabilistic automaton is a function such that implies that. the idea is that a scheduler selects a transition among the ones available in, and that it can base its decision on the history of the execution. the _execution tree_ of relative to the scheduler, denoted by, is a fully probabilistic automaton such that, , , and if and only if for some and. intuitively, is produced by unfolding the executions of and resolving all nondeterministic choices using. note that is a simple and fully probabilistic automaton. let range over a countable set of _channel names_. the syntax of ccs is the following: \[ \begin{array}{ll} \alpha ::= a \mid \bar{a} \mid \tau & \textrm{\textbf{prefixes}} \\ p,q ::= \alpha.p \mid p{\;|\;}q \mid p+q \mid {\textstyle{\sum_{i}\:}}p_ip_i \mid (\nu a)p \mid\ !p \mid 0 & \textrm{\textbf{processes}} \end{array} \] (prefix, parallel composition, nondeterministic choice, internal probabilistic choice, restriction, replication, and nil, respectively). the semantics is given by the rules \[ \begin{array}{ll} \textrm{sum1} & \dfrac{p \overset{\alpha}{\longrightarrow} \mu}{p+q \overset{\alpha}{\longrightarrow} \mu} \\[8pt] \textrm{par1} & \dfrac{p \overset{\alpha}{\longrightarrow} \mu}{p{\;|\;}q \overset{\alpha}{\longrightarrow} \mu{\;|\;}q} \\[8pt] \textrm{com} & \dfrac{p \overset{a}{\longrightarrow} \delta(p') \quad q \overset{\bar{a}}{\longrightarrow} \delta(q')}{p{\;|\;}q \overset{\tau}{\longrightarrow} \delta(p'{\;|\;}q')} \\[8pt] \textrm{prob} & {\textstyle{\sum_{i}\:}}p_ip_i \overset{\tau}{\longrightarrow} {\textstyle{\sum_{i}\:}}[p_i]\delta(p_i) \\[8pt] \textrm{bang1} & \dfrac{p \overset{\alpha}{\longrightarrow} \mu}{!p \overset{\alpha}{\longrightarrow} \mu{\;|\;}!p} \\[8pt] \textrm{bang2} & \dfrac{p \overset{a}{\longrightarrow} \delta(p_1) \quad p \overset{\bar{a}}{\longrightarrow} \delta(p_2)}{!p \overset{\tau}{\longrightarrow} \delta(p_1{\;|\;}p_2{\;|\;}!p)} \end{array} \] let range over a countable set of _channel names_ and over a countable set of _atomic labels_. the syntax of ccs, shown in figure [fig:syntax], is the same as the one of ccs except for the presence of labels. these are used to select the subprocess which ``performs'' a transition. since only the operators with an initial rule can originate a transition, we only need to assign labels to the prefix and to the probabilistic sum. we use labels of the form where is an atomic label and the index is a finite string of and, possibly empty. indexes are used to avoid multiple copies of the same label in the case of replication, which occurs dynamically due to the bang operator. as explained in the semantics, each time a process is replicated we relabel it using appropriate indexes. a scheduler selects a subprocess for execution on the basis of its label, so we use to represent a scheduler that selects the process with label and continues as.
in the case of synchronization we need to select two processes simultaneously, hence we need a scheduler of the form. a complete process is a process put in parallel with a scheduler, for example. note that for processes with an infinite execution path we need schedulers of infinite length. the semantics of complete processes is given by the rules \[ \begin{array}{ll} \textrm{sum1} & \dfrac{p \parallel s \overset{\alpha}{\longrightarrow} \mu}{p+q \parallel s \overset{\alpha}{\longrightarrow} \mu} \\[8pt] \textrm{par1} & \dfrac{p \parallel s \overset{\alpha}{\longrightarrow} \mu}{p{\;|\;}q \parallel s \overset{\alpha}{\longrightarrow} \mu{\;|\;}q} \\[8pt] \textrm{com} & \dfrac{p \parallel \sigma(l_1) \overset{a}{\longrightarrow} \delta(p' \parallel 0) \quad q \parallel \sigma(l_2) \overset{\bar{a}}{\longrightarrow} \delta(q' \parallel 0)}{p{\;|\;}q \parallel \sigma(l_1,l_2).s \overset{\tau}{\longrightarrow} \delta(p'{\;|\;}q' \parallel s)} \\[8pt] \textrm{prob} & l{:}{\textstyle{\sum_{i}\:}}p_ip_i \parallel \sigma(l).s \overset{\tau}{\longrightarrow} {\textstyle{\sum_{i}\:}}[p_i]\delta(p_i \parallel s) \\[8pt] \textrm{bang1} & \dfrac{p \parallel s \overset{\alpha}{\longrightarrow} \mu}{!p \parallel s \overset{\alpha}{\longrightarrow} \rho_0(\mu){\;|\;}\rho_1(!p)} \\[8pt] \textrm{bang2} & \dfrac{p \parallel \sigma(l_1) \overset{a}{\longrightarrow} \delta(p_1 \parallel 0) \quad p \parallel \sigma(l_2) \overset{\bar{a}}{\longrightarrow} \delta(p_2 \parallel 0)}{!p \parallel \sigma(l_1,l_2).s \overset{\tau}{\longrightarrow} \delta(\rho_0(p_1){\;|\;}\rho_{10}(p_2){\;|\;}\rho_{11}(!p) \parallel s)} \end{array} \] for all contexts. may and must testing are precongruences if we restrict to contexts with fresh labelings and without occurrences of. this result is essentially an adaptation to our framework of the analogous precongruence property in. let be ccs processes such that, and let be a context with a fresh labeling and in which does not occur. then c[p] \sqsubseteq_{\textrm{may}} c[q]. we have but: c[q] can pass the test with probability by selecting the correct branch of based on the outcome of the probabilistic choice. in and, on the other hand, the scheduler cannot find out whether the second operand of the choice is or unless it commits to selecting the second operand. for example, let. then is not testing equivalent to, since they can be separated by and a scheduler that resolves to and to. however, if we take, then is testing equivalent to, since the scheduler will have to resolve both branches of in the same way (even though we still have nondeterminism). the problem with replication is simply the persistence of the processes. it is clear that cannot be equivalent in any way to, since the first replicates only one of while the second replicates both. however, theorem [thm:distoversum] together with proposition [prop:precongruence] imply that \[ c'[\,l{:}(c[l_1{:}\tau.p] + _p\ c[l_1{:}\tau.q])\,] \ \approx_{\textrm{may}}\ c'[\,c[l{:}(p + _p q)]\,] \qquad \textrm{([eq:twocontexts])} \] where is a context without bang and is a context without.
the same is also true for .this means that we can lift the sum towards the root of the context until we reach a bang .intuitively we can not move the sum outside the bang since each replicated copy must perform a different probabilistic choice with a possibly different outcome .theorem [ thm : distoversum ] shows that the probabilistic choice is indeed private to the process and invisible to the scheduler .the process can perform it at any time , even in the very beginning of the execution , without making any difference to an outside observer .in this section we discuss an application of our framework to anonymity .in particular , we show how to specify the dining cryptographers protocol so that it is robust to scheduler - based attacks .we first propose a method to encode _ secret value passing _ , which will turn out to be useful for the specification : \\l{\hspace{-2pt}:\hspace{-2pt}}\bar{c}{\langlev\rangle}.p & { \overset{\scriptscriptstyle\delta}{= } } & l{\hspace{-2pt}:\hspace{-2pt}}{\overline{cv}}.p\end{aligned}\ ] ] this is the usual encoding of value passing in css except that we use the same label in all the branches of the nondeterministic sum . to ensure that the resulting labeling will be deterministic we should restrict the channels and make sure that there will be at most one output on .we will write for .for example , the labeling of the following process is deterministic : this case is a combination of the cases ( [ eq : detlab1 ] ) and ( [ eq : detlab3 ] ) of proposition [ prop : detlab ] .the two outputs on are on different branches of the probabilistic sum , so during an execution at most one of them will be available .thus there is no ambiguity in scheduling the sum produced by .the scheduler will perform a synchronization on or , whatever is available after the probabilistic choice . in other words , using the labels we manage to hide the information about which value was transmitted to . crypt_i & { \overset{\scriptscriptstyle\delta}{= } } & \underbrace { m_i(pay ) } _ { l_{5,i}}. \underbrace { c_{i , i}(coin_1 ) } _ { l_{6,i}}. \underbrace { c_{i , i\oplus 1}(coin_2 ) } _ { l_{7,i}}. \underbrace { { \overline{out}}_i{\langlepay\otimes coin_1\otimes coin_2\rangle } } _ { l_{8,i } } \\[2pt ] coin_i & { \overset{\scriptscriptstyle\delta}{= } } & l_{9,i}{\hspace{-2pt}:\hspace{-2pt } } ( ( \underbrace { \bar{c}_{i , i}{\langle0\rangle } } _ { l_{10,i } } { \;|\;}\underbrace { \bar{c}_{i\ominus 1,i}{\langle0\rangle } } _ { l_{11,i } } ) + _ { 0.5 } ( \underbrace { \bar{c}_{i , i}{\langle1\rangle } } _ { l_{10,i } } { \;|\;}\underbrace { \bar{c}_{i\ominus 1,i}{\langle1\rangle } } _ { l_{11,i } } ) ) \\[2pt ] prot & { \overset{\scriptscriptstyle\delta}{= } } & ( \nu \vec{m } ) ( master { \;|\;}(\nu \vec{c } ) ( { \textstyle{\prod_{i=0}^{2}\ : } } crypt_i { \;|\;}{\textstyle{\prod_{i=0}^{2}\ : } } coin_i ) ) \end{aligned}\ ] ] the problem of the dining cryptographers is the following : three cryptographers are dining together . at the end of the dinner , the bill has to be paid by either one of them or by another agent called the master .the master decides who will pay and then informs each of them separately whether he has to pay or not .the cryptographers would like to find out whether the payer is the master or one of them .however , in the latter case , they also wish to keep the payer anonymous . 
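before giving the protocol, it is worth checking in miniature why the secret value-passing encoding above matters here. in the toy python sketch below (ours; the label names are hypothetical), the master's announcement to each cryptographer is a synchronisation whose label does not mention the transmitted value, so a label-driven scheduler sees the same schedulable steps whoever pays, while a channel-aware scheduler does not.

def label_view(payer):
    # one synchronisation per channel m_i; under the encoding both the
    # "pay" and "not pay" branches of the output carry the same label,
    # so the pair of labels offered to the scheduler is value-free
    return frozenset(("l_m%d" % i, "l_c%d" % i) for i in range(3))

def channel_view(payer):
    # an unrestricted scheduler effectively sees which value travels on m_i
    return frozenset("m%d<%s>" % (i, "pay" if i == payer else "notpay")
                     for i in range(3))

print(len({label_view(p) for p in range(3)}))      # 1: the payer is hidden
print(len({channel_view(p) for p in range(3)}))    # 3: the payer is leaked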
the dining cryptographers protocol (dcp) solves the above problem as follows: each cryptographer tosses a fair coin which is visible to himself and to his neighbor to the right. each cryptographer checks the two adjacent coins and, if he is not paying, announces _agree_ if they are the same and _disagree_ otherwise. the paying cryptographer, however, says the opposite. it can be proved that if the number of _disagrees_ is even, then the master is paying; otherwise, one of the cryptographers is paying. an external observer is supposed to see only the three announcements. as discussed in, the dcp satisfies anonymity if we abstract from their order. if their order is observable, on the contrary, then a scheduler can reveal the identity of the payer simply by forcing the payer to make his announcement first. of course, this is possible only if the scheduler is unrestricted and can choose its strategy depending on the decision of the master (or on the results of the coins). in our framework we can solve the problem by giving a specification of the dcp in which the choices of the master and of the coins are made invisible to the scheduler. the specification is shown in figure [fig:dining]. we use some meta-syntax for brevity: the symbols and represent addition and subtraction modulo 3, while represents addition modulo 2 (xor). the notation stands for if and otherwise. there are many sources of nondeterminism: the order of communication between the master and the cryptographers, the order of reception of the coins, and the order of the announcements. the crucial points of our specification, which make the nondeterministic choices independent of the probabilistic ones, are: (a) all communications internal to the protocol (master-cryptographers and cryptographers-coins) are done by secret value passing, and (b) in each probabilistic choice the different branches have the same labels. for example, all branches of the master contain an output on, always labeled by, but with different values each time. thanks to the above independence, the specification satisfies strong probabilistic anonymity. there are various equivalent definitions of this property; we follow here the version presented in. let represent an observable (the sequence of announcements), and represent the conditional probability, under the scheduler, that the protocol produces given that the master has selected cryptographer as the payer. the protocol in figure [fig:dining] satisfies the following property: for all schedulers and all observables,. note that different schedulers will produce different traces (we still have nondeterminism), but these will not depend on the choice of the master. some previous treatments of the dcp, including, solved the problem of the leak of information due to too-powerful schedulers by simply considering sets of announcements, rather than sequences, as observables. thus one could think that using a truly concurrent semantics, for instance event structures, would solve the problem. we would like to remark that this is false: true concurrency would weaken the scheduler enough in the case of the dcp, but not in general. for instance, it would not help in the anonymity example in the introduction.
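the anonymity property itself can be checked at the data level with a few lines of python. the simulation below is ours; it verifies that the announcement values carry no information about the payer, abstracting from the ordering issues that the labeling discipline is there to solve.

import itertools
from collections import Counter
from fractions import Fraction as F

def announcements(payer, coins):
    # out_i = pay_i xor coin_i xor coin_{i+1}, with pay_i = 1 iff i pays
    return tuple((1 if i == payer else 0) ^ coins[i] ^ coins[(i + 1) % 3]
                 for i in range(3))

def conditional(payer):
    # distribution over announcement vectors given the payer, coins fair
    dist = Counter()
    for coins in itertools.product((0, 1), repeat=3):
        dist[announcements(payer, coins)] += F(1, 8)
    return dist

dists = [conditional(p) for p in range(3)]
print(dists[0] == dists[1] == dists[2])   # True: observables reveal nothing
print(sorted(dists[0].items()))           # four odd-parity vectors, 1/4 each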
we sketch here a method to hide certain nondeterministic choices from the scheduler as well, and we show an application to the variant of the dining cryptographers with a nondeterministic master. first we need to extend the calculus with the concept of a second, _independent_ scheduler that we assume to resolve the nondeterministic choices that we want to make invisible to the main scheduler. the new syntax and semantics are shown in figure [fig:independent]: \[ \begin{array}{ll} cp ::= p \parallel s,t & \\[6pt] \textrm{indep} & \dfrac{p \parallel t \overset{\alpha}{\longrightarrow} \mu}{l{:}\{p\} \parallel \sigma(l).s,\ t \overset{\alpha}{\longrightarrow} \mu'} \quad \textrm{where } \mu'(p' \parallel s, t') = \mu(p' \parallel t') \end{array} \] represents a process where the scheduling of is protected from the main scheduler. the scheduler can ``ask'' to schedule by selecting the label; then resolves the nondeterminism of, as expressed by the indep rule. note that we also need to adjust the other rules of the semantics to take into account, but this change is straightforward. we assume that does not collaborate with, so we do not need to worry about the labels in. to model the dining cryptographers with a nondeterministic master we replace the process in figure [fig:dining] by the following one. essentially, we have replaced the probabilistic choice by a _protected_ nondeterministic one. note that the labels of the operands are different, but this is not a problem since this choice will be scheduled by. note also that after the choice we still have the same labels; the labeling, however, is still deterministic, similarly to case [eq:detlab2] of proposition [prop:detlab]. in the case of a nondeterministic selection of the culprit and a probabilistic anonymity protocol, the notion of strong probabilistic anonymity has not been established yet, although some possible definitions have been discussed in. our framework makes it possible to give a natural and precise definition: a protocol with nondeterministic selection of the culprit satisfies strong probabilistic anonymity iff for all observables, schedulers, and independent schedulers which select different culprits, we have:. we can prove the above property for our protocol: the dcp with nondeterministic selection of the culprit specified in this section satisfies strong probabilistic anonymity. we have proposed a process-calculus approach to the problem of limiting the power of the scheduler so that it does not reveal the outcome of hidden random choices, and we have shown its applications to the specification of information-hiding protocols.
we have also discussed a feature , namely the distributivity of certain contexts over random choices , that makes our calculus appealing for verification .finally , we have considered the probabilistic testing preorders and shown that they are precongruences in our calculus .our plans for future work are in two directions : ( a ) we would like to investigate the possibility of giving a game - theoretic characterization of our notion of scheduler , and ( b ) we would like to incorporate our ideas in some existing probabilistic model checker , for instance prism .we would like to thank vincent danos for having pointed out to us an attack to the dining cryptographers protocol based on the order of the scheduler , which has inspired this work .10 vardi , m.y . : automatic verification of probabilistic concurrent finite - state programs . in : proc . of the 26th annual symp . on foundations of computer science , ieee computer society press ( 1985 ) 327338 hansson , h. , jonsson , b. : a framework for reasoning about time and reliability . in : proc . of the 10th symposium on real - time systems , ieee computer society press ( 1989 ) 102111 yi , w. , larsen , k.g . : testing probabilistic and nondeterministic processes . in : proc . of the 12th ifip international symposium on protocol specification , testing and verification , north holland ( 1992 ) segala , r. : modeling and verification of randomized distributed real - time systems .phd thesis , department of electrical engineering and computer science , massachusetts institute of technology ( 1995 ) available as technical report mit / lcs / tr-676 .segala , r. , lynch , n. : probabilistic simulations for probabilistic processes .nordic journal of computing * 2 * ( 1995 ) hansson , h. , jonsson , b. : a calculus for communicating systems with time and probabitilies . in : proc . of the 11th symposium on real - time systems , ieee computer society press ( 1990 ) 278287 bandini , e. , segala ,r. : axiomatizations for probabilistic bisimulation . in : proc . of the 28th international colloquium on automata , languages and programming .lncs 2076 , springer ( 2001 ) 370381 andova , s. : probabilistic process algebra .phd thesis , technische universiteit eindhoven ( 2002 ) mislove , m. , ouaknine , j. , worrell , j. : axioms for probability and nondeterminism . in : proc . of the 10th int .wksh . on expressiveness in concurrency( express 03 ) .volume 96 of entcs , elsevier ( 2004 ) palamidessi , c. , herescu , o.m .: a randomized encoding of the -calculus with mixed choice .theoretical computer science * 335 * ( 2005 ) 373404 + http://www.lix.polytechnique.fr/~catuscia / papers / prob_enc / report.% pdf[http://www.lix.polytechnique.fr/~catuscia / papers / prob_enc / report.% pdf ] .deng , y. , palamidessi , c. , pang , j. : compositional reasoning for probabilistic finite - state behaviors . in : processes , terms and cycles : steps on the road to infinity .lncs 3838 . springer ( 2005 ) 309337 + http://www.lix.polytechnique.fr/~catuscia / papers / yuxin / bookjw / par.%pdf[http://www.lix.polytechnique.fr/~catuscia / papers / yuxin / bookjw / par.% pdf ] .sokolova , a. , vink , e.d .: probabilistic automata : system types , parallel composition and comparison . in : validation of stochastic systems : a guide to current research .lncs 2925 .springer ( 2004 ) 143 jonsson , b. , larsen , k.g ., yi , w. : probabilistic extensions of process algebras . in : handbook of process algebras .elsevier ( 2001 ) 685710 chatzikokolakis , k. , palamidessi , c. 
a framework for analyzing probabilistic protocols and its application to the partial secrets exchange. theoretical computer science, to appear. a short version of this paper appeared in the proc. of the symp. on trustworthy global computing, lncs 3705, 146-162, springer (2005). http://www.lix.polytechnique.fr/~catuscia/papers/partialsecrets/tcsreport.pdf
de alfaro, l., henzinger, t.a., jhala, r.: compositional methods for probabilistic systems. in: proceedings of concur 2001. lncs 2154, springer (2001)
mitchell, j.c., ramanathan, a., scedrov, a., teague, v.: a probabilistic polynomial-time process calculus for the analysis of cryptographic protocols. theoretical computer science 353 (2006) 118-164
canetti, r., cheung, l., kaynar, d., liskov, m., lynch, n., pereira, o., segala, r.: task-structured probabilistic i/o automata. in: proc. of the 8th int. workshop on discrete event systems (wodes06) (2006)
canetti, r., cheung, l., kaynar, d.k., liskov, m., lynch, n.a., pereira, o., segala, r.: time-bounded task-pioas: a framework for analyzing security protocols. in: proc. of disc 06. lncs 4167, springer (2006) 238-253
de nicola, r., hennessy, m.c.b.: testing equivalences for processes. theoretical computer science 34 (1984) 83-133
abadi, m., gordon, a.d.: a calculus for cryptographic protocols: the spi calculus. information and computation 148 (1999) 1-70
chaum, d.: the dining cryptographers problem: unconditional sender and recipient untraceability. journal of cryptology 1 (1988) 65-75
bhargava, m., palamidessi, c.: probabilistic anonymity. in: proc. of concur 2005. lncs 3653, springer (2005) 171-185. http://www.lix.polytechnique.fr/~catuscia/papers/anonymity/concur.pdf

in this appendix we give the proof of the main technical result of our paper.

*theorem [thm:distoversum]* let $p$, $q$ be ccs processes and $c$ a context with a fresh labeling and without occurrences of bang. then
$$\begin{aligned}
l{:}\,(c[l_0{:}\,\tau.p] +_p c[l_0{:}\,\tau.q]) &\;\approx_{\textbf{may}}\; c[l{:}\,(p +_p q)] \quad \textrm{and} \\
l{:}\,(c[l_0{:}\,\tau.p] +_p c[l_0{:}\,\tau.q]) &\;\approx_{\textbf{must}}\; c[l{:}\,(p +_p q)]
\end{aligned}$$
since we will always use the label $l$ for the probabilistic sums and $l_0$ for $\tau.p$ and $\tau.q$, we will omit these labels to make the proof more readable. let $r_1 = l{:}\,(c[\tau.p] +_p c[\tau.q])$ and $r_2 = c[p +_p q]$. we will prove that for all tests $o$ and for all schedulers $s_1$ there exists $s_2$ such that $p_\omega(r_1, s_1, o) = p_\omega(r_2, s_2, o)$, and vice versa. this implies both $r_1 \approx_{\textbf{may}} r_2$ and $r_1 \approx_{\textbf{must}} r_2$.
in order for the scheduler of $r_1$ to be non-blocking, it has to be of the form $\sigma(l).s_1$, since the only possible transition of $r_1$ is the probabilistic choice labeled by $l$. by ([eq:pom1]) we have
$$p_\omega(l{:}(c[\tau.p] +_p c[\tau.q]),\ \sigma(l).s_1,\ o) = p\, p_\omega(c[\tau.p], s_1, o) + \bar{p}\, p_\omega(c[\tau.q], s_1, o)$$
the proof will be by induction on the structure of $c$. let $o$ range over tests with fresh labelings, let $s_1$ range over nonblocking schedulers for both $c[\tau.p]$ and $c[\tau.q]$ (such that $\sigma(l).s_1$ is a nonblocking scheduler for $r_1$), and let $s_2$ range over nonblocking schedulers for $c[p +_p q]$. the induction hypothesis is:
$$\begin{array}{l}
(\rightarrow)\ \forall o\ \forall s_1\ \exists s_2 : \\
\qquad p\, p_\omega(c[\tau.p], s_1, o) + \bar{p}\, p_\omega(c[\tau.q], s_1, o) = p_\omega(c[p +_p q], s_2, o) \quad \textrm{and} \\
(\leftarrow)\ \forall o\ \forall s_2\ \exists s_1 : \\
\qquad p\, p_\omega(c[\tau.p], s_1, o) + \bar{p}\, p_\omega(c[\tau.q], s_1, o) = p_\omega(c[p +_p q], s_2, o)
\end{array}$$
we have the following cases for $c$:
* case $c = [\ ]$: the schedulers of $c[\tau.p]$ and $c[\tau.q]$ must be of the form $\sigma(l_0).s$, and the claim follows directly from the semantics of the probabilistic sum.
* case $c = c' \,|\, r$: since we only consider contexts with fresh labelings, $r$ is itself a test, and thus for $(\rightarrow)$ we have
$$\begin{array}{lll}
\multicolumn{3}{l}{p\, p_\omega(c'[\tau.p] \,|\, r,\ s_1,\ o) + \bar{p}\, p_\omega(c'[\tau.q] \,|\, r,\ s_1,\ o)} \\
& = p_\omega(c'[p +_p q],\ s_2,\ r \,|\, o) & \textrm{ind. hyp.} \\
& = p_\omega(c'[p +_p q] \,|\, r,\ s_2,\ o) & \textrm{(eq3)} \\
& = p_\omega(r_2,\ s_2,\ o)
\end{array}$$
for $(\leftarrow)$ we can perform the above derivation in the opposite direction.
* case $c = l_1{:}(c' +_q r)$: since we consider only contexts with fresh labelings, the labels of $c'$ are disjoint from those of $r$; thus the scheduler of a process of the form $l_1{:}(c'[x] +_q r)$ can be decomposed into a scheduler $s_c$ containing labels of $c'$ and a scheduler $s_r$ containing labels of $r$. moreover
$$\begin{aligned}
p_\omega(l_1{:}(c'[x] +_q r),\ s,\ o) &= q\, p_\omega(c'[x],\ s_c + s_r,\ o) + \bar{q}\, p_\omega(r,\ s_c + s_r,\ o) \\
&= q\, p_\omega(c'[x],\ s_c,\ o) + \bar{q}\, p_\omega(r,\ s_r,\ o) \qquad \textrm{(eq:pom3)}
\end{aligned}$$
as a consequence, the scheduler of $l_1{:}(c'[\tau.p] +_q r)$ has to be of the form $\sigma(l_1).(s_c + s_r)$. for $(\rightarrow)$ we have
$$\begin{array}{ll}
\multicolumn{2}{l}{p\, p_\omega(l_1{:}(c'[\tau.p] +_q r),\ s_1,\ o) + \bar{p}\, p_\omega(l_1{:}(c'[\tau.q] +_q r),\ s_1,\ o)} \\
\quad = p_\omega(l_1{:}(c'[p +_p q] +_q r),\ \sigma(l_1).(s_c' + s_r),\ o) & \textrm{(eq:pom3)} \\
\quad = p_\omega(r_2,\ s_2,\ o)
\end{array}$$
for $(\leftarrow)$ we can perform the above derivation in the opposite direction, given that a scheduler for $l_1{:}(c'[p +_p q] +_q r)$ can always be decomposed in the same way.
* case $c = c' + r$: the scheduler of this process has to choose between the two summands. either it selects a transition of $r$, in which case
$$p_\omega(c'[\tau.p] + r,\ s_r,\ o) = p_\omega(r,\ s_r,\ o) \qquad \textrm{(eq4)}$$
or it selects a transition of $c'$, that is $(c'[\tau.p] + r \,|\, o) \parallel s_c \overset{\alpha}{\longrightarrow} \mu$, in which case
$$p_\omega(c'[\tau.p] + r,\ s_c,\ o) = p_\omega(c'[l_0{:}\tau.p],\ s_c,\ o) \qquad \textrm{(eq5)}$$
now consider the process $c'[p +_p q] + r$: the scheduler $s_2$ will select the corresponding summand, and the result follows from (eq4), (eq5) and the induction hypothesis.
* case $c = (\nu a)\,c'$: the process $(\nu a)\,c'[x] \,|\, o$ has the same transitions as $c'[x] \,|\, (\nu a)o$. the result follows by the induction hypothesis.
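to make the strong probabilistic anonymity condition of the previous section concrete, the following python sketch (a minimal illustration, not part of the calculus) simulates a three-cryptographer dining-cryptographers ring and checks that the distribution of announcements does not depend on which culprit a hypothetical independent scheduler selects; the ring size, the fair-coin model and all function names are assumptions of the sketch.

```python
import itertools
from collections import Counter

def announcements(culprit, coins):
    """dining cryptographers: each adjacent pair shares a coin;
    cryptographer i announces the xor of her two coins, flipped if she pays."""
    n = len(coins)
    return tuple(coins[i] ^ coins[(i - 1) % n] ^ (1 if i == culprit else 0)
                 for i in range(n))

def observable_distribution(culprit, n=3):
    """distribution over announcement vectors for fair, hidden coins."""
    dist = Counter()
    for coins in itertools.product([0, 1], repeat=n):
        dist[announcements(culprit, coins)] += 1
    total = sum(dist.values())
    return {obs: c / total for obs, c in dist.items()}

# strong probabilistic anonymity: the observable distribution must not
# depend on which culprit the independent scheduler selects.
dists = [observable_distribution(c) for c in range(3)]
assert all(d == dists[0] for d in dists)
print("observables equally likely under every culprit:", dists[0])
```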
|
when dealing with process calculi and automata which express both nondeterministic and probabilistic behavior, it is customary to introduce the notion of scheduler to solve the nondeterminism. it has been observed that for certain applications, notably those in security, the scheduler needs to be restricted so as not to reveal the outcome of the protocol's random choices, since otherwise the model of the adversary would be too strong even for "obviously correct" protocols. we propose a process-algebraic framework in which the control on the scheduler can be specified in syntactic terms, and we show how to apply it to solve the problem mentioned above. we also consider the definition of (probabilistic) may and must preorders, and we show that they are precongruences with respect to the restricted schedulers. furthermore, we show that all the operators of the language, except replication, distribute over probabilistic summation, which is a useful property for verification.
|
the evolving nature of digital collections comes with an extra difficulty: due to various but constant influences inherent in updates, the interpretability of the data keeps on changing. this manifests itself as concept drift or semantic drift, the gradual change of a concept's semantic value as it is perceived by a community. despite terminology differences, the problem is real, and with the increasing scale of digital collections its importance is expected to grow. if we add drifts in cultural values as well, the fallout from their combination brings memory institutions into a vulnerable position as regards long-term digital preservation. we illustrate this on a museum example, the subject index of the tate galleries, london. in our example, semantic drifts lead to limited access by information retrieval (ir). the methodology we apply to demonstrate our point is vector field semantics by emergent self-organizing maps (esom), because the interpretation of semantic drift needs a theory of update semantics integrated with a vector field rather than a vector space representation of content. further, given such content dynamics, we argue that for its modeling one can fall back on tested concepts from classical (newtonian) mechanics and differential geometry. in such a framework, e.g., similarity between objects or features can be considered an attractive force, and changes over time, manifest in content drifts, have a quasi-physical explanation. the main contributions of this paper are the following: 1. a methodology for the detection, measurement and interpretation of semantic drift; 2. on drift examples, an improved understanding of how semantic content as a vector field 'behaves' over time, by falling back on physics as a metaphor; 3. as a consequence of the above, the concept of semantic potential as a combined measure of semantic relatedness and semantic importance. evolving semantics (also often referred to as 'semantic change') is an active and growing area of research into language change that observes and measures the phenomenon of changes in the meaning of concepts within knowledge representation models, along with their potential replacement by other meanings over time. it can therefore have drastic consequences on the use of knowledge representation models in applications. semantic change relates to various lines of research such as ontology change, evolution, management and versioning, but the field also uses ambiguous terms of slightly different meanings, interchanging shifts with drifts and versioning, applied to concepts, semantics and topics, always in relation to the thematic composition of collections. a related term is semantic decay as a metric: it has been empirically shown that the more a concept is reused, the less semantically rich it becomes. though largely counter-intuitive, this observation is based on the fact that frequent usage of terms in diverse domains leads to relaxing the initially strict semantics attached to them. the opposite would hold if a term were persistently used within a single domain (or in domains similar to a great extent), which would lead to its gradual specialization and the enrichment of its semantics. here we mention four relevant directions, all of them contributing to our understanding of a complex issue in their overlap.
by advanced access to digital collections we mean the spectrum of automatic indexing, automatic classification, ir, and information visualization. all of the aforementioned can have a temporal aspect: trend analysis, the emergence of concepts or ideas, representation of the past and the future, network dynamics, the shaping and decay of communities, and in general, any web research topic where a dynamic understanding is superior to a static view requires integration of the time dimension. examples comprise, e.g., the presentation, organization and exploration of search results in the context of web dynamics and analytics, including the dynamics of user behaviour; interacting with ephemeral content of the historical web; visualizing the evolution of image content tags; or temporal topic detection without citation analysis. a related but separate research area for the above is in the overlap of cultural heritage and ir. for an ir model to be successful, its relationship with at least one major theory of word meaning has to be demonstrated. with no such connection, meaning in numbers becomes the puzzle of the ghost in the machine. for the vector space ir model (vsm), underlying many of today's competitive ir products and services, such a connection can be demonstrated; for others like pagerank, the link between graph theory and linear algebra leads to the same interpretation. namely, in both cases, the theory of word semantics cross-pollinating numbers with meaning is of a contextual kind, formalized by the distributional hypothesis, which posits that words occurring in similar contexts tend to have similar meanings. as a result, the respective models can imitate the field-like continuity of conceptual content. however, unless we consider the vsm roots of both the probabilistic relevance model and its spinoffs, including bm25, such a link is still waiting to be shown between probability and semantics. although several attempts exist to this end, a brief overview should be helpful. looking for a good fit with some reasonably formalized theory of semantics, two immediate questions emerge. first, can the observed features be regarded as entries in a vocabulary? if so, distributional semantics applies and, given more complex representations, other types may do so as well. the second question is, do they form sentences? for example, one could regard a workflow (process) as a sentence, in which case compositional semantics applies.
if not, only theories of word semantics should be considered. below we shall depart from this assumption. notwithstanding the fact that vector space in its most basic form is not semantic, its ability to yield results which make sense goes back to the fact that the context of sentence content is partially preserved even after eliminating stop-words, which are useless for document indexing. this means that wittgenstein's contextual theory of meaning ('meaning is use') holds, as also pronounced by the distributional hypothesis. this is exploited by more advanced vector-based indexing and retrieval models such as latent semantic analysis (lsa) or random indexing, as well as by neural language models, ranging from simple recurrent networks and their very popular flavour, long short-term memory, to the recently proposed global vectors for word representation, which are currently considered to be the state-of-the-art approach for text representation. however, we should also remember another approach, paraphrased as 'meaning is change', namely the stimulus-response theory of meaning proposed e.g. by bloomfield in anthropological linguistics and morris in behavioral semiotics, plus the biological theory of meaning. these authors stress that the meaning of an action is in its consequences. consequently, word semantics should be represented not as a vector space with position vectors only, but as a dynamic vector field with both position and direction vectors. as white suggests, linguistics, like physics, has four binding forces: 1. the strong nuclear force, which is the strongest 'glue' in physics, corresponds to word uninterruptability (binding morphemes into words); 2. electromagnetism, which is less strong, corresponds to grammar and binds words into sentences; 3. the weak nuclear force, being even less strong, compares to texture or cohesion (also called coherence), binding sentences into texts; 4. finally gravity, as the weakest force, acts like intercohesion or intercoherence, which binds texts into literatures (i.e. documents into collections or databases). mainstream linguistics traditionally deals with forces 1 and 2, while discourse analysis and text linguistics are particularly concerned with force 3. the field most identified with the study of force 4 is information science. as the concept of force implies, referring here to attraction, it takes energy to keep things together; therefore the energy doing so is stored in agglomerations of observables of different kinds in different magnitudes, and can be released from such structures. a notable difference between physical and linguistic systems is that extracting work content, i.e. 'energy', from symbols by reading or copying them does not annihilate symbolic content. looking now at the same problem from another angle, in the above and related efforts 'energy' inherent in all four types can be the model of, e.g., a type 2, i.e. electromagnetism-like, attractive-repulsive binding force such as lexical attraction, also known as syntactic word affinity or sentence cohesion, e.g. by modeling dependency grammar by mutual information. in a text categorization and/or ir setting, a similar phenomenon is term dependence based on term co-occurrence.
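as a rough illustration of the attraction statistics mentioned above, the sketch below computes pointwise mutual information between word pairs from a toy corpus as a stand-in for lexical attraction; the corpus, the sentence-sized context window and all names are illustrative assumptions of the sketch rather than anything used in this study.

```python
import math
from collections import Counter
from itertools import combinations

# toy corpus standing in for real text; each sentence is one context window
corpus = [
    "strong tea with milk".split(),
    "strong coffee with milk".split(),
    "powerful engine with turbo".split(),
]

word_counts = Counter(w for sent in corpus for w in sent)
pair_counts = Counter(frozenset(p) for sent in corpus
                      for p in combinations(sent, 2) if p[0] != p[1])
n_words = sum(word_counts.values())
n_pairs = sum(pair_counts.values())

def pmi(w1, w2):
    """pointwise mutual information: log p(w1,w2) / (p(w1) p(w2))."""
    p_joint = pair_counts[frozenset((w1, w2))] / n_pairs
    p1, p2 = word_counts[w1] / n_words, word_counts[w2] / n_words
    return math.log(p_joint / (p1 * p2)) if p_joint > 0 else float("-inf")

# 'strong' attracts 'tea'/'coffee' more than 'engine' in this toy sample
for w in ("tea", "coffee", "engine"):
    print(f"pmi(strong, {w}) = {pmi('strong', w):+.2f}")
```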
a radial basis function (rbf) kernel, being an exponentially decaying feature transformation, has the capacity to generate a potential surface and hence create the impression of gravity, providing one with distance-based decay of interaction strength plus a scalar scaling factor for the interaction, i.e. $k(x, y) = \exp(-\gamma\, \lVert x - y \rVert^2)$. we know that semantic kernels and the metric tensor are related, hence some kind of functional equivalent of gravitation shapes the curvature of classification space. at the same time, gravitation as a classification paradigm or a clustering principle is considered a model for certain symptoms of content behavior. in order to combine semantics from computational linguistics with evolution, we select the theory of semantic fields and blend it with multivariate statistics plus the concept of fields in classical mechanics, to bring it closer to veltman's update semantics and to enable machine learning. our working hypothesis for experiment design is as follows: * semantic drifts can be modeled on an evolving vector field, as suggested by earlier work; * to follow up on the analogy of semantic kernels defining the curvature of classification space, and to let this curvature evolve, newton's universal law of gravitation can be adapted to the idea of the dynamic library. to this end, we model similarity by $f = g\, m_1 m_2 / d^2$, with term dislocations over epochs stored in distance matrices. ignoring $g$, we shall use the pagerank values of index terms on their respective hierarchical levels as the mass values. since force is the negative gradient of potential, i.e. $f = -\nabla u$, we can compute this potential surface over the respective term sets to conceptualize the driving mechanism of semantic drifts; * the potential following from the gravity model manifests two kinds of interaction between entries in the indexing vocabulary of a collection. over time, changes in collection composition lead to different proportions of semantic similarity vs. authenticity between term pairs, expressed as a cohesive force between features and/or objects. in the various flavours of the vsm, we work with an $n \times m$ matrix in which columns are indexed by documents and rows by terms. we shall focus here on the term vectors only, which identify specific locations in the space spanned by the documents. a scalar or vector field is defined at all points in space, so it is insufficient to have a value only at the discrete locations identified by the term vectors. to assign a vector value to each point in space, we work on a two-dimensional surface. all term vectors have a location on this surface. all the other points on the surface which do not have a vector assigned to them are interpolated. the assignment between points on the surface and the term vectors is done by training a self-organizing map, that is, a grid of artificial neurons. each node in the grid is associated with a weight vector of the same dimensionality as the term vectors. taking a term vector, we search for the closest weight vector and pull it slightly closer to the term vector, repeating the procedure with the weight vectors of the neighboring neurons, with decreasing weight as we get further away from the best matching unit. then we take the next term vector and repeat this, from finding the best matching unit onward, until every term vector is processed. we call a training round that uses all term vectors an epoch. we can have subsequent training epochs with a smaller neighborhood radius and a lower learning rate.
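the training loop just described can be condensed into a few lines of numpy; the grid size, learning-rate schedule and gaussian neighborhood below are illustrative choices of this sketch, not the settings used later in the experiments.

```python
import numpy as np

def train_som(term_vectors, rows=10, cols=10, epochs=20,
              radius0=5.0, lr0=0.5, seed=0):
    """minimal self-organizing map: one weight vector per grid node."""
    rng = np.random.default_rng(seed)
    dim = term_vectors.shape[1]
    weights = rng.normal(size=(rows, cols, dim))
    grid = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols),
                                 indexing="ij")).astype(float)
    for epoch in range(epochs):
        radius = radius0 * (1 - epoch / epochs) + 1e-3   # shrink neighborhood
        lr = lr0 * (1 - epoch / epochs) + 1e-3           # lower learning rate
        for x in term_vectors:
            # best matching unit: node whose weight vector is closest to x
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # gaussian neighborhood: pull the bmu and its neighbors towards x
            g = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=2)
                       / (2 * radius ** 2))
            weights += lr * g[..., None] * (x - weights)
    return weights

# toy usage: 30 random 'term vectors' in a 5-dimensional document space
som = train_som(np.random.default_rng(1).normal(size=(30, 5)))
print(som.shape)  # (10, 10, 5)
```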
while there is no criterion for convergence, we can continue training epochs until the topology of the network no longer shows major changes. the resulting map reflects the local topology of the original high-dimensional space. since we would like to train large maps to get a meaningful approximation of the space between term vectors, we turn to a high-performance implementation called somoclu. the task of drift detection, measurement and interpretation is carried out in three basic steps as follows: * step 1: somoclu maps the high-dimensional topology of multivariate data to a low-dimensional (2-d) embedding by esom. the algorithm is initialized by lsa, principal component analysis (pca), or random indexing, and creates a vector field over a rectangular grid of nodes of an artificial neural network, adding continuity by interpolation among grid nodes. due to this interpolation, content is mapped onto those nodes of the neural network that represent best matching units (bmus). * step 2: clustering over this low-dimensional topology marks up the cluster boundaries to which the bmus belong. their clusters are located within ridges or watersheds. content-splitting tendencies are indicated by the ridge wall width and height around such basins, so that the method yields an overlay of two aligned contour maps changing over time, i.e. content structure vs. tension structure. in somoclu, nine clustering methods are available. because self-organizing maps, including esom, reproduce the local but not the global topology of the data, the clusters should be locally meaningful and consistent on a neighborhood level only. * step 3: evolving cluster interpretation by semantic consistency check, which can be measured relative to an anchor (non-shifting) term used as the origin of the 2-d coordinate system, or by distance changes from a cluster centroid, etc. in parallel, to support semiautomatic evaluation, variable cluster content can be expressed for comparison by histograms, pie diagrams, or other visualization methods. tate holds the national collection of british art from 1500 to the present day and international modern and contemporary art. the collection embraces all media, from painting, drawing, sculpture and prints to photography, video and film, installation and performance. the 19th century holdings are dominated by the turner bequest, with circa 30,000 works of art on paper, including watercolors, drawings and 300 oil paintings. the catalog metadata for the 69,202 artworks that tate owns or jointly owns with the national galleries of scotland are available in json format as open data. out of the above, 53,698 records are timestamped. the artefacts are indexed by tate's own hierarchical subject index, which has three levels, from general to specific index terms. to study the robust core of a dynamically changing indexing vocabulary, we filtered the dataset for a start. as statistics for the tate holdings show two acquisition peaks in 1796-1844 (33,625 artworks) and 1960-2009 (12,756 artworks), we focused on these two periods broken down into ten five-year epochs each, with altogether 46,381 artworks. in the 19th century period, subject index level 1 had 22 unique general index terms (21 of them persistent over ten epochs), level 2 had 203 unique intermediate index terms (142 of them persistent), and level 3 had 6,624 unique specific index terms (225 of them persistent).
in the 20th century period, level 1 had 24 unique terms (22 of them persistent), level 2 used 211 unique terms (177 of them persistent), and level 3 had 7,536 unique terms (288 of them persistent over ten epochs). table [tab:1] displays a sample entry from the subject index. following text pre-processing, which included tokenization and stop-word removal on all three levels of concepts in the subject index, adjacency matrices and subsequently graphs were created using the co-occurrence of the terms in the artworks as undirected, weighted edges. these matrices were then used to extract an importance measure for each term by employing the pagerank algorithm, and to create esom maps using the somoclu implementation. for each of the 80 epochs (2 periods x 4 levels x 10 epochs), the esom's codebook was first initialized by employing pca with randomized svd, which was then used for mapping the high-dimensional co-occurrence data to an esom with a toroid topology. the results were represented on the two-dimensional projection of the toroid using different granularities according to the indexing level (20x12 for level 1, 40x24 for level 2, 50x30 for level 3, 60x40 for all levels together). introducing the least displaced term per indexing level over a period as an anchor, against which all term drifts on that level could be measured, we tracked the tension vs. content structure of evolving term semantics and evaluated the resulting term clusters for their semantic consistency. table [tab:1]: sample index terms describing a turner self-portrait (entry: 'artist, painter'). the input matrices were processed by somoclu as described above, and the codebook of each esom was clustered using the affinity propagation algorithm. the results were tested for robustness by hierarchical cluster analysis (hca), using euclidean distance as the similarity measure and farthest neighbor (complete) linkage to maximize the distance between clusters, thereby keeping them both distinct and coherent. the esom-based cluster maps expressed the evolving semantics of the collection as a series of 2-dimensional landscapes over 10 epochs times two periods. term drift detection, measurement and interpretation were based on these maps. to enable drift measurement, we generated a parallel set of maps with the term of greatest importance over all periods as an anchor point. importance was defined by the reciprocal rank fusion coefficient, which combined the pagerank values of each term over all periods. this relative location was used for the computation of the respective term-term distance matrices over every epoch of each period. term dislocations over epochs were logged, recording both the splits of term clusters mapped onto a single grid node in a previous epoch, and the merger of two formerly independent nodes labelled with different terms into a single one. these splits and merges were used to define the drift rate and subsequently the stability of the lexical field. finally, as per the second point of the working hypothesis, the gravity and potential surfaces for every epoch were computed.
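a minimal sketch of that computation is given below, assuming a toy co-occurrence graph, networkx's pagerank for 'term mass' and random stand-ins for the bmu coordinates; the actual matrices and grid sizes are those described above.

```python
import numpy as np
import networkx as nx

# toy co-occurrence graph over index terms (edge weight = co-occurrence count)
g = nx.Graph()
g.add_weighted_edges_from([("sea", "boat", 5), ("sea", "coast", 3),
                           ("boat", "coast", 2), ("portrait", "artist", 4)])
mass = nx.pagerank(g, weight="weight")        # pagerank score as 'term mass'
terms = sorted(g.nodes())

# stand-in for the terms' bmu coordinates on the 2-d esom grid
rng = np.random.default_rng(0)
pos = {t: rng.uniform(0, 30, size=2) for t in terms}

def dist(t1, t2):
    return np.linalg.norm(pos[t1] - pos[t2])

# pairwise attraction f = m1 * m2 / d**2 (the constant g is ignored) and the
# potential u = -m1 * m2 / d that generates it, summed per term
attraction = {(t1, t2): mass[t1] * mass[t2] / dist(t1, t2) ** 2
              for t1 in terms for t2 in terms if t1 != t2}
potential = {t1: -sum(mass[t1] * mass[t2] / dist(t1, t2)
                      for t2 in terms if t2 != t1) for t1 in terms}

for t in terms:
    print(f"{t:10s} mass={mass[t]:.3f} potential={potential[t]:+.4f}")
```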
when computing gravity and potential, the property of mass was expressed via each term's pagerank score, and the distance by measuring the normalized (sum to 1) euclidean distance between the corresponding bmu vectors. index term drift detection, measurement and evaluation were based on the analysis of the esom maps, leading to drift logs on all indexing levels. parallel to that, covering every time step of collection development, we also extracted normalized histograms to describe the evolving topical composition of the collection, and respective pie charts to describe the thematic composition of the clusters. further, to check cluster robustness, hca dendrograms were computed for term-term matrices and compared with those from term-document matrices. on the one hand, these gave us a detailed overview of semantic drift in the analyzed periods. on the other hand, the observed dynamics could be modeled on the gravitational force and its generating potential. a more detailed report would go beyond the scope of this paper; however, some key indications were the following. content mapping means that term membership for every cluster in every time step is recorded, and term positions and dislocations over time with regard to an anchor position are computed, thereby recording the evolving distance structure of the indexing terminology. this amounts to drift detection and its exact measurement. adding a drift log results in extracted lists of index terms on all indexing hierarchy levels, plus their percentage contrasted with the totals. drifts can be partitioned into splits and merges. in case of a split, two concept labels that used to be mapped on the same grid node in one epoch become separated and tag two nodes in the next phase, while for a merge the opposite holds. from an ir perspective, splits decrease recall and merges decrease precision, limiting the quality of access; from the perspective of long-term digital preservation, they indicate at-risk indexing terminology. splits and merges were listed by somoclu for every epoch over both periods. for instance, a sample semantic drift log file recorded that, due to new entries in the catalog in 1796-1800, several level 2 subject index terms had drifted by 1800, i.e. 'art', 'works', 'scientific', 'measuring', 'monuments', 'places', 'workspaces'. therefore, based on the same subject index terms, anyone using this tool in 1800 would have been unable to retrieve the same objects as in 1796. in a vector field, all the terms and their respective semantic tags are in constant flux due to external social pressures, such as new topics over items in the collection due to the composition of donations, or fashion. without data about these pressures effectively embedding and shaping the tate collection, the correlations between social factors and the semantic composition of the collection could not be explicitly computed and named. still, some trends could be visually recognized over both series of maps, going back to their relatively constant semantic structure, where temporary content dislocations did not seriously disturb the relationships between terms, i.e. neighboring labels tended to stick with one another, such as 'towns, cities, villages' vs.
'inland' and 'natural'. in other words, the lexical fields as locally represented by somoclu remained relatively stable. the stability of these fields was measured in terms of drift rates, which were computed by detecting the splits and merges that happened to the bmus (e.g. figure [fig:1]). specifically, we were not looking at the distance they travelled, but rather at the fact that they formed, joined or moved away from a cluster (i.e. a bmu) in between epochs. overall, in this particular collection, splits between level 1 concepts took place occasionally, whereas both splits and merges occurred on indexing levels 2-3 on a regular basis. the drift rate was increasingly high: for level 2 index terms, it was 19-22% in the 1796-1845 period vs. 15-27.5% in 1960-2009, whereas for level 3 terms it was 29-57% (1796-1845) vs. 54-61% (1960-2009). these percentages suggest that the more specific the subject index becomes, the more volatile its terminology, especially with regard to modern art. to describe the composition of the social tensions shaping this collection, one can compare e.g. the level 2 indexing vocabularies for both periods. in general, this is where one witnesses the workings of language change, partly producing new concepts, partly letting certain index terms decay: e.g., focus shifting from a concept to its variant (e.g. 'nation' to 'nationality'), a renaissance of interest in the transcendent beyond traditional notions of religion and the supernatural ('occultism', 'magic', 'tales'), fascination for the new instead of the old, or a loss of interest in 'royalty' and 'rank'. toys and concepts like 'tradition', the 'world', 'culture', 'education', 'films', 'games', 'electricity' and 'appliances' make a debut in art. a representation of such tendencies in content change with manifest tensions is visualized in figure [fig:1]. here, tendency means a projected possible, but not necessarily continuous, trend: should the composition of the collection continue to evolve over the next epoch like it used to develop over the past one, the indicated splits and merges would be more probable to form new content agglomerations than random ones. as we were left with the impression that in a statistically constructed vector field of term semantics drifts are the norm and not the exception, to account for such dynamics we computed a series of epoch-specific gravitational fields and their generating potential for a first overview. with bmu vector distances between term pairs and their pagerank values for 'term mass', both types of surfaces expressed the interplay between semantic similarity and term importance in a social perspective (figure [fig:2], panels (a) and (b)). in the above test, we resolved semantic drift detection and drift measurement, and partly resolved drift interpretation by the automatic evaluation of term cluster consistency.
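the split/merge bookkeeping behind these drift rates can be sketched as follows; the per-epoch term-to-bmu assignments and the drift-rate normalization are illustrative assumptions standing in for the logs produced by the trained maps.

```python
from collections import defaultdict

def group_by_bmu(assignment):
    """invert a term -> bmu mapping into bmu -> set of terms."""
    groups = defaultdict(set)
    for term, bmu in assignment.items():
        groups[bmu].add(term)
    return groups

def splits_and_merges(prev, curr):
    """count grid nodes whose co-located terms changed between epochs."""
    prev_g, curr_g = group_by_bmu(prev), group_by_bmu(curr)
    splits = sum(1 for terms in prev_g.values()
                 if len({curr[t] for t in terms}) > 1)    # one node -> many
    merges = sum(1 for terms in curr_g.values()
                 if len({prev[t] for t in terms}) > 1)    # many nodes -> one
    return splits, merges

# toy epoch assignments: term -> bmu coordinate on the grid
epoch_1800 = {"art": (3, 4), "works": (3, 4), "places": (7, 1), "rank": (2, 2)}
epoch_1805 = {"art": (3, 4), "works": (5, 9), "places": (2, 2), "rank": (2, 2)}

s, m = splits_and_merges(epoch_1800, epoch_1805)
rate = (s + m) / len(epoch_1800)   # one possible drift-rate normalization
print(f"splits={s} merges={m} drift rate={rate:.0%}")
```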
for the detection task, our detailed and thoroughly documented findings indicated that in an evolving collection, as could be expected from the idea of the dynamic library where a vector space update results in displaced cluster centroids, drifts occur on a regular basis and become more frequent with increasing index term specificity. apart from surveying the evolving semantic content structure, somoclu also mapped the parallel evolution of the classification tension structure, a precondition to future modeling and anomaly prediction. further, we computed those evolving epoch-specific potential surfaces whose negative gradient was term similarity combined with term importance, acting as an attractive force between feature or object pairs. this potential can be seen as the conceptual consequence of the semantic differential, a forerunner to modern latent semantic methods. this semantic potential, in turn, suggests that physics as a metaphor is useful because it yields new, helpful concepts to model the dynamics of meaning, itself important for knowledge organization and knowledge management. our effort belongs to the field of _social mechanics_, a 21st century repercussion of ideas dating back as far as 1769, when the american political theorist james madison (1751-1836), the so-called 'father of the constitution' and the united states' fourth president, was said to be studying a primitive form of it at princeton. after him and over the centuries to come, prominent thinkers often tried to understand society's workings, e.g. by means of thermodynamics or mechanics. in our implementation, social mechanics is a variant of classical mechanics because the concept of mass we apply to features in general and index terms in particular is a relative (evolving) one, depending on language use as its social context and implemented by the distributional hypothesis. by doing so, the 'meaning as change' paradigm receives experimental support inasmuch as 'term mass' corresponds to work investment during update, with the reconfiguration of semantic spaces and fields being proportional to it. exploring the semantic potential further, by connecting measures of semantic relatedness with centrality values such as pagerank for 'term mass', will be the subject of future research, with substantial input expected from related lines of work. this research received funding from the european commission seventh framework programme under grant agreement number fp7-601138 pericles. sándor darányi is grateful to emma tonkin (university of bristol) for early discussions on the subject.

e. adar, m. dontcheva, j. fogarty, and d. s. weld. zoetrope: interacting with the ephemeral web. in _proceedings of the 21st annual acm symposium on user interface software and technology_, pages 239-248. acm, 2008.
d. beeferman, a. berger, and j. lafferty. a model of lexical attraction and repulsion. in _proceedings of acl-97, 35th annual meeting of the association for computational linguistics_, pages 373-380, july 1997.
g. v. cormack, c. l. clarke, and s. buettcher. reciprocal rank fusion outperforms condorcet and individual rank learning methods. in _proceedings of the 32nd international acm sigir conference on research and development in information retrieval_, pages 758-759. acm, 2009.
f. de jong, h. rode, and d. hiemstra. temporal language models for the disclosure of historical text.
in _humanities, computers and cultural heritage: proceedings of the xvith international conference of the association for history and computing (ahc 2005)_, pages 161-168, amsterdam, the netherlands, september 2005. royal netherlands academy of arts and sciences.
i. frommholz, b. larsen, b. piwowarski, m. lalmas, p. ingwersen, and k. van rijsbergen. supporting polyrepresentation in a quantum-inspired geometrical retrieval framework. in _proceedings of the third symposium on information interaction in context_, pages 115-124. acm, 2010.
p. kanerva, j. kristofersson, and a. holst. random indexing of text samples for latent semantic analysis. in _proceedings of cogsci-00, 22nd annual conference of the cognitive science society_, volume 1036, 2000.
a. meroño-peñuela, c. guéret, r. hoekstra, and s. schlobach. detecting and reporting extensional concept drift in statistical linked data. in _1st international workshop on semantic statistics (semstats 2013), iswc. ceur_, 2013.
a. moschitti. kernel engineering for fast and easy design of natural language applications. in _proceedings of the 23rd international conference on computational linguistics: kernel engineering for fast and easy design of natural language applications_, pages 191. association for computational linguistics, 2010.
s. pulman. distributional semantic models. in c. heunen, m. sadrzadeh, and e. grefenstette, editors, _quantum physics and linguistics: a compositional, diagrammatic discourse_, pages 333-358. oxford university press, isbn 978-0-19-964629-6, 2013.
k. radinsky, f. diaz, s. dumais, m. shokouhi, a. dong, and y. chang. temporal web dynamics and its application to information retrieval. in _proceedings of the sixth acm international conference on web search and data mining_, pages 781-782. acm, 2013.
m. sadrzadeh and e. grefenstette. a compositional distributional semantics, two concrete constructions, and some experimental evaluations. in _proceedings of qi-11, 5th international quantum interaction symposium_, aberdeen, uk, june 2011.
b. shaparenko, r. caruana, j. gehrke, and t. joachims. identifying temporal patterns and key players in document collections. in _proceedings of the ieee icdm workshop on temporal data mining: algorithms, theory and applications (tdm-05)_, pages 165-174, 2005.
a. tosi, i. olier, and a. vellido. probability ridges and distortion flows: visualizing multivariate time series using a variational bayesian manifold learning method. in _advances in self-organizing maps and learning vector quantization_, pages 55-64. springer, 2014.
m. uschold. creating, integrating and maintaining local and global ontologies. in _proceedings of the first workshop on ontology learning (ol-2000) in conjunction with the 14th european conference on artificial intelligence (ecai-2000)_. citeseer, 2000.
h. white. cross-textual cohesion and coherence. in _proceedings of the workshop on discourse architectures: the design and analysis of computer-mediated conversation_, minneapolis, mn, usa, april 2002.
p. wittek, s. darányi, e. kontopoulos, t. moysiadis, and i. kompatsiaris. monitoring term drift based on semantic consistency in an evolving vector field. in _proceedings of ijcnn-15, international joint conference on neural networks_, 2015.
p. wittek, b. koopman, g. zuccon, and s. darányi. combining word semantics within complex hilbert space for information retrieval. in _proceedings of qi-13, 7th international quantum interaction symposium_, pages 160-171, july 2013.
|
in accessibility tests for digital preservation, over time we experience drifts of localized and labelled content in statistical models of evolving semantics represented as a vector field. this articulates the need to detect, measure, interpret and model the outcomes of knowledge dynamics. to this end we employ a high-performance machine learning algorithm for the training of extremely large emergent self-organizing maps for exploratory data analysis. the working hypothesis we present here is that the dynamics of semantic drifts can be modeled on a relaxed version of newtonian mechanics called social mechanics. by using term distances as a measure of semantic relatedness vs. their pagerank values indicating social importance, applied as a variable 'term mass', gravitation as a metaphor to express changes in the semantic content of a vector field lends a new perspective for experimentation. from 'term gravitation' over time, one can compute its generating potential, whose fluctuations manifest modifications in pairwise term similarity vs. social importance, thereby updating osgood's semantic differential. the dataset examined is the public catalog metadata of the tate galleries, london.
|
schrödinger's cat paradox dramatically illustrates a macroscopic object being in a quantum superposition of two macroscopically different states. although this famous thought experiment depicts an extreme example, the existence of such superpositions and entanglement at macroscopic levels is not excluded by quantum theory. considerable experimental efforts have gone into pushing the envelope by superposing ever larger quantum systems. there have also been attempts to characterize and quantify quantumness in a macroscopic sense. several general measures for quantifying such _quantum macroscopicity_ have been suggested in recent studies. however, those measures tend to operate within quite different contexts, such as interference in phase space, usefulness for quantum metrology, and the minimal modification of quantum theory. meanwhile, a resource theory of quantum coherence has recently been proposed, in which the amount of quantum coherence is quantified as a physical resource to achieve tasks beyond classical types of resources. in this viewpoint, recent studies have discovered connections between quantum coherence and other fields of resource theory, including quantum correlation and quantum thermodynamics. recently, an axiomatic approach towards macroscopic quantum coherence was suggested and several existing measures were investigated based on it. in this paper, we suggest a measure of macroscopic coherence based on the state disturbance induced by a coarse-grained measurement. we show that the disturbance-based measure satisfies the recently proposed criteria of macroscopic coherence, but in some cases cannot yield consistent results without additional constraints. this problem is overcome in our study by introducing a coarse-graining of the measurement depending on the system size. we also prove an inequality which relates the wigner-yanase-dyson skew information (and consequently the quantum fisher information) to the state disturbance induced by the coarse-grained measurement, from which we argue that an appropriate limit to yield a consistent measure is the classical limit. our operational viewpoint on quantum macroscopicity allows one to effectively identify the quantum coherence between the macroscopically separated components of a superposition. our approach can be applied to both spin and bosonic systems, and we present several examples that lead to reasonable results. we first review some preliminary concepts regarding macroscopic quantum coherence. let us consider a measurement observable described by a hermitian operator $a$. the eigenstates of the observable define a natural orthonormal basis, which can be used to quantify the amount of coherence in the system. previous measures of quantum coherence give the same value for every superposition of the form $|\psi\rangle \propto |a\rangle + |b\rangle$, without any regard for the physical measurement outcomes represented by the components $|a\rangle$ and $|b\rangle$, which are $a$ and $b$ respectively. in other words, they did not consider how correctly $a$ and $b$ are discriminated by an actual measurement. in an attempt to quantify macroscopic quantum coherence, however, we should give some consideration to the outcomes of a physical measurement. recently, yadin and vedral proposed a set of conditions that should be satisfied by a proper measure of macroscopic coherence. in their proposed resource theory of macroscopic coherence, the free operations are characterized as operations whose kraus operators connect eigenstates only with a fixed eigenvalue shift, i.e. $\langle i|k|j\rangle = 0$ unless $i - j = \delta$ for a constant $\delta$, where $|i\rangle$ and $|j\rangle$ are eigenstates of the observable.
with respect to this set of free operations, they proposed that any reasonable measure $m$ of macroscopic quantum coherence should satisfy the following conditions: (m1) $m(\rho) \ge 0$, and $m(\rho) = 0$ _if and only if_ $\rho$ is free (i.e. diagonal in the eigenbasis of the observable). (m2a) it is non-increasing under any trace-preserving free operation, $m(\phi(\rho)) \le m(\rho)$. (m2b) it is non-increasing on average under any selective free operation, $\sum_n p_n m(\rho_n) \le m(\rho)$ for $\rho_n = k_n \rho k_n^\dagger / p_n$, where $p_n = \mathrm{tr}(k_n \rho k_n^\dagger)$. (m3) it is convex, $m(\sum_i p_i \rho_i) \le \sum_i p_i m(\rho_i)$. (m4) for superpositions of two eigenstates, the measure grows with the separation of the corresponding eigenvalues, which indicates that the coherence between distant modes is weighed more heavily than that between modes close to each other. known examples of measures that satisfy these conditions are the quantum fisher information and the wigner-yanase-dyson skew information. we say that macroscopic coherence is the coherence of a quantum superposition between two macroscopically distinct states. in other words, the component states of the superposition are supposed to yield two distinct outcomes when a measurement on a macroscopic scale is performed. we may employ the concept of a coarse-grained measurement to describe such a macroscopic measurement. in order to construct a coarse-grained measurement, we first define a smoothing function that spreads each projective outcome of the observable over neighboring eigenvalues with a width given by the measurement precision $\sigma$; the disturbance caused by this measurement can then be quantified either by the bures distance, defined through the fidelity between the pre- and post-measurement states, or by the quantum relative entropy $s(\rho\,\|\,\tau) = \mathrm{tr}[\rho(\log\rho - \log\tau)]$. details and proofs can be found in the appendix. for the rest of the paper, we assume that the measure is based on the bures distance. theorem [ms] allows us to define a new family of macroscopic quantum coherence measures $m_\sigma$ parametrized by the measurement precision $\sigma$. we notice that the disturbance-based measure with certain values of $\sigma$ leads to obviously unreasonable results although it satisfies all the conditions above. the following example shows that a product of microscopic superpositions has a larger value of $m_\sigma$ than the ghz state when $\sigma$ is sufficiently small. this is contrary to our understanding and previous results that the latter state is clearly in a macroscopic superposition while the former is not. consider a magnetization measurement on a system of $n$ spin-1/2 particles, of the same type studied by poulin. the measurement is defined by the hermitian operator $m_z = \sum_{k=1}^{n} \sigma_z^{(k)}$, where $\sigma_z^{(k)}$ is the standard pauli z operator acting on the $k$-th spin. the observable represents a collective measurement of the overall spins rather than addressing each individual spin. the fidelity between the pre- and post-measurement states for $n$-particle product states of equal superpositions can be evaluated in closed form by approximating the binomial distribution of the magnetization outcomes with a normal distribution for large $n$. on the other hand, in the case of $n$-particle greenberger-horne-zeilinger (ghz) states, the fidelity is governed by the single off-diagonal term connecting the two extremal magnetization values. theorem [skewinfo] further reinforces our argument: the coarse-grained measurement disturbance is lower bounded by the wigner-yanase-dyson skew information; for a pure state, the bound is an increasing function of $v(\rho, a)/\sigma^2$, where the variance $v(\rho, a)$ of the observable is identical to the skew information for a pure state. the above inequality well represents the intuition that the more precise the measurement and the more coherence present within the system, the more the measurement will disturb the quantum state. this inequality relates previous studies using the variance as quantum macroscopicity to a coarse-grained measurement. previous studies have argued that the scaling of the quantum fisher information with the number of particles characterizes whether an $n$-particle system is macroscopically quantum. moreover, the wigner-yanase-dyson skew information $i(\rho, a)$ is closely related to the quantum fisher information $f(\rho, a)$ by the relation $i(\rho, a) \le \frac{1}{4} f(\rho, a) \le 2\, i(\rho, a)$, where the quantum fisher information is given by $f(\rho, a) = 2 \sum_{k,l} \frac{(\lambda_k - \lambda_l)^2}{\lambda_k + \lambda_l}\, |\langle k|a|l\rangle|^2$ for the eigendecomposition $\rho = \sum_k \lambda_k |k\rangle\langle k|$.
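before the scaling argument that follows, a numerical sketch of the magnetization example is given below; the gaussian damping factor $\exp(-(m - m')^2 / (8\sigma^2))$ applied to the off-diagonal terms is a modeling assumption of this sketch (one way to realize a gaussian-smoothed projective measurement), not a formula quoted from the text.

```python
import numpy as np
from functools import reduce

def dephase(rho, eigvals, sigma):
    """coarse-grained measurement with gaussian-smoothed projectors:
    off-diagonals in the observable eigenbasis are damped by
    exp(-(m - m')**2 / (8 sigma**2))  (assumed smoothing model)."""
    m = np.asarray(eigvals, dtype=float)
    damp = np.exp(-np.subtract.outer(m, m) ** 2 / (8 * sigma ** 2))
    return rho * damp

def disturbance(psi, eigvals, sigma):
    """1 - F for a pure input state psi, with F = sqrt(<psi|rho'|psi>)."""
    rho = np.outer(psi, psi.conj())
    rho_post = dephase(rho, eigvals, sigma)
    return 1 - np.sqrt(np.real(psi.conj() @ rho_post @ psi))

n = 8                                   # number of spin-1/2 particles
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (up + dn) / np.sqrt(2)

kron = lambda vecs: reduce(np.kron, vecs)
product = kron([plus] * n)              # product of microscopic superpositions
ghz = (kron([up] * n) + kron([dn] * n)) / np.sqrt(2)

# eigenvalues of m_z = sum of pauli-z: n - 2 * (number of down spins)
mz = np.array([n - 2 * bin(i).count("1") for i in range(2 ** n)])

# at small sigma the product state is disturbed more than ghz (the
# inconsistency described above); at coarser sigma the ordering flips
for sigma in (0.1, np.sqrt(n), float(n)):
    print(f"sigma={sigma:6.2f}  product: {disturbance(product, mz, sigma):.4f}"
          f"  ghz: {disturbance(ghz, mz, sigma):.4f}")
```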
according to the argument on macroscopic quantumness, quantum states whose quantum fisher information scales linearly in $n$ can be interpreted as classical (or at least microscopically quantum), while states with a quadratic scaling in $n$ may be considered macroscopically quantum. theorem [skewinfo] naturally manifests itself in the disturbance-based measure: provided the level of coarse graining is chosen at the classical limit, a state with a classically scaling skew information will result in a measurement disturbance close to zero. for example, the macroscopic coherence of a product of microscopic quantum states is close to zero according to our measure, since its wigner-yanase-dyson skew information scales with the order of $n$. in contrast, a non-classical skew information, for example in the case of a ghz state, allows the measure to reach its maximum value at the classical limit. this observation allows us to circumvent the inconsistency observed in the previous section. we will therefore impose the classical limit as the appropriate level of coarse graining for our disturbance-based measure. another example in the spin system is a rotated dicke state, obtained by applying a collective rotation operator to the dicke state with $k$ excitations, defined as a normalized sum over all symmetric permutations of $k$ excited spins among $n$ particles. for a fixed rotation, the macroscopic coherence of the state depends on the excitation number $k$. such a state approaches a product state (or a spin-coherent state) when $k = 0$ or $k = n$. figure [spin] compares the behavior of $m_\sigma$ between rotated dicke, ghz and product states for varying levels of the coarse-graining parameter $\sigma$. we also observe that at the classical limit, rotated dicke states with intermediate excitation numbers result in higher levels of macroscopic coherence than the ghz state. this property does not persist, however, if we continue decreasing the measurement precision (i.e. increasing $\sigma$). for sufficiently large $\sigma$, the ghz state tends to have the highest level of macroscopic coherence among all the states considered. our disturbance-based measure appears to capture ideas from both the more general quantum coherence measures and the macroscopic coherence measures based on the variance of the observable, since it encodes information about _how many states are currently in superposition_ as well as _how far apart these superposed states are_ with respect to the given measurement observable and the measurement precision. (figure [boson]: $m_\sigma$ for the quadrature measurement on bosonic systems with the same mean particle number; a fock state (dot-dashed line), a superposition of coherent states (double-dot-dashed line), and a coherent state (solid line) are investigated; dashed lines refer to the bound given by eq. ([ieq3]).) we also apply the disturbance-based measure to bosonic systems, described by the annihilation operator $a$ and the creation operator $a^\dagger$. since a bosonic system can contain many particles in a single mode, the system may be considered macroscopic when the mean particle number is large. in this case, the particle number and the quadrature are natural candidates for measurement observables. we now consider the value of $m_\sigma$ with respect to an $x$-quadrature measurement. figure [boson] shows the disturbance-based measure for typical states of a bosonic system. again, we see that for small values of $\sigma$ the coherent state contains non-trivial macroscopic quantumness, which is against physical intuition. however, $m_\sigma$ rapidly decreases with $\sigma$ and becomes essentially zero at the imposed classical limit.
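since, as noted above, the skew information reduces to the observable's variance for pure states, the bosonic comparison that follows can be previewed with a few lines of qutip; the truncation dimension and the amplitudes are illustrative assumptions of this sketch.

```python
import numpy as np
import qutip as qt

nmax = 60                       # fock-space truncation (assumed large enough)
a = qt.destroy(nmax)
x = (a + a.dag()) / np.sqrt(2)  # x-quadrature observable

alpha = 4.0
coherent = qt.coherent(nmax, alpha)
cat = (qt.coherent(nmax, alpha) + qt.coherent(nmax, -alpha)).unit()
fock = qt.fock(nmax, int(alpha ** 2))   # comparable mean photon number

# for pure states the wigner-yanase-dyson skew information equals var(x),
# so a variance far above the vacuum level 1/2 signals coherence between
# macroscopically distinct quadrature components
for name, state in [("coherent", coherent), ("cat", cat), ("fock", fock)]:
    nbar = qt.expect(a.dag() * a, state)
    print(f"{name:9s} <n>={nbar:5.1f}  var(x)={qt.variance(x, state):7.2f}")
```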
in comparison, a superposition of coherent states (scs) and the fock state give non-trivial values of $m_\sigma$ at the classical limit. all these observations are compatible with the common expectation that coherent states are classical, while scs and fock states are considered macroscopically quantum. we proposed a disturbance-based measure of macroscopic coherence through coarse-grained measurements. our argument stems from the physical grounds that a precise measurement will affect all the coherence present in the system, while a sufficiently imprecise measurement will affect only the portion of the coherence between classically distinct states. we demonstrated that our disturbance-based measure satisfies the series of properties to quantify macroscopic coherence laid out in the recent axiomatic proposal. in the process, we pointed out that the previously proposed conditions for macroscopic coherence are insufficient to yield consistent results without additional constraints. this inconsistency can be overcome by fixing the level of coarse graining to an appropriate classical limit. we also demonstrated an inequality relating the measurement-induced disturbance and the wigner-yanase-dyson skew information, and argued that this kind of classical limit is necessary to produce a reliable measure of macroscopic coherence. we emphasize that the proposed measure provides an operational point of view on macroscopic quantumness, which can be quantified by the degree of disturbance throughout a measurement of a given imprecision. the imprecision of the measurement allows us to focus on the coherence between macroscopically distinct states by blurring the interference below the measurement resolution. we can thus identify whether a quantum state is in a macroscopic superposition by investigating the state disturbance throughout a measurement with only a macroscopic resolution. as we have demonstrated for both spin and bosonic systems, our approach is not limited to a specific quantum system but can be applied to arbitrary macroscopic observables and quantum systems with large particle numbers. we expect that this viewpoint concerning the state disturbance induced by coarse-grained measurement may lead to greater insights on macroscopic quantum effects and coherence. this work was supported by the national research foundation of korea (nrf) through a grant funded by the korea government (msip) (grant no. 2010-0018295) and by the kist institutional program (project no. 2e26680-16-p025). h. k. was supported by the global ph.d. fellowship program through the nrf funded by the ministry of education (grant no. 2012-003435). in this section, we prove that $m_\sigma$ satisfies the conditions (m1)-(m4). we first prove the following proposition: [ep] a coherence-preserving free operation $\phi$ commutes with the coarse-grained measurement process for any state, i.e. $\phi(\mathcal{m}_\sigma(\rho)) = \mathcal{m}_\sigma(\phi(\rho))$. we now prove that the conditions (m1)-(m4) are satisfied. (m1) note that $m_\sigma(\rho) = 0$ if and only if $\mathcal{m}_\sigma(\rho) = \rho$. furthermore, $\mathcal{m}_\sigma(\rho)$ is a convex sum of projections of $\rho$, thus this condition can be achieved only when $\rho$ is free. (m2a) by using proposition [ep], we show that $m_\sigma(\phi(\rho)) \le m_\sigma(\rho)$ for a trace-preserving free operation $\phi$, where the inequality comes from the monotonicity of the fidelity function under trace-preserving operations. (m2b) we first observe that a selective free operation can be expressed using an ancillary state and a joint unitary followed by a projective measurement on the ancilla. note that fidelity is non-increasing under partial trace and satisfies, for a set of orthogonal projection operators $\{\pi_k\}$, the monotonicity $f(\sum_k \pi_k \rho\, \pi_k, \sum_k \pi_k \tau\, \pi_k) \ge f(\rho, \tau)$.
using these properties we can show that the selective free operation does not increase the measure, since fidelity is invariant under unitary operations; then, by using proposition [ep], we finally get the claim. (m3) convexity can be proved by using the joint concavity of fidelity. (m4) this condition can be proved by direct calculation; note that the value of the measure strictly increases with respect to the separation of the superposed modes for every $\sigma$. the measure based on relative entropy can be treated similarly: (m1) and (m3) directly come from the elementary properties of relative entropy; (m2a) can be proved by proposition [ep] and the monotonicity of relative entropy under trace-preserving maps; (m2b) can be proved by the same argument as above; and (m4) can also be proved by direct calculation. here $i(\rho, a) = -\frac{1}{2}\,\mathrm{tr}\!\left([\sqrt{\rho}, a]^2\right)$ is the wigner-yanase(-dyson) skew information at $\alpha = 1/2$.

m. brune, e. hagley, j. dreyer, x. maître, a. maali, c. wunderlich, j. m. raimond, and s. haroche, phys. rev. lett. 77, 4887 (1996). c. monroe, d. m. meekhof, b. e. king, and d. j. wineland, science 272, 1131 (1996). m. arndt, o. nairz, j. vos-andreae, c. keller, g. van der zouw, and a. zeilinger, nature 401, 680 (1999). a. ourjoumtsev, h. jeong, r. tualle-brouri, and p. grangier, nature 448, 784 (2007). b. vlastakis, g. kirchmair, z. leghtas, s. e. nigg, l. frunzio, s. m. girvin, m. mirrahimi, m. h. devoret, and r. j. schoelkopf, science 342, 607 (2013). t. kovachy, p. asenbaum, c. overstreet, c. a. donnelly, s. m. dickerson, a. sugarbaker, j. m. hogan, and m. a. kasevich, nature 528, 530 (2015). a. j. leggett, prog. theor. phys. suppl. 69, 80 (1980). w. dür, c. simon, and j. i. cirac, phys. rev. lett. 89, 210402 (2002). a. shimizu and t. miyadera, phys. rev. lett. 89, 270403 (2002). g. björk and p. g. l. mana, j. opt. b: quantum semiclass. opt. 6, 429 (2004). a. shimizu and t. morimae, phys. rev. lett. 95, 090401 (2005). e. g. cavalcanti and m. d. reid, phys. rev. lett. 97, 170405 (2006). j. i. korsbakken, k. b. whaley, j. dubois, and j. i. cirac, phys. rev. a 75, 042106 (2007). c.-w. lee and h. jeong, phys. rev. lett. 106, 220401 (2011); h. jeong, m. kang, and c.-w. lee, arxiv:1108.0212 (2011). f. fröwis and w. dür, new j. phys. 14, 093039 (2012). s. nimmrichter and k. hornberger, phys. rev. lett. 110, 160403 (2013). p. sekatski, n. sangouard, and n. gisin, phys. rev. a 89, 012116 (2014). b. yadin and v. vedral, phys. rev. a 92, 022356 (2015). f. fröwis, n. sangouard, and n. gisin, opt. comm. 337, 2 (2015). h. jeong, m. kang, and h. kwon, opt. comm. 337, 12 (2015). m. kang, c.-w. lee, j. bang, s.-w. lee, c.-y. park, and h. jeong, arxiv:1510.02876 (2015). b. yadin and v. vedral, phys. rev. a 93, 022122 (2016). a. streltsov, u. singh, h. s. dhar, m. n. bera, and g. adesso, phys. rev. lett. 115, 020403 (2015). z. xi, y. li, and h. fan, sci. rep. 5, 10922 (2015). j. ma, b. yadin, d. girolami, v. vedral, and m. gu, phys. rev. lett. 116, 160407 (2016). p. ćwikliński, m. studziński, m. horodecki, and j. oppenheim, phys. rev. lett. 115, 210403 (2015). m. lostaglio, k. korzekwa, d. jennings, and t. rudolph, phys. rev. x 5, 021001 (2015). m. lostaglio, d. jennings, and t. rudolph, nat. commun. 6, 6383 (2015). m. a. nielsen and i. l. chuang, _quantum computation and quantum information_ (cambridge university press, cambridge, england, 2010). e. h. lieb and w. e. thirring, _inequalities for the moments of the eigenvalues of the schrödinger hamiltonian and their relation to sobolev inequalities_, in studies in mathematical physics, e. lieb, b. simon, and a. wightman, eds., princeton university press, pp. 269-303 (1976); h. araki, lett. math. phys. 19, 167 (1990). f. hansen and g. k. pedersen, bull. london math. soc. 35, 553 (2003).
|
we propose a measure of macroscopic coherence based on the degree of disturbance caused by a coarse - grained measurement . based on our measure , we point out that recently proposed criteria for macroscopic coherence may lead to inconsistent results when considering certain states , such as a product of microscopic superpositions . an inequality is proved that relates the wigner - yanase - dyson skew information and the measurement disturbance , providing arguments as to why our approach is able to rule out such inconsistencies . our work provides a general framework for quantifying macroscopic coherence from an operational point of view , based on the relationship between the precision of a measurement and the disturbance it causes to the quantum state .
|
cooperative communications and network coding ( nc ) have recently emerged as strong candidate technologies for many future wireless applications , such as relay aided cellular networks , . since their inception in and , they have been extensively studied to improve the performance and throughput of wireless networks , respectively . in particular , theory and experiments have shown that they can be extremely useful for wireless networks with disruptive channel and connectivity conditions . however , similar to many other technologies , multi hop / cooperative communications and nc are not without limitations , . due to practical hardware limitations , _ e.g. _ , the half duplex constraint , relay transmissions consume extra bandwidth , which implies that using cooperative diversity typically results in a loss of system throughput . on the other hand , nc is very susceptible to transmission errors caused by noise , fading , and interference . in fact , the algebraic operations performed at the network nodes introduce packet dependencies in such a way that the injection of even a single erroneous packet has the potential to corrupt every packet received at the destination , . due to their complementary merits and limitations , it seems very natural to synergistically exploit cooperation and nc to take advantage of their key benefits while overcoming their limitations . for example , nc can be an effective enabler to recover the throughput loss experienced by multi hop / cooperative networking , while the redundancy inherently provided by cooperation might significantly help to alleviate the error propagation problem that arises when mixing the packets . in this context , multi source multi relay networks , which exploit cooperation and nc for performance and throughput improvement , are receiving ever increasing interest for their inherent flexibility in achieving excellent performance and diversity / multiplexing tradeoffs . more specifically , considerable attention is currently devoted to understanding the achievable performance of such networks when both cooperation and nc operations are pushed down to the physical layer , and their joint design and optimization are closely tied to conventional physical layer functionalities , such as modulation , channel coding , and receiver design , . in particular , how to tackle the error propagation problem to guarantee a given quality of service requirement , _ e.g. _ , a distributed diversity order , plays a crucial role when these networks are deployed in error prone environments , _ e.g. _ , in a wireless context . for example , simple case studies in , , , and have shown that a diversity loss occurs if cooperative protocols or detection algorithms are not adequately designed . to counteract this issue , many solutions have been proposed in the literature , which can be divided into two main categories : i ) adaptive ( or dynamic ) solutions , _ e.g. _ , , , , , and , which avoid unnecessary error propagation that can be caused when encoding and forwarding erroneous data packets ; and ii ) non adaptive solutions ,
_ e.g. _ , , , , , and , which allow erroneous packets to propagate through the network but exploit optimal detection mechanisms at the destination to counteract the error propagation . each category has its own merits and limitations . adaptive solutions rely , in general , on the following assumptions , , , : a ) the network code and the cooperative strategy are adapted to the channel conditions and to the outcome / reliability of the detection process at the relay nodes ; this requires some overhead , since the network code must be communicated to the destination for correct detection ; b ) channel codes at the physical layer are assumed to be powerful enough to guarantee that the error performance is dominated by outage events ( according to the shannon definition of outage capacity ) ; and c ) the adoption of ideal cyclic redundancy check ( crc ) mechanisms for error detection , which guarantees that a packet is either dropped or injected into the network without errors ( _ i.e. _ , the erasure channel model ) . however , recent results have shown that , in addition to being highly spectrally inefficient , as an entire packet is blocked if just one bit is in error , relaying based on crc might not be very effective in block fading channels , . an interesting link adaptive solution , which does not require crc for error detection and avoids full channel state information ( csi ) at the relays , has been proposed in . therein , the achievable diversity ( using the singleton bound ) is studied under the assumption that _ ad hoc _ interleavers are used , while no analysis of the coding gain is conducted . non adaptive solutions rely , in general , on the following assumptions , , : a ) neither error correction nor error detection mechanisms are needed at the physical layer ; the relays just regenerate the incoming packets and forward them to the final destination ( _ i.e. _ , the error channel model ) . this results in a simple design of the relay nodes , as well as in a spectrally efficient transmission scheme , as the received packets are never blocked ; and b ) the possibility of receiving packets with errors requires powerful detection mechanisms at the destination , which need csi of the whole network to counteract the error propagation problem and to achieve full diversity . similar to adaptive solutions , this requires some overhead . as far as adaptive solutions are concerned , , , have recently provided a comprehensive study of the diversity / multiplexing tradeoff for general multi source multi relay networks , and have shown that the design of diversity achieving network codes is equivalent to the design of systematic maximum distance separable ( mds ) codes for erasure channels . thus , well established and general methods for the design of network codes exist , which can be borrowed from classical coding theory . on the other hand , as far as non adaptive solutions are concerned , theoretical analysis and guidelines for system optimization are available only for specific network topologies and network codes .
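to make the non adaptive demf operation concrete , the following minimal sketch ( our own python illustration ; the function name and the vector convention are assumptions , not part of any cited scheme ) xors the bits a relay has demodulated , possibly in error , according to its binary encoding vector , with no error detection or correction :

```python
import numpy as np

def relay_network_encode(demodulated_bits, encoding_vector):
    """binary nc at a demf relay: xor (gf(2) sum) of the locally demodulated
    source bits selected by the relay's binary encoding vector. the bits are
    combined as they are, i.e., even if they were demodulated in error."""
    b = np.asarray(demodulated_bits, dtype=int)
    g = np.asarray(encoding_vector, dtype=int)
    return int(np.sum(b & g) % 2)

# example: the relay mixes the bits of sources 1 and 2 and ignores source 3
print(relay_network_encode([1, 0, 1], [1, 1, 0]))  # -> 1
```

the key point of the non adaptive approach is visible here : no reliability information gates the encoding , so the burden of handling erroneous bits is moved entirely to the destination .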
to the best of the authors' knowledge , a general framework for performance analysis and code design over fading channels is still missing . motivated by these considerations , in this paper we focus our attention on non adaptive solutions with a threefold objective : i ) to develop a general analytical framework to compute the average bit error probability ( abep ) of multi source multi relay cooperative networks with arbitrary binary encoding vectors and realistic channel conditions over all the wireless links ; ii ) to provide guidelines for network code design to achieve a given diversity and coding gain tradeoff ; and iii ) to understand the impact of the error propagation problem and the role played by csi at the destination on the achievable diversity order and coding gain . more specifically , by carefully looking at the recent literature related to performance analysis and code design for non adaptive solutions , the following contributions are worth mentioning : i ) in , the authors study a simple three node network without nc ( a simple repetition code is considered ) , and they show that instantaneous csi is needed at the destination to achieve full diversity ; no closed form expression of the coding gain is given . ii ) in and , the authors introduce and study complex field network coding ( cfnc ) , which does not rely on galois field operations and exploits interference and multi user detection to increase throughput and diversity . the analysis is valid for arbitrary network topologies ; however , only the diversity order is computed analytically , while the coding gain is studied by simulation . iii ) in , the authors study a simple three node network with binary nc . unlike other papers , channel coding is considered in the analysis ; however , the error performance is mainly estimated through monte carlo simulations . iv ) in , the author considers multiple relay nodes , but a simple repetition code is used ( no nc ) ; the main contribution is the study of the impact of channel estimation errors on the achievable diversity . v ) in , the authors study a network topology with multiple sources but with just one relay , and a very specific network code is analyzed ; this paper provides a simple and effective method for accurately computing the coding gain of error prone cooperative networks with nc . vi ) in , the authors analyze generic multi source multi relay networks with binary nc , but error free source to relay links are considered , and the performance ( coding gain ) is computed by using monte carlo simulations . vii ) in and , we have studied the performance of network coded cooperative networks with realistic source to relay wireless channels .
however , the analysis is useful only for two source two relay networks and for a very specific binary network code . viii ) in , a general framework to study the abep for arbitrary modulation schemes is provided , but a simple three node network without nc is considered . ix ) in , the authors study a three node network with a simple repetition code ; exact results are provided for coding gain and diversity order . finally , in and , nc with error prone source to relay links is studied , but the analysis is applicable only to noisy channels , while channel fading and distributed diversity issues are not investigated . according to this up to date analysis of the state of the art , it follows that no general framework for the performance analysis and design of non adaptive solutions exists in the literature that is useful for generic network topologies and arbitrary encoding vectors , and that provides an accurate characterization of diversity order and coding gain as a function of the csi available at the destination . motivated by these considerations , in this paper we focus our attention on a general multi source multi relay network with realistic and error prone channels over all the wireless links . for analytical tractability ( and to keep the implementation complexity of the relays at a low level ) , we consider a binary network code , binary phase shift keying ( bpsk ) modulation , and the demodulate and forward ( demf ) relay protocol . with these assumptions , the main contributions and outcomes of this paper are as follows : i ) a maximum likelihood ( ml ) optimum demodulator is proposed , which allows the destination to exploit the distributed diversity inherently provided by cooperation and nc . the demodulator takes into account demodulation errors that might occur at the relay nodes , as well as forwarding and nc operations . it is shown that the demodulator resembles a chase combiner with hard decision decoding at the physical layer ; ii ) a simple but accurate framework to compute the end to end abep of each source is proposed . the framework provides a closed form expression of diversity order and coding gain , and it clearly highlights the impact of error propagation and nc on the end to end performance ; iii ) it is proved that each source node can achieve a diversity order that is equal to the separation vector of the network code . in particular , it is shown that the optimization of network codes is equivalent to the design of systematic linear block codes for fully interleaved fading channels , and that equal and unequal error protection ( eep / uep ) properties are preserved ; and iv ) the impact of csi at the destination is studied , and it is shown that half of the diversity order is lost if the destination is unable to account for possible demodulation errors at the relays . the paper is organized as follows . in section [ systemmodel ] , network topology and system model are introduced . in section [ receiverdesign ] , the ml optimum demodulator that accounts for demodulation errors at the relays is proposed . in section [ frameworkabep ] , a closed form expression of the end to end abep is given . in section [ diversitycodinggain ] , diversity order and coding gain are studied for arbitrary binary network codes and network topologies .
in section [ results ] , numerical results are presented to substantiate the analysis and findings . finally , section [ conclusion ] concludes this paper . we consider a generic multi source multi relay network with sources ( for ) , relays ( for ) , and , without loss of generality , a single destination . we consider the baseline time division multiple access ( tdma ) protocol , where each transmission takes place in a different time slot , and multiple access interference can be neglected . we assume that direct links between the sources and the destination exist , and that the relays help the sources to deliver the information packets to the final destination . the cooperative protocol is composed of two main phases : i ) the broadcasting phase ; and ii ) the relaying phase . during the first phase , the source transmits the information packet intended for the destination in time slot for . these packets are overheard by the relays too , which store them in their buffers for further processing . this phase lasts time slots . during the second phase , the relay forwards a linear combination ( _ i.e. _ , nc is applied ) of some received packets to the destination in time slot for . we consider a non adaptive demf relay protocol , which means that each relay demodulates the received packets but performs nc and forwards them regardless of their reliability . as a result , packets with erroneous bits can be injected into the network . however , these packets can be adequately used at the destination , by exploiting advanced detection and signal processing algorithms at the physical layer , to improve the system performance . according to the working operation of the protocol , the broadcasting and relaying phases last time slots . since information packets are transmitted by the sources , the protocol offers a fixed rate , , that is equal to . in this paper , we are interested in understanding how the operations , _ i.e. _ , nc , performed at the relays affect the end to end performance for this given rate . the main objective is to understand the performance of cooperative networks with nc when physical layer techniques are exploited to counteract the error propagation problem , and , more specifically , when demodulation and network decoding are jointly performed at the destination ( _ i.e. _ , cross layer decoding ) . for analytical tractability and simplicity , we retain three main reasonable assumptions : i ) uncoded transmissions with no channel coding are considered ; accordingly , there is no loss of generality in considering symbol by symbol transmission ( some preliminary results with channel coding are available in ) ; ii ) bpsk modulation is assumed to keep the analytical complexity at a low level ; and iii ) binary nc at the relays is investigated . however , unlike many current papers in the literature , _ e.g. _ , , , , and references therein , no assumption about the encoding vectors is made . these assumptions are widely used in the related literature , _ e.g. _ , , , , and the references therein . according to the assumptions above , the generic source broadcasts , in time slot , a bpsk modulated signal , , with average energy , _ i.e. _ , , where is the bit emitted by . then , the signals received at relays for and destination are : where is the fading coefficient from node to node , which is a circularly symmetric complex gaussian random variable ( rv ) with zero mean and variance per dimension ( rayleigh fading ; we provide some comments on how to extend the analysis to other fading distributions ) .
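as an illustration of this signal model , the following sketch simulates one hop : a bpsk symbol over a flat rayleigh - faded awgn link , coherently demodulated with the ml criterion . it is a minimal sketch under the stated assumptions ( unit symbol energy , perfect receiver csi ) ; the function name and the snr parametrization are our own choices :

```python
import numpy as np

def bpsk_rayleigh_hop(bit, snr_db, rng):
    """one hop of the model: bpsk symbol over a flat rayleigh-faded awgn
    link, followed by coherent ml demodulation with perfect receiver csi."""
    s = 1.0 - 2.0 * bit                              # bpsk mapping: 0 -> +1, 1 -> -1
    h = rng.normal(scale=np.sqrt(0.5)) + 1j * rng.normal(scale=np.sqrt(0.5))
    n0 = 1.0 / 10 ** (snr_db / 10)                   # noise psd for unit symbol energy
    n = rng.normal(scale=np.sqrt(n0 / 2)) + 1j * rng.normal(scale=np.sqrt(n0 / 2))
    y = h * s + n
    # ml decision: sign of the real part of conj(h) * y
    return 0 if np.real(np.conjugate(h) * y) >= 0 else 1

rng = np.random.default_rng(0)
bit_hat = bpsk_rayleigh_hop(bit=1, snr_db=10, rng=rng)
```

under demf , a relay would feed `bit_hat` into the network encoding step of the earlier sketch , regardless of whether it is correct .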
owing to the distributed nature of the network , independent but not identically distributed ( i.n.i.d . ) fading is considered . in particular , letting be the distance between nodes and , and be the path loss exponent , we have . also , is the complex additive white gaussian noise ( awgn ) at the input of node related to the transmission from node to node . the awgn in different time slots is independent and identically distributed ( i.i.d . ) with zero mean and variance per dimension . upon reception of and in time slot , the relay for and the destination demodulate the received signals by using the ml optimum criterion , as follows : where denotes the demodulated bit and denotes the trial bit used in the hypothesis testing problem . more specifically , and are the estimates of at relay for , and at the destination , respectively . we note that ( [ eq_2 ] ) needs csi about the source to relay and the relay to destination channels at the relay and destination nodes , respectively . in this paper , we assume that csi is perfectly known at the receiver , while it is not known at the transmitter ; this is obtained through adequate training . after estimating and , the destination keeps the demodulated bit for further processing , as described in section [ receiverdesign ] , while the relays initiate the relaying phase . more specifically , the generic relay , , performs the following three operations : i ) it applies binary nc on the set of demodulated bits for ; ii ) it remodulates the network coded bit by using bpsk modulation ; and iii ) it transmits the modulated bit to the destination during time slot for . once again , we emphasize that all the demodulated bits are considered in this phase , even when they are wrongly detected , _ i.e. _ , . as far as nc is concerned , we denote the network coded bit at relay by , where : i ) denotes the encoding function at relay ; ii ) denotes exclusive or ( xor ) operations ; and iii ) ^t ] and ] and ^t ] is the matrix containing the encoding vectors of all the relays , and ^t ] is the entry of vector ; vii ) , where is the kronecker delta function , _ i.e. _ , if and elsewhere ; and viii ) is the probability , averaged over fading channel statistics , of detecting when , instead , is actually transmitted , and these are the only two codewords possibly being transmitted . the next step is the computation of the apep for a generic pair of distributed codewords . we proceed in two steps : i ) the pep conditioned on the fading channels is computed ; and ii ) the conditioning is removed . the decision metric in ( [ eq_6 ] ) can be rewritten in a more compact form as follows : \[ \lambda \left( {{\bf{\tilde c}}} \right) = \sum\limits_{m = 1}^{n_s + n_r } {\left\{ {{\bf{w}}\left[ m \right]\left| {{\bf{\hat c}}\left[ m \right ] - {\bf{\tilde c}}\left[ m \right ] } \right|} \right\}} \] where we have defined : ^t ] , ^t ] contributes to the summation if and only if \ne { \bf{\bar c}}\left [ m \right] ] . the cardinality , , of is given by the hamming distance between and , _ i.e. _ , - { \bf{\bar c}}\left [ m \right ] } \right|} ] is a discrete rv which can only assume values } ] with probability ] , respectively , where ^t ] is :
\xi\right|m \in \theta \left ( { { \bf{c}},{\bf{\bar c } } } \right ) } \right ) = { \bf{p}}\left [ m \right]\delta \left ( { \xi - { \bf{w}}\left [ m \right ] } \right ) + \left ( { 1 - { \bf{p}}\left [ m \right ] } \right)\delta \left ( { \xi + { \bf{w}}\left [ m \right ] } \right)\ ] ] where denotes the dirac delta function . since the rvs ,{\bf{\bar c}}\left [ m \right ] } \right)} ] has a pdf given by the convolution of the pdfs of the individual rvs ,{\bf{\bar c}}\left [ m \right ] } \right)} ] , where is its element , _i.e. _ , the combination of the indexes in .the cardinality of is ._ proof _ : we proceed in two steps : i ) first , we describe the step by step methodology to compute in ( [ eq_17 ] ) for ; and ii ) then , we describe how the approach can be generalized to generic .let us start with . in this case , we have , and in ( [ eq_16 ] )can be computed by using some properties of the dirac delta function . by doing so , and substituting the obtained pdf in ,we get : {\bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right]{\bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 3 \right ) } } \right]\mathcal{h}\left ( { { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 1 \right ) } } \right ] + { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right ] + { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 3 \right ) } } \right ] } \right ) \\ & + \left ( { 1 - { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 1 \right ) } } \right ] } \right)\left ( { 1 - { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right ] } \right)\left ( { 1 - { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 3 \right ) } } \right ] } \right)\mathcal{h}\left ( { - { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 1 \right ) } } \right ] - { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right ] - { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 3 \right ) } } \right ] } \right ) \\ & + { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 1 \right ) } } \right]\left ( { 1 - { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right ] } \right)\left ( { 1 - { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 3 \right ) } } \right ] } \right)\mathcal{h}\left ( { { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 1 \right ) } } \right ] - { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right ] - { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 3 \right ) } } \right ] } \right ) \\ & + { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right]\left ( { 1 - { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 1 \right ) } } \right ] } \right)\left ( { 1 - { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 3 \right ) } } \right ] } \right)\mathcal{h}\left ( { - { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 1 \right ) } } \right ] + { 
\bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right ] - { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 3 \right ) } } \right ] } \right ) \\ & + { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 3 \right ) } } \right]\left ( { 1 - { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 1 \right ) } } \right ] } \right)\left ( { 1 - { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right ] } \right)\mathcal{h}\left ( { - { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 1 \right ) } } \right ] - { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right ] + { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 3 \right ) } } \right ] } \right ) \\ & + { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 1 \right ) } } \right]{\bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right]\left ( { 1 - { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 3 \right ) } } \right ] } \right)\mathcal{h}\left ( { { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 1 \right ) } } \right ] + { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right ] - { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 3 \right ) } } \right ] } \right ) \\ & + { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 1 \right ) } } \right]{\bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 3 \right ) } } \right]\left ( { 1 - { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right ] } \right)\mathcal{h}\left ( { { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 1 \right ) } } \right ] - { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right ] + { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 3 \right ) } } \right ] } \right ) \\ & + { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right]{\bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 3 \right ) } } \right]\left ( { 1 - { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 1 \right ) } } \right ] } \right)\mathcal{h}\left ( { - { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 1 \right ) } } \right ] + { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right ] + { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 3 \right ) } } \right ] } \right ) \\ \end{split}\ ] ] where is the heaviside function : if and elsewhere . the pep in ( [ eq_18 ] ) can be simplified and can be written in a form that is more useful to compute the average over fading statistics .the main considerations to this end are as follows : i ) since , by definition ( see ( [ eq_6 ] ) ) , > 0 ] and } } \right ) = 1 ] for . 
furthermore , by exploiting i ) and ii ) , the resulting terms containing the heaviside function can be grouped in three pairs of two addends each .for example , a pair in ( [ eq_18 ] ) is with : {\bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right]\mathcal{h}\left ( { { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 1 \right ) } } \right ] + { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right ] - { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 3 \right ) } } \right ] } \right ) \\ z_2 = { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 3 \right ) } } \right]\mathcal{h}\left ( { - { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 1 \right ) } } \right ] - { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right ] + { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 3 \right ) } } \right ] } \right ) \\ \end{array } \right.\ ] ] while the other two pairs can be obtained by direct inspection of ( [ eq_18 ] ) accordingly . for generic , pairs as shown in ( [ eq_19 ] ) can be obtained : \mathcal{h}\left ( { \sum\limits_{k \in { \rm \mathcal{a } } } { { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( k \right ) } } \right ] } - \sum\limits_{k \in \bar { \rm \mathcal{a } } } { { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( k \right ) } } \right ] } } \right ) } \right\ } }\\ z_2 = \prod\limits_{k \in \bar { \rm \mathcal{a } } } { \left\ { { { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( k \right ) } } \right]\mathcal{h}\left ( { - \sum\limits_{k \in { \rm \mathcal{a } } } { { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( k \right ) } } \right ] } + \sum\limits_{k \in \bar { \rm \mathcal{a } } } { { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( k \right ) } } \right ] } } \right ) } \right\ } } \\ \end{array } \right.\ ] ] where and are two sets of indexes such that and . by taking into account that , for high snr , we have = \ln \left [ { { { \left ( { 1 - { \bf{p}}\left [ m \right ] } \right ) } \mathord{\left/ { \vphantom { { \left ( { 1 - { \bf{p}}\left [ m \right ] } \right ) } { { \bf{p}}\left [ m \right ] } } } \right .\kern-\nulldelimiterspace } { { \bf{p}}\left [ m \right ] } } } \right ] \to - \ln \left ( { { \bf{p}}\left [ m \right ] } \right) ] , and because only one of these latter terms is explicitly present in ( [ eq_16 ] ) .furthermore , the need to compute all possible combinations of the indexes clearly explains the definition of in ( [ eq_17 ] ) .the only thing left is to understand why in each summation the index must belong to the set .the motivation is as follows . when computing the convolution in ( [ eq_16 ] ) , the total number of addends in the final result is .in fact , the convolution of pdfs is computed , each one given by the summation of two terms . 
among all these terms ,} ] are treated separately in ( [ eq_17 ] ) .more specifically , } ] in zero because of the properties of the heaviside function .the remaining are grouped in pairs of two addends , as shown in ( [ eq_19 ] ) .furthermore , each pair reduces to only one addend as shown in ( [ eq_21 ] ) .accordingly , the number of terms in ( [ eq_17 ] ) can not be larger than . in other words , when the cumulative inequality in is no longer satisfied , we can stop computing the summations in ( [ eq_17 ] ) .this concludes the proof . _ proposition [ pep ] _ is very general and can be applied to any .however , it is not an exact result , as it holds for high snr only . for the special case , an exact expression of the pep in ( [ eq_14 ] ) can be obtained , which , in general , has to be preferred as it is accurate for any snr . in _corollary [ pep_dh2 ] _ , we provide the exact expression of the pep in ( [ eq_14 ] ) without any high snr approximations .[ pep_dh2 ] if , then and the pep in ( [ eq_14 ] ) is equal to : ,{\bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right ] } \right\}\ ] ] _ proof _ : the proof follows from analytical steps similar to ( [ eq_18 ] ) in _ proposition [ pep]_. in particular , we have : {\bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right ] + { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 1 \right ) } } \right]\left ( { 1 - { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right ] } \right)\mathcal{h}\left ( { { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 1 \right ) } } \right ] - { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right ] } \right ) \\ & + { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right]\left ( { 1 - { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 1 \right ) } } \right ] } \right)\mathcal{h}\left ( { - { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 1 \right ) } } \right ] + { \bf{w}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( 2 \right ) } } \right ] } \right ) \\ \end{split}\ ] ] unlike _ proposition [ pep ] _ , there is no need to exploit the high snr approximation \to 1 ] is not used in ( [ eq_25 ] ) .this provides a better ( and exact ) estimate of the pep .however , this procedure can not be readily generalized to network codes with , without having a more complicated expression of the pep , which is not useful for further analysis , and , more specifically , to remove the conditioning over fading statistics .the aim of this section is to provide a closed form and insightful expression of the apep , _i.e. _ , to average the pep in ( [ eq_17 ] ) over fading channel statistics . in spite of the apparent complexity of ( [ eq_17 ] ), _ proposition [ apep ] _ shows that a surprisingly simple , compact , and insightful result can be obtained for i.n.i.d . 
fading .[ apep ] let us consider the rayleigh fading channel model introduced in section [ systemmodel ] .the apep , , is as follows : \\ & \times \prod\limits_{m = 1}^{n_s + n_r } { \chi \left\ { { { \bf{\delta } } _ { { \bf{c}},{\bf{\bar c } } } \left [ m \right]{\bf{\bar \sigma } } _ { \rm{srd}}^{\left ( { \bf{g } } \right ) } \left [ m \right ] } \right\ } } \\ \end{split}\ ] ] where : and : i ) denotes the expectation operator computed over all fading gains of the network model introduced in section [ systemmodel ] ; ii ) if and if ; iii ) ; iv ) ; v ) ^t ] , where is a all zero vector ; vi ) ^t ] , and ^t ] and ^t ] . this remark holds for generic i.n.i.d .channels , and it implies the identity ( for ) : } \right ) } } \right ] } , \prod\limits_{h \in \bar { \rm a } } { { \bf{p}}\left [ { \bar m_{\left ( { { \bf{c}},{\bf{\bar c } } } \right)}^{\left ( { { \bf{v}}_k \left [ h \right ] } \right ) } } \right ] } } \right\ } } } \right\ } = \mathcal{n}_d^{\left ( { d_h \left ( { { \bf{c}},{\bf{\bar c } } } \right ) } \right ) } t_2\ ] ] where is given in ( [ eq_29 ] ) , and is the number of terms in ( [ eq_27 ] ) that are actually summed in ( [ eq_30 ] ) . by putting together these considerations , and by taking into account that there are summations with different in ( [ eq_17 ] ) , we obtain ( [ eq_26 ] ) . the only missing thing in our proof is to show that has the closed form expression given in ( [ eq_27 ] ) .this result follows from the definition of for in ( [ eq_17 ] ) .in fact , since , the number of elements in each summation in ( [ eq_30 ] ) is : i ) either , if we have not reached the maximum number of indexes that can be summed , _i.e. _ , ; ii ) or , in the last summation , the remaining indexes if the cumulative summation in exceeds this maximum number of indexes . equation ( [ eq_27 ] ) summarizes in formulas these two cases .this concludes the proof . similar to _ proposition [ pep ] _ , the exact apep can be obtained if , as given in _ corollary [ apep_dh2]_. [ apep_dh2 ] let us consider the rayleigh fading channel model introduced in section [ systemmodel ] . then , with given in ( [ eq_23 ] ) for is as follows : {\bf{\bar \sigma } } _ { { \rm{srd}}}^{\left ( { \bf{g } } \right ) } \left [ m \right ] } \right\}}\ ] ] where the same symbols and notation as in _ proposition [ apep ] _ are used . _ proof_ : it follows from ( [ eq_26 ] ) with , by neglecting the `` 1 '' term as shown in _corollary [ pep_dh2]_. _ proposition [ apep ] _ is general and it can be applied to arbitrary i.n.i.d fading channels and network topologies with generic binary nc . however , it is interesting to see what happens to the network performance for some special channel models and operating conditions , which are often studied to shed lights on the fundamental behavior of complex systems . in this section ,we are interested in providing some simplified results for three notable scenarios of interest : i ) i.i.d .fading , where we have for every wireless link ; ii ) i.n.i.d . fading withhigh reliable source to relay links , which is often assumed to simplify the analysis , but , as described in _ proposition [ crossoverprobability ] _ , it does not account for the error propagation effect due to nc ; and iii ) i.i.d .scenario with high reliable source to relay links .the end to end apep of these three scenarios is summarized in _ corollary [ apep_iid ] _ , _ corollary [ apep_idealsr ] _ , and _ corollary [ apep_iid_idealsr ] _ , respectively .[ apep_iid ] if the fading channels are i.i.d . 
with , then the apep in _ proposition [ apep ] _ and in _ corollary [ apep_dh2 ]_ can be simplified by taking into account the following identity : {\bf{\bar \sigma } } _ { { \rm{srd}}}^{\left ( { \bf{g } } \right ) } \left [ m \right ] } \right\ } } = \left ( { \sigma _ 0 ^ 2 } \right)^ { - d_h \left ( { { \bf{c}},{\bf{\bar c } } } \right ) } \prod\limits_{m = 1}^{n_s + n_r } { \chi \left\ { { { \bf{\delta } } _ { { \bf{c}},{\bf{\bar c } } } \left [ m \right]{\bf{g}}^{\left ( 0 \right ) } \left [ m \right ] } \right\}}\ ] ] where : i ) is a all one vector ; ii ) for , where is the number of sources whose data is network coded at relay node ; and iii ) ^t ] ._ proof _ : if the source to relay channels are very reliable , we have for and .thus , by definition , .so , the simplified expression of follows by taking into account the definition of and as block matrices .this concludes the proof . two important conclusions can be drawn from _corollary [ apep_idealsr]_. first , we notice that the apep is affected by the encoding operations performed at the relays only through the codeword s distance , which is the number of distinct elements between and .this provides a very simple criterion to choose the network code for performance optimization .second , since } \right|_{\sigma _ { s_t r_q } ^2 \to \infty } \le \left .{ { \bf{\bar \sigma } } _ { { \rm{srd}}}^{\left ( { \bf{g } } \right ) } \left [ m \right ] } \right|_{\sigma _ { s_t r_q } ^2 < \infty } ] , where and are coding gain and diversity order of for , respectively .this result is summarized in _ proposition [ codingdiversitygain]_. [ codingdiversitygain ] given the abep in ( [ eq_12 ] ) and the apep in ( [ eq_26 ] ) , diversity order and coding gain of are : \\g_c^{\left ( { s_t } \right ) } = 4\left\ { { \frac{1}{{2^{n_s } } } \sum\limits_{\scriptstyle { \bf{b}},{\bf{\bar b } } \atop \scriptstyle d_h \left ( { { \bf{c}},\bar { \bf{c } } } \right ) = { \rm{sv}}\left [ t \right ] } { \left [ \begin{array}{l } \left ( { 1 + 2\sqrt \pi \gamma \left ( { d_h \left ( { { \bf{c}},{\bf{\bar c } } } \right ) + \frac{1}{2 } } \right)\sum\limits_{d = 1}^{\left\lfloor { { { d_h \left ( { { \bf{c}},{\bf{\bar c } } } \right ) } \mathord{\left/ { \vphantom { { d_h \left ( { { \bf{c}},{\bf{\bar c } } } \right ) } 2 } } \right .\kern-\nulldelimiterspace } 2 } } \right\rfloor } { \frac{{\mathcal{n}_d^{\left ( { d_h \left ( { { \bf{c}},{\bf{\bar c } } } \right ) } \right ) } } } { { \gamma \left ( { d + \frac{1}{2 } } \right)\gamma \left ( { d_h \left ( { { \bf{c}},{\bf{\bar c } } } \right ) - d + \frac{1}{2 } } \right ) } } } } \right ) \\ \times\left ( { \prod\limits_{m = 1}^{n_s + n_r } { \chi \left\ { { { \bf{\delta } } _ { { \bf{c}},{\bf{\bar c } } } \left [ m \right]{\bf{\bar \sigma } } _ { { \rm{srd}}}^{\left ( { \bf{g } } \right ) } \left [ m \right ] } \right\ } } } \right)\bar \delta \left ( { { \bf{c}}\left [ t \right],{\bf{\bar c}}\left [ t \right ] } \right ) \\ \end{array } \right ] } } \right\}^ { - \frac{1}{{g_d^{\left ( { s_t } \right ) } } } } \\ \end{array } \right.\ ] ] where is known , in coding theory , as `` separation vector '' ( sv ) ( * ? ? ?1 ) , and , for a given codebook , its entry , _i.e. _ , ] . _ proof _ : first of all , let us study . 
from ( [ eq_26 ] ) in _ proposition [ apep ] _ we notice that the apep has diversity order , which is the hamming distance between the pair of codewords and . furthermore , from ( [ eq_12 ] ) we know that this apep contributes to the abep of source if and only if the bits of and are different , _ i.e. _ , if and only if \ne { \bf{\bar c}}\left [ t \right] ] . accordingly , in ( [ eq_26 ] ) only the apeps having a diversity order , _ i.e. _ , a hamming distance , equal to : \[ d_h^{\left( {\min} \right)} \left( t \right ) = \mathop {\min }\limits_{\left( {{\bf{c}},{\bf{\bar c}}} \right)} \left\{ {d_h \left( {{\bf{c}},{\bf{\bar c}}} \right)\;:\;{\bf{c}}\left[ t \right ] \ne {\bf{\bar c}}\left[ t \right]\;{\rm{and}}\;{\bf{c}}^ { ' } \left [ t \right ] \ne { \bf{\bar c}}^ { ' } \left [ t \right ] } \right\} \] will dominate the performance for high snr . in fact , all the other apeps decay much faster with the snr , thus providing a negligible contribution . in formulas , the abep in ( [ eq_12 ] ) can be rewritten , for high snr , as : \[ {\rm{abep}}\left( {s_t } \right ) \to \frac{1}{{2^{n_s } } } \sum\limits_{\scriptstyle { \bf{b}},{\bf{\bar b } } \atop \scriptstyle d_h \left ( { { \bf{c}},{\bf{\bar c } } } \right ) = d_h^{\left ( { \min } \right ) } \left ( t \right ) } { \left [ { { \rm{apep}}\left ( { { \bf{c } } \to { \bf{\bar c } } } \right)\bar \delta \left ( { { \bf{c}}\left [ t \right],{\bf{\bar c}}\left [ t \right ] } \right ) } \right]} \] from ( [ eq_35 ] ) and ( [ eq_36 ] ) , by definition , is exactly the entry of the sv , _ i.e. _ , ] . this concludes the proof . even though the overall analytical derivation and proof leading to ( [ eq_26 ] ) in _ proposition [ apep ] _ are quite involved , the final expression of the apep turns out to be very compact , elegant , and simple to compute . in section [ results ] , via monte carlo simulations , we will substantiate its accuracy for high snr . in addition , the framework is very insightful , as it provides , via direct inspection , important considerations on how the network code affects the performance of the cooperative network , as well as how it can be optimized to improve the end to end performance . important insights from the analytical framework are as follows . _ end to end diversity order_. as far as diversity is concerned , in _ proposition [ codingdiversitygain ] _ we have proved that each source can achieve a diversity order that is equal to the separation vector of the network code . this is a very important result , as it shows that , even though a dual hop network is considered , which is prone to error propagation due to relaying and to demodulation errors that might happen at each relay node , the distance properties of the network code are still preserved as far as the end to end performance is concerned . this result allows us to conclude that , if we want to guarantee a given diversity order for a given source , we can use conventional linear block codes as network codes , and be sure that the end to end diversity order ( and , thus , the error correction capabilities ) of these codes is preserved even in the presence of error propagation due to relaying and nc operations . this result and its proof are , to the best of the authors' knowledge , new , as it is often assumed a priori that the presence of error propagation does not affect the diversity properties of the network code .
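since the separation vector fully determines the per - source diversity order , for small binary network codes it can be computed by exhaustive enumeration of the codebook . the following sketch is our own illustration ; the generator - matrix convention ( one row per source , one column per relay , relay bits given by the gf(2) product ) is an assumption about how the encoding vectors are stacked :

```python
import numpy as np
from itertools import product

def codebook(G):
    """all codewords c = [b, b G] of a systematic binary network code;
    G (n_s x n_r) maps the source bits b to the relay (parity) bits."""
    G = np.asarray(G, dtype=int)
    n_s = G.shape[0]
    words = []
    for b in product([0, 1], repeat=n_s):
        b = np.array(b, dtype=int)
        words.append(np.concatenate([b, (b @ G) % 2]))
    return np.array(words)

def separation_vector(G):
    """sv[t] = minimum hamming distance over codeword pairs that differ in
    the t-th source bit; it equals the diversity order of source t."""
    C = codebook(G)
    n_s = np.asarray(G).shape[0]
    sv = []
    for t in range(n_s):
        d = min(int(np.sum(c != cb)) for c in C for cb in C if c[t] != cb[t])
        sv.append(d)
    return sv

# 2-source 2-relay example: relay 1 forwards s1 xor s2, relay 2 forwards s2
print(separation_vector([[1, 0], [1, 1]]))  # -> [2, 3]
```

for the hypothetical code in the example , the output [2 , 3] exhibits uep : the first source achieves diversity order 2 , while the second achieves diversity order 3 .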
from a practical point of view , this result suggests that , as far as only the diversity order is concerned , network codes can be designed by using the same optimization criteria as for single hop networks . finally , we note that the result obtained in this paper is more general than , as our proof is not based on the singleton bound , and , more importantly , no _ ad hoc _ interleavers are needed to achieve a distributed diversity equal to the sv . _ comparison with single hop networks and classical coding theory_. it is interesting to compare the result about the achievable diversity order in _ proposition [ codingdiversitygain ] _ with the diversity order that is achievable in single hop networks . from , , and , we know that single hop networks operating in fully interleaved fading channels and using soft decision decoding have a diversity order that is equal to the minimum distance of the linear code . the result in _ proposition [ codingdiversitygain ] _ can be seen as a generalization of the analysis of single hop networks in , , to dual hop networks with nc . it is important to emphasize that in our analysis we have taken into account realistic communication and channel conditions , which include demodulation errors at the relays and practical forwarding mechanisms . also , our results are in agreement with , where the error correction properties of network codes for the single source scenario have been studied , and a strong connection with classical coding theory has been established . our analysis extends these results to multi source networks , provides closed form expressions of important performance metrics , and accounts for practical communication constraints . finally , we note that even though the relays and the destination compute hard decision estimates of the incoming signals and send them to the network layer to exploit the redundancy introduced by cooperation and nc , the diversity order is the same as in single hop networks with soft decision decoding . the reason is that at the network layer we take into account the reliability of each bit through a demodulator that resembles a chase combiner ( see also section [ msd_decoder ] ) . _ comparison with adaptive nc solutions_. in section [ introduction ] , we have mentioned that another class of network code designs aims at guaranteeing a given end to end diversity order without injecting erroneous packets into the network . in these solutions , the network code changes according to the detection outcome at each relay node . results and analysis in , , and have established a strong connection between the design of diversity achieving network codes and linear block codes for erasure channels . more specifically , , , and have shown that mds codes can be used as network codes to achieve distributed diversity for erasure channels . the analysis conducted in the present paper complements the design and optimization of network codes for _ erasure channels _ with the performance analysis and design of such codes for _ error channels _ , where all the bits are forwarded to the destination regardless of their reliability . _ end to end coding gain_. as far as the coding gain in _ proposition [ codingdiversitygain ] _ is concerned , and unlike the analysis of the diversity order , there are differences between single and dual hop networks with and without nc .
in fact , in _ corollary [ apep_idealsr ] _ , we have shown that both demodulation errors at the relays and dual hop relaying introduce a coding gain loss compared to single hop transmissions . thus , even though nc and relaying , via a proper receiver design , do not reduce the diversity order inherently provided by the distributed network code , they do reduce the coding gain , which results in a performance degradation that depends on the quality of the source to relay channels . however , this performance degradation might be reduced , and even completely compensated , through adequate network code optimization and design . in fact , _ proposition [ codingdiversitygain ] _ and _ corollary [ apep_idealsr ] _ provide closed form expressions of the coding gain for both scenarios , where we account for realistic and ideal source to relay channels , respectively . a good criterion to design the network code , _ i.e. _ , to exploit the inherent redundancy introduced by nc , might be to choose the generator matrix of the network code such that the following condition , for each source node , is satisfied : it is worth mentioning that , in general , the most important criterion to satisfy is the diversity order requirement , as it has a more pronounced effect on the system performance . the optimization condition in ( [ eq_37 ] ) can be taken into consideration if there is no reduction of the achievable diversity order for a given rate . finally , we emphasize that both diversity order and coding gain can be adjusted by adding or removing relay nodes from the network , which , however , has an effect on the achievable rate , as shown in section [ systemmodel ] . the framework proposed in this paper can be exploited for many network optimizations , such as : i ) designing the network code to achieve the best diversity order and coding gain for a given number of sources and relays ( _ i.e. _ , for a given rate ) ; or ii ) designing the network code to have the minimum number of source and relay nodes ( _ i.e. _ , to maximize the rate ) for a given diversity order and coding gain . _ eep / uep capabilities_. the diversity analysis in _ proposition [ codingdiversitygain ] _ has pointed out that each source of the network can achieve a diversity order that is given by the separation vector of the network code . in other words , each source can achieve a different diversity order . in coding theory , this class of codes is known as uep codes , and it can be very useful when different sources have to transmit data with different quality of service requirements or priorities . in other words , the network code might be designed to take into account the individual requirement of each source , instead of being designed by looking at the worst case scenario only . for example , let us consider a network with three sources , with one of them having data to be transmitted with very low abep .
looking at the worst case scenario , we should optimize the system , and , thus , the network code as well , such that this source has , for a fixed transmit power , a very high diversity order . if we cannot tune the diversity order of each source individually , we are forced to adopt a network code that provides the same high diversity order for all the sources of the network , which might have an impact on the achievable rate ( see section [ systemmodel ] ) . our analysis showcases that uep codes usually exploited in classical coding theory could be used to find the best trade off between the diversity order achieved by each source and the rate of the network . in our opinion , this provides design flexibility , and introduces a finer level of granularity for system optimization , which has not been investigated yet for adaptive nc schemes . in fact , in general , network codes are designed such that all the sources have the same diversity order , , . our framework provides a systematic way to guarantee unequal diversity orders for each source . another interesting application that might benefit from the uep capabilities provided by nc can be found in and ( ch . 3 and ch . 5 ) . more specifically , in , the uep capability is called `` incremental diversity '' . the idea is that energy consumption can be reduced if source nodes located farther from the destination transmit with the same power as closer source nodes , and exploit uep properties to achieve the same end to end performance . in other words , the incremental diversity offered by uep network codes might be used to even out the energy consumption among the nodes of the network , with important implications for green applications . another application for energy saving is the exploitation of the proposed framework as a utility function for energy efficient network formation through coalition formation games ( ch . 5 ) . _ generalization of the performance analysis of dual hop cooperative protocols_. the framework proposed in this paper can be thought of as a generalization of the many results available in the literature for cooperative networks without nc . among the many papers described in section [ introduction ] , let us consider , as an example , . in , it is shown that a dual hop three node network using the demf protocol can achieve full diversity equal to if the receiver has a reliable estimate of the instantaneous error probability at the relay . this result is included , as a byproduct , in our analysis , which is more general as it accounts for arbitrary sources , relays , and binary encoding vectors at each relay . in fact , under the classical coding theory framework , the distributed code used in can be seen as a repetition code with hamming distance equal to for the single source of the network . accordingly , from _ proposition [ codingdiversitygain ] _ we know that the diversity order is equal to , which confirms the analysis in from a much broader perspective . in summary , the proposed framework can be used to study the end to end performance of dual hop cooperative networks without nc , since a repetition code is a special network code .
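as a quick sanity check of this last remark , the separation - vector sketch given earlier can be applied to a repetition code ( a hypothetical single - source example of ours , not taken from the cited work ) :

```python
# a repetition code is the special network code where every relay simply
# forwards the single source bit: the generator is a row of ones. its
# separation vector is n_r + 1, the full diversity of classical relaying.
n_r = 3
G_rep = [[1] * n_r]              # 1 source, n_r relays, all forwarding s1
print(separation_vector(G_rep))  # -> [4], i.e., n_r + 1
```

this matches the intuition that the destination combines the direct link plus independently faded relayed copies .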
in this section , we are interested in analyzing the importance of csi at the receiver to achieve the full diversity inherently available in the structure of the network code , which is given by its sv . in fact , it is important to emphasize that the conclusions drawn in section [ insights_apep ] hold if the receiver has perfect knowledge of the cross over probabilities computed in section [ crossoverprobability ] . this implies that the receiver knows the encoding vectors used at each relay node , along with the csi of all the wireless links of the network . in general , the network code can be agreed upon during the initialization of the network or transmitted by each relay node over the control plane ( at the cost of some overhead ) . on the other hand , csi must be estimated at the receiver . in this section , we aim at showing the importance , for achieving full diversity , of the knowledge of these cross over probabilities . to this end , we assume that each receive node , including the destination , has access to the csi of the wireless links that are directly connected to it ( single hop ) . in other words , the destination knows only the fading gains over the source to destination and relay to destination links , while it is not aware of the fading gains over the source to relay links . on the other hand , we assume that the destination is aware of the network code used at the relays ; this is a requirement for any nc design . with these assumptions , the destination is unable to compute the cross over probabilities in ( [ eq_7 ] ) , and , thus , the received bits cannot be properly weighted according to their reliability . in such a worst case scenario , the destination can only assign the same reliability to each received bit . this corresponds to setting all the weights in ( [ eq_6 ] ) equal to 1 , _ i.e. _ , = 1 for . due to space limitations , we describe only the main modifications of the proof that lead to ( [ eq_39 ] ) . in particular , when = 1 , . since the end to end diversity is given by the addend having the smallest diversity order , we conclude that . finally , by taking into account the relation between hamming distance and sv given in section [ diversitycodinggain ] , ( [ eq_39 ] ) is obtained . this concludes the proof . _ proposition [ nonml_decoder ] _ brings to our attention the importance of the csi of the source to relay links . in fact , the demodulator in ( [ eq_38 ] ) loses approximately half of the potential diversity order inherently available in the network code . this result is in agreement with some studies available in the literature for simple cooperative networks without nc , such as , , and , where a similar diversity loss due to either non coherent demodulation or imperfect csi has been observed . furthermore , this result seems to agree with the diversity that can be achieved by linear block codes over single hop networks with hard decision decoding .
however , it should be emphasized that in our case the diversity loss is not due to hard decision demodulation at the physical layer , which is actually used for both demodulators in ( [ eq_6 ] ) and ( [ eq_38 ] ) , but it originates from the distributed nature of the network code , from demodulation errors at the relays , and from the fact that the demodulator does not adapt itself to the reliability of the source to relay links . the aim of this section is to show some numerical examples to substantiate the analytical derivations , claims , and conclusions of the paper . more specifically , we are interested in : i ) showing the accuracy of the proposed framework for high snr , as well as the accuracy of the diversity order and coding gain analysis ; ii ) understanding the impact of assuming ideal source to relay links , as is often considered in the literature , and bringing to the attention of the reader that this might lead to misleading conclusions about the usefulness of nc over fading channels ; iii ) studying the impact of the network geometry on the end to end performance , and , more specifically , the role played by the positions of the relays ; and iv ) verifying the diversity reduction caused when the reliability of the source to relay links is not properly taken into account at the destination . the analytical frameworks are compared to monte carlo simulations , which implement ( [ eq_6 ] ) and ( [ eq_38 ] ) with no high snr approximations . simulation parameters are summarized in the caption of each figure . _ accuracy of the framework for i.i.d . fading channels_. figs . [ fig1__2s_2r__allnc ] [ fig8__2s_5r__mixs1 ] show the end to end abep for three network topologies ( and ; and ; and ) and for different network codes . in particular , the network codes are chosen according to three criteria : i ) nc is not used and only cooperation is exploited to improve the performance ; ii ) all the relay nodes implement binary nc on all the received data , as is often assumed in the literature ; and iii ) only some relay nodes perform nc on a subset of received packets . the first class of codes provides the reference scenario to understand the benefit of nc over classical cooperative protocols . the second class of codes represents the baseline scenario for network coded cooperative networks . finally , the third class of codes is important to highlight uep capabilities , and to show that a non negligible improvement can be obtained if the network code is properly designed and only some sources are network coded . the numerical examples confirm the tightness of our framework for high snr , and show that both diversity order and coding gain can be well estimated with our simple framework . furthermore , the uep behavior of many network codes can be observed as well ; in particular , by comparing the svs summarized in the caption of each figure with the slope of each curve , we can notice a perfect match , as predicted in section [ diversitycodinggain ] ( a simulation sketch in this spirit is given below ) .
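the following monte carlo sketch estimates the abep of one source under the metric of ( [ eq_6 ] ) . it is a simplified abstraction of ours , not the simulator used for the figures : every hop is modelled as a binary symmetric channel with a fixed crossover probability ( i.e. , fading is frozen into the parameter `p_hop` rather than redrawn per trial ) , and relay slots see the compounded source - to - relay - to - destination crossover , in the spirit of the equivalent probabilities of ( [ eq_7 ] ) :

```python
import numpy as np

def xor_crossover(p, k):
    """prob. that the xor of k bits, each flipped independently w.p. p,
    is itself flipped (piling-up lemma)."""
    return 0.5 * (1 - (1 - 2 * p) ** k)

def serial_crossover(p1, p2):
    """equivalent crossover of two binary symmetric channels in series."""
    return p1 * (1 - p2) + (1 - p1) * p2

def simulate_abep(G, p_hop, n_trials=50000, ml_weights=True, rng=None):
    """monte carlo sketch of the abep of source 1 with the weighted
    hamming metric of (eq_6): w[m] = ln((1-p[m])/p[m]) or w[m] = 1."""
    rng = rng or np.random.default_rng(1)
    G = np.asarray(G, dtype=int)
    n_s, n_r = G.shape
    C = codebook(G)  # codeword list from the separation-vector sketch
    # per-slot equivalent crossover probabilities p[m]
    p = np.array([p_hop] * n_s +
                 [serial_crossover(xor_crossover(p_hop, int(G[:, q].sum())),
                                   p_hop) for q in range(n_r)])
    w = np.log((1 - p) / p) if ml_weights else np.ones_like(p)
    errors = 0
    for _ in range(n_trials):
        b = rng.integers(0, 2, n_s)
        c = np.concatenate([b, (b @ G) % 2])
        c_hat = c ^ (rng.random(n_s + n_r) < p)      # per-slot bit flips
        best = min(C, key=lambda ct: float(np.sum(w * np.abs(c_hat - ct))))
        errors += int(best[0] != c[0])
    return errors / n_trials
```

the decoder line mirrors the chase - combiner interpretation : hard decisions per slot , combined at the network layer with per - slot reliability weights .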
By comparing the SVs summarized in the caption of each figure with the slope of each curve, we notice a perfect match, as predicted in Section [diversitycodinggain]. Finally, by comparing the results of the 2-source 2-relay network with those of the 2-source 5-relay network, we notice that if the network code is not properly chosen, having multiple relays does not necessarily lead to a better diversity order. Since the rate of the system is smaller for larger networks (more relays), we can conclude that small networks with well-optimized network codes can outperform large networks where the network code is not adequately chosen. What really matters in optimizing the performance of multi-source multi-relay networks is the SV of the network code, and, thus, the way the packets received at the relays are mixed together.

Impact of the source-to-relay links on the achievable performance

In Table [tab_1], we show a comparative study of the performance of three network topologies for realistic source-to-relay links, along with the scenario of error-free source-to-relay links, which is denoted as "ideal" in the table. The results have been obtained from the analytical models and have been verified through Monte Carlo simulations. The agreement between model and simulation for the "realistic" scenario can be verified in Figs. [fig1__2s_2r__allnc]-[fig8__2s_5r__mixs1], since the same simulation setup is used. On the other hand, due to space limitations, similar curves for the "ideal" case are not shown, but a similar accuracy has been obtained. The framework used for this latter scenario is given in Corollary [apep_idealsr]. As discussed in Section [particularchannels], Table [tab_1] confirms that there is no diversity loss between the two scenarios, but only a coding gain loss can be expected. This is because the ML-optimum demodulator is used in both scenarios. However, the conclusions about the usefulness of NC can be quite different in the two scenarios. Let us consider, for example, the 2-source 2-relay network. In the "ideal" setting, there is no doubt that NC3 and NC4 should be preferred to NC1 (no NC) and to NC2 (all received data packets are network-coded), as one user achieves a higher diversity order while the other has the same ABEP as with NC1 and NC2. On the other hand, the conclusion in the "realistic" setting is different. In this case, we observe that the higher diversity order achieved by one user is compensated by a coding gain loss for the second user. In other words, a coding/diversity gain tradeoff exists. However, this behavior is in the spirit of cooperative networking: one user might tolerate a performance degradation in a given communication round and wait for a reward during another communication round. Properly choosing the network code enables this possibility. Furthermore, by comparing NC1 and NC2, we notice that different conclusions can be drawn about the usefulness of NC in the analyzed scenarios. In the "ideal" setting, a cooperative network with NC (NC2) has the same ABEP as a cooperative network without NC (NC1). The conclusion is that NC is useless in this case. On the other hand, the situation changes in the "realistic" setting.
In this case, we can see that NC2 is superior to NC1, and, thus, we conclude that the redundancy introduced by NC can be efficiently exploited at the receiver when it operates in harsh fading scenarios. In fact, in the "realistic" setting, NC2 can counteract the error propagation due to the dual-hop protocol, even though this network code is not strong enough to achieve a higher diversity order. Another seemingly contradictory behavior can be found when analyzing the 3-source 3-relay network. By comparing NC1 (no NC) and NC2 (the relays apply NC to all received packets), we notice that in the "ideal" setting NC turns out to be harmful, as NC2 provides worse performance than NC1. On the other hand, in the "realistic" setting NC1 and NC2 provide the same ABEP. In other words, NC does not help, but at least it is not harmful. These examples, even though specific to particular networks and codes, clearly illustrate the importance of considering realistic source-to-relay links to draw sound conclusions about the merits and demerits of NC for multi-source multi-relay networks over fading channels. Furthermore, we mention that, for all the network topologies studied in Table [tab_1], NC2 is representative of a network code designed with ([eq_37]) in mind, as it provides the same high-SNR diversity order and coding gain in both the "ideal" and "realistic" settings. Finally, we emphasize that our conclusions and trends depend on the coding gain of the network, whose study is often neglected due to its analytical intractability. In this paper, we have succeeded in providing an accurate estimate of the coding gain as well.

Accuracy of the framework for i.n.i.d. fading channels and impact of relay positions

In Fig. [fig9__2s_2r__mixs2] and Fig. [fig10__2s_2r__allfw], we analyze the accuracy of the framework for i.n.i.d. fading channels. We consider a 2-source 2-relay network with nodes located as described in the caption of the figures, and five scenarios in which the relay nodes occupy different positions with respect to the source and destination nodes. We observe a good accuracy of the framework, and notice that the positions of the relays can affect the end-to-end performance. This example shows that the proposed framework can be used, for arbitrary fading parameters, for performance optimization via optimal relay placement.

Impact of receiver CSI on the diversity order

In Fig. [fig11__3s_3r__mixs1s2] and Fig. [fig12__2s_5r__mixs1], we study the impact of using the sub-optimal non-ML demodulator in ([eq_38]). In particular, the ABEP of this demodulator is computed by using Monte Carlo simulations, and it is compared to the analytical investigation in Section [msd_decoder].
For comparison, the ABEP (analytical framework and Monte Carlo simulations) of the ML-optimum demodulator in ([eq_6]) is shown as well. A non-negligible drop of the diversity order can be observed, and, by direct inspection, it can be noticed that the curves have the slope predicted in ([eq_39]). This confirms the importance of CSI about the source-to-relay links in order to avoid a substantial performance degradation.

In this paper, we have proposed a new analytical framework to study the performance of multi-source multi-relay network-coded cooperative wireless networks for generic network topologies and binary encoding vectors. Our framework takes into account practical communication constraints, such as demodulation errors at the relay nodes and fading over all the wireless links. More specifically, closed-form expressions of the crossover probability at each relay node are given, and end-to-end closed-form expressions of the ABEP and of the diversity/coding gain are provided. Our analysis has pointed out that the achievable diversity of each source node coincides with the separation vector of the network code, which shows that NC can offer unequal diversity capabilities to different sources. Also, the importance of CSI about the source-to-relay channels has been studied, and it has been proved that half of the diversity might be lost if the reliability of the source-to-relay links is not properly taken into account at the destination. Monte Carlo simulations have been used to substantiate the analytical modeling and the theoretical findings for various network topologies and network codes. In particular, the numerical examples have confirmed that the proposed framework is asymptotically tight for high SNR. Finally, by comparing the performance of various network topologies, with and without taking into account decoding errors at the relays, we have shown that wrong conclusions about the effectiveness and potential gain of NC for cooperative networks might be drawn when network operations are oversimplified. This highlights the importance of studying the performance of network-coded cooperative wireless networks with practical communication constraints, for a pragmatic assessment of the end-to-end performance and to enable the efficient optimization of these networks. The framework proposed in this paper provides an answer to this problem.

Lemma [apep_lemma1]: the RVs $\bar{m}^{(k)}_{(\mathbf{c},\bar{\mathbf{c}})}$, for $k = 1, \dots, d_H(\mathbf{c},\bar{\mathbf{c}})$, are independent, and, thus, the corresponding probability factorizes as $\prod_{k=1}^{d_H(\mathbf{c},\bar{\mathbf{c}})} \bar{\mathbf{P}}\big[\bar{m}^{(k)}_{(\mathbf{c},\bar{\mathbf{c}})}\big]$. Furthermore, from the definition of these terms, the contributions of the relayed links have to be included, and the resulting vector accounts for the dual-hop relaying protocol and the specific network code. This concludes the proof. Two important remarks are worth making about Lemma [apep_lemma1]. First, we would like to emphasize that, for ease of presentation and to stay focused on the most important issues of our analysis, i.e., dual-hop networking and NC, the results in ([eq_1__lemma1]) and ([eq_2__lemma1]) are given here for Rayleigh fading only. However, they can be generalized to other fading distributions for which the high-SNR approximation exists. In this paper, Rayleigh fading is studied for illustrative purposes only.
Second, by comparing ([eq_2__lemma1]) with eq. (40) of the cited literature on CSI-assisted relaying, it follows that, for high SNR, the effect on the error probability at the relays of performing NC on noisy and faded received data is equivalent to an amplify-and-forward (AF) relay protocol with CSI-assisted relaying and with a number of hops equal to the number of sources that are network-coded at each relay. This conclusion is in agreement with the equivalence between the error probability at the relays and the error performance of DemF relay protocols already highlighted in Section [crossoverprobability]. In fact, it has been shown in the literature that, except when the number of hops is very large and the fading severity is very small, the performance of AF and DemF protocols is, for high SNR, very close. As the number of sources that can be network-coded is, for practical applications, not very large, this high-SNR approximation can be very useful to get formulas that provide insights into the system behavior. The high-SNR equivalency between ([eq_2__lemma1]) and AF relaying is exploited in Lemma [apep_lemma2] to get high-SNR but closed-form and accurate formulas.

Lemma [apep_lemma2]: let us consider the average of the product $\prod_{k \in \bar{\mathcal{A}}} \mathbf{P}\big[\bar{m}^{(k)}_{(\mathbf{c},\bar{\mathbf{c}})}\big]$ over the channel realizations, where: i) for the source-to-destination links the corresponding relation holds with a true equality; and ii) for the dual-hop links the end-to-end term takes the inverse form $(\cdot)^{-1}$ typical of CSI-assisted AF relaying.

Proof: From the Chernoff bound, i.e., $Q(x) \le \tfrac{1}{2}\exp(-x^2/2)$, which is accurate for large arguments and in our case implies high SNR, the following approximation holds: ([eq_3__lemma3]), where a constant correction term is introduced to recover the coding gain inaccuracy that might arise when using the Chernoff bound. The high-SNR approximation in ([eq_3__lemma3]) can be explained as follows. By direct inspection, the left- and right-hand-side terms can be shown to have the same diversity order. In fact, the left-hand side is the product of terms each having diversity one. On the other hand, the right-hand side is the error probability of an MRC scheme with multiple diversity branches at the receiver, which is known to have diversity equal to the number of branches. The constant (correction) factor is introduced only to avoid coding gain inaccuracies, which are always present when using the Chernoff bound. Since the goal of this paper is to accurately estimate both coding gain and diversity order, the accurate evaluation of this factor is instrumental in estimating the end-to-end performance of the system. To get an approximation that is accurate, yet simple and useful for further analysis, we use first-order moment matching to estimate the correction factor in ([eq_3__lemma3]). The motivation is that, as we will better substantiate at the end of this proof, it allows us to have a closed-form estimate that depends only on the quantities in ([eq_1__lemma3]), while being independent of the fading parameters. In formulas, we seek the correction factor such that the equality in ([eq_4__lemma3]) is satisfied. To this end, we need closed-form expressions of both averages in ([eq_4__lemma3]). Once again, we use the high-SNR parametrization, which leads to the following result:

$$\mathrm{E}_{\mathbf{h}}\left\{ Q\!\left( \sqrt{\, 2\left(\frac{E_m}{N_0}\right)\upsilon \sum_{k \in \mathcal{A}} \mathrm{SNR}^{(k)}_{(\mathbf{c},\bar{\mathbf{c}})} } \right) \right\} \;\overset{(b)}{\longrightarrow}\; \left( 4\,\frac{E_m}{N_0}\,\upsilon \right)^{-d} \left[ \frac{ 2^{d-1}\,\pi^{\frac{d-1}{2}}\,\Gamma\!\left(d+\tfrac{1}{2}\right) }{ \Gamma\!\left(\tfrac{3}{2}\right)^{d}\,\Gamma(d+1)\,\prod_{k \in \mathcal{A}} \left( \dfrac{1}{\sigma_{r_q d}^{2}} + \sum_{t=1}^{N_S} \dfrac{g_{s_t r_q}}{\sigma_{s_t r_q}^{2}} \right)^{-1} } \right]$$

where: 1) step (a), which yields the intermediate high-SNR expression of the average, is obtained by taking into account that (i) the SNR terms are statistically independent for different $k$; (ii) according to Lemma [apep_lemma2], they can be seen as the end-to-end SNRs of an equivalent multi-hop AF relay protocol; and (iii) the asymptotic analysis for multi-hop AF relay networks applies; and 2) step (b) is obtained by recognizing that we have to compute the average of an equivalent MRC scheme where each branch is an equivalent multi-hop network using the AF relay protocol. Finally, by equating the two terms in ([eq_5__lemma3]), the correction factor in ([eq_1__lemma3]) is obtained. As mentioned above, it is independent of the channel statistics. Similarly to Lemma [apep_lemma1] and Lemma [apep_lemma2], we mention that the proposed procedure can be applied to any fading channel model for which this parametrization is available. This concludes the proof.

F. Rossetto and M. Zorzi, "Mixing network coding and cooperation for reliable wireless communications", IEEE Wireless Commun. Mag., no. 1, pp. 15-21, Feb. 2011.
K. J. Basel, V. Kasemsri, and B. Ramakrishnan, "Application of network coding in tactical data networks", IEEE Military Commun. Conf., pp. 1-6, Nov. 2009.
J. N. Laneman, D. Tse, and G. Wornell, "Cooperative diversity in wireless networks: efficient protocols and outage behavior", IEEE Trans. Inform. Theory, pp. 3062-3080, Dec. 2004.
R. Ahlswede et al., "Network information flow", IEEE Trans. Inform. Theory, no. 4, pp. 1204-1216, July 2000.
et al., "Codecast: a network coding based ad hoc multicast protocol", Wireless Commun., vol. 13, no. 5, pp. 76-81, Oct. 2006.
S. Katti, "Network coded wireless architecture", Ph.D. dissertation, Massachusetts Institute of Technology, USA, Sep. 2008.
A. Munari, F. Rossetto, and M. Zorzi, "Phoenix: making cooperation more efficient through network coding in wireless networks", IEEE Trans. Wireless Commun., vol. 8, no. 10, pp. 5248-5258, Oct.
M. Di Renzo et al., "Robust wireless network coding: an overview", Springer Lecture Notes, LNICST 45, pp. 685-698, 2010.
Z. Ding et al., "On combating the half-duplex constraint in modern cooperative networks: protocols and techniques", IEEE Wireless Commun. Mag., 2011 (to appear). [Online]. Available: http://www.staff.ncl.ac.uk/z.ding/wc_magazine.pdf
R. Koetter and F. R. Kschischang, "Coding for errors and erasures in random network coding", IEEE Trans. Inform. Theory, vol. 54, no. 8, pp. 3579-3591, Aug. 2008.
D. Silva, "Error control for network coding", Ph.D. dissertation, University of Toronto, Canada, 2009.
O. Shalvi, "Multiple source cooperation diversity", IEEE Commun., vol. 8, no. 12, pp. 712-714, Dec. 2004.
K. Azarian, H. El Gamal, and P. Schniter, "On the achievable diversity-multiplexing tradeoff in half-duplex cooperative channels", IEEE Trans. Inform. Theory, pp. 4152-4172, Dec. 2005.
D. Chen and J. N. Laneman, "Modulation and demodulation for cooperative diversity in wireless systems", IEEE Trans., vol. 5, no. 7, pp. 1785-1794, Jul.
A. K. Sadek, W. Su, and K. J. Ray Liu, "Multinode cooperative communications in wireless networks", IEEE Trans. Signal Process., no. 1, pp. 341-355, Jan. 2007.
et al., "High-performance cooperative demodulation with decode-and-forward relays", IEEE Trans., vol. 55, no. 7, pp. 1427-1438, Jul.
Z. Ding, T. Ratnarajah, and C. C. F. Cowan, "On the diversity-multiplexing tradeoff for wireless cooperative multiple access systems", IEEE Trans. Signal Process., no. 9, pp. 4627-4638, Sep.
et al., "Link-adaptive distributed coding for multi-source cooperation", EURASIP J. Adv. Signal Process., Jan. 2008.
T. Wang and G. B. Giannakis, "Complex field network coding for multiuser cooperative communications", IEEE J. Sel. Areas Commun., no. 3, pp. 561-571, Apr. 2008.
C. Peng et al., "On the performance analysis of network-coded cooperation in wireless networks", IEEE Trans. Inform. Theory, vol. 7, no. 8, pp. 3090-3097, Aug.
K. Lee and L. Hanzo, "MIMO-assisted hard versus soft decoding-and-forwarding for network coding aided relaying systems", IEEE Trans. Wireless Commun., vol. 8, no. 1, pp. 376-385, Jan.
R. Annavajjala, "On optimum regenerative relaying with imperfect channel knowledge", IEEE Trans. Signal Process., no. 3, pp. 1928-1934, Mar.
M. Xiao and M. Skoglund, "Multiple-user cooperative communications based on linear network coding", IEEE Trans., pp. 3345-3352, Dec. 2010.
Lai, Z. Gao, and K. J. Ray Liu, "Space-time network codes utilizing transform-based coding", IEEE Global Commun. Conf., pp. 1-5, Dec. 2010.
A. Nasri, R. Schober, and M. Uysal, "Error rate performance of network-coded cooperative diversity systems", IEEE Global Commun. Conf., pp. 1-6, Dec. 2010.
R. Youssef and A. Graell i Amat, "Distributed serially concatenated codes for multi-source cooperative relay networks", IEEE Trans. Wireless Commun., no. 1, pp. 253-263, Jan.
H. Topakkaya and Z. Wang, "Wireless network code design and performance analysis using diversity-multiplexing tradeoff", IEEE Trans., no. 2, pp. 488-496, Feb. 2011.
J. Rebelatto et al., "Multi-user cooperative diversity through network coding based on classical coding theory", IEEE Trans. Sig. Process., no. 2, pp. 916-926, Feb. 2012.
R. Zhang and L. Hanzo, "Multiple-source cooperation: from code-division multiplexing to variable-rate network coding", IEEE Trans. Vehicular Technol., no. 3, pp. 1005-1015, Mar.
Lai and K. J. Ray Liu, "Space-time network coding", IEEE Trans. Signal Process., pp. 1706-1718, Apr. 2011.
Z. Ding and K. K. Leung, "On the combination of cooperative diversity and network coding for wireless uplink transmissions", IEEE Trans. Vehicular Technol., no. 4, pp. 1590-1601, May 2011.
et al., "Network-coded LDPC code design for a multi-source relaying system", IEEE Trans. Wireless Commun., no. 5, pp. 1538-1551, May 2011.
et al., "High-throughput multi-source cooperation via complex-field network coding", IEEE Trans. Wireless Commun., no. 5, pp. 1606-1617, May 2011.
et al., "Binary field network coding design for multiple-source multiple-relay networks", IEEE Int. Conf., June 2011.
Yune, D. Kim, and G. H. Im, "Opportunistic network-coded cooperative transmission with demodulate-and-forward protocol in wireless channels", IEEE Trans., vol. 59, no. 7, pp. 1791-1795, July 2011.
C. Wang, M. Xiao, and M. Skoglund, "Diversity-multiplexing tradeoff analysis of coded multi-user relay networks", IEEE Trans. Wireless Commun., vol. 59, no. 7, pp. 1995-2005, July 2011.
E. Fasolo, F. Rossetto, and M. Zorzi, "Network coding meets MIMO", IEEE Workshop on Network Coding, Theory, and Applications, Jan.
G. J. Bradford and J. N. Laneman, "A survey of implementation efforts and experimental design for cooperative communications", IEEE Int. Conf. Acoustics, Speech, and Signal Processing, pp. 5602-5605, Mar.
M. Iezzi, M. Di Renzo, and F. Graziosi, "Network code design from unequal error protection coding: channel-aware receiver design and diversity analysis", IEEE Int. Conf., pp. 1-6, June 2011.
S. L. H. Nguyen et al., "Mitigating error propagation in two-way relay channels with network coding", IEEE Trans. Wireless Commun., vol. 9, no. 11, pp. 3380-3390, Nov.
G. Al-Habian et al., "Threshold-based relaying in coded cooperative networks", IEEE Trans. Vehicular Technol., no. 1, pp. 123-135, Jan. 2011.
M. C. Ju and I.-M. Kim, "ML performance analysis of the decode-and-forward protocol in cooperative diversity networks", IEEE Trans. Wireless Commun., vol. 8, no. 7, pp. 3855-3867, July 2009.
M. D. Selvaraj, R. K. Mallik, and R. Goel, "Optimum receiver performance with binary phase shift keying for decode-and-forward relaying", IEEE Trans. Vehicular Technol., no. 4, pp. 1948-1954, May 2011.
M. Xiao and T. Aulin, "On the bit error probability of noisy channel networks with intermediate node encoding", IEEE Trans. Inform. Theory, no. 11, pp. 5188-5198.
M. Xiao and T. Aulin, "Optimal decoding and performance analysis of a noisy channel network with network coding", IEEE Trans., no. 5, pp. 1402-1412, May 2009.
M. Iezzi, M. Di Renzo, and F. Graziosi, "Closed-form error probability of network-coded cooperative wireless networks with channel-aware detectors", IEEE Global Commun. Conf., pp. 1-6, Dec. 2011.
et al., "On code parameters and coding vector representation for practical RLNC", IEEE Int. Conf., June 2011.
D. Chase, "Code combining: a maximum-likelihood decoding approach for combining an arbitrary number of noisy packets", IEEE Trans. Commun., vol. COM-33, no. 5, pp. 385-393, May 1985.
B. Masnick and J. Wolf, "On linear unequal error protection codes", IEEE Trans. Inform. Theory, vol. IT-13, pp. 600-607, Oct.
I. Boyarinov and G. Katsman, "Linear unequal error protection codes", IEEE Trans. Inform. Theory, vol. IT-27, pp. 168-175, Mar. 1981.
C. Poulliat and M. Di Renzo, "Joint network/channel decoding for heterogeneous multi-source multi-relay cooperative networks", ACM Conf. Performance Evaluation Methodologies and Tools, pp. 1-6, May 2011.
J. G. Proakis, Digital Communications, McGraw-Hill, 4th ed., 2000.
M. K. Simon and M.-S. Alouini, Digital Communication over Fading Channels, John Wiley & Sons, 1st ed., 2000.
et al., "Channel-aware decision fusion in wireless sensor networks", IEEE Trans. Sig. Process., pp. 3454-3458, Dec. 2004.
M. Di Renzo et al., "Distributed data fusion over correlated log-normal sensing and reporting channels: application to cognitive radio networks", IEEE Trans. Wireless Commun., vol. 8, pp. 5813-5821, Dec. 2009.
B. Hassibi and H. Vikalo, "On the sphere decoding algorithm: part I, the expected complexity", IEEE Trans. Signal Process., vol. 53, no. 8, pp. 2806-2818.
M. O. Hasna and M.-S. Alouini, "End-to-end performance of transmission systems with relays over Rayleigh fading channels", IEEE Trans. Wireless Commun., vol. 2, no. 6, pp. 1126-1131, Nov.
E. Morgado et al., "End-to-end average BER in multihop wireless networks over fading channels", IEEE Trans. Wireless Commun., vol. 9, no. 8, pp. 2478-2487, Aug.
R. Knopp and P. A. Humblet, "On coding for block fading channels", IEEE Trans. Inform. Theory, no. 2, pp. 189-205, Jan.
M. Di Renzo, F. Graziosi, and F. Santucci, "A unified framework for performance analysis of CSI-assisted cooperative communications over fading channels", IEEE Trans., no. 9, pp. 2552-2557, Sep. 2009.
Z. Wang and G. B. Giannakis, "A simple and general parameterization quantifying performance in fading channels", IEEE Trans., vol. 51, no. 8, pp. 1389-1398, Aug.
Ribeiro, X. Cai, and G. B. Giannakis, "Symbol error probabilities for general cooperative links", IEEE Trans. Wireless Commun., no. 3, pp. 1264-1273, May 2005.
L. A. Dunning and W. E. Robbins, "Optimal encoding of linear block codes for unequal error protection", Information and Control, vol. 2, pp. 150-177, May 1978.
Z. Zhang, "Linear network error correction codes in packet networks", IEEE Trans. Inform. Theory, pp. 209-218, Jan. 2008.
Lai and K. J. Ray Liu, "Wireless network cocast: location-aware cooperative communications with linear network coding", IEEE Trans. Wireless Commun., vol. 8, no. 7, pp. 3844-3854, July 2009.
Lai, "Wireless network cocast: cooperative communications with space-time network codes", Ph.D. dissertation, University of Maryland, Apr. [Online]. Available: http://drum.lib.umd.edu/bitstream/1903/11534/1/lai_umd_0117e_12109.pdf
M. Di Renzo et al., "GREENET: an early stage training network in enabling technologies for green radio", IEEE Vehicular Technol. Conf. Spring, pp. 1-5, May 2011.

[Table tab_1: comparison of the network codes NC-1 to NC-4 for the three network topologies in the "ideal" and "realistic" settings; the numerical entries of the table were not recovered.]
[Figure captions: the nodes are located at fixed positions (in meters), and five placement scenarios are considered for the relays; the coordinate values and scenario parameters were not recovered.]
In this paper, a multi-source multi-relay cooperative wireless network with binary modulation and binary network coding is studied. The system model encompasses: i) a demodulate-and-forward protocol at the relays, where the received packets are forwarded regardless of their reliability; and ii) a maximum-likelihood optimum demodulator at the destination, which accounts for possible demodulation errors at the relays. An asymptotically tight, closed-form expression of the end-to-end error probability is derived, which clearly showcases the diversity order and coding gain of each source. Unlike other papers available in the literature, the proposed framework has three main distinguishing features: i) it is useful for general network topologies and arbitrary binary encoding vectors; ii) it shows how the network code and the two-hop forwarding protocol affect diversity order and coding gain; and iii) it accounts for realistic fading channels and demodulation errors at the relays. The framework provides three main conclusions: i) each source achieves a diversity order equal to the separation vector of the network code; ii) the coding gain of each source decreases with the number of mixed packets at the relays; and iii) if the destination cannot take into account demodulation errors at the relays, it loses approximately half of the diversity order. Index terms: cooperative networks, multi-hop networks, network coding, performance analysis, distributed diversity.
Imaging of a significant number of extrasolar planets requires achieving star vs. planet contrasts suited to young giant planets, or even to old giant and rocky planets, at a few tenths of an arc-second from a star, which corresponds to several λ/D in the near infrared for telescopes with pupil sizes of the 8-10 m class. In this regime, the dominant noise contribution is due to the stellar background. To achieve these ambitious goals, high-contrast imagers usually include various components. First, an extreme adaptive optics (XAO) system is used, allowing aberrations to be corrected up to a high order and providing a high Strehl ratio (SR). Second, some coronagraph is included, attenuating the coherent diffraction pattern of the on-axis point spread function (PSF). A proper combination of these two devices allows reduction of the stellar background out to the AO system control radius (set by the actuator spacing projected onto the telescope pupil) for state-of-the-art systems. This background is due to a rapidly changing halo of speckles generated by residual telescope pupil phase distortions, which have spatial frequencies close to those of planet images. In order to avoid false alarms, the detection threshold level should then be set at several times the root mean square (rms) noise level. Even in the favourable case where the speckle intensity distribution can be assumed to be Gaussian, the detection confidence limit should be at least 5 times the noise level. This implies that, at small angular separations, the limiting contrast provided by state-of-the-art extreme-AO and coronagraphy on 8-10 m telescopes falls short of the values quoted above. In addition, phase aberrations originating inside the optical train and not corrected by the extreme-AO system produce speckles of longer lifetime (minutes or hours) than those due to the atmosphere. Other slowly varying (of the order of seconds) phase errors are due to aliasing effects in the wavefront sensor and, for coronagraphic systems, to the adaptive optics time lag. Beyond a handful of favorable cases where planets are warm, or widely separated from their parent star, or eventually both, additional techniques are required to reach the larger contrasts needed for extrasolar planet detection. Simultaneous differential imaging (SDI) is a high-contrast differential imaging technique by which the subtraction of different images of the same field, acquired simultaneously by the same instrument, allows the noise produced by atmospheric and instrumental phase aberrations to be removed or reduced. The SDI principle can be applied to images obtained with different polarization modes, or selecting two distinct wavelengths in a fixed spectral range, or, better, exploiting the entire spectral range by integral field spectroscopy. In this paper we focus on SDI based upon this latter strategy only. Essentially, SDI is a calibration technique: images are acquired simultaneously in bands at close wavelengths where the planetary (but not the stellar) fluxes differ appreciably. Subtracting these images from each other should allow the speckle noise to be removed, or at least reduced, since it is assumed to be similar in the various images after a suitable chromatic re-scaling, while the planet signal is left nearly untouched. There are at least two ways to exploit this calibration technique. In the more traditional approach, specific characteristics of the (expected) planetary spectra are exploited.
As indicated by various theoretical works and observations (e.g., of brown dwarfs and of gaseous planets in the solar system), the spectra of giant planets are dominated by several absorption bands (mainly due to methane and water vapor) at both visible and near-infrared wavelengths. In such a case, SDI may work by subtracting images where the planet signal is absent from those where it is present, while the background is nearly the same, because the spectrum of the parent star is nearly featureless; see Figure [fg:testi]. The main advantage of this technique is the minimal set of assumptions required on the chromatic behavior of speckles; however, it allows only a limited reduction of the noise. Alternatively, we might try to model the variation of speckles with wavelength. In principle this allows complete removal of the speckle noise without making any assumption about the planetary spectrum, hence allowing the real planetary spectrum to be retrieved. Independently of the adopted SDI recipe, integral field spectrograph designs tuned for diffraction-limited high-contrast imaging should take into account several effects jeopardizing the interpolation procedures required before the simultaneous spectral subtractions, which in turn severely limit the accuracy of this calibration technique. In this paper we present a discussion of these effects and derive the basic equations that should be considered when designing lenslet-based diffraction-limited integral field spectrographs. Then, we describe a new concept for the lenslet array shaping the IFU of such instruments (i.e., BIGRE), allowing significant improvement over the main limitations of the more traditional designs based on the TIGER concept. The structure of the paper is as follows. In Section [sec:speckle] we recall the basics of a post-coronagraphic speckle field. In Section [sec:sdi] we summarize the principle of SDI. In Section [sec:s-sdi] we discuss the basics of spectroscopic SDI (hereafter S-SDI), defining the conditions allowing aliasing errors to be avoided when sampling both the entrance speckle field and the final exit slit functions. In Section [sec:ifs-options] we present various options for IFS instruments suited for S-SDI. In Section [sec:ct] we define the cross-talk terms in the case of diffraction-limited lenslet-based IFS. In Section [sec:tiger] we derive the rules governing image propagation at the diffraction limit through the TIGER concept, and in Section [sec:bigre] the ones proper to the new BIGRE concept; specifically, we explain there how to conceive a BIGRE-oriented IFS instrument adopting standard dioptric devices. In Section [sec:sphere-ifs] we present two design setups (based on BIGRE and TIGER respectively) for SPHERE, indicating the solution adopted for its future IFS. In Section [sec:tigervsbigre] we compare the TIGER and the BIGRE concepts in terms of coherent and incoherent signal suppression, considering several cases for the single-lens shape and the IFU lattice configuration. Finally, our conclusions are drawn in Section [sec:end].
) scaling of a speckle field is basic to any application of the sdi calibration technique .for this reason a short description of these physical concepts is fundamental to introduce the reader to the topics treated in the rest of the paper .inspired by the approach of , we will use the fraunhofer approximation to describe the impact of small residual phase variations of the electric field imaged on a fixed post - coronagraphic entrance pupil plane , i.e. the working case of high - contrast imaging instruments like sphere .while this approach allows a simple mathematical treatment and physical understanding , it ignores more complex effects due to amplitude errors and fresnel propagation , as pointed out by .it is outside the scope of this paper to discuss such effects , which can be minimized by careful instrument design , but it is likely that they will set the ultimate limit of planet imaging . the most general expression of the monochromatic electric field once projected on the coronagraphic entrance pupil plane is : ,\ ] ] where is the coronagraphic pupil transmission function , and is the phase of the electric field evaluated over this coronagraphic pupil plane . assuming a perfect optical propagation from the telescope to this plane - i.e. no differential chromatical aberrations in the beam - the chromatism of the phase can be written explicitly as a function of the wavelength and the wavefront error as follows : assuming as real the expectation value of the wavefront error given by an extreme - ao system in the near infrared ( i.e. at ) , equation ( [ eq : coro-1 ] ) can be approximated as follows : at this point , the action of an un - specified coronagraph can be formalized directly on the coronagraphic exit pupil plane .the goal of the coronagraph is to cancel as much as possiblee the amplitude of the electric field along the optical axis on this plane . exploiting ( [ eq : coro-3 ] ) , the resulting on - axis electric field for a perfect coronagraph is then : or , by equation ( [ eq : coro-2 ] ) , is equal to : defining finally as the fourier transforms ( ft ) of , equation ( [ eq : coro-5 ] ) allows to express the monochromatic post - coronagraphic speckle field as : equation ( [ eq : coro-6 ] ) shows that the intensity of a speckle field scales proportionally to , while its chromatic wavelength scaling comes from the fact that the variable involved in the wavefront is the spatial frequency and not the position in the image plane , i.e. : .\ ] ] this indicates that spatial frequency translates into position according to wavelength , e.g. by applying the standard grating equation as follows : where is the diffraction order , the diffraction angle and is the grating constant corresponding to the spatial frequency , or : the position on the image plane returns : being the focal length of the post - coronagraphic re - imaging optics . 
Using equations ([eq:coro-8]) and ([eq:coro-9]), this position can finally be written as in ([eq:coro-11]). Equation ([eq:coro-11]) indicates that the position of a speckle corresponding to a fixed spatial frequency of the post-coronagraphic wavefront error scales linearly with wavelength. More precisely, this means that at every fixed position in the image plane, speckles corresponding to distinct spatial frequencies appear at distinct wavelengths (Figure [fg:speckle-spectrum]). We call this feature speckle chromatism.
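As a minimal numerical sketch of this chromatic behavior (our own implementation, not part of any instrument design discussed here), the speckle field recorded at one wavelength can be used to predict the field at a nearby wavelength by stretching positions by the wavelength ratio about the star and scaling intensities accordingly:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def rescale_speckles(img, lam_in, lam_out):
    """Predict the speckle pattern at lam_out from an image at lam_in.

    Positions scale as lam_out/lam_in (cf. equation coro-11) and, for
    pure phase-induced speckles, intensities scale as (lam_in/lam_out)**2.
    The stretch is performed about the image center, assumed to be the
    star position.
    """
    s = lam_out / lam_in
    ny, nx = img.shape
    yc, xc = (ny - 1) / 2.0, (nx - 1) / 2.0
    y, x = np.mgrid[0:ny, 0:nx]
    # output pixel (y, x) samples the input at the radially shrunk position
    coords = np.array([(y - yc) / s + yc, (x - xc) / s + xc])
    return map_coordinates(img, coords, order=3, mode='constant', cval=0.0) / s**2
```

Subtracting such a prediction from the image actually recorded at the second wavelength is the chromatic re-scaling step that any S-SDI recipe must perform before differencing.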
In the approach considered in this paper, the fundamental SDI step is the simultaneous acquisition of images at adjacent wavelengths in a spectral range where the planetary and stellar spectra differ appreciably. For ground-based observations, the Y, J, H, and K bands are well suited for extrasolar giant planets and rocky planets. The monochromatic spectral signal corresponding to a fixed angular position on sky can be expressed as the sum of the spectral signal of the star and the spectral signal of a candidate low-mass companion (e.g., an extrasolar planet) lying at this angular position. Fixing a pair of wavelengths inside the window above, the relations in ([eq:sdi-1]) hold. The basic SDI assumption is that, after suitable flux normalization and chromatic re-scaling, analogous relations hold for the boundary wavelengths of the range above, so that the difference between the two images should, in principle, return only the spectral signal appropriate to the low-mass (or extrasolar planet) candidate. However, while working with narrow-band filters several precautions are required: * an image taken with one filter has to be spatially re-scaled before comparing it with an image taken with a different filter, due to the speckle scaling described in Section [sec:speckle]; * any pair of filters separating two adjacent spectral bands should have similar spectral transmission profiles; * the difference between the central wavelengths of two adjacent filters should be as small as possible. The last item is the most critical, because the chromatism of the speckle field always induces a certain amount of phase errors. The residual wavefront distortion can be described through the Fourier transform of the post-coronagraphic wavefront error, or by its relative chromatic phase error. Adopting the standard approximation for the Strehl ratio, this rms wavefront error can be translated into a relative flux variation on the detector plane. In detail, defining the rms chromatic wavefront error, the relation in ([eq:marois-1]) has been found for the flux residual between images taken with two narrow-band filters. Equation ([eq:marois-1]) indicates that with the so-called single difference method the final error is proportional to: * the variance of the wavefront error; * the relative wavelength separation between the narrow-band filters. The need for a calibration technique more efficient than SDI, but still based on the simultaneous difference of chromatic images of the same target field, was addressed theoretically by showing that the speckle noise reduction can be much more efficient if observations at three wavelengths are available (the double difference method), and tested experimentally with the discovery of the first planet obtained by using this calibration technique. Starting from there, it is reasonable to assume that a larger number of images at different wavelengths, taken with a regular spectral step, can result in an even better reduction of the speckle noise: a true S-SDI calibration technique. The gain could be even larger if observations at several wavelengths allowed an accurate derivation of the chromatic wavelength scaling. This thought suggests the use of integral field spectroscopy, collecting data simultaneously at a large number of wavelengths set by the total spectral length and by the spectral resolution of a suitable disperser. Note that such an approach is convenient even in the more conservative case where modeling of the spectral dependence fails, simply because a larger number of wavelength pairs can be constructed. Exploiting an IFU as a field-stop array over an optical plane conjugated with the focal plane of the telescope allows an appropriate sampling of the post-coronagraphic speckle field defined by equation ([eq:coro-6]). The fact that this optical signal has a finite cut-off spatial frequency, proportional to the post-coronagraphic pupil size and inversely proportional to the spectrograph's cut-on wavelength, means that a correct spatial sampling on this plane should be imposed by searching for suitable values of the separation between the adjacent spaxels which compose the adopted IFU. This sampling condition is detailed in Section [sec:-speckles-spatial-sampling]. A sampling criterion based upon the Shannon theorem is mandatory not only at the level of the IFU spaxels but also at the level of the detector pixels; in this case the Shannon sampling condition allows correct interpolation, both spatially and spectrally, of the exit slit functions, which are the final output of an integral field spectrograph. These two sampling conditions are detailed in Sections [sec:spatial-sampling] and [sec:spectral-sampling] respectively.
Let us consider the focal ratio by which the post-coronagraphic speckle field is projected onto the IFU plane. The theory of image formation (e.g., Goodman 1996) then implies that the cut-off spatial frequency appropriate to this signal can be written as a function of this focal ratio and of the wavelength, as in ([eq:spatial-1]). The spaxel size defines the Nyquist spatial frequency on this plane ([eq:spatial-2]); thus, the Shannon sampling theorem applied to the IFU plane returns condition ([eq:spatial-3]). A further condition avoids aliasing effects when interpolating the array of exit slits over the whole range of wavelengths considered by the spectrograph, and may be written with the following formalism. The detector pixel size defines the Nyquist spatial frequency on the detector plane ([eq:super-1]). Once the final spectrograph exit slits are imaged onto the detector pixels through a fixed output focal ratio and a fixed optical magnification, the theory of image formation implies that their spatial cut-off frequency is given by ([eq:super-2]), where the shortest wavelength imaged by the spectrograph enters. We define the super-sampling condition as ([eq:super-3]).

When working with a speckle-pattern data cube, chromatic re-sampling is needed to obtain both monochromatic images and spectra. To this aim, one can adopt a suitable pixel-dependent re-sampling of the speckle field varying with wavelength, or a subtraction algorithm based upon an analytical modeling of the spectral content of a speckle field. In any case, before applying any re-sampling recipe, it is important to find the exact condition allowing aliasing errors due to the speckle chromatism effect to be avoided. Since the speckle pattern scales proportionally to wavelength (Section [sec:speckle]), a feature located at an angular distance θ from the central star at wavelength λ moves spectrally at a rate dθ/dλ = θ/λ. Spatial speckles of angular width λ/D (with D the post-coronagraphic pupil size) therefore translate into spectral speckles of width given by ([eq:hyper-1]); i.e., the spectral extension of speckles is inversely proportional to the distance from the field center. Nyquist sampling of spectral speckles requires a spectral sampling corresponding to half the speckle width, so, for a two-pixel resolving power, Nyquist sampling implies condition ([eq:hyper-2]). This condition is fulfilled within a field angle, referred to as the Nyquist radius, given by ([eq:hyper-3]). We note that it is possible to ensure Nyquist sampling in a system which does not fulfil the super-sampling condition written in equation ([eq:super-3]), as long as its field of view does not exceed the Nyquist radius and as long as the source itself does not contain spectral features which violate the Shannon theorem. For example, an instrument covering a full field of view of a few arc-seconds on an 8-10 m telescope in the near infrared requires a two-pixel resolving power of at least a few tens; a numerical sketch is given below. For systems where a larger field of view or a lower resolving power is required, the super-sampling condition must be fulfilled. In these systems, the zone lying within the Nyquist radius fulfils both equation ([eq:super-3]) and equation ([eq:hyper-2]). We refer to this double fulfilment as hyper-sampling.
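The spectral-sampling relations above can be turned into a small numerical check. The sketch below re-derives the Nyquist radius from the two relations quoted in the text (a speckle of spatial width λ/D at field angle θ spans a spectral width λ²/(Dθ), while a two-pixel resolving power R samples λ/(2R) per spectral pixel); the function names are ours, and the worked numbers (8 m pupil, 1.6 μm, a 3-arcsecond full field) are illustrative stand-ins for the example whose values were lost in extraction.

```python
import numpy as np

RAD2ARCSEC = np.degrees(1.0) * 3600.0   # radians -> arc-seconds

def nyquist_radius_arcsec(R, lam, D):
    """Field radius (arcsec) inside which spectral speckles are Nyquist
    sampled: requiring lam/(2R) <= lam**2/(2*D*theta) gives
    theta_N = R * lam / D."""
    return R * (lam / D) * RAD2ARCSEC

def min_two_pixel_resolving_power(field_radius_arcsec, lam, D):
    """Smallest two-pixel resolving power that keeps the whole field
    inside the Nyquist radius."""
    return field_radius_arcsec / ((lam / D) * RAD2ARCSEC)

# illustrative numbers: 8 m pupil, H band, 3" full field (1.5" radius)
print(min_two_pixel_resolving_power(1.5, 1.6e-6, 8.0))   # ~36
print(nyquist_radius_arcsec(50.0, 1.6e-6, 8.0))          # ~2.1 arcsec
```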
For an integral field spectrograph covering a spectral range fixed between a cut-on and a cut-off wavelength, the hyper-sampling condition is valid over the whole spectral range within the radius given by ([eq:hyper-4]). Hyper-sampling needs a very large number of pixels at the level of the final image plane where the matrix of spectra is acquired by the detector. This issue is particularly important when spectral and spatial information are recorded simultaneously on the detector plane, as for IFS based on the image slicer or on the TIGER concept. The image slicer option is more efficient in terms of detector pixel usage, since no separation between spectra from adjacent spaxels is required in one spatial dimension: assuming a square detector, the number of detector pixels required is simply the product of the number of spaxels and the number of spectral samples. In this concept a two-dimensional field of view is divided by mirrors into strips, which are then re-formatted into a one-dimensional pseudo long slit (see Figure [fg:slicer]); monochromatic exit slits are then obtained downstream by using a standard collimator, disperser, and camera optical system. A potential problem of the image slicer design concerns the non-common path aberrations between adjacent spaxels of the field of view that fall on different slices. However, this concept has proved able to obtain (moderately) high-contrast images from the ground, even without coronagraphic devices and with moderate Strehl ratios. A further examination of an image slicer instrument dedicated to high-contrast diffraction-limited imaging spectroscopy is in progress within the feasibility study for the future E-ELT planet finder facility. On the other hand, non-common path aberrations are expected to be very small in the case of the TIGER-type concept, which uses an IFU based on a matrix of lenses with a fixed lens pitch. In this case the spectra given by individual spaxels must be separated on the detector: for a given separation between adjacent spectra, the required number of detector pixels grows accordingly. The lenslet-based concept then requires a large number of detector pixels. However, the format of image slicer IFS data on the detector is suited to spectra with many spectral elements and a relatively small number of spaxels. These are not typical values for instruments dedicated to planet searches, which generally require short spectra for a large number of spaxels; a sketch of the two pixel budgets is given after this paragraph.
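The two pixel-budget relations, whose explicit expressions were lost in extraction, can be sketched under simple assumptions: a slicer packs spectra with no gaps in one dimension, while a lenslet design must keep each spectrum a few pixels away from its neighbours across dispersion. The functional forms and all the numbers below are hypothetical, planet-finder-like values.

```python
def detector_pixels_slicer(n_spax, n_spec):
    # assumed form: no separation needed between spectra of adjacent spaxels
    return n_spax * n_spec

def detector_pixels_lenslet(n_spax, n_spec, sep):
    # assumed form: each spectrum occupies a band 'sep' pixels wide
    # across the dispersion direction
    return n_spax * n_spec * sep

n_spax, n_spec, sep = 21000, 40, 5   # hypothetical values
print(detector_pixels_slicer(n_spax, n_spec))        # 840000
print(detector_pixels_lenslet(n_spax, n_spec, sep))  # 4200000, ~ one 2k x 2k detector
```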
In order to adequately exploit the detector, the number of slices should be roughly given by the ratio between the number of spaxels and the length of the spectra. This ratio is very large for an integral field spectrograph tuned to planet finding, which would result in an extremely long pseudo-slit. The format of the image slicer IFU then exacerbates the problems related to non-common paths: indeed, photons from adjacent spaxels may follow very different paths through the instrument. It is then difficult to keep the phase errors small, possibly compromising the most demanding high-contrast imaging. Given the difficulties inherent in the image slicer solution, we carefully examined the properties of the lenslet-based design, trying to minimize the separation between adjacent spectra. To this aim, we developed the newly proposed optical concept: BIGRE. The properties of this design are discussed and compared to the TIGER ones starting from Section [sec:bigre].

Adopting the formalism of linear systems theory, any spaxel of an IFU is a sum of linear optical systems. In the specific case of a lenslet-based IFU these systems are the single lenses. The coherent and incoherent parts of the electric field incoming onto these optical linear systems are transmitted differently through two adjacent spaxels. Specifically, when the illumination is coherent, the linear responses of adjacent spaxels vary in unison, and therefore their signals, once transmitted and re-imaged onto the spectrograph's slit plane, must be added in complex amplitude. On the contrary, when the illumination is incoherent, the linear responses of two adjacent spaxels are statistically independent; this means that their signals, once transmitted and re-imaged onto the spectrograph's slit plane, must be added in intensity. Hence, once dispersed and re-imaged by the spectrograph optics, monochromatic slits corresponding to adjacent spaxels suffer from a certain amount of interference. We call this quantity coherent cross-talk. Furthermore, monochromatic slits are affected by a spurious amount of signal due to the adjacent spectra. We call this quantity incoherent cross-talk. With reference to Figure [fg:zoomed-image], coherent cross-talk is the interference signal between monochromatic spectrograph entrance slits corresponding to adjacent lenses, i.e., separated by a distance equal to the IFU lens pitch, while incoherent cross-talk is the spurious signal registered over a fixed monochromatic spectrograph exit slit and due to its closest spectra, even if due to photons of different wavelengths. Incoherent and coherent cross-talk represent a major issue in identifying the best solution for the spaxel shape (circular, square, etc.), for the lenslet lattice configuration (hexagonal, square, etc.), and for the geometric allocation of the spectra at the level of the detector plane. In fact, incoherent and coherent cross-talk are spurious signals not removed by the application of the super- and hyper-sampling criteria: they still affect the final array of spectra, thus damaging the final three-dimensional data cube. The selection of the kind of field unit to be mounted at the entrance of a lenslet-based integral field spectrograph should then depend on the estimate of the level of incoherent and coherent signals over the individual exit slits of such a spectrograph.
Additional considerations should enter into this choice, e.g., the fact that the relevance of the cross-talk terms depends on the wavefront errors after the coronagraph, or that minimization of the cross-talk might result in a system design which is potentially less efficient when observations are limited by photon noise. In general, cross-talk should be specified so that its contribution to the contrast error budget is smaller than the flat-field errors and all the remaining spurious effects affecting the post-coronagraphic speckle field. Basically, coherent cross-talk is the interference of a beam passing through a number of apertures (the individual lenslets) and measured on a screen (the spectrograph's entrance slit plane) conjugated to the detector plane. Let us assume a flat wavefront impinging onto the IFU lenses, and consider the complex electric field of the coherent signal transmitted by a given spaxel onto the spectrograph's entrance slit plane, together with the stray part of the complex electric field of the coherent signal transmitted by an adjacent spaxel, evaluated at the position of the slit corresponding to the first spaxel. These are complex quantities that differ according to the phase difference due to the different optical paths through the different apertures (lenslets). The effective coherent intensity measured on the spectrograph's entrance slit plane at the position of the first spaxel is then given by ([eq:coherent-1]). In the worst case, the phase difference of waves passing through adjacent lenses is zero; in this case, and neglecting the quadratic stray term in the binomial expression of equation ([eq:coherent-1]), the effective coherent intensity proper to the first spaxel becomes ([eq:coherent-2]). CCT is defined as the coherent cross-talk coefficient ([eq:coherent-3]), where the stray coherent intensity of the adjacent spaxel evaluated at the position of the first spaxel is defined as in ([eq:coherent-4]), and the own coherent intensity of the first spaxel is defined as in ([eq:coherent-5]). CCT represents the maximum extra amount of coherent signal on the slit function corresponding to a fixed lenslet aperture, and its estimate can be given by measuring the square root of the coherent intensity proper to the slit function corresponding to the adjacent aperture; a numerical illustration follows. However, the total amount of coherent cross-talk is obtained only by adding the contribution due to all the apertures in the lenslet array.
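Equation ([eq:coherent-2]) can be illustrated numerically: in the worst case (zero phase difference) a stray amplitude from the adjacent slit adds linearly to the own amplitude, so even a tiny stray intensity produces a non-negligible modulation. The sketch below, with our own function name and numbers, evaluates this fractional excess.

```python
import numpy as np

def worst_case_coherent_excess(i_own, i_stray):
    """Fractional intensity excess on a slit in the worst case
    (zero phase difference), from I_eff ~ I_own + 2*sqrt(I_own*I_stray),
    neglecting the quadratic stray term."""
    return 2.0 * np.sqrt(i_stray / i_own)

# a stray intensity of 1e-6 of the own slit intensity still yields a
# 2e-3 coherent modulation: the square root is what makes the coherent
# term far more demanding than the incoherent one
print(worst_case_coherent_excess(1.0, 1.0e-6))   # 0.002
```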
The amount of spurious incoherent light can be evaluated directly on the detector plane, where a single exit slit appears as a spectrum. As indicated in Figure [fg:zoomed-image], any final spectrum is surrounded by several adjacent spectra. Consider the intensity proper to a fixed monochromatic exit slit: due to the presence of an adjacent exit slit, its effective incoherent intensity is given by the sum of its own intensity and of the stray incoherent monochromatic intensity of the adjacent exit slit, evaluated at a distance equal to the separation between the two; the latter defines the monochromatic term of the incoherent cross-talk coefficient (ICT). The incoherent cross-talk coefficient corresponding to the spectrograph's full wavelength range is then defined accordingly. Thus, differently from the coherent case, the incoherent cross-talk must be considered on the detector plane, searching for the spectral alignments for which the distance among adjacent spectra is minimized. Once this spectral alignment is found, an estimate of the ICT can be given by measuring the incoherent intensity of a single monochromatic exit slit at a distance equal to the transversal separation between adjacent spectra. However, the total amount of incoherent cross-talk is obtained only by adding the contribution of all the spectra imaged onto the detector plane.

In the classical TIGER design, optimized for seeing-limited conditions, the spaxels (or microlenses) composing the IFU are much bigger than the Airy disk, therefore providing resolved images of the telescope entrance pupil, which represent the entrance slits of this kind of integral field instrument. Differently, in the case of high-contrast imaging the microlenses sample the telescope image according to the Shannon theorem. Each microlens acts like a diaphragm isolating a portion of the incoming electric field and concentrating it into a micropupil image in the focal plane of the microlens, which acts as the entrance slit function of the spectrograph. The micropupil image is the convolution between the geometrical pupil image and the PSF of the microlens. As seen below, Nyquist sampling of the focal plane implies that the telescope entrance pupil is unresolved by the microlens. For a circular lens, the transmission function is a top-hat with unitary transmission within the lens diameter and zero outside, the image coordinate being normalized to the lens diameter. According to condition ([eq:spatial-3]), the size of the single microlens should satisfy ([eq:tiger-ifu-1]). Following standard diffraction theory, the monochromatic full-width-half-maximum (FWHM) of the PSF proper to a circular microlens with a given focal length is given by ([eq:tiger-ifu-2]), while the geometrical diameter of the micropupil is given by ([eq:tiger-ifu-3]). Combining equations ([eq:tiger-ifu-1]), ([eq:tiger-ifu-2]) and ([eq:tiger-ifu-3]), we obtain ([eq:tiger-ifu-4]): the PSF is therefore at least twice as wide as the geometrical pupil, and so the convolution product is approximately equal to the microlens PSF. Thus, we can state that the field distribution on the spectrograph's slit plane approximates the one proper to an unresolved micropupil, which is described by the jinc function (with J1 the Bessel function of the first kind of order one) corresponding to the microlens aperture, as in ([eq:tiger-ifu-5]), the pupil coordinate being normalized to the micropupil size. Finally, the slit function is the square modulus of this signal ([eq:tiger-ifu-6]). As indicated by equation ([eq:tiger-ifu-6]), the single spectrograph slit is an unbound signal whose size varies linearly with wavelength. The final pixel size defines the spatial Nyquist frequency on the spectrograph image plane according to equation ([eq:super-1]).
due to its un - bound nature , the spatial cut - off frequency of the spectrograph s exit slit gets the finite value fixed by equation ( [ eq : super-2 ] ) . then , following condition ( [ eq : super-3 ] ) , super - sampling imposes a lower limit to the output focal ratio by which the single microlens generates its corresponding micropupil : output focal ratios lower than the one fixed by equation ( [ eq : tiger - ifu-7 ] ) introduce aliasing errors in the sampled spectrum , unless the field is smaller than the nyquist radius . according to condition ( [ eq : hyper-4 ] ) , the latter depends on the post - coronagraphic pupil size , the spectrograph s working wavelengths range and its spectral resolution . hence , true hyper - sampling is obtained when this radius matches the maximum image field radius , which in turn is related to the spectrograph s resolving power . for a fixed resolving power , hyper - sampling is then a matter of allocating the array of final spectra onto the detector pixels , which in turn depends on the accepted cross - talk levels . [ table : independent parameters of a tiger - oriented ifs ] as figure [ fg : tigervsbigre-1 ] indicates , adopting a bi - logarithmic scale , the single bigre slit gets an intensity profile steeper than that of the single tiger slit , both in the case of circular and square shapes . in more detail , the upper envelope of the slit intensity profile proper to a circular tiger - oriented spaxel is a power law with index equal to , while the same quantity for a square tiger - oriented spaxel is a power law with index equal to along the aperture side and with index equal to along its diagonal . on the contrary , the upper envelope of the slit intensity profile proper to a circular bigre - oriented spaxel is not a power law ( only its asymptotic tail is fitted quite well with a power law having index ) ; nor is the same quantity a power law in the case of a square bigre - oriented spaxel ( only its asymptotic tail is fitted quite well with a power law with index in the direction of the aperture side and index along its diagonal ) . the result is that the bigre - oriented circular aperture within a hexagonal lattice configuration allows a superior suppression of coherent and incoherent signals , while the slits generated by a circular tiger - oriented aperture in a hexagonal lattice are similar in this context to the ones generated by a square bigre - oriented aperture in a square lattice . finally , the slits generated by a square tiger - oriented aperture in a square lattice are the worst in terms of coherent and incoherent signal suppression , see figure [ fg : tigervsbigre-2 ] . hence , the contribution of non - adjacent spaxels can be neglected when evaluating the cross - talk signals in the case of a bigre spectrograph , precisely because the power laws that fit , in a bi - logarithmic plot , the intensity distribution of the tiger slit functions do not fit at all the one proper to the bigre slit functions .
on the contrary , the intensity distribution proper to the bigre slit functions can only be approximated with lower - index power laws . thus , what represents only an estimate for a tiger lenslet - array gives , for a bigre lenslet - array , realistic measures of the signals due to the cross - talk of the spectrograph s slit functions . by integral field spectroscopy it is possible to realize the s - sdi calibration technique in the way proposed by , and at least in a few cases to get the spectrum of candidate extrasolar giant planets adopting suitable spectral de - convolution recipes , such as the one proposed by . however , these techniques can increase the contrast performances only when several sampling conditions , both in the spatial and in the spectral domain of the speckle field , are satisfied . in this context , our effort has been to discuss in general terms the critical sampling conditions needed to deal with a speckle field data cube before applying to it the s - sdi calibration technique or any spectral de - convolution recipe . to this purpose , we evaluated the impact of the cross - talk as a function of various parameters of a lenslet - based integral field spectrograph , especially when trying to minimize the number of detector pixels ( which is an issue in general for ifs ) under strong specifications , such as the ones requested for high - contrast imaging . for this reason we conceived a new optical scheme we named bigre and characterized it in the specific case of the ifs channel foreseen inside sphere , showing that a bigre - oriented spectrograph is conceptually feasible with standard dioptric optical devices . once applied to the technical specifications of this instrument , a bigre integral field unit is able to take into account the effects appearing when a lenslet - array is used in diffraction - limited conditions . specifically , we proved here that coherent and incoherent cross - talk coefficients reach values lower than for a tiger ifu when applied to the same optical frame . more generally , the comparison between the bigre and the tiger spaxel concept has been pursued in terms of coherent and incoherent cross - talk suppression , adopting a common size for the single aperture and a fixed monochromatic wavelength for the wavefront propagation . in the ideal case of uniform illumination with an un - resolved entrance pupil , the circular bigre spaxel within a hexagonal ifu lattice configuration proves to be the optimal solution among the ones we investigated . the authors thank roberto ragazzoni for the support he gave them in the development of this subject , from the primeval cheops project to sphere . jacopo antichi personally thanks bernard delabre for a dedicated work session at eso - garching in april 2007 , devoted to the final design optimization of the bigre - oriented spectrograph to be mounted in sphere , and christophe vrinaud for his advice during the completion of the manuscript . jacopo antichi is supported by laog through the european seventh framework programme infra-2007 - 2.2.1.28 . beuzit , j .- l . , feldt , m. , dohlen , k. , mouillet , d. , puget , p. , wildi , f. , abe , l. , antichi , j. , charton , j. , claudi , r. , downing , m. , fabron , c. , feautrier , p. , fedrigo , e. , fusco , t. , gach , j .- l . , gratton , r. g. , henning , t. , hubin , n. , joos , f. , kasper , m. e. , langlois , m. , lenzen , r. , moutou , c. , pavlov , a. , petit , c. , pragt , j. , rabou , p. , rigal , f. , roelfsema , r. , rousset , g. , saisse , m. , schmid , h. m. , stadler , e.
, thalmann , c. , turatto , m. , udry , s. , vakili , f. , waters , r. 2008 , proceeding spie , 7014e .. 41b langrange , a .- m . ,gratadour , d. , chauvin , g. , fusco , t. , ehrenreich , d. , mouillet , d. , rousset , g. , rouan , d. , allard , f. , gendron , ., charton , j. , mugnier , l. , rabou , p. , montri , j. , lacombe , f. 2008 , arxiv0811.3583l kasper , m. e. , beuzit , j .- l . ,vrinaud , c. , yaitskova , n. , baudoz , p. , boccaletti , a. , gratton , r. g. , hubin , n. , kerber , f. , roelfsema , r. , schmid , h. m. , thatte , n .- a . , dohlen , k. , feldt , m. , venema , l. , wolf , s. 2008 proceeding spie , 7014e .. 46k macintosh , b .- a . , graham , j .-, palmer , d .- w . ,doyon , r. , dunn , j. , gavel , d .-, larkin , j. , oppenheimer , b. , saddlemyer , l. , sivaramakrishnan , a. , marois , c. , pyoneer , l - a . , soummer , r. 2008 , proceeding spie , 7015e .. 31 m vrinaud , c. , korkiakoski , v. , martinez , p. , kasper , m. e. , beuzit , j .- l . , abe , l. , baudoz , p. , boccaletti , a. , dohlen , k. , gratton , r. g. , mesa , d. , kerber , f. , schmid , h. m. , venema , l. , slater , g. , tezca , m. , thatte , n. 2008 proceeding spie , 7014e .. 52v
|
integral field spectroscopy ( ifs ) represents a powerful technique for the detection and characterization of extrasolar planets through high contrast imaging , since it allows one to obtain a large number of monochromatic images simultaneously . these can be used to calibrate and then to reduce the impact of speckles , once their chromatic dependence is taken into account . the main concern in designing integral field spectrographs for high contrast imaging is the impact of the diffraction effects and the non - common path aberrations , together with an efficient use of the detector pixels . we focus our attention on integral field spectrographs based on lenslet - arrays , discussing the main features of these designs : the conditions of appropriate spatial and spectral sampling of the resulting spectrograph s slit functions and their related cross - talk terms when the system works at the diffraction limit . we present a new scheme for the integral field unit ( ifu ) based on a dual - lenslet device ( bigre ) , which solves some of the problems related to the classical tiger design when used for such applications . we show that bigre provides much lower cross - talk signals than tiger , allowing a more efficient use of the detector pixels and a considerable saving in the overall cost of a lenslet - based integral field spectrograph .
|
one of the interesting problems in nonlinear control systems is the synthesis of control laws that achieve stability .control lyapunov functions ( clfs ) represent a powerful tool for providing a solution to this problem .the classical approach is based on the _ off - line _ design of an explicit feedback law that renders the derivative of the clf negative .an alternative to this approach is to construct an optimization problem to be solved _ on - line _ , such that any of its feasible solutions renders the derivative of a candidate clf negative .this method can be traced back to the early results presented in , followed by the more recent articles , where synthesis of clfs is performed in a receding horizon fashion .all the above works mainly deal with the continuous - time case , while conditions under which these results can be extended to sampled - data nonlinear systems using their approximate discrete - time models can be found in .an important article on control lyapunov functions for discrete - time systems is .therein , classical continuous - time results regarding existence of clfs are reproduced for the discrete - time case . a significant relaxation in the _ off - line _ design of clfs for discrete - time systems was presented in , where parameter dependent quadratic clfs are introduced . also, interesting approaches to the off - line construction of lyapunov functions for stability analysis were recently presented in , and . despite the popularity of clfs within systems theorythere is still a significant gap in the application of clfs in real - time control in general , and control of fast systems ( i.e. systems with a very small sampling interval ) in particular .the main reason for this is conservativeness of the sufficient conditions for lyapunov asymptotic stability which are employed by most off - line and on - line methods for constructing clfs .classically , a clf enforces that the resulting closed - loop state trajectory is contained within a cone with a fixed , predefined shape , which is centered at and converges to a desired converging point .this cone is obtained by characterizing the evolution of the state via its state - space position at each discrete - time instant , with respect to a corresponding sublevel set of the lyapunov function .typical examples of relevant classes of systems for which classical clfs are overconservative are linear and nonlinear chains of integrators with bounded inputs and state constraints and discontinuous nonlinear and hybrid systems .furthermore , in many real - life control problems classical clfs prove to be overconservative .for example , consider the control of a simple electric circuit , such as the buck - boost dc - dc converter . at start - up , to drive the output voltage to the reference very fast , the inductor current must rise and stay far away ( e.g. , 5[a ] ) from its corresponding steady - state value ( e.g. , 0.01[a ] ) for quite some time .another typical and very relevant real - life example is control of position and speed in mechatronic devices , such as electromagnetic actuators . for a given position reference, the speed must increase very fast at start - up and then return to its steady state value , which is equal to zero . in both cases enforcing a classical clf designis obviously conservative . motivated by such examples , recently , in , a methodology that reduces the conservatism of clf design for discrete - time nonlinear systems was proposed . rather than searching for a global clf ( i.e. 
on the whole admissible state - space ) ,therein the focus was on relaxing clf - type conditions for a predetermined local clf through on - line optimization problems , as it is graphically illustrated in figure [ fig2 ] .the goal of this overview is to highlight the potential of flexible clfs for real - time control of fast mechatronic systems , with sampling periods below one millisecond , which are widely used in aerospace and automotive applications .this includes control of electro - magnetic actuators and a real - time application to the control of dc - dc power converters .this research is supported by the veni grant `` flexible lyapunov functions for real - time control '' , grant number 10230 , awarded by stw ( dutch science foundation ) and nwo ( the netherlands organization for scientific research ) .
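to make the on - line relaxation idea concrete , the sketch below solves , at each sampling instant , a small convex program that enforces a quadratic clf decrease up to a penalized slack variable . this is only a schematic rendering of the flexible - clf concept , not the exact formulation of the cited works ; the double - integrator model , the matrix p , the contraction rate rho and the penalty lam are all illustrative assumptions .

```python
import numpy as np
import cvxpy as cp

# double integrator sampled at Ts = 0.01 s (an illustrative fast system)
Ts = 0.01
A = np.array([[1.0, Ts], [0.0, 1.0]])
B = np.array([[0.5 * Ts**2], [Ts]])
P = np.array([[2.0, 0.5], [0.5, 1.0]])   # candidate quadratic CLF V(x) = x'Px
rho = 0.95                               # desired contraction rate

def flexible_clf_control(x, lam=1e3):
    """One-step controller: enforce V(Ax+Bu) <= rho*V(x) + s, with a
    penalized slack s >= 0 relaxing the classical CLF condition; the
    slack keeps the program feasible even under the input constraint."""
    u = cp.Variable(1)
    s = cp.Variable(nonneg=True)
    x_next = A @ x + B @ u
    V_now = float(x @ P @ x)
    constraints = [cp.quad_form(x_next, P) <= rho * V_now + s,
                   cp.abs(u) <= 1.0]      # input constraint
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u) + lam * s), constraints)
    prob.solve()
    return u.value, s.value

x = np.array([1.0, 0.0])
for k in range(5):
    u, s = flexible_clf_control(x)
    x = A @ x + B @ u
    print(k, x, float(x @ P @ x), s)
```

whenever the slack comes out zero , the classical clf decrease holds at that step ; a nonzero slack is exactly the on - line measured non - monotonicity that the flexible formulation trades off against control effort .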
|
the property that every control system should possess is stability , which translates into safety in real - life applications . a central tool in systems theory for synthesizing control laws that achieve stability is the control lyapunov function ( clf ) . classically , a clf enforces that the resulting closed - loop state trajectory is contained within a cone with a fixed , predefined shape , and which is centered at and converges to a desired converging point . however , such a requirement often proves to be overconservative , which is why most real - time controllers do not have a stability guarantee . recently , a novel idea that improves the design of clfs in terms of flexibility was proposed . the focus of this new approach is on the design of optimization problems that allow certain parameters that define a cone associated with a standard clf to be decision variables . in this way non - monotonicity of the clf is explicitly linked with a decision variable that can be optimized on - line . conservativeness is significantly reduced compared to classical clfs , which makes _ flexible clfs _ more suitable for stabilization of constrained discrete - time nonlinear systems and real - time control . the purpose of this overview is to highlight the potential of flexible clfs for real - time control of fast mechatronic systems , with sampling periods below one millisecond , which are widely employed in aerospace and automotive applications .
|
digital backpropagation ( dbp ) is one of the most studied strategies to counteract nonlinearities by channel inversion . due to both technological and practical reasons , dbp is typically used only to compensate for intra - channel nonlinearity . though most effective for combating nonlinear signal - signal interactions , dbp is also a key element for the realization of nearly optimum detectors ( accounting also for signal - noise interaction ) when combined with a viterbi processor for maximum likelihood sequence detection , or applied to particle filtering for stochastic backpropagation . in practice , dbp is implemented through the split - step fourier method ( ssfm ) , probably the most efficient numerical method known to simulate fiber - optic propagation . the ssfm allows for a complexity versus accuracy trade - off by adjusting the number of steps and takes advantage of the high computational efficiency of the fast fourier transform algorithm . nevertheless , the computational complexity , latency , and power consumption required by dbp in typical long - haul systems are significantly higher than those required by other digital signal processing blocks ( e.g. , linear equalizers ) and still pose some difficulties for its implementation in a real - time digital receiver . several approaches have been proposed to obtain good trade - offs between complexity and performance . based on heuristic approaches , some modifications of the ssfm algorithm have also been proposed to reduce complexity without sacrificing accuracy . however , some approximate solutions of the nonlinear schrödinger equation are available in the literature and could be exploited for improving the ssfm algorithm . while the volterra series and regular perturbation approaches describe the nonlinearity as an additive perturbation and do not appear to be suitable , the logarithmic approximation gives an expression which is closer to that used in the ssfm method for approximating the nonlinear propagation in a piece of fiber , but more accurate . based on the logarithmic perturbation technique , an enhanced ssfm ( essfm ) has been recently proposed and , through numerical simulations , was shown to have an order of magnitude lower complexity compared to the standard ssfm for a prescribed accuracy . in this work , we extend the essfm algorithm to account for the propagation of a polarization - multiplexed signal and experimentally demonstrate its effectiveness for the implementation of dbp within a coherent optical receiver . in particular , we compare the ssfm and essfm algorithms to backpropagate a 112gb / s pm - qpsk signal through a 3200 km dispersion - unmanaged link , showing that the essfm provides a significant reduction of complexity , latency , and power consumption . the paper is organized as follows . in section ii , we describe the essfm algorithm . in section iii , we investigate the computational complexity , computational time ( latency ) , and power consumption of the proposed essfm algorithm and compare them to those of the conventional ssfm and of a simple feed - forward equalizer ( ffe ) for dispersion compensation .
in section iv , we show the experimental results and the actual improvements obtained by employing the essfm . finally , in section v , we draw the conclusions . the propagation of a single - polarization optical signal through a fiber - optic link in the presence of chromatic dispersion , kerr nonlinearity , and attenuation is governed by the nonlinear schrödinger equation , which can be numerically solved by means of the ssfm algorithm . according to the ssfm , the link is divided into small segments ( steps ) . each step is further divided into two sub - steps : a linear sub - step , accounting for chromatic dispersion , and a nonlinear sub - step , accounting for a nonlinear phase rotation proportional to the signal intensity ( kerr nonlinearity ) . when considering polarization - multiplexed signals , the nonlinear schrödinger equation is replaced by the manakov equation . in this case , the ssfm can still be employed by modifying the nonlinear sub - step to account for a nonlinear phase rotation on each polarization that is proportional to the overall signal intensity on both polarizations . in both cases , as processing for the linear and nonlinear sub - steps takes place in the frequency and time domains , respectively , direct and inverse ffts are used at each step to switch between the time and frequency representations of the signal . in practice , the propagation of a block of vector samples ( where each vector collects the -th samples of the two signal polarizations ) through a generic step of length , with dispersion coefficient , nonlinear coefficient , and attenuation coefficient , entails performing the following four operations : _ ( i ) _ computation of the frequency components through a pair of ffts ( one per each polarization ) ; _ ( ii ) _ computation of the linear sub - step where is the frequency of the -th component ; _ ( iii ) _ computation of the time components through a pair of inverse ffts ; _ ( iv ) _ computation of the nonlinear sub - step being the effective length . the output sequence becomes , in turn , the input to the next fiber segment and so on , until the end of the link is reached . the overall complexity of the ssfm algorithm is mainly driven by the required ffts and can be reduced by employing the essfm algorithm , which achieves the same accuracy as the ssfm with a lower number of steps . the main idea behind the essfm is that of keeping the ssfm approach but modifying the nonlinear sub - step ( [ eq : passo_nonlineare_ssfm ] ) to account also for the interaction between dispersion and nonlinearity along . in this way , can be increased ( and , consequently , decreased ) without affecting the overall accuracy . of course , the overall complexity is reduced only if the new term is less costly than the spared ffts . a more accurate expression for the nonlinear sub - step is provided by the frequency - resolved logarithmic perturbation ( frlp ) method . in particular , it can be shown that in the nonlinear step the signal undergoes a nonlinear phase rotation that depends on a quadratic form of the signal samples .
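as a point of reference for the modified sub - step derived next , here is a minimal sketch of one standard split step for the manakov ( dual - polarization ) case ; sign conventions , the 8/9 manakov factor and the illustrative fiber parameters are assumptions to be checked against the paper s equations , which were stripped from this extraction , and backpropagation is obtained by flipping the signs of the dispersion and nonlinear coefficients .

```python
import numpy as np

def ssfm_step(ax, ay, dt, h, beta2, gamma_nl, alpha):
    """One split step over a segment of length h [m] for the two
    polarization components ax, ay sampled every dt [s]."""
    w = 2.0 * np.pi * np.fft.fftfreq(ax.size, d=dt)

    # (i)-(iii): linear sub-step (chromatic dispersion) in frequency domain
    lin = np.exp(0.5j * beta2 * w**2 * h)
    ax = np.fft.ifft(np.fft.fft(ax) * lin)
    ay = np.fft.ifft(np.fft.fft(ay) * lin)

    # (iv): Manakov nonlinear sub-step -- a common phase rotation driven
    # by the total instantaneous power on both polarizations (the 8/9
    # factor is the usual Manakov average, an assumed convention here)
    h_eff = (1.0 - np.exp(-alpha * h)) / alpha       # effective length
    power = np.abs(ax)**2 + np.abs(ay)**2
    rot = np.exp(1j * (8.0 / 9.0) * gamma_nl * h_eff * power)
    return ax * rot, ay * rot

# usage sketch with illustrative parameters (forward propagation; DBP
# would use -beta2 and -gamma_nl and iterate over all steps of the link)
t = (np.arange(2048) - 1024) * 1e-11
ax = np.exp(-(t / 2e-10) ** 2).astype(complex)
ay = np.zeros_like(ax)
ax, ay = ssfm_step(ax, ay, dt=1e-11, h=80e3,
                   beta2=-21.7e-27, gamma_nl=1.3e-3, alpha=4.6e-5)
```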
truncating the channel memory to the first past and future samples , retaining only the diagonal terms of the quadratic form , and averaging the frlp coefficients over the signal bandwidth results in the modified nonlinear sub - step proposed in ( for a single - polarization signal ) which , in analogy to the ssfm case , is simply extended to a polarization - multiplexed signal by considering a nonlinear phase rotation on each polarization that is the sum of the phase rotations induced by each polarization . the enhanced nonlinear sub - step is thus expressed as where are real coefficients . formally , ( [ eq : passo_nonlineare_essfm ] ) is equal to the nonlinear sub - step proposed in , but replacing scalar samples with vector samples . we also note that ( [ eq : passo_nonlineare_essfm ] ) is similar to the nonlinear sub - step proposed in . however , the coefficient values obtained through a logarithmic - perturbation analysis or numerical optimization , as discussed in , may be significantly different from the low - pass filter coefficients employed in , thus providing a different performance . the hardest challenge for a real - time implementation of dbp is keeping its complexity , latency , and power consumption within feasible values . though an accurate analysis of the computational complexity , latency , and power consumption for a real - time implementation of the ssfm and essfm algorithms is beyond the scope of this work , since it depends on the actual implementation of the fft and of the exponential operation , on the employed hardware , on the sampling rate , on the adopted precision , and so on , here we want to show that the number of steps is a reasonable figure of merit to compare the two algorithms and to provide a rough , yet meaningful , indication about their complexity , latency , and power consumption . when processing a long sequence of samples through the ssfm or essfm algorithms , as required for instance when implementing dbp in a fiber - optic transmission system , the overlap - and - save technique is typically employed . the input sequence of samples is divided into several overlapping blocks which are separately processed , and the output sequence is then reconstructed by discarding the overlapping samples . the number of overlapping samples should be at least equal to the overall memory of the fiber - optic channel which , for dispersion - uncompensated links , can be approximated as , where is the fiber dispersion parameter , the link length , and the signal bandwidth ( assumed equal to the sampling rate ) , while the block length should be optimized to minimize the computational cost per propagated sample . the propagation of each block of samples through each step of fiber requires : the computation of four ffts ( a pair of direct and inverse ffts per polarization ) of complex samples ( about real multiplications and real additions , assuming that the required coefficients are precalculated and that the complex exponential in ( [ eq : passo_nonlineare_essfm ] ) is evaluated by using a lookup table )
; the computation of the linear sub - step ( [ eq : passo_lineare ] ) ( real multiplications and real additions ) ; the computation of the nonlinear sub - step ( [ eq : passo_nonlineare_essfm ] ) , which in turn requires the computation of squared moduli ( real multiplications and real additions ) , their linear combination ( real multiplications and real additions ) , and the nonlinear phase shift rotation ( real multiplications and real additions , neglecting the cost of the complex exponential ) . overall , considering that samples out of are discarded by the overlap - and - save algorithm , the essfm algorithm requires real multiplications and real additions per step per received sample . the complexity of the ssfm is exactly the same , with . it is useful to make a comparison with the complexity of a linear feed - forward equalizer ( ffe ) for bulk dispersion compensation . this is typically implemented in the frequency domain and is practically equivalent to a single step of the ssfm , in which only the linear sub - step is considered : two parallel direct ffts ( one per polarization ) of complex samples , the linear sub - step ( [ eq : passo_lineare ] ) , and two parallel inverse ffts . overall , the ffe requires real multiplications and real additions per received sample . [ figure [ fig1 ] : computational complexity per step as a function of the block length for a channel memory of samples and different algorithms . ] given the memory of the channel , the block length can be optimized to minimize the complexity ( e.g. , the number of additions and/or multiplications ) . as an example , considering a memory of samples , due , for instance , to the propagation of a 50 ghz signal through about 3000 km of standard single - mode fiber , fig . [ fig1 ] shows the number of real multiplications required by the ffe , by _ one step _ of the ssfm , and by _ one step _ of the essfm ( with ) per each processed sample as a function of the ratio . as is clear also from the expressions provided above , the optimum ratio depends on the considered algorithm and on the value of ( and also on the value of , assumed fixed in fig . [ fig1 ] ) . however , it can be observed that by setting , one obtains nearly minimum complexity in all the considered cases . lower values of would reduce latency , but at the expense of a significantly higher complexity . on the other hand , higher values of would only slightly reduce complexity , but at the expense of a higher latency . a similar result is obtained also when considering the number of real additions and different values of and ( within a reasonable range of practical interest ) . therefore , in the following , we will always consider . [ figure [ fig2 ] : computational complexity per step as a function of the link length for a block length and different algorithms . ] given this choice , it is interesting to compare the complexity of the various algorithms and see how it changes with link length .
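the operation counts discussed above ( whose explicit expressions were stripped from this extraction ) can be tabulated with a rough model ; in the sketch below the per - fft cost model , the channel memory of 512 samples and the block - length ratio of 4 are all assumptions , so only the relative trends between the algorithms are meaningful .

```python
import numpy as np

def dbp_real_mults_per_sample(n_steps, n_fft, n_mem, n_c=0, fft_mult=4.0):
    # assumed generic model: fft_mult*n*log2(n) real mults per complex FFT
    ffts = 4.0 * fft_mult * n_fft * np.log2(n_fft)    # 2 direct + 2 inverse
    linear = 4.0 * n_fft                              # one cmul per polarization
    nonlin = n_fft * (2.0 + (2.0 * n_c + 1.0) + 4.0)  # |.|^2, combination, rotation
    return n_steps * (ffts + linear + nonlin) / (n_fft - n_mem)

n_mem = 512            # assumed channel memory, in samples
n_fft = 4 * n_mem      # assumed near-optimal block-length ratio
ffe = (4.0 * 4.0 * n_fft * np.log2(n_fft) + 4.0 * n_fft) / (n_fft - n_mem)
print("ffe           ", round(ffe))
print("ssfm, 20 steps", round(dbp_real_mults_per_sample(20, n_fft, n_mem)))
print("essfm, 1 step ", round(dbp_real_mults_per_sample(1, n_fft, n_mem, n_c=16)))
```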
fig .[ fig2 ] reports the number of real multiplications per processed sample for the ffe , ssfm , and essfm with different values of as a function of the link length .a standard single - mode fiber ( {ps^{2}/km} ] are applied to the in - phase ( i ) and quadrature ( q ) port of the modulator to obtain a 56gb / s qpsk optical signal .polarization multiplexing is finally emulated through a 50/50 beam splitter , an optical delay , and a polarization beam combiner ( pbc ) , obtaining a 112gb / s pm - qpsk optical signal .a recirculating loop is used to emulate transmission over long distances .the loop is composed by two spans of 40 km of standard single - mode fiber , each one followed by an erbium - doped fiber amplifier ( edfa ) . a gain equalization filter ( gef )is used to equalize the distortions due to the amplifier gain profile and a polarization scrambler ( pol - s ) is included in the loop to emulate random polarization rotations along the link . at the receiver ,the optical signal is detected by employing coherent phase- and polarization - diversity detection and setting the local oscillator ( lo ) at the same nominal wavelength of the transmitter tls ( with {ghz}$ ] accuracy ) .the received optical signal is mixed with the lo through a polarization - diversity 90 hybrid optical coupler , whose outputs are sent to four couples of balanced photodiodes .the four photodetected signals are sampled and digitized through a 20ghz 50gsa / s real - time oscilloscope in separate blocks of one million samples at a time .each block of samples is processed off - line according to the scheme of fig.[fig3 ] .bulk dispersion compensation ( with a frequency - domain ffe ) or dbp based on the ssfm or essfm algorithm is performed on signal samples taken at the original sampling rate ( about 1.8 sample per symbol ) .then , after digital resampling at two samples per symbol , a butterfly equalizer is employed to adaptively compensate for polarization mode dispersion and residual chromatic dispersion . finally , asynchronous detection is employed ( at symbol rate ) to account for phase noise and a possible frequency offset and to make decisions as in .the first 100000 received samples are used to optimize the essfm coefficients , while bit - error rate ( ber ) is measured on the remaining samples .the essfm coefficients are optimized by using the output of the ssfm algorithm with multiple steps / span as a target and minimizing the mean square error ( mse ) with respect to it .ber values are finally obtained by averaging over 5 different blocks of samples . ]the performance and complexity of the ssfm and essfm algorithm are compared at a transmission distance of 3200 km , at which the system operates with a ber above an arbitrary prescribed threshold of without dbp ( with ffe only ) .the ber versus launch power obtained without dbp ( replaced by the ffe for dispersion compensation ) , with the ssfm , and with the essfm algorithm is shown in fig.[fig4 ] . at this distance ,a channel memory samples and a nearly optimal fft size are taken .different number of steps for the ssfm and essfm algorithms are considered . for the essfm algorithm , is selected to provide a good trade - off between performance and complexity .for the system without dbp , the minimum ber is obtained at a launch power of -1dbm and is higher than the prescribed threshold . when including dbp based on the standard ssfm algorithm , at least 20 steps ( one step each four spans ) are required to obtain ( at a launch power of 0dbm ) . 
on the other hand ,when the essfm algorithm is used to implement dbp , the prescribed ber can be achieved ( already at -1dbm of launch power ) with just a single step for the whole link and coefficients .the total number of real multiplications per received sample is 128 for the ffe , 179 for the essfm , 2286 for the 16-step ssfm , and 2857 for the 20-step ssfm .the relative complexity ( and power consumption ) of the various algorithms is shown in the inset of fig.[fig4 ] , taking the 20-step ssfm as a reference . by employing the essfm ,the overall complexity and power consumption are reduced by a factor of 16 with respect to a conventional ssfm with the same performance , and latency by a factor of 20 .low - complexity dbp based on the essfm algorithm has been experimentally demonstrated by backpropagating a 112gb / s pm - qpsk signal through a 3200 km dispersion - unmanaged link .a target ber of has been achieved with a single dbp step , with a 16 times lower complexity and 20 times lower latency than conventional dbp .this means that essfm allows for complexity , latency , and power - consumption comparable with those required by standard feedforward equalization for chromatic dispersion compensation .this work was supported in part by the italian miur under the firb project cotone and by the eu fp-7 gant project coffee .n. v. irukulapati , d. marsella , p. johannisson , m. secondini , h. wymeersch , e. agrell , and e. forestieri , `` on maximum likelihood sequence detectors for single - channel coherent optical communications , '' in _european conf . on optical commun .( ecoc ) _ , p. p.3.19 , 2014 .t. r. taha and m. j. ablowitz , `` analytical and numerical aspects of certain nonlinear evolution equation , ii , numerical , nonlinear schroedinger equation , '' _ j. computat . phys ._ , vol . 5 , pp .203 230 , 1984 .r. asif , c .- y .lin , m. holtmannspoetter , and b. schmauss , `` optimized digital backward propagation for phase modulated signals in mixed - optical fiber transmission link , '' _ optics express _ , vol .18 , no .22 , pp . 2279622807 , 2010 .l. b. du and a. j. lowery , `` improved single channel backpropagation for intra - channel fiber nonlinearity compensation in long - haul optical communication systems , '' _ optics express _ , vol .18 , no .16 , pp . 1707517088 , 2010 .l. li , z. tao , l. dou , w. yan , s. oda , t. tanimura , t. hoshida , and j. c. rasmussen , `` implementation efficient nonlinear equalizer based on correlated digital backpropagation , '' in _ opt .fiber commun .( ofc ) _ , p. oww3 , 2011 .e. ip , n. bai , and t. wang , `` complexity versus performance tradeoff for fiber nonlinearity compensation using frequency - shaped , multi - subband backpropagation , '' in _ opt .fiber commun .( ofc ) _ , p. othf4 , 2011 .d. rafique , m. mussolin , m. forzati , j. mrtensson , m. n. chugtai , and a. d. ellis , `` compensation of intra - channel nonlinear fibre impairments using simplified digital back - propagation algorithm , '' _ optics express _ , vol .19 , no . 10 , pp . 94539460 , 2011 .m. secondini , e. forestieri , and g. prati , `` achievable information rate in nonlinear wdm fiber - optic systems with arbitrary modulation formats and dispersion maps , '' _ j. lightwave technol ._ , vol . 31 , no . 23 , pp . 38393852 , 2013 . m. kuschnerov , f. n. hauske , k. piyawanno , b. spinnler , m. s. alfiad , a. napoli , and b. lankl , `` dsp for coherent single - carrier receivers , '' _ j. lightwave technol ._ , vol . 27 , pp .36143622 , 15 aug . 2009 .f. 
cugini , f. paolucci , g. meloni , g. berrettini , m. secondini , f. fresi , n. sambo , l. poti , and p. castoldi , `` push - pull defragmentation without traffic disruption in flexible grid optical networks , '' _ j. lightwave technol ._ , vol . 31 , no . 1 ,pp . 125133 , 2013 .
|
enhanced - ssfm digital backpropagation ( dbp ) is experimentally demonstrated and compared to conventional dbp . a 112gb / s pm - qpsk signal is transmitted over a 3200 km dispersion - unmanaged link . the intradyne coherent receiver includes single - step digital backpropagation based on the enhanced - ssfm algorithm . in comparison , conventional dbp requires twenty steps to achieve the same performance . an analysis of the computational complexity and structure of the two algorithms reveals that the overall complexity and power consumption of dbp are reduced by a factor of 16 with respect to a conventional implementation , while the computation time is reduced by a factor of 20 . as a result , the proposed algorithm enables a practical and effective implementation of dbp in real - time optical receivers , with only a moderate increase of the computational complexity , power consumption , and latency with respect to a simple feed - forward equalizer for dispersion compensation . fiber - optic systems ; fiber nonlinearity ; digital backpropagation
|
low - rank matrix approximation is an important ingredient of modern machine learning methods . numerous learning tasks rely on multiplication and inversion of matrices , operations that scale cubically in the number of data points , and therefore quickly become a bottleneck for large data . in such cases , low - rank matrix approximations promise speedups with a tolerable loss in accuracy . a notable instance is the _ nystrm method _ , which takes a positive semidefinite matrix as input , selects from it a small subset of columns , and constructs the approximation . the matrix is then used in place of , which can decrease runtimes from to , a huge saving ( since typically ) . since its introduction into machine learning , the nystrm method has been applied to a wide spectrum of problems , including kernel ica , kernel and spectral methods in computer vision , manifold learning , regularization , and efficient approximate sampling . recent work shows risk bounds for nystrm applied to various kernel methods . the most important step of the nystrm method is the selection of the subset , the so - called _ landmarks _ . this choice governs the approximation error and subsequent performance of the approximated learning methods . the most basic strategy is to sample landmarks uniformly at random . more sophisticated non - uniform selection strategies include deterministic greedy schemes , incomplete cholesky decomposition , sampling with probabilities proportional to diagonal values or to column norms , sampling based on leverage scores , via k - means , or using submatrix determinants . we study landmark selection using _ determinantal point processes ( dpp ) _ , discrete probability models that allow tractable sampling of diverse non - independent subsets . our work generalizes the determinant - based scheme of . we refer to our scheme as dpp - nystrm , and analyze it from several perspectives . a key quantity in our analysis is the error of the nystrm approximation . suppose is the target rank ; then for selecting landmarks , nystrm s error is typically measured using the frobenius or spectral norm relative to the best achievable error via rank- svd ; i.e. , we measure this ratio . several authors also use additive instead of relative bounds . however , such bounds are very sensitive to scaling , and become loose even if a single entry of the matrix is large . thus , we focus on the above relative error bounds . first , we analyze this approximation error . previous analyses fix a cardinality ; we allow the general case of selecting columns . our relative error bounds rely on the properties of characteristic polynomials . empirically , dpp - nystrm obtains approximations competitive with state - of - the - art methods . second , we consider its impact on kernel methods . specifically , we address the impact of nystrm - based kernel approximations on kernel ridge regression . this task has been noted as the main application in . we show risk bounds of dpp - nystrm that hold in expectation . empirically , it achieves the best performance among competing methods . third , we consider the efficiency of dpp - nystrm ; specifically , its tradeoff between error and running time . since its proposal , determinantal sampling has so far not been used widely in practice due to valid concerns about its scalability . we consider a gibbs sampler for -dpp , and analyze its mixing time using a _ path coupling _ argument .
we prove that under certain conditions the chain is fast mixing , which implies a _ linear _ running time for dpp sampling of landmarks . empirical results indicate that the chain yields favorable results within a small number of iterations , and the best efficiency - accuracy tradeoffs compared to state - of - the - art methods ( figure [ fig : tradeoff ] ) . throughout , we are approximating a given positive semidefinite ( psd ) matrix with eigendecomposition and eigenvalues . we use for the -th row and for the -th column , and , likewise , for the rows of and for the columns of indexed by ] is the best rank- approximation to in both frobenius and spectral norm . we write for the rank and for the pseudoinverse , and denote a decomposition of by , where . * the nystrm method . * the _ standard nystrm _ method selects a subset ] is proportional to , that is , when conditioning on a fixed cardinality , one obtains a -dpp . to avoid confusion with the target rank , and since we use cardinality , we will refer to this distribution as -dpp , and note that where is the -th coefficient of the characteristic polynomial . sampling from a ( -)dpp can be done in polynomial time , but requires a full eigendecomposition of , which is prohibitive for large . a number of approaches have been proposed for more efficient sampling . we follow an alternative approach based on gibbs sampling and show that it can offer fast polynomial - time dpp sampling and nystrm approximations . next , we consider sampling landmarks ] of . using that and applying cauchy - schwarz yields = \mathbb{e}_c \left[\|b^\top ( u^c)^\perp ( ( u^c)^\perp)^\top b\|_f\right]\\ & = \mathbb{e}_c \left[\sqrt{{\sum\nolimits}_{i , j } ( b_i^\top ( u^c)^\perp ( ( u^c)^\perp)^\top b_j)^2}\right]\le \mathbb{e}_c \left[\sqrt{({\sum\nolimits}_{i , j } \|b_i^\top ( u^c)^\perp\|_2 ^ 2 \|b_j^\top ( u^c)^\perp\|_2 ^ 2)}\right]\\ & = \mathbb{e}_c \left[{\sum\nolimits}_i \|b_i^\top ( u^c)^\perp\|_2 ^ 2\right]= { 1\over e_c(k)}{\sum\nolimits}_{|c| = c}{\sum\nolimits}_{i } \det(b_{\cdot , c}^\top b_{\cdot , c } ) \|b_i^\top ( u^c)^\perp\|_2 ^ 2\\ & \overset{(a)}= { 1\over e_c(k)}{\sum\nolimits}_{|c| = c}{\sum\nolimits}_{i\notin c } \det(b_{\cdot , c\cup\{i\}}b_{\cdot , c\cup\{i\}}^\top)\\ & \overset{(b)}{= } ( c+1 ) { e_{c+1}(k)\over e_c(k)}. \end{aligned}\ ] ] in , we use that projects vectors onto the null ( column ) space of , and uses the definition of . with lemma [ lem : char ] , it follows that the bound on the frobenius norm immediately implies the bound on the spectral norm : \;\;\le \mathbb{e}_c \left[\|k - k_{\cdot c } k_{c , c}^\dagger k_{c\cdot}\|_f \right]\\ & \;\;\le { c+1\over c+1-k}\sqrt{n - k } \|k - k_k\|_f \;\;\le { c+1\over c+1-k}(n - k ) \|k - k_k\|_2 \qedhere \end{aligned}\ ] ] [ [ remarks . ] ] remarks . + + + + + + + + compared to previous bounds ( e.g. , on uniform and leverage score sampling ) , our bounds seem somewhat weaker asymptotically ( since as they do not converge to 1 ) . this suggests that there is an opportunity for further tightening our bounds , which may be worthwhile , given that in section [ sec : exp : app ] our extensive experiments on various datasets with dpp - nystrm show that it attains superior accuracies compared with various state - of - the - art methods . our theoretical ( section [ sec : dppnys ] ) and empirical ( section [ sec : exp : app ] ) results suggest that dpp - nystrm is well - suited for scaling kernel methods .
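for reference , the standard nystrm construction and the relative error metric used throughout can be stated in a few lines ; the sketch below uses uniform landmark selection only as a stand - in ( a sampler for the -dpp itself is sketched at the end of the mixing - time discussion ) , and the toy rbf kernel is an illustrative assumption .

```python
import numpy as np

def nystrom(K, C_idx):
    """Standard Nystrom approximation from a landmark index set C:
    K_hat = K[:, C] pinv(K[C, C]) K[C, :]."""
    C = K[:, C_idx]
    W = K[np.ix_(C_idx, C_idx)]
    return C @ np.linalg.pinv(W) @ C.T

def relative_error(K, K_hat, k, norm="fro"):
    """||K - K_hat|| / ||K - K_k||, with K_k the best rank-k approximation."""
    vals, vecs = np.linalg.eigh(K)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    K_k = (vecs[:, :k] * vals[:k]) @ vecs[:, :k].T
    if norm == "fro":
        return np.linalg.norm(K - K_hat) / np.linalg.norm(K - K_k)
    return np.linalg.norm(K - K_hat, 2) / np.linalg.norm(K - K_k, 2)

# toy example: RBF kernel on random points, uniform landmarks as baseline
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 10.0)
C_idx = rng.choice(300, size=20, replace=False)
print(relative_error(K, nystrom(K, C_idx), k=10))
```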
in this section, we analyze its implications on kernel ridge regression .the experiments in section [ sec : exp ] confirm our results empirically .we have training samples , where are the observed labels under zero - mean noise with finite covariance .we minimize a regularized empirical loss over an rkhs .equivalently , we solve the problem for the corresponding kernel matrix . with the squared loss ,the resulting estimator is and the prediction for is given by .denoting the noise covariance by , we obtain the risk observe that the bias term is matrix - decreasing ( in ) while the variance term is matrix - increasing . since the estimator requires expensive matrix inversions , it is common to replace in by an approximation . if is constructed via nystrmwe have , and it directly follows that the variance shrinks with this substitution , while the bias increases . denoting the predictions from by , theorem [ thm : krr ] completes the picture of how using affects the risk .[ thm : krr ] if is constructed via dpp - nystrm , then \le 1 + { ( c+1)\over n\gamma } { e_{c+1}(k)\over e_c(k)}.\end{aligned}\ ] ] again , using , we obtain bounds that hold with high probability ( appendix [ append : sec : proof ] ) .we build on .knowing that as , it remains to bound the bias .using and , we obtain where .since and commute , we have it follows that hence , finally , this inequality implies that taking the expectation over -dpp( ) yields \le 1 + \mathbb{e}_c\left[{\nu_c \over n\gamma}\right]= 1 + { ( c+1)\over n\gamma } { e_{c+1}(k)\over e_c(k)}.\end{aligned}\ ] ] together with the fact that , we obtain & = \mathbb{e}_c \left[\sqrt{\mathrm{bias}(\tilde{k } ) + \mathrm{var}(\tilde{k})\over \mathrm{bias}(k ) + \mathrm{var}(k)}\right]\\ & \le 1 + { ( c+1)\over n\gamma } { e_{c+1}(k)\over e_c(k)}\end{aligned}\ ] ] for any . [ [ remarks.-1 ] ] remarks .+ + + + + + + + theorem [ thm : krr ] quantifies how the learning results depend on the decay of the spectrum of .in particular , the ratio closely relates to the effective rank of : if and , this ratio is almost zero , resulting in near - perfect approximations and no loss in learning .there exist works that consider nystrmmethods in this scenario .our theoretical bounds could also be tightened in this setting , possibly by a tighter bound on the elementary symmetric polynomial ratio .this theoretical exercise may be worthwhile given our extensive experiments comparing dpp - nystrmagainst other state - of - art methods in sec .[ sec : exp : krr ] that reveal the superior performance of dpp - nystrm .despite its excellent empirical performance and strong theoretical results , determinantal sampling for nystrmhas rarely been used in applications due to the computational cost of for directly sampling from a dpp , which involves an eigendecomposition .instead , we follow a different route : an mcmc sampler , which offers a promising alternative if the chain mixes fast enough .recent empirical results provide initial evidence , but without a theoretical analysis ; other recent works do not apply to our cardinality - constrained setting .we offer a theoretical analysis that confirms fast mixing ( i.e. , polynomial or even _linear_-time sampling ) under certain conditions , and connect it to our empirical results .the empirical results in section [ sec : exp ] illustrate the favorable performance of dpp - nystrmin trading off time and error . 
concurrently with this paper , derived a different , general analysis of fast mixing that also confirms our observations .algorithm [ algo : mcdpp ] shows a gibbs sampler for -dpp . starting with a uniformly random set , at iteration , we try to swap an element with an element , according to and .the stationary distribution of this chain is exactly the desired -dpp( ) . *input : * the kernel matrix , ] and .suppose a coupling of the markov chain is defined on all pairs in such that there exists an such that \le \alpha \delta(r , t) ] for an appropriate .since , the sets and differ in only two entries .let , so and and . for a state transition, we sample an element and \backslash r ] as switching candidates for .let and be the bernoulli random variables indicating whether we try to make a transition . in our couplingwe always set .hence , if then both chains will not transition and the distance of states remains . for ,we distinguish four cases : [ [ case - c1 ] ] case c1 + + + + + + + if and , we let and . as a result , .[ [ case - c2 ] ] case c2 + + + + + + + if and , we let and . in this case , if both chains transition , then the resulting distance is zero , otherwise it remains one . with probability both chains transition .[ [ case - c3 ] ] case c3 + + + + + + + if and , we let and . again , if both chains transition , then the resulting distance is , otherwise it remains one . with probability both chains transition .[ [ case - c4 ] ] case c4 + + + + + + + if and , we let and .if both chains make the same transition ( both move or do not move ) , the resulting distance is one , otherwise it increases to 2 .the distance increases with probability . with those four cases , we can now bound ] holds with probability at least .hence \le 1 + \mathbb{e}\left[{\nu_c\over n\gamma}\right ] + \sqrt{8c\log(1/\delta ) } { \text{tr}(k)\over n\gamma}\\ & \;\;\;\;= 1 + { 1\over n\gamma } \left({(c+1 ) e_{c+1}(k)\over e_c(k ) } + \sqrt{8c\log(1/\delta ) } \text{tr}(k)\right)\end{aligned}\ ] ] holds with probability at least .we first show the mixing of the gibbs dpp - nystrmwith 50 landmarks with different performance measures : relative spectral norm error , training error and test error of kernel ridge regression in fig .[ append : fig : conv_ailerons_50 ] .we also show corresponding results with respect to 100 and 200 landmarks in fig .[ append : fig : conv_ailerons_100 ] and fig .[ append : fig : conv_ailerons_200 ] , so as to illustrate that for varying number of landmarks the chain is indeed fast mixing and will give reasonably good result within a small number of iterations .we next show time - error trade - offs for various sampling methods on small and larger datasets with respect to fnorm and 2norm errors .we sample 20 landmarks from ailerons dataset of size 4,000 and california housing of size 12,000 .the result is shown in figure [ append : fig : ailerons_tradeoff_large ] and figure [ append : fig : calhousing_tradeoff_large ] and similar trends as the example results in the main text could be spotted : on small scale dataset ( size 4,000 ) ` kdpp`get very good time - error trade - off .it is more efficient than ` kmeans ` , though the error is a bit larger . while on larger dataset ( size 12,000 ) the efficiency is further enhanced while the error is even lower than ` kmeans ` .it also have lower variances in both cases compared to ` applev`and ` appreglev ` .overall , on larger dataset we obtain the best time - error trade - off with ` kdpp ` .
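for completeness , here is a compact sketch of a swap chain with the desired stationary distribution ; it uses a metropolized acceptance rule rather than the exact transition probabilities of algorithm [ algo : mcdpp ] ( which were stripped from this extraction ) , both choices leave the -dpp invariant , and the naive determinant recomputation per move could be replaced by rank - one updates for speed .

```python
import numpy as np

def kdpp_gibbs(K, c, n_iter, rng):
    """Swap-based Markov chain whose stationary distribution is the
    c-DPP with kernel K, i.e. Pr(S) proportional to det(K_S), |S| = c."""
    n = K.shape[0]
    S = list(rng.choice(n, size=c, replace=False))
    log_det = np.linalg.slogdet(K[np.ix_(S, S)])[1]
    for _ in range(n_iter):
        i = int(rng.choice(S))                                # leaves S
        j = int(rng.choice([v for v in range(n) if v not in S]))  # enters S
        T = [v for v in S if v != i] + [j]
        log_det_T = np.linalg.slogdet(K[np.ix_(T, T)])[1]
        # Metropolis acceptance: detailed balance w.r.t. det(K_S)
        if np.log(rng.random()) < log_det_T - log_det:
            S, log_det = T, log_det_T
    return np.array(S)

rng = np.random.default_rng(0)
K = np.cov(rng.normal(size=(50, 500))) + 1e-8 * np.eye(50)  # PSD toy kernel
print(sorted(kdpp_gibbs(K, c=5, n_iter=2000, rng=rng)))
```

the returned index set can be passed directly to the nystrm construction sketched earlier , which is exactly the dpp - nystrm pipeline whose time - error trade - off the experiments evaluate .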
|
the nystrm method has long been popular for scaling up kernel methods . its theoretical guarantees and empirical performance rely critically on the quality of the _ landmarks _ selected . we study landmark selection for nystrm using determinantal point processes ( dpps ) , discrete probability models that allow tractable generation of _ diverse _ samples . we prove that landmarks selected via dpps guarantee bounds on approximation errors ; subsequently , we analyze implications for kernel ridge regression . contrary to prior reservations due to the cubic complexity of dpp sampling , we show that ( under certain conditions ) markov chain dpp sampling requires only _ linear _ time in the size of the data . we present several empirical results that support our theoretical analysis , and demonstrate the superior performance of dpp - based landmark selection compared with existing approaches .
|
pervasive , geolocalized data generated by individuals have recently triggered a renewed interest for the study of cities and urban dynamics , and in particular , individual mobility patterns . various data sources have been used , such as car gps traces , rfids for collective transportation , and also data coming from social networks such as twitter or foursquare . a recent , very important source of data is given by individual mobile phone data . these data have allowed the study of individual mobility patterns with a high spatial and temporal resolution , the automatic detection of urban land uses , or the detection of communities based on human interactions . morphological aspects , such as the quantitative characterization and comparison of cities through their density landscape , their space consumption , their degree of polycentrism , or the clustering degree of their activity centers , have been studied for a long time in quantitative geography and spatial economy . until the late 2000s , these quantitative comparisons of cities were necessarily based on census data and/or remote sensing data , both giving a static estimation of the density of individuals and land uses in the city , at a fine spatial granularity but at a fixed , unique point in time . given the atemporal nature of these studies , they could not investigate some interesting questions related to the dynamical properties of the spatial structure of cities : how much does the city shape change through the course of the day ? where are the city s hotspots located at different hours of the day ? how are these hotspots spatially organized ? is there some kind of typical distance(s ) characterizing the permanent core , or ` backbone ' , of each city ? mobile phone data contain the spatial information about individuals and how it evolves during the day . these datasets thus give us the opportunity to answer some of these questions and to characterize quantitatively the spatial structure of cities . in this article , we address some of these questions using mobile phone data for a set of spanish cities shown in figure [ fig : map ] . we focus on the spatio - temporal properties of cities and , defining new metrics , study their structural properties and exhibit interesting patterns of urban systems . our analysis is based on aggregated and anonymized mobile phone traces provided by a spanish telecommunications operator , which concern 31 spanish urban areas studied during weekdays . these urban areas are very diverse in terms of geographical location , area , population size and density , as illustrated by figure [ fig : dataset ] . in particular , the wide range of population sizes will allow us to test some scaling relations and also to identify various behaviors . we have shown in this study that it is possible to extract relevant information from mobile phone data , not only about the mobility behavior of individuals , but also about the structure of the city itself . we have defined various indices that allow us to propose a new classification of cities based on their dynamical properties . we have also presented a method to determine the dominant centers , the hotspots , and we have confirmed recent results , obtained on completely different data , showing that the number of activity centers in cities scales sublinearly with the population size of the city .
we have also highlighted some properties of hotspots such as the strong stability of the hierarchy of city centers throughout the day , whatever the city size . these results constitute a step towards a quantitative typology of cities and their spatial structure , an important ingredient in the construction of a science of cities . comparing the spatial structure of cities of very different population sizes and areas requires relying on a harmonized definition of cities that goes beyond the arbitrariness of the spatial boundaries of administrative units . to that end , we have chosen to rely on the _ urban areas _ defined by the audes initiative ( areas urbanas de españa ) , which capture coherent delimitations of cities regarding the home - work commuting patterns of individuals living in the core city of the metropolitan areas and in their surrounding municipalities . these delimitations are built upon statistical criteria based on the proportion of residents of surrounding municipalities that commute to the main city to work . we started with the venables index , defined as : with the share of individuals present in cell at time , and the distance between and . when all activity is concentrated in one spatial unit only , the minimum value zero of is reached . an important point of this dilatation index is that one does not need to determine hotspots to compute it . by normalizing by the densities , we can compute a weighted average distance , the ` venables distance ' , with the share of individuals present in cell at time . in order to compare the value of across cities , we compute with the area of the city . by considering all pairs of cells and weighting their distance by the densities of individuals in each of them , signals how far the important places of the city at time are from each other . the data gives access to the spatial density of users at different moments . the full density is a complex object and we have to extract relevant and useful information . the locations that display a density much larger than the others - the hotspots - give a good picture of the city by showing where most of the people are . the hotspots thus contain important information about points of interest and activities in the city . the determination of centres and subcentres is a problem which has been broadly tackled in urban economics . starting from a spatial distribution of densities , we have to identify the local maxima . this is in principle a simple problem solved by the choice of a threshold for the density : a cell is a hotspot at time if the instantaneous density of users . this is for example what was done in to determine employment centres in los angeles . it is however clear that this method introduces some arbitrariness due to the choice of , and also requires prior knowledge of the city to which it is applied in order to choose a relevant value of . nonparametric methods have also been applied to determine the number of centres , some based on the regression of the natural logarithm of employment density on distance from the centre , some on the exponent of the negative exponential fit of the density distribution . a limitation of these methods is that they return a unique number of centres , which could be biased when the actual density distribution is not properly fitted by an exponential law .
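before turning to that alternative method , here is a minimal sketch of the venables distance defined above ; the equations were stripped from this extraction , so the pairwise - distance form and the final area normalization ( commonly a division by a length scale such as the square root of the city area ) are stated as assumptions .

```python
import numpy as np

def venables_distance(shares, coords):
    """Weighted average distance: sum over cell pairs of s_i s_j d(i, j),
    with s_i the (normalized) share of individuals in cell i at time t."""
    s = np.asarray(shares, dtype=float)
    s = s / s.sum()
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return float(s @ d @ s)

# toy grid: a concentrated density gives a much smaller value than a
# uniform one, as expected for a monocentric vs. a dispersed city
n = 10
xx, yy = np.meshgrid(np.arange(n), np.arange(n))
coords = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
uniform = np.ones(n * n)
concentrated = np.exp(-((xx - 1) ** 2 + (yy - 1) ** 2)).ravel()
for name, s in [("uniform", uniform), ("concentrated", concentrated)]:
    print(name, round(venables_distance(s, coords), 3))
# dividing by a length scale of the city (e.g. the square root of its
# area, an assumed normalization) makes values comparable across cities
```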
here we will propose an alternative method that allows us to control the impact of this choice . a first simple criterion is to choose the threshold that corresponds to the average of the distribution at time t : all the cells whose density is larger than the average \langle \rho \rangle are hotspots . this is indeed a weak definition of what can be considered as a hotspot , and we propose here to use it as a ` lower ' bound . in order to understand how the various properties of hotspots will depend on this definition , we introduce a more restrictive definition which will be considered as an upper bound of what can be considered as a hotspot . in the following we discuss how to find this upper bound . in order to characterize the disparity of the activity in the city and to isolate the dominant places , we first plot the lorenz curve of the density distribution in the city at each hour . the lorenz curve , a standard object in economics , is a graphical representation of the cumulative distribution function of an empirical probability distribution . for a given hour , we have the distribution of densities and we sort them in increasing rank , and denote them by \rho_{(1 ) } \leq \rho_{(2 ) } \leq \ldots \leq \rho_{(n ) } , where n is the number of cells . the lorenz curve is constructed by plotting on the x - axis the proportion of cells f = i / n and on the y - axis the corresponding proportion of users density l ( f ) = \sum_{j = 1}^{i } \rho_{(j ) } / \sum_{j = 1}^{n } \rho_{(j ) } . if all the densities were of the same order , the lorenz curve would be the diagonal from ( 0 , 0 ) to ( 1 , 1 ) . in general we observe a convex curve with a more or less strong curvature , and the area between the diagonal and the actual curve is related to the gini coefficient , an important indicator of inequality used in economics . in the lorenz curve , the stronger the curvature , the stronger the inequality and , intuitively , the smaller the number of hotspots . this remark allows us to construct a new criterion by relating the number of dominant hotspots ( i.e. those that have a very high value compared to the other cells ) to the slope of the lorenz curve at the point f = 1 : the larger the slope , the smaller the number of dominant individuals in the statistical distribution . the natural way to identify the typical scale of the number of hotspots is to take the intersection point f^* between the tangent of l ( f ) at the point f = 1 and the horizontal axis ( see figure [ fig : lorenz - curve ] ) . this method is inspired by the classical scale determination for an exponential decay : if the decay from f = 1 were an exponential of the form \exp ( - ( 1 - f ) / \tau ) , where \tau is the typical scale we want to extract , this method would give 1 - f^* = \tau . we note here that the average criterion corresponds to the point of the lorenz curve with slope equal to 1 . indeed , the general expression of the lorenz curve for the set of densities whose cumulative function is c is l ( f ) = \frac{1}{\langle \rho \rangle } \int_0^f c^{-1 } ( u ) \ , du , where c^{-1 } is the inverse function of the cumulative . this point thus satisfies dl / df = c^{-1 } ( f ) / \langle \rho \rangle = 1 , which gives \rho = \langle \rho \rangle , or in other words , the hotspots will be those with densities larger than the average . in contrast , our more restrictive criterion based on the slope at f = 1 gives f^* = 1 - \langle \rho \rangle / \rho_{\max } , where \rho_{\max } is the maximum value of the density ( for a given time ) . we thus see that in general this threshold exceeds the average one , and that this new criterion , more restrictive , does not only depend on the average value of the density but also on the dispersion : as \rho_{\max } increases , the value of f^* increases and therefore the number of detected hotspots decreases . all other possible and reasonable methods will then give a value comprised in the interval between the average criterion and our criterion ( also denoted by ` loubar ' ) .
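the two criteria above translate directly into a short computation . the following sketch ( python ; the lognormal densities are a synthetic illustration , not data from the 31 cities ) derives both thresholds from a single sorted pass over the cell densities of one time bin :

```python
import numpy as np

def hotspot_thresholds(densities):
    """return the 'average' and 'loubar' density thresholds for one time bin.

    minimal sketch of the two criteria described above; the loubar threshold
    comes from the tangent of the lorenz curve at f = 1, whose slope is
    rho_max / <rho>, so it crosses the horizontal axis at f* = 1 - <rho>/rho_max.
    """
    rho = np.sort(np.asarray(densities, dtype=float))  # increasing rank
    n = rho.size
    avg_threshold = rho.mean()                         # slope-1 point of the lorenz curve
    f_star = 1.0 - rho.mean() / rho.max()              # tangent intercept at f = 1
    loubar_threshold = rho[int(np.floor(f_star * n))]
    return avg_threshold, loubar_threshold

rng = np.random.default_rng(0)
rho = rng.lognormal(mean=0.0, sigma=1.5, size=1000)    # synthetic density landscape
t_avg, t_loubar = hotspot_thresholds(rho)
print("hotspots (average):", int((rho > t_avg).sum()))
print("hotspots (loubar): ", int((rho > t_loubar).sum()))  # always the smaller set
```

any intermediate criterion then simply corresponds to a threshold chosen between these two values .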
instead of choosing a particular point , we will thus study most of the properties computed for hotspots with the two methods , giving us both lower and upper bounds . in particular , we will be able to test the robustness of our results against the arbitrariness of the hotspot identification method . figure [ fig : mapshs - methods - bcn ] shows the location of the hotspots selected according to the two methods / criteria at different moments of the day , in the metropolitan area of barcelona . these maps can be regarded as the extremes of the hotspot maps that reasonable delimitation methods could produce , i.e. with a number of hotspots comprised between the counts given by the two criteria ; the maps are computed on a grid of square cells . ] in the hotspot identification process , the size of the grid cells on which we aggregate the numbers / densities of users is another arbitrary parameter ( cf . section methods ) . since we do not want to determine this value separately for each city , we consider that several sizes should be tested for each city and that it is reasonable to consider that this cell size can vary from 500 metres to 2 km . figure [ fig : hovernvsa - cities ] gives an idea of how much the proportion of hotspots changes from one cell size to another . the cell size should primarily be chosen based on what is considered as a reasonable size for an urban hotspot . from the pedestrian point of view , every size between 500 metres and 2 kilometres seems _ a priori _ acceptable . below 500 m , it would clearly be necessary to aggregate contiguous hotspots : with such small cells , two contiguous hotspots could not as easily be distinguished as two different ones from a pedestrian point of view . in contrast , a size of 2 km can be considered as an upper bound for the same reasons : if two contiguous cells of this size are classified as hotspots , it is reasonable to identify them as two distinct neighbourhoods . it is however a question of perception and should be discussed carefully . in the hypothesis adopted here , we chose to consider that _ two adjacent hotspots are two different hotspots _ . for reasonable sizes of the grid , the values of the indicators should be robust with respect to a change of the cell size . we then tested the sensitivity of our results with respect to different resolutions . proportion of hotspots for the two hotspot definitions and different sizes of grid cells , for eight different cities of very different sizes . the cities chosen cover the full range of the population size distribution of the set of the 31 cities studied . every reasonable method for defining hotspots would give a value between the two lines of each plot . one can see that qualitatively the pattern stays identical whatever the grid size for each ( city , method ) pair . ] in figure [ fig : hvsp2 ] we show the scaling relation between the number of hotspots and the population , and the effect of the grid size . here we see that the scaling results and the value of the exponent are robust against a change in ( i ) the threshold used for identifying the hotspots and ( ii ) the size of the grid cells . number of hotspots vs. the population size for the 31 cities studied . each point in the scatterplot corresponds to the average number of hotspots determined for each one - hour time bin of the weekday time period considered for the five weekdays . the linear relationship on a log - log plot indicates a power - law relationship between the two quantities , with an exponent value smaller than one , indicating that the number of activity centers in a city grows sublinearly with its population size . ]
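the exponent of such a scaling law can be checked with an ordinary least - squares fit in log - log space . the sketch below is a minimal illustration of the procedure ; the population and hotspot counts are invented placeholders , not the measured values of the 31 cities :

```python
import numpy as np

# fit h ~ p**beta by linear regression of log(h) on log(p);
# beta < 1 indicates sublinear growth of the number of hotspots.
population = np.array([2.1e5, 4.0e5, 8.5e5, 1.6e6, 3.2e6, 6.0e6])
hotspots = np.array([12, 18, 29, 41, 63, 92])

beta, log_prefactor = np.polyfit(np.log(population), np.log(hotspots), deg=1)
print(f"estimated exponent beta = {beta:.2f}")
```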
the kendall rank coefficient is used as a test statistic to establish whether two lists of random variables may be regarded as statistically dependent . to each cell we associate its rank in the ordered density distribution at time t . kendall 's \tau value indicates how much the hierarchy changed between t and t + 1 . for a set of n pairs , it is equal to the difference between the number of converging pairs ( i.e. the rank of a cell was larger ( resp . smaller ) than that of another cell at t and is still larger ( resp . smaller ) at t + 1 ) and the number of diverging pairs ( the rank was smaller ( resp . larger ) at t and is larger ( resp . smaller ) at t + 1 ) , normalized by the total number of pairs . the kendall values are plotted in figure [ fig : kendall ] . under the null hypothesis of independence of the two lists , the distribution of \tau has an expected value of zero and , for larger samples , the variance is given by \sigma^2 = 2 ( 2 n + 5 ) / [ 9 n ( n - 1 ) ] . any value of \tau significantly larger than this null value signals the existence of relevant correlations . we show in figure [ fig : kendall ] the evolution of the kendall values calculated for the set of permanent hotspots during daytime in an average weekday , for the 31 spanish urban areas with more than 200,000 inhabitants . the curves are ranked by decreasing order of population size ( the biggest city in the top left corner , the smallest in the bottom right ) . the red curves correspond to the daytime evolution of the kendall value for the hotspots selected with the more restrictive ` loubar ' criterion , the blue ones to the kendall value of the hotspots selected with the average criterion . these results indicate that the hierarchy of permanent hotspots is indeed very stable in time . the authors acknowledge funding from the eu commission through project eunoia ( fp7-dg.connect-318367 ) . t.l . designed the study , analysed the data and wrote the manuscript ; m.l . processed and analysed the data ; o.g.c . and m.p . processed the data ; r.h . and j.j.r . coordinated the study ; e.f.-m . obtained and processed the data ; m.b . coordinated and designed the study , and wrote the manuscript . all authors read , commented and approved the final version of the manuscript . asgari f , gauthier v , becker m ( 2013 ) a survey on human mobility and its applications . [ physics ] . gallotti r , bazzani a , rambaldi s ( 2012 ) towards a statistical physics of human mobility . international journal of modern physics 23 . furletti b , cintia p , renso c , spinsanti l ( 2013 ) inferring human activities from gps tracks .
in : urbcomp 2013 , chicago , usa , august 11 . roth c , kang sm , batty m , barthélemy m ( 2011 ) structure of urban movements : polycentric activity and entangled hierarchical flows . plos one 6 : e15923 . hawelka b , sitko i , beinat e , sobolevsky s , kazakopoulos p , et al . ( 2013 ) geo - located twitter as the proxy for global mobility patterns . [ physics ] . noulas a , scellato s , lambiotte r , pontil m , mascolo c ( 2012 ) a tale of many cities : universal patterns in human urban mobility . plos one 7 : e37027 . onnela jp , saramäki j , hyvönen j , szabó g , lazer d , et al . ( 2007 ) structure and tie strengths in mobile communication networks . proceedings of the national academy of sciences 104 : 7332 . lambiotte r , blondel vd , de kerchove c , huens e , prieur c , et al . ( 2008 ) geographical dispersal of mobile communication networks . physica a : statistical mechanics and its applications 387 : 5317 - 5325 . gonzalez mc , hidalgo ca , barabasi al ( 2008 ) understanding individual human mobility patterns . nature 453 : 779 - 782 . schneider cm , belik v , couronné t , smoreda z , gonzález mc ( 2013 ) unravelling daily human mobility motifs . journal of the royal society interface 10 . kung ks , sobolevsky s , ratti c ( 2013 ) exploring universal patterns in human home / work commuting from mobile phone data . [ physics ] . pei t , sobolevsky s , ratti c , shaw sl , zhou c ( 2013 ) a new insight into land use classification based on aggregated mobile phone data . sobolevsky s , szell m , campari r , couronné t , smoreda z , et al . ( 2013 ) delineating geographical regions with networks of human interactions in an extensive set of countries . [ physics ] . anas a , arnott r , small ka ( 1997 ) urban spatial structure . university of california - transportation center . bertaud a , malpezzi s ( 2003 ) the spatial distribution of population in 48 world cities : implications for economies in transition . report , world bank . tsai yh ( 2005 ) quantifying urban form : compactness versus sprawl . urban studies 42 : 141 - 161 . pereira rhm , nadalin v , monasterio l , albuquerque phm ( 2013 ) urban centrality : a simple index . geographical analysis 45 : 77 - 89 . schwarz n ( 2010 ) urban form revisited - selecting indicators for characterising european cities . landscape and urban planning 96 : 29 - 47 . thomas i , frankhauser p , biernacki c ( 2008 ) the morphology of built - up landscapes in wallonia ( belgium ) : a classification using fractal indices . landscape and urban planning 84 : 99 - 115 . guérois m , pumain d ( 2008 ) built - up encroachment and the urban field : a comparison of forty european cities . environment and planning a 40 : 2186 - 2203 . berroir s , mathian h , saint - julien t , sanders l ( 2011 ) the role of mobility in the building of metropolitan polycentrism .
in : desrosiers f , thériault m , editors , modelling urban dynamics , iste - wiley . le néchet f ( 2012 ) urban spatial structure , daily mobility and energy consumption : a study of 34 european cities . cybergeo : european journal of geography . openshaw s , taylor pj ( 1979 ) a million or so correlation coefficients : three experiments on the modifiable areal unit problem . statistical applications in the spatial sciences 21 : 127 - 144 . louf r , barthelemy m ( 2013 ) modeling the polycentric transition of cities . physical review letters 111 . arcaute e , hatna e , ferguson p , youn h , johansson a , et al . ( 2013 ) city boundaries and the universality of scaling laws . [ physics ] . bretagnolle a , pumain d , vacchiani - marcuzzo c ( 2009 ) the organisation of urban systems . complexity perspective in innovation and social change : 197 - 220 . bretagnolle a , paulus f , pumain d ( 2002 ) time and space scales for measuring urban growth . cybergeo : european journal of geography . giuliano g , small ka ( 1991 ) subcenters in the los angeles region . regional science and urban economics 21 : 163 - 182 . mcmillen dp ( 2001 ) nonparametric employment subcenter identification . journal of urban economics 50 : 448 - 473 . mcmillen dp , smith sc ( 2003 ) the number of subcenters in large urban areas . journal of urban economics 53 : 321 - 338 . griffith da ( 1981 ) modelling urban population density in a multi - centered city . journal of urban economics 9 : 298 - 310 .
|
pervasive infrastructures , such as cell phone networks , enable the capture of large amounts of human behavioral data , but also provide information about the structure of cities and their dynamical properties . in this article , we focus on these latter aspects by studying phone data recorded during 55 days in 31 spanish metropolitan areas . we first define an urban dilatation index which measures how the average distance between individuals evolves during the day , allowing us to highlight different types of city structure . we then focus on hotspots , the most crowded places in the city . we propose a parameter - free method to detect them and to test the robustness of our results . the number of these hotspots scales sublinearly with the population size , a result in agreement with previous theoretical arguments and measures on employment datasets . we study the lifetime of these hotspots and show in particular that the hierarchy of permanent ones , which constitute the ` heart ' of the city , is very stable whatever the size of the city . the spatial structure of these hotspots is also of interest and allows us to distinguish different categories of cities , from monocentric and `` segregated '' ones , where the spatial distribution is very dependent on land use , to polycentric ones , where the spatial mixing between land uses is much more important . these results point towards the possibility of a new , quantitative classification of cities using high resolution spatio - temporal data .
|
chemical kinetics with multiple time scales and their control involve highly stiff and often high - dimensional ordinary differential equations ( ode ) . this poses hard challenges to the numerical solution and is the reason why model reduction methods are considered . the dynamics can be simplified by focusing on the long time behavior of such systems ( leaving fast transients unresolved ) and calculating fast modes as functions of the slow ones . ideally this leads to low - dimensional manifolds in high - dimensional state space . in the special case of singularly perturbed systems , they are understood quite well and called slow invariant manifolds . an open problem is how the slow manifolds can be used to simplify the solution of optimal control problems ( ocp ) that involve multiple time scale ode constraints . in dissipative dynamical systems , the bundling of trajectories ( on a fast time scale ) onto low - dimensional manifolds is observed geometrically . once trajectories reach the neighborhood of the slow manifold , they will evolve slowly and will never leave this manifold neighborhood . thus , this manifold is called _ slow invariant attracting manifold _ ( siam ) . the aim of slow manifold computation techniques is to approximately compute the siam as the graph of a function of only a few selected species ( so - called _ reaction progress variables _ ) . thus , manifold - based model reduction generates a function h : \mathbb{r}^{n_s } \to \mathbb{r}^{n_f } ( n_s is the number of slow variables resp . reaction progress variables and n_f is the number of fast variables ) , such that z_f = h ( z_s ) approximates points of the siam . in order to investigate optimal control benchmark problems , we consider singularly perturbed systems , i.e. systems where the ode can be transformed into the following form : [ formula : sps ] \dot z_s = f_s ( z_s , z_f ) , \quad { \varepsilon } \dot z_f = f_f ( z_s , z_f ) , \quad 0 < { \varepsilon } \ll 1 . two methods relevant in our context for the approximative calculation of the siam are briefly reviewed in the following subsections . the main idea of the zero derivative principle ( zdp ) for model reduction of singularly perturbed systems is to identify , for given values of the slow variables z_s , a point z_f such that higher - order time derivatives of the fast components vanish , i.e. \frac{\mathrm{d}^m z_f}{\mathrm{d } t^m } = 0 for a sufficiently high order m . another approach , proposed by lebiedz and unger , is motivated geometrically : among arbitrary trajectories of ( [ formula : sps ] ) for which the slow components end within the time t_1 in the state z_s^* , the corresponding part of the trajectory on the siam is characterized by the smallest curvature ( see also , ) . this motivates optimization problem ( [ formula : unger ] ) , which is a variational boundary value problem ( bvp ) whose objective penalizes the curvature of the trajectory : [ formula : unger ] \min \int_{t_0}^{t_1 } \lVert \ddot z ( t ) \rVert_2^2 \ , dt \quad \text{s.t . } \quad \dot z_s = f_s ( z_s , z_f ) , \quad { \varepsilon } \dot z_f = f_f ( z_s , z_f ) , \quad t \in [ t_0 , t_1 ] , \quad z_s ( t_1 ) = z_s^ * . in our application context we also use the local reformulation of problem ( [ formula : unger ] ) , where t_0 \to t_1 . one of our research interests is to solve optimal control problems involving multiple time scales as they appear frequently , e.g. , in the field of chemical engineering . thus , we consider the following ( typically high - dimensional ) ocp : [ formula : origocp ] applying the model reduction methods presented in the last section and assuming the control to be a slow variable yields the lower - dimensional problem ( cf .
[ formula : redocp ] ) . this system has the advantage that it has significantly fewer optimization variables and that the ode in ( [ formula : redocp ] ) is less stiff , which makes it solvable by fast explicit numerical integrators , compared to the implicit methods required for stiff ode . however , numerical solution methods for ocps like the multiple shooting method need repeated evaluation of the function h as well as its partial derivatives . therefore , it would be beneficial to combine the calculation of the sim and the optimal control problem . this is obviously possible if the approximation of the sim can be formulated as a ( nonlinear ) root finding problem , e.g. with the zdp method . thus , we propose to solve the following ocp instead of ( [ formula : redocp ] ) : [ formula : liftedocp ] we apply the ideas presented in the last sections to a benchmark ocp motivated by the michaelis - menten - henri mechanism modeling the reaction of substrate to a product via a substrate - enzyme - complex with the help of an enzyme . simplifying the ode given by ( [ formula : enzyme ] ) and introducing an artificial objective function yields ocp ( [ formula : enzymeocp ] ) , where the control represents the possibility to add some substrate to the system and { \varepsilon } describes the time - scale separation between the time evolution of the slow and the fast species . figure [ fig : enzyme ] shows the results of the numerical solution of ( [ formula : enzymeocp ] ) using the multiple - shooting scheme with an implicit radau-2a integrator . comparing the solution of the proposed ocp ( [ formula : liftedocp ] ) with the solution of ( [ formula : enzymeocp ] ) , the two agree up to a small relative error in both the objective functional value and the control . although the proposed method uses exactly as many variables as the original ocp , we observe a speed - up of a factor of 4 for solving ocp ( [ formula : enzymeocp ] ) due to the use of an explicit integration scheme . c. w. gear and t. j. kaper and i. g. kevrekidis and a. zagaris , _ projecting to a slow manifold : singularly perturbed systems and legacy codes _ , siam journal on applied dynamical systems , 4 ( 2005 ) , pp . 711 - 732 . d. lebiedz and j. unger , _ on unifying concepts for trajectory - based slow invariant attracting manifold computation in kinetic multi - scale models _ , mathematical and computer modelling of dynamical systems , 22 ( 2016 ) , pp . 87 - 112 . a. zagaris , c. w. gear , t. j. kaper and y. g. kevrekidis , _ analysis of the accuracy and convergence of equation - free projection to a slow manifold _ , mathematical modelling and numerical analysis , 43 ( 2009 ) , pp .
|
chemical reactions modeled by ordinary differential equations are finite - dimensional dissipative dynamical systems with multiple time - scales . they are numerically hard to tackle especially when they enter an optimal control problem as `` infinite - dimensional '' constraints . since discretization of such problems usually results in high - dimensional nonlinear problems , model ( order ) reduction via slow manifold computation seems to be an attractive approach . we discuss the use of slow manifold computation methods in order to solve optimal control problems more efficiently having real - time applications in view .
|
the analysis of spreading processes in large - scale complex networks is a fundamental dynamical problem in network science . the relationship between the dynamics of epidemic / information spreading and the structure of the underlying network is crucial in many practical cases , such as the spreading of worms in a computer network , viruses in a human population , or rumors in a social network . several papers approached different facets of the virus spreading problem . a rigorous analysis of epidemic spreading in a finite one - dimensional linear network was developed by durrett and liu in . in , wang et al . derived a sufficient condition to tame an epidemic outbreak in terms of the spectral radius of the adjacency matrix of the underlying graph . similar results were derived by ganesh et al . in , establishing a connection between the behavior of a viral infection and the eigenvalues of the adjacency matrix of the network . in this paper , we study the dynamics of a viral spreading in an important type of proximity networks called random geometric graphs ( rgg ) . rgg s consist of a set of vertices randomly distributed in a given spatial region , with edges connecting pairs of nodes that are within a given distance from each other ( also called the _ connectivity radius _ ) . in this paper , we derive new explicit expressions for the expected spectral moments of the random adjacency matrix associated to an rgg . our results allow us to derive analytical conditions under which an rgg is well - suited to tame an infection in the network . the paper is structured as follows . in section ii , we describe random geometric graphs and introduce several useful results concerning their structural properties . we also present the spreading model in and review an important result that relates the behavior of an initial infection with the spectral radius of the adjacency matrix . in section iii , we study the eigenvalue spectrum of random geometric graphs . we derive explicit expressions for the expected spectral moments in the case of one- and two - dimensional rgg s . in section iv , we use these expressions to study the spectral radius of rgg s . our results allow us to design rgg s with the objective of taming epidemic outbreaks . numerical simulations in section iv validate our results . in this section , we briefly describe random geometric graphs and introduce several useful results concerning their structural properties ( see for a thorough treatment ) . we then describe the spreading model introduced in and show how to study the behavior of an infection in the network from the point of view of the adjacency eigenvalues . consider a set of n nodes , \{ v_1 , \ldots , v_n \ } , respectively located at random positions \mathbf{x}_1 , \ldots , \mathbf{x}_n , where the \mathbf{x}_i are i.i.d . random vectors uniformly distributed on the d - dimensional unit torus . we use the torus for convenience , to avoid boundary effects . we then connect two nodes v_i and v_j only if \lVert \mathbf{x}_i - \mathbf{x}_j \rVert \leq r , where r is the so - called connectivity radius . in other words , a link exists between v_i and v_j if and only if \mathbf{x}_j lies inside the sphere of radius r centered at \mathbf{x}_i . we denote this spherical region by \mathcal{b } ( \mathbf{x}_i ; r ) , and the resulting random geometric graph by \mathcal{g } ( n ; r ) . we define a _ walk _ of length k from v_0 to v_k as an ordered set of ( possibly repeated ) vertices ( v_0 , v_1 , \ldots , v_k ) such that v_i \sim v_{i+1 } for i = 0 , \ldots , k - 1 ; if v_0 = v_k , the walk is said to be _ closed _ . the _ degree _ of a node is the number of edges connected to it .
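as a concrete illustration of this construction , the following sketch ( python ) samples an rgg on the d - dimensional unit torus and returns its adjacency matrix ; all parameter values are arbitrary examples :

```python
import numpy as np

def random_geometric_graph(n, r, d=2, seed=0):
    """adjacency matrix of an rgg on the d-dimensional unit torus: n i.i.d.
    uniform points, an edge whenever the torus distance is at most r."""
    rng = np.random.default_rng(seed)
    x = rng.random((n, d))
    diff = np.abs(x[:, None, :] - x[None, :, :])
    diff = np.minimum(diff, 1.0 - diff)        # wrap-around: torus metric
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    a = (dist <= r).astype(int)
    np.fill_diagonal(a, 0)                     # simple graph: no self-loops
    return a

a = random_geometric_graph(n=500, r=0.05, d=2)
print("average degree:", a.sum(axis=1).mean())  # compare with n * pi * r**2
```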
in our case , the degrees are identical random variables with expectation \mathbb{e } [ k_i ] = n v^{ ( d ) } r^{d } , \label{expected degree } where v^{ ( d ) } = \pi^{d/2 } / \gamma ( d/2 + 1 ) is the volume of a d - dimensional unit sphere and \gamma ( \cdot ) is the gamma function . the _ clustering coefficient _ is a measure of the number of triangles in a given graph , where a triangle is defined by a set of edges \{ ( i , j ) , ( j , k ) , ( k , i ) \ } such that all three are present in the graph . for one- and two - dimensional rggs we can derive an explicit expression for the expected number of triangles , \mathbb{e } [ t_i ] . the adjacency matrix , \mathbf{a } = [ a_{ij } ] , is defined entry - wise by a_{ij } = 1 if nodes v_i and v_j are connected , and a_{ij } = 0 otherwise . ( note that a_{ii } = 0 for simple graphs . ) denote the eigenvalues of a symmetric adjacency matrix \mathbf{a } by \lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n . the k - th order moment of the eigenvalue spectrum of \mathbf{a } is defined as m_k = \frac{1}{n } \sum_{i=1}^{n } \lambda_i^{k } ( which is also called the k - th _ order spectral moment _ ) . we are interested in studying asymptotic properties of the sequence of graphs \mathcal{g } ( n ; r_n ) for some sequence r_n . in , two particularly interesting regimes are introduced : the _ thermodynamic limit _ , in which n r_n^d tends to a constant , so that the expected degree of a vertex tends to a constant , and the _ connectivity regime _ , with r_n^d = c \log n / n and c a constant , so that the expected degree of the nodes grows as \log n . in this paper , we focus on studying the spectral moments in the connectivity regime . in section iii , we derive explicit expressions for the expected spectral moments of \mathcal{g } ( n ; r ) for any network size . we then use this information to bound the spectral radius of the adjacency matrix of \mathcal{g } ( n ; r ) . in this section , we briefly review an automaton model that describes the dynamics of a viral infection in a specific network of interactions . this model was proposed and analyzed in , where a connection between the growth of an initial infection in the network and the spectral radius of the adjacency matrix was established . this model involves several parameters . first , the infection rate \beta represents the probability of a virus at an infected node spreading to another neighboring node during a time step . also , we denote by \delta the probability of recovery of any infected node at each time step . for simplicity , we consider \beta and \delta to be constants for all the nodes in the network . we also denote by p_{i , t } the probability that node v_i is infected at time step t . in this regime , a sufficient condition for a small initial infection to die out is \lambda_{\max } ( \mathbf{a } ) < \delta / \beta . one can prove that ( [ epidemic conditions ] ) is a sufficient condition for local stability around the disease - free state . thus , we can use condition ( [ epidemic conditions ] ) to design networks with the objective of taming initial low - density infections . in this paper , we study the eigenvalue distribution of the random adjacency matrix associated to \mathcal{g } ( n ; r ) for d = 1 , 2 . in this section , we characterize the eigenvalue distribution using its sequence of spectral moments . in our derivations , we use an interesting graph - theoretical interpretation of the spectral moments : _ the k - th spectral moment of \mathbf{a } is proportional to the number of closed walks of length k in the graph . _ this result allows us to transform the algebraic problem of computing spectral moments of the adjacency matrix into the combinatorial problem of counting closed walks in the graph . in the following subsection , we compute the expected value of the number of closed walks of length k in \mathcal{g } ( n ; r ) . as we mentioned above , we can compute the k - th spectral moment of a graph by counting the number of closed walks of length k . in the case of an rgg , this number is a random variable . in this subsection , we introduce a novel technique to compute the expected number of closed walks of length k .
for clarity , we introduce our technique for the first three expected spectral moments . we then use these results to infer a general expression for higher - order moments in one - dimensional rgg s . the first - order spectral moment is equal to the ( normalized ) number of closed walks of length 1 . since \mathcal{g } ( n ; r ) is a simple graph with no self - loops , we have that m_1 is a deterministic quantity equal to zero . we now study the expected second moment , \mathbb{e } [ m_2 ] , and the expected third moment , \mathbb{e } [ m_3 ] , which can be computed from the expected numbers of closed walks of lengths 2 and 3 . more generally , the expected number of closed walks of length k can be written in terms of the volumes of certain polytopes ( [ walks as volumes ] ) , and these volumes can in turn be expressed using eulerian numbers ( [ volumes as eulers ] ) . hence , from ( [ walks as volumes ] ) and ( [ volumes as eulers ] ) , we have the following closed - form expression for the asymptotic expected spectral moments : \mathbb{e } [ m_k ] \asymp ( n r ) ^{k-1 } \frac{1}{2 ( k - 1 ) ! } \sum_{j=1}^{k-2 } \binom{k-1}{j-1 } e_{k-1 , j } , \label{expected spectral moments 1d } where the e_{k-1 , j } are eulerian numbers . in the following table we compare the analytical result in ( [ expected spectral moments 1d ] ) with numerical realizations of the empirical spectral moments . in our simulations , we distribute the nodes uniformly in the one - dimensional unit torus and choose a connectivity radius which results in an average degree of 20 :

k | analytical \mathbb{e } [ m_k ] | empirical mean | empirical std
1 | 0 | 1.38e-16 | 1.3e-15
2 | 20 | 19.9326 | 0.0976
3 | 300 | 297.284 | 4.3598
4 | 5,733 | 5,956.30 | 196.94

our numerical results present an excellent match with our analytical predictions . in this subsection , we derive expressions for the first three expected spectral moments of \mathcal{g } ( n ; r ) when the nodes are uniformly distributed in the two - dimensional unit torus . the expressions for the first and second expected spectral moments are \mathbb{e } [ m_1 ] = 0 and \mathbb{e } [ m_2 ] = \pi n r^{2 } . the third moment requires the expected number of triangles touching a node , \mathbb{e } [ t_i ] , which can be computed by integrating the lens area a_{l } ( \rho ; r ) , i.e. the area of the intersection of two disks of radius r whose centers are a distance \rho apart , as follows : \mathbb{e } [ t_i ] = \int_{\rho = 0}^{r } \int_{\phi = 0}^{2\pi } n^{2 } a_{l } ( \rho ; r ) \ , \rho \ , d\rho \ , d\phi . \label{triangleintegral } after substituting ( [ lens area ] ) in ( [ triangle integral ] ) , we can explicitly solve the resulting integral to be \mathbb{e } [ t_i ] = \left ( \pi - \frac{3 \sqrt{3}}{4 } \right ) \pi ( n r^{2 } ) ^{2 } \approx 5.78 ( n r^{2 } ) ^{2 } . \label{triangles 2d } consequently , we have the following expression for the third expected spectral moment : \mathbb{e } [ m_3 ] = \frac{1}{n } \sum_{i=1}^{n } \mathbb{e } [ t_i ] = \mathbb{e } [ t_i ] . the same approach extends to higher - order moments \mathbb{e } [ m_k ] ( for k > 3 ) , as follows : \mathbb{e } [ m_k ] = n^{k-1 } \int_{ ( \boldsymbol{\eta } , \boldsymbol{\varphi } ) \in c_{k-2 } } a_{l } ( \rho ; r ) \prod_{j=2}^{k-1 } \eta_{j } \ , d\boldsymbol{\eta } \ , d\boldsymbol{\varphi } , where \boldsymbol{\eta } and \boldsymbol{\varphi } collect the polar coordinates of the k - 2 intermediate nodes of the walk and c_{k-2 } = ( [ 0 , r ] \times [ 0 , 2 \pi ) ) ^{k-2 } . lens area a_{l } ( \rho ; r ) used in these computations , in a two - dimensional rgg . ]
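these expressions are easy to verify empirically , since the k - th spectral moment equals the normalized trace of \mathbf{a}^k , i.e. the normalized count of closed walks of length k . a minimal check in python , reusing the random_geometric_graph sketch given earlier ( the values of n , r and the number of realizations are illustrative ) :

```python
import numpy as np

def spectral_moment(a, k):
    # m_k = (1/n) * trace(a^k): normalized number of closed walks of length k
    return np.trace(np.linalg.matrix_power(a, k)) / a.shape[0]

moments = {
    k: np.mean([spectral_moment(random_geometric_graph(500, 0.02, d=1, seed=s), k)
                for s in range(20)])
    for k in (1, 2, 3)
}
print(moments)  # expect m1 = 0 and m2 close to the average degree 2*n*r = 20
```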
in the following section , we use the results introduced in this section to study the spreading of an infection in a random geometric network .in this section , we use the expressions for the expected spectral moments to design random geometric networks to tame an initial viral infection in the network . in our design problem , we consider that the size of the network and the parameters in ( [ epidemic model ] ) , i.e. , and , are given .hence , our design problem is reduced to studying the range of values of for which the rgg is well - suited to tame an initial viral infection .a sufficient condition for local stability around the disease - free state was given in ( [ epidemic conditions ] ) .thus , we have to find the range of values of for which the associated spectral radius is smaller than the ratio . in the following subsection, we show how to derive an analytical upper bound for the spectral radius based on the expected spectral moments . in order to upper - bound the spectral radius , we use wigner s high - order moment method .this method provides a probabilistic upper bound based on the asymptotic behavior of the -th expected spectral moments for large .we present the details for a one - dimensional rgg , although the same technique can be applied to rgg s in higher dimensions . for a one - dimensional rgg in the connectivity regime, we derived an explicit expression for the expected spectral moments in ( [ expected spectral moments 1d ] ) .a logarithmic plot of vol for unveils that vol for large - order moments ( a line in logarithmic scale ) , where , from a numerical fitting , we find that and . therefore , from ( [ expected spectral moments 1d ] ) we have \asymp \beta _ { 1}\left ( c_{1}nr\right ) ^{k},\]]for large . for even - order expected spectral moments ( i.e. , for ) , the following holds = \frac{1}{n}\sum_{i=1}^{n}\mathbb{e}[\lambda _ { i}^{2s}]\geq \frac{1}{n}\mathbb{e}[\lambda _ { \max } ^{2s}].\]]define ; thus , for any ( and ) , we can apply markov 's inequality as follows}{% ( c_{1}nr+\varepsilon rf\left ( n\right ) ) ^{2s } } \\ & \leq & \frac{n~\mathbb{e}\left [ m_{2s}\right ] } { ( c_{1}nr+\varepsilon rf\left ( n\right ) ) ^{2s}},\end{aligned}\]]for large , one can prove that that grows as , for , we have all sufficiently large .thus, in other words , is upper - bounded by with probability for . in practice , for a large ( but finite ) , we can use as an upper bound of . in fig .4 , we plot the empirical spectral radius of an rgg with and , with expected degrees :1:100 ] . ]the technique introduced in this subsection is also valid for rgg s in higher - dimensions . in general, one can prove that for a -dimensional rgg that the expected spectral moment grows as \rightarrow \beta _ { d}\left ( c_{d}nr^{d}\right ) ^{k}] ] for .each horizontal line represents the value of ] for all . hence , this latter rgg is well - suited to tame initial viral infections . ] for a particular . in this color map ,blue represents a zero value , green and yellow tones represent intermediate values , and red represents values close to one . in this case , we observe an epidemic outbreak . ]$ ] when we increase the recovery rate to ( the rest of parameters are the same as we used for fig . 5 ) .we observe how the probability of infection of every node converges towards zero in this case . 
in this paper , we have studied the spreading of a viral infection in a random geometric graph from a spectral point of view . we have focused our attention on studying the eigenvalue distribution of the adjacency matrix . we have derived , for the first time , explicit expressions for the spectral moments of the adjacency matrix as a function of the density of nodes and the connectivity radius . we have then applied our results to the problem of viral spreading in a network with a low - density infection . using our expressions , we have derived upper bounds for the spectral radius of the adjacency matrix . finally , we have applied this upper bound to design random geometric graphs that are well - suited to tame an initial low - density infection . our numerical results match our predictions with high accuracy .
|
in this paper , we study the dynamics of a viral spreading process in random geometric graphs ( rgg ) . the spreading of the viral process we consider in this paper is closely related with the eigenvalues of the adjacency matrix of the graph . we deduce new explicit expressions for all the moments of the eigenvalue distribution of the adjacency matrix as a function of the spatial density of nodes and the radius of connection . we apply these expressions to study the behavior of the viral infection in an rgg . based on our results , we deduce an analytical condition that can be used to design rgg s in order to tame an initial viral infection . numerical simulations are in accordance with our analytical predictions .
|
a microlensing event occurs when an astronomical object ( lens ) is closely aligned with the line of sight toward a background star ( source ) . microlensing causes a change of the source star brightness , and the resulting light curve is characterized by its smooth variation . if the lensing object is a star and it contains a planet , the resulting light curve can exhibit a discontinuous signature of the planet on the smooth light curve of the primary - induced event , and thus microlensing can be used as a method to search for extrasolar planets . microlensing is sensitive to planets that are generally inaccessible to other methods , in particular cool planets at or beyond the snow line , very low - mass planets , planets orbiting low - mass stars , free - floating planets , and even planets in external galaxies . therefore , when combined with the results from other surveys , microlensing planet searches can yield an accurate and complete census of the frequency and properties of planets . since the first discovery in 2004 , 9 microlensing planets have been reported . characterization of microlensing planets requires modeling of observed light curves . this modeling process requires the inclusion of many parameters because the pattern of lensing light curves and the signals of planets take different forms depending on the combination of these parameters . therefore , studying the dependency of the pattern of light curves on the lensing parameters and the correlations between the parameters helps to understand how the uncertainties of the planetary parameters are propagated from other lensing parameters . this also helps to establish observational strategies for better characterization of planets . however , it appears that the correlations between the lensing parameters are very complex due to the enormous diversity of lensing light curves resulting from the combinations of the numerous parameters . in this paper , we show that despite the apparent complexity of the pattern of light curves of planetary lensing events , the correlations between the lensing parameters can be understood based on the dependency of the characteristic features of lensing light curves on the parameters . we provide the correlations for the two representative cases of planetary events . we also demonstrate the applicability of the correlations to general planetary lensing events by actually obtaining the correlations from modelings of light curves produced by simulations . the microlensing signal of a planet is a brief perturbation to the smooth standard light curve of the primary - induced single - lensing event . therefore , the parameters needed to describe planetary lensing light curves are broadly divided into two categories . the first set of parameters is needed to describe the light curve of a standard single - lens event produced by the star hosting the planet . these parameters include the closest lens - source separation normalized by the einstein radius , u_0 ( impact parameter ) , the time of the closest lens - source approach , t_0 , the time required for the source to transit the einstein radius of the lens , t_{\rm e } ( einstein time scale ) , the flux from the lensed star , f_s , and the blended flux , f_b . with these parameters , the single - lensing light curve is represented by f = f_s a ( u ) + f_b ; \quad a ( u ) = \frac{u^2 + 2}{u \sqrt{u^2 + 4 } } , \label{eq1 } where u = \left [ u_0^2 + \left ( \frac{t - t_0}{t_{\rm e } } \right ) ^2 \right ] ^{1/2 } \label{eq2 } represents the lens - source separation normalized by the einstein radius . these parameters characterize the global shape of lensing light curves , such as the height and width .
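the standard light curve above is straightforward to evaluate ; the sketch below ( python ) implements the magnification and blended flux written out in ( [ eq1 ] ) and ( [ eq2 ] ) , with illustrative parameter values :

```python
import numpy as np

def blended_light_curve(t, t0, tE, u0, fs, fb):
    """blended single-lens light curve f = fs * a(u) + fb, with
    u(t) = sqrt(u0**2 + ((t - t0)/tE)**2) and the standard magnification
    a(u) = (u**2 + 2) / (u * sqrt(u**2 + 4))."""
    u = np.sqrt(u0 ** 2 + ((t - t0) / tE) ** 2)
    a = (u ** 2 + 2.0) / (u * np.sqrt(u ** 2 + 4.0))
    return fs * a + fb

t = np.linspace(-30.0, 30.0, 601)                    # days, with t0 = 0
f_clean = blended_light_curve(t, 0.0, 20.0, 0.10, 1.0, 0.0)
f_blend = blended_light_curve(t, 0.0, 20.0, 0.05, 0.5, 0.5)
print("peak fluxes:", f_clean.max(), f_blend.max())  # comparable peaks
```

the second call illustrates the degeneracy discussed next : a smaller impact parameter combined with a larger blend fraction can mimic the apparent height of an unblended event .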
besides the single - lensing parameters , additional parameters are needed to describe the detailed structure of the perturbations induced by planets . these parameters include the planet / star mass ratio , q , the projected star - planet separation normalized by the einstein radius , s , and the angle between the source trajectory and the star - planet axis , \alpha . since planetary perturbations are produced in most cases by close approaches or crossings of source stars over caustics , an additional parameter of the source size normalized by the einstein radius , \rho_\ast , is needed to describe the deviation of the perturbation affected by the finite - source effect . from the combinations of the lensing parameters , the light curves of planetary events exhibit various patterns . the correlations between the single - lensing parameters can be found by investigating how the global features of lensing light curves , such as the height and width , vary depending on the parameters . the height of the light curve is determined by the combination of the impact parameter and the blended light ratio , f_b / f_s . when affected by blended flux , the impact parameter estimated based on the apparent height of the light curve is related to the blended light ratio as u_{0 , b } = \left\{ 2 \left [ ( 1 - a_{\rm max , b}^{-2 } ) ^{-1/2 } - 1 \right ] \right\} ^{1/2 } ; \quad a_{\rm max , b } = \frac{a_{\rm max } + f_{\rm b}}{1 + f_{\rm b } } , \label{eq3 } where a_{\rm max , b } is the apparent peak magnification . then , with the increase of the blended flux , a_{\rm max , b } decreases , and thus u_{0 , b } increases , implying that as the blended light increases , the peak magnification appears to be lower , and the resulting impact parameter becomes larger . therefore , it is found that _ the blended flux ratio and the impact parameter are correlated _ . blending affects not only the height but also the width of light curves . the event time scale estimated from the width of the blended light curve , t_{{\rm e } , b } , differs from the true value , t_{\rm e } ; the two time scales are related through the threshold impact parameters of the source trajectory for event detections with the presence and absence of blended flux , respectively . because of blending , the threshold magnification is increased and thus the corresponding threshold impact parameter is lowered according to the same relation as in ( [ eq3 ] ) . planetary perturbations occur when the source approaches the planet - induced caustic , whose position along the star - planet axis is determined by the separation s for both major - image ( s > 1 ) and minor - image ( s < 1 ) configurations . in the limiting cases of planetary separations much larger and much smaller than unity , the relations are expressed in the compact forms u_c \simeq s and u_c \simeq 1 / s , respectively . then , from the relation between u_c and s , combined with the constraint of the perturbation duration measured from the light curve , it is found that _ the planetary separation and the mass ratio should be correlated for planetary events with major - image perturbations ( s > 1 ) and anti - correlated for events with minor - image perturbations ( s < 1 ) _ . the main constraint on the normalized source radius is provided by the duration of caustic crossings ( caustic - crossing time scale ) . the caustic - crossing time scale is related to the normalized source radius and the einstein time scale by t_{\rm cc } = \frac{2 \rho_\ast t_{\rm e}}{\sin \psi } , where \psi represents the angle between the caustic and the source trajectory . then , to match the constraint of the caustic - crossing time scale measured from the observed light curve , _ the normalized source radius and the einstein time scale should be anti - correlated _ . in table [ table : one ] , we summarize the correlations between the lensing parameters .
here , the correlations between the pairs of parameters not mentioned in the text are deduced from the correlations with other parameters . for example , we deduce the correlation between the normalized source radius and the impact parameter based on the correlations between these quantities and the other parameter pairs . in the table , we mark ` + ' for the pairs of parameters that are correlated , while the correlation is marked by ` - ' for the pairs of parameters that are anti - correlated . the relation between the normalized source radius and the impact parameter for high - magnification single - lens events was studied in . see also for the relation between the lens - source proper motion , \mu , and the time scale , although \mu is not a standard planetary lensing parameter . in the previous section , we investigated the correlations between the lensing parameters based on analytic arguments about the dependency of the characteristic features of lensing light curves on the parameters . in this section , we demonstrate that the correlations are applicable to general planetary microlensing events by actually obtaining the correlations from modelings of light curves produced by simulations . we produce two light curves of planetary lensing events , where the individual curves represent those of events with major and minor - image perturbations , respectively . the light curves are produced considering the strategy of the current planetary lensing experiments , where events are detected through modest - cadence survey observations and perturbations are densely covered by follow - up observations . we set the photometric uncertainty by assuming that the photometry follows photon statistics with a 1% systematic uncertainty and that the deviations of the data points are gaussian distributed . the other factors affecting the photometry , such as the source brightness and blending , are based on the values of typical galactic bulge events . figure [ fig : two ] shows the light curves of the events produced by the simulation , where the upper and lower panels are those of the events with major and minor - image perturbations , respectively . the inset in each panel shows the geometry of the event , where the straight line with an arrow is the source trajectory and the temperature scale represents magnifications , where a brighter tone implies a higher magnification . we search for the solution of the lensing parameters by conducting modeling of the simulated light curves .
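one plausible reading of this noise model ( photon statistics plus a 1% fractional systematic term , added in quadrature ) is sketched below ; model_flux stands for any simulated flux curve , e.g. the blended_light_curve sketch shown earlier :

```python
import numpy as np

def simulate_photometry(model_flux, systematic=0.01, seed=0):
    """gaussian-distributed measurements with photon noise and a fractional
    systematic term added in quadrature (an assumption of this sketch)."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(model_flux + (systematic * model_flux) ** 2)
    return model_flux + rng.normal(0.0, sigma), sigma

t = np.linspace(-30.0, 30.0, 601)
flux = 1.0e4 * blended_light_curve(t, 0.0, 20.0, 0.10, 1.0, 0.0)  # counts
data, err = simulate_photometry(flux)
```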
in modeling planetary lensing light curves , it is difficult to conduct a brute - force search throughout all of parameter space due to the large number of parameters . in addition , it is difficult to conduct a simple downhill approach due to the complexity of the \chi^2 surface . we , therefore , use a hybrid approach where grid searches are conducted over the space of the parameters s , q , and \alpha , and the remaining parameters are allowed to vary so that the model light curve results in a minimum \chi^2 at each grid point . we use a markov chain monte carlo method for the \chi^2 minimization . once the solutions of the individual grid points are determined , the best - fit model is obtained by comparing the \chi^2 minima of the individual grid points . the uncertainties of the best - fit parameters are estimated based on the chains of solutions produced by the modeling . for the computations of lensing magnifications including the finite - source effect , we use the ray - shooting method , where a large number of rays are shot from the image plane , bent according to the lens equation , and arrive on the source plane . then , the magnification corresponding to the location of a source star is computed by comparing the number density of rays on the source star with the density on the image plane . we minimize the computation time by restricting the region of ray shooting around the images and thus minimizing the number of rays needed for the computations of finite - source magnifications . we also apply a semi - analytic hexadecapole approximation for magnification computations in the region where the effect of the source size is not important .
in figures [ fig : three ] and [ fig : four ] , we present the results of the modeling for the planetary events with major and minor - image perturbations , respectively . the individual panels of each figure show the contour plots of \chi^2 in the spaces of the combinations of the lensing parameters . to directly compare the correlations , we arrange the panels according to the same order of the parameters presented in table [ table : one ] . from the comparison , it is found that the correlations found from modeling coincide with those predicted based on analytic arguments . this implies that the correlations presented in table [ table : one ] are applicable to general planetary microlensing events . we investigated the correlations between the parameters of planetary lensing events . from this , we found that the correlations could be understood by studying how the lensing parameters affect the characteristics of lensing light curves , such as the height and width of the light curve , the caustic - crossing time scale , and the location and duration of perturbations . based on analytic arguments about the dependency of the features of lensing light curves on the parameters , we obtained the correlations . we also demonstrated the applicability of the correlations to general planetary lensing events by actually obtaining the correlations from modelings of light curves produced by simulations . understanding the correlations between lensing parameters can help to set up observational strategies for better constraints on the planetary parameters . for example , the correlations between the blending and planetary parameters imply that blending affects determinations of the planetary parameters , and thus de - blending or precise determination of the blending parameter is important to better constrain the planetary parameters . several methods can be applied to resolve the blending problem . photometrically , it is known that good coverage of the peak region and wings of light curves by follow - up observations helps to constrain the blending parameter . high - resolution imaging from space observations or ground - based ao imaging can help to resolve source stars from blended stars . astrometric measurement of the blended image centroid can also help to identify the lensed source among blended stars . precise and dense coverage of the perturbation is another way to constrain the planet parameters . this is because not only the location of perturbations but also their shape provides constraints on planetary parameters . then , even if the location of the perturbation is uncertain due to severe blending , it is still possible to constrain the planetary parameters from the shape of the perturbation .
|
characterization of microlensing planets requires modeling of observed light curves , a process that involves many parameters . studying the dependency of the pattern of light curves on the lensing parameters and the correlations between the parameters is important for understanding how the uncertainties of the planetary parameters are propagated from other parameters . in this paper , we show that despite the apparent complexity of the pattern of light curves of planetary lensing events , the correlations between the lensing parameters can be understood by studying how the parameters affect the characteristics of lensing light curves , such as the height and width , the caustic - crossing time scale , and the location and duration of planetary perturbations . based on analytic arguments about the dependency of light curve features on the parameters , we obtain the correlations for the two representative cases of planetary events . we also demonstrate the applicability of the correlations to general planetary events by actually obtaining the correlations from modelings of light curves produced by simulations .
|
the integration of low - power wireless networking technologies such as ieee 802.15.4-enabled transceivers with inexpensive camera hardware has enabled the development of the so - called _ visual sensor networks _ ( vsns ) . vsns can be thought of as networks of wireless devices capable of sensing multimedia content , such as still images and video , audio , depth maps , etc . via the recent provisioning of an all - ipv6 network layer under 6lowpan and the emergence of collision - free low - power medium access control ( mac ) protocols , such as the time slotted channel hopping ( tsch ) of ieee 802.15.4e-2012 , vsns are expected to play a major role in the internet - of - things ( iot ) paradigm . in comparison to traditional wireless sensor networks , vsns are uniquely challenging because of their heavy computational and bandwidth requirements that stretch hardware and networking infrastructures to their limits . hence , an increasing number of vsn solutions were proposed recently , focusing on : new transmission protocols allowing for high - bandwidth collision - free communications , in - network processing techniques , and optimized multimedia processing . also , several hardware solutions have been proposed , with the aim of finding a vsn platform that could be used for a broad range of multimedia tasks . most of these proposed hardware solutions can be abstracted as two tightly - coupled subsystems , shown in figure [ fig : system_model](b ) : a multimedia processor board and a low - power radio subsystem , interconnected via a push model . within each node of the vsn , the multimedia subsystem is responsible for acquiring images , processing them and pushing the processed visual data to the radio subsystem , which transmits it to a remote location . for example , in a traditional surveillance application , the multimedia subsystem would compress or process ( e.g. , extract visual features from ) the acquired images and push the resulting bitstream to the radio subsystem for transmission to a central controller , where the data would be analyzed or stored . similar to traditional wireless sensor networks , vsn nodes are usually battery operated . hence , energy consumption plays a crucial role in the design of a vsn , especially for those applications where a vsn is required to operate for days or even weeks without external power supply . in the last few years , several proposals strive for lifetime maximization in vsns . specifically , solutions are available for energy - aware protocols , cross - layer optimization , application tradeoffs and deployment strategies . while existing work addresses transmission , scheduling and protocol design aiming for energy efficiency , it does not consider the impact of the spatio temporal coverage on the energy consumption of vsns . this is precisely the focus of this paper . we consider wireless visual sensor networks comprising a cluster - tree topology , such as the one illustrated in figure [ fig : system_model](a ) , where each camera node processes and transmits visual data to the nodes of the higher tier , or to the low - power border router ( lpbr ) that can relay the streams to any ip address over the internet for analysis and processing . moreover , we focus on the case of a _ uniformly - formed _ vsn , i.e.
a network of identical sensor nodes that , within each activation interval , are : _ ( i ) _ producing bitstream sizes with the same statistical characterization and _ ( ii ) _ connected to the base station via a balanced _ cluster - tree topology _ , represented by a symmetric and acyclic graph with balanced bandwidth allocation per link . each node also relays streams stemming from other nodes of lower tier(s ) . within each node , the multimedia and radio subsystems work in parallel [ figure [ fig : system_model](b ) ] : while the multimedia system acquires and processes data corresponding to the current video frame , the radio subsystem transmits ( or relays ) the multimedia stream stemming from the processing of previous video frame(s ) . let c kilobit - per - second ( kbps ) be the average bandwidth consumption rate at each node ( in transmit or receive mode ) , with c t indicating the bits consumed by each receiver / relay node over the vsn active interval of t seconds . for example , for an 802.15.4-compliant vsn and t = 1 second , the average consumption rate would be 250 kbps at the physical layer . the mac layer of the network operates under a collision - free time - division ( or time - frequency division ) multiple access , so that each tier in the network can be configured in a way that simultaneous transmissions in the same channel are avoided . the number of frames captured by each camera during the operational time interval of the vsn , i.e. , each node 's temporal coverage , controls the frequency of the push operations . at the same time , the multimedia processing task itself ( e.g. , image / video compression or extraction of visual features ) controls the size of the bitstream pushed to the radio subsystem within each frame 's duration . on the other hand , the number of sensors in the same tier of the cluster - tree topology , i.e. , the vsn 's spatial coverage , and the number of nodes whose bitstreams must be relayed by each node ( if any ) control the bandwidth available to each sensor ( i.e. , its average transmission rate ) in each tier under a collision - free mac protocol . therefore , there is a fundamental tradeoff between the spatial and temporal coverage in a network : a large number of frames leads to a high bandwidth requirement per transmitter , which in turn decreases the number of sensors that can be accommodated within each tier of the vsn . conversely , dense spatial coverage via the use of a large number of visual sensors per tier decreases the available bandwidth per sensor , which reduces the number of frames per sensor .
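the sketch below makes this tradeoff concrete for a single tier under a collision - free mac ; every number is an illustrative assumption , not a measured value :

```python
def max_frames_per_node(link_rate_kbps, active_secs, nodes_in_tier,
                        relayed_streams, kbits_per_frame):
    """frames each camera can push while sharing one collision-free tier:
    the tier's bit budget is split among the nodes and the streams they relay."""
    tier_budget_kbits = link_rate_kbps * active_secs
    per_node_kbits = tier_budget_kbits / (nodes_in_tier * (1 + relayed_streams))
    return int(per_node_kbits // kbits_per_frame)

for nodes in (2, 4, 8, 16):
    frames = max_frames_per_node(link_rate_kbps=250, active_secs=10,
                                 nodes_in_tier=nodes, relayed_streams=1,
                                 kbits_per_frame=80)
    print(f"{nodes:2d} nodes per tier -> {frames} frames per node")
```

doubling the spatial coverage halves the per - node budget , which is exactly the inverse relation described above .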
* the extrema of the derived energy consumption function are then analytically derived in order to provide closed - form expressions for the minimum energy consumption of each case under consideration . * the analytic results are validated within two applications : video coding and transmission based on differential motion - jpeg , and visual feature extraction and transmission . while our results are directly applicable to uniformly - formed vsns , we also indicate how they can be extended to non - uniformly formed vsns with varying statistical characterizations for the bitstream sizes of different sensors and unbalanced bandwidth allocation for the various links of each vsn tier during each activation interval . the rest of this paper is organized as follows : section [ sec : system model ] presents the proposed system model , while section [ sec : min_energy_analysis ] presents the theoretical results ; section [ sec : evaluation - of - energy ] presents real - world experiments that validate the proposed framework under controlled data production from each sensor , while section [ sec : applications ] presents results showcasing the accuracy of the proposed model under real vsn data ; finally , section [ sec : conclusions ] concludes the paper . in the following sections we introduce the components of the proposed system model . the corresponding nomenclature is summarized in table [ tab : nomenclature - table.1 ] . this sets the context for the derivation of the expected energy consumption of each node of the uniformly - formed visual sensor network as a function of the utilized spatio temporal coverage settings . we consider that the visual sensor network is established under the following two application constraints : * _ spatial coverage bounds _ ; the number of deployed nodes at each tier of the cluster - tree topology , , is upper- and lower - bounded , i.e. * _ temporal coverage lower bound _ ; the total frame acquisitions , , within a pre - defined time interval , , is lower - bounded , i.e.
the bounds of the spatio temporal coverage stem from application specifics , such as : the cost of installing and maintaining visual sensors , the minimum and maximum spatial coverage required for the area to be monitored , and the minimum number of frames that allows for visual data gathering and analysis with sufficient temporal resolution within seconds . since the multimedia subsystem of each visual sensor produces varying amounts of data depending on the monitored events and the specifics of the visual analysis and processing under consideration , the bitstream size produced by each sensor node in such multimedia applications is a non - deterministic quantity . therefore , the bitstream size produced when each visual node processes frames within an activation interval is a random variable ( rv ) , characterized by its probability density function ( pdf ) . since the underlying processes generating this bitstream may not be stationary and/or this data may include multi - rate channel codes ( or retransmissions ) to alleviate channel impairments due to propagation and other environmental effects of transmission , we assume marginal statistics for the bitstream size , which are derived starting from a doubly - stochastic model for the multimedia processing . specifically , such marginal statistics can be obtained by : _ ( i ) _ fitting pdfs to sets of past measurements of bitstream sizes transmitted by each sensor , with the statistical moments ( parameters ) of such distributions characterized by another pdf ; _ ( ii ) _ integrating over the parameter space to derive the final form of the marginal pdf . for example , if the bitstream size is modeled as a half - gaussian distribution with variance parameter that is itself exponentially distributed , by integrating over the parameter space , the marginal statistics of the data rate become laplacian . the disadvantage of using marginal statistics for the bitstream size of each node during each activation interval is the removal of the stochastic dependencies on its transient physical properties . however , in this work we are interested in the _ expected _ energy consumption over a time interval and not in the instantaneous _ variations _ of energy consumption . thus , a mean - based analysis using the marginal statistics of the produced bitstream sizes is suitable for this purpose .
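the gaussian - scale - mixture result quoted above is easy to verify numerically . the following is a minimal monte - carlo sketch ; the rate of the exponential variance prior and the sample count are illustrative assumptions , not values taken from the paper :

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0                                  # assumed rate of the exponential variance prior
var = rng.exponential(1.0 / lam, 200_000)  # variance parameter ~ exponential
x = rng.normal(0.0, np.sqrt(var))          # gaussian draw given each sampled variance
# (taking |x| would give the half-gaussian / one-sided version of the same statement)

b = np.sqrt(np.var(x) / 2.0)               # laplace scale matched by moments (var = 2 b^2)
hist, edges = np.histogram(x, bins=np.linspace(-3, 3, 61), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
laplace_pdf = np.exp(-np.abs(centers) / b) / (2.0 * b)
print(np.max(np.abs(hist - laplace_pdf)))  # small residual -> marginal is laplacian
```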
following the push model of the camera node subsystem illustrated in figure [ fig : system_model](b ) , each vsn node performs the following operations : 1 . _ acquisition , processing and transmission : _ a new frame is acquired by means of a low - power camera sensor and processed with a cpu - intensive algorithm , realized by the multimedia subsystem . each frame processing ( possibly including coding to mitigate channel impairments ) produces , on average , bits for transmission . these bits are pushed to the radio subsystem , which in turn transmits them to the higher tier or , eventually , to the lpbr . let joule ( j ) be the energy expenditure for acquiring a frame , be the average energy in joule ( j ) required for processing and producing one bit of information to be transmitted and the average energy required to transmit it to the lpbr or a relay node . different multimedia applications may incur different levels of energy consumption for the production of each bit to be transmitted , while the average transmission energy consumption per bit depends only on the specific radio chip used by each wireless sensor node . hence , the average energy consumed for acquisition , processing and transmission within the active interval of seconds is driven by the statistical expectation of the data volume corresponding to the processed frames . 2 . _ buffering and idling : _ as shown in figure [ fig : system_model ] , each tier of the sensor network consists of sensor nodes that communicate with the lpbr ( or the relay nodes of the higher tier ) . the set of all receivers ( sink nodes ) of each tier has a predefined consumption rate of kbps . under balanced coupling , each sensor node can transmit a fixed budget of bits during the analysis time interval of seconds . we thus identify two cases : if the amount of data generated by the processing phase and relayed from nodes of the lower tiers is less than this budget , the sensor node enters an " idle " state , where j / bit is consumed for beaconing and other synchronization operations . the energy spent during the idle mode of the analysis time interval is paid on the unused part of the budget , with the rv modeling the data rate of a node processing frames and relaying data from other independent and identical nodes . conversely , if the data generated is greater than the budget , then the sensor node has to buffer the remaining data in a high - power , typically off - chip , memory . letting j be the energy cost of storing one bit of information , the energy spent for buffering during the active time interval is paid on the excess over the budget . this case introduces delay , as buffered data will be scheduled for later transmission . thus , the proposed model is suitable for delay - tolerant multimedia applications . 3 . _ receiving / buffering and relaying data : _ under a multi - tier cluster - tree topology , each node receives additional data streams from nodes positioned at the lower tier(s ) and relays them along with its own data streams ( see figure [ fig : system_model ] for an example ) . over the analysis interval of seconds , the energy expenditure corresponding to this process is driven by the statistical expectation of the number of bits received from all nodes of the lower tier(s ) during the active time interval . in practice , this energy expenditure is dominated by the receiver power requirements , and may also include fixed , rate - independent costs of the particular multimedia or transceiver hardware ( e.g. , visual sensor , transceiver or buffer startup and shutdown costs ) .
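to make the three operations concrete , the sketch below numerically evaluates the expected idle and buffering terms for an arbitrary marginal pdf of the pushed bits , and grid - searches the spatio temporal settings . every numeric value ( costs , rates , bounds ) is an illustrative assumption , and the per - frame cost term is a condensed stand - in for the closed forms of the paper :

```python
import numpy as np
from scipy import integrate

# illustrative constants (all assumed, not the paper's measurements)
a, e_pt = 15e-3, 5.7e-7              # acquisition J/frame; processing+transmission J/bit
p_idle, b_buf = 1.6e-7, 3.2e-7       # idling and buffering penalties, J/bit
s, r, d, rate = 1.0, 5e3, 0, 144e3   # interval (s), mean bits/frame, relays, sink bit/s

def expected_energy(n, k, pdf, upper):
    """expected per-node energy: acquisition/processing/transmission plus the
    idle and buffering expectations over the marginal pdf of the pushed bits."""
    budget = rate * s / n                        # balanced per-node share, in bits
    e_apt = k * a + k * r * (d + 1) * e_pt
    e_idle = p_idle * integrate.quad(lambda x: (budget - x) * pdf(x),
                                     0.0, min(budget, upper))[0]
    e_buf = 0.0
    if budget < upper:                           # excess bits pay the buffer penalty
        e_buf = b_buf * integrate.quad(lambda x: (x - budget) * pdf(x),
                                       budget, upper)[0]
    return e_apt + e_idle + e_buf

def uniform_case(n, k):                          # definition 1: uniform on [0, 2m]
    m = k * r * (d + 1)
    return expected_energy(n, k, lambda x: 1.0 / (2 * m), 2 * m)

grid = [(n, k) for n in range(2, 33) for k in range(2, 61)]
print(min(grid, key=lambda nk: uniform_case(*nk)))  # k pins to its lower bound here
```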
given that , for ieee 802.15.4-compliant transceivers , the transceiver power under receive mode is virtually the same regardless of whether the node is actually receiving data or not , it is irrelevant to the receiver power whether the transmitting node(s ) used their entire transmission intervals or not . table [ tab : nomenclature - table.1 ] summarizes the notation :

symbol & unit & description
s & seconds & active time interval
n , n_min , n_max & & number of transmitting sensor nodes _ at the same tier _ of the cluster - tree topology , and minimum & maximum nodes allowed by the application
k , k_min & & number of frames captured and processed within s seconds , and minimum allowed by the application
r & bit & average number of bits produced after processing one frame
d & & number of _ additional _ nodes whose traffic is relayed by each node at a given tier of the cluster - tree topology
a & j & energy to acquire one frame and initialize the multimedia processing
 & j / bit & energy for processing one bit
 & j / bit & energy for transmitting one bit
b & j / bit & penalty energy for storing one bit during receiver overloading
p & j / bit & energy during idle periods for the time interval corresponding to one bit transmission
 & j / bit & energy for receiving and temporarily buffering one bit under the relay case
 & bit & data volume ( bits ) of a relay node ( or base station ) received within s seconds
 & bit & rv modeling the cumulative bits transmitted by each node , including the bits relayed from nodes of lower tiers , after each node processed k video frames

_ definition 1 _ ( the bitstream size is uniform ) : _ we define the marginal pdf as the uniform distribution when it is constant on $[0,\,2kr(d+1)]$ and zero elsewhere , with mean $kr(d+1)$ corresponding to the mean value of the data transmitted by a node that produces k frames of r bits each on average ( and relays information from d other nodes ) . _

* corollary 1 . * _ if the bitstream size is uniform , there exists no global solution to the energy minimization in its unconstrained form . _

using definition 1 in the energy model leads to :

e_{\mathrm{u}}(n,k)=\big[\,\cdots\,\big]-\frac{ps}{n}+\frac{s^{2}(b+p)}{4n^{2}kr(d+1)} , \label{eq : uniform_energy}

where the bracketed term collects the per - frame acquisition , processing and transmission costs . to obtain the minimum energy under this expression , one can search for critical points . by definition , a critical point of a multidimensional function is the point where the gradient of the function is equal to zero . imposing that the derivatives with respect to n and k are both equal to zero leads to :

\left\{\begin{array}{l}\cdots\\[2pt]\cdots-\dfrac{s^{2}(b+p)}{4n^{2}k^{2}r(d+1)}=0\end{array}\right. \label{eq : gradient_zero}

solving the first equation and substituting the solution in the second leads to a condition that forces the frame acquisition cost to be non - positive . however , this is not feasible since a , the energy cost to acquire one frame , is strictly positive . hence , under the physical constraints of the problem , _ there is no single ( global ) solution to the minimization in its unconstrained form _ , i.e. when one ignores the constraints . we now extend the analysis towards other pdfs for the data transmission , which are frequently encountered in practice .

_ definition 2 _ ( the bitstream size is pareto ) : _ we consider the marginal pdf as the pareto distribution with scale v and shape $\alpha$ when : $f(x)=\alpha v^{\alpha}x^{-(\alpha+1)}$ for $x\geq v$ . setting $v=\frac{\alpha-1}{\alpha}kr$ leads to a mean of $kr$ [ and $v=\frac{\alpha-1}{\alpha}kr(d+1)$ to a mean of $kr(d+1)$ ] . _ this distribution has been widely used in data gathering problems in science and engineering when the modeled data has non - negativity constraints . some recent examples include the statistical characterization of motion vector data sizes in wyner - ziv video coding algorithms suitable for vsns , or the statistical characterization of sample amplitudes captured by an image sensor .
_ definition 3 _ ( the bitstream size is exponential ) : _ we consider the marginal pdf as the exponential distribution with mean $kr(d+1)$ , i.e. $f(x)=\frac{1}{kr(d+1)}\exp\left(-\frac{x}{kr(d+1)}\right)$ for $x\geq0$ . _

_ definition 4 _ ( the bitstream size is half - gaussian ) : _ we consider the marginal pdf as the half - gaussian distribution when : $f(x)=\frac{2}{\sigma\sqrt{2\pi}}\exp\left(-\frac{x^{2}}{2\sigma^{2}}\right)$ for $x\geq0$ . setting $\sigma=\sqrt{\frac{\pi}{2}}\,kr(d+1)$ leads to a mean of $kr(d+1)$ , corresponding to the mean value of the data transmitted by a node that produces k frames of r bits each on average ( and relays information from d other nodes ) . _

* corollary 2 . * _ if the bitstream size follows the pareto , exponential or half - gaussian distribution , there exists no global solution to the energy minimization in its unconstrained form . _

under definition 2 , the energy expression becomes :

e_{\mathrm{p}}(n,k)=\big[\,\cdots\,\big]+\frac{bs}{n}+\left(b+p\right)\left(\frac{v^{\alpha}n^{\alpha-1}}{s^{\alpha-1}\left(\alpha-1\right)}-\frac{\alpha v}{\alpha-1}\right) . \label{eq : pareto_energy}

in addition , under definition 3 , we obtain :

e_{\mathrm{e}}(n,k)=\big[\,\cdots\,\big]+\frac{bs}{n}+\left(b+p\right)\left[kr\left(d+1\right)\left(\exp\left(-\frac{s}{nkr(d+1)}\right)-1\right)\right] . \label{eq : exponential_energy}

finally , under definition 4 , we obtain :

e_{\mathrm{h}}(n,k)=\big[\,\cdots\,\big]-\frac{ps}{n}+\left(b+p\right)\left[kr\left(d+1\right)\left(\exp\left(-\frac{s^{2}}{\pi k^{2}r^{2}n^{2}(d+1)^{2}}\right)-1\right)+\frac{s}{n}\,\text{erf}\left(\frac{s}{\sqrt{\pi}\,krn(d+1)}\right)\right] . \label{eq : halfgaussian_energy}

to obtain the minimum energy under these expressions , one can search for their critical points . similarly as for corollary 1 , it is straightforward to show that imposing that the derivatives with respect to n and k are both equal to zero leads to solutions that force the frame acquisition cost to be non - positive ( detailed derivations omitted ) , which is not physically feasible since a is the energy cost to acquire one frame . it follows from corollaries 1 and 2 that , under the physical constraints of the problem , _ there is no single ( global ) solution to the minimization in its unconstrained form _ , i.e. , when one ignores the constraints . however , we may consider each dimension individually ( i.e. , perform univariate minimization along the n or k dimension ) in order to find a local or global minimum for that particular dimension and then choose for the other dimension the value that minimizes the energy under the spatio temporal constraints . subsequently , we can identify if the derived minima are unique under the imposed constraints and whether the entire region of support of the energy function under these constraints has been covered by the derived solutions . following this approach , the main results are presented in the following subsection . the detailed derivations are contained in the appendices .

* proposition 1 . * _ when the data transmitted by each vsn node follows the uniform , pareto or exponential distributions of definitions 1 - 3 , the sets of solutions giving the minimum energy consumption under the spatio temporal constraints are expressed via the distribution - specific constants ( with the subscript indicating each of the three distributions ) defined by : _

\gamma_{\mathrm{u}}=\sqrt{\frac{s^{2}(b+p)}{4r(d+1)\left[a+r\left[\,\cdots\,\right]\right]}} , \label{gamma_u}

\gamma_{\mathrm{p}}=\left(\frac{\cdots-a}{r\left(d+1\right)\left(b+p\right)}\right)^{\frac{1}{\alpha-1}} , \label{eq : gamma_p}

_ and _

\gamma_{\mathrm{e}}=-\frac{s}{r(d+1)\left[w\left(\frac{\cdots}{r(d+1)(b+p)}\right)+1\right]} , \label{eq : gamma_e}

_ with $w(\cdot)$ the lambert product - log function . for the particular case of the exponential pdf , the result holds under the condition $b>p$ , i.e. , the penalty energy to buffer bits is higher than the beaconing energy . _ see appendix [ sec : appendix - i ] . * proposition 2 . *
_ when the data transmitted by each vsn node follows the half - gaussian distribution of definition 4 , the set of solutions giving the minimum energy consumption under the spatio temporal constraints is expressed via a corresponding distribution - specific constant . _ the proof follows the same steps as for the previous cases and it is summarized in appendix [ sub : appendix - half - gaussian ] . the key observation from propositions 1 and 2 is that , regardless of the distribution used for modeling the data production process , the solutions giving the minimum energy consumption attain the same mathematical form . specifically , when the initial constraint on the minimum number of frames captured and processed within s seconds , k_min , is higher than a threshold value , the optimal solution is the one where the nodes process k_min frames each ( i.e. , the minimum setting possible for nodes and frames - per - node ) . if k_min is smaller than or equal to this threshold , therefore facilitating more nodes within each tier of the vsn , the optimal number of nodes , derived by propositions 1 and 2 , increases . however , when this number reaches the constraint on the maximum number of nodes , n_max , then the optimal solution for each node is to use a frame setting that is higher than k_min . the latter is true for proposition 1 ; however , for proposition 2 ( half - gaussian pdf ) , the corresponding optimal frame setting was found to be imaginary regardless of the specific system parameters . therefore , the optimal solution for this case always uses the minimum frame setting . in terms of relevance to practical applications , the results of this section can be used to assess the impact of the spatio temporal constraints and the data production and transmission process ( as characterized by its marginal pdf ) on the energy consumption of vsns , under a variety of energy consumption rates for the radio and multimedia subsystems . for example , under given energy availability from the node battery and predetermined system activation time , this allows for the determination of appropriate hardware to be used ( i.e. , the energy cost parameters ) in order to meet the spatio temporal constraints of the application . moreover , via the analysis of the previous four subsections , one can optimize the system under the assumption of a certain marginal pdf characterizing the data production and transmission process of each node . conversely , under particular technology ( i.e. , given energy cost parameters ) and given configuration for the vsn in terms of number of nodes and frames to capture within the activation time interval , one can determine the required energy in order to achieve the designated visual data gathering task . furthermore , under the proposed framework , one can determine the data production and transmission ( marginal ) pdfs that meet predetermined energy supply and spatio temporal constraints . although we do not claim that the utilized pdfs cover all possible scenarios that can be encountered in practice , they comprise an ensemble of distributions that includes several important aspects , i.e. , : _ ( i ) _ the maximum - entropy pdf ( uniform ) ; _ ( ii ) _ well - known distributions characterizing the transmission rate of real - world systems ( exponential and half - gaussian ) , and _ ( iii ) _ a parameterized distribution ( pareto ) that corresponds to the continuous equivalent of zipf s law for generalized frequency of occurrences of physical phenomena ; moreover , as the shape parameter grows , the pareto distribution corresponds to near fixed - rate transmission .
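since the exponential - case constant of proposition 1 involves the lambert product - log function , it is worth noting how it can be evaluated in practice . the sketch below uses scipy , and then applies the threshold logic summarized above ; the argument value , the gamma constant and the bounds are placeholders , and the exact mapping from the gamma constant to ( n , k ) is an assumption reconstructed from the text rather than the paper s closed forms :

```python
import numpy as np
from scipy.special import lambertw

# the text requires w(x) + 1 < 0 for a positive gamma_e; for x in (-1/e, 0) that is
# the lower real branch, which scipy exposes as k = -1
x = -0.2                                  # placeholder argument in (-1/e, 0)
w = lambertw(x, k=-1).real                # <= -1, so w + 1 < 0 as required
assert np.isclose(w * np.exp(w), x)       # sanity check of the branch choice

def optimal_settings(gamma, n_min, n_max, k_min):
    """assumed decision rule: gamma plays the role of the optimal product n * k."""
    n_star = gamma / k_min
    if n_star <= n_min:                   # k_min above the threshold: minimum setting
        return n_min, k_min
    if n_star < n_max:                    # interior spatial optimum at k = k_min
        return n_star, k_min
    return n_max, gamma / n_max           # spatial bound active: raise the frames

print(optimal_settings(gamma=60.0, n_min=2, n_max=32, k_min=4))
```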
beyond the cases considered in this paper , if another distribution provides a better fit to a particular deployment , the steps of propositions 1 and 2 can be used to provide a characterization of the available solution space . moreover , given that the results of propositions 1 and 2 are applicable per node , if the considered scenario involves a non uniformly - formed vsn , the same analysis applies for each node of each cluster - tree tier , albeit with the use of : 1 . a different pdf per sensor , leading to a mixture of pdfs for the relayed traffic , with the resulting distribution being the convolution of the intermediate distributions ; 2 . _ unbalanced coupling _ , i.e. , each node transmitting its own share of bits during the analysis time interval of s seconds , with that share allocated by the utilized protocol during the cluster formation ; 3 . the node of each cluster relaying traffic from a varying number of nodes . given that a numerical package ( e.g. , mathematica or matlab symbolic ) can be used for the calculation of : _ ( i ) _ the convolution of distributions ( corresponding to the mixture of pdfs of the node of each tier ) and _ ( ii ) _ the resulting expectation terms , we do not expand on these cases further . overall , our proposed energy consumption model and the associated analytic results can be used in many ways for early - stage exploration of system , network , and data production parameters in vsns that match the design specifications of classes of application domains . such application examples are given in section [ sec : applications ] . to validate the proposed analytic model and propositions 1 and 2 for the settings leading to the minimum energy consumption , we performed a series of experiments based on a visual sensor network matching the system model of section [ sec : system model ] and an energy - measurement testbed . specifically , each visual node of the sensor network is composed of a beaglebone linux computer ( multimedia subsystem ) attached to a telosb sensor node for low - power wireless communications ( radio subsystem ) . each beaglebone is equipped with a radiumboard cameracape to provide for the video frame acquisition . for energy - efficient processing , we downsampled all input images to qvga ( 320x240 ) resolution .
in order to measure the energy consumption of each vsn node , we captured the real - time current consumption at two high - tolerance 1 ohm resistors , the first of which was placed in series with the multimedia and the second in series with the radio subsystem of each visual node . a tektronix mdo4104 - 6 oscilloscope was used for the two current consumption captures of each experiment . further , our deployment involved : _ ( i ) _ a telosb node serving as the lpbr and collecting all bitstreams and 2 to 32 visual nodes positioned within four adjacent rooms and the corridor of the same floor of the department of electrical and electronic engineering at university college london [ following the layout of figure [ fig : system_model](a ) ] ; _ ( ii ) _ a uniformly - formed hierarchical cluster - tree network topology with a range of nodes per network tier and the recently - proposed ( and available as open source ) tfdma protocol for contention - free mac - layer coordination ; _ ( iii ) _ no wifi or other ieee802.15.4 networks concurrently operating in the utilized channels of the 2.4 ghz band . even if ieee802.11 or other ieee802.15.4 networks coexist with the proposed deployment , well - known channel hopping schemes like tsch or interference - avoidance schemes can be used at the mac layer to mitigate such external interference while maintaining a balanced cluster - tree topology in the wsn . tfdma ensures collision - free multichannel communications with guaranteed timeslots via a fair time - division multiple access ( tdma ) schedule constructed within each of the utilized channels of the ieee802.15.4 physical layer via beacon packet exchanges . protocols such as tfdma , the tsch mode of ieee 802.15.4e-2012 and other balanced cluster - tree based mac - layer protocols allow for collision - free , uniformly - formed , cluster - tree based vsns to be formed via the combination of fair tdma scheduling and channel allocation or channel hopping . experiments have shown that such protocols can scale to hundreds or even thousands of nodes . therefore , our evaluation is pertinent to such scenarios that may be deployed in the next few years within the iot paradigm . as far as the radio subsystem is concerned , each telosb runs the low - power contiki 2.6 operating system . given that the utilized tfdma protocol ensures collision - free transmissions from each node , we enabled the low - power nullmac and nullrdc options of the contiki os that disable the default mac queuing and backoff mechanisms . this led to a data consumption rate at the application layer of 144 kbps . given that varying the transmission power level has minimal effect on the vsn node energy consumption ( since most of the transceiver current consumption is due to reception ) and may compromise error - free data reception , we utilized the maximum transmit power , which led to reliable data transmission under the collision - free timeslot allocation of tfdma . under these operational settings , the average transmission cost per bit of information , as well as the cost for beaconing and buffering ( all in j / bit ) , were established experimentally by repeating several dedicated energy - measurement tests with the telosb subsystem ; their values are shown at the top half of table [ tab : telosb settings ] and we have experimentally verified that they remained constant over several activation intervals .
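converting such current - sense captures into energy figures is straightforward . the following is a minimal sketch , assuming a 3 v supply , the 1 ohm sense resistor of the setup , and a synthetic trace standing in for the oscilloscope capture :

```python
import numpy as np

def trace_energy(v_sense, dt, v_supply=3.0, r_sense=1.0):
    """energy (J) of one capture: with a 1 ohm series resistor the sensed voltage
    numerically equals the current, so power is current times the supply voltage."""
    current = v_sense / r_sense          # amps, ohm's law on the sense resistor
    power = current * v_supply           # watts drawn by the subsystem
    return np.trapz(power, dx=dt)        # joules over the capture window

v = np.abs(np.random.default_rng(3).normal(0.02, 0.005, 10_000))  # fake capture (V)
print(trace_energy(v, dt=1e-5))          # dt is the assumed oscilloscope sample step
```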
since the energy consumption of the multimedia subsystem is application - dependent , we focused on two different applications , namely : _ ( i ) _ encoding and transmission of jpeg video frames and _ ( ii ) _ extraction and transmission of local features for visual analysis . these two scenarios represent a wide range of practical vsn - related deployments proposed recently . 1 . _ differential motion jpeg ( mjpeg ) encoding : _ we used a hybrid dct - dpcm encoder . in this system , the first frame of the video sequence is jpeg encoded and transmitted . for the subsequent frames , only the difference between two adjacent frames is encoded . the encoding process follows the standard jpeg baseline , i.e. , quantization of the discrete cosine transform ( dct ) coefficients followed by run length coding ( rle ) and huffman coding . 2 . _ visual features extraction : _ several visual analysis tasks can be performed by disregarding the pixel representation of an image , and relying only on a much more compact representation based on local visual features . in a nutshell , salient keypoints of an image are identified by means of a detector , and a descriptor is computed from the pixel values belonging to the image patch around each keypoint . here , we focus on corner - like local features produced by processing each frame of the input video sequence with the fast corner detector , which is optimized for fast extraction of visual features on low - power devices . each detected keypoint is then described by means of a binary descriptor : we used the brief algorithm , which outputs descriptors of 64 bytes each ( a minimal sketch of this pipeline is given at the end of this subsection ) . dedicated energy - measurement tests were performed with the beaglebone multimedia subsystem by varying the encoding quality factor for differential mjpeg , while for features extraction , we varied the fast detection threshold . this allowed us to trace curves in the energy - rate plane and to obtain the average energy cost per bit , as well as the average initialization cost per frame for both the application scenarios , which are reported at the bottom half of table [ tab : telosb settings ] . the cost of acquiring one frame was derived from the specifications of the aptina mt9m114 image sensor mounted on the cameracape and is reported in table [ tab : telosb settings ] . the overall acquisition cost for one frame is established accordingly for the jpeg case and for the visual - feature extraction case . table [ tab : telosb settings ] :

subsystem & quantity & unit & value
radio & data consumption rate & kbps & 144
radio & transmission cost & j / bit &
radio & receiving cost & j / bit &
radio & beaconing / idling cost & j / bit &
radio & buffering cost & j / bit &
multimedia & acquisition cost & j &
multimedia & initialization cost ( jpeg ) & j &
multimedia & initialization cost ( visual feat . ) & j &
multimedia & processing cost ( jpeg ) & j / bit &
multimedia & processing cost ( visual feat . ) & j / bit &

under the settings described previously and shown in table [ tab : telosb settings ] , our first goal is to validate the analytic expressions of section [ sec : min_energy_analysis ] that form the mathematical foundation for propositions 1 and 2 , namely the uniform , pareto , exponential and half - gaussian energy expressions .
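as promised above , a minimal sketch of the fast + brief pipeline follows , assuming opencv with the contrib modules installed ( brief lives under cv2.xfeatures2d ) ; the file name and the detection threshold are placeholders :

```python
import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical qvga frame
fast = cv2.FastFeatureDetector_create(threshold=30)     # the threshold we varied
brief = cv2.xfeatures2d.BriefDescriptorExtractor_create(bytes=64)  # 64-byte descriptors
kpts = fast.detect(frame, None)
kpts, desc = brief.compute(frame, kpts)
bits_pushed = desc.shape[0] * desc.shape[1] * 8 if desc is not None else 0
print(len(kpts), bits_pushed)   # payload pushed to the radio subsystem for this frame
```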
to this end , we create a controlled multimedia data production process on each vsn node by : _ ( i ) _ artificially creating several sets of bitstream sizes according to the marginal pdfs of section [ sec : min_energy_analysis ] via rejection sampling ; _ ( ii ) _ setting the mean data size per video frame to a fixed number of kbit ; _ ( iii ) _ setting d = 0 ( no relaying ) for each distribution . the sets containing data sizes are copied onto the read - only memory of each sensor node during deployment . at run time , each node fetches a new frame size from the preloaded set , produces artificial data according to it ( akin to receiving the information from the multimedia subsystem ) and transmits the information to the lpbr following the process described in the system model of section [ sec : system model ] . depending on the frame size , the node can enter the idling / beaconing state , or it can buffer the data exceeding the allocated tfdma slots . this controlled experiment with monte - carlo generated datasets creates the conditions that match our statistical characterization and can therefore confirm the validity of our derivations . we report here energy measurements obtained under varying values of n and k . the chosen active time interval was fixed and , beyond measuring the accuracy of the model versus experiments , we also compared the theoretically - optimal values for n and k according to section [ sec : min_energy_analysis ] with the ones producing the minimum energy consumption in the experiments . for the reported experiments of figures [ fig : dist_all ] and table [ tab : diff - theory&experiment-1 - 1 ] , the spatio temporal constraints were set with a minimum of two frames per second . all our reported measurements and the values for k are normalized to a one - second interval for easier interpretation of the results . as one can see from figures [ fig : dist_all ] and table [ tab : diff - theory&experiment-1 - 1 ] , the theoretical results match the experimental results for all the tested distributions , with the maximum percentage error between them limited to 7% and high coefficients of determination between the experimental and the model points . in addition , the theoretically - obtained optimal values for n and k are always in agreement with the experimentally - derived values that were found to offer the minimum energy consumption under the chosen spatio temporal constraints . we have observed the same level of accuracy for the proposed model under a variety of data sizes , active time interval durations , numbers of relay nodes and spatio temporal constraints , but omit these repetitive experiments for brevity of exposition . as mentioned in section [ sec : min_energy_analysis]-c , the optimal solution does not always correspond to the minimum allowable number of frames ( i.e. , k_min ) . for instance , figure [ fig : dist_kmin ] shows the theoretical and experimental results obtained by setting the temporal constraint to one frame every two seconds , and using the uniform distribution . under these settings , the optimal solution was found to use a frame setting above k_min , thereby confirming the validity of the proposed model .
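the preloaded datasets can be generated with a generic rejection sampler . a minimal sketch follows ; the target pdf , the mean frame size and the sample count are illustrative assumptions :

```python
import numpy as np

rng = np.random.default_rng(1)

def rejection_sample(pdf, n, x_max, pdf_max):
    """draw n frame sizes from a target pdf on [0, x_max] by rejection sampling
    against a uniform proposal (as used to preload the nodes' datasets)."""
    out = []
    while len(out) < n:
        x = rng.uniform(0, x_max, size=4 * n)     # proposal draws
        u = rng.uniform(0, pdf_max, size=4 * n)   # acceptance heights
        out.extend(x[u < pdf(x)][: n - len(out)])
    return np.array(out)

mean_kbit = 5.0                                   # illustrative mean frame size
pdf = lambda x: np.exp(-x / mean_kbit) / mean_kbit        # exponential target
sizes = rejection_sample(pdf, 1000, x_max=10 * mean_kbit, pdf_max=1 / mean_kbit)
print(sizes.mean())                               # close to mean_kbit
```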
[ figure fig : dist_kmin : theoretical and experimental results under the uniform distribution ; all energy values and frames ( k ) are normalized to a one - second interval . ]

we proposed an analytic model for the energy consumption of a uniformly - formed wireless visual sensor network ( vsn ) under varying spatio temporal constraints , defined in terms of number of nodes to be deployed per network tier and video frames to be captured by each node . analytic conditions for the optimal spatio temporal settings within the vsn were derived for different probability density functions characterizing the multimedia data volume to be transmitted by each node . monte - carlo experiments performed via an energy - measurement testbed revealed that the proposed model s accuracy is within 7% of the obtained energy consumption . applying the model to two realistic scenarios for motion jpeg compression and local visual features extraction within each node in the vsn demonstrated that substantial energy savings can be obtained via the proposed approach against _ ad - hoc _ settings for the spatio temporal parameters of the vsn . as such , the proposed model can be used for early - stage studies of vsns to determine the best operational parameters to be considered prior to cumbersome and costly real - world deployment and testing . we first present the detailed proof of proposition 1 under the uniform distribution . the proofs for the pareto , exponential and half - gaussian distributions ( i.e. , proposition 2 ) are summarized afterward , since they follow the same steps as for the case of the uniform . we examine the energy function along a plane of fixed k , and analyze what is now a function of n only . it is straightforward to show by first - derivative analysis that the only candidate extremum or inflection point is the one given by [ eq : beta_u ] . this candidate extremum holds under the assumption that it falls within the predefined spatial constraints . furthermore , we find that the second derivative is positive there , which demonstrates that it is a local minimum . given that local extrema must alternate within the region of support of a continuous and differentiable function , it is also the global minimum within the constraint set . having derived the global minimum along an arbitrary plane of fixed k , we can now attempt to find the value of k that minimizes the energy function . evaluating at this minimum , we obtain :

e_{\mathrm{u}}\big(n_{0,\mathrm{u}},k\big)=\big[\,\cdots\,\big] . \label{eq : e_cu(n_0u , k)}

evidently , the value of k minimizing this expression is the minimum allowable , i.e. k_min . thus , the solution minimizing the energy in the n - direction is obtained at that setting , and it holds under the corresponding constraint . similarly , we cut the energy function along a plane of fixed n , and minimize what is now a function of k only . following the steps presented earlier , we can show by first and second derivative analysis that the global minimum occurs at the point associated with \gamma_{\mathrm{u}} , which holds under the predefined temporal constraint . having derived the global minimum along an arbitrary plane of fixed n , we can now attempt to find the value of n that minimizes the energy function . evaluating at this minimum we obtain :

e_{\mathrm{u}}\big(n,k_{0,\mathrm{u}}\big)=\frac{1}{n}\left[\big[\,\cdots\,\big]\gamma_{\mathrm{u}}-ps+\frac{s^{2}(b+p)}{4r(d+1)\gamma_{\mathrm{u}}}\right] . \label{eq : e_cu(n , k_0u)}

evidently , the value of n minimizing this expression is the maximum allowable , i.e. n_max .
hence , the solution when attempting to minimize the energy in the k - direction under the constraints is obtained at that setting , under the corresponding constraint . so far , we have found two solutions minimizing the energy consumption of each node : one which minimizes the energy in the n - direction by appropriately choosing the number of nodes to deploy ( spatial resolution ) , and one which minimizes the energy in the k - direction by appropriately setting the optimal number of frames to capture ( temporal resolution ) during the active time interval . however , the following issues arise : 1 . both solutions are only applicable under their respective constraints . is it possible that _ both _ constraints are satisfied and , if so , then what is the best solution ? 2 . conversely , if _ neither _ of these two constraints is satisfied , then what is the optimal solution ? it turns out that the answer to both questions can be derived based on the value of the temporal constraint , k_min , as is clarified in the following analysis . starting from the first constraint , a few straightforward manipulations yield a bound on k_min ; the second constraint provides another bound . it is now easy to prove ( see appendix [ subsec : ff ] ) that the constraints of the two solutions are _ non - overlapping _ , as the lower bound of the one is larger than the upper bound of the other . this answers the first question . to address the second question , we have to analyze what happens when k_min lies between the two bounds , as neither of the two solutions is applicable in such cases . it is straightforward to show that the partial derivatives are never zero within these intervals . hence , the solution we are looking for must lie on one of the two boundary points . let us focus on the first case and evaluate the energy on the boundary plane . since the energy is monotonically increasing there , the optimal point is the corresponding bound , which leads to the boundary solution . similarly , let us look at the other direction by evaluating the energy function on the respective plane . now the candidate point exceeds the admissible bound and is thus not admissible . since the energy is decreasing there , the optimal point is again the boundary , which also leads to the same solution . finally , in the remaining case , following a similar analysis we reach the corresponding optimal solution . summarizing , when the data transmitted by each vsn node follows the uniform distribution of definition 1 , the set of solutions giving the minimum energy consumption under the spatio temporal constraints is the one given in proposition 1 . considering the energy consumption for the pareto distribution , we follow the derivative - based analysis along each direction and join the obtained minima along with their constraints . the partial derivative with respect to n ( i.e. under a plane with fixed k ) is equal to zero at a single admissible point under the constraints . it is straightforward to show that this point corresponds to the global minimum . evaluating at this minimum leads to an expression which attains its minimum value for the minimum allowable k , i.e. at k_min . now we have to ensure that the corresponding constraint holds . as discussed for the uniform case , for values of k_min outside this range , the optimal solution comprises the border points , depending on the temporal constraint . the partial derivative with respect to k ( i.e.
under a plane with fixed $\bar{n}$ ) is :

\frac{\partial e_{\mathrm{p}}}{\partial k}=\cdots+r(b+p)(d+1)\left[\left(\frac{\bar{n}}{s}\right)^{\alpha-1}\left(\frac{kr(\alpha-1)(d+1)}{\alpha}\right)^{\alpha-1}-1\right] .

the only solution for k that is admissible under the constraints is the one associated with \gamma_{\mathrm{p}} . the first constraint imposed on it is that it must be positive , and the resulting condition indicates that the global minimum holds only if the energy consumption during the idle state is greater than the energy during transmission . while this is possible from a mathematical point of view , the physical reality of wireless transceivers does not allow for this case to manifest in a practical setting . we also note that , beyond this constraint , the global minimum holds under the assumption imposed by the predefined temporal constraint . evaluating at this minimum , we obtain :

e_{\mathrm{p}}\big(n,k_{0,\mathrm{p}}\big)=\cdots+\frac{bs}{n}+\frac{\gamma_{\mathrm{p}}}{n_{\text{max}}}\,\frac{\beta_{\mathrm{p}}\left[a+r\left[\left(j+p\right)\left(d+1\right)+hd+g\right]\right]}{n} . \label{eq : e_cp(n , k_0p)}

evidently , the value of n minimizing this expression is the maximum allowable , i.e. n_max . hence , the solution when attempting to minimize the energy consumption function in the k - direction under the constraints is obtained at that setting , under the corresponding constraint . it is now easy to prove ( see appendix [ subsec : ff2 ] ) that the constraints of the two solutions are _ non - overlapping _ . the energy consumption in the case of the exponential distribution is the one given above . we follow the derivative - based analysis along each direction and join together the obtained minima along with their constraints . the partial derivative with respect to n ( i.e. under a plane with fixed k ) , under the constraints of [ eq : n_k_constraints ] , is equal to zero at a single admissible point . it is straightforward to show that this point corresponds to the global minimum . evaluating at this minimum leads to an expression which has its minimum value for the minimum allowable k , i.e. at k_min . now we have to ensure that the corresponding constraint holds . again , for values outside this range , the optimal solution comprises the border points , depending on the temporal constraint . the partial derivative with respect to k ( i.e. under a plane with fixed $\bar{n}$ ) is :

\frac{\partial e_{\mathrm{e}}}{\partial k}=\cdots+\left(b+p\right)\left[r\left(d+1\right)\left(\exp\left(-\frac{s}{\bar{n}kr(d+1)}\right)-1\right)+\frac{s}{\bar{n}k}\exp\left(-\frac{s}{\bar{n}kr(d+1)}\right)\right] .

the only solution for k that may be admissible under the constraints is the one associated with \gamma_{\mathrm{e}} . the first constraint imposed on it is that it must be positive . that is , the product - log function should be smaller than -1 . this is true when the argument of the product - log function is limited within ( -1/e , 0 ) , that is :

-\frac{1}{e}<\frac{\cdots}{e\,r(d+1)(b+p)}<0 . \label{eq : constraint_k_0,e_positive}

it is easy to verify that a necessary condition for this to hold is that the idle energy consumption exceed the transmission energy . thus , similar to the pareto case , while the global minimum is in principle possible , it is not expected to be encountered in a practical setup . beyond this constraint , the global minimum holds under the assumption imposed by the predefined temporal constraint . having derived the global minimum along an arbitrary plane of fixed n , we can now attempt to find the value of n that minimizes the energy function . evaluating at this minimum we obtain :

e_{\mathrm{e}}\big(n,k_{0,\mathrm{e}}\big)=\frac{\cdots}{n} . \label{eq : ec_e(n , k0_e)}

evidently , the value of n minimizing this expression is the maximum allowable , i.e. n_max .
hence , the solution when attempting to minimize the energy consumption function in the n - direction under the constraints is obtained at that setting , under the corresponding constraint . it is now easy to prove ( see appendix [ subsec : ff2 ] ) that the constraints of the two solutions are _ non - overlapping _ . the energy consumption for the half - gaussian distribution is the one given above . the partial derivative with respect to n ( i.e. under a plane with fixed k ) , under the constraints of [ eq : n_k_constraints ] , is equal to zero at a single admissible point . it is easy to show that this point corresponds to the global minimum . evaluating at this minimum leads to an expression which has its minimum value for the minimum allowable k , i.e. at k_min . now , we have to ensure that the corresponding constraint holds . similarly as for the previous distributions , for values outside this range , the optimal solution comprises the border points n_min or n_max . the partial derivative with respect to k ( i.e. under a plane with fixed $\bar{n}$ ) is :

\frac{\partial e_{\mathrm{h}}}{\partial k}=\cdots+r\left(b+p\right)(d+1)\left(\exp\left(-\frac{s^{2}}{\pi k^{2}r^{2}\bar{n}^{2}(d+1)^{2}}\right)-1\right) ,

which can be shown to be positive . hence , the energy function is increasing with respect to k and the optimal value is the minimum allowable . thus , the solution is equal to k_min . replacing the two bounds in the inequality we desire to prove , squaring both sides ( since all terms are positive ) and rearranging terms , leads to an expression of the form \cdots+a\left(b+p\right)>0 , which is indeed positive because all constants are positive quantities . substituting in the numerator of the right - hand side , we obtain an expression that is positive since all constants are positive ; in order to prove the claim it then suffices to prove one remaining inequality , which is indeed true because the involved constants are positive . substituting in the numerator of the right side with the corresponding expression and using the definition of the product - log function , the last inequality follows . the right - hand side is upper bounded , since the lambert function is upper bounded by -1 . thus , to complete the proof , it suffices to prove the remaining bound . for deriving the solutions in the exponential case , we have assumed a condition that implies this bound , which completes the proof .
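the recurring univariate step in these appendices minimizes expressions of the form a·g + c/g over the feasible region . a quick symbolic check ( sympy , with both constants assumed positive ) confirms the square - root form of the resulting constants :

```python
import sympy as sp

# energies of the form e(g) = A*g + C/g (A, C > 0) are minimised at g = sqrt(C/A)
g, A, C = sp.symbols('g A C', positive=True)
e = A * g + C / g
crit = sp.solve(sp.diff(e, g), g)
print(crit)                                                  # [sqrt(C)/sqrt(A)]
print(sp.simplify(sp.diff(e, g, 2).subs(g, crit[0])) > 0)    # second derivative > 0
```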
alessandro redondi received his master degree in computer engineering in july 2009 and the ph.d . in information engineering in 2014 , both from politecnico di milano . currently he is a post - doctoral researcher at the department of electronics and information of politecnico di milano . his research activities are focused on algorithms and protocols for visual sensor networks and real - time localization systems . dujdow buranapanichkit is a lecturer in the department of electrical engineering , faculty of engineering , prince of songkla university ( thailand ) . she received her phd from the department of electronic and electrical engineering of university college london ( uk ) . her research interests are in wireless sensor networks and distributed synchronization mechanisms and protocol design . matteo cesana is currently an assistant professor with the dipartimento di elettronica , informazione e bioingegneria of the politecnico di milano , italy . he received his ms degree in telecommunications engineering and his ph.d . degree in information engineering from politecnico di milano in july 2000 and in september 2004 , respectively . from september 2002 to march 2003 he was a visiting researcher at the computer science department of the university of california in los angeles ( ucla ) . his research activities are in the field of design , optimization and performance evaluation of wireless networks with a specific focus on wireless sensor networks and cognitive radio networks . dr . cesana is an associate editor of the ad hoc networks journal ( elsevier ) . marco tagliasacchi is currently assistant professor at the dipartimento di elettronica e informazione , politecnico di milano , italy . he received the laurea degree ( 2002 , cum laude ) in computer engineering and the ph.d . in electrical engineering and computer science ( 2006 ) , both from politecnico di milano . he was visiting academic at the imperial college london ( 2012 ) and visiting scholar at the university of california , berkeley ( 2004 ) . his research interests include multimedia forensics , multimedia communications ( visual sensor networks , coding , quality assessment ) and information retrieval . dr . tagliasacchi co - authored more than 120 papers in international journals and conferences , including award - winning papers at mmsp 2013 , mmsp 2012 , icip 2011 , mmsp 2009 and qomex 2009 . he has been actively involved in several eu - funded research projects . he is currently co - coordinating two ict - fp7 fet - open projects ( greeneyes - www.greeneyesproject.eu , rewind - www.rewindproject.eu ) . dr .
tagliasacchi is an elected member of the ieee information forensics and security technical committee for the term 2014 - 2016 , and served as a member of the ieee mmsp technical committee for the term 2009 - 2012 . he is currently associate editor for the ieee transactions on circuits and systems for video technologies ( 2011 best ae award ) and apsipa transactions on signal and information processing . dr . tagliasacchi was general co - chair of the ieee workshop on multimedia signal processing ( mmsp 2013 , pula , italy ) and he will be technical program coordinator of the ieee international conference on multimedia & expo ( icme 2015 , turin , italy ) . yiannis andreopoulos ( m00 ) is a senior lecturer in the department of electronic and electrical engineering of university college london ( uk ) . his research interests are in wireless sensor networks , error - tolerant computing and multimedia systems .
|
wireless visual sensor networks ( vsns ) are expected to play a major role in future ieee 802.15.4 personal area networks ( pan ) under recently - established collision - free medium access control ( mac ) protocols , such as the ieee 802.15.4e-2012 mac . in such environments , the vsn energy consumption is affected by the number of camera sensors deployed ( spatial coverage ) , as well as the number of captured video frames out of which each node processes and transmits data ( temporal coverage ) . in this paper , we explore this aspect for _ uniformly - formed _ vsns , i.e. , networks comprising identical wireless visual sensor nodes connected to a collection node via a balanced cluster - tree topology , with each node producing independent identically - distributed bitstream sizes after processing the video frames captured within each network activation interval . we derive analytic results for the energy - optimal spatio temporal coverage parameters of such vsns under _ a - priori _ known bounds for the number of frames to process per sensor and the number of nodes to deploy within each tier of the vsn . our results are parametric to the probability density function characterizing the bitstream size produced by each node and the energy consumption rates of the system of interest . experimental results derived from a deployment of telosb motes under a collision - free transmission protocol and monte - carlo generated data sets reveal that our analytic results are always within 7% of the energy consumption measurements for a wide range of settings . in addition , results obtained via a multimedia subsystem ( beaglebone linux computer ) performing differential motion jpeg encoding and local visual feature extraction from video frames show that the optimal spatio temporal settings derived by the proposed framework allow for substantial reduction of energy consumption in comparison to _ ad - hoc _ settings . as such , our analytic modeling is useful for early - stage studies of possible vsn deployments under collision - free mac protocols prior to costly and time - consuming experiments in the field . visual sensor networks , energy consumption , frame - rate , sensor coverage , internet - of - things
|
in various types of musical performance , one component of the musical expression is conveyed in the short - term manipulation of tempo , with tempo modulation reflecting musical phrase structure . this has motivated various authors to construct automatic analyses of the arc - shaped tempo modulations in recorded musical performances , with or without score - derived information to supplement the analysis . ( see also authors who fit piecewise linear arcs to rock and jazz data , applying similar techniques but to genres in which the underlying tempo is held more fixed . ) machine understanding of tempo , including its variability , can be useful in live human - machine interaction . however , most current online tempo - tracking systems converge to an estimate of the current tempo , modelling expressive variations as deviations rather than as components of an unfolding tempo expression . in this paper we work towards the understanding of tempo arcs in a real - time system , paving the way for automatic accompaniment systems which follow the expressive tempo modulation of players in a more natural way . we also consider tempo arcs within a probabilistic framework . previous authors have approached piecewise arc estimation using dynamic programming ( dp ) with cost functions based on squared error . these are useful and can provide efficient estimation , but by setting the problem in a probabilistic framework ( and providing the corresponding viterbi - like dp estimator ) , we gain some advantages : prior beliefs about the length and shape of arcs can be expressed coherently as prior distributions ; measurement noise is explicitly modelled ; and the goodness - of - fit of models is represented meaningfully as posterior probabilities , which allows for model comparison as well as integration with other workflow components which can make use of estimates annotated with probability values . note that while we describe a fully probabilistic model , for efficient inference we will develop a maximum a posteriori ( map ) estimator , which returns only the maximum probability parameter settings given the priors and the data . in the following we will describe our model of arcs in time - series data , and develop an efficient map estimation technique based on least - squares optimisation and viterbi - like dp . the approach requires some kind of unsmoothed instantaneous tempo estimate as its input , which may come from a tempo tracker or from a simple measurement such as inter - onset interval ( ioi ) . we will then discuss how the estimator can be used for immediate - future tempo prediction , and how it can be applied to multiple levels simultaneously . finally we will apply the technique to tempo data from three professional piano performances , and discuss what the analysis reflects in the performances . for our basic model , we consider tempo to evolve as a function of metrical position ( beat number ) in a musical piece as a series of connected arcs , where each arc s duration , curvature and slope are independently drawn from prior distributions ( to be described shortly ) . to sample from this model , we pick an initial tempo at the starting time , then define a single upwards tempo arc which starts from that point and determines the tempo trajectory ( speeding up and then slowing down ) over a number of measures .
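to make the generative process concrete , the following is a minimal sketch of sampling a tempo curve from the model . all hyperparameters are illustrative , and the slope is tied to the duration here so that each arch peaks mid - arc , a simplification of the independent slope prior described above :

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_tempo_curve(n_beats=64, v0=100.0):
    """draw a tempo curve (bpm per beat) as connected arcs plus observation noise."""
    tempo = np.empty(n_beats)
    pos, v = 0, v0
    while pos < n_beats:
        dur = max(2, int(round(rng.lognormal(np.log(8), 0.4))))  # arc length, beats
        a = -np.exp(rng.normal(-3.5, 0.7))     # curvature: always negative (an arch)
        b = -a * dur                           # slope tied so the arch peaks mid-arc
        t = np.arange(min(dur, n_beats - pos))
        tempo[pos:pos + t.size] = a * t**2 + b * t + v
        v = a * dur**2 + b * dur + v           # ending tempo seeds the next arc
        pos += dur
    return tempo + rng.normal(0, 0.5, n_beats)  # gaussian measurement noise

print(sample_tempo_curve()[:8])
```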
any tempo data which may be measured during this interval is modelled as being drawn from the arc plus some amount of gaussian noise. once the ending breakpoint of this arc is reached, the next arc is sampled from the same priors, using the ending tempo as the new starting tempo. hence each tempo arc is conditionally independent of all previous observations once the starting tempo is determined, i.e. once the previous arc's parameters are fixed. this assumption of conditional independence is slightly unrealistic, since it ignores long-range relationships between tempo arcs, but it accounts for the most important interactions and makes inference tractable. our basic model is also only single-level, assuming that a single arc contributes to the current tempo at any moment, rather than considering for example contributions from multiple timescales such as piece-level, movement-level, phrase-level and bar-level combined. in section [sec:multiscale] we will consider a simple multi-scale extension of our technique, which we will apply in our analysis of piano performance data. (for an alternative approach in which various components can be simultaneously active see .) to fit a single arc shape to data, one can use standard quadratic regression, fitting a function of the form and minimising the prediction error over the supplied data for . in the bayesian context, we wish to incorporate our prior beliefs about the regression parameters (here , and ), which is related to the optimisation concept of _regularisation_, the class of techniques which aims to prevent overfitting by favouring certain parameter settings. in fact, a gaussian prior on a regression parameter can be shown to be equivalent to the conventional -norm regularisation of the parameters, summarised as: this equivalence is useful because it allows us to use common convex optimisation algorithms to perform the equivalent regularised least-squares optimisation, and they will yield the map estimate for the probabilistic model. however, in this context a standard gaussian prior is not exactly what we require, since we are expecting upwards arcs and not troughs: we expect the quadratic coefficient in equation [eq:simplequad] to be negative. a more appropriate choice of prior might be a negative log-gaussian distribution, which allows us to specify a ``centre of mass'' for the arc shapes (expressed through the log-mean and log-standard-deviation parameters), yet better represents our expectation that tempo arcs will always have negative curvature, (almost) flat and extremely strongly curved arcs being equally rare. the unconventional choice of prior might seem to remove the equivalence of the map regression technique with standard regularised least squares.
yet if we rewrite our function to be then our prior belief about this modified parameter becomes a gaussian, yielding a negative-log-gaussian in combination with our function. in addition, we will use a standard gaussian prior on . we could do the same for but instead we will use an improper uniform prior, for reasons which will be described in section [sec:multi]. therefore, our priors for equation [eq:modquad] will be gaussian priors on and , which can easily be converted to the equivalent -regularisation terms for optimisation. the strength of the regularisation (the value of the regularisation coefficient) reflects the specificity of our priors versus our data: specifically, the regularisation parameter is given by the noise variance divided by the prior variance. again, we see how the probabilistic setting helps to ground our problem, connecting the strength of the regularisation directly to our prior beliefs about the model and the data rather than to manually-tuned parameters. if a time-series is composed of multiple arcs and the breakpoints are known, then fitting multiple arcs is as simple as performing the above single-arc fit for each subsection of the time series (as in figure [fig:toydatatwoarcs]). additionally, one should take care of each arc's dependence upon its predecessor (to enforce that they meet up), which is not shown in these plots. in our case, we want to estimate the breakpoint locations as well as the arc shapes between those breakpoints. this can be performed by iterating over all possible combinations of one breakpoint, two breakpoints, three, and so on, for the dataset, and choosing the result with the lowest cost (the highest posterior likelihood). the bayesian setting makes it possible to compare these different alternatives (e.g. one single arc vs. one arc for every datapoint) without having to add arbitrary terms to counter overfitting; instead, we specify a prior distribution over the arc durations, which in combination with the other priors and data likelihoods yields a map probability for any proposed set of arcs. in this paper we choose a log-normal prior distribution over arc durations. see figure [fig:toydatatwoarcs] for some examples of different sets of arcs fitted to a synthetic dataset, and the associated posterior (log-)probabilities. in order for only a single tempo value to exist at each breakpoint (and not a discontinuous leap from one tempo to another), we fit each arc under the constraint that its starting value equals the ending value of the previous arc. this removes one degree of freedom from the function to be fit (equation [eq:modquad]), which otherwise has three free parameters. we implement this by constraining the value of in the optimisation so that the function evaluates to the predetermined value at the appropriate time-point. the least-squares optimisation therefore only operates on and . the number of possible combinations of arcs for even a small time-series (such as figure [fig:toydatatwoarcs]) grows very large very quickly, and so it is impractical to iterate over all combinations. this is where dynamic programming (dp) can help. here we describe our dp algorithm, which, like the well-known viterbi algorithm, maintains a record of the most likely route that leads to each of a set of possible states.
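before detailing that recursion, the constrained single-arc map fit which it will call repeatedly can be sketched as follows. this is a minimal sketch, assuming the reparameterised quadratic with fixed starting value described above; the gaussian prior parameters and noise level are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.optimize import minimize

def fit_arc(x, y, start_value, noise_sd=3.0,
            a_prior=(np.log(0.5), 0.5), b_prior=(0.0, 2.0)):
    """map fit of f(x) = -exp(a_) * x**2 + b * x + c, with c fixed so that
    the arc starts at start_value (continuity with the previous arc).
    returns the fitted (a_, b) and the unnormalised log posterior."""
    xr = x - x[0]                                   # shift so the arc starts at zero

    def neg_log_post(params):
        a_, b = params
        resid = y - (-np.exp(a_) * xr**2 + b * xr + start_value)
        nll = 0.5 * np.sum(resid**2) / noise_sd**2  # gaussian likelihood
        # gaussian priors on a_ and b, equivalent to l2 regularisation terms
        nll += 0.5 * ((a_ - a_prior[0]) / a_prior[1])**2
        nll += 0.5 * ((b - b_prior[0]) / b_prior[1])**2
        return nll

    res = minimize(neg_log_post, x0=np.array([a_prior[0], b_prior[0]]))
    return res.x, -res.fun
```

with this building block in place, we return to the viterbi-like recursion.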
rather than applying it to the states of a hidden markov model ,we apply it to the possibility that each incoming datum represents a breakpoint .assume that the first incoming datum is a breakpoint .( this assumption can be relaxed , in a similar way to the treatment of the final datum which we consider later . ) then , for each incoming datum ( , ) , we find what would be the most likely path _ if it were certainly _ a breakpoint .we do this by finding the most appropriate past datum ( , ) which could begin an arc to the current datum where the appropriateness is judged from the map probability of said arc , combined with the map probability of the whole multiple - arc history that leads up to that past datum ( recursively defined ) . with our lognormal prior on the arc lengths ( and with many common choices of prior ), the probability mass is concentrated at an expected time - scale , and very long arcs are highly improbable _ a priori_. hence in practice we truncate the search over potential previous arc points to some maximum limit ( i.e. ) . thus , for every incoming data point we perform no more than single - arc fits , then store the details of the chosen arc , the map probability so far , and a pointer back to the datapoint at the start of the chosen single arc . the simplest way to choosethe overall map estimate is then to pick another definite breakpoint ( for example , the last datum if the performance has finished ) and backtrack from there to recover the map arc path .the time complexity of the algorithm depends strongly on that of the convex optimisation used to perform a single - arc fit .assume that the complexity of a single - arc fit is proportional to the number of data points included in the fit , where .then for each incoming data point a search is performed for one subset each of 2 , 3 , k data points , which essentially yields an order ( ) process .for online processing this is manageable if is not too large .analysing a whole dataset of points then has time complexity ( ) .( compare this to the broadly similar complexity analysis of . )the space complexity is simply ( ) , or ( ) if the full arc history since the very beginning does not need to be stored .this is because a small fixed amount of data is stored per datapoint .as discussed , if we know the performance has finished then we can find the viterbi path leading to a breakpoint at the final data point received .however , we would also like to determine the most likely set of arcs in cases where the performance might not have finished ( e.g. for real - time interactive systems ) , and thus where we do not wish to assert that the latest datum is a breakpoint .we wish to be able to estimate an arc which may still be in progress .if we can , this has a specific benefit of predicting the immediate future evolution of the tempo modulations ( until the end of the present arc ) , which may be particularly useful for real - time interaction .we can carry this out in our current approach as follows .since an arc s duration ( as well as the curve - fit ) affects its map probability , in the case where the latest arc may or may not be terminating we must iterate over the arc s possible durations and pick the most likely . to do this we choose a set of future time - points as candidate breakpoints , ( e.g. an evenly - spaced tatum grid of future points ). 
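the record-keeping just described, restricted for the moment to the actual data, can be sketched compactly as follows (reusing fit_arc and the imports from the earlier listing; the log-normal duration prior parameters are again illustrative):

```python
from scipy.stats import lognorm

def viterbi_arcs(x, y, t0, K=32, dur=(np.log(4.0), 0.3)):
    """best[j] = (log posterior of the best arc path ending with a breakpoint
    at datum j, backpointer to the previous breakpoint, tempo at datum j)."""
    best = [(0.0, None, t0)] + [None] * (len(x) - 1)
    for j in range(1, len(x)):
        cands = []
        for i in range(max(0, j - K), j):          # bounded lookback over arc starts
            lp_hist, _, start = best[i]
            params, lp_fit = fit_arc(x[i:j + 1], y[i:j + 1], start)
            d = x[j] - x[i]
            lp_dur = lognorm.logpdf(d, s=dur[1], scale=np.exp(dur[0]))
            end = -np.exp(params[0]) * d**2 + params[1] * d + start
            cands.append((lp_hist + lp_fit + lp_dur, i, end))
        best[j] = max(cands)                       # most likely route into datum j
    return best

# backtracking from a known final breakpoint recovers the map set of arcs.
```

we now return to the candidate future time-points chosen above.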
then we supply these data to the viterbi update process exactly as is done with actual data , but with no associated values .these `` hypothetical '' viterbi updates will use these time - points to determine the arc - lengths being estimated , and in normalising the data subset , but will not include them in the arc - fitting process .it will therefore yield a map probability estimate for each of the time - points as if an arc extended from the real data as far as this hypothetical breakpoint . out of these possibilities ,the one with the highest map probability is the map estimate for an arc which includes the latest real datum and some portion of the hypothetical future points . (the hypothetical viterbi updates are not preserved : if more data comes in , it is appended to the viterbi storage corresponding only to the actual data . )the model we describe operates at one level , with expected arc durations given by the corresponding prior .our model is adaptable to any time - scale by simply adapting the prior .it does not however automatically lend itself to simultaneous consideration of multiple active timescales .multi - scale analysis can be carried out by analysing a dataset with one timescale , then analysing the residual at a second timescale .this residual - based decomposition has been used previously in the literature ( e.g. ) ; it requires a strong hierarchical assumption that the arcs at the first timescale do not depend at all on those at the second timescale , while the second is subordinate to the first .we consider this to be unrealistic , since there may well be interactions between the different timescales on which a performer s expression evolves .however this assumption leads to a tractable analysis .note also that this approach to multi - scale estimation requires the first analysis to be completed ( so that the residual is known ) before the second scale can be analysed .some dp approach may be possible to enable both to be calculated online , but we have not developed that here .for the present work , the single - scale viterbi tracking is applicable and useful for online tracking , while multi - scale analysis is an offline process , which we will next apply to modelling of pre - recorded tempo data .we applied our analysis to an existing set of annotations of three performances of beethoven s _ moonlight sonata_. the annotations by elaine chew have previously been analysed by chew with reference to observations noted by jeanne bamberger .for each of three well - known performances of the piece by daniel barenboim ( 1987 ) , maurizio pollini ( 1992 ) and artur schnabel ( 2009)the first 15 bars have been annotated with note onset times , which correspond to regular triplet eighth - note timings .we implemented the algorithm in python , using the ` scipy.optimize.fmin ` optimiser to solve individual regressions .source code is available .( note that this development implementation is not generally fast enough for real - time use . 
)instantaneous tempo was derived from these inter - onset intervals , then analysed using a two - pass version of our algorithm : first the data was analysed using an arc - duration prior centred on four bars ; then the residual was analysed using an arc - duration prior centred on one bar .this choice of timescales is a relatively generic choice which might reasonably be considered to reflect a performer s short - term and medium - term state ; however it might also be said to be a form of basic contextual information about the relevant timescales in the current piece . for the current study, we confine ourselves to priors with log - normal shapes , though an explicitly score - derived or corpus - derived prior could have a more tailored and perhaps multimodal shape .figure [ fig : manual_analyses ] shows a set of manual annotations of hierarchically embedded phrases , and figure [ fig : twoarcsthree ] shows the automatically computed results .the automatic analyses show some notable similarities with the manual one at the shorter time scale , and significant differences at the longer time scale .the difficulty of the longer time scale analysis may be explained by figure [ fig : moonlight_phrases ] , which shows one plausible set of phrase groupings for this excerpt ; the overlapping \{5,5,7 } bar phrases do not fit easily into a four - bar duration framework .nevertheless , the longer time scale analysis ( centred on four bars ) highlights differences between the performances : pollini s performance appears to contain relatively little variation on this level , as the fit yields long and shallow arcs , with breakpoints near positions 48 , 96 and 168 ( structurally important positions ; 96 is where the key - change occurs ) .on the other hand , both barenboim and schnabel s tempo curves exhibit fairly deep and varied arcs .schnabel s performance exhibits the most dramatic variation in the first four bars until around measure 48 : this first four - bar section corresponds to the opening statement of the basic progression , before the melody enters in the fifth bar ( and the underlying progression repeats ) .bamberger described schnabel as performing them `` as if in one long breath '' ( quoted in ) , not quite reflected in our automatic analysis . on the shorter time scale ,the analysis tends to group phrases into one - bar or two - bar arcs .aspects of the musical structure are reflected in the arcs observed .sections of the melody which lend themselves to two - bar phrasing ( e.g. 7296 ) are generally reflected in longer arcs crossing bar lines .conversely , in the region 96132 the change to the new key unfolds as each new chord enters at the start of a bar , and the tempo curves for all three performers reflect an expressive focus on this feature , with one - bar arcs which are more closely locked to the bar - lines than elsewhere .note that in this section schnabel matches pollini in exhibiting a long and shallow arc on the slow timescale , with all the expressive variation concentrated on the one - bar arcs . 
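observations of this kind can be quantified by measuring how far the inferred breakpoints fall from the nearest barline; a minimal sketch of such a measure follows, and the next paragraph applies it to the three performances. the breakpoint list in the usage line is hypothetical, and the default of 12 positions per bar reflects the triplet eighth-note annotation grid of this excerpt.

```python
import numpy as np

def mean_barline_deviance(breakpoints, beats_per_bar=12):
    """mean distance (in annotation units) from each breakpoint to the nearest
    barline; this excerpt has 12 triplet eighth-note positions per bar."""
    b = np.asarray(breakpoints, dtype=float)
    phase = np.mod(b, beats_per_bar)                 # position within the bar
    dev = np.minimum(phase, beats_per_bar - phase)   # distance to nearest barline
    return dev.mean()

# hypothetical breakpoints: values on the barline contribute zero deviance
print(mean_barline_deviance([0.0, 13.5, 24.2, 48.0]))
```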
over the excerpt generally, the breakpoints for schnabel are further away from the barline than the others, as was observed in chew's manual analysis. we can quantify this by measuring the mean deviances of arc endpoints from the barlines in each performance. the resulting mean deviances confirm our observations (table [tbl:dev]: mean deviance from the barlines of the arc endpoints inferred for each performance, averaged over the short-timescale arcs in each case). we have extended the plots slightly beyond the 180 annotated data points, to illustrate the immediate-future predictions made by the model. (this is done for both timescales, though only the longer timescale (in red) shows noticeable extended arcs.) all the performers, and especially schnabel, exhibit an acceleration towards the end of the annotated data, reflected in the predictions of an upward arc followed by a gradual slowing over the next bar. this type of prediction is plausible for such expressively-timed music. to illustrate the effect that the prior parameters have upon the regression, figure [fig:twoarcsthreeb] shows the same analysis as figure [fig:twoarcsthree] but with the standard deviation of the noise prior set at 4.0 rather than 3.0. the increase in the assumed noise variance leads the algorithm to ``trust'' the data less and the prior slightly more (cf. equation [eq:equivalence]). in our example, some of the breakpoints for the long-term arcs (in red) have changed, losing some detail, though most of the detail of the second-level analysis (in blue) is consistent. we have described a model with similarities to some previous piecewise-arc models of musical expression, but with a bayesian formulation which facilitates model comparison and the principled incorporation of prior beliefs. we have also described an efficient viterbi-like dynamic programming approach to estimation of the model from data. the approach provides scope to apply the model to real-time score-free performance tracking, including prediction of immediate future tempo modulation. source code for the algorithm (in python) is available. we have applied the model in a two-level analysis to data from expressive piano performance, illustrating the algorithm's capacity to operate at different time-scales, and to recover expressive arc information that corresponds with some musicological observations regarding phrasing and timing. further research would be needed to develop a model of multiple simultaneously-active levels of expression which can be applied online as with our single-level viterbi-like algorithm. similar arcs have been observed and analysed in loudness information extracted from performances. it would also be useful to combine loudness information with tempo information in this model.
|
in musical performances with expressive tempo modulation , the tempo variation can be modelled as a sequence of tempo arcs . previous authors have used this idea to estimate series of piecewise arc segments from data . in this paper we describe a probabilistic model for a time - series process of this nature , and use this to perform inference of single- and multi - level arc processes from data . we describe an efficient viterbi - like process for map inference of arcs . our approach is score - agnostic , and together with efficient inference allows for online analysis of performances including improvisations , and can predict immediate future tempo trajectories . , expression , viterbi , time series
|
regularization has been studied extensively since the theory of ill - posed problems . by adding a penalty associated to a choice of regularization parameter , one penalizes overfitted models and can achieve good generalized performance with the training data .this has been a crucial feature in current modeling frameworks : for example , tikhonov regularization , lasso regression , smoothing splines , regularization networks , svms , and ls - svms .svms in particular are characterized by dual optimization reformulations , and their solutions follow from convex programs .standard svms reduce to solving quadratic problems and ls - svms reduce to solving a set of linear equations .many general purpose methods to measure the appropriateness of a regularization parameter for given data exist : cross - validation ( cv ) , generalized cv , mallows s , minimum description length ( mdl ) , akaike information criterion ( aic ) , and bayesian information criterion ( bic ) .recent interest has also been in discovering closed - form expressions of the solution path , and developing homotopy methods . like svms , this paper takes the perspective of convex optimization following in order to tune the regularization parameter for optimal model selection .classical tikhonov regularization schemes require two steps : 1 .( training ) choosing a grid of fixed parameter values , find the solution for each constant regularization parameter ; 2 . ( validating )optimize over the regularization constants and choose the model according to a model selection criterion .the approach in this paper reformulates the problem as one for constrained optimization .this allows one to compute both steps simultaneously : minimize the validation measure subject to the training equations as constraints .this approach offers noticeable advantages over the ones above , of which we outline a few here : * * automation * : practical users of machine learning tools may not be not interested in tuning the parameter manually .this brings us closer to fully automated algorithms . ** convexity * : it is ( usually ) much easier to examine worst case behavior for convex sets , rather than attempting to characterize all possible local minima . * * performance * : the algorithmic approach of training and validating simultaneously is occasionally more efficient than general purpose optimization routines . for this write - up , we focus only on ridge regression , although the same approach can be applied to more complex model selection problems ( which may benefit more as they suffer more often from local minima ) .we introduce the standard approach to ridge regression and prove key properties for a convex reformulation . for a more rigorous introduction ,see .let and be a positive semi - definite matrix . for a fixed , recall that tikhonov regularization schemes of linear operators lead to the solution , where the estimator solves _ for a fixed value , we define the _ ridge solution set _ as the set of all solutions corresponding to a value .that is , _ the value can be thought of as the minimal regularization parameter allowed in the solution set ; this would speed up computation should the user already know a lower bound on their optimal choice of .let denote the reduced singular value decomposition ( svd ) of , i.e. , is orthogonal and contains all the ordered positive eigenvalues with .[ lemma ] the solution function is lipschitz continuous . 
for two values such that , consider the function . then by the mean value theorem, there exists a value such that . we now examine the convex hull of the solution set. this will allow us to search efficiently through a convex set of hypotheses as in section [3]. we decompose the solution function into a sum of low-rank matrices: then define for all . the following proposition provides linear constraints on the set of these coefficients following from this reparameterization. let for . then the polytope parametrized by as follows is convex, and moreover, forms a convex hull to . it is easy to verify the first two constraints by looking at the function defined previously, which is strictly increasing. set , and as before. then for all , hence the third inequality is also true. moreover, the set is characterized entirely by equalities and inequalities which are linear in the unknowns, so it forms a polytope. then by the above result together with the convexity of polytopes, it follows that is a convex relaxation of the set . we now find the relationship between solutions in and solutions in its convex relaxation. the maximal distance between a solution in and its closest counterpart in is bounded by the maximum range of the inverse eigenvalue spectrum: following similar steps to the proof of lemma [lemma], one can show that the maximal difference between a solution for a given and its corresponding closest is . for any , the value is bounded by the worst-case scenario that the solution passes through or through . then , where satisfies ; that is, , which is greater than zero by construction. hence for all , there exists such that . let be a given data set. the ridge regression estimator with minimizes the regularized loss function , where is some loss function. set and fix . for to be the unique global minimizer of , it is necessary and sufficient that satisfies , where is the design matrix and is the response vector formulated from . note that we use the kkt notation in order to hint at the extension to other learning machines, which reduce to solving a similar convex optimization problem but with inequality constraints. let be a validation data set. the optimization problem of finding the optimal regularization parameter with respect to a validation performance criterion can then be written as follows: that is, we find the least squared error among all candidate solutions in the solution set, or equivalently all solutions satisfying the kkt conditions. for the convex approach, we simply replace the non-convex solution set with its convex relaxation. then one obtains the convex optimization problem . this has the immediate advantage of simultaneously training and validating (1 step); in comparison, the original method requires finding a grid of points in and then minimizing among those (2 steps). furthermore, the convex hull is defined by equality/inequality constraints, whose complexity is not any higher than that of the original problem. for example, it can be solved with a qp solver when , as before, or with an lp solver when (the latter of which may be preferred for sparseness or feature selection). the convex relaxation constitutes the solution path for the modified ridge regression problem where and for all , and the following inequalities hold by translating: the above applies to a single training and validation set, and we now extend it to -fold cv in general. let and denote the sets of training and validation data respectively, corresponding to the fold for : that is, they satisfy . let .
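before the k-fold extension, the single-split program can be sketched as a small convex problem. this is a minimal sketch, not the paper's code: the exact linear inequalities defining the relaxed polytope are stripped from this copy, so the box bounds below (each spectral coefficient confined to the interval it sweeps as the regularization parameter runs over an assumed range) are one plausible, and looser, instantiation of those constraints; the structure of simultaneous training and validation is the point.

```python
import numpy as np
import cvxpy as cp

def tune_ridge_convex(Xt, yt, Xv, yv, gamma_min=1e-3, gamma_max=1e3):
    """simultaneous train/validate: minimise validation error over a convex
    relaxation of the ridge solution set, parameterised via the svd of Xt."""
    U, s, VT = np.linalg.svd(Xt, full_matrices=False)
    z = U.T @ yt                              # projections of the training targets
    theta = cp.Variable(len(s))               # relaxed spectral filter coefficients
    # assumed box constraints: the interval each s_i/(s_i^2 + gamma) sweeps
    lo = s / (s**2 + gamma_max)
    hi = s / (s**2 + gamma_min)
    w = VT.T @ cp.multiply(theta, z)          # candidate ridge-type solution
    prob = cp.Problem(cp.Minimize(cp.sum_squares(Xv @ w - yv)),
                      [theta >= lo, theta <= hi])
    prob.solve()
    return w.value, theta.value
```

applying the same construction per fold gives the k-fold program described next.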
then in order to tune the parameter according to -fold cv, we have the optimization problems for all . then we need only relax the kkt conditions independently for each . the convex optimization problem according to -fold cv is for all , each of which is solved as before, and so with constraints. then, just as in typical -fold cv, we take the average of the folds as the final model. we show two simulation studies as a benchmark in order to compare the method to current ones. the first figure below provides intuition behind the solution paths: the curve and its convex relaxation. in the second figure, we simulate data with the function , where are sampled i.i.d., observations , and . the figure compares the performance of the method with one which uses basis functions, another with cv, and another with generalized cv (gcv). cv and gcv are implemented with standard gradient descent, and the convex algorithm outlined here uses the interior point method. this toy example demonstrates that the relaxation does not result in an increase in true error. we also conduct a monte carlo simulation: every iteration constructs a simulated model for a given value of , defined as for random values of , where , , and is a covariance matrix with . a data set of size is constructed such that and is sampled i.i.d. for all . we compare three methods for tuning the parameter with respect to the ordinary least squares (ols) estimate: 10-fold cv with gradient descent (`rr+cv`), generalized cv with gradient descent (`rr+gcv`), and the 10-fold cv criterion which applies the convex method as in (18) (`frr+cv`). we run it for 20,000 iterations. the figure on the left compares the true error as the condition number grows; the figure on the right compares the true error as the number of observations increases. as we can see, the method is comparable to both cv and gcv for variable changes in the data set. we expect that ols worsens drastically over ill-conditioned matrices, and our method compensates for that via the optimal tuning parameter, which is roughly the same as cv's. viewing the model selection problem as one of constrained optimization gives rise to a natural approach to tuning the regularization parameter. according to the simulation results, the convex program provides comparable performance to popular methods. moreover, global optimality is guaranteed and efficient convex algorithms can be employed as suited to the modeling problem at hand. this would especially outperform general-purpose techniques when there is a high number of local optima, or if finding each individual solution for a fixed regularization parameter is more costly than optimizing it simultaneously with validation as we do here. further extensions to this framework can be used as a generic convex hull method, and it can also be applied to constructing stable kernel machines, feature selection, and other possibilities related to simultaneous training and validating.
|
we develop a robust convex algorithm to select the regularization parameter in model selection . in practice this would be automated in order to save practitioners time from having to tune it manually . in particular , we implement and test the convex method for -fold cross validation on ridge regression , although the same concept extends to more complex models . we then compare its performance with standard methods .
|
[ intro ] two - dimensional ( 2d ) intersymbol interference ( isi ) channels have received a lot of attention lately .this is mainly due to the fact that research focus in the storage industry is shifting towards developing a two - dimensional paradigm for storage .current storage technologies are restricted by physical limits which will prevent them from keeping up with the ever increasing demands for data storage .this has prompted the development of technologies that employ novel techniques for data storage .patterned magnetic media , in which information is stored in isolated single grain islands , make areal densities of the order of feasible , which is far beyond the limits of conventional magnetic recording .holographic storage , page - oriented optical memories , and the two - dimensional optical storage ( twodos ) technology are potential optical storage technologies of the future . due to the two - dimensional nature of storage ,these advanced storage technologies have 2d isi during the readback process .conventional recording technologies like magnetic hard disks and dvds have one - dimensional isi for which partial response maximum - likelihood decoding has been highly successful .extending prml to two dimensions is not straightforward since maximum - likelihood decoding in two - dimensions is computationally infeasible .this motivates the need for new methods to combat 2d isi .besides advanced storage technologies , multi - user communication scenarios , like cellular communication , also have situations where 2d isi is prevalent .detection schemes for 2d isi channels have been proposed by many researchers - .et al . _ , , have proposed joint equalization and decoding schemes for 2d isi channels and have shown the benefit of using error - correction coding in conjunction with detection . more often than not , the isi is modeled as a linear filter .although a good starting point , the linearity assumption does nt hold in general .twodos is an example of a system where the isi is nonlinear .twodos is , potentially , the next generation optical storage technology with projected storage capacity twice that of the blu - ray disk and with ten times faster data access rates , .as in conventional optical disk recording , bits in the twodos model are written on the disk in spiral tracks . however , instead of having a single row of bit cells , each track consists of a number of bit rows stacked together . thus , twodos is a truly two - dimensional storage paradigm .successive tracks on the disk are separated by a guard band which consists of one empty bit row .in addition , the bit cells are hexagonal ; this allows 15 percent higher packing density than rectangular bit cells leading to higher storage capacity . as in conventional optical disk recording ,a 0/1 is represented by the absence / presence of a pit on the disk surface .a scalar diffraction model proposed by coene for optical recording is used to model the readback signal from the disk . under this modelthe readback intensity from the disk has linear and bilinear contributions from the stored data .various detection schemes for twodos have been proposed - .these schemes , with the exception of that proposed by immink _et al . _ , , use two - dimensional partial response equalization to obtain a linear channel model for the isi . then, equalization methods like minimum mean - squared - error equalization , are used for detection on this linear channel model . 
since partial response equalization leads to noise correlation, there is an inherent loss associated with these schemes. thus, it is prudent to search for decoding schemes that avoid partial response equalization and are designed taking into account the nonlinear structure of the isi. the authors of propose using a stripe-wise viterbi detector that is designed for the nonlinear isi; however, they did not employ any error correction coding. in this paper, a low-complexity scheme for joint equalization and decoding for nonlinear 2d isi channels is presented. the scheme was first proposed for linear 2d isi channels and has been appropriately modified for the nonlinear channel. this scheme, called the full graph scheme, performs sum-product message-passing on a joint graph that represents the error-correction code and the nonlinear 2d isi channel. low-density parity-check (ldpc) codes are used for error correction. simulations for the nonlinear channel model of twodos demonstrate the potential of using the full graph scheme. significant improvement in performance is observed over uncoded performance. noise tolerance thresholds are calculated for regular ldpc codes of different rates and full graph decoding for the nonlinear 2d isi channel. the paper is organized as follows. the model of the system and the channel model for twodos are described in section [channelmodel]. the full graph message-passing algorithm and its performance for twodos are presented in section [fgmp]. the density evolution algorithm and the noise tolerance thresholds are presented in section [denevol]. section [conclusion] concludes the paper. [channelmodel] the system is modeled as a discrete-time communication system governed by the following equation ; here, is the data received at the output of the channel, and are the channel inputs, obtained by encoding the user data using an error correction code. the user data and the encoded data are assumed to be binary. ldpc codes are used for error correction. are samples of additive white gaussian noise (awgn) with zero mean and variance . is the set of indices of all the bits that interfere with during readback, and is the function that encapsulates the nonlinear 2d interference. for twodos, a scalar diffraction model proposed by coene for optical recording is used to model the readback signal. using the model, the readback signal (optical intensity) from the disk is given in ([eq:twodosisi]), where and are respectively the linear and nonlinear isi coefficients. these coefficients depend on the parameters of the optical system, such as the wavelength of the laser, the numerical aperture of the readback lens and the geometry of the recording (pit and track dimensions). the extent of the interference is limited by the spot size of the laser used for reading. for low-to-moderate storage densities this leads to interference from the nearest neighbors only, whereas for high storage densities (twice that of the blu-ray disk) the interference from bits in the other shells also becomes significant. using a nearest neighbor interference model, the signal intensity in ([eq:twodosisi]) depends on the data bit stored in the central bit cell and the 6 neighboring bit cells. if it is assumed that two configurations with the same central bit and the same number of nonzero neighbors have identical signal values, then the signal intensity takes on 14 values corresponding to the 14 different configurations.
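this symmetric 14-level model (2 central-bit values times 7 neighbour counts) is easy to prototype; the sketch below is illustrative only, since the actual level values come from the scalar diffraction calculation tabulated in table [tbl:siglevel] (not reproduced in this copy), and the hexagonal-lattice indexing convention is one of several possible choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# levels[c][k]: signal for central bit c with k nonzero neighbours (0..6).
# placeholder values; the real ones come from the scalar diffraction model.
levels = np.array([[0.00, 0.02, 0.05, 0.09, 0.14, 0.20, 0.27],
                   [0.45, 0.50, 0.56, 0.63, 0.71, 0.80, 0.90]])

def twodos_readback(bits, sigma=0.05):
    """nonlinear 2d isi readback on a hexagonal lattice stored as a 2d array
    with alternate rows shifted; returns noisy intensities."""
    h, w = bits.shape
    out = np.empty((h, w))
    for r in range(h):
        shift = -1 if r % 2 else 0   # hexagonal neighbourhood via row parity
        for c in range(w):
            nbrs = [(r, c - 1), (r, c + 1),
                    (r - 1, c + shift), (r - 1, c + shift + 1),
                    (r + 1, c + shift), (r + 1, c + shift + 1)]
            k = sum(bits[i, j] for i, j in nbrs if 0 <= i < h and 0 <= j < w)
            out[r, c] = levels[bits[r, c], k]
    return out + rng.normal(0.0, sigma, size=(h, w))

y = twodos_readback(rng.integers(0, 2, size=(8, 8)))
```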
fig .[ fig : hex ] shows four of these 14 configurations .as shown by coene , this symmetry assumption is a good approximation .table [ tbl : siglevel ] lists the signal levels for the 14 different configurations ..signal levels for twodos recording using scalar diffraction model and nearest neighbors .( reproduced from ) [ cols="^,^,^ " , ] in proving the existence of thresholds for memoryless channels the crucial innovation of richardson and urbanke was the `` concentration results . ''these results state that as the block length tends to infinity the performance of the ldpc decoder on random graphs converges to its expected behavior and that the expected behavior can be determined from the corresponding cycle - free behavior .et al . _ , extended these concentration results to one - dimensional isi channels by using ldpc coset codes .for 2d isi channels the concentration results do not hold since the channel graph has short cycles even in the limit of infinitely long block length .hence existence of thresholds can not be proved using the concentration analysis . however , our simulations seem to suggest that for the twodos channel the full graph algorithm respects the thresholds computed using density evolution .simulations for long block lengths ( ) show that very low bit error rates ( ) are obtained only when the noise variance is smaller than the threshold .although this does not prove the existence of a threshold , it suggests that the noise tolerance thresholds of table [ tbl : thresholds ] are upper bounds on the performance of the full graph algorithm . besides that, the thresholds also serve as a design parameter ; given a system with a specified snr it is sufficient to pick an ldpc code having a smaller threshold snr thereby ensuring that the bit - error rate can be made arbitrarily small as the block length increases .[ conclusion ] a message - passing based scheme for joint equalization and decoding for nonlinear two - dimensional intersymbol interference channels has been proposed .the scheme , called the full graph algorithm , performs sum - product message - passing on a joint graph of the error correction code and the channel .the complexity of the full graph algorithm is linear in the block length of the error correction code and quadratic in the size of interference neighborhood .the performance of the algorithm is studied for the two - dimensional optical storage paradigm .simulations for the nonlinear channel model of twodos show significant improvement over uncoded performance .the performance is about 8 db better than that reported by immink _et al . _ , for the same isi . using density evolution noise tolerance thresholds for the full graph algorithmare also computed .the authors would like to thank the reviewers for their helpful suggestions .this work was supported by the office of naval research under award n00014 - 03 - 1 - 0110 .j. riani , j. w. m. bergmans , s. j. l. v. beneden , w. m. j. coene , and a. h. j. immink , `` equalization and target response optimization for high density two - dimensional optical storage , '' , pp . 141 - 148 , may 2003 . s .- y . chung , g. d. forney , t. richardson , and r. urbanke , `` on the design of low - density parity - check codes within 0.0045 db of the shannon limit , '' _ ieee comm . letters _ , vol . 5 , pp .58 - 60 , feb . 2001
|
an algorithm that performs joint equalization and decoding for channels with nonlinear two - dimensional intersymbol interference is presented . the algorithm performs sum - product message - passing on a factor graph that represents the underlying system . the two - dimensional optical storage ( twodos ) technology is an example of a system with nonlinear two - dimensional intersymbol interference . simulations for the nonlinear channel model of twodos show significant improvement in performance over uncoded performance . noise tolerance thresholds for the twodos channel computed using density evolution are also presented .
|
the cognitive channel is a special case of an interference channel in which the second transmitter has complete and non-causal knowledge of the messages and codewords of the first transmitter. this channel can be used to model an ideal operating scenario for cognitive radios, devices that can sense and adapt to the environment intelligently in coexistence with primary users. fundamental limits of such a communication channel are of interest. achievable rates of the cognitive channel were first obtained in by merging gelfand-pinsker coding with the well-known han-kobayashi encoding for the interference channel. at low interference, the capacity region of this channel in the gaussian case has recently been established by and independently. while the former considers the gaussian channel only, the latter studies the general discrete memoryless channel case, also called the interference channel with degraded message set (ic-dms). cognitive channel capacity is also known for very strong interference, when both receivers can decode both messages. at medium interference, the capacity is still an open problem, with some achievable rate regions presented in , , and . the z-interference channel (zic) is an interference channel in which only one receiver suffers from interference. its capacity is also unknown even for the gaussian channel, except for some special cases. from the capacity perspective, it is not important which transmitter interferes with the other in the zic. in a cognitive zic, however, due to asymmetric transmitters, two different zics are conceivable. one is with interference from the cognitive transmitter to the primary receiver, and the other with interference from the primary transmitter to the cognitive receiver. while achievable rate regions for the first one have been studied recently in , , there has not been such an investigation for the second one. in this paper, we study the cognitive channel in general and apply the results to the gaussian cognitive zic (gczic) in which the cognitive transmitter interferes with the primary receiver. the contribution can be summarized as follows. first, we introduce a new discrete memoryless cognitive interference channel (dm-cic) in which the primary receiver is more capable than the secondary receiver. we term it the more capable dm-cic. then, using superposition coding, we establish inner and outer bounds on its capacity. we also define a strong interference condition and show that the proposed outer bound holds under this condition as well. implicitly, both inner and outer bounds are also valid for the cognitive z-interference channel in which the interfered receiver is more capable than the other receiver. second, we show that at strong interference ( ), where is the gain of the interference link from the secondary user to the primary receiver, the outer bound is applicable to the gaussian cic, and thus to the gczic. then we prove that in the gaussian noise channel, the jointly gaussian distribution is the optimum distribution for this outer bound; therefore, we are able to compute this outer bound for the gczic. the outer bound is proven to be the best outer bound for the gczic at strong interference. finally, we derive the gaussian version of the achievable rate region and prove that when interference is highly strong, i.e.
, , the inner and outer bounds coincide .thus , we establish the capacity region of the gczic at this range , and show that superposition coding is the capacity achieving scheme .for such a large , superposition encoding at the cognitive transmitter and successive decoding at the primary receiver are capacity - achieving .the rest of paper is organized as follows . in section[ sec : models ] , we discuss models for the gaussian cognitive interference channel and the gczic as well as the existing capacity result for this channel at . we also introduce the more capable dm - cic in this section . in section [ sec : dm - czic ] , we provide new inner and outer bounds on the capacity region of the dm - cic .then in section [ sec : cap ] , we show that for we can apply the introduced inner and outer bounds to the gczic ; and , we compute these bounds for this range . for , we prove the outer bound is equal to the proposed achievable region ; and thus , establish the capacity of the gczic .section [ sec : sum ] concludes the paper .the classical interference channel ( ic ) consists of two independent , non - cooperating pairs of transmitter and receiver , both communicating over the same channel and interfering each other .a special case of the ic is the cognitive ic , also called an ic with degraded message sets ( ic - dms ) , in which a transmitter , the cognitive one , has non - causal knowledge of the messages and codewords to be transmitted by the other transmitter , the primary one . in this sectionwe formally define this channel and some other derivative of that .consider the discrete memoryless cognitive interference channel ( dm - cic ) , also termed the discrete memoryless interference channel with degraded message sets ( ic - dms ) , depicted in figure [ fig : dm - czic ] , where sender 1 wishes to transmit message to receiver 1 and sender 2 wishes to transmit message to receiver 2 .message is available only at sender 2 , while both senders know .this channel is defined by a tuple where two inputs , and two outputs are related by a collection of conditional probability density functions .the discrete memoryless cognitive z - interference channel ( dm - czic ) is a dm - cic in which interference is one sided .more specifically , we consider the case where the primary user does not interfere the secondary one .this only affects the channel transition matrix .thus , the dm - czic with two private messages , for the two receivers , two inputs , and two outputs is a dm - cic in which for all .the dm - cic is said to be more capable if for all . since the second transmitter can encode and broadcast both messages , in the absence of the first transmitter this channel reduces to the well - known more capable dm - bc . 
in the presence of the first sender, this channel is no longer a bc but an interference channel (ic). however, due to cognition, the second transmitter has complete and non-causal knowledge of both messages and codewords; thus, it can act similarly to the bc's transmitter. this observation motivated us to define a condition similar to the one that makes one receiver more capable than the other one in a dm-bc. we also define another condition to identify that the primary receiver is in a better position than the secondary receiver for receiving the signal of the cognitive user. we name this the strong cognitive interference condition, as it indicates, roughly speaking, that the interference link from the cognitive user to the primary receiver is stronger than the direct link of the cognitive sender to its corresponding receiver. the dm-cic is under strong cognitive interference if for all . note that in general neither of these two definitions and implies the other one. without loss of generality, we use the standard form of the gaussian interference channel , , in which the gains of both direct links are 1 and both noises are independent with unit variance. the standard gaussian cognitive interference channel is shown in figure [fig:standardic] and is expressed as . here the interference links are arbitrary constants and known at all the transmitters and receivers; represent the primary and secondary users' transmit signals, and their received signals; are independent additive noises ( ). we also assume that the transmitted signals are subject to average power constraints $e[x_1^2] \leq p_1$ and $e[x_2^2] \leq p_2$. depending on the values of the interference links and , different classes of ic emerge. a special class is the z-interference channel (zic), when either or . for a non-cognitive system, there is no difference in the capacity analysis of these two zics. in a cognitive system, however, due to asymmetric knowledge at the transmitters, two different cognitive zics are conceivable. one is when the primary receiver has no interference ( ), and the other is when the secondary receiver has no interference ( ). these two gczic channels have completely different capacity regions. the capacity of the gczic with can be simply obtained from the well-known result of dirty paper coding by costa. achievable rate and capacity regions of this cognitive zic for the discrete memoryless case can also be found in . on the other hand, to the best of our knowledge, not much work has been done on the second gczic (with ). in this paper, we investigate the capacity region for this gczic with . in the rest of this paper, gczic refers to this channel. in the following sections, we establish an inner bound and an outer bound for the dm-cic satisfying either or . these bounds are valid for the dm-czic as well. later, we will use these bounds to prove the capacity of the gczic at very strong interference. in the first part of this section, we derive an achievable rate region for the dm-cic. in the second part, we introduce a new outer bound on the capacity of the more capable dm-cic, which is valid for the dm-cic with strong interference also.
since the more capable dm - cic is an extension of the more capable dm - bc , the achievable region also is an extension of its component s capacity region .the technique used for achievability is the same as capacity achieving technique for the conventional more capable dm - bc .in addition , the outer bound also resembles that of the more capable dm - bc .similarly , achievability , error analysis , and the proof of converse ( outer bound ) follow those of the dm - bc. nevertheless , in general the inner bound and the outer bound are not equal in the more capable dm - cic while they are proven to be the same for the more capable dm - bc .indeed , this difference , which will be addressed later in this section , prevents establishing capacity region for the more capable dm - cic .theorem 1 provides an achievable region for the dm - cic .the achievable technique uses superposition encoding at the cognitive transmitter .the decoding is based on the joint typicality .an achievable rate region for the dm - cic consist of all rate pairs that satisfy for some joint distributions that factors as , where is a function which can be random or deterministic .[ thm1 ] the proof uses the superposition coding idea in which can only decode ( the cloud center ) while is intended to decode the satellite codeword . for completenesswe provide the proof in the appendix a. inspired by capacity of more capable bc , , instead of proving the outer bound for region , we prove it for the slightly altered rate region below .the following outer bound on the capacity holds both for the more capable dm - cic and dm - cic with strong interference .the union of all rate pairs such that for some joint distributions constitutes an outer bound on the capacity region of a dm - cic satisfying either the more capable condition in or strong interference condition in .[ thm2 ] the proof is based on the proof of converse for the more capable bc in but adapted for the dm - cic . for completeness , we provide the proof in the appendix b. in the bc , this new form is shown to be an alternative representation of the rate region in theorem [ thm1 ] ; thus , proving the converse for this equivalent region establishes the capacity of the more capable bc .however , these two regions are not equivalent for dm - cic because of different input distributions .therefore , theorem [ thm2 ] provides only an outer bound for the capacity of the more capable dm - cic and dm - czic .nevertheless , later in this paper we show that this outer bound is tight for the gczic at very strong interference .the gczic at weak interference ( ) is a special case of the gaussian cognitive interference channel ( when ) , for which the capacity region is known for and any real .the cognitive user partially devotes its power to help send the codeword of the primary user .it dirty paper encodes its own codeword against the codeword of the primary user .the cognitive receiver performs dirty paper decoding to extract its message free of interference and . at strong interference regime ( )however , the capacity of the gczic is not known in general . an outer bound on the capacity of the gaussian cognitive ic was established by maric et al . in ,corollary 1 . 
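the standard-form channel and the interference regimes discussed around here are easy to prototype numerically; a minimal sketch follows. the powers and the interference gain are purely illustrative, and since the exact regime boundaries are stripped from this copy of the text, the strong-interference test a**2 >= 1 and the very-strong threshold below are stated assumptions rather than the paper's expressions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gczic(x1, x2, a):
    """standard-form gaussian cognitive z-interference channel: unit direct
    gains, unit-variance awgn, interference only at the primary receiver."""
    y1 = x1 + a * x2 + rng.standard_normal(x1.shape)
    y2 = x2 + rng.standard_normal(x2.shape)
    return y1, y2

def C(x):
    """gaussian capacity function, 0.5 * log2(1 + snr)."""
    return 0.5 * np.log2(1.0 + x)

def regime(a, very_strong_threshold):
    """assumed regime boundaries; placeholders for the stripped conditions."""
    if a**2 < 1.0:
        return "weak: capacity known via dirty paper coding"
    if a**2 >= very_strong_threshold:
        return "very strong: superposition coding is capacity-achieving"
    return "strong: new outer bound applies"

# illustrative use: gaussian inputs meeting E[x_k^2] <= P_k on average
n, P1, P2, a = 10_000, 5.0, 5.0, 2.0
x1 = rng.normal(0.0, np.sqrt(P1), n)
x2 = rng.normal(0.0, np.sqrt(P2), n)
y1, y2 = gczic(x1, x2, a)
print(regime(a, very_strong_threshold=10.0), C(a**2 * P2))
```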
in this section we first find the conditions under which the gczic is more capable or under strong interference. we show that for the gaussian cic, the strong interference condition is equivalent to . thus for , we provide new inner and outer bounds by evaluating the inner and outer bounds in section [sec:dm-czic]. finally, we prove that these inner and outer bounds coincide when the interference is very strong ( ), thus establishing the capacity of the gczic for this range of interference. in this section we explore the conditions for which theorem [thm2] holds for the gczic; i.e., we find the condition that the gczic is either more capable or under strong interference. intuitively, the gczic is more capable when interference is very strong. consider the equivalent channel in figure [fig:zic-eq], which is obtained by manipulating figure [fig:zic]. since both figures have the same , and is a scaled transformation of , the channels depicted in these figures are equivalent from a capacity point of view. the equivalent channel in figure [fig:zic-eq] looks like a broadcast channel if we consider as interference. without it, the channel is a degraded bc and its capacity is known. now, considering the interference as noise, and assuming that is large enough that the power associated with noise plus interference ( ) is less than the noise power at , then can be more capable than . we need to find the range of for which in figure [fig:zic-eq] (or equivalently in figure [fig:zic]) is more capable than in decoding . the condition ([eq:defn1]) is equivalent to because of the channel transition matrix at ([eq:cond0]). for the gczic, this is equivalent to for all . with jointly gaussian with correlation factor , then becomes . choosing , which is the worst case, the gczic is more capable if . it is not clear, however, whether this condition implies more capability. let us now find the range of for which the strong interference condition holds for the gczic. the proof follows directly from the strong interference condition for the gaussian cognitive ic, which shows that condition is equivalent to . we can draw the conclusion that the strong interference condition implies the more capability condition for any gaussian cognitive ic. this completes the proof that theorem [thm2] is applicable for the gaussian cic, and thus for the gczic, if . the capacity of the gczic is partially unknown for strong interference ( ). in this regime, the best outer bound on the capacity of the gczic was established in , corollary 1. in this section, we provide a new outer bound for the capacity of the gczic at strong interference. this outer bound is the gaussian version of the outer bound in theorem [thm2], with the extra inequality , added to it. any achievable rate pair of the gczic with is upper bounded by the following constraints, where , and . [lem1] the proof of this lemma involves showing that the jointly gaussian distribution is the optimum distribution and evaluating the outer bound in theorem [thm2] for the gczic, then finding the covariance matrix of the jointly gaussian to maximize the rhs of all inequalities in theorem [thm1]. details of the evaluation and maximization can be found in appendix c. (figure [fig:fig1]: the outer bound of lemma [lem1] and the best existing outer bound in , for the gczic with .) from the proof, it can be seen that without , the optimality condition is , which is achieved when , and implies that is a function of . this condition is the same as that in the inner bound. however, the inner bound applies to independent and , whereas the outer bound is for general and . in section [capacity], we show that for a certain range of ( ) the optimal and for the outer bound are also independent, thus establishing the capacity in that range. figure [fig:fig1] numerically compares the outer bound proposed in lemma [lem1] with the best existing outer bound in . it shows that the new outer bound is strictly better than the existing one. moreover, as becomes larger, the gap between these two bounds increases, and the outer bound in lemma [lem1] gets much closer to the achievable region. this lemma establishes the best outer bound on the capacity region of the gczic at strong interference ( ). we prove this claim by simplifying the outer bound in lemma [lem1] into three simpler outer bounds in the following corollaries. to do so, we first introduce , , and . [cor1] this follows immediately by removing , from lemma [lem1]. this is the same as the best existing outer bound in , and our claim that lemma [lem1] provides the best outer bound follows readily. it should be highlighted that this region has been recently proven in and to be the capacity of the gczic for . any achievable rate pair of the gczic is upper bounded by the convex hull of the following region, where . [cor3] to prove this, we first remove in lemma [lem1]. then, in we use , where the last inequality follows by applying the triangle inequality with and to the lhs of . the maximum is attained when has the same sign as . finally, the outer bound in corollary [cor3] is obtained by considering that , . in this section, we compute the gaussian version of the achievable region introduced in section [inner] for the gczic. the following lemma, which extends theorem [thm1] to the gczic, provides the achievable region by superposition coding. any rate pair satisfying with is achievable. [thm3] we want to show that for the outer bound in corollary [cor3] and the inner bound in lemma [lem2] coincide. let us define as the set of all rate pairs satisfying the constraints in lemma [lem2] and as the set of all rate pairs satisfying - . using the same argument as el gamal, we can show that . here can be thought of as the rate of the common message, which can be decoded at both receivers, while is the rate of the private message. now if and only if for any . this means the common rate ( ) can be partially or entirely private. thus region can be represented as . we next show that the outer bound in corollary [cor3] simplifies to . to do so, it suffices to show that for , has to be in corollary [cor3]. consider the first two inequalities of the outer bound in corollary [cor3]; we can see that on the boundary of this outer bound we must have . comparing this inequality with the first inequality in , we conclude that either has to be or the first inequality of must be loose, since otherwise the outer bound is less than the inner bound, which is not possible. for this inequality to be redundant in , we need to hold for any ; this implies the first inequality of lemma [lem2] is redundant only for .
in other words , if there exist some for which this inequality can not be redundant , this in turn enforces . thus , for this range of , the outer bound in corollary [ cor3 ] simplifies to . note also that with , the optimal input for the outer bound is the same as the input for the inner bound ( i.e. , , are independent and ) , and thus the capacity is established . as a special case , when the third constraint is redundant in and , the outer bound in corollary [ cor2 ] is tight . for such an , the capacity region in theorem [ thm3 ] further simplifies as below . the capacity region of the gczic for is the set of all rate pairs satisfying for ] _ i.i.d . _ according to . also , randomly and independently generate sequences ] with elements _ i.i.d . _ according to . next , for each pair of sequences , randomly and conditionally independently generate one sequence with elements _ i.i.d . _ according to . decoding is based on standard joint typicality . the less capable receiver ( ) can only distinguish the auxiliary random variable . decoder 2 declares that message is sent if it is the unique message such that ; otherwise it declares an error . decoder 1 declares that message is sent if it is the unique message such that for some ; otherwise it declares an error . to analyze the probability of error , without loss of generality , assume that is sent . first we consider the average probability of error for decoder 2 . let us define the error events . by the union bound , the probability of error for decoder 2 is upper bounded by . now , by the law of large numbers ( lln ) , . also , since is independent of for , by the packing lemma , . then , consider the average probability of error for decoder 1 . we define the following error events . using the union bound , the probability of error for decoder 1 is upper bounded by . now we evaluate each term on the right - hand side ( rhs ) of this inequality when . first consider ; again by the lln , . next consider . for , since is conditionally independent of given , by the packing lemma , because is a function of . finally consider . for and , is independent of ; hence , by the packing lemma , . the equality follows because forms a markov chain . the above analysis completes the proof of achievability , since it shows that both receivers can decode the corresponding messages with the total probability of error tending to zero if is satisfied . therefore , there exists a sequence of codes with error probability tending to 0 . the proof is also similar to the converse proof for the more capable broadcast channel . we follow the same line of proof as in ; the only difference is replacing in with , since here also encodes . we can bound the rates and as and , where and ( [ eq : f-2 ] ) follow by fano's inequality . in a very similar fashion , the sum rate can also be bounded by . now we manipulate the rhs of - to obtain the desired terms in . first , consider the mutual information term in , in which ( [ eq : r2 - 1 ] ) follows from the chain rule , and we have defined the auxiliary random variable moving to ( [ eq : r2 - 2 ] ) . next , we bound the mutual information terms of the second inequality in ( [ eq : f-2 ] ) .
where ( [ eq : sum ] ) follows by the csiszar sum identity and the auxiliary random variable , and ( [ eq : rsum1 ] ) follows by the markov chain . in , ( [ eq : a0 ] ) follows similar steps to the bound for the first inequality on the sum rate ; ( [ eq : a1 ] ) follows from ; and ( [ eq : b ] ) follows by ( [ eq : defn1 ] ) , which gives , and implies that . the proof of the outer bound under the strong interference condition is almost the same , with only a slight difference in the proof of the last inequality . this is because the first two inequalities hold for any dm - cic . under the strong interference condition , we can bound the mutual information terms in ( [ eq : f-3 ] ) as , in which ( [ eq : a6 ] ) follows by ( [ eq : defn2 ] ) , which gives , and implies that . the other steps are straightforward . finally , this proves that theorem [ thm2 ] holds both for the more capable and the strong interference dm - cic . we need to find the distribution that maximizes the rate region in theorem [ thm2 ] for the gaussian channel . in what follows , we show that jointly gaussian is optimum , i.e. , it provides the largest outer bound for the gaussian channel . by the maximum entropy theorem , the rhs of the third inequality is maximized when is gaussian , thus . similarly , , where denotes when the inputs are gaussian . the last inequality follows by the conditional version of the entropy power inequality ( epi ) , for which equality is achieved when . likewise , for the term we can write , where denotes when the inputs are gaussian , and the inequalities follow by the maximum entropy theorem and the conditional version of the epi , respectively . again , equality is achieved when all terms are gaussian . hence , all inequalities in the outer bound are maximized with jointly gaussian . now the problem is to find the optimum covariance matrix to maximize the bounds , i.e. , to determine the correlation coefficients among and . let , which are correlated gaussian random variables , with covariance matrix . since the covariance matrix is positive semidefinite , the determinant of this matrix must be nonnegative . that is , the inequality holds if , or equivalently , the covariance matrix is positive semidefinite if . now we evaluate the rate constraints defining the outer bound in theorem [ thm2 ] , where . similarly , where . to check if the inequality ( [ eq : ineq1 ] ) can hold with equality , we evaluate the term in which the covariance matrix is defined by ( [ eq : cov ] ) . since both the numerator and the denominator are nonnegative in ( [ eq : ineq1 ] ) , the argument of this function is either zero or positive . therefore , is achieved when , or equivalently , . note that implies to be a function of . keeping in mind that is the optimum condition for ( [ eq : ineq1 ] ) , we evaluate as follows , where the last inequality follows from ( [ eq : cond4 ] ) . interestingly , again turns out to be the optimum condition . from this , two values for are plausible , which are , respectively , . then , it is also straightforward to calculate the third bound in theorem [ thm2 ] to obtain . as a last step , we can evaluate and add the gaussian version of the standard inequality , , to these bounds ; the corresponding inequality is . as a result , the outer bound is as given in lemma [ lem1 ] . a. jovicic and p. viswanath , `` cognitive radio : an information - theoretic perspective , '' in proc . int . symp . inf . theory , july 2006 , pp . 2413 - 2417 . w. wu , s. vishwanath , and a.
arapostathis , `` capacity of a class of cognitive radio channels : interference channels with degraded message sets , '' _ ieee transactions on information theory _ , pp . 4391 - 4399 , nov . 2007 . i. maric , a. goldsmith , g. kramer , and s. shamai ( shitz ) , `` on the capacity of interference channels with one cooperating transmitter , '' _ european trans . _ , 10 , pp . 405 - 420 , april 2008 . n. liu , i. maric , a. goldsmith and s. shamai ( shitz ) , `` bounds and capacity results for the cognitive z - interference channel , '' in proc . int . symp . inf . theory ( isit 09 ) , seoul , korea , june 2009 . s. rini , d. tuninetti and n. devroye , `` new results on the capacity of the gaussian cognitive interference channel , '' forty - eighth annual allerton conference on communication , control , and computing , monticello , sept . 2010 . s. rini , d. tuninetti and n. devroye , `` inner and outer bounds for the gaussian cognitive interference channel and new capacity results , '' submitted to _ ieee trans . on inf . theory _ , october 2010 , available at http://arxiv.org/abs/1010.5806 .
|
this paper considers the cognitive interference channel ( cic ) with two transmitters and two receivers , in which the cognitive transmitter non - causally knows the message and codeword of the primary transmitter . we first introduce a discrete memoryless more capable cic , which is an extension of the more capable broadcast channel ( bc ) . using superposition coding , we propose an inner bound and an outer bound on its capacity region . the outer bound is also valid when the primary user is under strong interference . for the gaussian cic , this outer bound applies for , where is the gain of the interference link from the secondary user to the primary receiver . these capacity inner and outer bounds are then applied to the gaussian cognitive z - interference channel ( gczic ) , where only the primary receiver suffers interference . upon showing that jointly gaussian input maximizes these bounds for the gczic , we evaluate the bounds for this channel . the new outer bound is strictly tighter than other outer bounds on the capacity of the gczic at strong interference ( ) . in particular , the outer bound coincides with the inner bound for and thus establishes the capacity of the gczic in this range . for such an , superposition encoding at the cognitive transmitter and successive decoding at the primary receiver are capacity - achieving .
|
the set cover problem is the following : given a ground set of elements and a collection of subsets of , try to find a minimum number of subsets in such that . if we add an additional constraint that all subsets in the solution are pairwise disjoint , then the set cover problem becomes the mutually exclusive set cover problem . if we further assign each subset in a real - number weight and search for the solution with the minimum weight , i.e. , the sum of weights of subsets in the solution is minimized , then the problem becomes the weighted mutually exclusive set cover problem . recently , the weighted mutually exclusive set cover problem has found important applications in cancer studies to identify driver mutations , i.e. , somatic mutations that cause cancers . somatic mutations change the structures ( and therefore the functions ) of signaling proteins and thus perturb cancer pathways that regulate the expressions of genes in certain important biological processes , such as cell death , cell proliferation , etc . the perturbations within a common cancer pathway are often found to be mutually exclusive in a single cancer cell , i.e. , each tumor usually has only one perturbation on a given cancer pathway ( one perturbation is enough to cause the disease ; hence , there is no need to wait for another perturbation ) . modern lab techniques can identify somatic mutations and gene expressions of cancer cells . after preprocessing the data , we will obtain the following information for important biological processes , e.g. , cell death : 1 ) which cancer cells have disturbed the expressions of genes in the biological process ; 2 ) which genes have been mutated in those cancer cells ; 3 ) how likely each mutation is to be related to the given biological process ( i.e. , each mutation is assigned a real - number weight ) . the next step is finding a set of mutations such that each cancer cell has one and only one mutation in the solution set ( mutually exclusive ) and the sum of weights of all genes in the solution set is minimized , which is the weighted mutually exclusive set cover problem . while there is not much research on the mutually exclusive set cover or the weighted mutually exclusive set cover problems , the set cover problem has been paid much attention . the set cover problem , which is equivalent to the hitting set problem , is a fundamental np - hard problem among karp's 21 np - complete problems . one research direction for the set cover problem is approximation algorithms , e.g. , papers gave polynomial - time approximation algorithms that find solutions whose sizes are at most times the size of the optimal solution , where is a constant . the second direction is using , the number of subsets in the solution , as the parameter to design fixed - parameter tractable ( fpt ) algorithms for the equivalent problem , the hitting set problem . those algorithms have a constraint that each element in is included in at most subsets in , i.e. ,
the sizes of all subsets in the hitting set problem are upper bounded by ; it is also called the -hitting set problem . for example , paper gave an algorithm for the -hitting set problem , and paper further improved the time complexity to . the third direction is designing algorithms that use as the parameter , under the condition that is much less than . papers designed algorithms with time complexities of for the problem . the paper also extended the algorithm to solve the weighted mutually exclusive set cover problem with the same time complexity . paper improved the time complexity to under the condition that at least elements in are included in at most subsets in . this algorithm can also be extended to the weighted mutually exclusive set cover problem with the same time complexity . however , in the application to cancer study , neither is less than , nor is each element in included in a bounded number of subsets in . hence , there is a need to design new algorithms . in this paper , we will design a new algorithm that uses as the parameter ( in the application to cancer study , is smaller than , where can be as large as several hundred ) . trivially , if using as the parameter , we can solve the problem in time , where the algorithm basically just tests every combination of subsets in . to the best of our knowledge , there is no algorithm that is better than the trivial algorithm when using as the parameter . this paper will give the first non - trivial algorithm , with a time complexity of , to solve the weighted mutually exclusive set cover problem . we have tested this algorithm in the cancer study , and the program can finish the computation practically when is less than 100 . the formal definition of the weighted mutually exclusive set cover problem is : given a ground set of elements , a collection of subsets of , and a weight function , if such that , and for any , then we say is a mutually exclusive set cover of and is the weight of ; the goal of the problem is to find a mutually exclusive set cover of with the minimum weight , or report that no such solution exists . as we have not found a proof of np - hardness for the weighted mutually exclusive set cover problem , in this section we will prove that the mutually exclusive set cover problem is np - hard , and thus prove that the weighted mutually exclusive set cover problem is np - hard . we will prove the np - hardness of the mutually exclusive set cover problem by reducing another np - hard problem , the maximum set packing problem , to it . recall that the maximum set packing problem is : given a collection of subsets , try to find an such that the subsets in are pairwise disjoint and is maximized . the mutually exclusive set cover problem is np - hard . let be an instance of the maximum set packing problem , where . we create an instance of the mutually exclusive set cover problem such that : * , where for all ; * , where , , and . next , we will prove that if is a solution of the mutually exclusive set cover problem , then is a solution of the maximum set packing problem , where . thus we will prove that the time to solve the maximum set packing problem is bounded by the total time of transforming the maximum set packing problem into the mutually exclusive set cover problem and of solving the mutually exclusive set cover problem . therefore , the mutually exclusive set cover problem is np - hard .
as the subsets in are pairwise disjoint , it is obvious that the subsets in are pairwise disjoint . hence , if we suppose that is not the solution of the maximum set packing problem , then there must exist a such that the subsets in are pairwise disjoint and . thus we can make a new solution of the mutually exclusive set cover problem such that includes and other subsets in and . if we let and ( note : any , which is not covered by a subset in , needs subsets in to cover it ; any , which is not covered by a subset in , needs a subset in to cover it ) , then , and therefore , i.e. , is a solution with fewer subsets in , which contradicts the assumption that is the solution of the mutually exclusive set cover problem . hence , is a solution of the maximum set packing problem . in this section , we will introduce our new algorithm to solve the weighted mutually exclusive set cover problem . let be an instance of the weighted mutually exclusive set cover problem . we can use a bipartite graph to represent , such that all nodes on one side are subsets in while nodes on the other side are elements in , and if an element of is in subset , i.e. , , then an edge is added between and . for convenience , let us introduce some notation . figure [ fig_1 ] can help to understand and remember the following notation . for any , let , , . for any in , let , , .
* input : * an instance of the weighted mutually exclusive set cover problem and two variables , , where is a global variable that keeps the best solution .
* output : * a minimum weight mutually exclusive set cover or `` no solution '' .
1 if then
1.1 if then replace with ;
2 find such that is minimized ;
3 if then return `` no solution '' ;
4 if then wmes - cover ;
5 if for all then
5.1 if there exists such that then
5.1.1 wmes - cover ;
5.1.2 else return `` no solution '' ;
6 if then // suppose ; note that and .
6.1 wmes - cover ;
6.2 wmes - cover ; // ( note : )
6.3 if there exists a such that then
6.3.1 let such that and ;
6.3.2 if then // ( note : )
6.3.2.1 find any ;
6.3.2.2 wmes - cover ;
6.3.2.3 wmes - cover ;
6.3.2.4 else find any ;
6.3.2.5 wmes - cover ;
6.3.2.6 wmes - cover ;
6.3.3 else find a such that is maximized ;
6.3.4 find a ;
6.3.5 wmes - cover ;
6.3.6 wmes - cover ;
the main algorithm , algorithm-1 , is shown in figure [ algorithm_main ] . basically , algorithm-1 first finds an with minimum degree and then branches at one subset in ( such as in steps 6.2.2 and 6.2.3 ) . for convenience , if , then we say that algorithm-1 is doing a -branch . because of steps 3 , 4 , and 5 , when the program arrives at step 6 , we must have : 1 ) ; 2 ) for any , ; 3 ) there exists a such that . algorithm-1 basically searches for the solution by going through a search tree ; hence , once we know the number of leaves in the search tree , we obtain the time complexity of algorithm-1 . next , we will estimate the number of leaves in the search tree by studying the different cases of branching . we begin with the -branch . [ main_pr_2 ] the search tree has at most leaves if only the 2-branches are applied in algorithm-1 . suppose that and such that . let . in the case of , let .
in the branches of choosing either or into the solution , if is covered , then will be removed from ; or else , if is not covered yet , then will be chosen into the solution in order to cover ( note : after are removed , in the new instance ( at lines 6.1.1 and 6.1.2 of algorithm-1 ) ; thus , will be included into the solution in the next call of algorithm-1 in this branch ) . hence , in any case , subsets in will be removed . letting be the number of leaves in the search tree when , we will obtain the following recurrence relation . the characteristic equation of this recurrence relation is ; hence , we will have . in the case of , we consider the following sub - cases . _ sub - case 1_. suppose , and . then at least and will be removed from for the branch of choosing into the solution ; at least , , and all subsets ( at least two ) in will be removed for the branch of choosing into the solution . thus the recurrence relation is , which leads to . _ sub - case 2_. suppose . then in either branch , is covered by or , which is chosen into the solution . hence , , and all subsets ( at least two ) in will be removed from . thus we will obtain the recurrence relation , which leads to . by considering all the above cases , we obtain that . now , we consider the case of doing a -branch . remember that when algorithm-1 is doing a -branch , for all . [ main_pr_3 ] the search tree has at most leaves if only the -branches for are applied in algorithm-1 . the cases of -branches are considered in the last proposition . now we consider the cases of -branches . suppose that and such that . let . if , then ( as ) . let . we further consider the following sub - cases . _ sub - case 1_. suppose . let . algorithm-1 branches at . branch one includes into the solution ; thus , will be removed . this will further make . hence , will also be included into the solution . in total , in this branch , we will remove at least subsets from . in branch two , we will exclude from the solution . then either or must be included into the solution . thus is covered by or , and will not be in the solution . therefore , in this branch , we know that at least and will be removed . so we will obtain the recurrence relation , which leads to . _ sub - case 2_. suppose . then will not be in the solution , and any one of ( one and only one of them must be included into the solution to cover ) will cover . algorithm-1 will branch at any one of . without loss of generality , we branch at . in the branch of including into the solution , will be removed , which will in total remove at least subsets . in the branch of excluding from the solution , will be removed . thus subsets will be removed . we will obtain the following recurrence relation , which leads to . in the case of , let . algorithm-1 branches at . in the first branch , is included into the solution . then and at least subsets in will be removed . in the second branch , is excluded , which will make in the new instance ; hence , in this branch , a -branch will follow . thus , even considering the worst case of the -branch ( the recurrence relation ( 2 ) ) , we will have , which will lead to . from all the above cases and proposition [ main_pr_2 ] , we will have . let us consider the case of doing a -branch for . [ main_pr_4 ] the search tree in algorithm-1 has at most leaves . we only need to consider the cases of -branches for . suppose that and such that . let . in the case of , can only be or . _ sub - case 1_.
suppose . then there is one and only one subset in . without loss of generality , we suppose . algorithm-1 will branch on such that in the branch of including into the solution , all subsets in and one subset in will be removed ( i.e. , in this branch , at least subsets will be removed ) ; in the branch of excluding from the solution , one subset in will be included into the solution , will be covered , and the only subset in will be removed ( i.e. , in this branch , two subsets will be removed ) . therefore , we will have the following recurrence relation , which leads to . _ sub - case 2_. suppose . without loss of generality , we suppose that algorithm-1 branches on . then it is easy to see that we will have the following recurrence relation , which leads to . in the case of , suppose and algorithm-1 branches on . then in the branch of including into the solution , all subsets in and will be removed ( at least subsets will be removed ) . in the branch of excluding from the solution , at least one subset will be removed . hence , we will have the recurrence relation , which leads to . considering all the above cases , proposition [ main_pr_2 ] , and proposition [ main_pr_3 ] , we have . [ main_th_2 ] the weighted mutually exclusive set cover problem can be solved by an algorithm with a time complexity of . let be an instance of the weighted mutually exclusive set cover problem , where is a ground set of elements , is a collection of subsets of , and is the weight function . now we prove that the problem can be solved by algorithm-1 in time . the correctness of the algorithm is easy to understand . if there is an such that , then can not be covered by any subset in . thus , the problem has no solution . step 3 of algorithm-1 deals with this situation . if , for any given , , then there exists one and only one subset in that covers , i.e. , must be included into the solution . thus and will be removed from the problem . this situation is dealt with in step 4 . if for all in , , then can only be covered by subset(s ) in . by the exclusivity , at most one subset in can be chosen into the solution . thus , if there is a subset in such that , then algorithm-1 will include into the solution , or else the problem has no solution . step 5 of algorithm-1 deals with this situation . after
if we further notice that the time to process each node is bounded by , then the more accurate time complexity of the algorithm is .in this paper , we first proved that the weighted mutually exclusive set cover problem is np - hard .then we designed the first non - trivial algorithm , which uses the as parameter , with a time complexity of for the problem .the weighted mutually exclusive set cover problem has been used to find the driver mutations in cancers .our new algorithm can find the optimal solution for the problem , which is better than solutions found by the heuristic algorithms in the previous research .the exclusivity is the extreme case . in practical applications, a cancer cell may have more than one mutation to perturb a common pathway .hence , a modified model is finding a set of mutations with minimum weight sum such that each cancer cell has at least one and at most t ( t=2 or 3 ) mutations in the solutions , which leads to the small overlapped set cover problem .also , on application , some mutations in cancer cells may not be detected because of errors .thus , it is not always ideal to find a solution mutations that cover all cancer cells .a modified model is finding a set of mutually exclusive mutations that cover at least percent ( or ) of cancer cells , which leads to the maximal set cover problem .our next research will design efficient algorithms for above two new problems . c. miller ,s. settle , e. sulman , k. aldape , a. milosavljevic , discovering functional modules by identifying recurrent and mutually ecxlusive mutational patterns in tumors , bmc medical genomics , 4 , pp .34 , 2011 .
|
in this paper , we will introduce an exact algorithm with a time complexity of for the weighted mutually exclusive set cover problem , where is the number of subsets in the problem . this problem has important applications in recognizing mutation genes that cause different cancer diseases . department of biomedical informatics , university of pittsburgh , pittsburgh , pa 15219 , usa email : songjian.edu , xinghua.edu
|
blind quantum computation ( bqc ) is a new type of quantum computation model which can release the client who does not have enough knowledge and sophisticated technology to perform the universal quantum computation .a complete quantum computation comprises two parts .one is the client , say alice , who has a classical computer and some ability of quantum operation , or she may be completely classical .the other is the fully - fledged quantum computer server owned by bob .the first bqc protocol was proposed by childs in 2005 .it requires the standard quantum circuit model . in his protocol ,bob needs to perform the quantum gates and alice requires the quantum memory . in 2006 , arrighi and salvail proposed another bqc protocol where alice needs to prepare and measure multiqubit entangled states .it is cheat sensitive for bob obtaining some information , if he does not mind being caught . in 2009 ,broadbent , fitzsimons , and kashefi proposed a different bqc model ( bfk protocol ) based on the one - way quantum computation . in their protocol ,alice only requires to generate the single - qubit quantum state and a classical computer .she does not need the quantum memory .moreover , bob can not learn anything from alice s input , output and her algorithm , which makes it unconditionally secure .inspired by the bfk protocol , several bqc protocols have been proposed .for instance , morimae _ et al . _ proposed two bqc protocols based on the affleck - kennedy - lieb - tasaki state .fitzsimons and kashefi constructed a new verifiable bqc protocol based on a new class of resource states .recently , morimae and fujii proposed a bqc protocol in which alice only makes measurements .the experimental realization of the bfk protocol based on the optical system was also reported .actually , the aim of the bqc is to let the client who does not have enough sophisticated quantum technology and knowledge perform the quantum computation .therefore , the alice s device and operation is more classical , the protocol is more successful . in bfk protocol , if bob only has one service , alice still needs some quantum technology . on the other hand ,if two servers are provided which are owned by bob1 and bob2 , respectively , alice does not require any quantum technology .she can complete the quantum computation task with a classical computation , resorting to the classical communication .this protocol is called double - server bqc protocol . in double - server bqcprotocol , bob1 and bob2 should obey a strong assumption that they can not communicate with each other . if not , they can learn the computation information from alice and make the computation insecure . before starting the bqc protocol, they should share the maximally entangled bell states .unfortunately , in a realistic environment , the noisy channel will greatly degrade the quality of the entanglement and it will make the whole protocol become a failure .therefore , they should recover the mixed entangled states into the maximally entangled states .entanglement purification is the standard way for distilling the high quality entangled state from low quality entangled state , which has been widely discussed in current quantum communication . in 1996 , bennett _ et al ._ proposed the entanglement purification protocol ( epp ) based on the controlled - not gate . 
in 2001 , pan _proposed a novel epp with linear optics .there are some epps based on the nonlinear optics and hyperentanglement .unfortunately , in a standard epp , they all need the local operation and classical communication . as pointed out by morimae and fujii , it is not sevi - evident that the security of the double - server bqc protocol is guaranteed , when use the entanglement distillation protocol into the double - server blind protocol . .pbs is the polarization beam splitter .it can transmit the polarized photon and reflect the polarized photon . is the coherent state.,width=302 ] recently , morimae and fujii presented a secure entanglement distillation protocol based on the one - way hashing distillation method . in their protocol ,alice first randomly chooses a -bit string and sends it to two bobs , respectively .then each bob performs certain local unitary operation determined by . by measuring a qubit of the single pair ,alice can obtain a bit information from the remained mixed state ensembles .therefore , by repeating this protocol , they can obtain bits of information about the mixed states ensembles . at the end of distillation , they can share about pairs . in this paper, we will present another deterministic entanglement distillation protocol for secure double - server bqc protocol .the whole protocol is based on the optical system , as the photons are well controlled and manipulated . *this protocol is quite different from the one - way hashing distillation model and we resort to the hyperentanglement to complete the distillation . * after performing the protocol , alice can obtain the exact bell state deterministically , with the success probability of 100% , in principle , according to the bobs s measurement results , while she does not feedback any information to bobs , which makes this distillation absolutely secure .before we start to explain this protocol , let us introduce the distillation equipment shown in fig .1 . it is the quantum nondemolition ( qnd ) measurement with the cross - kerr nonlinearity . as pointed out by refs . , the hamiltonian of the system is . herethe ,( , ) are the creation and destruction operators of the signal ( probe ) mode . from fig .1 , if a single photon with vertical polarization ( ) in the spatial mode passes through the equipment , the polarization of the photon will be flipped to horizonal polarization ( ) by half - wave plate ( hwp ) and transmit through the polarization beam splitter ( pbs ) .the single photon combined with the coherent state will interact with the cross - kerr nonlinearity and become .it is shown that the single photon state is unaffected but the coherent state picks up a phase shift directly proportional to the number of the photons . by measuring the phase of the coherent state, one can construct a qnd measurement for the single photons .the basic principle of the double - server bqc protocol combined with distillation model is shown in fig .the source ( trust center ) first generates a pair of hyperentangled state in both polarization and spatial modes , which can be written as latexmath:[\[\begin{aligned } such state is distributed to bob1 and bob2 through the spatial modes , , and , respectively . * as pointed out by refs. 
during the transmission , the spatial entanglement is more robust than polarization entanglement . certainly , as pointed out by simon and pan , the energy - time entanglement , which is more robust than polarization entanglement and allows one to go to longer distances , can also be used to perform this protocol . * the noisy channel will lead the polarization part to become a mixed state as . here , and are the polarized bell states with . the whole system can be described as a probabilistic mixture of four pure states : with a probability of , pair is in the state ; with a probability of , in the state ; with a probability of , in the state ; and with a probability of , in the state . here , with . after passing through the qnds , the state combined with the two coherent states and evolves as . the and are the coherent states used in the qnd for bob1 and bob2 , respectively . on the other hand , the state combined with the two coherent states and evolves as . if they consider the other items and , they can obtain similar results . then bob1 and bob2 both measure the phase of the coherent state with the x - quadrature measurement , which makes the indistinguishable . therefore , both bobs only have two different results , say or 0 . after the measurement , they both send their measurement results to alice by classical communication . finally , alice can judge the exact bell state according to the measurement results . in detail , if the measurement results are the same , say both or 0 , they will obtain , with the probability of . otherwise , if the measurement results are different , say bob1 is and bob2 is 0 , or bob1 is 0 and bob2 is , they will obtain , with the probability of . during the whole protocol , the two bobs do not need to exchange their measurement results , and they do not even know the identity of the remaining bell state . they can only judge the output modes according to the different phase shifts . if the coherent state picks up no phase shift , the photon must be in the upper output modes . otherwise , if the coherent state picks up a phase shift , the photon must be in the lower output modes . combined with the entanglement distillation , the double - server bqc protocol runs as follows : step 1 : the entanglement source emits the hyperentangled pairs to bob1 and bob2 . they share pairs of mixed states because of the noise . step 2 : both bobs perform the distillation protocol and send the measurement results to alice . the purified states are . step 3 : the following steps are the same as in the traditional bqc protocol . alice sends bob1 classical messages , where is randomly chosen by alice from . in detail , if alice obtains , she randomly sends bob1 , and if she obtains , she randomly sends . step 4 : bob1 measures his qubit of the bell state in the basis . here we denote and . after bob1 performs the measurement , he tells alice the results with . step 5 : alice and bob2 start the single - server bqc protocol . the traditional entanglement distillation protocols are unsuitable for the double - server bqc protocol , because message exchanges between the two bobs must be done through alice's mediation . in this way , bob1 can indirectly send a message to bob2 , which will make the computation insecure . interestingly , this protocol does not require mediation . alice can judge the deterministic bell state according to the measurement results coming from the two bobs and start the bqc protocol subsequently .
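the classical post - processing that alice performs in step 2 can be illustrated with a short monte - carlo sketch in python . it abstracts the quantum optics into outcome sampling : each bob reports a phase outcome ( pi or 0 ) from his x - quadrature measurement , and alice infers which bell state survives from whether the two reports agree , as described above . the mixture weight and the state labels are placeholders , since the explicit kets and probabilities are elided in the text .

```python
import random

def simulate_distillation(f_same, trials=100_000, seed=1):
    """abstract simulation of the deterministic distillation round.

    f_same: probability that the pair is in a mixture component for which
    the two QND phase outcomes agree (both pi or both 0); with probability
    1 - f_same the outcomes differ. alice maps agreement -> a 'phi-type'
    bell pair and disagreement -> a 'psi-type' bell pair.
    returns the empirical fractions of alice's two inferences.
    """
    rng = random.Random(seed)
    counts = {"phi-type": 0, "psi-type": 0}
    for _ in range(trials):
        same = rng.random() < f_same
        # each bob reports only a phase outcome to alice over a classical
        # channel; no bob-to-bob communication is needed
        bob1 = rng.choice(("pi", "0"))
        bob2 = bob1 if same else ("0" if bob1 == "pi" else "pi")
        counts["phi-type" if bob1 == bob2 else "psi-type"] += 1
    return {k: v / trials for k, v in counts.items()}

print(simulate_distillation(f_same=0.8))  # ~{'phi-type': 0.8, 'psi-type': 0.2}
```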
during the whole distillation , alice does not feed back any messages to either bob . as long as the two bobs learn nothing from alice and can not exchange messages with each other , the distillation is essentially absolutely secure . on the other hand , both bobs may have evil intentions and send wrong messages to alice . in this way , alice will obtain wrong information about the bell state , which will induce erroneous computation . however , both bobs still learn nothing from alice . using spatial entanglement to purify polarization entanglement has been studied by several groups . however , their protocols are all unsuitable for the bqc protocol . in ref . , the bit - flip error can be well corrected by choosing the same output modes . however , they still require traditional entanglement purification to correct the phase - flip error . in refs . , with local operations and classical communication , both bit - flip errors and phase - flip errors can be corrected in one step . but the photon pair is destroyed because of the post - selection principle . in this protocol , the purified photon pair can be retained , resorting to the qnd measurement . moreover , both bobs do not need to exchange classical information , which makes it extremely suitable for the double - server bqc protocol . in a practical realization , they should generate the hyperentanglement and make the spatial entanglement stable . the generation of hyperentanglement with both spatial and polarization degrees of freedom can be well realized with the spontaneous parametric down - conversion ( spdc ) source . the pump pulse of ultraviolet light passes through a -barium borate crystal ( bbo ) . it can produce one pair of polarization - entangled photons with probability , and is then reflected and traverses the crystal a second time , producing the same photon pairs with the same order of magnitude . * this protocol relies on the hypothesis that the spatial entanglement does not suffer from noise . though spatial entanglement is more robust than polarization entanglement , it will still be polluted in a noisy channel . interestingly , it usually suffers from phase noise , while phase noise can be well controlled with current technology . moreover , experiments on phase - noise measurement showed that the phase in long fibers ( several tens of km ) remains stable at an acceptable level for times on the order of 100 . * the other technological challenge may come from the cross - kerr nonlinearity . though many quantum information processing schemes based on the cross - kerr nonlinearity have been proposed , it is still a controversial topic . shapiro showed that single - photon kerr nonlinearity may not help quantum computation . gea - banacloche also argued that a large phase shift via a `` giant '' kerr effect with single - photon wave packets is impossible . as pointed out by kok _ et al . _ , the kerr phase shift is only in the optical single - photon regime , and a clean cross - kerr nonlinearity is quite a controversial assumption with current technology . fortunately , hofmann showed that a large phase shift of can be obtained with a single two - level atom in a one - sided cavity .
using weak measurement , it is possible to amplify a cross - kerr phase shift to an observable value . the theoretical work of zhu and huang also showed that giant cross - kerr nonlinearities can be obtained in a double - quantum - well structure with a four - level , double - type configuration . the `` giant '' cross - kerr effect with a phase shift of 20 degrees per photon has been observed in a recent experiment . in conclusion , we have presented a deterministic entanglement distillation protocol for the double - server bqc protocol . after performing the protocol , the parties can obtain the pure maximally entangled state with a success probability of 100% in principle . bob1 and bob2 do not communicate with each other , and they also learn nothing from alice . this makes the protocol unconditionally secure . this work is supported by the national natural science foundation of china under grant no . 11104159 , university natural science research project of jiangsu province under grant no . 13kjb140010 , the open research fund of key lab of broadband wireless communication and sensor network technology , nanjing university of posts and telecommunications , ministry of education ( no . nykl201303 ) , scientific research foundation of nanjing university of posts and telecommunications under grant no . ny213054 , and a project funded by the priority academic program development of jiangsu higher education institutions . c. wang , y. zhang and g. s. jin , phys . rev . a * 84 * , 032307 ( 2011 ) ; c. wang , y. zhang , and r. zhang , opt . express * 19 * , 25685 ( 2011 ) . d. gonta and p. van loock , phys . rev . a * 84 * , 042303 ( 2011 ) ; d. gonta and p. van loock , phys . rev . a * 86 * , 052312 ( 2012 ) . j. t. barreiro , n. k. langford , n. a. peters , and p. g. kwiat , phys . rev . lett . * 95 * , 260501 ( 2005 ) .
|
blind quantum computation ( bqc ) provides an efficient method for the client who does not have enough sophisticated technology and knowledge to perform universal quantum computation . the single - server bqc protocol requires the client to have some minimum quantum ability , while the double - server bqc protocol makes the client's device completely classical , resorting to the pure and clean bell state shared by the two servers . in this paper , we provide a deterministic entanglement distillation protocol in a practical noisy environment for the double - server bqc protocol . this protocol can obtain the pure maximally entangled bell state with a success probability of 100% in principle . the distilled maximally entangled states can be retained to perform the bqc protocol subsequently . the parties who perform the distillation protocol do not need to exchange classical information , and they learn nothing from the client . this makes the protocol unconditionally secure and suitable for the current bqc protocol .
|
microgrid is a distributed electric power system that can autonomously coordinate local generations and demands in a dynamic manner . as illustrated in fig . [ fig : microgrid ] , modern microgrids often consist of distributed renewable energy generations ( _ e.g. , _ wind farms ) and co - generation technology ( _ e.g. , _ supplying both electricity and heat locally ) . microgrids can operate in either grid - connected mode or islanded mode . there have been worldwide deployments of pilot microgrids , such as in the us , japan , greece , and germany . microgrids are more robust and cost - effective than the traditional approach of centralized grids . they represent an emerging paradigm of future electric power systems that addresses the following two critical challenges . _ power reliability _ . providing reliable and quality power is critical both socially and economically . in the us alone , while the electric power system is 99.97% reliable , each year the economic loss due to power outages is at least .
the parameter captures the maximum price discrepancy between using local generation and external sources to supply energy . we also prove that the above competitive ratio is the best possible for any deterministic online algorithm . the above competitive ratio is attained without any future information of demand and supply . in sec . [ sec : lookahead ] , we then extend to intelligently leverage limited look - ahead information , such as near - term demand or wind forecast , to further improve its performance . in particular , achieves an improved competitive ratio of when it can look into a future window of size . here , the function captures the benefit of look - ahead . table [ tab : notations ] lists the following notation :
* the sunk cost per interval of running a local generator ( /watt )
* the price per unit of heat obtained externally using natural gas ( /watt )
* the joint input at time :
* the on / off status of the -th local generator ( on as `` 1 '' and off as `` 0 '' )
* the power output level when the -th generator is on ( watt )
* the heat level obtained externally by natural gas ( watt )
* the power level obtained from the electricity grid ( watt )
note : we use bold symbols to denote vectors , _ e.g. , _ . brackets indicate the units .
* external power from electricity grid * : the microgrid can obtain external electricity supply from the central grid for unbalanced electricity demand in an on - demand manner .we let the spot price at time from electricity grid be .we assume that .again , we do not rely on any specific stochastic model on .* local generators * : the microgrid has units of homogeneous local generators , each having an maximum power output capacity . based on a common generator model , we denote as the startup cost of turning on a generator .startup cost typically involves the heating up cost ( in order to produce high pressure gas or steam to drive the engine ) and the time - amortized additional maintenance costs resulted from each startup ( _ e.g. , _ fatigue and possible permanent damage resulted by stresses during startups ) .we denote as the sunk cost of maintaining a generator in its active state per unit time , and as the operational cost per unit time for an active generator to output an additional unit of energy .furthermore , a more realistic model of generators considers advanced _ operational constraints _ :1 . _ minimum on / off periods _ : if one generator has been committed ( resp . , uncommitted ) at time , it must remain committed ( resp . , uncommitted ) until time ( resp . , ) ._ ramping - up / down rates _: the incremental power output in two consecutive time intervals is limited by the ramping - up and ramping - down constraints .most microgrids today employ generators powered by gas turbines or diesel engines .these generators are `` fast - responding '' in the sense that they can be powered up in several minutes , and have small minimum on / off periods as well as large ramping - up / down rates . meanwhile , there are also generators based on steam engine , and are `` slow - responding '' with non - negligible , , and small ramping - up / down rates .* co - generation and heat demand * : the local chp generators can simultaneously generate electricity and useful heat . let the heat recovery efficiency for co - generation be , _i.e. , _ for each unit of electricity generated , unit of useful heat can be supplied for free .alternatively , without co - generation , heating can be generated separately using external natural gas , which costs per unit time .thus , is the saving due to using co - generation to supply heat , provided that there is sufficient heat demand .we assume .in other words , it is cheaper to generate heat by natural gas than purely by generators ( if not considering the benefit of co - generation ) .note that a system with no co - generation can be viewed as a special case of our model by setting .let the heat demand at time be . to keep the problem interesting , we assume that .this assumption ensures that the minimum co - generation energy cost is cheaper than the maximum external energy price .if this was not the case , it would have been optimal to always obtain power and heat externally and separately .we divide a finite time horizon into discrete time slots , each is assumed to have a unit length without loss of generality .the microgrid operational cost in ] . throughout this paper , we set the initial condition , . 
we formally define the * mcmp * as a mixed integer programming problem , given electricity demand , heat demand , and grid electricity price as time - varying inputs , with the constraints required to hold for all ] , where is the indicator function and represents the set of non - negative numbers . the constraints are similar to those in the power system literature and capture the operational constraints of generators . specifically , constraint captures the maximal output of the local generator . constraints - ensure that the demands of electricity and heat can be satisfied , respectively . constraints - capture the maximum ramping - up / down rates . constraints - capture the minimum on / off period constraints ( note that they can also be expressed in linear but hard - to - interpret forms ) . this section considers the fast - responding generator scenario . most chp generators employed in microgrids are based on gas or diesel . these generators can be fired up in several minutes and have high ramping - up / down rates . thus , at the timescale of energy generation ( usually tens of minutes ) , they can be considered as having no minimum on / off periods and no ramping - up / down rate constraints . that is , , , , . we remark that this model captures most microgrid scenarios today . we will extend the algorithm developed for this responsive generator scenario to the general generator scenario in sec . [ sec : slowrep ] . to proceed , we first study a simple case where there is one unit of generator . we then extend the results to units of homogeneous generators in sec . [ sec : ngens ] . we first study a basic problem that considers a single generator . thus , we can drop the subscript ( the index of the generator ) when there is no source of confusion : . note that even this simpler problem is challenging to solve . first , even to obtain an offline solution ( assuming complete knowledge of future information ) , we must solve a mixed integer optimization problem . further , the objective function values across different slots are correlated via the startup cost term ^+ ] . otherwise , the minimum cap ( ) and maximum cap ( 0 ) will apply to retain within ] . we divide the entire time horizon ] into disjoint parts called _ critical segments _ : $ [ 1 , t_{1}^{c } ] , [ t_{1}^{c}+1 , t_{2}^{c } ] , [ t_{2}^{c}+1 , t_{3}^{c } ] , \ldots , [ t_{k}^{c}+1 , t]$ . the critical segments are characterized by a set of _ critical points _ : . we define each critical point along with an auxiliary point , such that the pair satisfies the following conditions : * ( boundary ) : either ( and ) or ( and ) . * ( interior ) : for all . in other words , each pair corresponds to an interval where goes from - to or from to - , without reaching the two extreme values inside the interval . for example , and in fig . [ fig : example1 ] are two such pairs , while the corresponding critical segments are and . it is straightforward to see that all are uniquely defined , thus critical segments are well - defined . see fig . [ fig : example1 ] for an example . [ figure fig : example1 : an example with , , and . in the top two rows , we have , . the price is chosen as a constant in . in the next row , we compute according to and . for ease of exposition , in this example we set the parameters so that increases if and only if and . the solutions , and in the bottom rows are obtained according to algorithms [ alg : chase0 ] and [ alg : chase - lk ] , respectively . ]
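the clipped cumulative process and its critical segments can be computed mechanically , as the following python sketch shows . the per - slot increment compares serving the demand with the generator on versus off ; its exact closed form is given in eq . ( [ eqn : delta - definition ] ) and is not reproduced here , so the sketch takes the increments as input . segment boundaries are the slots where the clipped process first arrives at the opposite cap , matching the boundary / interior conditions above . the default initial value is our assumption , since the paper's initial condition is elided .

```python
def clipped_process(deltas, beta, delta0=None):
    """compute Delta(t) = min(0, max(-beta, Delta(t-1) + delta(t))) and the
    candidate critical points where it completes a traversal between the
    caps -beta and 0.

    deltas: per-slot increments delta(t) (closed form elided here);
    beta: startup cost; delta0: initial value, defaulting to -beta.
    """
    Delta = -beta if delta0 is None else delta0
    last_extreme = Delta if Delta in (0.0, -beta) else None
    trace, critical = [], []
    for t, d in enumerate(deltas, start=1):
        # clipping keeps Delta exactly at a cap once it would overshoot
        Delta = min(0.0, max(-beta, Delta + d))
        trace.append(Delta)
        if Delta in (0.0, -beta) and Delta != last_extreme:
            critical.append(t)      # first arrival at the opposite extreme
            last_extreme = Delta
    return trace, critical
```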
once the time horizon ] is partitioned into critical segments , we further classify them into three types : * _ type-1 _ : ] , if and ; * _ type - end _ ( also called _ type-3 _ ) : ] . for each type-1 segment , will be no less than the startup cost . hence , we can safely turn on the generator at . similarly , for each type-2 segment , we can turn off the generator at the beginning of the segment . ( we note that our offline solution turns on / off the generator at the beginning of each segment because all future information is assumed to be known . ) the optimal solution is easy to compute . more importantly , the insights help us design the online algorithms . denote an online algorithm for by . we define the competitive ratio of by : . recall the structure of the optimal solution : once the process enters a type-1 ( resp . , type-2 ) critical segment , we should set ( resp . , ) . however , the difficulty lies in determining the beginnings of type-1 and type-2 critical segments without future information . fortunately , as illustrated in fig . [ fig : example1 ] , it is certain that the process is in a type-1 critical segment when reaches for the first time after hitting . this observation motivates us to use the algorithm , which is given in algorithm [ alg : chase0 ] . if , maintains ( since we do not know whether a new segment has started yet ) . however , when ( resp . ) , we know for sure that we are inside a new type-1 ( resp . , type-2 ) segment . hence , sets ( resp . ) . intuitively , the behavior of is to track the offline optimal in an online manner : we change the decision only after we are certain that the offline optimal decision has changed . algorithm [ alg : chase0 ] : find , set , , and according to , and return . even though is a simple algorithm , it has a strong performance guarantee , as given by the following theorem . [ thm : chase - competitive - ratio ] the competitive ratio of satisfies , where , defined in ( [ eq : alpha_def ] ) , captures the maximum price discrepancy between using local generation and external sources to supply energy . refer to appendix [ subsec : chase - competitive - ratio ] . * remark * : ( i ) the intuition that is competitive can be explained by studying its worst - case input shown in fig . [ fig : worstcase ] . the demands and prices are chosen in a way such that in the interval ] decreases from to . we see that in the worst case , never matches . but even in this worst case , pays only more than the offline solution on ] . [ cor : chase ] achieves the asymptotically optimal competitive ratio of any deterministic online algorithm , as . * remark * : at the beginning of sec . [ sec : singlefastrep ] , we have discussed the structural differences between online server scheduling problems and ours . in what follows , we summarize the solution differences among these problems . note that we share similar intuitions with : both make switching decisions when the _ penalty cost _ equals the switching cost . the significant difference , however , is when to reset the penalty counting . in , the penalty counting is reset when the demand arrives . in contrast , in our solution , we need to reset the penalty counting only when , given in the non - trivial form in ( [ eqn : delta - definition ] ) , touches 0 or . this particular way of resetting the penalty counting is critical for establishing the optimality of our proposed solution . meanwhile , to compare with , the approach in does not explicitly count the penalty . furthermore , the online server scheduling problem in is formulated as a convex problem , while our problem is a mixed integer problem . thus , there is no known method to apply the approach in to our problem .
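a minimal python sketch of the resulting online rule follows directly from the description above : keep the previous decision while the clipped process is strictly between its caps , turn the generator on when it reaches 0 , and turn it off when it reaches . as before , the per - slot increments are taken as input because their closed form ( [ eqn : delta - definition ] ) is elided here , and the initial state is an assumption of ours .

```python
def chase(deltas, beta, y0=0):
    """single-generator CHASE: returns the on/off schedule y(t).

    deltas: per-slot increments delta(t) of the cumulative cost difference
    (closed form elided here); beta: startup cost; y0: initial on/off state,
    an assumption since the paper's initial condition is elided.
    """
    y = y0
    Delta = -beta if y0 == 0 else 0.0
    schedule = []
    for d in deltas:
        Delta = min(0.0, max(-beta, Delta + d))
        if Delta == 0.0:
            y = 1        # certainly inside a new type-1 segment: turn on
        elif Delta == -beta:
            y = 0        # certainly inside a new type-2 segment: turn off
        schedule.append(y)   # otherwise keep the previous decision
    return schedule

print(chase([0.6, 0.5, -0.2, -1.5, 0.3], beta=1.0))  # -> [0, 1, 1, 0, 0]
```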
we consider the setting where the online algorithm can predict a small window of the immediate future . note that returns to the case treated in section [ sec : online ] when there is no future information at all . consider again a type-1 segment ] . is defined in , and captures the benefit of looking ahead ; it monotonically increases from to 1 as increases . in particular , . refer to appendix [ subsec : chaselk - competitive - ratio ] . we replace by in and obtain an improved algorithm for the look - ahead setting , named . fig . [ fig : chase_ratio_with_lookahead ] shows the competitive ratio of as a function of and . [ figure fig : chase_ratio_with_lookahead : the competitive ratio of as a function of and . ] now we consider the general case with units of homogeneous generators , each having a maximum power capacity , startup cost , sunk cost , and per - unit operational cost . we define a generalized version of problem : . next , we will construct both offline and online solutions to in a divide - and - conquer fashion . we will first partition the demands into sub - demands for each generator , and then optimize the local generation _ separately _ for each sub - demand . note that the key is to correctly partition the demand so that the combined solution is still optimal . our strategy below essentially slices the demand ( as a function of ) into multiple layers from the bottom up ( see fig . [ fig : layers - example ] ) . each layer has at most units of electricity demand and units of heat demand . the intuition here is that the layers at the bottom exhibit the least frequent variations of demand . hence , by assigning each of the layers at the bottom to a dedicated generator , these generators will incur the least amount of switching , which helps to reduce the startup cost . more specifically , given , we slice them into layers :
\[\begin{aligned}
a^{{\rm ly\mbox{-}}n}(t ) & = \min\{l ,\ ; a(t ) - \textstyle{\sum}_{r=1}^{n-1 } a^{{\rm ly\mbox{-}}r}(t)\ } , \quad n \in [ 2 , n ] \\
h^{{\rm ly\mbox{-}}n}(t ) & = \min\{\eta \cdot l ,\ ; h(t ) - \textstyle{\sum}_{r=1}^{n-1 } h^{{\rm ly\mbox{-}}r}(t)\ } , \quad n \in [ 2 , n ] \\
a^{\rm top}(t ) & = \min\{l ,\ ; a(t ) - \textstyle{\sum}_{r=1}^{n } a^{{\rm ly\mbox{-}}r}(t)\ } \\
h^{\rm top}(t ) & = \min\{\eta\cdot l ,\ ; h(t ) - \textstyle{\sum}_{r=1}^{n } h^{{\rm ly\mbox{-}}r}(t)\ } \label{eq : demand_slicing - end}\end{aligned}\]
it is easy to see that the electricity demand satisfies and the heat demand satisfies . thus , each layer of sub - demand can be served by a single local generator if needed . note that can only be satisfied from external supplies , because they exceed the capacity of local generation . based on this decomposition of demand , we then decompose the * fmcmp * problem into sub - problems ( ) , each of which is an problem with input . we then apply the offline and online algorithms developed earlier to solve each sub - problem ( ) _ separately _ . by combining the solutions to these sub - problems , we obtain offline and online solutions to * fmcmp * . for the offline solution , the following theorem states that such a divide - and - conquer approach results in no optimality loss . [ thm : nofa - optimal ] suppose is an optimal offline solution for each ( ) . then defined as follows is an optimal offline solution for : . refer to appendix [ subsec : nofa - optimal ] .
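the layering in - is easy to implement ; the python sketch below slices a demand profile bottom - up , giving each of the layers at most units of electricity and units of heat , with the remainder going to the top slice . the handling of the first layer and of the top slice follows the obvious reading of the partially elided equations , so treat it as an illustration rather than a verbatim transcription .

```python
def slice_demands(a, h, N, L, eta):
    """slice electricity demand a(t) and heat demand h(t) into N
    per-generator layers plus a residual top part.

    a, h: lists indexed by slot; L: generator capacity; eta: heat recovery
    efficiency. returns (layers, top) with layers[n] = (a_n, h_n).
    """
    T = len(a)
    layers = [([0.0] * T, [0.0] * T) for _ in range(N)]
    a_top, h_top = [0.0] * T, [0.0] * T
    for t in range(T):
        ra, rh = a[t], h[t]            # residual demand not yet assigned
        for n in range(N):             # bottom-up: layer n takes at most
            ta = min(L, ra)            # L units of electricity and
            th = min(eta * L, rh)      # eta * L units of heat
            layers[n][0][t], layers[n][1][t] = ta, th
            ra, rh = ra - ta, rh - th
        a_top[t], h_top[t] = ra, rh    # beyond total local capacity:
    return layers, (a_top, h_top)      # served from external supplies only
```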
for the online solution, we also apply such a divide - and - conquer approach by using ( i ) a central demand dispatching module that slices and dispatches demands to individual generators according to - , and ( ii ) an online generation scheduling module sitting on each generator ( ) _ independently _ solving their own sub - problem using the online algorithm .the overall online algorithm , named , is simple to implement without the need to coordinate the control among multiple local generators . since the offline ( resp .online ) cost of is the sum of the offline ( resp .online ) costs of ( ) , it is not difficult to establish the competitive ratio of as follows .[ thm : chase - ng ] the competitive ratio of satisfies where ] is defined in .refer to appendix [ subsec : multipleratio ] .we next consider the slow - responding generator case , with the generators having non - negligible constraints on the minimum on / off periods and the ramp - up / down speeds .for this slow - responding version of * mcmp * , its offline optimal solution is harder to characterize than * fmcmp * due to the additional challenges introduced by the cross - slot constraints - . in the slow - responding setting, local generators can not be turned on and off immediately when demand changes .rather , if a generator is turned on ( resp . , off ) at time , it must remain on for at least ( resp . , ) time .further , the changes of must be bounded by and .a simple heuristic is to first compute solutions based on , and then modify the solutions to respect the above constraints .we name this heuristic and present it in algorithm [ alg : chase - general ] . for simplicity ,algorithm [ alg : chase - general ] is a single - generator version , which can be easily extended to the multiple - generator scenario by following the divide - and - conquer approach elaborated in sec . [ sec : ngens ] . ^{+}} ] return we now explain algorithm [ alg : chase - general ] and its competitive ratio . at each time slot , we obtain the solution of , including , as a reference solution ( line 1 ) . then in line 2 - 6 , we modify the reference solution s to our actual solution , to respect the constraints of minimum on / off periods . more specifically , we follow the reference solution s ( _ i.e. , _ ) _ if and only if _ it respects the minimum on / off periods constraints ( line 2 - 3 ) . otherwise , we let our actual solution s equal our previous slot s solution ( ) ( line 4 - 5 ) .similarly , we modify the reference solution s to our actual solution s , to respect the constraints on ramp - up / down speeds ( line 7 - 11 ) . atlast , in our actual solution , we use to compensate the supply and satisfy the demands ( line 12 - 13 ) . 
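a minimal sketch of the per-slot correction step of algorithm [alg:chase-general] as just described: the reference on/off decision is adopted only when the minimum on/off period allows a switch, and the reference output level is clipped to the ramp limits. the state variable `age' (slots elapsed since the last switch) and the simplified treatment of ramping during shutdown are assumptions made for brevity.

```python
def follow_reference(y_prev, age, y_ref, u_prev, u_ref,
                     T_on, T_off, R_up, R_down):
    # Adopt the fast-responding reference decision y_ref only if the
    # minimum on/off period of the current status has elapsed (lines 2-6).
    if y_ref != y_prev and age >= (T_on if y_prev == 1 else T_off):
        y, age = y_ref, 0
    else:
        y = y_prev
    # Clip the reference output level to the ramp limits (lines 7-11);
    # ramping during shutdown is ignored here for brevity.
    u = min(max(u_ref, u_prev - R_down), u_prev + R_up) if y == 1 else 0.0
    return y, age + 1, max(u, 0.0)   # residual demand is met externally
```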
in summary ,our actual solution is designed to be aligned with the reference solution as much as possible .we derive an upper bound on the competitive ratio of as follows .[ thm : slowratio ] the competitive ratio of is upper bounded by , where is defined in and refer to appendix [ subsec : slowratio ] .we note that when , , the above upper bound matches that of in theorem [ thm : chase - ng ] ( specifically the first term inside the min function ) .we evaluate the performance of our algorithms based on evaluations using real - world traces .our objectives are three - fold : ( i ) evaluating the potential benefits of chp and the ability of our algorithms to unleash such potential , ( ii ) corroborating the empirical performance of our online algorithms under various realistic settings , and ( iii ) understanding how much local generation to invest to achieve substantial economic benefit . * demand trace * : we obtain the demand traces from california commercial end - use survey ( ceus ) .we focus on a college in san francisco , which consumes about 154 gwh electricity and therms gas per year .the traces contain hourly electricity and heat demands of the college for year 2002 .the heat demands for a typical week in summer and spring are shown in fig .[ fig : trace ] .they display regular daily patterns in peak and off - peak hours , and typical weekday and weekend variations .* wind power trace * : we obtain the wind power traces from .we employ power output data for the typical weeks in summer and spring with a resolution of 1 hour of an offshore wind farm right outside san francisco with an installed capacity of 12mw . the net electricity demand , which is computed by subtracting the wind generation from electricity demand is shown in fig .[ fig : trace ]. the highly fluctuating and unpredictable nature of wind generation makes it difficult for the conventional prediction - based energy generation scheduling solutions to work effectively . *electricity and natural gas prices * : the electricity and natural gas price data are from pg&e and are shown in table [ tab : pg&e - tariffs ] . besides , the grid electricity prices for a typical week in summer and winter are shown in fig . [fig : trace ] . both the electricity demand and the price show strong diurnal properties : in the daytime , the demand and price are relatively high; at nights , both are low .this suggests the feasibility of reducing the microgrid operating cost by generating cheaper energy locally to serve the demand during the daytime when both the demand and electricity price are high . * generator model * : we adopt generators with specifications the same as the one in .the full output of a single generator is .the incremental cost per unit time to generate an additional unit of energy is set to be , which is calculated according to the natural gas price and the generator efficiency .we set the heat recovery efficiency of co - generation to be according to .we also set the unit - time generator running cost to be , which includes the amortized capital cost and maintenance cost according to a similar setting from .we set the startup cost equivalent to running the generator at its full capacity for about 5 hrs at its own operating cost which gives .in addition , we assume for each generator and , unless mentioned otherwise. for electricity demand trace we use , the peak demand is 30mw .thus , we assume there are 10 such chp generators so as to fully satisfy the demand . 
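for reference, the generator cost parameters above can be reconstructed along the following lines. since the numeric values are elided in the text, all arguments are placeholders, and both formulas (gas price divided by electrical efficiency for the incremental cost; about five hours of full-load operating cost for the startup cost) are stated assumptions that mirror the description rather than the paper's exact definitions.

```python
def generator_params(gas_price, elec_eff, full_output_kw, startup_hours=5.0):
    """Hypothetical reconstruction of the generator cost parameters."""
    c_o = gas_price / elec_eff                    # assumed: fuel price over efficiency
    beta = startup_hours * full_output_kw * c_o   # "about 5 hrs at full capacity"
    return c_o, beta
```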
* local heating system * : we assume an on - demand heating system with capacity sufficiently large to satisfy all the heat demand by itself and without on - off cost or ramp limit .the efficiency of a heating system is set to according to , and consequently we can compute the unit heat generation cost to be .* cost benchmark * : we use the cost incurred by using only external electricity , heating and wind energy ( without chp generators ) as a benchmark .we evaluate the cost reduction due to our algorithms .* comparisons of algorithms * : we compare three algorithms in our simulations . ( 1 )our online algorithm chase ; ( 2 ) the receding horizon control ( rhc ) algorithm ; and ( 3 ) the offline optimal algorithm we introduce in sec . [ sec : slowrep ] .rhc is a heuristic algorithm commonly used in the control literature . in rhc ,an estimate of the near future ( _ e.g. , _ in a window of length ) is used to compute a tentative control trajectory that minimizes the cost over this time - window .however , only the first step of this trajectory is implemented . in the next time slot , the window of future estimates shifts forward by slot .then , another control trajectory is computed based on the new future information , and again only the first step is implemented .this process then continues .we note that because at each step rhc does not consider any adversarial future dynamics beyond the time - window , there is no guarantee that rhc is competitive . for the offline algorithm , the inputs are system parameters ( such as , and ) , electricity demand , heat demand , wind power output , gas price , and grid electricity price . for online algorithms chase and rhc , the input is the same as the offline except that at time , only the demands , wind power output , and prices in the past and the look - ahead window ( _ i.e. , _ ] .* purpose * : the experiments in this subsection aim to answer two questions .first , what is the potential savings with microgrids ?note that electricity , heat demand , wind station output as well as energy price all exhibit seasonal patterns . as we can see from figs .[ fig : dtrace_summer ] and [ fig : dtrace_winter ] , during summer ( similarly autumn ) the electricity price is high , while during winter ( similarly spring ) the heat demand is high .it is then interesting to evaluate under what settings and inputs the savings will be higher .second , what is the difference in cost - savings with and without the co - generation capability ?in particular , we conduct two sets of experiments to evaluate the cost reductions of various algorithms .both experiments have the same default settings , except that the first set of experiments ( referred to as chp ) assumes the chp technology in the generators is enabled , and the second set of experiments ( referred to as nochp ) assumes the chp technology is not available , in which case the heat demand must be satisfied solely by the heating system . in all experiments ,the look - ahead window size is set to be hours according to power system operation and wind generation forecast practice .the cost reductions of different algorithms are shown in fig .[ fig : with - chp ] and [ fig : without - chp ] .the vertical axis is the cost reduction as compared to the cost benchmark presented in sec . 
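the rhc loop just described can be sketched as follows; `solve_window` stands for any solver of the w-slot cost-minimization problem and is assumed, as is the exact state carried from one slot to the next.

```python
def rhc(inputs, W, solve_window, init_state):
    """Receding horizon control: optimize over the next W slots of
    (estimated) input, but commit only the first decision of each plan."""
    state, decisions = init_state, []
    for t in range(len(inputs)):
        window = inputs[t : t + W + 1]        # current slot plus W look-ahead
        trajectory = solve_window(window, state)
        state = trajectory[0]                 # implement the first step only
        decisions.append(state)               # window then shifts by one slot
    return decisions
```

note that, as discussed above, nothing in this loop accounts for adversarial dynamics beyond the window, which is why rhc carries no competitive guarantee.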
[sec : setting ] .* observations * : first , the whole - year cost reductions obtained by offline are 21.8% and 11.3% for chp and nochp scenarios , respectively .this justifies the economic potential of using local generation , especially when chp technology is enabled .then , looking at the seasonal performance of offline , we observe that offline achieves much more cost savings during summer and autumn than during spring and winter .this is because the electricity price during summer and autumn is very high , thus we can benefit much more from using the relatively - cheaper local generation as compared to using grid energy only .moreover , offline achieves much more cost savings when chp is enabled than when it is not during spring and winter .this is because , during spring and winter , the electricity price is relatively low and the heat demand is high .hence , just using local generation to supply electricity is not economical .rather , local generation becomes more economical only if it can be used to supply both electricity and heat together ( _ i.e. , _ with chp technology ) .second , chase performs consistently close to offline across inputs from different seasons , even though the different settings have very different characteristics of demand and supply .in contrast , the performance of rhc depends heavily on the input characteristics .for example , rhc achieves some cost reduction during summer and autumn when chp is enabled , but achieves 0 cost reduction in all the other cases . * ramifications * : in summary , our experiments suggest that exploiting local generation can save more cost when the electricity price is high , and chp technology is more critical for cost reduction when heat demand is high .regardless of the problem setting , it is important to adopt an intelligent online algorithm ( like chase ) to schedule energy generation , in order to realize the full benefit of microgrids .* purpose * : we compare the performances of chase to rhc and offline for different sizes of the look - ahead window and show the results in fig .[ fig : cr_w ] .the vertical axis is the cost reduction as compared to the cost benchmark in sec .[ sec : setting ] and the horizontal axis is the size of lookahead window , which varies from 0 to 20 hours .* observations * : we observe that the performance of our online algorithm chase is already close to offline even when no or little look - ahead information is available ( _ e.g. , _ , , and ) .in contrast , rhc performs poorly when the look - ahead window is small .when is large , both chase and rhc perform very well and their performance are close to offline when the look - ahead window is larger than 15 hours .an interesting observation is that it is more important to perform intelligent energy generation scheduling when there is no or little look - ahead information available .when there are abundant look - ahead information available , both chase and rhc achieve good performance and it is less critical to carry out sophisticated algorithm design . in fig .[ fig : ratio1 ] and [ fig : ratio2 ] , we separately evaluate the benefit of looking - ahead under the fast - responding and slow - responding scenarios .we evaluate the empirical competitive ratio between the cost of chase and offline , and compare it with the theoretical competitive ratio according to our analytical results . 
in the fast - responding scenario ( fig .[ fig : ratio1 ] ) , for each generator there are no minimum on / off period and ramping - up / down constraints .namely , , , , . in the slow - responding scenario ( fig .[ fig : ratio2 ] ) , we set and .in both experiments , we observe that the theoretical ratio decreases rapidly as look - ahead window size increases . further , the empirical ratio is already close to one even when there is no look - ahead information .cost reduction as a function of local generation capacity .] cost reduction as a function of local generation capacity . ]* purpose * : previous experiments show that our algorithms have better performance if a larger time - window of accurate look - ahead input information is available .the input information in the look - ahead window includes the wind station power output , the electricity and heat demand , and the central grid electricity price . in practice, these look - ahead information can be obtained by applying sophisticated prediction techniques based on the historical data . however , there are always prediction errors .for example , while the day - ahead electricity demand can be predicted within 2 - 3% range , the wind power prediction in the next hours usually comes with an error range of 20 - 50% .therefore , it is important to evaluate the performance of the algorithms in the presence of prediction error .* observations * : to achieve this goal , we evaluate chase with look - ahead window size of 1 and 3 hours . according to ,the hour - level wind - power prediction - error in terms of the percentage of the total installed capacity usually follows gaussian distribution .thus , in the look - ahead window , a zero - mean gaussian prediction error is added to the amount of wind power in each time - slot .we vary the standard deviation of the gaussian prediction error from 0 to 120% of the total installed capacity .similarly , a zero - mean gaussian prediction error is added to the heat demand , and its standard deviation also varies from 0 to 120% of the peak demand .we note that in practice , prediction errors are often in the range of 20 - 50% for 3-hour prediction .thus , by using a standard deviation up to 120% , we are essentially stress - testing our proposed algorithms .we average 20 runs for each algorithm and show the results in figs .[ fig : cr_aerror ] and [ fig : cr_berror ] . as we can see, both chase and rhc are fairly robust to the prediction error and both are more sensitive to the wind - power prediction error than to the heat - demand prediction error . besides , the impact of the prediction error is relatively small when the look - ahead window size is small , which matches with our intuition .* purpose * : microgrids may employ different types of local generators with diverse operational constraints ( such ramping up / down limits and minimum on / off times ) and heat recovery efficiencies . it is then important to understand the impact on cost reduction due to these parameters . in this experiment , we study the cost reduction provided by our offline and online algorithms under different settings of , , , and . * observations * : fig .[ fig : cr_rud ] and [ fig : cr_tud ] show the impact of ramp limit and minimum on / off time , respectively , on the performance of the algorithms .note that for simplicity we always set and . 
as we can see in fig. [fig:cr_rud], with and of about 40% of the maximum capacity, chase obtains nearly all of the cost reduction benefits, compared with, which needs 70% of the maximum capacity. meanwhile, it can be seen from fig. [fig:cr_tud] that and do not have much impact on the performance. this suggests that it is more valuable to invest in generators with fast ramping-up/down capability than in those with small minimum on/off periods. from fig. [fig:cr_eta_summer] and [fig:cr_eta_winter], we observe that generators with large save much more cost during the winter because of the high heat demand. this suggests that in areas with large heat demand, such as alaska and washington, the heat recovery efficiency ratio is a critical parameter when investing in chp generators. thus far, we have assumed that the microgrid has the ability to supply all energy demand from local power generation in every time slot. in practice, local generators can be quite expensive. hence, an important question is how much investment a microgrid operator should make (in terms of the installed local generator capacity) in order to obtain the maximum cost benefit. more specifically, we vary the number of chp generators from 1 to 10 and plot the corresponding cost reductions of the algorithms in fig. [fig:cr_yub]. interestingly, our results show that provisioning local generation to produce 60% of the peak demand is sufficient to obtain nearly all of the cost reduction benefits. further, with just 50% local generation capacity we can achieve about 90% of the maximum cost reduction. the intuitive reason is that most of the time demands are significantly lower than their peaks. energy generation scheduling is a classical problem in power systems and involves two aspects, namely unit commitment (uc) and economic dispatching (ed). uc optimizes the startup and shutdown schedule of power generators to meet the forecasted demand over a short period, whereas ed allocates the system demand and spinning reserve capacity among operating units at each specific hour of operation, without considering startup and shutdown of power generators. for large power systems, uc involves scheduling a large number of gigantic power plants of several hundred if not thousands of megawatts, with heterogeneous operating constraints and logistics behind each action. the problem is very challenging to solve and has been shown to be np-complete in general in. (note that (3a)-(3d) is an instance of uc, and that uc being np-hard in general does not imply that this instance is also np-hard.) sophisticated approaches proposed in the literature for solving uc include mixed integer programming, dynamic programming, and stochastic programming. there have also been investigations of uc with high renewable energy penetration, based on an over-provisioning approach. after uc determines the on/off status of the generators, ed computes their output levels by solving a nonlinear optimization problem using various heuristics, without altering the on/off status of the generators. there is also recent interest in involving chp generators in ed to satisfy both electricity and heat demand simultaneously. see comprehensive surveys on uc in and on ed in. however, these studies assume the demand and energy supply (or their distributions) over the entire time horizon are known _a priori_.
as such , the schemes are not readily applicable to microgrid scenarios where accurate prediction of small - scale demand and wind power generation is difficult to obtain due to limited management resources and their unpredictable nature .several recent works have started to study energy generation strategies for microgrids .for example , the authors in develop a linear programming based cost minimization approach for uc in microgrids . considers the fuel consumption rate minimization in microgrids and advocates to build ict infrastructure in microgrids . discuss the energy scheduling problems in data centers , whose models are similar with ours .the difference between these works and ours is that they assume the demand and energy supply are given beforehand , and ours does not rely on input prediction .online optimization and algorithm design is an established approach in optimizing the performance of various computer systems with minimum knowledge of inputs .recently , it has found new applications in data centers . to the best of our knowledge ,our work is the first to study the competitive online algorithms for energy generation in microgrids with intermittent energy sources and co - generation .the authors in apply online convex optimization framework to design ed algorithms for microgrids .the authors in adopt lyapunov optimization framework to design electricity scheduling for microgrids , with consideration of energy storage .however , neither of the above considers the startup cost of the local generations .in contrast , our work jointly consider uc and ed in microgrids with co - generation .furthermore , the above three works adopt different frameworks and provide online algorithms with different types of performance guarantee .in this paper , we study online algorithms for the micro - grid generation scheduling problem with intermittent renewable energy sources and co - generation , with the goal of maximizing the cost - savings with local generation . based on insights from the structure of the offline optimal solution, we propose a class of competitive online algorithms , called chase that track the offline optimal in an online fashion . under typical settings , we show that chase achieves the best competitive ratio of all deterministic online algorithms , and the ratio is no larger than a small constant 3 .we also extend our algorithms to intelligently leverage on _ limited prediction _ of the future , such as near - term demand or wind forecast . by extensive empirical evaluations using real - world traces ,we show that our proposed algorithms can achieve near offline - optimal performance .there are a number of interesting directions for future work .first , energy storage systems ( _ e.g. , _ large - capacity battery ) have been proposed as an alternate approach to reduce energy generation cost ( during peak hours ) and to integrate renewable energy sources. it would be interesting to study whether our proposed microgrid control strategies can be combined with energy storage systems to further reduce generation cost. however , current energy storage systems can be very expensive .hence , it is critical to study whether the combined control strategy can reduce sufficient cost with limited amount of energy storage .second , it remains an open issue whether chase can achieve the best competitive ratios in general cases ( _ e.g. 
, _ in the slow - responding case ) .the work described in this paper was partially supported by china national 973 projects ( no .2012cb315904 and 2013cb336700 ) , several grants from the university grants committee of the hong kong special administrative region , china ( area of excellence project no .aoe / e-02/08 and general research fund project no .411010 and 411011 ) , two gift grants from microsoft and cisco , and masdar institute - mit collaborative research project no .xiaojun lin would like to thank the institute of network coding at the chinese university of hong kong for the support of his sabbatical visit , during which some parts of the work were done . is an optimal solution for * sp*. suppose is an optimal solution for * sp*. for completeness , we let and . we define a sequence , as follows : for all ] and \\ y^{\ast}(t ) , & \mbox{otherwise\ } \end{cases}\ ] ] 3 . for all ] is type-1 .hence , for all ] .hence , we obtain : where eqn .( [ eqn : delta - def1 ] ) follows from the definition of ( see eqn .( [ eqn : delta - definition ] ) ) and eqn .( [ eqn : delta - func1 ] ) follows from lemma [ lem : delta - function ] .this completes the proof for case 1 .* case 2 * : suppose for some ] by ,[\tau_{2}^{b},\tau_{2}^{e}],[\tau_{3}^{b},\tau_{3}^{e}], ... ,[\tau_{p}^{b},\tau_{p}^{e}]\ ] ] such that \ne y_{{\rm ofa}}(t) ] , , where . since , then there exists at least one ] , switches from 0 to 1 .hence , it incurs the startup cost .however , when and , the startup cost is not for critical segment ] is type-2 .hence , for all ] and ( case 2 ) : for some ] .[ lem : delta - function ] suppose ] as type-1 .this implies that only , whereas for ] is type-2 , we proceed with a similar proof , except therefore , competitive ratio of we denote the outcome of by .we aim to show that first , we denote the set of indexes of critical segments for type- by .note that we also refer to type - start and type - end by type-0 and type-3 respectively .define the sub - cost for type- by ^{+}\end{aligned}\ ] ] hence , .we prove by comparing the sub - cost for each type- .( * type-0 * ) : note that both for all ] .we note that by the definition of type-1 , . and switch from 0 to 1 within ] is let the number of type- critical segments be .then , we obtain ( * type-2 * ) and ( * type-3 * ) : we consider a particular type-2 ( or type-3 ) critical segment ] : together , we obtain : therefore , [ lem : segment-1 minimum cost ] }\frac{\psi\big(\sigma(\tau),1\big)-c_{m}}{\psi\big(\sigma(\tau),0\big)-\psi\big(\sigma(\tau),1\big)+c_{m}}\geq\frac{c_{o}}{p_{\max}+\eta \cdot c_{g}-c_{o}}\ ] ] we expand for each case : _ case 1 _ : . 
when , by lemma [ lem : fmcmp ] thus , therefore , _ case 2 _ : .when , by lemma [ lem : fmcmp ] ^{+}\ ] ] thus , ^{+}+c_{m } \\\psi\big(\sigma(\tau),0\big)&=&p(t)a(t)+c_{g}h(t)\end{aligned}\ ] ] therefore , ^{+}}{\big(p(t)-c_{o}\big)a(t)+c_{g}\big(h(t)-\big[h(t)-\eta\cdot a(t)\big]^{+}\big)}\\ & \geq & \frac{c_{o}a(t)}{\big(p(t)-c_{o}\big)a(t)+c_{g}\min\{h(t),\eta \cdot a(t)\}}\\ & \geq & \frac{c_{o}a(t)}{\big(p(t)-c_{o}\big)a(t)+\eta \cdot c_{g}a(t)}\\ & = & \frac{c_{o}}{p(t)-c_{o}+\eta \cdot c_{g}}\end{aligned}\ ] ] _ case 3 _ : .when , by lemma [ lem : fmcmp ] thus , ^{+ } \notag\\ & & + c_{g}\big[h(t)-\eta \cdot a(t)\big]^{+}+c_{m }\\ \psi\big(\sigma(\tau),0\big)&=&p(t)a(t)+c_{g}h(t)\end{aligned}\ ] ] therefore , and combing all the cases , we obtain }\frac{\psi\big(\sigma(\tau),1\big)-c_{m}}{\psi\big(\sigma(\tau),0\big)-\psi\big(\sigma(\tau),1\big)+c_{m}}\geq\frac{c_{o}}{p_{\max}+\eta \cdot c_{g}-c_{o}}\ ] ]denote an online algorithm by , and an input sequence by . more specifically ,we write and , when it explicitly refers to input sequence by .define we have we prove this lemma by contradiction .suppose that there exists a deterministic online algorithm for * fmcmp * with output , such that also , it follows that for any an input sequence , it follows that ( by lemma [ lem : fmcmp ] ) based on , we can construct an online algorithm for * sp * , such that .by lemma [ lem : fmcmp ] , therefore , we obtain however , as is a lower bound of competitive ratio for any deterministic online algorithm for * sp*. this is contradiction , and it completes our proof . the competitive ratio for any deterministic online algorithm for is lower bounded by a function : when , we have the basic idea is as follows . given any deterministic online algorithm , we construct a special input sequence , such that for a function .first , we note that at time , determines only based on the past input in ] into disjoint segments of consecutive intervals of full demand or zero demand : * full - demand segment : ] , and . *zero - demand segment : ] , and .note that according to eqn .( [ eq : a(t).construction ] ) , .thus , the time must belong to a full - demand segment . also , full - demand and zero - demand segments appear alternating .let and be the number of full - demand and zero - demand segments in ] , since , for all ] with length , incurs a cost similarly , in a zero - demand segment ] respectively . by summing the costs over all full - demand and zero - demand segments and simplifying terms , we obtain a compact expression of the cost of w.r.t . as follows : * step 2 * : ( bounding ) : we divide the input into critical segments .we then define be the set of all type-0 , type-2 , type-3 , and the `` increasing '' parts of type-1 critical segments , and be set of the `` plateau '' parts of type-1 critical segments . 
here , for a type-1 critical segment ] and the `` plateau '' part is defined as ] , the deficit function wriggles up from to , and it cost the same to served the part by either buying power from the grid or using on - site generator ( which incurs a turning - on cost ) .hence , we can simplify the offline cost on the increasing part as with this simplification , we proceed with the ratio analysis as follows : as goes to infinity , it is clear that to lower - bound the above ratio , it suffices to consider only those ( ) with unbounded length in time .next , we study each term in the lower bound of the competitive ratio .we define and as the total length of full - demand ( zero - demand ) intervals in the increasing parts and plateau , respectively .similarly , we define and as the number of full - demand ( zero - demand ) intervals in the increasing parts and plateau , respectively .* step 2 - 1 * : ( bounding ) first , we seek to lower - bound the term under the assumption that is unbounded . from the offline solution structure , we know that on type-0 , type-2 , type-3 , and the `` increasing '' parts of type-1 critical segments , the offline optimal cost is given by noticing that we also have , we obtain in , either there is only one type-0 segment , or there are equal number of type-2/3 critical segments and type-1 critical segment `` increasing '' parts . hence , the total deficit function increment , i.e. , , must be no more than the total deficit function decrement , which is upper bounded by where the term accounts for that the deficit function does not end naturally at but get dragged down to at the end of . that is , moreover , since contains only type-0 , type-2 , type-3 , and the `` increasing '' parts of type-1 critical segments , the deficit function increment introduced by every full - demand segment must be no more than .that is , by the above inequalities ,we continue the derivation as follows : now we discuss the second term in eqn .( [ eq : type-0.interval.ratio.lower.bound ] ) , denoted as .recall in the problem setting , .hence , term is monotonically decreasing in , and its minimum value is taken when is replaced with the upper - boundary value : the above inequality holds for arbitrary .now we discuss the third term in eqn .( [ eq : type-0.interval.ratio.lower.bound ] ) , denoted as .when goes to infinity , can be discussed by two cases . in the first case, remains bounded when goes to infinity . since we must have unbounded as goes to infinity . as a result ,the term is unbounded , and so is . in the second case , goes unbounded when goes to infinity . then by eqn .( [ eq : type - start - relation - increment - decrement ] ) , we know that also goes unbounded and , overall , by substituting eqn .( [ eq : type-0.interval.term(i).lower.bound ] ) and eqn .( [ eq : type-0.interval.term(ii).lower.bound ] ) into eqn .( [ eq : type-0.interval.ratio.lower.bound ] ) , we obtain * step 2 - 2 * : ( bounding ) we now lower - bound the term under the assumption that is unbounded . since only contains the `` plateau '' parts of type-1 critical segments , we have therefore , in , either there is only one type-0 segment , or there are equal number of type-2/3 critical segments and type-1 critical segment `` increasing '' parts . hence ,the total deficit function increment , i.e. 
, , must be no more than the total deficit function decrement , which is upper bounded by where the term accounts for that the deficit function does not end naturally at but get dragged down to at the end of .that is , on , the total deficit function increment , i.e. , , must be no less than the total deficit function decrement , which is .that is , moreover , since contains only type-0 , type-2 , type-3 , and the `` increasing '' parts of type-1 critical segments , the deficit function increment introduced by every full - demand segment must be no more than .that is , moreover , on , the deficit function decrement caused by every zero - demand segment must be less than ; otherwise , the deficit function will reach value and it can not be a `` plateau '' part of a type-1 critical segment .that is , since a plateau part of a type-1 critical segment must end with a full - demand interval , we must have .we continue the lower bound analysis as follows : by checking the derivative , we know that the last term is monotonically increasing / decreasing in the ratio . hence , its minimum value is taken when the ratio is replaced with the lower - boundary value or the upper - boundary value .carrying out the derivation and taking into account the problem setting , we obtain at the end , we obtain the desired result of competitive ratio of we note that the proof is similar that of theorem [ thm : chase - competitive - ratio ] , except with modifications considering time window .we denote the outcome of by ( * type-0 * ) : similar to the proof of theorem [ thm : chase - competitive - ratio ] ( * type-1 * ) : based on the definition of critical segment ( definition [ def : critical - seg ] ) , we recall that there is an auxiliary point , such that either ( and ) or ( and ) .we focus on the segment .we observe , \end{cases}\ ] ] we consider a particular type-1 critical segment , i.e. , type-1 critical segment : ] is where .recall the number of type- critical segments . 
( * type-2 * ) and ( * type-3 * ) : we derive similarly for or 3 as note that for all .furthermore , we note .overall , we obtain by lemma [ lem : w lower bound type-1 ] and simplifications , we obtain [ lem : w lower bound type-1 ] consider a particular type-1 segment ] and ] is defined in and $ ] is defined in .we denote the outcome of be the cost of can be expressed in the following way : ^{+}\right)\right\ } \nonumber \\ & = & \sum_{n=1}^{n}\sum_{t=1}^{t}\left\ { p(t)v_{n}^{on}(t)+c_{g}\cdot s_{n}^{on}(t)+\right.\nonumber \\ & & \left.c_{o}\cdot u_{n}^{on}(t)+y_{n}^{on}(t)\cdot c_{m}+\beta\cdot\left[y_{n}^{on}(t)-y_{n}^{on}(t-1)\right]^{+}\right\ } \nonumber \\ & = & \sum_{n=1}^{n}\mathrm{cost}\left(y_{n}^{on},u_{n}^{on},v_{n}^{on},s_{n}^{on}\right),\label{eq : exchangesum}\end{aligned}\ ] ] where is the online solution by sub - problem based on theorem 5 , we know the optimal offline solution , denoted as can be expressed as : where is the optimal offline solution for sub - problem similar as ( [ eq : exchangesum ] ) , we have next , from corollary 1 , we know where is the competitive ratio we achieved in corollary 1 .thus , by summing up inequalities ( [ eq : singleraito->wholeratio ] ) , we get it completes the proof .the competitive ratio of is upper bounded by , where * : the cost of online algorithm for * mcmp * with input * : the cost of offline optimal algorithm for * mcmp * with input * : the cost of offline optimal algorithm for * fmcmp * with input * : the cost of online algorithm for * fmcmp * with input ( [ eq : one - slot ratio ] ) says to build a upper bound of over time slots , we only need to consider the maximum ratio on a single time slot . when , it is easy to see thus , we only consider the situation when eqn .( [ eq : v - v < u - u ] ) is from note now we have ^{+}-\big[a - u_{s}\big]^{+}\\ & = & \max\big(a - u_{2},0\big)+\min\big(u_{s}-a,0\big)\\ & = & \begin{cases } a - u_{2 } & if\ u_{s}>a > u_{2}\\ 0 & if\ u_{s}>u_{2}>a\\ u_{s}-a & if\ a > u_{s}>u_{2 } \end{cases}\\ & \leq & u_{s}-u_{2}\end{aligned}\ ] ] now , we still need to upper bound the term note the set represents the time durations when and have different on / off status , and this can only occur on the minimum on / off periods : that is : when has a startup , can not startup the generator because the generator is during its minimum off periods similarly , we know such mismatch of on / off status also occurs during the minimum on periods of . as always follows to startup ,thus has at most the same number of startups as .if we denote the number of the startup of and as and , we have otherwise , , then , thus :
|
microgrids represent an emerging paradigm of future electric power systems that can utilize both distributed and centralized generation. two recent trends in microgrids are the integration of local renewable energy sources (such as wind farms) and the use of co-generation (_i.e.,_ supplying both electricity and heat). however, these trends also bring unprecedented challenges to the design of intelligent control strategies for microgrids. traditional generation scheduling paradigms rely on perfect prediction of future electricity supply and demand; they are no longer applicable to microgrids with unpredictable renewable energy supply and with co-generation (which needs to consider both electricity and heat demand). in this paper, we study online algorithms for the microgrid generation scheduling problem with intermittent renewable energy sources and co-generation, with the goal of maximizing the cost savings from local generation. based on insights from the structure of the offline optimal solution, we propose a class of competitive online algorithms, called chase (competitive heuristic algorithm for scheduling energy-generation), that track the offline optimal in an online fashion. under typical settings, we show that chase achieves the best competitive ratio among all deterministic online algorithms, and the ratio is no larger than a small constant 3. we also extend our algorithms to intelligently leverage _limited prediction_ of the future, such as near-term demand or wind forecast. by extensive empirical evaluations using real-world traces, we show that our proposed algorithms can achieve near offline-optimal performance. in a representative scenario, chase leads to around 20% cost reduction with no future look-ahead at all, and the cost reduction increases with the look-ahead window.
|
the search for a non-zero electric dipole moment (edm) of the neutron, initiated 60 years ago by norman ramsey, continues today to motivate experimental activity with increasing precision. a non-zero electric dipole moment of a spin-1/2 particle, such as the neutron, implies the violation of both parity (p) and time-reversal (t) symmetries; according to the cpt theorem this would also imply a violation of the combined cp symmetry. since cp violation beyond the standard model of particle physics is required to explain the asymmetry between matter and antimatter, the so-called baryon asymmetry of our universe, electric dipole moments provide a window on the early universe when the baryon asymmetry was generated. for a recent review on the connection between neutron physics and cosmology, see and references therein. the best measurement of the neutron edm was obtained at the institut laue-langevin (ill) in grenoble, france, and provided the upper limit (90% c.l.). it was extracted from the precession frequency of trapped ultra-cold neutrons (ucn) in a weak (= 1 μt) magnetic field, using ramsey's method of separated oscillating fields. in addition, the apparatus has also been used to perform sensitive tests of lorentz invariance with neutrons, where a daily modulation of the larmor frequency was searched for. the new experiment, installed at the paul scherrer institute (psi), villigen, switzerland, uses an upgraded version of this apparatus and aims at improving the sensitivity by an order of magnitude, thanks to a higher ucn density. the development of a completely renewed data-acquisition system (daq) was part of the numerous upgrades. to this end, new electronic modules were needed to execute all requested actions and control the experiment. instead of multiple specialized modules, we have opted for a central, multifunction module. with such a solution, we expect to benefit from the limited number of connectors, thus minimizing the delicate problem of bad contacts. after a brief summary of the experiment principle, a description of a typical measurement cycle will be given in section [part2]. the experiment's most stringent requirements are given in section [requirements]. section [hardwaresec] will present the electronic board in detail, and section [firmwaresec] the embedded firmware. finally, section [protovalsec] will report on some of the achieved performances. the psi edm experiment uses the ral/sussex/ill apparatus, connected to the newly built psi ucn source. the main elements of the apparatus are depicted in fig. [experimentdesc]. the central part, the precession chamber, is a 21-liter cylindrical bottle (d = 47 cm, h = 12 cm) used to store polarized ultra-cold neutrons and polarized mercury atoms. this chamber is shielded against the ambient magnetic field with a four-layer mu-metal assembly.
a highly homogeneous vertical magnetic field = 1 μt is generated inside the mu-metal shield by a set of coils. additionally, oscillating fields transverse to the main field can be applied by pairs of coils in the x or y directions (fig. [experimentdesc]). the bottom and top parts of the precession chamber also serve as electrodes to generate a strong electric field (typically), either parallel or antiparallel to the magnetic field. the principle of the experiment consists in measuring the neutron larmor precession frequency using ramsey's method of separated oscillating fields. the neutron larmor frequency is proportional to the magnitude of, plus possibly a small frequency shift due to the neutron edm. a frequency difference, proportional to e, between the parallel and antiparallel configurations would be the signature of a non-zero edm. however, small drifts of the field during the experiment could easily hide the signal. to measure and correct for these fluctuations, an atomic magnetometer using a vapor of polarized mercury atoms cohabiting in the same volume as the ucn is used. a standard edm cycle, sketched in fig. [standardedmcycle], is composed of seven successive steps detailed below. 1. *ucn fill:* polarized ucn are guided toward the storage chamber. the filling time, controlled by the ucn valve, lasts typically 20 s. 2. *mercury fill:* the mercury valve is opened for 2 s, to let the gas, optically pumped in the polarization cell, diffuse into the precession chamber. 3. *mercury pulse:* a rotating transverse magnetic field is generated for 2 s by two pairs of coils. the signal frequency is equal to the larmor frequency of the atoms (approx. 7.6 hz for a 1 μt field), measured during the previous cycle. the signals feeding the x and y coils are phase-quadrature sine waves, and their amplitudes are adjusted so that the hg spins rotate by π/2. a triangular envelope was chosen in order to minimize the effect of the mercury pulse on the neutron spins. 4. *first ucn pulse:* similarly to the mercury pulse, two pairs of coils induce a π/2 flip of the neutron spins. the signal frequency is also calculated from the mercury larmor frequency measured at the previous cycle. however, for the neutrons, the excitation frequency is changed from one cycle to the next, using two working points on each side of the central ramsey resonance fringe. these four working points are used to fit the neutron larmor precession frequency (approx. 29 hz for a 1 μt field). 5. *free precession:* for a typical duration of 200 s, both neutron and mercury spins precess in the horizontal plane around the main field. the mercury free precession is optically monitored, using through-going polarized light produced by a mercury lamp. the light intensity, measured with a photomultiplier (pmt), is modulated at the larmor frequency of the mercury spins. the pmt signal is sampled with the daq electronics and an estimate of the frequency is extracted. as discussed above, the estimated frequency serves to set the frequency of both mercury and neutron pulses at the next cycle, thus accounting for the drifts in the magnetic field. 6. *second ucn pulse:* at the end of the precession time, a second neutron pulse, in phase with the first one, is generated, thus completing the ramsey sequence. 7. *neutron polarization measurement:* the ucn valve is opened to empty the precession chamber and neutrons fall down toward a neutron detector.
on their way , neutrons encounter a magnetized ferromagnetic foil that serves as spin analyzer by letting only one spin component go through . after a fixed amount time , an adiabatic fast passage ( afp ) spin flipper is turned on to count the number of neutrons in the other component . from this information and the knowledge of the pulse frequency , the neutron larmor frequency can be extracted .the described sequence is repeated continuously to accumulate statistics , with periodic reversal of the electric field .two years of data - taking at the psi ucn source are planned to record about 50,000 cycles and gain an order of magnitude in sensitivity .in this section , the most stringent experimental requirements are discussed .* experiment timing resolution + the sequence of actions described above must be controlled with high precision .the most demanding actions correspond to the control of the /2 pulses and of the neutron counting sequence .an error on the /2 pulse duration will affect the precision of the frequency extraction but not the frequency itself .for the mercury as well as for the neutron /2 pulses , we have estimated that a 1ms deviation from a 2s pulse would result in a precision loss of 10 relative to the extracted frequency , a negligible amount .likewise , having a resolution of on the counting time sequence would induce an error on the up / down asymmetry much smaller than the statistical fluctuations , which is sufficient . *wave generator frequency resolution + the neutron larmor precession frequency is extracted from the combination of the applied frequency of the /2 pulse and of the neutron counting .the resolution of the wave generator should be therefore significantly better than the expected statistical precision , i.e at 30hz a resolution of about 0.1mhz is required .* scaler counting rate + with the increased neutron density provided by the new psi source , peak counting rates as high as 10 can be reached . *mercury signal encoding resolution + the precision on the extracted mercury larmor frequency is directly proportional to the signal - to - noise ratio ( snr ) of the precession signal measurement .our current snr is typically 2000 at the beginning of the precession , which limits us to a frequency precision of 0.5hz . in the future ,we plan to reach a snr of about 10000 to approach a 0.1hz precision .therefore to have a significant margin over the quantization noise , a 16 bit adc is required ..,scaledwidth=95.0% ] a picture of the electronic board is shown in fig .[ boardpicture ] and its block diagram is shown in fig .[ bloccartev2 ] .it is designed around a field programmable gate array ( fpga ) ( xilinx xc3s400fg456 ) , whose main purposes are to perform the system precise operation sequencing ( ` micro - timer ' ) and the data acquisition . the communication with the board is done via a usb2.0 capable micro - controller ( cypress cy7c68013a ) , this allows to perform the system control and the data readout .furthermore , it is used to load the fpga firmware at board power up and to perform the monitoring of 16 input channels with a 16 bit adc .it directly controls the signal multiplexer and reads out the adc .channel 0 is reserved for reading the photo multiplier tube ( pmt ) dc signal . a rb atomic clock/10s and long term stability : 3/month .] 
, operating at 10mhz , is directly connected to the fpga in order to reference the whole system ( signal generation and measurements ) .the various sine wave generation is ensured by two kinds of direct digital synthesis ( dds ) circuits whose outputs are connected or disconnected by fpga driven switches .the first dds kind ( ad9833 ) is used to provide the spin flipper signals ( typical frequency of 19khz ) .they have a 28 bit resolution phase accumulator and thus can achieve a frequency resolution of 37mhz when supplied with a 10mhz reference clock .these dds are configured by the micro - controller via a serial peripheral interface ( spi ) link .each dds output signal , which has an amplitude resolution of 10 bit , is filtered by an active second order low pass filter ( lpf ) before passing through the fpga controlled switch . the switch outputis then amplified by 16 before being provided to the spin flipper coils .the second dds kind ( ad9852 ) is used to provide the two neutron ( around 30hz ) and two mercury ( around 8hz ) excitation signals with a very high precision in frequency .they feature a 48 bit phase accumulator and , given the fact that they are referenced at 500khz ( a divided version of the atomic clock ) , they have a tuning resolution of 1.77nhz .this resolution which is therefore far better than the experimental requirement of 0.1mhz . also these dds offer the possibility to add an offset to the internal phase accumulators result .consequently , it is possible to generate the excitation signal pair with a known phase relationship by updating them synchronously with the same phase increment value but with different phase offset .this synchronous update is achieved the fpga which directly controls both dds update input pin of a pair . moreover , by performing this operation slightly before closing the dds output switches , which are also fpga controlled , it is possible to adjust the start phases of both sine waves ( see fig .[ osksequence ] ) .another advantage , is that the sine wave output amplitude , which has an amplitude resolution of 12 bit , can be digitally adjusted either by a constant factor or by using the output shape keying ( osk ) feature . as illustrated in fig .[ ofskprinciple ] , this allows the control of the signal amplitude as a function of time and therefore to generate the required triangular shape envelope instead of the abrupt rectangular shape . with the dds clock running at 500khz , the ramping timecan be adjusted from 32.768ms up to 2.096s with a step resolution of 8.192ms .the ramping ( up or down ) is deterministically activated by the micro - timer embedded in the fpga via a dedicated pin ( labeled osk ) .an example of the control sequence allowing to adjust the initial phase , the ramping and the switch control is displayed in fig .[ osksequence ] .each dds output signal is amplified with an adjustable gain up to a factor ten and filtered by an active fourth order low pass filter ( lpf ) before passing through the fpga controlled switch . 
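as a sanity check on the tuning arithmetic just described, a dds with an n-bit phase accumulator clocked at f_ref has a frequency resolution of f_ref / 2^n. the short sketch below reproduces both figures quoted above; the helper `tuning_word`, mapping a target frequency to a register value, is an illustrative addition and is not taken from a datasheet.

```python
def dds_resolution(f_ref_hz, n_bits):
    """Frequency step of an n-bit phase accumulator clocked at f_ref."""
    return f_ref_hz / 2 ** n_bits

print(dds_resolution(10e6, 28))   # AD9833 at 10 MHz  -> ~3.7e-2 Hz, i.e. 37 mHz
print(dds_resolution(500e3, 48))  # AD9852 at 500 kHz -> ~1.8e-9 Hz, the ~1.77 nHz above

def tuning_word(f_out_hz, f_ref_hz, n_bits):
    """Illustrative: register value for a target output frequency."""
    return round(f_out_hz / f_ref_hz * 2 ** n_bits)
```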
a serial resistor ( 820 ) has been added at the circuit output in order to have a current - source - like output .consequently , the magnetic field generated is less sensitive to the coil resistance variation due to thermal fluctuations ( typical coil resistance is around 10 ) .the pmt signal is connected to a conditioning block detailed in fig .[ pmtfe ] .the first stage converts the current signal in a voltage signal with a current preamplifier having a 1m resistor feedback .the alternating ( ac ) and continuous ( dc ) components of the signal are then separated : the dc component is extracted via a second order low pass filter while the ac part passes through a q=8 band pass filter centered around 8hz .the resulting signal is level shifted by half of the adc full - scale in order to fit in the available adc range .some additional features of this module include : * the possibility to control eight external electromechanical devices synchronously to the micro - timer with the provided buffered ttl outputs . * the generation of a real - time four bit signal at each micro - timer step , encoding the step number to permit the synchronization with other electronic modules . * the availability of twelve 32 bit scalers to record the neutron counts at the end of a edm cycle .an overview of the fpga firmware which is the core of the system and whose main purposes are to perform the system precise operation sequencing and the data acquisition is shown in fig . [ blocfpgav2 ] .it is composed of seven different blocks : the frequency divider , the usb interface for communication , the scalers , the micro - timer for precise operation sequencing the adc interface for data acquisition and the two dds interface blocks .the frequency divider is used to divide the atomic clock frequency by a factor 20 for referencing of the dds . as explained in section [ hardwaresec ] , the frequencymust be lowered in order to reach the highest resolution on the output signal frequency but also to be able to have slow ramping amplitudes .this division is performed by a 0 to 9 counter which keeps a 50% duty cycle .the micro - timer block is designed to manage up to 16 time sequences .each sequence has a programmable duration , that can be adjusted with a 1s step resolution in a 1 to 2 ^ 32 ^ -1 range , and can manage up to 32 different actions .the actions to be executed during each step are controlled by a 32-bit word ( action mask ) , each action having a dedicated bit .the 32 micro - timer parameters , the 16 durations and the 16 action masks , are written in the dual port block ram by the usb interface before starting the time sequence execution .the micro - timer block is managed by a simple two states finite state machine ( fsm ) which starts the time sequence execution when the run signal , provided by the usb interface , is activated . before executing each instruction cycle , the corresponding parameters are read from the memory . additionally , the 4 most significant bit ( msb ) of the memory read pointer are used to reflect the current sequence in progress .this information is available for usb readout and on the board for external synchronization .the possible micro - timer actions are listed below : * the eight ` ttlout ' may activate external electro - mechanical actuators ( ucn and hg valves ) . * ` hgexcupd ' and ` neuexcupd ' : signals to trigger the dds parameters update for mercury and neutron , respectively . 
*` enhgexc ' and ` enneuexc ' : enabling of signals generation for mercury and neutron respectively . * ` oskhg ' and ` oskneu ' : initialization of the ramping of the sine wave amplitude ( see figure [ ofskprinciple ] ) . * ` enadc ' : enabling of pmt signal acquisition . *the four ` enspinflipper ' are used to activate the selected 19khz sine wave generator . *the ` enspinupcnt ' and ` enspindncnt ' bit are respectively used to allow the 12 associated scalers to count during spin up and spin down .the dds interface blocks are used to simultaneously configure the dds pairs ( mercury or neutron ) with the parameters provided by the usb interface in their embedded block memory .each dds requires up to 40 configuration bytes .consequently , the two dds parameters bytes are concatenated in a single word and the whole configuration is conveniently stored in the 40 first memory words ( 16 bit ) .the fsm , which manages the communication protocol with the dds , initiates the parameter transfer upon reception of the update signal provided by the micro - timer table or by the usb interface .the firmware features a scaler containing two banks of twelve 32-bit counters , each counter having the capacity to count with a frequency up to 100mhz .these counters are operated in a gated mode with the gating signals provided by the micro - timer as previously described .the pmt signal acquisition is managed by the adc interface block which performs the data acquisition at the selected sampling period and when activated by the micro - timer .the digitized data are written in the embedded output fifo and real - time data - readout is realized via the usb interface .the sampling period , which is a sub - division of the atomic reference clock , can be adjusted by steps of 100ns between a minimum of 10s and a maximum of 1677ms .it should be noted that running at a conversion rate significantly higher than the signal frequency ( 8hz ) can be an asset .indeed , at the price of more computing power , it is possible to apply an off - line sharp digital filtering to further reduce the bandwidth given by the analog band pass filter .hence , the signal - to - noise ratio can be further improved .a series of tests was conducted to check the main performances of the board and ensure it fulfills the requirements . as an example of the various basic functional tests , we show in fig .[ triangle-2s_1khz ] the generation of a sine wave signal with a triangular envelope .this 8hz sine wave having a duration of 2s and 1s rise and fall times , was sampled at 1khz with the board s own adc .we have quantified the quality of the waveform by fitting the signal with phase , starting time and maximum amplitude as free parameters ( the frequency and the total duration of the signal were fixed to preset values ) .we found a residual better than 10 ^ -3^ relative to the maximum amplitude .the most critical and demanding function of the board concerns the pmt signal treatment . indeed , as discussed in section [ part2 ] , the hg co - magnetometer is a key element of the nedm apparatus since , by measuring the mercury precession frequency , we can precisely measure the magnetic field seen by the neutrons and correct for its fluctuations .systematic tests were therefore carried out to study the measurement chain performances . at first , the ac input band pass filter was characterized . 
for this measurement, one of the mercury `8hz' outputs was connected to the pmt signal input, and its frequency was varied from 0.2hz to 11.5hz. the result of the measurement, displayed in fig. [reponsefiltre8hz], is in accordance with a q=8 filter centered at 8hz. the next crucial test consisted in checking that the intrinsic board noise is negligible compared to the physical noise, which is obviously one of the parameters limiting the precision. the power spectral density (psd) of the complete board input chain was measured: this includes the amplifier (tested with a gain of 1), the band-pass filter and the adc. as in the previous measurement, the input signal was a mercury `8hz' signal generated by the board itself. a sampling rate of 100hz and an acquisition time of 100s were chosen, as they represent typical values used by the experiment. the fast fourier transform of the digitized signal is shown in fig. [dds_8hz_reboucle_100s_100hz]. the measured noise floor is compatible with the adc performance advertised by the manufacturer for the same number of points. it is also worth noting that the noise floor inside and outside of the filter bandwidth is identical, indicating that the electronics noise is due to quantization only. the signal-to-noise ratio obtained from the psd spectrum is about 14000. this performance should be compared with the precession signal measurement in the experimental setup. the psd of a typical measurement is shown in fig. [psd_psi]. in contrast to the loop-back measurement, the noise floor greatly increases in the filter bandwidth, showing that the measurement noise is dominated by the physical noise. the signal-to-noise ratio obtained from the psd spectrum is about 840. from this test, we can conclude that the sensitivity of the frequency extraction of the mercury signal will not be limited by the electronics performance (safety factor of 10). we have designed and constructed an electronic board for the data acquisition and control of the neutron electric dipole moment experiment at the paul scherrer institute. the board, organized around a field-programmable gate array (fpga), is a multifunction module which fulfills a large fraction of the requirements regarding the time sequencing and the data acquisition of the experiment. thanks to the use of an fpga, it is compact, versatile and easily upgradable. additionally, the high integration level and the corresponding limited number of connections should result in improved reliability. a list of the main functions and specifications is given in table [perftable]. it must be noted that all requirements given in section [requirements] are easily met.
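as an illustration of the off-line psd analysis underlying the signal-to-noise figures quoted above, the sketch below computes an amplitude signal-to-noise estimate from a simulated record; the 100hz sampling rate and 100s duration match the values used in the tests, while the signal model and noise level are arbitrary illustrative choices rather than measured board data.

```python
import numpy as np

fs, t_tot, f_sig = 100.0, 100.0, 8.0          # sampling rate (hz), duration (s), signal frequency (hz)
t = np.arange(0.0, t_tot, 1.0 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * f_sig * t) + 1e-3 * rng.standard_normal(t.size)  # toy 8 hz record

# one-sided periodogram (power spectral density) with a hann window
win = np.hanning(t.size)
spec = np.fft.rfft(x * win)
psd = 2.0 * np.abs(spec) ** 2 / (fs * np.sum(win ** 2))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

peak = np.sqrt(psd.max())
floor = np.sqrt(np.median(psd[(freqs > 20) & (freqs < 40)]))  # noise floor away from the line
print("amplitude snr estimate:", peak / floor)
```

one amplitude-ratio convention is used here; whatever the convention, the same estimator applied to a loop-back record and to a real precession record gives directly comparable figures, as in the text.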
|
experiments aiming at measuring the neutron electric dipole moment (nedm) are at the forefront of precision measurements and demand instrumentation of increasing sensitivity and reliability. in this paper, we report on the development of a dedicated acquisition and control electronics board for the nedm experiment at the paul scherrer institute (psi) in switzerland. this multifunction module is based on an fpga (field-programmable gate array), which allows an optimal combination of versatility and upgradability.
|
the calculation of acoustic scattering by extended rough surfaces remains a challenging problem, both theoretically and computationally, especially in the presence of strong multiple scattering. this becomes acute at low grazing angles, where multiple scattering occurs for very slight roughness. boundary integral methods are flexible and often used for such problems, but can be computationally intensive and scale badly with increasing wavenumber. much effort has therefore been devoted to this aspect, where possible exploiting properties of the scattering regime. for forward scattering in 2 dimensions, for example, provided roughness length-scales are large, the `parabolic integral equation method' can be applied. for electromagnetic problems, also formulated using boundary integrals, the methods of ordered multiple interactions and left-right splitting in both 2-d and 3-d have been developed: here the scattered field is expressed as an iterative series of terms of increasing orders of multiple scattering, as described below. approaches using conjugate gradient solutions combined with fast multilevel multipole are also receiving much attention. an important exception, which overcomes the dependence of computational expense on wavenumber, has been applied to surfaces with piecewise constant impedance data and to scattering in 2-d by convex polygons. a versatile recursive technique known as the multiple sweep method of moments was developed, analysed, and compared with the method of ordered multiple interactions. this technique was shown to tackle `composite' problems for which the above method diverges, such as a ship on a rough sea surface. other iterative solutions have also been studied. in addition, theoretical results are available in various limiting regimes (e.g. perturbation theory for small surface heights, including periodic surfaces, the kirchhoff approximation, or the small slope approximation, which is accurate over a wider range of scattering angles than both of these). for arbitrary finite rough surfaces, however, validation is more difficult, and such results are therefore scarce. in this paper the left-right splitting method is developed and applied to the problem of acoustic scattering in three dimensions by randomly rough surfaces. for relatively small surfaces the results are validated by comparison with numerical solution of the full boundary integral equation. the principal aims are to validate the approach; to examine its robustness and convergence as the angle of incidence changes; and to consider further approximations which may reduce the computation time. the approach is applicable to a wide range of interior and exterior scattering problems, and we give examples for acoustic propagation in a varying duct, in addition to scattering from large rough surfaces. the mathematical principles of the method are the same as for the two-dimensional problem, although implementation is considerably more complicated: the unknown field on the surface is expressed as the solution of the helmholtz integral equation, with the integration taken over the rough surface. this may be written formally as $a\psi = \psi_{inc}$, where $\psi_{inc}$ is the incident field impinging (say) from the left and $\psi$ is the unknown surface field, so that we require $\psi = a^{-1}\psi_{inc}$. the region of integration is split into two, to the left and right of the point of observation, allowing $a$ to be written as the sum of `left' and `right' components, say $a = l + r$.
roughly speaking, $l$ represents surface interactions due to scattering from the left, and $r$ the residual scattering from the right. the inverse of $a$ can formally be expressed as a series $$a^{-1} = l^{-1} - l^{-1} r\, l^{-1} + \dots \qquad \text{[eq1]}$$ discretization of the integral equation yields a block matrix equation, in which $l$ is the lower triangular part of the block matrix (including the diagonal) and $r$ is the upper triangular part. under the assumption that most energy is right-going, $l$ is the dominant part of $a$, and the series can be truncated to provide an approximation for $a^{-1}$. this approach has several advantages. in terms of wavelength, evaluation of each term scales with the fourth rather than the sixth power of the wavenumber required for direct inversion; subsequent terms (of which typically only the first one or two are needed) have the same computational cost. with further approximations this can be reduced still further. however, this operation count is only part of the story, because the low complexity and memory requirement allow very large problems to be tackled without such additional approximation. in addition, the algorithm lends itself well to parallelisation, and the speed scales approximately linearly with the number of processors. in section 2 the governing equations and left-right splitting approximation are formulated. the numerical details and main results are shown in section 3. consider a 3-dimensional medium with horizontal axes $x$, $y$ and vertical axis $z$ directed upwards, and let $k$ be the wavenumber. let $z = s(x,y)$ be a 2-dimensional rough surface, varying about the plane $z = 0$, which is continuous and differentiable as a function of $(x,y)$ (see figure [example]). (arbitrary scatterers can also be treated by the methods shown here; examples will be given later.) (figure [example]: an example rough surface.) consider a time-harmonic acoustic wave $\psi$, obeying the wave equation in the region above the surface, resulting from an incident wave $\psi_{inc}$ at a small grazing angle to the horizontal plane. this may for example be a plane wave or a finite beam. the axes can be chosen so that the principal direction of propagation is along $x$, at a small angle to the horizontal plane. we will treat the neumann boundary condition, i.e. an acoustically hard surface. the derivation for the dirichlet condition is similar. thus $\partial\psi/\partial n = 0$ on the surface, where $n$ is the outward normal (i.e. directed out of the region). the free-space green's function is given by $$g(\mathbf{r},\mathbf{r}') = \frac{e^{ik|\mathbf{r}-\mathbf{r}'|}}{4\pi|\mathbf{r}-\mathbf{r}'|}. \qquad (5.1)$$ the field at a point $\mathbf{r}$ in the medium is related to the surface field by the boundary integral $$\psi_{inc}(\mathbf{r}) = \psi(\mathbf{r}) - \int_s \frac{\partial g(\mathbf{r},\mathbf{r}')}{\partial n'}\,\psi(\mathbf{r}')\,d\mathbf{r}' \qquad \text{[eq4]}$$ where $\mathbf{r}'$ lies on the surface $s$, and taking the limit as $\mathbf{r}$ approaches a surface point $\mathbf{r}_s$ gives $$\psi_{inc}(\mathbf{r}_s) = \psi(\mathbf{r}_s) - \int_s \frac{\partial g(\mathbf{r}_s,\mathbf{r}')}{\partial n'}\,\psi(\mathbf{r}')\,d\mathbf{r}' \qquad \text{[eq5]}$$ where now the integrand is singular at the point $\mathbf{r}' = \mathbf{r}_s$, and we must take care to interpret this integral as the limit of the integral in eq. [eq4] as $\mathbf{r} \to \mathbf{r}_s$. in order to treat the equation numerically it is convenient to write the integration with respect to $x'$, $y'$, so that eq. [eq5] becomes $$\psi_{inc}(\mathbf{r}_s) = \psi(\mathbf{r}_s) - \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} \kappa(\mathbf{r}_s;x',y')\,\psi(x',y')\,dx'\,dy' \qquad \text{[eq6]}$$ where (with very slight abuse of notation) $$\kappa(\mathbf{r}_s;x',y') = \frac{\partial g(\mathbf{r}_s,\mathbf{r}')}{\partial n'}\,\sqrt{1 + s_x^2 + s_y^2} \qquad (5.5)$$ and the expression under the square root is evaluated at $\mathbf{r}' = (x',y',s(x',y'))$. the method of solution is analogous to that applied to the electromagnetic problem in 2-d or 3-d. the governing integral equation [eq6] is expressed in terms of right- and left-going operators $l$ and $r$ with respect to the $x$-direction: $$\psi_{inc}(\mathbf{r}_s) = a\psi \equiv (l + r)\,\psi \qquad \text{[eq8]}$$ where $l$ and $r$ are defined (for a suitable function $\psi$) by $(l\psi)(x,y) = \psi(x,y) - \int_{-\infty}^{x}\!\int_{-\infty}^{\infty}\kappa\,\psi\,dx'dy'$ and $(r\psi)(x,y) = -\int_{x}^{\infty}\!\int_{-\infty}^{\infty}\kappa\,\psi\,dx'dy'$. [for notational convenience $l$ is interpreted to include the contribution from the singularity arising in [eq5] when $x' = x$.]
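the expansion written formally in eq. [eq1] is a neumann series for $(l+r)^{-1}$ about $l^{-1}$, and its behaviour is easy to check on a toy discretization; in the sketch below, a random lower-triangular matrix stands in for $l$ and a strictly upper-triangular one of smaller norm for $r$ (these are generic stand-ins, not an actual scattering kernel).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
a = np.eye(n) + 0.1 * rng.standard_normal((n, n))
l = np.tril(a)                       # 'left' part: lower triangle, including the diagonal
r = np.triu(a, k=1)                  # 'right' part: strictly upper triangle

psi_inc = rng.standard_normal(n)
exact = np.linalg.solve(a, psi_inc)

# truncated series: psi_n = sum_j (-1)^(j-1) l^-1 (r l^-1)^(j-1) psi_inc
term = np.linalg.solve(l, psi_inc)   # first term, l^-1 psi_inc
psi = term.copy()
for j in range(2, 5):
    term = -np.linalg.solve(l, r @ term)   # next term: -l^-1 r (previous term)
    psi = psi + term
    err = np.linalg.norm(psi - exact) / np.linalg.norm(exact)
    print(f"{j} terms: relative error {err:.2e}")
```

when the upper triangle is small relative to the lower (the analogue of predominantly forward-going energy), the error falls rapidly with the number of terms, mirroring the physical argument given below.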
the region of integration is thus split into two with respect to $x'$, and the solution of equation [eq8] can be expanded as a series, given by $$\psi = (l+r)^{-1}\psi_{inc} = \left[\, l^{-1} - l^{-1} r\, l^{-1} + \dots \right]\psi_{inc}. \qquad \text{[series]}$$ the key observation is that at fairly low grazing angles the effect of $r$ is in some sense small, so that the series converges quickly and can be truncated. define the $n$-th order approximation as $$\psi_n = \sum_{j=1}^{n} (-1)^{j-1}\, l^{-1} (r\, l^{-1})^{j-1}\,\psi_{inc}.$$ [note that $l$ and $r$ depend on surface geometry and wavenumber only, not on the incident field; and that one might expect convergence of the series [series] for a given $\psi_{inc}$, but not uniform (norm) convergence of the series [eq1].] this corresponds physically to an assumption that surface-surface interactions are dominated by those `from the left', as expected in this scattering regime. $l$ is large compared with $r$, first because $l$ includes the dominant `diagonal' value, and second because a predominantly right-going wave gives rise to more rapid phase variation in the integrand in $r$ than in $l$. (although this depends on surface geometry and cannot in general be quantified precisely, it occurs because in [eq5] the phase in the green's function kernel decreases as the observation point is approached from the left and then increases to the right, whereas the phase of $\psi$ tends to increase throughout, like that of the incident field.) this is borne out numerically, with many cases of interest well described using only one or two terms of the series. the scattered field due to a given approximation is obtained by substitution back into the boundary integral [eq4]. it is helpful to consider the significance of successive approximations to this field in the ray-theoretic limit: the first iteration contains ray paths which, before leaving the surface, may have interacted with the surface arbitrarily many times but only in a forward direction. the second includes most paths which have changed direction twice, once via the operator $r$ and again via $l$; and so on (see figure [schematic]). thus the first iteration accounts for multiple scattering but not _reversible_ paths, which can occur when incident and backscatter directions are opposite; these paths occur in pairs of equal length and therefore add coherently, giving rise to a peak in the backscattered direction (enhanced backscatter) in strongly scattering regimes. we would therefore expect this to show initially at the second approximation. (figure [schematic]: possible paths (a) at the 1st iteration, and (b) at the 2nd iteration, when reversible paths can occur and add coherently.) having obtained this series, numerical treatment by surface discretization is straightforward. (discretization can equivalently be carried out before the series expansion, but it is more convenient, and analytically more transparent, to expand the integral operator first.) although use of the series [series] is motivated by physical considerations and its terms provide a convenient theoretical interpretation, the immediate advantage is computational: if the surface is discretized using a rectangular grid of $n_x$ by $n_y$ points, with transverse steps in $y$ and range steps in $x$, then $a$ becomes an $n_x n_y \times n_x n_y$ matrix, and exact inversion would take $o((n_x n_y)^3)$ operations. on the other hand, evaluation of each term of eq. [series] involves inversion of an $n_y \times n_y$ matrix at each of $n_x$ range steps, requiring just $o((n_x n_y)^2)$ operations and far less memory.
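the first term of the series is then nothing more than a block forward-substitution, marching in the $x$-direction; the sketch below shows that structure on a random lower block-triangular system standing in for the discretized $l$ (the block values are arbitrary, not a scattering kernel, and the sizes are illustrative).

```python
import numpy as np

rng = np.random.default_rng(2)
nx, ny = 40, 30                      # range steps and transverse points
# blocks[n][i] is the ny-by-ny coupling from range step i to range step n (i <= n)
blocks = [[0.05 * rng.standard_normal((ny, ny)) for i in range(n + 1)] for n in range(nx)]
for n in range(nx):
    blocks[n][n] += np.eye(ny)       # the diagonal block contains the identity / self term

psi_inc = rng.standard_normal((nx, ny))
psi = np.empty_like(psi_inc)
for n in range(nx):                  # march in range: solve b_n psi_n = known right-hand side
    rhs = psi_inc[n] - sum(blocks[n][i] @ psi[i] for i in range(n))
    psi[n] = np.linalg.solve(blocks[n][n], rhs)
```

only one ny-by-ny system is solved per range step, and only previously computed rows are needed, which is why both the operation count and the memory stay far below those of assembling and inverting the full matrix.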
assuming a resolution of, say, 10 points per wavelength, this operation count scales with the fourth power of the wavenumber. there is an additional `matrix filling' component; this also increases with the fourth power, and in practice it is the dominant computational cost in the left-right splitting algorithm (typically more than 90%). the numerical treatment will now be outlined. the notation $l$, $r$ will be used to refer to the discretized forms of the integral operators where no confusion arises, and we will focus on solution of the first term of [series], i.e. inversion of $l$. although not evaluated explicitly as such, the matrix $l$ is conveniently viewed as an $n_x \times n_x$ lower-triangular block matrix whose entries are $n_y \times n_y$ matrices. the system can therefore be inverted by gaussian elimination and back-substitution. this is an $n_x$-step `marching' process, in which each diagonal block is inverted in turn, corresponding to marching the solution for the unknown surface field in the positive $x$-direction. choosing step sizes $\delta x$, $\delta y$, we define $x_n = n\,\delta x$ and $y_m = m\,\delta y$, denote the discretized surface values by $s_{nm} = s(x_n, y_m)$, denote the area of each subintegration region by $w_{ij}$, and write $\kappa_{nmij}$ for the kernel $\kappa$ (equation (5.5)) evaluated at the point $(x_i, y_j)$ for the observation point $(x_n, y_m)$. this induces a discretization of [eq8], and at each surface point we get $$a_{nm} = \sum_{i=1}^{n_x}\sum_{j=1}^{n_y} a_{nmij}\, b_{ij} \qquad (5.10)$$ where, schematically, $a_{nmij} = \delta_{ni}\delta_{mj} - w_{ij}\,\kappa_{nmij}$ (5.11), with the singular self term defined by the limiting procedure described above, and where $a_{nm} = \psi_{inc}(x_n, y_m)$ and $b_{ij} = \psi(x_i, y_j)$ again denote the incident and unknown surface fields. retaining just the first term in the iterative series [series], $$\psi \approx l^{-1}\psi_{inc}, \qquad (5.12)$$ yields a set of equations identical to (5.10) except that the sum over $i$ has upper limit $n$: $$a_{nm} = \sum_{i=1}^{n}\sum_{j=1}^{n_y} a_{nmij}\, b_{ij}. \qquad (5.13)$$ this is equivalent to integration over the half-plane to the left of the line of observation ($x' \le x_n$). now at each range step $n$, assuming that we have obtained the values $b_{ij}$ for $i < n$, equation (5.13) can be rearranged to give $$a_{nm} - \sum_{i=1}^{n-1}\sum_{j=1}^{n_y} a_{nmij}\, b_{ij} = \sum_{j=1}^{n_y} a_{nmnj}\, b_{nj} \qquad (5.14)$$ for $m = 1, \dots, n_y$. everything on the left-hand side is known or has been found at previous steps. for each $n$ this gives a matrix equation, which we rewrite for convenience as $$\mathbf{w}_n = b_n \boldsymbol{\phi}_n \qquad (5.15)$$ where the subscript indicates dependence on $n$ and we have written the vectors in bold. therefore $\boldsymbol{\phi}_n$ denotes the solution values at the $n$-th range step, and $b_n$ is the $n_y \times n_y$ matrix (the $n$-th term on the diagonal of $l$) with elements $(b_n)_{mj} = a_{nmnj}$. we thus require $$\boldsymbol{\phi}_n = b_n^{-1}\mathbf{w}_n \qquad \text{[abc]}$$ for each $n$. we solve [abc] in turn for $n = 1, 2, \dots$, using each result to redefine the left-hand side of eq. (5.15), and thus find the surface field as defined by (5.12). subsequent terms in the series [series] are evaluated in exactly the same way, with the `driving' term $\psi_{inc}$ replaced by $-r$ times the result of the previous evaluation. one of the main applications is to irregular or randomly rough surfaces (for example sea surfaces or terrain). statistically stationary surfaces with gaussian statistics (normally distributed heights) are easily generated computationally with any prescribed spatial autocorrelation function (a.c.f.) $$\rho(\xi,\eta) = \langle s(x,y)\, s(x+\xi, y+\eta)\rangle,$$ where the angled brackets denote ensemble averages. for simplicity we have used an isotropic two-dimensional gaussian a.c.f., $\rho \propto \exp[-(\xi^2+\eta^2)/l_c^2]$, where $l_c$ defines a correlation length. in order to minimise and distinguish edge effects we used surfaces which become flat at the outer edge; this is not necessary for the method to be applicable. studies included the strongly scattering regime of surfaces with both correlation length and r.m.s. height of the order of a wavelength.
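such surfaces can be produced by spectrally filtering white noise; the sketch below generates a gaussian random surface with an isotropic gaussian a.c.f. by this standard route (the grid size, correlation length and r.m.s. height are illustrative values, not those of any particular run reported here).

```python
import numpy as np

rng = np.random.default_rng(3)
n, dx = 256, 0.1                 # grid points per side and spacing (in wavelengths)
corr_len, h_rms = 1.0, 1.0       # correlation length and r.m.s. height, of order one wavelength

kx = 2 * np.pi * np.fft.fftfreq(n, dx)
k2 = kx[:, None] ** 2 + kx[None, :] ** 2
# a gaussian a.c.f. exp(-r^2/l^2) has a gaussian power spectrum ~ exp(-k^2 l^2 / 4),
# so the amplitude filter is its square root, exp(-k^2 l^2 / 8)
amp_filter = np.exp(-k2 * corr_len ** 2 / 8)
noise = np.fft.fft2(rng.standard_normal((n, n)))
surf = np.real(np.fft.ifft2(noise * amp_filter))
surf *= h_rms / surf.std()       # rescale to the requested r.m.s. height

print("rms height:", surf.std())
```

in practice the heights would additionally be tapered toward the boundary, as described above, so that the surface becomes flat at the outer edge.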
with the exception of the parallel code mentioned later, all tests were run on a desktop pentium 4 3.2ghz machine with 1 gb of memory running linux. comparison was made first against the full, or `exact', inversion of the boundary integral. the quantity used for the comparison was the surface field $\psi$. because of the high computational cost of full inversion, this comparison was carried out for a relatively small surface and grid. here the r.m.s. height and correlation length are approximately equal to the wavelength. contour plots of the amplitude of $\psi$ calculated by the two methods are shown in figure [contour]. one iteration of the left-right series took around 7 seconds, whereas `exact' full inversion took around 23 minutes. (the full inversion code at double precision ran out of memory at this stage, so, in this case only, the matrix was evaluated in single precision. the iterative code remains in double precision throughout.) (figure [contour]: shaded contour plot of the amplitude of the surface fields by (a) exact and (b) iterative solution (2 terms), for a surface with r.m.s. height and correlation length approximately equal to the wavelength.) in order to illustrate the convergence, a comparison of field values along the mid-line in the $x$-direction is shown in figure [compare] for the first 4 iterations. in this case the incident field was a plane wave impinging at an angle of 10 degrees from grazing. extremely good agreement is found. notice that the oscillatory behaviour at the left is captured at the 2nd but not the 1st iteration. (it should be emphasized that although we found no divergent cases, convergence is not necessarily guaranteed. for electromagnetic waves the method has exhibited divergences, apparently due to resonant surface features.) (figure [compare]: comparison between exact and successive terms of the left-right solution corresponding to fig. [contour], along a line in the $x$-direction, for a grazing angle of 10 degrees.) the solution for a field incident at a different grazing angle on the same surface is shown in figure [compare2], and again converges rapidly. (figure [compare2]: comparison for the surface as in figure [compare], at a second grazing angle.) a further comparison (figure [compare3]), using a `smoother' surface with the same correlation length but reduced r.m.s. height, gives similarly close agreement. (figure [compare3]: comparison between exact and successive terms of the left-right solution for the smoother surface.) we now consider the application of the code to larger surfaces, in order further to examine timings and rates of convergence as functions of incident angle. evaluations of the first iterates were carried out for several cases.
as mentioned above, the two main components of the calculation are a matrix inversion and a set of green's function evaluations at each of the range steps. the matrix inversion remained a small percentage of the cost in all cases, and the computation time should increase with the square of the number of unknowns. the actual computation times were found to conform closely to this, as shown in table 1. times in the second column, corresponding to the simple optimised integration described below, should be regarded as applicable for most surface geometries, and can easily be reduced further with higher-order schemes. (table 1: computation time on desktop computer.) note that the algorithm is easily parallelised: the integration, to which the bulk of the computation time is devoted, can be shared among any number of processors. this has been carried out using mpi on a sunfire machine, and as expected the computation speed increases linearly with the number of processors. a solution on a waveguide of 550 wavelengths in length and 80 in circumference was obtained in 5.3 hours with standard integration, and in under 2 hours using the optimised integration below, on 96 processors. strategies are available for reduction of the green's function evaluation cost. one of these is fast multilevel multipole, which can reduce the time dependence to near-linear in the number of unknowns, but we found this to have certain disadvantages, including relatively high complexity and memory cost, and accuracy which is not easily regulated. a much simpler expedient, which retains the order of dependence on the number of unknowns but reduces the multiplier, is the following: a simple quadrature using all available points was initially used to carry out the integration for the left-hand side of eq. [abc]. the integrand, however, is relatively smooth as a function of the transverse coordinate, and this smoothness increases with spatial separation in range. thus, as the marching solution proceeds, we can use higher-order integration schemes utilising far fewer points with little loss of accuracy. even a simple trapezium rule, for example, operating on half the number of points, reduced the computation time by a factor of 3 and resulted in errors of well under 1%. we calculated surface fields on a desktop computer for a surface of 230,000 unknowns in around 10 minutes, and for a considerably larger number of unknowns in 180 minutes. the same method is applicable to exterior and interior scattering problems due to various large scatterers and geometries. most such geometries involve even better-behaved integrals, and are therefore amenable to the above integration strategy. a solution for the much larger problem of a waveguide of around 150 wavelengths in length and 20 wavelengths in diameter (not shown) was calculated on the desktop computer in around 140 minutes. (figure: real part of surface field on waveguide at two frequencies.) the paper describes the development and application of the left-right splitting algorithm for acoustic scattering by rough, perfectly reflecting surfaces and other complex scatterers. results have been validated by comparison with `exact' numerical solutions, and by examining the convergence of the series. the formulation is physically motivated to apply to incident fields at low grazing angles, although good convergence has been obtained at angles close to normal incidence. problems involving several hundred thousand unknowns or
more can be solved relatively simply on a standard desktop computer, and much larger problems still in a few hours on a parallel machine. the cost of the method scales with the square of the number of unknowns; this could be improved by application of, say, fast multipole methods, but this has not been necessary, as in this approach the multiplier is relatively small and can be further reduced by optimising the integrations. the authors acknowledge partial funding from the dti escience programme, and use of the cambridge-cranfield high performance computer facility. many of the ideas arose out of a previous electromagnetic project supported by bae systems, and ms is grateful for many helpful discussions. p. tran, calculation of the scattering of electromagnetic waves from a two-dimensional perfectly conducting surface using the method of ordered multiple interaction, _waves in random media_, *7*, 295-302 (1997). m.r. pino, l. landesa, j.l. rodriguez, f. obelleiro & r.j. burkholder, the generalized forward-backward method for analyzing the scattering from targets on ocean-like rough surfaces, _ieee trans. antennas propag._, *47*, 961-969 (1999). d. colak, r.j. burkholder & e.h. newman, on the convergence properties of the multiple sweep method of moments for scattering from 3d targets on ocean-like rough surfaces, _appl. comput. electromagn. soc. j._, *22*, 207-218 (2007). d. colak, r.j. burkholder & e.h. newman, multiple sweep method of moments analysis of electromagnetic scattering from 3d targets on ocean-like rough surfaces, _microwave and opt. tech. lett._, *49*, 241-247 (2007).
|
the left-right operator splitting method is studied for the efficient calculation of acoustic fields scattered by arbitrary rough surfaces. here the governing boundary integral is written as a sum of left- and right-going components, and the solution is expressed as an iterative series, expanding about the predominant direction of propagation. calculation of each term is computationally inexpensive in both time and memory, and the field is often accurately captured using one or two terms. the convergence and accuracy are examined by comparison with exact solutions for smaller problems, and a series of much larger problems is tackled. the method is also immediately applicable to other scatterers such as waveguides, of which examples are given. department of applied mathematics and theoretical physics, the university of cambridge, cb3 0wa, uk.
|
rope-like assemblies of filamentous biopolymers are important and common structural elements in living organisms. fibers of cellulose and collagen provide mechanical reinforcement of extra-cellular plant and animal tissue, while inside of eukaryotic cells, bundles of cytoskeletal protein filaments, microtubules, filamentous actin (f-actin) and intermediate filaments, are implicated in an outstanding array of physiological processes, from cell division and adhesion to motility and mechanosensing. understanding the physical mechanisms that underlie the robust and high-fidelity assembly pathways of protein filaments is thus an important, outstanding challenge with broad implications in biology. key questions revolve around the role played by the myriad types of relatively compact, crosslinking proteins that coassemble in parallel bundles and fibers of certain protein filaments, the primary example of these being parallel actin bundles. though the intrinsic properties of f-actin are largely conserved among different cell types and species, the structural, mechanical and dynamic properties of f-actin bundles are evidently quite modular. for example, filopodial bundles that form at the periphery of motile cells are loosely organized and highly dynamical structures of rather limited diameter (of order 100 nm), while f-actin bundles formed in mechanosensory appendages of the cochlea and inner ear are nearly crystalline in cross-section and quite large by comparison (of order 1 µm). recent experimental studies of parallel actin bundles _in vitro_ demonstrate that to a large extent the mechanical and structural properties of bundle assemblies can be attributed to the crosslinking proteins, as well as to their interactions with the bundled filaments. an important but unresolved question concerns the organization of crosslinks in the bundle, and how this organization reflects structural features of the filaments themselves. in particular, protein filaments are universally helical in structure, and by virtue of the helical distribution of binding sites, interactions mediated by proteins crosslinking neighboring filaments must reflect the underlying chiral nature of the assembly. indeed, there is substantial experimental evidence showing that crosslinking actin in parallel bundles modifies the torsional state of constituent filaments from their native, unbound geometry. the extent to which this helical twist is transferred more globally to the structure of crosslinked actin bundles, as has been observed in superhelical assemblies of fibrin and collagen fibers, is presently unclear.
in this paper, we explore the fundamental interplay between the helical structure of biological protein filaments and the intrinsic torques generated by self-organizing distributions of crosslinks between filaments in regular bundle arrays. our task is to develop a generic theoretical model for the frustration that arises between the regular in-plane organization of filaments in bundles and the helical distribution of crosslinking sites along the filaments. we seek to understand how this frustration can be relieved at the expense of different types of mechanical distortions of the filament assembly, in particular a global twist distortion of the parallel array. a number of theoretical, computational and experimental studies demonstrate that chiral filament interactions which tend to twist bundled assemblies have the important consequence of providing an intrinsic and thermodynamic limitation to the lateral growth of bundles. in the present study, we show that the helical distribution of crosslinking sites on the filaments gives rise to a tendency for neighboring filaments to twist in order to relieve the elastic cost of distorting crosslinks. in turn, the tendency to twist filaments superhelically in the bundle leads to a mechanical cost for growing the bundle diameter, so that for sufficiently weak crosslink affinities and sufficiently high crosslink densities the equilibrium diameter becomes finite. our approach to this problem is based on a generalization of the ``worm-like bundle'' model of crosslinked filaments. this semi-microscopic model treats filaments as semi-flexible polymers decorated by discrete arrays of sites to which elastic crosslinks between neighboring filaments bind. this model has been successfully used to analyze the complex mechanical response of crosslinked bundles to bend and twist deformations. however, it does not allow one to explore the consequences of the helical filament structure for the mechanical and thermodynamic properties of the bundle.
in the present study, we consider a model where crosslink sites are located along helical ``grooves'' on the filaments, which are characterized by bending and torsional stiffness. modeling the elastic cost of linker distortions, we show that crosslinking between helical filaments leads to an energetic preference to align the opposing grooves of crosslinked filaments. depending on the relative stiffness of the linker and the filaments we find two different regimes: 1) a high-torque ``groove-locked'' regime, where stiff crosslinks force the grooves into alignment, and 2) a low-torque ``groove-slip'' regime, in which crosslinks are not stiff enough to enforce this alignment and cannot unwind the filaments from their native state of twist. though of a distinct microscopic origin, we find that in the groove-locked regime linker-mediated interactions lead to an elastic frustration similar to that occurring in ``coiled-coil'' assemblies of polypeptides. filament pairs may align opposing grooves either by untwisting the pitch of the grooves themselves _or_ by winding helically around one another with the appropriate pitch. based on the intrinsic and non-linear torques generated by crosslinking in helical filament bundles, our model makes two important predictions. first, we predict that, subject to an external torque, self-assembled bundles of crosslinked helical filaments exhibit a non-linear torsional response that is highly sensitive both to the intrinsic helical geometry of the filaments and to the fraction of bound crosslinks, $\rho$. second, we show that the competition between the linker-generated torques and the mechanical energy of bending filaments in superhelical bundles gives rise to the formation of self-limited bundles when the crosslink fraction is larger than a critical value, which is itself determined by the ratio of the bending cost of helically winding a pair of filaments to the torsional cost of unwinding the intrinsic twist of the helical grooves. a primary conclusion of this study is that finite-diameter bundles of helical filaments form preferentially when crosslinks are highly resistant to in-plane shear distortions and filaments have a large torsional stiffness relative to the bending modulus. this article is organized as follows. in sec. ii we introduce our model of crosslinked helical filament assemblies, and in sec. iii we derive the form of the linker-mediated torsional energy of filament bundles. in sec. iv we determine the dependence of bundle twist on bundle size, as well as the torsional response of self-assembled bundles. in sec. v we predict the thermodynamic behavior of crosslinked filament assemblies in terms of the density and binding energy of crosslinks in the bundle. finally, we conclude with a discussion of our results in the context of biological filament assemblies. to describe the interaction between crosslinking in parallel bundles and the helical geometry of constituent filaments, we introduce the following coarse-grained model, depicted schematically in fig. [fig:model]. in the cross-section (``in-plane''), the bundle is organized into a hexagonal array with a fixed center-to-center spacing of neighbor filaments. crosslinkers bind to neighboring filaments in parallel bundles, and, reflecting the helical symmetry of the filament, the ends of crosslinks are located at discrete points on helical grooves on the filaments.
in the most general case, helical filaments possess a range of groove geometries of differing helical symmetry. for the purposes of the following analysis, we focus on the simple case where binding sites are located on double-helical grooves which are perfectly out of phase (offset by half a turn between grooves). the crosslinking sites are linearly spaced by a fixed vertical separation along the filament backbone direction, and the grooves rotate at a native rate $\omega_0$, defining the pitch $2\pi/\omega_0$ of each helical groove. albeit simpler, this double-helical geometry is not unlike the two-start helical structure of f-actin, whose grooves rotate at a fixed rate in the native state of twist, with the vertical spacing representing the distance separating 2 actin monomers along the same bi-helical groove. while in actin filaments the monomers on opposing grooves are offset vertically by half of this spacing, the present simplified model has the advantage that in the lowest-energy state, which allows for maximal crosslinking, all filaments maintain the same axial ``orientation'', meaning that grooves in each vertical plane are aligned along the same direction. certainly, the analysis may be extended to address groove geometries that lead to non-trivial inter-filament correlations, as has been done previously, but our purpose here is to analyze the simplest model describing the interplay between the helical geometry of filaments and the inter- and intra-filament torques that arise in crosslinked parallel bundles. in parallel bundles, crosslinks bind selectively to pairs of sites which are closest in separation (see fig. [fig:bindingzone]); in our model this occurs where the grooves of neighboring filaments ``cross'', that is, where the orientation of the groove in a given plane is perfectly aligned with the in-plane vector separating the neighbors. for perfectly straight filaments with native twist, such a crossing occurs periodically along the backbone between any neighbor pair, with a repeat distance set by the groove pitch, and over a fraction of this distance the crossing rotates from one of the six neighbors of a given filament to the next. here, we will consider the case where the fraction, $\rho$, of occupied crosslinking sites in the bulk of the bundle is fixed. by the assumption of minimal crosslink stretching, crossing points are populated first, followed by sites closest to the crossing point, and so on, until a ``binding zone'', the wedge shared by any 2 neighboring filaments, is occupied with the appropriate number of crosslinks (see fig. [fig:bindingzone]). to model the elastic cost of crosslinks that are poorly aligned due to the helical distribution of sites, we introduce the following simple crosslink energy, $$\epsilon_{link} = -\epsilon + \frac{k}{2}\,\delta_\perp^2 .$$ here, $\epsilon_{link}$ is the energy of a single crosslinker in the bundle, and $-\epsilon$ represents the ``bare'' energy gain for binding between 2 perfectly aligned sites. the second term represents the elastic cost, of stiffness $k$, of _shear_ distortions of crosslinks away from the perfectly aligned geometry perpendicular to the backbone orientations. specifically, $\delta_\perp$ represents the _in-plane shear_, as shown in fig. [fig:model]. this model has marked similarities with a model for contact between hydrophobic residues on $\alpha$-helical polypeptide chains studied in the literature,
and hence our current model may also be applicable to the study of coiled-coil bundles of $\alpha$-helices. depending on the location of the crosslink in the bundle, these distortions, $\delta_\perp$, are determined purely by the state of twist of the constituent filaments as well as by the global twisting of the bundle itself. the twist of individual filaments is described by the angle $\phi(z)$ that gives the orientation of the groove in the plane of filament packing. hence, for the case of native filament twist, $\partial_z\phi = \omega_0$. in addition to the ``local'' twist of individual filaments, the bundle may twist as a whole, described by $\Omega$, the rate at which the in-plane lattice directions rotate around the long axis of the bundle. notice that in this description, $\phi$ and $\Omega$ are decoupled by construction, meaning that the filaments may be twisted without the bundle experiencing twist, or the bundle may be twisted without distorting the native symmetry of the constituent filaments. it is clear from this geometry (fig. [fig:model]) that $\delta_\perp$ is determined by the _relative rotation_ of the grooves with respect to the bundle lattice directions. this relative rotation can be measured by the angle $\psi(z) = \phi(z) - \Omega z$ [eq:psi], which gives the angle between the helical groove and the nearest-neighbor lattice separation, as shown in fig. [fig:model]. from this we have the in-plane linker shear, $\delta_\perp(\psi)$ [eq:dperp], which vanishes at a crossing and grows linearly with the misalignment angle for small $\psi$. when both the filament and bundle twist are non-zero, the distance between crossover points is determined by the distance over which the _relative angle_ rotates between successive neighbor directions. we will call this distance $z_b$, and refer to such a wedge shared by 2 filaments as a ``binding zone'' (see fig. [fig:bindingzone]). it is straightforward to see that $z_b$ is inversely proportional to the mean rate of relative rotation in this span. likewise, it is easy to see that of this zone, only a length $\rho z_b$ centered around the crossover point will be occupied with crosslinks. before analyzing the structural and thermodynamic consequences of the in-plane shear costs of crosslinking bonds, we note that a superhelical twist of the bundle introduces other, out-of-plane modes of crosslinker shear. we discuss, in the appendix, that a reorganization of the crosslinking sites along the long axis of the bundle allows linkers to trade the high elastic-energy cost of out-of-plane shear for a lower-energy in-plane distortion. furthermore, under this linker reorganization the elastic cost of non-zero out-of-plane shear ultimately contributes to the total free energy of the bundle at higher order than the leading-order cost of in-plane shear derived below, and represents only a nominal modification of the foregoing analysis. in this section, we consider a helical filament bundle at a fixed, constant rate of bundle twist, $\Omega$.
the aim is to integrate out the distribution of the individual crosslinks and the twist of the filaments within a binding zone, to derive an effective free energy in terms of the bundle twist alone. from this, we learn that crosslinking helical filament bundles necessarily introduces intrinsic torques which tend to wind the bundle superhelically in order to reduce the in-plane shearing of linkers. competing with the cost of crosslink shear is the cost for twisting individual filaments away from their native helical symmetry, $$e_{twist} = \frac{c}{2}\int dz\,\big[\partial_z\psi + \Omega - \omega_0\big]^2 . \qquad \text{[eq:torsion]}$$ here $c$ is the torsional modulus of the filament, and we used eq. [eq:psi] to rewrite the filament rotation, $\phi$, in terms of the groove alignment angle $\psi$. we analyze the respective costs of linker shear and filament twist by considering the profile of crosslinking occurring within a single binding zone (shown in figure [fig:bindingzone]). within a binding zone $\psi$ rotates by a fixed amount as opposing grooves come into near registry. thus, if $z_b$ is the length of the binding zone along the long axis of the bundle, then the cross term in eq. [eq:torsion] proportional to $\partial_z\psi$ is fixed. as shown in fig. [fig:bindingzone], at a fixed fraction of bound crosslink sites, $\rho$, linkers occupy a span of size $\rho z_b$ centered around the close contact between opposing helical grooves. hence, we may write the $\psi$-dependence of the elastic energy of the binding zone (per filament) in terms of a functional of the schematic form $$e[\psi] = \int_{-z_b/2}^{z_b/2} dz\,\Big[\frac{c}{2}\,(\partial_z\psi)^2 + \frac{\mu(z)}{2}\,\psi^2\Big],$$ where, from eq. [eq:dperp], $\mu(z)$ represents the effective pinning strength of the linkers located at $z$, which favor pinning the relative orientation to $\psi = 0$; $\mu(z)$ is non-zero only on the occupied span $|z| < \rho z_b/2$. the minimal-energy torsional state of the filament is determined by the euler-lagrange equation $c\,\partial_z^2\psi = \mu(z)\,\psi$, which, together with the boundary conditions at the edges of the zone, is satisfied by a rotation profile that is linear in the linker-free region and relaxes within the crosslinked span over the characteristic lengthscale $\lambda = \sqrt{c/\mu}$, defined by the relative cost of filament twist and linker shear. it is the lengthscale over which the filament twist can adjust from its native rate to the value required in the binding zone. the twist profile within a binding zone is largely determined by the ratio of this elastic lengthscale to the size of the binding zone. some characteristic groove-slip profiles are displayed in figure [fig:psi]. in the rigid-filament limit, where $\lambda \gg \rho z_b$, the solution is simply linear, which is equivalent to a homogeneously twisted state, $\partial_z\psi = \mathrm{const}$. this indicates a weak modification of the filament twist due to crosslinker elasticity, and we refer to this as the limit of _groove-slip_. in the opposite, rigid-crosslink limit, when $\lambda \ll \rho z_b$, we can show that $\psi \approx 0$ throughout the crosslinked span. in this second situation, referred to as the _groove-locked_ limit,
in the groove - slip regimethe elastic energy is computed from the homogeneous rotation profile , which incurs a torsional cost per unit length , as well as the cost of shearing the bound crosslinks according to . in the groove - locked regime , where crosslinks are bound and the rotation is carried over the unbound length , , of the binding zone for which .hence , the mean torsional energy of eq .( [ eq : groovelock ] ) .the final step in minimizing the elastic energy over the filament torsion is accomplished by minimizing the combined elastic and torsional energy density over for fixed to find the dependence of linker - mediated groove interactions on bundle twist , .\ ] ] in the limits described above it is is straightforward to show , where again , .the second - line above shows the clear preference of the groove - locked regime to maintain contact between crosslinked and nearly parallel grooves on neighbor filaments over large distances , , as . inserting these limiting cases into eq .( [ eq : fgroove ] ) we find a central result of our analysis , the free energy of linker and filament elasticity in terms of , the full non - linear behavior for the coarse - grained free energy ( as calculated from eq .( [ eq : fgroove ] ) ) is shown in figure [ fig : torque ] . in the groove - slip limit ( ) the linker - mediated groove interactions become insensitive to bundle twist , as the crosslinks are not stiff enough to noticeably affect the state of twist of the individual filaments . in this case, the elastic energy is set by the deformation of the flexible crosslinks , , but does not depend on the amplitude of the bundle twist .thus , increasing bundle twist does not appreciably increase the cost of shear deformation in the crosslinks .instead , the crosslinks reorganize into new binding sites allowing them to maintain a constant average crosslink deformation .this is possible , as in hexagonal bundles the angle is an upper limit for the angle between two neighboring grooves , _ independent _ of the size of the binding zone , .accordingly , the shear deformation of the maximally stretched linker can not grow larger than , independent of the bundle twist . in the groove - locked limit ( )the energy scale is set by the filament twist stiffness .the linker - mediated interactions lead to a preference to twist the bundle at a rate equal to the intrinsic twist of filaments , .this latter case reflects the fact that crosslinks on neighboring filaments lock a span of length into a parallel configuration . 
by rotating the inter-filament positions at the rate $\omega_0$, these groove-locked domains on neighbor filaments can be brought into coincidence with the native, helical geometry of the untwisted grooves. therefore, a high degree of crosslinking by rigid linkers ($\lambda \ll \rho z_b$) induces an _intrinsic torque_ on the entire bundle, which prefers the filament lattice to rotate at the rate of the helical grooves. the crossover between groove-locked and groove-slip behavior can be related to the deviation of the bundle twist from its optimal value: groove-locking occurs for $|\Omega - \omega_0| \lesssim \Delta\omega_c$, while groove-slip occurs for larger deviations from optimal twist, $|\Omega - \omega_0| \gtrsim \Delta\omega_c$, where $\Delta\omega_c$ grows with the fraction of bound crosslinks. therefore, as shown in fig. [fig:torque], the harmonic, linear-elastic twist dependence of the linker elastic energy is maintained for bundle twists near the intrinsic rate of groove twist, $\Omega \approx \omega_0$, over a range of twists that increases with the fraction of bound crosslinks. in the previous section we derived the form of the linker-induced elastic energy for global twist of crosslinked bundles of helical filaments. we found that crosslinks induce torques that tend to twist the bundle as a whole. these intrinsic torques give rise to unusual structural and mechanical properties of self-assembled bundles, including a highly non-linear dependence of the bundle twist on lateral size and on an externally applied torsional moment. to demonstrate these properties, we consider crosslinked bundles of semi-flexible helical filaments with a twist dependence described by eq. [eq:fgroove]. the additional mechanical cost of filament bending can be described by a simple elastic energy quadratic in the filament curvature, with bend modulus $b$; in a bundle twisted at rate $\Omega$, a filament at radius $r$ from the bundle axis has curvature $\simeq \Omega^2 r$ for small twist. averaging over a cylindrical cross-section of radius $R$, we obtain a free energy per unit filament length of the form $$f(\Omega, R) = f_{twist}(\Omega) + \frac{b}{4}\,\Omega^4 R^2 . \qquad \text{[eq:fomega]}$$ analytical progress is hampered by the complicated form of the twist free energy, eq. [eq:fgroove]. in the following we therefore assume a simplified interpolation [eq:interp], harmonic in $\Omega - \omega_0$ within a window of width $\Delta\omega_c$ around $\omega_0$ and saturating to a constant outside it, which captures the limiting groove-locked and groove-slip behaviors of eq. [eq:ftwist] as well as the overall non-linear dependence on $\Omega$ between these two limits. in the following subsection, we analyze the structural and mechanical properties of bundles in terms of two dimensionless parameters, $R/R_*$ and $\Delta\bar\omega_c = \Delta\omega_c/\omega_0$, where $R_*$ is a characteristic bundle size at which the mechanical cost of filament bending becomes comparable to the linker-induced cost of twisting the filaments from their native symmetry. we first analyze the equilibrium twist of self-assembled bundles as a function of the lateral radius $R$. according to the twist free energy, linker-mediated torques prefer a constant degree of twist, $\Omega = \omega_0$. on the other hand, the bending resistance of the filaments penalizes bundle twist in an $R$-dependent manner, due to the linear increase of filament curvature with radial distance from the bundle center. based on our expression for $f(\Omega,R)$, for fixed $\rho$ and $R$ the equilibrium bundle twist is determined from the solutions of $\partial f/\partial\Omega = 0$, where $\bar\omega = \Omega/\omega_0$ is the reduced twist. for the limiting case $\Delta\bar\omega_c \gg 1$, where linker shear is sufficiently strong to maintain the high-torque, groove-locked behavior over the entire range, the equilibrium bundle twist satisfies a cubic equation, which in reduced form reads $(\bar\omega - 1) + (R/R_*)^2\,\bar\omega^3 = 0$ [eq:omlocked] (up to numerical prefactors set by the harmonic stiffness). in this groove-locked limit the bundle is unwound continuously as $R$ increases, due to the increased cost of filament bending.
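this unwinding is easy to visualize numerically; the sketch below solves the reduced stationarity condition quoted above, $(\bar\omega - 1) + (R/R_*)^2\bar\omega^3 = 0$, over a range of radii (the unit prefactors are an assumption of this sketch, chosen only to exhibit the qualitative behavior).

```python
import numpy as np
from scipy.optimize import brentq

def eq_twist(r):
    """reduced equilibrium twist in the groove-locked limit, r = R / R*."""
    # stationarity of f = 0.5*(w - 1)**2 + 0.25*r**2*w**4 : (w - 1) + r**2 * w**3 = 0
    return brentq(lambda w: (w - 1.0) + r**2 * w**3, 0.0, 1.0)

for r in (0.1, 1.0, 10.0, 100.0):
    w = eq_twist(r)
    print(f"R/R* = {r:6.1f}   reduced twist = {w:.4f}   twist * (R/R*)^(2/3) = {w * r**(2/3):.3f}")
```

the last column tending to a constant shows that, under this assumed form, the optimal twist decays as $\bar\omega \sim (R/R_*)^{-2/3}$ at large radius, which is the power-law regime referred to immediately below.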
in the limit of large bundles, $R \gg R_*$, eq. [eq:omlocked] predicts a power-law decay of the optimal twist with bundle size. the range of groove-locked behavior is highly sensitive to the degree of crosslinking, since $\Delta\omega_c$ grows with $\rho$; hence for weakly crosslinked bundles, as the twist deviates from $\omega_0$ with increasing size, the torsional energy necessarily crosses over to groove-slip behavior, becoming largely insensitive to $\Omega$. thus, in the most general case, we expect the groove-locked predictions for $\Omega(R)$ described by eq. [eq:omlocked] to hold only for sufficiently small bundles, where the groove-locked approximation predicts $|\Omega - \omega_0| \lesssim \Delta\omega_c$. beyond this size, we expect the decrease of $\Omega$ with increasing size to become more rapid as the bundle slips from the high-torque region. the full dependence of $\Omega$ vs. $R$ is shown in fig. [fig:omvsr] for several values of $\Delta\bar\omega_c$. these curves show that the unwinding of the bundle due to the increased bending cost of large bundles becomes more rapid, in comparison to the predictions of eq. [eq:omlocked], as $\Delta\bar\omega_c$ is decreased. for a critical value of $\Delta\bar\omega_c$, the sensitivity of the twist to bundle size becomes singular, $|d\Omega/dR| \to \infty$, at a particular radius. for $\Delta\bar\omega_c$ below this critical value, $\Omega$ unwinds by a discontinuous, first-order jump, as the elastic energy minimum slips from the narrow high-torque behavior near $\Omega = \omega_0$ to the low-torque, groove-slip behavior of the twist energy. in this regime, the equilibrium state of twist is therefore highly susceptible to small changes in the mechanical or geometrical properties of the bundle. changing the bundle radius or the crosslink density only slightly may result in a sudden and strong reduction of the overall bundle twist. in this section, we demonstrate how this sensitivity leads to a highly non-linear twist response when the bundle is subject to an external torque. experiments that probe the torque-twist relation of single molecules have proven to be fruitful tools in advancing the understanding of the mechanical properties of biopolymers, such as dna. while the active manipulation of filament bundles is still a very delicate task, experiments in this direction may soon be feasible, and may therefore provide an indirect means to probe the elastic properties of crosslinkers and their interactions with filaments. in the presence of an external torque $m$, the free energy eq. [eq:fomega] acquires a linear term, $f - m\,\Omega$. in the absence of the filament bending term, it is straightforward to see that the bundle is thermodynamically unstable for any non-zero value of $m$: the linear term leads to a tilting of the twist free energy, and since the twist energy is asymptotically independent of $\Omega$, in the absence of bending resistance the thermodynamic ground state is always the fully twisted state. physically, this means that increasing the external torque does not lead to restoring forces, e.g. in the form of crosslink shearing. instead, the bundle adapts to the increased load by a reorganization of the crosslinks into new binding sites. accounting for the mechanical cost of filament bending, the bundle is stabilized at a twist at which the twist-induced bending energy of the filaments balances the external torque. the full dependence of $\Omega$ vs. $m$ is shown in fig. [fig:externaltorque].
for small values of $\Delta\bar\omega_c$ the instability appears as a sudden change of $\Omega$ with the external torque. note that, due to the helical nature of the filaments, the response of the bundle is inherently asymmetric, differing for positive and negative torques. hence, careful measurements of the non-linear torsional response of self-assembled bundles should provide an indirect, experimental means to probe the twist state of crosslinked filaments. in the previous sections we discussed the properties of bundles of a given radius. in this section we analyze the equilibrium thermodynamics of a system of self-assembled filaments, and consider the thermodynamic stability of bundles of finite radius in the presence of a fixed degree of crosslinking. here, we consider a system possessing a fixed total number of filaments. in the bundled state, all filaments are assumed to form bundles of a mean size $R$, and a negligible number of unbundled filaments remain dispersed in solution. as described in eq. [eq:elink], each bound crosslink contributes $-\epsilon$ of cohesive free energy. in the bulk of a self-assembled aggregate, crosslinks contribute a fixed cohesive energy per unit filament length, as there are 3 crosslinking ``channels'' per filament in the interior of the bundle. at the outer boundary of the bundle there are fewer crosslinks: roughly one fewer cohesive bond, times the number of filaments at the surface of the bundle. for a large bundle with a circular cross-section, we may estimate the fraction of surface filaments to scale as $1/R$, so that the net cohesive free energy per unit filament length of a crosslinked bundle of size $R$ approaches its bulk value with a deficit proportional to $1/R$. combining the cohesive energy with the elastic costs of linker shear and filament bending, we may write the total free energy per unit filament length [eq:f.all.terms], where the ratio of the bend to twist moduli defines a characteristic crosslink fraction and $\epsilon_c$ is a characteristic cohesive energy scale. the thermodynamics of filament assembly are characterized by the dependence of the equilibrium, free-energy-minimizing values of $R$ and $\Omega$ on $\rho$ and $\epsilon$. it is straightforward to show that the value of $R$ minimizing the free energy at fixed $\Omega$ is determined by a balance between the cohesive and bending energies of the bundle, satisfying a relation of the form $R_{eq} \propto \bar\omega^{-4/3}$ [eq:rofom]. thus, the growth of the equilibrium radius is quite generally correlated with decreasing twist for self-assembled bundles, and a non-zero measure of bundle twist implies a finite equilibrium radius. using the definition of $\Delta\bar\omega_c$, eq. [eq:domc], we can define another characteristic crosslink fraction that characterizes the crosslink density below which the bundle ``slips'' to the low-torque elastic energy branch. with the solution for $R$, eq. [eq:rofom], and the form of the twist energy from eq. [eq:interp], we may rewrite the reduced energy density purely in terms of the reduced twist $\bar\omega$, $$\bar f = \bar f_{twist}(\bar\omega) + \frac{3}{4}\, v(\rho,\epsilon)\,\bar\omega^{4/3} - \epsilon/\epsilon_c , \qquad \text{[eq:reduced]}$$ where the function $v(\rho,\epsilon)$ collects the combined bending and cohesive contributions.
the first term in eq .( [ eq : reduced ] ) , representing the crosslink - mediated torsional energy , is minimized at , while the second term , representing the combined cohesive and bending energies is minimized at zero twist , .hence , generically may be characterized by two minima , whose relative depth is determined by and .it is straightforward to analyze the case of rigid linkers , for which we expect over the range of filament assembly and where the first term in eq .( [ eq : reduced ] ) adopts the groove - locked limit , . in this case , we minimize the reduced free energy density for the limiting cases , we note that in the limit of and the minimum of corresponds to the groove - locked state , from which we find the following dependence of equilibrium bundle size on and , where we have used .stable , microscopic bundles are associated with the limit of high cross - link density and relatively weak cohesive energy per link where while .as one might have expected , eq . ( [ eq : size ] ) suggests that due to the adhesive effect of crosslinking , bundles grow with increasing , as larger bundles imply smaller surface effects .perhaps more surprising , the limit of this model predicts that at large linker densities bundles assemble to a finite size that _ grows _ with increasing fraction of bound crosslinkers , growing as . in this regimethe bundle is fully twisted , so filaments have to bend in order to be incorporated into the bundle . increasing bundle sizeis therefore only possible when the mechanical cost of filament bending is offset by the cohesive energy gain of adding crosslinks . in the limit of large ,note that the size of equilibrium bundles also diverges in the limit of small linker fraction as , indicating a smooth crossover to a state of unlimited , macroscopic filament assembly in the limit .it is straightforward to determine the full equation of state for an arbitrary value of , relating equilibrium twist , , to and from , } .\ ] ] the solutions to this equation of state are shown in fig .[ fig.omega.rhos ] , where we plot as a function of for constant values of .these results show that equilibrium twist decreases from both with _ decreased _ as well as with _ increased _ . underlying the unwinding of equilibrium bundlesare two effects , driven by decreasing crosslink fraction .first , as the number of crosslinkers in the bundles is reduced , the strength of the intrinsic torques that drive the bundle towards is correspondingly reduced .second , diverges as indicating a dramatic increase in the relative importance of filament bending in the elastic energy , further enhancing the preference to untwist the bundle . according to eq .( [ eq : rofom ] ) equilibrium bundles grow in radius as they unwind , and ultimately diverge in size at the untwisted , state .[ fig.omega.rhos ] shows this unwinding with decreased may occur either continuously or discontinuously , accompanied by a rapid jump in equilibrium twist . from the equation of state ,( [ eq : eos ] ) , we find a critical value that separates smooth from discontinuous bundle untwisting . at this critical point , and may differ slightly in the full model . 
for , the equilibrium twist state jumps from the groove - locked minimum of near to the nearly unwound groove - slip state .this discontinuous and highly non - linear thermodynamic dependence of self - assembled bundles derives from the non - linear interplay between linker shear and filament twist encoded in .we now proceed to sketch the possible phase diagrams of self - assembled filament systems .we begin by analyzing a transition between the 2 torsional states of self - assembled filament arrays , groove - locked vs. groove - slip . in terms of parameters and we sketch the twist - state phase diagram of fig . [ fig.omega.rhos]b .thermodynamically stable finite - sized bundles are associated with the groove - locked regime ( l ) , while in the slip regime ( s ) radial bundle growth is nearly unlimited in equilibrium as .the line of discontinuous transitions between the locked and slip regimes terminates at the critical point .the transition line can approximately be calculated by comparing the solution ( , groove - locked ) with the solution ( , groove - slip ) .note that this calculation will not be accurate close to the critical point , where .equating these two limiting forms of the energy , we find a relationship between and , satisfied at the ( ls ) boundary , across this line of first order transitions , the equilibrium twist of bundles slips from a nearly groove - locked state with of microscopic size to a nearly untwisted and macroscopically sized bundle .the location of the critical point follows from the condition which gives this critical branch is sketched as a dashed line in fig .[ fig.omega.rhos]b .hence , with increasing , say by decreasing linker shear stiffness , the ( ls ) boundary and the critical point shift to larger values of .notice that the bundle phase diagram has the familiar form of a liquid - gas transition .stable finite - sized bundles correspond to the `` condensed '' phase , while in the `` gas - phase '' the bundles are large and nearly untwisted . below a critical value of corresponding to , there is a well - defined first - order transition between these states at a certain linker density .for sufficiently strong cohesive energies the state of untwisted , macroscopic assembly evolves continuously to the state of finite - sized bundles at high linker fraction . in addition to the `` microscopic '' and `` macroscopic '' bundle phases , we also consider the possibility of a state of unassembled filaments , where no bundles form and filaments remain uncrosslinked .neglecting entropic contributions , such as the translational entropy of linkers and filaments , the free energy density in the filament phase vanishes , , as free filaments are mechanically undistorted .thus , the state of bundled filaments is stable with respect to unbundled filaments where the free energy of eq . ( [ eq : f.all.terms ] ) remains negative , indicating a net lowering of the free energy due to crosslink binding .the phase boundary , , can be determined approximately by assuming the limiting cases of groove - locked ( ) and groove - slip ( ) . equating the energy density of the groove - locked bundles to the free filament case , we find the condition satisfied along the twisted , bundle / filament ( bf ) phase boundary , we may also estimate the phase boundary between untwisted , macroscopic bundle assembly and the free filament phase , denoted as ( uf ) , by comparing the free energy of the state with and to the free filament energy in both the groove - locked and groove - slip regimes .
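before the ( uf ) boundary estimate is completed below , the boundary comparisons just described can be sketched numerically : on a grid of linker fraction and cohesive energy , one compares the minimized bundle free energy against the free - filament reference . in the sketch the cohesive gain is assumed proportional to the number of bound links , and all forms are again stand - ins for the stripped expressions .

```python
import numpy as np

def classify(rho, eps, v=1.0, w_c=0.5):
    w = np.linspace(1e-4, 1.2, 800)
    f = (np.minimum(rho * (1 - w) ** 2, rho * w_c ** 2)
         + 0.75 * v * w ** (4.0 / 3.0)
         - rho * eps)                       # assumed cohesive gain per bound link
    i = np.argmin(f)
    if f[i] >= 0:                           # free filaments have zero free energy
        return "free filaments"
    return "twisted bundle" if w[i] > 0.2 else "untwisted / macroscopic"

for rho in (0.3, 1.0, 3.0):
    for eps in (0.2, 1.0):
        print(f"rho={rho:3.1f}, eps={eps:3.1f} -> {classify(rho, eps)}")
```

even this toy grid displays the three candidate states discussed in the text : unassembled filaments at weak cohesion , nearly untwisted macroscopic assembly at low linker fraction , and twisted , finite - sized bundles at high linker fraction .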
from eq .( [ eq : reduced ] ) , we find this boundary satisfies , based on the - and -dependence of these boundaries , we sketch two basic scenarios for the diagram of state of bundle assembly of crosslinked helical filaments , based on the interference of the ( ls ) , ( bf ) and ( uf ) boundaries . when , the critical point at and lies deep within the filament phase and the ( ls ) line is not relevant .this is the limit of effectively _ rigid linkers _ , characterized by .we sketch the phase behavior in fig .[ fig.phasediagram]a . here , the boundaries ( uf ) and ( bf ) meet at the point and . below this critical value of the cohesive energy of crosslinks , , bundles only form at crosslinker fractions that are larger than .bundles formed in this low - cohesive energy regime are twisted and grow in size with increasing crosslinking density as . for smaller crosslink densities , the filaments remain in the dispersed state , allowing for a narrow region of slip - induced , macroscopic filament assembly predicted for . for , filaments assemble at all crosslinker fractions , with a state of macroscopic bundle assembly crossing over to microscopic , twisted bundles as increases from 0 to values of order .the second scenario , , occurs in the limit of _ flexible linkers _ . in this case the critical point lies inside the regime of linker - mediated filament assembly , as shown in fig .[ fig.phasediagram]b . in addition to the line of first - order transitions separating the groove - locked and groove - slip states of bundles , we also have a triple point , at which the two bundle phases , macroscopic and microscopic , coexist with the state of dispersed filaments . from the scaling expressions given for ( bf ) and ( uf ) we locate the triple point at . in this case , bundled phases form for all cohesive energies , , with a line of first - order transitions that separates the phase of macroscopic , nearly untwisted bundles at low linker densities from the phase of small radius , twisted bundles at high linker density and terminates at the second order critical point . for weaker cohesive energy per crosslinker , there are three possible states of filament assembly . at low density of bound linkers , , we predict a phase of untwisted , macroscopic filament assembly . for intermediate densities of linkers , , we predict a state of unbound and dispersed filaments . finally , at the highest densities of linkers , , we find a re - entrant state of twisted , filament bundles , whose finite diameter grows with linker density as .in this study , we have analyzed a coarse - grained model for crosslinking in ordered arrays , or bundles , of helical filaments .a primary result of this model is a quantitative relationship between the presence of crosslinkers in bundles , and intrinsic torques that act to coherently twist entire bundles superhelically around their central axis .such global distortion naturally competes with the mechanical cost of bending stiff filaments , providing a complex feedback between the torsional structure , size and thermodynamics of bundle assembly . along with the total number and binding affinity of crosslinkers , the key parameters governing the structure and assembly of crosslinked bundles include the elastic properties of the filaments and the linkers themselves .
indeed , we find the ratio of linker stiffness to filament stiffness to be particularly important for controlling not only the structure and properties of self - assembled bundles , but also the sensitivity of these properties ( size , twist and stability ) to changes in the availability and affinity of crosslinkers . the effective torsional elastic energy of bundles derived in sec .iii describes an intrinsic frustration between the shear distortion of crosslinks that bind together neighboring filament grooves and the torsional elastic response of the filaments themselves . in the limiting case of perfectly rigid linkers , where , our model is not unlike the elastic model of coiled - coils of neukirch , goriely and hausrath , in which adhesive interactions between neighboring helical molecules maintain grooves in perfect registry .geometrically , this can be accomplished either by untwisting the rotation of the grooves or by superhelically twisting the filaments about the bundle axis .the twist elastic energy of the filaments is minimal for the state of perfect superhelical twist , .deviations from this optimal geometry , while maintaining perfect groove contact , are described by a coarse - grained twist free - energy cost , .our theory generalizes the coiled - coil model of perfect groove contact in two ways .first , grooves only maintain contact over a fraction , , of the length of the filaments due to the finite density of crosslinkers between any filament pair .this distinction accounts for the dependence of the effective torsional modulus , of the stiff - linker regime , a significant effect in the context of linker - mediated assembly .the second important and novel effect captured by the model is the elastic compliance of the linker bonds themselves .flexibility of bound linkers gives rise to highly nonlinear torsional properties of the bundles themselves .the twist elastic behavior predicted by the case of groove - locked contact gives way to a highly twist - compliant state ( `` groove - slip '' ) when deviations from the optimal twist geometry are large , .the crossover twist , which delineates the groove - locked from the groove - slip regime , increases both with increased linker density and increased linker stiffness .hence , untwisting bundles out of the groove - locked into the groove - slip state indicates a shift in the mechanical load from filament twist to in - plane crosslink shear distortions . in this limit the elastic cost is insensitive to bundle twist , and given by .the distinction between a purely harmonic twist energy for stiff linkers and a non - linear twist energy for flexible linkers has key consequences for the structural and mechanical properties of crosslinked , helical filament bundles .this is reflected in the optimal rate of bundle twist , a property that is determined not only by a balance of the cost of linker shear and filament twist , but also the mechanical costs associated with filament bend .the costs of filament bending are themselves highly sensitive to the lateral radius of the bundle due to the increased curvature of filaments away from the bundle center .
in the case of rigid linkers , grooves remain locked in close contact , so that the optimal twist derives purely from the competition between the bending cost of rotating filaments superhelically in the bundle , and the twist elastic cost needed to maintain groove - alignment when filaments are parallel .this balance is described by the results of eq .( [ eq : omlocked ] ) , where the increased cost of bending filaments in large bundles leads to a continuous unwinding as .in contrast , bundles bound by relatively flexible linkers are much more sensitive to bundle size , the sensitivity ultimately becoming singular when .these flexibly crosslinked bundles exhibit a discontinuous drop in as a function of , as the bundle rapidly jumps between the groove - locked and groove - slip behaviors .hence , in this flexible linker regime , the unwinding of bundle twist with increased size is largely determined by a competition between filament bending and linker shear due to misaligned grooves .a similar distinction is predicted for the non - linear torsional response of bundles : depends continuously on externally applied torque for rigidly crosslinked bundles ; while flexibly crosslinked bundles are multi - stable , exhibiting discontinuous transitions between groove - locked and groove - slip states driven by external torque .these non - linear structural and mechanical properties reflect the frustration of the geometry of crosslinking within helical filament bundles that ultimately gives rise to intrinsic mechanical torques that lead to _ self - limiting bundle assembly _ . considering the thermodynamically optimal state of twist and radius for a system of associating filaments and linkers , we found two basic scenarios for the dependence of assembly behavior on the number and affinity of crosslinkers , as parameterized by and , respectively .for the case of _ rigid linkers _ ( shown in fig .[ fig.phasediagram]a ) , we find that assembled filaments maintain the high - torque , groove - locked state over the relevant range of their assembly . above a critical cohesive energy per crosslinker , , we predict that filament assemblies form at all linker densities , with macroscopically large and untwisted assembly behavior at low crossing over continuously to a regime of twisted and finite - sized bundles at large . for less cohesive crosslinking bonds , , we find that filaments remain largely unbundled below a linker density , above which they form twisted bundles whose lateral size grows as . for bundles crosslinked by _ flexible linkers _ ( behavior shown in fig .[ fig.phasediagram]b ) the transition between highly and weakly twisted states as a function of increased is discontinuous below a critical value of .we also find that finite - sized bundles , untwisted macroscopic assemblies and unbundled , free filaments coexist at a triple point whose precise location is sensitive to the flexibility of linkers as parameterized by . for values of below this triple point , we predict that macroscopic assembly occurs in the limit of small , giving way to a dispersed filament phase at intermediate -values and at the highest range of linker density , a re - entrant phase of finite - sized linker - mediated bundles occurs .
the rich spectrum of possible assembly behavior can be fully classified in terms of three parameters .the fundamental cohesive energy scale is determined by , which corresponds to the energy of untwisting the intrinsic rotation of the helical grooves of the filaments .the different modes of elastic deformation induced by crosslinkers in helical filament bundles account for the presence of two fundamental scales of bound crosslinker fraction , .the first , , can be understood as the ratio of the mechanical costs of two pair - wise filament geometries : , roughly the bend cost of winding a filament pair into a groove - locked coiled - coil geometry ; and , the mechanical cost of untwisting the intrinsic helical grooves into parallel filaments .we find that for flexible crosslinks , sets a lower limit for the fraction of bound crosslinkers at which self - limited bundles form , suggesting that self - assembled bundles are thermodynamically favored when the cost of filament bend is relatively small compared to the cost of filament twisting .the final parameter , , reflects the relative elastic cost of linker shear to filament twist and determines the density of crosslinkers below which high - torque , groove - locked bundle states cross over to low - torque , groove - slip behavior .hence , we can classify the thermodynamic distinctions in linker - mediated assembly by the ratio , which is much greater or less than 1 for the respective rigid and flexible behaviors depicted in fig .[ fig.phasediagram ] .the predictions of our coarse - grained model are most directly relevant to the formation of parallel actin bundles , self - organized cytoskeletal filament assemblies that form in a variety of cellular specializations under the influence of compact crosslinking proteins .there is a range of experimental evidence of parallel actin bundles formed _ in vivo _ and _ in vitro _ showing that the presence of certain crosslinking proteins affects the torsional state of bundled filaments , leading to a modest adjustment of the rotation rate of the primary helix formed by the actin monomers .recent small - angle scattering studies suggest , in fact , that the presence of different crosslinking proteins modifies the twist of bundled actin filaments to a similar degree , but reconstituted solutions of actin bundles exhibit a remarkably different sensitivity to the concentration of crosslinking proteins .the crosslinking protein fascin , a primary component of filopodial bundles , was shown to assemble filaments into bundles with a continuously variable degree of filament twist , while the crosslinking protein espin , an abundant crosslinker in microvillar and mechanosensitive stereociliar bundles , drove a discrete transition between the native state of twist and the torsional geometry of the `` fully - bundled '' actin filaments .theoretical studies of model actin filaments in bulk , parallel arrays attribute this difference in twist sensitivity to differences in the _ stiffness _ of the crosslinking bonds themselves .
in our model , we show that the presence of crosslinking bonds in parallel arrays of helical filaments not only modifies the torsional geometry of individual filaments , but in general determines the amount of global twist of the entire bundle assembly .in particular , we show that the competition between the inter - filament geometry preferred by crosslinking and the bend elasticity of filaments mediates an intrinsic feedback between the lateral assembly size and the amount of superhelical twist .an important and general prediction of our model is the appearance of a thermodynamically stable phase of finite radius bundles at sufficiently high crosslink fractions that exhibits a linker - dependent equilibrium radius , . _ et al . _ have explored the dependence of actin bundles formed in reconstituted solutions of filaments and the crosslinker , fascin . above a critical ratio of fascin to actin monomer , , they found that the mean diameter of bundles formed exhibited a power - law dependence , , for small bundles , consistent with the predictions of our coarse - grained model , which predict that the radius of twisted bundles grows with bound linker fractions as .hence , our model establishes a thermodynamic link between the simultaneous influence of crosslinker fraction on bundle size and degree of filament twist in these actin / fascin experiments .additionally , our theory establishes a range of further predictions on both the affinity and flexibility of crosslinking proteins , suggesting the need for further experiments on the influence of crosslinker properties on the size and structure of parallel actin bundles .notably , the implied differences in the compliance between fascin and espin crosslinks should be correlated with measurable differences in the assembly thermodynamics of reconstituted bundle - forming solutions . in summary , we have found that although crosslinking bonds between helical filaments mediate net cohesive interactions between filaments , the complex interplay between the mechanics of linker and filament distortions ultimately gives rise to a nontrivial dependence of the structure of self - assembled bundles on the number of bound linkers .we quantitatively model the thermodynamic influence of a number of microscopic parameters , from filament stiffness and intrinsic twist to the affinity of crosslink bonds . among these , we show that the stiffness of the linkers themselves accounts for remarkable differences in the sensitivity of bundle structure to the availability of crosslinks .it is natural to expect that living systems may exploit the intrinsic frustration of crosslinking in helical filament bundles as a robust means of regulating the size and structure of self - assembled bundles , not by modifying properties of the filaments , but instead by carefully regulating the number and type of crosslinking proteins alone .the authors would like to acknowledge h. shin for useful comments , and the hospitality of the aspen center for physics , where this study originated .ch was supported by a feodor lynen fellowship of the german humboldt foundation .gg was supported by the nsf career program under dmr grant 09 - 55760 .in this appendix we demonstrate how out - of - plane shear deformations of crosslinkers may partially relax through a coupling to in - plane shear modes .
generalizing eq.([eq : elink ] )we consider an energy of crosslinking , that contains , next to the in - plane shear deformations , , the out - of plane shear , , which can be associated with filaments tilting into the plane of lattice order . in a twisted bundle , filaments are increasingly tilted along the azimuthal direction as the radial distance from the bundle center increases , where is the radial vector from the center of the bundle . the in - plane distance between crosslinking pointsis fixed to , and we can relate the out - of - plane shear to the average tilt angle , , of two neighboring filaments where in this equation refers to the mean in - plane position of filaments and and is a unit vector that points from one filament to the other . hence , in the twisted geometry , out - of - plane shear depends on , and is different for different filament pairs in the bundle . as we are interested in , we can replace with its average over the 6-fold nearest neighbor directions on the hexagonal lattice , and the neighbor - averaged shear becomes , notice that this shear is insensitive to , unlike . as described in eq .( [ eq : dpar ] ) , out - of - plane shearing occurs when neighboring filaments , say and , sharing crosslinks , tilt along their nearest neighbor separation vector , , making an angle , relative to the straight configuration as shown in fig .[ fig : outofplane ] .this assumes the crosslinks to bind in the `` horizontal '' plane , which is perpendicular to the bundle axis .however , if we allow the crosslinks to relax their binding geometry at fixed after shear takes place , then the actual shear angle of the crosslinks , may be reduced from the shear angle of the crosslinks , .this proceeds by shifting the linker ends bound to the ( ) filament `` upstream '' ( `` downstream '' ) , by an amount ( ) and reduces the out of plane shear to . the amount of linker shift , , is determined from the competing effect of in - plane shear .due to the rotation of the filament groove , as described by , the shift of crosslink ends _ also _ alters the in - plane shear angle to , where we assume that the rotation profile for both filaments is identical . integrating the in - plane shear along one binding zone from to and noting from the analysis of sec.[sec : elastic - free - energy ] that while , the elastic contribution per unit length from out of plane shear from a single - binding zone can be written as , where and . minimizing over the shift of boundlinker ends we find , and an effective free energy contribution per unit length of the interfilament groove , thus , we find that the coupling between out - of - plane and in - plane shear modes leads to a tilt - dependence renormalization of the torsional energy of the filaments ( ) . in the limit that linkers are softer to out - of - plane shear than to in - plane shear , or ,the contribution to the bundle energy from the out - of - plane shear is clear negligible compared to the in - plane cost . 
from eq .( [ eq : epar ] ) we find that , also in the limit that , the out - of - plane shear leads to a renormalization of the effective twist modulus of the filaments , , using . relative to the effective energy analyzed in sec . [ sec : prop - bundl - with ] and [ sec : link - medi - filam ] , the net correction to the linker - mediated twist energy of the bundle from the out - of - plane shear mode ultimately represents a higher - order term in , proportional to , and therefore will not strongly alter the quantitative analysis of bundle thermodynamics from the case .
|
inspired by the complex influence of the globular crosslinking proteins on the formation of biofilament bundles in living organisms , we study and analyze a theoretical model for the structure and thermodynamics of bundles of helical filaments assembled in the presence of crosslinking molecules . the helical structure of filaments , a universal feature of biopolymers such as filamentous actin , is shown to generically frustrate the geometry of crosslinking between the `` grooves '' of two neighboring filaments . we develop a coarse - grained model to investigate the interplay between the geometry of binding and the mechanics of both linker and filament distortion , and we show that crosslinking in parallel bundles of helical filaments generates _ intrinsic torques _ , of the type that tend to wind the bundle superhelically about its central axis . crosslinking mediates a non - linear competition between the preference for bundle twist and the size - dependent mechanical cost of filament bending , which in turn gives rise to feedback between the global twist of self - assembled bundles and their lateral size . finally , we demonstrate that above a critical density of bound crosslinkers , twisted bundles form with a thermodynamically preferred radius that , in turn , increases with a further increase in crosslinking bonds . we identify the _ stiffness _ of crosslinking bonds as a key parameter governing the sensitivity of bundle structure and assembly to the availability and affinity of crosslinkers .
|
we are interested in solving inverse problems which can be formulated as the operator equation where is an operator between two banach spaces and with domain ; the norms in and are denoted by the same notation , which should be clear from the context .a characteristic property of inverse problems is their ill - posedness in the sense that their solutions do not depend continuously on the data . due to errors in the measurements , one never has the exact data in practical applications ; instead only noisy data are available .if one uses the algorithms developed for well - posed problems directly , they usually fail to produce any useful information since noise could be amplified by an arbitrarily large factor .let be the only available noisy data to satisfying with a given small noise level .how to use to produce a stable approximate solution to ( [ 1.1 ] ) is a central topic , and regularization methods should be taken into account . when both and are hilbert spaces , many regularization methods have been proposed to solve inverse problems in the hilbert space framework ( ) . in the case of a bounded linear operator , nonstationary iterated tikhonov regularization is an attractive iterative method in which a sequence of regularized solutions is defined successively by where is an initial guess and is a preassigned sequence of positive numbers . since can be written explicitly as where denotes the adjoint of , the complete analysis of the regularization property has been established ( see and references therein ) when satisfies a suitable property and the discrepancy principle is used to terminate the iteration . this method has been extended in to solve nonlinear inverse problems in hilbert spaces .regularization methods in hilbert spaces can produce good results when the sought solution is smooth .however , because such methods have a tendency to over - smooth solutions , they may not produce good results in applications where the sought solution has special features such as sparsity or discontinuities . in order to capture the special features , the methods in hilbert spaces should be modified by incorporating the information of suitable adapted penalty functionals , for which the theories in the hilbert space setting are no longer applicable .the nonstationary iterated tikhonov regularization has been extended in for solving linear inverse problems in the banach space setting by defining as the minimizer of the convex minimization problem for successively , where , and denotes the bregman distance on induced by the convex function .when is uniformly smooth and uniformly convex , and when the method is terminated by the discrepancy principle , the regularization property has been established if satisfies .the numerical simulations in indicate that the method is efficient in sparsity reconstruction when choosing with close to on one hand , and provides a robust estimator in the presence of outliers in the noisy data when choosing on the other hand .however , since is required to be uniformly smooth and uniformly convex and since is induced by the power of the norm in , the result in does not apply to regularization methods with and total variation ( tv ) like penalty terms that are important for reconstructing the sparsity and discontinuities of sought solutions .total variation regularization was introduced in ; its importance was recognized immediately and many subsequent works were conducted in the last two decades .
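for reference , the classical hilbert - space iteration recalled above admits a very short implementation , since each step reduces to a linear solve of the form $x_k^\delta = ( a^*a + \alpha_k i )^{-1} ( a^* y^\delta + \alpha_k x_{k-1}^\delta )$ . the sketch below uses a synthetic smoothing operator , a geometrically decaying , and the discrepancy principle as the stopping rule ; all problem data are placeholders chosen for illustration only .

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
s = np.linspace(0, 1, n)
A = np.exp(-10 * (s[:, None] - s[None, :]) ** 2) / n   # assumed smoothing kernel matrix
x_true = np.sin(np.pi * s)
delta = 1e-3
y = A @ x_true + delta * rng.standard_normal(n)        # noise level ~ delta per component

x, tau, alpha = np.zeros(n), 1.1, 1.0
for k in range(200):
    if np.linalg.norm(A @ x - y) <= tau * delta * np.sqrt(n):   # discrepancy principle
        break
    # nonstationary iterated Tikhonov step: x_k = (A^T A + a_k I)^{-1}(A^T y + a_k x_{k-1})
    x = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y + alpha * x)
    alpha *= 0.7
print(f"stopped at k = {k}, residual = {np.linalg.norm(A @ x - y):.2e}")
```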
in an iterative regularization method based on bregman distance and total variation was introduced to enhance the multi - scale nature of reconstruction .the method solves ( [ 1.1 ] ) with linear and a hilbert space by defining in the primal space and in the dual space via , where . for a convex function , we use to denote its effective domain .we call proper if . given we define any element is called a subgradient of at .the multi - valued mapping is called the subdifferential of .it could happen that for some .let for and we define which is called the bregman distance induced by at in the direction .clearly .by straightforward calculation one can see that for all , , and . a proper convex function is called uniformly convex if ( [ pconv ] ) holds . [ lem : kadec ] let be a proper , weakly lower semi - continuous , and uniformly convex function .then admits the kadec property , i.e. for any sequence satisfying and there holds as . assume the result is not true .then , by taking a subsequence if necessary , there is an such that for all . in view of the uniform convexity of , there is a such that using we then obtain on the other hand , observing that , we have from the weakly lower semi - continuity of that therefore , which is a contradiction . in many practical applications , proper , weakly lower semi - continuous , uniformly convex functions can be easily constructed .for instance , consider , where and is a bounded domain in .it is known that the functional is uniformly convex on ( it is in fact -uniformly convex ) .consequently we obtain on the uniformly convex functions where , , and denotes the total variation of over that is defined by ( ) for and the corresponding function is useful for sparsity reconstruction ( ) ; while for and the corresponding function is useful for detecting discontinuities , in particular , when the solutions are piecewise - constant ( ) .we now return to ( [ 1.1 ] ) , where is an operator between two banach spaces and .we will always assume that is reflexive , is uniformly smooth , and ( [ 1.1 ] ) has a solution . in general , the equation ( [ 1.1 ] ) may have many solutions . in order to find the desired one , some selection criteria should be enforced . choosing a proper convex function , we pick and as the initial guess , which may incorporate some available information on the sought solution .we define to be the solution of ( [ 1.1 ] ) with the property we will work under the following conditions on the convex function and the operator .[ a0 ] is a proper , weakly lower semi - continuous and uniformly convex function such that ( [ pconv ] ) holds , i.e. there is a strictly increasing continuous function with such that for , and . [ a1 ] 1 . * is convex , and is weakly closed , i.e. for any sequence satisfying and there hold and ; * there is such that ( [ 1.1 ] ) has a solution in , where ; * is fréchet differentiable on , and is continuous on , where denotes the fréchet derivative of at ; * there exists such that for all . when is a reflexive banach space , by using the weak closedness of and the weakly lower semi - continuity and uniform convexity of it is standard to show that exists .the following result shows that is in fact uniquely defined .[ lem existence and uniqueness of xdag ] let be reflexive , satisfy assumption [ a0 ] , and satisfy assumption [ a1 ] .
if is a solution of satisfying ( [ eq definition of xdag ] ) with then is uniquely defined .assume that ( [ 1.1 ] ) has two distinct solutions and satisfying ( [ eq definition of xdag ] ) .then it follows from ( [ eq:20.10june ] ) that by using assumption [ a0 ] on we obtain and . since , we can use assumption [ a1 ] ( d ) to derive that .let for . then and .thus we can use assumption [ a1 ] ( d ) to conclude that since , this implies that .consequently , by the minimal property of we have on the other hand , it follows from the strict convexity of that for which is a contradiction to ( [ eq:20.7june ] ) . we are now ready to formulate the nonstationary iterated tikhonov regularization with penalty term induced by the uniformly convex function . for the initial guess and , we take a sequence of positive numbers and define the iterative sequences and successively by for , where and denotes the duality mapping of with gauge function which is single - valued and continuous because is assumed to be uniformly smooth . at each step , the existence of is guaranteed by the reflexivity of and , the weakly lower semi - continuity and uniform convexity of , and the weak closedness of .however , might not be unique when is nonlinear ; we will take to be any one of the minimizers . in view of the minimality of , we have . from the definition of , it is straightforward to see that we will terminate the iteration by the discrepancy principle with a given constant .the output will be used to approximate a solution of ( [ 1.1 ] ) . in order to understand the convergence property of , it is necessary to consider the noise - free iterative sequences and , where each and with are defined by ( [ eq : method ] ) with replaced by , i.e. , in section [ noise - free ] we will give a detailed convergence analysis on ; in particular , we will show that strongly converges to a solution of ( [ 1.1 ] ) . in order to connect this result with the convergence property of , we will make the following assumption .[ a2 ] is uniquely defined for each .we will give a sufficient condition for the validity of assumption [ a2 ] .this assumption enables us to establish some stability results connecting and so that we can finally obtain the convergence property of in the following result .[ th2 ] let be reflexive and be uniformly smooth , let satisfy assumption [ a0 ] , and let satisfy assumptions [ a1 ] and [ a2 ] .assume that , and that is a sequence of positive numbers satisfying and for all with some constant .assume further that then , the discrepancy principle terminates the method ( [ eq : method ] ) after steps .moreover , there is a solution of ( [ 1.1 ] ) such that as . if , in addition , for all , then . in this result , the closeness condition ( [ eq:20.8june ] ) is used to guarantee that is in for so that assumption [ a1 ] ( d ) can be applied .this issue does not appear when is a bounded linear operator .furthermore , assumption [ a2 ] holds automatically for linear problems when is strictly convex .consequently , we have the following convergence result for linear inverse problems .[ th3 ] let be a bounded linear operator with being reflexive and being uniformly smooth , let be proper , weakly lower semi - continuous , and uniformly convex , let , and let be such that and for all with .then , the discrepancy principle with terminates the method after steps . moreover , there hold as .
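the following python sketch shows one plausible realization of the method for a linear operator , with and the non - smooth penalty $\theta(x) = \frac{1}{2}\|x\|^2 + \beta\|x\|_1$ highlighted earlier : each outer step solves the convex subproblem by a proximal - gradient ( ista ) inner loop , the subgradient is updated so that it stays in $\partial\theta(x_n)$ , and the loop is stopped by the discrepancy principle . as a by - product it prints the bregman distance to the sought solution , which by the theory should be ( nearly ) monotonically decreasing . the solver choice , step counts and synthetic data are assumptions of the sketch , not part of the paper .

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def theta(x, beta):
    return 0.5 * x @ x + beta * np.abs(x).sum()

def bregman(z, x, xi, beta):
    # D_xi theta(z, x) for xi in the subdifferential of theta at x
    return theta(z, beta) - theta(x, beta) - xi @ (z - x)

def solve_subproblem(A, y, xi, alpha, beta, iters=400):
    # minimize (1/(2 alpha))||A x - y||^2 + theta(x) - <xi, x> by ISTA
    L = 1.0 + np.linalg.norm(A, 2) ** 2 / alpha      # Lipschitz constant of smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = x + A.T @ (A @ x - y) / alpha - xi
        x = soft(x - grad / L, beta / L)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200); x_true[[10, 50, 120]] = [3.0, -2.0, 1.5]
delta = 0.02
e = rng.standard_normal(60)
y = A @ x_true + delta * e / np.linalg.norm(e)       # noise of norm exactly delta

x, xi = np.zeros(200), np.zeros(200)
alpha, tau, beta = 1.0, 1.05, 0.5
for n in range(60):
    if np.linalg.norm(A @ x - y) <= tau * delta:     # discrepancy principle (dp)
        break
    x = solve_subproblem(A, y, xi, alpha, beta)
    xi = xi - A.T @ (A @ x - y) / alpha              # optimality => xi_n in d(theta)(x_n)
    alpha *= 0.8
    print(f"n={n}: residual={np.linalg.norm(A @ x - y):.4f}, "
          f"D_xi(theta)(x_true, x_n)={bregman(x_true, x, xi, beta):.4f}")
print("recovered support:", np.flatnonzero(np.abs(x) > 0.1).tolist())
```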
in the next section, we will give the detailed proof of theorem [ th2 ] .it should be pointed out that the convergence does not imply directly since is not necessarily continuous .the proof of relies on an additional observation .when applying our convergence result to the situation that and with , we can obtain this significantly improves the result in in which only the boundedness of was derived and hence only weak convergence for a subsequence of can be guaranteed .we conclude this section with a sufficient condition to guarantee the validity of assumption [ a2 ] .[ a3 ] there exist and such that $\left [ \,\cdot\, \right]^{1-\kappa } \left[\delta_r(f(\bar x)-y , f(x)-y)\right]^\kappa$ for all with and , where denotes the bregman distance on induced by the convex function .when is a -uniformly convex banach space , is a -uniformly convex function on with , and , assumption [ a3 ] holds with if there is a constant such that for , which is a slightly strengthened version of assumption [ a1 ] ( d ) .[ lem : a3 ] let be reflexive and be uniformly smooth , let , let satisfy assumption [ a0 ] , let satisfy assumptions [ a1 ] and [ a3 ] , and let satisfy .assume that $\left [ \,\cdot\, \right]^{1-\frac{1}{r } } < 1$ with .then assumption [ a2 ] holds , i.e. is uniquely defined for each .we will prove lemma [ lem : a3 ] at the end of section [ subsect4.1 ] by using some useful estimates that will be derived during the proof of the convergence of . we prove theorem [ th2 ] in this section .we first obtain a convergence result for the noise - free iterative sequences and .we then consider the sequences and corresponding to the noisy data case , and show that the discrepancy principle indeed terminates the iteration in finite steps .we further establish a stability result which in particular implies that as for each fixed . combining all these results we finally obtain the proof of theorem [ th2 ] .[ subsect4.1 ] we first consider the noise - free iterative sequences and defined by ( [ eq : method ] ) and obtain a convergence result that is crucial for proving theorem [ th2 ] .our proof is inspired by .[ thm1 ] let be reflexive and be uniformly smooth , let , let satisfy assumption [ a0 ] , let satisfy assumption [ a1 ] , and let satisfy .assume that then there exists a solution of ( [ 1.1 ] ) in such that if in addition for all , then .we first show by induction that for any solution of ( [ 1.1 ] ) in there holds this is trivial for .assume that it is true for for some ; we will show that it is also true for . from ( [ 4.3.1 ] ) we have by dropping the first term on the right which is non - positive and using the definition of we can obtain in view of the properties of the duality mapping it follows that in order to proceed further , we need to show that so that assumption [ a1 ] ( d ) on can be employed . using the minimizing property of , the induction hypothesis , and ( [ 21.1june ] ) we obtain with the help of assumption [ a0 ] on , we have therefore . thus we may use assumption [ a1 ] ( d ) to obtain from ( [ 5.4.1 ] ) that this and the induction hypothesis imply ( [ 7.7.1 ] ) with . as an immediate consequence of ( [ 7.7.1 ] ) , we know that ( [ 7.7.2 ] ) is true for all .consequently and by using the monotonicity of with respect to , we obtain since as , we have as .next we show that converges to a solution of ( [ 1.1 ] ) .
to this end , we show that is a cauchy sequence in .for we have from ( [ 4.3.1 ] ) that by the definition of we have by using assumption [ a1 ] ( d ) on and the monotonicity of we can obtain therefore , by using ( [ 8.4.2012 ] ) , we have with that consequently since is monotonically decreasing , we obtain as . in view of the uniform convexity of , we can conclude that is a cauchy sequence in .thus for some as .since as , we may use the weak closedness of to conclude that and .we remark that because .next we show that from the convexity of and it follows that in view of ( [ 5.5.1 ] ) we have since as , by using the weakly lower semi - continuity of we obtain this implies that .we next use ( [ 5.5.1 ] ) to derive for that by taking and using we can derive that where , whose existence is guaranteed by the monotonicity of .since the above inequality holds for all , by taking we obtain using ( [ 5.5.3 ] ) with replaced by we thus obtain . combining this with ( [ 5.5.2 ] ) we therefore obtain .this together with ( [ 5.5.6.1 ] ) then implies that . finally we prove under the additional condition for .we use ( [ 5.5.3 ] ) with replaced by to obtain by using ( [ 5.5.1 ] ) , for any we can find such that we next consider . according to the definition of we have .since is reflexive and , we have from ( [ eq:28.1june ] ) that .thus we can find and such that where is a constant such that for all .consequently $$\begin{aligned } \left| \,\cdot\, \right| & \le \sum_{j=1}^{l_0 } \left(\|v_j\| \|f'(x^\dag ) ( x_n - x^\dag)\| + \|\beta_j\| \|x_n - x^\dag\|\right)\\ & \le ( 1+\eta ) \sum_{j=1}^{l_0 } \|v_j\| \|f(x_n)-y\| + \frac{\varepsilon}{3}.\end{aligned}$$ since as , we can find such that therefore for all .since is arbitrary , we obtain . by taking in ( [ 5.5.7 ] ) and using , we obtain according to the definition of we must have .a direct application of lemma [ lem existence and uniqueness of xdag ] gives . as a byproduct , now we can use some estimates established in the proof of theorem [ thm1 ] to prove lemma [ lem : a3 ] . _ proof of lemma [ lem : a3 ] ._ we assume that the minimization problem in ( [ eq : method ] ) has two minimizers and .then it follows that with the help of the definition of we can write therefore since as shown in the proof of theorem [ thm1 ] , we may use assumption [ a3 ] and young's inequality to obtain $$\begin{aligned } \cdots & \ge \alpha_n d_{\xi_n } \theta(\hat{x}_n , x_n ) - c_0 \left [ \,\cdot\, \right]^{1-\kappa } \left[\delta_r ( f(\hat{x}_n)-y , f(x_n)-y)\right]^\kappa\\ & \ge \alpha_n d_{\xi_n } \theta(\hat{x}_n , x_n ) - ( 1-\kappa ) \kappa^{\frac{\kappa}{1-\kappa } } c_0^{\frac{1}{1-\kappa } } \|f(x_n)-y\|^{\frac{r-1}{1-\kappa } } d_{\xi_n } \theta(\hat{x}_n , x_n).\end{aligned}$$ recall that in the proof of theorem [ thm1 ] we have established since and , we therefore obtain with .thus we may use the second condition in ( [ 21.11june ] ) to conclude that and hence . in this subsection we show that the method is well - defined , in particular we prove that , when the data contains noise , the discrepancy principle ( [ dp ] ) terminates the iteration in finite steps , i.e. . [ lem stop ] let be reflexive and be uniformly smooth , let satisfy assumption [ a0 ] , and let satisfy assumption [ a1 ] .let and , and let be such that .assume that ( [ 21.1june ] ) holds .then the discrepancy principle terminates the iteration after steps . if , then for there hold if , in addition , for all with some constant and then there holds where denotes any solution of ( [ 1.1 ] ) in and .
by the definition of and the property of the duality mapping , we can obtain , using a similar argument for deriving ( [ eq:19.6june ] ) , that with the help of assumption [ a1 ] ( d ) and the monotonicity ( [ mono ] ) of with respect to , similar to the derivation of ( [ eq:20.1june ] ) we have for that therefore since and for , we thus obtain in view of ( [ eq decrease22 ] ) in lemma [ lem stop ] , we can see that combining this inequality with ( [ eq:19.7june ] ) gives the desired estimate .we will prove some stability results on the method which connect with .these results enable us to use theorem [ thm1 ] to complete the proof of theorem [ th2 ] .[ lem xdelta ] let be reflexive and be uniformly smooth , let satisfy assumption [ a0 ] , and let satisfy assumptions [ a1 ] and [ a2 ] .then for each fixed there hold as .we show this result by induction .it is trivial when since and . in the following we assume that the result is proved for and show that the result holds also for .we will adapt the argument from .let be a sequence of data satisfying with . by the minimizing property of we have by the induction hypothesis , we can see that the right hand side of the above inequality is uniformly bounded with respect to .therefore both and are uniformly bounded with respect to .consequently is bounded in and is bounded in ; here we used the uniform convexity . since both and are reflexive , by taking a subsequence if necessary , we may assume that and as .since is weakly closed , we have and . in view of the weakly lower semi - continuity of the banach space norm we have moreover , by using , the weakly lower semi - continuity of , and the induction hypothesis , we have the inequalities ( [ eq use2 ] ) and ( [ eq use1 ] ) together with the minimizing property of and the induction hypothesis imply according to the definition of and assumption [ a2 ] , we must have .therefore , , and next we will show that let in view of , it suffices to show .assume to the contrary that . by taking a subsequence if necessary , we may assume that it then follows from ( [ eq:18june ] ) that which is a contradiction to ( [ eq use2 ] ) .we therefore obtain ( [ eq lim ] ) . by using the induction hypothesis and , we obtain from ( [ eq lim ] ) that since and since has the kadec property , see lemma [ lem : kadec ] , we obtain that as . finally , from the definition of , the induction hypothesis , the continuity of the map , and the continuity of the duality mapping , it follows that as .the above argument shows that for any sequence converging to , the sequence always has a subsequence , still denoted as , such that , and as .therefore , we obtain ( [ eq 3convergence ] ) with as .the proof is complete . since other parts have been proved in lemma [ lem stop ] , it remains only to show the convergence result ( [ eq : convergence ] ) , where is the limit of which exists by theorem [ thm1 ] .assume first that is a sequence satisfying with such that as for some integer .we may assume for all .from the definition of , we have since lemma [ lem xdelta ] implies , by letting we have .this together with the definition of implies that for all .since theorem [ thm1 ] implies as , we must have .consequently , we have from lemma [ lem xdelta ] that , and as .assume next that is a sequence satisfying with such that as .we first show that let be an arbitrary number . since theorem [ thm1 ] implies as , there exists an integer such that .
on the other hand , since lemma [ lem xdelta ] implies , and as , we can pick an integer large enough such that for all there hold and therefore , it follows from lemma [ lem stop ] that for all .since is arbitrary , we thus obtain ( [ eq:19june ] ) . with the help of , we then obtain in view of we have since , we can conclude from ( [ eq:19june ] ) that . since , we must have as . in view of and ( [ eq:20.2june ] ) , we can obtain which together with the uniform convexity of implies that as .finally we show that as . in view of ( [ eq:19.8june ] ) , it suffices to show that recall that and as which have been established in theorem [ thm1 ] and its proof .thus , for any , we can pick an integer such that then , using ( [ eq:19.9june ] ) in lemma [ lem 19june ] , we can derive by using the definition of bregman distance and ( [ eq:19.10june ] ) we have $$\begin{aligned } d_{\xi_{l_0}^{\delta_i } } \theta ( x_ * , x_{l_0}^{\delta_i } ) & = \left [ \theta(x_ * ) - \theta(x_{l_0 } ) \right ] + \left [ \theta(x_{l_0 } ) -\theta(x_{l_0}^{\delta_i})\right ] - \langle \xi_{l_0 } , x_*-x_{l_0}\rangle \\ & \quad \, -\langle \xi_{l_0 } , x_{l_0 } -x_{l_0}^{\delta_i}\rangle - \langle \xi_{l_0}^{\delta_i}-\xi_{l_0 } , x_*-x_{l_0}^{\delta_i}\rangle\\ & \le 2\epsilon + \left| \theta(x_{l_0 } ) -\theta(x_{l_0}^{\delta_i})\right| + \left|\langle \xi_{l_0 } , x_{l_0 } -x_{l_0}^{\delta_i}\rangle\right| + \left| \langle \xi_{l_0}^{\delta_i}-\xi_{l_0 } , x_*-x_{l_0}^{\delta_i}\rangle \right|.\end{aligned}$$ therefore in view of lemma [ lem xdelta ] and the facts that and as which we have established in the above , we can conclude that there is an integer such that for all there hold and . since is arbitrary , we thus obtain ( [ eq:19.11june ] ) .when denotes the integer determined by the discrepancy principle ( [ dp ] ) , from lemma [ lem stop ] we can see that the bregman distance is decreasing up to .this monotonicity , however , may not hold at . therefore , it seems reasonable to consider the following variant of the discrepancy principle .[ rule4.1 ] let be a given number .if , we define ; otherwise we define i.e. , is the integer such that we point out that the argument for proving theorem [ th2 ] can be used to prove the convergence property of for determined by rule [ rule4.1 ] ; we can even drop the condition on in theorem [ th2 ] . in fact we have the following result . [ th4 ] let be reflexive and be uniformly smooth , satisfy assumption [ a0 ] , and satisfy assumptions [ a1 ] and [ a2 ] .let and , and let be such that . assume further that then , the integer defined by rule [ rule4.1 ] is finite .moreover , there is a solution of ( [ 1.1 ] ) such that as .if , in addition , for all , then .the proof of lemma [ lem stop ] can be used without change to show that and that ( [ eq decrease2 ] ) and ( [ eq decrease22 ] ) hold for . consequently , ( [ eq:19.9june ] ) in lemma [ lem 19june ] becomes in order to prove the convergence result ( [ eq : convergence222 ] ) , as in the proof of theorem [ th2 ] we consider two cases .assume first that is a sequence satisfying with such that as for some integer .we may assume for all .by rule [ rule4.1 ] we always have . by letting , we obtain .this together with the definition of implies that for all .it then follows from theorem [ thm1 ] that .we claim that . to see this , by using the definition of , we have therefore this and the strict convexity of imply that .consequently . a simple application of lemma [ lem xdelta ] then gives the desired conclusion .assume next that is a sequence satisfying with such that as .we can follow the argument for deriving ( [ eq:19june ] ) to show that which in turn implies that by the uniform convexity of .
then we can use ( [ eq:22.1june ] ) and follow the same procedure in the proof of theorem [ th2 ] to obtain as .in this section we present some numerical simulations to test the performance of our method by considering a linear integral equation of the first kind and a nonlinear problem arising from the parameter identification in partial differential equations .we consider the linear integral equation of the form ,\end{aligned}\ ] ] where it is clear that \to \y:=l^2[0,1] ] which is used to reconstruct when the iteration is terminated by the discrepancy principle ( [ dp ] ) . in our numerical simulations , we take and , we divide ] and \times [ 0.2,0.5]$ ; } \\ 0 , & \hbox{elsewhere .} \end{array } \right.\end{aligned}\ ] ] we assume and add noise to produce the noisy data satisfying .we take and .the partial differential equations involved are solved approximately by a finite difference method by dividing into small squares of equal size . and the involved minimization problems are solved by the modified nonlinear cg method in .we take the initial guess and , and terminate the iteration by the discrepancy principle with .figure [ nonlinear2](a ) plots the exact solution .figure [ nonlinear2](b ) shows the result for the method with .figure [ nonlinear2 ] ( c ) and ( d ) report the reconstruction results for the method with for and respectively ; the term is replaced by a smooth one with during computation .the reconstruction results in ( c ) and ( d ) significantly improve the one in ( b ) by efficiently removing the notorious oscillatory effect and indicate that the method is robust with respect to .we remark that , due to the smaller value of , the reconstruction result in ( d ) is slightly better than the one in ( c ) as can be seen from the plots ; the computational time for ( d ) , however , is longer .0.3 cm * acknowledgements * q jin is partly supported by the grant de120101707 of australian research council , and m zhong is partly supported by the national natural science foundation of china ( no.11101093 ) .
|
we consider the nonstationary iterated tikhonov regularization in banach spaces which defines the iterates via minimization problems with uniformly convex penalty term . the penalty term is allowed to be non - smooth to include and total variation ( tv ) like penalty functionals , which are significant in reconstructing special features of solutions such as sparsity and discontinuities in practical applications . we present the detailed convergence analysis and obtain the regularization property when the method is terminated by the discrepancy principle . in particular we establish the strong convergence and the convergence in bregman distance which sharply contrast with the known results that only provide weak convergence for a subsequence of the iterative solutions . some numerical experiments on linear integral equations of first kind and parameter identification in differential equations are reported .
|
this paper is concerned with the problem of network analysis for linear quantum optical networks . in recent years, there has been considerable interest in the modeling and feedback control of linear quantum systems ; e.g. , see .such linear quantum systems commonly arise in the area of quantum optics ; e.g. , see .some recent papers have been concerned with the problem of realizing given quantum dynamics using physical components such as optical cavities , squeezers , beam - splitters , optical amplifiers , and phase shifters ; see .this paper is concerned with the problem of constructing a dynamic model , in terms of quantum stochastic differential equations ( qsdes ) ( e.g. , see ) , for a general linear quantum optical network consisting of an optical interconnection between optical cavities , squeezers and beam - splitters .this problem can be considered a quantum optical generalization of the classical electrical circuit analysis problem in which a state space model of the circuit is desired ; e.g. , see .a systematic approach to the modelling of large quantum optical networks is important as the construction of these networks is becoming feasible using technologies such as quantum optical integrated circuits ; e.g. , see .these qsde models can then be used in the design of a suitable quantum feedback controller for the network ; e.g. , see .this paper also describes how to construct alternative models for linear quantum optical networks ; e.g. , see .these models can also be used for controller design or system simulation ; e.g. , see . in this section , we describe the general class of quantum systems under consideration ; see also .we consider a collection of independent quantum harmonic oscillators .corresponding to this collection of harmonic oscillators is a vector of _ annihilation operators _ $a = \left [ a_1 \ \cdots \ a_n \right]^t$ and the corresponding vector $a^\#$ . here $^\#$ denotes the operation of taking the adjoint of each entry of a matrix or vector of operators . the system is driven by a vector of input field annihilation operators $u$ , and we also define a corresponding vector of output field operators $y$ ; e.g. , see .the corresponding quantum white noise processes are defined so that and ; e.g. , see . in order to describe the dynamics of a quantum linear system , we first specify the _ hamiltonian operator _ for the quantum system which is a hermitian operator on the underlying hilbert space of the form $$\mathcal{h } = \frac{1}{2 } \left [ \begin{array}{cc } a^\dagger & a^t \end{array } \right ] m \left[\begin{array}{c } a \\ a^\#\end{array}\right]$$ where is a hermitian matrix of the form $$m = \left [ \begin{array}{cc } m_1 & m_2 \\ m_2^\ # & m_1^\ # \end{array } \right ] .$$ also , we specify the _ coupling operator vector _ for the quantum system to be a vector of operators of the form $$l = \left [ \begin{array}{cc } n_1 & n_2 \end{array } \right ] \left[\begin{array}{c } a \\ a^\#\end{array}\right]$$ where and .we can write $$\left [ \begin{array}{c } l \\ l^\ # \end{array } \right ] = n \left[\begin{array}{c } a \\ a^\#\end{array}\right ] , \quad n = \left [ \begin{array}{cc } n_1 & n_2 \\ n_2^\ # & n_1^\ # \end{array } \right ] .$$ in addition , we have an orthogonal _ scattering matrix _ which describes the interactions between the quantum fields .these quantities then lead to the following qsdes which describe the dynamics of the quantum system under consideration : $$\begin{aligned } \left[\begin{array}{c } \dot a \\ \dot a^\ # \end{array}\right ] & = & f\left[\begin{array}{c } a\\ a^\#\end{array}\right ] + g \left[\begin{array}{c } u \\ u^{\ # } \end{array}\right ] ; \\ \left[\begin{array}{c } y \\ y^\ # \end{array}\right ] & = & h \left[\begin{array}{c } a\\ a^\#\end{array}\right ] + k \left[\begin{array}{c } u \\ u^{\ # } \end{array}\right ] , \end{aligned}$$ where the matrices , , and are determined by the parameters , and . * annihilation operator quantum systems * an important special case of the above class of linear quantum systems occurs when the qsdes ( [ qsde3 ] ) can be described purely in terms of the vector of annihilation operators ; e.g. , see .
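before specializing to the annihilation - operator case , the `` doubled - up '' bookkeeping above is easy to mechanize ; the python sketch below assembles and from blocks with the stated symmetries and checks that the resulting is hermitian , as required of the hamiltonian . block sizes and random entries are purely illustrative .

```python
import numpy as np

n, m = 2, 1
rng = np.random.default_rng(3)
rand = lambda r, c: rng.standard_normal((r, c)) + 1j * rng.standard_normal((r, c))

M1 = rand(n, n); M1 = (M1 + M1.conj().T) / 2     # M1 = M1^dagger
M2 = rand(n, n); M2 = (M2 + M2.T) / 2            # M2 = M2^T
N1, N2 = rand(m, n), rand(m, n)

M = np.block([[M1, M2], [M2.conj(), M1.conj()]])   # doubled-up Hamiltonian matrix
N = np.block([[N1, N2], [N2.conj(), N1.conj()]])   # doubled-up coupling matrix
print("M hermitian:", np.allclose(M, M.conj().T))  # -> True
```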
in this case , we consider hamiltonian operators of the form and coupling operator vectors of the form where is a hermitian matrix and is a complex matrix . also , we consider an orthogonal scattering matrix . in this case , we replace the commutation relations ( [ ccr2 ] ) by the commutation relations $$\left [ a , a^\dagger \right ] = \theta$$ where is a positive - definite commutation matrix .then , the corresponding qsdes are given by where linear quantum optical networks consist of optical interconnections between the following passive optical components : optical cavities , beamsplitters , optical sources ( lasers or vacuum sources ) , and optical sinks ( detectors or unused optical outputs ) .we now describe each of these optical components in more detail .* optical cavities * + optical cavities consist of a number of partially reflecting mirrors arranged in a suitable geometric configuration and coupled to a coherent light source such as a laser ; e.g. , see . from the optical network point of view , we can categorize optical cavities according to the number of partially reflecting mirrors they contain .schematic diagrams for some typical optical cavities are shown in figure [ f1 ] .note that the single mirror cavity actually contains two mirrors but only one of the mirrors is partially reflecting .similarly , the two mirror butterfly cavity actually contains four mirrors but only two of the mirrors are partially reflecting . in the sequel , we will ignore the fully reflecting mirrors in any cavity and only consider the partially reflecting mirrors .+ a cavity with mirrors can be described by a qsde of the form ( [ qsde4 ] ) as follows : where and is an annihilation operator associated with the cavity mode .the quantities , are the _ coupling coefficients _ which correspond to the partially reflecting mirrors which make up the cavity . also , corresponds to the _ detuning _ between the cavity and the coherent light source .* beamsplitters * + a beamsplitter consists of a single partially reflective mirror as illustrated in figure [ f2 ] .a beamsplitter is governed by the input - output relations $$\left[\begin{array}{c } y_1 \\ y_2 \end{array}\right ] = \left[\begin{array}{cc}\xi & -\sqrt{1-\xi^2 } \\ -\sqrt{1-\xi^2 } & - \xi\end{array}\right ] \left[\begin{array}{c}u_1\\u_2\end{array}\right]$$ where is a parameter defining the beamsplitter ; e.g. , see . in the sequel , it will be convenient to consider a beamsplitter as arising from a singular perturbation approximation applied to a two mirror cavity of the form shown in figure [ f1](b ) ; see .that is , we consider the following cavity equations of the form ( [ single_cavity ] ) : we now let where is a given constant .then , ( [ 2mirror1 ] ) becomes letting , we obtain and hence substituting this into ( [ 2mirror2 ] ) gives letting , , it follows that $$\left[\begin{array}{c } y_1 \\ y_2 \end{array}\right ] = \left[\begin{array}{cc } \frac{\bar \kappa-\tilde \kappa}{\tilde \kappa+ \bar \kappa } & - \frac{2\sqrt { \tilde \kappa \bar \kappa}}{\tilde \kappa+ \bar \kappa } \\ -\frac{2\sqrt{\tilde \kappa\bar \kappa}}{\tilde \kappa+\bar \kappa } & \frac{\tilde \kappa-\bar \kappa}{\tilde \kappa+ \bar \kappa}\end{array}\right ] \left[\begin{array}{c}u_1\\u_2\end{array}\right].$$ this equation is the same as ( [ beam1 ] ) when we let $$\xi = \frac{\bar \kappa - \tilde \kappa}{\tilde \kappa + \bar \kappa } .$$ * sources and sinks * + optical sources may be coherent sources such as a laser or a vacuum source which corresponds to no optical connection being made to a mirror input ; e.g.
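the singular - perturbation argument can be checked numerically : the zero - frequency input - output map of the two - mirror cavity ( zero detuning , coupling rates and ) should coincide with the beamsplitter matrix for the value of given above . the state - space model below is a sketch written from the cavity equations , with assumed coupling values .

```python
import numpy as np

def cavity_dc_map(k1, k2):
    # da/dt = -(k1+k2)/2 a - sqrt(k1) u1 - sqrt(k2) u2;  y_i = sqrt(k_i) a + u_i
    A = -(k1 + k2) / 2.0                             # scalar cavity-mode dynamics
    B = -np.array([[np.sqrt(k1), np.sqrt(k2)]])      # 1x2 input matrix
    C = np.array([[np.sqrt(k1)], [np.sqrt(k2)]])     # 2x1 output matrix
    return np.eye(2) + C @ B / (-A)                  # D + C (0 - A)^{-1} B at s = 0

k1, k2 = 1.0, 3.0
xi = (k2 - k1) / (k1 + k2)
beamsplitter = np.array([[xi, -np.sqrt(1 - xi ** 2)],
                         [-np.sqrt(1 - xi ** 2), -xi]])
print(np.allclose(cavity_dc_map(k1, k2), beamsplitter))   # -> True
```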
, see .we will represent both sources by the same schematic diagram as shown in figure [ f2](a ) .also , optical sinks may be detectors such as a homodyne detector ( e.g. , see ) or they may correspond to an unused optical output , which corresponds to no optical connection being made to a mirror output .we will represent both sinks by the same schematic diagram as shown in figure [ f2](b ) .note that for the networks being considered , the number of sources will always be equal to the number of sinks . *the mirror digraph * + we will consider the topology of an optical network to be represented by a directed graph referred to as the _mirror digraph_. to obtain the mirror digraph , each cavity in the network is decomposed into the mirrors that make up the cavity with an -mirror cavity being decomposed into mirrors .similarly , each beamsplitter is decomposed into two mirrors .this process is illustrated in figure [ f4 ] .+ then a directed graph showing the interconnections of these mirrors , along with the optical sources and sinks is constructed . in this digraph, the nodes correspond to the mirrors or the optical sources and sinks . also , the links in this graph correspond to the optical connections between the components .this process is illustrated in figures [ f5 ] and [ f6 ] in which figure [ f5 ] shows a passive optical network and figure [ f6 ] shows the corresponding mirror digraph . for a quantum optical network with sources , cavities including cavity mirrors , beamsplitters , and sinks , we will employ the following numbering convention .the sources will be numbered from 1 to , the cavity mirrors will be numbered from to , the beamsplitter mirrors will be numbered from to , and the sinks will be numbered from to . associated with the mirror digraph is the corresponding _ adjacency matrix _ defined so that is there is a link going from node to node and otherwise .then , the adjacency matrix can be partitioned as follows corresponding to the different types of nodes : \quad \left . \begin{array}{l } \mbox{sources , } \\\mbox{cavities , } \\\mbox{beamsplitters , } \\ \mbox{sinks . }\end{array}\right.\ ] ] note that it follows from these definitions that the matrices , , , , , and are all zero .the adjacency matrix corresponding to the passive optical network shown in figure [ f5 ] and the corresponding mirror digraph shown in figure [ f6 ] is .\ ] ] hence in this example , , , , ;~~ a_{22 } = \left[\begin{array}{cccccc } 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right];\\ a_{23 } & = & \left[\begin{array}{cc } 0 & 0\\ 1 & 0\\ 0 & 1\\ 0 & 0\\ 0 & 0\\ 0 & 0 \end{array}\right];~~ a_{24 } = \left[\begin{array}{cccc } 1 & 0 & 0&0\\ 0 & 0 & 0&0\\ 0 & 0 & 0&0\\ 0 & 1 & 0&0\\ 0 & 0 & 0&0\\ 0 & 0 & 0&1 \end{array}\right];\\ a_{32 } & = & \left[\begin{array}{cccccc } 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right];~~ a_{34 } = \left[\begin{array}{cccc } 0 & 0 & 0&0\\ 0 & 0 & 1&0\\ \end{array}\right].\end{aligned}\ ] ] we will label the field input for the node of the mirror digraph as and the corresponding field output as . in the case that the node of the mirror digraph corresponds to a source , there is no actual field input but we will simply write . similarly ,if the node of the mirror digraph corresponds to a sink , there is no actual field output but we will simply write . 
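Before stacking the field operators, it is worth noting that the block partition of the adjacency matrix is easy to mechanize. The following minimal sketch uses a hypothetical toy network (one source, a two-mirror cavity, one beamsplitter, one sink); the link pattern is invented purely to exercise the slicing, and the asserts verify the structural zero blocks noted above (no links into sources, none out of sinks).

```python
import numpy as np

def partition_adjacency(A, m, n_m, k):
    """Slice the mirror-digraph adjacency matrix into the blocks A_ij:
    1 = sources, 2 = cavity mirrors, 3 = beamsplitter mirrors, 4 = sinks,
    ordered as in the numbering convention above."""
    e = np.cumsum([0, m, n_m, 2 * k, m])
    return {(i + 1, j + 1): A[e[i]:e[i + 1], e[j]:e[j + 1]]
            for i in range(4) for j in range(4)}

# Hypothetical toy network: 1 source, a 2-mirror cavity (nodes 2, 3),
# 1 beamsplitter (nodes 4, 5), 1 sink; node order fixed accordingly.
A = np.zeros((6, 6), dtype=int)
A[0, 1] = 1   # source -> cavity mirror 1
A[1, 3] = 1   # cavity mirror 1 -> beamsplitter port 1
A[3, 2] = 1   # beamsplitter port 1 -> cavity mirror 2
A[2, 4] = 1   # cavity mirror 2 -> beamsplitter port 2
A[4, 5] = 1   # beamsplitter port 2 -> sink

blocks = partition_adjacency(A, m=1, n_m=2, k=1)
# No links into sources and none out of sinks, so these blocks vanish:
for ij in [(1, 1), (2, 1), (3, 1), (4, 1), (4, 2), (4, 3), (4, 4)]:
    assert not blocks[ij].any()
```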
we then write , \quad \tilde y = \left[\begin{array}{c } y_1 \\ \vdots \\y_{2m+n_m+2k } \end{array}\right].\ ] ] we now partition the vectors and according to the different types of nodes as follows : ; \quad \tilde y = \left[\begin{array}{c } \tilde y_1 \\\tilde y_3 \\\tilde y_4 \end{array}\right ] ; \quad \left . \begin{array}{l } \mbox{sources , } \\\mbox{cavities , } \\\mbox{beamsplitters , } \\ \mbox{sinks . }\end{array}\right.\ ] ] then using ( [ adj_part ] ) and the definition of the adjacency matrix , we write = \left[\begin{array}{cccc } i & 0 & 0 & 0\\ a_{12}^t & a_{22}^t & a_{32}^t & 0\\ a_{13}^t & a_{23}^t & a_{33}^t & 0\\ a_{14}^t & a_{24}^t & a_{34}^t & 0\\ \end{array}\right ] \left[\begin{array}{c } \tilde y_1 \\ \tilde y_2 \\\tilde y_3 \\\tilde y_4 \end{array}\right].\ ] ] note that in writing these equations , we have ignored any phase shift which results from the light travelling from the output of node to the input of node .we could allow for this phase shift by replacing the adjacency matrix ( [ adj_part ] ) by a weighted adjacency matrix in which any non - zero element is given by where is the phase shift in the light travelling from the output of node to the input of node .we will number the cavities from 1 to .then , the linear quantum optical network is also specified by a corresponding _ cavity matrix _ defined so that if the mirror corresponding to the node in the mirror graph forms a part of cavity . here , is the coupling coefficient of the mirror corresponding to node .it follows from this definition that the first and last columns of the matrix will be zero since the corresponding nodes in the mirror graph do not correspond to mirrors in a cavity .then , we can partition the matrix as follows corresponding to the different types of nodes : .\end{aligned}\ ] ] also , it follows from this definition that we can write \ ] ] where each quantity is the sum of coupling coefficients of the mirrors forming cavity defined as in ( [ gamma ] ) .in addition , we will define a diagonal _ detuning matrix _ defined so that , the detuning of the cavity . the cavity matrix and the detuning matrix for the passive optical network shown in figure [ f5 ] will be of the form ;\\ d & = & \left[\begin{array}{ccc } \delta_1 & 0 & 0 \\ 0 & \delta_2 & 0 \\ 0 & 0 & \delta_3 \end{array}\right].\end{aligned}\ ] ] hence , .\ ] ]we will number the beamsplitters from 1 to .then , the linear quantum optical network is also specified by a corresponding _ beamsplitter matrix _ defined so that if the mirror corresponding to the node in the mirror graph forms a part of beamsplitter . here , is the coupling coefficient of the mirror corresponding to node .in addition , we assume that each beamsplitter , which is represented by two mirrors in the mirror graph , is such that one mirror has a number and the other mirror has a number .it follows from this definition that the first and last columns of the matrix will be zero since the corresponding nodes in the mirror graph do not correspond to mirrors in a beamsplitter . 
also , we can partition the matrix as follows corresponding to the different types of nodes : .\nonumber \\\end{aligned}\ ] ] hence , corresponding to each beamsplitter , there one non - zero entry in each of the square matrices and .for example , a network with three beamsplitters with parameters , , , , , respectively would have matrices ; ~~ \bar b = \left[\begin{array}{ccc}\sqrt{\bar \kappa_1 } & 0 & 0 \\ 0 & \sqrt{\bar \kappa_2 } & 0 \\ 0 & 0 & \sqrt{\bar \kappa_3 } \end{array}\right].\ ] ] the coupling coefficients in the beamsplitter matrix form the parameters in the corresponding beamsplitter equations of the form ( [ beam1 ] ) according to the formula ( [ beam3 ] ) ; i.e. , where the mirrors corresponding to the nodes and in the mirror digraph make up the beamsplitter .also , it follows from the definition of that we can write .\ ] ] note that there is some redundancy in the choice of the parameters and for a given beamsplitter since its behaviour is defined by a single parameter .the beamsplitter matrix for the passive optical network shown in figure [ f5 ] will be of the form .\ ] ] hence , * writing the qsdes for a passive quantum optical network * + together the matrices , , , completely specify the a given passive quantum optical network .we will now derive qsdes of the form ( [ qsde3 ] ) in terms of these matrices to describe a given network . to do this ,we first extract all of the sources , sinks , cavities and beamsplitters from the network in a similar fashion to the reactance extraction process which is carried out in circuit theory analysis ; e.g. , see .this is illustrated in figure [ f7 ] . in this picture ,an mirror cavity is regarded as an port network with inputs and outputs using a scattering framework ; e.g. , see .also , a beamsplitter is regarded as a two port network . 
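A short sketch may help fix the construction of the parameter matrices introduced above. The network data are hypothetical. The first check uses the fact that the cavity matrix satisfies C C^T = diag(gamma_i), with gamma_i the sum of the coupling coefficients of cavity i, because each mirror belongs to exactly one cavity. The second check applies the formula for B-hat given in the beamsplitter equations below; the minus sign on its first term is our reconstruction (it appears to have been lost in the displayed formula, and is needed to reproduce the quoted 2x2 beamsplitter matrix).

```python
import numpy as np

# Hypothetical data: 2 cavities (digraph nodes 2,3 and 4,5) among
# 8 nodes (1 source, 4 cavity mirrors, 2 beamsplitter mirrors, 1 sink).
kappa = {2: 0.8, 3: 1.2, 4: 0.5, 5: 0.9}     # coupling coefficients
cavity_of = {2: 0, 3: 0, 4: 1, 5: 1}         # node -> cavity index
C = np.zeros((2, 8))
for node, cav in cavity_of.items():
    C[cav, node - 1] = np.sqrt(kappa[node])  # first/last columns stay zero
D = np.diag([0.1, -0.3])                     # detuning matrix

# C C^T is diagonal, with gamma_i the summed couplings of cavity i:
print(np.allclose(C @ C.T, np.diag([2.0, 1.4])))   # True

# One beamsplitter with illustrative coupling coefficients:
kt, kb = 0.7, 1.3                            # kappa_tilde, kappa_bar
Bt, Bb = np.array([[np.sqrt(kt)]]), np.array([[np.sqrt(kb)]])
TB, MB = np.hstack([Bt, Bb]), np.hstack([-Bb, Bt])
inv = np.linalg.inv(Bt @ Bt.T + Bb @ Bb.T)
Bhat = -TB.T @ inv @ TB + MB.T @ inv @ MB    # minus sign: our reconstruction

xi = (kb - kt) / (kt + kb)
expected = np.array([[xi, -np.sqrt(1 - xi**2)],
                     [-np.sqrt(1 - xi**2), -xi]])
print(np.allclose(Bhat, expected))           # True: the quoted 2x2 matrix
print(np.allclose(Bhat @ Bhat.T, np.eye(2))) # True: orthogonal
```

The last check also illustrates the redundancy noted in the text: only the single parameter xi enters the beamsplitter behaviour.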
* cavity equations * we now consider the qsdes ( [ qsde3 ] ) corresponding to the cavity .letting be the annihilation operator corresponding to the cavity , it follows from ( [ single_cavity ] ) and the definitions of the matrices and that we can write for such that .then , we define the vector of system variables \ ] ] and use ( [ gamma_matrix ] ) to write all of the equations ( [ cavity_i ] ) in matrix form as follows : * beamsplitter equations * we now consider the relationship between the inputs to the beamsplitters and the outputs of the beamsplitters .we first consider a single beamsplitter with parameters and .that is , we let and .then , using ( [ bbt_matrix ] ) we calculate ^t \left ( \tilde b\tilde b^t+\bar b\bar b^t\right)^{-1 } \left[\begin{array}{cc } \tilde b & \bar b \end{array}\right ] \\ & & + \left[\begin{array}{cc } -\bar b & \tilde b \end{array}\right]^t \left ( \tilde b\tilde b^t+\bar b\bar b^t\right)^{-1 } \left[\begin{array}{cc } -\bar b & \tilde b \end{array}\right]\\ & = & \left[\begin{array}{cc } \frac{\bar \kappa-\tilde \kappa}{\tilde \kappa+ \bar \kappa } & - \frac{2\sqrt { \tilde \kappa \bar \kappa}}{\tilde \kappa+ \bar \kappa } \\ -\frac{2\sqrt{\tilde \kappa\bar \kappa}}{\tilde \kappa+ \bar \kappa } & \frac{\tilde \kappa-\bar \kappa}{\tilde \kappa+ \bar \kappa}\end{array}\right ] \end{aligned}\ ] ] which is the same as the matrix in ( [ beam2a ] ) .we now extend this formula to the case of beamsplitters and obtain where ^t \left ( \tilde b\tilde b^t+\bar b\bar b^t\right)^{-1 } \left[\begin{array}{cc } \tilde b & \bar b \end{array}\right ] \\ & & + \left[\begin{array}{cc } -\bar b & \tilde b \end{array}\right]^t \left ( \tilde b\tilde b^t+\bar b\bar b^t\right)^{-1 } \left[\begin{array}{cc } -\bar b & \tilde b \end{array}\right].\end{aligned}\ ] ] for the passive optical network shown in figure [ f5 ] , we calculate .\ ] ] we now combine the equations ( [ adjacent ] ) , ( [ cavity_equations ] ) , ( [ beam4 ] ) to obtain a set of qsdes of the form ( [ qsde4 ] ) which describes the complete network . in order to do this, we require that the network satisfies the following assumption : [ a1 ] the matrix $ ] is nonsingular . 
this assumption will be satisfied if the network does not contain any algebraic loops .if this assumption is not satisfied , the network will need to be modelled by a set of stochastic algebraic - differential equations .it follows from ( [ adjacent ] ) that we can write combining this with ( [ beam4 ] ) and the second equation in ( [ cavity_equations ] ) , we obtain : = \left[\begin{array}{c}a_{12}^t\\ a_{13}^t\end{array}\right]\tilde u_1 + \left[\begin{array}{c}a_{22}^t\\ a_{23}^t\end{array}\right]\tilde c^t a + \left[\begin{array}{cc}a_{22}^t & a_{32}^t \hat b\\ a_{23}^t & a_{33}^t\hat b \end{array}\right]\left[\begin{array}{cc}\tildeu_2\\ \tilde u_3\end{array}\right].\ ] ] now using assumption [ a1 ] , it follows that we can write & = & \left(i - \left[\begin{array}{cc}a_{22}^t & a_{32}^t \hat b\\ a_{23}^t & a_{33}^t\hat b \end{array}\right]\right)^{-1 } \left[\begin{array}{c}a_{12}^t\\ a_{13}^t\end{array}\right]\tilde u_1\\ & & + \left(i - \left[\begin{array}{cc}a_{22}^t & a_{32}^t \hat b\\ a_{23}^t & a_{33}^t\hat b \end{array}\right]\right)^{-1 } \left[\begin{array}{c}a_{22}^t\\ a_{23}^t\end{array}\right]\tilde c^t a .\end{aligned}\ ] ] substituting this into ( [ cavity_equations ] ) and using the last equation in ( [ adjacent ] ) , we obtain the following qsdes of the form ( [ qsde4 ] ) which describe the network : \left(i - \left[\begin{array}{cc}a_{22}^t & a_{32}^t \hat b\\ a_{23}^t & a_{33}^t\hat b \end{array}\right]\right)^{-1 } \left[\begin{array}{c}a_{22}^t\\ a_{23}^t\end{array}\right]\tilde c^t a\\ & & - \left[\begin{array}{cc}\tilde c & 0 \end{array}\right ] \left(i - \left[\begin{array}{cc}a_{22}^t & a_{32}^t \hat b\\ a_{23}^t & a_{33}^t\hat b \end{array}\right]\right)^{-1 } \left[\begin{array}{c}a_{12}^t\\ a_{13}^t\end{array}\right]\tilde u_1;\\ \tilde y_4 & = & a_{24}^t\tilde c^ta \\ & & + \left[\begin{array}{cc}a_{24}^t & a_{34}^t\hat b\end{array}\right]\left(i - \left[\begin{array}{cc}a_{22}^t & a_{32}^t \hat b\\ a_{23}^t & a_{33}^t\hat b \end{array}\right]\right)^{-1 } \left[\begin{array}{c}a_{22}^t\\ a_{23}^t\end{array}\right]\tilde c^t a\\ & & + \left[\begin{array}{cc}a_{24}^t & a_{34}^t\hat b\end{array}\right]\left(i - \left[\begin{array}{cc}a_{22}^t & a_{32}^t \hat b\\ a_{23}^t & a_{33}^t\hat b \end{array}\right]\right)^{-1 } \left[\begin{array}{c}a_{12}^t\\ a_{13}^t\end{array}\right]\tilde u_1\\ & & + a_{14}^t\tilde u_1.\end{aligned}\ ] ] from this , we can also use the formulas ( 4 ) and ( 5 ) in with to calculate the corresponding matrices , , in the description of this system .this yields \left(i - \left[\begin{array}{cc}a_{22}^t & a_{32}^t \hat b\\ a_{23}^t & a_{33}^t\hat b \end{array}\right]\right)^{-1 } \left[\begin{array}{c}a_{12}^t\\ a_{13}^t\end{array}\right]+ a_{14}^t;\\ n & = & \left[\begin{array}{l}a_{24}^t+\\ \left[\begin{array}{cc}a_{24}^t & a_{34}^t\hat b\end{array}\right]\left(i - \left[\begin{array}{cc}a_{22}^t & a_{32}^t \hat b\\ a_{23}^t & a_{33}^t\hat b \end{array}\right]\right)^{-1 } \left[\begin{array}{c}a_{22}^t\\ a_{23}^t\end{array}\right]\end{array}\right]\tilde c^t;\\ m & = & -d \\ & & -\frac{\imath}{2 } \tilde c \left[\begin{array}{l}\left[\begin{array}{cc}i & 0\end{array}\right ] \left(i - \left[\begin{array}{cc}a_{22}^t & a_{32}^t \hat b\\ a_{23}^t & a_{33}^t\hat b \end{array}\right]\right)^{-1 } \left[\begin{array}{c}a_{22}^t\\ a_{23}^t\end{array}\right ] -\\\left[\begin{array}{cc}a_{22 } & a_{23}\end{array}\right ] \left(i - \left[\begin{array}{cc}a_{22 } & a_{23 } \\\hat b^ta_{32 } & \hat b^t a_{33 } \end{array}\right]\right)^{-1 } 
\left[\begin{array}{c}i\\ 0\end{array}\right ] \end{array } \right]\tilde c^t . \end{aligned}\ ] ] for the passive optical network shown in figure [ f5 ] we calculate the matrices , , , in the corresponding qsdes of the form ( [ qsde4 ] ) to be ;\\ g&= & \left[\begin{array}{rrrr } -\sqrt{\kappa_6}&0 & -\sqrt{\kappa_5}&0\\ 0&-\sqrt{\kappa_7}&0&0\\ \frac{\sqrt{\kappa_8}\left(\kappa_{11 } - \kappa_{12}\right)}{\kappa_{11 } + \kappa_{12 } } & \frac{2\sqrt{\kappa_8}\sqrt{\kappa_{11}\kappa_{12}}}{\kappa_{11 } + \kappa_{12 } } & -\sqrt{\kappa_9 } & -\sqrt{\kappa_{10 } } \end{array}\right];\\ h&=&\left[\begin{array}{rrr } \sqrt{\kappa_5}&0&\sqrt{\kappa_9}\\ -\frac{\sqrt{\kappa_6}\left(\kappa_{11 } - \kappa_{12}\right)}{\kappa_{11 } + \kappa_{12 } } & -\frac{2\sqrt{\kappa_7}\sqrt{\kappa_{11}\kappa_{12}}}{\kappa_{11 } + \kappa_{12 } } & \sqrt{\kappa_8}\\ -\frac{2\sqrt{\kappa_6}\sqrt{\kappa_{11}\kappa_{12}}}{\kappa_{11 } + \kappa_{12 } } & \frac{\sqrt{\kappa_7}\left(\kappa_{11 } - \kappa_{12}\right)}{\kappa_{11 } + \kappa_{12}}&0\\ 0&0 & \sqrt{\kappa_{10 } } \end{array}\right];\\ k&= & \left[\begin{array}{rrrr } 0&0 & 1 & 0\\ -\frac{\kappa_{11 } - \kappa_{12}}{\kappa_{11 } + \kappa_{12 } } & -\frac{2\sqrt{\kappa_{11}\kappa_{12}}}{\kappa_{11 } + \kappa_{12 } } & 0 & 0\\ -\frac{2\sqrt{\kappa_{11}\kappa_{12}}}{\kappa_{11 } + \kappa_{12 } } & \frac{\kappa_{11 } - \kappa_{12}}{\kappa_{11 } + \kappa_{12 } } & 0 & 0\\ 0&0 & 0 & 1 \end{array}\right].\end{aligned}\ ] ] * qsdes for cavity only networks * + in this special case , we replace assumption [ a1 ] by the following assumption . [ a2 ] the matrix is nonsingular .this assumption will be satisfied provided that none of the cavity mirrors have their output directly connected to their input . in this case, it follows from ( [ adjacent ] ) that we can write combining this with the second equation in ( [ cavity_equations ] ) , we obtain : now using assumption [ a2 ] , it follows that we can write substituting this into ( [ cavity_equations ] ) and using the last equation in ( [ adjacent ] ) , we obtain the following qsdes of the form ( [ qsde4 ] ) which describes the network : from this , we can also use the formulas ( 4 ) and ( 5 ) in with to calculate the corresponding matrices , , in the description of this system .this yields * equations for beamsplitter only networks * + in this special case , we replace assumption [ a1 ] by the following assumption . [ a3 ] the matrix is nonsingular. this assumption will be satisfied if the beamsplitter network does not contain any algebraic loops .it follows from ( [ adjacent ] ) that we can write combining this with ( [ beam4 ] ) we obtain : now using assumption [ a3 ] , it follows that we can write substituting this into the last equation in ( [ adjacent ] ) , we obtain the following equation which describes the network : from this , we can also calculate the corresponding parameters in the description of this system to be consider active linear quantum networks , we extend the analysis given in the previous section to allow for dynamic squeezers ( degenerate parametric amplifiers ) instead of passive cavities ; e.g. , see . indeed , by including a nonlinear optical element inside an mirror cavity , an mirror optical squeezer can be obtained . by using suitable linearizations and approximations ,such an optical squeezer can be described by a complex quantum stochastic differential equation as follows : where for , , , , and is a single annihilation operator associated with the cavity mode ; e.g. 
, see .also , the quantity corresponds to the level of squeezing achieved in the squeezer .note that in the case that , these equations reduce to those of a passive mirror cavity and hence , without loss of generality , we can assume that all cavities in the network are of this form .hence , we can proceed in exactly the same fashion as in the previous section with the only addition being that we define an diagonal matrix defined so that , the squeezing parameter of the cavity. then , the equations corresponding to equations ( [ cavity_equations ] ) , which describe the dynamics of all of the squeezers in the network , become assuming that assumption [ a1 ] is satisfied , these equations are then combined with equations ( [ adjacent ] ) and ( [ beam4 ] ) to obtain the following qsdes of the form ( [ qsde3 ] ) which describe the complete active linear quantum optical network : \left(i - \left[\begin{array}{cc}a_{22}^t & a_{32}^t \hat b\\ a_{23}^t & a_{33}^t\hat b \end{array}\right]\right)^{-1 } \left[\begin{array}{c}a_{22}^t\\ a_{23}^t\end{array}\right]\tilde c^t a\\ & & -\mathcal{x}a^\#\\ & & - \left[\begin{array}{cc}\tilde c & 0 \end{array}\right ] \left(i - \left[\begin{array}{cc}a_{22}^t & a_{32}^t \hat b\\ a_{23}^t & a_{33}^t\hat b \end{array}\right]\right)^{-1 } \left[\begin{array}{c}a_{12}^t\\ a_{13}^t\end{array}\right]\tilde u_1;\\ \dot a^\ # & = & -\mathcal{x}^\#a+\left(-\frac{1}{2}\tilde c\tilde c^t - \imath d\right)a^\ # \\ & & - \left[\begin{array}{cc}\tilde c & 0 \end{array}\right]\left(i - \left[\begin{array}{cc}a_{22}^t & a_{32}^t \hat b\\ a_{23}^t & a_{33}^t\hat b \end{array}\right]\right)^{-1 } \left[\begin{array}{c}a_{22}^t\\ a_{23}^t\end{array}\right]\tilde c^t a^\#\\ & & - \left[\begin{array}{cc}\tilde c & 0 \end{array}\right ] \left(i - \left[\begin{array}{cc}a_{22}^t & a_{32}^t \hat b\\ a_{23}^t & a_{33}^t\hat b \end{array}\right]\right)^{-1 } \left[\begin{array}{c}a_{12}^t\\ a_{13}^t\end{array}\right]\tilde u_1^\#;\\ \tilde y_4 & = & a_{24}^t\tilde c^ta \\ & & + \left[\begin{array}{cc}a_{24}^t & a_{34}^t\hat b\end{array}\right]\left(i - \left[\begin{array}{cc}a_{22}^t & a_{32}^t \hat b\\ a_{23}^t & a_{33}^t\hat b \end{array}\right]\right)^{-1 } \left[\begin{array}{c}a_{22}^t\\ a_{23}^t\end{array}\right]\tilde c^t a\\ & & + \left[\begin{array}{cc}a_{24}^t & a_{34}^t\hat b\end{array}\right]\left(i - \left[\begin{array}{cc}a_{22}^t & a_{32}^t \hat b\\ a_{23}^t & a_{33}^t\hat b \end{array}\right]\right)^{-1 } \left[\begin{array}{c}a_{12}^t\\ a_{13}^t\end{array}\right]\tilde u_1\\ & & + a_{14}^t\tilde u_1;\\ \tilde y_4^\ # & = & a_{24}^t\tilde c^ta^\ # \\ & & + \left[\begin{array}{cc}a_{24}^t & a_{34}^t\hat b\end{array}\right]\left(i - \left[\begin{array}{cc}a_{22}^t & a_{32}^t \hat b\\ a_{23}^t & a_{33}^t\hat b \end{array}\right]\right)^{-1 } \left[\begin{array}{c}a_{22}^t\\ a_{23}^t\end{array}\right]\tilde c^t a^\#\\ & & + \left[\begin{array}{cc}a_{24}^t & a_{34}^t\hat b\end{array}\right]\left(i - \left[\begin{array}{cc}a_{22}^t & a_{32}^t \hat b\\ a_{23}^t & a_{33}^t\hat b \end{array}\right]\right)^{-1 } \left[\begin{array}{c}a_{12}^t\\ a_{13}^t\end{array}\right]\tilde u_1^\#\\ & & + a_{14}^t\tilde u_1^\#.\end{aligned}\ ] ] also , the corresponding matrices in the description of this system can be constructed using the formulas given in the proof of theorem 1 in .
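The passive-network assembly implied by the above can be collected into a single routine. The block formulas for F, G, H and K below are transcribed from the (partly garbled) displays, so the signs should be treated as our reading rather than as definitive; Assumption [a1] is required for the matrix inverse, and C here denotes the cavity-mirror block of the cavity matrix (one row per cavity, one column per cavity mirror).

```python
import numpy as np

def assemble_passive_qsde(A, m, n_m, k, C, D, Bhat):
    """Assemble  da = F a dt + G u1 dt,  y4 = H a + K u1  for a passive
    network from the adjacency matrix A, the cavity-mirror coupling
    block C, the detuning matrix D and the beamsplitter matrix Bhat."""
    e = np.cumsum([0, m, n_m, 2 * k, m])
    blk = lambda i, j: A[e[i - 1]:e[i], e[j - 1]:e[j]]
    A12, A13, A14 = blk(1, 2), blk(1, 3), blk(1, 4)
    A22, A23, A24 = blk(2, 2), blk(2, 3), blk(2, 4)
    A32, A33, A34 = blk(3, 2), blk(3, 3), blk(3, 4)

    Q = np.block([[A22.T, A32.T @ Bhat],
                  [A23.T, A33.T @ Bhat]])
    R = np.linalg.inv(np.eye(Q.shape[0]) - Q)    # needs Assumption [a1]
    P1 = np.vstack([A12.T, A13.T])               # routes the source fields
    P2 = np.vstack([A22.T, A23.T])               # routes the cavity outputs
    Cpad = np.hstack([C, np.zeros((C.shape[0], 2 * k))])
    W = np.hstack([A24.T, A34.T @ Bhat])

    F = -(0.5 * C @ C.T + 1j * D) - Cpad @ R @ P2 @ C.T
    G = -Cpad @ R @ P1
    H = A24.T @ C.T + W @ R @ P2 @ C.T
    K = A14.T + W @ R @ P1
    return F, G, H, K

# Demo on the toy network used in the adjacency sketch above:
A = np.zeros((6, 6))
A[0, 1] = A[1, 3] = A[3, 2] = A[2, 4] = A[4, 5] = 1
C = np.array([[np.sqrt(0.8), np.sqrt(1.2)]])     # one cavity, two mirrors
D = np.array([[0.1]])
xi = 0.3
Bhat = np.array([[xi, -np.sqrt(1 - xi**2)], [-np.sqrt(1 - xi**2), -xi]])
F, G, H, K = assemble_passive_qsde(A, 1, 2, 1, C, D, Bhat)
print(F.shape, G.shape, H.shape, K.shape)        # (1,1) (1,1) (1,1) (1,1)
```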
|
this paper is concerned with the analysis of linear quantum optical networks . it provides a systematic approach to the construction of a model for a given quantum network in terms of a system of quantum stochastic differential equations ; such a model is the quantum analog of a classical state space model . the linear quantum optical networks under consideration consist of interconnections between optical cavities , optical squeezers , and beamsplitters . the resulting models can then be used in the design of quantum feedback control systems for these networks .
|
extinction of an isolated stochastic population after maintaining a long - lived state is a dramatic phenomenon .it occurs , even in the absence of environmental variations , because of an unusual chain of random events when population losses dominate over gains .population extinction risk is a key negative factor in viability of small populations , whereas extinction of a disease following an epidemic outburst is of course favorable .the possibility and consequences of extinction of biologically important components , regulated by chemical reactions in living cells , have also attracted interest . as stochastic population dynamics are usually far from equilibrium , and no general methods of evaluating large fluctuations are available , they are of much interest to physics .this work deals with an isolated single - species population undergoing a set of gain - loss processes .we will assume that the population is well mixed , so that spatial degrees of freedom are irrelevant .at the level of the _ deterministic rate equation _ ( henceforth _ rate equation _ ) , which describes the time history of the mean population size and ignores fluctuations , flows to an attracting fixed point , where the gain and loss processes balance each other .the actual stochastic population , however , behaves differently and ultimately becomes extinct .this is because , in the absence of influx of new individuals , the empty state is _ absorbing _ : the probability of exiting from it is zero . although extinction ( and fluctuations in general ) are beyond its scope , the rate equation is a convenient starting point of our analysis . for an isolated single - species populationthe rate equation can be written as where is a smooth function determined by the specific gain - loss processes , see below . for generic gain - loss processes .for the fixed point is repelling , whereas for it is attracting . in the former case , the next fixed point of eq .( [ rateeqgen ] ) is attracting , see fig .[ mf]a . according to the rate equation ,the mean population size in this case flows to and stays there forever .when varying the rate constants of the gain - loss processes , the attracting fixed point emerges via a transcritical bifurcation .now let be an attracting fixed point of the rate equation ( [ rateeqgen ] ) . to have a long - lived population of a nonzero size , at least two more fixed points of the rate equation ( [ rateeqgen ] )must be present : a repelling point and an attracting point , see fig .when starting from any , the mean population size flows to and , according to the rate equation , stays there forever .the characteristic bifurcation in this case is saddle - node .as we will see shortly , these two cases give rise to two different extinction scenarios of stochastic populations . to account for the intrinsic noise, we employ the master equation \ ] ] which describes the evolution of the probability to have individuals at time . here is the transition rate between the states with and individuals , whereas , and all terms that include with are assumed to be zero . for the master equation is for to be an absorbing state , the process rates must obey , for any , the condition . we will be interested in the important regime of parameters for which the mean population size in the metastable state , as predicted by eq .( [ rateeqgen ] ) , is large compared to one . 
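The metastability described above is easy to observe in simulation before any analysis. The sketch below runs the Gillespie algorithm for Verhulst-type rates W+(n) = B n (1 - n/N) and W-(n) = n (a model revisited in the examples below); the parameter values are illustrative. Each run exhibits a long sojourn near the attracting fixed point followed by an abrupt extinction.

```python
import numpy as np

rng = np.random.default_rng(1)

def gillespie_extinction_time(B=1.5, N=30, n0=None):
    """Simulate a birth-death process with W+(n) = B*n*(1 - n/N)
    (set to zero for n >= N) and W-(n) = n until absorption at n = 0."""
    n = int(N * (1 - 1 / B)) if n0 is None else n0   # start near n*
    t = 0.0
    while n > 0:
        wp = max(B * n * (1 - n / N), 0.0)
        wm = n
        t += rng.exponential(1.0 / (wp + wm))        # time to next event
        n += 1 if rng.random() < wp / (wp + wm) else -1
    return t

times = [gillespie_extinction_time() for _ in range(200)]
print(np.mean(times))   # MTE estimate; grows sharply with N and B
```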
here , prior to extinction , a long - lived probability distribution function ( pdf ) of the population sets in , on a relaxation time scale , around the corresponding attracting fixed point of the rate equation .this long - lived pdf , however , is metastable : it slowly decays in time .simultaneously , the probability to find the population extinct slowly grows in time , see e.g. refs . : the shape function ( ) of the metastable pdf is called the quasi - stationary distribution ( qsd ) . for metastable populations a very strong inequality , holds , and the decay time is equal to the mean time to extinction ( mte ) : the mean time it takes the stochastic process to reach the absorbing state at .the main objectives of this work is to accurately , and analytically , calculate the qsd and the mte of a population which experiences quite a general set of stochastic gain - loss processes .the crux of the method is a dissipative wkb approximation , where one assumes , treats as a continuous variable and searches for as here is a large parameter which scales as the mean population size in the metastable state . is called the action , whereas is called the amplitude .the wkb approximation breaks down at . herea different approximation must be used , as explained below .here is an overview of the two extinction scenarios as described by the wkb approximation .first , let be a repelling fixed point of the rate equation , see fig .[ mf]a . in a stochastic description extinction occurs via a large fluctuation which , acting against an effective entropy barrier , brings the population from a vicinity of directly to the absorbing state . in the wkb languagethis transition is possible because of the presence of the fluctuational momentum , see fig .[ figb1 ] .the attracting and repelling fixed points of the rate equation and , respectively , become hyperbolic fixed points of an extended phase plane . importantly, an additional hyperbolic fixed point - the fluctuational extinction point - appears here , with a zero coordinate , , but a nonzero momentum .the most probable path to extinction is the heteroclinic trajectory , directly connecting the metastable point " , that is the hyperbolic point , and the fluctuational extinction point " : the hyperbolic point .( such escape trajectories - heteroclinic trajectories with a non - zero momentum - are often called activation trajectories " , see _e.g. _ .) this is what we call extinction scenario a. now let be an attracting fixed point of the rate equation ( [ rateeqgen ] ) , so that the metastable population resides in the vicinity of , see fig .[ mf]b . in the stochastic description ,extinction occurs via a large fluctuation which brings the population from a vicinity of to a vicinity of the _ repelling _ fixed point . from therethe system flows into the absorbing state downhill " , that is almost deterministically . in the framework of wkb theorythe transition from to occurs in the extended phase plane where all three fixed points are hyperbolic , see fig .[ figb1a ] .here the optimal path to extinction is composed of two segments : the non - zero - momentum heteroclinic trajectory connecting the hyperbolic fixed points and ( the activation trajectory ) , and the zero - momentum segment going from to ( the relaxation trajectory ) .this is what we call extinction scenario b. 
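The two fixed-point structures are easy to tabulate numerically. A minimal sketch with illustrative rate functions: q-dot = R q (1 - q) - q for scenario A (a single nontrivial fixed point) and q-dot = B q^2 - q - C q^3 for scenario B (a repelling point q1 and an attracting point q2); both right-hand sides are assumptions chosen only to display the two structures, though rate functions of exactly these forms reappear in the examples below.

```python
import numpy as np

# Scenario A: f(q) = R q (1 - q) - q  ->  one nontrivial fixed point.
R = 1.5
print("A:", np.roots([-R, R - 1, 0]))      # q = 0 and q = 1 - 1/R

# Scenario B: f(q) = B q^2 - q - C q^3  ->  repelling q1, attracting q2.
B, C = 2.4, 1.0
print("B:", np.roots([-C, B, -1, 0]))      # q = 0, q1, q2
```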
the mean time to extinction ( mte ) and/or the qsd of metastable single - species stochastic populations were calculated previously in particular examples in different contexts of physics , chemistry , population biology , epidemiology , cell biology , _etc_. among them there is a large body of work which approximated the master equation by an effective fokker - planck equation , derived via the van kampen system size expansion or related recipes .once the fokker - planck equation is obtained , the mte and qsd can be calculated by standard methods .unfortunately , this approximation is in general uncontrolled .it fails in its description of the tails of the qsd , and gives exponentially large errors in the mte , as shown in refs . . with a few exceptions , accurate analytic results for the mte and qsdare only available for _ single - step _ gain - loss processes : in eq .( [ master ] ) . in this casethe mte can be determined exactly by employing the backward master equation .this yields a cumbersome analytic expression for the mte which , for a large population size in the metastable state , can be simplified via a saddle - point approximation . such a procedure was implemented in ref . . in its turn, the qsd of single - step processes can be calculated from a recursive relation obtained when substituting eq .( [ qsdintro ] ) in the master equation .several model examples of _ multi - step _ processes were considered in refs . , all of them belonging to extinction scenario a. we will generalize the previous results substantially and determine the mte and qsd for quite a general set of gain - loss processes pertaining to extinction scenario a. we will also determine the mte and qsd for extinction scenario b. our wkb theory starts with applying the ansatz ( [ n30 ] ) to an eigenvalue problem for the qsd which is nothing but the first excited eigenvector of the master equation . in the leading wkb order one arrives at the problem of finding zero - energy trajectories of an effective classical hamiltonian .there are two different types of zero - energy phase trajectories ( in addition to the extinction line ) : the activation and relaxation trajectories , which correspond to the fast and slow wkb modes , respectively . to obtain the pre - exponents, one needs to consider the sub - leading wkb order .the wkb calculations are simpler for scenario a , as the relaxation trajectory does not play any role here . in scenariob both the activation , and the relaxation trajectories are important . in both scenariosthe wkb approximation breaks down at .here we find the qsd , up to a normalization constant , from a recursive relation , obtained by linearizing the process rates with respect to at sufficiently small .in scenario a it suffices to match the recursive solution with the fast - mode solution in their joint region of validity , in much the same way as it was done by kessler and shnerb in a particular example of three stochastic reactions . in scenariob the slow mode dominates the wkb - solution at .it diverges , however , at . to obtain a regular solution there, one needs to go beyond the wkb approximation and account , in a close vicinity of , for strong coupling between the fast- and slow - mode solutions .this can be done via the van - kampen system size expansion of the master equation which does hold in the vicinity of .this procedure was first implemented by meerson and sasorov , in a model problem of noise - driven population explosion. 
then it has been employed by escudero and kamenev in the context of a wkb theory of stochastic population switches between two different metastable states .the theory of escudero and kamenev was formulated for quite a general set of gain - loss processes . in this paperwe will adopt their general approach , and some of their notation , in the problem of population extinction .here is a plan of the remainder of the paper .section [ general ] starts with a formulation of the eigenvalue problem for the qsd .then we expose the wkb approximation and the fast- and slow - mode wkb solutions .the derivation here is quite general and holds for extinction scenarios a and b. section [ recursion ] presents a derivation of recursive solution of the quasi - stationary master equation for sufficiently small .this derivation , which also holds for extinction scenarios a and b , is specific to population extinction . except for simple particular cases ( see _e.g. _ ref . ) , it has not been attempted before . in section [ sa ]we match the fast - mode wkb solution with the recursive small- solution and obtain general expressions for the qsd and mte in scenario a. in the same section we obtain the qsd and mte for single - step processes and near the transcritical bifurcation , characteristic of scenario a. then we illustrate our theory on several particular examples , some of which investigated previously . in section [ sb ]we determine the qsd and mte for scenario b. then we again apply our results to single - step processes and near the saddle - node bifurcation , characteristic of scenario b. furthermore , we consider a particular example of three stochastic reactions and compare our theoretical predictions with a numerical solution of the master equation .a summary of our results is presented in section [ conclusion ] .when starting at from a sufficiently large population , the probability distribution , as described by the master equation ( [ master ] ) approaches , on a relaxation time scale , a long - lived metastable pdf peaked at a non - zero attracting fixed point of the rate equation .the metastable distribution is slowly leaking " to zero , see eq .( [ qsdintro ] ) .let us denote the non - zero attracting fixed point by ( for scenario a and b , respectively , see figs . [ figb1 ] and [ figb1a ] ) .using eq .( [ qsdintro ] ) , we arrive at an eigenvalue problem for the qsd , : = -e \pi_n\,.\ ] ] importantly , the eigenvalue turns out to be _ exponentially _ small compared to the relaxation time . therefore , the term in the right hand side of eq .( [ qsdmaster ] ) can be neglected , and we have to deal with a quasi - stationary equation = 0\ , , \;n=1,2 , \dots \,.\ ] ] for definiteness , we normalize the qsd to unity : . once is found , we can use eqs .( [ master0 ] ) and ( [ qsdintro ] ) to calculate the mte : let us introduce a rescaled coordinate , where is the large parameter of the problem .the central assumption of our theory is that , after a proper rescaling of time which will be introduced shortly , the process rates can be represented as where , for , and are .this assumption guarantees that the population be long - lived , and is crucial both for the wkb - approximation that we present in this section , and for the recursive solution of eq .( [ qsdmaster1 ] ) that we will be dealing with later . as is the absorbing state , . 
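For moderate population sizes the eigenvalue problem just formulated can also be solved directly by truncating the state space, which provides a benchmark for the WKB results below. A minimal sketch, again for the illustrative single-step rates W+(n) = B n (1 - n/N), W-(n) = n: the QSD is the slowest-decaying eigenvector of the generator restricted to n >= 1, and the MTE is 1/E.

```python
import numpy as np

def qsd_and_mte(B=1.5, N=30):
    """QSD and MTE for W+(n) = B n (1 - n/N), W-(n) = n, from the
    master-equation generator restricted to n = 1 .. 2N."""
    nmax = 2 * N
    wp = lambda n: max(B * n * (1 - n / N), 0.0)
    L = np.zeros((nmax, nmax))
    for i, n in enumerate(range(1, nmax + 1)):
        L[i, i] = -(wp(n) + n)                # leave state n
        if i > 0:
            L[i, i - 1] = wp(n - 1)           # birth from n - 1
        if i < nmax - 1:
            L[i, i + 1] = n + 1               # death from n + 1
    evals, evecs = np.linalg.eig(L)
    j = np.argmax(evals.real)                 # slowest mode: eigenvalue -E
    pi = np.abs(evecs[:, j].real)
    return pi / pi.sum(), -1.0 / evals[j].real   # QSD and MTE = 1/E

pi, tau = qsd_and_mte()
print(tau)   # consistent with the Gillespie estimate above
```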
for we can employ the wkb ansatz ( [ n30 ] ) : where and are assumed to be , and a constant prefactor is introduced for convenience , see below .now we assume that , taylor - expand the functions of in eq .( [ qsdmaster1 ] ) around and keep terms up to order .we obtain the equation derived by escudero and kamenev : =0\,,\end{aligned}\ ] ] where the primes denote differentiation with respect to . in the leading order , this equation yields a stationary hamilton - jacobi equation , where is the effective hamiltonian , and is the momentum .therefore , in the leading wkb order , one needs to find _ zero - energy _phase trajectories of the hamiltonian ( [ hamil ] ) . as for any , one such trajectory is at an arbitrary : the extinction line .this line is of no importance in the wkb theory , however .what we need are phase trajectories .one of them is the relaxation trajectory . in general, there is one and only one additional phase trajectory for which , except in some points .let us prove this statement .the hamiltonian vanishes at .differentiating eq .( [ hamil ] ) twice with respect to , we obtain .therefore is a convex function of , and so it has one and only one additional real root . the relaxation trajectory and activation trajectory give rise to the slow and fast wkb modes , respectively , as was shown in a particular example in ref . .the -dynamics along is described by the hamilton s equation which is nothing but the ( rescaled ) rate equation ( [ rateeqgen ] ) .the nontrivial fixed points of the rate equation , , are positive roots of the equation . as , the activation trajectory crosses the relaxation trajectory in these fixed points . as for any , the small- expansion of eq .( [ qdot ] ) generically starts with a linear term in . in the remainder of this paperwe assume that the linear decay rate is non - zero in the leading order : .therefore , we can always rescale time , and all the rates , by .this procedure uniquely defines the rescaling leading to eq .( [ rateexp ] ) . in extinction scenarioa , see fig .[ figb1 ] , the most probable path to extinction is the heteroclinic trajectory connecting the fixed point of the hamiltonian system ( here is the attracting point of the rate equation ) with the fluctuational extinction point . in extinction scenariob , see fig .[ figb1a ] , the most probable path to extinction is composed of two segments .the first one is the activation trajectory : a non - zero - momentum heteroclinic trajectory connecting the fixed point with an intermediate fixed point , where is a repelling fixed point of the rate equation .the second one is a relaxation segment , connecting the point with the deterministic extinction point . for the fast mode one obtains where the integration constant is already accounted for by the prefactor in eq .( [ fastmode ] ) . for the slow mode . in the subleading order , eq .( [ masterqsd ] ) yields a first - order differential equation for : where for the fast mode , and for the slow mode .it is convenient to use the identities where the subscripts and stand for the partial derivatives . to remind the reader , all the rates in eqs .( [ identities ] ) are rescaled with respect to the linear decay rate constant .the fast - mode solution for can be written as where .this result was obtained by escudero and kamenev in the context of stochastic population switches . the quantity in the denominator of the integrand in eq .( [ s1fast ] ) vanishes in every fixed point of the rate equation , including . 
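As a brief aside before examining the integrand further: for single-step processes the Hamiltonian reduces to H(q,p) = w_1(q)(e^p - 1) + w_{-1}(q)(e^{-p} - 1), so the activation trajectory is simply p_a(q) = ln[w_{-1}(q)/w_1(q)]. A minimal sketch for the Verhulst-type rates w_1 = R q (1 - q), w_{-1} = q (used again in the examples below) evaluates the entropy barrier and checks it against the closed form ln R - 1 + 1/R, which the integral yields.

```python
import numpy as np
from scipy.integrate import quad

R = 1.5
# w_1(q) = R q (1 - q), w_{-1}(q) = q, so -p_a(q) = ln[R (1 - q)].
q_star = 1 - 1 / R                          # attracting fixed point
dS, _ = quad(lambda q: np.log(R * (1 - q)), 0, q_star)
print(dS, np.log(R) - 1 + 1 / R)            # both ~ 0.0721
```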
to see how the integrand behaves at the fixed points ,consider the equation =0 ] . by taylor - expanding the numerator and denominator in the vicinity of any nontrivial fixed point , one can see that .therefore , the general fast - mode solution is given by eq .( [ fastmode ] ) with from eq .( [ sfast ] ) , and from eqs .( [ s1final ] ) and ( [ psi ] ) .it also includes a constant prefactor which can be found immediately .indeed , at the qsd is strongly peaked around the attracting fixed point . herethe fast - mode solution dominates , and can be found by normalizing to unity the gaussian asymptote of the qsd around .the gaussian asymptote is obtained by expanding the qsd ( [ fastmode ] ) in the vicinity of : where we have used the equalities . to see that , one can again use eq .( [ hpq ] ) . at reads . by virtue of eq .( [ identities ] ) .the quantity is the -derivative , evaluated at , of the expression in the right hand side of the rate equation ( [ qdot ] ) .as the point is by assumption attracting , .therefore , , and the asymptote ( [ gauss ] ) is indeed a gaussian distribution . normalizing it to unity , we obtain so the fast - mode solution is fully determined : +s_1^{(f)}(q_*)-s_1^{(f)}(q)}\,,\end{aligned}\ ] ] with from eq .( [ sfast ] ) and from eqs .( [ s1final ] ) and ( [ psi ] ) .now consider the slow - mode solution for which .the subleading - order contribution is found by putting in eq .( [ s1fast ] ) : then eq . ( [ fastmode ] )yields the general slow - mode solution for the qsd : where is an arbitrary constant .the minus sign is put here for convenience , because in the region of , where the slow - mode solution is relevant ( see section [ sb ] ) , and .one can see from eq .( [ slowmode ] ) that the slow - mode solution diverges in the fixed points of the rate equation .this divergence will be cured in section [ sb ] .we show in the following that , for a given extinction scenario and in a given region of , only one of the modes , either fast or slow , dominates the resulting qsd , while the other one must be discarded . before we deal with this issue , however , we recall that the wkb approximation breaks down at . to find the qsd for _ all _ we will solve eq .( [ qsdmaster1 ] ) in the region of by recursion and then match the recursive solution with either the fast - mode ( in scenario a ) , or the slow - mode ( in scenario b ) wkb solution in the joint region of their validity .the objective of this section is to approximately solve eq .( [ qsdmaster1 ] ) at sufficiently small . the exact criterion of smallness will appear later , when we match different solutions in joint regions of their validity .in the leading order in we take , see eq .( [ rateexp ] ) , and expand it in up to the linear term : .then eq .( [ qsdmaster1 ] ) becomes =0\,,\ ] ] where only processes with contribute .one can look for particular solutions of this recursive equation in the form thus arriving at an equation with -independent coefficients : in the remainder of this paper we make the following simplifying assumption : that is , we assume that the rates of the multi - step _ loss _ processes , where , do not have , in the leading order in , linear terms in their taylor expansion in .this assumption is always satisfied for stochastic chemical reactions ( where pairs , triplets , , of reacting particles are needed to bring down the number of particles by ) .the conditions ( [ wminus ] ) also hold for all models of population biology and epidemiology we are aware of . 
using eq .( [ wminus ] ) and the equality ( to remind the reader , we are using rescaled variables ) , we rewrite the recursive eq .( [ qsdr2 ] ) as f_n- \sum_{r=1}^{k}w_r^{\prime}(0 ) f_{n - r}\ , , \label{qsdr3}\ ] ] where .if there is no degeneracy , the general solution of eq .( [ qsdr3 ] ) is a linear combination of all particular solutions , where obeys the characteristic polynomial equation of degree : \lambda + 1 = 0\,.\ ] ] note , that is always a root .let us show that eq .( [ pol2 ] ) has one and only one additional positive root , , while all others roots are either negative or complex .first , we establish a connection between the roots of eq .( [ pol2 ] ) and the crossing points with the -axis of the zero - energy trajectories of the wkb hamiltonian ( [ hamil ] ) . by expanding , eq .( [ hamil ] ) becomes where only terms with contribute .putting and and using eq .( [ wminus ] ) , one can see that eq .( [ hamil2 ] ) coincides with eq .( [ pol2 ] ) . as we have shown that the equation has , for any , two and only two real solutions for , eq .( [ pol2 ] ) also has two and only two real solutions for , both of them positive .the roots and correspond to the crossing points with the -axis of the slow and fast modes , respectively . dividing eq .( [ pol2 ] ) by , we arrive at a polynomial equation of degree : for the roots of this polynomial can be expressed in radicals .for , they need to be computed numerically .assume that we have found all of the roots of eq .( [ pol3 ] ) , .if there is no degeneracy , the general solution for is where the coefficients are the following ( see appendix a for the derivation ) : the coefficient , corresponding to the root , can be expressed through the coefficients , ( see appendix a ) : before writing down the general solution of the recursive equation ( [ qsdr1 ] ) for the qsd , we recall a simple relation between and the mte . using the rescaled reaction rates, we can rewrite eq .( [ e ] ) as in view of the conditions ( [ wminus ] ) , only one term in the sum survives in the leading order . as , we obtain so .now we switch to the rescaled variable , use the relation and eq .( [ fn ] ) , and obtain the small- asymptote of : the validity region of this asymptote ( which includes the yet unknown to be found later ) is scenario - dependent .it is a relatively narrow region in scenario a , and a broader region in scenario b. the difference comes from the fact that in scenario a the recursive solution needs to be matched , at , with a rapidly growing fast - mode solution , whereas in scenario b the matching needs to be done with a slowly varying slow - mode solution , see sections [ sa ] and [ sb ] , respectively . what is the role of complex roots of the polynomial equation ( [ pol3 ] ) in the recursive solution ( [ recsolution ] ) ?these can appear only for , and they come in complex conjugate pairs : and . one can show , by using eq .( [ cires ] ) , that the coefficients and , corresponding to and , are also complex conjugate : , so that from eq . ( [ recsolution ] )is real - valued as expected . when complex roots are present , the qsd at small may exhibit rapidly decaying oscillations as a function of .let us now determine the , or , asymptote of the qsd ( [ recsolution ] ) , in each of the two extinction scenarios .this asymptote will be matched , in each scenario , with the dominant wkb mode .equation ( [ recsolution ] ) includes terms . 
at leading contribution comes from the term with the smallest .the rest of the terms are exponentially small compared to the leading one and can be safely neglected . in scenarioa , the two positive roots of eq .( [ pol2 ] ) are and , whereas the rest of the ( negative or complex ) roots obey the inequality , see appendix b. in this case the asymptote of the recursive solution ( [ recsolution ] ) at , or , is where the positive constant ( see appendix b ) satisfies in this case corresponds , in the wkb - language , to the crossing point of the activation trajectory and the -axis , see fig .[ figb1 ] . therefore ,to set the ground for matching eq .( [ rec1 ] ) with the wkb solution in scenario a , we can rewrite eq .( [ rec1 ] ) as in scenario b the asymptote of eq .( [ recsolution ] ) is quite different . herethe root of eq .( [ pol2 ] ) with the smallest absolute value is , see appendix c. therefore , the term in eq .( [ recsolution ] ) is dominant , and we obtain }\,,\ ] ] where we have used eqs .( [ cires ] ) and ( [ c00main ] ) .the asymptote ( [ rec2 ] ) can be expressed in terms of the wkb hamiltonian ( [ hamil ] ) .using eq .( [ identities ] ) , we obtain .recalling that and using eq .( [ wminus ] ) , we can rewrite as as is an attracting point here , . then , using eq .( [ hpqr ] ) , the asymptote ( [ rec2 ] ) becomes note that corresponds to the zero - momentum ( ) crossing point of the relaxation trajectory and the -axis , see fig .[ figb1a ] .in this section we calculate the mte and qsd for extinction scenario a. here extinction occurs along the activation trajectory : the heteroclinic trajectory , connecting the metastable point and the fluctuational extinction point of the phase plane , see fig .[ figb1 ] . in this casethe slow - mode solution is negligible compared to the fast - mode solution in the entire region of .furthermore , the fast - mode solution ( [ fastmode1 ] ) can be directly matched with the recursive solution ( [ recsolution ] ) in the joint region of their validity which turns out to be , or . to implement the matching procedure, we first find the asymptote of the fast - mode solution ( [ fastmode1 ] ) . because of the divergence of at , we should proceed with care .let us rewrite eq .( [ fastmode1 ] ) as +s_1^{(f)}(q_1)-[s_1^{(f)}(q)-\ln q]}\,.\ ] ] here we have introduced the prefactor which diverges at , and made up for it by adding in the exponent .let us show that the expression in the exponent is regular at .we represent as and use eq .( [ s1fast ] ) to rewrite as an integral over .now we taylor - expand the integrand in the vicinity of up to linear terms .the divergent terms cancel out , and the remaining expression is finite .now we rewrite eq .( [ fm1 ] ) as +\phi(q_1)-\phi(q)}\,,\ ] ] where is regular at . by taylor - expanding the exponent of eq .( [ fm3 ] ) around to first order , we obtain the asymptote of the fast - mode solution : +\phi(q_1)-\phi(0)}.\nonumber\\\end{aligned}\ ] ] this asymptote can be matched with the asymptote of the recursive solution at , given by eq .( [ rec11 ] ) .this matching yields +\phi(0)-\phi(q_1)},\label{tau1}\end{aligned}\ ] ] where is given by eq .( [ phi ] ) , is the linear decay rate constant in physical units , and is given by eq .( [ c1 ] ) . 
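The root structure just described is easy to examine numerically. In the sketch below the characteristic polynomial (pol2) is written, under assumption (wminus) and with the linear decay rate rescaled to unity, as sum_r w_r'(0) lambda^(r+1) - [1 + sum_r w_r'(0)] lambda + 1 = 0; this form is our reconstruction (it matches the displayed fragment and the identification with H(q -> 0, e^p = lambda) = 0) and should be read with that caveat. The branching rates are illustrative.

```python
import numpy as np

# Hypothetical linear branching rates w_r'(0), r = 1..K (K = 3), with
# the linear decay rate rescaled to unity as in the text.
wprime = {1: 0.9, 2: 0.3, 3: 0.1}
K = max(wprime)

# Reconstructed eq. (pol2), highest power first for numpy.roots:
coeffs = np.zeros(K + 2)
for r, w in wprime.items():
    coeffs[K - r] += w                    # coefficient of lambda^(r+1)
coeffs[K] = -(1 + sum(wprime.values()))   # coefficient of lambda
coeffs[K + 1] = 1.0                       # constant term
roots = np.roots(coeffs)

print(np.isclose(roots, 1).any())         # True: lambda = 1 is always a root
pos = sorted(z.real for z in roots if abs(z.imag) < 1e-9 and z.real > 0)
print(pos)                                # exactly two positive roots
```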
the general expression ( [ tau1 ] ) for the mte in scenarioa is one of the main results of this work .the leading term in the exponent , proportional to , is the effective entropy barrier to extinction .the proportionality factor is the absolute value of the area under the activation trajectory , see an example in fig .[ figb1 ] .noticeable is the presence of the large factor in the pre - exponent .the constant has a clearly non - wkb nature , as it comes from the recursive solution of the quasi - stationary master equation at small and is contributed to by _ all _ of the roots , .another important result is the qsd in extinction scenario a. it is determined by the asymptotes ( [ fastmode1 ] ) and ( [ recsolution ] ) which coincide , in the leading order , in their joint region of validity , or . remaining within scenario a , we now turn to an important sub - class of stochastic population processes : single - step processes .here there are only two non - zero process rates : , where all the rates are normalized by the linear decay rate constant . in this casethe expressions for the mte and qsd can be simplified considerably .the wkb hamiltonian ( [ hamil ] ) becomes the rate equation is . in scenarioa one has .here it is convenient to denote the ratio of the linear birth and death rates by . for the fixed point of the rate equationis repelling .the activation trajectory is ] is absent from their eq .while the assumption may hold in some simple models , it does not hold in general .for example , it does not hold for stochastic chemical reactionswhere the rates are combinatorial , as in one of the examples we present in subsection [ sa]d below .now let us return to a general set of ( not necessarily single - step ) processes .our objective is to simplify the mte ( [ tau1 ] ) in the special regime when the population , as described by the rate equation , is very close to the characteristic ( transcritical ) bifurcation point of scenario a. here the attracting point is very close to the repelling point , so that .this also implies .taylor - expanding eq .( [ hamil ] ) in and around , we obtain =0.\ ] ] the trivial solutions are the extinction line and the relaxation trajectory , whereas the nontrivial solution yields a straight - line activation trajectory .using eq .( [ identities ] ) and expanding the algebraic equations for , and for at small and , we can represent the activation trajectory as here where , and where .exactly at the bifurcation the rate constants are such that . herethe attracting fixed point merges with the repelling point .the coordinate of the attracting fixed point can serve here as the distance to the bifurcation .[ the third derivatives of the hamiltonian , which appear in the denominators of eqs .( [ q1bif ] ) and ( [ pfbif ] ) , are generically of order unity . ] now , using eq .( [ actbif1 ] ) , we can calculate , the fast - mode action and the accumulated action between the points and this quantity is the area of a triangle , see fig .[ bifa ] . to find the fast - mode correction to the action , , we expand eq .( [ s1calculation ] ) in the vicinity of and , keeping only the leading order terms .this yields \sum_r r w_r^{\prime}(0)\simeq 0\,,\ ] ] whereas the subleading terms in the rate expansion do not contribute .the solution of this equation is .then , by virtue of eq .( [ phi ] ) , we obtain .that is , near the bifuraction , the subleading wkb correction vanishes .now let us consider the coefficient [ see eq .( [ c1 ] ) ] which enters eq .( [ tau1 ] ) . 
among the roots of the polynomial ( [ pol2 ] ) , which contribute to , two are special : and . near the bifurcation , so we can write . as a result , . where is a negative constant of order unity , and we have put in the expression for . furthermore , the roots of the polynomial ( [ pol3 ] ) can be evaluated _ at _ the bifurcation point . as a result, one can express via the linear branching rates .after some algebra ^{-1}\,.\ ] ] substituting all of the above into eq .( [ tau1 ] ) , we obtain the mte close to the bifurcation point : where and should be evaluated _ at _ the bifurcation .equation ( [ taubif ] ) is valid when . for sufficiently large strong inequality is compatible with the strong inequality which describes closeness to the bifurcation .note that the constant is determined by the full small- recursive solution that we found in section [ recursion ] .although eq .( [ tauuni ] ) breaks down at , one can still predict a scaling relation for the mte in this region : .the symbol , here and in the following , means of the same order as " .we will now illustrate our theory by calculating the mte in four pedagogical examples of extinction scenario a. the first three of them are single - step processes : the logistic verhulst model of population dynamics , a set of three chemical reactions , and the sis model of epidemics .the fourth example - another set of three chemical reactions - involves a two - step process .we will also consider all of these examples near the bifurcation .the generalized verhulst model is a stochastic logistic model : a single - step markov process with birth and death rates respectively , where and are non - negative rate constants .the quadratic corrections account for competition for resources .it is customary to put in eq .( [ logrates ] ) , and this is what we will do here .rescaling time by the linear death rate constant , we bring the rates to the form given by eq .( [ rateexp ] ) : and , where is the ratio of the linear birth and death rates , and .according to eq .( [ rateexp ] ) , , and . at the fixed point of the rate equation is repelling , whereas is attracting . herewe have , and therefore , the mte [ eq . ( [ tau11 ] ) ] in physical time units is \,,\ ] ] which coincides with previous results obtained by different methods .in this simple example the process rates satisfied the conditions , so eq. ( [ tauver ] ) could have been obtained from eq .( 19 ) of ref . . in the next examplewe relax one of the two conditions and show that , as predicted by our more general eq .( [ tau11 ] ) , the pre - exponent of the mte changes .consider a set of three reactions among particles : branching , a reverse reaction , and decay .as observed in ref . , this set of reactions can be viewed as a generalization of the verhulst model considered in the previous example .indeed , by imposing a special relation , , between the rate constants , and by denoting and , one recovers the process rates and of the verhulst model . for this special choice of rate constantsone has which yields eq .( [ tauver ] ) for the mte .now , what if the rate constants and are independent ?as usual , we can normalize time and the reaction rates by and denote and . by virtue of eq .( [ rateexp ] ) we can write , , and . the rescaled rate constants are identical to those of the verhulst model , except that is now nonzero . as a result , =\tilde{b}\,,\ ] ] and eq . 
( [ tau11 ] ) for the mte yields \,,\ ] ]where we have returned to physical units .this result can not be obtained from eq .( 19 ) of ref . .now let us consider the well - known sis model of epidemics , see _e.g. _ refs . and references therein .the sis model deals with dynamics of a population which consists of two groups of individuals : susceptible to infection and infected .it is assumed that infection does not confer any long - lasting immunity , and infected individuals become susceptible again after infection . when demography ( births and deaths ) is negligible , the total number of individuals in the two groups is conserved . as a result ,the model becomes effectively single - population , with the effective rates mathematically , this model is just another example of the generalized verhulst model , see eq .( [ logrates ] ) , where one chooses but .let us denote and rescale time and rates by the linear decay rate constant .the rescaled rates become and , while .the fixed point of the rate equation is attracting when .furthermore , , and ( as in the above notation ) . finally , therefore , the mte [ eq . ( [ tau11 ] ) ] , in physical time units , is given by \,,\end{aligned}\ ] ] which coincides with previous results obtained by different methods .now we consider another set of stochastic reactions among particles which include , in addition to single - step processes and , a two - step process : binary annihilation .this problem was previously solved by kessler and shnerb . herewe show that their result for the mte follows from our eq .( [ tau1 ] ) .in our notation , the transition rates between the states and are given by rescaling time and denoting and , we obtain eq .( [ rateexp ] ) with in the rescaled notation , the attracting fixed point is which demands . the wkb hamiltonian ( [ hamil ] ) takes the form solving the equation =0 ] [as one can check , both and are positive ] , we obtain the boundary - layer equation where the rescaled constant current is to be found later . the general solution of eq . ( [ fp ] ) is where is another constant .now we can match this solution to the slow - mode solution at and , that is , at . to eliminate the exponential growth at , one must choose , so the asymptote of the boundary - layer solution ( [ fpsolution ] ) at becomes the slow - mode solution ( [ slowmode ] ) at can be approximated as matching the two asymptotes , one obtains to find the still unknown constant , we have to match the asymptote of the boundary - layer solution ( [ fpsolution ] ) , which is \,,\end{aligned}\ ] ] with the asymptote of the fast - mode solution at , which is + s_1^{(f)}(q_2)-s_1^{(f)}(q_1)-(n/2)s^{\prime\prime}(q_1)(q - q_1)^2}.\nonumber\\\end{aligned}\ ] ] here we have used the equalities and neglected terms of order in the exponent . putting into eq .( [ hpq ] ) , we obtain where . matching the asymptotes ( [ fpright ] ) and ( [ fast2 ] ) and using eq .( [ paprime ] ) , we find +[s_1^{(f)}(q_2)-s_1^{(f)}(q_1)]}.\end{aligned}\ ] ] what is left is to find the mte by matching the slow - mode solution at with the recursive solution ( [ rec22 ] ) at .using eq .( [ slowmode ] ) we obtain , at : where is given by eq .( [ hpqr ] ) . 
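The closed-form single-step results quoted in the scenario-A examples above can be benchmarked against exact numerics. The sketch below does this for the Verhulst result (tauver), read as tau = sqrt(2 pi / N) * R0/(R0 - 1)^2 * exp[N(ln R0 - 1 + 1/R0)] in units of the inverse linear death rate; the prefactor is taken as quoted in the cited references rather than re-derived here, and the exact value comes from the slowest eigenvalue of the truncated generator.

```python
import numpy as np

def mte_exact(B, N):
    """MTE as 1/E, with E the slowest decay rate of the generator of
    W+(n) = B n (1 - n/N), W-(n) = n, truncated at n = 2N."""
    nmax = 2 * N
    wp = lambda n: max(B * n * (1 - n / N), 0.0)
    L = np.zeros((nmax, nmax))
    for i, n in enumerate(range(1, nmax + 1)):
        L[i, i] = -(wp(n) + n)
        if i > 0:
            L[i, i - 1] = wp(n - 1)
        if i < nmax - 1:
            L[i, i + 1] = n + 1
    return -1.0 / np.linalg.eigvals(L).real.max()

def mte_wkb(B, N):
    """Closed form as we read eq. (tauver); prefactor quoted, not derived."""
    dS = np.log(B) - 1 + 1 / B
    return np.sqrt(2 * np.pi / N) * B / (B - 1) ** 2 * np.exp(N * dS)

for N in (30, 60, 100):
    print(N, mte_exact(2.0, N), mte_wkb(2.0, N))   # ratio tends to 1
```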
Comparing this with Eq. ([rec22]) and using Eq. ([cs]), we obtain the MTE (Eq. ([tau2])), whose exponent contains the subleading correction $S_1^{(f)}(q_1)-S_1^{(f)}(q_2)$, where  is the linear decay rate constant in physical units. The expression ([tau2]) for the MTE in scenario B is an important result of our work. The leading term in the exponent, proportional to , is the effective entropy barrier to extinction. The proportionality factor is the absolute value of the area between the activation trajectory and the relaxation trajectory; see an example in Fig. [figb1a]. In contrast to scenario A, the pre-exponential factors in Eq. ([tau2]) are -independent. Using Eqs. ([s1final]), ([psi]) and ([paprime]), one can rewrite Eq. ([tau2]) in the more concise form ([tau2simple]), with exponent correction $\psi(q_1)-\psi(q_2)$. As mentioned above, determining the MTE in scenario B does not require any information about the small- recursive solution. Furthermore, Eq. ([tau2]) formally coincides with the result of Escudero and Kamenev, who calculated a different quantity: the mean time to _escape_ from one metastable state into another. Finally, the same result ([tau2]) can also be obtained for the mean time to escape to an absorbing state at infinity, as in the particular example considered by Meerson and Sasorov. The reason for these coincidences is that, in all these systems, a constant probability current sets in beyond the repelling fixed point of the rate equation. It is the magnitude of this current, carried by the slow WKB mode, rather than the exact nature of the target state for escape (an absorbing state at zero, an absorbing state at infinity, or another metastable state), that determines, in the leading and subleading orders in , the mean escape rate from a metastable state. To conclude this section, the QSD (another main result of this work) is given by four overlapping asymptotes: (i) the recursive solution ([recsolution]), valid for ; (ii) the slow-mode WKB solution ([slowmode]), valid for  and ; (iii) the boundary-layer solution ([fpsolution]), valid for ; and (iv) the fast-mode WKB solution ([fastmode1]), valid for . For completeness, we briefly consider the special case of single-step processes, where only  are present. Here Eq. ([tau2]) simplifies considerably. Performing calculations similar to those in scenario A (see Sec. [sa]), and using Eqs. ([identities]) and ([s11]) together with the fact that , one obtains the MTE. As expected, this result coincides with the single-step result of Ref.  for the mean time of a population switch between two metastable states. Here we calculate the MTE near the characteristic (saddle-node) bifurcation of scenario B. At the bifurcation, the nontrivial attracting fixed point  of the rate equation merges with the repelling point ; we work above, but close to, the bifurcation point. As a result, the momentum on the activation trajectory is much smaller than unity, see Fig. . One can always define the parameter  such that, at the bifurcation, . Furthermore, near the bifurcation  and , where the exact definition of  will appear shortly. Let us Taylor-expand  from Eq. ([hamil]) in the vicinity of  and .
as we expect to be , we neglect the terms of order and higher and arrive at the following equation for the zero - energy phase trajectories close to : =0.\label{hbifb}\end{aligned}\ ] ] as can be checked _ a posteriori _ , the terms in eq .( [ hbifb ] ) scale as follows : , , , and .therefore , the term can be neglected .the nontrivial solution of eq .( [ hbifb ] ) yields the activation trajectory : a parabola with the roots and . to simplify the notation , we use eq .( [ identities ] ) and evaluate the small difference by expanding the algebraic equation in the vicinity of . neglecting the term , we obtain where .the activation trajectory can be written as as , we find .furthermore , the action is given by ,\end{aligned}\ ] ] whereas is the area of the shaded region in fig .[ bifb ] . as in scenarioa , the sub - leading wkb correction vanishes near the bifurcation .indeed , we taylor - expand eq .( [ s1calculation ] ) in the vicinity of and , keep only leading - order terms , and obtain using eq .( [ actbif1b ] ) , we find that the second and third terms cancel out , and so can be chosen zero . as a result , eq . ( [ tau2 ] )yields the mte near the bifurcation : where and should be evaluated _ at _ the bifurcation .the applicability criterion of this result is .for sufficiently large this strong inequality is compatible with the strong inequality which describes closeness to the bifurcation . at ( [ taubifb ] ) predicts the following scaling of the mte with : .we notice that eq .( [ taubifb ] ) does not require any information about the qsd in the region of small .indeed , as was mentioned in section v a , the exact nature of the target state is of no significance here .note that the same scaling of the effective entropy barrier with the distance from the bifurcation appears in the context of escape from one metastable state to another .finally , the same scaling near the bifurcation is observed in _ continuous _ systems , driven by external delta - correlated gaussian noise and therefore describable by a fokker - planck equation .let us illustrate the extinction scenario b on the following set of reactions : binary reproduction , the reverse process , and linear decay . here rescaling time , and denoting , , and , we arrive at eq .( [ rateexp ] ) with in the rescaled notation the fixed points are ( attracting point ) , ( repelling point ) , and : another attracting point around which the metastable population resides .the wkb hamiltonian ( [ hamil ] ) takes the form solving the equation =0 ] , \,,\cdots$ ] , and .therefore , using eqs .( [ c02 ] ) and ( [ siviete ] ) , we can rewrite as -\cdots - f_1 w_{k}^{\prime}(0 ) } { a_k\displaystyle\prod_{i=1}^{k}(1-\lambda_i)}.\nonumber\\\label{c}\end{aligned}\ ] ] now , using eq .( [ qsdr3 ] ) with , we obtain a relation between and . plugging it into ( [ c ] ) we have \right.\nonumber\\ & -&\left.\cdots - f_1 [ w_{k-1}^{\prime}(0)+w_{k}^{\prime}(0)]\right\}.\label{cc}\end{aligned}\ ] ] one can use this argument repeatedly more times , and obtain that the expression in the curly brackets in eq .( [ cc ] ) equals .in addition , by virtue of eq .( [ siviete ] ) satisfies therefore , is given by thereby proving eq .( [ cires ] ) for .this proof can be generalized to the rest of the coefficients .here one has to multiply the first of eqs .( [ eqset ] ) by , the second by , ... 
, and finally the last by , and add all the equations. This yields (Eq. ([cicalc1]))
\[
\cdots + c_1\left[\lambda_1^{-1}-\lambda_1^{-2}s_1^{(i)}+\lambda_1^{-3}s_2^{(i)}-\cdots+(-1)^k \lambda_1^{-k-1}s_k^{(i)}\right]
+\cdots+
c_k\left[\lambda_k^{-1}-\lambda_k^{-2}s_1^{(i)}+\cdots+(-1)^k \lambda_k^{-k-1}s_k^{(i)}\right].
\]
The coefficient of  in the right-hand side of Eq. ([cicalc1]) satisfies , because  is absent from ; see Eq. ([si0]). On the other hand, the coefficients of all other  in the right-hand side of Eq. ([cicalc1]) can be shown to equal zero. Therefore, $c_i$ is given by a ratio whose denominator is
\[
\prod_{\substack{j=0\\ j\neq i}}^{k}(\lambda_i-\lambda_j)\,.
\]
By using the recursive equation ([qsdr3]) repeatedly, and by using Eq. ([si0]), one finally obtains Eq. ([cires]). Finally,  from Eq. ([c0]) can be expressed through the reaction rate constants; see Eq. ([c00main]). Indeed, let us expand the denominator of Eq. ([c0]) and use Eq. ([s00]). Using Eqs. ([siviete]) and ([rk]), we rewrite Eq. ([calc]) as
\[
1-\left[w_{1}^{\prime}(0)+\cdots+w_{k}^{\prime}(0)\right]-\left[w_{2}^{\prime}(0)+\cdots+w_{k}^{\prime}(0)\right]-\cdots-w_{k}^{\prime}(0)
=1-w_{1}^{\prime}(0)-2w_{2}^{\prime}(0)-3w_{3}^{\prime}(0)-\cdots-k\,w_{k}^{\prime}(0)\,.
\]
Plugging this into Eq. ([c0]), one obtains Eq. ([c00main]). Here we show that, in extinction scenario A, the two real and positive roots of Eq. ([pol2]) are  and , whereas all other roots obey the inequality . We start by showing that the positive root  of Eq. ([pol3]) obeys the inequalities . The left-hand side of Eq. ([pol3]) is a monotonically decreasing function of . At  it is equal to . This quantity is negative: as  is a repelling fixed point, the rescaled reaction rate constants satisfy the inequality . On the other hand, at  the left-hand side is . Hence, . Now we prove by contradiction that all other (negative or complex) roots satisfy the inequality . Assume by contradiction that there exists a root  such that . Denote . Then  by assumption. Substituting into Eq. ([pol3]), we have , where both the real and imaginary parts have to vanish separately. Now we substitute into Eq. ([pol3]) and use Eq. ([pr1]): , where the last inequality holds since  is complex or negative, so , and there exists some  for which . Equation ([pr2]) shows a contradiction, so all such roots obey . As a result, at , the recursive solution ([recsolution]) reduces to Eq. ([rec1]), where  is given by Eq. ([c1]). Finally, we show that , so that the asymptote ([rec1]) is always positive. First, by using Eq. ([rk]), one can see that the numerator in Eq. ([c1]) is always negative. What is the sign of the denominator? Here , whereas all other terms in the product are positive. Indeed, for  one has . For any complex root , there is also a complex conjugate root . Therefore, by writing , one has . So  is always positive, and so is the asymptote from Eq. ([rec1]). Here we show that, in extinction scenario B, the root  of Eq. ([pol2]) has the smallest absolute value among all of the roots. To this end we prove that the roots of Eq. ([pol3]) obey the inequality . Let us denote  (in general a complex number), and assume by contradiction that .
plugging into eq .( [ pol3 ] ) , we obtain {2}^{\prime}(0)-\cdots\nonumber\\ & & \hspace{-4mm}-[a_j \cos(\theta_j)+\cdots+a_j^{k}\cos(k\theta_j)]w_{k}^{\prime}(0)+i\,(\cdots)=0,\nonumber\\\end{aligned}\ ] ] where the real and imaginary parts must vanish separately .as we have assumed , we have for all integer .therefore , we can write for the real part of eq .( [ pol6 ] ) : {2}^{\prime}(0)\nonumber\\ & -&\cdots- [ a_j \cos(\theta_j)+\cdots+a_j^{k } \cos(k\theta_j)]w_{k}^{\prime}(0)\nonumber\\&\geq & 1-w_{1}^{\prime}(0)-2w_{2}^{\prime}(0)-\cdots - kw_{k}^{\prime}(0 ) > 0\,,\end{aligned}\ ] ] where the last inequality follows from being an attracting fixed point of the rate equation .equation ( [ pol7 ] ) shows a contradiction .hence , and all the roots of eq . ( [ pol3 ] )obey the inequality .bartlett , _ stochastic population models in ecology and epidemiology _ ( wiley , new york , 1961 ) .s. r. beissinger and d. r. mccullough ( editors ) , _ population viability analysis _( university of chicago press , chicago , 2002 ) .h. andersson and t. britton , _ stochastic epidemic models and their statistical analysis _notes stat . , vol .* 151 * ( springer , new york , 2000 ) .samoilov and a.p .arkin , nature biotech .* 24 * , 1235 ( 2006 ) ; m. assaf and b. meerson , phys .100 * , 058105 ( 2008 ) .n.g . van kampen , _stochastic processes in physics and chemistry _( north - holland , amsterdam , 2001 ) .gardiner , _ handbook of stochastic methods _ ( springer verlag , berlin , 2004 ) .we will assume that the stochastic population does not exhibit an unlimited growth ( escape to infinite population size ) , see ref . , and is the _ only _ absorbing state .m. assaf and b. meerson , phys .e * 74 * , 041115 ( 2006 ) .m. assaf and b. meerson , phys .lett . * 97 * , 200602 ( 2006 ) .bender and s.a .orszag , _ advanced mathematical methods for scientists and engineers _ ( springer , new york , 1999 ) .r. kubo , k. matsuo , and k. kitahara , j. stat . phys .* 9 * , 51 ( 1973 ) .dykman , e. mori , j. ross , and p.m. hunt , j. chem . phys . * 100 * , 5735 ( 1994 ) . v. elgart and a. kamenev , phys .e * 70 * , 041106 ( 2004 ) .v. elgart and a. kamenev , phys .e * 74 * , 041101 ( 2006 ) .m. assaf , a. kamenev , and b. meerson , phys .e * 78 * , 041123 ( 2008 ) .kessler and n.m .shnerb , j. stat . phys . * 127 * , 861 ( 2007 ) . b. gaveau , m. moreau , and j. toth , lett . math . phys .* 37 * , 285 ( 1996 ) .doering , k.v .sargsyan , and l.m .sander , multiscale model . and simul .* 3 * , 283 ( 2005 ) . m. assaf and b. meerson , phys . rev .e * 75 * , 031122 ( 2007 ) .turner and m. malek - mansour , physica a * 93 * , 517 ( 1978 ) . b. meerson and p.v .sasorov , phys .e * 78 * , 060103(r ) ( 2008 ) .c. escudero and a. kamenev , phys .e * 79 * , 041149 ( 2009 ) . at nontrivial fixed points of the rate equation , , the two real roots of the equation merge at . if , then it is more convenient to use eq .( [ s1fast ] ) .darroch and e. seneta , j. appl . probab . * 4 * , 192 ( 1967 ) .we remind the reader that , in order to obtain eq .( [ rateexp ] ) , we rescaled the reaction rates and time by the linear decay rate constant . to express the mte in physical units one needs to put the factor back. m. assaf , a. kamenev , and b. meerson , phys .e * 79 * , 011127 ( 2009 ) .i. nsell , j. theor . biol . * 211 * , 11 ( 2001 ) .o. ovaskainen , j. appl . prob . * 38 * , 898 ( 2001 ) . in refs . 
the parameter  was defined as . Therefore, their result for the MTE looks slightly different, but it is actually identical to Eq. ([tausis]) in the leading and subleading orders of . M.I. Dykman and M.A. Krivoglaz, Physica A *104*, 480 (1980). The distance to the bifurcation is defined in Refs.  and  differently than in the present work. Their parameter  is related to our parameter  as , so the entropy barriers in Refs.  and  scale with  as .
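As a compact numerical illustration of the single-step results derived above (cf. Eqs. ([tauver]) and ([tausis])), one can compare a direct Gillespie simulation with the WKB entropy barrier. The sketch below is not the authors' code: the rates $\lambda_n=Bn$ and $\mu_n=n+Bn^2/N$ are one plausible convention, chosen so that the rate equation has an attracting fixed point at $N(1-1/B)$ as quoted in the text; the barrier is evaluated by quadrature of the single-step action $\Delta S=\int_0^{x_1}\ln[\lambda(x)/\mu(x)]\,dx$.

```python
import numpy as np

# Hedged numerical check of the scenario A, single-step results.
# Assumed Verhulst-type rates (a common convention, not the article's exact
# constants): lambda_n = B*n (birth), mu_n = n + B*n**2/N (death), so that
# the rate equation has an attracting fixed point at n1 = N*(1 - 1/B).

rng = np.random.default_rng(1)

def mean_extinction_time(B, N, trials=100):
    """Estimate the MTE by direct Gillespie simulation, starting at n1."""
    times = []
    for _ in range(trials):
        n = int(N * (1.0 - 1.0 / B))        # start at the metastable state
        t = 0.0
        while n > 0:
            lam = B * n                      # birth rate
            mu = n + B * n * n / N           # death rate, with competition
            total = lam + mu
            t += rng.exponential(1.0 / total)
            n += 1 if rng.random() * total < lam else -1
        times.append(t)
    return np.mean(times)

B = 2.0
for N in (20, 30, 40):
    tau = mean_extinction_time(B, N)
    # Single-step WKB entropy barrier: dS = int_0^{x1} ln(lambda/mu) dx,
    # evaluated by quadrature for the assumed rates (x = n/N, x1 = 1 - 1/B).
    x = np.linspace(0.0, 1.0 - 1.0 / B, 2001)
    dS = np.trapz(np.log(B / (1.0 + B * x)), x)
    print(f"N={N}: ln(MTE) = {np.log(tau):.2f},  N*dS = {N * dS:.2f}")
```

Up to the $N$-independent pre-exponential factors discussed above, $\ln(\mathrm{MTE})$ should grow linearly in $N$ with slope $\Delta S$.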
|
We investigate the phenomenon of extinction of a long-lived, self-regulating stochastic population, caused by intrinsic (demographic) noise. Extinction typically occurs via one of two scenarios, depending on whether the absorbing state  is a repelling (scenario A) or attracting (scenario B) point of the _deterministic rate equation_. In scenario A, the metastable stochastic population resides in the vicinity of an attracting fixed point next to the repelling point . In scenario B there is an intermediate repelling point  between the attracting point  and another attracting point , in the vicinity of which the metastable population resides. The crux of the theory is a dissipative variant of the WKB (Wentzel-Kramers-Brillouin) approximation, which assumes that the typical population size in the metastable state is large. Starting from the master equation, we calculate the quasi-stationary probability distribution of the population sizes and the (exponentially long) mean time to extinction for each of the two scenarios. When necessary, the WKB approximation is complemented (i) by a recursive solution of the quasi-stationary master equation at small  and (ii) by the van Kampen system-size expansion, valid near the fixed points of the deterministic rate equation. The theory yields both entropic barriers to extinction and pre-exponential factors, and holds for a general set of multi-step processes when detailed balance is broken. The results simplify considerably for single-step processes and near the characteristic bifurcations of scenarios A and B.
|
most cortical neurons are noisy , or at least appear so to experimenters . when a sensory neuron s spikes are recorded in response to a well - controlled stimulus , they will show a large variability from trial to trial .this noisiness has been acknowledged from early on , as a nuisance preventing experimenters from easy access to the encoding properties of sensory neurons .but what is the impact of trial - to - trial sensory noise on the organism itself ?this question gained renewed interest a few decades ago , with the generalization of experimental setups recording neural activity from awake , behaving animals . in these setups ,animals are presented with a set of stimuli and trained to respond differentially to different values of , thus providing an ( indirect ) report of their percept of . as neural activity and animal behaviorare simultaneously monitored , it becomes possible to seek a causal link between the two .in such setups , one particular hypothesis which we refer to as the `` sensory noise '' hypothesis has proven instrumental in linking neural activity and percepts .it postulates that trial - to - trial noise at the level of sensory neurons is the main factor limiting the accuracy of the animal s perceptual judgements .indeed , signal detection theory provides the adequate tools to estimate such accuracies .any type of biological response to a stimulus be associated to a signal - to - noise ratio ( snr ) , which measures how typical variations in due to a change of stimulus ( the _ signal _ ) compare to intrinsic variations of from trial to trial ( the _ noise _ ) .when measures the response of a neuron to stimulus , the resulting snr is often called the _ neurometric _ sensitivity for that particular neuron .alternatively , may also be the response of the animal itself to stimulus .the resulting snr is called the animal s _ psychometric _ sensitivity , which quantifies the animal s ability to discriminate nearby stimulus values .reformulated in terms of snrs , the `` sensory noise '' hypothesis states that neurometric sensitivity , computed from the population of sensory neurons under survey , is equal to the psychometric sensitivity for the animal in the task .applying this idea , neurometric and psychometric sensitivities have often been computed and compared , in various sensory systems and behavioral tasks ( see , e.g. , * ? ? ?* ; * ? ? ?* for reference ) .however , it was progressively realized that most of these comparisons bear no simple interpretation , because the neurometric sensitivity is not a fixed quantity : it depends on how information is read out from the neurons .for example , if the various sensory neurons in the population behave independently one from another , then the overall snr from the population will essentially be the sum of individual snrs and thus , the experimenter s estimate of neurometric sensitivity will depend on how many neurons say included in their analysis .this intuition still holds in realistic populations where neurons are not independent , with the additional complexity that the evolution of neural snr with is very influenced by the correlation structure of noise in the population .more subtly , another parameter has a direct influence on estimated neurometric snrs : the time scale used to integrate each neuron s spike train , to describe the neuron s activity over the trial . 
indeed , through the central limit theorem, the more neural spikes are integrated into the readout , the more accurate that readout will be .adding extra neurons through , or extra spikes for each neuron through , will thus have the same type of impact on the readout s overall snr .in fact , if all neurons from the population are identical , independent poisson encoders , one can easily show that the readout s overall snr scales with , emphasizing the duality between and . as there is no unique way of reading out information from a population of sensory neurons ,a question naturally arises : what type of readout does the organism use ?for example , how many sensory neurons , and what typical integration time scale , provide a relevant description of the animal s percept formation ?the `` sensory noise '' hypothesis can precisely be used to answer this question : the ` true ' neuronal readout for the organism must be the one providing the best account of animal behavior .however , the previous discussion clearly shows that comparing neurometric snr to psychometric snr is not sufficient to target the true readout : there will be several combinations of and leading to the same overall neurometric snr , while corresponding to very different extraction strategies by the animal .thus , an additional experimental measure is required to recover the typical scales of integration of the true readout ._ choice signals _ are a good candidate for this additional measure . in two - alternative tasks , where the animal must report a binary discrimination of stimulus value ( say , or ) ,choice signals are generally computed in the form of _ choice probabilities _ ( cp ) .cp is computed for each recorded neuron individually , and quantifies the trial - to - trial correlation between the activity of that neuron and the animal s ultimate ( binary ) choice on the trial , all other features being held constant . in particular , since cp is computed across trials with the same stimulus value ( generally uninformative , i.e. , ) , the observed correlations can not reflect the influence of stimulus on neural activity and animal choice . instead , a significant cp can only result from the process by which the neuron s activity influences or is influenced by the animal s forming perceptual decision .it is intuitively clear that cps reveal something about the way information is extracted from sensory neurons . for example , if the animal s percept is built from a single neuron , then that neuron will have a very large cp , because its activity on every trial directly predicts the animal s percept . instead ,if several independent neurons contribute to form the animal s percept , then they are all expected to have low cp value , as the activity of each neuron has only a marginal impact on the animal s decision . however , converting this intuition about choice signals into a quantitative interpretation was long hampered by the fact that , just like neurometric snr , cp values are largely influenced by the population s noise covariance structure . for example , a neuron may not be utilized by the animal to form its percept , and yet display significant cp because its activity is correlated with that of another neuron being utilized . as a result ,early studies relating cp values to the animal s perceptual readout only relied on numerical simulations , assuming very specific noise correlation structures that weakened the generalizations of their results . 
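As an aside, the scaling invoked above for identical, independent Poisson encoders can be verified in a few lines. The following minimal sketch uses hypothetical rate and tuning values (none taken from data): for K neurons counted over a window T, the squared SNR of the summed spike count grows linearly in the product KT.

```python
import numpy as np

# Minimal check that, for K identical independent Poisson neurons, the squared
# SNR of a summed spike-count readout grows like K*T. The baseline rate r0,
# rate slope dr, and stimulus step dz are hypothetical illustration values.

rng = np.random.default_rng(0)
r0, dr, dz = 30.0, 2.0, 1.0

def snr2(K, T, trials=4000):
    # Summed spike counts under two nearby stimuli, z0 -/+ dz/2
    n_minus = rng.poisson((r0 - dr * dz / 2) * T, size=(trials, K)).sum(axis=1)
    n_plus = rng.poisson((r0 + dr * dz / 2) * T, size=(trials, K)).sum(axis=1)
    signal = (n_plus.mean() - n_minus.mean()) / dz    # ~ K * T * dr
    noise_var = 0.5 * (n_plus.var() + n_minus.var())  # ~ K * T * r0 (Poisson)
    return signal**2 / noise_var

for K, T in [(10, 0.1), (20, 0.1), (10, 0.2), (40, 0.4)]:
    # Leading-order theory for Poisson counts: SNR^2 = (dr^2 / r0) * K * T
    print(f"K={K:3d}, T={T:.1f}: simulated={snr2(K, T):6.3f}, "
          f"theory={(dr**2 / r0) * K * T:6.3f}")
```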
only very recently have provided an analytical expression for cp values in the presence of noise correlations ( see section [ sec : cp ] ) , opening the door to general , quantitative interpretations of choice probabilities . in this article , we show how the combined information of animal sensitivity ( snr ) and choice signals allows to estimate the typical scales of percept formation by the animal , both across neurons ( number of neurons involved ) and in time ( integration window ) .our results apply in the standard feedforward model of percept formation , and can be derived for any noise covariance structure in the neural population .we first show how the joint covariance structure of neural activities and animal percept leads to a set of characteristic equations for the readout , which implicitly determine the animal s perceptual readout policy across neurons and time .then , we show how these characteristic equations can be used in a statistical form , across the ensemble of trials and neurons available to the experimenter , to determine the typical scales and of percept formation from the activity of sensory neurons .this approach is mandatory since experimental measurements can only provide statistical samples of the full neural population . using an artificial neural network to provide sensory encoding ,we show that our method can reliably recover the true scales of perceptual integration , without requiring full measurement of the neural population .thus , our method can readily be applied to real experimental data , and provide new insights into the nature of sensory percept formation .we place ourselves in a general framework , describing a typical perceptual decision - making experiment ( fig . [fig : frame ] ) . on each trial , a different stimulus is presented to the animal ( fig .[ fig : frame]a , top ) , which then takes a decision according to its internal judgement of stimulus value .our framework assumes that this percept is directly available to the experimenter on each trial . in real experimental setups ,the animal s report is generally more indirect typically a binary choice based on the unknown percept .we choose the former approach because it applies generically to most perceptual decision - making experiments , whereas the `` choice '' part is more dependent on each particular setup .we detail later how both approaches can be reconciled through simple models of the animal s behavior ( section [ sec : cp ] ) .simultaneously , experimenters record neural activities from a large population of sensory neurons , which is assumed to convey the basic information about used by the animal to take its decision ( fig . [fig : frame]a , bottom ) .typical examples could be area mt in the context of a moving dot discrimination task ( e.g. , * ? ? ?* ) , area mt or v2 in the context of a depth discrimination task ( e.g. , * ? ? ?* ; * ? ? ?* ) , or area s1 in the context of a tactile discrimination task ( e.g. , * ? ? ?we describe the activity of this neural population on every trial as a point process , where each is the spike train for neuron , viewed as a series of dirac pulses . as an important remark, denotes the full population size , a very large and unknown number .it is _ not _ the number of neurons actually recorded by the experimenter , which is generally much smaller . 
for simplicity , we assume a fine discrimination task , where the different stimulus values presented to the animal display only moderate variations around a central value , say .this substantially simplifies snr computations , because the ` signal ' part of any response is then summarized by its slope in : , where denotes the average response over trials .we assume that this linearization with can be performed both for the psychometric report , and for individual neuron activities .this is mostly a convenience though , and the framework could be generalized to more complex dependencies on stimulus .framework and main experimental measures .( a ) experimental setup .top : a set of stimulus values ( color - coded as blue , yellow , red ) are repeatedly presented to an animal , which reports its percept on each trial ( color - coded as green ) .bottom : in each session , several task - relevant sensory neurons are recorded simultaneously with behavior . ( b )perceptual sensitivity is defined as the square snr of the animal s reports .( c ) the noise covariance structure can be assessed in each each pair of simultaneously recorded neurons , as their joint peri - stimulus histogram ( jpsth ) .( d ) trial - wise response of a particular neuron .each thin line is the schematical representation of the spike train on each trial . segregating trials according to stimulus ( top ), we access the neuron s peri - stimulus histogram ( psth ) and its tuning curve shown in panel ( e ) . segregating trials according to the animal s perceptual error ( bottom ), we access the neuron s percept covariance ( pcv ) curve shown in panel ( f ) . ] from the raw data of and on each trial , a number of measures are routinely used to describe neural activity and animal behavior .first , the psychometric sensitivity describes the animal s accuracy in distinguishing nearby frequency values .it can be computed from the distribution of across trials ( fig .[ fig : frame]b ) , according to the formula : where notation denotes an average across stimulus conditions .this is exactly the ( squared ) snr for random variable , assuming that the ` signal ' term is equal to 1 because the animal s average judgement of is unbiased ( the framework easily generalizes to a biased percept ) . on the other hand , for each recorded neuron , it is common practice to compute its peri - stimulus time histogram ( psth ) in response to each different tested stimulus ( fig .[ fig : frame]d ) : where denotes averaging over trials . since all stimuli are assumed to be close one from another , the dependency of on is essentially linear , and can be summarized by the ( temporal ) tuning curve for the neuron ( fig .[ fig : frame]e ) : furthermore , as recent techniques allow the simultaneous recording of many neurons , experimenters also have access to samples from the trial - to - trial covariance structure in the population ( fig . [fig : frame]c ) . for every pair of neurons and instants in time , this covariance structure is assessed through the neurons joint peri - stimulus time histogram ( jpsth , * ? ? ?* ) : we only consider the average covariance structure , over different stimuli .first , as above , nearby values of insure that the covariance structure will remain mostly unchanged .second , trial - to - trial covariances correspond to second - order effects on neural activity , which require several trials to be reliably estimated another reason to lump data from different stimuli into a single estimate . 
finally , we can measure a _ choice signal _ for each neuron , estimating the trial - to - trial covariance of neuron activity with the animal s choice ( fig .[ fig : frame]f ) . since in our frameworkthe animal directly reports its percept , we readily describe the choice signal of each neuron by its _ percept covariance _ ( pcv )curve : again , this covariance information is lumped across the different ( nearby ) stimulus values , in order to improve experimental measurement .the pcv curve captures the core intuition behind the more traditional measure of choice probability ( cp ) , while retaining a linear form convenient for analytical treatment .percept covariance curves are not directly measurable in classic experimental setups where the animal only reports a binary choice ; however their analytical link to available measures such as cps can be easily derived given simple models of the animal s decision policy ( see section [ sec : cp ] ) .unlike many characterizations of neural activity that rely only on spike counts , our framework requires an explicit temporal description of neural activity through psths ( eq .[ eq : psth ] ) , jpsths ( eq . [ eq : jpsth ] ) and percept covariance curves ( eq . [ eq : pcov ] ) . indeed, our method ultimately predict _ when _ , and _ how long _ , perceptual integration takes place in the organism .readers may feel uncomfortable that the resulting definitions are directly expressed over trains of dirac pulses . while these notations are fully justified in the framework of point processes , they describe idealized quantities that can not be estimated from a finite number of trials , leading to jaggy estimates formed from the collection of dirac peaks .so in practice , spike trains are computed in temporal bins of finite precision .all experimental measures described above , taken together , provide a full characterization of the joint covariance structure of variables across stimuli and trials ( fig .[ fig : model]c ) . the key argument to exploit these data , which is actually a reformulation of the ` sensory noise ' hypothesis ,is that the animal s percept is built on every trial from the activity of the sensory neurons , meaning that for some unknown readout . as a result , each proposed readout directly yields an estimate for the joint covariance structure of a set of relationships which constitute the readout s _characteristic equations_. conversely , since this joint covariance structure is experimentally measurable , it implicitly constrains the nature of the true readout which was applied by the animal . 
in this section, we introduce a generic form of linear readout , stemming from the standard feedforward model of perceptual integration , and derive its characteristic equations .we show that in theory , these equations totally characterize the readout applied by the animal .we define a generic linear readout from the activity of sensory neurons ( fig .[ fig : model]a ) , based on a given readout vector : , a given integration kernel with normalized shape and time constant : , and a given readout time : the readout is noted , as it must ultimately be an estimator of stimulus value .we explicitely note the dependence on to emphasize that is built from a sliding temporal average of the spike trains ; so that each instant in time yields a potential readout .this is a classical form of readout from a neural population , which has often been used previously and described as the ` standard ' model of perceptual integration .the temporal parameters and describe how each neuron s temporal spike train is integrated into a single number describing the neuron s activity over the trial : . in turn , the percept is built linearly from the population activity as through a specific readout vector , or ` perceptual policy ' , . however , traditional studies generally make ad hoc choices for the various constituants of this readout .most often , simply describes the total spike count for neuron , which in our model corresponds to choosing a square kernel , and parameters describing an integration over the full period of sensory stimulation .as mentionned in the introduction , there is no reason that this should be a relevant description of sensory integration by the organism : the integration window has a direct influence on predicted snrs for the readout , and experiments suggest that animals do not always use the full stimulation period to build their judgement . 
instead , we make no assumption on the nature of and , and view them as free parameters of the model .then , the model parameters implicitly characterize the typical scales of perceptual integration by the animal .the number of significantly nonzero entries in , say , defines the number of neurons contributing to the percept .the readout window characterizes the behavioral scale of temporal integration from the sensory neurons , and time characterizes when during stimulation this integration takes place .the exact shape given to the integration kernel is of less importance ; for conceptual and implementational simplicity we assume it to be a square window .however , we note that ( 1 ) other shapes may have a higher biological relevance , such as the decreasing exponential mimicking synaptic integration by downstream neurons , and ( 2 ) nothing prevents our method from making itself a free parameter , provided the data contain enough power to estimate it .finally , our model can also be extended to versions where extraction time is not fixed , but varies from trial to trial ; this issue is discussed in section [ sec : withg ] .thanks to its linear structure , the readout defined in eq .[ eq : readout ] allows for a simple characterization of the covariance structure that it induces between neural activity and the resulting percept ( fig .[ fig : model]b ) .we show in appendix [ sec : appchar ] that this covariance structure can be summarized by three characteristic equations : where vector and matrices and respectively describe the population s tuning and noise covariance structures , derived from the underlying neural statistics and introduced in eq .[ eq : tuning]-[eq : jpsth ] : we here note the explicit dependency of , and on the temporal parameters of the readout and .we will generally omit it in the sequel .thus , the right - hand sides of eq .[ eq : char - tuning]-[eq : char - pcov ] depend only on readout parameters , , and on the statistics of neural activity , independently of the animal s percept . on the other hand , the left - hand sides of eq .[ eq : char - tuning]-[eq : char - pcov ] describe experimental quantities related to the readout s resulting percept .the first line describes the average tuning of to stimulus , that is , which is equal to 1 because we assume that is unbiased .the second line expresses the resulting sensitivity for the readout , defined as in eq .[ eq : zstar ] .it reveals the dual influence of the number of neurons ( through ) and integration window on the readout s overall sensitivity : indeed , under mild assumptions , the covariance matrix scales with ( see appendix [ sec : appw ] ) .finally , the third line expresses the resulting covariance between and the activity of each neuron , defined as in eq .[ eq : pcov ] .this is essentially the relationship already revealed by , that choice probabilities are related to readout weights through the noise covariance matrix ; however , our formalism focuses on the simpler linear measure of pcv curves , and explicitly takes time into account .both the neural measures and on the right - hand side , and the percept - related measures and on the left - hand side , can be estimated from data .as a result , the characteristic equations define an implicit constraint on the readout parameters , and ( fig .[ fig : model]d ) . 
actually , if the readout model in eq .[ eq : readout ] is true , and precise measures are available for all neurons in the population , one sees easily that these constraints would uniquely determine the readout parameters . indeed , for fixed parameters and , eq .[ eq : char - tuning ] and [ eq : char - pcov ] impose linear constraints on vector .these constraints are generally overcomplete , since is -dimensional , while each time in eq .[ eq : char - pcov ] provides additional linear constraints .thus , in general , a solution will only exist if one has targeted the true parameters and , and it will then be unique . in the previous section we have shownthat , in the standard linear model of percept formation , the trial - to - trial covariance structure between spike trains and the resulting percept leads to a set of characteristic equations which implicitly define the parameters of the perceptual readout , provided the covariance structure has been fully estimated .unfortunately , this direct approach makes a fundamental assumption which can not be reconciled with real , experimental recordings : it assumes we have recorded all neurons from the population under survey , whereas real recordings only ever record from a small subset of that population .thus we can not hope to reconstruct the real vector , simply because some probably most of the neurons contributing to were not recorded . moreover , even across those neurons which were recorded through a series of sessions in a given area , the noise covariance structure can never be fully assessed ; it remains elusive between neurons which were not recorded simultaneously . for this fundamental reason, the characteristic equations [ eq : char - tuning]-[eq : char - pcov ] should be used with a different perspective than the full recovery of readout parameters .instead , we propose to exploit the structure of the equations in a statistical approach , with the restricted goal of estimating the typical scales of readout most compatible with recorded data .a first necessary step in our approach is to statistically describe the nature of readout vector .we are mostly interested in the support of , meaning , the number and nature of neurons contributing to percept formation .thus , we assume that the percept is built only from the activities of an unknown ensemble of neurons in the population and that , for given and temporal parameters , the readout vector is chosen optimally to maximize the snr of the resulting percept . indeed ,through this hypothesis , we totally reformulate the problem of characterizing in that of characterizing ; which allows for much simpler statistical descriptions .the readout vector achieving the maximum sensitivity in eq .[ eq : char - z ] , under the constraints of eq .[ eq : char - tuning ] and having support on , is well known from the statistical literature .it is uniquely given by fisher s linear discriminant formula : where , and are the versions of vectors , ( eq . [ eq : b ] ) and matrix ( eq . [ eq : c ] ) restricted to neuron ensemble . by injecting the form ( eq . [ eq : a - opt ] ) into eq .[ eq : char - z]-[eq : char - pcov ] we obtain a new version of the characteristic equations , under the assumption that percept is built optimally from some given ensemble , and temporal parameters : in eq . [ eq : new - z ] is the ( optimal ) sensitivity associated to this particular choice of , and . 
In Eq. [eq:new-pcov],  is the resulting, predicted PCV curve for every neuron  in the population (not only in ensemble );  is a row vector whose entries are equal to  (Eq. [eq:gamma]) for neurons . These equations open the door to a statistical description of percept formation in the neural population: we can now parse through a large set of candidate ensembles  and temporal parameters , and ask when the predictions for sensitivity (Eq. [eq:new-z]) and PCV curves (Eq. [eq:new-pcov]) match their true psychophysical counterparts  (Eq. [eq:zstar]) and  (Eq. [eq:pcov]). For sensitivity, the straightforward comparison is to require that . On the other hand, for the PCV equation (Eq. [eq:new-pcov]), it is pointless to search for an elementwise match, for every neuron, between the predicted curve and its true measure. Indeed, since only a small subset of the neurons have been recorded, no candidate readout ensemble  will be equal to the true ensemble (say ) that was used by the animal; and there is no guarantee that the covariance structure between  and , which gives rise to prediction (Eq. [eq:new-pcov]), should be similar to that between  and . Instead, a given set of readout parameters should be deemed plausible if they predict the correct _distribution_ of PCV signals across the population, irrespective of exact neuron identities. Full distributions are difficult to estimate from finite amounts of data, and we will find the following population _averages_ to convey sufficient information, where  denotes averaging over the full population of neurons. We will deem a set of readout parameters plausible if they yield  (note that  depends on the parameters  only through the neurons' tunings; in practice, as neural activities are rather stationary in time,  changes very little for different values of the parameters ). Multiplying each PCV curve by the neuron's tuning (Eq. [eq:b]) yields more stable estimates for , as discussed in Section [sec:full-valid] and Appendix [sec:appsvd]. There are many ways to compare the real values of sensitivity and PCV signals to their predictions given by Eqs. [eq:new-z]-[eq:new-pcov]. We propose here an ad hoc method, whose main characteristics are the following: (1) focus mostly on first-order statistics (i.e., means) across the neural population, (2) use arbitrary tolerance values to compare real and predicted data, (3) fit the two indicators sequentially: first the SNR, then the percept covariance. Due to its simplicity, this method will prove robust to measurement errors arising from finite amounts of data (Section [sec:finite]). Our method is also designed to cope with a fundamental limitation of real recordings: all neurons (ensemble , plus the  neurons) contributing to the predictions of Eqs. [eq:new-z]-[eq:new-pcov] must have been recorded simultaneously, to assess their noise covariance structure. This constraint sets a limit on the ensemble sizes  which can be easily investigated (but see Section [sec:extrapolation]). Moreover, it prevents us from estimating the full average of choice signals (Eq. [eq:w-kap]), since the prediction from a given ensemble  is only available for simultaneously recorded neurons. As a result, predictions (Eq.
[eq:new-pcov]) from different tested ensembles  must somehow be aggregated to produce a reliable prediction of choice signals. We propose that each tested ensemble contribute to our estimates in proportion to its ability to account for the animal's sensitivity: , normalized to insure  across all tested ensembles  ( and  being fixed). The parameter  is the required tolerance for the fit, set by the experimenter. It is a regularization parameter creating a tradeoff between precision of fit (small ) and reliability of measurements, since a larger  leads to more samples with a substantial contribution. When testing our method (Section [sec:results]), we choose  as 5% of . For each tested couple , we then use  as a weighting factor over all tested ensembles , which yields two quantities, where  denotes an average across all neurons available to compute a prediction with Eq. [eq:new-pcov]. These neurons must have been recorded simultaneously to ensemble  and, in order to produce an unbiased estimate of choice signals in the full population, they should not belong to  itself. In Eq. [eq:kest],  is the ensemble size which most likely explains the animal's sensitivity, given readout parameters . In Eq. [eq:west],  is the mean prediction for PCV signals across neurons in the population, stemming only from ensembles which are compatible with the animal's sensitivity. Considering the quantity  introduced in Eq. [eq:w-kap], we see that the equality  is only approximate, because only neurons recorded simultaneously to  are available to estimate . However, as neurons are random and we average over many ensembles,  rapidly converges to the quantity described in Eq. [eq:wapp]. Both  and  are temporal signals defined over some interval . On the other hand,  typically has support on some interval , which depends on the connection considered; see Table [tab:l2params]. Note that L2 recurrent connections can be both excitatory and inhibitory, a departure from biology which allows for an easier implementation. Finally, the recurrent connections in L2 are associated to synaptic delays: for each pair of connected L2 neurons, the random delay is drawn uniformly between  and  msec. This substantially increases the diversity of neural responses in the population, particularly at the level of JPSTHs (Figure 3e from the main text); this is interesting because our method is specifically designed to analyse generic, heterogeneous population activities. We implemented and simulated the network using Brian, a spiking neural network simulator in Python. Our simulation consisted of many successive epochs of 500 msec with all possible successions of the three stimulus values (as in Figure 1a from the main text). Since the input Poisson neurons were always firing close to 30 Hz, there was no strong transient at stimulus onset, as is often observed in real sensory neurons. In our case, the change of activity between two successive stimuli was always only differential, and rather weak (see Figure 3c from the main text).

Table [tab:l2params]:
\begin{tabular}{c c c c c c}
Subtype & & & & & \\
\hline
Pos. biased ( ) & 0 & 0 & 2 & $-2$ & 2 \\
Neg. biased ( ) & 14 & $-3$ & 0 & $-2$ & 2 \\
Unbiased ( ) & 5 & 0 & 0 & $-2$ & 2
\end{tabular}

We detail here our mathematical analysis to understand the evolution of SNR and PCV estimates in growing populations of size , as a function of the underlying structure of the full population. These results expand the condensed presentation proposed in Appendix B of the main text.
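Before turning to that analysis, here is a compact sketch of the selection-and-aggregation scheme of Eqs. [eq:kest]-[eq:west]. The notation and array shapes are ours: `b`, `C`, `y_star` and `eps` stand for the measured tuning vector, noise covariance matrix, psychometric sensitivity and tolerance (5% of the sensitivity in the text), all at fixed temporal parameters.

```python
import numpy as np

# Sketch of the ensemble-weighting scheme described above (our notation;
# shapes are hypothetical). `b`: tuning vector over the simultaneously
# recorded neurons; `C`: their noise covariance; `y_star`: measured
# psychometric sensitivity; `eps`: tolerance of the fit.

rng = np.random.default_rng(2)

def fisher_readout(bK, CK):
    x = np.linalg.solve(CK, bK)
    y = bK @ x                    # predicted sensitivity: b' C^-1 b
    a = x / y                     # Eq. [eq:a-opt], normalized so a'b = 1
    return a, y

def aggregate(b, C, y_star, eps, n_samples=2000, K_max=None):
    n = len(b)
    K_max = K_max or n - 1
    kappas, Ks, w_preds = [], [], []
    for _ in range(n_samples):
        K = rng.integers(1, K_max + 1)
        sel = rng.choice(n, size=K, replace=False)         # candidate ensemble
        a, y = fisher_readout(b[sel], C[np.ix_(sel, sel)])
        kap = np.exp(-(y - y_star) ** 2 / (2 * eps ** 2))  # weight kappa
        out = np.setdiff1d(np.arange(n), sel)              # held-out neurons
        w = C[np.ix_(out, sel)] @ a                        # predicted PCV (Eq. [eq:new-pcov])
        kappas.append(kap)
        Ks.append(K)
        w_preds.append(np.mean(b[out] * w))                # tuning-weighted mean
    kappas = np.array(kappas) / np.sum(kappas)             # normalize weights
    K_hat = np.dot(kappas, Ks)                             # Eq. [eq:kest]
    w_hat = np.dot(kappas, w_preds)                        # Eq. [eq:west]
    return K_hat, w_hat
```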
for simplicity , we consider a timeless version of neural activities , although the whole analysis could be extended to include time as well . in our readout framework, this means that we fix some candidate temporal integration parameters , and consider the resulting neural activities , constructed from the temporal integration of each neuron s spikes is noted in the main text . ] .since our main results have been presented in the case of linear tuning to stimuli , we stick to this hypothesis .this implies that all signal / noise properties can be understood by considering only two stimuli ( as the difference in response between these two stimuli totally defines the linear tuning of each neuron ) .we thus note the two possible stimulus values which can be input to the network .finally , we may want to consider the possibility of imprecise neural measurements , due to recording from only a finite number of trials ( although it is not the main concern of this note ) .we thus denote the set of all possible different realizations of network activity . in theory, is an infinite set of possible events .however , we will formally assume it to be finite , with ( huge ) cardinality on a given trial , each possible network realization has a probability of coming out .we thus summarize all possible network realizations through the array , where denotes all neurons in the population is noted in the main text .] , denotes stimulus value , and denotes all possible realizations . the notation , somewhat abusive , applies the same indexing for possible realizations in both stimulus conditions and can only be done if both stimulus conditions allow the same number of possible network realizations .however , given the formal nature of ensemble , this notation abuse appears harmless .as we start doing statistics across neurons and trials , we will need to compute expectancies ( i.e. , means ) and covariance structures across various dimensions . in all cases ,we apply the generic notation to denote the empirical mean of quantity when is varied over ensemble ( being any other parameters that are held fixed ) .when ensemble is unambiguous , meaning that it includes all possible values for , we will omit it .finally , second order variances and covariance structures will generically be computed as . as a first application of these notations, remember that the whole sensitivity analysis derived in the main text deals only with variations : the `` signal '' measures variations of activity with a change in stimulus , while the `` noise '' measures variations of activity across trials .thus , the overall mean level of activity for each neuron , that is , plays no role in the analysis : it always disappears from the computations of tuning and noise covariance structure . to clarify further notations, we can thus offset all neural signals and assume that , for every neuron in the population .the key argument of this note relies on interpreting as a very large matrix , and considering its singular value decomposition ( svd ) .the ( compact ) svd is a standard decomposition which can be applied to any rectangular matrix .it writes , where is an diagonal matrix with strictly positive entries ( the singular values ) , is an matrix of orthogonal columns ( meaning ) , and is an matrix of orthogonal columns ( meaning ) . with our current definition of neural activity ,the svd decomposition writes where the orthogonality of writes : and the orthogonality of similarly writes . 
in the case of ,our above convention that for all neurons actually imposes that for all modes .we thus reinterpret the orthogonality of as a linear independence between the different random variables : note that we reinterpret the sum over trials ( ) as an expectancy ( thus rescaling by ensemble size ) .this allows to emphasize the statistical interpretation of the svd decomposition in this case .+ each triplet defines one particular _mode _ of activity in the population .we call the _ power _ of the mode , ( viewed as an -dimensional vector ) its _ distribution vector _ , and ( viewed as a scalar random variable ) its _ appearance variable_. the appearance variable takes a different value on every repetition of the experiment describes the probability of appearance of each mode across stimuli and trials . through eq .[ eq : orthv ] , each mode verifies , meaning that all modes have the same overall `` expected appearance '' across trials .similarly , eq . [ eq : orthu ] implies that , so describes the normalized distribution of the mode across the neural population .some modes may correspond to a rather homogeneous distribution of across the population , meaning that the mode is very _ distributed _ , whereas other modes may have power concentrated only over a small subensemble of neurons .these are the modes corresponding to local patterns of activity which only impact a small fraction of the total neural population .finally , the power describes the overall impact of mode on population activity .indeed , although distribution vectors and appearance variables display the same normalization across modes , this does not mean that all modes are equivalent . instead , only those modes with the largest values will truly impact the population , in the form of measurable changes of activity across neurons and trials .conversely , modes with small values will scarcely impact population activity , either because they involve only a small fraction of neurons , either because they are distributed but very weak .the overall number of modes is equal to the rank of matrix , so it is by construction smaller or equal to the population size ( which we assume to be smaller than the huge number of possible realizations across trials ) . defines the typical dimension of the manifold in which all neural activity occurs . in real neural populations , although is itself a very large number , there are reasons to believe that is sensibly smaller , due to correlated activity between neurons .we now reinterpret classical measures of neural activity in the framework defined above . at this point, we need to carefully specify the nature of the ensembles truly available for measures : a finite subset of neurons from the population , and a finite ensemble of trials ( each element of providing one realization for stimulus and one realization for ) . for every neuron , recorded over trials , we compute the tuning to stimulus as that is , the difference between the experimental mean firing rates in stimulus conditions and . from this appendixcorresponds to from the main text , where gives typical variations of input stimulus . ]similarly , we compute the noise covariance term between any two neurons and as : that is , the stimulus - averaged noise covariance between and . 
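In code, the mode machinery introduced above amounts to one compact SVD of the offset activity array. A minimal sketch on synthetic data (the shapes and per-trial normalizations are ours; real data would replace the random placeholder array):

```python
import numpy as np

# Mode decomposition of population activity via compact SVD, in the notation
# of this appendix. `R` has shape (n_neurons, 2, n_real): mean-offset activity
# under the two stimuli, across formal realizations (synthetic data here).

rng = np.random.default_rng(3)
n_neurons, n_real = 50, 5000
R = rng.normal(size=(n_neurons, 2, n_real))          # placeholder activities
R -= R.mean(axis=(1, 2), keepdims=True)              # enforce <R_i> = 0

X = R.reshape(n_neurons, -1)                         # neurons x (2 * n_real)
U, S, Vt = np.linalg.svd(X, full_matrices=False)     # compact SVD

n_modes = int(np.sum(S > 1e-10 * S[0]))              # numerical rank
sigma = S / np.sqrt(2 * n_real)                      # mode powers (per-trial scale)
xi = Vt * np.sqrt(2 * n_real)                        # appearance variables, <xi^2> = 1

print("number of modes:", n_modes)
print("distribution vectors orthonormal:",
      np.allclose(U.T @ U, np.eye(U.shape[1])))
print("appearance variables unit-variance:",
      np.allclose((xi**2).mean(axis=1)[:n_modes], 1.0))
```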
Finally, we introduce the total covariance matrix summing up all sources of variance across the population. The last line provides the classic decomposition of the total covariance matrix into the noise covariance matrix  and the signal covariance matrix , which has rank 1 under our assumption of linear tuning to stimulus. When ensemble  is equal to the full space of possible realizations , the above formulas define the `true' measures of covariance, as would be obtained given a sufficient amount of trials. In the sequel, we refer to these true, error-free values by removing the mention of ; that is: ,  and . The SVD decomposition (Eq. [eq:svd]) is best interpreted as a change of variables reexpressing neural activities in terms of mode appearance variables. As a result, we can define the respective equivalents of tuning, noise covariance and total covariance in the space of activity modes. Indeed, although mode appearance variables are never directly observed, they still have statistics across trials. We thus define  and , which define tuning and total covariance in mode space (noise covariance being implicitly defined as ). Again, we denote the true tuning and covariance by removing the mention of : true tuning  and true total covariance . Importantly, the normalization of variables in Eq. [eq:orthv] implies that . Mode powers and distribution vectors then allow to relate the statistics at the levels of neurons and modes. Injecting the SVD formula (Eq. [eq:svd]) into Equations [eq:tun-i] and [eq:tot-ij] yields (in matrix form): . In particular, when true noiseless measures are considered, so that , we see that  and  directly provide the standard (nonzero) eigenvalue decomposition of the total covariance matrix, as . We now wish to understand which factors determine the evolution of the curve , the average SNR embedded in neural subensembles of cardinal . We can also study the evolution of percept covariance (PCV) signals in the same framework. In the main text, we compute the SNR and PCV for ensemble  through Fisher's linear discriminant (Eqs. 13-16). One sees easily that these definitions, involving tuning and noise covariance matrix , are equivalently expressed in terms of tuning and _total_ covariance matrix : . We call  the signal-to-total ratio (STR), which relates directly to the SNR by the formula .  always takes values between  () and  (); it thus avoids singularities which may occur in the direct formulation. If matrix  is rank-deficient, we consider its (Moore-Penrose) pseudoinverse without loss of generality (see further down). The SVD decomposition (Eq. [eq:svd]) reexpresses neural activity in the space of modes. When the full neural population is considered, the full matrix  and vector  are involved in Eq. [eq:y-kap]. Using the SVD formulations (Eqs. [eq:b-svd]-[eq:a-svd]), we thus find: . Thus, each mode contributes to the total sensitivity by the strength of its intrinsic sensitivity . This computation can also be derived assuming a finite number of experimental trials.
in this finite - trial case , however , we must introduce the _ experimental sensitivity _ of each mode , defined as where is the unique ( moore - penrose ) pseudo - inverse of the symmetric , non - negative square root matrix of . actually , any other choice of matrix square root could also be used , because by construction , in the sense of symmetric positive matrices . this ensures that is orthogonal to , and thus the uniqueness of as defined in eq . [ eq : bzeta ] . the computation of then goes along the same lines as previously : generally , one expects , because the estimated is flatter than its true value of , with eigenvalues closer to 0 . this is a classic result when estimating snr ( or str ) from an insufficient number of trials , a typical example of overfitting . as mentioned in the main text , there is no miracle cure for this problem , which should be addressed through appropriate methods of regularization and cross - validation . we now turn to the sensitivity embedded in finite subensembles from the population . the definitions of and used in eq . [ eq : y - kap ] amount to a projection from the full neural space to subensemble : where is the orthogonal projector on recorded neurons . through the svd decomposition in eq . [ eq : b - svd]-[eq : a - svd ] , we reexpress these quantities as : where is our so - called _ data matrix _ , an matrix with elements . it represents the experimental data from neurons , expressed in mode space . to compute the resulting sensitivity predicted by eq . [ eq : y - kap ] , we note that through eq . [ eq : ak - svd ] , matrix has the same eigenvalues as its dual gram matrix , an matrix with rank equal to . we introduce the ( compact ) svd decomposition of this matrix : where is a diagonal matrix , and is an matrix of orthogonal columns ( for clarity we remove the unambiguous references to ensemble ) . it is easily shown that this decomposition also provides the svd for , in the form : where is a matrix of orthogonal columns , as required in the svd decomposition . thus , the ( pseudo- ) inverse of reads : this finally allows us to compute the experimental str , from eq . [ eq : bk - svd]-[eq : ak - svd ] : making use of the fact that . intriguingly , matrix , which describes the eigenvalues of , disappears from the final equation . only matrix , corresponding to the _ eigenvectors _ of , remains in the equations . we note , which is nothing but the orthogonal projector on . this leads to the final expression : neuron ensemble only appears through . in particular , as soon as is larger than the number of modes , necessarily , and : all modes are available experimentally , and sensitivity estimates saturate to their maximum value , independently of ensemble .

the whole analysis can be performed similarly assuming a finite number of measurement trials . the only difference is a modification in data matrix , to take into account the biases in mode space induced by an insufficient number of trials : using the same square root of as in eq . [ eq : bzeta ] . similar computations lead to the final result : which depends on experimental mode sensitivities ( eq . [ eq : bzeta ] ) and on , the orthogonal projector on , of dimension . similarly to the approach above , we can express pcv signals in mode space . since we do not model time , we only have access to the temporal average , where is the full pcv curve from the main text . from eq . 9 of the main text , it follows easily that . using the optimal for readout ensemble ( eq .
[ eq : a - kap ] , with since has support on ) , we thus predict : which provides the value of for every neuron in the population ( not only in ensemble ) . making use of the same svd decompositions as above , and of relationship , we finally find : which expresses as a linear combination of mode distribution vectors . as tends to the full population , tends to and we get , the prediction for choice signals in case of optimal readout . in turn , the population average for pcv is for the pcv curve defined in the main text ( eq . ) . using eq . [ eq : pi - kap - final ] , and the general fact that , we obtain because ( eq . [ eq : b - svd ] ) and . this reveals the value of multiplying by the corresponding tuning ( see discussion in main text ) : it allows us to eliminate the unknown distribution vectors , and instead produce a quantity which is directly related to the underlying modes ' powers and sensitivities . we are now better armed to understand how sensitivity and pcv predictions vary as a function of the readout ensemble . we are mostly interested in averages of these quantities over randomly chosen ensembles of size ; we thus use the generic notation . from eq . [ eq : y - kap - final ] we find : . to understand the properties of the matrix , we view the data matrix ( eq . [ eq : d - kap ] ) as a collection of random vectors in mode space , treating neuron identities as the random variable . thus , is the orthogonal projector on the linear span of the sample vectors . as a projector , its trace is equal to its rank , so we have . furthermore , since samples span on average more space than samples , we are assured that , in the sense of positive definite matrices . finally , intuition and numerical simulations suggest that is almost diagonal . indeed , as the various modes are linearly independent ( eq . [ eq : orthu ] ) , there is no linear interplay between the different dimensions of across samples : , or equivalently assuming a form of independence between and , it is reasonable to suppose that is close to diagonal as well . in the general case , small deviations from diagonality can probably occur . ] . assuming that is diagonal , we denote its diagonal terms and consider the resulting approximations of sensitivity ( eq . [ eq : y - kap - final ] ) and mean pcv ( eq . [ eq : w - kap - final ] ) : the properties of imply that ( trace property ) , and ( growth property ) . as increases , progressively `` fills in '' the space of modes , starting from the modes with larger power . indeed , the larger , the more often mode appears in samples . as a useful image , we may think of the ( very ) rough approximation : only the first modes are revealed by a sample of neurons . naturally this is only a gross approximation , as can be seen easily by considering a single sample ( ) . from intuition and simulation , the true shape of ( at fixed ) is a `` smoothed '' version of , and the degree of smoothing depends on the power law governing the spectrum . with this image in mind , eq . [ eq : eky ] shows that the growth of sensitivity with is linked to the progressive summation of mode sensitivities , starting from modes with highest power : with a saturation as soon as all nonzero mode sensitivities are revealed . conversely , for pcv signals , we can make the rough assumption that , in which case eq . [ eq : ekwy ] can be rewritten so that each mode contributes with a weight , and provides the normalization factor . thus , reflects the average power of the modes with the highest sensitivity that are already revealed with neurons .
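the `` fill - in '' picture can be illustrated with a small monte - carlo simulation . this is a sketch under assumed ingredients ( a power - law mode spectrum , random orthonormal distribution vectors , arbitrary mode sensitivities ) ; none of the numbers come from the original text :

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_modes = 400, 40
powers = 1.0 / np.arange(1, n_modes + 1) ** 2                    # assumed spectrum
u = np.linalg.qr(rng.standard_normal((n_neurons, n_modes)))[0]   # distribution vectors
d = u * np.sqrt(powers)                       # data matrix , one row per neuron
beta = 0.2 * rng.standard_normal(n_modes)     # intrinsic mode sensitivities

def sensitivity(rows):
    dk = d[rows]                              # sampled rows of d
    pi = np.linalg.pinv(dk) @ dk              # projector on their span in mode space
    return float(beta @ pi @ beta)

for n in (10, 50, 100, 200, 400):
    est = np.mean([sensitivity(rng.choice(n_neurons, n, replace=False))
                   for _ in range(100)])
    print(n, est)   # grows with n , saturating at beta @ beta once n >= n_modes
```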
as grows , progressively `` fills in '' modes in the order of decreasing . thus we expect to decrease with . finally , as soon as , we have , and , recognizing the expressions for ( eq . [ eq : b2 ] ) and ( eq . [ eq : y - infty ] ) . since , the predicted evolution of mean pcv signal with follows : is predicted to be positive , to decrease with increasing size , and to saturate at its minimum value once all significant mode sensitivities have been revealed , which is also the moment when sensitivity saturates at its maximum value ( eq . [ eq : eky ] ) and corresponds to an optimal readout from the full population . the implications of these results in terms of extrapolation to large are discussed in the main text .
|
we study a standard linear readout model of perceptual integration from a population of sensory neurons . we show that the readout can be associated with a set of characteristic equations which summarize the joint trial - to - trial covariance structure of neural activities and animal percept . these characteristic equations implicitly determine the readout parameters that were used by the animal to create its percept . in particular , they implicitly constrain the temporal integration window and the typical number of neurons which give rise to the percept . comparing neural and behavioral sensitivity alone cannot disentangle these two sources of perceptual integration , so the characteristic equations also involve a measure of choice signals , like those assessed by the classic experimental measure of choice probabilities . we then propose a statistical method of analysis which allows us to recover the typical scales of integration and from finite numbers of recorded neurons and recording trials , and show the efficiency of this method on an artificial encoding network . we also study the statistical method theoretically , and relate its laws of convergence to the underlying structure of neural activity in the population , as described through its singular value decomposition . altogether , our method provides the first thorough interpretation of feedforward percept formation from a population of sensory neurons . it can readily be applied to experimental recordings in classic sensory decision - making tasks , and will hopefully provide new insights into the nature of perceptual integration .
* 1 group for neural theory , inserm u960 , école normale supérieure , paris , france
* 2 champalimaud neuroscience program , lisbon , portugal
e - mail : adrien.wohrer.fr * *
|
kramers - kronig analysis , most often of reflectance data , is used to estimate the optical conductivity , dielectric function , sum rules , and other optical functions for new materials . many reports of kramers - kronig analysis of reflectance have appeared , spanning more than 50 years , with studies of metals , pure and doped elemental solids , organic conductors , charge - density - wave materials , conducting polymers , cuprate superconductors , manganites , pnictides , heavy - fermion materials , multiferroics , topological insulators , and many others . in addition , a number of methods papers have appeared . the experimenter typically has data from far infrared through near ultraviolet , covering , say , 5 mev to 5 ev ( 40 to 40,000 ) . this is a reasonably wide bandwidth , but the kramers - kronig integral extends from zero to infinity , so that extrapolations need to be made outside the measured range . the high - frequency extrapolation is especially problematic and can cause significant distortions to the conductivity over the entire measured range , with consequences for sum rules as well . the approach used by most is to extend the reflectance with a power law , , transitioning to at a considerably higher frequency and continuing this free - carrier extension to infinity . the mid - range power law is adjusted to match the slope of the upper end of the data and to give pleasing curves , but the choice of power ( something between 0.5 and 3 ) is arbitrary . other approaches have been put forward . one heroic method is to carry out ellipsometry on the sample over the high - energy part of the interesting spectral range and extract the ( temperature - dependent if necessary ) complex refractive index over that range . then one can calculate an oscillator - model extrapolation that forces the kramers - kronig - derived refractive index to agree with ellipsometry over the range of overlap . a second approach consists of fitting the spectrum with a sum of a very large number of narrow contributions to the dielectric function . the functions can be lorentz oscillators , triangles , or some other function for the imaginary part and the kramers - kronig - derived counterpart for the real part . the number of these functions is equal to or nearly equal to the number of data points , so that an excellent fit is easy to obtain ; indeed , some parameters need to be fixed . the model dielectric function then represents the properties of the material . no actual integral of the reflectance is computed . this approach is especially effective for the case of a thin film on a substrate or a complex device structure . this paper describes an extrapolation method for conventional kramers - kronig analysis that uses x - ray atomic scattering functions developed by henke and co - workers to generate the high - frequency reflectance of a material . the method basically treats the solid as a linear combination of its atomic constituents . knowledge of the chemical formula , the density , and the scattering function enables the computation of the dielectric function , the reflectivity , and other optical functions . the `` henke reflectivity '' is computed for photon energies of 10 ev to 34 kev , after which a continuation is perfectly fine . this paper also discusses the bridge between experimental data and the henke reflectivity as well as two corrections that needed to be made to the latter . the kramers - kronig relations are a consequence of our experience that observable effects are _ causal _ , _ i.e .
, _ that the cause precedes the effect . this notion seems sensible and it is a component of most parts of physics . the kramers - kronig integrals are derived in a number of textbooks , and have been discussed by many authors . the original derivations by kramers and kronig relied on model dielectric functions ; however , the subject is mostly approached by considering integrals on the complex frequency plane and using cauchy 's integral theorem . this approach , combined with the fact that the material 's response functions are either even or odd as a function of the frequency and a consideration of the pole that occurs in conductors when the frequency is zero , leads to the following relations for the dielectric susceptibility : and where is the dc conductivity and means `` principal value . '' there are many complex optical functions in addition to the susceptibility : dielectric function , conductivity , refractive index , _ etc _ ; there are kramers - kronig relations amongst them . ( many can be obtained by substitution . for example , and so that equations [ kk4 ] and [ kk6 ] may respectively be converted to an integral containing that gives and one containing to give . ) others require application of cauchy 's theorem to the contour integral along with statements about the behavior at very high frequencies . when i measure the reflectance , where reflectance here means the _ single - bounce _ or _ single - surface _ reflectance , , i am taking the ratio of the reflected intensity or power reflected from the front surface of the sample to the incident intensity or power . the sample is assumed infinitely thick or sufficiently absorbing that no light from the rear surface reaches my detector . phase information is not available . the amplitude reflectivity , the ratio of reflected electric field amplitude to incident electric field amplitude , does have a phase ; indeed , i can write it as where is the magnitude of the reflectivity , is the phase shift on reflection , and is the complex refractive index . here , the phase is set by measuring the field vector relative to the incident vector at the surface . it would be nice to know , because i could invert eq . [ rvac ] to get using the known and the measured reflectance . kramers - kronig analysis is one way of estimating this phase . consider here , is the real part and is the imaginary part . the reflectance must be causal , and hence so must be the log of the reflectance . this requirement , plus the hermiticity of ( which makes even and odd ) , leads to eq . [ kk7 ] . equation [ kk7 ] is perfectly usable for numerical analysis , but there is an improvement that can be made . consider the following : the negative area for cancels the positive area for . thus , i can add to the right - hand side of eq . [ kk7 ] without affecting the phase . collecting terms , replacing with , and using the properties of the log , i get \phi(\omega ) = - { \omega \over \pi } \int_0^{\infty } { \ln\left [ r(\omega^{\prime } ) / r(\omega ) \right ] \over { \omega^{\prime } } ^2 - \omega^2 } \ , d\omega^{\prime } . \label{kkr } ( the overall sign and branch of the phase follow the convention discussed in the endnotes . ) this modification has two advantages . first , if there are errors in the calibration of the reflectance measurements , so that the data for are in error by a constant factor , the results for are unaffected . ( of course , even if the scale error does not affect the phase , it _ does _ affect , and , eq . [ nfromr ] , depends on both quantities .
) second , both numerator and denominator of the integrand are zero when . l'hôpital 's rule shows that the ratio does not diverge ; hence , the pole has been removed . the alert reader will have noticed that the range of the integral in eq . [ kkr ] is 0 to and may wonder how one acquires data over that entire range . the answer is that data are always limited to a finite range of frequencies . thus , the user must use extrapolations outside the measured ranges . one must estimate the reflectance between zero and the lowest measured frequency . in my opinion , the best approach is to employ a model that reasonably describes the low - frequency data . such models include drude for metals , lorentz for insulators , a sum of several lorentzians , or sometimes a drude model plus lorentzians . many other functions exist . when a good fit of the model to the data is obtained , a set of reflectance points may be calculated between zero and the lowest measured frequency , using a spacing between points similar to that of the lowest - frequency data , and combined with the measured data . other approaches to the low - frequency extrapolation include assuming that the reflectance is constant to dc ( as might be appropriate for an insulator ) or using a hagen - rubens formula , , ( to describe a metal at low frequencies ) . other power laws can also be used . the constant is adjusted so the extrapolation goes through the first few points and then , using a spacing between points similar to that of the lowest - frequency data , a set of reflectance points is calculated between zero and the lowest measured frequency and combined with the measured data . the high - frequency extrapolation can be a source of major error . it is good to use data from other experiments on identical or similar samples if these exist . moving to the highest frequencies , one knows that in the limit as , the dielectric function is mostly real and slightly smaller than unity , following , where is the plasma frequency of all the electrons in the solid and . then and . typically , the region between the highest - frequency data point and the transition to is filled with another power law , , with a free parameter . ( does not have to be an integer . ) the value of is chosen so that the power law joins smoothly to the data at the high - frequency limit , and then is chosen for a smooth transition between mid- and high - frequency extrapolations . the free parameters are the exponent and the frequency for the crossover from to . the integrals in the extrapolation range may be done analytically and their contributions simply added to the phase obtained by numerical integration of eq . [ kkr ] over the low - frequency extrapolation and the measured data . an example of `` typical '' reflectance data is shown in fig . [ fig : ag - ins ] . the data are the reflectance of silver as collected by palik . the low frequencies are supplemented by a drude reflectance , based on resistivity - cm at 300 k. the main panel shows the reflectance from 40 to 40,000 ( 5 mev to 5 ev ) , a range that can be measured by many laboratories . one can see the high metallic reflectance from far - infrared to near ultraviolet , a sharp and deep plasma edge around 32,000 ( 4 ev ) , and the beginning of transitions from the d - bands to the conduction band above this .
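before turning to the silver example , here is a minimal numerical sketch of evaluating eq . [ kkr ] on a measured ( and extrapolated ) grid . the grid , drude parameters , and function names are illustrative ; this is not the program described later in the paper :

```python
import numpy as np

# the integrand ln[ r(w')/r(w) ] / ( w'^2 - w^2 ) is finite at w' = w , so a
# simple trapezoid rule on a log-spaced grid suffices ; the removable point
# at w' = w is patched with the average of its neighbors .
def kk_phase(w, refl):
    """phase ( radians ) on the same grid w as the reflectance refl ."""
    ln_r = np.log(refl)
    dw = np.diff(w)
    phase = np.empty_like(w)
    for i, wi in enumerate(w):
        den = w**2 - wi**2
        g = np.divide(ln_r - ln_r[i], den,
                      out=np.zeros_like(w), where=(den != 0))
        if 0 < i < len(w) - 1:
            g[i] = 0.5 * (g[i - 1] + g[i + 1])
        phase[i] = -(wi / np.pi) * np.sum(0.5 * (g[1:] + g[:-1]) * dw)
    return phase

# illustrative test on a drude reflectance ( wp = 10^4 , gamma = 50 , in cm^-1 )
w = np.logspace(1, 6, 3000)
eps = 1.0 - 1.0e8 / (w**2 + 1j * 50.0 * w)
nq = np.sqrt(eps)
refl = np.abs((1.0 - nq) / (1.0 + nq)) ** 2
phi = kk_phase(w, refl)
```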
the inset of fig . [ fig : ag - ins ] shows the data over the entire measured range , up to about 1 kev . i will now explore the kramers - kronig analysis of the reflectance shown in the main panel of fig . [ fig : ag - ins ] . the issue to address is that the parameters , and the frequency of transition to , are completely free and , hence , uncontrolled . it is fair to ask : how much do they affect the outcome of the analysis ? a reason for choosing silver for this discussion is that data for this material extend to 1000 ev ( inset of fig . [ fig : ag - ins ] ) . here , one can see additional interband transitions followed by sharp core - level transitions . note that the reflectance above 100 ev ( ) is pretty close to a power law . kramers - kronig analysis of the full data will be compared to the results of the limited data in the main panel . [ truncated figure caption : power - law extensions with a crossover to at 10 ( 125 ev ) . ] after kramers - kronig integration of the reflectance , i can compute the optical conductivity , , from the reflectance and phase . the results are shown in fig . [ fig : agx ] . the intermediate frequency range was extrapolated as with values for of 0 , 0.5 , 1.0 , 1.5 , 2 , 3 , and 4 . the crossover to occurred at 10 ( 125 ev ) . there are considerable differences among the results . note that all came to the same dc conductivity , . the figure shows only the first or lowest 5% of the conductivity spectrum , to illustrate the variations in adequate detail . the different power laws give a range of values for the conductivity in the d - transitions that vary by a factor of three or so . other optical functions have similar variation . the conductivity from the full - range reflectance data is shown as the black dashed line , which covers the typical - range calculation for . the extrapolation strongly affects the outcome for the partial sum rule for silver . this sum rule gives the number of electrons with effective mass ratio m^{*}/m participating in optical transitions at frequencies below as \left ( { m \over m^{*} } \right ) n_{\rm eff}(\omega ) = { 2 m v \over \pi e^2 } \int_0^{\omega } \sigma_1(\omega^{\prime } ) \ , d\omega^{\prime } , \label{partial } where m^{*} is the effective mass , m is the free electron mass , and v is the unit cell volume ( or formula volume ) . figure [ fig : sumx ] shows the result of evaluating eq . [ partial ] over 40 to 40,000 ( 5 mev to 5 ev ) for the conductivity data in fig . [ fig : ag - ins ] . for a simple metal like silver , the free - carrier spectral weight is exhausted in the midinfrared , and the function saturates at the number of conduction electrons / atom ( 1 in the case of ag ) until the interband transitions set in . depending on which power is taken in the intermediate region , this analysis would conclude that silver has between 0.4 and 2.4 free carriers per silver atom . even if the experimenter avoided the extremes and chose in the range , there would be a range of 0.6 to 1.3 for the number of electrons per silver atom . note that the full - range data and both saturate at , a quite satisfactory result . [ caption , fig . [ fig : sumx ] : partial sum rule of silver from kramers - kronig analysis of reflectance . power - law extensions were used , with exponents and a crossover to at 10 ( 125 ev ) . ] that this result is not unique to a free - carrier metal is evident when i repeat the exercise for la , using data from gao et al. with ultraviolet results from tajima et al .
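a sketch of evaluating eq . [ partial ] numerically , before returning to la . the constants are in gaussian units ( sigma1 in s^-1 , omega in rad / s , v_cell in cm^3 ) and the names are illustrative ; the conversion from practical ohm^-1 cm^-1 and cm^-1 data is left out :

```python
import numpy as np

m_e = 9.109e-28     # electron mass , g
e_sc = 4.803e-10    # electron charge , statC

def n_eff(omega, sigma1, v_cell):
    """cumulative effective carrier count below each frequency ( eq. [partial] )."""
    mid = 0.5 * (sigma1[1:] + sigma1[:-1])          # trapezoid running integral
    run = np.concatenate(([0.0], np.cumsum(mid * np.diff(omega))))
    return (2.0 * m_e * v_cell / (np.pi * e_sc**2)) * run
```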
for la , the data over the typical range cover 35 to 38,000 ; the ultraviolet results extend to 340,000 ( 42 ev ) . the reflectance data are shown in fig . [ fig : lsvuv ] . one can see a broad non - drude midinfrared absorption with vibrational features superposed . the charge - transfer excitation of the insulator remains at 10,000 ( 1.2 ev ) with higher - energy electronic transitions in the visible and ultraviolet . the next step is to carry out the kramers - kronig integration of the limited - range data and use the phase so obtained to calculate the optical conductivity . above the highest frequency of the measured data , the reflectance was extrapolated as with values for of 0 , 0.5 , 1.0 , 1.5 , 2 , 3 , and 4 . the crossover to occurred at 10 ( 125 ev ) . the resulting optical conductivities are shown in fig . [ fig : lsold ] , along with a kramers - kronig - derived conductivity that includes the vacuum ultraviolet data . although all features appear , there is considerable variation in the spectral weights , particularly above about 1.2 ev ( 10,000 ) . the full - data spectrum falls midway between the results for and . i find that the most worrisome feature is the large variation in the charge - transfer band , because one believes that the low - energy spectral weight is transferred from the charge - transfer spectrum of the insulator and would like to test this belief by measuring the spectral weight transfer . [ caption , fig . [ fig : lsold ] : power - law extensions were used , with exponents and a crossover to at 10 ( 125 ev ) . the conductivity obtained using the full data set , including the vacuum ultraviolet region , is also shown . ] photoabsorption in the x - ray region has been considered by a number of authors . it is described by an atomic scattering function , a complex quantity : . the approach used is to combine experiment and theory and determine the imaginary part of the scattering function , , for each atomic species . this quantity has peaks or discontinuities at the absorption thresholds for each electronic level and falls to zero as . the real part is obtained by kramers - kronig integration of ; it increases with frequency via a series of plateaus , each approximately equal to the number of `` free '' electrons at that photon energy , those electrons with binding energies less than the photon energy . the limiting high - frequency value is then , except for relativistic corrections , the atomic number . in this work i use tables of the scattering functions reported by henke , gullikson , and davis about 20 years ago . a related web site also exists with the ability to calculate optical properties , including reflectance . i find , however , that better results are found if two adjustments are made , one to the scattering functions and one to the procedure of using these functions to calculate reflectance . the first is that the functions of ref . provide from 10 to 30,000 ev but only have from 30 to 30,000 ev . i have redone the kramers - kronig integrals of to provide also from 10 ev ( 80,000 ) , extrapolating at low frequencies and at high frequencies .
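a sketch of this first adjustment , under one common sign convention ( consistent with the high - energy limit f1 → z* and the f - sum rule ) : f1(e) = z* + ( 2/pi ) P int e' f2(e') / ( e^2 - e'^2 ) de' . the grid names are illustrative and the tabulated input is assumed to come from the henke tables ; no real file - reading api is implied :

```python
import numpy as np

def f1_from_f2(e_out, e_tab, f2_tab, z_star):
    """kramers-kronig transform of f2 ; z_star is the high-energy limit of f1
    ( the atomic number less a small relativistic correction ) ."""
    f1 = np.empty_like(e_out)
    de = np.diff(e_tab)
    for i, e in enumerate(e_out):
        den = e**2 - e_tab**2
        g = np.divide(e_tab * f2_tab, den,
                      out=np.zeros_like(e_tab), where=(den != 0))
        j = np.argmin(np.abs(den))
        if 0 < j < len(e_tab) - 1:   # crude principal-value patch near the pole
            g[j] = 0.5 * (g[j - 1] + g[j + 1])
        f1[i] = z_star + (2.0 / np.pi) * np.sum(0.5 * (g[1:] + g[:-1]) * de)
    return f1
```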
to obtain the optical properties of a material , one makes the assumption that the solid consists of a linear combination of its component atomic constituents , with the dielectric function determined by the scattering functions and the number density of the constituents . the dielectric function is then \epsilon(\omega ) = 1 - { 4 \pi e^2 \over m \omega^2 } \sum_j n_j f_j(\omega ) , \label{henke } where the sum runs over atoms at number density n_j and with complex scattering function f_j(\omega ) . note that this has the right limiting high - frequency behavior , because f_j \rightarrow z_j ( with z_j here the atomic number ) and \epsilon \rightarrow 1 - \omega_p^2 / \omega^2 , so that \omega_p^2 = 4 \pi e^2 \sum_j n_j z_j / m . the complex refractive index is n = \sqrt{\epsilon } and the reflectance is calculated from the usual equation , r = \left| { 1 - n \over 1 + n } \right|^2 . \label{reflectance } note that ref . and the website , ref . , write an equation for the refractive index of a monatomic solid : n = 1 - ( r_e \lambda^2 / 2 \pi ) \sum_j n_j f_j , with r_e the classical radius of the electron and \lambda the wavelength . this is clearly the first term in an expansion of \sqrt{\epsilon } . so the second adjustment made for this work is to compute the dielectric function using eq . [ henke ] , take the square root to obtain n , and then use eq . [ reflectance ] for the reflectance . note also that many other sets of atomic scattering functions have been reported in addition to the results in ref . . in general the functions are similar at energies where they overlap ; the newer sets often provide finer energy resolution near sharp features in the spectrum . one consequence is that , unlike the henke functions , many are not sampled at the same photon energies , requiring the user to devise interpolation schemes when evaluating eq . [ henke ] . the procedure is implemented in the following way . the user supplies the chemical formula , such as ag or la , and either the appropriate volume or the density . with this information , the reflectance will be calculated at 340 logarithmically spaced points over 80,000 to 2.4 ( 10 to 30,000 ev ) using eqs . [ henke ] [ reflectance ] . a bridge needs to be placed over the gap between the highest experimental point ( say , 40,000 or 5 ev ) and the beginning of the extrapolated reflectance at 80,000 . the user has the option of a power series in , a power series in , or a cubic spline . as it turns out , the bridge has a modest effect in some cases , minimal in others . a low - frequency extrapolation , from , say , 40 ( 5 mev ) to zero , must be added . i find it effective to fit accurately the low - frequency reflectance to a drude - lorentz or other ( well - motivated ) model and calculate the low - frequency reflectance from the model . then , the kramers - kronig integral , eq . [ kkr ] , is computed to obtain the phase . the refractive index can then be calculated from reflectance and phase through eq . [ nfromr ] , with other optical constants following in the usual way . to start , i will explore the use of the atomic scattering functions to analyze the reflectance data of fig . [ fig : ag - ins ] ( main panel ) . the scattering function for silver is used to calculate the reflectance in the uv to x - ray region . the beginning of the calculated x - ray reflectance is shown in fig . [ fig : agbr ] . ( it extends to 30,000 ev , continuing to fall approximately as with some fine structure . ) the measured high - frequency reflectance also is shown , along with a power - law bridge between the infrared - to - uv data and the beginning of the x - ray calculation . the calculated reflectance follows experiment reasonably well , although it is higher at the beginning and lower in the 2 ( 30 to 100 ev ) region . the strong structure around 3 ( 400 ev ) appears , although broader than experiment .
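a compact sketch of eqs . [ henke ] and [ reflectance ] , assuming the tabulated ( energy , f1 , f2 ) arrays have already been read from the henke tables ; the constants and names are illustrative , and no real file - reading api is implied :

```python
import numpy as np

r_e = 2.818e-13     # classical electron radius , cm
hc = 1.23984e-4     # planck constant times c , ev cm

def henke_reflectance(e_ev, species):
    """species : list of ( f1 , f2 , number density in cm^-3 ) on the grid e_ev ."""
    lam = hc / e_ev                            # wavelength in cm
    eps = np.ones_like(e_ev, dtype=complex)
    for f1, f2, n_j in species:
        # eq. [henke] : note r_e * lam**2 / pi == 4 pi e^2 / ( m omega^2 )
        eps -= (r_e * lam**2 / np.pi) * n_j * (f1 + 1j * f2)
    nq = np.sqrt(eps)                          # complex refractive index
    return np.abs((1.0 - nq) / (1.0 + nq)) ** 2   # eq. [reflectance]
```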
[ caption , fig . [ fig : agbr ] : reflectance of ag over 1 to 1200 ev , showing the typical experimental region ( 40 to 40,000 ) , data in the vacuum - ultraviolet and soft x - ray region , the reflectance calculated from the ag scattering function , and the power - law bridge over 50,000 to 80,000 ( 5 to 20 ev ) . ] my goal , of course , is to use the scattering - function reflectance for the kramers - kronig extension , not to extract accurate uv to x - ray reflectance . the conductivity obtained by kramers - kronig analysis using the scattering - function extension is shown in fig . [ fig : s1br ] . twelve curves are shown for twelve variations of the bridge function : power laws of with varying from 1 ( straight line ) to 5 and power laws of with varying from 1 to 7 . finally , a curve for the full data is also shown . there is no significant difference among the results . all of the bridge functions give conductivity spectra basically indistinguishable from the result from the full data . there is a small deviation in a couple of cases above 35,000 , which i regard as not really significant . fig . [ fig : sumbr ] shows that the result for the partial sum rule is equally good . any of the bridge functions would support the notion that there is about 1 free electron per silver atom . [ caption , fig . [ fig : sumbr ] : partial sum rule of silver from kramers - kronig analysis of reflectance using the x - ray scattering - function extrapolation with 12 different bridge functions ( described in the text ) . the partial sum rule for the full data is also shown . ] that this result is not unique to a free - carrier metal is evident when i repeat the exercise for la . this material is a good test of the method , because , as shown in the upper panel of fig . [ fig : lsxro ] , the slope of the data at the high - frequency limit , which is about 37,000 ( 4.6 ev ) , is positive whereas the slope of the scattering - function reflectance at its low - frequency limit , which is about 80,000 ( 10 ev ) , is negative . hence the bridge must provide this slope change . [ caption , fig . [ fig : lsxro ] : the upper panel shows the data of gao et al. and the reflectance calculated from the x - ray scattering functions . the bottom panel shows the data and scattering - function reflectance again , along with 5 bridges , as described in the text . the actual uv reflectance is also shown . ] several trial bridge functions are shown in the bottom panel . four used power laws in , with the upper limit ranging from 3 to 6 . a bridge employing a cubic spline function is also shown . all accomplish the goal of joining the two regions , with some above and some below the actual uv reflectance , which is also shown . the agreement between the scattering - function - derived reflectance and the actual reflectance is not as good as in the case of ag . however , i am not that interested in the accuracy of the scattering - function reflectance in the vacuum uv and x - ray region ; instead i will use it as an extension in the kramers - kronig analysis of infrared - to - uv reflectance . the outcome of the kramers - kronig analysis is shown in fig . [ fig : lsnew2 ] . above the highest measured or calculated point , the reflectance was extended as . below the lowest measured frequency , it was extended with a drude - lorentz fit . [ caption , fig . [ fig : lsnew2 ] : scattering - function extensions were used , with a variety of power - law bridge functions spanning the gap between the measured data and the scattering - function extension . the conductivity obtained using the full data set , including the vacuum ultraviolet region , is also shown . ]
as opposed to the results of using power - law extensions in the intermediate region , which were shown in fig . [ fig : lsold ] , the scattering - function extension is quite close to the conductivity obtained using actual vacuum - ultraviolet data . in particular , the conductivity through the important charge - transfer band around 12,000 ( 1.5 ev ) is almost independent of bridge function and is very close to what is found using actual data . evaluation of the partial sum rule , eq . [ partial ] , works the same way ; the curves are nearly indistinguishable below about 16,000 ( 2 ev ) . of the various bridge functions , the cubic spline appears to be closest to the result using the full data ( gao + tajima ) . the two extremes used cubic ( high ) and quartic ( low ) power laws in ; carrying the series to more terms made little difference and neither did the use of power laws in . as a final example , let me discuss the use of this method for al metal . a trivalent metal , al has its plasma edge deep in the vacuum ultraviolet , around 120,000 ( 15 ev ) . this energy is well beyond the reach of most conventional optical spectroscopy laboratories . nevertheless , one might study al ( or similar wide - band , high - carrier - density solids ) in order to probe low - energy features , such as the weak interband transition that occurs in the infrared . the far - infrared to ultraviolet reflectance spectrum of al is shown in fig . [ fig : al - ins ] . the main panel shows data from 70 to 50,000 ( 0.01 to 6.2 ev ) whereas the inset shows the vacuum ultraviolet and x - ray reflectance up to 6 ( 800 ev ) . were i to have only the low - energy data of the main panel , i would have a difficult time with the kramers - kronig analysis . i would suspect that the high reflectance did not continue indefinitely but would not _ a priori _ know when or how to start it decreasing . if i were to use a power law for the reflectance , , transitioning to at a considerably higher frequency and continuing this free - carrier extension to infinity , i would obtain the curves shown in fig . [ fig : alold ] . [ caption , fig . [ fig : alold ] : in the upper panel , power - law extensions were used , with exponents and a crossover to at 10 ( 125 ev ) ; in the lower panel , a constant reflectance was used , up to a changeover frequency , above which . ]
the upper panel shows the effect of varying the exponent in the mid - region from 0 ( a nearly flat extension , as suggested by the data ) to 4 . the transition to was made at 10 ( 125 ev ) . the kramers - kronig - derived conductivity from the full data set ( shown in the inset to fig . [ fig : al - ins ] ) is also shown . most of the derived curves seriously overestimate the conductivity , making the spectral weight in both the free - carrier conductivity and the weak interband transition far too large . the location of the maximum of this transition is pushed up in the small - exponent calculations , reaching 16,000 ( 2 ev ) compared to 12,500 ( 1.55 ev ) in the conductivity derived from the full - range data . note that the weak exponents ( which are implied by the data ) give the poorest results . an exponent of around 2.5 , which is in no way suggested by the data , gives a conductivity spectrum close to that returned by using the full reflectance spectrum . one could argue that transitioning to at 10 ( 125 ev ) is not correct ; the full reflectance spectrum turns over at 120,000 . the lower panel of fig . [ fig : alold ] shows conductivity spectra obtained by extending the reflectance as a constant value up to a frequency where it changes to . the changeover frequency was in the range ( 12 to 60 ev ) ; above this frequency . all of the curves overstate the magnitude of the conductivity , even the one where the reflectance decrease starts at 100,000 ( 12 ev ) , below the 150,000 ( 19 ev ) where the experimental edge exists . note that the initial power - law behavior of the reflectance edge ( inset to fig . [ fig : al - ins ] ) is approximately . presented with the spectra in fig . [ fig : alold ] , i would decide that kramers - kronig analysis of the reflectance in the main panel of fig . [ fig : al - ins ] cannot be productive . however , use of the x - ray scattering function helps immensely . the relevant curves are shown in fig . [ fig : alxro3 ] . [ caption , fig . [ fig : alxro3 ] : reflectance of al up to 10 ( 125 ev ) , showing the typical experimental region ( 40 to 50,000 ) , data in the vacuum - ultraviolet and soft x - ray region , the calculated reflectance using the al scattering function , and two cubic spline bridges , one over 50,000 to 80,000 ( 6.2 to 10 ev ) and one over 50,000 to 130,000 ( 6.2 to 16 ev ) . ] at first blush , the calculated reflectance from the scattering functions does not look promising . because there is no band structure , only atomic orbitals , there is no metallic reflectance ; instead a strong peak occurs around 130,000 ( 16 ev ) with a reflectance edge blue - shifted from the experimental data . at the low end , 80,000 ( 10 ev ) , the reflectance falls to about 1/3 of the metal 's reflectance . i consider two approaches to the use of this extension . first , i can just use it , with a short , steeply declining bridge joining the `` ir data '' to the scattering - function extension . second , i can lop off the low - energy part of the extension , and bridge to the maximum around 130,000 ( 16 ev ) , making the smallest change in reflectance between data and extension . surprisingly , both approaches give rather similar optical conductivity curves ( and sum rule results ) , as shown in fig . [ fig : als4br ] . the short bridge falls a little bit below the conductivity obtained from the full set of data while the long bridge is a bit above . i could use either ( or their average !
) to discuss the low - frequency electronic structure of aluminum without being plagued by extrapolation - dependent results . that the scattering - function extrapolation works as well as it does suggests that the critical issue in designing the extrapolation is to use one that gets the correct high - energy spectral weight for the material and then places that spectral weight appropriately in energy . the remaining details are not important . the use of reflectance calculated from a dielectric function constructed from a sum of atomic scattering functions for a material provides a reliable and reproducible method of extrapolating measured data . it removes a certain amount of arbitrariness in the use of kramers - kronig analysis . in addition to testing with data where the uv to x - ray spectra are known , this extrapolation has been used in a number of recent studies . a comparison with a method that uses ellipsometry in the near - infrared to ultraviolet to constrain the extrapolation gave conductivity spectra indistinguishable from those of figs . [ fig : s1br ] and [ fig : lsnew2 ] . for al , the ellipsometry - based method gave a slightly ( 5% ) higher conductivity over the entire range ; the difference was about the same as that between the two bridge versions in fig . [ fig : alxro3 ] . persons who wish to test the approach will find a windows program to compute the reflectance in the uv to x - ray region , the kramers - kronig routine that uses the extrapolation and generates the bridge function , and a program that computes optical constants from reflectance and phase at |http://www.phys.ufl.edu/~tanner/zips/xro.zip| . use xro.exe to generate the file for extrapolation , kk.exe to do the kramers - kronig integral , and op.exe to calculate optical functions . note that to avoid having swings in the phase when its value is close to and there is noise , the kk.exe and op.exe programs respectively compute and use . see the discussion in endnote . i 've had important discussions about this method with ric lobo , tom timusk , and dirk van der marel . claus jacobsen wrote the kramers - kronig routine i use and charles porter wrote many of the routines used by the data analysis code . naween anand , chang long , catalin martin , kevin miller , zahra nasrollahi , evan thatcher , berik uzakbaiuly , and luyi yan were very helpful in testing the method on data they have measured . i thank dirk van der marel and damien stricker for providing a comparison of the method used here with their ellipsometry - based approach . kramers , `` la diffusion de la lumière par les atomes , '' j. atti cong . intern . fisici , ( transactions of volta centenary congress ) como * 2 * , 545 - 557 ( 1927 ) ; `` die dispersion und absorption von röntgenstrahlen , '' phys . z. * 30 * , 522 - 523 ( 1929 ) . hagemann , w. gudat , and c. kunz , desy report sr-74/7 , hamburg ( 1974 ) ; h.j . hagemann , w. gudat , and c. kunz , `` optical constants from the far infrared to the x - ray region : mg , al , cu , ag , au , bi , c , and al , '' j. opt . soc . am . * 65 * , 742 ( 1975 ) . e. shiles , taizo sasaki , mitio inokuti , and d.y . smith , `` self - consistency and sum - rule tests in the kramers - kronig analysis of optical data : applications to aluminum , '' phys . rev . b * 22 * , 1612 ( 1980 ) . jacobsen , d.b . tanner , and k. bechgaard , `` dimensionality crossover in the organic superconductor tetramethyltetraselenafulvalene hexafluorophosphate [ ( tmtsf) , '' phys . rev . lett . 46 , 1142 - 1145 , 1981 . m. dressel , a. schwartz , g. grüner , and l.
degiorgi , `` deviations from drude response in low - dimensional metals : electrodynamics of the metallic state of ( tmtsf) , '' phys .. lett . * 77 * , 398402 ( 1996 ) .fincher , jr . ,m. ozaki , m. tanaka , d. peebles , l. lauchlan , a.j .heeger , and a.g .macdiarmid , `` electronic structure of polyacetylene : optical and infrared studies of undoped semiconducting ( ch) and heavily doped metallic ( ch) , '' phys .b * 20 * , 1589 ( 1979 ) .s. stafstrm , j.l .brdas , a.j .epstein , h.s .woo , d.b .tanner , w.s .huang , and a.g .macdiarmid , `` polaron lattice in highly conducting polyaniline : theoretical and optical studies , '' phys .lett . , 59 , 14641467 , 1987 d.a .bonn , j.e .greedan , c.v .stager , t. timusk , m.g .doss , s.l .herr , k. kamars , and d.b .tanner , `` far - infrared conductivity of the high- superconductor yba , '' phys .lett . , 58 , 22492250 , 1987 k. kamars , s.l .herr , c.d .porter , n. tache , d.b .tanner , s. etemad , t. venkatesan , e. chase , a. inam , x.d .hegde , and b. dutta , `` in a clean high- superconductor you do not see the gap , '' phys .lett . , 64 , 8487 , 1990 s.l .cooper , g.a .thomas , j. orenstein , d.h .rapkine , a.j .millis , s .- w .cheong , a.s .cooper , and z. fisk , `` growth of the optical conductivity in the cu - o planes , '' phys .b , 41 , 1160511608 , 1990 f. gao , d.b .romero , d.b .tanner , j. talvacchio , and m.g .forrester , `` infrared properties of epitaxial la thin films in the normal and superconducting states , '' phys .b , 47 , 10361052 , 1993 s.l .cooper , d. reznik , a. kotz , m.a .karlow , r. liu , m.v .klein , w.c .lee , j. giapintzakis , d.m .ginsberg , b.w .veal , and a.p .paulikas , `` optical studies of the - , - , and -axis charge dynamics in yba , '' phys .b , 47 , 82338248 , 1993 d.n .basov , a.v .puchkov , r.a .hughes , t. strach , j. preston , t. timusk , d.a .bonn , r. liang , and w.n .hardy , `` disorder and superconducting - state conductivity of single crystals of yba , '' phys . rev .b * 49 * , 1216512169 ( 1994 ) . d.n .basov , r. liang , d.a .bonn , w.n .hardy , b. dabrowski , m. quijada , d.b .tanner , j.p .rice , d.m .ginsberg , and t. timusk , `` in - plane anisotropy of the penetration depth in yba and yba superconductors , '' phys .lett . , 74 , 598601 , 1995 m.a .quijada , d.b .tanner , r.j .kelley , m. onellion , h. berger , and g. margaritondo `` anisotropy in the -plane optical properties of bi single - domain crystals , '' phys .b , 60 , 1491714934 , 1999 s.g .kaplan , m.a .quijada , h.d . drew , d.b .tanner , g.c .xiong , r. ramesh , c. kwon , and t. venkatesan , `` optical evidence for the dynamic jahn - teller effect in nd , '' phys .lett . , 77 , 2081 - 2084 , 1996 y. murakami , h. kawada , h. kawata , m. tanaka , t. arima , y. moritomo , and y. tokura , `` direct observation of charge and orbital ordering in la , '' phys .lett . * 80 * , 19321935 ( 1998 ) .g. li , w.z .hu , j. dong , z. li , p. zheng , g.f .chen , j.l .luo , and n.l .wang , `` probing the superconducting energy gap from infrared spectroscopy on a ba as single crystal with k , '' phys .* 101 * , 107004 ( 2008 ) . b. cheng , b.f .chen , g. xu , p. zheng , j.l .luo , and n.l .wang , `` electronic properties of 3d transitional metal pnictides : a comparative study by optical spectroscopy , '' phys .b * 86 * , 134503 ( 2012 ) .dai , b. xu , b. shen , h.h .wen , j.p .qiu , and r.p.s.m .lobo , `` pseudogap in underdoped ba as seen via optical conductivity , '' phys .b * 86 * , 100501(r ) ( 2012 ) . 
s.j .moon , a.a .schafgans , m.a .tanatar , r. prozorov , a. thaler , p.c .canfield , a.s .sefat , d. mandrus , and d.n .basov , `` interlayer coherence and superconducting condensate in the -axis response of optimally doped ba(fe) high- superconductor using infrared spectroscopy , '' phys .* 110 * , 097003 ( 2013 ) .dai , b. xu , b. she2 , h.h .wen , x.g .qiu , and r.p.s.m .lobo , `` optical conductivity of ba : the effect of in - plane and out - of - plane doping in the superconducting gap , '' europhys .104 * , 47006 ( 2013 ) .xu , m. angst , t.v .brinzari , r.p .hermann , j.l .musfeldt , a.d .christianson , d. mandrus , b.c .sales , s. mcgill , j .- w .kim , and z. islam , `` charge order , dynamics , and magnetostructural transition in multiferroic lufe , '' phys .lett . * 101 * , 227602 ( 2008 ) .miller , p.w .stephens , c. martin , e. constable , r.a .lewis , h. berger , g.l .carr , and d.b .tanner , `` infrared phonon anomaly and magnetic excitations in single - crystal cu(seo) , '' phys .b , 86 , 174104 , 2012 k.h . miller , x.s .xu , h. berger , v. craciun , xiaoxiang xi , c. martin , g.l .carr , and d.b .tanner , `` infrared phonon modes in multiferroic single - crystal fete , '' phys .b , 87 , 224108 , 2013 a.d .laforge , a. frenzel , b.c .pursley , tao lin , xinfei liu , jing shi , and d.n .basov , `` optical characterization of bi in a magnetic field : infrared evidence for magnetoelectric coupling in a topological insulator material , '' phys .b * 81 * , 125120 ( 2010 ) .ana akrap , michal tran , alberto ubaldini , jrmie teyssier , enrico giannini , dirk van der marel , philippe lerch , and christopher c. homes , `` optical properties of bi at ambient and high pressures , '' phys .b * 86 * , 235207 ( 2012 ) .d. crandles , f. eftekhari , r. faust , g. rao , m. reedyk , and f. razavi , `` kramers - kronig - constrained variational dielectric fitting and the reflectance of a thin film on a substrate , '' appl . opt . * 47 * , 42054211 ( 2008 ) .x. wu , c.c .homes , s.e .burkov , t. timusk , f.s .pierce , s.j .poon , s.l .cooper , and m.a .`` optical conductivity of the icosahedral quasicrystal al and its 1/1 crystalline approximant -al , '' j. phys . : condens .matter * 5 * , 59755990 ( 1993 ) .k. kamars , k.l .barth , f. keilmann , r. henn , m. reedyk , c. thomsen , m. cardona , j. kircher , p.l .richards , and j.l .stehl , `` the low - temperature infrared optical functions of srtio determined by reflectance spectroscopy and spectroscopic ellipsometry , '' j. appl .* 78 * , 12351240 ( 1995 ) .kuzmenko , f.p .mena , h.j.a .molegraaf , d. van der marel , b. gorshunov , m. dressel , i.i .mazin , j. kortus , o.v .dolgov , t. muranaka , j. akimitsu , `` manifestation of multiband optical properties of mgb , '' solid state commun . * 121 * , 479484 ( 2002 ) .f. carbone , a. b. kuzmenko , h. j. a. molegraaf , e. van heumen , e. giannini , and d. van der marel , in - plane optical spectral weight transfer in optimally doped bi phys .b * 74 * , 024502 ( 2006 ) .d. stricker , j. mravlje , c. berthod , r. fittipaldi , a. vecchione , a. georges , and d. van der marel , `` optical response of sr reveals universal fermi - liquid scaling and quasiparticles beyond landau theory , '' phys .lett . * 113 * , 087404 ( 2014 ) .j. hwang , i. schwendeman , b.c .ihas , r.j .clark , m. cornick , m. nikolou , a. argun , j.r .reynolds , and d.b .tanner , `` _ in situ _ measurements of the optical absorption of dioxythiophene - based conjugated polymers , '' phys .b , 83 , 195121 , 2011 b.l . 
henke , e.m .gullikson , and j.c .x - ray interactions : photoabsorption , scattering , transmission , and reflection at ev , , atomic data and nuclear data tables * 54 * , 181342 ( 1993 ) .this equation appears at first glance to be wrong , but it is not .recall that the phase shift for non - absorbing materials with is so that is negative .the range of is . if i wanted to have a positive phase , i could define so that . in this case , and , of course , the range of will be .the real part of becomes negative when and is small , requiring .there is occasional confusion about the value to use for the volume , which is described variously as the `` volume of the unit cell '' or the `` volume of one formula unit . ''of course any carefully described volume will work , but generally one wants the number of charge carriers per atom or per primitive unit cell . and of course the conventional cell often contains several times more atoms than one would guess from the chemical formula .so the user must decide what he or she wants to determine .it may be the number of carriers per ag atom in silver metal , the number per buckyball in c , the number per dopant in p - doped si , or the number per copper atom in yba .once the decision is made , compute the volume in the crystal allocated to the desired quantity .one ( almost ) failsafe approach is to obtain the density of the crystal and the mass of the entity one is interested in , such as one ag atom , 60 c atoms , one silicon atom divided by the dopant concentration , or 1/3 the mass of y + 2ba + 3cu + 7o .then .s. tajima , h. ishii , t. nakahashi , t. takagi , s. uchida , m. seki , s. suga , y. hidaka , m. suzuki , t. murakami , k. oka , and h. unoki , `` extensive study of the optical spectra for high - temperature superconducting oxides and their related materials from the infrared to the vacuum - ultraviolet energy region , '' j. opt .b bf 6 , 475482 ( 1989 ) .hubbell , w.j .veigele , e.a .briggs , r.t .brown , d.t .cromer , and r.j .howerton , `` atomic form factors , incoherent scattering functions , and photon scattering cross sections , '' j. phys . chem . ref .data * 4 * , 471538 ( 1975 ) .henke , `` low energy x - ray interactions : photoionization , scattering , specular and bragg reflection , '' in _ low energy x - ray diagnostics _ edited by d.t . attwood and b.l .henke ( american institute of physics conf .proc . * 75 * , new york , 1981 ) , pp .146155 .henke , j.c .davis , e.m .gullikson , and r.c.c .perera , `` a preliminary report on x - ray photoabsorption coefficients and atomic scattering factors for 92 elements in the 1010,000 ev region , '' lawrence berkeley national laboratory report lbl-26259 ( 1988 ) .chantler , `` detailed tabulation of atomic form factors , photoelectric absorption and scattering cross section , and mass attenuation coefficients in the vicinity of absorption edges in the soft x - ray , , kev10 kev ? , addressing convergence issues of earlier work , '' j. phys .chem . ref .data * 29 * , 5971048 ( 2000 ) . s. brennan and p.l .cowan , `` a suite of programs for calculating x - ray absorption , reflection and diffraction performance for a variety of materials at arbitrary wavelengths , '' rev .instrum . * 63 * , 850853 ( 1992 ) .pratt , lynn kissel , and p.m. bergstrom , jr ., `` new relativistic s - matrix results for scattering - beyond the usual anomalous factors / beyond impulse approximation , '' in _ resonant anomalous x - ray scattering , _ edited by g. materlik , c.j . sparks and k. 
fischer ( north - holland : amsterdam , 1994 ) . lynn kissel , b. zhou , s.c .roy , s.k .sen gupta , and r.h .pratt , `` validity of form - factor , modified - form - factor and anomalous - scattering - factor approximations in elastic scattering calculations , '' acta crystallographica * a51 * , 271288 ( 1995 ) .the cubic spline bridge occasionally causes excitement by taking the reflectance in the bridge region over unity or below zero . in these casesit can not be used .the simple power - law bridges have not caused this trouble .naween anand , sanal buvaev , a.f .hebard , d.b .tanner , zhiguo chen , zhiqiang li , kamal choudhary , s.b .sinnott , genda gu , and c. martin , `` temperature - driven band inversion in pb : optical and hall - effect studies , '' ( arxiv:1407.5726 [ cond-mat.str-el ] . c. martin , k.h .miller , s. buvaev , h. berger , x.s .hebard , and d.b .tanner , `` temperature dependent infrared spectroscopy of the rashba spin - splitting semiconductor bitei , '' arxiv:1209.1656 [ cond-mat.mtrl-sci ] .
|
kramers - kronig analysis is commonly used to estimate the optical properties of new materials . the analysis typically uses data from far infrared through near ultraviolet ( say , 4040,000 or 5 mev5 ev ) and uses extrapolations outside the measured range . most high - frequency extrapolations use a power law , 1/ , transitioning to at a considerably higher frequency and continuing this free - carrier extension to infinity . the mid - range power law is adjusted to match the slope of the data and to give pleasing curves , but the choice of power ( usually between 0.5 and 3 ) is arbitrary . instead of an arbitrary power law , it is is better to use x - ray atomic scattering functions such as those presented by henke and co - workers . these basically treat the solid as a linear combinations of its atomic constituents and , knowing the chemical formula and the density , allow the computation of dielectric function , reflectivity , and other optical functions . the `` henke reflectivity '' can be used over photon energies of 10 ev34 kev , after which a continuation is perfectly fine . the bridge between experimental data and the henke reflectivity as well as two corrections that needed to be made to the latter are discussed .
|
in this work we consider multiparty communication complexity in the _ number - in - hand _ model . in this model , there are players , each with his own -bit input .the players wish to collaborate in order to compute a joint function of their inputs , .to do so , they are allowed to communicate , until one of them figures out the value of and returns it .all players are assumed to have unlimited computational power , so all we care about is the amount of communication used .there are three variants to this model , according to the mode of communication : 1 . the _ blackboard model _ , where any message sent by a player is written on a blackboard visible to all players ; 2 .the _ message - passing model _ , where a player sending a message specifies another player that will receive this message ; 3 . the _ coordinator model _ , where there is an additional -th player called the _ coordinator _ , who receives no input .players can only communicate with the coordinator , and not with each other directly .we will work in all of these , but will mostly concentrate on the message - passing model and the coordinator model .note that the coordinator model is almost equivalent to the message - passing model , up to a multiplicative factor , since instead of player sending message to player , player can transmit message to the coordinator , and the coordinator forwards it to player .lower bounds in the three models above are useful for proving lower bounds on the space usage of streaming algorithms , and for other models as well , as we explain in section [ subsec : motivation ] .most previous lower bounds have been proved in the blackboard model , but lower bounds in the message - passing model and the coordinator model suffice for all the applications we have checked ( again , see more in section [ subsec : motivation ] ) .note that another , entirely different , model for multiparty communication is the _ number - on - forehead _ model , where each player can see the inputs of all other players but _ not _ his own input .this model has important applications for circuit complexity ( see e.g. ) .we do not discuss this model at all in this paper .we allow all protocols to be randomized , with public coins , i.e. all players have unlimited access to a common infinite string of independent random bits .we allow the protocol to return the wrong answer with probability ( which should usually be thought of as a small constant ) ; here , the probability is taken over the sample space of public randomness .note that the public coin model might seem overly powerful , but in this paper we are mainly interested in proving lower bounds rather than upper bounds , so giving the model such strength only makes our results stronger . for more on communication complexity ,see the book of kushilevitz and nisan , and the references therein .we give some more definitions in the preliminaries , in section [ sec : pre ] .we begin by sketching two lower bounds obtained using our technique , both of them for the coordinate - wise xor problem .these lower bounds can be proved without using symmetrization , but their proofs that use symmetrization are particularly appealing . first consider the following problem : each player gets a bitvector and the goal is to compute the coordinate - wise xor of these vectors .we operate in the _ blackboard model _ , where messages are posted on a blackboard for all to see .[ thm : xor_bb_intro ] the coordinate - wise xor problem requires communication in the blackboard model . 
to see this , first let us specify the _ hard distribution _ : we prove the lower bound when the input is drawn from this distribution , and by the easy direction of yao s minimax lemma ( see e.g. ) , it follows that this lower bound applies for the problem as a whole . the hard distribution we choose is just the distribution where the inputs x_1 , ... , x_k are independently drawn from the uniform distribution . to prove the lower bound , consider a protocol p for this k - player problem , which works on this distribution , communicates c(p) bits in expectation , and suppose for now that it never makes any errors ( it will be easy to remove this assumption ) . we build from p a new protocol q for a 2 - player problem . in the 2 - player problem , suppose that alice gets input x and bob gets input y , where x and y are independent random bitvectors . then q works as follows : alice and bob randomly choose two distinct indices i and j using the public randomness , and they simulate the protocol p , where alice plays player i and lets x_i = x , bob plays player j and lets x_j = y , and they both play all of the rest of the players ; the inputs of the rest of the players are chosen from shared randomness . alice and bob begin simulating the running of p . every time player i should speak , alice sends to bob the message that player i was supposed to write on the board , and vice versa . when any other player ( neither i nor j ) should speak , both alice and bob know his input so they know what he should be writing on the board , thus no communication is actually needed ( this is the key point of the symmetrization technique ) . a key observation is that the inputs of the players are uniform and independent and thus entirely symmetrical ( a distribution over ( x_1 , ... , x_k ) is called _ symmetric _ if exchanging any two coordinates keeps the distribution the same ) , and since the indices i and j were chosen uniformly at random , the expected communication performed by the protocol q is 2c(p)/k . furthermore , at the end of the running of q , alice knows the value of the coordinate - wise xor of all k inputs , so she can reconstruct the value of x xor y ( she knows the inputs of all players other than j ) . since computing the xor of two independent uniform n - bit vectors requires Ω(n) communication , this implies the theorem . the assumption that we never make errors can once again be easily removed . a similar argument gives a lower bound in the coordinator model , where alice simulates a single randomly chosen player and bob simulates the coordinator together with all of the other players . we see that the crux of the symmetrization technique in the coordinator model is to consider the k - player problem that we wish to lower - bound , to find a symmetric distribution which seems hard for it , to give alice the input of one player ( chosen at random ) and bob the input of all other players , and to prove a lower bound for this two - player problem . if the lower bound for the two - player problem is c , the lower bound for the k - player problem will be Ω(kc) . for the blackboard model , the proofs have the same outline , except in the 2 - player problem alice gets the input of one randomly chosen player , bob gets the input of another , and they both get the inputs of all the rest of the players . there is one important thing to note here : * this argument only works when the hard distribution is symmetric . communication complexity is a widely studied topic . in multiplayer communication complexity , the most studied mode of communication is the blackboard model . the message - passing model was already considered in . ( this model can also be called the _ private - message model _ , but note that this name was used in for a different model .
) the coordinator model can be thought of as a server - site setting , where there is one server and k sites . each site has gathered n bits of information , and the server wants to evaluate a function on the collection of these bits . each site can only communicate with the server , and the server can communicate with any site . this server - site model has been widely studied in the databases and distributed computing communities . work includes computing top - k queries and heavy hitters . another closely related model is the _ distributed streaming model _ , in which we also have one server and k sites . the only difference is that now the computation is dynamic . that is , each site receives a stream of elements over time , and the server would like to continuously maintain , at all times , some function of all the elements in the sites . thus the server - site model can be seen as a one - shot version of the distributed streaming setting . it follows that any communication complexity lower bound in the message - passing model or the coordinator model also holds in the distributed streaming model . a lot of work on distributed streaming has been done recently in the theory community and the database community , including maintaining random samplings , frequency moments , heavy hitters , quantiles , entropy , various sketches and some non - linear functions . we will come back to the latter two models in section [ sec : application ] . it is interesting to note that despite the large number of upper bounds ( i.e. algorithms , communication protocols ) in the above models , very few lower bounds have been proved in any of those models , likely because there were few known techniques to prove such results . a further application of the message - passing model could be for secure multiparty computation : in this model , there are several players who do not trust each other , but want to compute a joint function of their inputs , with each of them learning nothing about the inputs of the other players except what can be learned from the value of the joint function . obviously , any lower bound in the message - passing model immediately implies a lower bound on the amount of communication required for secure multiparty computation . for more on this model , see e.g. . one final application is for the streaming model . in this model , there is a long stream of data that can only be scanned from left to right . the goal is to compute some function of the stream , and minimize the space usage . it is easy to see that if we partition the stream into k parts and give each part to a different player , then a lower bound of c on the communication complexity of the problem in the coordinator model implies a lower bound of c / k on the space usage ( see the sketch below ) . when p passes over the stream are allowed , a lower bound of c in the coordinator model translates to a lower bound of c / ( pk ) in the streaming model . our main technical results in this paper are Ω(nk) lower bounds on the randomized communication for the bitwise k - party and , or , and maj ( majority ) functions in the coordinator model . this can be found in section [ sec : bitwise ] . in the same section we prove some lower bounds for and and or in the blackboard model as well .
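to illustrate the stream - partitioning reduction just described , here is a minimal python sketch ; the xor - of - bits stream function is only a toy example , and `update` / `finish` stand in for an arbitrary small - space streaming algorithm :

```python
def stream_as_protocol(chunks, init_state, update, finish):
    """a one-pass streaming algorithm with s bits of memory, replayed as a
    k-player coordinator-model protocol: player i runs the algorithm on its
    chunk of the stream and forwards the memory state, so the total
    communication is at most (k-1)*s bits; hence a communication lower
    bound of c forces the space s to be at least c/(k-1)."""
    state = init_state
    for chunk in chunks:            # player i's portion of the stream
        for item in chunk:
            state = update(state, item)
        # at this point player i would transmit `state` (<= s bits) onward
    return finish(state)

# toy stream function: the xor of all stream items, split among 4 players
chunks = [[1, 0, 1], [1], [0, 0], [1, 1]]
print(stream_as_protocol(chunks, 0, lambda s, x: s ^ x, lambda s: s))  # 1
```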
back to the coordinator model , we show that the connectivity problem ( given players with subgraphs on a common set of nodes , determine if it is connected ) requires communication .this is in section [ sec : conn ] .the coordinate - wise lower bounds imply lower bounds for the well - studied problems of heavy - hitters and -kernels in the server - site model ( or the other related models ) .we show any randomized algorithm requires at least and communication , respectively .the former is shown to be tight .this is in section [ sec : application ] .we give some direct - sum - like results in section [ sec : directsum ] .in this section we review some basic concepts and definitions .we denote =\{1,\ldots , n\} ] .we state a couple of simple two - party lower bounds that will be useful in our reductions . [[ sec:2-bits ] ] 2- .+ + + let be a distribution over bit - vectors of length , where each bit is with probability and with probability . in this problem alice gets a vector drawn from , bob gets a subset of ] .[ [ sec:2-disj ] ] 2- .+ + + in this problem alice and bob each have an -bit vector .if we view vectors as sets , then each of them has a subset of ] such that and . and with probability , and are random subsets of ] .we now consider multiparty and/or ( below we use -and -for short ) . in the -problem, each player has an -bit vector and we want to establish the bitwise of , that is , for .-is similarly defined with or . observe that the two problems are isomorphic by for , where is obtained by flipping all bits of .therefore we only need to consider one of them . herewe discuss - .we now discuss the hard distribution and sketch how to apply the symmetrization technique for the -or problem in the coordinator model .the formal proof can be found in the next subsection .we in fact start by describing two candidate hard distributions that _ do not _ work .the reasons they do not work are interesting in themselves . throughout this subsection ,assume for simplicity that .the most natural candidate for a hard distribution is to make each entry equal to with probability .this has the effect of having each bit in the output vector be roughly balanced , which seems suitable for being a hard case .this is indeed the hard distribution for the blackboard model , but for the coordinator model ( or the message - passing model ) it is an easy distribution : each player can send his entire input to the coordinator , and the coordinator can figure out the answer .the entropy of each player s input is only , so the total communication would be in expectation using e.g. 
shannon s coding theorem ; this is much smaller than the lower bound we wish to prove .clearly , we must choose a distribution where each player s input has entropy .this is the first indication that the -player problem is significantly different than the -player problem , where the above distribution is indeed the hard distribution .the next candidate hard distribution is to randomly partition the coordinates into two equal - sized sets : the _ important set _ , where each entry is equal to with probability , and the _ balancing set _ , where all entries are equal to .now the entropy of each player s input is , and the distribution seems like a good candidate , but there is a surprising upper bound for this distribution : the coordinator asks players to send him their entire input , and from this can easily figure out which coordinates are in the balancing set and which are in the important set .henceforth , the coordinator knows this information , and only needs to learn the players values in the important set , which again have low entropy .we would want the players to send these values , but the players themselves do not know which coordinates are in the important set , and the coordinator would need to send bits to tell all of them this information .however , they do not need to know this in order to get all the information across : using a protocol known as _ slepian - wolf coding _( see e.g. ) the players can transmit to the coordinator all of their values in the important coordinates , with only total communication ( and a small probability of error ) .the idea is roughly as follows : each player chooses sets ] .let be random subsets of size from , and be random subsets of size from .let be random elements from . let be this input distribution for - .if contains an element , then we call this element a special element .this reduction can be interpreted as follows : alice simulates a random player , and bob , playing as the coordinator , simulates all the other players .bob also keeps a set containing all the special elements that ever appear in .it is easy to observe the following fact .[ lem : and - symmetric ] all are chosen from the same distribution . since by definition chosen from the same distribution , we only need to show that under is chosen from the same distribution as any is under .given , note that is a random set of size in -y ] be the set of indices where the results are .bob checks whether there exists some such that .if yes , then returns yes " , otherwise returns no " .we start by analyzing the communication cost of .since player is chosen randomly from the players , and lemma [ lem : and - symmetric ] that all players inputs are chosen from a same distribution , the expected amount of communication between ( simulated by alice ) and the other players ( simulated by bob ) is at most a fraction of the total communication cost of .therefore the expected communication cost of is at most .for the error bound , we have the following claim : with probability at least , there exists a such that if and only if .first , if , then can not contain any element , thus the resulting bits in can not contain any special element that is not in .on the other hand , we have \le 4k / n. 
] ., let us assume that bob outputs the final result of - .it is easy to see that if we take the bit - wise or of the -bit vectors generated by public random bits and bob s input vector , the resulting -bit vector will have at least a constant density of bits with probability at least , by a chernoff bound . since bob can see the public vectors , to compute the final result , all that bob has to know are bits of alice s vector on those indices where = 0 ] , with the promise that with probability , and with probability , .let be this input distribution for - .now given an input for - , we construct an input for - .let be the complete graph with vertices .given alice sinput and bob s input , we construct players input such that and for all .we first pick a random permutation of ] , according to the distribution , bob s input can also seen as follows : with probability , it is a random matching of size from ; and with probability , it is a matching consists of random edges from and one random edge from , which is where .[ lem : connected ] happens with probability at least when .( this is a proof sketch ; full proof in appendix [ sec : conn - proof ] . )first note that by our construction , both and are with high probability . to locally simplify notation, we consider a graph of nodes where edges are drawn in rounds , and each round disjoint edges are added to the graph .if is connected with probability , then by union bound over and , is true with probability .the proof follows four steps .* we show that all points have degree at least with probability at least ; this uses the first rounds . *we show ( conditioned on ( s1 ) ) that any subset of points is connected to at least distinct points in , with probability at least .* we can iterate ( s2 ) times to show that there must be a single connected component of size at least , with probability at least .* we can show ( conditioned on ( s3 ) and using the last rounds ) that all points are connected to with probability at least .the following lemma shows the properties of our reduction .[ lem : conn - reduction ] assume . if there exists a protocol for -on input distribution with communication complexity and error bound , then there exists a protocol for the -on input distribution with expected communication complexity and error bound . in , alice andbob first construct according to our reduction , and then run the protocol on it . by lemma [ lem : connected ]we have that holds with probability at least .and by our construction , conditioned on that holds , holds with probability at least .thus the input generated by a random reduction encodes the -problem with probability at least .we call such an input a _ good _ input .we repeat the random reduction times ( for some large enough constant ) and run on each of the resulting inputs for - .the probability that we have at least one good input is at least ( by picking large enough ) .if this happens , bob randomly pick an input among those good ones and the result given by running protocol on that input gives the correct answer for -with probability at least , by a markov inequality , since by our reduction at least a fraction of inputs under are good and has error bound under .therefore we obtained a protocol for -with expected communication complexity and error bound . 
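the predicate being reduced to here is easy to state in code ; below is a minimal union - find sketch of the k - player connectivity check , with two toy matchings as an illustrative input ( not the actual hard distribution ) :

```python
def union_is_connected(n, edge_sets):
    """the k-conn predicate from the reduction above: is the union of the
    k players' edge sets a connected graph on the vertex set {0,...,n-1}?
    implemented with union-find (path halving)."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    components = n
    for edges in edge_sets:                  # one player's subgraph
        for u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                components -= 1
    return components == 1

# two players whose matchings jointly form a path on 4 vertices
print(union_is_connected(4, [[(0, 1), (2, 3)], [(1, 2)]]))  # True
print(union_is_connected(4, [[(0, 1)], [(2, 3)]]))          # False
```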
combining lemma [ lem:2-disj ] and lemma [ lem : conn - reduction ], we have the following theorem .assume .then if there exists a protocol that computes -with communication complexity and error , then by lemma [ lem : or - disj ] there exists a protocol that computes -with expected communication complexity and error at most , contradicting lemma [ lem:2-disj ] ( when ) .finally , we have as an immediate consequence .let be an arbitrary function .let be a probability distribution over . consider a setting where we have players : carol , and .carol receives an input from and each receives an input .let denote the randomized communication complexity of computing on carol s input and each of the other players inputs respectively ; i.e. , computing .our direct - sum - like theorem in the message - passing model states the following .[ thm : directsum ] in the message - passing model , for any function and any distribution on , we have ) ] .the theorem follows by yao s min - max principle .suppose that alice and bob get inputs from according to .we can use a protocol for to compute as follows : bob simulates a random player in .w.l.o.g , say it is .alice simulates carol and the remaining players .the inputs for carol and are constructed as follows : , and are picked from according to ( alice knows and so she can compute ) .let be the distribution of in this construction .we now run the protocol for on .the result also gives .since are chosen from the same distribution and is picked uniformly at random from the players other than carol , we have that in expectation , the expected amount of communication between and , , or equivalently , the communication between alice and bob according to our construction , is at most a fraction of the total communication of the -player game .thus ) ] .the proof is similar as that for theorem [ thm : directsum ] .the reduction is the same as that in the proof of theorem [ thm : directsum ] .let .note that if , then with probability , we have .we can boost this probability to by repeating the reduction for times and then focus on a random good input ( if any ) , as we did in the proof of lemma [ lem : conn - reduction ] .the -player protocol on a random good input succeeds with probability at least , by a markov inequality .therefore we have a protocol for with expected communication complexity and error bound .it would be interesting to generalize this to other combining functions such as majority , xor and others .one can also use symmetrization to prove similar direct - sum results for other settings .one such setting is when there are players : players that get and players that get , and the goal is to compute .another setting is the same except that there are only players , and carol receives all of the inputs .we omit the details here .versions of direct - sum in the blackboard model are also possible for some of these settings .we now consider applications where multiparty communication complexity lower bounds such as ours are needed .as mentioned in the introduction , our multiparty communication problems are strongly motivated by research on the server - site model and the more general distributed streaming model .we discuss two problems here , the heavy hitters problem which asks to find the approximately most frequently occurring elements in a set which is distributed among many clients , and the -kernel problem which asks to approximate the convex hull of a set which is distributed over many clients .[ [ varepsilon - kernels . 
] ] -kernels .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + given a set of points , the width in direction is denoted by where is the standard inner product operation . then an -kernel is a subset of so that for any direction an -kernel approximates the convex hull of a point set , such that if the convex hull of is expanded in any direction by an -factor it contains . as such , this coreset has proven useful in many applications in computational geometry such as approximating the diameter and smallest enclosing annulus of point sets .it has been shown that -kernels may require points ( on a -sphere in ) and can always be constructed of size in time .we can note a couple of other properties about -kernels .composibility : if are -kernels of , respectively , then is an -kernel of .transitivity : if is an -kernel of and is an -kernel of , then is an -kernel of .thus it is easy to see that each site can simply send an -kernel of its data of size to the server , and the server can then create and -kernel of of size .this is asymptotically optimal size for and -kernel of the full distributed data set .we next show that this procedure is also asymptotically optimal in regards to communication .[ thm : ekerlb ] for a distributed set of sites , it requires communication total between the sites and the server for the server to create an -kernel of the distributed data with probability at least .we describe a construction which reduces -or to this problem , where each of players has bits of information .theorem [ thm : k - or ] shows that this requires communication .we let each player have very similar data , each player s data points lie in a unit ball . for each player ,their points are in similar position .each player s point is along the same direction , and its magnitude is either or . furthermore , the set of directions are well - distributed such that for any player , and any point that is not an -kernel of ; that is , the only -kernel is the full set .the existences of such a set follows from the known lower bound construction for size of an -kernel .we now claim that the -or problem where each player has bits can be solved by solving the distributed -kernel problem under this construction .consider any instance of -or , and translate to the -kernel problem as follows .let the point of the player have norm when bit of the player is , and have norm if the bit is . by construction , an -kernel of the full set must acknowledge ( and contain ) the point from some player that has such a point with norm , if one exists .thus the full -kernel encodes the solution to the -or problem : it must have points and , independently , the point has norm if the or bit is , and has norm if the or bit is .[ [ heavy - hitters . ] ] heavy hitters .+ + + + + + + + + + + + + + given a multi - set that consists of elements , a threshold parameter , and an error parameter , the _ approximate heavy hitters _ problem asks for a set of elements which contains all elements that occur at least times in and contains no elements that occur fewer than times in .on a static non - distributed data set this can easily be done with sorting .this problem has been famously studied in the streaming literature where the misra - gries and spacesaving summaries can solve the problem in space . 
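for concreteness , here is a minimal python sketch of the misra - gries summary mentioned above ; the stream and the parameter choice are toy examples :

```python
def misra_gries(stream, k):
    """the misra-gries summary with k-1 counters: every element occurring
    more than m/k times in a stream of length m survives in the summary,
    and each surviving counter underestimates the true count by at most
    m/k. a verification pass (or its distributed analogue) then yields
    approximate heavy hitters."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            for y in list(counters):        # the decrement-all step
                counters[y] -= 1
                if counters[y] == 0:
                    del counters[y]
    return counters

# element 1 occurs 6 > 10/3 times, so it must survive
print(misra_gries([1, 1, 2, 1, 3, 1, 4, 1, 5, 1], k=3))  # {1: 4}
```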
inthe distributed setting the best known algorithms use random sampling of the indices and require either or communication to guarantee a correct set with constant probability .we will prove a lower bound of and leave tightening this bound as an open problem .we specify a specific formulation of the approximate heavy hitters problem as -as follows .consider players , each with a bit sequence ( either 0 or 1 ) of length where each coordinate represents an element . the goal is to answer yes for each index with at least elements , no for each index with no more than elements , and either yes or no for any count in between .the reduction is based on a distribution where independently each index has either or 1 bits , each with probability . in the reductionthe players are grouped into sets of players each , and all grouped players for each index are either given a 1 bit or all players are given a 0 bit .these 1 bits are distributed randomly among the groups .the proof then uses corollary [ cor : k - maj ] . - . to lowerbound the communication for - we first show another problem is hard : - ( assume is an integer ) .here there are only players , and each player at each index has a count of or , and we again want to distinguish between total counts of at least ( yes ) and at most ( no ) . by distribution index has a total of either or exactly .and then we distribute these bits to players so each player has precisely either or at each index .when is odd , this is precisely the -maj problem , which by corollary [ cor : k - maj ] takes communication .now it is easy to see that -- , since the former on the same input allows sets of players to talk to each other at no cost .in this paper we have introduced the symmetrization technique , and have shown how to use it to prove lower bounds for -player communication games .this technique seems widely applicable , and we expect future work to find further uses .the symmetrization technique has several limitations , which we wish to discuss here .firstly , there are problems that might be impossible to lower bound using symmetrization .consider for example the -player disjointness problem , where each player gets a subset of , and the goal is to decide whether there is an element that appears in all of the sets .this problem is easier than the coordinate - wise and problem .we believe that this problem has a communication lower bound of in the coordinator model. 
however , it seems impossible to prove this lower bound using symmetrization , for the following reason . suppose we give alice the input of a randomly - chosen player , and give bob the inputs of all the other players . it seems that for any symmetric input distribution , this problem can be solved using few bits in expectation , which is much lower than the lower bound we are aiming for . it is not out of the question that some variant of the symmetrization technique can be used to get the lower bound , but we do not know how to do it , and it might well be impossible . we leave this problem as an intriguing open question . the second limitation is that symmetrization seems to require proving distributional lower bounds for 2 - player problems , often over somewhat - convoluted distributions . this presents some difficulty for the researcher , who needs to start proving lower bounds from scratch and can not use the literature , since lower bounds in the literature are proved for other distributions . yao s minimax principle can not be used , since it only guarantees that there is _ some _ hard distribution , but it does not guarantee anything for the distribution of interest . this is often only a methodological difficulty , since it is often easy to get convinced that the distribution of interest is indeed hard , but a proof of this must still be found , which is a tedious task . it would be useful if there were some way to circumvent this difficulty , for example by finding a way that standard randomized lower bounds can be used . the third limitation is that in order to use symmetrization , one needs to find a hard distribution for the k - player problem which is symmetric . this is usually impossible when the problem itself is not symmetric , i.e. when the players have different roles . for example , one could envision a problem where some of the players get as input elements of some group and the rest of the players get as input integers . however , note that for such problems , symmetrization can still be useful in a somewhat - generalized version . for example , suppose there are two sets of players : in set a , the players get group elements , and in set b , the players get integers . assume each of the sets contains exactly k/2 players . to use symmetrization , we would try to find a hard distribution that is symmetric inside of a and also inside of b ; namely , a distribution where permuting the players inside a has no effect on the distribution , and similarly for permuting the players inside b . then , to use symmetrization we can have alice simulate two random players , one from a and one from b ; bob will simulate all the rest of the players . now symmetrization can be applied . if , alternatively , the set a contained just three players and b contained the remaining players , we can have alice simulate one of the players in b , bob can simulate the rest of the players in b , and either alice or bob can play the three players from a . as can be seen , with a suitable choice of distribution , it should still be possible to apply symmetrization to problems that exhibit some amount of symmetry . the main topic for future work seems to be to find more settings and problems where symmetrization can prove useful .
since discovering this approach we have found repeatedly that problems that we encounter in other settings seem susceptible to this approach , and we think it has the potential to be a widely useful tool .the problems we deal with in this paper are simple and natural , possibly the simplest problems one could ask in the multiplayer communication model .we have not found results about them in the literature , but it makes sense to ask if the lower bounds we prove have trivial proofs : maybe the symmetrization framework that we introduce is not needed , and some of these results can be proved trivially ? the analogues of these results in the -player case are indeed trivial to prove , so why should the -player case be any different ? as with any mathematical result , it is difficult to show that our results do not have trivial proofs hiding somewhere .one argument is that we have tried hard to find such proofs , and failed .a good way to get convinced that the -player setting is significantly different than the -player setting is to attempt to argue about -player protocols .these protocols have much more freedom than -player protocols , and it is harder to argue about what they can not do .perhaps the best evidence that at least some of our results are non - trivial can be found in section [ sec : and : idea ] . in this sectionwe discussed the coordinate - wise and problem in the coordinator model , and have shown that some natural candidates for hard distributions are not hard at all , and that proving a lower bound for the problem seems to require ruling out a certain slepian - wolf - related type of protocols ( see more details there ) .this gives some indication that a lower bound for this problem should be at least somewhat non - trivial .symmetrization is a technique that can be applied to a wide range of -player communication problems , as well as having many possible applications in related models .any problem which has a symmetric hard - looking distribution is a good candidate .below is a list of candidate problems , some of which we lower - bound in this paper , and some that we leave for future work .this list is far from exhaustive .* _ coordinate - wise problems : _ each player gets a vector of length .some symmetric coordinate - wise function is applied , resulting in a length vector .then a `` combining function '' is applied to the bits of the result . could be any output domain . in the examples we dealt with in this paper , was and , or , xor or maj , and was typically the identity function .-player disjointness is the case when is and and is or .we believe that for most choices of and that are non - constant symmetric functions , the complexity of this problem is . it might be interesting to conjecture this holds for any such and .* _ equality : _ each player gets a vector of length , and the goal is to decide whether all players have received the same vector .another variant is to decide whether all players have received distinct vectors .this problem should have a lower bound of in the coordinator model , with a proof using symmetrization ; we defer the details to a later version . * _ graph problems : _ each player gets a set of edges in an -vertex graph , and the goal is to compute some graph property of the graph that consists of all of the edges in the input . 
in this paperwe treat this problem when the graph property is connectivity .obviously , many other properties could be interesting .* _ pointer chasing : _ each player has a set of directed edges in an -vertex graph , where for each vertex , only one player has an edge whose source is .the `` sink '' of the instance is the vertex obtained by starting at vertex number and walking according to the directed edges until reaching a vertex with no outgoing edges .we assume there is always a sink , and the goal is to find it .we would like to thank andrew mcgregor for helpful discussions , and for suggesting to study the connectivity problem .10 p. k. agarwal ,s. har - peled , and k. varadarajan .geometric approximations via coresets ., 2007 . p. k. agarwal ,s. har - peled , and k. r. varadarajan . approximating extent measure of points . , 51:606635 , 2004 . c. arackaparambil , j. brody , and a. chakrabartifunctional monitoring without monotonicity . in _ icalp _ , 2009 .b. babcock and c. olston .distributed top - k monitoring . in _, 2003 . z. bar - yossef , t. s. jayram , r. kumar , and d. sivakumar .an information statistics approach to data stream and communication complexity ., 68:702732 , june 2004 .b. barak , m. braverman , x. chen , and a. rao . how to compress interactive communication .in _ stoc _ , pages 6776 , 2010 . p. cao and z. wang .efficient top - k query calculation in distributed networks . in _podc _ , 2004 .t. chan .faster core - set constructions and data - stream algorithms in fixed dimensions ., 35:2035 , 2006 .g. cormode and m. garofalakis .sketching streams through the net : distributed approximate query tracking . in _ vldb _ , 2005 .g. cormode , m. garofalakis , s. muthukrishnan , and r. rastogi .holistic aggregates in a networked world : distributed tracking of approximate quantiles . in _ sigmod _ , 2005 .g. cormode , s. muthukrishnan , and k. yi . algorithms for distributed functional monitoring . in _ soda _ , 2008 .g. cormode , s. muthukrishnan , k. yi , and q. zhang .optimal sampling from distributed streams . in _ pods _ , 2010 .g. cormode , s. muthukrishnan , and w. zhuang .what s different : distributed , continuous monitoring of duplicate - resilient aggregates on data streams . in _icde _ , 2006 .t. m. cover and j. a. thomas . .wiley - interscience , 1991 .p. duris and j. d. p. rolim .lower bounds on the multiparty communication complexity . , 56(1):9095 , 1998 .a. gl and p. gopalan .lower bounds on streaming algorithms for approximating the length of the longest increasing subsequence . in _ focs _, 2007 .o. goldreich .secure multi - party computation . , 2002 .available at http://www.wisdom.weizmann.ac.il/~oded/pp.html .s. guha and z. huang . revisiting the direct sum theorem and space lower bounds in random order streams . in _icalp _ , 2009 .z. huang , k. yi , y. liu , and g. chen .optimal sampling algorithms for frequency .estimation in distributed data . in _ infocom _ , 2011 .b. kalyanasundaram and g. schintger . ., 5:545557 , 1992 .m. karchmer , r. raz , and a. wigderson .super - logarithmic depth lower bounds via the direct sum in communication complexity ., 5(3/4):191204 , 1995 .r. keralapura , g. cormode , and j. ramamirtham .communication - efficient distributed monitoring of thresholded counts . in _sigmod _ , 2006 .e. kushilevitz and n. nisan . .cambridge , 1997 .a. manjhi , v. shkapenyuk , k. dhamdhere , and c. olston .finding ( recently ) frequent items in distributed data streams . in _ icde _ , 2005 .a. metwally , d. agrawal , and a. e. 
abbadi .an integrated efficient solution for computing frequent and top - k elements in data streams ., 2006 .s. michel , p. triantafillou , and g. weikum .klee : a framework for distributed top - k query algorithms . in _vldb _ , 2005 .j. misra and d. gries .finding repeated elements .2:143152 , 1982 . m. patrascu . towards polynomial lower bounds for dynamic problems . in _ stoc _ ,pages 603610 , 2010 .b. patt - shamir and a. shafrir .approximate distributed top- queries ., 21(1):122 , 2008 . a. a. razbarov . on the distributional complexity of disjointness . in _icalp _ , 1990 .i. sharfman , a. schuster , and d. keren .a geometric approach to monitoring threshold functions over distribtuted data streams . in _sigmod _ , 2006 .i. sharfman , a. schuster , and d. keren .shape sensitive geometric monitoring . in _ pods _ , 2008 .a. c .- c .probabilistic computations : toward a unified measure of complexity ( extended abstract ) . in _ focs _ , 1977 .k. yi and q. zhang .optimal tracking of distributed heavy hitters and quantiles . in _pods _ , 2009 .h. yu , p. k. agarwal , r. poreddy , and k. r. varadarajan . practical methods for shape fitting and kinetic data structures using coresets . in _ socg _ , 2004 .q. zhao , m. ogihara , h. wang , and j. xu .finding global icebergs over distributed data sets . in _ pods _ , 2006 .* lemma [ lem:2-bits ] ( restated ) . *_ = \omega(n \rho \log(1/\rho)) ] .* lemma [ lem:2-disj ] ( restated ) . *_ when has with probability then = \omega(n) ] , such that are disjoint and .next , with probability , we choose alice s input to be union a random subset of with size ; and with probability , we choose to be a random subset of with size .similarly , with probability , we choose bob s input to be union a random subset of with size ; and with probability , we choose to be a random subset of with size . given a partition , a rectangle , and alice and bob s inputs and , we define & & q(t ) = { \mathbf{pr}}[y \in c\ |\ t ] \\p_0(t ) = { \mathbf{pr}}[x \in c\ |\ t , i\not\in x ] & & q_0(t ) = { \mathbf{pr}}[y \in c\ |\ t , i\not\in y ] \\p_1(t ) = { \mathbf{pr}}[x \in c\ |\ t , i\in x ] & & q_1(t ) = { \mathbf{pr}}[y \in c\ |\ t , i\in y]\end{aligned}\ ] ] we have the following observations . [ ob : properties ] 1 .similarly , .if is fixed , then and are fixed .similarly , if is fixed , then and are fixed . = { \mathbf{e}}_t[p(t)\ |\ t_1] ] , and < 1/5 ] , and $ ] . \cdot { \mathbf{pr}}[(x , y ) \inr\ |\ t , i\in x , i\in y ] ) \\ & = & 1/t \cdot { \mathbf{e}}_t[p_1(t ) q_1(t)]\end{aligned}\ ] ] and \right ) \\ & = & ( 1 - 1/t ) \cdot { \mathbf{e}}[p_0(t)q_0(t)]\end{aligned}\ ] ] ( for lemma [ lem : mono - rect ] ) \\ & \ge & 1/t \cdot { \mathbf{e}}[p_1(t ) q_1(t ) ( 1 - \chi(t ) ) ] \\ &\ge & 1/t \cdot { \mathbf{e}}[(1/3 \cdot p_0(t ) - 2^{-0.01n } ) ( 1/3 \cdot q_0(t ) - 2^{-0.01n})(1 - \chi(t ) ) ] \\ & \ge & 1/t \cdot 1/9 \cdot \left(1 - \frac{2/5}{1 - 1/\sqrt{t}}\right ) \cdot { \mathbf{e}}[p_0(t)q_0(t ) ] - 2^{-0.01n } \\ & = & 1/t \cdot 1/9 \cdot \left(1 - \frac{2/5}{1 - 1/\sqrt{t}}\right ) \cdot \frac{1}{1 - 1/t } \cdot \mu(a \cap r ) - 2^{-0.01n } \\ &\ge & 1/40 t \cdot \mu(a \capr ) - 2^{-0.01n}\end{aligned}\ ] ]we provide here a full proof for the probability of the event , that both subset of the graph and are connected . first note that by our construction , both and are with high probability . 
to locally simplify notation, we consider a graph of nodes where edges are drawn in rounds , and each round disjoint edges are added to the graph .if is connected with probability , then by union bound over and , is true with probability .the proof follows four steps . 1 . _all points have degree at least ._ since for each of rounds each point s degree increases by with probability , then the expected degree of each point is after the first rounds .a chernoff - hoeffding bound says that the probability that a point has degree less than is at most .then by the union bound , this holds for none of the points with probability at least .2 . _ conditioned on ( s1 ), any subset of points is connected to at least distinct points in ._ at least points are outside of , so each point in expects to be connected at least times to a point outside of .each of these edges occur in different rounds , so they are independent. thus we can apply a chernoff - hoeffding bound to say the probability that the number of edges outside of for any point is less than is at most .thus the probability that no point in has fewer than edges outside is ( since ) at most .+ if the edges outside of ( for all points ) are drawn independently at random , then we need to bound the probability that these go to more than distinct points or distinct points . sincethe edges are drawn to favor going to distinct points in each round , it is sufficient to analyze the case where all of the edges are independent , which can only increase the chance they collide . in either case or each time an edge is chosen ( until vertices have been reached , in which case we can stop ) , of all possible vertices are outside the set of edges already connected to .so if we select the edges one at a time , each event connects to a distinct points with probability at least , so we expect at least distinct points .again by a chernoff - hoeffding bound , the probability that fewer than distinct points have been reached is at most ( for ) .together the probability of these events not happening is at most .there is a single connected component of size at least . _start with any single point , we know from step 1 , its degree is at least .then we can consider the set formed by these points , and apply step 2 to find another points ; add these points to .the process iterates and at each round , by growing only from the newly added points .so , by round the set has grown to at least size .taking the union bound over these rounds shows that this process fails with probability at most .all points in are connected to . _each round each point is connected to with probability at least .so by coupon collector s bound , using the last rounds all points are connected after sets of rounds with probability at least .
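the rounds - of - disjoint - edges process analysed above can also be checked empirically ; below is a toy monte carlo sketch , which simplifies each round to a uniformly random perfect matching ( an assumption made here only for illustration , not the exact distribution used in the reduction ) :

```python
import random

def random_matching(n):
    """a uniformly random perfect matching on n (even) labelled vertices,
    a simplified stand-in for the disjoint edges added in one round."""
    perm = list(range(n))
    random.shuffle(perm)
    return [(perm[i], perm[i + 1]) for i in range(0, n, 2)]

def rounds_connect(n, rounds):
    """is the union of `rounds` random matchings connected? (union-find)"""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    comp = n
    for _ in range(rounds):
        for u, v in random_matching(n):
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                comp -= 1
    return comp == 1

# a logarithmic number of rounds already connects the graph almost surely
n, rounds, trials = 64, 12, 200
print(sum(rounds_connect(n, rounds) for _ in range(trials)) / trials)
```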
|
in this paper we prove lower bounds on randomized multiparty communication complexity , both in the _ blackboard model _ ( where each message is written on a blackboard for all players to see ) and ( mainly ) in the _ message - passing model _ , where messages are sent player - to - player . we introduce a new technique for proving such bounds , called _ symmetrization _ , which is natural , intuitive , and relatively easy to use in comparison to other techniques for proving such bounds such as the _ icost method _ . for example , for the problem where each of k players gets a bit - vector of length n , and the goal is to compute the coordinate - wise xor of these vectors , we prove a tight lower bound of Ω(nk) in the blackboard model . for the same problem with and instead of xor , we prove a lower bound of roughly Ω(nk) in the message - passing model ( assuming k is not too large relative to n ) and Ω(n log k) in the blackboard model . we also prove lower bounds for bit - wise majority , for a graph - connectivity problem , and for other problems ; the technique seems applicable to a wide range of other problems as well . the obtained communication lower bounds imply new lower bounds in the _ functional monitoring model _ ( also called the _ distributed streaming model _ ) . all of our lower bounds allow randomized communication protocols with two - sided error . we also use the symmetrization technique to prove several direct - sum - like results for multiparty communication .
|
raman micro - spectroscopy is well suited for studying a variety of properties including chemical , magnetic , lattice , thermal , electronic , symmetry , and crystal orientation . as such , this technique has been applied to wide - ranging areas including chemistry , physics , materials science , and biology . many interesting phenomena only emerge at low temperatures , and as such it is often highly desirable to measure and/or image a sample below room temperature . in addition , the temperature dependence of raman features often reveals new information , such as the strength of the phonon anharmonicity crucial for thermal properties . for organic samples , low temperatures can immobilize the material in a near - native state , revealing much more detailed information about the samples and their interaction spectra . thus , numerous insights can be gained by measuring the temperature dependence of the raman response . typically , temperature control requires the use of cumbersome and expensive cryogenic liquids . for raman micro - spectroscopy this can be extremely challenging , due to a number of factors including the rapidly rising cost of helium , the low raman cross sections , the requirement of high spatial and spectral resolution , as well as the need to use low laser power to prevent heating . this often creates a competing set of requirements : long integration times , the use of high numerical aperture ( na ) objectives with low working distances , minimized use of helium , the need to place the sample in vacuum , and the need to keep the objective at a fixed location / temperature . to date , this has led to two different designs of low - temperature raman microscopes . the first approach is to place the objective inside the cooling medium / vacuum , which enables high na , but requires cryo - compatible objectives and leads to a strong temperature dependence of the objective s performance as well as of its relative alignment with the sample . the second approach employs an intermediate - na , long working distance , glass compensated objective outside the cryostat . this results in higher mechanical stability of the objective , but at the cost of the spot size , the polarization , and especially the collection efficiency . in addition , one also desires to make such systems as automated as possible , to reduce operational errors and enable higher temperature resolution . therefore , there has been an increasing demand for an automated system with high collection efficiency as well as thermal and mechanical stability . in this article , we describe a new raman microscope design , equipped with automated cryogenic temperature , laser power and polarization control as well as motorized imaging functions . temperature changes in our system are based on an automated closed - cycle cryostation , designed and manufactured by montana instruments inc .
for raman excitation and collection , a cryo optic module was employed , comprising a 100x , 0.9 na microscope objective , installed inside the cryostation and kept at a constant temperature by a proportional - integral - derivative ( pid ) control loop . an agile temperature sample mount was designed and installed , ensuring a fast thermal response ( less than 5 minutes from 4 k to 350 k ) as well as excellent cryostation platform mechanical ( 5 nm ) and thermal ( a few mk ) stability . in addition to the small spot size and excellent collection efficiency , this enables improved collimation by the objective , which provides excellent spectral resolution , stability , and rayleigh rejection with notch filters ( allowing signals down to 30 cm^-1 ) . moreover , a nearly perfect polarization response can be measured at any in - plane angle using a fresnel rhomb . these combined features produce much more reliable measurements , with long integration times and continuous experiments lasting multiple weeks without the need for human intervention . this opens the door to widespread use of cryogenic raman microscopy to probe nano - materials with low thermal conductivities and very weak raman responses . raman scattering , or the raman effect , is the inelastic scattering of a photon . the energy difference between the incoming and outgoing photons corresponds to an excitation energy within the measured material , enabling the use of a single - wavelength light source to probe multiple excitations of a material . the basic components of a raman microscope comprise a continuous wave laser , optical components to guide and focus the beam onto the sample , a laser filter , and a detector . after illuminating a sample , the elastic rayleigh as well as the raman scattered light is collected by the same objective in the backscattering configuration . usually , the collected light must pass through a set of filters before entering a spectrometer because of the very small scattering cross - section of the raman processes . in raman experiments , one often employs group theory to determine the symmetry of the mode measured . specifically , the mode intensity is given by a raman tensor ( r ) and the polarization of the incoming ( e_i ) and outgoing ( e_s ) light . indeed , the crystallographic axes are the bases of r and the intensity is given by : i ∝ | e_s · r · e_i |^2 . therefore , by measuring the raman response for various configurations of e_i and e_s , as well as at different angles with respect to the sample , one can gain insights into the symmetry of the excitations and determine the crystallographic orientation . we now describe in detail the overall design and layout of our raman cryo - microscope system , which is shown in fig . [ fig : raman_setup ] .
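to make the selection rules discussed above concrete , the following minimal python sketch evaluates this intensity formula for in - plane polarizations ; the antidiagonal 2x2 tensor is a hypothetical example of a symmetry - allowed form , not a tensor taken from this work :

```python
import numpy as np

def raman_intensity(R, theta_i, theta_s):
    """scattered intensity |e_s . R . e_i|**2 for in-plane polarization
    angles theta_i (incoming light) and theta_s (outgoing/analyzer)."""
    e_i = np.array([np.cos(theta_i), np.sin(theta_i)])
    e_s = np.array([np.cos(theta_s), np.sin(theta_s)])
    return np.abs(e_s @ R @ e_i) ** 2

# hypothetical antidiagonal raman tensor (a b1g-like mode, for illustration)
R = np.array([[0.0, 1.0], [1.0, 0.0]])
for deg in (0, 45, 90):
    t = np.deg2rad(deg)
    co = raman_intensity(R, t, t)               # collinear (xx) response
    cross = raman_intensity(R, t, t + np.pi/2)  # crossed (xy) response
    print(deg, round(co, 3), round(cross, 3))   # sin^2(2t) and cos^2(2t)
```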
to achieve high spatial and spectral resolution , a laser quantum torus 532 nm laser with a ghz - bandwidth was used for the excitation . a true zero - order half waveplate ( hw1 ) mounted on a motorized universal rotator and a cubic polarizer ( p1 ) were placed after the laser . since the rotation of the half waveplate will induce a change in the polarization of the laser , the laser power after p1 can be effectively adjusted by rotating hw1 , while maintaining the same polarization at the sample . in addition , the thinness of hw1 minimizes changes in the direction of the laser beampath upon changing the power . a second true zero - order half waveplate ( hw2 ) was placed after p1 , such that the measurement configuration could be switched between collinear ( xx ) and crossed ( xy ) polarizations . following the optical path and several silver mirrors ( m ) , the laser was directed to a diffractive 90/10 beamsplitter ( dbs ) from ondax inc . , reflecting 90 % of the excitation source , and rejecting 90 % of the rayleigh scattered light after exciting the sample . the laser is then guided towards the cryostation . before entering the cryostation , the laser passes a double fresnel rhomb ( standa inc . , 14fr2 - vis - m27 ) , which effectively acts as a broad - band half waveplate : at each of its total internal reflections a phase difference between the s and p polarizations is added , adding up to a total retardance of half a wave . thus , the polarization of the incoming laser can be rotated by rotating the fresnel rhomb ( fr ) . [ figure : the incoming laser is shown in red ; the blue lines are both the raman and the rayleigh scattered light ; half waveplate ( hw2 ) , analyzer ( p2 ) , 90/10 beamsplitter ( dbs ) , objective ( o ) . for clarity , the guiding mirrors between hw2 and dbs are omitted . ] after exciting the sample , the raman scattered light ( shown as the blue beam in fig . [ fig : raman_setup ] ; see the figure caption for details about the shared beampath ) is collected by the objective inside the cryostation and follows the incoming path . the raman scattered light passes through the fresnel rhomb and the analyzer ( p2 ) after the 90/10 beamsplitter , as shown in fig . [ rhomb_figure ] . as mentioned above , the selection rules provide symmetry information about the excitation as well as the crystallographic axes . however , to achieve this without the fresnel rhomb , one would have to rotate the sample , resulting in a loss of focus , a lateral displacement of the beam with respect to the sample , and added complexity of the setup . another method would be to simultaneously rotate components hw2 and p2 ( analyzer / polarizer ) . however , the diffraction grating in the spectrometer is most efficient for a fixed polarization ( s - polarization ; a detailed discussion will be given in sec . [ sec : charac ] ) . thus , rotating p2 would inadvertently affect the signal , even for an isotropic raman tensor . the optimal result can be achieved by a double - loop search method : one rotates hw2 by a small amount , then rotates p2 while monitoring the signal level of the rayleigh scattered light to find the maximum counts , repeating until the counts are optimized . by introducing the fresnel rhomb , one can fix p2 to optimize the efficiency of the spectrometer , manipulate hw2 to change between the crossed and co - polarized configurations , and use the rhomb to effectively rotate the sample s crystallographic axes about the fixed axes of the optical system .
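a minimal python sketch of this double - loop search is given below ; the malus - like detector model and its peak angles are toy placeholders standing in for the real photon - count readout , and a simple grid version of the search is shown for clarity :

```python
import numpy as np

def double_loop_search(counts, hw2_grid, p2_grid):
    """the coordinate search described above: for each small step of hw2,
    sweep p2 while monitoring the (rayleigh) counts, and keep the pair of
    angles that maximizes the signal. counts(hw2, p2) is a stand-in model,
    not the actual instrument interface."""
    best = (-np.inf, None, None)
    for hw2 in hw2_grid:                 # outer loop: half waveplate angle
        for p2 in p2_grid:               # inner loop: analyzer angle
            c = counts(hw2, p2)
            if c > best[0]:
                best = (c, hw2, p2)
    return best                          # (max counts, hw2 angle, p2 angle)

# toy detector model: malus-like response peaked at hw2 = 10 deg, p2 = 0 deg
model = lambda hw2, p2: (np.cos(np.deg2rad(2 * (hw2 - 10))) ** 2
                         * np.cos(np.deg2rad(p2)) ** 2)
print(double_loop_search(model, np.arange(0, 90, 2), np.arange(-44, 46, 2)))
```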
to reject the rayleigh scattered light , and to reach a cut - off energy of 30 cm^-1 , two ondax sureblock volume holographic bragg grating based diffractive notch filters ( dnf ) were placed after the analyzer . using notch filters , as opposed to the commonly used edge filters , also allows both the stokes and the anti - stokes raman signals to be recorded , which is a useful indicator for testing local heating of the sample . these two filters plus the dbs result in an od 8 attenuation of the rayleigh scattered light . finally , the raman scattered light is directed to a sine - drive spectrometer module equipped with an ultra - high resolution 2400 grooves / mm holographic grating . prior to entering the spectrometer , the light passes through an external mechanical slit mounted on a 3 - axis translation stage . this allows for maximum resolution and collection efficiency by positioning the slit at the focal point of the first lens inside the spectrometer . to reduce chromatic and spherical aberration effects , an anti - reflection coated achromatic doublet lens ( al ) with a focal length of 50 mm ( thorlabs , inc . ac254 - 050 - a - ml ) was used to focus the light onto the slit . ultimately , the diffracted light is detected by an andor idus back - illuminated spectroscopy ccd . the detector operates in sub - image bin mode and at the lowest - rate adc ( analog - to - digital conversion ) channel to reduce the noise level . to bring samples into focus during measurements , and to find features and/or micron - sized samples on a substrate , imaging the sample is necessary . to obtain more repeatable images , our raman microscope is equipped with computer - controlled white light illumination and imaging capabilities ( all parts denoted in red in fig . [ fig : raman_setup ] ) . the illumination light ( shown by the green beam in fig . [ fig : raman_setup ] ) is delivered using a multi - mode fiber through a fiber coupler ( fc ) to a condenser ( cds ) and then reflected by a beamsplitter ( bs ) towards a cubic beamsplitter ( cbs ) . a home - built motorized long - range translation stage was used to move the cbs into / out of ( shown by the transparent and opaque cbs , respectively , in fig . [ fig : raman_setup ] ) the beampath for imaging ( raman signal collection ) . the cbs was fixed on a linear ball bearing carrier . a nema 17 stepper motor ( sm ) coupled with a 1/4 - 20 acme threaded shaft was used for the translation of the linear ball bearing carrier . the motion of the motor was controlled by a computer - interfaced arduino uno microcontroller and a motor shield . two microswitches connected to the uno were used as stoppers for the carrier . when the cbs is moved into the beampath , the illumination light shares the same beampath as the laser and is directed onto the sample . after illumination , the reflected light follows the incoming path to the cbs , is then transmitted ( shown by the gold beam in fig . [ fig : raman_setup ] ) through the bs , and is ultimately directed by a mirror ( m ) through a lens with a 15 cm focal length to form an image on a thorlabs cmos camera ( cmos ) .
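host - side control of such an arduino - driven stage typically amounts to a few serial commands ; the sketch below is a hypothetical example only : the port name , baud rate , and the single - character "i" / "o" command protocol with an "ok" acknowledgement are invented placeholders , not the actual firmware interface of this instrument :

```python
import serial  # pyserial

def move_cbs(direction, port="/dev/ttyACM0", baud=9600, timeout=30):
    """ask the (hypothetical) arduino firmware to drive the cbs carrier
    'in' (imaging) or 'out' (raman collection); the firmware is assumed to
    acknowledge once the corresponding microswitch stopper trips."""
    cmd = b"i" if direction == "in" else b"o"
    with serial.Serial(port, baud, timeout=timeout) as link:
        link.write(cmd)
        reply = link.readline()          # blocks until the ack or timeout
        return reply.strip() == b"ok"

# move_cbs("in")    # switch to white-light imaging
# move_cbs("out")   # switch back to raman signal collection
```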
to focus the excitation source on our sample , and to collect the ( anti- ) stokes shifted radiation , we used a modified 100x , 0.90 na zeiss objective with a working distance of 310 μm , vented to operate in vacuum . the commercially available closed - cycle montana instruments cryostation is equipped with a cryo optic module that was developed in collaboration with the burch group , and designed to mount the zeiss objective . the cryostation includes a low - vibration ( 5 nm peak - to - peak ) cold platform ( 3.5 k ) for the sample , and a sample radiation shield ( see fig . [ fig : cryochamber ] ) . the radiation shield reduces radiative thermal loads on the sample , and serves as a thermal anchor for lagging wires that enter the sample space . built - in attocube 101 - series xyz nanopositioners handle sample translation and focusing , providing 5 mm of translation range with sub - micron resolution . the cryo optic module mounts directly onto the radiation shield ( as shown in fig . [ fig : cryochamber ] ) so that the aperture separating the sample and the objective is cooled to 60 k. moreover , the direct mechanical coupling of the objective and the sample space inside the vacuum chamber ensures a stable focal plane with respect to the sample ; an important requirement for long integration times , and difficult to achieve with an objective mounted outside of the vacuum chamber . this rigid mechanical connection is pivotal for the high - na objective , as the depth of focus is inversely proportional to the square of the na . with the 532 nm excitation source , a diffraction - limited , sub - micron spot size was achieved , as optically observed , and confirmed by the diameter of holes purposely burned into bi flakes . the combination of a small spot size , high na , and high mechanical stability solves some of the key challenges in temperature - dependent raman investigations of novel samples , as small crystal size , low heat capacity , and weak raman signals require high lateral and axial resolution as well as low excitation power , and thus long integration times . a cross - section of the mechanical design of the cryo optic module is shown in fig . [ fig : cryochamber ] . alignment of the objective with respect to the 50 μm thick beryllium copper aperture is achieved by attaching a reference surface to the bottom of the aperture plane and employing the previously described white light imaging capabilities of the system . the threaded copper rings to which the objective is mounted are rotated until the reference surface is in focus , and are subsequently locked in place . after installing the cryo optic module onto the radiation shield , the sample is brought into focus using the nanopositioners described above . a key feature of the cryo optic module is the thermal isolation of the zeiss objective from the 60 k aperture . this allows for a closed - loop temperature - controlled objective ( kept at 310 ± 0.5 k ) , while the 60 k aperture limits the radiative heatload of the objective to a fraction of the cryostation cooling power at 4 k. moreover , with such careful temperature control , the objective and sample are impervious to typical environmental temperature fluctuations , and the objective s optical performance is unaffected by the sample temperature . hence , drift of the sample with respect to the objective is limited only by the thermal contraction and expansion of the cryostation base platform , the nanopositioners , and the sample mount .
to further reduce this drift , a sample stage was developed by montana instruments in collaboration with the burch group that can change temperature nearly independently of the cryostation platform and the nanopositioners , which is discussed in sec . [ sec : atsm ] . to achieve high mechanical and thermal stability of the sample platform , an agile temperature sample mount ( atsm ) was developed and integrated into the raman microscope . fig . [ fig : heatloadcycle]c shows two pictures of the atsm , in which a 500 thick copper sample platform is surrounded by a 4 k radiation shield , and is radially supported just below the platform by g-10 thermal stand - offs . this platform thickness and support geometry with rotational symmetry result in minimal drift in the focal plane of the microscope , and strongly enhance in - plane positional stability . for minimal error in sample temperature readings , a cernox cx-1050-ht is mounted on the bottom of the platform , thus separated from the sample by a 500 layer of oxygen - free high thermal conductivity ( ofhc ) copper . moreover , the sample platform is equipped with closed loop controlled solid state heating elements ( chip resistors in dead - bug configuration ) for highly isotropic heat exchange and temperature control . this allows for precise temperature control over the sample platform , nearly independent of the cryostation base platform , which maintains a temperature between 3 and 14 k over the full 4 - 350 k range of the atsm . hence , while cycling between temperatures , the thermal stability outside of the atsm platform is only minimally affected , reducing thermal drifts within the radiation shield and housing of the system over longer time - scales . as a result , a comparison of measurements taken before and after installing the atsm revealed over an order of magnitude improvement in the time required to reach a stable temperature and position . this was determined both by temperature readings and by producing a focused image between temperature changes . this is in part due to the agile temperature response of the stage ( as described below ) , and in part due to the rapid thermal stability of the atsm and optical train , as the temperature change and thermal expansions / contractions settle rapidly due to the minimal mass . fig . [ fig : xyzstability]a illustrates the positional stability in the -plane and along the -axis , where is parallel to the poynting vector of the laser . these data were obtained by cooling down an nt - mdt scanning probe microscope ( spm ) calibration grating ( model tgz3 ) with a period of 3 . using the closed loop positioners , a white light image of the grating was kept in focus and centered on the screen throughout the cooldown . all required _ xyz _ displacements as a function of temperature were recorded and plotted in fig . [ fig : xyzstability]a . beyond overall stability , one may worry about vibrations caused by the closed - cycle system . this potential effect on the optical system was minimized by placing the optics on a separate platform from the cryostation , though on the same optical table . the effect on the raman signal was checked by collecting raman spectra of a sample at room temperature with the cryostation compressor turned on and with it turned off . no significant difference was observed . furthermore , the overall design of the cryostation and the atsm minimizes the effects of vibration on the sample , as was measured using a lion precision cpl490 capacitive displacement sensor .
with the atsm mounted on top of the positioner stack , and with the cryostation compressor running at high power , the room temperature in - plane ( and ) vibrations did not exceed 60 nm , and are thus well below the diffraction limit of our 532 nm excitation source . while vibrations along the poynting vector ( -axis ) were not measured with the capacitive displacement sensor , they are unlikely to exceed the in - plane vibrations as a result of the construction of the positioner stage stack . white light imaging of the above mentioned spm calibration grating ( while the compressor was running ) confirmed that mechanical vibrations along were below the axial resolution of our system , as no defocussing was observed . vibrations of the atsm were also measured in isolation from the positioners , while mounted directly to the cryostation platform . in this arrangement , peak - to - peak vibrations did not exceed 5 nm , and an atsm platform resonance frequency of 8.2 khz was found . the thermal stability of the atsm platform is shown in fig . [ fig : xyzstability]b , where each temperature point was recorded over a period of 20 minutes . we note that while a conventionally large sample platform mass passively aids thermal damping , the _ low drift _ geometrical constraints of the atsm ( and thus its low mass ) force it to rely strictly on a finely tuned pid loop for thermal stability , in addition to the thermal stability of the cryostation . indeed , with the low platform mass of the atsm , and the high thermal conductivity of ofhc copper below 50 k , we observed temperature changes in excess of 100 k / s at low temperatures , thus imposing stringent requirements on the frequency and i / o resolution of the pid control algorithm . a lakeshore 335 temperature controller was used to meet these pid requirements . our observed decline in thermal stability below 50 k coincides with a large jump in the thermal conductivity of copper , and is thus attributed to a sub - optimally tuned temperature control loop . nevertheless , the thermal instability over the full temperature range never exceeds 4.5 mk rms ( 32 mk peak - to - peak ) , which easily meets the requirements for raman microscopy , even at high temperature resolution , as shown in section [ sec : charac ] . it is important to note that while the ultimate goal of the atsm is positional and thermal stability , its design also leads to a reduction in sample cooling power compared to the cryostation base platform . this concern was addressed by optimizing the thermal standoff of the atsm platform so that its cooling power exceeds typical radiative heatloads ( 0.1 mw ) of the raman excitation source by several orders of magnitude , while maintaining a low base temperature . fig . [ fig : heatloadcycle]a shows a log - log heatload map of the atsm over the full temperature range , emphasizing its low temperature cooling power . in addition to stability , the design of the atsm also allows for highly agile temperature control , resulting in minimal time loss between temperature set - points . to illustrate this agility , fig . [ fig : heatloadcycle]b shows a number of 20 minute thermal cycles between 4 - 350 k , stabilizing down to mk rms at both extremes of the cycle .
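the stringent pid requirements mentioned above can be made concrete with a minimal sketch of a discrete pid update of the kind the temperature controller implements in firmware ; the gains , interval and clipping below are placeholders , not the values tuned for the atsm :

```python
# illustrative discrete pid update; kp, ki, kd and dt are hypothetical
def pid_step(setpoint, reading, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
    """return (heater_output, new_state) for one control interval."""
    error = setpoint - reading
    integral = state["integral"] + error * dt
    derivative = (error - state["prev_error"]) / dt
    output = kp * error + ki * integral + kd * derivative
    output = max(0.0, min(output, 1.0))  # heater power clipped to [0, 1]
    return output, {"integral": integral, "prev_error": error}

state = {"integral": 0.0, "prev_error": 0.0}
power, state = pid_step(100.0, 95.0, state)   # one update toward 100 k
```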
to demonstrate the performance of our setup , we measured two challenging samples , bi and v . they are of wide interest for applications in thermoelectric and memristive devices , as well as for their topological insulating ( bi ) and strongly correlated ( v ) behavior . the first data set was obtained from single crystal bi , as it is of interest to a wide range of researchers , such as those working on topological insulators , thermoelectrics and nano materials . of particular interest is the potential for the temperature dependence of its phonons to unravel the origin of its low thermal conductivity . however , the low thermal conductivity which makes it an excellent thermoelectric also forces one to keep the laser power extremely low to avoid local heating . thus , if the collection efficiency is not high enough , obtaining a sufficient signal to noise ratio is extremely challenging . in addition , the raman response of bi is well characterized , making it a good sample to test the capability of our system . the sample was freshly cleaved just before being placed into the sample chamber . the power of the laser used for the measurement was 40 to avoid local laser heating , and the laser spot size was in diameter . the primary results are shown in fig . [ raman_temperature_bs ] . each spectrum consists of an average of 5 acquisitions taken for 3 minutes each and is normalized by the height of the first phonon peak . [ figure raman_temperature_bs : temperature dependent raman spectra of bi in the xx configuration . excellent signal to noise is observed at all temperatures , despite the low raman response and thermal conductivity of bi . ] we can see three phonon peaks visible over the whole temperature range . they are located at 71.7 , 131.5 and 174.0 at 271 k , in agreement with previous studies . to confirm the absence of local laser heating , we exploit the well known relationship between the ratio of the stokes ( s ) and anti - stokes ( as ) intensities and the local temperature , given ( up to a frequency - dependent prefactor that is a small correction for these low - energy phonons ) by : $$\frac{i_s}{i_{as}} = \exp\left( \frac{\hbar\omega}{k_b t} \right) ,$$ where $\hbar\omega$ is the phonon energy and $t$ the local temperature . thus we expect a linear relationship between the sample temperature and the inverse of the log of this ratio . the absence of local heating is indeed demonstrated in fig . [ ratio_ans_s ] , where we plot $[ \ln ( i_s / i_{as} ) ]^{-1}$ as a function of the set temperature .
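the inversion of this relation that underlies fig . [ ratio_ans_s ] can be written in a few lines ; the sketch below assumes the simple exponential form above ( prefactor neglected ) and uses an invented example ratio :

```python
import numpy as np

def local_temperature(phonon_cm, i_stokes, i_antistokes):
    """local temperature in k from the stokes / anti-stokes intensity
    ratio of a phonon with energy given in cm^-1, assuming
    i_s / i_as = exp(hbar*omega / (k_b * t))."""
    hc_over_kb = 1.4388  # hc / k_b in k * cm, converting cm^-1 to kelvin
    return hc_over_kb * phonon_cm / np.log(i_stokes / i_antistokes)

# hypothetical reading for the 71.7 cm^-1 phonon: a ratio of 1.45
# corresponds to a local temperature of about 278 k
print(local_temperature(71.7, 1.45, 1.0))
```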
|
raman micro - spectroscopy is well suited for studying a variety of material properties and has been applied to wide - ranging areas . combined with tuneable temperature , raman spectra can offer even more insight into the properties of materials . however , previous designs of variable temperature raman microscopes have made it extremely challenging to measure samples with low signal levels , due to thermal and positional instability as well as low collection efficiencies . thus , contemporary raman microscopes have found limited applicability in probing the subtle physics involved in phase transitions and hysteresis . this paper describes a new design of a closed - cycle raman microscope with full polarization rotation . high collection efficiency and thermal and mechanical stability are ensured by deliberate optical , cryogenic , and mechanical design . two samples , bi and v , which are known to be challenging due to low thermal conductivities , low signal levels and/or hysteretic effects , are measured with previously undemonstrated temperature resolution .
|
a challenge in biology is to understand how complex regulatory networks in a cell execute different biological functions , such as the dna replication and mitosis events in eukaryotic cell cycle processes , in an orderly and reliable manner . we refer to such a process as a sequential - task process . in this work , we establish a simplified toy model for the budding yeast cell cycle , and develop analysis methods to describe the evolving process . our results show that the cell cycle process is an excitable system with global robustness towards different initial states , where the phase transitions of the cell cycle are controlled by feedback in the network and driven by bifurcations with a slow passage that we refer to as a ghost effect . these ghost effect bifurcations are sufficient for providing both the long duration of each event and the modularity in state / parameter space . the modularity and decoupling of the kinetic parameters in different events ensure the stability of the system against parameter changes ; thus small parameter changes in any given event should not influence the general character of the cell cycle process . we describe this as the wave transition `` domino '' model of the cell cycle . this wave transition model not only provides an efficient and easily controlled strategy for the cell cycle , but also suggests a guide for building synthetic networks of sequential - task processes . it has been suggested that dynamic robustness and modularity are important characteristics of a cellular regulatory network for fulfilling complex biological processes . recent progress in quantitative modeling and experiments on some fundamental cellular processes , including chemotaxis in e. coli and the cell cycle process in yeast and oocytes , has provided a better systematic and quantitative view of these biological processes . these studies have revealed the structural and dynamic properties of regulatory networks , such as the basic motifs and modules of network structure , the feedback loops that dynamically control genetic switches and transitions , and the dynamic stability and robustness of such networks . in the cell cycle process , the cell successively executes dna replication and mitosis events using a checkpoint mechanism . the dynamic regulatory mechanism in cell cycle processes was revealed by the pioneering work of tyson , novak , and ferrell . the cell cycle process is considered to be a series of irreversible transitions from one state to another , which is regulated by system - level feedback . for example , the cell cycle process in oocyte maturation is built upon a hysteretic switch with two saddle - node bifurcations . skeleton models of the cell cycle in budding yeast and mammalian cells have also been established to investigate the temporal order and sequential activation in the cell cycle . furthermore , a dna replication checkpoint in the mammalian cell cycle has been proposed to lengthen the cell cycle period and lead to a better separation of the s and m phases . to reveal the essential dynamic properties of a sequential - task cell cycle process , some fundamental questions must be studied . for instance , how can the cell cycle network ensure that the cell reliably executes a successive order of events ? how does the network ensure that the process is robust against environmental fluctuations ? what is the function of the cell cycle checkpoint from a dynamical point of view ?
here , we address these questions using a simplified cell cycle network model of the budding yeast _ saccharomyces cerevisiae _ . we first develop a simplified yeast cell cycle model and select a set of parameters that can qualitatively describe the cell cycle process . we then apply perturbation methods to analyze the dynamic properties of the local manifold along the evolving cell cycle trajectory . we demonstrate the following : that the yeast cell cycle process is an excitable system driven by sequential saddle - node bifurcations ; that the yeast cell cycle trajectory is globally attractive with modularity in both state space and parameter space ; and that the convergent manifold of the trajectory corresponds to the cell cycle checkpoints . with these advantages , the yeast cell cycle process becomes effective and reliable throughout its execution . the budding yeast cell cycle process is a well - established system for investigating and exploring the control mechanisms of regulatory networks . recent works integrating modeling and experiment have highlighted the function of positive and negative feedback at the start point , in the spindle separation process , and in mitotic exit . to investigate the essential dynamic features of the budding yeast cell cycle , especially its dynamic robustness and modularity , we construct and analyze a coarse - grained model that is mainly based on the abstract architecture and function of the cell cycle and ignores the molecular details . such simplification approaches have been applied widely to study the design principles of regulatory networks . the cell cycle process in a budding yeast produces new daughter cells through two major events : dna replication in the s phase and mitosis in the m phase . we denote processes of this kind , which execute different biological functions and events in order and reliably , as sequential - task processes . the different cyclins and transcription factors ( tf ) control the different successive events , and the dna replication checkpoint and the spindle assembly and separation checkpoint ensure the stability and hereditary validity of the genetic information . the checkpoints ensure the completion of earlier events before the beginning of later events to maintain the orderly progression of the cell cycle . based on the cell cycle regulatory network ( figure 3 - 34 in ) , our previous network with an intra - s and spindle checkpoint , and the full cell cycle network in the supplemental information ( si ) , we separated the yeast cell cycle regulatory network into the g1/s , early m and late m modules , where the positive and negative feedback loops play an essential role in governing the yeast cell cycle process . we then obtained a coarse - grained , simplified 3-module network , shown in figure [ network ] . in the figure , represents the activities of key regulators such as the cyclins cln2 and clb5 , and sbf and mbf , in the g1 and s phases ; represents the activities of key regulators such as the cyclin clb2 and the transcriptional factor mcm1/sff in the early m phase ; and represents the activities of key inhibitors such as cdh1 , cdc20 , and sic1 in the late m / g1 phase .
for simplification , we ignore the g2 phase . a simplified three - variable ordinary differential equation ( ode ) model can then be established . the 3-node cell cycle model is not only reduced from the topological structure of the budding yeast cell cycle network , but is also constructed to describe the key dynamic features of the yeast cell cycle , especially the genetic switches , the irreversible transitions , and the sequential - task process from dna replication to mitosis . the 3-node model is written as follows : [ nondimensionalequ ] in this 3-node model , we assume the simplest forms to represent the interactions in the cell cycle network , in order to qualitatively describe the genetic switches at the start point , the g2/m transition and the late m phase , and we introduce the checkpoint mechanism by changing kinetic parameters . the positive feedback in each module is represented by a second - order hill function , while the inhibition and repression terms among different modules are assumed to take their simplest form , i.e. , , , and . the parameters and are the hill coefficients and degradation rates of , , and respectively ; is the activation rate from to ; and is the activation rate from to . furthermore , the term in eq . [ nondimensionalequ]c represents the repression from to , the term in eq . [ nondimensionalequ]b represents the repression from to , and the term in eq . [ nondimensionalequ]a represents the repression from to ; these terms implement the negative feedback . more detail concerning the network reduction , model assumptions , formulations , and the deduction of the dimensionless form can be found in the si . we focus on the dynamic robustness and modularity of the cell cycle process , particularly from the excited g1 state to the final steady g1 state . before we quantitatively analyze the model , we qualitatively illustrate the dynamics of , , and and the corresponding yeast cell cycle events and process . at the beginning of the cell cycle process in the g1 phase state , the inhibitor is at a high level with low levels of and , and the cell is in the resting g1 state . once the system enters the cell cycle process , is activated and dominant , which represses the inhibitor to a lower level . in the early m phase , is activated and dominant , and the activated represses in turn . in the late m and g1 phases , a wave is triggered by and increases to high levels . sequentially , the x wave is dominant in the s phase , the y wave is dominant in the early m phase , and the z wave is dominant in the late m and g1 phases . we denote this set of sequential steps as the wave transition `` domino '' model of the yeast cell cycle process . the excited g1 state of the yeast cell cycle process can be defined as follows . if we set and ignore the repression terms , , and in eq . [ nondimensionalequ ] , the equations for , , and can be decoupled . in this situation , the steady state equation for has three fixed points , i.e. , , and , where and are stable nodes and is an unstable saddle that serves as the threshold to the excited g1 state and separates the basins of attraction of and . these dynamic features are determined only by and . similarly , if we set and , we have , and the excited g1 state is defined as .
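since the explicit equations and the si parameter values are not reproduced here , the sketch below is only an illustrative reconstruction of eq . [ nondimensionalequ ] : second - order hill self - activation , linear degradation , product - form cyclic repression ( y on x , z on y , x on z ) and weak x - to - y and y - to - z triggers ; all numerical values are placeholders , not the perfect parameter set :

```python
import numpy as np
from scipy.integrate import solve_ivp

A, J, D, R = 1.0, 0.3, 0.5, 2.0   # hill amplitude / threshold, decay, repression
K_XY, K_YZ = 0.02, 0.02           # weak inter-module triggers (placeholders)

def hill(u):
    """second-order hill positive feedback."""
    return A * u**2 / (J**2 + u**2)

def rhs(t, s):
    x, y, z = s
    return [hill(x) - D * x - R * y * x,
            hill(y) - D * y - R * z * y + K_XY * x,
            hill(z) - D * z - R * x * z + K_YZ * y]

# excited g1 state: x pushed above its activation threshold, y and z low
sol = solve_ivp(rhs, (0.0, 300.0), [0.5, 0.0, 0.0], max_step=0.5)
print(sol.y[:, -1])  # with a workable parameter set this approaches the g1 state
```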
in the yeast cell cycle process , the cell should finish dna replication before mitosis , and there should be enough time to execute both the dna replication and mitosis events . the question then arises as to what criteria are available to qualitatively describe the yeast cell cycle process . we use the event order and the long duration of the and waves as constraints to search for suitable kinetic parameter sets in eq . [ nondimensionalequ ] . the first constraint is that the wave is followed by the wave , while the wave decreases in the s phase and increases in the late m and g1 phases . the second constraint is that the duration times ( calculated as the full duration at half maximum ) of both the and waves should be sufficiently long ( i.e. , arbitrary units ( au ) ) in order to ensure the successful execution of the dna replication and mitosis events . using random parameter sets produced by latin hypercube sampling , with the excited g1 state as the initial state , we obtained 239 groups of parameter sets in one simulation iteration that satisfied the above conditions . these parameter sets were clustered as shown in figure [ clustering01]a . we found that the above constraint conditions only required small values of the activation rates ( from the wave to the wave ) and ( from the wave to the wave ) . furthermore , and are found to be correlated , while , , and are relatively large in value . different simulation iterations showed similar results . we then set and ( ) to search for the satisfying parameters , , , and . the results are plotted in figure [ clustering01]b , with the wave duration as a function of . the envelope curve corresponds to , where will be discussed in the following section . furthermore , a suitable value of is for the constraint . for the same reason , we select . these results give us a parameter set that can qualitatively describe the yeast cell cycle trajectory : , ( ) , , , and . we denote this set as the perfect yeast cell cycle parameter set . what is the relationship between the duration of the or wave and or ? to illustrate this relationship , we analyze how the wave triggers the activation of the wave . during the late s phase and early m phase , is almost fully activated and has repressed to zero , so we can ignore in eq . [ nondimensionalequ ] to get [ xyequations ] we showed in the previous section that and determine and ( ) , so the activation term in eq . [ xyequations]b will trigger the activation of , and the repression term in eq . [ xyequations]a will repress . in the early m phase , triggers the activation of through a saddle - node bifurcation , where the two smaller fixed points of ( and ) collide and vanish , leaving only . we denote as the critical bifurcation point of , and its value can be estimated as follows : is defined as part of the right hand side terms of eq . [ xyequations]b , while is the minimum of . we can ignore the inhibition term in eq . [ xyequations]a in the late s phase , because is approaching its maximum and is not fully activated . when or , the saddle - node bifurcation will occur . note that is determined only by and , while is determined by and . thus , is determined by , , , and . furthermore , if , then just after the bifurcation we have , , and , so it is just after the two smaller fixed points collide that the saddle - node remnant , or ghost effect , with a slow passage will be observed .
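the latin hypercube search described at the start of this passage is straightforward to sketch ; here `simulate` is a hypothetical wrapper around an integrator such as the one above , returning the time axis and the x and y waves , and the sampling ranges are guesses :

```python
import numpy as np
from scipy.stats import qmc

N_PARAMS, N_SAMPLES, MIN_FDHM = 6, 10_000, 5.0

sampler = qmc.LatinHypercube(d=N_PARAMS, seed=1)
# kinetic parameters drawn log-uniformly over a guessed range 10^-2 .. 10^1
samples = 10.0 ** qmc.scale(sampler.random(N_SAMPLES),
                            [-2.0] * N_PARAMS, [1.0] * N_PARAMS)

def fdhm(t, wave):
    """full duration at half maximum of a single-peaked wave."""
    above = t[wave >= 0.5 * wave.max()]
    return above[-1] - above[0] if above.size else 0.0

accepted = []
for p in samples:
    t, x, y = simulate(p)   # hypothetical model call (integrator as above)
    ordered = t[np.argmax(y)] > t[np.argmax(x)]   # y wave follows x wave
    if ordered and fdhm(t, x) >= MIN_FDHM and fdhm(t, y) >= MIN_FDHM:
        accepted.append(p)
```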
we estimate the duration of the wave as a function of by calculating from zero to . considering the inhibition term in eq . [ xyequations]a , if is near , then can reach , where . when , can be triggered and activated , while is mainly determined by the parameters , , , and . the envelope curve of figure [ clustering01]b is based on this analysis , and is consistent with our simulations . using the selected parameter set , we calculated the value of based on the above estimation , and found ( i.e. , very close to ) , which shows that the ghost effect in the x wave exists . in the early m phase , is activated and then evolves to , and the repression term in eq . [ xyequations]a makes unstable and represses to zero . a similar mechanism occurs in the late m phase , where the activation of is triggered by , and the long duration of the wave is caused by the saddle - node remnant , i.e. , the ghost effect . if we ignore the inhibition term from to in eq . [ nondimensionalequ]b , then and is the minimum of the curve determined by , , and , while is determined by and . the duration of the wave is determined by , and is determined by , , , , and . again , using the selected parameter set , we calculated the value of and found , a value very close to . in this section , we develop methods to illustrate local stability along an evolving dynamic trajectory , then apply those methods to the yeast cell cycle process . using the perfect parameter set , the perfect cell cycle trajectory evolving from the excited g1 state to the stable g1 state is shown in figure [ wtcellcycle]a and b. the figure also shows , , and in the upper panels of figure [ jacob1]b , where is the trajectory length from the initial state . note that and are the velocities of the trajectory as a function of time and of , respectively . the normal plane of the trajectory is defined as the plane that is always perpendicular to . points on a small spherical surface of radius around the excited g1 state were taken as perturbed initial states . in figure [ jacob1 ] and figure [ jacob2 ] , we set . this sphere of perturbed states evolves along the perfect trajectory with varying deviations , leading to a change in its shape . at each time or trajectory length , the deviation of each point on the perturbed sphere from the perfect state at the center was calculated as or , where is the index of the perturbed trajectory . the normal radius of the perturbed sphere is the average of all for each value of on the cross section of the sphere that goes through the sphere center and is normal to . the forward and backward tangential radii and are defined as the farthest distances on the perturbed sphere from the center , in the directions parallel and antiparallel to the velocity , respectively . as an example , the normal radius and of the perfect yeast cell cycle trajectory are plotted in the middle panels of figure [ jacob1 ] . in the above method , the manifolds are formed by the trajectories evolving from the given small spherical surface at or .
in order to fairly examine the dynamic properties of the system at different and , circular perturbations on the normal plane of the yeast cell cycle trajectory were added at each step of the trajectory . the velocity vector of a perturbed trajectory can be decomposed with respect to the axis into the normal component and the tangential / parallel component . the average velocities on the perturbed circle were then calculated . the magnitude of the average velocities indicates how fast the manifold evolves , while the averages of the normal components and tangential components denote how the manifolds converge and disperse on average . the and of the perfect cell cycle trajectory are plotted in the middle panels of figure [ jacob1 ] as an example . the stability of any fixed point can be examined through the eigenvalues and eigenvectors of the jacobian matrix of the odes . for a 3-dimensional dynamic system , , , the jacobian matrix is defined as if the real part of any eigenvalue of the jacobian matrix is positive , the fixed point will be unstable in the direction of the corresponding eigenvector . in other words , a fixed point remains stable only when the real parts of all of its eigenvalues are negative . along the evolving trajectory , all state points should evolve in the velocity direction except for the final fixed point . to analyze the convergence properties of each manifold along the yeast cell cycle trajectory , the jacobian matrix should be projected onto the normal plane of the trajectory . we adopt two orthogonal unit vectors and , together with the unit velocity vector , to compose a set of orthogonal unit vectors . then the projection matrix can be defined as . through the action of this projection , the normal jacobian matrix is equal to , of the form : the four non - zero elements constitute a normal jacobian matrix whose eigenvalues and and eigenvectors dictate the convergence properties in the normal plane of the evolving trajectory . as an example , the real parts of and along the perfect cell cycle trajectory are shown in the lower part of figure [ jacob1 ] . the real parts are almost always negative , except for the first part of the cell cycle and near the two vertices in figure [ wtcellcycle]b . at the two vertices , where and reach their respective maxima and begin to decrease , there are positive peaks followed by large negative eigenvalues . because the perfect cell cycle trajectory changes its evolving direction quickly near these vertices , we assume the positive eigenvalue peaks are caused by numerical error in the velocity directions at the vertices . using the perfect parameter set , we depict the perfect yeast cell cycle trajectory in figure [ wtcellcycle]a and b , and its local dynamic analysis results in figure [ jacob1 ] .
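the projection used here is compact enough to sketch directly ; the helper below ( our own naming , for a three - dimensional system ) returns the two normal - plane eigenvalues given the local jacobian and velocity :

```python
import numpy as np

def normal_jacobian_eigs(jac, v):
    """project the 3x3 jacobian `jac` onto the plane normal to the local
    velocity `v` and return the eigenvalues of the resulting 2x2 block."""
    v = v / np.linalg.norm(v)
    # build an orthonormal pair (e1, e2) spanning the normal plane,
    # starting from the coordinate axis least aligned with v
    helper = np.eye(3)[np.argmin(np.abs(v))]
    e1 = helper - np.dot(helper, v) * v
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(v, e1)
    p = np.column_stack([e1, e2])      # 3x2 basis of the normal plane
    return np.linalg.eigvals(p.T @ jac @ p)
```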
in figure [ jacob1 ] , we plot the normal radius to describe the deviation from the perfect trajectory in the direction normal to the velocity of the initial state , the local average velocity to depict the velocities of the circular perturbations along the evolving trajectory , and the real parts of the eigenvalues and to describe the convergence properties in the normal plane of the trajectory . these results reveal some interesting dynamic properties along the perfect cell cycle trajectory . in the first part of the trajectory ( g1 and s phases ) , figure [ jacob1 ] shows that reaches a peak at and , suggesting that the perfect trajectory diverges at the beginning of the s phase . when the system evolves at and to the first vertex ( , , ) in figure [ wtcellcycle]b , is fully activated and has repressed to zero , and also begins to trigger the activation of by saddle - node bifurcation . the duration of the wave is given by . because in the perfect parameter set , there is a ghost effect with a long duration of the wave . the results in figure [ jacob1 ] show that , near the first vertex , the normal radius decreases to near zero , the real parts of the eigenvalues and sharply decrease to negative values , and the average local velocity . these results demonstrate that the local manifold of the cell cycle trajectory converges to the first vertex , and that the trajectory evolves very slowly near the first vertex because of the ghost effect . note that the first vertex state corresponds to the dna replication event . this suggests that the ghost effect after the bifurcation provides a sufficiently long duration for the wave to execute the dna replication event , while the converging manifold and slowly evolving trajectory near the first vertex provide a suitable state for the dna replication checkpoint . in other words , a vertex with a convergent manifold works as a hidden intrinsic checkpoint mechanism for the cell cycle process . a similar dynamic property is found in the later part of the perfect cell cycle trajectory . when and in the early m phase , the trajectory diverges . near the second vertex ( 0 , , 0 ) , is fully activated and has repressed to zero , so is triggering the activation of through a saddle - node bifurcation . this saddle - node bifurcation and ghost effect lead to a converging , attractive and slowly evolving manifold near the second vertex . the second vertex corresponds to the spindle assembly and separation event . modularity in both state and parameter space can be found in the perfect cell cycle model . in figure [ wtcellcycle]a and b , not only do the , , and waves have little overlap time and the three vertices in the phase map have sharp pointed ends , but the kinetic parameters of the , , and waves are also decoupled into independent groups . near the first vertex , , where is the minimum of and . so the first bifurcation near the first vertex ( , 0 , 0 ) is mainly controlled by the wave 's kinetic parameters and through , and the wave 's kinetic parameters and through . note that these values are almost independent of the wave 's kinetic parameters , because the event occurs mainly in the plane with . similarly , near the second vertex , , where is the minimum of and . the second bifurcation near the second vertex ( 0 , , 0 ) is mainly in the plane with , and is controlled by the wave 's kinetic parameters and , by through , and by the wave 's kinetic parameters and through . again , these values are almost independent of the wave 's kinetic parameters .
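the slow evolution near the vertices is the generic saddle - node `` ghost '' ; its passage - time scaling can be checked directly in the one - dimensional normal form ( an illustration of the scaling only , not of the full model ) :

```python
import numpy as np
from scipy.integrate import solve_ivp

# in the normal form dx/dt = eps + x**2 the two fixed points have just
# collided for eps > 0, and the time spent in the bottleneck diverges
# as pi / sqrt(eps)
for eps in (1e-2, 1e-3, 1e-4):
    leave = lambda t, x: x[0] - 10.0   # event: trajectory leaves the bottleneck
    leave.terminal = True
    sol = solve_ivp(lambda t, x: [eps + x[0] ** 2], (0.0, 1e4), [-10.0],
                    events=leave, rtol=1e-9, atol=1e-12)
    print(f"eps = {eps:.0e}: passage time = {sol.t_events[0][0]:7.1f}, "
          f"pi / sqrt(eps) = {np.pi / np.sqrt(eps):7.1f}")
```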
when and , the respective saddle - node bifurcations occur in the required order . the duration of the wave is mainly controlled by and by the maximum of the wave , as shown in figure [ clustering01]b . the wave is similar : its duration is mainly controlled by and by the maximum of the wave . in this case , the system not only provides a long duration for the execution of the dna replication and mitosis events , but also separates the , , and waves into relatively independent events and decouples the kinetic parameters of the different waves and phases . these results suggest a suitable strategy for the regulation of sequential events , especially because changes and modifications of the parameters of one event influence the other events very little . the robustness and manifold analysis along the cell cycle trajectory provide further dynamic information about the cell cycle process . novak and coworkers discussed a kinetic differential equation model in which the budding yeast cell cycle trajectory could be separated into excitation and relaxation periods according to the eigenvalues of the jacobian along the trajectory . the divergence and convergence of the manifold has also been observed in a boolean network model , where the manifold converges at the s phase state corresponding to a dna replication checkpoint . in the perfect cell cycle model we find some new results , including that the manifold of the key regulators diverges and converges in different cell cycle phases , and the dynamic robustness and modularity of the cell cycle process . let us interpret the results from a biological perspective . suppose that the states ( activities of the key regulators ) of yeast cells can be dynamically observed . if a group of yeast cells start from different excited g1 states , with fluctuations of the biochemical parameters that tend to vary from cell to cell , how do the yeast cells evolve during the whole cell cycle process ? during the early s phase , the states of the yeast cells should be observed to separate and diverge significantly ( diverging manifold ) and to change quickly ( large average velocity ) ; in this phase the yeast cells activate kinases and express genes for the execution of the dna synthesis and replication task . when the cells enter the late s phase , in which the dna replication task is under way , the yeast cell states begin to change slowly and gradually converge to the same point ; this corresponds to the system evolving into the first vertex , where there is a long duration with decreased velocity and a convergent manifold . similarly , in the early m phase the yeast cell states separate , diverge and change quickly . when the metaphase / anaphase transition with the spindle assembly and separation task takes place , with the system near the second vertex , the yeast cell states should change slowly and converge again . finally , all the yeast cells should evolve to the stable g1 state , waiting for another cell cycle signal . during the whole cell cycle process , from the dna replication event to mitosis , the dynamic robustness and modularity of the system provide a suitable strategy to execute the sequential events reliably . so in the cell cycle process , the manifolds diverge and converge , wave after wave , cycle in cycle out . this is the dynamical picture of the cell cycle . the durations of the and waves in the perfect cell cycle model are controlled by , and sensitive to , and , respectively .
in the si we investigate a simplified cell cycle model with an inhibitor , in which represses and represses by one - step or multi - step phosphorylation , noted as . in the si , we show that the inhibitor with phosphorylation can largely reduce the sensitivity of the wave duration to changes in the relevant kinetic parameters from to the inhibitor , and that the wave duration is not sensitive to the maximum of the wave . these differences can also be used to distinguish direct activation from indirect activation ( with an inhibitor ) between the successive waves . in a real yeast cell cycle process , yeast cells produce new daughter cells through dna replication in the s phase and mitosis in the m phase , with each event associated with a corresponding checkpoint mechanism . for example , when the cell is in a dna replication event , or the dna is damaged in the s phase , the relevant pathway is activated and turns on the dna replication checkpoint to keep the cell in the s phase ; only when the damaged dna is repaired , or the yeast cell finishes the dna replication event , is the dna replication checkpoint turned off and the yeast cell allowed to enter the g2 and m phases . to simulate the above cell cycle checkpoint function , we artificially changed and to represent the on or off states of the cell cycle checkpoints . when the dna replication checkpoint is turned on , ; when it is turned off , . similarly for the spindle checkpoint , represents the on state while is the off state . we denote this as the real or `` ideal '' yeast cell cycle trajectory . the ideal trajectory evolves through the 3 vertex states , , and in order , and the checkpoints not only control the durations of the and waves , but also separate the , , and waves . furthermore , the checkpoint mechanisms make the kinetic parameters in different waves independent . this kind of extrinsic checkpoint mechanism in the real yeast cell cycle process provides a more stable duration for each phase . the ideal yeast cell cycle trajectory is globally attractive and stable . the following discussion provides further evidence for this conclusion . if we were to ignore the activations and repressions among , , and in eq . [ nondimensionalequ ] , as formulated in the model section of this work , there would be 3 fixed points , one on each of the , , and axes . thus , , , , and are stable nodes , while , , and are unstable saddles . considering the repression terms , , and in eq . [ nondimensionalequ ] , only one of , , and can be in its maximum state , and is unstable . in this case , only , , and are the possible final stable attractors . when discussing the evolution of the cell cycle engine , andrew murray suggested making `` toy systems '' that are much simpler and mimic the key features of the cell cycle process , for a deeper understanding of the cell cycle . from the intrinsic checkpoint mechanism in the perfect cell cycle to the external checkpoint mechanism in the real yeast cell cycle process , our results suggest a possible evolutionary course for the cell cycle network and for sequential - task regulatory networks .
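a checkpoint of this kind is easy to emulate numerically : hold the trigger rate at zero while the checkpoint is on , then restore it once the event completes . the two - stage integration below is a minimal illustration ; `rhs_factory` stands for a hypothetical parameterized right - hand side ( e.g. , a version of the earlier sketch ) , and the fixed `t_done` is a placeholder for what is really a state - dependent , not clock - driven , condition :

```python
from scipy.integrate import solve_ivp

def run_with_checkpoint(rhs_factory, s0, t_done=50.0, t_end=300.0):
    """integrate with the x -> y trigger off (checkpoint on), then resume
    with the trigger restored once the event is declared complete."""
    held = solve_ivp(rhs_factory(k_xy=0.0), (0.0, t_done), s0)
    freed = solve_ivp(rhs_factory(k_xy=0.02), (t_done, t_end), held.y[:, -1])
    return held, freed
```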
in this section we investigate a counterexample , the imperfect yeast cell cycle process , where we set and , and and , all of which are relatively far from the bifurcation points . the imperfect trajectory ( figure [ wtcellcycle]c and d ) and its local dynamic properties ( figure [ jacob2 ] ) are quite different from those of the perfect yeast cell cycle trajectory . in figure [ wtcellcycle]c and figure [ wtcellcycle]d , the maxima of and just reach half of and , there is a relatively large overlap time for the , , and waves , and there are no sharp pointed ends near the 1st and 2nd vertices in phase space . the duration times of the and waves are only 1/6 of the duration times of the perfect cell cycle trajectory . all of these results show that the , , and waves are coupled together . in figure [ jacob2 ] , one observes that the expansion is less than that of the perfect trajectory ( ) , even though the local manifold also expands and converges along the trajectory . near the vertices in the phase map , the manifold can not converge to a narrow state space with a certain evolving velocity ( ) , and therefore can not provide a long enough duration for the dna replication and spindle assembly and separation events . furthermore , there is no suitable point for the checkpoint mechanism . the yeast cell cycle process executes dna replication and mitosis in subsequent order , and should be robust against intrinsic fluctuations and environmental change . in this paper , we constructed and investigated a simplified yeast cell cycle model as a toy model to capture the dynamic essence of sequential - task biological processes . we have shown that the cell cycle process is an excitable system with global robustness in the state space , and that the phase transitions are driven by dynamic bifurcations containing ghost effects . these bifurcations and the associated ghost effects provide the long durations required for each event or wave , as well as the modularity of the state and parameter spaces . this cell cycle model is robust and modular in both state and parameter space . the robustness in state space ensures that the cell cycle process is stable against fluctuations in protein activity . the modularity and decoupling of the kinetic parameters in the different waves prevent small parameter changes in any one event from influencing the general cell cycle process . our wave transition model provides an efficient and easily controlled strategy for executing the cell cycle processes . furthermore , our toy cell cycle model suggests a possible synthetic network design for robustly executing other sequential - task processes . in such a synthetic network , each event can be controlled by the relevant key regulators , and the duration of each event is regulated by the activation strength between successive events ( similar to and in our toy model ) . these synthetic networks could be used to check our prediction that the long duration and modularity of the sequential - task process are caused and controlled by the ghost effect . most of the insights we obtained from the toy model are independent of the specific formulation of the model and the number of dimensions ; the sufficient condition is that the dynamic cell cycle models should be governed by sequential saddle - node bifurcations containing ghost effects . we believe that our results and analysis methods are applicable to other eukaryotic cell cycle processes and other sequential - task biological processes . life is modular in its parts and more than the sum of its parts , and this robustness and modularity are design principles for biological regulatory networks .
indeed , if a network evolves along a globally attractive trajectory and is modular in state and parameter space , it can execute successive irreversible events more robustly and effectively , and thus obtain an evolutionary advantage .

* text s1 * this file contains details needed to understand the main body of this work , arranged as follows : i . the regulatory network in the budding yeast cell - cycle process and a 3-node model ; ii . a simplified cell - cycle model with an inhibitor .
* figure s1 * the regulatory network of key regulators in the budding yeast cell - cycle process , which can be separated into the g1/s , early m , and late m modules . it is contained within section i of the si .
* figure s2 * the network of the one - step phosphorylation inhibitor that is inserted between and in the 3-node yeast cell - cycle model . it is contained within section ii of the si .
* figure s3 * an inhibitor with 4-step phosphorylation is added between and in the yeast cell - cycle model , which decreases the sensitivity of the wave duration to changes in the triggering rate . it is contained within section ii of the si .

the authors are grateful to chao tang , louis tao , tiejun li , xiaomeng zhang , tianqi liu , and yuhang hou for helpful discussions . the work is supported by nsfc grants nos . 11174011 , 11021463 , and 91130005 ( f. li ) , and nos . 11074009 and 10721463 ( q. ouyang ) .

barkai n , leibler s ( 1997 ) robustness in simple biochemical networks . nature 387 : 913 - 917 .
hansen c , endres rg , wingreen ns ( 2008 ) chemotaxis in _ escherichia coli _ : a molecular model for robust precise adaptation . plos comput biol 4(1 ) : e1 .
chen k , et al . ( 2000 ) kinetic analysis of a molecular model of the budding yeast cell cycle . mol biol cell 11 : 369 - 391 .
chen k , calzone l , csikasz - nagy a , cross fr , novak b , et al . ( 2004 ) integrative analysis of cell cycle control in budding yeast . mol biol cell 15 : 3841 - 3862 .
li f , long t , lu y , ouyang q , tang c ( 2004 ) the yeast cell cycle is designed robustly . proc natl acad sci u s a 101 ( 14 ) : 4781 - 4786 .
charvin g , oikonomou c , siggia ed , cross fr ( 2010 ) origin of irreversibility of cell cycle start in budding yeast . plos biol 8 : e1000284 .
murray aw , kirschner mw ( 1989 ) cyclin synthesis drives the early embryonic cell cycle . nature 339 : 275 - 280 .
ferrell j , pomerening j , kim s , trunnell n , xiong w , huang c , machleder e ( 2009 ) simple , realistic models of complex biological processes : positive feedback and bistability in a cell fate switch and a cell cycle oscillator . febs letters 583 : 3999 - 4005 .
shen - orr s , milo r , mangan s , alon u ( 2002 ) network motifs in the transcriptional regulation network of _ escherichia coli _ . nature genetics 31 : 64 - 68 .
alon u ( 2007 ) network motifs : theory and experimental approaches . nat rev genet 8 : 450 - 461 .
alon u ( 2007 ) an introduction to systems biology : design principles of biological circuits . chapman and hall .
isaacs f , hasty j , cantor c , collins j ( 2003 ) prediction and measurement of an autoregulatory genetic module . proc natl acad sci u s a 100 : 7714 - 7719 .
xiong w , ferrell j ( 2003 ) a positive - feedback - based bistable ` memory module ' that governs a cell fate decision . nature 426 : 460 - 465 .
thattai m , lim h , shraiman b , van oudenaarden a ( 2004 ) multistability in the lactose utilization network of escherichia coli . nature 427 : 737 - 740 .
skotheim jm , di talia s , siggia ed , cross f ( 2008 ) positive feedback of g1 cyclins ensures coherent cell cycle entry . nature 454 : 291 - 296 .
ma w , trusina a , el - samad h , lim w , tang c ( 2009 ) defining network topologies that can achieve biochemical adaptation . cell 138 : 760 - 773 .
tyson jj , chen k , novak b ( 2001 ) network dynamics and cell physiology . nature reviews molecular cell biology 2 : 908 - 916 .
csikasz - nagy a , battogtokh d , chen kc , novak b , tyson jj ( 2006 ) analysis of a generic model of eukaryotic cell - cycle regulation . biophys j 90 : 4361 - 4379 .
ferrell j , tsai t , yang q ( 2011 ) modeling the cell cycle : why do certain circuits oscillate ? cell 144 : 874 - 885 .
morgan d ( 2007 ) the cell cycle : principles of control . united kingdom : new science press .
novak b , tyson j , gyorffy b , csikasz - nagy a ( 2007 ) irreversible cell - cycle transitions are due to systems - level feedback . nat cell biol 9 : 724 - 728 .
gerard c , tyson jj , novak b ( 2013 ) minimal models for cell - cycle control based on competitive inhibition and multisite phosphorylations of cdk substrates . biophys j 104 : 1367 - 1379 .
gerard c , goldbeter a ( 2011 ) a skeleton model for the network of cyclin - dependent kinases driving the mammalian cell cycle . interface focus 1 : 24 - 35 .
gerard c , goldbeter a ( 2009 ) temporal self - organization of the cyclin / cdk network driving the mammalian cell cycle . proc natl acad sci u s a 106 : 21643 - 21648 .
cross f , siggia e ( 2005 ) mode locking the cell cycle . phys rev e 72 : 021910 .
battogtokh d , aihara k , tyson j ( 2006 ) synchronization of eukaryotic cells by periodic forcing . phys rev lett 96 : 148102 .
yang x , lau ky , sevim v , tang c ( 2013 ) design principles of the yeast g1/s switch . plos biol 11 : e1001673 .
holt l , krutchinsky a , morgan d ( 2008 ) positive feedback sharpens the anaphase switch . nature 454 : 353 - 357 .
lopez - aviles s , kapuy o , novak b , uhlmann f ( 2009 ) irreversibility of mitotic exit is the consequence of systems - level feedback . nature 459 : 592 - 595 .
lu y , cross f ( 2010 ) periodic cyclin - cdk activity entrains an autonomous cdc14 release oscillator . cell 141 : 268 - 279 .
ma w , lai l , ouyang q , tang c ( 2006 ) robustness and modular design of the drosophila segment polarity network . mol syst biol 2 : 70 .
lim wa , lee cm , tang c ( 2013 ) design principles of regulatory networks : searching for the molecular algorithms of the cell . mol cell 49 : 202 - 212 .
hartwell lh , weinert ta ( 1989 ) checkpoints : controls that ensure the order of cell cycle events . science 246 : 629 - 634 .
press wh , teukolsky sa , vetterling wt , flannery bp ( 2007 ) numerical recipes . cambridge university press .
strogatz sh ( 2001 ) nonlinear dynamics and chaos . westview press .
lovrics a , csikasz - nagy a , zsely i , zador j , turanyi t , novak b ( 2006 ) time scale and dimension analysis of a budding yeast cell cycle model . bmc bioinformatics 7 : 494 .
murray a ( 2004 ) recycling the cell cycle : cyclins revisited . cell 116 : 221 - 234 .
hartwell l , hopfield j , leibler s , murray a ( 1999 ) from molecular to modular cell biology . nature 402 : c47 - c52 .
kitano h ( 2002 ) systems biology : a brief overview . science 295 : 1662 - 1664 .
sneppen k ( 2005 ) physics in molecular biology . cambridge university press .

[ figure clustering01 ] the durations of the g1/s and early m phases are controlled by the activation rates and , respectively . ( a ) clustering map of the parameter sets , in log scale , that satisfy the fundamental constraint conditions of the yeast cell - cycle process . the insert gives the scale of the values . ( b ) the wave duration of the satisfying parameter sets as a function of . the envelope curve corresponds to .
[ figure wtcellcycle ] the evolving cell cycle trajectory . ( a ) and ( b ) using the perfect parameters ( ) . ( c ) and ( d ) using the imperfect parameters ( ) . in panel b , the first vertex ( , 0 , 0 ) corresponds to the dna replication event , while the second vertex ( 0 , , 0 ) corresponds to the spindle assembly and separation event .
[ figure jacob1 ] local dynamic analysis of a perfect cell cycle trajectory ( ) as ( a ) a function of time and ( b ) a function of curve length . the upper panels plot the evolving trajectory . the middle panels plot the normal radius of the local spherical perturbation surface ( red curve ) and the average local velocity ( black curve ) , showing that the manifolds converge to a small state space when or are fully activated . the lower panels plot the real parts of the eigenvalues and of the normal jacobian matrix .
[ figure jacob2 ] local dynamic analysis of an imperfect cell cycle trajectory ( ) . the upper panels plot the evolving trajectory . the middle panels plot the normal radius of the local spherical perturbation surface ( red line ) and the average local velocity . the lower panels plot the real parts of the eigenvalues and of the normal jacobian matrix .
|
yeast cells produce daughter cells through a dna replication and mitosis cycle associated with checkpoints and governed by the cell cycle regulatory network . to ensure genome stability and genetic information inheritance , this regulatory network must be dynamically robust against various fluctuations . here we construct a simplified cell cycle model for a budding yeast to investigate the underlying mechanism that ensures robustness in this process containing sequential tasks ( dna replication and mitosis ) . we first establish a three - variable model and select a parameter set that qualitatively describes the yeast cell cycle process . then , through nonlinear dynamic analysis , we demonstrate that the yeast cell cycle process is an excitable system driven by a sequence of saddle - node bifurcations with ghost effects . we further show that the yeast cell cycle trajectory is globally attractive with modularity in both state and parameter space , while the convergent manifold provides a suitable control state for cell cycle checkpoints . these results not only highlight a regulatory mechanism for executing successive cell cycle processes , but also provide a possible strategy for the synthetic network design of sequential - task processes .
|
the discrimination of quantum states is one of the fundamental problems in quantum information and a basic task for several applications in communication , cryptography , fundamental questions , measurement and control , and algorithms . triggered by the observation that non - orthogonal quantum states can not be perfectly discriminated , this subject has stimulated much work , both from a theoretical and a practical point of view : the seminal works of helstrom , holevo and yuen _ et al . _ formalized the problem , obtaining a set of conditions for the optimal measurement operators , which in turn provide the optimal success probability , then solved it for sets of states symmetric under a unitary transformation ; more recently , acknowledging that a general analytical solution is hard to find , research has focused on finding a solution for sets with more general symmetries , computing explicitly the optimal measurements for the most interesting sets of states and studying the implementation of such measurements with available technology ( see for example for the case of two optical coherent states , the most relevant for optical communication ) . also , the problem of discrimination has been identified as a convex optimization one , arguing that it can be solved efficiently with numerical optimization methods . + in this article we attempt to solve the optimal discrimination of quantum states from a different perspective , by providing a structured expression for the -outcome positive operator valued measure ( povm ) used to discriminate the states . indeed , it can be shown that any povm comprising elements is equivalent to a collection of binary povm s , i.e. , povm s comprising two elements , like the one employed in ref . : depending on the binary outcome of the first measurement , a second one is applied ; its binary outcome in turn affects the choice of the third binary measurement , and so on . in this way a sequence of nested binary povm s can be constructed , where the povm applied at a given level depends on the string of binary outcomes of the previous ones . this result was already obtained in ref . ; when applied to state discrimination , it acquires a more operational meaning : each binary povm can be seen as discriminating between two subsets of the initial set of states , identified by the previous outcomes . hence the sequence of measurements induces a sequence of discrimination probabilities , so that , if the optimization problem is solved independently for any set of a fixed number of states , the result can be employed in the optimization problem for larger sets of states . + in the second part of the article , employing this decomposition and the two - state optimal probability , we obtain an expression for the success probability of discrimination of any states , depending on a single measurement operator , and solve the problem analytically for specific sets of states . then we restrict our attention to qubit states and obtain a compact expression which can be easily optimized numerically case by case , at variance with the less compact results for presented in previous works based on bloch - space geometry . we recover the results of those works and highlight in particular some interesting lesser - known implications of ref . + the article is structured as follows : in sec . [ deco ] we describe the decomposition in terms of nested binary povm s and provide a proof of its validity , similar to that of ref .
; in sec. [statedisc] we apply it to state discrimination and obtain an explicit expression for the case of arbitrary states, then discuss its optimization in some specific cases; in sec. [qubits] we treat the case of qubit states, computing a compact expression which can be optimized numerically and highlighting some results obtained in this way. finally, in sec. [conc] we draw some conclusions. detailed computations of the quantities appearing in the article are provided in the appendices.

in this section we prove that any quantum measurement with an arbitrary number of outcomes can be decomposed into a sequence of nested measurements with binary outcomes, where the previous results determine the choice of successive measurements. we stress that the same result was obtained in ref. . at variance with the latter, our proof does not make use of the spectral decomposition of the initial measurement operators; we present it here in a form adapted to the main purpose of the article. let us suppose we want to perform a quantum measurement with possible outcomes: it can be expressed in general as a povm of elements, one for each outcome, satisfying the positivity and completeness conditions, i.e., respectively and , where is the identity operator on the hilbert space of the system to be measured. this expression can be interpreted as a one-shot measurement with several possible results, and its practical realization may often be very hard. on the other hand, we could restrict ourselves to performing only measurements with two outcomes, as described by _binary_ povms: . this may be useful when limited technological capabilities or specific theoretical requirements constrain the number of allowed outcomes and the complexity of our measurement. it is then natural to ask whether this smaller set of resources is sufficient to describe a general quantum measurement. we answer positively by showing that the more general -outcome formalism can be broken up into several binary steps and interpreted as a sequence of nested povms with two outcomes, trading a _one-shot_, _multiple-outcome_ measurement for a _multiple-step_, _yes-no_ measurement.

the nested povm can be expressed in terms of _conditional binary_ povms, each complete by itself, to be applied only if a specific string of previous results is obtained. for example, for , the nested povm can be realized in two steps and written compactly as the collection of three binary povms: , properly composed as follows and shown in fig. [schema]. the measurement starts by applying the first-step binary povm; then, depending on its outcome, it selects among the two povms available in the second-step collection. finally the chosen second-step povm is applied, yielding an outcome. the total outcome is a string of two bits, i.e.
, whose value identifies one of four possible outcomes, as desired. suppose now that we apply this measurement to a state of some physical system: if the first-step outcome is , the resulting unnormalized evolved state is ; if the second-step outcome is , the final unnormalized state of the system is . this means that the nested povm has a more explicit representation as , where is the square of the absolute value of an operator. [fig. [schema]: any four-outcome measurement acting on a state is equivalent to the concatenation of two-outcome measurements: the first-step one, with result , and the second-step ones, which are mutually exclusive and applied only if the corresponding first outcome was obtained.] in the general case, let us indicate a sequence of bits as and define as the binary povm to be performed at the -th step if the previous measurements had a sequence of results . then we can define a nested povm of order as , i.e., the collection of binary povms, for all previous outcomes at a given step and all steps. we can verify that the collection so constructed actually is a povm by checking positivity and completeness of its elements: the former requirement is trivial, while the latter follows from the fact that each binary povm is complete, as shown in appendix [appa]. in light of the previous discussion we can now state the main theorem: [decomposition] any -outcome povm is equivalent to a nested povm, , as in eq. , composed exclusively of binary povms, with a total number of steps equal to: 1. , if is a power of ; 2. otherwise, where is the ceiling function, equal to the smallest integer following the argument. consider the first case above, i.e., . we start by providing a binary representation of the labels of the initial povm, i.e., we define , with . in order to prove the theorem we have to show that by combining the elements of the initial -outcome povm one can always define a set of binary povms, for all and , such that: i) their nested composition is a povm of the form, eq. ; ii) the elements of the latter are equal to the elements of .

first of all we construct the binary elements at each step, by taking the sum of all the elements with a fixed value of the first bits, then renormalizing it by all previous binary elements, as in a square root measurement. for example, define the elements of the first-step povm as for each value of the outcome . being a sum of positive operators, the elements so defined are themselves positive; moreover their sum equals the sum of all the elements of , implying that they are complete. at the second step, define the elements of the two possible povms as , where the inverse of an operator is to be computed only on its support, while it is equal to on its kernel, i.e., its pseudo-inverse. also in this case the defined elements are positive by construction, but they are not complete. indeed it is easy to show, employing the definition, that . here is the projector on the support of the previous outcome operator , which may have a non-trivial kernel, so that in general it holds . this problem may be overcome easily by redefining the povm elements as , i.e., trivially expanding the support of those already defined in , so that . this operation is trivial because, in the construction of the nested povm, the operators always act after the operator , so that the value of the former outside the support of the latter is completely irrelevant.
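the square-root construction just described is easy to realize numerically. the following sketch (python/numpy; all function names are ours, chosen for illustration) builds the two-step nested decomposition of an arbitrary four-outcome povm, using the pseudo-inverse square root on the support as described above, and verifies that recombining the binary elements recovers the initial povm elements.

```python
import numpy as np

def sqrtm_psd(m):
    """Matrix square root of a positive semidefinite operator."""
    w, v = np.linalg.eigh(m)
    return v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def pinv_sqrt(m, tol=1e-12):
    """Pseudo-inverse square root: 1/sqrt on the support, zero on the kernel."""
    w, v = np.linalg.eigh(m)
    d = np.array([1 / np.sqrt(x) if x > tol else 0.0 for x in w])
    return v @ np.diag(d) @ v.conj().T

def nest4(e):
    """Two-step nested decomposition of a 4-outcome POVM e[(k1, k2)]."""
    b1 = {k1: e[(k1, 0)] + e[(k1, 1)] for k1 in (0, 1)}   # first-step binary POVM
    b2 = {}
    for k1 in (0, 1):
        s = pinv_sqrt(b1[k1])                             # renormalize on the support
        for k2 in (0, 1):
            b2[(k1, k2)] = s @ e[(k1, k2)] @ s            # conditional second step
    return b1, b2

# sanity check on a random 4-outcome POVM e_k = t^(-1/2) g_k t^(-1/2)
rng = np.random.default_rng(0)
g = []
for _ in range(4):
    a = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    g.append(a @ a.conj().T)
t_inv_half = pinv_sqrt(g[0] + g[1] + g[2] + g[3])         # full rank here
e = {(k // 2, k % 2): t_inv_half @ g[k] @ t_inv_half for k in range(4)}

b1, b2 = nest4(e)
for (k1, k2), ek in e.items():
    rebuilt = sqrtm_psd(b1[k1]) @ b2[(k1, k2)] @ sqrtm_psd(b1[k1])
    assert np.allclose(rebuilt, ek)
```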
in other words, completeness of the binary povms is not necessary for the definition of as a proper povm; it is sufficient to ask for _weak completeness_, i.e., that is complete on the support of the operator preceding it in the decomposition, .

generalizing the previous discussion, at the -th step we can define the elements of the possible povms as . these elements are positive by construction and they satisfy the weak completeness relation, which is sufficient to define the povm, as discussed in appendix [appa]. hence we are left to show that, when combining the binary elements eq. as in eq. , the elements so constructed are equal to the . indeed let us evaluate eq. for , i.e., at the last step, noting that the sum contains only one term: . let us then successively invert the outer square roots on the left-hand side of the equation exactly times, to obtain the relation which demonstrates that we can recover the initial povm with the procedure outlined above.

this completes the proof when is an exact power of . if this is not the case, it means that is not an integer and it suffices to consider the nested decomposition for the next higher integer, i.e., set , . let us then trivially expand the initial -outcome povm to a -outcome one by adding null elements. the nested decomposition equivalent to can be computed again by eqs. ([nested], [elements]) and it comprises null elements too. if we isolate these elements from the rest we obtain a decomposition where can be interpreted as a nested representation of the initial povm.

in this section we apply the previous povm decomposition to the problem of optimal state discrimination. let us suppose we are given one copy of a quantum state, represented by a positive and trace-one operator , chosen at random from a set of states weighted with probability , so that ; we have to perform a measurement to decide which state was sent. if the states are not orthogonal, i.e., for some values of , and we are constrained to give a conclusive answer, there exists no measurement that can succeed with unit probability. the average success probability of discriminating the set of states with a -outcome povm, as defined in sec. [deco], can be computed as , where each measurement outcome is associated with the detection of the respective weighted state. we are particularly interested in the optimal success probability, obtained by optimizing over all measurements: .

following sec. [deco], we can always decompose the discrimination measurement into a sequence of nested binary ones, writing the success probability as
$$\mathbb{p}_{succ} = \sum_{k_{(1,u_f)}} \operatorname{tr}\left[\left|\sqrt{b^{(u_f)}_{k_{(1,u_f)}}}\cdots\sqrt{b^{(1)}_{k_1}}\right|^2 \tilde{\rho}_{k_{(1,u_f)}}\right], \qquad \text{(probnest)}$$
where we have introduced the binary representation for the labels of the states and measurement operators and employed the definition for the elements of the nested povm. this decomposition is interesting because it establishes a relation between the discrimination probability of a given set of states and that of its subsets of smaller size. let us indeed suppose that the first measurement is successful, i.e., that an outcome occurs if one of the states with that value of the first bit was present. this happens with probability , where is the trace norm of the argument. then by plugging this expression into the optimization of eq.
for states we can write:
$$\mathbb{p}_{succ} = \frac{1}{2}\sum_{k_1}\bigg(\operatorname{tr}\left[b^{(1)}_{k_1}\left(\tilde{\rho}_{k_1,0}+\tilde{\rho}_{k_1,1}\right)\right] + \left\|\sqrt{b^{(1)}_{k_1}}\left(\tilde{\rho}_{k_1,0}-\tilde{\rho}_{k_1,1}\right)\sqrt{b^{(1)}_{k_1}}\right\|_1\bigg).$$
we can write the latter equation more compactly by introducing the function , where is a positive and less-than-one operator while the arguments are hermitian operators, and its maximum over , i.e., . setting and , we obtain: with , and . similarly for states we have: with , as before, and . thus the optimal discrimination problem of states has been reduced to the evaluation of the function , which requires an optimization over a single operator.

as already discussed, if the problem of eq. were to be solved exactly for any set of states, then the result could be plugged into eq. , obtaining an expression for the optimal discrimination probability of states dependent only on the first binary povm. unfortunately a solution of eqs. ([prob4], [prob3]) can be found only in some specific cases, listed below and discussed in detail in appendix [appb]. in the following we employ the positive part of an operator, defined as .

[cases] the value of the function of eq. ([deffabc]) is
$$\operatorname{tr}\left[(a+|b|-|c|)_+\right] + \left\|c\right\|_1,$$
when at least one of the following conditions holds: i) the operators and have support respectively on the positive and negative support of ; ii) and have a definite sign; iii) , and all commute with each other.

in the first case of proposition [cases], i.e., when the operators and have support respectively on the positive and negative support of , the expression can be simplified as
$$\operatorname{tr}\left[(a)_+\right] + \left\|b\right\|_1 + \left\|c\right\|_1.$$

[recrem] the optimal success probability is invariant under exchange of the states, i.e., under relabelling of the indices in our case. hence it can happen that the conditions listed in proposition [cases] are valid only for given by a specific ordering of the states. the previous remark implies that, when checking whether a set of states satisfies the conditions of proposition [cases] or not, one has to consider all possible sets of obtainable by different orderings of the states, not only the conventional one employed in eqs. ([prob4], [prob3]). alternatively, one can apply this symmetry under exchange of the states to obtain recursive relations for , e.g., for and by exchanging , it holds (rec3); then proposition [cases] holds on the right-hand side of when has a definite sign, but the latter is simply , an expression of the operator for the new ordering of the states. in all the cases listed in proposition [cases], with the conventional ordering of the states of eqs. ([prob4], [prob3]), the optimal success probabilities for the discrimination of states become (p3) and
$$\mathbb{p}_{succ}\big(\mathcal{s}^{(4)}\big)=\frac{p_{1,0}+p_{1,1}}{2}+\operatorname{tr}\left[\frac{\left(\tilde{\rho}_{0,0}+\tilde{\rho}_{0,1}-\tilde{\rho}_{1,0}-\tilde{\rho}_{1,1}+|\tilde{\rho}_{0,0}-\tilde{\rho}_{0,1}|-|\tilde{\rho}_{1,0}-\tilde{\rho}_{1,1}|\right)_{+}}{2}\right]+\left\|\frac{\tilde{\rho}_{1,0}-\tilde{\rho}_{1,1}}{2}\right\|_{1}. \qquad \text{(p4)}$$
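under the hypotheses of proposition [cases], eq. (p4) can be evaluated directly from the weighted states. the sketch below (python/numpy; the helper names are ours) implements the positive part, the operator absolute value and the trace norm, and combines them into the four-state success probability above; note that this is a valid closed form only in the cases covered by the proposition, not for arbitrary sets of states.

```python
import numpy as np

def pos_part(m):
    """(m)_+ : keep only the positive eigenvalues of a hermitian operator."""
    w, v = np.linalg.eigh(m)
    return v @ np.diag(np.maximum(w, 0)) @ v.conj().T

def abs_op(m):
    """|m| : operator absolute value of a hermitian operator."""
    w, v = np.linalg.eigh(m)
    return v @ np.diag(np.abs(w)) @ v.conj().T

def trace_norm(m):
    """Trace norm of a hermitian operator: sum of |eigenvalues|."""
    return np.sum(np.abs(np.linalg.eigvalsh(m)))

def p_succ_4(rho):
    """Eq. (p4): rho[(k1, k2)] are the weighted states p_{k1,k2} rho_{k1,k2}.
    Valid only under the conditions of proposition [cases]."""
    a = rho[(0, 0)] + rho[(0, 1)] - rho[(1, 0)] - rho[(1, 1)]
    b = rho[(0, 0)] - rho[(0, 1)]
    c = rho[(1, 0)] - rho[(1, 1)]
    p1 = np.trace(rho[(1, 0)] + rho[(1, 1)]).real
    return (p1 / 2
            + np.trace(pos_part(a + abs_op(b) - abs_op(c))).real / 2
            + trace_norm(c) / 2)
```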
in this section we analyze the discrimination probability obtained with the nested povm decomposition in the case of _qubit_ states. indeed, since eqs. ([prob4], [prob3]) seem not to be solvable analytically for generic sets of states, it is interesting to tackle the problem by choosing the simplest possible hilbert space for the measured system, i.e., the qubit space of dimension two. it is well known that the density matrices of this system can be represented as a real vector inside a three-dimensional unit sphere (the bloch sphere), i.e., , where is the identity operator and is the vector of pauli matrices . in particular, pure states are situated on the sphere's surface, i.e., for , while the completely mixed state is at the origin. more generally, any hermitian operator on the qubit space can be expressed in terms of four real coefficients: a scalar , which represents the normalization coefficient of the operator, and a vector , which represents the operator in the bloch space, the trace of the operator being determined by $\operatorname{tr} = 2c$.

[apos] consider the set of operators and apply the subadditivity property:
$$\cdots + \left\|b\right\|_1 + \left\|c\right\|_1. \qquad \text{(subaddapp)}$$
the latter inequality can be saturated under the hypotheses of this lemma, by taking .

[aneg] let us suppose that is negative semidefinite, has support inside the support of , and that and have orthogonal supports. then . consider the same set of operators of lemma [apos] and apply again the subadditivity property, then use the fact that : . the latter inequality can be saturated under the hypotheses of this lemma, by taking .

hence we can prove the first case of proposition [cases]: let be the decomposition of in terms of its positive and negative parts, with , and suppose that , have support respectively inside the support of , . then consider the set of operators and apply the subadditivity property, together with lemmas [apos], [aneg]:
$$\cdots + \left\|b\right\|_1 + \left\|c\right\|_1, \qquad \text{(spec)}$$
which is saturated by a measurement operator . this expression is equivalent to that given in under the current hypotheses; indeed in this case it holds , so that becomes
$$\cdots + \left\|c\right\|_{1} = \operatorname{tr}\left[(a_{+}+|b|-|c|)_{+}\right] + \left\|c\right\|_{1}.$$

as for the second and third cases of proposition [cases], let us first note that $\cdots = \operatorname{tr}[q\,|b|]$, where are the positive and negative parts of as defined above for , and analogously . we then have
$$\cdots + \left\|c\right\|_1 \;\le\; \operatorname{tr}\left[(a+|b|-|c|)_+\right] + \left\|c\right\|_1. \qquad \text{(f1, f2)}$$
the inequality is saturated by taking equal to the projector onto the support of . the inequalities ([nqb], [nqc]), and hence , are saturated in both the second and third cases of proposition [cases], though for different reasons:

* if and have a definite sign, then it holds or , so that eq. is saturated and analogously;
* if , , all commute with each other, then eqs.
([nqb], [nqc]) are saturated by any operator which commutes with both and . eventually, the choice necessary to saturate eq. satisfies this latter condition in the case considered. finally we note that the previous case of commuting operators, as well as further results, can also be derived by applying the symmetry property of the optimal success probabilities ([prob4], [prob3]) to obtain recursive formulas, as discussed after remark [recrem], but still a full solution cannot be found in this way.

in this section we derive the results ([fqub], [defsign]) explicitly. as a preliminary, recall that, for any three vectors and the pauli matrices , it holds: . moreover, given a positive operator on , the coefficients , of its square root can be expressed in terms of its coefficients , as: with . in order to evaluate , we can compute its first two terms, while the third one is similar to the second one. let us start with the product : it is a generic operator with coefficients computed by applying eq. . thus the first term of is simply $2c_{qa}$, easily obtained by relabelling eq. . the vector of coefficients instead is , where we have first computed the product between and , then substituted the expression for the former by relabelling once again the product , and employed .

* if both and have definite sign, then they can always be taken to be positive semidefinite, up to a relabeling of the second bits of the original states. then we have ;
* if instead and do not have a definite sign, then it must hold and , and similar relations for , so that .

note that the third term can be expressed in terms of the coefficients of by observing that and .

we can conclude that for and of non-definite sign , while for and of definite sign , which give respectively eqs. ([fqub], [defsign]) after inserting the values of the coefficients computed above, i.e., eqs. ([cqa], [rmod]).
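the coefficient manipulations of this appendix are straightforward to check numerically. the sketch below (python/numpy; names ours) maps a qubit operator to and from its scalar/bloch-vector coefficients and computes the square-root coefficients of a positive semidefinite operator; since the eigenvalues of such an operator are the scalar coefficient plus or minus the norm of the vector, the square-root coefficients follow by taking square roots of the eigenvalues.

```python
import numpy as np

SIGMA = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])

def to_op(c, r):
    """Build the operator c*identity + r.sigma from its coefficients."""
    return c * np.eye(2) + np.tensordot(r, SIGMA, axes=1)

def from_op(q):
    """Inverse map: the trace gives 2c, and tr(q sigma_i) gives 2 r_i."""
    c = np.trace(q).real / 2
    r = np.array([np.trace(q @ s).real / 2 for s in SIGMA])
    return c, r

def sqrt_coeffs(c, r):
    """Coefficients of sqrt(c*identity + r.sigma) for a psd operator,
    whose eigenvalues are c + |r| and c - |r|."""
    n = np.linalg.norm(r)
    lp, lm = np.sqrt(c + n), np.sqrt(max(c - n, 0.0))
    cp = (lp + lm) / 2
    rp = (lp - lm) / (2 * n) * r if n > 0 else np.zeros(3)
    return cp, rp

# consistency check against a direct matrix product
c, r = 1.0, np.array([0.3, -0.2, 0.5])
cp, rp = sqrt_coeffs(c, r)
assert np.allclose(to_op(cp, rp) @ to_op(cp, rp), to_op(c, r))
c2, r2 = from_op(to_op(c, r))
assert np.isclose(c, c2) and np.allclose(r, r2)
```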
|
a method to compute the optimal success probability of discrimination of arbitrary quantum states is presented, based on the decomposition of any -outcome measurement into sequences of nested two-outcome ones. in this way the optimization of the measurement operators can be carried out in successive steps, optimizing first the binary measurements at the deepest nesting level and then moving on to those at higher levels. we obtain an analytical expression for the maximum success probability after the first optimization step and examine its form for the specific case of states of a qubit. in this case, at variance with previous proposals, we are able to provide a compact expression for the success probability of any set of states, whose numerical optimization is straightforward; the results thus obtained highlight some lesser-known features of the discrimination problem.
|
a self-similar system is called fractal, and fractals are one of the subjects which have attracted broad attention not only in natural science but also in social science in recent years. in many cases, fractal structure appears only in some restricted scale region, not in all scales of the system concerned. for example, in the case of the distribution of personal income, which is of interest in econophysics, the distribution of the top several percent of income earners follows a fractal power law, while that of the rest of the earners does not. in other words, in such a system, self-similarity is not maintained over all scales but is broken in the small scale region. usually, the fractal property and the deviation from it in each system are discussed by using individual models for the system. for example, as for the personal income distribution, the fractal power law in the high income region is studied using a stochastic evolution equation. in addition, the whole profile of the distribution is investigated based on a concept of small world networks, and in , the distribution is explained using the q-gaussian distribution emerging from nonextensive statistical mechanics. in this paper, however, we will discuss characteristics common to systems which show both fractal and non-fractal properties, independently of the details of the individual models. a scale invariant model does not have any scale, so we expect that distributions derived from the model follow a power law. this is thought to be a universal property that does not depend on the detailed structure of the model. one of the simplest methods to realize both fractal and non-fractal scale regions analytically is to introduce a typical scale into a scale invariant model to break the original scale invariance. if an interaction term with a typical scale is added to the scale invariant model, the model obtains a typical scale, and we expect that the distributions derived from the model become non-fractal in the scale region where the typical scale is meaningful, while keeping the fractal property in the large scale region. in this paper, we will discuss fractal behavior and the deviation from it in the above framework. there are various ways to construct a scale invariant model. as a tool to discuss the fractal property and the deviation from it concretely, we take a model of 2-dimensional quantum gravity coupled with conformal matter fields. because of the special nature of 2 dimensions and the conformal property of the matter fields, the model has scale invariance and the fractal property. some distributions derived from the model follow a power law. one of the reasons to take this model is that the model is determined by the action only, and it is suitable for treating the whole system analytically. in addition, the model has a simple geometrical meaning, so that it is easy to understand the fractal property and the deviation from it intuitively. the above model, the standard 2d gravity model coupled with conformal matter fields, is scale invariant. to introduce a typical scale and break the original scale invariance, we add the interaction term to the action. here, is the square of the scalar curvature. the obtained model is called 2d gravity theory. because of the typical scale introduced, we expect a deviation from fractality in the scale region where the typical scale is meaningful, keeping the original fractal property in the large scale region.
in this paper, we would like to point out that the typical scale is a useful new concept for understanding various distributions which have both fractal and non-fractal regions, employing the 2d gravity model as a tool to understand the features of such distributions in a unified way. as examples of distributions which have fractal and non-fractal regions, we take those of personal income and citation number of scientific papers. we show that these distributions are well understood by the typical scales and theoretical curves derived from the framework of the 2d gravity model. we also point out that the 2d gravity model provides us with an effective tool to read the typical scales of various distributions in a systematic way.

in this section, we review the 2d gravity model. first let us consider the standard 2-dimensional quantum gravity coupled with conformal matter fields. to make the argument concrete, as conformal matter fields we take scalar fields . the action of the matter part takes the form , where is the metric of the 2d surface. in 2 dimensions, the standard einstein action , where is the scalar curvature, merely yields a constant which characterizes the topology of the 2d surface, so that we can neglect the einstein term. the total action is given by , and it is invariant under the scale transformation of the metric. the partition function for fixed area of the 2d surface is given by . the action and the integration measure are invariant under 2d diffeomorphisms, so that the measure should be divided by the volume of the diffeomorphisms, which is denoted by . the partition function is evaluated to be , where is a constant determined by the central charge and the number of handles of the 2d surface, . note that this model has no scale parameter, so that the partition function follows a power law. it is expected that a typical 2d surface has a self-similar structure (fig. [fig:2-dim random surface]). in order to investigate the breaking of fractal structure concretely, it is appropriate to treat the 2d surface discretely. in the study of 2d gravity, one of the useful methods of discretizing a 2d surface is known as dynamical triangulation (dt). usually, the 2d surface is discretized using small equilateral triangles, where each triangle has the same size. from various pieces of evidence, dt is believed to be equivalent to the continuum theory of 2d gravity in the continuum limit. in dt, the evaluation of the partition function is performed by replacing the path integral over the metric with the sum over possible triangulations of the 2d surface. here, we denote the number of triangles sharing the vertex by , which is called a coordination number. in dt, the term in the action ([action r2]) is expressed by , from the correspondence and . here is the area of a triangle and is the discretized local scalar curvature at the -th vertex. from eq. ([discretized r2]), we can recognize that the term has the effect of making the 2d surface flat (). this effect is parametrized by the coefficient of the term. in dt, the fractal structure (and non-fractal structure) of the 2d surface can be discussed by considering so-called minimum-neck baby universes (minbus). a minbu is defined as a simply connected region of the 2d surface whose neck is composed of three links (three sides of triangles), where the neck is closed and non-self-intersecting. in general, many minbus of various sizes are formed on a 2d surface. a typical dynamically triangulated surface is shown in fig.
[fig:2-dim random surface]. the distribution of the area of minbus is one of the important observable quantities in dt. now let us evaluate the distribution of minbus. consider a closed 2d surface of area . there are many minbus on the surface, and each one is connected to another by a minimum neck. paying attention to one of the minimum necks, the whole surface can be divided into two minbus (fig. [fig:divided minbs]), where one has area and the other has area . representing the partition functions of the two minbus as and , the statistical average number of finding a minbu of area on a closed surface of area , , can be expressed as . here we set for simplicity, and denotes the partition function of a closed surface of area . as for the case , the asymptotic form ([eq-n_a-hight]) follows a power law; therefore, the surfaces are expected to be fractal. [figure: surfaces for the case without the term, where no typical length scale exists.] in this range, even if the model contains the term, at an area scale much larger than , the surfaces are fractal. on the other hand, as for the case , the asymptotic form ([eq-n_a-low]) is highly suppressed by the exponential factor ; hence, the fractal structure of the 2d surface is broken. in this range, at an area scale much smaller than , the surfaces are affected by the typical length scale and are not fractal. in the case of , the distribution ([eq-n_a-low]) is known as the weibull distribution. in this paper, we call the distribution ([eq-n_a-low]) a weibull-like distribution. the analytic results ([eq-n_a-low]) and ([eq-n_a-hight]) can be confirmed in the simulation of dt for the simple case that the 2d surface is a sphere () and there is no matter field on it (). the simulation results are shown in fig. [fig:the simulation results of dt]. here, we plot minbu distributions versus with a log-log scale for several values of , the coefficient of the discretized term ([discretized r2]). in this simulation, the total number of triangles is 100,000. these minbu distributions can be well explained by the asymptotic formulae ([eq-n_a-low]) and ([eq-n_a-hight]) with and , which are obtained from . we can read the typical scale for each case. for example, the data fittings for the cases of and are represented in figs. [fig:beta=50] and [fig:beta=100], and we obtain and as the values of , respectively. in each of these figures, several data points for small minbus deviate from the line of the weibull distribution ([eq-n_a-low]). we consider this to be a finite lattice effect. in the small region, each of the corresponding minbus consists of a small number of triangles, so that it is not appropriate to treat the area of a minbu as a continuous variable.
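reading off the typical scale from data of this kind amounts to fitting the two asymptotic regimes separately. the sketch below (python/scipy) is only a minimal illustration: the two functional forms are placeholders standing in for the asymptotic formulae ([eq-n_a-hight]) and ([eq-n_a-low]), a pure power law for large areas and a weibull-type form with a typical scale a0 for small areas; all names and the split point are our own choices.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(area, c, gamma):
    """Large-area regime: fractal power law (stand-in for eq-n_a-hight)."""
    return c * area ** gamma

def weibull_like(area, c, k, a0):
    """Small-area regime: exponentially suppressed below the typical
    scale a0 (stand-in for eq-n_a-low)."""
    return c * area ** (k - 1) * np.exp(-(area / a0) ** k)

def read_typical_scale(areas, counts, split):
    """Fit the two regimes on either side of a chosen split point and
    return the fitted exponents and the typical scale a0."""
    small, large = areas < split, areas >= split
    (c1, gamma), _ = curve_fit(power_law, areas[large], counts[large],
                               p0=(counts[large][0], -2.0))
    (c2, k, a0), _ = curve_fit(weibull_like, areas[small], counts[small],
                               p0=(counts[small].max(), 1.5, split))
    return gamma, k, a0
```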
we apply the minbu distribution in 2d gravity to other distributions observed in the real world, and examine whether it can explain them. here, we investigate distributions of personal income and citation number of scientific papers. these two kinds of distributions have fractal power-law and non-fractal regions, so it is possible that the theoretical curves ([eq-n_a-low]) and ([eq-n_a-hight]) can explain them. first, let us consider the personal income distributions of japan in the years 1997 and 1998. the distributions and data fittings are shown in figs. [fig:1997-japan] and [fig:1998-japan]. here, we do not accumulate the data in this analysis. the horizontal axis indicates the income in units of thousand yen and the vertical axis indicates the number density of persons per interval of a thousand yen.

in this paper, we proposed the concept of a typical scale in order to understand distributions which have both fractal and non-fractal scale regions in a unified framework. the point was to introduce a typical scale into a scale invariant model to break the original scale invariance and to produce the non-fractal feature in the small scale region. we employed the 2d gravity model as a tool to understand such distributions through the typical scale. the minbu distribution in this model followed a power law in the large scale region and provided a weibull-like one in the small scale region. as examples of distributions where fractal and non-fractal regions coexist, we took those of personal income and citation number of scientific papers. we showed that these distributions were fitted fairly well by the theoretical curves of the minbu distribution, adjusting the values of , and the typical scale . from these fittings, as for the personal income, we consider that there is no scale with respect to money for the top several percent of high income earners; on the other hand, the rest of the earners are highly influenced by the typical scale of income. we can understand the whole profile of the distribution merely by introducing the typical scale. as a result, we consider that the typical scale is a useful concept to understand various distributions where both fractal and non-fractal scale regions exist. we also consider that the 2d gravity model provides us with an effective tool to read the typical scales of various distributions in a systematic way. in the distributions studied in this paper, the values of the typical scale are comparable with the average values of the distributions. we consider that the typical scale is a significant characteristic parameter similar to the average in such distributions. the typical scale, however, can be read mainly from the data in the small scale region. it can thus be a more efficient characteristic parameter than the average in some situations. we used the 2d gravity model as a tool to discuss the significance of the typical scale concretely. besides the coincidence of the distributions, is there any direct physical connection between 2d gravity and personal income or citation number? we cannot answer this question at present. however, 2d gravity can also be formulated by a stochastic evolution equation. as we mentioned in sect. 1, the personal income distribution can also be described by a stochastic evolution equation. it may be possible to find the physical relation by investigating these formulations in detail.

the authors would like to express our gratitude to dr. w. souma, dr. y. fujiwara, professor h. aoyama and professor h. terao for valuable advice and discussions. we are also grateful to professor h. kawai for useful comments, especially on his work. thanks are also due to the members of yitp, where one of the authors (a.i.) stayed several times during the completion of this work.

b.p. mandelbrot, the fractal geometry of nature (freeman, san francisco, 1982).
r.n. mantegna and h.e. stanley, an introduction to econophysics (cambridge univ. press, u.k., 2000).
v. pareto, cours d'économie politique (macmillan, london, 1897).
r. gibrat, les inégalités économiques (paris, sirey, 1931).
w.w. badger, mathematical models as a tool for the social science, ed. b.j. west (new york, gordon and breach, 1980), pp. 87;
e.w. montroll and m.f. shlesinger, j. stat. phys. 32, 209 (1983).
h. aoyama, w. souma, y. nagahara, h.p. okazaki, h. takayasu and m. takayasu, fractals 8, 293 (2000);
w. souma, fractals 9, 463 (2001).
m. levy and s. solomon, int. j. mod. phys. c7, 595 (1997);
h. takayasu, a.h. sato and m. takayasu, phys. rev. lett. 79, 966 (1997);
j.p. bouchaud and m. mezard, physica a 282, 536 (2000);
j.p. bouchaud, arxiv:cond-mat/0008103;
d. sornette, physica a 290, 211 (2001).
w. souma, y. fujiwara and h. aoyama, cond-mat/0108482, kucp0189.
e.p. borges, cond-mat/0205520.
c. tsallis, j. stat. phys. 52, 479 (1988);
e.m.f. curado and c. tsallis, j. phys. a: math. gen. 24, l69 (1991); 24, 3187 (corrigendum) (1991); 25, 1019 (corrigendum) (1992);
c. tsallis, r.s. mendes and a.r. plastino, physica a 261, 534 (1998).
e. brezin, c. itzykson, g. parisi and j.b. zuber, commun. math. phys. 59, 35 (1978);
f. david, mod. phys. lett. a3, 1651 (1988);
j. distler and h. kawai, nucl. phys. b321, 509 (1989);
v.g. knizhnik, a.m. polyakov and a.b. zamolodchikov, mod. phys. lett. a3, 819 (1988).
v.a. kazakov, i.k. kostov and a.a. migdal, phys. 66, 2051 (1991);
j. ambjorn, b. durhuus and j. frohlich, nucl. phys. b257, 433 (1985);
f. david, nucl. phys. b257, 543 (1985).
h. kawai and r. nakayama, phys. lett. b306, 224 (1993).
s. ichinose, n. tsuda and t. yukawa, int. j. mod. phys. a12, 757 (1997);
a. fujitsu, n. tsuda and t. yukawa, int. j. mod. phys. a13, 583 (1998).
e. brezin and v. kazakov, phys. lett. b236, 144 (1990);
m. douglas and s. shenker, nucl. phys. b335, 635 (1990);
d.j. gross and a. migdal, phys. rev. lett. 64, 127 (1990);
m. douglas, phys. lett. b238, 176 (1990);
m. fukuma, h. kawai and r. nakayama, int. j. mod. phys. a6, 1385 (1991); commun. math. phys. 143, 371 (1992); 148, 101 (1992);
r. dijkgraaf, e. verlinde and h. verlinde, nucl. phys. b348, 435 (1991);
a. jevicki and t. yoneya, mod. phys. lett. a5, 1615 (1990);
p. ginsparg, m. goulian, m. plesser and j. zinn-justin, nucl. phys. b342, 539 (1990);
t. yoneya, commun. math. phys. 144, 623 (1992); int. j. mod. phys. a7, 4015 (1992);
j. goeree, nucl. phys. b358, 737 (1991).
s. jain and s.d. mathur, phys. lett. b286, 239 (1992);
j. ambjorn, s. jain and g. thorleifsson, phys. lett. b307, 34 (1993).
s. redner, eur. phys. j. b4, 131 (1998).
a. jevicki and j.p. rodrigues, nucl. phys. b421, 278 (1994).
in order to understand characteristics common to distributions which have both fractal and non - fractal scale regions in a unified framework , we introduce a concept of typical scale . we employ a model of 2d gravity modified by the term as a tool to understand such distributions through the typical scale . this model is obtained by adding an interaction term with a typical scale to a scale invariant system . a distribution derived in the model provides power law one in the large scale region , but weibull - like one in the small scale region . as examples of distributions which have both fractal and non - fractal regions , we take those of personal income and citation number of scientific papers . we show that these distributions are fitted fairly well by the distribution curves derived analytically in the 2d gravity model . as a result , we consider that the typical scale is a useful concept to understand various distributions observed in the real world in a unified way . we also point out that the 2d gravity model provides us with an effective tool to read the typical scales of various distributions in a systematic way .
|
the history of the origins of geometry is the same as that of arithmetic. the most ancient geometric concepts come from diverse cultures and are a consequence of the practical activities of man. in the seventh century bc geometry passed from egypt to greece. the pythagorean school, an idealist religious-philosophical sect, made great contributions to geometry, which were later embodied in a document of the third century bc known as "the elements" of euclid. in this work, geometry was presented as a very solid system, and its foundations did not undergo essential changes until the arrival of n.i. lobatchevski. lobatchevski wrote in 1835 "the new elements of geometry", which initiated a new non-euclidean geometry, known as the geometry of lobatchevski or hyperbolic geometry. on june 10, 1854, b. riemann gave a lecture at the university of göttingen in order to become a university professor. the topic of the lecture, at the suggestion of gauss, his protector and former professor during his undergraduate and doctoral studies, was geometry, and it was entitled "on the hypotheses which lie at the foundations of geometry". in 1872 f. klein presented a research program on the occasion of his admission as professor to the faculty of philosophy and the senate of the university of erlangen, germany. in this work klein proposed a new solution to the problem of how to classify and characterize the existing geometries, on the basis of projective geometry and group theory.

the initial problem is: how do we draw on paper something that our eyes see? the first step in trying to find laws that would solve this problem was taken by the painters. for example, the artist leon battista alberti (1404-1472) and the florentine engineer and architect fillipo brunelleschi (1377-1446) developed methods and a mathematical theory of perspective which was followed by the painters of the renaissance. to better understand a drawing in perspective, let us first look at the projection of an object onto a plane. in three-dimensional space, consider a fixed point (on a plane ) which we will call the center of the projection, and a fixed plane , parallel to , called the plane of the projection, which does not contain . the projection onto of a point in space, with respect to , is the point such that , and are collinear. [figure: projection in with center of the projection at ; drawing made with geogebra http://www.geogebra.org/cms/] it should be clear that: 1. is the image (projection) in not of a single point, but of all the points on the line ; that is, two points , different from , have the same image (projection) in if and only if , and are collinear. 2. there exist points in that have no image in : all those points that belong to the plane parallel to and containing . if we take the cartesian coordinate system in with the plane as the plane and the center of the projection as the origin of the system, then the coordinates of a point are . in this case, if does not belong to the plane (), it is projected to , where .

we could say that geometry is the study of objects that live in a certain ambient (space). the study refers to the search for invariants of relations between the objects (independent of the reference system) under the action of a group. to study a type of geometry we must then define: 1. an ambient space. 2. the objects that we are going to study in this space. 3.
a group that defines the motions. the motions are transformations of the space that preserve certain relations between the objects, which we are interested in studying. these transformations form a group. for example, in the euclidean geometry of the plane studied at school, the ambient space is the plane, the objects are points, lines, polygons, circles, etc., and the group of transformations is the group of transformations of the plane that preserve distance (the euclidean group).

the projective plane is an extension of the concept of the euclidean plane that we know. in the projective plane any pair of lines intersect, unlike lines in the euclidean plane. let us look at the following model of the projective plane. in three-dimensional space without the origin, consider the following equivalence relation: given two points and , is equivalent to , , if and only if there exists a real number such that , , and ; we define the projective plane as . the fundamental objects of the projective plane are points and lines:

1. *points.* the projective points of the projective plane are the equivalence classes of the relation . if , then are called homogeneous coordinates of the point of the projective plane. in this model we can "visualize" a projective point as a line in three-dimensional space passing through the origin.

2. *lines.* two different projective points in the projective plane determine a unique projective line. in this model, a projective line of the projective plane is a plane in passing through the origin.

the process described above is the projectivization of ; that is, the projective plane is the projectivization of . similarly, the projective line is the projectivization of . the general linear group is the group of nonsingular matrices; given a matrix , where is a real number, we will denote by the point with its coordinate . the _cross ratio_ between four distinct proper points , of the projective line is defined as . in the case of proper distinct points on a line, the cross ratio simplifies to . the only numerical invariant of the projective line under the action of the projective group is the cross ratio. [figure: the cross ratio; drawing and calculations made with geogebra http://www.geogebra.org/cms/]

* _projective geometry_ is the pair , where is the projectivization of and is the projective group ( is the projective line, the projective plane, the projective space, etc.). in general, the projective points are straight lines in passing through the origin.
* let us imagine that we are observing an object with a single eye from a single point. then everything we see is thanks to the rays of light that enter our eye. these are straight lines, or projective points. thus, observations can be considered as functions on the points of projective geometry.
* if we have an object placed on a distant plane and a canvas on which we are going to draw it, a perspective drawing is made by the intersection of the points of projective geometry between points of the real object and points on the canvas. that is, a perspective drawing is simply a homography.
* the only invariant of projective geometry is the cross ratio between four distinct collinear points, and it defines all the rules of perspective.
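the invariance of the cross ratio under homographies is easy to verify numerically. in the sketch below (python/numpy), four collinear points are given in homogeneous coordinates on the projective line and mapped by a nonsingular matrix; one common determinant convention for the cross ratio is assumed, and all names are ours.

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross ratio of four distinct points [x : y] of the projective line,
    in one common determinant convention; for proper points [x : 1] it
    reduces to (x_a - x_c)(x_b - x_d) / ((x_a - x_d)(x_b - x_c))."""
    det = lambda p, q: p[0] * q[1] - p[1] * q[0]
    return (det(a, c) * det(b, d)) / (det(a, d) * det(b, c))

# a projective transformation of the line: a nonsingular 2x2 matrix
m = np.array([[2.0, 1.0],
              [1.0, 3.0]])
points = [np.array([x, 1.0]) for x in (0.0, 1.0, 3.0, 4.0)]

r_before = cross_ratio(*points)
r_after = cross_ratio(*[m @ p for p in points])
assert np.isclose(r_before, r_after)   # the cross ratio is invariant
```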
|
the three key documents for the study of geometry are: 1) "the elements" of euclid, 2) the lecture by b. riemann at göttingen in 1854 entitled "über die hypothesen welche der geometrie zu grunde liegen" (on the hypotheses which underlie geometry), and 3) the "erlangen program", a document written by f. klein (1872) on the occasion of his admission as professor at the faculty of philosophy and the senate of the erlangen university. in the latter document f. klein introduces the concept of a group as a tool to study geometry. the concept of a group of transformations of space was known at the time. the purpose of this informative paper is to show a relationship between geometry and algebra through an example, the projective plane. the erlangen program continues to this day to be a guideline for how to study geometry.
|
the fast growth of internet bandwidth usage, mainly due to the exponential increase in internet videos (youtube) and iptv, has put the internet infrastructure under high pressure. according to a cisco survey, by 2014 the network traffic is expected to approach 64 exabytes per month, with videos accounting for more than 91% of global traffic. redundancy elimination (re) techniques have been proposed to handle the huge amount of data in the access networks. their main aim is to remove requests and/or responses of redundant data in the network, reducing the traffic and costs in the access network. re techniques can be classified into two kinds: (a) caching to remove transfers, and (b) data replacement with a shim header. the former relies on caching network-level objects and storing them temporarily in the network. caching techniques rely on the redundancy of the traffic, implying that a large portion of the network traffic is duplicated and could be cached for later requests. another incentive is that storage prices have decreased faster than bandwidth costs. the second approach replaces redundant data with a shim header in an upstream middlebox (usually close to the server) and reconstructs it in a downstream middlebox before delivering it to the client. commercial products provide wan optimization mechanisms through re in enterprise networks. recently, re has received considerable attention from the research community. in , the authors propose a network-wide approach for redundancy elimination through the deployment of routers that are able to remove redundant data in ingress routers and reconstruct it in egress routers. however, they require tight synchronization between ingress and egress routers in order to correctly reconstruct the packet, and they also require a centralized entity to compute the redundancy profiles. in , the authors propose to use caches in the local host and use prediction mechanisms to inform servers that they already have the following redundant data. however, they are not able to share the cached data among other nodes due to the local characteristic of the cache. although both caching and re have been around in the research community, there has not been any thorough comparison of the effectiveness of the two above-mentioned strategies: in-network caching vs. redundancy elimination. work in combines in-network caching and re, but limits the applicability of the solution to a single content source only. in this paper, we perform a comparison between an in-network caching architecture (inca) and state-of-the-art re solutions. although inca models a generic network caching architecture, it is effectively ccn-like. however, as we want to understand the performance differences between caching and re, we do not consider low-level protocol details. we perform an extensive comparison, using real network topologies from rocketfuel, between inca and re. we have implemented the different solutions on our testbed and compare them by running them on real network topologies. we consider the position of a single isp interested in reducing its traffic both within and outside of its own network. our key findings can be summarized as follows: * in terms of reducing external network traffic, inca is always superior when compared to isp-internal re solutions. end-to-end re solutions can reduce external traffic, but are outside the control of the isp; furthermore, they are not as effective as inca.
* in terms of reducing internal network traffic, inca is in most cases clearly superior to state-of-the-art re solutions, with at least 50-65% improvements in internal traffic reduction.

the organization of this paper is as follows. section [s:background] presents the background information and related work about in-network caching and redundancy elimination solutions. section [s:architecture] introduces the in-network caching architecture (inca), describing its main features. section [s:evaluation] presents the evaluation methodology and the comparison results between inca and re solutions. finally, section [s:conclusion] summarizes the paper.

recently, information-centric networking (icn), e.g. , has emerged as a more general, network-wide caching solution. in icn, content caches in the network (e.g., in routers) store content that passes through them, and if they see requests for the same content, they are able to serve it from their cache. inca is essentially an icn architecture, but our intention is not to provide yet-another-icn-architecture. instead, inca simply considers the key features of icn architectures, namely caching and routing towards some point of origin for content, and ignores practical, low-level protocol details. inca draws inspiration from ccn and our previous work, but does not specify low-level behavior. other caching proposals also exist. cache-and-forward (cnf) is an in-network caching architecture where routers have a large amount of storage. these routers perform content-aware caching, routing and forwarding of _packet_ requests based on location-independent identifiers, similar to ccn. modern re schemes use a fingerprint-based data stripping model.
despite the improvement over the fixed node requirement , the use of local storage prevents the sharing among other nodes , increasing the overall sharing capacity and hit ratio .therefore , the re is not network wide , but for redundant data that may be requested again in the local node .inca focuses on the following key aspects of icn architectures : routing requests for content towards a known point , caching of content , and forwarding responses back to the requesting entity .this model is similar to ccn .the basic in - network caching mechanism is performed by a _ content router _ ( cr ) .a cr is a data forwarder similar to a regular router , but has some internal memory that can be used to store data in transit .each piece of content has a _chunk i d _ as its permanent identifier from a cryptographic hash function .any cr on the path between a server and clients caches the data in its memory .further requests can be served by the local copy in the cr . for a further discussion on this model and its limitations ,we refer the reader to . as in , we use three admission policies for deciding which content a cr caches . * * all * admits all objects into the storage at the cr . in other words ,every object that transits through the cr is taken into storage and another object is possibly evicted .this is the typical behavior of web caches . ** cachedbit * sets one bit in the cr header to indicate whether a given piece of content has already been cached or not , preventing duplicated content along the same path .if the path between the client and server is hops , then a cr will cache the content with probability and once the content is cached , downstream crs will not cache it , with the exception of the last cr on the path which will always cache it ( see section [ ss : experimental - results ] for an explanation ) . * * neighbor search ( nbsc ) * works like _ cachedbit _ , but if a cr encounters a miss , it will query neighboring crs for that piece of content .crs periodically exchange bloom filters of their contents with their neighbor crs .please see for details about the size of bloom filters , exchange frequency , and query radius .we use least recently used policy to decide what to evict when the storage at the cr is full .the results in showed that a cachedbit - like admission policy is needed to get good caching performance , but that the addition of nbsc gives a considerable boost in reducing network traffic .we chose 4 real - world networks from rocketfuel : exodus , sprint , at&t and ntt , and performed a set of experiments using different cooperative caching strategies .table [ tab : topologies ] shows an overview of the networks .all the experiments are performed on our department cluster consisting of dell poweredge m610 nodes .each node is equipped with 2 quad - core cpus , 32 gb memory , and connected to 10-gbit network .all the nodes run ubuntu smp with 2.6.32 kernel ..topologies used in experiments [ cols="^,^,^,^",options="header " , ] -5 mm figure [ fig : footprint ] shows the internal traffic reduction as measured by the network footprint reduction .the y - axis shows the fraction of internal traffic that was reduced by the caches in the crs . as with the other metrics ,the differences between the three admission policies are small .again , nbsc is clearly superior to cachedbit which , in turn , is clearly superior to the all policy .footprint reduction is the reason why we tweaked cachedbit to create a copy of the chunk at the cr closest to the client . 
without the additional copy ,all - policy is better at footprint reduction than cachedbit .we observed that this additional copying drops the hit rate by a negligible amount , but raises the footprint reduction considerably .contrasting the numbers in table [ tab : smartre - footprint - ideal ] with the inca footprint reductions in figure [ fig : footprint ] , we see that they are similar in value . for small inca cache sizes , smartre yields a higher reduction , whereas for larger cache sizes , inca has the upper hand . however , even for very modest cache sizes , nbsc is able to achieve an equal footprint reduction to smartre and for large cache sizes , the footprint reduction is improved by 5065% .cooperative caching is therefore much more efficient at reducing internal traffic than smartre .recall that our inca experiments considered one chunk to represent one file , whereas in the smartre experiments , a chunk is one packet .this means that the footprint reduction numbers can not be directly compared since traffic is different in the two cases .however , based on the numbers presented in , we can infer a mapping between smartre and inca experiments . in it is shown that smartre gets close to its ideal performance with 6 gb of storage per router .assuming the same 6 gb of storage per cr , the case of 1024 chunks of storage , where 1 chunk equals 1 file , would imply the average file size to be about 6 mb .if the content is a mixture of text , images , and short videos , this seems like a reasonable , if not even conservative , number .( for content consisting mainly of larger videos , this would not be sufficient . )we ran experiments with smartre where we took the ideal cache size used to obtain the numbers for table [ tab : smartre - footprint - ideal ] , and set it to , , and of that value . for each case , we then ran the experiment to obtain the reduction in footprint .this allows us to plot the inca and smartre footprint reductions on the same x - axis , shown in figure [ fig : footprint ] .this confirms that inca is more efficient in reducing internal traffic in the network .the additional reduction in traffic varies between almost 200% for small caches and 50% for large caches .-5 mm cachedbit is similar to the heuristic `` heur1 '' from in how it attempts to place the content . in ,the performance of these two heuristics was found lacking when compared to the smartre algorithm with its centralized controller deciding on what to cache where .if the same translates to an inca caching network , a centralized controller deciding on placement of chunks in crs would be a superior choice .however , similar placement problems are often np - complete , although some simplifications are likely to yield a linear program .we have not considered a central placement agent in inca , although it could be included in future work .an important difference is that inca is able to share cache space between clients , whereas smartre has fixed buckets for each ingress - egress flow .this gives inca more possibilities in exploiting the cached data , thus reducing footprint and improving hit rate ._ we believe this sharing of cache space between all client and server pairs is what gives inca an advantage over smartre . _contrasting our results to the single server case presented in is part of our future work . 
comparing inca with smartre , we come to the following conclusions : * for external traffic reduction , inca is always superior , because smartre has no effect on external traffic . * for internal traffic reduction , the performance of inca ( with neighbor search ) is in most cases clearly superior , with up to 50 - 65% more reduction in internal traffic . however , the differences depend on how the mapping between cache sizes is done and on the file size distribution , so in different environments the results could be different . however , in our experimental environment inca with neighbor search is far more effective in reducing both internal and external traffic . + figure [ fig : inca - vs - endre ] shows the bandwidth savings of both inca and endre on three different networks . we show cache sizes of 128 and 256 chunks . the bandwidth savings of endre remain the same on all three networks because it is an end - to - end solution . the network topology does not affect its performance . we can clearly see that inca is superior to endre . even the all strategy is slightly better than endre in all three networks . pack is another end - to - end re solution , but according to , its performance is about 2% worse than endre . larger cache sizes improve inca 's performance ; figures are not shown due to space limitations . note that inca 's savings are a combination of the results shown in figures [ fig : hitrate ] and [ fig : footprint ] . anand et al . have evaluated real trace captures and their results suggest that a middlebox - based solution ( i.e. , something akin to inca ) has an advantage over end - to - end re solutions in saving network bandwidth . inca does have a definite advantage in not requiring synchronization between the server and client , and since some content can be served from crs along the path , we avoid having to do a round - trip to the origin of the content , possibly speeding up the transfer . in this paper we have compared in - network caching with standard redundancy elimination solutions in terms of their effectiveness at reducing network traffic load . as an example of in - network caching , we have presented inca , a caching architecture which aims at capturing the salient features of information - centric networks . we have kept the design of inca minimal and only consider simple solutions for the problems of caching and routing . our comparison on rocketfuel topologies shows that inca is superior to smartre in its ability to reduce external and internal network traffic , with additional reductions of up to 65% in internal traffic . similar results hold for comparisons against end - to - end re solutions .
|
network - level redundancy elimination ( re ) techniques have been proposed to reduce the amount of traffic and the costs of wan access in the internet . re middleboxes are usually placed in the network access gateways and strip off the repeated data from the packets . more recently , generic network - level caching architectures have been proposed as an alternative to reduce the redundant data traffic in the network , presenting benefits and drawbacks compared to re . in this paper , we compare a generic in - network caching architecture against state - of - the - art redundancy elimination ( re ) solutions on real network topologies , presenting the advantages of each technique . our results show that in - network caching architectures outperform state - of - the - art re solutions across a wide range of traffic characteristics and parameters .
|
program termination is a hot research topic in program analysis . the last few years have witnessed the development of termination analyzers for mainstream programming languages such as c and java with remarkable precision and performance . these systems are largely based on techniques and tools coming from the field of declarative constraint programming . beyond the specificities of the targeted programming languages and after several abstractions ( see , e.g. , ) , termination analysis of entire programs boils down to termination analysis of individual loops . various categories of loops have been identified : for the purposes of this paper we focus on _ single - path linear constraint _ ( slc ) loops . an slc loop over variables , , has the form where and are column vectors of variables , is an integer matrix , , and . such a loop can be conveniently written as a constraint logic programming rule : when variables take their values in ( resp . , ) , we call such loops _ integer _ ( resp . , _ rational _ ) loops . they model a computation that starts from a point ; if is false , the loop terminates ; otherwise , a new point is chosen that satisfies and iteration continues , replacing the values of by those of . loop termination can always be ensured by a _ ranking function _ , a function from or to a well - founded set . as the domain of is well - founded , the computation terminates . to the best of our knowledge , decidability of universal termination of slc loops ( i.e. , from any starting point and for any choice of the next point at each iteration ) is an open question . some sub - classes have been shown to be decidable . for instance , braverman proves that termination of loops where the body is a _ deterministic _ assignment is decidable when the variables range over . the problem is open for the non - deterministic case , as stated in his paper . on the other hand , various generalizations have been shown to be undecidable . a way to investigate loop termination is to restrict the class of considered ranking functions . in the following section , we recall a well - known technique for computing linear ranking functions for rational slc loops . in section [ sec : eventual - linear - ranking - functions ] we present the main contribution of the paper , namely the definition of _ eventual linear ranking functions _ : these are linear functions that become ranking functions after a finite unrolling of the loop . we shall see that the number of unrollings is not pre - defined , but depends on the data processed by the loop . section [ sec : eventual - linear - ranking - functions ] presents complete decision procedures for the existence of eventual linear ranking functions of slc loops . the presentation is gradual and illustrates the algorithms by means of constraint logic programming ( clp ) technology and dialogs with real clp tools . section [ sec : related - work - and - experiments ] discusses related work and a preliminary experimentation conducted on the benchmarks proposed in two very recent papers . section [ sec : conclusion - and - future - work ] concludes the paper . we first define the notion of linear ( resp . , affine ) ranking function for an slc loop . [ def - fn - rng - lin ] let be the slc loop where is an n - ary relation symbol . a _ linear _ ( resp . , _ affine _ ) _ ranking function _ for is a linear ( resp . , affine ) map from to such that in words , continuation of the iteration , i.e.
, , entails that stays positive and strictly decreases by at least for each iteration . we point out that if is not satisfiable , the loop ends immediately and any linear function is a ranking function . in the paper , we assume that is satisfiable . might seem too restrictive when working with rational numbers , as one might prefer to replace the decrease by by a decrease by , a fixed positive quantity . actually , by multiplying such an -decrease ranking function by , we see that the two definitions are equivalent with respect to the existence of a ranking function . although the class of affine ranking functions subsumes the class of linear ranking functions , any decision procedure for the existence of linear ranking functions can be extended to a decision procedure for the existence of affine ranking functions . to see this , note that an affine ranking function for where is distinct from the variables in . in this section , we focus on linear ranking functions for slc loops . after the presentation of a formulation of farkas lemma , we consider the problem of verifying linear ranking functions , and then the detection of such ranking functions . a linear inequation over rational numbers is a logical consequence of a finite satisfiable conjunction of linear inequations when is a linear positive combination of the inequations of . more formally , let be and suppose that has at least one solution . farkas lemma states the equivalence of and . given an slc loop and a linear function , we can easily check whether is a ranking function for by testing the unsatisfiability of and . this test has polynomial complexity and can be done with a complete rational solver such as , e.g. , clp( ) . [ ex : linear - ranking - function ] for the slc loop : the linear function is a ranking function , as proved by the following _ sicstus prolog _ session .

....
?- use_module(library(clpq)).
% library(clpq) compiled
true.
?- {X >= 0, Y1 =< Y-1, X1 =< X+Y, Y =< -1, X < 1+X1}.
false.
?- {X >= 0, Y1 =< Y-1, X1 =< X+Y, Y =< -1, X < 0}.
false.
?-
....

given an slc loop , we would like to know whether it admits a linear ranking function . this problem , which has been studied in depth , is decidable in polynomial time . let us consider example [ ex : linear - ranking - function ] and formally ask whether there exists a ranking function of the form : this formulation of the problem is executable by quantifier elimination on a symbolic computation system like reduce :

....
1: load_package redlog;
2: rlset r;
3: f := ex({a,b}, all({x,y,x1,y1},
       (x>=0 and y1<=y-1 and x1<=x+y and y<=-1)
       impl (a*x+b*y >= 1+a*x1+b*y1 and a*x+b*y >= 0)));
4: rlqe f;
....

statement ` 1 ` loads the quantifier elimination module . statement ` 2 ` defines as the domain of discourse . statement ` 3 ` initializes formula . statement ` 4 ` runs quantifier elimination over and returns an equivalent formula , ` true ` in this case . hence , formula is true and there exists at least one linear ranking function . we can now determine the coefficients of function as follows :

....
5: g := all({x,y,x1,y1},
       (x>=0 and y1<=y-1 and x1<=x+y and y<=-1)
       impl (a*x+b*y >= 1+a*x1+b*y1 and a*x+b*y >= 0));
6: rlqe g;
....

we obtain and all values for and satisfying the above formula , such as and , are equally good .
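as an aside , the clp(q)-based verification session above can be reproduced with any solver for linear real arithmetic . the following sketch uses the z3 python bindings ; this is our choice of tool for illustration , not the one used in the paper , and the candidate ranking function f(x , y) = x is the one from the example above .

....
from z3 import Reals, And, Or, Solver, unsat

x, y, x1, y1 = Reals('x y x1 y1')
loop = And(x >= 0, y1 <= y - 1, x1 <= x + y, y <= -1)
f, f1 = x, x1                      # candidate ranking function f(x, y) = x
s = Solver()
# negation of (strict decrease by 1) and (positivity); unsat proves both hold
s.add(loop, Or(f < 1 + f1, f < 0))
print(s.check() == unsat)          # -> True: f is a linear ranking function
....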
unfortunately , the complexity of the algorithms involved will prevent us from systematically obtaining such a result within acceptable time and memory bounds . we now recall the most famous algorithm for this problem . considering and as _ parameters _ of the problem , we can apply farkas lemma . for the strict decrease of the ranking function we have . the application of farkas lemma to this problem can be depicted as follows : we know that formula ( [ decrease - of - the - ranking - function ] ) is equivalent to the existence of four non - negative rational numbers , , such that : the positivity of the ranking function , that is , can be written as . by farkas lemma , formula is equivalent to the existence of four other non - negative rational numbers , , such that : summarizing , by farkas lemma , formula is equivalent to the conjunction of formulas and : in theory , the problem of the existence of a linear ranking function is polynomial . since computing one solution ( that is , values for and ) is not harder than determining its existence , a `` witness '' function , which would constitute a _ termination certificate _ , can also be computed in polynomial time . the space of all linear ranking functions as defined in definition [ def - fn - rng - lin ] , described by parameters and , can be obtained by elimination of and from using , e.g. , the fourier - motzkin algorithm . for example , the sicstus prolog program

....
fm(A, B) :-
    { L1 >= 0, L2 >= 0, L3 >= 0, L4 >= 0,
      LP1 >= 0, LP2 >= 0, LP3 >= 0, LP4 >= 0,
      A = L1 + L2, B = L2 + L3 - L4,
      A = L2, B = L3, 1 =< L3 + L4,
      A = LP1 + LP2, B = LP2 + LP3 - LP4,
      0 = LP2, 0 = LP3, 0 =< LP3 + LP4 }.
....

can be queried as follows :

....
?- fm(A, B).
B = 0,
{A >= 1}.
....

it can be shown that the computed answer is equivalent to the ( significantly more involved ) condition generated by reduce . in the previous section we have illustrated a method to decide the existence of a linear ranking function for a rational slc loop , something that implies termination of the loop . of course , the method can not decide termination in all cases . [ ex - fn - rng - lin - evt - p ] the loop does not admit a linear ranking function . can we conclude that such a loop does not always terminate ? no , because it may admit a non - linear ranking function . in this section we will extend the previous method so as to detect _ eventual linear ranking functions _ , that is , linear functions that behave as ranking functions _ after a finite number of executions of the loop body _ . suppose that the considered slc loop is always given with a linear function that increases at each iteration of the loop in the following sense : [ def - inc - lin - fn ] let be the slc loop . a function is _ increasing for _ if it is linear and satisfies : [ ex - fn - rng - lin - evt - f ] the function is increasing for the loop of , since decreases by at least at each iteration . the generalization to affine functions is useless . moreover , as we are merely interested in the existence of an increasing function , the value of the increase ( or ) is irrelevant .
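before giving the central definition of the next section , note that the farkas constraint system of the previous section ( the body of the fm/2 program above ) is an ordinary linear program , so the existence of a linear ranking function for the example loop can also be checked with an off - the - shelf lp solver . the sketch below is our own re - encoding with scipy , not tooling from the paper .

....
from scipy.optimize import linprog

# variable order: [a, b, l1, l2, l3, l4, lp1, lp2, lp3, lp4]
A_eq = [
    [1, 0, -1, -1,  0, 0,  0,  0,  0, 0],   # a = l1 + l2
    [0, 1,  0, -1, -1, 1,  0,  0,  0, 0],   # b = l2 + l3 - l4
    [1, 0,  0, -1,  0, 0,  0,  0,  0, 0],   # a = l2
    [0, 1,  0,  0, -1, 0,  0,  0,  0, 0],   # b = l3
    [1, 0,  0,  0,  0, 0, -1, -1,  0, 0],   # a = lp1 + lp2
    [0, 1,  0,  0,  0, 0,  0, -1, -1, 1],   # b = lp2 + lp3 - lp4
    [0, 0,  0,  0,  0, 0,  0,  1,  0, 0],   # lp2 = 0
    [0, 0,  0,  0,  0, 0,  0,  0,  1, 0],   # lp3 = 0
]
A_ub = [[0, 0, 0, 0, -1, -1, 0, 0, 0, 0]]   # l3 + l4 >= 1
bounds = [(None, None)] * 2 + [(0, None)] * 8   # the multipliers are non-negative
res = linprog(c=[0] * 10, A_ub=A_ub, b_ub=[-1], A_eq=A_eq, b_eq=[0] * 8,
              bounds=bounds, method="highs")
print(res.status == 0, res.x[:2])   # feasible, and (a, b) is a witness, e.g. a >= 1, b = 0
....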
we can now give the definition which is central to our paper . [ def - fn - rng - lin - evt ] let be the rational slc loop in clausal form , where is an -ary relation ; let also be a linear increasing function for . an _ eventual linear ranking function _ for is a linear map of to such that . for comparison with definition [ def - fn - rng - lin ] , remark that the threshold is existentially quantified and that is imposed in the implication antecedent . it should also be noted that , if such a rational exists , then each satisfies the condition of definition [ def - fn - rng - lin - evt ] . on the other hand , since , by hypothesis , strictly increases at each iteration , there are two cases : either is bounded from above by a constant , and thus the loop will terminate ; or , after a finite number of iterations , will cross the threshold and becomes a linear ranking function in the sense of section [ sec : linear - ranking - functions ] , so that , again , the loop terminates . eventual linear ranking functions are a generalization of linear ranking functions . [ lrf - implies - elrf ] let be an slc loop . if is a linear ranking function for , then there exists an increasing function such that has an eventual linear ranking function . by hypothesis , there exists a linear ranking function for . the linear function is non - positive and strictly increasing for . considering , it can be seen that the function is an eventual linear ranking function for . the generalization is strict , as the loop of example [ ex - fn - rng - lin - evt - p ] has no linear ranking function , but does have an eventual linear ranking function , as will be shown in the next section . as a first step towards full automation of the synthesis of eventual linear ranking functions , we assume that an slc loop is given with a particular linear increasing function . let us consider , e.g. , the slc loop of example [ ex - fn - rng - lin - evt - p ] and the increasing function of example [ ex - fn - rng - lin - evt - f ] . defining , is an eventual linear ranking function when . this definition of the problem , that we will denote for brevity with , is also solvable via quantifier elimination , hence the problem is decidable . considering , and as parameters , we can apply farkas lemma as follows : hence , formula is equivalent to the conjunction of formulas , i.e. , ensuring the positivity of the ranking function . let us focus on . we observe that the product leads to a non - linearity that we can circumvent by noting that , as , either ( hence ) or . in the latter case , we introduce a new variable . we have the property : [ lem1-fn - rng - lin - evt ] formula is equivalent to the disjunction . in our case , is equivalent to . ( ) let be a rational number and four non - negative rational numbers such that holds . if , then simplifies to , which is true . if , we take and we can see that is true . ( ) assume first that is true . then , taking and ( any rational number would be fine for ) , we see that is true . assume then that is true . taking ( this is always possible as ) , we observe that there exists such that is true . for the positivity condition , we can prove in a similar way : [ lem2-fn - rng - lin - evt ] formula is equivalent to the disjunction .
in our case , is equivalent to . combining the previous results gives : formula is equivalent to $[\,\mathrm{dec}_1(a , b ) \lor \mathrm{dec}_2(a , b)\,] \land [\,\mathrm{pos}_1(a , b ) \lor \mathrm{pos}_2(a , b)\,]$ . thanks to the previous lemmata , it only remains to justify the equivalence between the formulas and . ( ) let be a rational such that . we have and because . ( ) assume the existence of such that , and the existence of such that . then the rational verifies and shows that . back to our initial problem , the existence of an eventual linear ranking function is equivalent to the satisfiability of at least one of the following four linear systems : which we can decide in polynomial time . for our running example , is satisfiable , as proved by the following sicstus prolog query :

....
?- dec2pos1.
?-
....

after compilation of the program :

....
dec2pos1 :-
    { L1 >= 0, L2 >= 0, L3 >= 0, L4 > 0,
      A = L1 + L2, B = L2 + L3 - L4,
      A = L2, B = L3, 1 =< L3 + P,
      LP1 >= 0, LP2 >= 0, LP3 >= 0,
      A = LP1 + LP2, B = LP2 + LP3,
      0 = LP2, 0 = LP3, 0 =< LP3 }.
....

the procedure we have informally outlined by means of examples is actually completely general . it is embodied in algorithm [ algo1 ] , which is a ( correct and complete ) decision procedure for the existence of an eventual linear ranking function given a linear increasing function . given , an slc loop , and , a linear increasing function for , it returns if and only if , for some vector , is an eventual linear ranking function for . [ algo - is - a - decision - procedure - for - elrf ] let be an slc loop and an increasing function for . decides in polynomial time the existence of an eventual linear ranking function for . computing an eventual linear ranking function and its associated threshold can be done as follows : * if is satisfiable , we compute a solution ; is a standard linear ranking function and proposition [ lrf - implies - elrf ] applies ; * if is satisfiable , we compute a solution , , and we take ; * if is satisfiable , we compute a solution , , and we take ; * if is satisfiable , we compute a solution , , , , and we take . continuing with , here is the most general solution of :

....
?- {L1 >= 0, L2 >= 0, L3 >= 0, L4 > 0,
    A = L1 + L2, B = L2 + L3 - L4,
    A = L2, B = L3, 1 =< L3 + P,
    LP1 >= 0, LP2 >= 0, LP3 >= 0,
    A = LP1 + LP2, B = LP2 + LP3,
    0 = LP2, 0 = LP3, 0 =< LP3}.
B = 0, L1 = 0, L3 = 0, LP2 = 0, LP3 = 0,
{LP1 = L4, L2 = L4, A = L4, L4 > 0, P >= 1}.
?-
....

one particular solution is , , . hence is an eventual linear ranking function from the threshold . we also provide a decision procedure for the existence of an eventual _ affine _ ranking function . the existence of an eventual affine ranking function for an slc loop and associated increasing function , , can be decided in polynomial time . from , , we construct , where does not occur in .
note that is an slc loop and that is an increasing function for . algorithm [ algo1 ] applied to gives an answer in polynomial time . if algorithm [ algo1 ] returns * true * then , by correctness , there exists a threshold and an eventual linear function for . we readily check that is an eventual affine ranking function for from . if algorithm [ algo1 ] returns * false * then , by completeness , there is no eventual linear ranking function for . assuming there exists an eventual affine ranking function from for , then should be an eventual linear ranking function from for , which is a contradiction . hence there is no eventual affine ranking function for . [ ex - fn - rng - aff - evt - p ] the slc loop associated to the linear increasing function does not admit an eventual linear ranking function , but does admit as an eventual affine ranking function from . we now consider the problem in its full generality : given an slc loop , does there exist an increasing function for such that admits an eventual linear ranking function ? note that the space of increasing functions can be obtained as a convex set over their coefficients via the farkas lemma and the elimination of existentially quantified variables . [ def - inc ] let be an slc loop . we denote by the set of vectors such that is increasing for . [ ex - full - detection ] a linear ranking function does not exist for the slc loop . induces the space of functions of the form , which are increasing for . let us consider the slc loop of example [ ex - full - detection ] associated to an increasing function induced by . defining and considering and as parameters , is an eventual linear ranking function when . this definition of the problem is denoted . we can apply farkas lemma as follows : formula is equivalent to the conjunction of formulas , i.e. , ensuring the positivity of the ranking function . let us focus on . we observe that the products with lead to a non - linearity that we can circumvent by noting that , as , either or . in the latter case , we introduce a vector of two new variables where and , together with , as previously , the new variable . formula is equivalent to the disjunction where , in our case , is equivalent to . for the positivity condition , formula is equivalent to the disjunction where we introduce a vector of two new variables where , together with , as previously , the new variable . in our case , is equivalent to . back to our initial problem , the existence of an eventual linear ranking function is equivalent to the satisfiability of at least one of the following four systems : 1 . : this case means that the increasing function and are irrelevant . in other words , for each solution , is a standard linear ranking function and proposition [ lrf - implies - elrf ] applies . 2 . : note that satisfiability of is not sufficient , as its solution might lead to the coefficients and ( is strictly positive by definition ) , which could correspond to a non - increasing linear function . the third conjunct , , ensures that we stay within the space of increasing functions . 3 . : this case is symmetric to the previous one . 4 . : this case combines the two previous ones . note that the condition ensures that we consider the same linear ranking function and the same increasing function both in and in . for our running example , the following sicstus prolog query proves that is satisfiable :

....
?- dec2incpos1.
?-
....

after compilation of the program :

....
dec2incpos1 :-
    { % dec2:
      L1 >= 0, L2 >= 0, L3 >= 0,
      A1 = L1 + L2 + P1, A1 = L2, L > 0,
      A2 = L2 - L3 + P2, A2 = L3, -1 >= -L3 - P,
      % inc:
      B1 =< -2, B1 - 2*B2 = 0,
      P1 =< -2*L, P1 - 2*P2 = 0,
      % pos1:
      LP1 >= 0, LP2 >= 0, LP3 >= 0,
      A1 = LP1 + LP2, 0 = LP2,
      A2 = LP2 - LP3, 0 = LP3, 0 >= -LP3 }.
....

the procedure we have informally outlined by means of examples is actually completely general and is embodied in algorithm [ algo2 ] . given , an slc loop , it returns if and only if there exists an increasing function for and such that is an eventual linear ranking function for . [ algo2-is - a - decision - procedure - for - elrf ] let be an slc loop . decides the existence of an increasing function and a linear function such that is an eventual linear ranking function for . exactly as in the previous section , if algorithm [ algo2 ] returns * true * then we can extract an increasing function , a threshold , and a linear function . we can also generalize the approach to the fully automated detection of eventual affine ranking functions . with respect to complexity , algorithm [ algo2 ] is not polynomial for two reasons . in step 1 , computing the set of linear increasing functions for requires the elimination of existentially quantified variables . in step 2 , formula leads to a non - linear system and we may have to check its satisfiability in step 10 . although decidable , we are not aware of the existence of polynomial algorithms for these problems . given an slc loop , an associated increasing function , and a linear function , we want to know whether is a ranking function . we can run algorithm [ algo1 ] , with the coefficients fully instantiated . if needed , we can compute the threshold as explained in section [ sec : elrf - semi - detection ] . it follows that the verification problem is polynomial . we have implemented both algorithms in sicstus prolog . however , as of algorithm [ algo2 ] leads to a non - linear system , we relaxed this formula to , which is now linear . as shown in the following proposition , the existence of an eventual linear ranking function ( hence termination ) is preserved , but the associated increasing function is not linear . let be an slc loop and assume that is true . then there exists a non - linear increasing function such that is an eventual linear ranking function for ( , ) . as is true , there exists an increasing function and a rational such that when the value of is beyond , decreases . similarly , as is true , there exists an increasing function and a rational such that when the value of is beyond , is non - negative . let and . one readily checks that is a non - linear increasing function for and is an eventual linear ranking function for . as eventual linear ranking functions generalize linear ranking functions , we focus on related work that goes beyond linear ranking functions for slc loops . in order to appreciate the relative power of the different methods , we report on the results obtained with our algorithms on the loops discussed in the papers where the other approaches were introduced . the method proposed in divides the state space to find a linear ranking function on each subspace , and then checks that the transitive closure of the transition relation is included in the union of the ranking relations . as the process may not terminate , one needs to bound the search . also proposes a test suite , upon which we tested our approach . as expected , every loop
in table 1 which terminates with a linear ranking also has an eventual linear ranking . moreover , loops 6 , 12 , 13 , 18 , 21 , 23 , 24 , 26 , 27 , 28 , 31 , 32 , 35 , and 36 admit an eventual linear ranking function ( which is discovered using neither nor its relaxation ) . these are all shown terminating with the tool of . on the other hand , loops 14 , 34 , and 38 do have a _ disjunctive ranking function _ ( following the terminology of ) , but do not admit an eventual linear ranking function . shows how to partition the loop relation into behaviors that terminate and behaviors to be analyzed in a subsequent termination proof after refinement . this work addresses both termination and conditional termination problems in the same framework . concerning the benchmarks proposed in ( table 1 ) , loops 6 - 41 all have an eventual linear ranking function except for loops 11 , 14 , 30 , 34 , and 38 . a method based on abstract interpretation for synthesizing ranking functions is described in . although the work contains no completeness result , the approach is able to discover piecewise - defined ranking functions . finally , let us point out that the concept of _ eventual termination _ appeared first in . the class of loops studied in these works is wider but , as the technique of relies on finite differences , this approach is incomplete . on the other hand , while is also based on farkas lemma , it seems [ a. r. bradley , personal communication , may 2013 ] that the _ polyranking _ approach can not prove , e.g. , termination of the slc loop , which admits an eventual linear ranking function . we have proposed a definition of eventual linear ranking function for slc loops that strictly generalizes the concept of linear ranking function . we also defined two correct and complete algorithms for detecting such ranking functions under different hypotheses . the first algorithm shows that the mere knowledge of the right increasing function allows checking the existence of , or even synthesizing , an eventual linear ranking function in polynomial time . the second algorithm decides the existence of an eventual linear ranking function in its full generality but is not polynomial . we have also explained how to extend the algorithms for deciding eventual affine ranking functions . the algorithms admit a simple formulation as a constraint logic program and have been fully implemented in sicstus prolog inside the binterm termination prover . it has to be noted that a nice property of the notion of eventual ( not necessarily linear ) ranking function is its simplicity . this is important when functions that witness termination have to be provided ( and/or understood ) by humans . this is the case when annotating a c / acsl program with loop variants : for the cases when a ranking function to be specified in a ` loop variant ` clause is not obvious , one could extend acsl with a ` loop prevariant ` clause that allows the annotator to indicate a candidate increasing function . in the linear case , our first algorithm can efficiently decide whether the two clauses constitute a termination witness . on the other hand , there obviously are , as indicated in section [ sec : related - work - and - experiments ] , more complex classes of ranking functions and algorithms that allow to establish the termination of slc loops that do not admit an eventual linear ranking function .
a proper assessment of the relative merits of these approaches , all extremely recent , requires an extensive experimental evaluation that is one of our objectives for future work . the verification of linear ranking functions for integer slc loops , i.e. , checking the satisfiability of and , is an -complete problem . concerning the existence of linear ranking functions , as the farkas lemma is not true for the integers , the method presented in section [ sec : linear - ranking - functions ] is not valid . the problem , which has been solved very recently in , is -complete , and the paper proposes an exponential - time algorithm . extending the present approach to integer slc loops is another interesting idea to consider for future work . a. m. ben - amram and s. genaim . on the linear ranking problem for integer linear - constraint loops . in r. giacobazzi and r. cousot , editors , _ proceedings of the 40th annual acm sigplan - sigact symposium on principles of programming languages ( popl'13 ) _ , pages 51 - 62 , rome , italy , 2013 . association for computing machinery . m. bozga , r. iosif , and f. konečný . deciding conditional termination . in c. flanagan and b. könig , editors , _ tools and algorithms for the construction and analysis of systems : proceedings of the 18th international conference ( tacas 2012 ) _ , volume 7214 of _ lecture notes in computer science _ , pages 252 - 266 , tallinn , estonia , 2012 . springer . a. r. bradley , z. manna , and h. b. sipma . the polyranking principle . in l. caires , g. f. italiano , l. monteiro , c. palamidessi , and m. yung , editors , _ automata , languages and programming : proceedings of the 32nd international colloquium ( icalp 2005 ) _ , volume 3580 of _ lecture notes in computer science _ , pages 1349 - 1361 , lisbon , portugal , 2005 . springer . a. r. bradley , z. manna , and h. b. sipma . termination of polynomial programs . in r. cousot , editor , _ verification , model checking and abstract interpretation : proceedings of the 6th international conference ( vmcai 2005 ) _ , volume 3385 of _ lecture notes in computer science _ , pages 113 - 129 , paris , france , 2005 . springer - verlag , berlin . m. braverman . termination of integer linear programs . in t. ball and r. b. jones , editors , _ computer aided verification : proceedings of the 18th international conference ( cav 2006 ) _ , volume 4144 of _ lecture notes in computer science _ , pages 372 - 385 , seattle , wa , usa , 2006 . springer . b. cook , a. podelski , and a. rybalchenko . termination proofs for systems code . in m. i. schwartzbach and t. ball , editors , _ proceedings of the acm sigplan 2006 conference on programming language design and implementation _ , pages 415 - 426 , ottawa , ontario , canada , 2006 . association for computing machinery . a. c. hearn . the first forty years . in a. dolzmann , a. seidl , and t. sturm , editors , _ algorithmic algebra and logic : proceedings of the a3l 2005 conference in honor of the 60th birthday of volker weispfenning _ , pages 19 - 24 , passau , germany , 2005 . c. otto , m. brockschmidt , c. von essen , and j. giesl . automated termination analysis of java bytecode by term rewriting . in c. lynch , editor , _ proceedings of the 21st international conference on rewriting techniques and applications ( rta 2010 ) _ , volume 6 of _ leibniz international proceedings in informatics ( lipics ) _ , pages 259 - 276 , edinburgh , scotland , uk , 2010 . schloss dagstuhl - leibniz - zentrum für informatik . a. podelski and a. rybalchenko .
a complete method for the synthesis of linear ranking functions . in b. steffen and g. levi , editors , _ verification , model checking and abstract interpretation : proceedings of the 5th international conference ( vmcai 2004 ) _ , volume 2937 of _ lecture notes in computer science _ , pages 239 - 251 , venice , italy , 2004 . springer . k. sohn and a. van gelder . termination detection in logic programs using argument sizes ( extended abstract ) . in d. j. rosenkrantz , editor , _ proceedings of the tenth acm sigact - sigmod - sigart symposium on principles of database systems _ , pages 216 - 226 , denver , co , usa , 1991 . association for computing machinery . a. tiwari . termination of linear programs . in r. alur and d. peled , editors , _ computer aided verification : proceedings of the 16th international conference ( cav 2004 ) _ , volume 3114 of _ lecture notes in computer science _ , pages 70 - 82 , boston , ma , usa , 2004 . springer . c. urban . the abstract domain of segmented ranking functions . in f. logozzo and m. fähndrich , editors , _ proceedings of the 20th international symposium on static analysis ( sas 2013 ) _ , lecture notes in computer science , seattle , wa , usa , 2013 . springer . to appear . h. yi chen , s. flur , and s. mukhopadhyay . termination proofs for linear simple loops . in a. miné and d. schmidt , editors , _ proceedings of the 19th international symposium on static analysis ( sas 2012 ) _ , volume 7460 of _ lecture notes in computer science _ , pages 422 - 438 , deauville , france , 2012 .
|
program termination is a hot research topic in program analysis . the last few years have witnessed the development of termination analyzers for programming languages such as c and java with remarkable precision and performance . these systems are largely based on techniques and tools coming from the field of declarative constraint programming . in this paper , we first recall an algorithm based on farkas lemma for discovering linear ranking functions proving termination of a certain class of loops . then we propose an extension of this method for showing the existence of _ eventual linear ranking functions _ , i.e. , linear functions that become ranking functions after a finite unrolling of the loop . we show correctness and completeness of this algorithm . keywords : termination analysis , ranking function , eventual linear ranking function .
|
model - based development ( mbd ) is a paradigm in which software and system development focus on high - level executable models , cf . . in the early development phases , formal models allow a wide range of exploration and analysis using domain - specific notations in order to simplify the system design , development or verification / testing . application of formal models provides many benefits for the software and system development . in " 40 years of formal methods " , bjørner and havelund admit that the gap between academic research on formal methods and its integration in large industrial projects is yet to be bridged . there are a number of hindering factors for the adoption of formal methods in industry . crucial obstacles include a lack of understandability and readability , and our aim is to find appropriate ways to avoid these obstacles . also , human factors play a crucial role and have to be taken into account . application of formal models requires an interplay between formal and informal methods , which use different levels of formality in descriptions . a manual solution to this problem was suggested many years ago : guiho and hennebert reported a communication problem in the sacem project between the verifiers and other engineers , who were not familiar with the formal specification method . the problem was solved by providing the engineers with a natural language description derived _ manually _ from the formal specification . for large - scale projects , it would be too time - consuming to derive a natural language specification ( nls ) manually . in this paper , we propose a framework for the _ automated generation _ of nls from the basic modelling artefacts , such as data type definitions , state transition diagrams ( stds ) , and architecture specifications . * contributions : * the proposed solution would serve not only to increase the understandability of formal models , but also to keep the system documentation up - to - date . system documentation is an important part of the development process , but it is often considered by industry as a secondary appendage to the main part of the development , i.e. , modelling and implementation . it is hard to keep the documentation up - to - date if the system model is frequently changing during the modelling phase of the development . thus , system requirements documents and the general system description are not updated according to the system's or model's modifications . sometimes the updates are overlooked , sometimes they are omitted on purpose , for example because of timing or cost constraints on the project . as a result , the system documentation is often outdated and does not describe the latest version of the system model . the question is whether we need to update the documentation _ manually _ , cf . . * outline : * the rest of the paper is organised as follows . section [ sec : related ] describes the related work . section [ sec : framework ] introduces the proposed framework and a small case study to illustrate the ideas of the framework .
in section [ sec : conclusions ] we summarise the paper and propose directions for future research . the research field of automated translation from formal modelling languages to natural languages is almost unexplored ; however , there are many approaches on the automated generation of ( semi-)formal specifications from natural language ones . lee and bryant presented an approach to automatically generate formal specifications in an object - oriented notation from nls . cabral and sampaio suggested to use a controlled natural language ( cnl ) , a subset of english , to analyse system characteristics represented by a set of declarative sentences . cnls use a restricted vocabulary and grammar rules in a defined knowledge base for the generation of formal models . this also allows one to generate structured models at different levels of abstraction , as well as to provide a formal refinement of user actions and system responses . schwitter et al . introduced ecole , an editor for a controlled language called peng ( process - able english ) , that defines a mapping between english and first - order logic in order to verify requirements consistency , as well as to help write manuals and system specifications to improve documentation quality , which is also our goal for the generated natural language specifications . as several attempts have been made to automate requirements capture , there is another approach for the automatic construction of object - oriented design models as uml diagrams from natural language requirements specifications . mala and uma present a methodology that utilises automatic reference resolution and eliminates user intervention . the input problem statement is split into sentences for tagging by a sentence splitter in order to get the part of speech for every word . the nouns and verbs are then identified from the tagged texts based on simple phrasal grammars . a reference resolver is used to remove the ambiguity introduced by pronouns . the final text is then simplified by the normaliser for mapping the words into object - oriented system elements . the result produced by the system is compared with human output on a basic analysis of the text . the approach is a promising method to restructure natural language text into a modelling language with respect to system requirements specifications . although the tagger and reference resolver lack efficiency , which results in unnatural expressions and misunderstandings , this can be improved by building a knowledge base for the generation of system elements . juristo et al . introduced an approach to formalise the requirements analysis process . the goal of this approach was to generate conceptual models in a precise manner , which provides support for resolving difficulties in understanding the system requirements . the approach is based on examining the information extracted at the beginning of the development process ( i.e. , describing the problems in natural language sentences ) , and consists of two different activities : formalisation of the conceptual model and creation of the formal model . the limitation of this approach lies in the difficulty of retrieving rigorous and concise problem descriptions . gangopadhyay suggested to design a conceptual model from a functional model expressed in natural language sentences . although its application is mainly for database applications , it can be extended to other design problems such as web engineering and data warehousing .
in order to interpret natural language expressions , gangopadhyay applied the theory of conceptual dependencies developed by schank , cf . . the main goal of this approach was to identify data elements from a functional model expressed in nls , to locate missing information , as well as to integrate all individual data elements into an overall conceptual schema for the establishment of the data model . a prototype system using the oracle database management system has been implemented , containing a parser for information collection . however , the lexicon in use is developed incrementally and semi - automatically , so domain specialists still need to manually categorise words and phrases , to ensure that non - relevant words are not included in the system during the development of the conceptual model and to prevent systematic bias . bryant suggested the theory of two - level grammar for natural language requirements specification , in conjunction with a specification development environment to allow user interaction to refine model concepts . this approach allows the automation of the process of transition from requirements to design and implementation , as well as producing an understandable document on which the software system will be based . ilieva and ormandjieva proposed an approach for the transition of natural language software requirements specifications into a formal presentation . the authors divided their method into three main processing parts : ( 1 ) the linguistic component , i.e. , the text sentences to be analysed ; ( 2 ) the semantic network as the formal nl presentation ; and ( 3 ) modelling as the final phase of the formal presentation of the specification . however , the approach of ilieva and ormandjieva involves a manual human analysis process to break down problems into smaller parts that are easily understood . figure [ fig : suggested_framework ] illustrates the general ideas of the suggested framework . to build a prototype for the generation of nls from the basic modelling artefacts , we have selected the autofocus3 modelling tool as the basis for our models , because this tool ( 1 ) embeds the basic modelling artefacts , ( 2 ) is open source , as well as ( 3 ) has a well - defined formal syntax behind all its modelling elements . autofocus3 is developed on system models based on the focus theory , which allows one to specify systems at different levels of abstraction formally and precisely . the source code of autofocus3 models is coded in xml , which makes it easy to parse and to analyse . autofocus3 has many advantages and has been constantly evolving over the last 10 years . the tool was applied as a part of a tool chain within a number of development methodologies , e.g. , for safety - critical systems in general , and for automotive systems . the tool can also be successfully applied for service - oriented modelling , which gives us another reason to select autofocus3 for the framework we develop . to allow further formal analysis of the generated specification , we restrict english to its subset , attempto controlled english ( ace ) , cf . .
basically , the construct of ace specification is the declarative sentence that is expressive enough to allow both natural usage and computer - processed purpose .+ * implementation : * we are currently implementing an automated translator from the autofocus3 models to ace sentences in the python programming language .python was chosen as the development language due to its rapid prototyping features , as well as due to its increasing uptake by researchers as a scientific software development language because of good code readability and maintainability . with regard to the python performance ,it is sufficient for many common tasks and turns out to be very close to c language for parsing a file and a tree - like structure , cf . . for the execution environment, we will research on the installation of ace parsing engine , cf . , to execute natural language sentences in prolog , cf . .+ * xml code of autofocus3 models .* while parsing the xml code of an autofocus3 model , we have to identify three core sections : * specifications of data types and functions / constants ( introduced by the xml - tag _ rootelements _ with the type _ data dictionary _ , cf .below for an example from the simpletrafficlight case study ) . *specifications of the system and components architecture ( introduced by the xml - tag _ rootelements _ with the type _ componentarchitecture _ ) ; * specifications of the state machines , used to describe the behaviour of system components ( introduced by the xml - tag _ containedelements _ with the type _ stateautomaton _ ) : as each of these parts consists of xml representation of the autofocus3 elements , we can define a translation schema for each of these elements to generate english sentences out of the xml code. the sentences should be conform to the ace rules . to validate that this constraint is fulfilled, we have to analyse syntax and semantics of the generated sentences .+ * translation schema .* let us discuss the translation schema in more details , focusing for simplicity on the specifications of data types and functions / constants .the definition of each data type is provided within the xml - tag _ typedefinitions _ , where the keyword _ enumeration _ indicates that this is an enumeration type .the name of the data type is coded within the attribute _name_. the elements of the type are introduces with the tag _members_. for the case of an enumeration type , we would have the following xml structure , where is a natural number representing a number of elements in the data type , and are some natural numbers representing internal identifiers of autofocus3 elements : + + to generate an ace sentence from this structure , we define two templates : * for the case we have only one element , i.e. , , we would use the template + is a datatype .it consists - of one element that is .* for the case we have more than one element , i.e. , , we would use the template + is a datatype .it consists - of elements that are , , .+ the definition of each function / constant is provided within the tag _ function _ , where its name and value are coded within the attributes _ name _ and _value_. 
for the case of a constant function , we would have the following xml structure , where are some natural numbers representing internal identifiers of autofocus3 elements : + + + to generate an ace sentence from this structure , we define the following template : + is a constant . it is equal to . + similar translation patterns apply for the architecture specifications and the state transition diagram sections . + * ace : syntax check . * ace supports declarative sentences , which include simple sentences , there is / are - sentences , boolean formulas , composite sentences , interrogative sentences , and imperative sentences . ace construction rules determine whether an english sentence is an ace sentence , cf . . each ace sentence is an acceptable english sentence , but not every english sentence is a valid ace sentence . thus , to conform to the ace construction rules , an nls in english should be constructed from the following elements : * function words : determiners , quantifiers , coordinators , negation words , pronouns , query words , modal auxiliaries , " be " , the saxon genitive marker 's ; * fixed phrases : " there is " , " it is true that " ; * content words : nouns , verbs , adjectives , adverbs , prepositions . the function words and fixed phrases are predefined and can not be changed , whereas content words can be modified by users within the lexicon format , cf . . the content words can not contain blank spaces . for instance , " interested in " should be reformulated to " interested - in " . + * ace : semantics check . * the above - mentioned rules can not remove all ambiguities in english . to avoid ambiguity , ace provides a set of interpretation rules . thus , each ace sentence can have only one meaning , based on its syntax and on the syntax of previous sentences . the correctness of the generated sentences can be validated by ace query sentences , cf . . they can be subdivided into three forms , namely _ yes / no _ - questions ( questions that require the answer " yes " or " no " ) , _ wh _ - questions ( questions starting with the words " what " , " when " , " where " , etc . ) , and _ how much / many _ - questions , cf . . for example , we could use the following questions to check the definition of an enumeration data type : * what is ? * how many elements does have ? * is an element of ? + * case study : simpletrafficlight system . * we present the core ideas of the framework on the example of a small case study , simple traffic lights , introduced by lam and teufl in . in the simple traffic lights case study , we use the following elements in the data definitions section : * functions _ tgreen _ , _ tred _ , and _ tyellow _ that return a constant integer value to represent the time in seconds for the active pedestrian or traffic light . * enumeration data types : * * _ pedastriancolor _ : pedestrian lights ( _ stop _ , _ walk _ ) ; * * _ trafficcolor _ : traffic lights ( _ green _ , _ red _ , _ redyellow _ , _ yellow _ ) ; * * _ signal _ : a one - element data type to represent the _ present _ signal ; * * _ indicatorsignal _ : pedestrian requests to pass the street ( _ off _ , _ on _ ) . figure [ fig : mapping_af3_xml_ace_datatypes ] illustrates the translation process from the autofocus3 data types and the corresponding xml descriptions to ace sentences . after translation , we check the definition of each data type as shown in table [ tab : acequestionstl ] and in figure [ fig : acequestionstl ] .
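the enumeration template above is straightforward to implement in python , the language the implementation section names . the fragment below is a minimal sketch over a hand - written xml snippet : only the tag and attribute names _ typedefinitions _ , _ name _ and _ members _ come from the text , while the exact nesting and the _ type _ attribute are our assumptions .

....
import xml.etree.ElementTree as ET

# hand-written fragment mimicking the structure described above; the exact
# nesting is an assumption, only the tag/attribute names come from the text
xml_fragment = """
<typeDefinitions type="enumeration" name="pedastriancolor">
  <members name="stop"/>
  <members name="walk"/>
</typeDefinitions>
"""

def enum_to_ace(node):
    # apply the one-element or many-element ace template from the text
    name = node.get("name")
    members = [m.get("name") for m in node.findall("members")]
    if len(members) == 1:
        return f"{name} is a datatype. it consists-of one element that is {members[0]}."
    return (f"{name} is a datatype. it consists-of {len(members)} elements "
            f"that are {', '.join(members)}.")

print(enum_to_ace(ET.fromstring(xml_fragment)))
# -> pedastriancolor is a datatype. it consists-of 2 elements that are stop, walk.
....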
in a similar manner , the natural language descriptions of the system and components architecture , as well as of the state machines representing components' behaviour , are generated and checked . ( table [ tab : acequestionstl ] : validation of the generated sentences using ace questions ; the table data is not recoverable from the source . ) this paper introduces our ongoing work on generating nls from formal models . the goal of our current work is to generate documentation in english from the basic modelling artefacts of the autofocus3 modelling language , that is , data types , state machines , and architectural components . this would allow one to have easy - to - read and easy - to - understand specifications of systems - under - development , written in english . to allow further formal analysis of the generated specification , we restrict english to its subset , ace . the proposed framework , in its current version , can be applied to build a prototype for the generation of ace specifications from autofocus3 models . m. broy , j. fox , f. hölzl , d. koss , m. kuhrmann , m. meisinger , b. penzenstadler , s. rittmann , b. schätz , m. spichkova , et al . service - oriented modeling of cocome with focus and autofocus . in _ the common component modeling example _ , pages 177 - 206 . springer berlin heidelberg , 2008 . m. feilkas , a. fleischmann , f. hölzl , c. pfaller , k. scheidemann , m. spichkova , and d. trachtenherz . a top - down methodology for the development of automotive software . technical report tum - i0902 , tu münchen , 2009 . m. feilkas , f. hölzl , c. pfaller , s. rittmann , b. schätz , w. schwitzer , w. sitou , m. spichkova , and d. trachtenherz . a refined top - down methodology for the development of automotive software systems - the keylessentry system case study . technical report tum - i1103 , tu münchen , 2011 . m. ilieva and o. ormandjieva . automatic transition of natural language software requirements specification into formal presentation . in _ natural language processing and information systems _ , pages 392 - 397 . springer , 2005 . b. lee and b. r. bryant . automated conversion from requirements documentation to an object - oriented formal specification language . in _ proceedings of the 2002 acm symposium on applied computing _ , pages 932 - 936 . acm , 2002 . m. spichkova . design of formal languages and interfaces : `` formal '' does not mean `` unreadable '' . in k. blashki and p. isaias , editors , _ emerging research and trends in interactivity and the human - computer interface _ . igi global , 2013 . a. zamansky , g. rodriguez - navas , m. adams , and m. spichkova . formal methods in collaborative projects . in _ 11th international conference on evaluation of novel approaches to software engineering ( enase ) _ . ieee , 2016 .
|
application of formal models provides many benefits for the software and system development ; however , the learning curve of formal languages could be a critical factor for an industrial project . thus , a natural language specification that reflects all the aspects of the formal model might help to understand the model and be especially useful for the stakeholders who do not know the corresponding formal language . moreover , an _ automated generation _ of the documentation from the model would replace manual updates of the documentation for the cases where the model is modified . this paper presents ongoing work on generating natural language specifications from formal models . our goal is to generate documentation in english from the basic modelling artefacts , such as data types , state machines , and architectural components . to allow further formal analysis of the generated specification , we restrict english to its subset , attempto controlled english .
|
markov chains are versatile dynamical systems that model a broad spectrum of physical , biological , and engineering systems . along with the broad range of their applications , one of the main advantages of markov chains is that some of them can be easily handled and cast as time - invariant , linear systems . in this paper , we focus on continuous - time , discrete - state , homogeneous , irreducible markov chains with a finite number of states . the probability of being in any state is governed by a linear set of ordinary differential equations ( odes ) , where individual odes correspond to each state of the system , describing all possible transitions in and out of such states . this set of odes is commonly referred to as the forward kolmogorov equation or chemical master equation and might have large dimensions even for simple systems . hence , obtaining a solution for such a system might be analytically intractable and computationally demanding . provided that one is interested only in some states or a combination of states of the markov chain , it is possible to obtain a reduced order model via the balanced realisation of the linear system that describes the probability of being in such states . the reduced model has a smaller number of coupled differential equations , yet approximates the output of the full model with an error bound proportional to the sum of the hankel singular values neglected to obtain the reduced model . since chemical reaction networks in a homogeneous medium with a low number of molecules , and in thermodynamic equilibrium , can be described as markov chains , it is possible to apply our methodology to this class of systems . there exist alternative approaches to obtain reduced order models from the cme . for instance , the finite state projection method obtains the probability density function in prescribed subsets of the state space and for a specific time point . from this method , it is also possible to obtain a linear set of odes that can , in general , be further reduced via the methodology we use in this paper . other approaches make use of krylov subspaces to approximate the solution of the matrix exponential that generates the solutions of the markov chain . additionally , when the species can be classified by their behaviour into stochastic or deterministic , propose a methodology in which the cme can be solved directly and efficiently , when the number of species with stochastic behaviour is low . in this direction , works like avail of a time scale separation to estimate the solution of the fast - varying species , and use this estimation to approximate the trajectories of the slow - varying species . from a different perspective , analysed methods to approximate the solution of selected states of the cme , when such solutions can be expressed as the product of two probability density functions : one that describes the probabilities of the states of interest and a second that depends on the rest of the states . this latter probability distribution can be approximated by its mean , for instance , so as to yield an approximated probability density function for those probabilities of interest . however , this approach might yield coarse results if the underlying assumptions are crude . as an alternative , when the analytical or computational treatment of the markov chain is infeasible , it is common to opt for numerical simulations of the stochastic system and analyse the outcome statistically .
several references , among many others , provide surveys of simulation methods for stochastic reaction networks . however , these methods might require large computational times to yield accurate results . a different way to reduce the cmes is to consider subsystems that focus on features of interest . in the chemical context , it has been shown as a proof - of - concept that the simple reaction chain $\ce{... ->[k_2] s_2 ->[k_3] s_3}$ can be abridged under special conditions on the parameters , which render the dynamics of some of the species irrelevant for the behaviour of the others . this study highlights the shortcomings of neglecting species within a stochastic reaction network . in this paper , we adopt a different approach and overcome these difficulties by deriving a reduced - order model . such a reduced - order model accurately approximates the dynamics of the underlying markov chain for selected states of any chain with any kind of reaction propensities . there exist , however , exact approaches that abridge specific topologies of reaction networks . for instance , different classes of monomolecular reaction networks have been exactly represented as reactions characterised by delay distributions . in turn , other works are devoted to obtaining exact analytical solutions of stochastic chemical reaction networks with linear and nonlinear reactions . importantly , once a reduced ode set via balanced realisation is obtained , one can avail of existing results to derive a closed - form expression for the approximation of the cme solution . we illustrate our methodology with the analysis of a reversible , stochastic reaction whose cme has states . in contrast , an adequate reduced order model has only states and yields a gain of the approximation error of . later , we obtain a reduced order model that approximates the catalysed conversion of a substrate to a product , even in cases in which a stochastic michaelis - menten approximation fails to obtain accurate results . for such a system , the simulation of the reduced model may be several orders of magnitude faster than the simulation of the cme . however , there exists an initial cost in computational time to derive the reduced order model . hence , obtaining the model reduction is profitable when the lower - dimensional ode set is used repeatedly . finally , we derive a model that approximates the probability of having predefined ranges of product molecules in the same catalytic substrate conversion . consider a discrete and finite set of states and let the system 's state at time be denoted by . moreover , we consider that the transition from one state to another can be modelled by a time - homogeneous markov chain , i.e. , the next state only depends on the current state , independently of the earlier history . the time - homogeneity property of the markov chain implies , in matrix form ( known as the _ chapman - kolmogorov equation _ ) , that the transition matrix $\mathbf{q} \in [0,1]^{w \times w}$ satisfies $\mathbf{q}(t+\tau) = \mathbf{q}(t)\,\mathbf{q}(\tau) = \mathbf{q}(\tau)\,\mathbf{q}(t)$ . this matrix gathers all the transition probabilities as a function of time and , as a consequence , its columns add to one for all times . additionally , if the markov chain is _ irreducible _ , the transition matrix has a simple unit eigenvalue . this is a consequence of the perron - frobenius theorem , as described , for example , in ch . 6 of standard references .
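for reference , the semigroup relation and the resulting evolution law can be written compactly as follows ; these are standard markov - chain identities ( with $\mathbf{p}(t)$ the vector of state probabilities and $\mathbf{a}$ the infinitesimal generator introduced next ) , which the garbled displays above presumably match :

```latex
\mathbf{Q}(t+\tau) = \mathbf{Q}(t)\,\mathbf{Q}(\tau) = \mathbf{Q}(\tau)\,\mathbf{Q}(t),
\qquad
\frac{d\,\mathbf{p}(t)}{dt} = \mathbf{A}\,\mathbf{p}(t),
\qquad
\mathbf{p}(t) = e^{\mathbf{A}t}\,\mathbf{p}(0).
```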
in the rest of this paper , we will deal exclusively with finite , irreducible , homogeneous , continuous - time , discrete - state markov chains . our main interest is to determine the time - dependent probabilities of being in any state of the chain . to this end , we consider the _ infinitesimal generator _ of the markov chain , whose off - diagonal elements collect the transition rates between states and whose diagonal elements are such that every column adds up to zero , provided each column of the transition matrix adds up to one . it is well known that the generator produces the positive semigroup that governs the evolution of the state probabilities ( see , for instance , sec . 5.6 of standard references ) . under our assumptions , the markov chain is irreducible and has a finite number of states . hence , the transition matrix has a unique frobenius eigenvalue with algebraic multiplicity one ; the simple perron - frobenius eigenvalue of this stochastic matrix is 1 . given the right perron - frobenius eigenvectors and eigenvalues of the transition matrix , the eigenvalues of the generator preserve the configuration of the eigenvalues of the transition matrix , upon shifting one unit to the left and rescaling . this implies that the generator has a zero eigenvalue and that the rest of its eigenvalues have negative real part , as confirmed by analysing the gershgorin circles of its columns . note that the dimension of the generator might be large , as it represents all the configurations of a system with a given number of characteristics . in the population and biochemical context , one of these numbers represents the number of species , whereas the other is the number of all the possible combinations of species population counts . in the following section , we model a stochastic chemical reaction network with the markov chains described above . now , let us consider species in a homogeneous medium and in thermodynamic equilibrium , and a set of reactions represented by $\sum_{i=1}^{n}\alpha_{ij} s_i \xrightarrow{k_j} \sum_{i=1}^{n}\beta_{ij} s_i$ . the entries of the stoichiometric matrix follow from the coefficients above . furthermore , let us consider a vector comprising the number of molecules of every species . the finite set defined earlier contains , at least , all the possible combinations of the species molecular numbers in the reaction network , and we consider that at most one reaction happens within an infinitesimal time interval . we implemented the model in matlab 2012b and obtained its balanced realisation with the command _ balreal _ . figure [ fig : hsv ] shows the first hankel singular values of the balanced realisation 's gramian . we observe that the first ten singular values have a large norm in comparison to the rest . by using the command _ modred _ , we obtained reduced order models with different numbers of states , hence achieving different degrees of approximation . we depict the impact of the number of states on the error of approximation in figure [ fig : compreduction ] . there , we note that a very coarse approximation is achieved when we try to approximate the full model with a model of only one state ( see the lower panel of figure [ fig : compreduction]**(a)** ) . in turn , when the reduced order model has states , the error of approximation is of order , as depicted in the lower panel of figure [ fig : compreduction]**(c)** . furthermore , if the reduced model has states , the approximation error might already range in the order of the integration error , as suggested by the irregular fluctuations shown in the lower panel of figure [ fig : compreduction]**(d)** .
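the balreal / modred workflow just described can be reproduced outside matlab . below is a minimal square - root balanced - truncation sketch in python / scipy ; it assumes the generator has already been transformed into an asymptotically stable state - space triple ( a , b , c ) — as the change of coordinates used in the paper provides — and that both gramians are positive definite . function and variable names are ours , not the paper 's .

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable LTI system (A, B, C).
    Returns the reduced triple, the Hankel singular values, and the
    left transformation (useful for projecting initial conditions)."""
    # Gramians: A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    Lp = cholesky(P, lower=True)          # P = Lp Lp^T (needs P > 0)
    Lq = cholesky(Q, lower=True)          # Q = Lq Lq^T (needs Q > 0)
    U, s, Vt = svd(Lq.T @ Lp)             # s = Hankel singular values
    Si = np.diag(s[:r] ** -0.5)
    T = Lp @ Vt[:r].T @ Si                # right transformation
    L = Si @ U[:, :r].T @ Lq.T            # left transformation, L T = I
    # a-priori error bound: ||G - G_r||_inf <= 2 * s[r:].sum(),
    # i.e. twice the sum of the neglected Hankel singular values
    return L @ A @ T, L @ B, C @ T, s, L
```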
to finalise this section , we note that the gain of the approximation error evaluates to progressively tighter bounds for the reduced models with increasing numbers of states , respectively . these bounds were obtained by evaluating the expression given earlier . we note that this is a theoretical bound and does not account for numerical errors during the integration or the computation of the hankel singular values . in the forthcoming section , we obtain reduced order models for a catalytic substrate conversion and assess the computational burden required to obtain the reduced order model . in addition , we benchmark the time required for simulating the reduced order model against both the computational load required to simulate the full order model and the stochastic simulation algorithm ( ssa ) . in this section , we consider the reaction network $\ce{e + s <=>[{k_{\mathrm{f}1}}][{k_{\mathrm{b}1}}] c ->[{k_{\mathrm{f}2}}] p + e}$ , which represents the conversion of a substrate , $s$ , to a product , $p$ , mediated by a catalytic agent , $e$ , which binds to the substrate to form the complex $c$ . in the deterministic case , it is common practice to approximate the mass - action - based reaction network above via the reaction $\ce{s ->[v] p}$ with the nonlinear reaction rate $v ( [ s ] ) = \frac{v_{max}}{k_m + [ s ] } [ s ]$ , where $v_{max}$ and $k_m$ denote the maximal rate and the michaelis constant . the simulations in figure [ fig : reductionmm ] used 10 initial molecules of substrate . with these parameters , the method proposed in the literature would not yield accurate results , as the required condition is not fulfilled . the only difference between the upper and lower panels in figure [ fig : reductionmm ] is the number of initial molecules considered for the enzyme . in the upper panel we considered 1 molecule of the enzyme ; hence the condition is fulfilled , and the stochastic michaelis - menten representation may be used to approximate the full model . moreover , one can use the stochastic michaelis - menten model to derive a reduced model via balanced realisation , as compared in the upper panel of figure [ fig : reductionmm]**(c)** . there , we approximated the stochastic michaelis - menten model by a reduced order model with fewer states ; the gain of the approximation error is less than the bound given by the expression above . in contrast , when we consider more molecules of enzyme initially , the condition is violated and the stochastic michaelis - menten model does not reproduce the dynamics of the full reaction network , as depicted in the lower panels of figure [ fig : reductionmm ] . we note , however , that we can still obtain a reduced model via balanced realisation that accurately approximates the dynamics of the full model ( cf . figure [ fig : reductionmm]**(c)** , lower panel ) . there we approximated the full model by a reduced model with far fewer states , whose approximation error gain is less than the corresponding bound . now we focus on the time required to simulate the cme and the time required to simulate the reduced order model . to compute the latter , we need to apply some state transformations to the cme to derive a balanced realisation that can be further truncated . once the reduced model is obtained , the time required for its numerical solution is significantly smaller than the time required for the numerical solution of the full cme .
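to make the construction concrete , here is one way the generator of the enzyme network above could be assembled and propagated in python . the mass - action propensities follow the standard table the text refers to , while the rate constants and initial counts are placeholders .

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import expm_multiply

def cme_generator(S0, E0, kf1, kb1, kf2):
    """Generator of E + S <=> C -> P + E with mass-action propensities.
    A state is (nS, nC); conservation gives nE = E0 - nC and nP = S0 - nS - nC."""
    states = [(nS, nC) for nS in range(S0 + 1)
              for nC in range(min(E0, S0) + 1) if nS + nC <= S0]
    index = {s: i for i, s in enumerate(states)}
    A = lil_matrix((len(states), len(states)))
    for (nS, nC), i in index.items():
        nE = E0 - nC
        moves = [((nS - 1, nC + 1), kf1 * nE * nS),   # E + S -> C
                 ((nS + 1, nC - 1), kb1 * nC),        # C -> E + S
                 ((nS, nC - 1), kf2 * nC)]            # C -> P + E
        for target, rate in moves:
            if rate > 0 and target in index:
                A[index[target], i] += rate           # inflow to target state
                A[i, i] -= rate                       # outflow from state i
    return A.tocsc(), states

A, states = cme_generator(S0=10, E0=10, kf1=1.0, kb1=1.0, kf2=0.1)
p0 = np.zeros(len(states))
p0[states.index((10, 0))] = 1.0        # all substrate free, no complex
p1 = expm_multiply(A, p0)              # p(1) = exp(A) p(0)
```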
to illustrate this reduction in computational time , we obtained the cme of the reaction network with an equal number of molecules for the substrate and the enzyme and zero molecules for the rest of the species in the initial state ; later , we obtained the reduced order model via balanced realisation , whose output represents the state of total conversion of the substrate to the product ; and we compared the time required for obtaining the numerical solution of the full cme and of the reduced model by means of a relative - time expression . we depict the results of this assessment in figure [ fig : performance ] . there we observe that as the number of molecules for the substrate and the enzyme in the initial state increases , the savings in the computational time required to obtain the numerical solution of the lower - order model also increase . we note that for this comparison we did not account for the time required to obtain the reduced order model . to finalise this section , we compare the computational time required by the derivation of the reduced model via balanced realisation plus the simulation of the reduced model , against the time required by the fsp for each time point . we note that the fsp obtains an approximate probability vector with a desired error bound for _ one specific time point _ ; hence , if one is interested in the transient response of the probability distribution , one has to run such an algorithm for every time step of interest . in contrast , once one obtains the reduced model via balanced realisation , it is possible to use the lower - dimensional system for any number of time points . these results are summarised in figure [ fig : fspvsbalreal ] , where the panels consider increasing numbers of initial molecules for the substrate and the enzyme , respectively , and zero molecules for the rest of the species . the remaining parameter values are identical to those of figure [ fig : reductionmm ] . we note that for the fsp the 1 - norm of the error bound is less than a predefined tolerance for the specific time points of interest ( a discrete signal ) , whereas the gain of the approximation error ( a continuous signal ) , obtained with the reduced model via balanced realisation , satisfies the bound given earlier . as the nature of both error signals is different , it is difficult to perform a fair comparison of the methods ' accuracy . in the forthcoming section , we obtain a reduced order model that approximates the probability of having a certain range of molecules . up to now , we have obtained reduced models that approximate the probability of being in one state of the markov chain . in this section , we revisit the reaction network above by obtaining the probability of having a certain number of molecules within predefined ranges . here we consider the following setting : a given number of initial molecules of substrate and of enzyme , and zero initial molecules for the rest of the species . by denoting the number of product molecules appropriately , we can formulate our problem as approximating the corresponding range probabilities . to derive the cme , one needs to obtain and label all the possible combinations of species molecular counts and organise them in the state set . then we have to evaluate the infinitesimal generator with the corresponding reaction propensities ( see table [ tb : prope ] ) .
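the wall - clock comparison described here is easy to reproduce ; since the exact comparison expression was lost in extraction , the relative saving below is one plausible choice . the names a , p0 refer to the generator sketch above , and ar , l to the truncation sketch ( with x0 = l @ p0 the projected initial state ) .

```python
import time
from scipy.integrate import solve_ivp

def timed(fun):
    t0 = time.perf_counter()
    fun()
    return time.perf_counter() - t0

x0 = L @ p0                                # project the initial distribution
t_full = timed(lambda: solve_ivp(lambda t, p: A @ p, (0.0, 50.0), p0, method="BDF"))
t_red = timed(lambda: solve_ivp(lambda t, x: Ar @ x, (0.0, 50.0), x0, method="BDF"))
saving = (t_full - t_red) / t_full         # hypothetical form of the comparison
```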
to obtain an expression for these probabilities , we need to define the output matrix so that the product of its first row by the probability vector yields the sum of the probabilities of all the states whose product count lies within the first range . the next two rows are defined likewise , but accounting for the ranges described in the second and third entries . the cme for this system , its parameters , and its initial number of molecules has states . by applying the model reduction technique in section [ sec : orderred ] , we can approximate the probabilities by a dynamical system with states , whose output is depicted in figure [ fig : intervals ] . the gain of the approximation error is less than the estimate given by the bound . in this paper we addressed the order reduction of the infinitesimal generator of a homogeneous , continuous - time , finite and discrete state - space markov chain via the reduction of its balanced realisation . although the application range of these dynamical systems is broad , here we focus on their use in stochastic chemical reaction networks , without loss of generality . in this context , the infinitesimal generator of the markov chain that describes the probability of having a particular species molecular count is a large set of odes . to reduce the order of the infinitesimal generator of a markov chain , we used an alternative coordinate system to represent the chemical master equation ( cme ) . this representation , denoted as the lyapunov balanced realisation , has the interesting property that the states are organised in decreasing order of their contribution to the probabilities of interest . hence , an accurate approximation can be obtained , for example , by neglecting the last states of the lyapunov balanced model , as discussed in section [ sec : orderred ] . although one may focus on particular states of the markov chain , it is also possible to account for marginal probability distributions , such as in the case study in section [ sec : range ] , or even mean values , by properly defining the output matrix . in many cases , only selected states of the markov chain might be of practical relevance . for instance , this is the case when facing limited or inexact measurement data , or when only a few states are relevant for downstream signalling in biochemical reactions . also , in imaging analysis of chemical reaction networks , obtaining the exact count of intracellular protein reporters might be challenging due to limited resolution . hence , the validation of the mathematical model that describes the process under observation should yield the probability of having a specific range of molecule counts of the observed species . we presented this procedure in section [ sec : range ] for a very simple reaction network . even in such a simple case , the associated markov chain presented approximately 5000 distinct states of the system . this highlights how the simulation of a system , even in the simplest cases , can be a computationally intensive task . to alleviate such a burden , the model reduction via balanced realisation used in this paper yields lower - dimensional ode sets , whose numerical solution might be several orders of magnitude quicker than the numerical solution of the original cme . moreover , the method used to derive the lower - dimensional model provides an upper bound on the approximation error , depending on the number of states neglected to derive the approximation .
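the output matrix described at the start of this passage can be built directly from the state enumeration used earlier ; the bin edges below are placeholders for the ( elided ) ranges of the text .

```python
import numpy as np

def range_output_matrix(states, S0, bins=((0, 3), (4, 7), (8, 10))):
    """Each row of C sums the probabilities of all states whose product
    count nP = S0 - nS - nC falls in the corresponding (lo, hi) range."""
    C = np.zeros((len(bins), len(states)))
    for i, (nS, nC) in enumerate(states):
        nP = S0 - nS - nC
        for r, (lo, hi) in enumerate(bins):
            if lo <= nP <= hi:
                C[r, i] = 1.0
    return C
```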
of note , the process required for deriving the reduced order model itself might take longer computation times than the mere simulation of the cme . nevertheless , the numerical solution of the reduced model might be obtained orders of magnitude faster , depending on the number of molecules of the system , as shown in figure [ fig : performance ] . hence , there will be real savings in computational time when the reduced model is repeatedly utilised , for instance when adopting different initial probability distributions . we would like to stress that , to obtain a reduced order model , we have to fix the kinetic parameters and to define which are the states of interest . should we need to modify either of them , a new reduced model has to be derived . likewise , all methods that require computational calculations , such as the fsp , the ssa , and the numerical solution of the cme , will require numeric values for the parameters and , moreover , specific numerical values for the initial probability distribution . when either of them is modified , a new numerical solution has to be obtained . additionally , the reduction and simulation of the cme might be orders of magnitude faster than the application of the fsp method , as suggested by the example analysed in section [ sec : smm ] . another possible use for the reduced model is to derive closed - form expressions of its solution , thereby avoiding the need for a numerical solution of the reduced ode set . when the number of states of the markov chain to reduce is so large that using only one computer is unfeasible , we suggest the use of parallel algorithms to obtain the model reduction by truncation . it is important to note that the reduced order model might lack some properties of the full model . for instance , the infinitesimal generator of the markov chains studied here describes a positive system : the values of the probabilities always remain nonnegative . however , the reduced order model obtained by the truncation used in this paper will not , in general , preserve such a property . this implies that , if most of the states of the balanced realisation are neglected to obtain the reduced model , there is a risk of having small , negative values for the approximated probabilities . an example of such a phenomenon can be observed in the upper panels of figure [ fig : compreduction]**(a , b)** . this suggests the existence of a trade - off between the order and the accuracy of the reduced - order model . as a rule of thumb , a good approximation can be obtained by neglecting those states associated with hankel singular values that are three orders of magnitude smaller than the largest one . if the possibility of small , negative values for the probability cannot be tolerated in the application of the reduced order model , there are other model order reduction methods that preserve the positivity of the original model , such as some recent works . however , it is equally important to note that these approaches are not generally applicable , are more time - consuming , and have larger error bounds . throughout this paper , we have considered that the state set contains all the possible states of the markov chain under consideration .
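the rule of thumb just stated maps directly onto the vector of hankel singular values returned by the truncation sketch earlier :

```python
# keep every state whose Hankel singular value is within three orders of
# magnitude of the largest one (the rule of thumb stated in the text)
r = int(np.sum(s > 1e-3 * s[0]))
```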
however , when the number of states is prohibitively large , it is possible to consider a truncation of the state set , thereby obtaining smaller master equations . this truncation has two implications : the master equation derived from the truncated set will not capture the full probability density function of the markov chain , but will only focus on the probability of being in those states of interest ; and the set of odes arising from the truncated set will not have the properties discussed earlier . hence the change of coordinates is not necessary , and balanced model reduction can be applied directly to the set of odes obtained from the truncated set . this , in turn , implies that those methods that depend on the truncation of the state set to derive approximated probability distributions do not conflict with the model reduction via balanced realisation used in this paper , as both approaches can be complementary . here , we provide some definitions and the derivation of the approximation error bound that arises from the model reduction via balanced realisation described in section [ sec : orderred ] ; the material of this section is based on standard references . first , to assess the size of the error of approximation , let us define the norm of a real , time - dependent vector as an integral norm ; with a finite upper limit of integration , one obtains the norm of the truncated signal . to increase readability , we will not explicitly show the upper limit of integration in the norm 's subscript . now , in the frequency domain , the linear ode becomes an algebraic equation , where the complex frequency variable arises from the laplace transform , and the resulting complex matrix is denoted as the _ transfer function _ of the system and characterises its input - output behaviour . the norm of this complex matrix is defined via the largest eigenvalue of its conjugate product ; in turn , the norm of the transfer function , for matrices analytic on the open right half - plane , is its supremum over that region . when the transfer function belongs to the banach space endowed with this norm , theorem 4.4 of the cited reference applies . in order to relate the frequency - domain norms with the time - domain norms , we note that the laplace transform used to obtain the transfer function is an isometric isomorphism between the corresponding frequency - domain and time - domain spaces . thus , we can infer the following : let $g$ be a stable rational transfer function with hankel singular values $\sigma_1 \ge \dots \ge \sigma_n$ , and let $g_r$ be obtained by truncating or residualising the balanced realisation of $g$ to the first $r$ states ; then $\| g - g_r \|_{\infty} \le 2 \sum_{i=r+1}^{n} \sigma_i$ . k. burrage , m. hegland , s. macnamara , and r. b. sidje , `` a krylov - based finite state projection algorithm for solving the chemical master equation arising in the discrete modelling of biological systems , '' in _ proceedings of the markov 150th anniversary conference _ . boson books , raleigh , nc , 2006 , pp . 21–38 . s. menz , j. c. latorre , c. schütte , and w. huisinga , `` hybrid stochastic deterministic solution of the chemical master equation , '' _ multiscale modeling & simulation _ , vol . 10 , no . 4 , pp . 1232–1262 , 2012 . m. barrio , k. burrage , p. burrage , a. leier , and t. marquez - lago , `` computational approaches for modelling intrinsic noise and delays in genetic regulatory networks , '' in _ handbook of research on computational methodologies in gene regulatory networks _ , s. das , d. caragea , s. welch , and w. h. hsu , eds . hershey , pa : igi global , 2010 , pp . 169–197 . j. m. badía , p. benner , r. mayo , and e. s.
quintana - ortí , `` parallel algorithms for balanced truncation model reduction of sparse systems , '' in _ applied parallel computing . state of the art in scientific computing _ . springer , 2006 , pp . 267–275 . c. grussler and t. damm , `` a symmetry approach for balanced truncation of positive linear systems , '' in _ decision and control ( cdc ) , 2012 ieee 51st annual conference on _ . ieee , 2012 , pp .
|
we consider a markov process in continuous time with a finite number of discrete states . the time - dependent probabilities of being in any state of the markov chain are governed by a set of ordinary differential equations , whose dimension might be large even for trivial systems . here , we derive a reduced ode set that accurately approximates the probabilities of subspaces of interest with a known error bound . our methodology is based on model reduction by balanced truncation and can be considerably more computationally efficient than the finite state projection algorithm ( fsp ) when used for obtaining transient responses . we show the applicability of our method by analysing stochastic chemical reactions . first , we obtain a reduced order model for the infinitesimal generator of a markov chain that models a reversible , monomolecular reaction . in such an example , we obtain an approximation of the output of a model with states by a reduced model with states . later , we obtain a reduced order model for a catalytic conversion of substrate to a product , and compare its dynamics with a stochastic michaelis - menten representation . for this example , we highlight the savings in computational load obtained by means of the reduced - order model . finally , we revisit the substrate catalytic conversion by obtaining a lower - order model that approximates the probability of having predefined ranges of product molecules .
|
models for describing detailed reaction mechanisms of hydrocarbon fuels and biochemical processes in living cells are typical examples of large multiscale dynamical systems . in this respect ,modern research has to cope with an increasing difficulty mainly in two aspects : first , the number of degrees of freedom is tremendously large making it difficult to obtain a physical understanding of the above phenomena .in addition , computations are often dramatically time consuming due to a wide range of time - scales to be resolved . as a result ,methodologies able to tackle the above issues become highly desirable .the issue of physical understanding is drawing an increasing attention in the realm of kinetic modeling of biological systems with many degrees of freedom .modern simplification techniques ( often referred to as _ model reduction methods _ ) are based on a systematic decoupling of the fast processes from the rest of the dynamics , and are typically implemented by seeking a _ low dimensional manifold _ in the phase - space . towards this end , several methods have been suggested in the literature which are based on the following picture ( for dissipative systems with unique steady state to be addressed below ) : multiscale systems are characterized by a short transient towards a stable low - dimensional manifold in the phase space , known as the slow invariant manifold ( sim ) .the subsequent dynamics is slower and proceeds along the manifold , until a steady state is reached .recently , the relaxation redistribution method ( rrm ) has been proposed and implemented in realistic combustion mechanisms for hydrogen and methane mixtures .rrm has been introduced as a particularly efficient scheme to implement the film equation of dynamics ( see section [ background ] below ) , which can be used to construct the sim and adaptively choose the minimal description of complex multiscale systems . in the latter references ,the minimal description is understood as the minimal dimension of a convergent sim ( by rrm ) . for completeness, we stress that an alternative approach for the adaptive simplification of multiscale systems is the g - scheme in . in the present work , following the basic idea behind the rrm , we derive a set of ordinary differential equations which approximate the rrm dynamics ( here , referred to as _ governing equations of the linearized rrm _ ) . a remarkably easy implementation of the latter method for constructing sim in any dimensions is then proposed .this manuscript is organized as follows . in section [ background ] , the problem of model reduction is introduced and the notions of both invariance equation and film equation are briefly reviewed .the governing equations of the linearized rrm are presented in section [ slow.equations ] , where the link between the presented method and other approaches ( i.e. 
direct solution of invariance and film equations , ildm , csp ) is shortly discussed .a novel algorithm for approximating the fast subspace is introduced in section [ fast.algorithm ] .the accuracy in describing the sim by governing equations of the linearized rrm at steady state is discussed in section [ steady.state.accuracy ] , whereas a possible initialization of them is proposed in section [ inizio ] .the suggested linearized rrm is tested in section [ ben - ex ] , while conclusions are drawn in section [ conclusions ] .let an autonomous system of ordinary differential equations = f\left ( y \right),\ ] ] describe the time evolution of a state ^t ] belongs to a _ reduced _ space with dimension , and evolves in time according to the slow dynamics of system ( [ odegen ] ) .by analogy with classical thermodynamics , a reduced model ( [ odered ] ) represents a _macroscopic description _ of a physical phenomenon ( given by ( [ odegen ] ) ) where various processes with disparate timescales occur .formally , the link between the microscopic world and the corresponding macroscopic description can be established by resorting to the notion of slow invariant manifold ( sim ) . in other words ,the reduced ( macroscopic ) dynamics ( [ odered ] ) occurs along a -dimensional sim , , embedded in the phase - space .thus , through , it is possible to pick up the most likely microstate among all the possible ones which are consistent with a macrostate characterized by the _ macroscopic observables _ ( see also ) . in the following , by model reduction, we mainly refer to constructive methods of both the slow and fast subspaces , and we assume that an arbitrary manifold can be defined ( at least locally ) by means of a mapping by definition , is _ invariant _ with respect to the system ( [ odegen ] ) if inclusion implies that for all future times .equivalently , if the tangent space to is defined at , invariance requires : in order to transform the latter condition into an equation , it proves convenient to introduce projector operators . forany subspace , let a projector onto be defined with image and .then the invariance condition can be expressed as : where is often called _ defect of invariance _ , and represents the identity matrix .it is worth stressing that , although the notion of invariance discussed above is relatively straightforward , _ slowness _ instead is much more delicate .we just notice that invariant manifolds are not necessarily suitable for model reduction ( e.g. , all semi - trajectories are , by definition , one - dimensional ( 1d ) invariant manifolds ) . for singularly perturbed systems ,the notion of slow invariant manifold has been defined in the framework of the geometric singular perturbation theory by fenichel .however , we should also point out that in general the different methodologies proposed in the literature for model reduction purposes are based on different objects .for instance , it is known that the rate controlled constrained equilibrium ( rcce ) manifold typically does not even fulfill the invariance condition ( [ inveq ] ) , whereas other methods ( see , e.g. , , , ) attempt the construction of invariant objects ( with different accuracy ) . 
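in the standard notation of this literature , the two displays elided above — the invariance condition with its defect , and the film equation discussed next — are usually written as :

```latex
\Delta \;=\; \bigl(\mathbf{1}-P\bigr)\,f\bigl(F(\xi)\bigr) \;=\; 0
\qquad\text{(invariance condition)},
\qquad
\frac{dF(\xi)}{dt} \;=\; \bigl(\mathbf{1}-P\bigr)\,f\bigl(F(\xi)\bigr)
\qquad\text{(film equation)}.
```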
here , we follow the rationale behind the method of invariant manifold ( mim ) , where slowness is understood as _ stability _ ( see also chapter 4 of ) , so that a sim is the _ stable stationary _ solution of a relaxation process ( _ film equation _ ) we notice that the projector operator introduces first order spatial derivatives ( with respect to the manifold parameters ) .therefore , ( [ inveq ] ) and ( [ film ] ) are partial differential equations ( see also ) whose unknown is the function , which is conveniently utilized for a parametric representation of the manifold , with being the image of : .several numerical schemes have been suggested in the literature for solving eqs .( [ inveq ] ) and ( [ film ] ) : the newton method with incomplete linearization and the relaxation method in , the semi - implicit scheme in represent a few examples .the latter approaches aiming at the direct solution of both the invariance condition and film equation are often hindered by severe numerical ( courant type ) instabilities . toward the end of overcoming the latter issues , the relaxation redistribution method ( rrm )has been recently introduced ( see also fig .[ schizzo ] ) . in the following , exploiting the rationale behind the rrm, we devise a set of ordinary differential equations approximating the dynamics of the film equation ( [ film ] ) in a neighborhood of a fixed macrostate .let the dynamical system ( [ odegen ] ) be characterized by a hierarchy of time scales , and let be of the order of the fastest scale of ( [ odegen ] ) .let the matrix and its -th row ,\;b_j = \left [ { b_{j1 } , ... , b_{jn } } \right],\ ] ] define a linear mapping from the phase - space into a reduced space of dimension : \in \xi , \ ] ] such that a macrostate can be associated with any microstate via the ( [ micro2macro ] ) . in the following , we develop an _ iterative _ methodology for refining an initial approximation of the sim in a vicinity of a given macrostate . to this end , at each iteration , we assume that , in a neighborhood of , the sim is approximated by an affine linear mapping , , of the form : where and are a matrix and a column vector , respectively , such that : ,\;a_i^k = \left [ { \begin{array}{*{20}c } { a_{1i } ^k } \\ \vdots \\ { a_{ni } ^k } \\\end{array } } \right],\;\;l^k = \left [ { \begin{array}{*{20}c } { l_1 ^k } \\ \vdots \\ { l_n ^k } \\\end{array } } \right].\ ] ] notice that , the over - bar denotes the pivot ( in fig . [ schizzo ] ) at an arbitrary iteration along with the corresponding macrostate . for the state belongs to the space defined by the linear function in ( [ linearized.manifold ] ) , the column vector satisfies : assuming the existence of a -dimensional sim , , we aim at devising a procedure for updating the linear mapping ( [ linearized.manifold ] ) : such that ( [ linearized.manifold.up ] ) describes with a better accuracy than ( [ linearized.manifold ] ) in a neighborhood of . toward this end , we follow the rationale behind the relaxation redistribution method ( rrm ) introduced in .we stress that , at any iteration , the manifold is described by the mapping with the microstates , belonging to the affine subspace ( [ linearized.manifold ] ) . in fig .[ schizzo ] , we pictorially show the relaxation of ( large circle ) and one of its neighbors ( small circle ) ( all in the space defined by ) toward during the time , where is a small parameter . notice that , owing to arbitrariness in picking the -th neighbor , for simplicity we make the choice . 
according to the rrm algorithm , describes the subspace defined by the set of relaxed states .the updated points can be written as : which represent the advance in time of the points , during a period , according to an explicit euler scheme .upon linearization of the right - hand side of ( [ odegen ] ) , , ( [ relaxed.points ] ) take the approximate form : with ] .equation ( [ yetanotherf ] ) stems from ( [ f.updated ] ) where the origin , , of the affine subspace has been replaced with .equations ( [ yxi.mat ] ) yield the function in the form ( [ linearized.manifold.up ] ) : where the updating rules ( [ updating ] ) and ( [ akp1 ] ) can be interpreted as the explicit euler numerical scheme for solving the following dynamical system : \tau^{-1}.\ ] ] the second equation in ( [ y.evol ] ) can be derived , by analogy with ( [ updating ] ) , after recasting the first equation in ( [ akp1 ] ) as follows : \tau.\ ] ] the above equations ( [ y.evol ] ) are the governing equations of the linearized rrm , which dictate a fictitious temporal evolution of a state and a matrix , \quad a_i = \left [ { \begin{array}{*{20}c } { a_{1i } } \\ \vdots \\ { a_{ni } } \\\end{array } } \right],\ ] ] ( defining an affine linear mapping of the form ) towards the corresponding -dimensional slow invariant manifold in a neighborhood of a given macroscopic state , with , \ ; \phi(i , j)= \delta_{ij } + b_i j a_j \tau . \ ] ] for the sake of clarity , we point out that the dynamics ( [ y.evol ] ) ( as well as the rrm dynamics in ) is referred to as _ fictitious _ because , unlike the original detailed system ( [ odegen ] ) , no physical or chemical processes are typically described by it .moreover , we stress that the presence of the time - scale in the right - hand side of the equations in ( [ y.evol ] ) introduces a remarkable stiffness , thus the odes ( [ y.evol ] ) typically require state of the art stiff integrators ( see , e.g. , ) . according to the rrm method , the sim is obtained when the relaxation and redistribution steps balance each other ( details can be found in ) . in the suggested algorithm ,the analogous condition is satisfied at the steady state ( here denoted as and ) of the dynamical system ( [ y.evol ] ) .hence , the sim is given ( in a vicinity of ) by : we stress that computation of the quantity does not require additional refinements , and it is performed by ( [ sim.linear ] ) ( upon convergence of ( [ y.evol ] ) ) if a linear approximation is to be provided for approximating the mapping ( [ mapping ] ) in a neighborhood of .it is worth noticing that , inspection of the right - hand side of the first equation in ( [ y.evol ] ) reveals a clear connection between the rrm method introduced in and the film equation ( [ film ] ) .in fact , although ( [ y.evol ] ) represents a system of ordinary differential equations whereas ( [ film ] ) is a partial differential equation , the former only describes the ( [ film ] ) locally in a vicinity of a macrostate . in this respect ,the projector onto the tangent space of a manifold takes the explicit form : . 
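the elided projector is consistent with $p = a \left( b a \right)^{-1} b$ , which indeed satisfies $p^2 = p$ ; under that assumption , a minimal sketch of how the pivot relaxation could be integrated with a stiff solver reads as follows ( the coupled evolution of the tangent matrix prescribed by the linearized rrm is omitted for brevity ) :

```python
import numpy as np
from scipy.integrate import solve_ivp

def relax_pivot(f, A, B, y0, t_end):
    """Fictitious relaxation of a pivot point toward the SIM (sketch only).
    Assumes the tangent-space projector P = A (B A)^{-1} B, which satisfies
    P @ P = P; the update of A from the linearized RRM is not included."""
    n = len(y0)
    def rhs(t, y):
        P = A @ np.linalg.solve(B @ A, B)
        return (np.eye(n) - P) @ f(y)
    # the relaxation is stiff, so an implicit scheme is appropriate
    return solve_ivp(rhs, (0.0, t_end), y0, method="Radau")
```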
in this respect ,the latter operator satisfies the condition of projectors : due to the relation .similarly to ( [ film ] ) , the governing equations of the linearized rrm ( [ y.evol ] ) prescribe a composition of two motions : the first one along the detailed dynamics , while the second one along the tangent space of , .finally , at steady state of ( [ y.evol ] ) , the invariance condition ( [ inveq ] ) is satisfied : the above equation imposes that , on the sim , the component of dynamics in the fast subspace vanishes . since that condition lies at the heart of other popular methods ( such as ildm and csp ) , this explains the formal resemblance of ( [ y.evol ] ) ( at steady state ) to the equations adopted in ildm and csp .the methodology proposed in the previous section can be utilized for extracting the slow invariant manifold ( i.e. the subspace of slow motions or _slow subspace _ , for short ) with respect to the ode system ( [ odegen ] ) . nevertheless , this is only one aspect of model reduction : computing the _ fast subspace _ is indeed required in order to achieve the complete decomposition of the full dynamics ( slow - fast decomposition ) .we notice that , towards this end , several approaches have been proposed in the literature . for instance , the notion of _ thermodynamic projector _ for dissipative systems supported by a thermodynamic lyapunov function , the spectral decomposition of the jacobian matrix , and the csp algorithm are some of the most popular examples .those methods might be adopted in combination with the above technique ( [ y.evol ] ) as well , for an _ a posteriori _ reconstruction of the fast subspace . however , here in the same spirit of the method presented in section [ slow.equations ] , we propose an alternative procedure for computing the fast subspace , in a neighborhood of a given macrostate , once the linear function ( [ sim.linear ] ) has been computed .let us assume that the fast subspace can be uniquely parameterized ( at least locally ) by the variables .let the matrix and its -th row ,\ ; \tilde{b}_j = \left [ { b_{j1 } , ... , b_{jn } } \right ] , \quad z = n - q - r,\ ] ] define a linear mapping : ,\ ] ] where the dynamics of the system ( [ odegen ] ) obeys a set of linear conservation laws . in a neighborhood of the sim point , at a given iteration , the fast sub - space can be represented by a linear function as follows : with , \ ; \tilde a_i^k = \left [ { \begin{array}{*{20}c } { \tilde a_{1i } ^k } \\ \vdots \\ { \tilde a_{ni } ^k } \\\end{array } } \right],\;\ ;\tilde l^k = \left [ { \begin{array}{*{20}c } { \tilde l_1 ^k } \\ \vdots \\ { \tilde l_n ^k } \\\end{array } } \right].\ ] ] similarly to the procedure of section [ slow.equations ] , here we aim at devising an iterative procedure so that the function ( [ y.fast ] ) , in the limit , accurately describes the fast subspace .let denote an arbitrary small parameter .following the pictorial representation of fig .[ schizzo.fast ] , for every variable we consider the relaxation of the two neighbors ( in the affine space ( [ y.fast ] ) ) of under the anti - parallel dynamics . after time , these states move to the new locations : which , upon linearization of the vector field , take the approximate form : therefore , a set of vectors spanning the fast sub - space at the iteration reads : where , for the sake of notations , .we can thus describe the updated fast sub - space as follows : or equivalently in matrix notations with , \tilde{\lambda}=\left[\tilde \alpha_1, ... 
,\tilde \alpha_z \right]^t.\ ] ] by substituting the ( [ function.fast ] ) into ( [ b.eta.fast ] ) , where the generic element of the matrix reads where , owing to ( [ b.eta.fast ] ) and ( [ y.fast ] ) , from ( [ function.fast ] ) and ( [ eta.eq.01 ] ) , it follows that at the iteration the linear function describing the fast sub - space is : so that similarly to the ( [ akp1 ] ) , the updating rule ( [ a.fast.updated ] ) can be regarded as the explicit euler scheme for integrating the dynamical system : \tau^{-1},\ ] ] with , \ ; \tilde{\phi } \left ( i , j \right ) = \delta_{ij } - \tilde{b}_i j^{ss } \tilde{a}_j \tau .\ ] ] we notice that the matrix keeps evolving under the dynamics ( [ y.evol ] ) until the following steady condition holds : which can be recast in the more explicit form : it is straightforward to prove that right eigenvectors of the jacobian satisfy the stationary condition ( [ stationary.a ] ) .let the columns of represent a set of eigenvectors of such that where is a diagonal matrix whose non - zero components are the corresponding eigenvalues .upon substitution of ( [ eigen.def ] ) in ( [ stationary.a ] ) , we obtain the identity : due to the condition .similar considerations apply to the evolution of under the ( [ a.fast.dynamical ] ) .hence , we can conclude that the eigenvectors of ( evaluated at ) do provide stationary solution for both the equations and . the identity ( [ identity ] ) also suggests that , if ( [ eigen.def ] ) holds , the projector operator in the first equation of ( [ y.evol ] ) takes the simple stationary form , such that the pivot evolution is ruled by : the above considerations suggest that the proposed method can deliver approximations of the sim up to an accuracy of the order of ildm .it is worth stressing that such a limit is not due to the rrm approach , rather to the linear approximations of ( [ mapping ] ) and ( [ relaxed.points ] ) by ( [ linearized.manifold ] ) and ( [ relaxed.points.approx ] ) , respectively .hence , other governing equations leading to more accurate description of the sim compared to ( [ y.evol ] ) may be also devised , abandoning the present linear expressions ( [ linearized.manifold ] ) and ( [ relaxed.points.approx ] ) in favor of higher order approximations ( at the cost of a more demanding implementation ) .the method described in section [ slow.equations ] for constructing local approximations of sim requires the initial choice of and .several approximations of the sim can be adopted for this purpose as proposed in . in the following ,we discuss in detail another possible initialization strategy for the case of dissipative systems . closed chemically reactive mixture of gasesare prototypical examples of large dissipative systems that can be addressed by model reduction techniques .in fact , due to the second law of thermodynamics , the dynamical system ( [ odegen ] ) , describing the temporal evolution of chemical species concentrations , is equipped with a thermodynamic lyapunov function related to entropy and always decreasing in time . in this case ,a rough approximation of the sim is often provided by the _ quasi - equilibrium manifold _ ( qem ) , also referred to as _ constrained equilibrium manifold _a qem is defined by means of the following constrained optimization problem : ^t , \\ dy = \left [ { \chi _ 1 , ... , \chi _ r } \right]^t . 
\\\end{array } \right.\ ] ] where denotes the qem dimension , while the matrix imposes the conservation of the number of moles ( ) of chemical elements involved in the reaction . in fig .[ schizzo.qem ] , the geometry behind the notion of qem is shown schematically .let and be the second derivative matrix of the lyapunov function and the null space of the matrix of contraints in ( [ qem.def ] ) : ,\quad d = \left [ \begin{array}{l } d_1 \\ \vdots \\\end{array } \right ] .\ ] ] let the matrix be defined as follows : = \ker d.\ ] ] with the basis vectors spanning the null space of ( ) , an arbitrary vector along the tangent space of a qem can be written in terms of the vector ^t$ ] as : .\ ] ] ) , and being the second derivative matrix of the lyapunov function and the null space of the full set of constraints in ( [ qem.def ] ) , respectively.,scaledwidth=100.0% ] the geometry behind the optimization problem ( [ qem.def ] ) imposes the orthogonality condition ( see also fig .[ schizzo.qem ] ) , and the tangent space to a qem is spanned by : recalling the definition of the matrix in ( [ linearized.manifold ] ) , a possible initialization of ( with describing locally a qem ) takes the form : whereas can be found by solving the optimization problem ( [ qem.def ] ) using for example tools suggested in and .finally , possible choices for the matrix are discussed in ( spectral quasi equilibrium parameterization ) and ( constrained equilibrium parameterization ) while exact formulae for computing matrices and can be found in . moreover , eq .( [ a.fast.updated ] ) and the dynamical system ( [ a.fast.dynamical ] ) require the initial condition .a possible option is the following : since fast motions are necessarily transversal to the slow subspace ( [ sim.linear ] ) , a reasonable choice for the matrix reads : \right)^t.\ ] ] as a first guess ( ) , let the mapping ( [ y.fast ] ) describe the orthogonal subspace to the sim ( [ sim.linear ] ) .more specifically , let the former space be spanned by the columns of the matrix ,\ ] ] similarly to ( [ slope_qem ] ) , initial condition for ( [ a.fast.updated ] ) and ( [ a.fast.dynamical ] ) takes the explicit form we notice that owing to the relations and , as an alternative to ( [ slope_qem ] ) and ( [ ic.fast ] ) , and can be initialized by computing the moore - penrose pseudoinverse matrices of and , respectively .stability of the governing equations of the linearized rrm ( [ y.evol ] ) can be exploited for adaptive construction of sim . in the first place , eqs .( [ y.evol ] ) can be solved with : if convergence is experienced , we assume that a 1d reduced model of the system ( [ odegen ] ) can be constructed in a vicinity of the macrostate .in other words , a _ minimal description _ ( [ odered ] ) of the detailed system ( [ odegen ] ) can be accomplished by means of one degree of freedom . on the contrary , with no convergence ,the manifold dimension is updated to and the procedure repeated . upon convergence with some , we may infer that a _ minimal description _ of the detailed dynamics requires degrees of freedom . in this sense, the suggested method enables an adaptive construction of sim ( i.e. 
varying dimension in the phase - space without any _ a priori _ assumptions on the value of ) .the above idea relies upon the assumption that rrm is stable provided the existence of sim of a certain dimension .more details on the stability of the rrm can be found in , where a comparative study between a method for the direct solution of the film equation ( [ film ] ) and rrm is performed .for the sake of simplicity , we consider here a four - dimensional model where the dynamics of two fast variables ( and ) is _ slaved _ to the motion of the slow variables and .let the functions , , and depend on and only . in the following , we focus on the ode system : + f_1 \partial_{c_1 } \theta_1 \left ( c_1,c_2 \right ) + f_2 \partial_{c_2 } \theta_1 \left ( c_1,c_2 \right ) } \\ { -\frac{1}{\epsilon } \left[c_4 - \theta_2 \left ( c_1,c_2 \right ) \right ] + f_1 \partial_{c_1 }\theta_2 \left ( c_1,c_2 \right ) + f_2 \partial_{c_2 } \theta_2 \left ( c_1,c_2 \right ) } \\ \end{array } } \right],\ ] ] where and denote a fixed small quantity and partial derivative with respect to variable , respectively .assuming that the dynamics of and is slaved to the slow variables , and , according to the chain rule , time derivatives of ( [ slaving ] ) take the explicit form : upon substituting equations ( [ chain_rule ] ) in ( [ odeben ] ) , one obtains the following _ invariance conditions _ with respect to ( [ odeben ] ) ( see also section [ background ] and eq .( [ inveq ] ) ) : + f_1 \partial_{c_1 } \theta_1 \left ( c_1,c_2 \right ) + f_2 \partial_{c_2 } \theta_1 \left ( c_1,c_2 \right ) , } \\ { \partial_{c_1 } c_4 f_1 + \partial_{c_2 } c_4 f_2 = -\frac{1}{\epsilon } \left[c_4 - \theta_2 \left ( c_1,c_2 \right ) \right ] + f_1 \partial_{c_1 } \theta_2 \left ( c_1,c_2 \right ) + f_2 \partial_{c_2 } \theta_2 \left ( c_1,c_2 \right ) . } \\\end{array}}\ ] ] a common approach to obtain solutions to the above invariance conditions is the _ chapman - enskog method _ , which is based on the assumption that is small compared to all other quantities , and it is implemented by series expansions of the ( [ slaving ] ) in powers of : hence , the first equation in ( [ invariance.conditions ] ) reads : + f_2 \partial_{c_2 } \left [ c_3^{(0 ) } + \epsilon c_3^{(1 ) } + \epsilon^2 c_3^{(2 ) } + ... \right ] = \\-\frac{1}{\epsilon } \left[c_3^{(0 ) } + \epsilon c_3^{(1 ) } + \epsilon^2 c_3^{(2 ) } + ... - \theta_1 \right ] + f_1 \partial_{c_1 } \theta_1 + f_2 \partial_{c_2 } \theta_1 .\end{split}\ ] ] after collecting terms with the same power of , we obtain : namely , and similarly . 
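a runnable sketch of this benchmark is given below . the slow vector field $f_1 = -c_1$ , $f_2 = -2 c_2$ is taken from the text , but the slaving functions $\theta_1 , \theta_2$ were lost in extraction , so the ones used here ( and their hand - coded partial derivatives ) are purely hypothetical :

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 1e-2                                  # time-scale separation parameter
f1 = lambda c1, c2: -c1                     # slow vector field (from the text)
f2 = lambda c1, c2: -2.0 * c2
theta1 = lambda c1, c2: c1 * c2             # hypothetical slaving functions
theta2 = lambda c1, c2: c1 + c2

def full_rhs(t, c):
    c1, c2, c3, c4 = c
    # partial derivatives of the hypothetical thetas: d theta1/d c1 = c2,
    # d theta1/d c2 = c1, and both partials of theta2 equal 1
    return [f1(c1, c2),
            f2(c1, c2),
            -(c3 - theta1(c1, c2)) / eps + f1(c1, c2) * c2 + f2(c1, c2) * c1,
            -(c4 - theta2(c1, c2)) / eps + f1(c1, c2) * 1.0 + f2(c1, c2) * 1.0]

def reduced_rhs(t, x):                      # two-dimensional model on the SIM
    return [f1(*x), f2(*x)]

full = solve_ivp(full_rhs, (0.0, 5.0), [1.0, 1.0, 2.0, 0.0], method="Radau")
slow = solve_ivp(reduced_rhs, (0.0, 5.0), [1.0, 1.0], method="Radau")
```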
for illustration purposes ,we choose : ^{-1 } , \\f_1=-c_1 , \ ; f_2=-2c_2 .\end{array}}\ ] ] in fig .[ caso - ben01 ] , the chapman - enskog solution to ( [ invariance.conditions ] ) is plotted to illustrate the relaxation of system ( [ y.evol ] ) , starting from the following initial pivot and tangent space : ^t , \;\;a^1 = \left [ { \begin{array}{*{20}c } 1 & 0 \\ 0 & 1 \\ { - 0.276 } & { - 1.405 } \\ { 0.225 } & { 0.0282 } \\\end{array } } \right].\ ] ] finally , the sim parameterization is chosen as follows : , .we stress that , in the computations , the steady state ( , ) does not depend on the initial choice of and ( see fig .[ caso - ben02 ] ) .the latter observation is consistent with the idea behind the rrm , which can be elucidated by saying that states on the sim represent _ stable steady states _ of the dynamical system ( [ y.evol ] ) .the reduced system ( [ odered ] ) for the above example ( [ odeben ] ) rules the evolution of the slow variables : whereas fast variables can be reconstructed by means of the mappings , ( fig .[ glob ] ) . in this respect , in fig .[ relax.comp ] , a solution trajectory of ( [ odeben ] ) is compared to the trajectory of ( [ reducedode ] ) , where the reconstruction of and is performed using both the exact chapman - enskog solution ( , ) and a linear look up table based on the nodes refined by the linearized rrm ( see also figs . [ glob ] and [ deviation ] ) ..the steady state ( , , , ) of ( [ y.evol ] ) is computed for several choices of the parameter starting from the state , , , , with and .no convergence is observed for .[ cols="^,^,^,^,^",options="header " , ] in table [ tabletau ] , we report a sensitivity analysis with respect to the parameter .we notice that an estimate of the time scales of a dynamical system can be obtained by a spectral decomposition of the jacobi matrix . for the case in fig .[ glob ] , at equilibrium ( , ) : where denotes the -th eigenvalue .hence , in the above computations we use . however , the latter parameters was varied within a wide range of values and no significant effect was noticed on both the stability of ( [ y.evol ] ) and the value of its steady state . in addition , we test the equations ( [ a.fast.dynamical ] ) for computing the mapping ( [ y.fast ] ) describing the local fast subspace . to this end , we make use of ( [ fast.b.matrix ] ) and ( [ ic.fast ] ) ( with , ) . at steady state of ( [ a.fast.dynamical ] ) ,we observe ( at any node of the grid in fig .[ glob ] ) that the columns of the matrix span the subspace defined by the vectors : ,\quad \left [ { 0,0,0,1 } \right],\ ] ] in accordance with the assumption that and are the fast variables .finally , the governing equations of the linearized rrm ( [ y.evol ] ) were applied to a more complicated case of the detailed reaction mechanism for combustion of hydrogen in air . here , eq . ( [ y.evol ] ) were tested for computing states of the sim with dimensions up to . in fig .[ h2 ] , we report an example with . moreover , we observed that any steady state of ( [ a.fast.dynamical ] ) corresponds to a matrix whose columns are linear combinations of the fast eigenvectors of ( i.e. 
, eigenvectors corresponding to the largest eigenvalues in absolute value ) .based on the rationale behind the film equation ( [ film ] ) and the relaxation redistribution method ( rrm ) , a set of ordinary differential equations ( odes ) is obtained with the aim of mimicking the only fast relaxation of a multiscale dynamical system towards a slow invariant manifold ( sim ) .this approach is characterized by a straightforward implementation consisting in solving the odes by state of the art stiff numerical schemes , and it proves useful for constructing accurate approximations of sim in any dimensions .it is worth stressing that , like rrm , convergence of equations ( [ y.evol ] ) towards a steady state might be used for a fully adaptive construction of heterogeneous ( i.e. varying dimension in different regions of the phase - space ) slow invariant manifolds .this work sheds light on the connection between the rrm method and the solution of both the invariance and film equations as postulated in ( see discussion at the end of section [ slow.equations ] ) .in addition , the novel algorithm ( [ a.fast.dynamical ] ) for approximating the fast subspace is suggested , and a possible initialization procedure for both ( [ y.evol ] ) and ( [ a.fast.dynamical ] ) is proposed .the methods are tested in the case of detailed combustion of hydrogen and air , as well as in a benchmark problem of a model with exact chapman - enskong solution of the invariance equation .we stress that , although the presented methodology has been tested in the case of dissipative systems with a unique steady state ( see section [ ben - ex ] ) , in this paper we show that the governing equations ( [ y.evol ] ) and ( [ a.fast.dynamical ] ) of the linearized rrm are based on the general notions of film equation ( [ film ] ) and sim ( local ) parameterization .therefore , investigations on the performance of the presented method in more general systems with multiple steady states and chaotic behavior ( see , e.g. , ) are planned for future publications . however , in the latter case , new initialization procedures are needed since the method discussed in section [ inizio ] is suitable for dissipative systems equipped with thermodynamic lyapunov function . finally , it is worth noticing that the proposed approach represents only one possible implementation of the rrm method ( ) .more accurate descriptions of the sim ( to be addressed in future publications ) can be obtained as well , abandoning the present linear expressions ( [ linearized.manifold ] ) and ( [ relaxed.points.approx ] ) in favor of higher order approximations .i. kevrekidis and b. sonday are gratefully acknowledged for suggesting the model ( [ odeben ] ) with the functions ( [ choice ] ) .the author wishes to thank s. pope and i. karlin for inspiring discussions .acknowledgments go to c. frouzakis and all referees for their valuable help in improving the quality of the manuscript .e. chiavazzo and i.v .adaptive simplification of complex systems : a review of the relaxation - redistribution approach . in a.gorban and d. roose , editors , _ coping with complexity : model reduction and data analysis _ , pages 231240 .springer , 2011 .d. lebiedz , d. skanda , and m. fein .automatic complexity analysis and model reduction of nonlinear biochemical systems . in m.heiner and a. uhrmacher , editors , _ computational methods in systems biology _ , pages 123140 .springer , 2008 .g. p. smith , d. m. golden , m. frenklach , n. w. moriarty , b. eiteneer , m. goldenberg , c. 
t. bowman , r. k. hanson , s. song , w. c. gardiner , v. v. lissianski , and z. qin . http://www.me.berkeley.edu/gri_mech/ . 1999 . m. valorani , d. a. goussis , and h. n. najm . higher order corrections in the approximation of low - dimensional manifolds and the construction of simplified problems with the csp method . , 209 : 754 - 786 , 2005 .
|
in this paper , we introduce a fictitious dynamics for describing only the fast relaxation of a stiff ordinary differential equation ( ode ) system towards a stable low - dimensional invariant manifold in the phase - space ( _ slow invariant manifold _ - sim ) . as a result , the demanding problem of constructing sims of any dimension is recast into the remarkably simpler task of solving a properly devised ode system by stiff numerical schemes available in the literature . in the same spirit , a set of equations is elaborated for local construction of the fast subspace , and possible initialization procedures for the above equations are discussed . the implementation for a detailed mechanism for combustion of hydrogen and air has been carried out , while a model with the exact chapman - enskog solution of the invariance equation is utilized as a benchmark . slow invariant manifold , film equation , stiff dynamical system , model reduction
|
the problem of the optimal portfolio , which is nowadays introduced in a new framework called _ modern portfolio theory ( mpt ) _ , has been extensively studied in the past decades . the mpt is one of the most important problems in financial mathematics . harry markowitz introduced a new approach to the problem of the optimal portfolio , the so - called _ mean - variance _ analysis . he chose a preferred portfolio by taking into account the following two criteria : the expected portfolio return and the variance of the portfolio return . in fact , markowitz preferred one portfolio to another if it had a higher expected return and a lower variance . + later , we find attempts in the literature to replace variance with well - known risk measures , such as _ value at risk _ and _ expected shortfall _ . for instance , embrechts et al . have shown that replacing mean - variance with any other risk measure having the translation invariance and positive homogeneity properties under elliptical distributions yields the same optimal solution . basak and shapiro studied an alternative version of the markowitz problem by applying var for controlling the incurred risk in an expected utility maximization framework which allows the profit of the risk takers to be maximized . the markowitz model has also been studied in the same framework by considering cvar as the risk measure . later , acerbi and simonetti studied the same problem as the one studied in with spectral risk measures . recently , cahuich and hernandez solved the same problem within the framework of utility maximization using the class of distortion risk measures . + there are both practical and theoretical criticisms that can be made about the usual framework of the optimal portfolio problem in the literature . one such criticism relates to the asset returns model itself . in fact , the elliptical distribution is the most widely used distribution for modeling asset returns in mpt . one of the reasons for choosing this distribution is the tractability of this class of distributions . but , in practice , financial returns do not follow an elliptical distribution . a second objection focuses on the choice of a measure of risk for the portfolio . unfortunately , one drawback of the previous works , for instance , is that no explicit formulas are available and numerical approximations are used to solve the optimization problem . the stochastic models which we propose for the asset returns in this paper are based on _ jump - diffusion _ * ( j - d ) * distributions . this family of distributions is more compatible with the stylized features of asset returns and also allows for straightforward statistical inference from readily available data . we also tackle the second issue by choosing a suitable ( coherent ) risk measure as our objective function . in this paper , we propose to use a new coherent risk measure , the so - called _ entropic value at risk ( evar ) _ , in the optimization problem . as this risk measure is based on the laplace transform of the asset returns , applying it to the jump - diffusion models yields an explicit formula for the objective function , so that the optimization problem can be solved without using numerical approximations . + the organization of this paper is as follows .
in section 2 , we provide a summary of properties of coherent risk measures and the _ entropic value at risk _ measure . we also continue this section by presenting a typical representation of the optimal portfolio problem , where we minimize the risk of the portfolio for a given level of portfolio return . in section 3 , we introduce our two models to be fitted as asset returns and we apply them in the optimization problem . we also derive some distributional properties of these models and finish section 3 by discussing the kkt conditions and optimal solutions . in section 4 , we discuss the parameter estimation method which we have used in this paper . we also provide a numerical example for three different stocks and analyze the efficient frontiers for evar , mean - variance and var for these three stocks . in this paper we use the optimization package in matlab to do the computations . we consider as the set of all bounded random variables representing financial positions . the following definition is taken from . a function is a coherent risk measure if for any and . the _ entropic value at risk _ is defined through the laplace transform of the position , and for a comprehensive study on this risk measure we may refer to . consider a portfolio in a financial market with different assets . denote the asset returns by the vector , in which shows the return of the i - th asset . the returns are random variables and their mean is denoted by , where is the expected return of the i - th asset , . moreover , assume as a risk measure . then , following [ general model ] , the _ optimal portfolio problem _ can be written mathematically as follows , where is a given level of return . applying various risk measures along with different models for random returns yields interesting problems from both theoretical and practical points of view . for instance , the classical mean - variance model introduced by markowitz is a special case of the model introduced in definition . in fact , markowitz used variance as a risk measure and applied it in the objective function given in , and he also considered that the returns from the portfolio are normally distributed . [ rem ] it has been shown in that if we assume the return variables follow elliptical distributions ( like the multivariate normal distribution ) , then the solution for the markowitz mean - variance problem will be the same as the optimal solution for the optimal portfolio problem obtained by minimizing any other risk measure having the translation invariance and positive homogeneity properties for a given level of return . it has also been shown in a phd thesis that for two different examples of elliptical distributions ( normal and student t ) the portfolio decompositions for expected shortfall and value at risk are the same as the one for standard deviation . + referring to remark [ rem ] , we see that if the underlying distribution is elliptical , then for any coherent risk measure the optimal solution of the problem in is the same as the optimal solution of the classical model by markowitz . in this section , we propose two multivariate models which do not follow elliptical distributions . these models , which are based on jump - diffusion distributions , can be fitted as the underlying models for returns . distributional properties of these models will also be studied . * multivariate model 1 . * consider the following multivariate model : where are -variate vectors such that . here , follows the normal distribution with , and the s are mutually independent for .
is assumed to follow the multivariate normal distribution with for each , where is the mean and is the covariance matrix . moreover , the s are assumed to be mutually independent . the random variable follows the poisson distribution with intensity and is independent of for each . the are assumed to have poisson distributions with intensity and to be mutually independent for . the are assumed to be mutually independent for all k and all , and is normally distributed with . finally , and are mutually independent , as well as . + this model can be derived from a jump - diffusion model which is the solution of a stochastic differential equation . we can rewrite this multivariate model as follows . * multivariate model 2 . * the model in prepared the ground for introducing another non - elliptical multivariate model which can be fitted for portfolio returns . this proposed model is given as follows . here , are -variate vectors such that , where follows the multivariate normal distribution with covariance matrix . is assumed to follow the multivariate normal distribution with for each , where is the mean and is the covariance matrix . moreover , the s are assumed to be mutually independent . the random variable follows the poisson distribution with intensity and is independent of for each . also , are mutually independent . + like the model introduced in subsection 3.1 , we can rewrite the multivariate model as . consider the multivariate models and . as these models are given in terms of a summation of multivariate normal and compound poisson distributions , we can provide the joint density functions for each of these models . gives the following presentation for the density function of model and also provides a proof , but we give a proof here for the sake of completeness . [ density1 ] consider the model . then the joint density function of the vector is given by , where , and . the idea we put forward to prove this proposition is to use conditional density functions . since the are mutually independent with normal distributions , the vector follows a multivariate normal distribution with mean and covariance matrix , where is the identity matrix of order n. moreover , by conditioning on each of and using the independence between , we obtain for each . thus , the independence between and for all and yields . conditioning on the random variable and using the independence between and gives the following conditional distribution . putting and together and using the independence between and provides the conditional distribution of given , i.e. , gives the conditional density of given . to get the density function of we need to multiply the conditional density by the probability functions associated with each and , and add them up . this completes the proof . if we follow the same procedure used for proposition [ density1 ] and apply it to the model , we can obtain the density function for the vector . the density function for the model is , where , and .
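with the inline formulas stripped from this record , a brief simulation sketch may help to illustrate the structure shared by the two models , namely a multivariate normal diffusion part plus a poisson number of multivariate normal jumps ( the common - jump structure of model 2 ) . the snippet below is only a sketch : all parameter names and values ( ` mu ` , ` cov ` , ` jump_mu ` , ` jump_cov ` , ` lam ` ) are illustrative placeholders , not the estimates used later in the paper .

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_jump_diffusion(n_samples, mu, cov, jump_mu, jump_cov, lam):
    """Draw return vectors R = Z + sum_{k=1}^{N} J_k, where
    Z ~ N(mu, cov) is the diffusion part, N ~ Poisson(lam) counts the
    jumps, and the jump sizes J_k ~ N(jump_mu, jump_cov) are i.i.d."""
    d = len(mu)
    out = np.empty((n_samples, d))
    for n in range(n_samples):
        r = rng.multivariate_normal(mu, cov)      # diffusion component
        for _ in range(rng.poisson(lam)):         # Poisson jump counter
            r += rng.multivariate_normal(jump_mu, jump_cov)
        out[n] = r
    return out

# illustrative placeholder parameters for a 3-asset portfolio
mu       = np.array([0.0010, 0.0020, 0.0015])
cov      = 1e-4 * np.eye(3)
jump_mu  = np.array([-0.010, -0.008, -0.012])
jump_cov = 4e-4 * np.eye(3)
returns  = sample_jump_diffusion(10_000, mu, cov, jump_mu, jump_cov, lam=0.1)
```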
in the sequel of this part we provide the laplace exponents for both models and . consider the multivariate model . then the laplace exponent of the vector at is , where and are the column vectors associated with the row vectors and respectively . the independence between and , together with the fact that the laplace transforms of normal and compound poisson distributions exist , yields the result . consider the multivariate model . then the laplace exponent of the vector at is . the laplace transforms of gaussian distributions and compound poisson distributions exist . so , can be derived by using the independence between and . now , we apply evar along with the model proposed in to the optimal portfolio problem . thus , is written as follows . applying evar and the model to the optimal portfolio problem yields . in this section we would like to identify the necessary and sufficient conditions for optimality of problems and . in fact , we want to examine the karush - kuhn - tucker ( kkt ) conditions for these problems and study whether the constrained problems in the last two sections have optimal solutions . the objective functions of both problems and are smooth enough ( they are continuously differentiable functions ) , which helps us to verify the kkt conditions much more easily . the kkt conditions provide necessary conditions for a point to be an optimal point of a constrained nonlinear optimization problem . we refer to chapter 5 , page 241 , for a comprehensive study of kkt conditions for nonlinear optimization problems . here , we study these conditions for the model by using the same notation as used on page 200 . we rewrite problem as follows . let be a _ regular point _ for the problem ( a feasible point is said to be a regular point if the gradient vectors for are linearly independent ) . then , the point is a local minimum of subject to the constraints in if there exist lagrange multipliers and for the lagrangian function such that the following hold : 1 . 2 . 3 . 4 . 5 . where is the ith entry of the row vector . + since the functions and in are linear and the functions for are convex , by referring to section of we see that the feasible region is a convex set . on the other hand , the risk measure is a convex function with respect to the variables and for all . we refer to for a proof . thus , the objective function in problem is convex too . we see that any * local minimum * for problem is a * global minimum * too and the kkt conditions are also * sufficient * ; see page 212 . in this section we will provide the kkt conditions for the optimization problem . we show that these conditions are also sufficient for a solution to be an optimal one . first , we rewrite the problem in the following way . by applying the same definitions and notation used in the previous section we can state the kkt conditions as follows : 1 . 2 . 3 . 4 . 5 .
where and are the ith entries of the row vectors and respectively . + since the feasible region and the objective function for the optimization problem are convex , again by referring to we can see that the kkt conditions are also * sufficient * and any * local minimum * for problem is a * global minimum * as well . in this section we study the optimization problem for multivariate model 1 given in . in fact , we analyze the efficient frontier for this problem when the risk measures are evar and standard deviation . our analysis shows that we obtain different portfolio decompositions corresponding to evar and standard deviation , because the underlying model for returns follows a non - elliptical distribution ( model 1 ) . thanks to the closed form for evar we can use optimization packages in mathematical software to solve the optimization problem without using simulation techniques like monte carlo simulation . studying the optimization problems and requires knowing the parameters of the multivariate models and . to estimate these parameters we use a method of estimation for joint parameters , the so - called _ extended least squares ( els ) _ . in fact , assume that we are given a sample of individuals . let denote the subject 's vector of repeated measurements , where the are assumed to be independently distributed with means and covariance matrices given by , where and are vectors of unknown parameters which should be estimated . extended least squares ( els ) estimates are obtained by minimizing the following objective function , where and are defined in and is the determinant of the positive definite covariance matrix . following , it can be seen that els is joint normal theory maximum likelihood estimation . in fact , minimizing is equivalent to maximizing the log - likelihood function of the when the are independent and normally distributed with means and covariance matrices given by . we construct the portfolio by choosing 3 stocks : apple , intel and pfizer ( pfe ) . we use the close data ranging from 20/09/2010 to 26/08/2013 . the weekly close data are converted to log returns , i.e. , if we consider as the close price for week , then the log return is . now consider the model . we try to apply this model to these three stocks and determine the parameters in in order to solve the optimization problem . in this case we have , the size of our sample , and is a row vector associated with the mean of the returns . then the vector is , for all . let be the covariance matrix for the multivariate normal distribution . then , the covariance matrix in has the following representation , for all . therefore , by plugging and into we get the objective function for the els method . doing the same procedure for the model , we can find the parameters in . let and be the covariance matrices for the multivariate normal distributions and respectively . then we have , in the following we provide the results for the portfolio decomposition corresponding to the three stocks , evar and standard deviation . these results have been derived for model 1 given in . in order to estimate the parameters of model 1 we call _ fminsearch _ in matlab , where the function to be optimized is the objective function introduced in . to find the efficient frontiers of evar we also call _ fmincon _ in matlab , where the function to be optimized is the objective function in . figure [ fig : minipage2 ] shows the two efficient frontiers based on model 1 for evar and standard deviation .
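to make the numerical procedure concrete , the following python sketch mirrors the matlab workflow described above ( scipy 's ` minimize ` standing in for _ fmincon _ ) . it uses ahmadi - javid 's representation of evar as an infimum over a positive scalar of the scaled cumulant generating function of the portfolio loss . the confidence level , the target return and the model parameters are placeholders , and the cumulant generating function written here is the generic diffusion - plus - common - jump form , not the exact expressions of the paper 's models , whose formulas were stripped from this record .

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

# illustrative placeholders (not the estimates obtained from the ELS fit)
mu       = np.array([0.0010, 0.0020, 0.0015])
cov      = 1e-4 * np.eye(3)
jump_mu  = np.array([-0.010, -0.008, -0.012])
jump_cov = 4e-4 * np.eye(3)
lam, alpha, r0 = 0.1, 0.05, 0.0005

def cgf(t):
    """Cumulant generating function of R = Z + sum_{k<=N} J_k at vector t."""
    diffusion = t @ mu + 0.5 * t @ cov @ t
    jumps = lam * (np.exp(t @ jump_mu + 0.5 * t @ jump_cov @ t) - 1.0)
    return diffusion + jumps

def evar(w):
    """EVaR of the portfolio loss -w.R: inf_{z>0} [K(-z*w) - ln(alpha)] / z."""
    inner = lambda log_z: (cgf(-np.exp(log_z) * w) - np.log(alpha)) / np.exp(log_z)
    return minimize_scalar(inner, bounds=(-8.0, 8.0), method="bounded").fun

mean_returns = mu + lam * jump_mu          # E[R] under the jump-diffusion model
d = len(mu)
constraints = ({"type": "eq",   "fun": lambda w: w.sum() - 1.0},
               {"type": "ineq", "fun": lambda w: w @ mean_returns - r0})
result = minimize(evar, np.full(d, 1.0 / d), bounds=[(0.0, 1.0)] * d,
                  constraints=constraints, method="SLSQP")
print(result.x, result.fun)                # optimal weights and portfolio EVaR
```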
tables [ table : compositionevar ] and [ table : compositiondeviation ] show the portfolio compositions and the corresponding evar and standard deviation respectively . ahmadi - javid , a. , _ entropic value - at - risk : a new coherent risk measure _ . journal of optimization theory and applications , springer . basak , s. , shapiro , a. , _ value - at - risk based risk management : optimal policies and asset prices _ . rev . financ . stud . 14 , 371 - 405 , 2001 . belegundu , a. d. , chandrupatla , r. t. , _ optimization concepts and applications in engineering _ . cambridge university press , 2nd edition , 2011 . cahuich , l. d. , hernandez , d. h. , _ quantile portfolio optimization under risk measure constraints _ . appl . math . optim . 68 , 157 - 179 , 2013 . embrechts , p. , mcneil , a. , straumann , d. , _ correlation and dependency in risk management : properties and pitfalls _ . in : risk management : value at risk and beyond , ed . by m. dempster and h. moffatt . cambridge university press , 2001 . follmer , h. , knispel , t. , _ entropic risk measures : coherence vs. convexity , model ambiguity , and robust large deviations _ . stoch . dyn . 11 , no . 2 - 3 , 333 - 351 , 2011 . follmer , h. , schied , a. , _ stochastic finance , an introduction in discrete time _ . volume 27 of de gruyter studies in mathematics . walter de gruyter & co. , berlin , extended edition , 2004 . hu , w. , _ calibration of multivariate generalized hyperbolic distributions using the em algorithm , with applications in risk management , portfolio optimization and portfolio credit risk _ . thesis ( ph.d. ) , the florida state university , 115 pp . , isbn : 978 - 0542 - 66575 - 2 , 2005 . markowitz , h. m. , _ portfolio selection _ . the journal of finance 7 ( 1 ) , 77 - 91 , 1952 . press , s. j. , _ a compound events model for security prices _ . journal of business , vol . 40 , no . 3 , 1967 . press , s. j. , _ a compound poisson process for multiple security analysis _ . random counts in scientific work : random counts in physical science , geo science and business , v. 3 , 1971 . rachev , s. t. , stoyanov , s. v. , fabozzi , f. j. , _ advanced stochastic models , risk assessment , and portfolio optimization : the ideal risk , uncertainty , and performance measures _ . wiley , isbn : 978 - 0 - 470 - 05316 - 4 , 2008 . ruijter , m. j. , oosterlee , c. w. , _ two - dimensional fourier cosine series expansion method for pricing financial options _ . siam j. sci . comput . 34 , no . 5 , 642 - 671 , 2012 . tsukahara , h. , _ one - parameter families of distortion risk measures _ . math . finance 19 , 691 - 705 , 2009 . vonesh , e. , chinchilli , v. m. , _ linear and nonlinear models for the analysis of repeated measurements _ . crc press , 1996 .
|
this paper is devoted to the study of the optimal portfolio problem . harry markowitz 's ph.d . thesis prepared the ground for the mathematical theory of finance . in modern portfolio theory , we typically find asset returns that are modeled by a random variable with an elliptical distribution , and the notion of portfolio risk is described by an appropriate risk measure . in this paper , we propose new stochastic models for the asset returns that are based on _ jump - diffusion _ * ( j - d ) * distributions . this family of distributions is more compatible with the stylized features of asset returns . on the other hand , in the past decades , we find attempts in the literature to use well - known risk measures , such as _ value at risk _ and _ expected shortfall _ , in this context . unfortunately , one drawback of these previous approaches is that no explicit formulas are available and numerical approximations are used to solve the optimization problem . in this paper , we propose to use a new coherent risk measure , the so - called _ entropic value at risk ( evar ) _ , in the optimization problem . for certain models , including a jump - diffusion distribution , this risk measure yields an explicit formula for the objective function so that the optimization problem can be solved without resorting to numerical approximations . + * keywords . * portfolio optimization problem , coherent risk measure , entropic value at risk , conditional value at risk , elliptical distribution , jump - diffusion distribution .
|
we consider the following conservation law , where is the solution function and the flux function . the computational domain is divided into non - overlapping cells or elements , and a local coordinate transformation is introduced that transforms the real mesh cell to the standard element . the flux derivatives at the cell boundaries are solved from the following derivative riemann problems ( drp ) , f^{[1]{\mathcal b}}_{x\,i-{1\over2}} = {\rm driemann} \left( f^{[1]l}_{x\,i-{1\over2}} , f^{[1]r}_{x\,i-{1\over2}} \right) , where riemann and driemann denote the solvers for the conventional and derivative riemann problems respectively . a hermite interpolation is used to determine the modified flux function , which is written in polynomial form as . the first order derivative ( gradient ) of reads . then , the derivatives of the modified flux function at the solution points are obtained as . the solutions are then immediately computed by with a proper time integration algorithm . in the original mcv3 scheme , the solution points are equally spaced and include the two cell ends , i.e. , and . the left / right - most solution points coincide with the cell boundaries . in this case , the continuity conditions of the flux function at the cell boundaries are automatically satisfied , and only the derivatives of the flux function need to be computed from the drp . the derivatives of the modified flux function at the solution points are obtained as . it is straightforward to show the following conservation property , . the solution points can be chosen as other quadrature point sets , such as the legendre or chebyshev gauss points , but we find from fourier analysis and numerical tests that different solution point sets do not significantly alter the numerical results . we present here two variants by making use of different constraints in determining the modified flux function . instead of the constraint conditions of , we impose the multi - moment constraints at the cell center , \tilde{f}^{[1]}_{\xi i}(0) = f^{[1]}_{\xi i}(0) ; \quad \tilde{f}^{[2]}_{\xi i}(0) = f^{[2]}_{\xi i}(0) . ( [ mcv3_2 ] ) we retain the continuity of the modified flux function at the cell boundaries , which is required for numerical conservation and stability . the rest of the constraints are determined from the primary interpolation function in terms of derivatives . as in the original mcv3 scheme , the solution points are equally spaced and include the two cell ends , , and . the constraint conditions allow us to reconstruct a polynomial of 4th degree , . the first - order derivative then reads , . the derivatives of the modified flux function at the solution points are obtained as . it is straightforward to show the following conservation property , . we use the chebyshev - gauss points , i.e. , and , as the solution points . the constraint conditions lead to the following polynomial of 4th degree , . the first - order derivative then reads , . the derivatives of the modified flux function at the solution points are obtained as . from , the numerical conservation can be immediately proved by the following equality , . in this section , we evaluate the numerical schemes previously discussed by examining the fourier mode transported with the following advection equation .
considering a wave solution and assuming a uniform grid spacing , we have and , which recasts the time evolution equations for the solutions into . the properties of the numerical schemes can be examined by analyzing the eigenvalues of . fig . [ spec-3 ] shows the spectrum ( the collection of all eigenvalues ) of for the different schemes . it is observed that all eigenvalues lie in the left half of the complex plane , i.e. , the negative real parts indicate that all the schemes are stable under the cfl conditions . the allowable cfl numbers for computational stability can be estimated from the largest eigenvalue , the spectral radius of each scheme , i.e. , a scheme with a larger spectral radius has to use a smaller cfl number for computational stability . we know from fourier analysis that , and , which reveals that the mcv3_cpcc scheme has the largest stable cfl number . this is confirmed by numerical tests for the linear advection equation . with a 3rd - order runge - kutta scheme , the largest allowable cfl numbers are 0.47 , 0.44 and 0.41 for mcv3_upcc , mcv3_cpcc and mcv3 respectively . the numerical errors of the different schemes can be examined by comparing the principal eigenvalue of , , with the exact solution , , of the advection equation for the initial condition . the error of a given semi - discrete formulation is , and the convergence rate is evaluated by . [ table error-3ps : numerical errors and convergence rates . ] consistent with the aforementioned observations , we find from table 2 that mcv3_upcc and mcv3_cpcc have smaller truncation errors in both dissipation ( real part ) and dispersion ( imaginary part ) compared to the original mcv3 . mcv3 and mcv3_upcc have third order accuracy in dissipation and all schemes have fourth order accuracy in dispersion . the mcv3_cpcc is superior in dissipation accuracy , which is fifth order , two orders higher than the others . the computational modes are represented by and . it is observed that all the schemes have negative real parts with leading terms of order in and , which means that the computational modes will be exponentially damped out . * the proposed variants have improved numerical features in both numerical accuracy and computational efficiency compared to the original mcv3 scheme . * in the new schemes , only the continuity of the flux function is required at the cell boundaries , and the constraint on the derivative of the flux is not required anymore . this makes the schemes directly applicable to any quadrilateral and hexahedral mesh . * schemes with more solution points and higher order accuracy can be devised in the same spirit .
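the stable cfl numbers quoted above can be estimated numerically by scanning the semi - discrete spectrum against the stability region of the time integrator . the following python sketch illustrates the procedure . since the update matrices of the mcv3 variants are not reproduced in this record , the spectrum of a first - order upwind discretization is used as a stand - in , and the scheme - specific eigenvalues would simply be substituted for it .

```python
import numpy as np

def rk3_amplification(z):
    """Stability polynomial of a third-order Runge-Kutta scheme."""
    return 1.0 + z + z**2 / 2.0 + z**3 / 6.0

def max_cfl(eigvals, hi=4.0, tol=1e-4):
    """Largest c with |R(c*lambda)| <= 1 for all semi-discrete eigenvalues,
    found by bisection on the CFL number c."""
    stable = lambda c: np.all(np.abs(rk3_amplification(c * eigvals)) <= 1.0 + 1e-12)
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if stable(mid) else (lo, mid)
    return lo

# stand-in spectrum: first-order upwind for u_t + u_x = 0, with
# lambda(theta) = -(1 - exp(-i*theta)); an MCV3 variant would supply the
# eigenvalues of its own update matrix over the wavenumber range instead
theta = np.linspace(0.0, 2.0 * np.pi, 721)
eigvals = -(1.0 - np.exp(-1j * theta))
print(max_cfl(eigvals))   # about 1.25 for upwind combined with RK3
```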
|
two variants of the mcv3 scheme are presented based on a flux reconstruction formulation . different from the original multi - moment constrained finite volume method of third order ( mcv3 ) , the multi - moment constraints are imposed at the cell center on the point value and the first and second order derivatives . the continuity of the flux function at the cell interfaces is also used as a constraint to ensure numerical conservation . compared to the original mcv3 scheme , both variants have higher numerical accuracy and a less restrictive cfl condition for computational stability . moreover , without the need to solve derivative riemann problems at cell boundaries , the new schemes benefit implementations on arbitrary quadrilateral meshes in 2d and hexahedral meshes in 3d . high order scheme , flux reconstruction , multi - constraint , nodal formulation , conservation
|
aggregate dynamical equations derived from individual - based models are usually exact only if the micro - level dynamics satisfy certain rather restrictive symmetry conditions brought about by homogeneity of the agent properties and interaction rules . micro - level heterogeneity or complex interaction structures often lead to correlations in the dynamics that a macro - level description may not account for . that is , it is not possible to exactly describe the system evolution as a closed set of macroscopic equations . for this reason , it is important to understand the consequences of heterogeneous agent properties for macroscopic formulations of the system dynamics . in this paper , we present results in that direction for the diamond search equilibrium model , also known as the coconut model , which was introduced in 1982 by the 2010 nobel laureate peter diamond as a model for an economy with trade frictions . imagine an island with agents that like to eat coconuts . they search for palm trees and harvest a nut from one if the tree is not too tall , meaning that its height does not exceed an individual threshold cost ( ) . however , in order to consume the nut and derive utility from this consumption , agents have to find a trading partner , that is , another agent with a nut . therefore , the agents have to base their harvest decision _ now _ ( by setting ) on their expectation of finding a trading partner _ in the future _ . or , less metaphorically , agents are faced with production decisions that have to be evaluated based on their expectations about the future utility of the produced entity , which in turn depends on the global production level via a trading mechanism . for this reason , the coconut model is useful not only for the incorporation of heterogeneity , but also for the analysis of adaptive agents that , rationally or not , have to form expectations about the future system state in order to evaluate their decision options . in the original papers this problem of inter - temporal optimization is formulated using dynamic programming principles , and the bellman equation in particular . the author(s ) arrive at a differential equation ( de ) that describes the evolution of the cost threshold along an optimality path ( where the individual thresholds are all equal ) , which is coupled to a second de describing the evolution of the number of coconuts in the population . however , knowing the optimal dynamics , that is , the differential equations that an optimal solution has to fulfill , is not sufficient to study problems such as equilibrium selection or stability in general , because the optimality conditions do not say anything about the behavior of the system when it is perturbed into a suboptimal state . on the other hand , the bellman equation is also at the root of reinforcement learning algorithms , and temporal difference ( td ) learning in particular , which are known to converge to this optimality under certain conditions . the incorporation of learning in the agent - based version of the coconut model and the assessment of its adequacy by comparison to the original solution is a second contribution of this paper . the necessity to take into account not only the result of rational choice but to focus more on the processes that may lead to it was pointed out by simon almost 40 years ago . around ten years later , the notion of artificial adaptive agents was proposed by , who define an _ adaptive _ agent by two criteria : ( 1 . ) the agent assigns a value ( fitness , accumulated reward , etc .
) to its actions , and ( 2 . ) the agent intends to increase this value over time ( p. 365 ) . virtually all models with adaptive agents proposed since then follow these principles . in genetic algorithms , for instance , an evolutionary mechanism is implemented by which the least fit strategies are replaced by fitter ones , and genetic operators like recombination and mutation are used to ensure that potential improvements are found even in high - dimensional strategy spaces ( e.g. , ) . another approach which became prominent during the last years could be referred to as strategy switching ( e.g. , ) . agents constantly evaluate a set of predefined decision heuristics by reinforcement mechanisms and choose the rule that performs best under the current conditions . the reader may be referred to the excellent introductory chapter on > > behavioral rationality and heterogeneous expectations in complex economic systems < < by for a very instructive account . the td approach used here differs mildly from these models but fits well with the abstract specification of adaptive behavior proposed in . in our case agents learn the values associated with having and not having a coconut in the form of the expected future reward , and use these values to determine their cost threshold . that is , agents are forward - looking by trying to anticipate their potential future gains . while checking genetic algorithms or strategy switching methods in the context of the coconut model is an interesting issue for future research , in this first paper we would like to derive an agent - based version of the model that is as closely related to the original model as possible . the motivation behind this is well captured by a quote from ( p. 366 ) : > > as a minimal requirement , wherever the new approach overlaps classical theory , it must include verified results of that theory in a way reminiscent of the way in which the formalism of general relativity includes the powerful results of classical physics . < < in our opinion , this relates to the tradition in abm research to verify and validate computational models by replication experiments ( e.g. , ) . the main idea is that the same conceptual model implemented on different machines and possibly using different programming languages should always yield the same behavior .
from our point of view , this attempt to develop scientific standards for abms should not be restricted to comparing different computer implementations of the same conceptual model , but should also aim at aligning or comparing the results of an abm implementation to analytical formulations of the same processes , at least whenever such descriptions are available . for the coconut model this is the case , and for its rich and sometimes intricate behavior on the one hand , and the availability of a set of analytical results on the other , the model is well suited as a testbed for this kind of model comparison . in particular , when it comes to extending a model that is formulated for an idealized homogeneous population so as to incorporate heterogeneity at the agent level , we should make sure that it matches the theoretical results that are obtained for the idealized case . hence , the main objective of this paper is to derive an agent - based version of the coconut model as conceived in the original papers , where the model dynamics have been derived for an idealized infinite and homogeneous population . we will see ( in section [ sec : homogeneous ] ) that this implementation does not lead to the fixed point(s ) of the original system . in order to align the abm to yield the right fixed point behavior we have at least two different options , which again lead to slight differences in the dynamical behavior that are not obvious from the de description . this then allows us to study the effects that result from deviating from the idealized setting of homogeneous strategies , which we address in section [ sec : heterogeneous ] . heterogeneity may arise from the interaction structure , the information available to the agents , as well as from heterogeneous agent strategies as an effect of learning in finite populations , and we will concentrate on the latter here . in particular , we will show that heterogeneity can be accounted for in a macroscopic model formulation by the correction term introduced in . a similar program shall be followed in section [ sec : learning ] , where agents use td learning to learn the optimal strategy . as the learning scheme used in this paper can in fact be derived from the bellman equation used to set up the original model , an agent population that adapts according to this method should converge to the same equilibrium solution in a procedural way . however , as such an approach implements rationality as a process , it describes the route to optimality and allows us to analyze questions related to equilibrium selection and stability . we shall now describe the original model more carefully . consider an island populated by agents . on the island there are many palm trees and the agents wish to consume the coconuts that grow on these trees . the probability that agents find a coco tree is denoted by , and harvesting a nut from the tree bears a cost ( the metaphor is the height of the tree ) that is described by a cumulative distribution defining the probability that the cost of a tree is below . in what follows we consider that the costs that come with a tree are uniformly distributed in the interval . in this section , the strategies are homogeneous and do not change over time , so that all agents have identical strategies at all times . point ( 1a ) in the iteration process means that at each step we randomly choose one agent from the population . notice that this means that within iteration steps some agents may be chosen more than once , whereas others might not be chosen at all .
for point ( 1b ) , the climbing decision with probability is evaluated by drawing two random numbers , one for the rate of coco trees and another for the cost of the coconut , which is uniformly distributed in the interval . before looking at these specific scenarios , however , we derive a correction term that accounts for the effect of strategy heterogeneity . namely , if strategies are different we can expect that those agents in the population with a lower will also climb less often and are therefore less often with a coconut . that is , there is a correlation between the agent strategy and the probability that an agent has a coconut . to account for this we have to consider that the rate at which an agent of the population will climb from one time step to the other is given by . equation ( [ eq : homomc ] ) covering the homogeneous case is satisfied because is equal for all agents and can be taken out of the sum . for heterogeneous strategies this is not possible , but we can come to a similar expression by formulating it as the expected value ( denoted as ) f\,{\rm e}\left[ ( 1 - s_i )\, g(c_i) \right] = f\left[ ( 1 - \langle s_i\rangle ) \langle g(c_i)\rangle - \sigma[s_i , g(c_i)] \right] , where denotes the covariance between the probability of having a coconut and the climbing probability across the population . four strategy distributions are considered : 1 . , 2 . there are two different strategies distributed at equal proportion over the population , 3 . the probability of a strategy decreases linearly from to and reaches zero at , 4 . the probability of a strategy decreases according to a -distribution with shape and scale . notice that the first two cases are chosen such that the mean climbing probability is , whereas lower thresholds and therefore a lower average climbing probability are implemented with the latter two . [ figure : stationary distributions for the four scenarios and the comparison to the frequencies observed in a series of 10 realizations ( 10000 steps ) of the simulation model . ]
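the covariance correction above is easy to check numerically . the sketch below draws a hypothetical heterogeneous population ( a linearly decreasing strategy density , loosely in the spirit of the third scenario , with positively correlated per - agent coconut probabilities ) and verifies that subtracting the covariance recovers the exact expectation , while the homogeneous closure overestimates the climbing rate . all distributions here are illustrative assumptions , not the paper 's calibrated scenarios .

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# hypothetical population: climbing probabilities g(c_i) with linearly
# decreasing density, and coconut probabilities s_i positively correlated
# with g(c_i), as argued in the text
g = rng.triangular(0.0, 0.0, 1.0, size=n)
s = np.clip(g + 0.1 * rng.standard_normal(n), 0.0, 1.0)

exact     = np.mean((1.0 - s) * g)               # E[(1 - s_i) g(c_i)]
naive     = (1.0 - s.mean()) * g.mean()          # homogeneous (mean-field) closure
corrected = naive - np.cov(s, g, ddof=0)[0, 1]   # minus the covariance sigma[s_i, g(c_i)]

print(exact, naive, corrected)  # corrected == exact; naive is biased upwards
```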
that is , firstly , the threshold has to trade off the cost of climbing against the expected future gain of earning a coconut from it . in other words , agents have to compare the value ( or expected performance if you wish ) of having a coconut with the value of staying without a nut .if the difference between the expected gain from harvesting at time and that of not harvesting ( ) is larger than the cost of the tree , agents can expect a positive reward from harvesting a nut now .therefore , in accordance to , it is reasonable to set .now , how do agents arrive at reliable estimates of and ?we propose that they do so by a simple temporal difference ( td ) learning scheme that has been designed to solve dynamic programming problems as posed in the original model .notice that for single - agent markov decision processes temporal difference schemes are proven to converge to the optimal value functions . in the coconut model with agents updated sequentially it is reasonable to hypothesize that we arrive at accurate estimates of and as well . but notice also that the decision problem as posed in is not the only possibility to formulate the problem .namely the model assumes that agents condition their action only on their own current state neglecting previous trends and information about other agents the consideration of which might lead to a richer set of solutions . in our case agentsdo not learn the dependence explicitly , which means that the agents will only learn optimal stationary strategies .the consideration of more complex ( and possibly heterogeneous ) information sets points certainly to interesting extensions of the model .however , we think that it is useful to first understand the basic model and relate it to the available theoretical results as this will also be needed to understand additional contributions by model extensions .the learning algorithm we propose is a very simple value td scheme .agents use their own reward signal to update the values of and independently from what other agents are doing . in eachiteration agents compute the td error by comparing their current reward plus the discounted expected future gains to their current value estimate }_{\text{\normalsize estimated discounted future value } } - \underbrace{\left [ s_i^t v_i^t(1 ) + ( 1-s_i^t ) v_i^t(0 ) \right]}_{\text{\normalsize current estimate } } \label{eq : tderror}\ ] ] which becomes for the different possible transitions of the agents .notice that the discount factor as defined for the time continuous de system is rescaled as for the discrete - time setting and in order to account for the finite simulation with asynchronous update in which only one ( out of ) agents is updated in each time step ( ) .the iterative update of the value functions is then given by such that ( ) is updated only if agent has been in state ( ) in the preceding time step .the idea behind this scheme and td learning more generally is that the error between subsequent estimates of the values is reduced as learning proceeds which implies convergence to the true values .the form in which we implement it here is probably the most simple one which does not involve update propagation using eligibility traces usually integrated to speed up the learning process . 
in other words , agents update only the value associated with their current state . while simplifying the mathematical description ( the evolution depends only on the current state ) , we think this is also plausible as an agent decision heuristic . all in all , the model implementation is + * initialization : set initial values and states according to the desired initial distribution . set initial strategies . * iteration loop i ( search and trade ) : * * random choice of an agent with probability * * ` if ` climb a coco tree with probability and harvest a nut , i.e. , * * ` else ` trade ( consume ) with probability such that * iteration loop ii ( learning ) : * * compute td error for all agents with reward signal and depending on the action of in part ( 1 ) * * update relevant value function by for all agents * * update strategy by . ( a schematic code sketch of this simulation loop is given below . ) notice that for trading we adopt the mechanism introduced as ` am2 ` in section [ sec : aligning ] . if not stated otherwise , the simulation experiments that follow are performed with the following parameters . the interval from which the tree costs are drawn is given by and . a strategy larger than hence means that the agent accepts any tree , and a strategy of that no tree is accepted at all . the rate of tree encounter is and the utility of coconuts is . we continue considering a relatively small system of 100 agents , and the learning rate is . the parameter much of the analysis will be concentrated on is the discount rate , with small values indicating farsighted agents whereas larger values discount future observations more strongly . the system is initialized ( if not stated otherwise ) with , and for all agents such that . the first part of this paper ( exogenously fixed strategies ) has shown that the abm reproduces well the fixed point curve obtained for the coconut dynamics by setting . here we want to find out whether the simulation with td learning ( adaptive strategies ) matches the fixed point behavior of the strategy dynamics of the original model obtained by setting . fig . [ fig : learningmatchtheory ] shows the respective curves for three different . notice that the last value is so large that the and do not intersect , so that there is actually no fixed point solution . in order to check these curves in the simulations we fix the expected probability of finding a trading partner by . independent of the actual level of coconuts in the population , an agent finds a trading partner with that probability , consumes the coconut and derives a reward of . for fig . [ fig : learningmatchtheory ] , for each the abm is run a single time for 200000 steps and the last system configuration ( namely , at the final step ) is used in the computation of the mean strategy , which is then plotted against . [ figure : fixed point curves and comparison to the td learning dynamics implemented in the agent model . ] the model generally matches the theoretical behavior , especially when is small ( farsighted agents ) . however , for and we observe noticeable differences between the simulations and the fixed point curve of the theoretical model . notice that the number of coconuts ( which we fix for the trading step ) actually also affects the probability with which an agent is chosen to climb , and that the actual level of coconuts in the simulation is generally different from . this might explain the deviations observed in fig . [ fig : learningmatchtheory ] .
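for reference , the simulation loop listed above can be condensed into a few lines of python . the sketch below is schematic : the numerical parameter values and the initialization are placeholders for the stripped ones , the rescaled discount factor is kept as a fixed constant ` gamma_d ` , and non - acting agents simply receive a zero reward in the learning step .

```python
import numpy as np

rng = np.random.default_rng(2)

# placeholder parameters (the paper's exact values were stripped):
# f = tree-encounter rate, costs ~ U[0, c_max], y = utility of consumption,
# gamma_d = rescaled discrete-time discount factor, alpha_lr = learning rate
N, f, c_max, y = 100, 0.7, 1.0, 1.6
gamma_d, alpha_lr, steps = 0.999, 0.05, 200_000

x = rng.integers(0, 2, size=N)   # state: 1 = agent currently holds a coconut
v = np.zeros((N, 2))
v[:, 1] = 1.0                    # optimistic start so that learning bootstraps
agents = np.arange(N)

for _ in range(steps):
    i = rng.integers(N)
    prev = x.copy()
    reward = np.zeros(N)
    if x[i] == 0:                                   # search for a tree
        if rng.random() < f:
            c = rng.uniform(0.0, c_max)
            if c < v[i, 1] - v[i, 0]:               # threshold rule
                x[i], reward[i] = 1, -c             # climb and pay the cost
    elif rng.random() < (x.sum() - 1) / (N - 1):    # am2: exclude self-trading
        x[i], reward[i] = 0, y                      # consume and collect utility
    # TD update for all agents (non-acting agents: zero reward, unchanged state)
    delta = reward + gamma_d * v[agents, x] - v[agents, prev]
    v[agents, prev] += alpha_lr * delta

print(x.mean(), np.mean(v[:, 1] - v[:, 0]))         # coconut level, mean threshold
```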
setting up the experiment so that the level of coconuts is constant at , however , is not straightforward , because an additional artificial state - switching mechanism would have to be included that has no counterpart in the actual model . on the other hand , the results shown in fig . [ fig : learningmatchtheory ] are actually a promising indication that agents which adapt according to td learning align with the theoretical results in converging to the same ( optimal ) strategy for a given . the next logical step is now to compare the overall behavior of the abm with learning to the theory . for this purpose , we check the overall convergence behavior of the abm as a function of and compare it to the fixed point solution of , see also . there are two interesting questions here : 1 . what happens as we reach the bifurcation value at which the two fixed point curves and cease to intersect ? 2 . in the parameter space where they intersect , which of the two solutions is actually realized by the abm with td learning ? both questions are answered with fig . [ fig : fixedpointsgammaai ] . [ figure : fixed points of ( )-( [ eq : evodglstrat ] ) for ; in the experiments the model always reached the upper fixed point . ] first , if becomes large , the abm converges to the state in which agents do not climb any longer . that is , and . however , as the close - up view on the right - hand side shows , the bifurcation takes place at slightly lower values of . this is probably related to the deviations observed in fig . [ fig : learningmatchtheory ] . in fact , further experiments revealed that the learning rate governing the fluctuations of the value estimates plays a decisive role ( the larger , the smaller the bifurcation point ) . the larger is , the more likely a perturbation takes place on the values of an agent ( ) , meaning that this agent does not climb any longer . besides this small deviation , however , fig . [ fig : fixedpointsgammaai ] shows that on the whole the abm reproduces the theoretical results with considerable accuracy . regarding the second question , that is , equilibrium selection , it seems that the only stable solution for the simulated dynamics is the upper fixed point , sometimes referred to as the > > optimistic < < solution . we will confirm this in the sequel by providing numerical arguments for the instability of the lower fixed point through a series of simulation experiments . the previous experiments indicate that the lower fixed point derived in the original system is generally unstable under learning dynamics . in this section we present some further results to confirm this observation by initializing the model at the lower fixed point . we concentrate again on the parameterization used in the previous sections with , climbing costs uniformly distributed in and , such that the probability for each agent to have a coconut in the beginning is . for each initial combination we compute 10 simulations of 10000 steps and compare the initial point with the respective outcome after 10000 steps . the result is shown in fig . [ fig : stability03 ] for a discount rate of ( l.h.s . ) and ( r.h.s . ) . [ figure : results for ( left ) and ( right ) ; the fixed point curves of the de system are also shown . ] the vector field indicates convergence to a state close to the upper fixed point for most of the initial conditions .
for this is true even for very small initial strategies . however , we should notice that this point is very close to and that the sampling does not resolve the region around the low fixed point well enough . for , where the strategy value in the lower fixed point increases to , the dynamics around that point become visible . in this case we observe that initial strategies below this value lead to convergence to , that is , to the situation in which agents do not climb any longer ( and therefore ) . however , if initially the level of coconuts is high enough , the system is capable of reaching the stable upper solution because there is at least one instant of learning that having a nut is profitable ( ) for agents initially endowed with a nut . finally , a close - up view of this region is provided in fig . [ fig : stability04 ] for . it renders visible that the lower fixed point acts as a saddle under the learning dynamics . as noticed earlier , the exact fixed point values and are slightly different for the de system and the learning agents model , which may be attributed to small differences in the models such as the explicit exclusion of self - trading ( see section [ sec : homogeneous ] ) or the discrete learning rate ( this section ) . this paper makes four contributions . first , it develops a theory - aligned agent - based version of diamond 's coconut model . in the model agents have to make investment decisions to produce some good and have to find buyers for that good . step by step , we analyzed the effects of single ingredients in that model , from homogeneous to heterogeneous to adaptive strategies , and related them to the qualitative results obtained from the original dynamical systems description . we computationally verify that the overall behavior of the abm with adaptive strategies aligns to a considerable accuracy with the results obtained in the original model . the main outcome of this exercise is the availability of an abstract baseline model for search equilibrium which allows the analysis of more realistic behavioral assumptions , such as trade networks , heterogeneous information sets and different forms of bounded rationality , but contains the idealized solution as a limiting case . secondly , this work provides insight into the effects of micro - level heterogeneity on the macroscopic dynamics and shows how heterogeneous agents can be taken into account in aggregate descriptions . we derive a heterogeneity correction term that condenses the heterogeneity present in the system and show how this term should be coupled to the mean - field equation . these mathematical arguments show that a full characterization of the system with heterogeneity leads to an infinite dimensional system of differential equations , the analysis of which will be addressed in the future . in this paper we have provided support for the suitability of the heterogeneity term by simulation experiments with four different strategy distributions . we envision that the heterogeneity correction may be useful for other models , such as opinion dynamics with heterogeneous agent susceptibilities , as well . the third contribution this paper makes is the introduction of temporal difference ( td ) learning as a way to address problems that involve inter - temporal optimization in an agent - based setting . the coconut model serves this purpose so well because the strategy equation in the original paper is based on dynamic programming principles which are also at the root of this branch of reinforcement learning .
due to this common foundation we arrive at an adaptive mechanism for endogenous strategy evolution that converges to one of the theoretical equilibria , but provides , in addition to that , means to understand how ( and if ) this equilibrium is reached from an out - of - equilibrium situation . such a characterization of the model dynamics is not possible in the original formulation . our fourth contribution relates to that by providing some new insight into equilibrium selection and stability of equilibria in the coconut model . under learning dynamics only the upper > > optimistic < < solution with a high coconut level ( high productivity ) is realized . furthermore , convergence to this equilibrium takes place for a great proportion of out - of - equilibrium states . in fact , the phase diagrams presented at the end of the previous section show that in a system with farsighted agents ( ) the market failure equilibrium ( no production , no trade ) is reached only if agents are exceedingly pessimistic . if agents are less farsighted ( ) , this turning point increases slightly and makes market failure probable if the production level ( ) is currently low for some reason . however , we do not want to make general claims about the absence of cyclic equilibria in the artificial search and barter economy that the coconut model exemplifies . it is possible , even likely , that a richer behavior is obtained when agents learn not only based on their own state but take into account information about the global state of the system , trends , or the strategy of others . this paper has been a necessary first step to address such questions in the future . all models described in this paper have been implemented and analyzed using mathematica and matlab . the matlab version is made available at the openabm model archive ( see https://www.openabm.org/model/5045/version/1 ) . grimm , v. , berger , u. , bastiansen , f. , eliassen , s. , ginot , v. , giske , j. , goss - custard , j. , grand , t. , heinz , s. k. , huse , g. , huth , a. , jepsen , j. u. , jorgensen , c. , mooij , w. m. , muller , b. , peer , g. , piou , c. , railsback , s. f. , robbins , a. m. , robbins , m. m. , rossmanith , e. , ruger , n. , strand , e. , souissi , s. , stillman , r. a. , vabo , r. , visser , u. & deangelis , d. l. ( 2006 ) . a standard protocol for describing individual - based and agent - based models . _ ecological modelling _ , _ 198 _ , 115 - 126 . vriend , n. j. ( 2000 ) . an illustration of the essential difference between individual and social learning , and its consequences for computational analyses . _ journal of economic dynamics and control _ , _ 24 _ ( 1 ) , 1 - 19 .
|
in this paper , we develop an agent - based version of the diamond search equilibrium model , also called the coconut model . in this model , agents are faced with production decisions that have to be evaluated based on their expectations about the future utility of the produced entity , which in turn depends on the global production level via a trading mechanism . while the original dynamical systems formulation assumes an infinite number of homogeneously adapting agents obeying strong rationality conditions , the agent - based setting allows us to discuss the effects of heterogeneous and adaptive expectations and enables the analysis of non - equilibrium trajectories . starting from a baseline implementation that matches the asymptotic behavior of the original model , we show how agent heterogeneity can be accounted for in the aggregate dynamical equations . we then show that when agents adapt their strategies by a simple temporal difference learning scheme , the system converges to one of the fixed points of the original system . systematic simulations reveal that this is the only stable equilibrium solution . * the coconut model with heterogeneous strategies and learning * + s. banisch , e. olbrich + max planck institute for mathematics in the sciences , leipzig , germany . the results of this paper have been subsequently presented at the 2015 and 2016 conference on artificial economics ( ) . we acknowledge the valuable feedback from the participants of these two events . the work was supported by the european community 's seventh framework programme ( fp7/2007 - 2013 ) under grant agreement no . 318723 ( mathemacs ) . s.b . also acknowledges financial support by the klaus tschira foundation .
|
colloidal dispersions display a broad range of nontrivial rheological response to externally applied flow .even the simplest systems of purely repulsive spherical colloids exhibit a rate dependent viscosity in steady state flows , yielding and complex time - dependent phenomena , such as thixotropy and ageing [ brader ( 2010 ) , mewis and wagner ( 2009 ) ] .understanding the emergence of these collective dynamical phenomena from the underlying interparticle interactions poses a challenge to nonequilibrium statistical mechanics and the fundamental mechanisms involved are only beginning to be understood .theoretical advances have largely been made hand - in - hand with improved simulation techniques [ banchio and brady ( 2003 ) ] and modern experimental developments , combining confocal microscopy or magnetic resonance imaging with classical rheological measurements [ besseling _ et al . _( 2010 ) , frank _( 2003 ) ] . despite considerable progress , a comprehensive constitutive theory , capable of capturing the full range of response , remains to be found .existing approaches are tailored to capture the physics of interest within particular ranges of the system parameters ( e.g. density , temperature ) but fail to provide the desired global framework . moreover, the vast majority of studies have concentrated on the specific , albeit important , case of simple shear flow .such scalar constitutive theories , relating the shear stress to the shear strain and/or strain - rate , provide important information regarding the competition of timescales underlying the rheological response , but do not acknowledge the true three dimensional character of experimental flows .tensorial constitutive equations have long been a staple of continuum rheology ( such as the giesekus or oldroyd models [ bird _ et al . _( 1987 ) , larson ( 1988 ) ] ) and enable e.g. normal forces and secondary flows to be addressed in realistic curvilinear experimental geometries .the first steps towards a unified , three dimensional description of colloid rheology have been provided by recent extensions of the quiescent mode - coupling theory ( mct ) to treat dense dispersions under flow [ brader _ et al . _( 2008 ) ] .these developments are built upon earlier studies focused on simple shear [ brader _ et al . _( 2007 ) , fuchs ( 2009 ) , fuchs and cates ( 2002 ) , fuchs and cates ( 2009 ) ] and capture the competition between slow structural relaxation and external driving , thus enabling one of the most challenging aspects of colloid rheology to be addressed : the flow response of dynamically arrested glass and gel states .given the equilibrium static structure factor as input ( available from either simulation or liquid state theory [ brader ( 2006 ) ] ) , the deviatoric stress tensor may be determined for any given velocity gradient tensor .however , implementation of the theory has been hindered by the numerical resources required to accurately integrate fully anisotropic dynamics over timescales of physical interest ( although progress has been made for two dimensional systems [ henrich _ et al . _( 2009 ) , krger _ et al . _( 2011 ) ] ) . in [ brader _( 2009 ) ] a simplified ` schematic ' constitutive model was proposed , which aims to capture the essential physics of the wavector dependent theory , while remaining numerically tractable .applications so far have been to steady - state flows , step strain and dynamic yielding [ brader _ et al . _( 2009 ) ] , as well as oscillatory shear [ brader _ et al . _( 2010 ) ] . 
both the full [ brader _ et al . _ ( 2008 ) ] and schematic [ brader _ et al . _( 2009 ) ] mode - coupling theories predict an idealized glass transition at sufficiently high coupling strength , characterized by an infinitely slow structural relaxation time .ageing dynamics are neglected .an important prediction of the approach is that application of any steady strain - rate leads to fluidization of the arrested microstructure , with a structural relaxation time determined by the characteristic rate of flow . in recent experiments on various soft glassy materials ,ovarlez _ et al ._ have indicated that when a dominant , fluidizing shear flow is imposed , then the sample responds as a liquid to an additional perturbing shear flow , regardless of the spatial direction in which this perturbation is applied .these findings imply that once the yield stress has been overcome by the dominant shear flow , arrested states of soft matter become simultaneously fluidized in all spatial directions .in particular , the low shear viscosity in a direction orthogonal to the primary flow is determined by the primary flow rate .the rheometer employed in [ ovarlez ( 2010 ) ] consisted of two parallel discs which enabled the simultaneous application of rotational and squeeze shear flow , with independent control over the two different shear rates .although this set - up indeed provides a useful way to study superposed shear flows of differing rate , it does not provide a mean to test the three - dimensional yield surface , as claimed in [ ovarlez ( 2010 ) ] .a true exploration of the yield surface poses a considerable challenge to experiment and requires a parameterization of the velocity gradient tensor which can incorporate the entire family of homogeneous flows , including both extension and shear as special cases .the superposition of two shear flows is yet another shear flow and does not enable the entire space of homogeneous velocity gradients to be explored . in the present work we will employ the constitutive theory of brader _ et al . _( 2009 ) to investigate the response of a generic colloidal glass to a ` mixed ' flow described mathematically by the linear superposition of two independently controllable velocity gradient tensors .numerical results will be presented for the special case in which simple shear is combined with uniaxial compression . despite the fact that we employ a combination of compression and shear ,as opposed to the superposition of two shear flows , our theoretical results are broadly consistent with the experimental findings of ovarlez _ et al . _( 2010 ) regarding the response of shear fluidized glasses .in particular , our calculations reveal clearly the relevant timescales dictating the three dimensional response of the system . following this specific application, we proceed to extend our description to treat more general mixed flows .the paper will be organized as follows : in sec .[ model ] we will introduce the deformation measures required to describe flow in three dimensions and summarize the schematic model of brader _ et al . _( 2009 ) . in sec .[ coupled_flows ] we will consider the application of our constitutive model to a specific mixed flow , namely a combination of uniaxial compression and simple shear . in sec . [ results ] we will present numerical results for the flow curves and low shear viscosity for the aforementioned flow combination . 
in sec .[ viscosity ] we will perform a perturbation analysis of our constitutive equation which enables us to address the general problem of superposing a mechanical perturbation onto a dominant flow . finally , in sec .[ discussion ] we will discuss the significance of our results and give concluding remarks .spatially homogeneous deformations are encoded in the spatially translationally invariant deformation tensor .any given vector at time may be transformed into a new vector at later time using the linear relation where [ brader ( 2010 ) ] .calculating the time derivative of the deformation tensor and using the chain rule for derivatives yields an equation of motion for the deformation tensor where is the velocity gradient tensor with components . in the present work we will assume incompressibility , which may be expressed by the condition or , equivalently , ( volume is conserved ) .if the deformation rate is constant in time , then the velocity gradient matrix loses its time dependence ( ) and the deformation tensor becomes a function of the time difference alone ( ) .the formal solution of eq.([motione ] ) for such steady flows is thus given by the deformation tensor contains information about both the stretching and rotation of material lines ( vectors embedded in the material ) .a more useful measure of strain is the finger tensor , which is defined for steady flows by the finger tensor is invariant with respect to physically irrelevant solid body rotations of the material sample and occurs naturally in many constitutive models ( e.g. the doi - edwards model of polymer melts [ doi and edwards ( 1989 ) ] ) . the schematic model developed in [ brader _ et al .( 2009 ) ] expresses the deviatoric stress tensor in integral form (t , t')\ , .\label{constit1}\ ] ] an equation of the form ( [ constit1 ] ) has been derived from first principles [ brader _ et al . _( 2008 ) ] , starting from the -particle smoluchowski equation and applying mode - coupling approximations to a formally exact generalized green - kubo relation for the stress tensor . in [ brader _ et al . _( 2009 ) ] the theory was simplified to ( [ constit1 ] ) by assuming spatial isotropy of the modulus .the physical content of eq.([constit1 ] ) is that , in order to calculate the stress at the present time , increments of an appropriate , material objective strain measure ( the finger tensor ) are integrated over the flow history , each weighted with a ` fading memory ' . approximating by an exponential recovers the well - known lodge equation [ larson ( 1988 ) ] , which is just the integral form of the upper - convected maxwell model .however , eq.([constit1 ] ) differs from the simple lodge equation in that , ( i ) the modulus is generally not time translationally invariant , due time - dependent variation of the flow in the time interval between and , ( ii ) the memory does not decay exponentially to zero , but displays the two - step relaxation characteristic of dense colloidal dispersions . within the wavevector dependentapproach of [ brader _ et al . 
_( 2008 ) ] the autocorrelation function of stress fluctuations is assumed to relax in the same way as the density fluctuations .this leads to an approximation for the nonlinear modulus , given by a weighted -integral over a bilinear function of density correlators at two different ( but coupled ) wavevectors .the schematic model replaces this with the simpler form where is a single mode transient density correlator ( normalized to ) and is a parameter measuring the strength of stress fluctuations .the dynamics of the single mode density correlator are determined by a nonlinear integro - differential equation where is an initial decay rate , the inverse of which sets our basic unit of time .the function is a three - time memory - kernel which depends upon the strain accumulated between its time arguments and describes how this competes with the slow structural relaxation arising from the colloidal interactions .the memory kernel is given by \ , .\label{memory}\ ] ] the dependence of the memory upon is taken from the model developed by gtze [ gtze ( 2008 ) ] .the coupling constants are given by and , where is a parameter expressing the distance to the glass transition .the system is fluid for and in a glassy state for .the entering ( [ memory ] ) are decaying functions of the accumulated strain . for simplicitywe assume . to allow consideration of any kind of flow ( not only shear ) ,the function is taken to depend upon the two invariants and of the finger tensor }\ , , \label{h}\ ] ] where a mixing parameter and a cross - over strain parameter have been introduced [ brader _ et al . _( 2009 ) ] .the scalars and are the trace of the finger tensor and its inverse , respectively . in principle , the time evolution of the density correlator and thus , via eqs.([constit1 ] ) and ( [ modulus ] ) , the stress tensor , can be calculated by solving eq.([stdeq ] ) numerically for any given velocity gradient tensor .the model outlined above contains a set of five independent parameters .the least important of these is , which determines the relative influence of the invariant with respect to in determining the strain induced decay of the memory function .however , numerical results prove to be extremely insensitive to the value of , at least for all flows to which the schematic model has so far been applied .trivial scaling of stress and time scales is provided by the parameters and .a statistical mechanical calculation of the dynamics of colloids ( in the absence of hydrodynamic interactions ) identifies the modulus as the autocorrelation function of stress fluctuations . therefore determines the initial value of the modulus and , via ( [ constit1 ] ) , sets the overall stress scale .the reciprocal of the initial decay rate simply acts as the fundamental timescale .for the purpose of our theoretical investigations both and can , without loss of generality , be set equal to unity .the theoretical results thus generated can then be fit to experimental data by scaling stress and time ( or frequency ) with alternative values for these two parameters [ brader _ et al . _( 2010 ) ] the two most important parameters in the model are and .the cross - over strain sets the strain value at which elastic response gives way to viscous flow . 
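to give a feeling for how the correlator equation behaves , here is a deliberately simplified sketch of the quiescent limit of the schematic model ( no flow , so the three - time memory reduces to the standard two - time f12 form ) . the glass - point parameterization of the couplings below is a common choice in the schematic mct literature and is stated here as an assumption , and the naive o(n^2) quadrature is only adequate for short time windows ; production codes use a decimation scheme .

```python
import numpy as np

def f12_phi(eps, gamma=1.0, dt=1e-3, n_steps=4000):
    """euler solver for the quiescent f12 schematic model
        dphi/dt = -gamma * ( phi(t) + int_0^t m(t-s) phi'(s) ds ),
        m(t)    = v1*phi(t) + v2*phi(t)**2,   phi(0) = 1,
    with the often-used glass-point parameterization (an assumption here)
        v2 = 2,  v1 = 2*(sqrt(2)-1) + eps/(sqrt(2)-1),
    eps < 0: fluid, eps > 0: glass."""
    v1 = 2.0 * (np.sqrt(2.0) - 1.0) + eps / (np.sqrt(2.0) - 1.0)
    v2 = 2.0
    phi = np.empty(n_steps + 1)
    dphi = np.zeros(n_steps + 1)
    phi[0] = 1.0
    for i in range(n_steps):
        m = v1 * phi[1 : i + 1] + v2 * phi[1 : i + 1] ** 2   # m(t_1)..m(t_i)
        mem = dt * np.dot(m[::-1], dphi[:i])                 # ~ int_0^t m(t-s) phi'(s) ds
        dphi[i] = -gamma * (phi[i] + mem)
        phi[i + 1] = phi[i] + dt * dphi[i]                   # explicit euler step
    return phi

phi_fluid = f12_phi(eps=-0.1)   # decays to zero
phi_glass = f12_phi(eps=+0.1)   # approaches a finite plateau
```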
for example , in experiments considering the shear stress response of dense colloidal systems to the onset of steady shear flow , can be identified from the peak of the overshoot on the stress - strain curve .the parameter characterizes the thermodynamic state point of the system relative to the glass transition and serves as proxy for the true thermodynamic parameters of the physical system ( volume fraction , temperature etc . ) .for example , in a simple system of hard - sphere colloids of volume fraction one can identify , where is the volume fraction at the glass transition . for more complicated systems can be regarded as a general coupling parameter which , in the absence of flow , yields fluid - like behaviour for and amorphous solid - like response for .with the constitutive relation ( [ constit1 ] ) , we are in a position to determine the rheological behaviour of a colloidal glass undergoing any type of homogeneous deformation . in [ ovarlez _( 2010 ) ] , ovarlez _ et al . _considered various soft glassy materials loaded between two parallel discs .each sample was simultaneously sheared by rotating the upper disc about its axis at a given angular velocity and squeezed by lowering the height of the upper disc at a given rate . by independently varying the rotation and compression ratesthe stress could be determined as a function of one of the rates , for a fixed value of the other . in these experiments ,the rotation of the upper plate induces a shear flow in the direction ( in cylindrical coordinates ) , the rate of which increases linearly with radial distance from the axis of rotation . as a consequence of the stick boundary conditionsthe compression of the sample leads to an inhomogeneous shear flow in the direction ( somewhat akin to a poiseuille flow ) with a maximum shear rate at the boundaries and zero shear rate in the plane equidistant between the two plates . ) for various values of the compressional rate .the dashed line is a newtonian viscous law .for the limit of the flow curve identifies the dynamic yield stress [ brader _ et al . _( 2009 ) ] ., scaledwidth=48.0% ] the experiments of ovarlez _ et al . _( 2010 ) were performed in a curvilinear geometry using a flow protocol which induces an inhomogeneous velocity gradient tensor . in principle , spatial variations of the velocity gradient could be treated within the present theoretical framework by assuming that the constitutive relations remain valid locally and enforcing the local stress balance appropriate to the geometry of the rheometer under consideration .in addition to the increased numerical resources required for such an investigation , the local application of our constitutive equation would represent a further approximation , over and above those already underlying the schematic model . the main conceptual point emerging from the experimental studies of ovarlez _ et al . 
_( 2010 ) is that if a primary flow restores ergodicity and fluidizes the glass , then the response to the secondary flow is also fluid - like . spatial inhomogeneity of one or both flows is merely a complicating factor . we thus choose to focus on a more idealized homogeneous flow which is convenient for numerical implementation , but nevertheless captures the salient features of the experiment in a minimal way . the homogeneous flow we choose to implement is a superposition of simple shear and uniaxial compressional flow . we anticipate that the key physical mechanism at work in fluidized systems under superposed flow is the competition between the two imposed relaxation timescales . as the superposition of two shear flows is itself another shear flow , the experiments of ovarlez _ et al . _ ( 2010 ) leave open the possibility that the observed phenomena could be a special feature of shear . for this reason we chose to implement the mathematically more general case of superposed extension and shear , for which the geometrical coupling of the flows is more involved . working in a cartesian coordinate system , our flow is specified by {\boldsymbol{\kappa}}={\boldsymbol{\kappa}}_s+{\boldsymbol{\kappa}}_c , \label{mixed} where the shear and compressional flows are represented by the following matrices {\boldsymbol{\kappa}}_s=\dot{\gamma}_{s}\left(\begin{array}{ccc} 0 & 1 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{array}\right ) , \qquad {\boldsymbol{\kappa}}_c=\dot{\gamma}_{c}\left(\begin{array}{ccc} \tfrac{1}{2} & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & \tfrac{1}{2} \end{array}\right ) , and where and are the shear and compression rates , respectively . our choice of flow thus differs from those of ovarlez in two respects : ( i ) both and are translationally invariant , and ( ii ) we superpose shear with genuine elongation , as opposed to superposing two shear flows . we consider the flow ( [ mixed ] ) as a thought experiment intended to highlight the fundamental physical mechanism of fluidization in a simple and transparent fashion . a direct experimental realization of ( [ mixed ] ) is not feasible , as this would require a rheometer with stick boundary conditions for generating the shear flow , but slip boundaries for the compressional flow . as we will see below , our assumptions do not seem to lead to qualitative differences between our theoretical findings and the experimental results , and they simplify the theoretical calculations considerably . eq . ( [ sol ] ) enables calculation of the deformation tensor for our mixed flow . the non - zero elements are given by \begin{aligned} & e_{xx}=e_{zz}=e^{\dot{\gamma}_{c}t/2 } , \qquad e_{yy}=e^{-\dot{\gamma}_{c}t } , \label{diagonal2}\\ & e_{xy}=\frac{2\dot{\gamma}_{s}}{3\dot{\gamma}_{c}}\,e^{-\dot{\gamma}_{c}t}\left ( e^{\frac{3\dot{\gamma}_{c}t}{2}}-1\right ) . \label{nontrivial}\end{aligned} ( figure : low shear viscosity scaled by the yield stress as a function of compression rate , for glassy states with and ; the continuous line is a power - law fit to the numerical data points over the range and yields an exponent of -1 ; the -dependent deviations apparent for indicate that short - time relaxation processes are becoming relevant . ) employing eq . ( [ finger ] ) yields the finger tensor {\bf b}=\left(\begin{array}{ccc} e_{xx}^2+e_{xy}^2 & e_{xy}e_{yy } & 0\\ e_{xy}e_{yy } & e_{yy}^2 & 0\\ 0 & 0 & e_{zz}^2 \end{array}\right ) , \label{fing} with inverse given by {\bf b}^{-1}=\left(\begin{array}{ccc} \frac{1}{e_{xx}^2 } & \frac{-e_{xy}}{e_{xx}^2e_{yy } } & 0\\ \frac{-e_{xy}}{e_{xx}^2e_{yy } } & \frac{e_{xx}^2e_{zz}^2 + e_{xy}^2e_{zz}^2}{e_{xx}^2e_{yy}^2e_{zz}^2 } & 0\\ 0 & 0 & \frac{1}{e_{zz}^2 } \end{array}\right ) . \label{fing-1} the invariants required for the memory function prefactors ( [ h ] ) are thus the traces {\rm tr}\,{\bf b}=e_{xx}^2+e_{xy}^2+e_{yy}^2+e_{zz}^2 and {\rm tr}\,{\bf b}^{-1}=\frac{1}{e_{xx}^2}+\frac{e_{xx}^2+e_{xy}^2}{e_{xx}^2e_{yy}^2}+\frac{1}{e_{zz}^2 } . finally , we need to calculate the time derivative of the finger tensor . in sec . [ results ] we will present results for the shear stress as a function of , treating as a parameter .
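as a quick numerical cross - check of these expressions , the following sketch builds the velocity gradient of ( [ mixed ] ) , exponentiates it , and compares the resulting off - diagonal element with the analytic formula above ; it is a toy verification , not part of the constitutive solver .

```python
import numpy as np
from scipy.linalg import expm

def mixed_flow_finger(gs, gc, t):
    """deformation and finger tensors for steady superposed simple shear
    (rate gs, x-y plane) and volume-conserving uniaxial compression
    (rate gc, compression axis y, as in eq. (mixed))."""
    kappa = np.array([[gc / 2, gs,   0.0],
                      [0.0,   -gc,   0.0],
                      [0.0,    0.0,  gc / 2]])   # tr kappa = 0: incompressible
    E = expm(kappa * t)                          # e(t) = exp(kappa t) for steady flow
    B = E @ E.T                                  # finger tensor b = e e^t
    return E, B, np.trace(B), np.trace(np.linalg.inv(B))

gs, gc, t = 0.3, 0.1, 2.0
E, B, I1, I2 = mixed_flow_finger(gs, gc, t)
# analytic off-diagonal element quoted in the text
exy = 2 * gs / (3 * gc) * np.exp(-gc * t) * (np.exp(1.5 * gc * t) - 1)
assert np.isclose(E[0, 1], exy)
```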
inspection of ( [ constit1 ] ) shows that we require only the component of the finger tensor time derivative substituting ( [ dbxy ] ) into ( [ constit1 ] ) and assuming time translational invariance ( as appropriate for the steady flows under consideration ) we obtain our final expression \nu_{\sigma}\phi^2(t ) .\label{sigxy}\ ] ] the component of the shear stress tensor is now completely characterized .when numerically evaluating the integral in ( [ sigxy ] ) we find that truncation at provides accurate results .we note that , in an analogous way , all other components of the shear stress can be calculated , which is useful if one is interested for example in the first and second normal stress differences , and , respectively .in fig.[flow_curves ] we show flow curves generated from numerical solution of eqs.([stdeq]-[h ] ) and ( [ sigxy ] ) . for each curvewe set the compressional rate equal to a fixed value , in effect treating as a parameter , and plot the shear stress as a function of .the model parameters used to generate these data are as follows : .for we recover the simple shear flow curve which , for the glassy state under consideration , tends to a dynamic yield stress in the limit of vanishing shear rate . within the theory ,the existence of a dynamic yield stress is a direct consequence of the scaling of the structural relaxation time with shear rate , .the flow curves calculated at finite differ qualitatively from that at .in particular , is a discontinuous function of the parameter , such that . for finite values the flow curves present a newtonian regime for rates , followed by a shear thinning regime for .the existence of two regimes is quite intuitive : for compression is the dominant , i.e. fastest , flow and sets the timescale of structural relaxation , whereas for the shear flow dominates and the flow curve converges to the result .is shown as a function of the compressional rate , for different values of .the points are numerical data and the lines provide a guide for the eye .the inset shows the same data on a logarithmic scale ., scaledwidth=48.0% ] the above findings are in good qualitative agreement with the experimental results obtained in [ ovarlez _ et al . _( 2010 ) ] ( cf .fig.3 therein ) . in order to characterize more precisely the flow curves at finite we show in fig.[visc_comp ] the low shear viscosity , scaled by the yield stress , as a function of for three different positive values of . for find very good data collapse onto a master curve .for clear deviations from universality set in , signifying that the compression induced structural relaxation processes are occurring on a timescale within the microscopic regime , for which becomes an independent quantity ( around for the parameter set used in fig.[flow_curves ] ) .provided that we find that the numerical data are well represented by the power - law scaling with , in agreement with the experimental findings of ovarlez _ et al . _the constant of proportionality is independent of ( both and vary in the same way with this parameter ) . given the lack of detailed material specificity in the schematic model , we are led to believe that is a universal exponent , independent of both the details of the material under consideration and of the precise nature of the primary and perturbing flows .our findings suggest that any constitutive theory capable of describing a three dimensional dynamic yield stress ( ` yield stress surface ' [ brader _ et al . 
_( 2009 ) ] ) will inevitably recover the scaling ( [ rel_visc_comp ] ) with , when applied to tackle mixed flows .in particular , we anticipate that the full wavevector dependent mode - coupling constitutive equation [ brader _ et al . _( 2008 ) ] would predict the same scaling behaviour , although this claim remains to be confirmed by explicit calculations . within mode - coupling - based approachesthe value of the scaling exponent is a natural consequence of the way in which strain enters the memory function ( [ memory ] ) .the flow curves presented in fig.[flow_curves ] for various values of are very reminiscent of the ( more familiar ) flow curves either measured or calculated under simple shear with and , i.e. states which would remain fluid in the absence of flow ( see , e.g. fuchs and cates ( 2003 ) ) .this similarity suggests that it may be possible to map , at least approximately , the shear response of a steadily compressed , glassy system with and onto an uncompressed , fluid system , , at some effective , negative value of the separation parameter .one possible way to realize such a mapping is to adjust for a given to obtain equal values for the low shear viscosity of the compressed glass and effective fluid systems .the results of performing this procedure for three values of are shown in fig.[eff_eps ] .it should be noted that the mapping between and becomes discontinuous at at which point .the inset of fig.[eff_eps ] shows the same data on a logarithmic scale . in this representationit becomes apparent that the data follow a power law fits to our numerical data yield values for the exponent . within the quiescent schematic model [ gtze ( 2008 ) , gtze ( 1984 ) ] , to which the present theory reduces in the absence of flow , it is known that the zero shear viscosity exhibits a power law divergence as approaches the glass transition from below where is the same exponent as that describing the divergence of at the glass transition .note that the symbol is employed here for this exponent , rather than the standard choice , in order to avoid confusion with the strain .when employing the percus - yevick approximation to the static structure factor as input , the wavevector dependent mode - coupling theory predicts that for hard - spheres the viscosity exponent takes the value ( identifying as the volume fraction , relative to the transition point ) [ gtze and sjgren ( 1992 ) ] . within the present schematic model we obtain ( see also footnote ) . given this information about the divergence of in the quiescent system ,the power law relation for the mapping ( [ rel_eff ] ) is already implicit in the data shown in fig.[visc_comp ] . using the relations ( [ rel_visc_comp ] ) and ( [ rel_visc_eps ] ) the relation ( [ rel_eff ] )can be deduced , where the exponent is given by , which is consistent with the results of our numerical fits .we have so far focused on the special case of mixed shear and compressional flows . for any given value of have shown that there exists a newtonian regime in the stress response to the shear flow , provided ( see fig.[flow_curves ] ) . in this section ,we now consider more general situations for which a second slow flow is added to a dominant flow ( while keeping the requirements of incompressibility and homogeneity ) . in the present context a sufficient condition for the second flow to be considered ` slow ' is that , where the characteristic shear rates are now identified as for ( where ) . 
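for reference , the power - law bookkeeping used in the preceding paragraphs can be summarized in two lines ( with the quiescent viscosity exponent discussed above written here as \tilde\gamma ) :

```latex
\frac{\eta(\dot\gamma_c)}{\sigma_y}\sim\dot\gamma_c^{\,-1},
\qquad
\eta\sim|\epsilon_{\rm eff}|^{-\tilde\gamma}
\quad\Longrightarrow\quad
|\epsilon_{\rm eff}|\sim\dot\gamma_c^{\,1/\tilde\gamma}.
```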
in subsections [ newt]-[experiment ] , we provide perturbative constitutive equations for three different cases . in the first of these cases , we consider and as steady flows ( without any other restriction ) , derive the corresponding perturbative constitutive equation and finally apply the latter to our coupled compressional and shear flows , in order to theoretically account for the newtonian viscous response to discussed in sec . [ results ] , and to finally make the connection with the phenomenological constitutive equation obtained by ovarlez _ et al . _ . in the second case , we still consider steady flows , but this time with the additional requirement of ` commuting ' flows , i.e. [ {\boldsymbol{\kappa}}_1,{\boldsymbol{\kappa}}_2 ] \equiv{\boldsymbol{\kappa}}_1\cdot{\boldsymbol{\kappa}}_2-{\boldsymbol{\kappa}}_2\cdot{\boldsymbol{\kappa}}_1=0 , where is an integer . within the schematic model the relaxation time determining the decay of is given by , where is the cross - over strain parameter entering ( [ h ] ) . this decay serves to cut off the integral in ( [ perturb_1 ] ) at the upper limit , with the consequence that the numerically largest elements of _{ij} . such flows have the property that the total deformation tensor can be formed from the product of the individual deformations , . as we will see , this restriction allows for more tractable perturbative constitutive equations . with = 0 , we obtain a formula for the stress tensor analogous to ( [ stress_expansion ] ) , labelled ( [ stress_expansion_time ] ) . substituting ( [ sh ] ) and ( [ osc ] ) into ( [ stress_expansion_time ] ) and making use of standard trigonometric addition formulas yields , where the orthogonal superposition moduli are given by . in eqs . ( [ osc_stress]-[gpp ] ) we have made explicit the dependence of the moduli upon the steady shear rate . the application of oscillations perpendicular to the flow thus enables the modulus under steady shear to be investigated and provides information about the shear - induced relaxation of stress fluctuations . we note that identical moduli ( [ gp ] ) and ( [ gpp ] ) would be obtained had we chosen the alternative perturbing flow and determined the stress component . in fig . [ moduli ] we show the orthogonal superposition moduli as a function of frequency for three different values of the steady shear rate . for we recover the standard linear response moduli , for which the viscous loss dominates the elastic storage for frequencies less than . for the fluid state considered , is finite ( ) . as the steady shear rate is increased , relaxation processes with rates less than are suppressed and the point at which the storage and loss moduli cross moves to higher frequency . these findings are consistent with the experimental results of both booij ( 1966 ) and vermant _ et al . _ the underlying physics here is essentially the same as that leading to the shift of the newtonian regime shown in fig . [ flow_curves ] . at low frequencies the orthogonal superposition moduli retain the same frequency scaling as in the familiar unsheared situation , namely and . we note also that the kramers - kronig relations remain valid for finite values of . in addition to the speeding up of structural relaxation induced by the steady shear , the loss modulus also displays a more pronounced -peak compared to the unsheared function . this feature is related to the functional form of the -decay of the transient density correlator , which decays
in the absence of shear as a stretched exponential , whereas under shear the final decay is closer to pure exponential .however , it is likely that more accurate ( i.e. beyond schematic ) orthogonal moduli , obtained either from experiment or more detailed microscopic calculations / simulations , would differ qualitatively in the region of the -peak .there is accumulating evidence [ zausch _ et al . _( 2008 ) ] that becomes negative at long times and , as this feature is not captured by the simple schematic model employed here , differences in the fourier transformed quantity around may be anticipated .the negative tail of is related to the existence of a maximum in the shear stress as a function of time following the onset of steady shear [ zausch _ et al . _( 2008 ) ] .the ` stress overshoot ' is present in the full wavevector dependent mode - coupling equations [ brader _ et al . _( 2008 ) ] but gets lost in simplifying the theory to the schematic level .in this paper we have demonstrated that the mct - based schematic model of brader _ et al . _( 2009 ) can qualitatively account for the experimental results on three dimensional flow of soft glassy materials reported in [ ovarlez _ et al . _( 2010 ) ] .in particular , the competition of timescales which arises from applying flows of differing rate appears to be correctly incorporated into the model .the main outcome of our analysis is that the viscous response to a perturbing secondary flow is dominated by the primary flow rate .although subtle anisotropic corrections to this picture do emerge from our equations , it remains to be seen whether these have significant consequences for experiments in any particular rheometer geometry .a key feature of the mode - coupling constitutive theory is that it captures the transition from an ergodic fluid to an arrested glass as a function of the coupling strength .the present study demonstrates that the theory qualitatively accounts for experimental data on mechanically fluidized glassy systems in three dimensional situations . what remains to be establishedis whether the experimental yield surface of a colloidal glass agrees with the ( almost ) von mises form [ hill ( 1971 ) ] predicted by the schematic model [ brader _ et al . _( 2009 ) ] .a true measurement of the yield surface would require a rheometer which enables parameterization of the entire family of velocity gradients , incorporating both uniaxial and planar extensional flows . this has not yet been achieved .given the very different mechanisms underlying plastic flow in colloidal glasses and metals ( for which the von mises yield surface was originally proposed ) direct measurements of the yield surface could prove very informative .the good qualitative agreement of our theory with the experimental results of ovarlez _ et al . 
_( 2010 ) on fluidized glasses is perhaps all the more surprising when recalling that the theory is constructed specifically for dispersions of spherical colloidal particles ( without hydrodynamic interactions ) , whereas the experiments were performed on large aspect ratio bentonite clay , a carbopol gel and an emulsion .the consistent phenomenology presented by these disparate systems would seem to indicate that the sufficient elements required for a successful theory are ( i ) a well - founded geometrical structure ( in the sense that its tensorial structure is appropriate ) , ( ii ) correct incorporation of flow induced relaxation rates .the specific nature of the interparticle interactions does not seem to be of particular importance , although we note that certain interaction potentials may be more susceptible to inhomogeneous flow ( e.g. shear banding instabilities ) than others .the presence of a spatially varying velocity gradient tensor would conflict with the assumption of translational invariance underlying our constitutive equation . for the case of superposed compression and shear flowwe have found that the viscosity felt by the perturbing shear flow is given by , which for the typical value is around % less than the primary viscosity . in [ ovarlez _( 2010 ) ] the sedimentation velocity of a sphere falling in the vorticity direction of a shear fluidized glass was observed to be a factor of larger than one would expect from a sphere falling through a fluid of viscosity ._ attributed this to hydrodynamic interactions between sedimenting particles . although the flow around a falling sphere in shear flow is more complicated than the flows considered in the present work , it is nevertheless tempting to speculate that enhanced sedimentation velocity could be connected to a reduced effective viscosity , as occurs in ( [ visc_reduction ] ) , arising from a nontrivial coupling of the superposed flows .we leave a detailed application of our constitutive equation to the problem of sedimentation under shear to future work .an aspect of the present work which may warrant further investigation is the possible analogy between systems with isotropic interparticle interactions , upon which anisotropy is imposed by external mechanical force fields , and intrinsically anisotropic materials such as liquid crystals [ de gennes and prost ( 1993 ) , larson ( 1988 ) ] .the theory of anisotropic fluids has a long history , beginning essentially with the work of oseen in the 1930s [ oseen ( 1933 ) ] and developed through the work of ericksen ( 1959 ) and leslie ( 1968 ) . in all of these theoretical developmentsthe anisotropy of the viscous response originates from the underlying anisotropy of the constituents ; usually oriented polymers or rod - like particles with liquid crystalline order .take the nematic phase as an example . within a continuum mechanicsdescription the orientational order is characterized by the director . in certain systemsthe director may be held fixed by the application of a suitably strong external field , in others it interacts with the imposed flow in a more complicated way .either way , the presence of a preferred direction in the sample gives rise to an anisotropic viscous response ( characterized e.g. by the five scalar leslie viscosity coefficients ) . 
in the present case the anisotropy of the perturbation responseis determined by the geometry of the primary flow .it may thus be anticipated that the eigenvectors of the primary deformation may play a role in the present theory analogous to that of in the dynamics of nematics . throughout the present work we have focussed on the response of a glass which has already been fluidized by a primary flow of constant rate. however , within the same formalism we can also consider the predictions of our constitutive equation for the elastic response of a colloidal glass which has been pre - strained by a primary deformation at some point in the past ( see the appendix for more details on this point ) .for example , a colloidal glass subject to a shear strain below the yield strain may reasonably be expected to possess an anisotropic elastic response to an additional small perturbing strain ( see ( [ anis_elast ] ) for verification of this assertion ) .taking this idea a step further , it would be of interest to investigate the nature of the yield stress surface in such pre - strained glasses . although both the topology of the surface and its invariance with respect to hydrostatic pressure will remain unchanged by pre - straining , significant deviations from circularity ( over and above those already arising from normal stresses ) can be envisaged . in realitythese deviations may well be nonstationary , decaying away as the sample ages , but such subtle dynamic effects are beyond current formulations of the mode - coupling theory . integrating ( [ constit1 ] ) by parts yields the following form for the schematic constitutive equation for glassy states the modulus relaxes to a plateau value for long times and the contribution of the integral term to the overall numerical value of the stress is limited to a negligible integration over the relaxation ( beyond this time the time derivative vanishes ) .the integral term may therefore be neglected to a good level of approximation .we first consider the situation when the system is subjected to a mixed strain field , where is the infinitessimal strain due to flow .both of these strains are sufficiently small that the system remains in the purely elastic regime .the stress for long times after the application of the two strains is given by a simple linear superposition of the two elastic responses as is expected from the linear theory of isotropic elasticity [ landau and lifshitz ( 1986 ) ] .note that in ( [ lineareq ] ) we have suppressed an irrelevant isotropic contribution ( the system is incompressible ) .we next consider the situation whereby remains small but the strain due to the primary flow is allowed to be sufficiently large that some plastic rearrangements are induced .we nevertheless require that the total strain must still remain sufficiently small that the yield stress is not exceeded and the system remains solid .the analysis of this case neccessitates use of the partial expansion ( [ visc4 ] ) .we assume that the primary strain has been applied at some time in the distant past and that all plastic rearrangements have ceased by the time we apply the perturbing strain .the finger tensor is thus independent of time at the present time . 
for times following application of by analogy with the situation considered in sec .[ anisvis ] , eq.([anisstrain ] ) can be expressed in terms of a fourth rank stiffness tensor where .we note that the plateau value of the modulus may differ from its quiescent value as a result of plastic deformation occurring during or after application of the primary strain .brader j.m ., m. siebenbrger , m. ballauff , k. reinheimer , m. wilhelm , s. j. frey , f. weysser , and m. fuchs,``nonlinear response of dense colloidal suspensions under oscillatory shear : mode - coupling theory and fourier transform rheology experiments '' , phys.rev.e * 82 * , 061401 ( 2010 ) .brader j.m ., t. voigtmann , m. fuchs , r.g .larson , and m.e .cates,``liquids and structural glasses special feature : glass rheology : from mode - coupling theory to a dynamical yield criterion '' , proceedings of the national academy of sciences * 106 * , 15186 - 15191 ( 2009 ) .vermant j. , p. moldenaers , j. mewis , m. ellis , and r. garritano , `` orthogonal superposition measurements using a rheometer equipped with a force rebalanced transducer '' , rev.sci.instr .* 68 * , 4090 ( 1997 ) .zausch j. , j. horbach , m. laurati , s.u .egelhaaf , j.m .brader , th .voigtmann , and m , fuchs , `` from equilibrium to steady state : the transient dynamics of colloidal liquids under shear '' , j.phys.cond.mat . *20 * , 404210 ( 2008 ) .the specific numerical value for the exponent from the schematic model depends upon the path chosen in the two - dimensional space of memory function coupling constants . by taking a standard linear path parameterized by ( see text below eq.([memory ] ) ) we reproduce rather closely the viscosity divergence of the wavevector dependent theory for hard - spheres .a completely analogous picture would have emerged , had we chosen a dominant shear flow with a perturbing compressional component . in this inverted thought experiment the dominant timescale would simply be set by .
|
recent experiments performed on a variety of soft glassy materials have demonstrated that any imposed shear flow serves to simultaneously fluidize these systems in all spatial directions [ ovarlez _ et al . _ ( 2010 ) ] . when probed with a second shear flow , the viscous response of the experimental system is determined by the rate of the primary , fluidizing flow . motivated by these findings , we employ a recently developed schematic mode - coupling theory [ brader _ et al . _ ( 2009 ) ] to investigate the three - dimensional flow of a colloidal glass subject to a combination of simple shear and uniaxial compression . despite differences in the specific choice of superposed flow , the flow curves obtained show good qualitative agreement with the experimental findings and recover the observed power law describing the decay of the scaled viscosity as a function of the dominant rate . we then proceed to perform a more formal analysis of our constitutive equation for different kinds of ` mixed ' flow , consisting of a dominant primary flow combined with a weaker perturbing flow . our study provides further evidence that the theory of brader _ et al . _ ( 2009 ) reliably describes the dynamic arrest and mechanical fluidization of dense particulate suspensions .
|
compressive sensing ( cs ) is a promising technique that recovers a high - dimensional signal represented by a few non - zero elements using far fewer measurements than the signal dimension . this technique has immense applications , ranging from image compression to sensing systems requiring low power consumption . the mathematical heart of cs is to solve an under - determined linear system of equations by harnessing an inherent sparse structure in the signal . let and be a real - valued sparse signal vector and a compressive sensing matrix that linearly projects a high - dimensional signal in to a low - dimensional signal in , where , respectively . formally , the noiseless cs problem is to reconstruct the sparse signal vector by solving the following -minimization problem : , where the collection of non - zero element positions in , , is defined as \{ \} . let be an unknown sparse signal vector whose sparsity level is equal to , i.e. , . the measurement equation of quantized compressed sensing is given by , where denotes the -level _ scalar _ quantizer that is applied component - wise , and and ^{\top} denote the measurement and noise vectors , respectively . all entries of the noise vector are assumed to be independent and identically distributed ( iid ) gaussian random variables with zero mean and variance , i.e. , for all . our objective is to reliably estimate the unknown sparse signal vector given in the presence of gaussian noise , by appropriately constructing a linear measurement matrix and a -level scalar quantizer . we define a sparse signal recovery decoder , which maps the measurement vector to an estimate of the original sparse signal vector . it is said that the average probability of error is at most if \leq \epsilon . a reed - solomon ( rs ) code is a ] mds code , where is a prime power , for any positive integer . in this paper we focus on a -ary ] rs code . we define the one - to - one mapping that maps each element of into an -length word in . for instance , when , it is possible to express an element of as a binary vector of length . using this mapping , we can transform each dictionary vector into , where . the transformed column vector is referred to as the dictionary basis vector . dictionary coding maps a dictionary basis vector in into a lattice point in using a lattice encoding , where . we commence by providing a brief background on the lattice construction . let be the ring of gaussian integers and be a gaussian prime . let us denote the addition over by , and let be the natural mapping of onto . we recall the nested lattice code construction given in [ 14 ] . let be a lattice in , with full - rank generator matrix . let denote a linear code over with block length and dimension , with generator matrix , where . the lattice is defined through `` _ _ construction a _ _ '' ( see and references therein ) as , where is the image of under the mapping function . it follows that is a chain of nested lattices , such that and . for a lattice and , we define the lattice quantizer , the voronoi region , and the modulo operation [ { \bf r } ] \mod \lambda = { \bf r}-{\rm q}_\lambda({\bf r } ) , where . the proof of this theorem is based on the proposed two - stage decoding method called `` _ _ compute - and - recover _ _ '' . in the first stage , we decode an integer linear combination of coded dictionary vectors by removing noise , which essentially yields a finite - field sparse signal recovery problem . in the second stage , we apply syndrome decoding over the finite - field to reconstruct the sparse signal vector .
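the lattice linearity that compute - and - recover exploits can be illustrated with a real - valued toy version of construction a over a small prime field ; the actual scheme works with gaussian integers and a polar - coded fine lattice , and the generator matrix below is a hypothetical example .

```python
import numpy as np

p = 5                                    # a prime; real-valued toy stand-in
G = np.array([[1, 0],                    # hypothetical generator of a
              [0, 1],                    # [4,2] linear code over gf(p)
              [1, 1],
              [1, 2]])

def lattice_encode(w):
    """construction a (toy): map a message w in gf(p)^k to the coset
    representative x = (g w mod p); the full lattice is x + p*z^n."""
    return (G @ w) % p

w1, w2 = np.array([1, 3]), np.array([2, 4])
x1, x2 = lattice_encode(w1), lattice_encode(w2)
# linearity: an integer combination of codewords, reduced mod p, is again
# a codeword -- the property used to turn noisy measurements into a
# finite-field recovery problem
assert np.array_equal((2 * x1 + 3 * x2) % p,
                      lattice_encode((2 * w1 + 3 * w2) % p))
```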
in this stage , we decode a noise - free measurement vector from , using the key property of a lattice code . recall that each coded dictionary vector is a lattice codeword ; therefore , any integer - linear combination of lattice codewords is again a lattice codeword . thus we have that \mod \lambda \in \mathcal{l } . in the second stage , we exploit the ]-rs code , whose minimum distance achieves the singleton bound , i.e. , . as a result , the syndrome decoding method allows us to recover the sparse signal perfectly , provided that . putting the two inequalities in and together , and using the facts and , the number of measurements required for sparse signal recovery in the presence of gaussian noise boils down to , which completes the proof . * remark 1 ( decoding complexity ) * : the proposed two - stage decoding method can be implemented with polynomial time computational complexity . in the first stage , the lattice equation can be efficiently decoded with the -level scalar quantizer in and the successive decoding algorithm of the polar code , which essentially uses operations . syndrome decoding used in the second stage can be implemented with polynomial time algorithms such as the berlekamp - massey algorithm , which requires operations in . considering that is a vector space over , this amount corresponds to operations in . since and , the overall computational complexity of the proposed method is at most operations for recovery . * remark 2 ( universality of the measurement matrix ) * : the proposed coded compressive sensing method is universal , as it is possible to recover all sparse signals using a fixed sensing matrix . this universality is practically important , because one may otherwise need to randomly construct a new measurement matrix for each signal . some existing one - bit compressive sensing algorithms do not have the universality property . * remark 3 ( non - integer sparse signal case ) * : one potential concern with our integer sparse signal setting is that a sparse signal can have real - valued components in some applications . this concern can be resolved by exploiting an integer - forcing technique in which is quantized into an integer vector and the residual is interpreted as additional noise . then , the effective measurements are obtained as , where denotes the effective noise . utilizing this modified equation , we are able to apply the proposed coded compressive sensing method to estimate the integer approximation . assuming the non - zero values in are bounded as for some , we conjecture that the proposed scheme guarantees recovery of the sparse signal with a bounded estimation error at the cost of a larger number of measurements than in theorem 1 . the rigorous proof of this conjecture will be provided in our journal version . * remark 4 ( noiseless one - bit compressive sensing ) * : one interesting scenario is when a one - bit quantizer and a binary signal are used . in the noise - free case , the number of measurements required for perfect recovery is lower bounded by . ( figure : recovery with one - bit and noisy measurements . )
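as an illustration of the second decoding stage , the following sketch performs syndrome decoding by brute force over a toy parity - check matrix with minimum distance 3 , so that a single finite - field error is correctable ; a practical decoder would use the berlekamp - massey algorithm mentioned in remark 1 , and the matrix below is a hypothetical example rather than the code used in the paper .

```python
import numpy as np
from itertools import combinations, product

q, n, t = 5, 6, 1
# hypothetical parity-check matrix over gf(5); every pair of columns is
# linearly independent, so d_min = 3 = 2t + 1 and one error is correctable
H = np.array([[1, 1, 1, 1, 1, 0],
              [0, 1, 2, 3, 4, 1]])

def syndrome_decode(s):
    """brute-force syndrome decoding: return the unique vector e with at
    most t non-zeros satisfying h e = s (mod q)."""
    for k in range(t + 1):
        for supp in combinations(range(n), k):
            for vals in product(range(1, q), repeat=k):
                e = np.zeros(n, dtype=int)
                e[list(supp)] = vals
                if np.array_equal((H @ e) % q, s % q):
                    return e
    return None

e_true = np.zeros(n, dtype=int)
e_true[3] = 2                      # a 1-sparse finite-field signal
assert np.array_equal(syndrome_decode((H @ e_true) % q), e_true)
```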
a fixed binary sensing matrix designed by the concatenation of compression matrix and the generator matrix of polar code ( which is completely determined by the rate - one arikan s kernal matrix and the information set ) , as illustrated in fig .[ fig:1 ] .in particular , the binary compression matrix is obtained from that is the parity check matrix of -ary ] of the sparse signal in the presence of noise with variance when the proposed algorithm is applied .we compare our coded compressive sensing algorithm with the following two well - known one - bit compressive sensing algorithms with some modification for a binary signal .* convex optimization : a variant of the -minimization method proposed in for a binary sparse signal , which is summarized in table [ table1 ] ; * binary iterative hard thresholding ( biht ) : a heuristic algorithm in with some modifications for the binary signal recovery as in step 3 ) and 4 ) of table [ table1 ] .for the two modified reference algorithms , we use a gaussian sensing matrix whose elements are drawn from iid gaussian distribution . for each setting of , , , and , we perform the recovery experiment for 500 independent trials , and compute the average of perfect recovery rate ..a convex optimization algorithm for binary sparse signal [ cols= " < " , ] [ table1 ] .,width=297 ] fig . [ fig:2 ] plots the perfect recovery probability versus snr for each algorithm , when , , and . as can be seen in fig .[ fig:2 ] , the proposed method outperforms biht significantly in terms of the perfect signal recovery performance .specifically , biht is not capable of recovering the signal with high probability until snr=12 db , because there are a lot of sign flips in the measurements due to noise .whereas the proposed algorithm is robust to noise ; thereby it recovers the signal with probability one when snr is 6 db above .the convex optimization approach provides a better performance than the other algorithms ; yet , it requires the computational complexity order of , which is much higher than that of the proposed one .in this paper , we proposed a novel compressive sensing framework with noisy and quantized measurements for integer sparse signals . with this framework we derived the sufficient condition of the perfect recovery as a function of important system parameters .considering one - bit compressive sensing as a special case , we demonstrated that the proposed algorithm empirically outperforms the existing greedy recovery algorithm .n. lee , `` map support detection for greedy sparse signal recovery algorithms in compressive sensing , '' aug .2015 ( available at : http://arxiv.org / abs/1508.00964 ) .e. j. cand , j. romberg , and and t. tao , `` stable signal recovery for incomplete and inaccurate measurements , '' vol .59 , pp . 1207 - 1223 ,
|
in this paper , we propose _ coded compressive sensing _ that recovers an -dimensional integer sparse signal vector from a noisy and quantized measurement vector whose dimension is far fewer than . the core idea of coded compressive sensing is to construct a linear sensing matrix whose columns consist of lattice codes . we present a two - stage decoding method named _ compute - and - recover _ to detect the sparse signal from the noisy and quantized measurements . in the first stage , we transform such measurements into noiseless finite - field measurements using the linearity of lattice codewords . in the second stage , syndrome decoding is applied over the finite field to reconstruct the sparse signal vector . a sufficient condition for perfect recovery is derived . our theoretical result demonstrates an interplay among the quantization level , the sparsity level , the signal dimension , and the number of measurements required for perfect recovery . considering one - bit compressive sensing as a special case , we show that the proposed algorithm empirically outperforms an existing greedy recovery algorithm .
|
with the basic idea being able to be traced back to , problems of distributed consensus seeking have been widely studied in the past decade sparked by the work of .the states of a group of interconnected nodes can asymptotically reach the average value of their initial states via neighboring node interactions and simple distributed control rule , which forms a foundational block for the further development in the broad range of control of network systems .the understanding of distributed consensus algorithms has been substantially advanced in aspects ranging from convergence speed optimisation and directed links to switching interactions and nonlinear dynamics .on the other hand , recent work brought the idea of distributed averaging consensus to quantum networks , where each node corresponds to a qubit , i.e. , a quantum bit . in quantum mechanics , the state of a qubitis represented by a density matrix over a two - dimensional hilbert space , and the state of a quantum network with qubits corresponds to a density matrix over the tensor product of .the concepts regarding the network density matrix reaching a quantum consensus were systematically developed in , and it has been shown that a quantum consensus can be reached with the help of quantum swapping operators for both continuous - time and discrete - time dynamics . in fact , the two categories of dynamics over classical and quantum networks can be put together into a group - theoretic framework , and quantum consensus dynamics can even be equivalently mapped into some parallel classical dynamics over disjoint subsets of the entries of the network density matrix . in this paper, we make an attempt to look at the relation between the two categories of dynamics from a _ physical _ perspective , despite their various consistencies already shown in .the density matrix describes a quantum system in a mixed state that is a statistical ensemble of several quantum states , analogous to the probability distribution function of a random variable .first of all we investigate the evolution of the network entropy for consensus dynamics in classical or quantum networks .we show that in classical consensus dynamics , the network entropy decreases at the consensus limit if the node initial values are i.i.d .bernoulli random variables , and the network differential entropy is monotonically non - increasing if the node initial values are i.i.d . gaussian . while for the quantum consensus dynamics , the network s von neumann entropy is in contrast non - decreasing .these observations suggest that the two types of consensus schemes may have different physical footings .then , we compare several gossiping algorithms with random or deterministic coefficients for classical or quantum networks and present novel convergence conditions for gossiping algorithms with random coefficients .the result shows that quantum gossiping algorithms with deterministic coefficients are physically consistent with classical gossiping algorithms with random coefficients .the remainder of the paper is organized as follows .section 2 presents the problem of interest as well as the main results .section 3 presents the proofs of the statements .finally section 4 concludes the paper .for a network with nodes in the set with an interconnection structure given by the undirected graph , the standard distributed consensus control scheme is described by the dynamics where with representing the state of node , and is the laplacian of the graph . 
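as a quick numerical illustration of the classical dynamics ( [ classical ] ) , the following sketch propagates a small path graph to its average - consensus limit via the matrix exponential ; the graph and initial condition are arbitrary examples .

```python
import numpy as np
from scipy.linalg import expm

# path graph on four nodes: laplacian l = d - a
Adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
L = np.diag(Adj.sum(axis=1)) - Adj

x0 = np.array([3.0, -1.0, 4.0, 2.0])
x_t = expm(-100.0 * L) @ x0                       # x(t) = exp(-l t) x0 at large t
assert np.allclose(x_t, x0.mean(), atol=1e-6)     # average consensus limit
```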
here , we refer to for a detailed introduction as well as for the definition of the graph laplacian . also consider a quantum network with qubits indexed in the set .we can introduce a quantum interaction graph , where specifies a swapping operator between the two qubits .the state of each qubit is represented by a density matrix over the two - dimensional hilbert space , and the network state corresponds to a density matrix over , the tensor product of .continuous - time quantum consensus control can be defined by where is the network density matrix , and is the swapping operator between the qubits and ( see for details on the definition and realization of the swapping operators ) .let the graph be connected for either the classical or the quantum dynamics .it has been shown that for the system ( [ classical ] ) , there holds ( e.g. , ) where is the all - ones vector . for the system ( [ quantum ] ) , there holds where is the permutation group and represents the quantum permutation operator induced by . the conceptual consistency of the systems ( [ classical ] ) and ( [ quantum ] ) , as well as the logical consistency of the two consensus limits , have been discussed in .the shannon entropy is a fundamental measure of uncertainty of a random variable .the entropy , of a discrete random variable with alphabet is defined as here is to the base and is the probability mass function .the differential entropy of a continuous random variable with density is defined as where is the support of . as a natural generalization of the shannon entropy , for a quantum - mechanical system described by a density matrix , the von neumann entropy is defined as where is the trace operator .we present the following result for classical consensus dynamics .[ thmclassical ] ( i ) let be independent and identically distributed ( i.i.d . ) bernoulli random variables with mean . then obeys binomial distribution .therefore , for the system ( [ classical ] ) , there holds ] follows straightforwardly from the independence of the . on the other hand, follows a binomial distribution whose entropy is well - known to be .since for all , there holds .this proves ( i ) .\(ii ) the solution of the system ( [ classical ] ) is as a result , for any , is a gaussian random vector. then [x(t)-\mathbf{e}(x(t))]\t \big]\big|\nonumber\\ & = \frac{1}{2}\log\big |(2\pi e \sigma^2)^n e^{-2tl_{\mathrm{g } } } \big|,\end{aligned}\ ] ] where represents the matrix determinant .we take and compare with .there holds from ( [ 1 ] ) that since is the laplacian of a connected undirected graph , has a unique zero eigenvalue , and all non - zero eigenvalues of are positive ( cf . , ) .consequently , all eigenvalues of are positive and no larger than one , which yields that this proves . since is chosen arbitrarily , we conclude that is a non - increasing function .the calculations of and are straightforward .we have now completed the proof of theorem [ thmclassical ] . the proof relies on the following lemma .[ lem1 ] let and fix .for the system ( [ quantum ] ) , there exist with such that _ proof . _define a set where stands for the convex hull .it is straightforward to see that if . as a result, is an invariant set of the system ( [ quantum ] ) in the sense that for all as long as .the desired lemma thus follows immediately . 
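the gaussian statement of theorem [ thmclassical ] can also be checked numerically . the sketch below evaluates the differential entropy formula derived in the proof on a path graph with unit noise variance ( both illustrative assumptions ) and verifies that it is non - increasing in time :

```python
import numpy as np
from scipy.linalg import expm

# path graph on n = 4 nodes (illustrative choice), unit noise variance
n, sigma2 = 4, 1.0
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A

def diff_entropy(t):
    # h(x(t)) = (1/2) log | (2 pi e sigma^2)^n e^{-2 t L_G} |
    sign, logdet = np.linalg.slogdet(expm(-2 * t * L))
    return 0.5 * (n * np.log(2 * np.pi * np.e * sigma2) + logdet)

for t in np.linspace(0.0, 3.0, 7):
    print(f"t = {t:.1f}  h = {diff_entropy(t):.4f}")  # non-increasing in t
```

in fact , since $\det e^{m}=e^{\operatorname{tr}m}$ , here $\log\det e^{-2tl_{\mathrm{g}}}=-2t\operatorname{tr}(l_{\mathrm{g}})$ , so the differential entropy in the sketch decreases linearly in time , consistent with the degenerate gaussian consensus limit .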
recalling that the von neumann entropy is a concave function of , and that for any unitary operator , we conclude from lemma [ lem1 ] that for any and in light of the fact that is unitary for all .this proves that is a non - decreasing function and theorem [ thmquantum ] holds . ( i ) .first of all it is clear that is markovian from its definition . recall that is the permutation group .we denote the permutation matrix associated with as . in particular , the permutation matrix associated with the swapping between and is denoted as .the state transition of along the algorithm a2 can be written as since the graph is connected , the swapping permutations defined along the edges of form a generating set of the permutation group .consequently , given , the set is the state space of , which contains at most elements .finally it is straightforward to verify that for any given , is irreducible and aperiodic , and therefore forms an ergodic markov chain .the statement is in fact a direct consequence from the ergodicity of .we however need to be a bit more careful since we assume that takes value from an arbitrary ( not necessarily discrete ) probability space and the are not necessarily independent .we denote the state transition matrix for as .we calculate from basic probability equality under and , and then immediately obtain where is the unit vector with the entry being one .it is clear that the above calculation does not rely on being discrete or continuous , and represents probability mass or density function wherever appropriate . from the definition of the algorithm a2 a symmetric matrix and the ergodicity of leads to at an exponential rate .the desired conclusion thus follows .we have investigated the evolution of the network entropy for consensus dynamics in classical or quantum networks . in the classical case ,the network entropy decreases at the consensus limit if the node initial values are i.i.d .bernoulli random variables , and the network differential entropy is monotonically non - increasing if the node initial values are i.i.d . gaussian . for quantum consensus dynamics , the network s von neumann entropy is on the contrary non - decreasing .this observation can be easily generalized to balanced directed graphs . in light of this inconsistency , we also compared several gossiping algorithms with random or deterministic coefficients for classical or quantum networks , and showed that quantum gossiping algorithms with deterministic coefficients are physically consistent with classical gossiping algorithms with random coefficients .j. n. tsitsiklis ._ problems in decentralized decision making and computation_. ph.d .thesis , dept . of electrical engineering and computer science ,massachusetts institute of technology , boston , ma , 1984 .f. ticozzi , l. mazzarella and a. sarlette , symmetrization for quantum networks : a continuous - time approach , " _ the 21st international symposium on mathematical theory of networks and systems ( mtns ) _ , groningen , the netherlands , jul .2014 .g. shi , m. johansson and k. h. johansson , randomized gossiping with unreliable communication : dependent and independent node updates , " _the 51st ieee conference on decision and control _ , pp .48464851 , maui , hawaii , dec . 2012 .
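as a numerical companion to theorem [ thmquantum ] , the following hedged sketch iterates a single swap - based gossip map on a two - qubit density matrix ( the equal mixing weights and this particular initial state are illustrative assumptions ) and confirms that the von neumann entropy never decreases :

```python
import numpy as np

# two-qubit swap operator on C^4 (basis |00>, |01>, |10>, |11>)
U_sw = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log(evals)).sum())

# a mixed initial state built from a random pure state (illustrative)
psi = np.random.randn(4) + 1j * np.random.randn(4)
psi /= np.linalg.norm(psi)
rho = 0.9 * np.outer(psi, psi.conj()) + 0.1 * np.eye(4) / 4

for step in range(5):
    print(step, von_neumann_entropy(rho))           # non-decreasing sequence
    rho = 0.5 * rho + 0.5 * U_sw @ rho @ U_sw.conj().T
```

the monotonicity seen in the printout is exactly the concavity - plus - unitary - invariance argument used in the proof above .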
|
in this paper , we investigate the evolution of the network entropy for consensus dynamics in classical and quantum networks . we show that in the classical case , the network entropy decreases at the consensus limit if the node initial values are i.i.d . bernoulli random variables , and the network differential entropy is monotonically non - increasing if the node initial values are i.i.d . gaussian , while for quantum consensus dynamics the network's von neumann entropy is , in contrast , non - decreasing . in light of this inconsistency , we compare several gossiping algorithms with random or deterministic coefficients for classical or quantum networks , and show that quantum gossiping algorithms with deterministic coefficients are physically related to classical gossiping algorithms with random coefficients . * keywords : * consensus dynamics , quantum networks , entropy evolution
|
asymmetries of cross sections , e.g. spin - asymmetries and forward - backward asymmetries , are often interesting physics quantities . for concreteness let us consider a situation as shown in fig. [ massplot ] , where the asymmetry of the signal events , in the central gaussian peak with a width of , should be determined from data taken in two different spin configurations .the number density of events as a function of some kinematic variable , , ( typically a reconstructed mass ) is given by with .here ( ) denotes the cross section of the signal ( background ) events in the two different spin configurations and .the factor is a luminosity and acceptance factor , assumed to be the same for the two spin configurations .the goal is to determine from spectra as shown in fig .[ massplot ] , and taken in two spin configurations , the two unknown asymmetries and , assumed to be independent of .it is of course not known event - by - event whether a particular event is signal or background ; one only knows the fraction of signal events as a function of , from a fit to the event spectrum as in fig .[ massplot ] .section [ simple ] presents the simplest method , based on counting rate asymmetries .section [ lh ] describes the unbinned likelihood method which is known to yield the smallest possible variance of all unbiased estimators in the limit of an infinite number of events , thus reaching the minimal variance bound ( mvb ) given by the cramr - rao inequality .section [ weighting ] presents a new asymmetry estimator , based on weighted events .this estimator is also unbiased in the large limit , i.e. it is consistent , and it is very close to reach the minimal variance bound .the advantage is that it can also be used in cases where the unbinned likelihood method is cumbersome because of large number of events .event weighting to extract the number of signal and background events was discussed in ref . but extraction of asymmetries is not discussed in this reference .the different methods are compared in section [ dis ] . andwidth sitting on a constant background .[ massplot ] ]a method often found in the literature is to determine the asymmetry in a -standard - deviation region around the peak , a region which includes both signal and background ; then to measure the background asymmetry in some side bands around the signal peak ( and ) and to use the result to correct the asymmetry measured in the peak region . for sake of simplicitywe will set , so that everywhere below we can write instead of . the expectation value of the counting rate asymmetry , , in the range is related to and in the following way : where we used and .an estimator for is given by : note that , strictly speaking , the first equality in eq .( [ eq:<a_cnt > ] ) is valid only in the large limit . in this limit eqs .( [ eq:<a_cnt > ] ) and ( [ eq : tilde - a_s ] ) indicate that , i.e. is a consistent estimator .the corresponding figure of merit , fom= , reads here and in the following we assume small asymmetries , such that for the error calculation the approximation is valid . in this case onefinds { { \rm d}}x ] . introducing these values of and in eqs .[ fom1 ] shows that the fom depends on the choice of both the signal region ( ) and the background region ( and ) . the solid line in fig .[ fom ] shows the fom as a function of , for which is a reasonable value to make sure that the side bands include a negligible amount of signal .the signal region , i.e. 
the value for , is chosen in order to maximize the fom for the given .the fom depends also on the signal to background ratio , here chosen to be 1:1 at , as in fig .[ massplot ] .in the large limit , the unbinned maximum likelihood method is known to provide an unbiased estimator for the parameters and , which reaches the minimal variance bound .since the numbers of events and are not fixed , an extended maximum likelihood method has to be used . with the definitions , and the log likelihood function reads : where ( ) runs over all events in the ( ) configuration and in the range , while .the first derivative is with a similar expression for .note that the terms with and cancel each other because the same is assumed for the two configurations .the set of equations can be solved for and . for small asymmetries a first order expansion in and the set of equations and the covariance matrix of the two parameters and reads : for the fom of one finds note that , if not otherwise stated , all sums run over both event samples , 1 and 2 . the dotted line in fig .[ fom ] shows this fom as a function of , i.e. for events in the region . for a given range of data available , defined by ,it is always larger than the fom obtained with the side band subtraction method shown by the solid line .the latter method does not reach the minimal variance bound . as a function of the maximum range of data available defined by , for the classical method of side band subtraction ( solid line ) and for the likelihood or weighting method ( dotted line ) . in the side band subtraction method and for each value of the value of , defining the signal region ,is chosen in order to maximize the fom .the figures of merit are normalized to the maximum fom reachable in the likelihood or weighting method in the limit . in this case .in this section a method to extract ( and simultaneously ) using event weighting is developed .it is clear that the estimator based on the counting rate asymmetry is not statistically optimal since it gives the same weight to all events .better estimators can be obtained by weighting each event by the signal strength , , and by the background strength , .these weight factors coincide with the optimal weights found in to extract the number of signal events .they are used to build the following asymmetries : in the large limit , the expectation values of and are where , as in section [ lh ] .the ratios of integrals can easily be obtained from the event sample , e.g. 
, which results exactly in the set of equations ( [ likelihood - set - equations ] ) found for the likelihood method in the small asymmetry limit .so the fom is still .this result can of course also be obtained directly , by simple error propagation using the expressions found for and from eqs .( [ as ] ) and ( [ ab ] ) .appendix [ cov ] shows that the factor is actually the correlation coefficient between and .this shows that the weighting method and the unbinned likelihood method are identical for small asymmetries .the advantage of the weighting method is that the estimators derived from eqs .( [ as ] ) and ( [ ab ] ) can also be used for arbitrary asymmetries , whereas the likelihood method requires in this case a numerical maximization of with loops over all events .for sake of simplicity , the error calculation was only presented for small asymmetries .extending it to arbitrary asymmetries is straightforward but lengthy ; it shows that the fom of the weighting method is only slightly smaller than the fom of the unbinned lh method .for example for a signal to background ratio as given in fig .[ massplot ] and asymmetries smaller than 50% the decrease in the fom is less than 1% .the weighting method can also be extended to more complicated cases where for example the acceptance factors are not the same in the two spin configurations or even when the asymmetries have to be determined from four counting rates in order to cancel differences of acceptances and flux factors for the two spin configurations , as in ref .a comparison of the two curves in fig . [ fom ] shows that the fom of the likelihood or event weighting method is always larger than the corresponding fom for the classical method . for a signal - to - background ratio of 1:1 at , as in fig .[ massplot ] , the gain is 23% for and 7% for . for gain is 2% and 10% for a signal - to - background ratio of 10:1 and 1:10 , respectively .apart from the gain in statistics it should also be noted that the weighting method avoids the arbitrary choice of the background region which starts here at . for breit - wigner distributions for examplethis choice is less obvious . in summary ,a new set of two estimators was presented to determine simultaneously signal and background asymmetries .these estimators are unbiased in the large limit , i.e. they are consistent . for small asymmetries they are also efficient , i.e. they reach the minimal variance bound , like the statistically optimal unbinned likelihood method .this is in contrast to the classical method of side band subtraction .these estimators can actually be derived from the likelihood method in the case of vanishing asymmetries . for large asymmetriestheir variances are still very close to the minimal variance bound .the advantage of the method is its applicability in cases where the likelihood method is cumbersome .consider two weight factors and .the covariance between and is given by : if the number of events is poisson distributed , i.e. , one finds cov .the error on the sums of weights is given by .thus the correlation coefficient is
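to make the weighting estimator concrete , the following toy monte - carlo sketch generates a gaussian signal peak on a flat background in two configurations , forms the weighted count differences , estimates the coefficient matrix from the event sample as described in the text , and solves the resulting 2x2 system ; all yields , shapes and asymmetry values are illustrative assumptions :

```python
import numpy as np
rng = np.random.default_rng(1)

# toy model: gaussian signal (mean 0, sigma 1) on a flat background on [-5, 5]
sigma, xlo, xhi = 1.0, -5.0, 5.0
N_s, N_b = 50_000, 50_000            # total signal / background yields
A_s_true, A_b_true = 0.10, -0.05     # asymmetries to recover

def gauss(x):
    return np.exp(-0.5 * (x / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

def sample(n_s, n_b):
    return np.concatenate([rng.normal(0.0, sigma, n_s),
                           rng.uniform(xlo, xhi, n_b)])

# configurations 1 and 2: yields scale as (1 +/- A) / 2
x1 = sample(int(N_s * (1 + A_s_true) / 2), int(N_b * (1 + A_b_true) / 2))
x2 = sample(int(N_s * (1 - A_s_true) / 2), int(N_b * (1 - A_b_true) / 2))

def f_s(x):                          # signal fraction from known shapes/yields
    S = N_s * gauss(x)
    B = N_b / (xhi - xlo)
    return S / (S + B)

ws1, ws2 = f_s(x1), f_s(x2)          # signal weights
wb1, wb2 = 1 - ws1, 1 - ws2          # background weights

# weighted count differences and the 2x2 coefficient matrix from the sample
d = np.array([ws1.sum() - ws2.sum(), wb1.sum() - wb2.sum()])
s = f_s(np.concatenate([x1, x2]))
M = np.array([[(s * s).sum(),       (s * (1 - s)).sum()],
              [((1 - s) * s).sum(), ((1 - s) ** 2).sum()]])
A_s_hat, A_b_hat = np.linalg.solve(M, d)
print(A_s_hat, A_b_hat)              # should be close to 0.10 and -0.05
```

for samples of this size the recovered values should agree with the generated asymmetries within the statistical precision of the toy sample .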
|
this article discusses the determination of asymmetries . we consider a sample of events consisting of a peak of signal events on top of some background events . both signal and background have an unknown asymmetry , e.g. a spin or forward - backward asymmetry . a method is proposed which determines the signal and background asymmetries simultaneously using event weighting . for vanishing asymmetries the statistical error of the asymmetries reaches the minimal variance bound ( mvb ) given by the cramér - rao inequality , and it remains very close to this bound for large asymmetries . the method thus provides a significant gain in statistics compared to the classical method of side band subtraction of background asymmetries . compared with the unbinned maximum likelihood approach , which also reaches the mvb , it has the advantage of not requiring loops over the event sample in the minimization procedure . event weighting , minimal variance bound , cramér - rao inequality , asymmetry extraction , optimal observables , side band subtraction 02.70.rr , 13.88.+e
|
in networked control systems ( ncss ) communication between controller(s ) and plant(s ) is made through unreliable and rate - limited communication links such as wireless networks and the internet ; see e.g. , many interesting challenges arise and successful ncs design methods need to consider both control and communication aspects .in particular , so - called _ packetized predictive control _ ( ppc ) has been shown to have favorable stability and performance properties , especially in the presence of packet - dropouts . in ppc ,the controller output is obtained through solving a finite - horizon cost function on - line in a receding horizon manner .each control _ packet _ contains a sequence of tentative plant inputs for a finite horizon of future time instants and is transmitted through a communication channel .packets which are successfully received at the plant actuator side , are stored in a buffer to be used whenever later packets are dropped .when there are no packet - dropouts , ppc reduces to model predictive control . for ppc to give desirable closed loop properties , the more unreliable the network is , the larger the horizon length ( and thus the number of tentative plant input values contained in each packet ) needs to be chosen .clearly , in principle , this would require increasing the network bandwidth ( i.e. , its bit - rate ) , unless the transmitted signals are suitably compressed . to address the compression issue mentioned above , in the present work we investigate the use of _ sparsity - promoting optimizations _ for ppc .such techniques have been widely studied in the recent signal processing literature in the context of _ compressed sensing _ ( aka _ compressive sampling _ ) .the aim of compressed sensing is to reconstruct a signal from a small set of linear combinations of the signal by assuming that the original signal is sparse .the core idea used in this area is to introduce a sparsity index in the optimization . to be more specific ,the sparsity index of a vector is defined by the amount of nonzero elements in and is usually denoted by , called the `` norm . ''the compressed sensing problem is then formulated by an -norm optimization , which , being combinatorial is , in principle hard to solve .since sparse vectors contain many 0-valued elements , they can be easily compressed by only coding a few nonzero values and their locations .a well - known example of this kind of sparsity - inducing compression is jpeg .the purpose of this work is to adapt sparsity concepts for use in ncss over erasure channels .a key difference between standard compressed sensing applications and ncss is that the latter operate in closed loop .thus , time - delays need to be avoided and stability issues studied , see also . to keep time - delays bounded ,we adopt an iterative greedy algorithm called _ orthogonal matching pursuit _ ( omp ) for the on - line design of control packets .the algorithm is very simple and known to be dramatically faster than exhaustive search . 
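for readers unfamiliar with omp , the following generic sketch shows the greedy support selection and least - squares refitting that make the method fast ; the dimensions and test problem are illustrative , and the paper 's packet - design algorithm applies the same greedy idea to the control cost rather than to a random sensing matrix :

```python
import numpy as np

def omp(G, y, K):
    """greedy orthogonal matching pursuit:
    approximately solve  min ||G u - y||_2  s.t.  ||u||_0 <= K."""
    m, N = G.shape
    support, r = [], y.copy()
    for _ in range(K):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(G.T @ r)))
        if j not in support:
            support.append(j)
        # least-squares fit restricted to the current support
        u_s, *_ = np.linalg.lstsq(G[:, support], y, rcond=None)
        r = y - G[:, support] @ u_s
    u = np.zeros(N)
    u[support] = u_s
    return u

# quick check on a random sparse problem (illustrative sizes)
rng = np.random.default_rng(0)
G = rng.standard_normal((40, 80))
u_true = np.zeros(80)
u_true[[3, 17, 60]] = [1.5, -2.0, 0.7]
u_hat = omp(G, G @ u_true, K=3)
print(np.nonzero(u_hat)[0], np.round(u_hat[np.nonzero(u_hat)], 3))
```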
in relation to stability in the presence of bounded packet - dropouts ,our results show how to design the cost function to ensure asymptotic stability of the ncs .our present manuscript complements our recent conference contribution , which adopted an -regularized optimization for ppc .a limitation of the approach in is that for open - loop unstable systems , asymptotic stability can not be obtained in the presence of bounded packet - dropouts ; the best one can hope for is practical stability .our current paper also complements the extended abstract , by considering bit - rate issues and also presenting a detailed technical analysis of the scheme , including proofs of results . to the best of our knowledge ,the only other published works which deal with sparsity and compressed sensing for control are which studies compressive sensing for state reconstruction in feedback systems , and which focus on sampling and command generation for remote applications .the remainder of this work is organized as follows : section [ sec : plant - model - control ] revises basic elements of packetized predictive control . in section [ sec : design ] , we formulate the design of the sparse control packets in ppc based on sparsity - promoting optimization . in section[ sec : stability ] , we study stability of the resultant networked control system . based on this , in section [ sec : relax ] we propose relaxation methods to compute sparse control packets which leads to asymptotic ( or practical ) stability .a numerical example is included in section [ sec : examples ] .section [ sec : conclusion ] draws conclusions .we write for , refers to modulus of a number . the identity matrix ( of appropriate dimensions )is denoted via . for a matrix ( or a vector ) , denotes the transpose . for a vector ^{\top}\in{{\mathbb{r}}}^n ] , that is , , and denotes the -th column of the matrix . .:={{{\boldsymbol{0}}}} ] .:=\operatorname*{supp}\{{{{\boldsymbol{x}}}}[0]\}=\emptyset ] .\|_2 ^ 2 ] such that , for all ] :=\operatorname*{arg\,min}_{\operatorname*{supp}\{{{{\boldsymbol{u}}}}\}={{\mathcal{s}}}[k+1]}\|g{{{\boldsymbol{u}}}}-h{{{\boldsymbol{x}}}}\|_2 ^ 2 ] . $ ] .next , we study stability of the ncs with control packets computed by algorithm [ alg : omp ] .since algorithm [ alg : omp ] always returns a feasible solution for , we have the following result based on theorem [ thm : stability ] .[ thm : omp ] suppose that assumption [ ass : dropouts ] holds and that the matrices , , and are chosen according to the procedure given in theorem [ thm : stability ] .then , the control packets , obtained by the omp algorithm [ alg : omp ] provide an asymptotically stable ncs .consequently , when compared to the method used in , algorithm [ alg : omp ] has the following main advantages : * it is simple and fast , * it returns control packets that asymptotically stabilize the networked control system we note that in conventional transform based compression methods e.g. , jpeg , the encoder maps the source signal into a domain where the majority of the transform coefficients are approximately zero and only few coefficients carry significant information .one therefore only needs to encode the few significant transform coefficients as well as their locations . in our case , on the other hand , we use the omp algorithm to sparsify the control signal in its original domain , which simplifies the decoder operations at the plant side . 
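the buffering mechanism of ppc under dropouts can be sketched as follows . the plant matrices , horizon and i.i.d . 30% dropout process are illustrative assumptions ( the stability result above assumes bounded consecutive dropouts ) , and the plain least - squares packet could be replaced by the omp step sketched earlier to obtain sparse packets :

```python
import numpy as np
rng = np.random.default_rng(2)

# illustrative scalar-input plant x+ = A x + B u (values are assumptions)
A = np.array([[1.05, 0.1], [0.0, 0.98]])
B = np.array([[0.0], [1.0]])
n, Np = 2, 6                      # state dimension and packet (horizon) length

# block matrices so that the stacked predicted state is  Phi x + Gamma u
Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(Np)])
Gamma = np.zeros((n * Np, Np))
for i in range(Np):
    for j in range(i + 1):
        Gamma[i * n:(i + 1) * n, j:j + 1] = np.linalg.matrix_power(A, i - j) @ B

def packet(x):
    # unconstrained least-squares packet; an omp step (as sketched earlier)
    # could be dropped in here to sparsify the packet
    u, *_ = np.linalg.lstsq(Gamma, -Phi @ x, rcond=None)
    return u

x = np.array([1.0, -1.0])
buf, ptr = np.zeros(Np), 0
for t in range(30):
    received = rng.random() > 0.3          # i.i.d. 30% dropouts (assumption)
    if received:
        buf, ptr = packet(x), 0            # overwrite buffer with new packet
    u_applied = buf[min(ptr, Np - 1)]      # otherwise reuse buffered entries
    ptr += 1
    x = A @ x + (B * u_applied).ravel()
print("final state norm:", np.linalg.norm(x))
```

this is an informal check of the buffering logic only , not a reproduction of the paper 's stability guarantee .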
to obtain a practical scheme for closed loop control ,we employ memoryless entropy - constrained scalar quantization of the non - zero coefficients of the sparse control signal and , in addition , send information about the coefficient locations .we then show , through computer simulations , that a significant bit - rate reduction is possible compared to when performing memoryless entropy - constrained scalar quantization of the control signal obtained by solving the standard quadratic control problem for ppc as in .to assess the effectiveness of the proposed method , we consider the following continuous - time plant model : ,\\ b_c&=\left[\begin{array}{r } -0.3\\ 0\\ -17\\ 0\\ \end{array}\right ] .\end{split } \label{eq : example - model}\ ] ] this model is a constant - speed approximation of some of the linealized dynamics of a cessna citation 500 aircraft , when it is cruising at an altitude of 5000 ( m ) and a speed of 128.2 ( m / sec ) ( * ? ? ?* section 2.6 ) . to obtain a discrete - time model, we discretize by the zero - order hold with sampling time ( sec ) .we set the horizon length ( or the packet size ) to .we choose the weighting matrix in ( [ eq : opt0 ] ) as , and choose the matrix according to the procedure shown in theorem [ thm : stability ] with .we first simulate the ncs in the noise - free case where .we consider the proposed method using the omp algorithm and also the optimization of : where is a positive constant . to compare these two sparsity - promoting methods with traditional ppc approaches, we also consider a finite - horizon quadratic cost function where is a positive constant , yielding the -optimal control to choose the regularization parameters in and in , we empirically compute the relation between each parameter and the control performance , as measured by the norm of the state .[ fig : norm_vs_nu ] shows this relation . versus control performance for ( solid ) and ( dash ) .the circles show the chosen parameters and . ] by this figure , we first find the optimal parameter for that optimizes the control performance , i.e. , .then , we seek that gives the same control performance , namely , .furthermore , we also investigate the ideal least - squares solution that minimizes . with these parameters ,we run 500 simulations with randomly generated ( markovian ) packet - dropouts that satisfy assumption [ ass : dropouts ] , and with initial vector in which each element is independently sampled from the normal distribution with mean 0 and variance 1 . fig .[ fig : sparsity ] shows the averaged sparsity of the obtained control vectors . with regularization parameter , omp ( solid ) , with ( dash - dot ) and ( dash ) .] the optimization with always produces much sparser control vectors than those by omp .this property depends on how to choose the regularization parameter .in fact , if we choose smaller , the sparsity changes as shown in fig .[ fig : sparsity ] . on the other hand ,if we use a sufficiently large , then the control vector becomes .this is indeed the sparsest control , but leads to very poor control performance : the state diverges until the control vector becomes nonzero ( see ) . 
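a hedged sketch of the rate accounting just described ; a single shared entropy estimate ( rather than one trained coder per vector entry ) and synthetic sparse packets are simplifying assumptions :

```python
import numpy as np
from collections import Counter

def uniform_quantize(u, delta=0.125):
    return np.round(u / delta).astype(int)

def empirical_entropy_bits(symbols):
    counts = np.array(list(Counter(symbols.tolist()).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(3)
Np = 6
# fake ensemble of sparse control packets (illustrative stand-in for omp output)
packets = rng.standard_normal((1000, Np)) * (rng.random((1000, Np)) < 0.3)

# sparse coder: entropy of the non-zero coefficients times the mean number of
# non-zeros, plus Np bits per packet for the support mask (one bit per entry)
nz = packets[packets != 0]
rate_sparse = (empirical_entropy_bits(uniform_quantize(nz))
               * (packets != 0).sum(axis=1).mean() + Np)

# non-sparse coder: entropy-code every entry of every packet
rate_dense = empirical_entropy_bits(uniform_quantize(packets.ravel())) * Np

print(f"sparse: {rate_sparse:.1f} bits/packet, dense: {rate_dense:.1f} bits/packet")
```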
fig .[ fig : control_performance ] shows the averaged 2-norm of the state as a function of for all 5 designs .we see that , with exception of the optimization based ppc , the ncss are nearly exponentially stable .in contrast , if the optimization of is used , then only practical stability is observed .the simulation results are consistent with corollary [ thm : omp ] and our previous results in .note that the optimization with shows better performance than that with , while gives a much sparser vector .this shows a tradeoff between the performance and the sparsity .for the four ppc designs : log plot ( above ) and linear plot ( below ) . ] fig .[ fig : cpu_time ] shows the associated computation times .the optimization is faster than omp in many cases .note that the ideal and the optimizations are much faster , since they require just one matrix - vector multiplication .we next investigate bit - rate aspects for a gaussian plant noise process . to keep the encoder and decoder simple, we will be using memoryless entropy - constrained uniform scalar quantization ; see .thus , the non - zero elements of the control vector are independently encoded using a scalar uniform quantizer followed by a scalar entropy coder . in the simulations ,we choose the step size of the quantizer to be , which results in negligible quantization distortion .we first run 1000 simulations with 100 time steps and use the obtained control vectors for designing entropy coders .a separate entropy coder is designed for each element in the control vector .for the first elements in the vector , we always use a quantizer followed by entropy coding . for the remaining elements ,we only quantize and entropy code the non - zero elements .we then send additional bits indicating , which of the elements have been encoded .the total bit - rate for each control vector is obtained as the sum of the codeword lengths for each individual non - zero codeword bits . for comparison, we use the same scalar quantizer with step size and design entropy coders on the data obtained from the optimization .since the control vectors in this case are non - sparse , we separately encode _ all _ elements and sum the lengths of the individual codewords to obtain the total bit - rate . in both of the above cases , the entropy coders are huffman coders .moreover , the system parameters are initialized with different random seeds for the training and test situations , respectively .the average rate per control vector for the omp case is bits , whereas the average rate for the case is bits .thus , due to sparsity , a percent bit - rate reduction is on average achieved . with gaussian noise .] fig .[ fig : performance-2 ] shows the 2-norm of the state and fig .[ fig : sparsity-2 ] shows the sparsity .we can also see the tradeoff between the performance and the sparsity in this case .we have studied a packetized predictive control formulation with a sparsity - promoting cost function for error - prone rate - limited networked control system .we have given sufficient conditions for asymptotic stability when the controller is used over a network with bounded packet dropouts .simulation results indicate that the proposed controller provides sparse control packets , thereby giving bit - rate reductions when compared to the use of , more common , quadratic cost functions .future work may include further study of performance aspects and the effect of plant disturbances .d. e. quevedo and d. 
nešić , `` input - to - state stability of packetized predictive control over unreliable networks affected by packet - dropouts , '' _ ieee trans . autom . control _ , vol . 56 , pp . 370 - 375 , 2011 . d. e. quevedo , j. østergaard , and d. nešić , `` packetized predictive control of stochastic systems over bit - rate limited channels with packet loss , '' _ ieee trans . autom . control _ , vol . 56 , pp . 2854 - 2868 , dec . 2011 . g. pin and t. parisini , `` networked predictive control of uncertain constrained nonlinear systems : recursive feasibility and input - to - state stability analysis , '' _ ieee trans . autom . control _ , vol . 56 , pp . 72 - 87 , jan . 2011 . y. c. pati , r. rezaiifar , and p. s. krishnaprasad , `` orthogonal matching pursuit : recursive function approximation with applications to wavelet decomposition , '' in _ proc . the 27th annual asilomar conf . on signals , systems and computers _ , pp . 40 - 44 , nov . 1993 . m. nagahara , d. e. quevedo , and j. østergaard , `` sparsely - packetized predictive control by orthogonal matching pursuit ( extended abstract ) , '' in _ 20th international symposium on mathematical theory of networks and systems ( mtns ) _ , july 2012 , to be presented . m. nagahara , d. e. quevedo , t. matsuda , and k. hayashi , `` compressive sampling for networked feedback control , '' in _ proc . ieee int . conf . acoust . , speech signal process . ( icassp ) _ , pp . 2733 - 2736 , mar . 2012 .
|
we study a networked control architecture for linear time - invariant plants in which an unreliable data - rate limited network is placed between the controller and the plant input . to achieve robustness with respect to dropouts , the controller transmits data packets containing plant input predictions , which minimize a finite horizon cost function . in our formulation , we design sparse packets for rate - limited networks by adopting a sparsity - promoting optimization , which can be effectively solved by an orthogonal matching pursuit method . our formulation ensures asymptotic stability of the control loop in the presence of bounded packet dropouts . simulation results indicate that the proposed controller provides sparse control packets , thereby giving bit - rate reductions for the case of memoryless scalar coding schemes when compared to the use of more common quadratic cost functions , as in linear quadratic ( lq ) control .
|
image understanding is becoming one of the most important problems in computer vision and many research efforts have been devoted to this topic . while object recognition and scene recognition have been extensively studied in the task of image classification , event recognition in still images received much less research attention , which also plays an important role in semantic image interpretation .as shown in figure [ fig : example ] , the characterization of event is extremely complicated as the event concept is highly related to many other high - level visual cues , such as objects , scene categories , human garments , human poses , and other context . therefore , event recognition in still images poses more challenges for the current state - of - the - art image classification methods , and needs to be further investigated in the computer vision research .convolutional neural networks ( cnns ) have recently enjoyed great successes in large - scale image classification , in particular for object recognition and scene recognition . for event recognition ,much fewer deep learning methods have been designed for this problem .our previous work proposed a new deep architecture , called _ object - scene convolutional neural network _( os - cnn ) , for cultural event recognition .os - cnns are designed to extract useful information for event understanding from the perspectives of containing objects and scene categories , respectively .os - cnns are composed of two - stream cnns , namely object nets and scene nets .object nets are pre - trained on the large - scale object recognition datasets ( e.g. imagenet ) , and scene nets are based on models learned from the large - scale scene recognition datasets ( e.g. places205 ) .decomposing into object nets and scene nets enables us to use the external large - scale annotated images to initialize os - cnns , which may be further fine tuned elaborately on the event recognition dataset .finally , event recognition is performed based on the late fusion of softmax outputs of object nets and scene nets . following the research line of os - cnns , in this paper, we try to further explore different aspects of os - cnns and better exploit os - cnns for better event recognition .specifically , we design four types of investigation scenarios to study the performance of os - cnns . in the first scenario, we directly use the softmax outputs of cnns as recognition results . in the next three scenarios ,we treat cnns as feature extractors , and use them to extract both _ global _ and _ local _ features of an image region .global features are more compact and aim to capture the holistic structure , while local features focus on describing the image details and local patterns .our experimental results indicate these two kinds of features are complementary to each other and robust for event recognition . based on our empirical explorations with os - cnns ,we come up with our solution for the cultural event recognition track at the iccv chalearn looking at people ( lap ) challenge and we secure the third place .the rest of this paper is organized as follows . 
in section [ sec : os - cnn ] , we will give a brief introduction to os - cnns , including network architectures and implementation details .after that , we will introduce our extensive explorations with os - cnns for event recognition in section [ sec : feature ] .we then report our experimental results in section [ sec : exp ] .finally , we conclude our method and present the future work in section [ sec : con ] .in this section , we will first briefly introduce the architecture of _ object - scene convolutional neural networks _ ( os - cnns ) , which was proposed in our previous work .then , we will present the implementation details of os - cnns , including network structures , data augmentations , and learning policy .event is a relatively complicated concept in computer vision research and highly related with other two problems : object recognition and scene recognition .the basic idea behind os - cnn is to utilize two separate components to perform event recognition from the perspectives of occurring objects and scene context .specifically , os - cnns are composed of object nets and scene nets , as shown in figure [ fig : os - cnn ] .* object nets . *object net is designed to capture useful information of objects to help event recognition .intuitively the occurring objects are able to provide useful cues for event understanding .for instance , in the cultural event of australia day as shown in figure [ fig : example ] , australian flag will be a representative object . as the main goal of object net is to deal with object cues ,we build it based on recent advances on large - scale object recognition , and pre - train the network on the public imagenet models .then , we further fine tune the model parameters on the training dataset of cultural event recognition by setting the output number as ( cultural event recognition dataset containing 100 classes ) . * scene nets .* scene net is expected to extract scene information of image to assist event understanding . in general, the scene context will be helpful for recognizing the event category in the image .for example , in the cultural event of sapporo snow festival as shown in figure [ fig : example ] , outdoor will be usually the scene category . specifically , we pre - train the scene nets by using the models learned on the dataset places205 , which contains 205 scene classes and 2.5 millions images .similar to object nets , we then fine tune the network weights of scene nets on the event recognition dataset , where we set network output number as .based on the above analysis , recognizing cultural event will benefit from the transferred representations learned for object recognition and scene recognition .thus , we will fuse the network outputs of both object nets and scene nets as the prediction of os - cnns . in this subsection, we will describe the implementation details of training os - cnns , including network structures , data augmentations , and learning policy .* network structures .* network structures are of great importance for improving the performance of cnns . in the past several years, many successful network architectures have been proposed for object recognition , such as alexnet , clarifainet , overfeat , googlenet , vggnet , msranet , and inception2 .some good practices can be drawn from the evolution of network architectures : smaller convolutional kernel size , smaller convolutional stride , more convolutional channel , deeper network structure . 
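as a concrete illustration of instantiating such a deep structure , the sketch below loads an imagenet - pretrained vgg-19 with torchvision and attaches a 100 - way event head ; the pytorch environment is an assumption of the sketch , whereas the paper 's own pipeline is caffe - based , as noted below :

```python
import torch
from torchvision import models

# imagenet-pretrained vgg-19 (legacy `pretrained` flag; newer torchvision
# versions use `weights="IMAGENET1K_V1"` instead)
net = models.vgg19(pretrained=True)
net.classifier[-1] = torch.nn.Linear(4096, 100)  # 100-way event head for fine-tuning
net.eval()

x = torch.randn(1, 3, 224, 224)                  # dummy rgb crop
with torch.no_grad():
    conv_maps = net.features(x)                  # conv feature maps, 1 x 512 x 7 x 7
    scores = net(x)                              # 100-way event scores
print(conv_maps.shape, scores.shape)
```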
in this paper , we choose the vggnet-19 as our main investigated structure due to its good performance in object recognition , which is composed of 16 convolutional layers and 3 fully connected layers .the detailed description about vggnet-19 is out of the scope of this paper and can be found in .* data augmentations .* by data augmentation , we mean perturbing an image by transformations that leave the underlying class unchanged .typical transformations include corner cropping , scale jittering , and horizontal flipping . specifically , during the training phase of os - cnns ,we randomly crop image regions ( ) from 4 corners and 1 center of the whole image .meanwhile these cropped regions undergo horizontal flipping randomly .furthermore , we use three different scales to resize training images , where the smallest size of an image is set to .it should be noted that data augmentation is a method applicable to both training images and testing images . during training phase ,data augmentation will generate additional training examples and reduce the influence of over - fitting .for testing phase , data augmentation will help to improve the classification accuracy .the augmented samples can be either regarded as independent images or combined into a single representation by pooling or stacking operations . in the current implementation , during the test phase , we use sum pooling to aggregate these representations of augmented samples into a single representation .* learning policy .* effective training methods are very crucial for learning cnn models .as the training dataset of cultural event recognition is relatively small compared with imagenet and places205 , we resort to pre - training os - cnns by using these public available models trained on imagenet and places205 .specifically , we pre - train object nets with public vggnet-19 model , which achieved the top performance at ilsvrc2014 .for scene net , we use the model released by to initialize the network weights , which has obtained the best performance on the places205 dataset so far .the network weights are learned using the mini - batch stochastic gradient descent with momentum ( set to 0.9 ) . at each iteration, a mini - batch of 256 images is constructed by random sampling .the dropout ratios for fully connected layers are set as .as we pre - train network weights with imagenet and places205 models , we set a smaller learning rate for fine tuning os - cnns : learning rate starts with , decreases to after 5k iterations , decreases to after 10k iterations and the training process ends at 12k iterations . to speed up the training process, we use a multi - gpu extension version of caffe toolbox , which is publicly available online .we have introduced the architectures and implementation details about os - cnns in the previous section . in this section ,as shown in figure [ fig : pipeline ] , we will focus on describing the explorations of os - cnn activations from different layers and try to improve the recognition performance . the simplest way to utilize os - cnns forcultural event recognition is directly using the outputs ( softmax layer ) of cnn networks as final prediction results .specifically , given an image , its recognition score is calculated as follows : where and are the prediction scores of object nets and scene nets , and are the fusion weights of object nets and scene nets . 
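a minimal sketch of this late - fusion rule , with random placeholder arrays standing in for the score outputs of the two nets :

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# illustrative stand-ins: raw class scores from the two fine-tuned nets
# for a batch of 8 images over the 100 cultural-event classes
scores_object = np.random.randn(8, 100)
scores_scene = np.random.randn(8, 100)

w_o, w_s = 0.5, 0.5                        # equal fusion weights
r = w_o * softmax(scores_object) + w_s * softmax(scores_scene)
prediction = r.argmax(axis=1)              # os-cnn event prediction per image
print(prediction)
```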
in the current implementation ,fusion weights are set to be equal for object nets and scene nets .another way to deploy os - cnns for cultural event recognition is to treat them as generic feature extractors and use them to extract the global representation of an image region .we usually extract the activations of * fully connected layers * , which are very compact and discriminative . in this case, we only use the pre - trained models without fine - tuning .specifically , given an image region , we extract this global representation based on os - cnns as follows : ,\ ] ] where and are the cnn activations from pre - trained object nets and scene nets , and are the fusion weights of object nets and scene nets . in current implementation ,the fusion weights are set to be equal for object nets and scene nets . in previous scenario , os - cnnsare only pre - trained on large scale dataset of object recognition and scene recognition , and directly applied to the smaller event recognition dataset .however , it was demonstrated that fine - tuning a pre - trained cnns on the target data can improve the performance a lot .we consider fine - tuning the os - cnns on the event recognition dataset and the resulted image representations become dataset - specific .after fine - tuning process , we obtain the following global representation with the fine - tuned os - cnns : ,\ ] ] where and are the cnn activations from the fine - tuned object nets and scene nets , and are the fusion weights of object nets and scene nets . in current implementation ,the fusion weights are set to be equal for object nets and scene nets .in previous two scenarios , we extract a global representation of an image region with os - cnns .although this global representation is compact and discriminative , it may lack the ability of describing local patterns and detailed information .inspired by the recent success on video - based action recognition with deep convolutional descriptors , we investigate the effectiveness of * convolutional layer * activations .convolutional layer features have been also demonstrated to be effective in image - based tasks , such as object recognition , scene recognition and texture recognition . in this scenario ,os - cnns are first pre - trained on large - scale imagenet and places205 datasets , and then fine - tuned on the event recognition dataset , just as in scenario 3 .specifically , given an image region , we first extract the convolutional feature maps of os - cnns ( activations of convolutional layers ) , where is feature map size and is feature channel number .each activation value in the convolutional feature map corresponds to a local receptive field in the original image , and therefore we call these activations of convolutional layers as os - cnn local representations . after extracting os - cnn local representations , we utilize two normalization methods , namely _ channel normalization _ and _ spatial normalization _ proposed in , to pre - process these convolutional feature maps into transformed convolutional feature maps .more details regarding these two normalization methods are out scope of this paper and can be found in .the normalized cnn activation at each postion is called as the _ transformed deep - convolutional descriptor _ (these two kinds of normalization methods have turned out to be effective for improving the performance of cnn local representations in . 
moreover , the combination of them can obtain higher performance .therefore , we will use both normalization methods in our experimental explorations .finally , we employ fisher vector to encode these tdds into a global representation due to its good performance in object recognition and action recognition . in particular , according to our previous comprehensive study on encoding methods , we first use pca to reduce the dimension of tdd to . then each tdd is soft - quantized with a gaussian mixture model ( gmm ) with components ( set to 256 ) .the first and second order differences between each tdd and its gaussian center are aggregated in the block and , respectively . the final fisher vector representation is yielded by concatenating these blocks together : .\ ] ] for os - cnns , the fisher vector of local representation is defined as follows : ,\ ] ] where is the fisher vector representation from object nets , is the fisher vector representation from scene nets , and are their fusion weights and set to be equal to each other in the current implementation . all the representations in previous three scenariosare used to construct a linear classifier , where is the weight of linear classifier . in our implementation , we choose libsvm as the classifier to learn the weight , where the parameter , balancing regularizer and loss , is set as .it is worth noting that all these representations are first normalized before fed into svm for training . for os - cnn global representations , we use -normalization , and for os - cnn local representations, we use intra normalization and power -normalization ..event recognition performance of os - cnn global and local representations on the validation data . [ cols="^,^,^,^",options="header " , ] * datasets . *the iccv chalearn lap challenge 2015 contains a track of cultural event recognition and provides an event recognition dataset .this dataset contains images collected from two image search engines ( google images and bing images ) .there are totally 100 event classes ( 99 event classes and 1 background class ) from different countries and some images are shown in figure [ fig : example ] . from these samples , we see that cultural event recognition is really complicated , where garments , human poses , objects and scene context all constitute the possible cues to be exploited for event understanding .this dataset is divided into three parts : development data ( 14,332 images ) , validation data ( 5,704 images ) , and evaluation data ( 8,669 images ) .as we can not access the label of evaluation data , we mainly train our models on the development data and report the results on the validation data. * evaluation protocol . *the principal quantitative measure is based on precision recall curve .they use the area under this curve as the computation of the average precision ( ap ) , which is calculated by numerical integration . finally , they average these per - class ap values across all event classes and employ the mean average precision ( map ) as the final ranking criteria . hence , in our exploration experiments , we report our results evaluated as ap value for each class and map value for all classes . * settings . 
* in this exploration experiment , we use the vggnet-19 as the os - cnn network structure .we extract activations from two fully connected layers ( ` fc6 ` , ` fc7 ` ) as os - cnn global representations , and activations from four convolutional layers ( ` conv5 - 1 ` , ` conv5 - 2 ` , ` conv5 - 3 ` , ` conv5 - 4 ` ) as os - cnn local representations .it should be noted that we choose the activations after rectified linear units ( relus ) .we use -normalization to further process os - cnn global representations for better svm training . for fisher vector representation of os - cnn local representation ,we employ intra - normalization and power -normalization , as suggested by . * analysis .* we first report the numerical results in table [ tbl : result ] . from these results ,several conclusions can be drawn as follows : * we see that the object nets outperform scene nets on the task of cultural event recognition , which may imply that object cues play more important roles than scene cues for cultural event understanding .* we observe that os - cnns are effective for event recognition as it extract both object and scene information from the image .they achieve superior performance to object nets and scene nets , no matter what scenario is adopted . *we can notice that combining fine tuned features with linear svm classifier ( scenario 3 ) is able to obtain better performance than direct using the softmax output of cnns ( scenario 1 ) .this result may be ascribed to the fact that cnns are easily over - fitted to the training samples when the number of training images is relatively small .* comparing fine - tuned features ( scenario 3 ) with pre - trained features ( scenario 2 ) , we may conclude that fine tuning on the target dataset is very useful for improving recognition performance , which agrees with the findings of . * comparing the local representations ( scenario 4 ) and global representations ( scenario 3 ) of cnns , we see that global representation achieve slightly higher recognition accuracy . *we further combine the global representation ( ` fc7 ` ) with local representation ( ` conv5 - 3 ` ) of cnns and find that this combination is capable of boosting final recognition performance .this performance improvement indicates that different layers of cnns capture different level abstraction of original image .these feature activations from different layers are complementary to each other .we also plot the ap values for all event classes in figure [ fig : ap ] . from these ap values, we see that the events of ` monkey buffet festival ` and ` battle of the oranges ` achieve the highest performance ( 100% ) .this result may be ascribed to the fact that there are specific objects in these two event categories . at the same time, we notice that some event classes obtain very low ap values , such as ` halloween festival of the dead ` , ` fiesta de la candelaria ` , ` apokries ` , and ` viking festival ` .the ap values of these cultural event classes are below 50% . in general , there are no specific objects and scene context in these difficult event classes , and besides these classes are easily confused with other classes from the perspective of visual appearance , as observed from figure [ fig : result_example ] .we visualize several recognition examples in figure [ fig : result_example ] . 
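before turning to the qualitative examples in figure [ fig : result_example ] , a hedged sketch of the fisher - vector encoding used for the os - cnn local representations ; the random stand - in descriptors , the reduced gmm size of 16 ( instead of 256 ) , and fitting pca and the gmm on the same descriptors are simplifying assumptions :

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def fisher_vector(X, gmm):
    """fisher vector of descriptors X (T x d) under a diagonal-covariance gmm."""
    T, d = X.shape
    g = gmm.predict_proba(X)                              # soft assignments, T x K
    pi, mu, var = gmm.weights_, gmm.means_, gmm.covariances_
    diff = (X[:, None, :] - mu[None]) / np.sqrt(var)[None]  # T x K x d
    Fmu = (g[..., None] * diff).sum(0) / (T * np.sqrt(pi)[:, None])
    Fvar = (g[..., None] * (diff ** 2 - 1)).sum(0) / (T * np.sqrt(2 * pi)[:, None])
    fv = np.concatenate([Fmu.ravel(), Fvar.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)              # l2 normalization

# stand-in tdds: a 14x14 conv map with 512 channels, pca-reduced to 64 dims
rng = np.random.default_rng(4)
tdds = rng.standard_normal((14 * 14, 512))
pca = PCA(n_components=64).fit(tdds)
gmm = GaussianMixture(n_components=16, covariance_type="diag").fit(pca.transform(tdds))
print(fisher_vector(pca.transform(tdds), gmm).shape)      # (2 * 16 * 64,)
```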
in the row 1, we give eight examples that are successfully predicted by our method , from classes like ` keene pummpking ` , ` boryeong mud ` , ` afrikaburn ` and so on .meanwhile , we also provide some failure cases with high confidence from our method in the rows 2,3,4 . from thesewrong predicted examples , we see that these failure cases are rather reasonable and there exists great confusion between some cultural event classes .for example , the event classes of ` dia de los muertos ` and ` halloween festival of the dead ` share similar human make - up and garments .the event classes of ` up helly aa ` and ` viking festtival ` share similar human dresses and containing objects .the event classes of ` harbin icen and snow festival ` and ` sapporo snow festival ` share similar scene context and color appearance .the event classes of ` chinese new year ` and ` pingxi lattern festival ` share similar containing objects . in summary, these examples in figure [ fig : result_example ] indicate that the concept of event is really complicated and there only exist slight difference between some event classes . for final evaluation, we merge the development data ( 14,332 images ) and validation data ( 5,704 images ) into a single training dataset ( 20,036 images ) and re - train our os - cnn models on this new dataset .our final submission results to the iccv chalearn lap challenge are based on our re - trained model . according to the above experimental explorations , we conclude that the os - cnn global and local representations are complementary to each other .thus , we choose to combine activations from ` fc7 ` and ` conv5 - 3 ` layers , to keep a balance between performance and efficiency .meanwhile , our previous study demonstrated that googlenet is complementary to vggnet .hence , we also extract a global representation by using the os - cnns of googlenet in our challenge solution . in summary , our challenge solution is composed of three representations : ( i ) os - cnn vggnet-19 local representations , ( ii ) os - cnn vggnet-19 global representations , and ( iii ) os - cnn googlenet global representations .the challenge results are summarized in table [ tbl : challenge ] .we see that our method is among the top performers and our map is very close to the best performance of this challenge ( 84.7% vs. 85.4% ) .regarding computational cost , our implementation is based on cuda 7.0 and matlab 2013a , and it takes about 1s to process one image in our workstation equipped with 8 cores cpu , 48 g ram , and tesla k40 gpu .in this paper , we have comprehensively studied different aspects of os - cnns for better cultural event recognition .specifically , we investigate the effectiveness of cnn activations from different layers by designing four types scenarios of adapting os - cnns to the task of cultural event recognition . from our empirical study , we demonstrate that the cnn activations from convolutional layers and fully connected layers are complementary to each other , and the combination of them is able to boost recognition performance . finally , we come up with a solution by using os - cnns at the iccv chalearn lap challenge and secure the third place . 
in the future, we may consider how to incorporate more visual cues such as human poses , garments , object and scene relationship in a systematic manner for event recognition in still images .this work is supported by a donation of two tesla k40 gpus from nvidia corporation .meanwhile this work is partially supported by national natural science foundation of china ( 91320101 , 61472410 ) , shenzhen basic research program ( jcyj20120903092050890 , jcyj20120617114614438 , jcyj20130402113127496 ) , 100 talents program of cas , and guangdong innovative research team program ( no.201001d0104648280 ) .
|
event recognition from still images is one of the most important problems for image understanding . however , compared with object recognition and scene recognition , event recognition has received much less research attention in computer vision community . this paper addresses the problem of cultural event recognition in still images and focuses on applying deep learning methods on this problem . in particular , we utilize the successful architecture of _ object - scene convolutional neural networks _ ( os - cnns ) to perform event recognition . os - cnns are composed of object nets and scene nets , which transfer the learned representations from the pre - trained models on large - scale object and scene recognition datasets , respectively . we propose four types of scenarios to explore os - cnns for event recognition by treating them as either `` end - to - end event predictors '' or `` generic feature extractors '' . our experimental results demonstrate that the global and local representations of os - cnns are complementary to each other . finally , based on our investigation of os - cnns , we come up with a solution for the cultural event recognition track at the iccv chalearn looking at people ( lap ) challenge 2015 . our team secures the third place at this challenge and our result is very close to the best performance .
|
the 2-state quantum walk ( qw ) on the line has been intensively studied , and limit theorems have been obtained . for example , the limit distribution of the usual walks was calculated . in the present paper , we consider a localization model of 2-state qws . the motivation is the analysis of time - inhomogeneous qws . our walks are determined by two matrices , one of which operates the walk at only half - time . the walk can be considered as one of the time - dependent models , for which there are some results . particularly , refs . and discuss localization . we present two limit theorems that show the localization of the probability distribution . one is the calculation of the limit value of the probability that the walker is at position starting from the origin , where is time , and the other is the convergence in distribution . in the usual walks , localization can not occur . however , if we change the matrix at only half - time , then the results in this paper show that localization occurs . the localization of qws , which can be applied to quantum search , is often investigated . if for a position , we say that localization occurs . therefore , our result implies that localization occurs for any initial state . moreover , we obtain the convergence in distribution of as . this limit distribution is described by both a -function and a density function . for the 3-state grover walk , similar limit theorems were shown . the limit distribution of a 4-state walk corresponding to the 2-state walk with memory was also computed . the present paper is organized as follows . in section 2 , we define our walk . we present the limit theorems as our main result in section 3 . section 4 is devoted to the proofs of the theorems . by using fourier analysis , we obtain the limit distribution . a summary is given in the final section . in this section , we define a localization model of 2-state qws on the line . let ( ) be an infinite - component vector which denotes the position of the walker .
here , the -th component of is 1 and the others are 0 . let be the amplitude of the walker at position at time . the walk at time is expressed by . the time evolution of our walk is described by the following two unitary matrices : u=&\left[\begin{array}{cc } \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{array}\right ] = \left[\begin{array}{cc } c & s \\ s & -c \end{array}\right],\\ h=&\left[\begin{array}{cc } \cos\theta_1 & \sin\theta_1 \\ \sin\theta_1 & -\cos\theta_1 \end{array}\right ] = \left[\begin{array}{cc } c_1 & s_1 \\ s_1 & -c_1 \end{array}\right],\end{aligned}\ ] ] where c=\cos\theta , s=\sin\theta and c_1=\cos\theta_1 , s_1=\sin\theta_1 . moreover , we introduce four matrices : p=\left[\begin{array}{cc } c & s\\ 0&0 \end{array}\right],\ , q=\left[\begin{array}{cc } 0&0\\ s & -c \end{array}\right],\ , p_1=\left[\begin{array}{cc } c_1 & s_1\\ 0&0 \end{array}\right],\ , q_1=\left[\begin{array}{cc } 0&0\\ s_1 & -c_1 \end{array}\right].\ ] then , the evolution is determined by \ket{\psi_{t+1}(x)}=\left\{\begin{array}{ll } p\ket{\psi_t(x+1)}+q\ket{\psi_t(x-1)}&(t\neq\tau),\\ p_1\ket{\psi_t(x+1)}+q_1\ket{\psi_t(x-1)}&(t=\tau ) \end{array}\right . , \label{eq : te}\ ] where . note that and . the probability that the quantum walker is at position at time , , is defined by . in our main results , we focus on the probability distribution at time . so , time is called half - time in our walk . the fourier transform of is given by . by the inverse fourier transform , we have . from ( [ eq : te ] ) and ( [ eq : ft ] ) , the time evolution of becomes \ket{\hat{\psi}_{t+1}(k)}=\left\{\begin{array}{ll } \hat{u}(k)\ket{\hat{\psi}_{t}(k)}&(t\neq\tau),\\ \hat{h}(k)\ket{\hat{\psi}_{t}(k)}&(t=\tau ) \end{array}\right.,\label{eq : timeevo}\ ] where and . [ figures : the probability distribution of the walk and the corresponding density plot , each shown for two parameter settings . ] , then we have . figure [ fig : p0 ] corresponds to the behavior of each probability in ( [ eq : prob_012_1 ] ) , ( [ eq : prob_012_0 ] ) and ( [ eq : prob_012_2 ] ) . next , we present the theorem of the convergence in distribution for , where . some similar results corresponding to theorem 2 were shown for a 3-state walk or a 4-state walk . moreover , localization of multi - state walks was computed . the limit distribution of the usual 2-state walk does not have a delta measure . however , from theorem 2 we find that the limit distribution of the 2-state walk defined in this paper does have a delta measure . [ figure : the rescaled distribution as , panels ( a ) , ( b ) and ( c ) . ] _ for our localization model of 2-state qws , we have _ _ where _ \nonumber\\ & \times \frac{a_2x^4+a_1x^2+a_0}{c^2(1-x^2)}\,i_{(-|c|,|c|)}(x),\end{aligned}\ ] ] _ and denotes the delta - measure at the origin and if , if . the values are independent of the initial state as follows : _ _ particularly , if , then we obtain the density function of the usual walk , that is , _ \,i_{(-|c|,|c|)}(x).\end{aligned}\ ] ] figure [ fig : density ] shows the density function with . we have in figure [ fig : density ] . we should note the following relation between theorems 1 and 2 : therefore , is not a probability measure . in this section , we will prove theorems 1 and 2 of section 3 . our approach is based on fourier analysis . at first , the eigenvalues of can be computed as . the normalized eigenvector corresponding to is .\ ] therefore , the fourier transform is expressed by as follows : in the proof , we focus on even time . from ( [ eq : ft_even ] ) and ( [ eq : psi_0_kiyosato ] ) , the fourier transform at time is given by . moreover , rewriting as , we obtain . we should note . calculating the inverse fourier transform , we have . by using the riemann - lebesgue lemma , we see , where denotes .
from ( [ eq : psi_t_sim ] ) , we get &(x=0),\\[5 mm ] \frac{(-1)^\tau(c_1s - s_1c)}{c^2 } \left[\begin{array}{c } -csi_2\alpha-(1-|s|)\beta\\ i_2\left\{|s|(1-|s|)\alpha+cs\beta\right\ } \end{array}\right]&(x=-2),\\[5 mm ] \frac{(-1)^\tau(c_1s - s_1c)}{c^2 } \left[\begin{array}{c } i_2\left\{cs\alpha-|s|(1-|s|)\beta\right\}\\ ( 1-|s|)\alpha - csi_2\beta \end{array}\right]&(x=2),\\[5 mm ] \frac{(-1)^\tau(c_1s - s_1c)i_x}{c^2 } \left[\begin{array}{c } \pm cs\alpha-|s|(1\mp |s|)\beta\\ |s|(1\pm |s|)\alpha\mp cs\beta \end{array}\right]&(x=\pm 4,\pm 6,\ldots),\\[5 mm ] \left[\begin{array}{c } 0\\0 \end{array}\right]&(x=\pm 1,\pm 3,\ldots ) , \end{array}\right.\label{eq : psi_even}\end{aligned}\ ] ] where . similarly , we can compute as follows : &(x=-1),\\[5 mm ] \frac{(-1)^\tau(c_1s - s_1c)(1-|s|)}{c^3 } \left[\begin{array}{c } cs\alpha-|s|(1-|s|)\beta\\ -c(c\alpha+s\beta ) \end{array}\right]&(x=1),\\[5 mm ] ( -1)^\tau j_x \left[\begin{array}{c } -c^2\left\{s(1-|s|)\alpha+c|s|\beta\right\}\\ ( 1-|s|)\left\{c|s|(1-|s|)\alpha+c^2s\beta\right\ } \end{array}\right]&(x=-3,-5,\ldots),\\[5 mm ] ( -1)^\tau j_x \left[\begin{array}{c } ( 1-|s|)\left\{-c^2s\alpha+c|s|(1-|s|)\beta\right\}\\ -c^2\left\{c|s|\alpha - s(1-|s|)\beta\right\ } \end{array}\right]&(x=3,5,\ldots),\\[5 mm ] \left[\begin{array}{c } 0\\0 \end{array}\right]&(x=0,\pm 2,\pm 4,\ldots ) , \end{array}\right.\label{eq : psi_odd}\end{aligned}\ ] ] where . from ( [ eq : prob ] ) , ( [ eq : psi_even ] ) and ( [ eq : psi_odd ] ) , the proof is completed . we calculate the characteristic function as , where denotes the expected value of . at first , ( [ eq : psi_2t+2 ] ) can be written as , where , and substituting , we can calculate the r - th moment of as follows : , where , and . by using the riemann - lebesgue lemma , we have , where . therefore , we obtain \nonumber\\ & \times\frac{a_2x^4+a_1x^2+a_0}{c^2(1-x^2)}\,i_{(-|c|,|c|)}(x)\,dx\nonumber\\ = & \int_{-\infty}^{\infty } x^r f(x)\,dx,\label{eq : r - th_mom}\end{aligned}\ ] ] where . we should remark . by ( [ eq : r - th_mom ] ) , we can compute the characteristic function as . thus the proof of theorem 2 is completed . in this final section , we conclude and discuss the probability distribution of our walks . in the usual 2-state walk defined by the matrix , localization does not occur at all . however , if another matrix operates the walk at only half - time , then localization occurs . in theorem 1 , the behavior of the probability was calculated as . moreover , we found from theorem 2 that the limit distribution of has both a delta measure and a density function . an interesting open problem is the calculation of the limit distribution for a walk in which the matrix operates more than twice . the author is grateful to norio konno for useful comments and also to joe yuichiro wakano and the meiji university global coe program `` formation and development of mathematical sciences based on modeling and analysis '' for the support . t. machida and n. konno , _ proceedings of the 4th international workshop on natural computing ( iwnc2009 ) _ , proceedings in information and communications technology ( pict ) , vol . 2 ( 2010 ) , p. 226 .
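as a numerical complement to the analysis above , the walk defined by ( [ eq : te ] ) is straightforward to simulate , and the localization at the origin can be observed directly . the following sketch is an illustration only : the angles , the half - time , and the initial state are assumed values , and the matrices are the p , q , p_1 , q_1 introduced in section 2 .

```python
import numpy as np

def split_matrices(theta):
    # p keeps the upper row of the coin, q the lower row (p + q = coin)
    c, s = np.cos(theta), np.sin(theta)
    p = np.array([[c, s], [0.0, 0.0]])
    q = np.array([[0.0, 0.0], [s, -c]])
    return p, q

def probability(theta, theta1, tau, t_final, alpha, beta):
    n = 2 * t_final + 1                       # positions -t_final .. t_final
    psi = np.zeros((n, 2), dtype=complex)
    psi[t_final] = [alpha, beta]              # walker starts at the origin
    p, q = split_matrices(theta)
    p1, q1 = split_matrices(theta1)
    for t in range(t_final):
        a, b = (p1, q1) if t == tau else (p, q)
        nxt = np.zeros_like(psi)
        nxt[:-1] += psi[1:] @ a.T             # p acts on psi_t(x + 1)
        nxt[1:] += psi[:-1] @ b.T             # q acts on psi_t(x - 1)
        psi = nxt
    return (np.abs(psi) ** 2).sum(axis=1)

# assumed parameters: a Hadamard-type walk with a different coin at time tau
prob = probability(np.pi / 4, np.pi / 3, tau=50, t_final=100,
                   alpha=1 / np.sqrt(2), beta=1j / np.sqrt(2))
print("P(X_100 = 0) =", prob[100])            # stays bounded away from 0
print("total probability =", prob.sum())      # unitarity check: equals 1
```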
|
we consider 2-state quantum walks ( qws ) on the line , which are defined by two matrices . one of the matrices operates the walk at only half - time . in the usual qws , localization does not occur at all . however , our walk can be localized around the origin . in this paper , we present two limit theorems : one gives a stationary distribution , and the other is a convergence theorem in distribution .
|
in the classical statistical estimation problem , the starting point is a family of probability measures depending on the parameter belonging to some subset of a finite - dimensional euclidean space . each is the distribution of a random element . it is assumed that a realization of one random element corresponding to one value of the parameter is observed , and the objective is to estimate the value of this parameter from the observations . the intuition is to select the value corresponding to the random element that is _ most likely _ to produce the observations . a rigorous mathematical implementation of this idea leads to the notion of the regular statistical model : the statistical model ( or estimation problem ) is called regular if the following two conditions are satisfied : * there exists a probability measure such that all measures are absolutely continuous with respect to ; * the density , called the likelihood ratio , has a special property , called local asymptotic normality . if at least one of the above conditions is violated , the problem is called singular . in regular models , the estimator of the unknown parameter is constructed by maximizing the likelihood ratio and is called the maximum likelihood estimator ( mle ) . since , as a rule , , the consistency of the estimator is studied , that is , the convergence of to as more and more information becomes available . in all known regular statistical problems , the amount of information can be increased in one of two ways : ( a ) increasing the sample size , for example , the observation time interval ( large sample asymptotic ) ; ( b ) reducing the amplitude of noise ( small noise asymptotic ) . the asymptotic behavior of in both cases is well - studied . it is also known that many other estimators in regular models are asymptotically equivalent to the mle . while all regular models are in a sense the same , each singular model is different . sometimes , it is possible to approximate a singular model with a sequence of regular models . for each regular model , an mle is constructed , and then in the limit one can often get the true value of the parameter while both the sample size and the noise amplitude are fixed . some singular models can not be approximated by a sequence of regular models and admit estimators that have nothing to do with the mle . in this paper , section [ subsection - exact ] , we introduce a completely new type of such estimators for a large class of singular models . infinite - dimensional stochastic evolution equations , that is , stochastic evolution equations in infinite - dimensional spaces , are a rich source of statistical problems , both regular and singular . a typical example is the itô equation , where \mathcal{a}_0 , \mathcal{a}_1 , and \mathcal{m}_j are linear operators . depending on the operators in the equation , the estimation model can be regular , a singular limit of regular problems , or completely singular . if are partial differential or pseudo - differential operators , becomes a stochastic partial differential equation ( spde ) , which is becoming increasingly popular for modelling various phenomena in fluid mechanics , oceanography , temperature anomalies , finance , and other domains . various estimation problems for different types of spdes have been investigated by many authors .
depending on the stochastic part , the equation is classified as follows : * equation with * additive noise * , if for all ; * equation with * multiplicative noise * ( or bilinear equation ) otherwise . depending on the operators , the equation is classified as follows : * * diagonalizable equation * , if the operators , , and , , have a common system of eigenfunctions , and this system is an orthonormal basis in a suitable hilbert space . * * non - diagonalizable equation * otherwise . a diagonalizable equation is reduced to an infinite system of _ uncoupled _ one - dimensional diffusion processes ; these processes are the fourier coefficients of the solution in the basis . as a result , while somewhat restrictive as a modelling tool , diagonalizable equations are an extremely convenient object for studying estimation problems and often provide the benchmark results that carry over to more general equations . the parameter estimation problem for a diagonalizable equation with additive space - time white noise ( that is , and for all ) was studied for the first time by huebner , khasminskii , and rozovskii , and further investigated in . the main feature of this problem is that every -dimensional projection of the equation leads to a regular statistical problem , but the problem can become singular in the limit ( a singular limit of regular problems ) ; when this happens , the dimension of the projection becomes a natural asymptotic parameter of the problem . once the diagonalizable model is well understood , extensions to more general equations can be considered . this paper is the first attempt to investigate the estimation problem for infinite - dimensional bilinear equations . such models are often completely singular , that is , can not be represented as a limit of regular models . we consider the more tractable situation of diagonalizable equations . in section [ section - settings ] we provide the necessary background on stochastic evolution equations , with emphasis on diagonalizable bilinear equations . the maximum likelihood estimator ( mle ) and its modifications for diagonalizable bilinear equations are studied in section [ section - estimates ] . we give sufficient conditions on the operators that ensure consistency and asymptotic normality of the mle . we also demonstrate that the mle in this model is not always the best estimator , which , for a singular model , is not at all surprising . section [ subsection - exact ] emphasizes the point even more by introducing a closed - form exact estimator . due to the specific structure of the stochastic term , for a large class of infinite - dimensional systems with finite - dimensional noise , one can get the _ exact _ value of the unknown parameter after a finite number of arithmetic manipulations of the observations . the very existence of such estimators in these models is rather remarkable and has no analogue in classical statistics . as an illustration , let be a positive number , a standard wiener process , and consider the itô equation with zero boundary conditions . if , and then and each is a geometric brownian motion : we assume that for all . in sections [ section - estimates ] and [ subsection - exact ] we establish the following result . if the solution of equation is observed in the form , then the parameter can be computed in each of the following ways : 1 . for every ; 2 . for every ; 3 .
for every and . both ( e1 ) and ( e2 ) are essentially the same maximum likelihood estimator , but the infinite - dimensional nature of the equation makes it possible to study this estimator in two different asymptotic regimes . ( e3 ) is a closed - form exact estimator . while it is most likely to be the best choice for this particular problem , we show in section [ subsection - exact ] that the computational complexity of closed - form exact estimators can dramatically increase with the number of wiener processes driving the equation , while the complexity of the mle is almost unaffected by this number . the result is another unexpected feature of closed - form exact estimators : even though they produce the exact value of the parameter , they are not always the best choice computationally . in this section we introduce the diagonalizable stochastic parabolic equation depending on a parameter and study the main properties of the solution . let be a separable hilbert space with the inner product and the corresponding norm . let be a densely - defined linear operator on with the following property : there exists a positive number such that for every from the domain of . then the operator powers are well defined and generate the spaces : for , is the domain of ; ; for , is the completion of with respect to the norm ( see for instance krein et al . ) . by construction , the collection of spaces has the following properties : * for every ; * for the space is densely and continuously embedded into : and there exists a positive number such that for all ; * for every and , the space is the dual of relative to the inner product in , with duality given by . let be a stochastic basis with the usual assumptions , and let be a collection of independent standard brownian motions on this basis . consider the following itô equation , where are linear operators , and are adapted processes , and is a scalar parameter belonging to an open set . [ def000 ] + ( a ) equation is called an equation with additive noise if for all . otherwise , is called an equation with multiplicative noise ( also known as a bilinear equation ) . + ( b ) equation is called diagonalizable if the operators , have a common system of eigenfunctions such that is an orthonormal basis in and each belongs to every . + ( c ) equation is called parabolic in the triple if * the operator is uniformly bounded from to : there exists a positive real number such that for all , ; * there exists a positive number and a real number such that , for every , , . [ rm00 ] ( a ) note that and imply uniform continuity of the family of operators , from to ; in fact , . ( b ) if equation is parabolic , then condition implies that , where is the identity operator . the cauchy - schwarz inequality and the continuous embedding of into imply for some , uniformly in . as a result , we can take for some fixed . _ from now on , if equation is parabolic and diagonalizable , we will assume that the operator has the same eigenfunctions as the operators ; by remark [ rm00 ] , this leads to no loss of generality . _ [ ex : main ] \(a ) for and , consider the equation with periodic boundary conditions ; . then is the sobolev space on the unit circle ( see , for example , shubin ,
section i.7 ) and , where is the laplace operator on with periodic boundary conditions . direct computations show that equation is diagonalizable ; it is parabolic if and only if . \(b ) let be a smooth bounded domain in . let be the laplace operator on with zero boundary conditions . it is known ( for example , from shubin ) that 1 . the eigenfunctions of are smooth in and form an orthonormal basis in ; 2 . the corresponding eigenvalues , can be arranged so that , and there exists a number such that , that is , . we take , , where is the identity operator . then and the operator generates the hilbert spaces , and , for every , the space is the closure of the set of smooth compactly supported functions on with respect to the norm , which is an equivalent norm in . let and be real numbers . then the stochastic equation is * always diagonalizable ; * parabolic in for every if and only if . indeed , we have , , , , , and , and so holds with and . taking in , where is a smooth bounded domain in , and , , , , we get a bilinear equation driven by space - time white noise . direct analysis shows that this equation is not diagonalizable . moreover , the equation is parabolic if and only if , that is , when is an interval ; for details , see the lecture notes by walsh . for a diagonalizable equation , the parabolicity condition can be expressed in terms of the eigenvalues of the operators in the equation . [ th0 ] assume that equation is diagonalizable , and with no loss of generality ( see remark [ rm00 ] ) , we also assume that . then equation is parabolic in the triple if and only if there exist positive real numbers and a real number such that , for all and , . we show that , for a diagonalizable equation , is equivalent to and is equivalent to . indeed , note that for every , then is with , and is with . since both and are uniform in and the collection is dense in every , the proof of the theorem is complete . the following is the basic existence / uniqueness / regularity result for parabolic equations ; for the proof , see rozovskii ( theorem 3.2.1 ) . [ th1 ] assume that equation is parabolic in the triple and 1 . the initial condition is deterministic and belongs to ; 2 . the process is -adapted with values in and ; 3 . each process is -adapted with values in and . then there exists a unique -adapted process with the following properties : * ; * is a solution of , that is , the equality holds in for all ] . the objective is to estimate the real number from the observations ] coincides with the sigma - algebra generated by $ ] ( some of can , in principle , be zeroes ) . moreover , as was mentioned above , the statistical estimation model for , involving two or more processes , is singular . in what follows , we will see how to use this singularity to gain computational advantage over . the problem can now be stated as follows : given a sequence of numbers such that , can we transform it into a sequence such that ? if holds , it is natural to say that converges to faster than . accelerating the convergence of a sequence is a classical problem in numerical analysis . the main features of this problem are ( a ) there are many different methods to accelerate the convergence , and ( b ) the effectiveness of every method varies from sequence to sequence . we will investigate two methods : 1 . weighted averaging ; 2 . aitken's method . let be a sequence of non - negative numbers and define the weighted averaging estimator by . then 1 . for every and , is an unbiased estimator of . 2 .
for every , as , converges to with probability one and converges in distribution to a gaussian random variable with zero mean and variance . 3 . if , in addition , holds , then , for every , as , converges to with probability one . by , from which the first two statements of the theorem follow . for the last statement , we combine with the toeplitz lemma : if and , then . the behavior of , as , can be just about anything . take , , . then , * with , , and we get and for some ; recall that , for , notation means * with , , and we get and * with , , and we get and * with , , and we get and . next , we consider * aitken's method * . this method consists in transforming a sequence into a sequence . the main result concerning this method is that if and , then and , that is , the sequence converges to the same limit but faster . accordingly , under the condition , we define , with the hope that . in general , there is no guarantee that this will be the case , because typically for some and ; so , if we set , we get by theorem [ th : mle ] . direct investigation of the sequence is possible if there is only one wiener process driving the equation , that is , for . in this case , shows that , where . then direct computations show that * if , , then * if , then . for more than one wiener process , we find , where is a two - dimensional gaussian vector with known distribution . the analysis of this estimator , while possible , is technically much more difficult and will require many additional assumptions on . we believe that this analysis falls outside the scope of this paper , and we present here only some numerical results . we suppose that the fourier coefficients satisfy with , the noise term is driven by wiener processes , and the true value of the parameter is . from , we note that the estimates can be calculated if we only know the value of , rather than the whole path . using the closed - form solution of equation , we simulate directly , without applying any discretization scheme to the process . three types of estimates are presented in figure 1 . the obtained numerical results are consistent with the above theoretical results : aitken's method performs best , and weighted - average estimates with perform better than simple estimates . in regular models , the estimator is consistent in the large sample or small noise limit ; neither of these limits can be evaluated exactly from any actual observations . in singular models , there often exists an estimator that is consistent in a limit that can potentially be evaluated exactly from the available observations . still , no expression can be evaluated on a computer unless the expression involves only finitely many operations of addition , subtraction , multiplication , and division .
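before turning to closed - form exact estimators , the two acceleration devices compared above can be sketched in a few lines . the sequences below are synthetic toys , not the estimator sequences of the paper : the noise model , weights , and convergence rate are assumed choices for illustration .

```python
import numpy as np

def weighted_average(estimates, weights):
    b = np.asarray(weights, dtype=float)
    th = np.asarray(estimates, dtype=float)
    return np.cumsum(b * th) / np.cumsum(b)

def aitken(x):
    # Aitken's delta-squared transform; exact for x_k = theta + c * r**k
    x = np.asarray(x, dtype=float)
    num = (x[2:] - x[1:-1]) ** 2
    den = x[2:] - 2.0 * x[1:-1] + x[:-2]
    return x[2:] - num / den

theta = 2.0
rng = np.random.default_rng(5)

# (1) weighted averaging of unbiased but noisy estimates whose variance
# decays like 1/k (an assumed toy model of increasingly informative modes)
k = np.arange(1, 201)
noisy = theta + rng.normal(size=k.size) / np.sqrt(k)
print("last raw estimate error :", abs(noisy[-1] - theta))
print("weighted average error  :", abs(weighted_average(noisy, k)[-1] - theta))

# (2) Aitken's method on a linearly convergent deterministic sequence
seq = theta + 0.5 * 0.8 ** np.arange(1, 31)
print("raw sequence error      :", abs(seq[-1] - theta))
print("aitken-accelerated error:", abs(aitken(seq)[-1] - theta))
```

typically the weighted average reduces the random error by an order of magnitude here , while aitken's transform recovers the limit of the deterministic sequence to machine precision , consistent with the behavior reported in figure 1 .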
an estimator is called closed - form exact if it produces the exact value of the unknown parameter after a finite number of additions , subtractions , multiplications , and divisions performed on elementary functions of the observations . in addition to the conditions of theorem [ th : mle ] , assume that there exist two finite sets of indices and a positive integer such that . then there exists a closed - form exact estimator for . if there are wiener processes driving the equation , then the extra condition of the theorem can always be ensured with , because every collection of vectors in an -dimensional space is linearly dependent . while relation gives a closed - form exact estimator , the resulting formulas can be rather complicated when the number of wiener processes in the equation is large ; if this number is infinite , then the estimator might not exist at all . for comparison , the complexity of the maximum likelihood estimator does not depend on the number of wiener processes in the equation . as a result , when it comes to actual computations , the closed - form exact estimator is not necessarily the best choice . on the other hand , the very existence of such an estimator is rather remarkable . we conclude this section with three examples of closed - form exact estimators . the first example shows that such estimators can exist for equations that are not diagonalizable in the sense of definition [ def000 ] . consider the equation . by the itô formula , , where solves the heat equation , . assume that is a smooth compactly supported function . then is a smooth bounded function for all and for all , . in particular , the fourier transform of is defined and satisfies . let . then , and consider the equation on with zero boundary conditions . clearly both and are not satisfied . while the equation is not parabolic , there exists a unique solution in weighted wiener chaos spaces , and we can therefore consider . for we find . in particular , , and so . m. huebner , s. lototsky , b. l. rozovskii ( 1997 ) _ asymptotic properties of an approximate maximum likelihood estimator for stochastic pdes _ , statistics and control of stochastic processes ( moscow , 1995/1996 ) , world sci . publishing , pp . 139 - 155 . s. v. lototsky , b. l. rozovskii ( 2000 ) _ parameter estimation for stochastic evolution equations with non - commuting operators _ , in : skorohod's ideas in probability theory , v. korolyuk , n. portenko and h. syta ( eds . ) , institute of mathematics of the national academy of sciences of ukraine , kiev , ukraine , 2000 , pp . 271 - 280 . b. l. rozovskii ( 1990 ) _ stochastic evolution systems _ , mathematics and its applications ( soviet series ) , vol . 35 , kluwer academic publishers group , dordrecht , linear theory and applications to nonlinear filtering . j. b. walsh ( 1986 ) _ an introduction to stochastic partial differential equations _ , école d'été de probabilités de saint - flour xiv - 1984 , lecture notes in math . 1180 , springer , berlin , 1986 , pp . 265 - 439 .
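the mechanism behind the closed - form exact estimator in the geometric - brownian - motion example can also be sketched numerically : because two fourier modes are driven by the same wiener process , subtracting their log - returns removes the random term exactly . the eigenvalues , time horizon , and coefficients below are assumed values for illustration , not the paper's specific model .

```python
import numpy as np

rng = np.random.default_rng(4)
theta_true, sigma = 2.0, 0.7
rho = lambda k: -float(k**2)          # assumed eigenvalues of the operator
t = 1.0
w = rng.normal(0.0, np.sqrt(t))       # one Wiener increment shared by all modes

def u_k(k, u0=1.0):
    # geometric Brownian motion driven by the *same* Wiener process:
    # u_k(t) = u_k(0) exp((theta*rho_k - sigma^2/2) t + sigma w_t)
    return u0 * np.exp((theta_true * rho(k) - 0.5 * sigma**2) * t + sigma * w)

# closed-form exact estimator: the log-returns of two modes share the same
# random term sigma*w_t - sigma^2 t/2, so subtracting eliminates it exactly
x1, x2 = np.log(u_k(1)), np.log(u_k(2))
theta_hat = (x1 - x2) / ((rho(1) - rho(2)) * t)
print(theta_hat)   # equals theta_true to machine precision, from one observation
```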
|
a parameter estimation problem is considered for a stochastic parabolic equation with multiplicative noise , under the assumption that the equation can be reduced to an infinite system of uncoupled diffusion processes . from the point of view of classical statistics , this problem turns out to be singular , not only for the original infinite - dimensional system but also for most finite - dimensional projections . this singularity can be exploited to improve the rate of convergence of traditional estimators , as well as to construct a completely new closed - form exact estimator .
|
galaxies are morphologically complex entities . even seemingly simple systems like elliptical galaxies can have outer envelopes and distinct cores or nuclei , while so - called `` bulge - less '' spiral galaxies can still have nuclear star clusters and disks with complex radial or vertical profiles . in order to accurately describe the structure of galaxies , it is often necessary to decompose galaxies into component substructures . even single - component systems are often modeled with analytic functions in order to derive quantitative measurements such as scale lengths or half - light radii , sérsic indices , etc . the traditional method for dealing with this complexity has been to model 1d surface - brightness profiles of galaxies derived from 2d images as the sum of separate , additive components ( e.g. , bulge + disk ) ; pioneering examples of this include work by , , , , , and . while this 1d approach can be conceptually and computationally simple , it has a number of limitations , above and beyond the fact that it involves discarding most of the data contained in an image . to begin with , there are uncertainties about _ what type _ of 1d profile to use : should one use major - axis cuts or profiles from ellipse fits to isophotes ? should the independent variable be semi - major axis or mean radius ? etc . it is also difficult to correctly account for the effects of image resolution when fitting 1d profiles ; attempts to do so generally require simple analytic models of the point - spread function ( psf ) , extensive numerical integrations , and the assumption of circular symmetry for the psf , the surface - brightness function , or both . furthermore , there are often intrinsic degeneracies involved : images of galaxies with non - axisymmetric components such as bars can yield 1d profiles resembling those from galaxies with axisymmetric bulges , which makes for considerable ambiguity in interpretation . finally , if one is interested in the properties of non - axisymmetric components ( bars , elliptical rings , spiral arms ) themselves , it is generally impossible to extract these from 1d profiles . a better approach in many cases is to directly fit the images with 2d surface - brightness models . early approaches along this line include those of , , and . the first general , self - consistent 2d bulge + disk modeling of galaxy images , that is , constructing a full 2d model image , comparing its intensity values with the observed image pixel - by - pixel , and iteratively updating the parameters until the fit statistic is minimized , was that of , with being the first to include extra , non - axisymmetric components ( bars ) in fitting galaxy images . an interesting alternate approach developed at roughly the same time was the multi - gaussian expansion method , which involves modeling both the psf and the image as sums of an arbitrary number of elliptical gaussians ; the drawback is the difficulty of associating sets of gaussians with particular structural components and parameters . the most commonly used galaxy - fitting codes at the present time are probably gim2d , galfit , budda , and mge .
gim2d is specialized for bulge - disk decompositions and is implemented as an iraf package , using the metropolis algorithm to minimize the total for models containing an exponential disk and a sérsic bulge . budda is written in fortran and is also specialized for bulge - disk decompositions , though it includes a wider variety of possible components : exponential disk ( with optional double - exponential profile ) , sérsic bulge , sérsic bar , analytic edge - on disk , and nuclear point source . it uses a version of the nelder - mead simplex method , also known as the `` downhill simplex '' , for minimization . galfit , which is written in c , is the most general of these codes , since it allows for arbitrary combinations of components ( including components with different centers , which allows the simultaneous fitting of overlapping galaxies ) and includes the largest set of possible components ; the latest version includes options for spiral and other parametric modulation of the basic components . galfit uses a version of the fast levenberg - marquardt gradient - search method for its minimization . mge , available in idl and python versions , is rather different from the other codes in that it uses what is effectively a non - parametric approach , fitting images using the sum of an arbitrary number of elliptical gaussians . ( it is similar to galfit in using the levenberg - marquardt method for minimization during the fitting process . ) for most astronomical image - fitting programs the source code is not generally available , or else is encumbered by non - open - source licenses . even when the code _ is _ available , it is not easy to extend the built - in sets of predefined image components . the simplest codes provide only elliptical components with exponential and sérsic surface brightness profiles ; more sophisticated codes such as budda and ( especially ) galfit provide a larger set of components , including some sophisticated ways of perturbing the components in the case of galfit . but if one wants to add completely new functions , this is not easy . ( the case of mge is somewhat different , since it does not allow parametric functions at all . ) as an example of why one might want to do this , consider the case of edge - on ( or nearly edge - on ) disk galaxies . both budda and galfit include versions of the analytical solution for a perfectly edge - on , axisymmetric , radial - exponential disk of , with a function for the vertical light distribution . but real galaxy disks are not always perfectly edge - on , do not all have single - exponential radial structures , and their vertical structure may in some cases be better described by a sech or exponential profile , or something in between . various authors studying edge - on disks have suggested that models using radial profiles other than a pure exponential would be best fit via line - of - sight integration through 3d luminosity - density models . more sophisticated approaches could even involve line - of - sight integrations that account for scattering and absorption by dust .
another potential disadvantage of existing codes is that they rely on the gaussian approximation of poisson statistics for the fitting process . while this is eminently sensible for dealing with many ccd and near - ir images , it can in some cases produce biases when applied to images with low count rates ( see and section [ sec : biases ] of this paper ) . this is why packages for fitting x - ray data , such as sherpa , often include alternate statistics for fits . in this paper , i present imfit , a new , open - source image - fitting code designed to overcome some of the limitations mentioned above . in particular , imfit uses an object - oriented design which makes it relatively easy to add new , user - designed image components ; it also provides multiple fitting algorithms and statistical approaches . it can also be extremely fast , since it is able to take advantage of multiple cpu cores on the same machine to execute calculations in parallel . the outline of this paper is as follows . section [ sec : gen - outline ] provides a quick sketch of how the program works , while section [ sec : modelimage ] details the process of generating model images and the configuration files which describe the models . the different underlying statistical models and minimization algorithms used in the fitting process are covered in section [ sec : fitting ] ; methods for estimating confidence intervals for fitted parameters are discussed in section [ sec : confidence - intervals ] . the default 2d image functions which can be used in models are presented in section [ sec : image - funcs ] ; this includes functions which perform line - of - sight integration through 3d luminosity - density models ( section [ sec : image - funcs-3d ] ) . after a brief discussion of coding details ( section [ sec : programming ] ) , two examples of using imfit to model galaxy images are presented in section [ sec : examples ] : the first involves fitting a moderately - inclined spiral galaxy with disk , bar , and ring components , while the second fits an edge - on spiral galaxy with thin and thick edge - on disk components . finally , section [ sec : biases ] discusses possible biases to fitted parameters when the standard statistic is used in the presence of low - count images , using both model images and real images of elliptical galaxies . an appendix discusses the relative sizes and accuracies of parameter error estimates using the two methods available in imfit .
to avoid any confusion , i note that the program described in this paper is unrelated to tasks with the same name and somewhat similar ( if limited ) functionality in pre - existing astronomical software , such as the `` imfit '' tasks in the radio - astronomy packages aips and miriad and the `` images.imfit '' package in iraf . imfit begins by processing command - line options and then reads in the data image , along with any optional , user - specified psf , noise , and mask images ( all in fits format ) . the configuration file is also read ; this specifies the model which will be fit to the data image , including initial parameter values and parameter limits , if any ( see section [ sec : config - file ] ) . the program then creates an instance of the modelobject class , which holds the relevant data structures , instances of the image functions specified by the configuration file , and the general code necessary for computing a model image . if minimization ( the default ) is being done , a noise image is constructed , either from a user - specified fits file already read in or by internally generating one , assuming the gaussian approximation for poisson noise . the noise image is then converted to form and combined with the mask image , if any , to form a final weight image used for calculating the value . ( if model - based minimization has been specified , then the noise image , which is based on the model image , is recalculated and combined with the mask image every time a new model image is computed ; if a poisson maximum - likelihood statistic ( or pmlr ; see section [ sec : poisson ] ) is being used for minimization , then no noise image is read or created and the weight image is constructed directly from the mask image . see section [ sec : statistics ] for more on the different statistical approaches . ) the actual fitting process is overseen by one of three possible nonlinear minimization algorithms , as specified by the user . these algorithms proceed by generating or modifying a set of parameter values and feeding these values to the aforementioned model object , which in turn calculates the corresponding model image , convolves it with the psf ( if psf convolution is part of the model ) , and then calculates the fit statistic ( e.g. , ) by comparing the model image with the stored data image . the resulting fit statistic is returned to the minimization algorithm , which then updates the parameter values and repeats the process according to the details of the particular method , until the necessary stop criterion is reached : e.g. , no further significant reduction in the fit statistic , or a maximum number of iterations . finally , a summary of the fit results is printed to the screen and saved to a file , along with any additional user - requested outputs ( final model image , final residual image , etc . ) . the model which will be fit to the data image is specified by a configuration file , which is a text file with a relatively simple and easy - to - read format ; see figure [ fig : config - file ] for an example . the basic format for this file is a set of one or more `` function blocks '' , each of which contains a shared center ( pixel coordinates ) and one or more image functions . a function block can , for example , represent a single galaxy or other astronomical object , which itself has several individual components ( e.g.
) specified by the individual image functions . thus , for a basic bulge / disk decomposition the user could create a function block consisting of a single sérsic function and a single exponential function . there is , however , no a priori association of any particular image function or functions with any particular galaxy component , nor is there any requirement that a single object must consist of only one function block . the final model is the sum of the contributions from all the individual functions in the configuration file . the number of image functions per function block is unlimited , and the number of function blocks per model is also unlimited . each image function is listed by name ( e.g. , `` ` function sersic ` '' ) , followed by the list of its parameters . for each parameter , the user supplies an initial guess for the value , and ( optionally ) either a comma - separated , two - element list of lower and upper bounds for that parameter or the keyword `` fixed '' ( indicating that the parameter will remain constant during the fit ) . the total set of all individual image - function parameters , along with the central coordinates for each function block , constitutes the parameter vector for the minimization process . an image function can be thought of as a black box which accepts a set of parameter values for its general setup , and then accepts individual pixel coordinates and returns a corresponding computed intensity ( i.e. , surface brightness ) value for that pixel . the total intensity for a given pixel in the model image ( prior to any psf convolution ) is the sum of the individual values from each image function . this design means that the main program needs to know nothing about the individual image functions except the number of parameters they take , and which subset of the total parameter vector corresponds to a given image function . the actual calculations carried out by an image function can be as simple or as complex as the user requires , ranging from returning a constant value for each pixel ( e.g. , the flatsky function ) to performing line - of - sight integration through a 3d luminosity - density model ( e.g. , the exponentialdisk3d function ) ; user - written image functions could even perform modest simulations in the setup stage . the list of currently available image functions , along with descriptions for each , is given in section [ sec : image - funcs ] . to simulate the effects of atmospheric seeing and telescope optics , model images can be convolved with a psf image . the latter can be any fits file which contains the point spread function . psf images should ideally be square with sides measuring an odd number of pixels , with the peak of the psf centered in the central pixel of the image . ( off - center psfs can be used , but the resulting convolved model images will of course be shifted . ) imfit automatically normalizes the psf image when it is read in . the actual convolution follows the standard approach of using fast fourier transforms of the internally - generated model image and the psf image , multiplied together , with the output convolved model image being the inverse transform of the product image . the transforms are done with the fftw library ( fastest fourier transform in the west ) , which has the advantage of being able to perform transforms on images of arbitrary size ( i.e.
, not just images with power - of - two sizes ) ; in addition , it is well - tested and fast , and can use multiple threads to take advantage of multiple processor cores . to avoid possible edge effects in the convolution , the internal model - image array is expanded on all four sides by the width and height of the psf image , and all calculations prior to the convolution phase use this full ( expanded ) image . ( for example , given a -pixel data image and a -pixel psf image , the internal model image would be pixels in size . ) this ensures that model pixels corresponding to the edge of the data image are the result of convolution with an extension of the model , rather than with zero - valued pixels or the opposite side of the model image . this is in _ addition _ to the zero - padding applied to the top and right - hand sides of the model image during the convolution phase . ( i.e. , the example -pixel expanded model image would be zero - padded to pixels before computing its fourier transform , to match with the zero - padded psf image of the same size . ) [ makeimage : generating model images without fitting ] although analytic expressions for total flux exist for some common components , this is not true for all components , and one of the goals of imfit is to allow users to create and use new image functions without worrying about whether they have simple analytic expressions for the total flux . this mode can be used to help determine such things as bulge / total and other ratios after a fit is found , although it is up to the user to decide which of the components is the `` bulge '' , which is the `` disk '' , and so forth . given a vector of parameter values , a model image is generated with per - pixel predicted data values , which are then compared with the _ observed _ per - pixel data values . the goal is to find the which produces the best match between and , subject to the constraints of the underlying statistical model . the usual approach is based on the maximum - likelihood principle ( which can be derived from a bayesian perspective if , e.g. , one assumes constant priors for the parameter values ) , and is conventionally known as maximum - likelihood estimation ( mle ) . to start , one considers the per - pixel likelihood , which is the probability of observing given the model prediction and the underlying statistical model for how the data are generated . the goal then becomes finding the set of model parameters which maximizes the total likelihood , which is simply the product over all pixels of the individual per - pixel likelihoods : it is often easier to work with the logarithm of the total likelihood , since this converts a product over pixels into a sum over pixels , and can also simplify the individual per - pixel terms . as most nonlinear optimization algorithms are designed to _ minimize _ their objective function , one can use the _ negative _ of the log - likelihood . thus , the goal of the fitting process becomes minimization of the following : during the actual minimization process , this can often be further simplified by dropping any additive terms in which do not depend on the model , since these are unaffected by changes in the model parameters and are thus irrelevant to the minimization . in some circumstances , multiplying the negative log - likelihood by 2 produces a value which has the property of being distributed like the distribution .
thus , it is conventional to treat as the statistic to be minimized . the data in astronomical images typically consist of detections of individual photons from the sky + telescope system ( including photons from the source , the sky background , and possibly thermal backgrounds in the telescope ) in individual pixels , combined with possible sources of noise due to readout electronics , digitization , etc . photon - counting statistics obey the poisson distribution , where the probability of detecting n photons per integration , given a true rate of m , is p(n) = e^{-m}m^{n}/n ! [ eq : poisson ] . additional sources of ( additive ) noise such as read noise tend to follow gaussian statistics with a mean of 0 and a dispersion of , so that the probability of measuring counts after the readout process , given an input of counts from the poisson process , is .\ ] ] the general case for most astronomical images thus involves both poisson statistics ( for photon counts ) and gaussian statistics ( for read noise and other sources of additive noise ) . unfortunately , even though the individual elements are quite simple , the combination of a gaussian process acting on the output of a poisson process leads to the following rather frightening per - pixel likelihood : the resulting negative log - likelihood for the total image ( dropping terms which do not depend on the model ) is \right).\ ] ] since this still contains an infinite series of exponential and factorial terms , it is clearly rather impractical for fitting images rapidly . fortunately , there is a way out which is often ( though not always ) appropriate for astronomical images . this is to use the fact that the poisson distribution approaches a gaussian distribution when the counts become large . in this approximation , the poisson distribution is replaced by a gaussian with . it is customary to assume this is valid when the counts are per pixel , though it has been pointed out that biases in the fitted parameters can be present even when counts are higher than this ; see section [ sec : biases ] for examples in the case of 2d fits . since the contribution from read noise is also nominally gaussian , the two can be added in quadrature , so that the per - pixel likelihood function is just ,\ ] ] where , with being the dispersion of the read - noise term . twice the negative log - likelihood of the total problem then becomes ( dropping terms which do not depend on the model ) the familiar sum : \chi^2 = \sum_i ( d_i - m_i )^2/\sigma_i^2 . this is the default approach used by imfit : minimizing the as defined in eqn . [ eqn : chi2 ] . the approximation of the poisson contribution to is based on the model intensity . traditionally , it is quite common to estimate this from the _ data _ instead , so that . this has the nominal advantage of only needing to be calculated once , at the start of the minimization process , rather than having to be recalculated every time the model is updated . ( in practice , the extra time needed to recompute the values is often negligible . ) however , the bias resulting from using data - based errors in the low - count regime can be worse than the bias introduced by using model - based values ( see section [ sec : biases ] ) . both approaches are available in imfit , with data - based estimation being the default . the data - based and model - based approaches are often referred to as `` neyman's '' and `` pearson's '' , respectively ; in this paper i use the symbols and to distinguish between them .
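the practical difference between the data - based and model - based gaussian approximations can be seen in a few lines of code . the sketch below estimates a constant background level from pure - poisson data by scanning each statistic over candidate values ; the count level and grid are assumed values , and the poisson mle statistic used for comparison is the likelihood - ratio form introduced in the next subsection .

```python
import numpy as np

def chi2_data(n, m):      # "neyman's": sigma^2 estimated from the data
    return np.sum((n - m) ** 2 / np.maximum(n, 1.0))

def chi2_model(n, m):     # "pearson's": sigma^2 estimated from the model
    return np.sum((n - m) ** 2 / m)

def pmlr(n, m):           # poisson maximum-likelihood-ratio statistic
    term = np.where(n > 0, n * np.log(n / m), 0.0)   # with 0 ln 0 = 0
    return 2.0 * np.sum(m - n + term)

rng = np.random.default_rng(1)
true_level = 5.0                               # low-count regime
data = rng.poisson(true_level, size=20_000).astype(float)

grid = np.linspace(3.0, 8.0, 501)              # candidate constant models
for name, stat in [("chi2 (data-based)", chi2_data),
                   ("chi2 (model-based)", chi2_model),
                   ("pmlr", pmlr)]:
    values = [stat(data, m) for m in grid]
    print(f"{name:20s} best-fit level = {grid[np.argmin(values)]:.3f}")
# typical outcome: the data-based chi2 is biased low, the model-based
# chi2 is biased high, and the poisson mle estimate is close to 5
```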
in the case of `` error '' images generated by a data - processing pipeline , the corresponding or ( variance ) values can easily be used in equation [ eq : chi2 ] directly , under the assumption that the final per - pixel error distributions are still gaussian . so why not always use the gaussian approximation , as is done in most image - fitting packages ? in the absence of any noise terms except poisson statistics ( something often true of high - energy detectors , such as x - ray imagers ) , the individual - pixel likelihoods are just the probabilities of a poisson process with mean m , where the probability of recording n counts is e^{-m}m^{n}/n ! . this leads to a very simple version of the negative log - likelihood , often referred to as the `` cash statistic '' , after its derivation in : c = 2\sum_{i}\left ( m_i - n_i\ln m_i \right ) , where the factorial term has been dropped because it does not depend on the model . a useful alternative is to construct a statistic from the likelihood ratio test , that is , a maximum likelihood ratio ( mlr ) statistic which is the ratio of the likelihood to the maximum possible likelihood for a given dataset . in the case of poisson likelihood , the latter is the likelihood when the model values are exactly equal to the data values , and so the likelihood ratio is , and the negative log - likelihood version ( henceforth pmlr ) is pmlr = 2\sum_{i}\left ( m_i - n_i + n_i\ln ( n_i / m_i ) \right ) . ( this is the same as the `` cstat '' statistic available in the sherpa x - ray analysis package and the `` poisson likelihood ratio '' described by . ) comparison with equation [ eq : cashstat ] shows that pmlr is identical to apart from terms which depend on the data only and thus do not affect the minimization . in the remainder of this paper , i will refer to and pmlr collectively as _ poisson mle _ statistics . since minimizing pmlr will produce the same best - fitting parameters as minimizing , one might very well wonder what is the point in introducing pmlr . there are two practical advantages in using it . the first is that in the limit of large , statistics such as pmlr approach a distribution and can thus be used as goodness - of - fit indicators . the second is that they are always ( since itself is by construction always ) ; this means they can be used with fast least - squares minimization algorithms . this is the practical drawback to minimizing : unlike pmlr , it can often have negative values , and thus requires one of the slower minimization algorithms . it has been pointed out that using a poisson mle statistic ( e.g. , the cash statistic )
is preferable to using the gaussian approximations even when the counts are above the nominal limit of per pixel , since fitting pure - poisson data using the data - based or model - based gaussian approximations can lead to biases in the derived model parameters . section [ sec : biases ] presents some examples of this effect using both artificial and real galaxy images , and shows that the effect persists even when moderate ( gaussian ) read noise is _ also _ present . using a poisson mle statistic such as or pmlr is also appropriate when fitting simulated images , such as those made from projections of -body models , as long as the units are particles per pixel or something similar . for convenience , table [ tab : terminology ] summarizes the main symbols and terms from this section which are used elsewhere in the paper . term & definition + poisson mle & maximum - likelihood estimation based on poisson statistics ( includes both the cash statistic and pmlr ) + cash statistic & poisson mle statistic from cash ( 1979 ) + pmlr & poisson mle statistic from the maximum likelihood ratio + data - based chi - squared & gaussian mle statistic using data pixel values for the errors ( `` neyman's '' ) + model - based chi - squared & gaussian mle statistic using model pixel values for the errors ( `` pearson's '' ) + imfit's default behavior , as mentioned above , is to use as the statistic for minimization . to do so , the individual , per - pixel gaussian errors must be available . if a separate error or noise map is not supplied by the user ( see below ) , imfit estimates values from either the data values or the model values , using the gaussian approximation to poisson statistics . to ensure this estimate is as accurate as possible , the data or model values must at some point be converted from counts to actual detected photons ( e.g. , photoelectrons ) , and any previously subtracted background must be accounted for . by default , imfit estimates the values from the data image by including the effects of a / d gain , prior subtraction of a ( constant ) background , and read noise . rather than converting the image to electrons pixel and then estimating the values , imfit generates values in the same units as the input image via equation ( [ eqn : error - est ] ) , where is the data intensity in counts pixel , is any pre - subtracted sky background in the same units , is the read noise in electrons , is the number of separate images combined ( averaged or median ) to form the data image , and is the `` effective gain '' ( the product of the gain , , and optionally the exposure time if the image pixel values are actually in units of counts s pixel rather than integrated counts pixel ) . if model - based minimization is used , then model intensity values are used in place of in equation [ eqn : error - est ] . in this case , the values must be recomputed each time a new model image is generated , though in practice this adds very little time to the overall fitting process . if a mask image has been supplied , it is converted internally so that its pixels have values for valid pixels and for bad pixels.
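the error model just described can be sketched as follows ; the weight - map step described next then simply inverts these variances and applies the mask . the exact form of imfit's equation ( [ eqn : error - est ] ) should be taken from the text above : the sketch uses the standard ccd noise model with the same ingredients , and the constants , units , and mask convention here are assumptions .

```python
import numpy as np

def variance_map(data, sky=0.0, gain_eff=1.0, read_noise=0.0, n_combined=1):
    # assumed stand-in for eqn (error-est), in (counts/pixel)^2: poisson
    # term for the background-restored signal plus read noise, both
    # expressed in image units through the effective gain
    return ((data + sky) / gain_eff
            + n_combined * read_noise**2 / gain_eff**2)

def weight_map(data, mask=None, **noise_kwargs):
    w = 1.0 / variance_map(data, **noise_kwargs)
    if mask is not None:
        w = w * (mask == 0)        # assumed convention: 0 = good, >0 = bad
    return w

def chi2(data, model, weights):
    return np.sum(weights * (data - model) ** 2)

# quick self-consistency check with simulated counts (gain 4.7 e-/ADU)
rng = np.random.default_rng(2)
gain, rn = 4.7, 5.0
truth = np.full((64, 64), 50.0)                      # counts/pixel
img = (rng.poisson(truth * gain) +
       rng.normal(0.0, rn, truth.shape)) / gain
w = weight_map(img, sky=0.0, gain_eff=gain, read_noise=rn)
print(chi2(img, truth, w) / img.size)                # should be close to 1
```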
instead of data-based or model-based errors, the user can also supply an error or noise map in the form of a fits image, such as might be produced by a reduction pipeline. the individual pixel values in this image can be gaussian errors σ_i, variances σ_i², or even pre-computed weight values. in the case of cash-statistic minimization, the sum is computed directly from equation [eq:cashstat]; for pmlr minimization, equation [eq:pmlr] is used. the 'weight map' in either case is then based directly on the mask image, if any (so all pixels in the resulting weight map have values of 0 or 1), and the actual minimized quantities are the masked sums of the per-pixel cash or pmlr terms.

the default minimization algorithm used by imfit is a robust implementation of the levenberg-marquardt (l-m) gradient-search method, based on the minpack-1 version as modified by craig markwardt, which includes optional lower and upper bounds on parameter values. this version of the basic l-m algorithm also includes auxiliary code for numerical differentiation of the objective function, so the various image functions do not need to provide their own derivatives; this considerably simplifies the writing of new functions. the l-m algorithm has the key advantage of being very fast, a useful quality when one is fitting large images with a complex set of functions and psf convolution. it has the minor disadvantage of requiring an initial starting guess for the parameter values, and it has two more significant disadvantages. the first is that, like gradient-search methods in general, it is prone to becoming trapped in local minima in the objective-function landscape. the second is that it is designed to work with least-squares objective functions, where the objective-function values are assumed to be always ≥ 0. in fact, the l-m algorithm makes use of a vector of the individual contributions from each pixel to the total, and these values as well (not just the sum) must be nonnegative. for the χ² case this is always true, but it is _not_ guaranteed to be true for the cash statistic. thus, the l-m minimizer could fail to find the best-fitting solution for a particular image simply because that solution has a negative value of C. (fortunately, minimizing pmlr leads to the same solution as minimizing C, and the individual terms of pmlr are always nonnegative.)

a second, more general algorithm available in imfit is the nelder-mead simplex method, with support for parameter constraints, as implemented in the nlopt library. like the l-m algorithm, this method requires an initial guess for the parameter set; it also includes optional parameter limits. unlike the l-m algorithm, it works only with the final objective-function value and does not assume that this value must be nonnegative; thus, it is suitable for minimizing all the fit statistics used by imfit. it is also, as a rule, less likely to be caught in local minima than the l-m algorithm. the disadvantage is that it is considerably slower than the l-m method, roughly an order of magnitude so.
a third alternative provided by imfit is a genetic-algorithms approach called differential evolution (de). this searches the objective-function landscape using a population of parameter-value vectors; with each 'generation', the population is updated by mutating and recombining some of the vectors, with new vectors replacing older ones if they perform better. de is designed to be, in the context of genetic algorithms, fast and robust while keeping the number of adjustable _algorithm_ parameters (e.g., mutation and crossover rates) to a minimum. it is the least likely of the algorithms used by imfit to become trapped in a local minimum of the objective-function landscape: rather than starting from a single initial guess for the parameter vector, it begins with a set of randomly generated initial parameter values, sampled from the full range of allowed parameter values; in addition, the crossover-with-mutation used to generate new parameter vectors for successive generations helps the algorithm avoid local-minimum traps. thus, in contrast to the other algorithms, it does not require any initial guesses for the parameter values, but _does_ require lower and upper limits for all parameters. it is definitely the _slowest_ of the minimization choices: about an order of magnitude slower than the n-m simplex, and thus roughly _two_ orders of magnitude slower than the l-m algorithm. the current implementation of de in imfit uses the 'de/rand-to-best/1/bin' internal strategy, which controls how mutation and crossover are done, along with a population size of 10 parameter vectors per free parameter. since the basic de algorithm has no default stop conditions, imfit halts the minimization when the best-fitting value of the fit statistic has ceased to change by more than a specified tolerance after 30 generations, or when a maximum of 600 generations is reached.

for most purposes, the default l-m method is probably the best algorithm to use, since it is fast enough to make exploratory fitting (varying the set of functions used, applying different parameter limits, etc.) feasible, and also fast enough to make fitting large numbers of individual objects possible in a reasonable time. if the problem is relatively small (modest image size, few image functions) and the user is concerned about possible local minima, then the n-m simplex or even the de algorithm can be used. table [tab:minimizers] provides a general comparison of the different minimization algorithms, including the time taken by each to find the best fit for a very simple case: a cutout of an sdss image of the galaxy ic 3478, fit with a single sérsic function convolved with a psf image. for this simple case, the n-m approach takes roughly four times as long as the l-m method, and the de algorithm takes some _60_ times as long. (all three algorithms converged to the same solution, so there was no disadvantage to using the l-m method in this case.)

table [tab:minimizers]:
  algorithm                     initial guess?  limits required?  local-minimum risk  can minimize C?  speed      example time
  levenberg-marquardt (l-m)     yes             no                high                no               fast       2.2s
  nelder-mead simplex (n-m)     yes             no                medium              yes              slow       9.1s
  differential evolution (de)   no              yes               low                 yes              very slow  2m15s
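as a rough illustration of how the three algorithm classes behave, the following python sketch fits a one-parameter poisson model using scipy analogues of the three approaches. scipy's routines are not the implementations imfit uses (mpfit-derived l-m, nlopt's n-m simplex, and a built-in de), but the trade-offs are similar:

```python
import numpy as np
from scipy.optimize import least_squares, minimize, differential_evolution

rng = np.random.default_rng(1)
data = rng.poisson(50.0, size=1000).astype(float)

def pmlr_residuals(p):
    # per-pixel square roots of the PMLR terms, all >= 0, as an
    # L-M-style least-squares solver requires
    m = np.full_like(data, p[0])
    term = np.zeros_like(data)
    nz = data > 0
    term[nz] = data[nz] * np.log(data[nz] / m[nz])
    return np.sqrt(2.0 * (m - data + term))

def cash(p):
    # scalar cash statistic: fine for N-M and DE, but it can be negative,
    # so it cannot be handed to the least-squares (L-M) machinery
    return 2.0 * np.sum(p[0] - data * np.log(p[0]))

fit_lm = least_squares(pmlr_residuals, x0=[30.0], method="lm")
fit_nm = minimize(cash, x0=[30.0], method="Nelder-Mead")
fit_de = differential_evolution(cash, bounds=[(1.0, 500.0)])
# all three converge to the sample mean, at very different computational cost
```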
when imfit finishes, it outputs the parameters of the best fit (along with possible confidence intervals; see section [sec:confidence-intervals]) to the screen and to a text file; it also prints the final value of the fit statistic. the best-fitting model image and the residual (data - model) image can optionally be saved to fits files as well. for fits which minimize χ², imfit also prints the _reduced_ χ² value, which can be used (with caution) as an indication of the goodness of the fit. (the best-fit value of pmlr can also be converted to a reduced-χ² equivalent with the same properties.) for fits which minimize the cash statistic, there is no direct equivalent of the reduced χ²; the actual value of the cash statistic has no directly useful meaning by itself.

all the fit statistics (including C) can also be used to derive comparative measures of how well _different_ models fit the same data. to this end, imfit computes two likelihood-based quantities which can be used to compare different models. the first is the akaike information criterion (aic), which is based on an information-theoretic approach. imfit uses the recommended, bias-corrected version of this statistic:

aic_c = -2 ln L + 2k + 2k(k + 1)/(n - k - 1),

where L is the likelihood value, k is the number of (free) parameters in the model, and n is the number of data points. the second quantity is the bayesian information criterion (bic), which is

bic = -2 ln L + k ln n.

when two or more models fit to the same data are compared, the model with the _lowest_ aic (or bic) is preferred, though a substantial difference Δaic or Δbic is usually required before one model can be deemed clearly superior (or inferior); see the literature for discussions of aic and bic in astronomical contexts and for more general background. needless to say, all models being compared in this manner should be fit by minimizing the same fit statistic.
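in python, these two criteria are one-liners; the sketch below uses my own function names, with the log-likelihood recoverable from a poisson mle fit as ln L = -C_min/2 (up to data-only constants, which cancel when comparing models fit to the same data):

```python
import numpy as np

def aic_c(logL, k, n):
    # bias-corrected akaike information criterion
    return -2.0 * logL + 2.0 * k + 2.0 * k * (k + 1.0) / (n - k - 1.0)

def bic(logL, k, n):
    # bayesian information criterion
    return -2.0 * logL + k * np.log(n)

# example: compare two models fit (by cash-statistic minimization) to the
# same set of 10**4 valid pixels:
#   aic_c(-0.5 * cash_min_model1, k=5, n=10000)
#   aic_c(-0.5 * cash_min_model2, k=9, n=10000)
```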
in addition to its speed, the levenberg-marquardt minimization algorithm has the convenient advantage that it can automatically produce a set of approximate, 1-σ confidence intervals for the fitted parameters as a side product of the minimization process; these come from inverting the hessian matrix computed during the minimization (see, e.g., section 15.5 of numerical recipes). the other minimization algorithms available in imfit do not compute confidence intervals. although one can, as a workaround, re-run imfit using the l-m algorithm on a solution that was found using one of the other algorithms, this will not work if C (rather than χ² or pmlr) was being minimized (see section [sec:minimization]).

an alternate method of estimating confidence intervals is provided by bootstrap resampling. each iteration of the resampling process generates a new data image by sampling pixel values, with replacement, from the original data image. (what is actually generated inside imfit is a resampled vector of pixel _indices_ into the image, excluding those indices corresponding to masked pixels; the corresponding coordinates and intensities then form the resampled data.) the fit is then re-run with the best-fit parameters from the original fit as starting values, using the l-m algorithm for χ² and pmlr minimization and the n-m simplex algorithm when C minimization is being done. after n iterations, the combined set of bootstrapped parameter values is used as the distribution of parameter values, from which properly asymmetric 68% confidence intervals are directly determined, along with the standard deviation. (the 68% confidence interval corresponds to ±1σ if the distribution is close to gaussian.) in addition, the full set of best-fit parameters from all the bootstrap iterations can optionally be saved to a file, which potentially allows for more sophisticated analyses.

figure [fig:bootstrap] shows a scatter-plot matrix comparing parameter values for five parameters of a simple sérsic fit to an image of a model sérsic galaxy with noise (see section [sec:biases] for details of the model images). one can see that the distributions are approximately gaussian, have dispersions similar to those from the l-m estimates (plotted as gaussians using red curves), and also that some pairs of parameters are clearly _correlated_. of course, this simple case ignores the complexities and sources of error involved in fitting real images of galaxies; see the appendix for sample comparisons of l-m and bootstrap error estimates for a small set of real-galaxy images. the only drawback of the bootstrap-resampling approach is the cost in time: since bootstrap resampling should ideally use a minimum of several hundred to one thousand or more iterations, one ends up, in effect, re-running the fit that many times. (some time is saved by starting each fit with the original best-fit parameter values, since those will almost always be close to the best-fit solution for the resampled data.)

(figure [fig:bootstrap] caption: distributions of bootstrapped parameter values, compared with the 1-σ gaussian estimates from the levenberg-marquardt output of the original fit (thin red curves); vertical solid gray lines show the parameter values from the fit to the original image, and vertical dashed gray lines show the true parameters of the original model.)
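the resampling loop itself is simple; here is a minimal python sketch of the procedure described above, where `run_fit` stands in for a single imfit minimization (a hypothetical helper, not part of imfit):

```python
import numpy as np

def bootstrap_errors(x, y, values, run_fit, best_params, n_iter=1000, seed=0):
    # x, y, values: flattened coordinates and intensities of unmasked pixels
    # run_fit(x, y, values, p0) -> best-fit parameter vector (hypothetical)
    rng = np.random.default_rng(seed)
    n_pix = len(values)
    results = np.empty((n_iter, len(best_params)))
    for i in range(n_iter):
        idx = rng.integers(0, n_pix, size=n_pix)   # resample indices w/ replacement
        results[i] = run_fit(x[idx], y[idx], values[idx], p0=best_params)
    lo, hi = np.percentile(results, [16, 84], axis=0)  # asymmetric 68% interval
    return results, lo, hi, results.std(axis=0)
```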
image functions are implemented in imfit as subclasses of an abstract base class called functionobject. the rest of the program does not need to know the details of the individual functions, only that they adhere to the functionobject interface. this makes it relatively simple to add new image functions to the program: write a header file and an implementation file for the new function, add a reference to it in another source file, and recompile the program. further notes on how to do this are included in the documentation.

this section describes the various default image functions that come with imfit. specifications for the actual parameters (e.g., the order in which imfit expects to find them) are included in the documentation, and a summary of all available function names and their corresponding parameter lists can be printed using the `--list-parameters` command-line flag. most image functions, unless otherwise noted, have two 'geometric' parameters: the position angle pa, in degrees counter-clockwise from the vertical axis of the image, and the ellipticity ε = 1 - b/a, where a and b are the semi-major and semi-minor axes, respectively. in most cases, the image function internally converts the ellipticity to an axis ratio q = b/a = 1 - ε and the position angle to an angle θ relative to the image x axis, in radians. then, for each input pixel (or subpixel, if pixel subsampling is being done) with image coordinates (x, y), a scaled radius is computed as

r = [x_p² + (y_p/q)²]^{1/2},

where x_p and y_p are coordinates in the reference frame centered on the image-function center (x_0, y_0) and rotated to its position angle:

x_p = (x - x_0) cos θ + (y - y_0) sin θ,
y_p = -(x - x_0) sin θ + (y - y_0) cos θ.

this scaled radius is then used to compute the actual intensity, using the appropriate 1-d intensity function (see the descriptions of the individual image functions, below). pure circular versions of any of these functions can be obtained by specifying that the ellipticity parameter is fixed, with a value of 0. some functions (e.g., edgeondisk) have only the position angle as a geometric parameter, and instead of computing a scaled radius, convert the pixel coordinates to corresponding r and z values in the rotated 2d coordinate system of the model function.

flatsky: this is a very basic function which produces a uniform background, I(x, y) = I_sky for all pixels. unlike most image functions, it has _no_ geometric parameters.

gaussian: this is an elliptical 2d gaussian function, with central surface brightness I_0 and dispersion σ. the intensity profile is given by

I(r) = I_0 exp[-r²/(2σ²)].

moffat: this is an elliptical 2d function with a moffat function for the surface-brightness profile, with parameters for the central surface brightness I_0, the full-width half-maximum (fwhm), and the shape parameter β. the intensity profile is given by

I(r) = I_0 / [1 + (r/α)²]^β,

where α is defined as

α = fwhm / (2 [2^{1/β} - 1]^{1/2}).

in practice, the fwhm describes the overall width of the profile, while β describes the strength of the wings: lower values of β mean more intensity in the wings than is the case for a gaussian (as β → ∞, the moffat profile converges to a gaussian). the moffat function is often a good approximation to typical telescope psfs, and makeimage can easily be used to generate moffat psf images.

exponential: the exponential function is an elliptical 2d exponential, with parameters for the central surface brightness I_0 and the exponential scale length h. the intensity profile is given by

I(r) = I_0 exp(-r/h).

together with the position angle and ellipticity, there are a total of four parameters. this is a good default for galaxy disks that are not too close to edge-on, though the majority of disk galaxies have profiles which are more complicated than a single exponential.

exponential_genellipse: this function is identical to the exponential function except for allowing the use of generalized ellipses (with shapes ranging from 'disky' to pure elliptical to 'boxy') for the isophote shapes. the shape of the elliptical isophotes is controlled by the parameter c_0, such that a generalized ellipse with ellipticity ε = 1 - b/a is described by

(|x_p| / a)^{c_0 + 2} + (|y_p| / b)^{c_0 + 2} = 1,

where x_p and y_p are distances from the ellipse center in the coordinate system aligned with the ellipse major axis (the exponent c_0 + 2 corresponds to the exponent used in the original formulation of athanassoula et al.).
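a python sketch of this coordinate machinery, including the generalized-ellipse variant, might look as follows (the angle convention is my reading of the description above, so treat it as an assumption rather than a transcript of imfit's c++ code):

```python
import numpy as np

def scaled_radius(x, y, x0, y0, pa_deg, ell, c0=0.0):
    # pa is measured CCW from the image +y axis, so the major axis makes
    # an angle theta = pa + 90 deg with the +x axis (assumed convention)
    q = 1.0 - ell                                  # axis ratio b/a
    theta = np.radians(pa_deg + 90.0)
    dx, dy = x - x0, y - y0
    xp = dx * np.cos(theta) + dy * np.sin(theta)   # rotated coordinates
    yp = -dx * np.sin(theta) + dy * np.cos(theta)
    if c0 == 0.0:                                  # pure ellipse
        return np.hypot(xp, yp / q)
    expo = c0 + 2.0                                # c0 > 0 boxy, c0 < 0 disky
    return (np.abs(xp)**expo + np.abs(yp / q)**expo)**(1.0 / expo)

# e.g., an elliptical exponential profile would then be
#   I0 * np.exp(-scaled_radius(x, y, x0, y0, pa, ell) / h)
```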
thus, negative values of c_0 correspond to disky isophotes, while positive values describe boxy isophotes; c_0 = 0 yields a perfect ellipse.

sersic and sersic_genellipse: this pair of related functions is analogous to the exponential and exponential_genellipse pair above, except that the intensity profile is given by the sérsic function:

I(r) = I_e exp{ -b_n [(r/r_e)^{1/n} - 1] },

where I_e is the surface brightness at the effective (half-light) radius r_e and n is the index controlling the shape of the intensity profile. the value of b_n is formally given by the solution of the transcendental equation

Γ(2n) = 2 γ(2n, b_n),

where Γ is the gamma function and γ is the incomplete gamma function. in the current implementation, however, b_n is calculated via the polynomial approximation of ciotti & bertin when n > 0.36 and the approximation of macarthur et al. when n ≤ 0.36. the sérsic profile is equivalent to the de vaucouleurs r^{1/4} profile when n = 4, to an exponential when n = 1, and to a gaussian when n = 0.5; it has become the de facto standard for fitting the surface-brightness profiles of elliptical galaxies and bulges. though the empirical justification for doing so is rather limited, the combination of a low-n sérsic profile with boxy isophotes is often used to represent bars when fitting images of disk galaxies; in addition, the combination of boxy isophotes and high n values may be appropriate for modeling luminous boxy elliptical galaxies.

core-sersic: this function generates an elliptical 2d function where the major-axis intensity profile is given by the core-sérsic model, which was designed to fit the profiles of so-called 'core' galaxies. it consists of a sérsic profile (parameterized by n and r_e) for radii larger than the break radius r_b and a single power law with index γ for radii smaller than r_b. the transition between the two regimes is mediated by the dimensionless parameter α: for low values of α, the transition is very gradual and smooth, while for high values of α the transition becomes very abrupt (a perfectly sharp transition can be approximated by setting α equal to some large number, such as 100). the intensity profile is given by

I(r) = I' [1 + (r_b/r)^α]^{γ/α} exp[ -b ((r^α + r_b^α)/r_e^α)^{1/(nα)} ],

where b is the same as b_n for the sérsic function. the overall intensity scaling is set by I_b, the intensity at the break radius:

I' = I_b 2^{-γ/α} exp[ b (2^{1/α} r_b/r_e)^{1/n} ].

brokenexponential: this is similar to the exponential function, but it has _two_ exponential radial zones (with different scale lengths) joined at the break radius r_b by a transition region of variable sharpness:

I(r) = S I_0 exp(-r/h_1) [1 + exp(α(r - r_b))]^{(1/α)(1/h_1 - 1/h_2)},

where I_0 is the central intensity of the inner exponential, h_1 and h_2 are the inner and outer exponential scale lengths, r_b is the break radius, and α parameterizes the sharpness of the break: low values of α mean very smooth, gradual breaks, while high values correspond to abrupt transitions. S is a scaling factor which ensures that the central intensity is indeed I_0, given by

S = [1 + exp(-α r_b)]^{-(1/α)(1/h_1 - 1/h_2)}.

see figure [fig:brokenexp] for examples. note that the parameter α has units of inverse length (inverse pixels for the specific case of imfit). the 1d form of this profile was designed to fit the surface-brightness profiles of disks which are not single-exponential: e.g., disks with truncations or antitruncations.
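the two radial profiles just described are easy to prototype in python. the b_n coefficients below are the commonly quoted ciotti & bertin and macarthur et al. approximations (copied from memory, so verify before reusing), and a production version of the broken-exponential function should work in log space to avoid overflow for large α(r - r_b):

```python
import numpy as np

def sersic_b(n):
    # asymptotic expansion (n > 0.36) / polynomial fit (n <= 0.36) for b_n
    if n > 0.36:
        return (2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)
                + 46.0 / (25515.0 * n**2) + 131.0 / (1148175.0 * n**3))
    return 0.01945 - 0.8902 * n + 10.95 * n**2 - 19.67 * n**3 + 13.43 * n**4

def sersic_intensity(r, I_e, r_e, n):
    return I_e * np.exp(-sersic_b(n) * ((r / r_e)**(1.0 / n) - 1.0))

def broken_exp_intensity(r, I_0, h1, h2, r_b, alpha):
    ex = (1.0 / alpha) * (1.0 / h1 - 1.0 / h2)
    S = (1.0 + np.exp(-alpha * r_b))**(-ex)        # ensures I(0) = I_0
    return S * I_0 * np.exp(-r / h1) * (1.0 + np.exp(alpha * (r - r_b)))**ex
```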
(figure [fig:brokenexp] caption: broken-exponential profiles with fixed inner and outer scale lengths, break radius r_b = 6 pixels, and varying values of α (black = 100, medium gray = 3, light gray = 1); the lower (dashed) curves show the effects of varying the outer scale length only (values of 3 and 2 pixels, in addition to the original).)

gaussianring: this function creates an elliptical ring with a gaussian radial profile, centered at radius r_ring along the major axis; see figure [fig:rings] for an example.

gaussianring2side: this function is similar to gaussianring, except that it uses an asymmetric gaussian, with different dispersions for r < r_ring and r > r_ring: the profile behaves as a gaussian with width σ_in for r < r_ring and width σ_out for r > r_ring; see figure [fig:rings] for an example.

(figure [fig:rings] caption: example images of the ring functions; the larger, rounder ring is an example of the gaussianring2side function, with an ellipticity of 0.1, a semi-major axis of 160 pixels, and different inner and outer gaussian widths.)

edgeondisk: this function provides the analytic form for a perfectly edge-on disk with a radial exponential profile, using the bessel-function solution of van der kruit & searle for the radial profile. although it is common to assume that the vertical profile of galactic disks follows a sech² function, based on the self-gravitating isothermal-sheet model, van der kruit suggested a more generalized form, one which enables the profile to range from sech² at one extreme to exponential at the other:

I(z) ∝ sech^{2/n}(n z / (2 z_0)),

with z the vertical coordinate and z_0 the vertical scale height. the parameter n produces a sech² profile when n = 1, a sech profile when n = 2, and converges to an exponential as n → ∞. this formula has been used both for fitting the vertical profiles of edge-on galaxy disks and for 2d fitting of edge-on galaxy images. in a coordinate system aligned with the edge-on disk, r is the distance from the minor axis (parallel to the major axis) and z is the perpendicular direction, with z = 0 on the major axis.
(the latter corresponds to height above the galaxy midplane.) the intensity at (r, z) is given by

I(r, z) = μ(0,0) (r/h) K_1(r/h) sech^{2/n}(n z / (2 z_0)),

where h is the exponential scale length in the disk plane, z_0 is the vertical scale height, and K_1 is the modified bessel function of the second kind. the central surface brightness is given by

μ(0,0) = 2 h L_0,

where L_0 is the central luminosity _density_. note that L_0 is the actual input parameter required by the function; μ(0,0) is calculated internally. the result is a function with five parameters: L_0, h, n, z_0, and the position angle; figure [fig:edgeondisk] shows three examples with differing vertical profiles parameterized by n = 1, 2, and 100.

edgeonring: this is a simplistic model for an edge-on ring, using two offset subcomponents located at distance ±r_ring from the center of the function block. each subcomponent (i.e., each side of the ring) is a 2d gaussian with central surface brightness A and dispersions of σ_r in the radial direction and σ_z in the vertical direction. it has five parameters: A, r_ring, σ_r, σ_z, and the position angle. see figure [fig:edgeonring] for examples of this function. a potentially more correct (though computationally more expensive) model for a ring seen edge-on or at other inclinations is provided by the gaussianring3d function, below.

edgeonring2side: this is a slightly more sophisticated variant of edgeonring, where the radial profile of the two components is an asymmetric gaussian, as in the case of the gaussianring2side function above: the inner (r < r_ring) side of each component is a gaussian with radial dispersion σ_r,in, while the outer side has radial dispersion σ_r,out. it thus has six parameters: A, r_ring, σ_r,in, σ_r,out, σ_z, and the position angle. see the right-hand panel of figure [fig:edgeonring] for an example.

all image functions in imfit produce 2d surface-brightness output. however, there is nothing to prevent one from creating a function which does something quite complicated in order to produce this output. as an example, imfit includes three image functions which perform line-of-sight integration through 3d luminosity-density models in order to produce a 2d projection. these functions assume a symmetry plane (e.g., the disk plane for a disk galaxy) which is inclined with respect to the line of sight; the inclination i is defined as the angle between the line of sight and the normal to the symmetry plane, so that a face-on system has i = 0° and an edge-on system has i = 90°. for inclinations i > 0°, the orientation of the line of nodes (the intersection between the symmetry plane and the sky plane) is specified by a position-angle parameter. instead of a 2d surface-brightness specification (or a 1d radial surface-brightness profile), these functions specify a 3d luminosity density, which is numerically integrated along the line of sight for each pixel of the model. to carry out the integration for a pixel located at (x, y) in the image plane, the coordinates are first transformed to a rotated image-plane system centered on the coordinates of the component center, where the line of nodes lies along the x axis (cf. equation [eqn:xy] in section [sec:2d]), with θ being the angle between the line of nodes and the image x axis (as in the case of the 2d functions, the actual user-specified parameter is the position angle of the line of nodes relative to the image y axis). the line-of-sight coordinate s is then defined so that s = 0 in the sky plane (an instance of the image plane located in 3d space so that it passes through the center of the component), which corresponds to the plane through the origin of the component's native cartesian coordinate system. a location at distance s along the line of sight then maps into the component coordinate system through a rotation by the inclination, and the luminosity density is evaluated at the resulting 3d position. see figure [fig:3d-integration] for a side-on view of this arrangement. although a fully correct integration would run from s = -∞ to +∞, in practice the integration limit is some large multiple of the component's normal largest scale size (e.g., 20 times the horizontal disk scale length), to limit the possibility of numerical-integration mishaps.

(figure [fig:3d-integration] caption: side view of the integration geometry. the disk plane is inclined with respect to the line of sight, with the line of nodes rotated to lie along the sky-plane x axis, perpendicular to the page; the disk center is by construction at the intersection of the disk plane and the sky plane. for a pixel with sky-plane coordinates (x, y), the luminosity density is integrated along the line of sight (variable s, with s = 0 at the sky plane); for each value of s used by the integration routine, the luminosity density is computed from the corresponding values of radius and height in the disk's native coordinate system.)
exponentialdisk3d: this function implements a 3d luminosity-density model for an axisymmetric disk in which the radial profile of the luminosity density is an exponential and the vertical profile follows the sech^{2/n} function discussed above (see the edgeondisk function in section [sec:edgeondisk]). the line-of-sight integration is done numerically, using functions from the gnu scientific library. in a cylindrical coordinate system aligned with the disk (where the disk midplane has z = 0), the luminosity density at radius r from the central axis and at height z from the midplane is given by

L(r, z) = L_0 exp(-r/h) sech^{2/n}(n z / (2 z_0)),

where h is the exponential scale length in the disk plane, z_0 is the vertical scale height, n controls the shape of the vertical distribution, and L_0 is the central luminosity density. (in the context of the introductory discussion above, r and z are the component's native cylindrical coordinates.) figure [fig:expdisk3d] shows three views of the same model, at inclinations of 75°, 85°, and 89°; the latter is almost identical to the image produced by the analytic edgeondisk function with the same radial and vertical parameters (right-hand panel of figure [fig:edgeondisk]).

brokenexponentialdisk3d: this function is identical to the exponentialdisk3d function, except that the radial part of the luminosity-density function is given by the broken-exponential profile used by the (2d) brokenexponential function, above (section [sec:brokenexp]). thus, the luminosity density at radius r from the central axis and at height z from the midplane is given by

L(r, z) = J(r) sech^{2/n}(n z / (2 z_0)),

where z_0 is the vertical scale height and the radial part is given by

J(r) = S J_0 exp(-r/h_1) [1 + exp(α(r - r_b))]^{(1/α)(1/h_1 - 1/h_2)},

with J_0 being the central luminosity density and the rest of the parameters as defined for the brokenexponential function (section [sec:brokenexp]).
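a stripped-down python version of the line-of-sight integration for this kind of component is sketched below; the coordinate mapping is my own reconstruction of the geometry described above (imfit does this in c++ with gsl integration routines), so the details should be treated as assumptions:

```python
import numpy as np
from scipy.integrate import quad

def expdisk3d_pixel(x_s, y_s, inc_deg, L0, h, z0, n=1.0, s_max_factor=20.0):
    # (x_s, y_s): sky-plane coordinates relative to the component center,
    # with the line of nodes along the x axis; s = 0 at the sky plane
    inc = np.radians(inc_deg)
    cos_i, sin_i = np.cos(inc), np.sin(inc)

    def integrand(s):
        x = x_s
        y = y_s * cos_i + s * sin_i        # in-plane coordinate
        z = -y_s * sin_i + s * cos_i       # height above the midplane
        R = np.hypot(x, y)
        sech = 1.0 / np.cosh(n * z / (2.0 * z0))
        return L0 * np.exp(-R / h) * sech**(2.0 / n)

    s_lim = s_max_factor * max(h, z0)      # finite surrogate for +/- infinity
    value, _ = quad(integrand, -s_lim, s_lim, limit=200)
    return value
```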
gaussianring3d: this function creates the projection of a 3d elliptical ring, seen at an arbitrary inclination. the ring has a luminosity density with a radial gaussian profile (centered at radius r_ring along the ring's major axis, with in-plane gaussian width σ) and a vertical exponential profile (with scale height z_0). the ring can be imagined as residing in a plane which has its line of nodes at a given position angle and inclination (as for the exponentialdisk3d function, above); within this plane, the ring's major axis is at a second position angle relative to the perpendicular to the line of nodes. to derive the correct luminosity densities for the line-of-sight integration, the component coordinate values from equation [eqn:3d] are transformed to a system rotated about the normal to the ring plane, so that the ring's major axis lies along the x axis. figure [fig:ring3d] shows the same gaussianring3d component (with ellipticity = 0.5) seen at three different inclinations.

imfit is written in standard c++ and should be compilable with any modern compiler; it has been tested with gcc versions 4.2 and 4.8 on mac os x and gcc version 4.6 on ubuntu linux systems. it makes use of several open-source libraries, two of which are required (cfitsio and fftw) and two of which are optional but recommended (nlopt and the gnu scientific library). imfit also uses the python-based scons build system and cxxtest for unit tests. since the slowest part of the fitting process is almost always the computation of the model image, imfit is written to take advantage of openmp compiler extensions; this allows the computation of the model image to be parceled out into multiple threads, which are then allocated among available processor cores on machines with multiple shared-memory cpus (including single cpus with multiple cores). as an example of how effective this can be, tests on a macbook pro with a quad-core i7 processor, which has a total of eight virtual threads available, show that the basic computation of large images (without psf convolution) is sped up severalfold when openmp is used. even when the overhead of an actual fit is included, the total time to fit a four-component model with 21 free parameters (without psf convolution) to a large image is substantially reduced.

additional computational overhead is imposed when one convolves a model image with a psf. to mitigate this, imfit uses the fftw library to compute the necessary fourier transforms. this is one of the fastest fft libraries available, and it can be compiled with support for multiple threads. when the same image fit mentioned above is done including convolution with a psf image, the total time drops noticeably when just the fft computation is multi-threaded, and further still when openmp threading is enabled as well. multithreading can always be reduced or turned off using a command-line option if one does not wish to use all available cpu cores for a given fit.

imfit has been used for several different astronomical applications, including preliminary work on the euclid photometric pipeline, testing 1d convolution code used in the analysis of core galaxies, fitting kinematically decomposed components of the galaxy ngc 7217, determining the psf for data release 2 of the califa survey, and separation of bulge and disk components for dynamical modeling of black hole masses in nearby s0 and spiral galaxies (erwin et al., in prep.).
in this section i present two relatively simple examples of using imfit to model images of real galaxies. the first case considers a moderately inclined spiral galaxy with a prominent ring surrounding a bar, where the use of a separate ring component considerably improves the fit. the second case is an edge-on disk galaxy with both thin and thick disks; i show how this can be fit using both the analytic pure-edge-on disk component (edgeondisk; section [sec:edgeondisk]) and the 3d luminosity-density model of an exponential disk (exponentialdisk3d; section [sec:expdisk3d]).

pgc 35772 is an early/intermediate-type spiral galaxy (classified as sa0/a in one catalog and as sb in another) which was observed as part of the hα galaxy groups imaging survey (haggis; kulkarni et al., in prep.), using narrow-band filters on the wide field imager (wfi) of the eso 2.2m telescope. the upper-left panel of figure [fig:haggis] shows the stellar-continuum-filter image (central wavelength slightly blueward of the redshifted hα line). particularly notable is a bright stellar ring, which makes this an interesting test case for including rings in 2d fits of galaxies. ellipse fits to the image show strong twisting of the isophotal position angle interior to the ring, suggesting that a bar is also present.

the rest of figure [fig:haggis] shows the results of three different fits to the image, each successive fit adding an extra component. these fits used a cutout of the full wfi image and were convolved with a moffat-function image with fwhm = 0.98 arcsec, representing the mean psf (based on moffat fits to stars in the same image). the best-fit parameters for each model, determined by minimizing pmlr, are listed in table [tab:haggis], along with the uncertainties estimated from the l-m covariance matrix; since the fitting times are short, i also include parameter uncertainties from 500 rounds of bootstrap resampling (in parentheses, following the l-m uncertainties). the first fit uses a single sérsic component; the residuals of this fit show a clear excess corresponding to the ring, as well as mis-modeling of the region inside the ring. the fit is improved by switching to an exponential + sérsic model, with the former component representing the main disk and the latter some combination of the bar and bulge (if any). this two-component model (middle row of the figure) produces less extreme residuals; the best-fitting sérsic component is elongated and misaligned with the exponential component, so it can be seen to be modeling the bar. the residuals of this 'disk + bar' fit are still significant, however, including the ring itself.
to fix this, i include a gaussianring component (section [sec:gaussian-ring]) in the third fit (bottom row of figure [fig:haggis]). the residuals of _this_ fit are better not just in the ring region but also inside it, indicating that the three-component model does a much better job of modeling the inner flux of the galaxy (the three-component model also has the smallest aic value of the three models; see table [tab:haggis]).

table [tab:haggis] (best-fit parameters; uncertainties are l-m estimates, with bootstrap estimates in parentheses; intensities are in continuum-flux units):

single-sérsic model:
  sersic:                pa = 138.14 ± 0.48 (0.34) deg; ell = 0.254 ± 0.0038 (0.0024); n = 1.041 ± 0.0085 (0.027); I_e = 39.16 ± 0.33 (0.52); r_e = 8.267 ± 0.042 (0.034) arcsec

exponential + sérsic model:
  exponential (disk):    pa = 137.79 ± 0.49 (0.29) deg; ell = 0.259 ± 0.0039 (0.0021); I_0 = 194.58 ± 1.22 (0.97); h = 5.17 ± 0.025 (0.014) arcsec
  sersic (bar):          pa = 16.27 ± 3.00 (1.13) deg; ell = 0.562 ± 0.056 (0.025); n = 0.897 ± 0.282 (0.074); I_e = 171.82 ± 25.32 (6.77); r_e = 0.713 ± 0.041 (0.021) arcsec

exponential + sérsic + gaussianring model:
  exponential (disk):    pa = 140.70 ± 1.25 (0.53) deg; ell = 0.277 ± 0.0098 (0.0053); I_0 = 111.19 ± 11.55 (4.05); h = 5.74 ± 0.18 (0.052) arcsec
  sersic (bar):          pa = 7.27 ± 2.28 (1.04) deg; ell = 0.364 ± 0.028 (0.092); n = 1.14 ± 0.114 (0.046); I_e = 80.78 ± 7.25 (2.47); r_e = 1.42 ± 0.080 (0.028) arcsec
  gaussianring (ring):   pa = 128.40 ± 1.69 (0.69) deg; ell = 0.258 ± 0.013 (0.0053); A = 26.90 ± 3.22 (1.10); r_ring = 5.50 ± 0.36 (0.14) arcsec; σ = 3.43 ± 0.22 (0.13) arcsec

ic 5176 is an edge-on sbc galaxy, included as a 'control' (non-boxy-bulge) galaxy in earlier studies by bureau and collaborators, who noted that both the gas and stellar kinematics are consistent with an axisymmetric, unbarred disk, and who concluded from near-ir imaging that it has a very small bulge and a 'completely featureless outer (single) exponential disk.' this suggests an agreeably simple, axisymmetric structure, ideal for an example of modeling an edge-on galaxy. to minimize the effects of the central dust lane (visible in optical images of the galaxy), i use a _spitzer_ irac1 (3.6 μm) image from s4g, retrieved from the _spitzer_ archive. for psf convolution, i use an in-flight point-response-function image for the center of the irac1 field, downsampled to the 0.6 arcsec pixel scale of the post-processed archival galaxy image. inspection of major-axis and minor-axis profiles from the irac1 image (figure [fig:ic5176-prof]) suggests the presence of both thin and thick disk components; the earlier near-ir image was probably not deep enough for these to be seen. the major-axis profile and the image both suggest a rather round central excess, consistent with the small bulge identified by bureau et al. consequently, i fit the image using a combination of two exponential-disk models, plus a central sérsic component for the bulge.

the fast way to fit such a galaxy with imfit is to assume that the galaxy is perfectly edge-on and use the 2d analytic edgeondisk functions (section [sec:edgeondisk]) for the thin and thick disk components. table [tab:ic5176] shows the results of this fit. the dominant edgeondisk component, which can be thought of as the 'thin disk', has a nearly sech vertical profile (n ≈ 2.6) and a scale height of roughly 260 pc (assuming a distance of 26.4 mpc).
the second edgeondisk, with a more exponential-like vertical profile and a scale height of 1.4 kpc, is then the 'thick disk' component; it has a radial scale length nearly three times that of the thin disk. the central sérsic component of this model contributes 1.8% of the total flux, while the thin and thick disks account for 70.5% and 27.7%, respectively. the thick/thin-disk luminosity ratio of 0.39 is consistent with recent studies of thick and thin disks: using the two assumed sets of relative mass-to-light ratios from such work gives mass ratios of up to 0.94, which places ic 5176 in the middle of the distribution for galaxies with similar rotation velocities.

table [tab:ic5176] (best-fit parameters for the two models of ic 5176; surface brightnesses in mag/arcsec²):

2d (analytic edge-on) model:
  sersic (bulge):               pa = 149.7 ± 0.0049 deg; ell = 0.206 ± 0.014; n = 0.667 ± 0.033; μ_e = 12.90 ± 0.000; r_e = 1.48 ± 0.019 arcsec
  edgeondisk (thin disk):       pa = 149.7 ± 0.0049 deg; μ(0,0) = 11.829 ± 0.0008; h = 14.17 ± 0.012 arcsec; n = 2.607 ± 0.025; z_0 = 2.01 ± 0.0044 arcsec
  edgeondisk (thick disk):      pa = 151.3 ± 0.019 deg; μ(0,0) = 15.557 ± 0.0057; h = 40.97 ± 0.011 arcsec; n = 9.89 ± 0.700; z_0 = 10.88 ± 0.036 arcsec

3d (line-of-sight-integration) model:
  sersic (bulge):               pa = 168.71 ± 9.73 deg; ell = 0.046 ± 0.016; n = 0.762 ± 0.033; μ_e = 13.10 ± 0.023; r_e = 1.46 ± 0.019 arcsec
  exponentialdisk3d (thin):     pa = 149.73 ± 0.001 deg; i = 87.21 ± 0.015 deg; central brightness = 11.475 ± 0.0010; h = 14.44 ± 0.011 arcsec; n = 50 (imposed limit); z_0 = 2.04 ± 0.004 arcsec
  exponentialdisk3d (thick):    pa = 151.43 ± 0.019 deg; i = 89.40 ± 0.126 deg; central brightness = 15.604 ± 0.0040; h = 42.07 ± 0.115 arcsec; n = 50 (imposed limit); z_0 = 11.74 ± 0.038 arcsec

a slower but more general approach is to use the exponentialdisk3d function (section [sec:expdisk3d]) for both disk components, which allows for arbitrary inclinations. the cost is in the time taken for the fit: many minutes, versus a mere 3m20s for the analytic 2d approach. using the exponentialdisk3d functions _does_ give what is formally a better model of the data than the analytic 2d-component fit, though most of the parameter values, in particular the radial and vertical scale lengths, are almost identical to the previous fit. the only notable changes are the sérsic component becoming rounder (with a different, and probably not very well defined, position angle) and the vertical profiles of both disk components becoming pure exponentials (the n = 50 values in table [tab:ic5176] are imposed limits; the best-fitting model converges to this limit for the outer, thick disk component, and ends up at a similarly exponential-like profile for the thin-disk component). the relative contributions of the three components are essentially unchanged: 1.8% of the flux from the sérsic component and 71.7% and 26.5% from the thin and thick disks, respectively. the reality is that the combination of the low spatial resolution of the irac1 image and the presence of residual structure in the disk midplane (probably due to a combination of spiral arms, star formation, and dust) means that we cannot constrain the vertical structure of the disk(s) very well.
a vertical profile which is best fit with a sech^{2/n} function when the disk is assumed to be perfectly edge-on can also be fit with a vertical exponential function if the disk is tilted slightly from edge-on. the low spatial resolution also means that the central bulge is not well constrained, either; the half-light radius of the sérsic component from either fit is only a few pixels and thus barely larger than the seeing.

in section [sec:statistics], i discussed two different practical approaches to fitting images from a statistical point of view: the standard, gaussian-based χ² statistic and poisson-based mle statistics (C and pmlr). the χ² approach can be further subdivided into the common method of using data values to estimate the per-pixel errors (χ²_d) and the alternate method of using values from the model (χ²_m). outside of certain low-s/n contexts (e.g., fitting x-ray and gamma-ray data), χ² minimization is pretty much the default. even in the case of low s/n, when the gaussian approximation to poisson statistics which motivates the χ² approach might start to become invalid, one might imagine that the presence of gaussian read noise in ccd detectors could make this a non-issue. is there any reason for using poisson-likelihood approaches outside of very-low-count, zero-read-noise regimes?

humphrey et al. used a combination of analytical approximations and fits of models to artificial data to show how χ² fits (using data-based or model-based errors) can lead to biased parameter estimation, even for surprisingly high s/n ratios; these biases were essentially absent when poisson mle was used. (humphrey et al. used C for their analysis, but minimizing pmlr would yield the same fits, as noted in section [sec:poisson].) a fuller discussion of these issues in the context of fitting x-ray data can be found in that paper and references therein. in this section, i focus on the typical optical-imaging problem of fitting galaxy images with simple 2d functions, and use the flexibility of imfit to explore how fitting poisson (or poisson + gaussian) data under different assumptions can bias the resulting fitted parameter values.

as a characteristic example, i consider a model galaxy described by a 2d sérsic function with an ellipticity of 0.5. this model is realized in three count-level regimes: a 'low-s/n' case with a sky background level of 20 counts/pixel; a 'medium-s/n' version which is equivalent to an exposure time (or telescope aperture) five times larger (background level = 100 counts/pixel); and a 'high-s/n' version with total counts equal to 25 times the low-s/n version (background level = 500 counts/pixel). these values are chosen partly to test how rapidly the gaussian approximation to poisson statistics becomes appropriate: 20 counts/pixel is often given as a reasonable lower limit for using this approximation, while for 500 counts/pixel the gaussian approximation should be effectively indistinguishable from true poisson statistics.

the images were created using code written in python. the first stage was generating a noiseless reference image (including subpixel integration, but not psf convolution). this was then used as the source for generating 500 'observed' images of the same size, using poisson statistics: for each pixel, the value in the reference image was taken as the mean of a poisson process (equation [eq:poisson]), and actual counts were (pseudo)randomly generated using code in the numpy package (`numpy.random.poisson`). for simplicity, the gain was set to 1, so 1 count = 1 photoelectron.
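the generation step is easy to reproduce; a minimal python sketch along the lines just described (using the modern numpy generator interface rather than the legacy `numpy.random.poisson` call, and with read noise as an option for the experiments discussed below) is:

```python
import numpy as np

def make_observed_images(reference, n_images=500, read_noise_e=None, seed=42):
    # reference: noiseless model image in counts/pixel, gain = 1
    # (so 1 count = 1 photoelectron); each output image is an independent
    # poisson realization, optionally with added gaussian read noise
    rng = np.random.default_rng(seed)
    images = []
    for _ in range(n_images):
        img = rng.poisson(reference).astype(float)
        if read_noise_e is not None:
            img += rng.normal(0.0, read_noise_e, size=reference.shape)
        images.append(img)
    return images
```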
the resulting images were then fit with imfit three times, always using the nelder-mead simplex method as the minimization algorithm. the first two fits used χ² statistics, either the data-based (χ²_d) or the model-based (χ²_m) approach, with the read noise set to 0; the third fit minimized C. (essentially identical fits are obtained when minimizing pmlr instead of C.) the fitted model consisted of a single 2d sérsic function with very broad parameter limits and the same starting parameter values for all fits (with the initial intensity value scaled by 5 for the medium-s/n images and by 25 for the high-s/n images), along with a fixed flatsky component for the background.

figure [fig:sersic-histograms] shows the distribution of best-fit parameters for fits to all 500 individual images in each s/n regime, with thick red histograms for the χ²_d fits, thinner magenta histograms for the χ²_m fits, and thin blue histograms for the poisson mle (C) fits, along with the true parameter values of the original model as vertical dashed gray lines. a clear bias in the χ² approaches can be seen in the fits to the low-s/n images (top panels of figure [fig:sersic-histograms]). for the χ²_d approach, the fitted values of n and r_e are too small, on average by 12.3% and 15.4%, respectively; the fitted values of I_e, on the other hand, are on average 34% too large. as can be seen from the figure, these biases are significantly larger than the spread of values from the individual fits. the overall effect also biases the total flux of the sérsic component, which is underestimated by 10.4% when using the mean parameters of the χ²_d fit; see figure [fig:lum-histograms]. the (model-based) χ²_m approach also produces biases, though these are smaller and in the opposite sense from the χ²_d biases: n and r_e are overestimated on average by 7.0% and 10.4%, respectively, while I_e is 15.6% too small; the total flux is overestimated by 6.5%. finally, the fits using C are _unbiased_: the histograms straddle the input model values, and the mean values from the fits differ from the true values by well under a percent. the other parameters of the fits (galaxy center, position angle, ellipticity) do not show any systematic differences, except for a very slight tendency of the ellipticity to be biased high in the χ²_d fit, and then only at a very low level. for the parameters which do show biases in the χ² fits, the trends are exactly as suggested by humphrey et al., including the fact that the χ²_m biases are smaller than, and have the opposite sign from, the χ²_d biases.
in the medium-s/n case (middle panels of the same figure), the bias in the χ²_d and χ²_m fits is clearly reduced: for the χ²_d fits, n and r_e are on average only 2.6% and 3.5% too small, while I_e is on average 6.4% too high (in the χ²_m fits, the deviations are 1.3% and 1.9% too large and 3.2% too small, respectively), though the bias is clearly still present, and in the same pattern. the biases in total flux are smaller, too: 2.3% low and 1.2% high for the data-based and model-based fits, respectively (figure [fig:lum-histograms]). these biases are even smaller in the high-s/n case: e.g., in the χ²_d case, n and r_e are 0.54% and 0.74% too small, while I_e is 1.3% too high. in both s/n regimes, the poisson mle fits remain unbiased.

what is the effect of adding (gaussian) read noise to the images? to investigate this, additional sets of images were prepared as before, except that the value from the poisson process was further modulated by adding a gaussian with mean = 0. (the gaussian width was chosen as a representative read noise for typical modern ccds; it is also roughly equal to the dispersion of the gaussian approximation to the _poisson_ noise of the background in the low-s/n limit.) the fits were done as before, with the read noise properly included in the fitting; the histograms of the resulting fits are shown in figures [fig:sersic-histograms] and [fig:lum-histograms] with dashed lines. what is clear from the figures is that while the addition of a gaussian noise term slightly reduces the bias in the χ² fits in the low-s/n regime, the bias is still present. and even though the poisson mle approach is no longer formally correct when gaussian noise is present, those fits remain unbiased in the presence of moderate read noise.

how large is the bias produced by χ² fits? humphrey et al. suggested that the absolute or relative size of the bias might not be as important as its size relative to the nominal statistical errors of the fits. there are, in principle, three different ways of estimating these errors: from the distribution of the fitted values for all 500 images (similar to what was done by humphrey et al. for their examples); from the mean of the individual-fit error estimates produced by the l-m algorithm; and from the mean of the individual-fit error estimates produced by bootstrap resampling. for this simple model, all three approaches produce very similar values. for example, fitting the images in χ²_d mode with the l-m algorithm produces estimated dispersions within a few percent of the dispersion of values from the individual χ²_d fits; the latter are in turn very similar to the dispersion of the individual C fits (as is evident from the similar histogram widths in figure [fig:sersic-histograms]). the errors estimated from bootstrap resampling also agree closely with the other estimates; see figure [fig:bootstrap] for a comparison of bootstrap and l-m error estimates for a fit to a single low-s/n image.

figure [fig:sersic-deviations] shows the biases for the χ²_d, χ²_m, and poisson mle fits, plotted against the background value for the different s/n regimes: the top panels show the deviations relative to the true parameter values, while the bottom panels show the deviations in units of the statistical errors (using the standard deviation of the 500 fitted values). the left and right panels show the cases for χ²_d and χ²_m fits, respectively, with the poisson mle fits shown in each panel for reference.
in all cases, there is a clear trend of the biases becoming smaller as the overall exposure level (represented by the mean background level) increases, asymptotically approaching the zero-bias case exhibited by the poisson mle fits. humphrey et al. derived an estimate for the bias (relative to the statistical error) that would result from fitting pure-poisson data using the χ²_d statistic, based on the total number of counts and the total number of bins (i.e., the total number of fitted pixels), and did the same for the χ²_m approach; in their notation the bias is (⟨t⟩ - t_0)/σ_t, where t_0 is the true parameter value, ⟨t⟩ is the mean fitted value, and σ_t is the statistical error on the parameter value. the estimates derived from their equations are plotted as dotted lines in figure [fig:sersic-deviations]. although the actual biases are systematically smaller than the predictions, the overall agreement is rather good.

is there any evidence that the χ²-bias effect is significant when fitting images of real galaxies? figure [fig:real-deviations] shows the differences seen when fitting single sérsic functions to images of three elliptical galaxies. in the first case, i fit cutouts from sdss images of ngc 5831 in three different bands; these images correspond to successively higher counts per pixel in both background and galaxy. (the cutouts, as well as the mask images, were shifted in x and y to correct for pointing offsets between the different images.) although color gradients may produce (genuinely) different fits for the different images, these should be small for an early-type galaxy like ngc 5831; more importantly, the bias estimates i calculate (see the next paragraph) are between the χ² and poisson mle fits for each individual band. in the second and third cases, i fit same-filter images with different exposure times: cutouts from short (15s) and long (60s) exposures of ngc 4697, obtained with the isaac newton telescope's wide field camera on 2004 march 17, and 15s and 40s exposures of ngc 3379, obtained with the prime focus camera unit of the int on 1994 march 14. all images were fit with a single sérsic function, convolved with an appropriate moffat psf image (based on measurements of unsaturated stars in each image). all fits were done with χ²_d, χ²_m, and poisson mle (C) minimization; the χ² fits included appropriate read-noise contributions (5.8 e− and 4.5 e− for the wfc and pfcu images, respectively).

unlike the case of the model images in the preceding section, the 'correct' sérsic model for these galaxies is unknown (as is, for that matter, the true sky background). thus, figure [fig:real-deviations] shows the differences between the best χ²-fit parameters and the parameters from the poisson mle fits, relative to the values of the latter, instead of the differences between all three and the (unknown) 'true' solution. the trends are nonetheless very similar to the model-image case (compare figure [fig:real-deviations] with the top panels of figure [fig:sersic-deviations]): values of n and r_e from the χ²_d fits are smaller, and values of I_e are larger, than the corresponding values from the poisson mle fits, and the offsets are reversed when χ²_m fitting is done. although there is some scatter, the tendency of the χ²_d offsets to be larger than the χ²_m offsets is present as well: in fact, the average ratio of the former to the latter is 1.99 (median = 1.65), which is strikingly close to the ratio of 2 predicted by humphrey et al.
even the fact that the χ²_d offsets are always larger than the χ²_m offsets replicates the pattern from the fits to the artificial-galaxy images. in addition, the offsets between the χ²-fit values and the poisson mle fits diminish as the count rate increases, as in the model-image case. if we make the plausible assumption that the higher-s/n images are more likely to yield accurate estimates of the true galaxy parameters (to the extent that the galaxies _can_ be approximated by a simple elliptical sérsic function), then the convergence of the estimated parameter values in the high-count regime strongly suggests that the poisson mle approach is the least biased in all regimes.

of course, for many typical optical and near-ir imaging situations, the count rates even in the sky background are high enough that differences between χ² and poisson mle fits can probably be ignored. for example, typical backgrounds in sdss images range between several tens and 200 adu/pixel, corresponding to several hundred photoelectrons/pixel or more (based on sdss dr7 fields); only for the lowest-background band does the level become low enough for the χ²-fit bias to become a significant issue.

a qualitative explanation for the bias is relatively straightforward (a more precise mathematical derivation can be found in, e.g., humphrey et al.). in the low-count regime, pixels with downward fluctuations from the true model will have significantly lower σ² values than pixels with similar-sized upward fluctuations; since the weighting for each pixel in a χ²_d fit is proportional to 1/σ², the downward fluctuations will have more weight, and so the best-fitting model will be biased downward. the χ²_m bias is slightly more complicated. here, one has to consider the effects of the different possible models being fitted, because the σ² values are determined by the _model_ values, not the data values. in the low-count regime, a model with slightly higher flux than the true model will have higher σ² values, which will in turn _lower_ the total χ²; a model with lower flux will have smaller σ² values, which will increase the total χ². the overall effect will thus be to bias the best-fitting model upward. figure [fig:bias-demo] provides a simplified example of how the two forms of bias operate, using a gaussian + constant background model and a small set of poisson 'data' points. the original (true) model is shown by the solid gray line, while the dashed red and blue lines show potential models offset above and below the correct model, respectively. the calculated χ²_d (left) and χ²_m (right) values for the offset models are also indicated, showing that the downward-offset model has the lowest χ²_d value of the three, while the upward-offset model has the lowest χ²_m value. in both cases, the bias is strongest when the mean counts are low, and so for sérsic fits it affects the outer, low-surface-brightness parts of a galaxy.
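the effect is easy to reproduce numerically; the toy python example below fits nothing at all, but simply evaluates χ²_d and χ²_m for a flat 'true' model and two offset versions of it (the max(d, 1) guard in the data-based statistic avoids division by zero for zero-count pixels):

```python
import numpy as np

rng = np.random.default_rng(7)
true_model = np.full(1000, 10.0)             # low-count regime
data = rng.poisson(true_model).astype(float)

def chi2_d(d, m):                            # data-based (neyman) weighting
    return np.sum((d - m)**2 / np.maximum(d, 1.0))

def chi2_m(d, m):                            # model-based (pearson) weighting
    return np.sum((d - m)**2 / m)

for offset in (-0.5, 0.0, +0.5):
    m = true_model + offset
    print(f"{offset:+.1f}  chi2_d = {chi2_d(data, m):8.1f}  "
          f"chi2_m = {chi2_m(data, m):8.1f}")
# typically the downward-offset model yields the smallest chi2_d, while
# the upward-offset model yields the smallest chi2_m
```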
in order to accommodate the downward bias of χ²_d fits, sérsic models with lower n and smaller r_e (fainter and steeper outer profiles) are preferred; the I_e value increases in compensation, to ensure a reasonable fit in the high-surface-brightness center of the galaxy, where the bias is minimal. the opposite trends hold for χ²_m fits.

it is important to note that the biases illustrated above apply to the specific cases of estimating gaussian errors from the data or model values on a pixel-by-pixel basis. they do _not_ necessarily apply when an external error map is used in the fitting, unless the error map is itself primarily based on the individual pixel values. error maps generated in other ways may have little or no effective bias. for example, the default behavior of galfit is to compute an error map by estimating a global background rms value from the image (after rejecting a fraction of high and low pixel values), combined with per-pixel values from a background-subtracted, _smoothed_ version of the data; the underlying rationale is that a constant background should have, in the gaussian approximation, a single σ value (chien peng, private communication). this approach has two advantages over the simpler method. first, smoothing the (background-subtracted) data before using it as the basis for σ estimation helps suppress the fluctuations which give rise to the bias, as has been demonstrated for the case of 1d spectroscopic data. second, the galfit version of the χ² statistic is similar in some respects to the so-called _modified_ neyman's χ²:

χ²_N,mod = Σ_i (d_i - m_i)² / max(d_i, 1),

where d_i is the per-pixel data value. in both cases, the effect of a fixed lower bound on the error term (the global background σ, or 1) is to transition from approximately poisson weighting of pixels when the counts are large to _equal_ weighting of pixels as the object counts approach zero (this also removes the problem of pixels with zero counts). this tends to weaken, though not eliminate, the bias in the low-count regime. a hint of this effect can even be seen in the top panels of figure [fig:sersic-histograms], where the addition of a constant read-noise term to the σ estimation reduces both the χ²_d and χ²_m biases.

figure [fig:sigma1-galfit-histograms] shows the distributions of n, r_e, and total luminosity for sérsic fits to the low-s/n model images of section [sec:model-images] for the χ²_d, χ²_m, and pmlr fits, along with fits to the same images using galfit (version 3.0.5). the distributions from the fits using galfit (black histograms) are almost identical to those from the pmlr fits using imfit (blue histograms). a very slight bias in the χ²_d sense, i.e., underestimation of sérsic n, r_e, and luminosity, can still be seen in the galfit results, but this is marginal and in any case much smaller than the dispersion of the fits.
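for reference, here is the modified neyman's χ² in python, together with a guessed sketch of a galfit-style σ map; the latter is only my reading of the description above (the smoothing function, clipping, and units are assumptions, not galfit's documented implementation):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def chi2_modified_neyman(data, model):
    # sigma_i^2 = max(d_i, 1): ~poisson weighting at high counts,
    # equal weighting as the counts approach zero
    return np.sum((data - model)**2 / np.maximum(data, 1.0))

def galfit_style_sigma(data_bgsub, sigma_sky, gain, smooth_size=5):
    # global background rms combined with a per-pixel term from a smoothed,
    # background-subtracted image (all details here are guesses)
    obj = np.maximum(uniform_filter(data_bgsub, size=smooth_size), 0.0)
    return np.sqrt(sigma_sky**2 + obj / gain)
```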
figure [fig:sigma1-galfit-histograms] shows the distributions of $n$, $r_e$, and total luminosity for sérsic fits to the low-s/n model images of section [sec:model-images] for the $\chi^2_d$, $\chi^2_m$, and pmlr fits, along with fits to the same images using galfit (version 3.0.5). the distributions from fits using galfit (black histograms) are almost identical to those from the pmlr fits using imfit (blue histograms). a very slight bias in the $\chi^2_d$ sense (i.e., underestimation of sérsic $n$, $r_e$, and luminosity) can still be seen in the galfit results, but this is marginal and in any case much smaller than the dispersion of the fits. similarly, some evidence for the same biases can be seen in published galfit simulations (e.g., häussler et al. 2007; hoyos et al. 2011), but these deviations are only visible at the very lowest s/n levels and are tiny compared to the overall scatter of best-fit values.

the alert reader may have noticed that the discussion of how the galfit approach reduces bias in the fitted parameters implies that _unweighted_ least-squares fits should be unbiased. so would it be better to forgo the various $\sigma$ estimation schemes entirely, and treat all pixels equally? figure [fig:sigma1-galfit-histograms] shows that fits to the same (simple sérsic) model images are indeed unbiased when all pixels are weighted equally (thick gray histograms). but the drawback of completely unweighted fitting is clear in the significantly larger dispersion of fitted results: unweighted fits are less _accurate_ than either the poisson mle or galfit approaches.

i have described a new open-source program, imfit, intended for modeling images of galaxies or other astronomical objects. key features include speed, a flexible user interface, multiple options for handling the fitting process, and the ability to easily add new 2d image functions for modeling galaxy components. images are modeled as the sum of one or more 2d image functions, which can be grouped into multiple sets of functions, each set sharing a common location within the image. available image functions include standard 2d functions used to model galaxies and other objects (e.g., gaussian, moffat, exponential, and sérsic profiles with elliptical isophotes), as well as broken exponentials, analytic edge-on disks, core-sérsic profiles, and symmetric (and asymmetric) rings with gaussian radial profiles. in addition, several sample "3d" functions compute line-of-sight integrations through 3d luminosity-density models, such as an axisymmetric disk with a radial exponential profile and a vertical profile. optional convolution with a psf is accomplished via fast fourier transforms, using a user-supplied fits image for the psf.

image fitting can be done by minimization of the standard $\chi^2$ statistic, using either the image data to estimate the per-pixel variances ($\chi^2_d$) or the computed model values ($\chi^2_m$), or by using user-supplied variance or error maps. fitting can _also_ be done using poisson-based maximum-likelihood estimators (poisson mle), which are especially appropriate for cases of images with low counts per pixel and low or zero read noise. this includes both the traditional cash statistic frequently used in x-ray analysis and an equivalent likelihood-ratio statistic (pmlr) which can be used with the fastest (levenberg-marquardt) minimization algorithm and can also function as a goodness-of-fit estimator. other minimization algorithms include the nelder-mead simplex method and differential evolution. confidence intervals for fitted parameters can be estimated by the levenberg-marquardt algorithm from its internal covariance matrix; they can also be estimated (with any of the minimization algorithms) by bootstrap resampling. the full distribution of parameter values from bootstrap resampling can also be saved to a file for later analysis.
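for concreteness, the cash statistic and the pmlr statistic just mentioned have compact standard forms; here is a minimal sketch (the zero-count convention $d \ln d \rightarrow 0$ and the residual form for levenberg-marquardt are standard, while the function names and array handling are mine):

```python
import numpy as np

def cash(data, model):
    """cash statistic: c = 2 * sum_i (m_i - d_i ln m_i).
    equal to -2 ln(poisson likelihood) up to a data-only constant."""
    return 2.0 * np.sum(model - data * np.log(model))

def pmlr(data, model):
    """poisson likelihood-ratio statistic:
    pmlr = 2 * sum_i (m_i - d_i + d_i ln(d_i / m_i)),
    with d_i ln d_i -> 0 for zero-count pixels.  it differs from the cash
    statistic only by a model-independent constant, but each term is >= 0
    and vanishes for a perfect fit, so it can feed a levenberg-marquardt
    minimizer via residuals r_i = sqrt(term_i) and can also serve as a
    goodness-of-fit measure."""
    d = np.asarray(data, dtype=float)   # assumes non-negative integer counts
    m = np.asarray(model, dtype=float)
    # max(d, 1) makes the d = 0 term exactly zero for integer count data
    terms = m - d + d * np.log(np.maximum(d, 1.0) / m)
    return 2.0 * np.sum(terms)
```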
a comparison of fits to artificial images of a simple sérsic-function galaxy demonstrates how the $\chi^2$ bias manifests itself when fitting images: fits which minimize $\chi^2_d$ result in values of the sérsic parameters $n$ and $r_e$ (as well as the total luminosity) which are biased low and values of $I_e$ which are biased high, while fits which minimize $\chi^2_m$ produce smaller biases in the opposite directions; as predicted, these biases decrease, but do not vanish, when the background and source intensity levels increase. fits using poisson mle statistics yield essentially _unbiased_ parameter values; this is true even when gaussian read noise is present. sérsic fits to images of real elliptical galaxies with varying exposure times or background levels show evidence for the same pattern of biased parameter values when minimizing $\chi^2_d$ or $\chi^2_m$. this suggests that the fitting of galaxy images with imfit should generally use poisson mle minimization instead of $\chi^2$ minimization whenever possible, especially when the background level (in photoelectrons per pixel) is low.

precompiled binaries, documentation, and full source code (released under the gnu public license) are available at the following web site: http://www.mpe.mpg.de/~erwin/code/imfit/

various useful comments and suggestions have come from maximilian fabricius, martin kümmel, and roberto saglia, and thanks are also due to michael opitsch and michael williams for being (partly unwitting) beta testers. further bug reports, suggestions, requests, and fixes from giulia savorgnan, guillermo barro, sergio pascual, and (especially) andré luiz de amorim are gratefully acknowledged. i also thank the referee, chien peng, for a very careful reading and pertinent questions which considerably improved this paper. this work was partly supported by the deutsche forschungsgemeinschaft through priority programme 1177, "witnesses of cosmic history: formation and evolution of galaxies, black holes, and their environment." this work is based in part on observations made with the _spitzer_ space telescope, obtained from the nasa/ipac infrared science archive, both of which are operated by the jet propulsion laboratory, california institute of technology under a contract with the national aeronautics and space administration. this paper also makes use of data obtained from the isaac newton group archive which is maintained as part of the casu astronomical data centre at the institute of astronomy, cambridge. funding for the creation and distribution of the sdss archive has been provided by the alfred p.
sloan foundation, the participating institutions, the national aeronautics and space administration, the national science foundation, the u.s. department of energy, the japanese monbukagakusho, and the max planck society. the sdss web site is http://www.sdss.org/. the sdss is managed by the astrophysical research consortium (arc) for the participating institutions. the participating institutions are the university of chicago, fermilab, the institute for advanced study, the japan participation group, the johns hopkins university, the korean scientist group, los alamos national laboratory, the max-planck-institute for astronomy (mpia), the max-planck-institute for astrophysics (mpa), new mexico state university, university of pittsburgh, university of portsmouth, princeton university, the united states naval observatory, and the university of washington.

table [tab:haggis] lists the best-fit parameters for three progressively more complex models of the spiral galaxy pgc 35772 (see section [sec:haggis-fit] and figure [fig:haggis]), along with both the levenberg-marquardt (l-m) uncertainties and the uncertainties derived from 500 rounds of bootstrap resampling (the latter listed in parentheses after the l-m uncertainties). a comparison of the two types of uncertainty estimates suggests they are similar in size for the simplest model (fitting the galaxy with just a sérsic component), with a median value of 0.81 for the ratio of bootstrap to l-m uncertainties. however, the bootstrap uncertainties are typically about half the size of the l-m uncertainties for the more complex models: the mean and median values for the uncertainty ratios are 0.48 and 0.51, respectively, for the sérsic + exponential model and 0.61 and 0.41 for the sérsic + gaussian-ring + exponential model.
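the bootstrap procedure behind those uncertainty estimates can be sketched generically; this toy 1d version (the straight-line model, the `fit_func` interface, and all names are illustrative stand-ins for a full image fit) shows the resampling logic:

```python
import numpy as np

def bootstrap_errors(x, y, fit_func, n_rounds=500, seed=0):
    """resample the data points with replacement, refit each resampled set,
    and return per-parameter standard deviations plus the full distribution
    of best-fit values (analogous to what imfit can save to a file)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    samples = []
    for _ in range(n_rounds):
        idx = rng.integers(0, n, size=n)      # indices drawn with replacement
        samples.append(fit_func(x[idx], y[idx]))
    samples = np.asarray(samples)
    return samples.std(axis=0), samples

# toy usage: 1-sigma uncertainties on a straight-line fit
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 200)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, x.size)
errs, dist = bootstrap_errors(x, y, lambda xs, ys: np.polyfit(xs, ys, 1))
print("bootstrap 1-sigma errors (slope, intercept):", errs)
```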
parameter estimates for multiple exposures of elliptical galaxies

in section [ref:biases-real] i compared different $\chi^2$ fits with poisson mle fits for several elliptical galaxies. in this section, i compare l-m and bootstrap parameter error estimates for two of the same elliptical galaxies (plus a third observed under similar conditions), always using poisson mle fits in order to avoid bias effects. specifically, i compare best-fit parameters from sérsic fits to multiple images of the same galaxy, in two ways.

first, i compare best-fit parameter values (e.g., sérsic index $n$) from fits to short (15s) exposures with values from fits to longer exposures with the same telescope + filter combination on the same night. i do this by comparing differences in parameter values with the error estimates for the same parameter from the short exposures (left panel of figure [fig:uncertainty-ratios]). this can be thought of as a crude answer to the question: how well do the error estimates describe the uncertainty of parameters from short-exposure fits relative to more "correct" parameters obtained from higher-s/n data? (i do not compare $I_e$ values because these can vary due to changes in the transparency between exposures; similarly, i do not compare values for the pixel coordinates of the galaxy center because these depend on the telescope pointing and are intrinsically variable.) to first order, if the uncertainty estimates were reasonably accurate, we should expect ~68% of the points to be found at $|\Delta| \leq 1\sigma$. as the left panel of the figure shows, neither approach is ideal, but the bootstrap estimates are somewhat better: 50% of those deviations are within $1\sigma$, while this is true for only 1/8 of the deviations if the l-m estimates are used.

the second approach, seen in the right-hand panel of figure [fig:uncertainty-ratios], is to compare the differences in parameter values obtained from similar samples (i.e., multiple images of the same galaxy with the _same_ exposure time) with the error estimates. this is, in a limited sense, a test of the nominal frequentist meaning of confidence intervals: how often do repeated measurements fall within the specified error bounds? in this case, i am comparing parameters from fits to the two 40s exposures of ngc 3379 (squares) and also parameters from fits to three 15s exposures of the lower-luminosity elliptical galaxy ngc 3377 (diamonds), also from the int-wfc. again, we should expect ~68% of the points to lie within $1\sigma$ if the error estimates are accurate; as the figure shows, this is not the case, and both the bootstrap and l-m estimates tend to be too small. the bootstrap estimates do a better job in the case of ngc 3379 and a worse job in the case of ngc 3377.

the implication of the preceding subsections is that the l-m and bootstrap estimates of parameter errors are very roughly consistent with each other, though there is some evidence that the latter tend to become smaller than the former as the fitted models become more complex (i.e., more components). in general, both the l-m and bootstrap estimates should probably be considered _underestimates_ of the true parameter uncertainties, something already established for l-m estimates from tests of other image-fitting programs.

comerón, s., elmegreen, b. g., knapen, j. h., salo, h., laurikainen, e., laine, j., athanassoula, e., bosma, a., sheth, k., regan, m. w., hinz, j. l., gil de paz, a., menéndez-delmestre, k., mizusawa, t., muñoz-mateos, j.-c., seibert, m., kim, t., elmegreen, d. m., gadotti, d. a., ho, l. c., holwerda, b. w., lappalainen, j., schinnerer, e., & skibba, r. 2011, apj, 741, 28

freeman, p., doe, s., & siemiginowska, a. 2001, in society of photo-optical instrumentation engineers (spie) conference series, vol. 4477, ed. j.-l. starck & f. d. murtagh, 76-87

häussler, b., mcintosh, d. h., barden, m., bell, e. f., rix, h.-w., borch, a., beckwith, s. v. w., caldwell, j. a. r., heymans, c., jahnke, k., jogee, s., koposov, s. e., meisenheimer, k., sánchez, s. f., somerville, r. s., wisotzki, l., & wolf, c. 2007, apjs, 172, 615

hoyos, c., den brok, m., kleijn, g. v., carter, d., balcells, m., guzmán, r., peletier, r., ferguson, h. c., goudfrooij, p., graham, a. w., hammer, d., karick, a. m., lucey, j. r., matković, a., merritt, d., mouhcine, m., & valentijn, e. 2011, mnras, 411, 2439

kümmel, m., koppenhoefer, j., riffeser, a., mohr, j., desai, s., henderson, r., paech, k., & wetzstein, m. 2013, in astronomical society of the pacific conference series, vol. 475, astronomical data analysis software and systems xxii, ed. d. n. friedel, 357
i describe a new, open-source astronomical image-fitting program called imfit, specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. a key characteristic of the program is an object-oriented design which allows new types of image components (2d surface-brightness functions) to be easily written and added to the program. image functions provided with imfit include the usual suspects for galaxy decompositions (sérsic, exponential, gaussian), along with core-sérsic and broken-exponential profiles, elliptical rings, and three components which perform line-of-sight integration through 3d luminosity-density models of disks and rings seen at arbitrary inclinations. available minimization algorithms include levenberg-marquardt, nelder-mead simplex, and differential evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. minimization can be done using the standard $\chi^2$ statistic (using either data or model values to estimate per-pixel gaussian errors, or else user-supplied error images) or poisson-based maximum-likelihood statistics; the latter approach is particularly appropriate for cases of poisson data in the low-count regime. i show that fitting low-s/n galaxy images using $\chi^2$ minimization and individual-pixel gaussian uncertainties can lead to significant biases in fitted parameter values, which are avoided if a poisson-based statistic is used; this is true even when gaussian read noise is present.