the past years orthogonal frequency division multiplexing access ( ofdma ) has been an important technique for broadband wireless communications .a major advantage of ofdma is its robustness in the presence of multi - path fading in cellular applications . in third generation partnership projectlong term evolution ( 3gpp - lte ) standard , the uplink access scheme is single - carrier frequency - division multiple access ( sc - fdma ) , a modified version of ofdma but having similar throughput performance and essentially the same overall complexity as ofdma .there are two approaches to assign users among channels . in localized sc - fdma ( lfdma ) , each user uses a set of adjacent channels to transmit data .the other is distributed sc - fdma in which the channels used by a user are spread over the entire channel spectrum .one realization of distributed sc - fdma is interleaved sc - fdma ( ifdma ) where the occupied channels for each user are equidistant from each other .currently , ifdma as well as lfdma has been investigated in 3gpp - lte for the uplink transmission .the trade - off on channel allocation between lfdma and ifdma is investigated in many literatures . in , song et.al .state that ifdma has less carrier frequency offset ( cfo ) interference but lfdma achieves more diversity gain . in myung et.al .find that lfdma with channel - dependent scheduling ( cds ) results in higher throughput than ifdma , whereas the peak to average power ratio ( papr ) performance of ifdma is better than that of lfdma .battery - powered equipments are increasingly employed by mobile users .it is of significance to consider minimum sum power ( min - power ) , subject to meeting demand target .the min - power problem for lfdma is proved to be -hard in .as far as our information goes , few literatures investigate the min - power problem in ifdma , while some heuristic algorithms for consecutive channel allocation are presented in . in this paper, we present a minimal power channel allocation ( ) algorithm for ifdma , which achieves global optimality in polynomial time . the min - power in ifdmais modeled as a combinatorial optimization problem .the rate function is not restricted to any particular one in order to stress the generality of the proposed approach .we compare mpca with the global optimal solution for lfdma , as same as in .our key contributions are as follows . *we show that the min - power for ifdma is polynomial - time solvable .* a polynomial - time algorithm mpca is developed to solve the min - power problem in ifdma .* numerically , we find that on min - power , lfdma outperforms ifdma in maximal supported user demand .when the user demand can be satisfied , lfdma performs slightly better than ifdma .this paper is organized as follows . in section [ sec: details ] , we introduce system model and min - power problem for ifdma . we further prove that min - power is polynomial solvable in ifdma .the algorithm s description and its pseudo - code are proposed in section [ sec : pseudo - code ] .numerical results are given in section [ sec : comparison ] .section [ sec : conclusion ] concludes this paper . 
& the users set + & the channels set + & the number of users + & the number of channels + & the sub block + & the set of allocated channels to user + & the set of all channel blocks + & the channel block identified by + & the number of allocated channels for each user + & the interspace size between neighbored sub blocks + & the shift distance of the first allocated channel + & the total length of the channel block + let and denote the sets of users and channels , respectively . for uplink , the users in send data concurrently to a base station .each user has a total power limit , denoted by .moreover , for a user , the power has to be equal on all allocated channels , subject to a given channel peak power limit .therefore , a user being allocated channels will use power at most on each channel .we assume that all users are allocated with the same number of channels .the number of allocated channels to each user is denoted by .the total number of allocated channels is . for ifdma ,the channels allocated to users are distributed equidistantly .an example for ifdma is illustrated in [ fig : example ] .the occupied segment of channels is up to three parameters , , and , shown in [ tab : math_table ] .the parameter ranges from to .we let be the interspace ranging from to .we represent the shift distance from the left end of the channels spectrum as that ranges from to , where is the length of the segment ( composed of channels and interspaces ) .we use the term ` channel block ' to denote the occupied segment of the spectrum , as illustrated in [ fig : example ] .if the parameters , and are fixed , then the corresponding channel block is determined .we divide each channel block into ` sub blocks ' , as respectively , with .the set of all allocated channels for any user in each , is denoted by , with . the total number of different channel blocks is denoted by .we use to represent the set of all the channel blocks , where each element is denoted by , respectively .all the possible channel blocks are obtained by the procedure .the two cases , and , should be treated differently .this is because we have only one sub - block when . in this case, is meaningless . in line 24 , we get the channel blocks with . in line 59 , we get the channel blocks with .all the possible channel blocks are saved as , as shown in line 3 and 8 .then there is one increase on the index variable for the next iteration , as shown in line 4 and 9 . in line 5, the total number of possible channel blocks , , is thus obtained .we remark that for any channel block , each user is assigned a unique number , representing the order in each .the interleaved channel allocation for all the users is corresponding to a permutation of integers to , i.e. , ] , which indicates that user appears in order , user in and user in order , in all the sub blocks and .it can be observed that the allocation for all users is a function of the permutation , denoted by .together , for each user , is a function of , denoted by . we give the definition of min - power problem in ifdma , where we consider to minimize the total uplink power required to support users target demand , denoted by for user .we use to denote the rate of user on channel with power . for the sake of not losing generality, we do not assume any specific power function .instead , we use to denote the minimum total power required to satisfy all users demand on channel block . 
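To make the block-level power quantity just defined concrete, the following sketch computes the minimum equal per-channel power a single user needs in order to meet its demand on a given set of channels, via the bisection search discussed below in the text. It is a hedged illustration rather than the paper's procedure: the rate function and the channel gains are hypothetical placeholders, and the only assumption carried over from the text is that the rate is monotonically increasing in the per-channel power.

```python
from typing import Callable, Sequence

def min_equal_power(demand: float,
                    gains: Sequence[float],
                    rate: Callable[[float, float], float],
                    p_max: float,
                    tol: float = 1e-9) -> float:
    """Smallest equal per-channel power p such that the summed rate over the
    user's allocated channels reaches `demand`; returns infinity if even the
    peak power p_max is insufficient (the allocation is infeasible for this user)."""
    def total_rate(p: float) -> float:
        return sum(rate(p, g) for g in gains)

    if total_rate(p_max) < demand:
        return float("inf")
    lo, hi = 0.0, p_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if total_rate(mid) >= demand:
            hi = mid
        else:
            lo = mid
    return hi

# Hypothetical usage with a Shannon-type rate and made-up channel gains.
if __name__ == "__main__":
    import math
    shannon = lambda p, g: math.log2(1.0 + g * p)   # placeholder rate function
    print(min_equal_power(demand=3.0, gains=[0.8, 1.2, 0.5],
                          rate=shannon, p_max=10.0))
```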
for each user , the power required to satisfy its demand represented as .specifically , the power of user on channel is denoted by . for that powerhas to be equal on all channels of user , , subject to and .thus , is the minimal power for the channel block .given , this minimization is straightforward ( e.g. , bi - section search assuming is monotonic in ) . if the power limits are not exceeded for all users , the allocation is feasible , otherwise the allocation is infeasible .then the power minimization problem is given , as follows .[ * min - power * ] for each feasible channel - user block minimizing by exploring all permutation ~~(\forall i~,1\leq v_i\leq m) ] .more specifically , any left node numbered is linked with the right node numbered in each matching .the weight for each link between and is set to be the negative value of the corresponding power cost , respectively .the minimal power cost is equal to the maximal sum of weights among all perfect matchings .the channel allocation problem is thereby identical to the maximum weight perfect matching in a bipartite graph , which can be solved by ( km ) algorithm in .the details of km are given in the appendix .in this section , we prove that for interleaved channel allocation , min - power is polynomial - time solvable for global optimality . for interleaved channel allocation ,min - power admits polynomial - time algorithm for global optimality . then is .we can achieve the global optimality by resorting to the enumeration method , i.e. , to check every possible channel block by running km once .the total cost for the whole process should be .in this section , we give the description and pseudo - code of optimal min - power algorithm for the interleaved case , . in , firstly the procedure is called to obtain all the channel blocks .the cost for this part is .then the weight for each corresponding matching between any user - channel pair is calculated for every channel block , as shown in line 25 , of which the cost is . since , we have .note that we flip the sign for the power cost so as to make sure the min - power to coincide with the maximum matching problem in the bipartite graph .the set is used to record the past solutions obtained from km algorithm , and is initialized to be empty at the beginning , in line 6 . in line 79 ,the km procedure is called for all the possible channel blocks and the total cost is . in line 8 ,the km returns a two tuple , where is the power value and is the corresponding permutation .finally the two - tuple that minimizes the value , is returned as the optimal solution .then the total computation cost of is . \gets-\sum_{j\in\mathcal{j}_i(\bm{\nu})}p_{i , j} ] >l_x[i] ] =\infty ] =[false,\ldots , false] ] * * false * * * and * ] \isequal ] \isequal ] =s[i]-d ] \gets ] * false * +l_y[y]-w[x , y] ] * * true * * \isequal -1 ] * true * >t ] * false *
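Putting the pieces together, a compact way to prototype the MPCA idea sketched above is to treat each candidate channel block as a K-by-K assignment problem: rows are users, columns are the order positions inside the block, and each cell holds the total power that user would spend if placed at that position. The sketch below minimizes cost directly with SciPy's Hungarian-algorithm routine, which is equivalent to the maximum-weight perfect matching on negated weights described in the text; it reuses the min_equal_power helper from the previous sketch. The block enumeration and the mapping from an order position to its channel indices are simplified, hypothetical stand-ins, since the exact indexing conventions of the original pseudo-code are not reproduced here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def block_channels(offset, n_users, m, spacing):
    """Hypothetical channel block: order position v (0..n_users-1) receives one
    channel in each of the m equally spaced sub-blocks starting at `offset`."""
    return [[offset + v + j * spacing for j in range(m)] for v in range(n_users)]

def block_cost(demands, gains, positions, rate, p_max):
    """K x K matrix: cost[i][v] = total power user i spends if placed at
    position v (equal power on each allocated channel, via min_equal_power)."""
    K = len(demands)
    cost = np.full((K, K), np.inf)
    for i in range(K):
        for v in range(K):
            chans = positions[v]
            p = min_equal_power(demands[i], [gains[c] for c in chans], rate, p_max)
            cost[i, v] = len(chans) * p
    return cost

def mpca_sketch(demands, gains, candidate_blocks, rate, p_max):
    """Enumerate the candidate blocks, solve one assignment problem per block,
    and keep the block/permutation with the smallest total power."""
    best_power, best_block, best_perm = np.inf, None, None
    for positions in candidate_blocks:
        cost = block_cost(demands, gains, positions, rate, p_max)
        if not np.isfinite(cost).all():
            continue                                # conservatively skip infeasible blocks
        rows, cols = linear_sum_assignment(cost)    # Hungarian (Kuhn-Munkres) step
        total = cost[rows, cols].sum()
        if total < best_power:
            best_power, best_block, best_perm = total, positions, dict(zip(rows, cols))
    return best_power, best_block, best_perm

# Hypothetical usage: 3 users, 2 channels each, 12 channels in total.
if __name__ == "__main__":
    import math
    rate = lambda p, g: math.log2(1.0 + g * p)
    gains = [0.5 + 0.1 * c for c in range(12)]
    blocks = [block_channels(off, n_users=3, m=2, spacing=6) for off in range(4)]
    print(mpca_sketch([2.0, 2.0, 2.0], gains, blocks, rate, p_max=20.0))
```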
Optimal channel allocation is a key performance engineering aspect in single-carrier frequency-division multiple access (SC-FDMA). Since mobile users typically employ battery-powered handsets, it is important to consider the minimum sum power (min-power) problem, subject to meeting the users' specified demand. In this paper, we prove that min-power is polynomial-time solvable for interleaved SC-FDMA (IFDMA). We then propose a channel allocation algorithm for IFDMA which is guaranteed to achieve the global optimum in polynomial time. We numerically compare the proposed algorithm with optimal channel allocation for localized SC-FDMA (LFDMA) with respect to min-power. The results show that LFDMA outperforms IFDMA in the maximal supported user demand. When the user demand can be satisfied in both LFDMA and IFDMA, LFDMA performs slightly better than IFDMA. However, min-power is polynomial-time solvable for IFDMA, whereas it is NP-hard for LFDMA.
multi time - scale modeling and study of singularly perturbed systems find application in model order reduction , optimal control , stochastic filtering and composite control etc .a two time scale singularly perturbed system consists of an interconnection of two dynamical systems referred as slow and fast subsystems .generally , we refer a singularly perturbed systems as standard models if there exists a unique root for the fast subsystem when the perturbation parameter goes to zero . whereas in nonstandard models ,fast system will have multiple roots or without any root .quadratic lyapunov function has been effectively utilized for stability analysis and controller design for singularly perturbed systems .for the purpose of analysis the overall system is separated into two reduced order models by setting the perturbation parameter to zero .stability of each reduced system is investigated by selection of two appropriate quadratic lyapunov functions .subsequently the convex sum of these two functions ( composite lyapunov function ) is employed to assure stability of the overall system .the resulting stability bounds are valid for certain range of the perturbation parameter depending on the interconnection conditions satisfied by the lyapunov functions . in addition to solving regulation problems quadratic lyapunov functionshave been efficient on examining closed loop stability of output feedback controllers , high gain feedback , dynamic surface control etc .however the composite lyapunov approach encounters complication in analyzing nonstandard models .indirect manifold construction with a modified composite control law is used for stabilization of nonstandard problems and references therein .nevertheless it is not always easy to search for two quadratic lyapunov functions which should satisfy all the interconnection conditions and moreover presence of uncertainties further complicates the search process .the condition for stability is a sufficient one and hence there is no guarantee of stability beyond the critical value of perturbation parameter .recently a differential form of stability analysis , namely contraction theory is proposed in . in agreement with this hypothesis all the trajectories of a contracting dynamical system exponentially converge towards each other irrespective of their initial condition .the region in the state space is called contraction region , if every trajectory starting inside the region will converge towards each other .contraction framework does not necessitates the presence of an attractor a priori , however a contracting autonomous system indirectly assures the presence of an equilibrium point .the exponential convergence property is inherently robust to bounded disturbances and hence easier to deal with uncertainties in system model .these interesting properties are utilized for analysis of mechanical systems , stability of networks , observer design , synchronization , kalman filter , frequency estimator design , backstepping controller synthesis etc .moreover contraction framework is extended to analyze singularly perturbed systems and its application to retroactive attenuation in biological systems .these results are employed for stabilization of approximate feedback linearizable systems . 
partial contraction analysis and robustness of contraction propertyis exploited to derive new stability bounds for singularly perturbed nonlinear systems in these works .the procedure is recursive and can be extended to three or multi time scale systems .the stability bounds obtained hold for a broad range of perturbation parameter rather than a small range found in quadratic lyapunov based methods. therefore contraction framework based analysis of singular perturbed system provides less conservative bounds compared to conventional lyapunov methods .+ in this paper we will show , how contraction theory tools can be adopted for stabilization problems in standard and nonstandard singularly- perturbed systems .the use of contraction tools completely circumvents the need of interconnection conditions and guarantees convergence behavior for a broad range of perturbation parameter .thus the proposed method provide guaranteed stability bounds for a wide range of perturbation parameter which is difficult to obtain using quadratic lyapunov based formalism .the design procedure is also extended to high gain scaling based control law for a class of approximate feedback linearizable systems .for these cases , parameter selection for controller and convergence analysis is guaranteed in the formalism of contraction theory .the method presented in this paper will complement the composite controller design approach when searching for quadratic lyapunov functions satisfying all the interconnection condition becomes difficult .+ the paper is organized as follows .the motivation and problem formulation is discussed in the first section followed by some discussion on contraction theory .stabilization of standard and nonstandard singularly perturbed systems are derived next .application of these results to high gain feedback controller design for approximate feedback linearizable systems is presented in subsequent section . finally in the last section we present simulation results for some examples .+ throughout this paper , we adopt the following notations and symbols . denote compact subsets , denotes a m - dimensional real vector space . for real vectors , denotes the euclidean norm and for real matrix denotes induced matrix norm .a metric denotes a symmetric positive definite matrix and is an identity matrix .the composite controller design approach for standard singularly perturbed systems is discussed through an example .the system is described as [ mot1 ] \ ] ] \ ] ] where is small positive number less than one .the design goal is to stabilize the system around the origin using composite lyapunov function based technique discussed in .the composite control law consists of a slow ( ) and a fast component ( ) which are selected in a recursive manner .we divide the procedure into three distinct steps mentioned below . + _stabilization of reduced slow system : _ the slow component is selected under the assumption that , there exists a slow manifold for the model and all the fast states have converged to this manifold .equating in , the root of the fast subsystem or the slow manifold is given by , note that , the fast component of the control law vanishes when .the slow component has to be selected in such a way that the reduced system will be stable .a control law will stabilize the reduced slow system around origin . 
for this choice of , a candidate lyapunov function will satisfy the following inequality + where .+ _ stabilization of boundary layer system : _ the boundary layer system for is written as : where .a choice of will achieve stability of boundary layer system using lyapunov function .the derivative of along the trajectories of boundary layer system will follow where and .+ _ interconnection conditions : _ overall system stability of is examined by selecting a composite lyapunov function which is a convex sum of and .moreover to assure asymptotic stability of , and must satisfy the following interconnection conditions . in the region ( ) given in , the choice of scalars will satisfy the interconnection conditions .the maximum value of perturbation parameter for which the system is asymptotically stable depends on the interconnection conditions and selection of composite lyapunov function .the maximum bound of perturbation parameter can be achieved by selecting a composite lyapunov function where . from above discussionit can be concluded that , composite control approach provides an elegant and step by step design for stabilization problems .the idea is to reduce the complexity of the overall system by converting it into two reduced order models and thereafter sensibly selecting two components of the control law for two reduced systems .the stability of the overall system hinges on the selection of lyapunov functions , and interconnection conditions .moreover the condition for stability is a sufficient one and the system may be stable beyond the maximum predicted range of .the closed loop system for is simulated for a choice of and the result is shown in figure 1 ., width=312,height=211 ] it is hard to conclude asymptotic stability of from the figure but it is clear that the closed loop system trajectories are converging in the neighborhood of origin .this behavior of trajectories can not be concluded from composite lyapunov function approach .a proportionate stability bound with respect to may give some relaxation to control design in many practical cases where a uniform ultimate boundedness is the design requirement .in high gain observer based output feedback design or high gain feedback based control designs , closed loop system stability is guaranteed only for a very small range of perturbation parameter .these restriction can be relaxed if the stability of the closed loop system can be assured for a wide range of perturbation parameter .+ also in presence of systems uncertainties , searching for lyapunov functions satisfying all the interconnection condition is a difficult task . in this paperwe are proposing a contraction theory framework for the stabilization of singularly perturbed system addressing these stability issues .the control algorithm proposed in this paper retains the idea of reducing the model order by equating the perturbation parameter to zero .however there is no need to search for lyapunov functions satisfying interconnection conditions . in this paper , we investigate the stabilization of in contraction theory framework .consider the standard singularly perturbed systems described as : [ mot3 ] where , and ] so the proposed controller can give a relative stability result depending on .+ suppose the z sub - system is disturbed by a uncertainty , which can be written as : where . from theorem 1, the unperturbed part of is partially contracting in . following boundcan be established using lemma 2 . 
from triangle inequality , latexmath:[\[\label{vir1 } assuming the initial condition for and are same , the bound for can be reformulated as : replacing with in we can obtain the steady state bounds for the overall system .there are certain cases when exponential convergence of the trajectories to equilibrium can be inferred rather than convergence to an ultimate bound .assume the right hand side of the z- subsystem is independent of , which can be written in the following form . [ ac1 ] where ] . to derive the results it is assumed that and where are positive constants .this assumption is not conservative in nature and is true for many cases such as flexible link manipulators .the usual backstepping method will not work for the class of systems considered here due to the presence of .the controller design algorithm is divided into two steps .the dynamical systems and are transformed into a singularly perturbed form through a high gain scaling .then a control law is selected to stabilize the transformed system . in the absence of , is in parametric strict feedback form .suppose a control law is selected as : \ ] ] \\ & \alpha_1=0\\ & \alpha_{i}=\frac{1}{b_{i-1}}[-g_{i-1}(x_1, .. x_{i-1 } ) + \sum_{k=1}^{i-2}{\frac{\partial{\alpha_{i-1}}}{\partial{z_{k}}}}\dot{z}_{k } ] \quad(\text{for } i \geq 2)\\ \end{split}\ ] ] the closed loop system is transformed into a brunovsky canonical form . {6}\\&0 & 0 & 0 & \dots & b_{m-1}\\&0 & 0 & 0 & \dots & 0\\\end{bmatrix},b=\begin{bmatrix}0\\0\\\dots\\0\\1\end{bmatrix}\\ ] is contracting .\ii ) }{\mu^2}\ \text{in } \ ( d_x\times d_z) ] are used for simulation .the peaking phenomenon observed in control law is due to the high gain feedback .it can be reduced by using a saturater of desirable magnitude .+ _ _ * remark 6*:__the stability guaranteed for a broad range of $ ] rather than a restrictive maximum value .this perspective gives more freedom in the choice of the high gain parameter as .in other words , the controller can achieve stabilization of the closed loop system using less control effort .however the contraction rate and error bounds will be different for different values of .the closed loop system trajectories is shown in figure [ example 1 ek ] for .another advantage of contraction based control design is that , the bound of error between fast subsystem and slow manifold , ( ) can be changed by changing the parameters of control law .the choice of matrix has a direct effect on the stability bounds because its maximum eigen value decides the steady state error for the closed loop system .a new approach for stabilization of singularly perturbed system is formulated based on contraction theory .the controller design formalism does not require any interconnection conditions .the trajectories of the closed loop system converge to an ultimate bound irrespective of the magnitude of perturbation parameter .moreover an exponential convergence of trajectories can also be achieved under certain restrictions .the proposed design framework is extended to develop a high gain based control law for approximate feedback linearizable systems .the design methodology presented here can assure ultimate boundedness of trajectories even if the lyapunov based bound on perturbation parameter is breached due to some design constraints .the methodology presented here provides some relaxation in the choice of high gain parameter and can be useful to high gain observers for nonlinear systems .99 khalil .h.k . , nonlinear systems " .newyork : macmillan , 1992 . 
kokotovic .p , khalil . h.k . and reilly .jsingular perturbation methods in control " .siam , 1986 .narang - siddarth , a. , valasek , j. ( 2014 ) nonlinear multiple time scale systems in standard and nonstandard forms : analysis and control .siam , philadelphia angeli .d , a lyapunov approach to incremental stability properties .ieee transactions on automatic control , 2002 .47(3):410 - 421 . f. forni and r. sepulchre , a differential lyapunov framework for contraction analysis , ieee trans . on auto .control , 59 ( 3 ) , pp .614628 , ( 2014 ) joufroy .j , some ancestors of contraction analysis , in proc . conference on decision and control , sevilla , spain , 2005 .w and slotine .e , on contraction analysis for nonlinear systems , automatica , 34(6):683 - 696 , 1998 .e. aylward , p. parrilo and j .- j .slotine , stability and robustness analysis of nonlinear systems via contraction metrics and sos programming , automatica , vol .44 , no . 8 , pp.2163 -2170 2008 e.d .sontag , contractive systems with inputs , perspectives in mathematical system theory , control , and signal processing , pages 217 - 228 .springer - verlag , 2010 sharma .b.b , kar .i.n , contraction theory - based recursive design of stabilising controller for a class of non - linear systems , iet control theory appl ., vol 4(6 ) , pp .1005 - 1018 , 2010 j. jouffroy and j. lottin , integrator backstepping using contraction theory : a brief technological note , in proc .ifac world cong . ,barcelona , spain , 2002 b. b. sharma and i. n. kar , contraction based adaptive control of a class of nonlinear systems , in proc .control conf . , jun .808 - 813 , 2009 w. wang and j.j .e. slotine , on partial contraction analysis for coupled nonlinear oscillators , biol .1 , pp . 3853 , jan .2005 joufroy .j and lottin .j , on the use of contraction theory for the design of nonlinear observers for ocean vehicles , in proc .american control conference , anchorage , alaska , pages 2647 - 2652 , 2002 .w , and slotine .e , control system design for mechanical systems using contraction theory , ieee trans . on auto .control , 45(5):984 - 989 , 2000a .e and wang .w , a study of synchronization and group cooperation using partial contraction theory , block island workshop on cooperative control .springer - verlag , 2003 .m zamani and p tabuada , backstepping design for incremental stability , ieee trans . on auto .control , vol .9,pp 2184 - 2189 , 2011 bb sharma and in kar , design of asymptotically convergent frequency estimator using contraction theory , ieee trans . on auto .control , vol .53 , no . 8 , pp 1932 - 1937 , 2008 .j. jouffroy and j. j. e. slotine , methodological remarks on contraction theory , ieee conf . on decision and control , atlantis , paradise island ,bahamas , pp.2537 - 2543 , 2004 .j. jouffroy , a simple extension of contraction theory to study incremental stability properties , in european control conference , cambridge , uk , 2003 .d. del vecchio and j .- j .e. slotine , a contraction theory approach to singularly perturbed systems , ieee transactions on automatic control , vol .752 - 757 , 2013 ir manchester , jje slotine , control contraction metrics : convex and intrinsic criteria for nonlinear feedback design arxiv preprint arxiv:1503.03144 d. del vecchio and j .- j .e. slotine , a contraction theory approach to singularly perturbed systems with application to retroactivity attenuation , in proc . of the 50th ieee conference on decision and control , orlando , forida ,usa , december 2011 , pp . 
5831 - 5836 .jun - won son and jong - tae lim , stabilization of approximately feedback linearizable systems using singular perturbation , ieee ttransactions on automatic control , vol .53 , no . 6 , july 2008 .david angeli , further results on incremental input - to - state stability , ieee ttransactions on automatic control , vol .54 , no . 6 , june 2009 .e. slotine and w. li , applied nonlinear control .englewood cliffs , nj : prentice hall , 1991 .b. yao and m. tomizuka , adaptive robust control siso nonlinear systems in a semi - strict feedback form , automatica , vol .893 - 900 , may 1997 .b.b.sharma and i.n.kar , adaptive control of wing rock system in uncertain environment using contraction theory , in proc .american control conference , seattle , washington , pp 2963 - 2968 , 2008 .ali saberi , h. k. khalil , `` quadratic - type lyapunov functions for singularly perturbed systems '' , ieee trans . automat .control , 29 ( 1984 ) , 542550 a. singh and h.k .regulation of nonlinear systems using conditional integrators .j. robust and nonlinear control , vol .15 , 339 - 362 , 2005 .a. saberi and h. khalil , `` stabilization and regulation of nonlinear singularly perturbed systems - composite control , '' ieee trans .ac-30 , no .739 - 746 , 1985 .
Recent developments in contraction-theory-based analysis of singularly perturbed systems have opened the door to inspecting the differential behaviour of multi-time-scale systems. In this paper, a contraction-theory-based framework is proposed for the stabilization of singularly perturbed systems. The primary objective is to design a feedback controller that achieves bounded tracking error for both standard and nonstandard singularly perturbed systems. This framework offers a relaxation over the traditional quadratic-Lyapunov-based method, since there is no need to satisfy interconnection conditions during controller design. Moreover, the stability bound does not depend on the smallness of the singular perturbation parameter. Combined with high-gain scaling, the proposed technique is shown to ensure contraction of approximate feedback linearizable systems. These findings extend the class of nonlinear systems which can be made contracting. Keywords: contraction theory, singular perturbation, high-gain feedback, approximate feedback linearizable systems, composite controller design.
in the past decades , there has been great interest in understanding and modelling different processes for information diffusion in a population .most of the time , the mathematical theory of epidemics is adapted for this purpose , even though there are differences between the process of spreading information and the process of spreading a virus or a disease . in the standard versions of the models , the most noticeable differences are between the way spreaders cease to spread an item of information and the way infected individualsare removed from epidemic processes .still , some slightly modified models fit both processes ( see , for example , , where the general stochastic epidemic model is considered as a model for the diffusion of rumours ) . recently introduced a model using a complete graph in which , as soon as an individual is infected , an anti - virus is given to that individual in such a way that the next time a virus tries to infect it , the virus is ineffective . besides , a virus can survive up to individuals empowered with anti - virus .individuals are represented by the vertices of the complete graph , while the virus is represented by a moving agent that replicates every time it hits a healthy individual .the authors prove a weak law of large numbers and a central limit theorem for the proportion of infected individuals after the process is completed .there are two classical models for the spreading of a rumour in a population , which were formulated by and . in the model proposed by ,a closed homogeneously mixing population experiences a rumour process .three classes of individuals are considered : ignorants , spreaders and stiflers .the rumour is propagated through the population by directed contact between spreaders and other individuals , which are governed by the following set of rules .when a spreader interacts with an ignorant , the ignorant becomes a spreader ; whenever a spreader contacts a stifler , the spreader turns into a stifler and when a spreader meets another spreader , the initiating spreader becomes a stifler . in the last two cases, it is said that the spreader was involved in a _ stifling experience_. observe that the process eventually ends ( when no more spreaders are left in the population ) .we show how the techniques used by in the context of epidemic models can be useful in studying a general rumour process .in particular , we propose a generalization of the maki - thompson model . in our model, each spreader decides to stop propagating the rumour right after being involved in a random number of stifling experiences . to define the process ,consider a closed homogeneously mixing population of size .let be a nonnegative integer valued random variable with distribution given by for , and let >0 ] . assign independently to each initially ignorant individual a random variable with the same distribution as .once an ignorant hears the rumour , the value of assigned to him determines the number of stifling experiences the new spreader will have until he stops propagating the rumour .if this random variable equals zero , then the ignorant joins the stiflers immediately after hearing the rumour . 
for , we say that a spreader is of type if this individual has exactly remaining stifling experiences .we denote the number of ignorants , spreaders of type and stiflers at time by , and , respectively .let be the total number of spreaders at time , so for all .notice that the infinite - dimensional process is a continuous time markov chain with increments and corresponding rates given by & ( 0,-1,0,0,\dots ) & & \left(n - x\right ) y_1 .& & \end{aligned}\]]we see that the first case indicates the transition of the process in which a spreader interacts with an ignorant and the ignorant becomes a stifler immediately ( which happens with probability ) .the second case indicates the transition in which a spreader interacts with an ignorant and the ignorant becomes a spreader of type ( which happens with probability ) .the third case represents the situation in which a spreader of type is involved in a stifling experience but remains a spreader ( of type ) , and finally the last transition indicates the event that a spreader of type is involved in a stifling experience , thus becoming a stifler .we suppose that the process starts with that is , ] given by we define as the unique root of in the interval ] .since and , we have that for large enough and in this case ( given by with ) goes to as .this completes the proof of corollary [ c : muinf ] .we now present the central limit theorem for the ultimate proportion of ignorants in the population .[ t : clt ] suppose that .assume also that or that and .then , where denotes convergence in distribution , and is the gaussian distribution with mean zero and variance given by observe that our results refer to a general initial condition , similar to that considered in the deterministic analysis presented in .the process starting with one spreader and ignorants corresponds to and , in which case the limiting fraction of ignorants and the variance of the asymptotic normal distribution in the clt reduce respectively to \sigma^2 = \frac{{x_{\infty}}(1 - { x_{\infty } } ) ( 1 - ( 1 + \mu - \nu^2 ) \ , { x_{\infty}})}{(1 - ( 1 + \mu ) \ , { x_{\infty}})^2}.\end{gathered}\ ] ] the behaviour of as a function of is shown in figure [ fig : proportion ] . .] here are some important cases : * ( a ) * for ( an integer ) , we have the -fold stifling maki - thompson model ( so called by , in the context of the daley - kendall model ) , for which where table [ tab : 1 ] exhibits the values of and in this case for . the original maki - thompson model is obtained by considering , and , consequently our theorems generalize classical results proved by and . for the -fold stifling maki - thompson model ,the asymptotic value of was originally obtained by .formula is presented in appendix d of belen s doctorate thesis . 
* ( b ) * let , that is , and , in this model , an ignorant always becomes a spreader upon hearing the rumour and each time a spreader meets another spreader or a stifler , he decides with probability to become a stifler , independently for each spreader and each meeting .thus , given that a spreader has not yet stopped propagating the rumour , the conditional distribution of the additional number of stifling experiences he will have does not depend on how many stifling experiences he already had .this means that every time a spreader chooses whether or not to become a stifler , he does not have a `` memory '' of how many unsuccessful telling meetings he has been involved in .table [ tab : 2 ] shows the values of and for , and some arbitrarily chosen values of . *( c ) * consider , in which case an ignorant individual has the choice ( with a positive probability equal to ) of becoming a stifler as soon as he learns the rumour .moreover , in his successive decisions about stifling , a spreader does have some `` memory '' of the number of his previous stifling experiences .table [ tab : 3 ] presents the values of and for , and some values of . . , and . [ cols="^,>,<,>,<,>,<,>,<,>,<,>,<,>,<,>,<",options="header " , ]here are the main ideas in the proofs of theorems [ t : lln ] and [ t : clt ] . first , by means of a suitable time change of the process , we define a new process with the same transitions as , so that they end at the same point of the state space .next , we work with a reduced markov chain obtained from in order to apply the theory of density dependent markov chains presented in .as the arguments follow a path similar to that presented in , we present only a brief sketch of the proofs .since the distribution of depends on the process only through the embedded markov chain , we consider a time - changed version of the process .let be the infinite - dimensional continuous time markov chain with increments and corresponding rates given by & ( 0,-1,0,\dots ) & & ( n-\tilde x ) \, \tilde y_1 \ , ( \tilde y)^{-1}. & & \end{aligned}\]]furthermore , can be defined in such a way that it has the same initial state and the same transitions as , so both have the same embedded markov chain .thus , by defining we have that in order to prove the desired limit theorems using theorem 11.2.1 of , we work with a reduced markov chain . we define and note that the process is a continuous time markov chain with increments and rates given by now we define , for , and consider notice that the rates in can be written as so is a density dependent markov chain with possible transitions in the set .now we use theorem 11.2.1 of to conclude that the process converges almost surely as to a deterministic limit .the drift function defined in by is in this case given by hence the limiting deterministic system is governed by the following system of ordinary differential equations with initial conditions and .the solution of this system is given by , where according to theorem of , we have that on a suitable probability space , uniformly on bounded time intervals . in particular , it can be proved that uniformly on .see lemma 3.6 in for an analogous detailed proof . to prove both theorems , we use theorem 11.4.1 of .we adopt their notations , except for the gaussian process defined on p. 
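Before turning to the proofs, the dynamics defined in the previous section can be checked by direct Monte Carlo simulation. The sketch below runs the embedded jump chain of the generalized Maki-Thompson process and returns the final fraction of ignorants; it is an illustrative numerical check, not part of the paper's argument. The uniform-contact rule ignores the minor self-exclusion issue in the exact transition rates, and the initial spreaders are simply given a freshly drawn (strictly positive) stifling count; both are assumptions of this sketch.

```python
import random

def simulate_final_ignorants(n_ignorants, sample_R, n_spreaders=1, rng=None):
    """One run of the generalized Maki-Thompson rumour process.

    n_ignorants : initial number of ignorants
    sample_R    : callable drawing the number of stifling experiences
                  assigned to a newly informed individual
    Returns the final fraction of ignorants in the population."""
    rng = rng or random.Random()
    ignorants = n_ignorants
    # remaining stifling experiences of each active spreader
    spreaders = [max(sample_R(), 1) for _ in range(n_spreaders)]
    stiflers = 0
    total = ignorants + len(spreaders) + stiflers

    while spreaders:
        i = rng.randrange(len(spreaders))            # initiating spreader
        if rng.random() < ignorants / (total - 1):   # target is an ignorant
            ignorants -= 1
            r = sample_R()
            if r > 0:
                spreaders.append(r)                  # becomes a spreader of type r
            else:
                stiflers += 1                        # stifler immediately after hearing
        else:                                        # stifling experience for the initiator
            spreaders[i] -= 1
            if spreaders[i] == 0:
                stiflers += 1
                spreaders[i] = spreaders[-1]         # swap-remove the new stifler
                spreaders.pop()
    return ignorants / total

# Example: empirical counterpart of the limiting ignorant fraction for the
# 2-fold stifling model (each spreader stops after 2 stifling experiences).
if __name__ == "__main__":
    rng = random.Random(0)
    runs = [simulate_final_ignorants(10_000, lambda: 2, rng=rng) for _ in range(20)]
    print(sum(runs) / len(runs))
```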
458 , that we would rather denote by .here , , and moreover , _ proof of theorem [ t : lln ] ._ we note that and imply that and for .then , the almost sure convergence of to uniformly on bounded intervals yields that in the case where and , this result is also valid because and still holds . on the other hand , if and , then for all , and again the almost sure convergence of to uniformly on bounded intervals yields that almost surely .therefore , as , we obtain theorem [ t : lln ] from and. _ proof of theorem [ t : clt ] . _ from theorem 11.4.1 of , we have that if or and , then converges in distribution as to the resulting normal distribution has mean zero , so , to complete the proof of theorem [ t : clt ] , we need to calculate the corresponding variance . to this end , we have to compute the covariance matrix , a task that can be accomplished using a mathematical software .the first step is to calculate the matrix of partial derivatives of the drift function and the matrix .we obtain and next , we compute the solution of the matrix equation which is given by hence , the covariance matrix of the gaussian process at time is obtained by the formula }^t \ , ds.\ ] ] as the final step to compute , we have to replace and in the formula obtained from by and , respectively .the resulting formulas are that , and well - known properties of the variance , we get formula .we have proposed a general maki - thompson model in which an ignorant individual is allowed to have a random number of stifling experiences once he is told the rumour .the assigned numbers of stifling experiences are independent and identically distributed random variables with mean and variance .we prove that the ultimate proportion of ignorants converges in probability to an asymptotic value as the population size tends to .a central limit theorem describing the magnitude of the random fluctuations around this limiting value is also derived .the asymptotic value and the variance of the gaussian distribution in the clt are functions of , and some constants related to the initial state of the process .we observe that in fact it is possible to obtain another result , concerning the mean number of transitions that the process makes until absorption .using an argument analogous to that presented in theorem 2.5 of , it can be proved that , if , then as a final remark , we would like to point out the usefulness of the theory of density dependent markov chains as a tool for studying the limiting behaviour of stochastic rumour processes .this approach constitutes an alternative to the pgf method and the laplace transform presented in , and .the authors are grateful to tom kurtz , alexandre leichsenring , nancy lopes garcia , pablo groisman and sebastian grynberg for fruitful discussions .thanks are also due to three reviewers for their helpful comments .kurtz , t.g ., lebensztayn , e. , leichsenring , a.r . , machado , f.p .limit theorems for an epidemic model on the complete graph .latin american journal of probability and mathematical statistics _ * 4 * , 4555 .
We propose a realistic generalization of the Maki-Thompson rumour model by assuming that each spreader ceases to propagate the rumour right after being involved in a random number of stifling experiences. We consider the process with a general initial configuration and establish the asymptotic behaviour (and its fluctuations) of the ultimate proportion of ignorants as the population size grows to infinity. Our approach leads to explicit formulas, so that the limiting proportion of ignorants and its variance can be computed.
hedging problem for contingent claims in incomplete markets is a centerpiece of mathematical finance . actually , many hedging methods for incomplete markets have been suggested . above all, we focus on mean - variance hedging ( mvh ) , which has been studied very well for about three decades .our aim of this paper is twofold : the first is to derive an explicit closed - form representation of mvh strategies for exponential additive models using malliavin calculus for levy processes .the second is to develop a simple numerical method for exponential lvy models , and to illustrate numerical results .we consider throughout an incomplete financial market in which only one risky asset and one riskless asset are tradable .let be the maturity of our market , and suppose that the interest rate of the riskless asset is for sake of simplicity .the risky asset price process , denoted by , is given as a solution to the following stochastic differential equation : ,\hspace{3mm}s_0>0,\ ] ] where , is a one - dimensional standard brownian motion , and is the compensated version of a homogeneous poisson random measure . here , and are deterministic measurable functions on ] .we assume , which ensures the positivity of .then , is given as an exponential of an additive process , that is , is continuous in probability and has independent increments .in addition , when and are given by a real number and a non - negative real number , respectively , and , we call an exponential lvy process. let be a square integrable random variable .we consider its value as the payoff of a contingent claim at the maturity . in principle , since our market is incomplete , we can not find a replicating strategy for , that is , there is no pair satisfying where is a set of predictable processes , which is considered as the set of all admissible strategies in some sense , and denotes the gain process induced by , that is , . note that each pair represents a self - financing strategy . instead of finding the replicating strategy, we consider the following minimization problem : ,\ ] ] and call its solution the mvh strategy for claim if it exists . in other words ,the mvh strategy defined as the self - financing one minimizing the corresponding -hedging error over .remark that gives the initial cost which is regarded as the corresponding price of , and represents the number of shares of the risky asset in the strategy at time .in addition to mvh strategy , locally risk - minimizing ( lrm ) strategy has been studied well as alternative hedging method in quadratic way .being different from the mvh approach , an lrm strategy is given as a replicating strategy which is not necessarily self - financing .thus , we need to take an additional cost process into account . roughly speaking ,an lrm strategy is defined as the one minimizing in the -sense the risk caused by such an additional cost process among all replicating strategies which are not necessarily self - financing . for more details , see schweizer and . as for expressions of lrm strategies ,arai and suzuki obtained an explicit form for lvy markets using malliavin calculus for lvy processes . here , lvy market is a similar model framework to ours , but the coefficient functions , and may have randomness .in other words , our model is a lvy market with deterministic coefficients .there is much literature on mvh strategies for jump type models . among others , arai , ern and kallsen , and jeanblanc et al . 
provided feedback - form representations of mvh strategies for general model frameworks using semimartingale approaches , duality approaches or backward stochastic differential equations , but their representations are not given concretely for concrete models . here , a representation of the mvh strategy is said to be feedback - form if it includes . moreover , lim considered a lvy market and gave a closed - form expression of , that is , an expression without using the values of up to time .however , he restricted to be bounded , and his expression is not an explicit form , since it includes solutions to backward stochastic differential equations . on the other hand , as researches on explicit representations for concrete models , hubalek et al . obtained a representation in feedback - form for exponential lvy models , and also their results have been extended to the additive process case and affine stochastic volatility models by goutte et al . , and kallsen and vierthauer , respectively .the discussion in is based on bilateral laplace transforms and the fllmer - schweizer ( fs ) decomposition , which is an expression of by the sum of a stochastic integral with respect to and a residual martingale .in addition , combining their theorems 3.1 and 3.3 , they also gave an explicit closed - form representation .as mentioned before theorem 3.3 in , an explicit closed - form representation might be preferred to one in feedback - form from a numeric analytical point of view , since is approximated with an involved recursive calculation , which is very time - consuming and entails a drop of the accuracy . for more details on this matter , see remark [ rem - rec ] .in addition , the closed - form representation obtained in is given as a direct extension of the feedback - form one using a general stochastic exponential , namely , it is also not appropriate for the development of numerical schemes . besides , their closed - form representation includes a stochastic integral with respect to the quadratic variation of which is not observable .therefore , we shall derive a different explicit closed - form representation for exponential additive models in order to develop a simple numerical method for which we need discrete observational data of alone .this is the first main purpose of this paper . in particular , we make use of results of , and represent in terms of malliavin calculus for lvy processes .furthermore , we rely on the argument of , which is based on a different decomposition of from the fs decomposition . as one more advantage of our representation ,path - dependent options are covered as seen in examples [ ex1 ] and [ ex3 ] , while excluded them . as the second goal of this paper, we shall develop a numerical method to compute for non - zero given ] so far as we know .the most difficulty on the development of numerical methods for mvh strategies lies in the fact that for given ] ; and its coordinate mapping process , that is , a one - dimensional standard brownian motion with . denotes the canonical lvy space for a pure jump lvy process on ] , where for ] .note that \times\bbr_0)^0 ] be the canonical filtration completed for .let be a square integrable centered lvy process on represented as denoting by the poisson random measure defined as , and ] , and is a deterministic jointly measurable function on \times\bbr_0 ] .now , we assume throughout this paper the following : [ ass1 ] 1 . for any \times\bbr_0 ] for some .there exists an such that [ rem1 ] 1 . 
under assumption[ ass1 ] , ( [ sde ] ) has a solution satisfying the so - called structure condition ( sc ) , that is , has the following three properties : 1 . is a semimartingale of the space , that is , a special semimartingale with the canonical decomposition such that ^{1/2}+\int_0^t|da_s|\r\|_{l^2(\bbp)}<\infty,\ ] ] where and .we have .the mean - variance trade - off process is finite , that is , is finite -a.s .+ + the sc is closely related to the no - arbitrage condition . for more details on the sc , see and .the process as well as is continuous . in particular, is deterministic .( [ eq - s2 ] ) implies that }|s_t|\in l^2(\bbp) ] . to see item 1 , we define ] .now , we calculate as follows : \\ & = \exp\l(\int_0^t\lambda_uda_u\r)\bbe_{\tp}\l[\cale_t\l(-\int_0^\cdot\lambda_uds_u\r)\big|\calf_t\r ] \\ & = \exp\l(\int_0^t\lambda_uda_u\r)\cale_t\l(-\int_0^\cdot\lambda_uds_u\r)=\bbe[(d^*)^2]\cale_t\l(-\int_0^\cdot\lambda_uds_u\r)\end{aligned}\ ] ] for any ] from the view of item 3 of assumption [ ass1 ] , we have , from which item 3 follows .the essence of the above proof lies in the fact that ) is deterministic .general speaking , when is deterministic , the variance - optimal martingale measure becomes as well the minimal martingale measure , which is defined as an equivalent martingale measure under which any square - integrable -martingale orthogonal to remains a martingale .actually , example 2.8 of showed that is the minimal martingale measure .note that the minimal martingale measure is essential in the lrm approach .now , we prepare some notation for later use .the girsanov theorem implies that is a one - dimensional standard brownian motion under .moreover , denoting and for ] and .one of our aims in this paper is to obtain a closed - from representation of mvh strategies in terms of malliavin calculus for lvy processes .now , we prepare some notation and definitions with respect to malliavin calculus .we adapt the canonical lvy space framework undertaken by , which is a malliavin calculus on the lvy process given in ( [ eq - x ] ) .first of all , we define measures and on \times\bbr ] and is the dirac measure at . for ,we denote by the set of product measurable , deterministic functions \times\bbr)^n\to\bbr ] . now, we define a malliavin derivative operator as follows : 1 .let denote the set of random variables with satisfying .2 . for any ,\times\bbr\times\omega\to\bbr ] , -a.s .let be a random variable representing the payoff of a claim to hedge .in addition to assumption [ ass1 ] , we assume throughout the following : [ ass2 ] 1 .2 . and . under assumptions[ ass1 ] and [ ass2 ] , example 3.9 of implies that is described as +\int_0^ti_tdw^{\tp}_t+\int_0^t\int_{\bbr_0}j_{t , z}\tn^{\tp}(dt , dz),\ ] ] where ] for ] .we derive in this section a closed - from expression of the mvh strategy for claim in terms of malliavin calculus .as mentioned in introduction , the mvh strategy for is defined as a pair which minimizes .\ ] ] the following theorem is shown in subsection 3.1 , and some examples will be introduced in subsection 3.2 . [main - thm ] under assumptions [ ass1 ] and [ ass2 ] , the mvh strategy for claim is represented in closed - form as \ ] ] and where and as shown in , defined in ( [ eq - lrm ] ) represents the lrm strategy for claim , that is , for each ] , where , . under assumption[ ass1 ] , is a solution to the following stochastic differential equation : where .supposing assumption [ ass2 ] additionally , we have , by theorem [ main - thm ] where . 
as seen in , the condition guarantees assumption [ ass2 ] for options introduced in subsection 3.2 .in addition , introduced a numerical method to compute , , and for the case where is a call option . _ step 1 : _ obtained a similar feedback - form representation to for more general discontinuous semimartingale models . in , he defined a new decomposition of , which is different from the fs decomposition .remark that treated , instead of ( [ eq - mvh ] ) , the following minimization problem : .\ ] ] now , we introduce an outline of the argument in .recall that proposition [ prop1 ] holds true under assumption [ ass1 ] .this fact ensures that our setting satisfies assumption 1 of .thus , the solution to ( [ eq - mvh2 ] ) exists , and we denote it by . we define a new probability measure as as shown in ( 4.7 ) of , admits the following decomposition : +g_t(\hetah)+\hnh_t,\ ] ] where , and is a -martingale with . here is represented as \ ] ] with a square integrable -martingale .note that the processes and both are -martingales .remark that the decomposition ( [ eq - new ] ) is neither the kunita - watanabe one nor the fs one in our setting .furthermore , ( 4.5 ) in provides that is given by \lambda_t\cale_{t-}+\hlh_{t-}\lambda_t\tz_{t-}.\ ] ] _ step 2 : _ replacing with the constant in ( [ eq - mvh2 ] ) , we consider the following minimization problem : .\ ] ] letting , we have , =0 ] .thus , we obtain } \\ & = \bbe[(g_t(\vt)-g_t(\hvt^1))^2]+2\bbe[\cale_t(1-g_t(\vt))]-\bbe[(1-g_t(\hvt^1))^2 ] \\ & \geq 2\bbe[\cale_t]-\bbe[(1-g_t(\hvt^1))^2]=\bbe[(1-g_t(\hvt^1))^2]\end{aligned}\ ] ] for any , which means that is the solution to ( [ proj1 ] ) .hence , theorem 4.2 of hou and karatzas implies that ] for any ) ] , for some predictable processes and .since the product process is a martingale under , we get for any ] , which yields and solving the simultaneous equation ( [ eq - sim ] ) on by using ( [ eq - nu * ] ) and ( [ eq - ortho ] ) , we have ( [ eq - lrm ] ) implies that coincides with , which is the lrm strategy for .consequently , we get as well as noting that we obtain from ( [ eq - qr ] ) , ( [ eq - tnq ] ) and ( [ eq - hlh ] ) that _ step 5 : _ from the view of ( [ eq - tvth ] ) together with ( [ eq - hlh2 ] ) , we calculate as follows : we have and thus , from ( [ eq - xiw ] ) and ( [ eq - xin ] ) , we obtain this completes the proof of theorem [ main - thm ] .[ ex1 ] we consider two representative options as contingent claims to hedge : call options and asian options for . in order to obtain explicit representations of for such options, we have only to show expressions of and from the view of ( [ eq - main - thm ] ) .in addition to assumption [ ass1 ] , we assume the following condition : which ensures assumption [ ass2 ] as seen in sections 4 and 5 of . for and ] and . 
for an exponential lvy model introduced in example [ ex2 ] ,section 6 of implies that , under assumption [ ass1 ] and the condition , , \\ j_{t , z } & = & \bbe_{\tp}\l[\l(\sup_{u\in[0,t]}\l(s_ue^{z{\bf 1}_{\{t\leq u\}}}\r)-k\r)^+-(m^s - k)^+\big|\calf_{t-}\r ] , \end{array}\r.\ ] ] where |s_t\vee s_{t-}=m^s\} ] for exponential lvy models , and illustrate in subsection 4.1 some numerical results .to our best knowledge , no numerical methods for the values of at non - zero time ] is depending on not only but also the whole trajectory of from to .however , it is impossible to observe the trajectory of continuously from a practical point of view .accordingly , we compute approximately using discrete observational data , , where and for .remark that we divide the time interval ] .moreover , we approximate using a recursive calculation as and for , which is a discretization of a stochastic exponential .[ rem - rec ] an approximation of using a feedback - form expression is basically given as a discretization of a general stochastic exponential , defined as a solution to the following type of stochastic differential equation : where and are semimartingales .theorem v.52 of implies that , if is continuous , then is given as )\r\},\ ] ] which is much more complicated than ordinary stochastic exponentials . as a result, a recursive calculation for a discretization of is involved in contrast to ( [ eq - cale ] ) , which means that feedback - form expressions are not appropriate to develop an approximation method for . [ rem - quad] it is almost impossible to develop a similar approximation method to ( [ approx ] ) using the closed - form expression obtained by , since their expression , which is given as a general stochastic exponential , includes a stochastic integral with respect to the quadratic variation of , which we can not observe directly .we focus on the case where the process , is given as a variance gamma process , and is a call option with . note that a variance gamma process is defined as a time - changed brownian motion subject to a gamma subordinator . in summary , represented as \,,\end{aligned}\ ] ] where , , is a one - dimensional standard brownian motion , and is a gamma process with parameters for .its lvy measure is then given as where note that has no brownian component , that is , in example [ ex2 ] is given by . as a result , the approximation ( [ approx ] ) for is simplified as where recall that and are computed with the fft - based scheme developed in .we consider european call options on the s&p 500 index matured on 19 may 2017 , and set the initial date of our hedging to 20 may 2016 .we fix to .there are 251 business days on and after 20 may 2016 until and including 19 may 2017 .thus , for example , 20 may 2016 and 23 may 2016 are corresponding to time and , respectively , since 20 may 2016 is friday .we compute the values of mvh strategies on 10 november 2016 .since 10 november 2016 is the 121st business day after 20 may 2016 , letting , we compute the values of .remark that is constructed on 9 november 2016 .thus , it is computed using 121 dairy closing prices of s&p 500 index on and after 20 may 2016 until and including 9 november 2016 as discrete observational data . 
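Before the market-data experiment, it may help to see the two numerical ingredients above in code: sampling a variance gamma price path as a Brownian motion time-changed by a gamma subordinator, and the recursive discretization of the stochastic exponential along discretely observed prices. The sketch below is illustrative only: the parameter names and values, the constant placeholder for the process lambda, and the drift handling are assumptions rather than the paper's calibrated setup, and none of the FFT-based computations of the Malliavin terms are reproduced.

```python
import numpy as np

def simulate_vg_price(s0, sigma, theta, kappa, mu, T, n_steps, rng):
    """Price path S_t = s0 * exp(mu * t + X_t), with X a variance gamma process
    X_t = theta * G_t + sigma * W(G_t); G is a gamma subordinator with unit mean
    rate and variance rate kappa.  Parameter names and values are illustrative."""
    dt = T / n_steps
    dG = rng.gamma(shape=dt / kappa, scale=kappa, size=n_steps)   # subordinator increments
    dX = theta * dG + sigma * np.sqrt(dG) * rng.standard_normal(n_steps)
    t = np.linspace(0.0, T, n_steps + 1)
    X = np.concatenate([[0.0], np.cumsum(dX)])
    return s0 * np.exp(mu * t + X)

def discrete_stochastic_exponential(prices, lam):
    """Recursive approximation E_k of the stochastic exponential of -int(lambda dS)
    from discretely observed prices; lam[k] is a deterministic lambda at t_k.
    The sign convention is an assumption made for this sketch."""
    E = np.empty(len(prices))
    E[0] = 1.0
    for k in range(1, len(prices)):
        E[k] = E[k - 1] * (1.0 - lam[k - 1] * (prices[k] - prices[k - 1]))
    return E

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S = simulate_vg_price(s0=2000.0, sigma=0.13, theta=-0.15, kappa=0.2,
                          mu=0.0, T=1.0, n_steps=250, rng=rng)
    lam = np.full(len(S), 1e-4)     # placeholder deterministic lambda
    print(S[-1], discrete_stochastic_exponential(S, lam)[-1])
```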
as contingent claims to hedge, we consider call options with strike price 1500 , 1550 , , 2500 .in addition , we set model parameters as , , and , which are calibrated by the data set of european call options on the s&p 500 index at 20 april 2016 .note that the above parameter set was used in arai and imai , and satisfies assumptions [ ass1 ] and [ ass2 ] .figure [ fig1 ] shows the values of .the computation time to obtain the 21 values of on figure [ fig1 ] is 28.85 s , which indicates that we achieve fast computation using an fft - based method , nevertheless computation for mvh strategies is time - consuming in general .in addition , we compute the values of , which is the difference between the values of the mvh and lrm strategies .figure [ fig2 ] shows that the differences are very small , more precisely , the absolute values of are no more than 0.0025 . note that our numerical experiments are carried out using matlab ( 9.0.0.341360 r2016a ) on an intel core i7 3.4 ghz cpu with 16 gb 1333 mhz ddr3 memory .00 t. arai , an extension of mean - variance hedging to the discontinuous case , finance stoch ., 9 ( 2005 ) , pp.129139 .t. arai and y. imai , on the difference between locally risk - minimizing and delta hedging strategies for exponential lvy models , preprint .t. arai , y. imai and r. suzuki , numerical analysis on local risk - minimization for exponential levy models , int .finance , 19 ( 2016 ) , 1650008 .t. arai and r. suzuki , local risk - minimization for levy markets , int .j. financ .eng . , 2 ( 2015 ) , 1550015 .f. benth , g. di nunno , a. lkka , b. ksendal and f. proske , explicit representation of the minimal variance portfolio in markets driven by lvy processes , math .finance , 13 ( 2003 ) , pp.5572 .a. ern and j. kallsen , on the structure of general mean - variance hedging strategies , ann .probab . , 35 ( 2007 ) ,t. choulli , l. krawczyk and c. stricker , -martingales and their applications in mathematical finance , ann .probab . , 26 ( 1998 ) , pp.853876 . c. de franco , p. tankov and x. warin , numerical methods for the quadratic hedging problem in markov models with jumps , j. comput .finance , 19 ( 2015 ) , pp.2967 .s. goutte , n. oudjane and f. russo , variance optimal hedging for continuous time additive processes and applications , stochastics 86 ( 2014 ) , pp.147 - 185 .c. hou and i. karatzas , least - squares approximation of random variables by stochastic integrals , in stochastic analysis and related topics in kyoto , h. kunita , s. watanabe and y. takahashi ed ., mathematical society of japan , tokyo , 2004 , pp.141 - 166. f. hubalek , j. kallsen and l. krawczyk , variance - optimal hedging for processes with stationary independent increments , ann ., 16 ( 2006 ) , pp.853 - 885 . j. jacod and a. shiryaev , limit theorems for stochastic processes , 2nd eds . ,springer , berlin , 2003 .m. jeanblanc , m. mania , m. santacroce and m. schweizer , mean - variance hedging via stochastic control and bsdes for general semimartingales , ann .probab . , 22 ( 2012 ) , pp.2388 - 2428 . j. kallsen and r. vierthauer , quadratic hedging in affine stochastic volatility models , rev. derivatives res ., 12 ( 2009 ) , pp.3 - 27 .lim , mean - variance hedging when there are jumps , siam j. control optim ., 44 ( 2006 ) , pp.1893 - 1922 .p. protter , stochastic integration and differential equations , 2nd eds , springer , berlin , 2004 .m. 
schweizer , a guided tour through quadratic hedging approaches , in handbooks in mathematical finance : option pricing , interest rates and risk management , e. jouini , j. cvitanic and m. musiela ed . , cambridge university press , cambridge , 2001 , pp.538574 .m. schweizer , local risk - minimization for multidimensional assets and payment streams , banach center publ ., 83 ( 2008 ) , pp.213229 .j. l. sol , f. utzet and j. vives , canonical lvy process and malliavin calculus , stochastic process .appl . , 117 ( 2007 ) , pp.165187 .
|
we derive an explicit closed - form representation of mean - variance hedging strategies for models whose asset price follows an exponential additive process . our representation is given in terms of malliavin calculus for lévy processes . in addition , we develop an approximation method to compute mean - variance hedging strategies for exponential lévy models , and illustrate numerical results . * keywords : * mean - variance hedging , additive processes , malliavin calculus , fast fourier transform . + * ams 2010 subject classification : * 91g20,60h07,91g60 .
|
einstein s field equations for general relativity predict the existence of closed timelike curves ( ctcs ) in certain exotic spacetime geometries , but the bizarre consequences lead many physicists to doubt that such time machines could exist .closed timelike curves , if they existed , would allow particles to interact with their former selves , suggesting the possibility of grandfather - like paradoxes in both classical and quantum theories .physicists have considered the ramifications of closed timelike curves for quantum mechanics by employing path - integral approaches in an effort to avoid contradictions .deutsch showed that closed timelike curves also have consequences for classical and quantum computation , and he suggested imposing a self - consistency condition on the density matrix of a ctc qubit in order to avoid grandfather - like paradoxes . since deutsch s seminal work ,quantum information theorists have produced a flurry of results under his model .they have shown that deutschian closed timelike curves ( d - ctcs ) can help solve np - complete problems , that a d - ctc - assisted classical or quantum computer has computational power equivalent to that of pspace , that a d - ctc - assisted quantum computer can perfectly distinguish an arbitrary set of non - orthogonal states , that evolutions of chronology - respecting qubits can be a discontinuous function of the initial state , and that it is not possible to purify mixed states of qubits that traverse a d - ctc while still being consistent with interactions with chronology - respecting qubits .the result of brun _et al_. concerning state distinguishability is perhaps the most striking for any firm believers in unitarity , considering that a d - ctc - assisted quantum computer can violate both the uncertainty principle and the holevo bound .since these findings , bennett _et al_. questioned the above results of aaronson and watrous and brun _et al_. on d - ctc - assisted computation and distinguishability , respectively .they showed that the circuits of aaronson _et al_. do not operate as advertised when acting on a classically - labeled mixture of states and argued that this implies their circuits become impotent . in their work , they exploited _ linear _ mixtures of states to suggest that the aforementioned authors fell into a linearity trap .but recent papers cast doubt on the claims of bennett _et al_. and come to the same conclusion as aaronson and watrous and brun _et al_. a first paper tracks the information flow of quantum systems in a d - ctc with a heisenberg - picture approach , and another paper shows how a density matrix description is not valid in a nonlinear theory .further work revisits deutsch s self - consistency conditions , showing that they are concealing paradoxes from an observer rather than eliminating them as they should .these dramatically differing conclusions have to do with the ontological status of quantum states , which , for the most part , is not a major concern in standard linear quantum mechanics , but clearly leads to differing results in a nonlinear quantum mechanics .recently , a different model of closed timelike curves has emerged , based on bennett and schumacher s well - known but unpublished work on postselected quantum teleportation .this alternative theory features a postselected closed timelike curve ( p - ctc ) , which is physically inequivalent to a d - ctc . 
sending a qubit into the past by a p - ctc is somewhat like teleporting the qubit s state .normally , states can only be teleported forward in time , because the receiver requires a measurement outcome from the sender in order to recover the state . by somehow postselecting with certainty on only a single measurement outcome, however , this requirement is removed .postselection of quantum teleportation in this fashion implies that an entangled state effectively creates a noiseless quantum channel into the past .p - ctcs have the benefit of preserving correlations with external systems , while also being consistent with path - integral formulations of ctcs .et al_. have proven that the computational power of p - ctcs is equivalent to that of the complexity class pp , by invoking aaronson s results concerning the power of quantum computation with postselection . in this paper , we show that the same result can be derived from a different direction : by invoking the ideas of ref . to eliminate invalid answers to a decision problem by making them paradoxical .one can exploit this particular aspect of p - ctcs to give explicit constructions of p - ctc - assisted circuits with dramatic computational speedups .our first result is to show that one can postselect with certainty the outcome of any generalized measurement using just one p - ctc qubit .et al_. state that it is possible to perform any desired postselected quantum computation with certainty with the help of a p - ctc system , but they did not explicitly state that it requires just one p - ctc qubit .next , we discuss a difference between d - ctcs and p - ctcs , in which the existence of a future p - ctc might affect the outcome of a present experiment. this observation might potentially lead to a way that one could test for a future p - ctc , by noticing deviations from expected probabilities in quantum mechanical experiments . further results concern state distinguishability with p - ctc - assisted circuits .we begin by showing that the swap - and - controlled - hadamard circuit from ref . can perfectly distinguish and when assisted by a p - ctc ( recall that this circuit can distinguish and when assisted by a d - ctc ) .we show that the circuit from ref . for distinguishing the bb84 states , , , and when assisted by a d - ctc can not do so when assisted by a p - ctc .the proof of theorem [ thm : p - ctc - state - distinguish ] then constructs a p - ctc - assisted circuit , similar to the general construction from ref . , that can perfectly distinguish an arbitrary set of linearly independent states .the proof offers an alternate construction that accomplishes this task with just one p - ctc qubit , by exploiting the generalized measurement of ref . 
and the ability of a p - ctc to postselect with certainty on particular measurement outcomes .the theorem also states that no p - ctc - assisted circuit can perfectly distinguish a set of linearly dependent states .bennett and smith both suggested in private communication that such a theorem should hold true .the theorem implies that a p - ctc - assisted circuit can not beat the holevo bound , so that their power is much weaker than that of a d - ctc - assisted circuit for this task .we then discuss how different representations of a quantum state in p - ctc - assisted circuits lead to dramatically differing conclusions , even though they give the same results in linear quantum mechanics .our final set of results concerns the use of p - ctc - assisted circuits in certain computational tasks .we first show that a p - ctc - assisted circuit can efficiently factor integers without the use of the quantum fourier transform .we then generalize this result to a p - ctc - assisted circuit that can efficiently solve any decision problem in the intersection of np and co - np .our final construction is a p - ctc - assisted circuit for probabilistically solving any problem in np .all of our circuits can accomplish these computational tasks using just one p - ctc qubit .these circuits exploit the idea in ref . of making invalid answers paradoxical , which yields results that are surprisingly similar to aaronson s construction in ref . concerning the power of postselected quantum computation .we structure this paper as follows . the next section briefly reviews the p - ctc model , and we prove that a single qubit in a p - ctc allows postselecting on any measurement outcome with certainty .we also discuss an important difference between d - ctcs and p - ctcs and provide an example to illustrate this difference .section [ sec : p - ctc - distinguish ] presents our results for p - ctcs and state distinguishability , and section [ sec : p - ctc - compute ] presents our results for p - ctcs in certain computational tasks .we end by summarizing our results .we first briefly review the theory of p - ctcs .a p - ctc - assisted circuit operates by combining a chronology respecting qubit in a state with a chronology - violating qubit and interacting them with a unitary evolution . after the unitary, the chronology - respecting qubit proceeds forward in time while the chronology - violating qubit goes back in time .the assumption of the model is that this evolution is mathematically equivalent to combining the state with a maximally entangled bell state where there is then a unitary interaction between the cr qubit and half of the entangled state .the final step is to project the two systems of the entangled state onto the state , renormalize the state , and trace out the last two systems .the renormalization induces a nonlinearity in the evolution .this approach is the same as the controversial final state projection method from the theory of black hole evaporation .figure [ fig : pctc - operation ] depicts the operation of a p - ctc . as pointed out by lloyd _et al_. 
, the action of a unitary on a joint system consisting of a chronology - respecting pure state and a ctc system is as follows ( before renormalization): implying the following evolution for a mixed state ( before renormalization): where the fourth equality follows from the above development for pure states .thus , the induced map on the chronology - respecting state is as follows ( after renormalization): where there is always the possibility that the operator is equivalent to the null operator , in which case lloyd _et al_. suggest that the evolution does not happen .this result is perhaps strange , suggesting that somehow the system interacting with the p - ctc is annihilated .an explanation for what could happen resorts to potential imperfections in the unitary interaction .there is only a paradox for the evolution if the overlap of the ctc qubit with the final projected state is identically zero . in practice, evolutions do not occur with arbitrary precision , so that the p - ctc - assisted circuit magnifies errors dramatically , and unlikely influences outside the system of interest could intervene before the circuit can create a paradox .occurs with certainty .we adopt the nomenclature `` postselection with certainty '' in order to make this point clear . ]p - ctcs allow us to postselect with certainty the outcomes of a generalized measurement .suppose the generalized measurement consists of measurement operators .suppose that we would like to postselect the measurement in such a way so that outcome 0 definitely occurs .we can perform the generalized measurement by appending an ancilla of dimension at least , in state , to the system , which we assume to be in a state .the initial state is thus .we then perform a unitary that has the following effect: ( this is the standard construction for a generalized measurement . )now we do a second unitary , from the ancilla to the p - ctc qubit .this unitary is as follows: where the third operator in the tensor product acts on the p - ctc qubit .this construction makes every outcome except paradoxical , and measuring the ancilla in the standard basis postselects so that the resulting state is we can postselect on any subset of the measurement outcomes by varying the projectors in . for example , we can postselect by accepting any measurement outcome except . to do this, we would use the following unitary as the last one: there is an important difference between d - ctcs and p - ctcs that follows straightforwardly from their definitions .recall that deutsch s self - consistency condition requires that the density matrix of the d - ctc system after an interaction be equal to the density matrix before the interaction . in this way , deutsch designed d - ctcs explicitly to replicate exactly the predictions of standard quantum mechanics in the absence of ctcs .that is , before a ctc comes into existence , or after it ends , quantum mechanics behaves exactly as usual .p - ctcs , by contrast , act in a way equivalent to `` postselection with certainty , '' and they specifically rule out evolutions that lead to a paradox .this implies that the probabilities of measurement outcomes can be altered _ even in the absence of ctcs _, if ctcs _ will _ come into existence in the future . 
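the induced map just described is easy to check numerically. the sketch below is ours, not code from the literature: it forms the operator obtained by tracing the interaction unitary over the ctc system, applies the nonlinear renormalized map, and, for a cnot whose control is the chronology-respecting qubit and whose target is the ctc qubit, shows that every |1> branch of the control becomes paradoxical — the mechanism behind the example discussed next.

```python
import numpy as np

def induced_ctc_operator(U, dim_sys, dim_ctc):
    """C = sum_i <i|_CTC U |i>_CTC acting on the chronology-respecting system.

    U is a (dim_sys*dim_ctc) x (dim_sys*dim_ctc) unitary with ordering
    system (x) CTC, i.e. joint index = sys_index * dim_ctc + ctc_index.
    """
    U4 = U.reshape(dim_sys, dim_ctc, dim_sys, dim_ctc)
    return np.einsum('aibi->ab', U4)   # partial trace over the CTC indices

def pctc_map(rho, C):
    """Nonlinear induced map rho -> C rho C^dag / tr(C rho C^dag), if the trace is nonzero."""
    out = C @ rho @ C.conj().T
    tr = np.trace(out).real
    if np.isclose(tr, 0.0):
        raise ValueError("paradoxical evolution: C rho C^dag has zero trace")
    return out / tr

if __name__ == "__main__":
    # CNOT: chronology-respecting qubit is the control, CTC qubit is the target.
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
    C = induced_ctc_operator(CNOT, 2, 2)      # proportional to |0><0|
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
    rho_out = pctc_map(np.outer(plus, plus.conj()), C)
    print(np.round(C, 3))
    print(np.round(rho_out, 3))               # -> |0><0|: the |1> branch is made paradoxical
```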
in principlethis means that the possibility of ctcs could be tested indirectly , by looking for deviations from standard quantum probabilities , a fact that was also pointed out by hartle in ref .needless to say , it is far from obvious how to do such a test in practice .the bizarre behavior of nonlinear quantum mechanics would seem to cast doubt that ctcs can exist in the real world .we offer a simple example to illustrate the idea in the previous paragraph .suppose we have systems , , and , where is a p - ctc qubit .we prepare and in a maximally entangled state and measure in the pauli basis .then we perform a cnot from qubit to qubit .this circuit leads to a paradox if the result of measuring is , so it must be ( equivalently , one can check that the transformation induced by the p - ctc is ) .but now consider what happens if we move the preparation and measurement of before the p - ctc comes into existence .there are two possibilities : 1. the usual rules of quantum mechanics apply , and the probabilities of and are equal .if the result is , we avoid a paradox by magnifying tiny deviations from the exact unitaries , or other external effects to prevent the cnot from happening .2 . the certain postselection forces the measurement result on to be , even though the p - ctc does not exist yet .option 2 is perhaps more natural in this ideal noiseless setting , and it also matches the qualitative results found by hartle using path integrals .it is interesting that the system does not have to interact directly with the ctc in order for this effect to occur .we begin this section by discussing some simple examples , and we then prove a general theorem that states that a p - ctc - assisted circuit can perfectly distinguish an arbitrary set of linearly independent states and can not do so if the states are linearly dependent .this section ends with a discussion of how these circuits act on different ontological representations of a quantum state .our first circuit in figure [ fig : b92 ] distinguishes from and can thus break the security of the bennett-92 protocol for quantum key distribution .the circuit consists of a cascade of a swap gate followed by a controlled hadamard , where so that the first qubit upon which the unitary acts is the system qubit , and the second one is the ctc qubit . after tracing over the ctc system ( as prescribed in ( [ eq : pctc - transform ] ) ) , we get the following transformation this transformation then gives if we input and if we input ( after renormalization ) .interestingly , this same circuit distinguishes the antipodal states and when assisted by a d - ctc . we can generalize the above example to find a p - ctc - assisted circuit that can distinguish two arbitrary non - orthogonal states . without loss of generality ,suppose that the two states we are trying to distinguish are and where .we would like to build the following transformation: so that and ( after renormalization ) .we follow the same prescription as above and exploit the following unitary : we use a cascade of a swap and a controlled- where so that the cascade is as follows: after tracing out the ctc system , we get which is the desired transformation . the d - ctc - assisted circuit presented in ref . for distinguishing bb84 states is not able to distinguish these same states when assisted by a p - ctc .in fact , the orthogonality relations of the bb84 states remain the same after going through the p - ctc - assisted circuit . the transformation induced by the circuit in ref . 
is as follows under the p - ctc model: it is perhaps striking that the transformation takes on this form , considering that the transformation in ref . takes , , , and .one can check that the output states of the above transformation are as follows ( after renormalization): these states have the same orthogonality relations as the original input states , and there is thus no improvement in distinguishability .this result leads us to the main theorem of the next section .we now state a general theorem regarding state distinguishability and p - ctcs .one of our constructions in the proof has similarities with the general construction in ref . for distinguishing an arbitrary set of non - orthogonal states with a d - ctc - assisted circuit .[ thm : p - ctc - state - distinguish]there exists a p - ctc - assisted circuit that can perfectly distinguish an arbitrary set of linearly independent states , but a p - ctc - assisted circuit can not perfectly distinguish a set of linearly dependent states .we present two constructions with the first requiring an -dimensional p - ctc system , while the second requires only one p - ctc qubit .our first construction is similar to the construction in ref . that uses d - ctc qubits to distinguish states .consider a particular vector in the set .arbitrary superpositions of all the other vectors besides this one outline a hyperplane of dimension because these states form a linearly independent set on their own , and let also refer to this hyperplane .we can not write the vector as an arbitrary superposition of the other states in the set because all the states in it are linearly independent: for each hyperplane , there is a normal vector such that it follows that it not so , then would lie in hyperplane , which contradicts the assumption of linear independence .we would like to have a circuit that implements the following transformation: such a transformation acts as follows ( after renormalization ) on any state in the linearly independent set: the output of this transformation is then distinguishable with a von neumann measurement .we now explicitly construct a unitary that implements the above transformation after tracing over the ctc system .it is a cascade of a qudit swap gate and a particular controlled unitary gate ( a generalization of our examples from before ) .the qudit swap gate is as follows: and the controlled unitary gate is where we choose each unitary above so that and its action on other basis states besides is not important .then the cascade of these gates gives we finally trace out the ctc system to determine the actual transformation on the chronology - respecting system: this last step proves that the construction gives the desired transformation .there is another construction which can accomplish the same task with just one p - ctc qubit . 
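the hyperplane construction admits a compact numerical check (ours, not the paper's code): stacking the linearly independent states as the columns of a matrix, the rows of its inverse are proportional to the normal vectors described above, so the induced operator maps each state, after renormalization, to a distinct computational basis state. for the b92 pair |0> and |+> the two rows are proportional to <-| and <1|, so the pair is sent to orthogonal outputs.

```python
import numpy as np

def distinguishing_operator(states):
    """Given linearly independent column-vector states, return an operator C
    whose row j annihilates every state except psi_j, so that C|psi_j> is
    proportional to |j> (the normal-vector construction of the theorem)."""
    M = np.column_stack(states)     # columns are the states
    return np.linalg.inv(M)         # row j is orthogonal to every psi_k with k != j

def apply_and_renormalize(C, psi):
    out = C @ psi
    return out / np.linalg.norm(out)

if __name__ == "__main__":
    zero = np.array([1, 0], dtype=complex)
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
    C = distinguishing_operator([zero, plus])
    print(np.round(apply_and_renormalize(C, zero), 6))   # -> |0>
    print(np.round(apply_and_renormalize(C, plus), 6))   # -> |1>
```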
by choosing the povm that distinguishes any set of linearly independent states , and ruling out result ( which is i do not know ) , we can construct a p - ctc - assisted circuit of the form in section [ sec : review ] that can distinguish any set of linearly independent states with just one p - ctc qubit .this circuit performs the transformation so that the states at the output of the circuit are perfectly distinguishable with a von neumann measurement .we now prove the other part of theorem [ thm : p - ctc - state - distinguish]that linearly - dependent states are not perfectly distinguishable with a p - ctc - assisted circuit .consider an arbitrary unitary that acts on the chronology - respecting system and the ctc system .we can decompose it as follows: with respect to some basis for the ctc system .tracing out the ctc system gives the transformation that the p - ctc - assisted circuit induces now suppose that a p - ctc can distinguish a state from in the sense that and after renormalization .then consider a linearly dependent state where we can write for some .then , by linearity of the transformation before renormalization , it follows that where and are non - zero normalization constants .after renormalization , this state is not distinguishable from or by any measurement .this proof generalizes easily so that any p - ctc transformation would not be able to distinguish a general set of linearly dependent states . as an afterthought, the above theorem demonstrates that the power of a p - ctc - assisted circuit is rather limited in comparison to a d - ctc - assisted one .a d - ctc - assisted circuit can violate the holevo bound , but a p - ctc - assisted one can never violate it because one can never have more than linearly independent states in dimensions . of course , if the receiver has access to a p - ctc , that will raise the classical capacity of certain channels , since it increases the ability to distinguish states beyond that of ordinary quantum mechanics .the theorem also implies that a p - ctc - assisted circuit can not break the security of the bb84 or sarg04 protocols for quantum key distribution , though a p - ctc will increase the power of the eavesdropper to a certain degree .these results might lend further credence to the belief that p - ctcs are a more reasonable model of time travel because their information processing abilities are not as striking as those of d - ctcs ( even though they still violate the uncertainty principle ) .the operation of a d - ctc - assisted circuit on a labeled mixture of states is controversial and can lead to dramatically different conclusions depending on how one interprets such a labeled mixture .a similar phenomenon happens with p - ctcs as we discuss below .let us consider a general ensemble of non - orthogonal , linearly independent states . in linearquantum mechanics , this ensemble has a one - to - one correspondence with the following labeled mixture: where the states are an orthonormal set .suppose the preparer holds on to the label and sends the system through the transformation from theorem [ thm : p - ctc - state - distinguish ] . 
a first way to renormalize would be to act with the p - ctc transformation on each state in the ensemble and renormalize each resulting state .this procedure assumes that the classical labeling information is available , in principle .this process leads to the output ensemble , which has a one - to - one correspondence with the following labeled mixture: so that the systems on and are now classically correlated according to the distribution .another method for renormalization leads to a drastically different result . considering the labeled mixture as a true density matrix and acting on this state with the transformation from theorem [ thm : p - ctc - state - distinguish ] gives after renormalization ,the state is as follows: where the systems on and are classically correlated again , but the distribution for the correlation can be drastically different if the overlap is not uniform over .the interpretation of the above result is bizarre : the p - ctc - assisted circuit changes the original probabilities of the states in the mixture , in spite of the fact that the preparer generated these probabilities well before the p - ctc even came into existence .let us examine a third scenario .consider the following purification of the state in ( [ eq : labeled - mixture]): suppose that alice sends the system through the p - ctc - assisted circuit .acting on this state with the transformation from theorem [ thm : p - ctc - state - distinguish ] gives renormalizing the last line above leads to the following state: where the coefficients can generally be complex , and this state is different from the other outcomes illustrated above because there is quantum interference .though , this state is a purification of the state in ( [ eq : strange - state ] ) , so that the resulting state is the same as in ( [ eq : strange - state ] ) if we discard the system . what should we make of these differing results ? in standard quantum mechanics , there are three concepts that are indistinguishable from each other : 1 .an ensemble of pure states ( classical ignorance ) ; 2 .an entangled state where subsystem is assumed to be inaccessible ; 3 . a density matrix .the first is what despagnat called a proper mixture , and the second is what he called an improper mixture .the density matrix is a mathematical object introduced to summarize the observable consequences of either of the other two .one could imagine such a thing as a true density matrix an intrinsically mixed state that does not represent either classical ignorance or entanglement though it is not clear what such an object would mean , physically . however , some researchers suggest that density matrices , rather than pure states , should be the fundamental objects in quantum theory . for instance , mixed states arise naturally in deutsch s approach to ctcs .these three different ontological representations of a quantum state are all indistinguishable in standard quantum mechanics because it is a linear theory . but in a nonlinear version of quantum mechanics( as we get using either d - ctcs or p - ctcs ) we have no reason to expect them to behave the same , and they do not .this is the major criticism that ref. levies against ref .et al_. use labeled mixtures to represent classical ensembles , which is not necessarily justifiable in nonlinear quantum mechanics . 
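to make the divergence concrete before turning to what one should do about it, here is a small numerical illustration (ours, under the assumption that the induced operator has unit-norm rows, as in the hyperplane construction): for a uniform three-state ensemble, renormalizing each member separately leaves the label distribution untouched, while renormalizing the labeled density matrix once reweights the labels by the squared overlaps, which are not uniform in this example.

```python
import numpy as np

def normalized_dual_operator(states):
    """C = sum_j |j><s_j| with each <s_j| a unit-norm normal vector to the
    span of the other states (rows of the inverse matrix, normalized)."""
    Minv = np.linalg.inv(np.column_stack(states))
    return Minv / np.linalg.norm(Minv, axis=1, keepdims=True)

if __name__ == "__main__":
    psi = [np.array([1, 0, 0], dtype=complex),
           np.array([1, 1, 0], dtype=complex) / np.sqrt(2),
           np.array([1, 1, 1], dtype=complex) / np.sqrt(3)]
    p = np.array([1 / 3, 1 / 3, 1 / 3])
    C = normalized_dual_operator(psi)

    # (a) proper mixture: transform each pure state separately; each maps to |j>,
    #     so the label distribution stays the preparer's distribution p.
    proper = p.copy()

    # (b) "true density matrix": apply C to the mixture and renormalize once;
    #     the labels are reweighted by the squared overlaps |<s_j|psi_j>|^2.
    weights = p * np.array([np.linalg.norm(C @ v) ** 2 for v in psi])
    true_dm = weights / weights.sum()

    print("proper-mixture labels :", np.round(proper, 3))
    print("density-matrix labels :", np.round(true_dm, 3))
```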
so what should be done ?if the labeled mixture represents classical ignorance about how the system was prepared , that implies that , in principle , information exists which would specify a pure state .one should therefore apply the calculation separately to each state in the ensemble , and then combine them in a new ensemble . if the output state is random ( e.g. , the result of a measurement ) , one gets the probabilities of the new ensemble using the bayes rule .this is the first approach in ( [ eq : individual - renormalize ] ) .if the mixture is really part of an entangled state , one should apply the calculation to the purification of the state and then trace out the inaccessible subsystem as in ( [ eq : purified - calculation ] ) .this procedure will , in general , give a different answer , as ( [ eq : purified - calculation ] ) demonstrates . finally ,if there are such objects as true density matrices , one can calculate with them directly .this is what we do in ( [ eq : strange - state ] ) , and it gives the same answer as tracing out the reference system of the purified state in ( [ eq : purified - calculation ] ) .also , bennett _et al_. assume that a labeled mixture of states is a true density matrix , and this assumption is what leads them to conclude that d - ctcs are impotent in refs . . in summary , the first ontological representation of a quantum state as a proper mixture leads to a dramatically different conclusion for p - ctc - assisted circuits than the second and third ontological representations ( which both lead to the same conclusion ) .we also note that it is the same with d - ctc - assisted circuits ( the first representation leads to differing conclusions than the second / third ) , but the states output by a d - ctc - assisted circuit are different from those output by a p - ctc - assisted one .lloyd _ et al_. prove that the computational power of quantum computers and p - ctcs is equivalent to pp ( probabilistic polynomial time ) , whereas the computational power of deutsch s ctcs is pspace .the proof is simple since we can simulate any postselected measurement with p - ctcs , and can simulate p - ctcs with postselected measurements , the two paradigms have equivalent computational power . since aaronson proved that quantum mechanics with postselection has computational power pp , pctcs indeed have computational power equivalent to that of pp .pp is a rather powerful computational class , including ( for example ) all problems in np .it is known to be contained in pspace , however , and is generally believed to be less powerful .it is instructive to outline explicit p - ctc - assisted circuits that illustrate the power of p - ctcs . in the next few sections , we give explicit p - ctc - assisted circuits that can factor efficiently ,can solve any problem in the intersection of np and co - np , and can probabilistically solve any problem in np .all of these circuits use just one p - ctc qubit .the structure of these circuits draws on ideas from the algorithms in ref . 
, and are closely related to the construction of aaronson in his proof that np postbqp .let be the number to factor and suppose that it is bits long ( so that ) .the p - ctc - assisted circuit consists of a ctc qubit and two -qubit registers : a remainder register and a factor register .the steps of the circuit are as follows ( depicted in figure [ fig : factoring ] ) : 1 .initialize the factor and remainder registers to .apply hadamard gates to all the qubits in the factor register .this first set of hadamards is equivalent to the following unitary: where f denotes the factor register .2 . act with a controlled unitary that calculates the modulo operation on the factor register and places it in the remainder register: in the above , r indicates the remainder register , and is some unitary chosen so that{cc}\left\vert 1\right\rangle & \text{if\ } j\in\left\ { 0,q\right\ } \\\left\vert q\operatorname{mod}j\right\rangle & \text{else}\end{array } \right . .\] ] if divides and is not equal to or , then the remainder register contains .otherwise , it contains a nonzero number , the remainder of .3 . apply the following controlled unitary from the remainder register to the ctc register: 4 .measure the factor register , and let the ctc qubits continue through the ctc .we can verify that the p - ctc - assisted circuit is behaving as it should by considering the induced transformation ( as in ( [ eq : pctc - transform ] ) ) .the cascade of , , and is the following unitary: _ { \text{r}}\otimes\left [ \left\vert j\right\rangle \left\langle j\right\vert h^{\otimes n}\right ] _ { \text{f}}\otimes i_{\text{ctc}}+\sum_{j=0}^{2^{n}-1}\left [ \left ( i-\left\vert 0\right\rangle \left\langle 0\right\vert \right ) u_{j}\right ] _ { \text{r}}\otimes\left [ \left\vert j\right\rangle \left\langle j\right\vert h^{\otimes n}\right ] _ { \text{f}}\otimes x_{\text{ctc}}.\ ] ] tracing over the ctc qubit gives the induced transformation: _ { \text{r}}\otimes\left [ \left\vert j\right\rangle \left\langle j\right\vert h^{\otimes n}\right ] _ { \text{f}}.\ ] ] applying this transformation to a factor and remainder register both initialized to gives the following state: where we see that the effect of the last controlled gate in figure [ fig : b92 ] is to eliminate all of the invalid answers or the ones for which by making these possibilities paradoxical . measuring the factorregister then returns a factor of .the algorithm fails in the case where is prime .but since primality can be checked efficiently , we just assume that we only use the algorithm with a composite ( non - prime ) .if the ctc qubits produce a number that is not a factor of , then a not is applied to the ctc qubit .this would produce a paradox , which is forbidden for p - ctcs . because the initial state coming out of the ctc contains components including all numbers , those components that are factors of have their probabilities magnified , and all others have their probabilities suppressed .the circuit uses the grandfather paradox such that the only histories that return a factor are self - consistent .this idea is the same as that in ref . and also exploited by aaronson to illustrate the power of postselected quantum computation .a decision problem lying in the intersection of np and co - np is one for which there is a short witness for both a yes answer and a no answer . 
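before treating that class, the postselection logic of the factoring circuit above can be mimicked classically in a few lines (our sketch; where the text is ambiguous about exactly which register values are declared paradoxical, we treat j in {0, 1, q} as invalid): starting from the uniform superposition over the factor register, every branch with a nonzero remainder is removed and the rest is renormalized, so only nontrivial divisors survive.

```python
import numpy as np

def pctc_factor_distribution(q, n_bits):
    """Post-selected distribution over the factor register for the circuit above.

    Branches j whose remainder register would end up nonzero (including the
    trivial values j in {0, 1, q}, an assumption where the text is ambiguous)
    are removed as paradoxical; the surviving amplitudes are renormalized.
    """
    amps = np.full(2 ** n_bits, 2.0 ** (-n_bits / 2))   # uniform superposition from the Hadamards
    valid = np.array([1 < j < q and q % j == 0 for j in range(2 ** n_bits)])
    post = np.where(valid, amps, 0.0)
    norm = np.linalg.norm(post)
    if norm == 0:
        return None                                     # q is prime: every branch is paradoxical
    probs = (post / norm) ** 2
    return {j: p for j, p in enumerate(probs) if p > 0}

if __name__ == "__main__":
    print(pctc_factor_distribution(15, 4))   # uniform over the nontrivial divisors {3, 5}
    print(pctc_factor_distribution(13, 4))   # None: 13 is prime, so no branch survives
```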
we can solve any decision problem in this complexity class with a p - ctc - assisted circuitthe idea here is essentially the same as in the p - ctc - assisted factoring algorithm , only there are now two parts .suppose that for a particular decision problem both witnesses can be represented using no more than bits .the p - ctc - assisted circuit consists of four quantum registers : a flag qubit , a valid witness qubit , a witness register with qubits , and a ctc qubit .it operates as follows ( depicted in figure [ fig : np - and - co - np ] ) : 1 .initialize the flag qubit , the valid witness qubit , and the witness register to .apply hadamard gates to the flag qubit and to the qubits in the witness register .the flag qubit being equal to 1 means that the answer is yes .the flag qubit being equal to 0 means the answer is no .2 . conditioned on the flag qubit being equal to 1 ,the answer is ( claimed to be ) yes and the remaining qubits hold a witness .the flag qubit acts as a control bit .if flag = 1 , then pass the qubits of the witness , plus the valid witness qubit in the state , through a circuit that verifies whether the witness is valid: where v denotes the valid witness qubit , w denotes the witness register , if the witness is valid , and otherwise .3 . conditioned on the flag qubit being equal to 0 ,the answer is ( claimed to be ) no and the remaining qubits hold a witness .the flag qubit again acts as a control bit .if flag = 0 , then pass the qubits of the witness , plus the valid witness qubit in the state , through a circuit that verifies whether the witness is valid: where if the witness is valid , and otherwise .4 . apply a cnot from the valid witness qubit holding to the ctc qubit .measure the flag qubit , the valid witness qubit , and the witness register .the measurement results give an answer to the decision problem ( in the flag qubit and in the valid witness qubit ) plus the witness .the reasoning that this algorithm works is essentially the same as that for factoring .the last cnot gate makes a paradox out of any scenario in which the valid witness qubit is equal to one , thus eliminating the possibilities for which the witness is invalid .a satisfiability ( sat ) decision problem tries to determine if there exists a satisfying solution for a boolean formula ( one which makes the formula evaluate to true ) .we now show that we can probabilistically solve a sat decision problem , which implies that we can probabilistically solve any problem in np because sat is np - complete .suppose that we want to solve sat on bits .there is a boolean function defined by a formula that can be evaluated efficiently .we want to know if there are values of that make , and if so , we would like to have a satisfying assignment .( the latter is not necessary for a decision problem , of course , but we get it for free . 
) the p - ctc - assisted circuit acts on four different quantum registers : a flag qubit , a valid witness qubit , an -qubit witness register , and a ctc qubit .if the flag qubit is equal to 1 , the function has a satisfying assignment , and if it is 0 , it does not .the circuit has the following steps : 1 .initialize the flag qubit , the valid witness qubit , and the witness register to .apply hadamard gates to the flag qubit and the -qubit witness register .conditioned on the flag qubit being equal to 1 , the answer is ( claimed to be ) yes , and the witness register holds a satisfying assignment .the flag qubit acts as a control bit .if flag = 1 , then pass the -qubit witness register , plus the valid witness qubit in the state , through a circuit that calculates where .( that is , if is satisfying , and if not . )3 . conditioned on the flag qubit being equal to 0 ,the answer is ( claimed to be ) no , and we require that the -qubit witness register hold all zeros : .apply the following controlled - unitary: the valid witness qubit now holds if the qubits in the -qubit witness register are not all zeros , and if they are .4 . apply a cnot from the valid witness qubit to the ctc qubit .measure all the ancillas .there are two cases : 1 . if the function has no satisfying assignment , then the only non - paradoxical output is all zeros ( including the flag bit ) : .this outcome occurs with probability one in this case .2 . if the function has satisfying assignments , then there are non - paradoxical results : the satisfying assignments , plus the all zero state .these results occur with equal probability .so in case 1 , the correct answer ( no ) always occurs , and in case 2 , a satisfying assignment ( yes ) occurs with probability and the false answer ( no ) occurs with probability to improve the probabilities , we can replicate some of these steps while still using just one ctc qubit .replicate steps 1 - 3 times on copies of all of the above registers ( except for the ctc qubit ) .so we now get different flag bits and ( potentially ) satisfying assignments .we then do the following unitary from the valid witness qubits to the p - ctc qubit : in case 1 , we get the result every time . in case 2 , we get the wrong answer every time only with probability: we can make this failure probability as small as we like with only logarithmic overhead .prior research has shown that closed timelike curves operating according to deutsch s model can have dramatic consequences for computation and information processing if one operates on proper mixtures of quantum states .et al_. then showed that postselected closed timelike curves have computational power equivalent to the complexity class pp , by exploiting a result of aaronson on postselected quantum computation . 
in this paper , we showed how to implement any postselected operation with certainty with just one p - ctc qubit , and we discussed an important difference between d - ctcs and p - ctcs in which the future existence of a p - ctc could affect the probabilistic results of a present experiment by creating a paradox for particular outcomes .theorem [ thm : p - ctc - state - distinguish ] then proves that p - ctcs can help distinguish an arbitrary set of linearly independent states , but they are of no use for helping to distinguish linearly dependent states .we also discussed how three different ontological descriptions of a state ( equivalent in standard linear quantum mechanics ) do not necessarily lead to the same consequences in the nonlinear theory of postselected closed timelike curves .finally , we provided explicit p - ctc - assisted circuits that efficiently factor an integer , solve any decision problem in the intersection of np and co - np , and probabilistically solve any decision problem in np ( all using just one p - ctc qubit ) . _acknowledgements_we acknowledge useful conversations with charles h. bennett , hilary carteret , patrick hayden , debbie leung , and graeme smith .we also acknowledge the anonymous referees for helpful comments .tab acknowledges the support of the u.s .national science foundation under grant no .mmw acknowledges the support of the mdeie ( qubec ) psr - siiri international collaboration grant .charles h. bennett , debbie leung , graeme smith , and john a. smolin .can closed timelike curves or nonlinear quantum mechanics improve quantum state discrimination or help solve hard problems ?, 103(17):170502 , october 2009 .charles h. bennett , debbie leung , graeme smith , and john smolin .the impotence of nonlinearity : why closed timelike curves and nonlinear quantum mechanics do nt improve quantum state discrimination , and havent been shown to dramatically speed up computation , if computation is defined in a natural , adversarial way .rump session presentation at the 13th workshop on quantum information processing , zurich , switzerland , january 2010 .seth lloyd , lorenzo maccone , raul garcia - patron , vittorio giovannetti , yutaka shikano , stefano pirandola , lee a. rozema , ardavan darabi , yasaman soudagar , lynden k. shalm , and aephraim m. steinberg .closed timelike curves via post - selection : theory and experimental demonstration .arxiv:1005.2219 .charles h. bennett , gilles brassard , claude crpeau , richard jozsa , asher peres , and william k. wootters .teleporting an unknown quantum state via dual classical and einstein - podolsky - rosen channels . , 70:18951899 , 1993 .valerio scarani , antonio acin , gregoire ribordy , and nicolas gisin .quantum cryptography protocols robust against photon number splitting attacks for weak laser pulse implementations ., 92:057901 , february 2004 .
|
bennett and schumacher s postselected quantum teleportation is a model of closed timelike curves ( ctcs ) that leads to results physically different from deutsch s model . we show that even a single qubit passing through a postselected ctc ( p - ctc ) is sufficient to do any postselected quantum measurement with certainty , and we discuss an important difference between deutschian ctcs ( d - ctcs ) and p - ctcs in which the future existence of a p - ctc might affect the present outcome of an experiment . then , based on a suggestion of bennett and smith , we explicitly show how a party assisted by p - ctcs can distinguish a set of linearly independent quantum states , and we prove that it is not possible for such a party to distinguish a set of linearly dependent states . the power of p - ctcs is thus weaker than that of d - ctcs because the holevo bound still applies to circuits using them , regardless of their ability to conspire in violating the uncertainty principle . we then discuss how different notions of a quantum mixture that are indistinguishable in linear quantum mechanics lead to dramatically differing conclusions in a nonlinear quantum mechanics involving p - ctcs . finally , we give explicit circuit constructions that can efficiently factor integers , efficiently solve any decision problem in the intersection of np and conp , and probabilistically solve any decision problem in np . these circuits accomplish these tasks with just one qubit traveling back in time , and they exploit the ability of postselected closed timelike curves to create grandfather paradoxes for invalid answers .
|
computational social choice ( comsoc ) has delivered impactful improvements in several real world settings ranging from optimizing kidney exchanges to devising mechanisms which to assign students to schools and/or courses more fair and efficient manners .comsoc has also had impact in a number of other disciplines within computer science including recommender systems , data mining , machine learning , and preference handling . from its earliest days , much theoretical work in comsoc has centered on worst case assumptions .indeed , in the last 10 years , there has been a groundswell of such research which shows little signs of slowing . within comsoc ,much work focuses on manipulative or strategic behavior , which may take many different forms including manipulation and control of election and aggregation functions . often , advanced algorithmic techniques such as fixed parameter tractability to move beyond these worst case assumptions .approximation algorithms have played an important part , helping to determine the winner of some hard to compute voting rules .approximation has been used in other areas of social choice including mechanism design , often to achieve good results when the `` worst case '' is too hard .additional algorithmic work has centered on average case complexity ( which typically suppose very uniform sampling of instances ) and/or attempting to understand the parameters which make an aggregation or choice rule hard to manipulate . in one of the papers that founded the field , warned against exclusively focusing on worst case assumptions stating , `` the existence of effective heuristics would weaken any practical import of our idea .it would be very interesting to find such heuristics . '' for the last several years we have championed the use of real world data in comsoc and are happy to see more and more researchers working in this area ( e.g. , ) .we see an increased focus on experimentation , heuristics , and verifying theory and models through properly incentivized data collection and experimentation as _ key research direction _ for comsoc . some of the most impactful work in comsoc has come from the development of theory that is specifically informed by real world data and/or practical applicationthat is then rigorously tested ( e.g. , ) .* contribution . * in this short note we detail a study of _ tournaments _ using real world and generated data .we show that , despite the np - completeness of the tournament fixing problem ( tfp ) , enumerating all the seedings for which a particular player can win is quickly solvable for a large number of generated and real world instances . 
additionally , we show that the popular condorcet random ( cr ) model used to generate synthetic tournaments ( i ) does not match real world data and ( ii ) is drawing from a fundamentally different distribution than real world tournaments .the statistical and modeling methodologies we use for this research may be of independent interest to empirical researchers in social choice .whether or not teams can advance through a knockout tournament tree in order to claim a championship is a question on the minds of many during the fifa world cup , atp tennis tournaments , ncaa basketball tournaments , and numerous soccer leagues around the world .the scheduling of the tournament , the seeding which dictates whom will play whom , and its manipulation in order to maximize a particular team s chance of winning is a well studied problem in comsoc and other areas .following , we are given a set of players and a deterministic pairwise comparisons for all players in .for every in , if then we say that player beats player in a head to head competition ; this means that in a pairwise comparison . in a _balanced knockout tournament _ we have that . given a set of players , a balanced knockout tournament is a balanced binary tree with leaf nodes and a draw .there are multiple isomorphic ( ordered ) assignments of agents in to the leaf nodes , we represent these as a single unordered draw .observe that there are assignments to leaf nodes but only draws .a knockout tournament is the selection procedure where each pair of sibling leaf nodes competes against each other .the winner of this competition proceeds up the tree into the next round ; the winner of the knockout tournament is the player that reaches the root note . in this studywe want to understand the computational properties of the tournament fixing problem ( tfp ) .tournament fixing problem ( tfp ) : + * instance : * a set of players , a deterministic pairwise comparision matrix , and a disginuished player .+ * question : * does there exist a draw for the players in where is the winner of ) ? it was recently proven that even if a manipulator knows the outcome of each pairwise matchup , the problem of finding a seeding for which a particular team will win is np - hard , thus concluding a long line of inquiry into this problem .more recent results have shown that , for a number of natural model restrictions , the tfp is easily solvable . note that the code developed for the next section can be easily generalized to the case where we do not enforce the balanced constraint on . one popular model for generating for experiment is the condorcet random ( cr ) model , introduced by .the model has one tuning parameter which gives the probability that a lower ranked team will defeat ( upset ) a higher ranked team in a head - to - head matchup . in general , .a uniform random tournament has .the manipulation of tournament seedings has been studied using both random models and a mix of real data and random models .a more sophisticated model of random tournaments was developed by upset probabilities were fitted to historical data , and varied according to the difference in the ranking of the teams ( surprisingly for their tennis data , the upset probability remained relatively invariant to the difference in player rankings ) .to explore how theory lines up with practice , we looked at two questions : 1 .does the np - completeness of knockout tournament seeding manipulations tell us about the complexity of manipulation in practice ?2 . 
are random models which are popularly used in comsoc supported by real world data ? to answer these questions we use data from the 2009 - 2014 english premier league and german bundesliga , along with lifetime head - to - head statistics of the top 16 tennis players on the atp world tour to create 16 team pairwise tournament datasets .in order to convert the datasets into deterministic pairwise tournaments we used two different strategies . using the lifetime head to head atp tour results we get a set of weighted pairwise comparisons that we can use as input to understand tournaments . using all data available up until feb . 1, 2014 provides something that is not a tournament graph .there are several ties and several players who have never played each other . given the matchup matrix , we extracted a tournament graph by saying one player beats another if their historical average is does not create a tournament graph , hence we award all ties , including if two players have never met , to the higher ranked player .this results in rafael nadal being a condorcet winner and he is thus removed from the following analysis . for the soccer data, we deterministically assigned as the pairwise winner the team which had scored more total goals in the home - and - away format for a particular year ( ties broken by away goals ) .we implemented a simple constraint program in minizinc to search for a seeding that ensures a given team wins .the minizinc model which was then solved using a modified version of gecode ( http://www.gecode.org ) . the modifications to gecode served to make the counting of all solutions faster by removing all diagnostic printing to stdout and implementing some other small caching optimizations .all experiments were run on a system running debian 6.0.10 with a 2.0 ghz intel xeon e5405 cpu and 4 gb of ram ; gecode was restricted to using 4 cores . for all the tournaments in our experiments we have 16 teams which means that the entire search space is possible seedings. table [ tab : tennis ] shows the results for each of the players in the atp world top 16 including the total number of seedings won ( and percentage of the overall total number of seedings ) , the number of choice points ( nodes of the search tree ) explored to find the first ( resp .all ) seedings , and and time ( minutes ) explored to to find all seedings for which a player can win a tournament .table [ tab : all ] gives summary statistics for the number of choice points ( nodes of the search tree ) and time ( minutes ) explored to to find the first ( resp .all ) seedings for which a team can win a knockout tournament for all teams across all 13 datasets . despite this being a np - hard problem, it was extremely easy in every dataset to find a winning seeding for any given team ( or prove that none existed ) .exhaustively , counting all the winning seedings took more time but even this was achievable . 
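to give a flavour of the computation behind these tables, the following python sketch (ours; the experiments themselves use the minizinc model solved with gecode) decides, by dynamic programming over sub-brackets, which players can be made to win a balanced knockout tournament for a given deterministic beats relation; enumerating or counting the winning seedings can be layered on the same recursion.

```python
from functools import lru_cache
from itertools import combinations

def possible_winners(beats):
    """Players that win under at least one seeding of a balanced knockout tournament.

    beats[a][b] is True iff player a beats player b head-to-head; the number
    of players must be a power of two.
    """
    n = len(beats)

    @lru_cache(maxsize=None)
    def winners(group):
        group = frozenset(group)
        if len(group) == 1:
            return group
        half = len(group) // 2
        rest = sorted(group)
        anchor = rest[0]                       # fix one player to avoid mirrored splits
        result = set()
        for left in combinations(rest[1:], half - 1):
            left = frozenset(left) | {anchor}
            right = group - left
            wl = winners(tuple(sorted(left)))
            wr = winners(tuple(sorted(right)))
            for a in wl:
                if any(beats[a][b] for b in wr):
                    result.add(a)
            for b in wr:
                if any(beats[b][a] for a in wl):
                    result.add(b)
        return frozenset(result)

    return winners(tuple(range(n)))

if __name__ == "__main__":
    # Toy 4-player example (row beats column): 0>1, 0>2, 1>2, 1>3, 2>3, 3>0.
    beats = [[False, True, True, False],
             [False, False, True, True],
             [False, False, False, True],
             [True, False, False, False]]
    print(sorted(possible_winners(beats)))     # -> [0, 1]; players 2 and 3 beat too few opponents
```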
only the bundseliga 2011 experiment came anywhere close to exploring even a small fraction of the total possible seedings .the practical upshot of these computational results is that real world tournaments exhibit a lot of structure which is possible to leverage for practical computation .while the simple techniques we employed may not scale to the possible seedings in the 128 player wimbledon tournament , they can be used for more modestly sized tournaments .the low number of choice points explored for these instances may indicate that there is practically exploitable structure in larger tournaments ; an interesting avenue for future research .we turn to a second fundamental question .the cr model has been used to derive theoretical bounds on the complexity of manipulating the seeding in knockout tournaments .but does it adequately model real world tournaments ? in the soccer datasets we take the teams goals scored over the total as the pairwise probability , while we use the life time head - to - head average to determine probabilities in the tennis data . to test the modeling power of cr, we generated pairwise probabilities with .we then computed , for each of the real world datasets and all values of in steps of , the probabilities that teams would win a knockout tournament using a simple sampling procedure which converged to the actual probability quickly ; uniformly sampling over all possible seedings and updating the probability estimates for each team for a particular seeding , the probability that every team wins the tournament can be computed efficiently .our hypothesis is that the probability of winning a knockout tournament for a team in the real world data is a random variable drawn from the same distribution as the cr model .we approach this question in two parts .( 1 ) using a kolmogorov - smirnov ( ks ) tests with as our significance threshold , we can determine if the data is drawn from the same type of distribution , i.e. a normal distribution or a heavy tailed distribution such as a log - normal or power law distribution .( 2 ) then for a candidate pair of samples , we determine if the fitting parameters of the distribution are similar .the ks test compares the distance between the cumulative distribution ( cdf ) of two empirical samples to determine whether or not two different samples are drawn from the same distribution .figure [ fig:2012data](a ) shows the cdf of the 2014 bundesliga league data along with several settings of .table [ tab : min - max ] gives the minimum and maximum values of , per dataset , for which we can say that the probability distribution of a team winning a knockout tournament according to the cr model is likely drawn from the same distribution as the respective real world dataset ( ks test , ) .we can reject cr models with values of outside these ranges ; as these models are not likely to emerge from the same distribution as the real world datasets .we also provide average upset probability for each datafile to compare with the results of .examining our results , we find no support for the uniform random tournament model . likewise , setting or generates data which is drawn from a different distribution than most real world datasets we survey .the tennis data seems to be an outlier here , supporting a very low value of , likely due to rafael nadal , who has a winning lifetime record against all other players in the atp top 16 as of feb . 
1 , 2014 .as we can not reject all models given by cr outright we must look more closely at the underlying distribution and attempt to fit the empirical data to a likely distribution . for this we will dive more deeply into the 2014 bundesliga league data , as the range for is similar to the average of across all datasets and the average upset probability for 2014 yields a model which is a good match for the underlying data .the 2014 bundesliga data has an average upset probability of and a best fit probability according to the ks test of .we must first identify what kind of probability distribution the samples are drawn from in order to tell if they are the same or different . at first glance , the winning probabilities appear to be drawn from a power law or some other heavy tailed distribution distribution such as a log - normal .the study of heavy tailed distributions in empirical data is a rich topic that touches a number of disciplines including physics , computer science , literature , transportation science , geology , biology as these distributions describe a number of natural phenomena such as the spread of diseases , the association of nodes in scale free networks , the connections of neurons in the brain , the distribution of wealth amongst citizens , city sizes , and other interesting phenomena .recently , more sophisticated methods of determining if an empirical distribution follows a particular heavy tailed distribution have been developed , consequently showing strong evidence that distributions once thought power laws ( e.g. , node connections on the internet and wealth distribution ) are likely not explained by a power law distribution but rather by log - normal distributions .the current standard for fitting heavy tailed distributions in physics and other fields ( and the one we will employ ) involves the use of robust statistical packages to estimate the fitting parameters then testing the fitted model for basic plausibility through the use of a likelihood ratio test .this process will help us decide which distribution is the strongest fit for our data as well as provide us with the actual fitting parameters to compare the real world and generated data .figure [ fig:2012data ] ( b ) shows the results of fitting the 2014 bundesliga league data to a power law for a random variable of the form as well as the fit for a log - normal distribution with median and multiplicative standard deviation . using a likelihood ratio testwe find that the log - normal is a significantly better fit for the data than the power law distribution ( , ) .this makes intuitive sense in this context as each matchup can be seen as a ( somewhat ) independent random variable , and the product of multiple positive random variables gives a log - normal distribution .the fit parameters for the 2014 bundesliga league data are and while for the best fitting cr model with is and .while those two distributions are similar , it implies that perhaps a more nuanced , multi - parameter model is needed to capture the matchup probabilities for tournaments .in order to transform into a more impactful area we must demonstrate the effectiveness of our methods in real settings , and let these real settings drive our research . 
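for completeness , the fitting workflow described above can be assembled from standard python tools . the snippet below uses scipy for the two-sample ks test and the publicly available powerlaw package for the clauset-style heavy-tail fits ; it is only an illustration , and the variable names and thresholds are ours rather than the exact code behind figure [ fig:2012data ] and table [ tab : min - max ] .

from scipy import stats
import powerlaw   # alstott et al. package implementing clauset-style heavy-tail fits

def compare_to_cr(p_real, p_model):
    # p_real : empirical tournament-win probabilities of the 16 teams
    # p_model: win probabilities generated under a cr model with a chosen upset probability
    ks_stat, ks_p = stats.ks_2samp(p_real, p_model)   # (1) plausibly the same distribution?

    fit = powerlaw.Fit(p_real)                        # (2) which heavy tail fits the data best?
    loglik_ratio, lr_p = fit.distribution_compare('power_law', 'lognormal')
    return {'ks_p': ks_p,                             # reject the cr setting if this is below 0.05
            'loglikelihood_ratio': loglik_ratio,      # negative values favour the log-normal
            'lr_p': lr_p,
            'alpha': fit.power_law.alpha,             # fitted power-law exponent
            'mu': fit.lognormal.mu,                   # fitted log-normal parameters
            'sigma': fit.lognormal.sigma}

note that powerlaw.Fit also estimates a lower cut-off for the fit automatically ; when only part of the sample lies in the heavy tail , that cut-off should be inspected before the likelihood ratio is interpreted .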
describe the journey for experimental economics : evolving from theory to simulated or repurposed data to full fledged laboratory and field experiments .this progression enabled a `` conversation '' between the experimentalists and the theoreticians which enabled the field to expand , evolve , and have the impact that it does today in a variety of contexts .our case study of tournaments shows that the need to verify our models with data is an important and interesting future direction for comsoc .working to verify these models can point the way to new domain restrictions or necessary model generalizations .there has been more data driven research , like the kind presented here , thanks to preflib and other initiatives , e.g. , ; we hope this trend continues .this research complements the existing research on axiomatic characterizations , worst case complexity , and algorithmic considerations ( see , e.g. , ) . while there are some issues with using repurposed data and discarding context ( see , e.g. , the discussion by john langford of microsoft research about the uci machine learning research repository at http://hunch.net/?p=159 ) it is a _ start _ towards a more nuanced discussion about mechanisms and preferences .results about preference or domain restrictions can lead to elegant algorithmic results , but these restrictions should be complemented by some evidence that the restrictions are applicable .for example , in the over 300 complete , strict order datasets in preflib , none are single - peaked , a popular profile restriction in voting .untested normative or generative data models can sometimes lead us astray ; if we use self reported data from surveys , or hold out data which does not fit our preconceived model , we may introduce bias and thus draw conclusions that are spurious .our study of tournaments makes unrealistic assumptions about model completion in order to produce deterministic tournaments . however , even these simple rounding rules yield instances that are much simpler than the worst case results would imply .perhaps one way forward for comsoc is incorporating even more ideas from experimental economics including the use of human subjects experiments on mechanical turk like those performed by , e.g. , and .work with human subjects can lead to a more refined view of strategic behavior and inform more interesting and realistic models on which we can base good , tested theory .we would like to thank haris aziz for help collecting data and insightful comments .data61/csiro ( formerly known as nicta ) is funded by the australian government through the department of communications and the arc through the ict centre of excellence program .davies , j. , katsirelos , g. , narodytska , n. , walsh , t. , 2011 . complexity of and algorithms for borda manipulation . in : proceedings of the 25th aaai conference on artificial intelligence ( aaai ) . pp . 657662 .dickerson , j. p. , procaccia , a. d. , sandholm , t. , 2012 .optimizing kidney exchange with transplant chains : theory and reality . in : proceedings of the 11th international joint conference on autonomous agents and multi - agent systems ( aamas ) .711718 .nethercote , n. , stuckey , p. j. , becket , r. , brand , s. , duck , g. j. , tack , g. , 2007 .minizinc : towards a standard cp modelling language . in : proceedings of the 13th international conference on principles and practice of constraint programming ( cp ) .springer , pp .529543 .rossi , f. , venable , k. , walsh , t. , 2011 . 
a short introduction to preferences : between artificial intelligence and social choice .synthesis lectures on artificial intelligence and machine learning 5 ( 4 ) , 1102 . russell , t. , van beek , p. , 2011 . an empirical study of seeding manipulations and their prevention . in : proceedings of the 22nd international joint conference on artificial intelligence ( ijcai ) . pp .350356 .skowron , p. , faliszewski , p. , slinko , a. , 2013 .achieving fully proportional representation is easy in practice . in : proceedings of the 12th international joint conference on autonomous agents and multi - agent systems ( aamas ) .399406 .stanton , i. , vassilevska williams , v. , 2011 . manipulating stochastically generated single - elimination tournaments for nearly all players .in : proceedings of the 7th international workshop on internet and network economics ( wine ) . pp .326337 .tal , m. , meir , r. , gal , y. , 2015 .a study of human behavior in online voting . in : proceedings of the 14th international joint conference on autonomous agents and multi - agent systems ( aamas ) .665673 .thompson , d. r. m. , lev , o. , leyton - brown , k. , rosenschein , j. , 2013 .empirical analysis of plurality election equilibria . in : proceedings of the 12th international joint conference on autonomous agents and multi - agent systems ( aamas ) .391398 .vu , t. , altman , a. , shoham , y. , 2009 . on the complexity of schedule control problems for knockout tournaments . in : proceedings of the 8th international joint conference on autonomous agents and multi - agent systems ( aamas ) .
|
computational social choice ( comsoc ) is a rapidly developing field at the intersection of computer science , economics , social choice , and political science . the study of tournaments is fundamental to comsoc and many results have been published about tournament solution sets and reasoning in tournaments . theoretical results in comsoc tend to be worst case and tell us little about performance in practice . to this end we detail some experiments on tournaments using real wold data from soccer and tennis . we make three main contributions to the understanding of tournaments using real world data from english premier league , the german bundesliga , and the atp world tour : ( 1 ) we find that the np - hard question of finding a seeding for which a given team can win a tournament is easily solvable in real world instances , ( 2 ) using detailed and principled methodology from statistical physics we show that our real world data obeys a log - normal distribution ; and ( 3 ) leveraging our log - normal distribution result and using robust statistical methods , we show that the popular condorcet random ( cr ) tournament model does not generate realistic tournament data . tournaments , computational social choice , economics , preferences , reasoning under uncertainty
|
multimodal function refers to the function which has more than one extreme point . to find all extreme pointsis called extremum problem , which is a well known difficult issue in optimization fields .many practical engineering problems can be converted as this problem , such as the detection of multiple objects in military field .therefore , solving the extremum problem is a useful study topic . to solve the extremum problem ,many methods of optimization are applied , such as genetic algorithm ( ga ) , simulated annealing ( sa ) , particle swarm optimization algorithm ( pso ) , immune algorithm ( ia ) , and so on . however , currently there is rare report that applying ant colony optimization ( aco ) to solve the extremum problem .the motivation of this paper is to apply aco to search all extreme points of function .ant colony optimization ( aco ) was first proposed by dorigo ( 1991 ) .the inspiring source of aco is the foraging behavior of real ants . when ants search for food, they initially explore the area surrounding their nest in a random manner .as soon as an ant finds a food source , it remembers the route passed by and carries some food back to the nest . during the return trip , the ant deposits pheromone on the ground .the deposited pheromone , guides other ants to the food source . and the feature has been shown , indirect communication among ants via pheromone trails enables them to find the shortest routes between their nest and food sources .aco imitates this feature and it becomes an effective algorithm for the optimization problems .it has been successfully applied to many combinatorial optimization problems , such as traveling salesman problem ( tsp ) , quadratic assignment problem(qap ) , job - shop scheduling problem(jsp ) , vehicle routing problem(vrp ) , data mining(dm ) and so on .the application of aco pushes the study of aco theory , and its two main study topics are the analysis of convergence and runtime .m. birattari proves the invariance of aco and introduced three new aco algorithms .convergence is one of focus study of aco .walter j. gutjahr studied the convergence of aco firstly in 2000 .t. st and m. dorigo proved the existence of the aco convergence under two conditions , one is to only update the pheromone of the shortest route generated at each iteration step , the other is that the pheromone on all routes has lower bound .pang and et.al . found a potential new view point to study aco convergence under general condition , that is entropy convergence in 2009 . 
in ref.pangacoentropy ,the following conclusion is get : aco may not converges to the optimal solution in practice , but its entropy is convergent .the other study focus of aco is time complexity .aco has runtime ) , where , and refers to the number of iteration steps , ants , cities and ] .the task of this paper is to extract all extreme points that the corresponding value is minimal locally .the basic idea of this paper is stated roughly as below : divide interval ] and initialization , rule of ant moving , rule of pheromone updating , and keeping the intervals containing ants .the contents of the four parts are stated as below .suppose interval ] into many small intervals and put an ant in each interval ; do other initialization .the detail is shown at section .suppose is the length of interval and is a stop threshold .* step2 * while ( ) \ { * step2.1 : * all ants move to new intervals according the rule shown at section * step2.2 : * update pheromone according the rule shown at section * step2.3 : * update search range according to section and divide it into smaller intervals ( suppose the number of these intervals is ) .calculate the size of interval and set it to . } * step3 * extract all intervals that contain ants , the centers of the intervals are the approximations of extreme points . if argument is multi - dimensional vector , divide the range of every component of vector into smaller intervals , the combination of these intervals forms many small lattices . and then put an ant in each lattice , apply the above method , all extreme points can be extracted . to understand above method easily ,an simple example is stated as below : assume that the domain of 1-dimensional function is divided into 3 intervals , , , which associated center is , and respectively .initially ant , , and is put at , and respectively .check the first ant : if , ant moves to interval . otherwise , do nothing .check the 2nd ant : if their is unique interval ( e.g. ) such that , ant moves to interval .if is smaller than both and , do nothing . if is bigger than both and , it is uncertain that ant moves to or . and ant will select its visiting interval randomly according to its transition probability defined at eq.function probability .check the 3rd ant using same way .after all ants are checked , update their associated interval ( position ) and interval pheromone .repeat above processing until all ants can not move .then keep the intervals which contains ants , and delete other blank intervals . and divided the intervals containing ants into smaller interval , repeat above process until the size of interval is sufficient small .and then all interval centers are the approximations of extreme points .in this section , several functions will be tested .the parameters are listed as below : , , , , , two performances are considered , which are error ( ratio of inaccuracy ) runtime .error is defined as , where denotes the true extreme point on theory and is its approximation calculated by the method presented in this paper .in addition , the hardware condition is : notebook pc dell d520 , cpu 1.66 ghz . ] instance 2 is a typical test function , which include many extreme points and any small change of argument will result in big change . in addition , the theoretical calculation of extreme points of instance 2 is difficult .the additional parameters are and .fig.[figinstance2 ] shows all the calculated extreme points , and the real numbers are listed at appendix ( see appendix ii ) . 
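to make the procedure concrete , the following python sketch implements a simplified , deterministic variant for 1-dimensional minimisation : every ant repeatedly moves to the neighbouring interval whose centre has the smaller function value , so the pheromone update and the probabilistic transition rule described above are replaced by a greedy move . all names and parameter values are ours .

import numpy as np

def find_minima(f, a, b, n=100, stop=1e-4):
    # interval-subdivision search for the local minima of f on [a, b]:
    # one "ant" per interval walks downhill over neighbouring intervals,
    # intervals that still hold ants are kept and subdivided, and the
    # iteration stops once the interval length falls below `stop`.
    segments = [(a, b)]
    while segments[0][1] - segments[0][0] > stop:
        survivors = set()
        for lo, hi in segments:
            edges = np.linspace(lo, hi, n + 1)
            mids = 0.5 * (edges[:-1] + edges[1:])
            vals = [f(x) for x in mids]
            for i in range(n):                      # ant i starts in interval i
                j = i
                while True:
                    nbrs = [k for k in (j - 1, j, j + 1) if 0 <= k < n]
                    best = min(nbrs, key=lambda k: vals[k])
                    if best == j:
                        break
                    j = best
                survivors.add((edges[j], edges[j + 1]))
        segments = sorted(survivors)
    return [0.5 * (lo + hi) for lo, hi in segments]

a call such as find_minima(lambda x: x * np.sin(x), 0.0, 20.0) returns the centres of the surviving intervals as approximations of the interior local minima ; adjacent near-duplicate centres , and extreme points lying exactly on the domain boundary , may need a small post-processing step , and the multi-dimensional case follows by replacing the intervals with lattices as described above .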
, ] is divided into intervals initially . and at next iteration steps, search domain is divided into small intervals .36 extreme points are got and shown at fig.[figinstance3 ] , and the digital solutions are listed at appendix ( see appendix iii ) instance 3 is a 2-dimensional function and 3203.2968 seconds is cost . and it is slower than 1-dimensional function instance 1 and instance 2 . to improve running speed is the next work .many functions are tested by the authors , and some experiments results are listed at appendix .these tests demonstrate that solution error is less than except the special case that extreme point is at the border of the domain .in addition , these testes also demonstrate that the method is very fast for 1-dimensional function . &theory value & calculated value & error ( % ) + & & & + & & & 2.1e-06 + & & & 1.3e-06 + & & & 1.0e-07 + & & & 2.0e-07 + & & & 5.0e-07 + & & & 0.0145 + + notice : from the table 1 , we can see that the border point has big error because the value calculated is the center of the interval , not boundary . to evade this drawback, the function value at boundary can be calculated directly .to find all extreme points of multimodal functions is called extremum problem , which is a well known difficult issue in optimization fields . it is reported rarely that applying ant colony optimization(aco ) to solve the problem . andthe motivation of this paper is to explore aco application method to solve it . in this paper ,the following method is presented : divide the domain of function into many intervals and put an ant in each interval .and then design rule such that every ant moves to the interval containing extreme point near by .at last all ants stay around extreme points . the method presented in this paperhas following three advantages : \1 .solution accuracy is high .experiment shows that solution error is less than .solution calculated is stable ( robust ) .ant only indicates the interval containing extreme point , not the accurate position of extreme point .it is easy for ant to find a interval although finding a special point in interval is difficult .the method is fast for 1-dimensional function .aco is slow . but some feature is found to speed aco ( see section 2.5 ) the authors appreciate the discussion from the members of gene computation group , j. gang , x. li , c .- b .wang , w. hu , s .-wang , q. yang , j .- l .zhou , p. shuai , l .- j .the authors appreciate the help from prof .j. zhang , z .-lin pu , x .-wang , j. zhou , and q. li .99 min - qiang li , ji - song kou , coordinate multi - population genetic algorithms for multimodal function optimization , acta automatica sinica , 2002,28(04):497 - 504 .qing - yun tao , hui - yun quan , the simulated annealing algorithm for multi - modal function problem , computer engineering and applications , 2006,(14):63 - 64,92 .li li , hong - qi li , shao - long xie , effective optimization algorithm for multimodal functions , application research of computers , 2008,25(10):4792 , 5792,6792 .jiang wu , han - ying hu , ying wu , application - oriented fast optimizer for multi - peak searchin , application research of computers 2008,25(12):3617 - 3620 .rui - ying zhou , jun - hua gu , na - na li , qing tan , new algorithm for multimodal function optimization based on immune algorithm and hopfield neural network , computer applications , 2007,27(7):1751 - 1753,1756 .m. dorigo , v. maniezzo , and a. 
colorni , positive feedback as a search strategy .technical report 91 - 016 , dipartimento di elettronica , politecnico di milano , milan , italy , 1991 .a. colorni , m. dorigo , and v. maniezzo , distributed optimization by ant colonies , in f. j.varela and p. bourgine , editors , towards a practice of autonomous systems : proceedings of the first european conference on artificial life , pages 134 - 142 . mit press , cambridge , ma,1992 .m. dorigo , optimization learning and natural algorithms , phd thesis , dipartimento di elettronica , politecnico di milano , milan , italy , 1992 .m. dorigo , v. maniezzo , and a. colorni , the ant system : optimization by a colony of cooperating agents , ieee transactions on systems , man , and , cybernetics part b : cybernetics .26(1 ) : 29 - 41.1996 .ball , t.l .magnanti , c.l .monma , and g.l .nemhauser , handbooks in operations research and management science , 7 : network models , north holland , 1995 .ball , t.l .magnanti , c.l .monma , and g.l .nemhauser , handbooks in operations research and management science , 8 : network routing , north holland , 1995 .k. doerner , walter j. gutjahr , r.f .hartl , c. strauss , c. stummer , pareto ant colony optimization with ip preprocessing in multiobjective project portfolio selection , european journal of operational research 171 , pp .830 - 841 , 2006 .m. dorigo and l. m. gambardella , ant colony system : a cooperative learning approach to the traveling salesman problem , ieee transactions on evolutionary computation , 1(1):53 - 66 , 1997 .m. manfrin , m. birattari , t. sttzle , and m. dorigo , parallel ant colony optimization for the traveling salesman problem , in m. dorigo , l. m. gambardella , m. birattari , a. martinoli , r. poli , and t. sttzle ( eds . ) ant colony optimization and swarm intelligence , 5th international workshop , ants 2006 , lncs 4150 pp .224 - 234 .springer , berlin , germany .conference held in brussels , belgium .september 4 - 7 , 2006 .l. m. gambardella , d. taillard , and m. dorigo , ant colonies for the quadratic assignment problem , journal of the operational research society , 50(2):167 - 176 , 1999 .l. m. gambardella and m. dorigo , ant - q : a reinforcement learning approach to the traveling salesman problem , in a. prieditis and s. russell , editors , machine learning : proceedings of the twelfth international conference on machine learning , pages 252 - 260 .morgan kaufmann publishers , san francisco , ca , 1995 .b. bullnheimer , r. f. hartl , and c. strauss , applying the ant system to the vehicle routing problem , in i. h. osman , s. vo , s. martello and c. roucairol , editors , meta - heuristics : advances and trends in local search paradigms for optimization , pages 109 - 120 .kluweracademics , 1998 .forsyth and a. wren , an ant systemfor bus driver scheduling , technical report 97.25 , university of leeds , school of computer studies , july 1997 . presented at the 7th international workshop on computer - aided scheduling of public transport , boston , july 1997 .rafael s. parpinelli , heitor s. lopes , data mining with an ant colony optimization algorithm , ieee transactions on evolutionary computation , vol .321 - 332 , 2002 .m. birattari , p. pellegrini , and m. dorigo , on the invariance of ant colony optimization , ieee transactions on evolutionary computation , vol .732 - 742 , 2007 .w. j. gutijahr , a graph - based ant system and its convergence , future generation computer systems 16 , 873 - 888 , 2000 .t. st and m. dorigo . 
a short convergence proof for a class of aco algorithms .ieee transactions on evolutionary computation , 6(4):358 - 365 , 2002 . chao - yang pang , chong - bao wang and ben - qiong hu , experiment study of entropy convergence of ant colony optimization , arxiv:0905.1751v4 [ cs.ne ] 25 oct 2009 .[ on line ] http://arxiv.org/abs/0905.1751 walter j. gutjahr and giovanni sebastiani , runtime analysis of ant colony optimization with best - so - far reinforcement , methodology and computing in applied probability 10 , pp .409 - 433 , 2008 .walter j. gutjahr , first steps to the runtime complexity analysis of ant colony optimization , computers and operations research 35 ( no .9 ) , pp . 2711 - 2727 , 2008 .chao - yang pang , wei hu , xia li , and ben - qiong hu , applying local clustering method to improve the running speed of ant colony optimization , arxiv:0907.1012v2 [ cs.ne ] 7 jul 2009 .[ on line ] http://arxiv.org/abs/0907.1012 chao - yang pang , wei hu , xia li , and ben - qiong hu , applying local clustering method to improve the running speed of ant colony optimization , [ on line]http://arxiv.org / pdf/0907.1012 chao - yang pang , chong - bao wang and ben - qiong hu , experiment study of entropy convergence of ant colony optimization , on line ] http:// arxiv.org/pdf/0905.1751 g. a. bilchev , i. c. parmee , the ant colony metaphor for searching continuous spaces , lecture notes in computer science , 993:2539 , 1995 . m. r. jalali , a. afshar and m. a. mario , multi - colony ant algorithm for continuous multi - reservoir operation optimization problem , water resour manage 21:14291447,2007 .wodrich m , bilche g , cooperative distributed search : the ant s way , control cybern ( 3):413446,1997 .mathur m , karale sb , priye s , jyaraman vk , kulkarni bd , ant colony approach to continuous function optimization , ind eng chem res 39:38143822 , 2000 .m. r. jalali , a. afshar , semi - continuous aco algorithms ( technical report),hydroinformatics center , civil engineering department , iran university of science and technology , tehran , iran,2005b .j. dreo , p .-siarry , a new ant colony algorithm using the hierarchical concept aimed at optimization of multiminima continuous functions , in : m. dorigo , gd .caro , m. sampels ( eds ) proceedings of the 3rd international workshop on ant algorithms ( ants 2002 ) , vol 2463 of lncs .springer , berlin heidelbergnew york , pp 216221,2002 .y -j li , t -j wu , an adaptive ant colony algorithm for continuous - space optimization problems , journal of zhejiang university science,2003,4(1):4046 .& & & + & theory value & calculated value & error ( % ) + & & & + & & & 5.12e-004 + & & & 1.91e-009 + & & & 1.20e-008 + + + + + & & & + & theory value & calculated value & error ( % ) + & & & + & & & 5.39e-004 + & & & 7.97e-008 + & & & 16.98e-008 + & & & 9.82e-009 + & & & 6.55e-008 + & & & 1.17e-011 + & & & 1.27e-008 + + + + + + & calculated value & extreme & calculated value & extreme & calculated value + & & points & & points & + & & 12 & & 23 & + & & 13 & & 24 & + & & 14 & & 25 & + & & 15 & & 26 & + & & 16 & & 27 & + & & 17 & & 28 & + & & 18 & & 29 & + & & 19 & & 30 & + & & 20 & & 31 & + & & 21 & & 32 & + & & 22 & & & + + & & & & + & calculated value & extreme & calculated value & extreme & calculated value + & & points & & points & + & & 13 & & 25 & + & & 14 & & 26 & + & & 15 & & 27 & + & & 16 & & 28 & + & & 17 & & 29 & + & & 18 & & 30 & + & & 19 & & 31 & + & & 20 & & 32 & + & & 21 & & 33 & + & & 22 & & 34 & + & & 23 & & 35 & + & & 24 & & 36 & + +
|
to find all extreme points of multimodal functions is called extremum problem , which is a well known difficult issue in optimization fields . applying ant colony optimization ( aco ) to solve this problem is rarely reported . the method of applying aco to solve extremum problem is explored in this paper . experiment shows that the solution error of the method presented in this paper is less than .
|
at the beginning of 17th century , galileo galilei turned his small telescope towards the sun and noticed the dark regions on sun s surface now called sunspots .the larger telescopes built subsequently revealed that sunspots have different structures like dark region called umbra and less dark with filamentary structures called penumbra .as the technology improved in 20th century , bigger telescopes were built to observe the characteristic features which include the light bridge across sunspots , moving magnetic features around sunspots , supergranulation network and its elements etc .most of these observations have been done using about 50 cm class telescopes or smaller . from the theoretical calculations , the size of the elementary building blocks ( size of the flux tube ) of magnetic structures , for example ,is estimated to be about 70 km .a large aperture telescope with diameter of 2 m or more is therefore , necessary to resolve important and intriguing features of the solar atmosphere at smaller scales .additionally , the photon starved polarization measurements , crucial to estimate the magnetic fields distribution and strength in the active regions on the sun , are hard to come by from small telescopes .the next generation , large aperture ( 2 m and above ) solar telescopes equipped with adaptive optics and supported by advanced backend instruments , will be able to probe the solar atmosphere at unprecedented details and thus help solving many outstanding problems in solar physics .some of the notable solar facilities planned for the future are : a 4 m advanced technology solar telescope ( atst ) to be build by national solar observatory , national large solar telescope ( nlst ) proposed by indian institute of astrophysics , india , and a soon to be commissioned 1.5 m gregor telescope by a consortia led kiepenheuer - institute , germany . the design and construction of a solar telescope is considerably different from a typical night - time telescope . among others ,the most formidable task is to build an efficient temperature control system capable of handling the large amount of heat generated by the intense solar radiation .the immense difficulty of the heat management problem has been a major bottleneck in constructing large aperture primary mirror for solar studies .evidently , by the time largest solar telescope ( 4 m atst ) starts its operation , the night - time astronomy would have leap - frogged to next generation of 20 - 40 m class telescopes . besides the direct heating caused by light absorption ,the diurnal temperature variations of ambient air are also crucial in driving the thermal response of the telescope mirror . 
the rise or fall in mirror temperature with respect to ambient ,adversely affect the imaging performance of the telescope in two major ways .first , when the mirror is at different temperature than the surrounding , a thin layer with a temperature gradient is formed closed to the mirror surface and air .the temperature fluctuations in the surface layer lead to the variations in the refractive index of the air that produce wavefront aberrations .the final image is blurred as the telescope beam passes through this layer twice .the second detrimental effects of heating or cooling is the structural deformations of the mirror arising from thermally induced stress and temperature inhomogeneities within the substrate volume .any deviation from the nominal temperature distorts the mirror surface from its desired shape .thermally induced surface imperfections therefore , deteriorate the final image quality of the telescope . to minimize the effect of mirror seeing , the temperature difference between the ambient air and the mirror surface should be maintained within a very narrow ( ) range . at the same time, the temperature difference across the reflecting face of the mirror should not exceed . for night - time astronomy, the temperature of the mirror and telescope dome can be effectively controlled by air conditioning system during the day .blowing cool air over the mirror surface is helpful in mitigating the mirror seeing effects .forced air flow breaks the thermal plumes closed to the mirror surface and thus homogenize the temperature gradients which otherwise drive the refractive index fluctuations leading to image degradation .the induced air flow also improves the convective heat transfer rate and thereby reducing the temperature difference between the mirror material and the surrounding air . for effective cooling, the air temperature and speed have to be regulated according to the temperature profile of the mirror surface . however , the accurate measurements of the temperature of the mirror require an array of temperature sensors placed close to the reflecting face of the mirror which is not just difficult but impractical during the scientific observations .in such a case , numerical simulations are the only viable solution to predict the temperature profile of the mirror surface . in another approach , a ` cool reservoir ' is created by lowering the the mirror temperature well below the temperature expected for the night observations .a near thermal equilibrium with ambient is accomplished in a resistive heating by passing electrical current through reflective coating on the front surface of the mirror .this approach does not quite suit the solar observations as the primary mirror is directly heated by sun s radiation .the effect of mirror seeing is even more severe if the same telescope is later switched to night - time observations . to avoid large heat loads and thermal gradients , ideally a substrate material should have low heat capacity and high thermal conductivity . among traditional materials used in primary mirror substrates , ultra low expansion ( ule )glass ceramics such as zerodur and fused silica have gained wide acceptance among astronomical community .the semiconducting sic is another important material that is beginning to find a niche in astronomy .in addition to its low thermal expansion coefficient , high hardness and rigidity , sic has exceptionally high thermal conductivity which is about two order of magnitude higher than the other materials . 
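the gap between the two material classes can be made quantitative with an order-of-magnitude estimate . the script below compares the thermal diffusivity and the characteristic through-thickness diffusion time of zerodur and sic for a 10 cm thick blank ; the material constants are typical handbook values quoted for illustration and are not necessarily those adopted in the finite element model of this paper .

materials = {
    #           k [W/m/K]   rho [kg/m^3]   cp [J/kg/K]   (typical handbook values)
    'zerodur': (1.46,       2530.0,        800.0),
    'sic':     (180.0,      3160.0,        700.0),
}
thickness = 0.10   # m, illustrative blank thickness

for name, (k, rho, cp) in materials.items():
    kappa = k / (rho * cp)             # thermal diffusivity [m^2/s]
    tau = thickness ** 2 / kappa       # characteristic diffusion time [s]
    print('%-8s kappa = %.2e m^2/s   tau = %.1f h' % (name, kappa, tau / 3600.0))

with these numbers a zerodur blank needs several hours to relax a through-thickness temperature gradient whereas sic does so in a few minutes , which is consistent with the distinct thermal behaviour of the two materials found in the simulations discussed later .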
for large size telescopes ,the lightweighted mirror geometry is preferred to reduce the overall weight and thermal inertia of the system .the reduced mass facilitates efficient and faster cooling necessary for the mirror to reach the thermal equilibrium with surroundings quickly .however , lightweighting is a complex , time consuming and often an expensive process . with a good thermal control system , a classical primary mirror of diameter not exceeding 2 mcould possibly be used without lightweighting . for a given shape and geometry of the mirror ,the analytical solutions become intractable . therefore , in traditional engineering practice , the thermal response of such a mirror blank is studied by finite element methods ( fem ) . the heat transfer problem described by a diffusion - type partial differential equation ,is numerically solved to obtain a steady - state solutions for a constant temperature typically found at telescope location .in addition , the mirror heating is modeled assuming fixed heat flux and fixed ambient air temperature which serves as the boundary conditions . the time scale for a mirror to reach a thermal equilibrium is of the order of few days while the ambient air temperature changes significantly in course of the day .this means , the usual mirror substrate with large heat capacity can never reach the thermal equilibrium with ambient unless some external cooling is used .the static analysis is not enough to determine the level of accuracy and other parameters of interest ( e.g. level of wind flushing or ventilation points etc ) necessary for designing an efficient thermal control system .a complete time dependent solution of 3d heat transfer problem under the varying solar flux and ambient temperature is therefore necessary to evaluate the mirror performance , a problem which we address in the present paper . in this paper , we have solved the 3d heat transfer and structural model that takes into account the ambient thermal conditions existing at the telescope site .the location dependent solar flux and ambient heating model is incorporated into commercially available finite element analysis software to investigate the time dependent thermal and structural response of 2 m class primary mirror .these numerical simulations are carried out at four different ambient temperature range , representative of mild to extreme thermal variations that may occur at different observatory locations around the world .the general heat flow problem involves all three modes of heat transfer , namely conduction , convection , and radiation .the reflecting face coated with a thin metallic layer absorbs about 10 - 15% of the total solar flux incident on the mirror surface . 
the heat generated as a result of absorption of optical energyis diffused to other parts of the mirror via conduction and partly lost to the surroundings via convective and radiative processes .the temperature of the mirror rises when the heat produced by light absorption exceeds the overall heat loss to surroundings .the general heat transfer problem can be cast into the partial differential equation of the form : where , , and are density , temperature , thermal conductivity and heat capacity of the material .the terms and represent the velocity field and heat source , respectively .most of the material properties are temperature dependent .the boundary condition for the heat flux to be maintained at the surface can be written as : where is the inward heat flux at the mirror surface , is the ambient air temperature , is the effective radiation temperature of the surroundings , is surface emissivity , is the stefan - boltzmann constant and is the heat transfer coefficient indicating the rate at which heat is exchanged between the mirror and the surroundings .the 2nd and the 3rd terms in eq.(2 ) are statements of newton s law of cooling and stefan - boltzmann s law , accounting for the heat loss by free convection and radiation , respectively .the diurnal and annual temperature variations is one of the important criteria for selecting a suitable observatory site .it also plays a vital role in determining the degrading effects of atmospheric turbulence , mirror seeing and structural stability of the telescope . the earth s surface is mainly heated by the irradiated solar energy during the day .the sun s light also contains the spectral signatures of both solar and terrestrial atmospheres .the amount and duration of the solar radiation reaching the earth s surface is given by where is the flux amplitude when sun is at zenith , and is the solar zenith angle which is given by where is latitude of the place , is declination angle of the sun , is the solar hour angle with respect to noon and is the local time . the exact nature of daily temperature variations at a given location , however , depends on the local weather conditions ( pressure , wind speed , humidity etc ) , surface topography , soil type , vegetation , and the presence of water body etc . proposed a simple physics - based model to describe the thermal heating of the earth s surface in cloud free conditions . 
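the individual terms of the boundary condition in eq.(2 ) and the zenith-angle flux model described above are straightforward to evaluate . the python sketch below does so for a horizontal surface ; the absorptivity , convection coefficient , emissivity and site coordinates are illustrative values ( the text quotes an absorbed fraction of 10 - 15% of the incident flux ) rather than the values used in the simulations .

import numpy as np

SIGMA = 5.670e-8                      # stefan-boltzmann constant [W m^-2 K^-4]

def solar_flux(t_hours, lat_deg, decl_deg, s0=1000.0):
    # direct solar flux on a horizontal surface: s0 times the cosine of the
    # solar zenith angle, with s0 the flux amplitude for the sun at zenith
    phi, delta = np.radians(lat_deg), np.radians(decl_deg)
    hour_angle = np.radians(15.0 * (t_hours - 12.0))      # local solar time, noon = 0
    cos_z = (np.sin(phi) * np.sin(delta)
             + np.cos(phi) * np.cos(delta) * np.cos(hour_angle))
    return s0 * np.clip(cos_z, 0.0, None)                 # no direct flux below the horizon

def net_surface_flux(t_hours, t_mirror, t_air, t_sky,
                     absorptivity=0.12, h=5.0, eps=0.85,
                     lat_deg=32.7, decl_deg=15.0):
    # inward heat flux at the mirror face, eq.(2): absorbed sunlight plus
    # free-convective and radiative exchange with the surroundings (temperatures in kelvin)
    q_abs = absorptivity * solar_flux(t_hours, lat_deg, decl_deg)
    q_conv = h * (t_air - t_mirror)
    q_rad = eps * SIGMA * (t_sky ** 4 - t_mirror ** 4)
    return q_abs + q_conv + q_rad

evaluating net_surface_flux over a full day gives the kind of time-dependent boundary forcing fed to the finite element solver ; the ambient air and sky temperatures themselves follow the simple physics-based heating and cooling model just mentioned , which is described next .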
according to this model , the day - time rise in temperature due to the sun and the night - time cooling be described by \qquad\qquad\qquad\qquad\textrm{for}\quad \;\;t < t_s \\t_{2}(t)\!&=&\!s(t_{0}+\delta t ) + s\left\{t_{a } \cos\left[\frac{\pi}{\omega}(t-\tau_m)\right]-\delta t \right\ } \textrm{e}^{-\frac{t - t_s}{\kappa } } \;\;\textrm{for } \;\;t \geq t_s \;\;\ ; \label{eqs56 } \end{aligned}\ ] ] for heating part , the choice of harmonic term in eq.(5 ) is based on the solution of equation of thermal diffusion , while the exponential term in eq.(6 ) accounts for the temperature fall in accordance to the newton s law of cooling .the constant is included to account for the thermal time lag between the peak solar flux and the peak ambient temperature during the day .the meaning and values of various parameters in eq.(5 ) and eq.(6 ) are listed in table [ tab1 ] ..model parameters and the values for eqs.[eqs56 ] [ cols="<,<,<",options="header " , ]the primary mirror heating caused by the light absorption is a complex and challenging problem .the thermal performance of a solar telescope mirror needs to be accurately predicted under real observing conditions .this is essential to design an efficient and durable temperature control system devised to mitigate the detrimental effects of excessive heating during the day .we have outlined an approach to study the thermal and structural response of a primary mirror under varying observing conditions . in the fem model for a 2 m class primary mirror , the location dependent solar flux and a simple physics based heating and cooling model of the ambient air temperaturewas incorporated .the spatial and temporal evolution of temperature field inside two well known materials for optical telescope , silicon carbide ( sic ) and zerodur , were examined using 3d numerical simulations .the low thermal conductivity of zerodur mirror gives rise to strong radial and axial temperature gradients that are quite distinct for the day - time heating and night - time cooling .heat loss by free convection is very slow so the mirror retain significant heat during the night .the thermal response of the sic mirror is significantly different from the zerodur .the temperature within the sic mirror substrate equilibrates rather quickly due to high thermal conductivity .the absence of thermal gradients and the advantage of high thermal conductivity of sic can not be favorably leveraged without some temperature regulation by external means .thermal distortions of the mirror were analyzed using the structural fem model .high surface distortion seems to result if the operating temperature of the mirror deviates significantly from the nominal temperature of the material . in extremely low cte materials, the mirror seeing ultimately limits the telescope performance .it is not the high temperature alone , but the relative incremental change in ambient air temperature and the mirror which contribute most to the ` seeing effect ' .large temperature changes in ambient air and slow thermal response of the glass materials invariably results in bad seeing . in order to augment the scientific productivity of a solar telescope ,there have been some suggestions from the astronomers about the possibility of utilizing the same telescope for limited , but useful night - time observations .a site with minimum diurnal changes in day - night temperature ( e.g. 
0 - 5 c ) would ideally suit for such dual purpose observations .we have only considered the thermal gradients and surface distortions in solid mirror which would be quite different for the lightweighted mirror made of same materials .the most conspicuous fallout of the different cell geometries and side walls structure is the appearance of thermal footprint in the surface distortions. this would be the subject matter of future work .we thank dr s. chatterjee for several discussions and valuable suggestions that helped us to improve the form and content of the manuscripts .
|
we present a detailed thermal and structural analysis of a 2 m class solar telescope mirror which is subjected to a varying heat load at an observatory site . a 3-dimensional heat transfer model of the mirror takes into account the heating caused by a smooth and gradual increase of the solar flux during the day - time observations and cooling resulting from the exponentially decaying ambient temperature at night . the thermal and structural response of two competing materials for optical telescopes , namely silicon carbide -best known for excellent heat conductivity and zerodur -preferred for its extremely low coefficient of thermal expansion , is investigated in detail . the insight gained from these simulations will provide a valuable input for devising an efficient and stable thermal control system for the primary mirror . solar telescope mirror ; optical materials ; thermal effects ; finite element methods
|
the structure and evolution of the magnetic field ( and the associated electric currents ) that permeates the solar atmosphere play key roles in a variety of dynamical processes observed to occur on the sun .such processes range from the appearance of extreme ultraviolet ( euv ) and x - ray bright points , to brightenings associated with nanoflare events , to the confinement and redistribution of coronal loop plasma , to reconnection events , to x - ray flares , to the onset and liftoff of the largest mass ejections .it is believed that many of these observed phenomena take on different morphologies depending on the configurations of the magnetic field , and thus knowledge of such field configurations is becoming an increasingly important factor in discriminating between different classes of events .the coronal topology is thought to be a critical factor in determining , for example , why some active regions flare , why others do not , how filaments form , and many other topics of interest .one model of the coronal magnetic field assumes that the corona is static and free of lorentz forces , such that , where is the current density .this means that , and thus any electric currents must be aligned with the magnetic field . because , it can be shown that , demonstrating that is invariant along field lines of .the scalar is in general a function of space and identifies how much current flows along each field line . in cases where varies spatially , the problem of solving for ( and ) is nonlinear .solving for such nonlinear force - free fields ( nlfffs ) requires knowledge of over the complete bounding surface enclosing the solution domain . to be compatible with a force - free field ,it is necessary for these boundary data to satisfy a number of consistency criteria , which we outline in [ sec : construction ] and which are explained in detail in and in . in analyzing solar active regions , localized maps of the photospheric vector fieldare typically used for the lower bounding surface , and potential fields are used for the other surfaces .( for the cartesian models discussed herein , we use the convention that the axis is normal to the photosphere , which is located at height . ) the availability of vector field maps produced by recent instrument suites such as the synoptic optical long - term investigations of the sun ( solis ) facility and the hinode spacecraft , building on earlier work done in hawaii with data from the haleakal stokes polarimeter ( hsp ) and by the imaging vector magnetograph ( ivm ) as well as from the hao / nso advanced stokes polarimeter ( asp ) at sacramento peak in new mexico , has spurred investigations that employ coronal - field models based on such measurements .we anticipate that such research will intensify when regular , space - based vector field maps from the helioseismic and magnetic imager ( hmi ) instrument on board the solar dynamics observatory ( sdo ) become available. one goal of nlfff modeling is to provide useful estimates of physical quantities of interest ( e.g. 
, connectivities , free energies , and magnetic helicities ) for ensembles of active regions , so that these active regions may be systematically analyzed and intercompared .the use of static , force - free models mitigates some of the computational difficulties associated with solving the more physically realistic , time - dependent problem , as running such dynamical models at the desired spatial and temporal resolutions for multiple active regions typically exceeds current computing capabilities . thereexist several previous studies of individual active regions where nlfff models are shown to be compatible with various structures in the corona ( e.g. , ) .several of these studies provide evidence of good alignment between nlfff model field lines and the locations of observed features such as coronal loop structures observed in euv and x - ray images .others show that the locations of sigmoids , twisted flux ropes , and/or field line dip locations coincide with analogous features in the nlfff models .such studies are certainly encouraging , but still it remains difficult to conclusively determine whether these models match a significant fraction of the coronal magnetic field located within the volume overlying an entire active region . as part of a long - lasting ( e.g. , ) effort to develop methods that generate more robust nlfff models , a working group ( in which all of the authors of this article are participating ) has held regular workshops over the past several years .the previous results from this collaboration are presented in , , and . since the launch of hinode in 2006, we have applied multiple nlfff modeling codes to a few active regions for which hinode vector magnetogram data are available and for which nonpotential features are evident ( e.g. , ) .the resulting nlfff models generally differ from each other in many aspects , such as the locations and magnitudes of currents , as well as measurements of magnetic energy in the solution domain . in this article, we identify several problematic issues that plague the nlfff - modeling endeavor , and use a recent hinode case to illustrate these difficulties .we describe one representative data - preparation scheme in [ sec : construction ] , followed in [ sec : validation ] by a comparison of field lines in the resulting nlfff models to two- and three - dimensional coronal loop paths , the latter determined by analyzing pairs of stereoscopic images . in [ sec : discuss ] , we explain the primary issues that we believe to impact our ability to reconstruct the coronal field in a robust manner , and also identify and discuss the alternate data - preparation scenarios we tried in addition to those presented in [ sec : construction ] . concluding remarks are presented in [ sec : conc ] .several nlfff extrapolation algorithms ( each implementing one of the three general classes of extrapolation methods ) were applied to boundary conditions deduced from a scan of noaa active region ( ar ) 10953 , taken by the spectro - polarimeter ( sp ) instrument of the solar optical telescope ( sot ) on board the hinode spacecraft .the hinode / sot - sp scan of this active region started at 22:30 ut on 2007 april 30 and took about 30 min to complete .as the scan progressed , polarization spectra of two magnetically sensitive fe i lines at 6301.5 and 6302.5 were obtained within the 0 slit , from which stokes iquv spectral images were generated . 
for this scan ( in `` fast - map '' mode ) , the along - slit and slit - scan sampling was 0 , and the total width of of the scan was 160 . ar 10953 produced a c8.5 flare about two days after this hinode / sot - sp scan , and a c4.2 flare about four and a half days after this scan , but otherwise the active region was flare - quiet above the c1.0 level .images from the x - ray telescope ( xrt ) on board hinode around this time show a series of bright loops in the central region of ar 10953 ( fig .[ fig1]a ) .the nlfff algorithms need vector magnetic data as boundary conditions , and determining these boundary maps comprises the first step in constructing a nlfff model .the conditions pertaining to the lower boundary are determined from a map of the photospheric vector magnetic field from the hinode / sot - sp instrument .the magnetic components parallel to and transverse to the line of sight , and , are functions of the circular and linear polarization signals , respectively .constructing requires assuming an atmospheric model ( in this case milne - eddington ) and determining which combinations of magnetic field strengths and filling factors produce the observed polarization signals ( e.g. , ) . has uncertainties that are typically an order of magnitude less than .the next step involves removing the ambiguities in the components of that arise due to the property that the same linear polarization signal can be produced by either of two magnetic field vectors differing by 180 of azimuth in the transverse plane .we choose to perform the disambiguation using the interactive azimuthal ambiguity method ( azam ) , which is one of several methods have been devised and tested to resolve this ambiguity ( see , and references therein ) .after disambiguation , the map for ar 10953 is used to produce potential field data with which the extrapolation codes will initialize the computational domain .our approach is to specify the computational domain ( having an enclosing surface ) that contains much of the coronal volume overlying the active region of interest , such that the lower boundary includes the area for which vector magnetogram data are available .the initialization field is calculated by embedding the hinode / sot - sp vector magnetogram data in a larger line - of - sight magnetogram observed by the michelson doppler imager ( mdi ) instrument on board the solar and heliospheric observatory ( soho ) spacecraft ( as shown in fig . 
[ fig1]d ) .then , the potential field coefficients corresponding to this enlarged footprint are determined , from which the potential field in the 320-pixel nlfff computational domain is computed .in addition , the vector field boundary conditions for the side and top boundaries of the computational domain are taken from this same potential field extrapolation , primarily because we expect that the coronal magnetic field becomes largely potential away from the center of the active region , but also because it is useful to specify how unbalanced flux emanating from this active region connects to flux of the opposite polarity located elsewhere on the sun .the embedded lower - boundary data are then sampled onto a uniform , helioplanar , 320-pixel grid having 580 km pixels , such that the footprint of the computational domain spans a 185.6-mm - square area .the region for which hinode vector magnetogram data for ar 10953 were available comprise about a 100-mm - by-115-mm subarea of the full lower boundary footprint , outside of which the horizontal components of are set to zero .thus , in this peripheral region outside the hinode / sot - sp field of view , the field on the lower boundary can either be considered as purely vertical ( for force - free methods which use all three components of the field as boundary conditions ) , or equivalently as having zero vertical current ( for methods which use the vertical component of the field together with the vertical component of the current density ) .next , to be consistent with a force - free field , it is necessary ( but not sufficient ) that the entire boundary field satisfy several criteria , as delineated in and in : namely , ( 1 ) the volume - integrated lorentz force must vanish , ( 2 ) the volume - integrated magnetic torque must vanish , and ( 3 ) the amount of negative - polarity flux through having a given value of must equal the positive - polarity flux through with this same value of .the first two criteria are relations involving various components of , and are derived from volume integrals of the lorentz force and its first moment .the third ( `` -correspondence '' ) relation operates over all values of present on .there is of course no guarantee , however , that the values of , coupled with the potential field of for the complement of the enclosing surface , together satisfy these consistency criteria .our working group attempts to deal with this problem by preprocessing the boundary data before feeding them to the extrapolation codes .the preprocessing scheme used here ( developed by ) seeks to adjust the components of so as to satisfy the first two consistency criteria while minimizing the deviations of from their measured values . 
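the potential-field initialisation described above can be illustrated with a standard fourier scheme : the vertical field on a ( periodic ) lower boundary is transformed , each harmonic is attenuated with height as exp(-kz ) , and the horizontal components follow from the current-free condition . the sketch below is a textbook implementation for a uniform cartesian grid and is not the particular code used by any of the participating groups .

import numpy as np

def potential_field(bz0, dx, nz, dz):
    # bz0: vertical field on the lower boundary, shape (ny, nx), square pixels of size dx;
    # returns (bx, by, bz) on an (nz, ny, nx) grid with vertical spacing dz
    ny, nx = bz0.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    kxg, kyg = np.meshgrid(kx, ky)
    k = np.sqrt(kxg ** 2 + kyg ** 2)
    ksafe = k.copy()
    ksafe[0, 0] = 1.0                                   # avoid 0/0 for the mean-flux mode
    bz_hat0 = np.fft.fft2(bz0)

    bx = np.empty((nz, ny, nx))
    by = np.empty_like(bx)
    bz = np.empty_like(bx)
    for iz in range(nz):
        bz_hat = bz_hat0 * np.exp(-k * iz * dz)         # harmonics decay with height
        bx[iz] = np.real(np.fft.ifft2(-1j * kxg / ksafe * bz_hat))
        by[iz] = np.real(np.fft.ifft2(-1j * kyg / ksafe * bz_hat))
        bz[iz] = np.real(np.fft.ifft2(bz_hat))
    return bx, by, bz

because such a scheme is periodic in the horizontal directions , any net flux imbalance in the boundary data propagates unchanged to all heights , which is one practical motivation for embedding the observed magnetogram in the larger mdi line-of-sight map before the potential coefficients are computed ; the nlfff codes are then started from the resulting potential field , with the preprocessed vector data imposed on the lower boundary .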
during this preprocessing step , spatial smoothingis also applied to to attenuate some of the small - scale magnetic fluctuations that likely die off shortly above the photosphere .finally , we apply the various nlfff algorithms to these boundary and initial data .several methods for calculating nlfff models of the coronal magnetic field have been developed and implemented in recent years , including ( 1 ) the optimization method , in which the solution field is evolved to minimize a volume integral such that , if it becomes zero , the field is divergence- and force - free ; ( 2 ) the evolutionary magnetofrictional method , which solves the magnetic induction equation using a velocity field that advances the solution to a more force - free state ; and ( 3 ) grad - rubin - style current - field iteration procedures , in which currents are added to the domain and the magnetic field is recomputed in an iterative fashion .some of these methods have been implemented by multiple authors .for brevity , we omit detailed explanations of these numerical schemes as implemented here and instead direct the reader to and , and references therein .although these methods work well when applied to simple test cases , we have found that the results from each of the methods typically are not consistent with each other when applied to solar data .the resulting magnetic field configurations differ both qualitatively ( e.g. , in their connectivity ) as well as quantitatively ( e.g. , in the amount of magnetic energy contained within them ) .in discussing the results from the solar - like test case of , we described some likely causes of such discrepancies amongst the models . inwhat follows , we illustrate these problems in greater detail using the ( solar ) data set at hand . lcccc pot & 1.00 & & 0.02 & 24 + wh & 1.03 & 0.24 & 7.4 & 24 + tha & 1.04 & 0.52 & 34 . &25 + wh & 1.18 & 0.16 & 1.9 & 27 + val & 1.04 & 0.26 & 71 . &28 + am1 & 1.25 & 0.09 & 0.72 & 28 + am2 & 1.22 & 0.12 & 1.7 & 28 + can & 1.24 & 0.09 & 1.6 & 28 + wie & 1.08 & 0.46 & 20 . &32 + mct & 1.15 & 0.37 & 15 . &38 + rg & 1.04 & 0.37 & 6.2 & 42 + rg & 0.87 & 0.42 & 6.4 & 44 + [ table1 ]the results of twelve extrapolations for ar 10953 ( including the potential field ) , based on the data - preparation steps described in [ sec : construction ] , are summarized in table [ table1 ] and figure [ fig3 ] .table [ table1 ] contains domain - averaged metrics characterizing the center of the active region ( corresponding to the region surrounding the leading , negative - polarity sunspot ) , and figure [ fig3 ] shows representative field lines in this same volume for each of these models .this central region is a 160-pixel volume , chosen to cover the portion of the lower boundary containing much of hinode / sot - sp magnetogram data ( i.e. 
, where we have some knowledge about the currents passing through the photosphere ) , and is fully contained within the larger 320-pixel computational domain .the models considered in table [ table1 ] and figure [ fig3 ] are the current - field iteration method as run by wheatland using the values of in either the negative or positive polarity ( hereafter `` wh '' and `` wh '' , respectively ) ; the finite - element grad - rubin - style method ( femq in ) run using two different parameter sets by amari ( `` am1 '' and `` am2 '' ) ; the vector - potential grad - rubin - like method ( xtrapol in ) by canou ( `` can '' ) , or by rgnier using the values of in either the positive ( `` rg '' ) or negative ( `` rg '' ) polarity ; the optimization method using grid refinement as run by wiegelmann ( `` wie '' ) or mctiernan ( `` mct '' ) , or no grid refinement as run by thalmann ( `` tha '' ) ; the magnetofrictional method using grid refinement as run by valori ( `` val '' ) ; and the initial potential solution ( `` pot '' ) .we find that the am1 , am2 , can , and wh current - field iteration models contain between 18% and 25% more energy than the potential solution , and have smaller residual lorentz forces and smaller average than the other models .in addition , the am1 , am2 , and can models find a strongly twisted flux rope in equilibrium , whose foot points are anchored southeast of the main spot ( mostly outside of the core volume shown in fig .[ fig3 ] ) , a feature which was anticipated by the analysis of .models using the optimization method ( mct , wie , and tha ) contain between 4%15% more energy than the potential solution , but possess more residual lorentz forces than the current - field iteration solutions .the magnetofrictional model ( val ) has more energy than the potential solution but has larger values of than the optimization or current - field iteration solutions .based on the results summarized in table [ table1 ] , the excess magnetic energy ( above the potential field ) for this active region could be anywhere from near zero to about 25% of the potential field energy .however , it is also possible that the excess energy is significantly larger than 25% when taking into account the uncertainty associated with the inconsistency between the boundary data and the force - free - model assumption ( see [ sec : chromosphere ] ) . because of these differences in the resulting nlfff models of ar 10953, we perform a goodness - of - fit test to determine which of the nlfff models is the best approximation to the observed coronal magnetic field . 
in the earlier study of , we performed this test in both a qualitative and quantitative manner using euv and x - ray imagery , provided respectively by the transition region and coronal explorer ( trace ) and hinode / xrt instruments , by determining which model possessed field lines that were more closely aligned with the projected coronal loop structures visible in the ( two - dimensional ) image plane .models for which most field lines appeared to be aligned with loops were considered good approximations to the actual coronal magnetic field .locations where the field was noticeably sheared or twisted were of particular interest because such patterns are usually indicative of the presence of currents ( which the modeling seeks to ascertain ) .more weight was typically given to regions connected to places at the photospheric boundary where is found to be high , whereas coronal loops located in the periphery of the active region with footpoints located where was lower were likely to be less sensitive to the presence of currents elsewhere in the active region .for ar 10953 , we overlaid field lines from all of the nlfff models ( as well as the potential field model ) on top of the time - averaged hinode / xrt image shown in figure [ fig1]a , and used the same criteria listed above to qualitatively determine the better - matching models .we subjectively judged the field lines in the wh , am1 , am2 , and can models to be more closely aligned with the xrt loops than any of the others .an overlay of field lines from the wh model is shown in figure [ fig1]b .this judgement is based on good alignment with the tightly curved x - ray loops north of the sunspot ( which is visible in the coaligned magnetogram of this region shown in fig .[ fig1]d ) , together with a reasonably good match of the loop arcade and fan structures to the south and west of the sunspot .this judgement is also based on side - by - side comparisons of field line overlays amongst the various candidate models ( including the potential field model ) , from which a relative ranking was determined .the models listed above came out on top in both instances . with the aim of determining more quantitatively the best - fit model(s ) for ar 10953, we also compared the model field lines to three - dimensional trajectories of loop paths .we are able to do this because ar 10953 was observed by the twin solar terrestrial relations observatory ( stereo ) spacecraft , one of which leads the earth in its orbit around the sun , and the other of which trails the earth . as part of the sun earth connection coronal and heliospheric investigation ( secchi ) instrument suite , each stereo spacecraft contains an extreme ultraviolet imager ( euvi ) .the angular separation of the two stereo spacecraft at the time ar 10953 was on disk ( of about 7 ) was favorable for stereoscopically determining the three - dimensional trajectories of loops observed in the 171 , 195 , and 284 channels of euvi .the coordinates of these loop trajectories were obtained by triangulating the positions of common features visible in pairs of concurrent euvi images using the method described in . unfortunately , most of the loops visible in the three euvi wavebands lie outside of the central region of ar 10953 ( fig . 
[ fig1]c ) , and thus do not overlap the region for which the vector magnetogram data are available ( figs .[ fig1]d , e ) .the main reason is that loops located closer to the centers of the active regions tend to emit more in x - ray passbands than in euv passbands .in addition , large loops at the periphery of active regions are generally easier to reconstruct with stereoscopy , while small loops in the centers of active regions are more difficult to discern from underlying bright features ( such as moss ) and thus can not unambiguously be triangulated .however , the outlying loops evident in ar 10953 should still sense the presence of currents in the center of the active region , due to ampre s law , and thus might be useful for quantitatively determining the best - matching nlfff model for this active region .we infer that currents must be present in the ar 10953 corona for two reasons .first , most of the strong vertical currents in the map are located in the central portion of the active region and presumably flow upward into the corona .second , field lines from the potential model do not qualitatively match the x - ray and euv loops as well as field lines from the wh , am1 , am2 , and can models , which are our most nonpotential models and evidently contain currents strong enough to affect the trajectories of many field lines in the central portion of this active region ( cf .[ fig3 ] ) . to quantitatively compare the stereo loops and the nlfff - model field lines, we determine the ( positive ) angle between the stereo - loop and the model - field line trajectories subtended at all stereo - loop points lying inside the full 320-pixel nlfff computational domain .we then computed the mean of these angles , yielding for each model the domain - averaged misalignment angle metric listed in table [ table1 ] .we find that , at least by this particular quantitative measure , none of the nlfff models improve upon the value of found for the potential field model , although several models ( including the qualitatively better - fitting models discussed earlier ) are comparable .we discuss reasons why none of the models improved upon the potential field metric for in [ sec : fovissues ] .given the boundary conditions produced using the data preparation process described in [ sec : construction ] , the various nlfff algorithms converged to different solutions for the coronal field above ar 10953 .a few of the models appear to match the loop structures in the hinode / xrt image , but none of them were able to improve upon the potential field in their alignment with the three - dimensional loop trajectories inferred from stereo / secchi - euvi . 
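for reference , the misalignment metric used above is simple to evaluate once the triangulated loop coordinates and a gridded model field are available . the sketch below uses the unsigned angle between the local loop tangent and the interpolated field direction ; the interpolation scheme , function names and the treatment of points outside the grid are assumptions of this sketch rather than a description of the actual analysis code .

```python
import numpy as np

def misalignment_angles(loop_xyz, b_at):
    """Angles (deg) between local loop tangents and the model field along one loop.

    loop_xyz : (n, 3) array of ordered, stereoscopically triangulated loop points.
    b_at     : callable returning the model (Bx, By, Bz) at a given position, e.g. a
               trilinear interpolator built on the extrapolation grid (assumed here).
    The unsigned angle is used, so values lie between 0 and 90 degrees.
    """
    t = np.gradient(loop_xyz, axis=0)                 # local tangent vectors
    b = np.array([b_at(p) for p in loop_xyz])         # model field at the loop points
    cosang = np.abs(np.einsum('ij,ij->i', t, b))
    cosang /= (np.linalg.norm(t, axis=1) * np.linalg.norm(b, axis=1))
    return np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))

# The domain-averaged metric for a model is the mean of these angles over all
# loop points that fall inside the computational volume, as listed in table 1.
```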
in attempting to find a consensus model, we also applied the nlfff algorithms to different boundary data generated using variants of the data preparation process .these variations , described in [ sec : dataprep ] , were run in parallel to those analyzed in [ sec : validation ] , and also did not produce a viable model .this inability to generate models that both qualitatively and quantitatively match the coronal loops paths is disappointing , especially given the generally successful application of these algorithms to test cases with known solutions , including a solar - like test case with quasi - realistic forcing in the lower layers that was meant to approximate some of the forces acting in the solar chromosphere .while we realistically expect the various methods to yield somewhat different solutions , we can not fully ascribe the broad range of inconsistencies in the solutions solely to algorithmic differences .this causes us to examine the entire nlfff modeling process from beginning to end , and in so doing we have identified several additional factors that likely also impact our ability to produce robust models .these factors are discussed further in [ sec : fovissues ] and [ sec : chromosphere ] .we applied the nlfff algorithms to boundary data produced using eleven variations of the data preparation process , of which only one was outlined in [sec : construction ] .variations involved substituting a different procedure to remove the 180 ambiguity of the measured transverse vector field , and/or using different versions of the standard preprocessing algorithm . in total , about 60 different nlfff models for ar 10953 were calculated .( not all algorithms were run on all of the available boundary data sets . )the first variant entailed using a different algorithm to remove the 180 ambiguity inherent in the vector - magnetogram inversion process .although there are in fact several algorithms to do this , we chose as an alternative to azam to employ the automated university of hawaii iterative method ( uhim ) because it has been used extensively in the literature and also scored highly amongst other ambiguity resolution algorithms .we found that , while differences exist in , for example , field line trajectories near regions where the ambiguity was resolved differently , the volume - integrated metrics discussed in [ sec : validation ] and shown in table [ table1 ] were largely similar for both the azam- and uhim - disambiguated boundary data . the second variant involved a new version of the method used to preprocess the values of to make the boundary data more consistent with a force - free solution .our standard scheme pivots and smooths the components of so that the integrated magnetic forces and torques in the overlying volume are reduced as much as possible , while also retaining some fidelity to the measured vector field . 
for ar 10953, we also experimented with a preprocessing scheme ( described in ) that , in addition to the above , seeks to align the horizontal components of with fibrils seen in contemporaneous images of h .the motivation for this additional preprocessing constraint is to produce boundary data as close as possible to the force - free field expected to exist at the chromospheric level ( to which the h fibrils are assumed parallel ) .we found , however , that using h-fibril information ( observed by the narrowband filter imager of hinode / sot ) did not make a significant difference in the domain - averaged metrics used to characterize the various extrapolation models , although we intend to experiment further with this preprocessing scheme as it is somewhat new .the third variant was to use the method of preprocessing described in , the goals of which are the same as the , but which uses a simulated annealing numerical scheme to find the optimal field . as with the other variations , using this alternate preprocessing scheme did not much affect the resulting global metrics .the hinode / sot - sp vector magnetogram data span only the central portion of the ar 10953 , and thus do not cover all of the weaker field and plage that surround the active - region center . here , as in the case , we chose to extend the nlfff computational domain and embed the vector data in a larger line - of - sight magnetogram .one benefit of such embedding is that it places the side and top bounding surfaces farther away from the center of the active region , in locations where the coronal magnetic field is presumed more potential and thus more consistent with the boundary conditions applied there .another reason is that in earlier test cases using boundary data with known solutions ( described in ) , we found that enlarging the nlfff computational domain improved the solution field in the central region of interest .we attributed this behavior primarily to the sensitivity of the final solution to the specified boundary conditions , and concluded that moving the side and top boundaries farther away from the region of interest improved the resulting models .however , there is an important difference between these earlier tests and the current case of ar 10953 . in the study ,vector data for the entire ( enlarged ) lower boundary were available , and thus the locations of currents penetrating the entire lower bounding surface , over both polarities , were known .in contrast , for ar 10953 we have no information about currents located exterior to the region containing the hinode vector magnetogram data , and consequently ( as stated earlier ) the horizontal components of were set to zero in the region outside of the area containing hinode / sot - sp vector data .this is obviously not correct , but lacking any knowledge of actual horizontal fields there , this approach was presumed to be the least damaging .however , the lack of satisfactory results suggests that the decision to embed may not be as harmless as originally believed . 
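in practice the embedding itself amounts to overwriting the relevant subarea of a larger boundary map , as in the sketch below ; the array sizes , offsets and zero - filled placeholders stand in for the actual remapped mdi and hinode / sot - sp data and are assumptions of this illustration .

```python
import numpy as np

nbig = 320
bz_los = np.zeros((nbig, nbig))          # stands in for the remapped MDI line-of-sight map
bx_sp = np.zeros((200, 172))             # stand in for the SP vector-field components
by_sp = np.zeros((200, 172))
bz_sp = np.zeros((200, 172))

bx = np.zeros((nbig, nbig))              # horizontal field: zero outside the SP field of view
by = np.zeros((nbig, nbig))
bz = bz_los.copy()

j0, i0 = 60, 74                          # assumed pixel offsets of the SP field of view
nysp, nxsp = bz_sp.shape
bz[j0:j0 + nysp, i0:i0 + nxsp] = bz_sp   # SP B_z replaces the LOS values inside the FOV
bx[j0:j0 + nysp, i0:i0 + nxsp] = bx_sp   # SP horizontal field inside the FOV only,
by[j0:j0 + nysp, i0:i0 + nxsp] = by_sp   # i.e. a "purely vertical" field elsewhere
```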
the ability of the various nlfff algorithms to find a valid solution ultimately depends upon how they deal with the currents passing through the bounding surfaces of the computational domain .it is interesting to note that for ar 10953 , as for the case , the solutions bearing the best resemblance to the hinode / xrt loops , and here were among the best at matching the stereo - loop trajectories , were calculated using the current - field iteration method .this method differs from the others in that it uses values of and only in one of the polarities ( the well - observed leading polarity , in the case of the best - fit models ) from the lower boundary , while ignoring such measurements in the opposite polarity .in contrast , the optimization and magnetofrictional methods require that information about currents be available across both polarities .we suspect that the wheatland current - field iteration algorithm benefits from the additional space in the solution domain because fewer current - carrying field lines intersect the side boundaries ( which causes their values of to be set to zero ) .however , the wiegelmann optimization algorithm , and the valori magnetofrictional algorithm in particular , perform better when applied to smaller volumes or when the weighting given to the peripheral boundary information is less than that applied to the hinode vector magnetogram data .many of these problems caused by the embedding process are alleviated when vector magnetogram data are provided over a field of view that covers the locations of all relevant currents associated with the region of interest .for active - region studies , this often means capturing much of the trailing polarity , which is often more diffuse and extended than the leading polarity .we therefore conclude that vector magnetogram data of active regions for use by nlfff modeling efforts need to span much of the area above which currents flow .coverage of the more diffuse , trailing - polarity fields is likely to be especially important because of the tendency for the trailing - polarity field to contain the endpoints of many field lines that carry significant currents ( due to the existence of such currents in the leading polarity , coupled with the assumption that many field lines connect the leading and trailing polarities within the active region of interest ) . on a related topic, we suspect that the stereo - loop comparison process described in [ sec : validation ] is affected both by the proximity of the stereo loops to the sidewalls of the nlfff computational domain ( where potential - field boundary conditions were applied ) and by their lying outside of the region for which we have vector magnetogram data ( figs .[ fig1]d , e ) .consequently , one might not be surprised that the potential model bested the others in matching the stereo loops , but the sizable misalignment angle of 24 for the potential model seems to suggest that even these outlying stereo loops do carry some currents . 
in light of these issues , rather than using the stereo - loop comparison as a discriminator between the collection of nlfff models , we instead view the collectively poor misalignment angles of the nlfff models as another indication that the region over which vector magnetogram data are available needs to be enlarged . although it is possible to enlarge the nlfff computational domain ( beyond what we have already done ) in order to include even more loops observed by stereo , we again emphasize that the added benefit of doing so without additional vector magnetogram data would be minimal because of the lack of further information about currents flowing through the lower boundary . indeed , we applied the same current - field iteration method used for wh to larger ( 512-pixel ) boundary data produced using the same process described in [ sec : construction ] , and found that the value of the misalignment angle for the identical volume used to compute the values in table [ table1 ] remained unchanged . lastly , we recognize that , when compared with stronger - field regions , the transverse field components are not measurable with the same degree of certainty in weaker - field regions such as those likely to lie within the enlarged fields of view for which we are advocating . the findings presented here , however , suggest that the nlfff modeling algorithms would benefit by having these vector magnetic field data available , even if such data possess higher measurement uncertainties than the stronger fields found closer to the centers of most active regions . in [ sec : construction ] , we described several conditions that the boundary data must satisfy in order to be consistent with a force - free magnetic field . however , these conditions are never guaranteed to be satisfied on the full bounding surface , which here consists of the vector and line - of - sight magnetogram data for the lower boundary combined with the potential field boundary conditions used for the remainder of the enclosing surface . to partially rectify this problem , we apply preprocessing to these data to thereby adjust the various components of the field on the boundary such that the boundary data are made more compatible with the equations the nlfff algorithms seek to solve . even after preprocessing , however , the boundary data can be shown to be incompatible with a force - free field . the wh model , which is one of several models judged to match best on a qualitative basis , only uses the values of α located in the negative polarity of the active region . however , the algorithm converged to a solution for which the corresponding α values in the positive polarity do not match those indicated by the hinode / sot - sp data . figure [ fig2]a illustrates this problem . there , the α values in the wh model from field lines that intersect the lower boundary in the positive polarity are plotted versus the α values at the same boundary points deduced from the preprocessed hinode data . for consistent boundary data , these would be equal . the scatter evident in the figure indicates that the hinode boundary data , even after preprocessing , are inconsistent with a force - free field . figure [ fig2]b illustrates this effect in a different way . this incompatibility can be illustrated by computing the quantity f ( x ) = \int h ( | \alpha | - x ) \ , b_z \ , da over the boundary map , where h is the heaviside step function , and b_z and \alpha are , respectively , the flux density and the value of the force - free parameter at each point on the preprocessed hinode boundary map . the function f ( x ) signifies the net flux in that subarea of the boundary map for which | \alpha | is larger than a certain threshold x .
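evaluating this diagnostic from a boundary map is straightforward ; a minimal sketch ( with assumed array names and a user - supplied set of thresholds ) is given below .

```python
import numpy as np

def net_flux_above(bz, alpha, dA, thresholds):
    """Net flux through the subarea of the boundary map where |alpha| exceeds x.

    bz, alpha : 2-D boundary maps of the normal field and of the force-free parameter.
    For boundary data that satisfy the alpha-correspondence condition this function is
    flat (zero derivative) in x; real magnetogram data generally are not.
    """
    return np.array([np.sum(bz[np.abs(alpha) > x]) * dA for x in thresholds])
```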
when the -correspondence relation holds , the function thus possesses a derivative of zero because such correspondence requires , for any interval , an equal amount of positive and negative flux passing through that subarea of the boundary map having values of between and .however , figure [ fig2]b shows that is nonzero over most values for the preprocessed data used here , especially within the range which corresponds to the values possessed by about 80% of the area of the boundary map .for comparison , the figure includes the function for the unpreprocessed dataset .the various methods deal with the lack of correspondence in the boundary data in different ways .current - field iteration methods allow the -correspondence condition to be met by ignoring the values of in one polarity .however , only limited uniqueness results have been found for this approach , and even existence results are limited to the case of an unbounded domain ( see ) .it is well known that the current - field iteration method fails to converge in some cases , and this may be due to the absence of a solution , or the absence of a unique solution . in wheatland s implementation of this method , if the solution does not converge , values of are censored ( set to zero ) in the polarity defining the currents going into the corona .the censorship is imposed at boundary points with less than a threshold value , and that value is increased as required .additional censorship is also imposed such that field lines intersecting the side and top boundaries carry no current . in practiceit is found that such reduction of the currents flowing into the domain can lead to convergence . the wh model , for example, censored almost half of the values of in the negative polarity ( corresponding to 43% of the negative - polarity flux ) before convergence was achieved , as illustrated in figure [ fig4 ] .valori s magnetofrictional method is prevented from relaxing past an equilibrium state in which the continual injection of inconsistencies into the model ( at the boundaries ) is balanced by their removal via diffusion .wiegelmann s optimization method does not reach as well - relaxed of a force - free state as some of the other models , even though it disregards some of the boundary mismatches via the tapered nature of the weighting functions towards the edges of the model volume .there are several reasons why the boundary conditions used for this study ( and other active region studies ) might not satisfy the force - free consistency relations .the most conspicuous reason is that the photospheric layers of the sun , from which originate the hinode / sot - sp magnetogram data used here , do contain lorentz , buoyancy , and pressure gradient forces and thus are not force - free to begin with .additionally , measurement uncertainties in the components of preclude accurate determinations of ( and thus ) on the lower boundary because of the need to take derivatives of the horizontal components of .another reason is that measurements of the current normal to the enclosing surface are unavailable over much of due to the lack of vector magnetogram data above the photosphere .another is that the modeling implicitly assumes that the boundary data span a planar surface , and do not take into account effects present in vector magnetograms such as the wilson depression in sunspots and the broad range of line - formation heights across the line .yet another is that the inversion techniques that produce the vector magnetogram measurements do not fully 
take into account the multiple components of thin , narrow strands of interleaved magnetic fields that characterize sunspot penumbrae .we thus conclude that the nlfff modeling process needs to account for these intrinsic uncertainties in the boundary data , which include everything from measurement uncertainties to the lack of knowledge about how to infer the magnetic field in the force - free region at the base of the corona from the observed photospheric field maps .we have attempted to model the coronal magnetic field overlying ar 10953 by applying a suite of nlfff algorithms to the photospheric vector field measured using hinode / sot - sp .these data were remapped , embedded , and preprocessed in various ways in order to produce boundary data for this active region that were also consistent with the force - free assumption . from these boundary data ,about 60 different nlfff models were constructed .the resulting variations in these models prompted us to validate the results against images of coronal loops evident in euv or x - ray images .the goodness of fit was first determined in a qualitative manner by overlaying nlfff - model field lines on hinode / xrt imagery .this comparison indicated that some models contain field lines that are aligned with the observed loop structures .however , conclusive determinations of best - matching models , based solely on such overlays , remained difficult because of the indistinct nature of many coronal loops , especially those located near the center of ar 10953 where many of the currents are presumed to lie .we then turned to stereoscopic determinations of three - dimensional loop paths as a way to quantitatively assess the goodness of fit .this comparison was also inconclusive , because the loops traced stereoscopically in the stereo / secchi - euvi observations were restricted to the outermost domain of the active region .this meant that those loops that did fall in the nlfff computational domain lay close to the edge of the computational volume , where model field lines either leave the domain or run close to the side boundaries .we suspect this quantitative comparison was at least partially compromised by these effects , due to the model fields being sensitive to the way in which the side boundary information is incorporated and to their being located above the portion of the lower boundary for which hinode / sot - sp vector magnetogram data were not available .as exemplified by the qualitative and quantitative comparisons presented here , we find that it remains difficult to construct and validate coronal magnetic field models of solar active regions that can reliably be used for detailed analyses of a quantitative nature .our experience with modeling test cases with known solutions had shown that the various algorithms do work when given consistent boundary conditions .this led us to examine thoroughly the entire nlfff modeling framework in order to identify problematic issues that impact our ability to build useful models of the solar coronal field .the results of this examination leave us with several possibilities .first , it may be that useful nlfff extrapolations based on currently available signal - to - noise levels , preprocessing procedures , fields of view , and observable fields are intrinsically infeasible .a second ( and more hopeful ) possibility is that nlfff extrapolations need both much larger fields of view to better constrain the long field lines high over a region or to distant neighboring regions , and enough spatial 
resolution to resolve the spatial distribution of on the boundaries .third , nlfff algorithms need to accommodate the fact that the boundary conditions contain ( sometimes significant ) uncertainties , either from the measurement process ( e.g. , signal - to - noise issues or inadequate resolution of the 180 ambiguity ) , or from physical origins ( e.g. , variations in the line - formation height , or most prominently the non - force - free nature of photospheric vector magnetograms ) .the second possibility can be tested empirically .one way to do this with current codes and instrumentation is to obtain vector magnetic observations of a substantially smaller active region and its wide surroundings .this will place the side boundaries relatively farther away from the region of interest , while remaining compatible with the range and resolution of , e.g. , the hinode / sot - sp and with the cartesian nature of the available modeling codes . to address the third possibility , we have several avenues available .simple ways to account for boundary data uncertainties include introducing a position - dependent weighting function used in relaxation methods , or modifying the selection criteria for the field in the current - field iterative method .additionally , the preprocessing of the raw vector data needs to better approximate the physics of the photosphere - to - chromosphere interface in order to transform the observed photospheric field to a realistic approximation of the overlying near - force - free field at the base of the corona .one way to do that without resorting to more computationally intensive mhd models is to use the magnetohydrostatic concept ( e.g. , ) and approximate the stratifications for the flux tubes and their surroundings ( or the strongly and weakly magnetic regions ) separately .finally , in light of our findings in this study and in consideration of the aforementioned goal of constructing models that provide useful estimates of physical quantities of interest , we thus recommend that a particular force - free extrapolation should not be considered a consistent model of an active - region corona unless the following indicators ( at a minimum ) are satisfied : ( 1 ) good alignment of modeled field lines to the coronal loops observed on the solar disk ; ( 2 ) acceptable agreement of the -correspondence relation by having similar values of at both ends of all closed field lines , and acceptable agreement with the boundary values of from the data ; while ( 3 ) still realizing low values of the nlfff metrics and .we gratefully acknowledge prof .sami solanki and the max - planck - institut fr sonnensystemforschung in katlenburg - lindau , germany , for their hospitality during our most recent workshop , at which the ideas presented in this article were discussed and refined .hinode is a japanese mission developed and launched by isas / jaxa ( japan ) , with naoj as domestic partner and nasa ( usa ) and stfc ( uk ) as international partners . 
it is operated by these agencies in cooperation with esa and nsc ( norway ) . the stereo / secchi data used here are produced by an international consortium of the naval research laboratory ( usa ) , lockheed martin solar and astrophysics laboratory ( usa ) , nasa / goddard space flight center ( usa ) , rutherford appleton laboratory ( uk ) , university of birmingham ( uk ) , max - planck - institut für sonnensystemforschung ( germany ) , centre spatial de liège ( belgium ) , institut d'optique théorique et appliquée ( france ) , and institut d'astrophysique spatiale ( france ) . m.l.d . , g.b . , and k.d.l . were supported by lockheed martin independent research funds . j.m.m . was supported by nasa grants nng05144 g and nnx08a156 g . s.r . acknowledges the financial support of the uk stfc . j.k.t . acknowledges support from dfg grant wi 3211/1 - 1 . g.v . was supported by dfg grant ho 1424/9 - 1 . t.w . acknowledges support from dlr grant 50 oc 0501 . one of us is an ircset government of ireland scholar . t.t . acknowledges support from the international max planck research school on physical processes in the solar system and beyond .
|
nonlinear force - free field ( nlfff ) models are thought to be viable tools for investigating the structure , dynamics and evolution of the coronae of solar active regions . in a series of nlfff modeling studies , we have found that nlfff models are successful in application to analytic test cases , and relatively successful when applied to numerically constructed sun - like test cases , but they are less successful in application to real solar data . different nlfff models have been found to have markedly different field line configurations and to provide widely varying estimates of the magnetic free energy in the coronal volume , when applied to solar data . nlfff models require consistent , force - free vector magnetic boundary data . however , vector magnetogram observations sampling the photosphere , which is dynamic and contains significant lorentz and buoyancy forces , do not satisfy this requirement , thus creating several major problems for force - free coronal modeling efforts . in this article , we discuss nlfff modeling of noaa active region 10953 using hinode / sot - sp , hinode / xrt , stereo / secchi - euvi , and soho / mdi observations , and in the process illustrate the three such issues we judge to be critical to the success of nlfff modeling : ( 1 ) vector magnetic field data covering larger areas are needed so that more electric currents associated with the full active regions of interest are measured , ( 2 ) the modeling algorithms need a way to accommodate the various uncertainties in the boundary data , and ( 3 ) a more realistic physical model is needed to approximate the photosphere - to - corona interface in order to better transform the forced photospheric magnetograms into adequate approximations of nearly force - free fields at the base of the corona . we make recommendations for future modeling efforts to overcome these as yet unsolved problems .
|
measuring complexity of experimental time series is one of the important goals of mathematical modeling of natural phenomena . a measure of complexity gives an insight into the phenomenon being studied . for example , in a study of population dynamics of the fruit - fly , a measure of complexity of the time series ( population size of generations ) will throw light on the persistence and stability of the population . if the complexity is low , then it is possible that the population is exhibiting a periodic behavior , i.e. fluctuating between a high population size and a low one alternately . complexity also plays a very important role in determining whether a sequence is random or not in cryptography applications . different measures of complexity such as the lyapunov exponent , kolmogorov complexity , algorithmic complexity etc . have been proposed in the literature . while complexity has several facets , shannon entropy is one of the reliable indicators of ` compressibility ' which can serve as a measure of complexity . it is given by the following expression : h ( x ) = - \sum_{i=1}^{m } p_i \log_2 p_i , where x is the symbolic sequence with m distinct symbols and p_i is the probability of the i - th symbol for a block - size of one . block - size refers to the number of input symbols taken together to compute the probability mass function . shannon entropy plays an important role in lossless data storage and communications . shannon 's noiseless source coding theorem provides an upper limit on the compression ratio achievable by lossless compression algorithms . this limit is given by the shannon entropy . numerous algorithms have been designed with the aim of achieving this limit . huffman coding , shannon - fano coding , arithmetic coding and lempel - ziv coding are a few examples of lossless compression algorithms which achieve the shannon entropy limit for stochastic i.i.d sources ( independent and identically distributed ) . however , practical estimation of the entropy of sources is non - trivial since most sources are not i.i.d but contain correlations ( short or long - range ) . as a simple example , in the english language , the probability of the occurrence of the letter ` u ' after the letter ` q ' has occurred is nearly one . in this paper , we are interested in measuring the complexity of short symbolic sequences which are obtained from time series generated by chaotic non - linear dynamical systems ( we have used the logistic map in our study and we expect the results to hold for other systems as well ) . this paper is organized as follows . in the next section , we highlight the challenges in measuring an estimate of shannon entropy for short sequences . in section iii , we introduce nsrps and propose a new measure of complexity based on this algorithm . subsequently , in section iv , we test the new measure on several ( short ) sequences from the logistic map and compare the complexity with a uniformly distributed random sequence . the complexity measure based on nsrps is compared with the lyapunov exponent . in section v , we construct chaotic sequences which are incompressible by popular lossless compression algorithms , but which can be compressed by nsrps . we conclude in section vi indicating directions for future work . shannon entropy can serve as a good indicator for complexity , but estimation of entropy is not a trivial task .
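for reference , the straightforward plug - in estimate of equation ( 1 ) from a finite symbolic sequence takes only a few lines of code ; the sketch below ( in python , with illustrative names ) computes it for an arbitrary block - size . the difficulties listed next concern how reliable such an estimate is for short , noisy , real - valued data , not how it is computed .

```python
import numpy as np
from collections import Counter

def shannon_entropy(symbols, block_size=1):
    """Plug-in estimate of eq. (1), in bits per block, from a finite symbol sequence."""
    blocks = [tuple(symbols[i:i + block_size])
              for i in range(len(symbols) - block_size + 1)]
    counts = np.array(list(Counter(blocks).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# e.g. shannon_entropy([0, 1, 0, 1, 0, 1, 0, 1]) gives 1.0 bit/symbol for block_size = 1
```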
determining the shannon entropy of experimental time series is particularly challenging owing to the following reasons : 1 . analytical determination of the entropy is not easy even for a simple model of the experimental time series . 2 . the time series typically consists of real numbers . in order to calculate the entropy , it has to be converted into a symbolic sequence . the choice of the partition has a very important role to play in the estimation of the entropy . ebeling _ et al . _ show that depending on the choice of the partition , the results can vary widely . 3 . noise is inevitable in any experiment . noise has the tendency to increase entropy . 4 . the length of the time series is another important factor in the accurate determination of entropy . shannon entropy requires the estimation of the probability mass function , which is difficult to estimate accurately with a short time series . biological time series such as population sizes are typically of very small lengths , around 50 - 100 samples ( since actual experiments are time consuming ) . entropy estimation methods in the literature require 1000 to 10000 samples . in order to overcome these drawbacks , researchers have used lossless compression algorithms in order to estimate complexity or entropy . lempel - ziv and its popular variations are extensively used by several researchers to determine the complexity of time series ( and references therein ) . as we shall demonstrate in section v , this is not always reliable for short sequences . fig . [ figure : entle ] shows the effect of the length of the time series on the numerical computation of the shannon entropy . for a short data - length , as the bifurcation parameter of the logistic map is varied , we observe that the numerically estimated shannon entropy ( equation ( 1 ) ) is poorly correlated with the lyapunov exponent . when the data - length is increased , the shannon entropy comes close to the lyapunov exponent , with a correlation coefficient of 0.8934 . ebeling demonstrates that for the logistic map , shannon entropy comes very close to the lyapunov exponent as the block - size increases to 10 and for large data - lengths . [ figure : entle caption : numerically estimated shannon entropy and the lyapunov exponent as the bifurcation parameter is varied ; 8 bins and equation ( 1 ) were used for the entropy , and equation ( 3 ) for the lyapunov exponent ; the two graphs are poorly correlated , as indicated by a correlation coefficient of -0.2682 . ]
in this example , in the third iteration , ` ' is transformed into ` ' and in the fourth iteration it is transformed into ` ' and the algorithm stops . the following observations can be made about the algorithm : 1 . the algorithm always terminates for finite length sequences . 2 . after each iteration , the length of the sequence reduces . the number of distinct symbols may or may not increase ( if the input sequence is ` ' , then it is transformed to ` ' and then to ` ' ) . 3 . the quantity of interest , the product of the current sequence length and its shannon entropy ( i.e. the number of bits needed to store the sequence ) , may increase or decrease across the iterations . 4 . ultimately , this quantity has to go to zero since the length eventually reaches 1 , at which point the entropy is 0 ( since there is then only one symbol , it occurs with probability 1 ) . a faster way for this quantity to go to zero is when the sequence gets transformed into a constant sequence ( which has only one distinct symbol and hence zero entropy ) . the number of iterations required for this quantity to reach zero is always a non - negative integer : its minimum value is zero ( for a constant sequence ) and its maximum is set by the length of the sequence ( for a sequence either with all distinct symbols or with all pairs being distinct ) . the algorithm as described above is not reversible , i.e. the original symbolic sequence can not be restored from the sequence at subsequent iterations . in order to make the algorithm reversible , we have to maintain a record of the specific pair of symbols which was substituted at each iteration . the bits required to store this overhead information compensate for the reduction in the number of bits needed to store the transformed sequence . for achieving the best lossless compression ratio , we stop at the iteration number at which the total number of bits required to store the transformed sequence and the overhead is a minimum ( and hopefully less than the size of the original sequence ) . the number of iterations needed for the quantity ` length times entropy ' to approach zero under the nsrps algorithm ( as described above ) is defined as our new complexity measure ; it is an integer ranging from zero up to a maximum set by the length of the sequence . jiménez - montaño actually tracks this quantity across the iterations of nsrps . while this is important , our motivation to use the number of iterations as a complexity measure is the following : it represents the _ effort _ required by the nsrps algorithm to transform the input sequence into a constant sequence ( having only one distinct symbol and hence zero entropy ) . a sequence which is highly redundant would naturally have a lower value of this measure . as an example , consider two sequences that have the same length and the same entropy of 1 bit / symbol ( block - size = 1 ) ; however , one of them requires only one iteration for the quantity to reach zero , whereas the other requires more iterations . clearly , the latter is more _ complex _ than the former ( the former is periodic , the latter has no obvious pattern ) . in this section , we shall evaluate the usefulness of the new complexity measure based on nsrps described in the previous section . to this end , we consider sequences arising from the logistic map for various values of the bifurcation parameter . we know that the complexity of the time series increases with the bifurcation parameter , with occasional dips owing to the presence of _ windows _ ( attracting periodic orbits ) .
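before turning to these numerical experiments , the pair - substitution step and the proposed iteration count can be written down compactly . the sketch below works on lists of integer symbols ; the function names , the greedy pair counting and the tie - breaking between equally frequent pairs are choices of this sketch and may differ from the authors' implementation .

```python
from collections import Counter
from math import log2

def nsrps_step(seq):
    """One NSRPS iteration on a list of integer symbols: the most frequent pair is
    replaced, in all its non-overlapping occurrences, by a new symbol."""
    pairs = Counter(zip(seq, seq[1:]))
    if not pairs:
        return list(seq)
    best = pairs.most_common(1)[0][0]
    new_sym, out, i = max(seq) + 1, [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == best:
            out.append(new_sym)
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out

def entropy_bits(seq):
    """First-order (block-size one) Shannon entropy of the sequence, in bits."""
    n = float(len(seq))
    return -sum(c / n * log2(c / n) for c in Counter(seq).values())

def nsrps_complexity(seq):
    """Proposed measure: number of NSRPS iterations needed for (length x entropy)
    to reach zero, i.e. until the sequence is constant or of length one."""
    seq, n_iter = list(seq), 0
    while len(seq) > 1 and entropy_bits(seq) > 0.0:
        seq = nsrps_step(seq)
        n_iter += 1
    return n_iter
```

for instance , a strictly alternating two - symbol sequence collapses to a constant sequence after a single substitution , whereas an irregular sequence of the same length and entropy typically requires many more steps , exactly as in the example above .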
[ figure : nsrpscomp caption : the quantity ` length times entropy ' vs. the number of nsrps iterations for several symbolic sequences . ] [ figure : nsrpsle caption : the new complexity measure ( scaled by a factor of 70 for visibility ) and the lyapunov exponent as the bifurcation parameter is varied between 3.5 and 4.0 ; 8 bins were used for deriving the symbolic sequence from the time series , and equation ( 3 ) for the lyapunov exponent ; the two graphs are highly correlated , as indicated by a correlation coefficient of 0.8832 ; compare this with fig . [ figure : entle ] . ] in fig . [ figure : nsrpscomp ] , the quantity ` length times entropy ' is plotted along the y - axis and the iteration number along the x - axis . all sequences have the same length . the new complexity measure is the iteration number at which the graph hits the x - axis . as can be seen , different sequences have different values of the measure . as expected , the sequence with the highest complexity is the independent and uniformly distributed random sequence ( generated in matlab ) . the order of complexity ( from higher to lower ) differs from sequence to sequence ; in particular , there is an attracting periodic orbit ( _ window _ ) at one of the parameter values , and this explains the lower value of the measure for that sequence . table 1 shows the effect of the data - length and the number of bins on the new measure for the logistic map . as we vary the bifurcation parameter between 3.5 and 4.0 , we find that even for short data - lengths , the correlation coefficient ( cc ) of the new measure with the lyapunov exponent is quite good . the entropy ( calculated using equation ( 1 ) ) , in contrast , is very poorly correlated with the lyapunov exponent : for 2 bins , even at larger data - lengths , we found the cc between the entropy and the lyapunov exponent to be 0.3565 . compare this with table 1 : for a data - length of 50 and 2 bins , the cc of the new measure is already 0.6651 . this shows that the new measure is quite good for very short symbolic sequences . figure [ figure : nsrpsle ] shows the graphs of the new measure and the lyapunov exponent ( the measure scaled by a factor of 70 for better visibility and ease of comparison ) .

table 1 :
data - length   number of bins   cc
50              2                0.6651
50              4                0.6654
50              8                0.7324
100             2                0.8352
100             4                0.8149
100             8                0.8172
200             2                0.8870
200             4                0.8648
200             8                0.8832

the lyapunov exponent is given by the equation : \lambda = \lim_{n \to \infty } \frac{1}{n } \sum_{i=0}^{n-1 } \ln | f^{\prime } ( x_i ) | . for the logistic map , we have used the following equation to estimate it : \lambda \approx \frac{1}{n } \sum_{i=1}^{n } \ln | a ( 1 - 2 x_i ) | , where a is the bifurcation parameter ( 3.5 to 4.0 ) and the initial condition is chosen randomly in the interval ( 0,1 ) . the number of bins determines the number of symbols for the initial sequence . as the data - length and the number of bins increase , the cc gets better and better .
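the numbers in table 1 can be reproduced in outline with a few more lines of code : generate a logistic - map series , symbolize it with equal - width bins , evaluate the proposed measure , and correlate it with the lyapunov exponent of equation ( 3 ) . the sketch below assumes the nsrps_complexity function from the previous sketch ; the initial condition , burn - in length and grid of bifurcation parameters are illustrative choices , so the resulting cc will only approximate the tabulated values .

```python
import numpy as np

def logistic_series(a, n, x0=0.34, burn=1000):
    """Iterate x -> a x (1 - x); the burn-in and x0 are illustrative choices."""
    x = x0
    for _ in range(burn):
        x = a * x * (1.0 - x)
    xs = np.empty(n)
    for i in range(n):
        x = a * x * (1.0 - x)
        xs[i] = x
    return xs

def symbolize(xs, nbins):
    """Equal-width binning of (0, 1) into nbins integer symbols."""
    return np.minimum((xs * nbins).astype(int), nbins - 1).tolist()

def lyapunov_logistic(a, n=10000):
    """Eq. (3): time average of ln|f'(x_i)| with f'(x) = a (1 - 2x)."""
    xs = logistic_series(a, n)
    return float(np.mean(np.log(np.abs(a * (1.0 - 2.0 * xs)))))

# correlate the proposed measure with the Lyapunov exponent over a grid of 'a' values
a_grid = np.linspace(3.5, 4.0, 101)
n_vals = [nsrps_complexity(symbolize(logistic_series(a, 100), 8)) for a in a_grid]
lam = [lyapunov_logistic(a) for a in a_grid]
cc = np.corrcoef(n_vals, lam)[0, 1]
```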
for compression using nsrps ,the overhead information was taken in to account .table [ tablecompression ] shows the efficacy of nsrps for compressing such chaotic sequences of short length while other popular compression algorithms expand ( all these use some variation of lempel - ziv compression algorithm ) .this behaviour was observed for values of between 0.5 and 0.7 .rigorous investigation of these interesting sequences needs to be performed ..chaotic sequences from the skew - tent map subjected to lossless compression algorithms .all numbers are in bits .as it can be seen , only nsrps manages to compress the sequence .[ cols="^,^,^,^,^",options="header " , ]the new measure is able to correctly characterize the complexity of chaotic sequences as demonstrated for the logistic map ( for different values of the bifurcation parameter ) and a uniformly distributed random sequence .this new measure is highly correlated with the lyapunov exponent even for very small data - lengths , as low as .future work would be to investigate the effect of various kinds of noise ( corrupting the time series ) on the complexity measure .we have reasons to believe that would be robust to noise to some extent since we are working on the symbolic sequence .the new measure needs to be further tested for various dynamical systems ( maps and flows ) and stochastic time series of different distributions , and to non - uniform bin structures .the data compression aspect of nsrps needs to the thoroughly investigated , especially for compressing chaotic sequences which are otherwise incompressible by standard techniques. * acknowledgments : * the authors express their heart - felt gratitude to mata amritanandamayi devi ( affectionately known as ` amma ' which means ` mother ' ) for her constant support in material and spiritual matters .nn thanks sutirth dey ( iiser , pune ) for useful discussions and department of biotechnology , govt . of india for funding through the rgyi scheme .
|
we investigate the complexity of short symbolic sequences of chaotic dynamical systems by using lossless compression algorithms . in particular , we study non - sequential recursive pair substitution ( nsrps ) , a lossless compression algorithm first proposed by w. ebeling _ et al . _ [ math . biosc . 52 , 1980 ] and jiménez - montaño _ et al . _ [ arxiv : cond - mat/0204134 , 2002 ] which was subsequently shown to be optimal . nsrps has also been used to estimate the entropy of written english ( p. grassberger [ arxiv : physics/0207023 , 2002 ] ) . we propose a new measure of complexity , defined as the number of iterations of nsrps required to transform the input sequence into a constant sequence . we test this measure on symbolic sequences of the logistic map for various values of the bifurcation parameter . the proposed measure of complexity is easy to compute and is observed to be highly correlated with the lyapunov exponent of the original non - linear time series , even for very short symbolic sequences ( as short as 50 samples ) . finally , we construct symbolic sequences from the skew - tent map which are incompressible by popular compression algorithms like winzip , winrar and 7-zip , but compressible by nsrps .
|
radiative transfer ( rt ) is the underlying physical phenomenon in many astrophysical problems and among the most difficult to deal with . the main difficulty arises from the non - local and in general non - linear coupling of the radiation field and the state of the gas . in the problem of nlte line formation in a given atmospheric model , the internal state of the gas depends , via radiative transitions , on the radiation field intensity , which in turn depends , via the rt process , on the state of the gas over a wide range of distant points . mathematically , the non - local coupling is performed by the simultaneous solution of the corresponding rt equation , describing the dependence of the mean intensity on the source function by means of the so - called lambda operator , and the statistical equilibrium ( se ) equations , defining the source function in terms of the mean intensity of the radiation field . in the well - known two - level - atom line formation problem the non - local coupling is linear and the problem can be easily solved by using either direct or iterative methods . in a more general multilevel case , the rt problem is non - linear and , therefore , an iterative method is required . the most straightforward iterative procedure , the so - called lambda iteration , solves the radiative transfer and statistical equilibrium equations in turn . however , in most cases of interest ( in scattering - dominated media of large optical thickness ) the rate of convergence of this simple procedure is infinitely slow . a broad class of ali ( approximate ( or accelerated ) lambda iteration ) methods , currently in use , is based on the use of certain physical or computational approximations of the lambda operator within an iterative procedure . these methods usually employ some free parameter controlling the convergence and almost always need additional acceleration by some mathematical techniques ( ng acceleration , successive over - relaxation method , etc . ) to achieve a high convergence rate . the forth - and - back implicit lambda iteration ( fbili ) was developed as a simple , accurate and extremely fast convergent method to solve nlte rt problems . fbili dramatically accelerates the convergence of the classical lambda iteration while retaining its straightforwardness . in this paper the basic idea of the method is briefly explained and its applications to various rt problems are shown and discussed . in order to demonstrate the basis of the fbili method we shall consider the well - known case of two - level - atom line formation ( with complete redistribution and no overlapping continuum ) in a plane - parallel and static atmosphere .
under these assumptions , the specific intensity of the radiation field is described by the rt equation of the form \mu \ , \frac{di_{x\mu } ( \tau ) } { d\tau } = \varphi_x \ , [ i_{x\mu } ( \tau ) - s ( \tau ) ] , where \tau is the mean optical depth , x is the frequency displacement from the line center in doppler width units , \mu is the cosine of the angle between the photon 's direction and the outward normal , and \varphi_x is the absorption - line profile , normalized to unity . the line source function ( se equation for a two - level atom ) has the following form : s ( \tau ) = \varepsilon \ , b + ( 1 - \varepsilon ) \ , \bar{j } ( \tau ) , with \bar{j } ( \tau ) = \frac{1}{2 } \int dx \ , \varphi_x \int_{-1}^{+1 } d\mu \ , i_{x\mu } ( \tau ) , where \varepsilon is the standard nlte parameter representing the branching ratio between the thermal ( lte ) contribution and the scattering term \bar{j } , which accounts for the angle and frequency coupling of the specific intensities at the given depth point . the basic idea of the forth - and - back implicit iteration in the solution of the problem is as follows . first , as suggested by the existence of two separate boundary conditions , the fbili uses a separate description of the propagation of the in - going intensities of the radiation field , with initial conditions at the surface ( \tau = 0 ) , and of the out - going intensities , with initial conditions at the bottom of the atmosphere ( \tau = \tau_{\max } ) . second , although the values of the radiation field are unknown , its propagation can be easily represented by using the integral form of the rt equation and assuming a polynomial ( e.g. piecewise quadratic ) representation of the source function between two successive depth points . thus , for each depth point one can write linear relations for the specific intensities as functions of the unknown values of the source function and of its derivative . following the idea of iteration factors , it is the iterative computation of the coefficients of these _ implicit _ relations ( implicit , as the source function is a priori unknown ) , rather than that of the unknown functions themselves , which greatly accelerates the convergence of the direct iterative scheme . in the first part of each iteration ( forward process ) , proceeding from the upper boundary condition , using the integral form of the rt equation for the in - going intensities over each layer [ \tau_{l-1 } , \tau_l ] , and assuming parabolic behavior for the source function , one can write the linear _ local implicit _ relation i^-_{x\mu } ( \tau_l ) = a^-_{x\mu } + b^-_{x\mu } \ , s ( \tau_l ) + c^-_{x\mu } \ , s^{\prime } ( \tau_l ) , representing the values of the in - going intensities at a given optical depth point in terms of the yet unknown values of the source function and of its derivative . here , a^-_{x\mu } is computed with the old ( known from the previous iteration ) source function , whereas b^-_{x\mu } and c^-_{x\mu } depend only on the known optical distance . by integrating ( 5 ) over all frequencies and directions we obtain the linear relation \bar{j}^- ( \tau_l ) = a^-_l + b^-_l \ , s ( \tau_l ) + c^-_l \ , s^{\prime } ( \tau_l ) , representing _ implicitly _ the value of the in - going mean intensity . thus , in the forward process , we differ from the classical lambda iteration , which re - calculates the mean intensity from the old ( known ) source function , in using the old source function to compute , at each optical depth point , the coefficients of the linear relation ( 6 ) . the coefficients are stored for further use in the backward process of computation of the new values of the source function . let us note here that the ratio in eq . ( 5 ) of the non - local part of the in - going intensity to the current source function is actually the iteration factor .
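to make the problem setup concrete , the sketch below closes eqs . ( 1 ) and ( 2 ) with the simplest possible scheme , the classical lambda iteration that fbili is designed to accelerate ; it is not an implementation of fbili itself . the grid sizes , the doppler profile , the trapezoid - like source integration along each ray and the parameter values ( epsilon = 10^-4 , b = 1 ) are illustrative assumptions .

```python
import numpy as np

ntau, nx, nmu = 81, 9, 4                      # depth, frequency and angle points
tau = np.logspace(-2, 6, ntau)                # mean line optical depths
x = np.linspace(-4.0, 4.0, nx)                # frequency from line centre (Doppler units)
phi = np.exp(-x**2) / np.sqrt(np.pi)          # Doppler absorption profile
wx = np.gradient(x) * phi                     # frequency quadrature weights
wx /= wx.sum()                                # normalized so that the profile integrates to 1
mu, wmu = np.polynomial.legendre.leggauss(nmu)  # angle quadrature on (-1, 1)
wmu /= 2.0                                    # so that Jbar uses (1/2) * integral over mu

eps, B = 1e-4, 1.0
S = np.full(ntau, B)                          # LTE start

def formal_solution(S):
    """First-order ray integration for each (x, mu); returns the mean intensity Jbar."""
    Jbar = np.zeros(ntau)
    for ix in range(nx):
        dts = np.diff(tau) * phi[ix]          # monochromatic optical-depth steps
        for im in range(nmu):
            I = np.zeros(ntau)
            if mu[im] < 0:                    # in-going ray: integrate downward, I(0) = 0
                for l in range(1, ntau):
                    e = np.exp(-dts[l - 1] / abs(mu[im]))
                    I[l] = I[l - 1] * e + 0.5 * (S[l - 1] + S[l]) * (1 - e)
            else:                             # out-going ray: start at the bottom with I ~ S
                I[-1] = S[-1]
                for l in range(ntau - 2, -1, -1):
                    e = np.exp(-dts[l] / mu[im])
                    I[l] = I[l + 1] * e + 0.5 * (S[l] + S[l + 1]) * (1 - e)
            Jbar += wx[ix] * wmu[im] * I
    return Jbar

for _ in range(100):                          # classical lambda iteration: converges very slowly
    S_new = eps * B + (1 - eps) * formal_solution(S)
    if np.max(np.abs(S_new - S) / S_new) < 1e-3:
        break
    S = S_new
```

with these parameters the classical scheme stalls long before reaching the true solution , which is precisely the behavior that motivates the implicit , forth - and - back treatment described above .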
since it is the only information that is carried from the previous iteration step , an extremely fast convergence is to be expected .the final aim is to derive , at each optical depth point , an _ implicit _ linear relation between the full mean intensity and the source function that , together with equation ( 2 ) , leads to the new source function . to obtain this ,we need the coefficients of the corresponding relation for the out - going mean intensity . in the backward processwe proceed from the bottom where , i.e. is known or more precisely , the coefficients of the _ implicit _ relation for the out - going specific intensities and , therefore , for the out - going mean intensity are known at . eliminating from eq .( 6 ) ( see * ? ? ?* ) we obtain the linear implicit relation ( 7 ) , which together with se equation ( 2 ) leads to the new value of .the new values of , and , hence , are then used to compute the coefficients of the linear relation ( 8) in the next upper layer . together with the coefficients of eq .( 6 ) ( stored in the forward process ) , we obtain the coefficients and of eq . ( 7 ) .the computation of and and the solution of eq .( 7 ) together with se equation ( 2 ) to get a new source function are performed during the backward process layer by layer to the surface .the process is iterated to the convergence .the accuracy and efficiency of the fbili method have been checked in several rt problems . here , we list some of the applications .the method was first developed for the two - level - atom line formation ( with complete redistribution and no overlapping continuum ) in a plane - parallel constant property medium ( for details see , ) .this case represents an ideal test for checking the numerical accuracy and stability of a new method .namely , under such conditions the features of the solution depend only on the nlte parameter that is usually very small , so that the numerical errors can easily blur the solution . thus , in order to test the stability of the method we solved the two - level atom problem with .the results are given in fig .they are compared with the exact discrete - ordinate solution , obtained by using the same discretization in optical depth ( 10 points per decade ) .the asymptotic value of the maximum relative error of the order of 0.3 is reached already within 9 - 14 iterations .namely , only nine iterations are sufficient for the maximum relative correction between two successive iterations ( for all depth points ) to be less than and 14 iterations for .hence , we see that a negligible additional effort ( of the iterative computation of the coefficients of the _ implicit _ relations , instead of the mean intensities themselves ) with respect to the classical iteration results in an extremely fast convergence ( about 10 fbili iterations compared to about classical iterations , while one fbili iteration takes only about 10 more cpu time than a classical iteration ) . in the literaturethe performances of various methods are usually given for .thus , for this case , in fig .2 the convergence properties of the fbili method are compared with those of the ali methods that use diagonal and 3-diagonal approximate operators .excellent convergence properties of the fbili are evident . 
using fbili yields convergence that is comparable to or even faster ( for ) than the 3-diagonal operator with ng acceleration !the fbili method was applied to the case of the two - level atom line formation problem in which partial redistribution is taken into account in the paper by .the results reproduced the well - known ones by .for the cases and , 13 and 15 iterations , respectively , are enough to fulfill the criterion for all the frequencies and all optical depths .the procedure for multilevel problem is the same as in the two - level - atom case .starting with the known set of level populations , we repeat the entire forward process for each radiative transition . in the backward process ,layer by layer , we compute the coefficients of the linear relation ( 7 ) for all the transitions , and replacing them in the se equations we solve the latter for the new set of level populations .the test is performed by solving the same problem ( three - level hydrogen atom line formation in an isothermal atmosphere ) as in .the solution with a maximum relative error below 3 is obtained in only nine iterations with the convergence criterion ( see * ? ? ?the generalization of the fbili method to spherical geometry is performed by .monochromatic scattering problem in a spherical atmosphere is solved and the results are compared with those given by and .the relative difference of the solutions is about 1 .the solution is obtained already in the second iteration , whereas three iterations are required for the maximum relative correction to be less than 1 . in order to test the feasibility of the method when applied to the line formation in spherically symmetric media , the test problem of the line transfer with background absorption , proposed by and , is solved .the solution is obtained in 15 iterations with an error less than 2 .forth - and - back implicit lambda iteration method is a simple , stable and extremely fast convergent iterative method developed for the exact solution of nlte rt problems .a negligible additional computational effort with respect to the classical iteration results in an extremely fast convergence . no additional acceleration is needed .the method is easy to apply . no matrix formalism is required so that the memory storage grows linearly with dimension of the problem .due to its great simplicity and considerable savings in computational time and memory storage fbili seems to be a far - reaching tool to deal with more complex problems ( multidimensional rt , rt in moving media , or when rt has to be coupled with other physical phenomena ) .i would like to thank the organizers of the meeting and unesco - roste for financial support .i would also like to thank prof .p. heinzel for inspiring discussions that helped to improve the paper .this work has been realized within the project no.146003 g supported by the ministry of science and environmental protection of the republic of serbia .
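to make concrete the convergence problem that motivates fbili , the following short python sketch runs the classical iteration on a simplified , monochromatic ( coherent scattering ) analogue of the problem above : the source function is updated as s = eps*b + (1 - eps)*j , with j obtained from a first - order formal solution on a plane - parallel grid . the grid , angular quadrature , boundary choices and stopping rule are illustrative assumptions , not the two - level - atom discretization of the paper , and this is the plain lambda iteration , not the fbili scheme ; the run only illustrates how slowly the surface source function creeps toward the expected sqrt(eps)*b value even when the per - iteration correction has become small .

```python
import numpy as np

def formal_solution(S, tau, mu_nodes, mu_weights, B=1.0):
    # First-order formal solution of mu dI/dtau = I - S on a plane-parallel grid;
    # returns the angle-averaged mean intensity J at every depth point.
    nd = len(tau)
    J = np.zeros(nd)
    for mu, w in zip(mu_nodes, mu_weights):
        # in-going ray: no incident radiation at the surface, integrate inward
        I = np.zeros(nd)
        for i in range(1, nd):
            dt = (tau[i] - tau[i - 1]) / mu
            e = np.exp(-dt)
            I[i] = I[i - 1] * e + 0.5 * (S[i] + S[i - 1]) * (1.0 - e)
        J += 0.5 * w * I
        # out-going ray: thermalized at the bottom (I = B), integrate outward
        I = np.zeros(nd)
        I[-1] = B
        for i in range(nd - 2, -1, -1):
            dt = (tau[i + 1] - tau[i]) / mu
            e = np.exp(-dt)
            I[i] = I[i + 1] * e + 0.5 * (S[i] + S[i + 1]) * (1.0 - e)
        J += 0.5 * w * I
    return J

def classical_lambda_iteration(eps=1e-4, tau_max=1e6, max_iter=300, tol=1e-3):
    # logarithmic depth grid plus the exact surface point; B = 1 everywhere
    tau = np.concatenate(([0.0], np.logspace(-4, np.log10(tau_max), 80)))
    x, w = np.polynomial.legendre.leggauss(4)
    mu_nodes, mu_weights = 0.5 * (x + 1.0), 0.5 * w     # Gauss nodes mapped to (0, 1)
    S = np.ones_like(tau)                               # LTE starting guess S = B
    for it in range(1, max_iter + 1):
        J = formal_solution(S, tau, mu_nodes, mu_weights)
        S_new = eps * 1.0 + (1.0 - eps) * J             # S = eps*B + (1 - eps)*J
        correction = np.max(np.abs(S_new - S) / S_new)  # maximum relative correction
        S = S_new
        if correction < tol:
            break
    return it, S[0]

if __name__ == "__main__":
    n_iter, s_surface = classical_lambda_iteration()
    print(f"stopped after {n_iter} iterations, S(0)/B = {s_surface:.4f} "
          f"(the coherent-scattering surface law gives sqrt(eps) = {1e-4 ** 0.5:.4f})")
```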
|
the basic idea of an extremely fast convergent iterative method , the forth - and - back implicit lambda iteration ( fbili ) , is briefly described and the applications of the method to various rt problems are listed and discussed .
|
today , ultrasound beams apart from diagnosis and ultrasound imaging is used in treatment process such as the treatment of cancers ( hifu ) , stone crushing in the body , outside the body , rehabilitation and injection and detection of hematological parameters .cancers treatment is one of ultrasound uses to which is done inside the body and outside the body .some external damages and skin lesions have been reported in outside the body therapy methods which are used more in treatment of abdominal soft tissue cancers .many of the articles have worked on methods of heat gaining during the therapy with hifu but examining method based on tissue mechanical model is a new approach . today , several people have worked on thermal effects during treatment . in one of these reports ,the thermal effect on the tissue and its cooling system is considered and it is tried to obtain thermal coefficients using reverse engineering in solving bioheat equation that do nt harm tissue .of course this action is considered in time process and in hyperthermia treatment but tissue is severed in hifu and physiotherapy practice is nt done .+ in discussing hifu and its thermal effects in the treatment process , when the ultrasound waves are radiated tissue , because changing in some tissue thermal parameters and by simulation of kzk equations using acoustic wave s field and coefficients of tissue , heat distribution is calculated at the time of treatment .the dimensions of the transducer , the distance from the tissue and radiation time is also effective in heat distribution along with the tissue coefficients and tissue specific heat coefficient . in hifu treatment , laser radiation could be used along with the sound waves . since laser waves have wavelength and every body tissue absorbs a wavelength can be absorbed .in fact they select that tissue .when the laser is absorbed in the tissue , causes temperature raise of the tissue . in the meantime, ultrasound waves can be radiated to the desired tissue .being these together makes the treatment more precise and at the same time increase of the tissue heat .however in this manner when the depth is greater laser ca nt have a major influence ? . + usually bio heat equations are used to simulate the heat .sound waves are propagated with a wave equation . propagated wave form , its distribution manner has an impact on the kzk equation .diffraction occurs when the sound enters the tissue .existence of this phenomenon causes waves are constantly diffracting when they come out of the piezoelectric cell until they reach the target tumor tissue .number of piezoelectric cells , transducer diameter , and distance from the tissue , the tissue thickness and the overall angle of diffraction causes a heat and pressure at the center tissue .different methods except ionizing methods such as microwave , radiofrequency , laser , optical and ultrasound methods are used for the treatment of soft tissue cancers . + in all of these methods , it is important to detect heat amount in the treatment process .mri imaging techniques can be used for heat tracing .treatment process heat tracking with this method , is called mr thermometry . mri in this way is used in many medical treatments for examining cancer treatment improving and tracing accuracy of treatment and comparison is made . piezoelectric cells that are used for treatment in hifu , are in a spherical dish and make focal intensified waves in the tumor spot , they can be used to make a tissue slow or treat it in non - invasive manner. 
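since bioheat equations are mentioned above as the usual way to simulate the heating during treatment , a minimal 1-d explicit finite - difference sketch of a pennes - type bioheat equation is given below . the thermal coefficients , the perfusion term , the grid and the gaussian acoustic heat source are all illustrative assumptions , not values taken from the cited works ; the intent is only to show how a volumetric heat source q produced by absorbed ultrasound translates into a temperature rise over time .

```python
# 1-D Pennes-type bioheat equation, explicit FTCS scheme (all parameter values illustrative):
#   rho*c*dT/dt = k*d2T/dz2 - w_b*c_b*(T - T_a) + Q(z)
import numpy as np

rho, c, k = 1050.0, 3600.0, 0.5        # tissue density [kg/m^3], heat capacity [J/kg/K], conductivity [W/m/K]
w_b, c_b, T_a = 0.5, 3800.0, 37.0      # blood perfusion [kg/m^3/s], blood heat capacity, arterial temperature [C]
nz, L = 201, 0.08                      # grid points, tissue depth [m]
z = np.linspace(0.0, L, nz)
dz = z[1] - z[0]
Q = 5e5 * np.exp(-((z - 0.04) / 0.004) ** 2)   # absorbed acoustic power density [W/m^3], focal spot at 4 cm

dt = 0.4 * rho * c * dz ** 2 / (2.0 * k)       # comfortably below the explicit stability limit
T = np.full(nz, 37.0)
t, t_end = 0.0, 10.0                           # 10 s of sonication
while t < t_end:
    lap = np.zeros(nz)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dz ** 2
    T[1:-1] += dt / (rho * c) * (k * lap[1:-1] - w_b * c_b * (T[1:-1] - T_a) + Q[1:-1])
    T[0], T[-1] = 37.0, 37.0                   # boundaries held at body temperature
    t += dt

print(f"peak temperature after {t_end:.0f} s: {T.max():.2f} C at z = {z[T.argmax()] * 100:.1f} cm")
```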
geometry form and dimensions of the transducer , gender like pzt and type of piezoelectric cells arrangement , and piezoelectric cells acoustic impedance are important in type of generated heat .r. martinez , a. , vera , l. and colleagues marked the piezoelectric cells in this way .they could obtain best type of piezoelectric cell and its arrangement based on frequency and based on a finite element method based on matlab simulation software .this examining method causes reducing the least heat in the treatment process that is performed in the piezoelectric cell and treatment process is done more efficiently . + at the time of treatment with hifu the dose term is used at the time of energy receiving in the tumor area .m costaand his colleges at the time of treatment with hifu used optical ct device and imaging simultaneously with a volumetric chemical dosimeter called presagewhich is made of polyurethane and at the time of using sound waves changes color according to heat in the focal point based on optical imaging for heat calibration of hifu device . using gel phantom with entering some bubbles in it and applying hifu waves , heat degree varies despite existence of some bubbles .if bubbles are injected vibrationally , convert ultrasound kinetic energy into thermal energy .radial movements and changes of hifu waves in this phantom are simulated numerically and then are compared with the actual value and the impact of micro - bubbles is reviewed in the process . + the overall structure of this article is to explain design model components in section 2 , mechanical model components of the tissue in section 3 , results and laboratory works in section 4 , the benefits of this method in section 5 and overall results of the study in 6 .since in treatment with hifu mechanical beam is used and treatment is done with two ways of inside the body and outside the body , in outside the body treatment methods , if tumor is in body depth , intensified sound beam should pass soft tissues before hitting tumor . because of being mechanical , wave influences on healthy soft tissues and even in some cases causes dermal damages .producer company named china hifu reported some skin lesions in treatment of soft tissue tumors . 
despite the existence of mr - hifu systems , the reported lesions led us toward using a mechanical model : using the binary feature of tissue , whose parameters can be extracted by elastography or measured in the lab , the sound is fed into the tissue mechanical model and the pressure and heat in each layer are obtained through simulink matlab . if the temperature or pressure in some direction is large enough to cause skin lesions , that direction is identified with respect to the tissue parameters ; since , biophysically , the temperature rise should not exceed 2 c if damage is to be avoided , the sound wave intensity emitted by the transducer in that direction can then be changed . this way of creating the model and of modeling it in the software environment is a unique method ; to our knowledge , the pressure and heat produced by hifu have not been discussed in this way elsewhere . however , the extracted heat and pressure should be compared with a practical example ; unfortunately , due to the lack of a hifu device in iran , the cases extracted and run with the hifu simulator , which is considered by the fda as an official reference and standard , are used for the data comparison . not all variables are considered : time - varying quantities are not included , and even the blood supply system , which acts as a cooling parameter during treatment with hifu , is not considered . we hope that in the future , with the acquisition of equipment and practical tests , and by completing the simulation and adding the mentioned parameters using the simulink simmechanics animation environment , results closer to reality can be obtained . in this project a sheep kidney was used : it was divided into equal parts and measured in the lab with a mechanical parameter measurement device , the obtained parameters were entered into the simulator environment , and by applying a hifu wave in one direction the resulting heat in the kidney tissue placed in the phantom was obtained . the model is based on the three elements of mass , spring and damper , and the simulink toolbox of matlab version 2015a has been used . in the hifu process , where mechanical sound waves are applied to the tissue as focused and intensified ultrasound waves , if the tissue is assumed to behave mechanically it can be considered to be composed of the three elements of mass , spring and damper . the damper and spring values are derived from the tissue coefficients through the standard relations f = k x for the spring and f = b dx / dt for the damper . in these relations , f is the force applied to the spring or damper , k is the spring rate , and b is the coefficient of damper viscosity . the tissue values can be obtained in a physics or tissue lab ; with the viscosity and spring values of the tissue , the amount of heat in each layer of tissue is obtained from the simulation in matlab . if the tissue , for example , is taken to be a kidney , its acoustic impedance ( table [ tabel1 ] ) and the speed of sound in soft tissue are needed . using the hifu simulator software [ 10 ] mentioned on the fda site for simulating hifu , the pressure and temperature at each layer of tissue are obtained and compared .

table [ tabel1 ] : acoustic impedances of different body tissues and organs ( food and drug administration ) , in mrayl :
  air : 0.0004
  lung : 0.18
  fat : 1.34
  liver : 1.65
  blood : 1.65
  kidney : 1.63
  muscle : 1.71
  bone : 7.8

to use these in the simulation phase in the simulink matlab space , as shown in fig .
[ fig001 ] , to simulate hifu waves , a saw tooth wave is used .we consider spl , pd signal as follows : for the pulse , spl , pd value is subject to change based on the following values : in these equations spl is adjustable according to longitudinal dimension and pd based on the dimension of time ( spatial and temporal length of pulse ) . in these equations , n is number of damped pulse oscillations , t , pulse period and is the wavelength of a pulse oscillation , it means that parameters of wavelength , period and number of oscillations can be changed for changing the spatial and temporal length . if fig .[ fig001 ] input signal as a mechanical signal enters into a layer of mass , spring and damper , will weaken sound .weakened sound is absorbed by the mechanical tissue as thermal energy and remaining pass through it as weakened mechanical waves . in fig .[ fig003 ] , when the ultrasound waves come out of hifu transducer , become weak in each layer as shown in fig .[ fig002 ] .thus when ultrasound waves reach tumor tissue , have become very weak .in this time if final heat of every beam of ultrasound waves is t , in tumor tissue will be : + + in this model , body tissue is modelled as mechanical elements . in this model springis considered as tissue elasticity and damper as a spring -resistant and spring fixer .the resulting force is stored as a total result in intended mass ( fig .[ fig004 ] ) . according to a performed project report for hifu simulator, it was performed by a blue phantom that target tissue was within it and obtained the heat and pressure results in the cartesian directions .accordingly , by applying hifu wave , iso shape of heat dose is obtained in a certain range of the phantom .this application is also used in fda that is as a medical device measure reference .accordingly , it is intended to be a beam of sound waves to pass directly and reach a tissue . in simulation that was performed using simulink matlab , three layers were considered that these three layers can be considered in line with a beam of fig . [ fig003 ] ultrasound wave . if the tissue is taken into account with mechanical properties ,consists of three mentioned elements .viscoelasticity of tissue can be determined by digital signal processing works even with passing mechanical and acoustic wave . accordingly in the following fig layer and each layer based on the mechanical properties is considered .the temperature of each layer should be measured .following formula is used to do so under the terms of the heat degree electrical rules in two ends of resistance : in this equation , r is the electrical resistance , i , electrical current and w is generated thermal energy . 
if according to the binary rules , resistance amount of a damper is supposed to be a number like b and corresponding to electric current in mechanics , sound speed is assumed v , heat amount in two ends of damper that causes heat in the model is obtained from the following equation : the heat is the product of two parameters multiplication .this product can be displayed at two ends of each layer .it should be mentioned that in the simulation the time dependent variable and time varying viscosity is not assumed .even with thermal sensors , the heat can be obtained at the ends of each layer as it is done in a separate simulation .we divide a sheep kidney weighing 61.07 grams into four equal parts and the size and weight of each piece based on measurement with verniercalliper and accurate scale is as fig .additionally the measurement system can draw strain and shear stress of one ship s kidney same as fig .9 therefore we can obtain much more mechanical data of kidney . [ h ! ] shear stress and strain curves for a lobe symbolically with anton paar device is as follows . with putting the values of mass , spring , damper in fig .[ fig003 ] , hifu pressure on a tissue comprised of a layer of kidney can be obtained . in the phantom that is designed as follows ,a water layer and a layer of tissue in the middle of it is immersed ( fig.[fig008 ] ) : +using data obtained from kidney tissue with placement in fig .[ fig005 ] simulation , amount of heat wave in three layers of kidney tissue can be obtained by applying hifu wave .with average heat from resulting heat in the three tissues , and an average temperature in kidney tissue is estimated . using the hifu simulator software of fda site ,heat degree can be obtained based on fda is official reference of standard of medical equipment according to fig .[ fig010 ] charts by placement of acoustic impedance and sound velocity in the kidney tissue . in these comparisonsthis should be noted that the blood flows heat sink that makes the tissue cool is ignored and all time variable values have not been considered .+ if we had access to real transducer in iran could do more accurate comparison . for central point of kidney tissue ,heat degree was 90 c with hifu simulator software and with matlab simulator was 79 c which this difference in being low of the number of layers of kidney , can be lack of attention to the blood supply . with this simulator it is hoped that the input heat degree can be changed using the input ultrasound wave to prevent the skin lesions reported during treatment with hifu ( fig.[fig012 ] ) .look at the fig.11 layer model , the outermost layer of body is skin and pressure and heat leave their maximum amount on skin .ultrasound beams are entered into the tissue from the skin . according to formula : in this equation , is linear attenuation coefficient , z is depth and i is sound intensity that in the formula intensity and thickness of each layer becomes less gradually .if the pressure or temperature at the outermost layer is not calculated , leads to burning of tissue or damage of tissue . to avoid tissue damage, we should set the intensity and time of sound passes externally .for this purpose , at first the mechanical model of fig .[ fig003 ] was used to apply according to waves tissue specifications . in this model and project , since we did not have access to a real hifu device and a human tissue , a kidney and liver of a sheep at a later stage in fig .[ fig006 ] phantom were used . 
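a minimal numerical sketch of the layer - wise dissipation described above is given below : a sinusoidal boundary displacement drives a chain of three lumped mass - spring - damper layers , the equations of motion are integrated , and the heat deposited in each layer is taken as the time - averaged power dissipated in its damper , b times the squared relative velocity , which is the mechanical counterpart of w = i^2 r used above . all masses , stiffnesses , damping coefficients and the drive frequency are illustrative assumptions , not the measured kidney values or the actual hifu frequency .

```python
import numpy as np
from scipy.integrate import solve_ivp

m = np.array([1e-3, 1e-3, 1e-3])        # layer masses [kg] (illustrative)
k = np.array([4e3, 3e3, 2e3])           # spring constants [N/m]; k[i] couples layer i-1 to layer i
b = np.array([2.0, 1.5, 1.0])           # damper coefficients [N*s/m]
A, w = 1e-4, 2 * np.pi * 1e3            # boundary displacement amplitude [m] and drive frequency [rad/s]

u0 = lambda t: A * np.sin(w * t)        # prescribed motion of the driven face ("layer 0")
v0 = lambda t: A * w * np.cos(w * t)

def rhs(t, y):
    x, v = y[:3], y[3:]
    xp = np.concatenate(([u0(t)], x))   # positions including the driven boundary
    vp = np.concatenate(([v0(t)], v))
    f_up = k * (xp[:-1] - xp[1:]) + b * (vp[:-1] - vp[1:])   # force from the element above each layer
    f_down = np.concatenate((f_up[1:], [0.0]))               # reaction from the element below (free end last)
    return np.concatenate((v, (f_up - f_down) / m))

t_end = 30 * 2 * np.pi / w              # 30 drive cycles, enough to reach steady state here
sol = solve_ivp(rhs, (0.0, t_end), np.zeros(6), max_step=2e-5,
                t_eval=np.linspace(0.8 * t_end, t_end, 2000))  # keep only the settled part
xp = np.vstack((u0(sol.t), sol.y[:3]))
vp = np.vstack((v0(sol.t), sol.y[3:]))
for i in range(3):
    p_avg = np.mean(b[i] * (vp[i] - vp[i + 1]) ** 2)          # <b * (relative velocity)^2>
    print(f"layer {i + 1}: mean dissipated power ~ {p_avg:.3e} W")
```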
in all of these steps ,variable was not taken into account with time , it is hoped that in later trials for more accuracy , variable with time was taken into account .fda site software was used due to lack of real access .much of this simulation was done in hfu for tissue protection and optimization of treatment planning . as ultrasound safety reportsare available , ultrasound beams should always be kept in safe limit even during treatment to prevent physical and thermal damage of healthy tissue .this is important when by association of medical physics of america is placed in safe phase in ultrasound beam treatment phase to prevent tissue damage ( fig .example of safe environment graph of ultrasound and tissue damage that is created during treatment and non -compliance with safety standards comes in the following forms . with the completion of the modeling and having practical devices, simulation parameters can be more complete and by obtaining the amount of heat in each layer , further tissue damage can be prevented in the future .we used a sheep kidney tissue with definite specifications and obtained its strain and stress values after dividing it into distinct layers with specific dimensions .those values were put in designed mechanical model then ultrasound beams were applied to it in mechanical wave form . in this model , using temperature and binary sensors , obtained the temperature in each layer and compared it with hifu simulation software in fda site . given that in the project , the weight and viscoelasticity properties of tissue have been considered and in fda model just tissue type is considered regarding acoustic impedance and the speed of sound in tissue , comparison was made . in this comparison thermal values error in our modelis about 12% compared to the fda error .it is possible that a lot of this error is related to simulation and comparison type error which is important and is removable in clinical trials .this comparison is considered to prevent tissue damage . in hifu ,in fact , the amount of heat in each layer reduces gradually . butphenomenon of overall effects causes a lot of pressure is applied on the tumor location or final that is attempted to consider overall work addition with this model in the next projects .we would like to thank ms parstoo hoseyni from dept . of medical physics , science and research branch of islamic azad university , tehran , iran , roya shafiyian and sara banayi rad from dept . of biomedical eng , islamic azad university , qazvin , iran , for their help in experimental set up and data acquisition .this work was partially supported by my father and mother and award to them .s. sarraf and j. sun , `` advances in functional brain imaging : a comprehensive survey for engineers and physical scientists . , '' _ international journal of advanced research _ , vol . 4 , pp . 640660 , aug 2016 .s. a. aghayan , d. sardari , s. r. m. mahdavi , and m. h. zahmatkesh , `` an inverse problem of temperature optimization in hyperthermia by controlling the overall heat transfer coefficient , '' _ journal of applied mathematics _, vol . 2013 , 2013 .m. costa , c. mcerlean , i. rivens , j. adamovics , m. leach , g. ter haar , and s. doran , `` presage as a new calibration method for high intensity focused ultrasound therapy , '' in _ journal of physics : conference series _ , vol . 573 , p. 012026, iop publishing , 2015 .c. grady , s. sarraf , c. saverino , and k. 
campbell , `` age differences in the functional interactions among the default , frontoparietal control , and dorsal attention networks , '' _ neurobiology of aging _ , vol .41 , pp . 159172 , 2016 .s. r. guntur and m. j. choi , `` influence of temperature - dependent thermal parameters on temperature elevation of tissue exposed to high - intensity focused ultrasound : numerical simulation , '' _ ultrasound in medicine & biology _ , vol .41 , no . 3 , pp .806813 , 2015 .e. j. kim , k. jeong , s. j. oh , d. kim , e. h. park , y. h. lee , and j .- s .suh , `` mr thermometry analysis program for laser - or high - intensity focused ultrasound ( hifu)-induced heating at a clinical mr scanner , '' _ journal of the korean physical society _ , vol . 65 , no . 12 , pp . 21262131 , 2014 .r. martinez , a. vera , and l. leija , `` heat therapy hifu transducer electrical impedance modeling by using fem , '' in _ 2014 ieee international instrumentation and measurement technology conference ( i2mtc ) proceedings _ , pp .299303 , ieee , 2014 .s. sarraf , c. saverino , h. ghaderi , and j. anderson , `` brain network extraction from probabilistic ica using functional magnetic resonance images and advanced template matching techniques , '' in _ electrical and computer engineering ( ccece ) , 2014 ieee 27th canadian conference on _ , pp . 16 , ieee , 2014 .s. sarraf , e. marzbanrad , and h. mobedi , `` mathematical modeling for predicting betamethasone profile and burst release from in situ forming systems based on plga , '' in _ electrical and computer engineering ( ccece ) , 2014 ieee 27th canadian conference on _ , pp . 16 , ieee , 2014 .s. sarraf , c. saverino , and a. m. golestani , `` a robust and adaptive decision - making algorithm for detecting brain networks using functional mri within the spatial and frequency domain , '' in _ 2016 ieee - embs international conference on biomedical and health informatics ( bhi ) _ , pp . 5356 , ieee , 2016 .c. saverino , z. fatima , s. sarraf , a. oder , s. c. strother , and c. l. grady , `` the associative memory deficit in aging is related to reduced selectivity of brain activity during encoding , '' _ journal of cognitive neuroscience _ , 2016 .
|
in hifu treatment from outside the body , focused ultrasound beams strike the cancerous tissue , especially soft tissue , with high intensity ; while passing through the different layers of the body on their way to the tumor , they subject the components along their path to mechanical and even thermal influence and can cause skin lesions . to reduce this effect a specific mechanical model can be used : the body tissue is treated as a mechanical model , it is affected when the mechanical sound waves pass through it , and each layer acquires an average heat . the sound intensity decreases gradually as it passes through every layer , so that finally , in one direction , a sound of reduced intensity reaches the tumor tissue . if the number of propagation directions is increased , numerous waves of reduced intensity gather on the tumor tissue , which concentrates a large amount of heat on the tumor . depending on the kind and the mechanical properties of the tissue , the intensity of each sound wave as it passes through the tissue can be controlled so as to reduce the damage outside the tumor tissue .
|
the formalism of quantum operation can be used to describe a very large class of dynamical evolution of quantum systems , including quantum algorithms , quantum channels , noise processes , and measurements . the task to fully characterize an unknown quantum operation by applying it to carefully chosen input state(s ) and analyzing the output is called quantum process tomography .the parameters characterizing the quantum operation are contained in the density matrices of the output states , which can be measured using quantum state tomography .recipes for quantum process tomography have been proposed . in earlier methods , is applied to different input states each of exactly the input dimension of . in , is applied to part of a fixed bipartite entangled state . in other words ,the input to is entangled with a reference system , and the joint output state is analyzed .quantum processing tomography is an essential tool in reliable quantum information processing , allowing error processes and possibly imperfect quantum devices such as gates and channels to be characterized .the method in has been experimentally demonstrated and used to benchmark the fidelities of teleportation and the gate cnot , and to investigate the validity of a core assumption in fault tolerant quantum computation .the number of parameters characterizing a quantum operation , and therefore the experimental resources for any method of quantum process tomography , are determined by the input and output dimensions of .however , different methods can be more suitable for different physical systems .furthermore , each method defines a procedure to convert the measured output density matrices to a desired representation of , and a simpler procedure will enhance the necessary error analysis . in this paper , we describe in detail the method initially reported in , which is derived as a simple corollary of a mathematical proof reported in .our goal is two - fold .we hope to make this interesting proof more accessible to the quantum information community , as well as to provide a simple recipe for obtaining the kraus operators of the unknown quantum operation . in the rest of the paper , we review the different approaches of quantum operations , describe choi s proof and the recipe for quantum process tomography in sections [ sec:3approachs ] , [ sec : proof ] , and [ sec : qpt ] .we conclude with some discussion in section [ sec : conclude ] .a quantum state is usually described by a density matrix that is positive semidefinite ( , i.e. , all eigenvalues are nonnegative ) with . a quantum operation describes the evolution of one state to another .more generally , let and denote the input and output hilbert spaces of .a density matrix can be regarded as an operator acting on the hilbert space .let denote the set of all bounded operators acting on for .we can consider for any without restricting the domain to density matrices .a mapping from to is a quantum operation if it satisfies the following equivalent sets of conditions : 1 . is ( i ) linear , ( ii ) trace non - increasing ( ) for all , and ( iii ) _completely positive_. the mapping is called _ positive _ if in implies in .it is called completely positive if , for any auxillary hilbert space , in ) implies in where is the identity operation on .2 . 
has a _ kraus representation _ or an _ operator sum representation _ :
$$ { \cal e } ( m ) = \sum_k a_k m a_k^\dagger \qquad [ eq : osr_main ] $$
where $\sum_k a_k^\dagger a_k \leq \mathbb{1}$ , and $\mathbb{1}$ is the identity operator in . the operators $a_k$ are called the kraus operators or the operation elements of . 3 . here , is a density matrix of the initial state of the ancilla , is the identity operator , is a projector , and is a partial tracing over . each set of conditions represents an approach to quantum operation when the input is a density matrix ( ) . the first approach puts down three axioms any quantum operation should satisfy . the completely positive requirement states that if the input is entangled with some other system ( described by the hilbert space ) , the output after acts on should still be a valid state . the third approach describes system - ancilla ( or environment ) interaction . each of these evolutions results from a unitary interaction of the system with a fixed ancilla state , followed by a measurement on a subsystem with measurement operators , post - selection of the first outcome , and removal of the ancillary subsystem . [ figure : schematic of the system - ancilla realization of a quantum operation , showing the input and ancilla states , the joint unitary , the measurement on a subsystem and the discarded part . ] the fact that the third approach is equivalent to the first is nontrivial : the evolutions described by the third approach are actually all the mappings that satisfy the three basic axioms . finally , the second approach provides a convenient representation useful in quantum information theory , particularly in quantum error correction ( see for a review ) . proofs of the equivalence of the three approaches are summarized in . there are four major steps , showing that the 1st set of conditions implies the 2nd set and vice versa , and similarly for the 2nd and 3rd sets of conditions . the most nontrivial step is to show that every linear and completely positive map has a kraus representation , and a proof due to choi for the finite dimensional case will be described next . the precise statement to be proved is that , if is a completely positive linear map from to , then $ { \cal e } ( m ) = \sum_k a_k m a_k^\dagger$ for some $n_2 \times n_1$ matrices $a_k$ , where $n_i$ is the dimension of . let $|\phi\rangle = \frac{1}{\sqrt{n_1 } } \sum_{i=1}^{n_1 } |i\rangle \otimes |i\rangle$ be a maximally entangled state in . here , $\{ |i\rangle \}$ is a basis for . consider
$$ m = n_1 |\phi\rangle\langle\phi| = \sum_{i , j=1}^{n_1 } |i\rangle\langle j| \otimes |i\rangle\langle j| . $$
the matrix $m$ is an $n_1 \times n_1$ array of $n_1 \times n_1$ matrices , and its $( i , j)$ block is exactly $|i\rangle\langle j|$ ( eq . [ eq : proof41 ] ) . when $ { \cal i } \otimes { \cal e}$ is applied to $m$ , the $( i , j)$ block becomes $ { \cal e } ( |i\rangle\langle j| )$ , giving eq . [ eq : proof42 ] , which is an $n_1 \times n_1$ array of $n_2 \times n_2$ matrices . we now express $( { \cal i } \otimes { \cal e } ) ( m )$ in a manner completely independent of . since $m$ is positive and $ { \cal i } \otimes { \cal e}$ is completely positive , $( { \cal i } \otimes { \cal e } ) ( m )$ is positive , and can be expressed as $\sum_k |v_k\rangle\langle v_k|$ , where the $|v_k\rangle$ are the eigenvectors of $( { \cal i } \otimes { \cal e } ) ( m )$ , normalized to the respective eigenvalues . one can represent each $|v_k\rangle$ as a column vector and each $\langle v_k|$ as a row vector . we can divide the column vector $|v_k\rangle$ into $n_1$ segments each of length $n_2$ , and define a matrix $a_k$ with the $i$-th column being the $i$-th segment , so that the $i$-th segment is exactly $a_k |i\rangle$ .
then , as illustrated in fig . [ fig : proof43 ] ( a schematic of an eigenvector being divided into segments that form the columns of a matrix ) , each $|v_k\rangle$ can be folded into the matrix $a_k$ , and
$$ ( { \cal i } \otimes { \cal e } ) ( m ) = \sum_k |v_k\rangle\langle v_k| , \qquad [ eq : proof44 ] $$
whose $( i , j)$ block is $\sum_k a_k |i\rangle\langle j| a_k^\dagger$ . comparing eqs . ( [ eq : proof42 ] ) and ( [ eq : proof44 ] ) block by block for every $( i , j)$ gives $ { \cal e } ( |i\rangle\langle j| ) = \sum_k a_k |i\rangle\langle j| a_k^\dagger$ , and hence , by linearity , $ { \cal e } ( m ) = \sum_k a_k m a_k^\dagger$ , which is the claimed kraus representation .

recipe for quantum process tomography

the basic assumptions in quantum process tomography are as follows . the unknown quantum operation , , is given as an `` oracle '' or a `` blackbox '' one can call without knowing its internal mechanism . one prepares certain input states and _ measures _ the corresponding output density matrices to learn about systematically . the task of measuring the density matrix of a quantum system is called quantum state tomography . to obtain a kraus representation for , one needs an experimental procedure that specifies the input states to be prepared , and a numerical method for obtaining the kraus operators from the measured output density matrices . a method follows immediately from the proof in sec . [ sec : proof ] . we retain all the previously defined notations . the crucial observation is that the two matrices in the proof correspond to physical input and output states , which can be prepared and measured . the procedure is therefore to : 1 . prepare a maximally entangled state in . 2 . subject one system to the action of , while making sure that the other system does not evolve . 3 . measure the joint output density matrix , multiply it by $n_1$ , and obtain its eigen - decomposition . 4 . divide each eigenvector ( normalized to its eigenvalue ) into $n_1$ equal segments each of length $n_2$ ; $a_k$ is the matrix having the $i$-th segment as its $i$-th column . the maximally entangled state in the above procedure can be replaced by any pure state with maximum schmidt number , i.e. , one whose schmidt coefficients are all real and nonzero . the output density matrix is then equal to the correspondingly weighted version of the state used in the proof ; one divides each block by the corresponding product of schmidt coefficients , and performs the eigen - decomposition to obtain a set of operators . the kraus operators of the operation are given by these operators . we have provided an experimental and analytic procedure for obtaining a set of kraus operators for an unknown quantum operation . the set obtained this way is called `` canonical '' in , meaning that the operators are linearly independent . we remark that any other kraus representation can be obtained from the canonical one using the fact that two sets of operators describe the same operation if and only if $b_j = \sum_k u_{jk } a_k$ , where the $u_{jk}$ are the entries of an isometry . alternatively , one can replace the eigen - decomposition of the output state by any decomposition into a positive sum to obtain other valid sets of kraus operators . previous methods of quantum process tomography involve preparing a set of physical input states that form a basis of , and measuring the corresponding outputs to determine . the input states are physical states , and can not be chosen to be trace orthonormal , causing complications in the analysis . in contrast , the output state in the current method automatically contains complete information on for the unphysical orthonormal basis ( see ) , which greatly simplifies the analysis needed to obtain the kraus operators . however , the current method requires the preparation of a maximally entangled state and the ability to stop the evolution of the reference system while is being applied . the previous methods are more suitable in implementations such as solution nmr systems , while the current method is more suitable for implementations such as optical systems . any efficient quantum process tomography procedure consumes approximately the same amount of resources , which is
determined by the number of degrees of freedom in the quantum operation . in general , to measure an density matrix , _ ensemble _ measurements are needed , requiring steps .the previous methods require the determination of density matrices each and take steps .the current method requires the determination of one density matrix which also requires steps . in both cases ,the number of steps is of the same order as the number of degrees of freedom in the quantum operation and are optimal in some sense .we thank isaac chuang for suggesting the application of choi s proof in quantum process tomography . after the initial report of the current result in , g. dariano andp. presti independently reported a similar tomography method .this work is supported in part by the nsa and arda under the us army research office , grant daag55 - 98-c-0041
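as a concrete illustration of steps 3 and 4 of the recipe above , the following numpy sketch synthesizes the joint output state from a known test channel ( standing in for the measured density matrix ) , eigen - decomposes it , and folds each eigenvector , segment by segment , into a kraus operator . the amplitude - damping test channel , the dimensions and the numerical tolerance are illustrative assumptions ; the folding and verification steps follow the procedure described above .

```python
import numpy as np

def apply_channel(kraus_ops, rho):
    return sum(A @ rho @ A.conj().T for A in kraus_ops)

def joint_output(kraus_ops, n):
    # (identity (x) E) applied to the unnormalized maximally entangled state
    # sum_ij |i><j| (x) |i><j|; the (i, j) block of the result is E(|i><j|).
    out = np.zeros((n * n, n * n), dtype=complex)
    for i in range(n):
        for j in range(n):
            Eij = np.zeros((n, n), dtype=complex)
            Eij[i, j] = 1.0
            out += np.kron(Eij, apply_channel(kraus_ops, Eij))
    return out

def kraus_from_output(rho_out, n_in, n_out, tol=1e-12):
    # Steps 3-4 of the recipe: eigen-decompose the (rescaled) joint output state and
    # fold each eigenvector, normalized to its eigenvalue, into a Kraus operator
    # whose i-th column is the i-th length-n_out segment of the eigenvector.
    vals, vecs = np.linalg.eigh(rho_out)
    kraus = []
    for lam, v in zip(vals, vecs.T):
        if lam > tol:
            kraus.append((np.sqrt(lam) * v).reshape(n_in, n_out).T)
    return kraus

if __name__ == "__main__":
    n, p = 2, 0.3
    true_kraus = [np.array([[1.0, 0.0], [0.0, np.sqrt(1 - p)]]),   # test channel:
                  np.array([[0.0, np.sqrt(p)], [0.0, 0.0]])]       # amplitude damping
    rho_out = joint_output(true_kraus, n)      # stands in for the measured state (times n_1)
    recovered = kraus_from_output(rho_out, n, n)
    rho = np.array([[0.6, 0.2 - 0.1j], [0.2 + 0.1j, 0.4]])
    print("channel reproduced:", np.allclose(apply_channel(true_kraus, rho),
                                             apply_channel(recovered, rho)))
    print("sum_k A_k^dag A_k = identity:", np.allclose(sum(A.conj().T @ A for A in recovered),
                                                       np.eye(n)))
```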
|
quantum process tomography is a procedure by which an unknown quantum operation can be fully experimentally characterized . we reinterpret choi s proof of the fact that any completely positive linear map has a kraus representation [ lin . alg . and app . , 10 , 1975 ] as a method for quantum process tomography . furthermore , the analysis for obtaining the kraus operators are particularly simple in this method .
|
frequency identification ( rfid ) is a rapidly evolving automatic identification and tracking system .even though the basic operating principles of modern rfid systems have been known for several decades , their adoption in numerous industrial and consumer applications ( such as supply chain management , inventory control , supermarket checkout process , and toll collections ) has been proliferated recently due to the ability now to build miniaturized rfid components at low cost .typically , a rfid system consists of two components : a reader and tags .each tag has a unique i d stored in its memory .the reader should read ( interrogate ) ids of all the tags within its radio field , and for this purpose it broadcasts interrogation rf signal periodically .if an rfid tag finds itself within the rf - field of the reader , it backscatters ( i.e. transmits back ) a signal containing its unique i d .when more than one rfid tags backscatter their ids using a common chunk of the shared wireless channel ( in terms of frequency , time , space , or code ) , signal from one tag interferes the signals from others , and the reader might not be able to decode ids of the backscattering tags .such phenomenon is commonly known as tag - collision .occurrence of such tag - collision events triggers the collided tags to retransmit their ids in the subsequent interrogation rounds and thus elongates tag identification delay ( or in other words reduces the tag reading rate ) at the reader . many link - layer ( more precisely medium access control sub - layer ) anti - collision protocolshave been developed so far to address the tag - collision problem .those protocols not only reduce the frequency of occurrence of tag - collision events but also help to recover from such events as quickly as possible . in a broad sense ,time division multiple access rfid anti - collision protocols are classified as either deterministic or probabilistic protocols based on how tags are allocated a fraction of the shared channel resource ( a time slot ) to transmit their ids .the former type of protocols is based on binary tree ( bt ) where the collided tags are split into two subsets .the tags in the first subset transmit their ids in the next slot , while the tags in the other subset wait until the first subset of tags are successfully identified .this process is repeated recursively until all tags are recognized .the performance of tree - based anticollision protocols deteriorates with increase in the number of tags .this is because even though the colliding tags are successively grouped into two subsets , each subset may still contain many tags resulting in collision . on the other hand , in probabilistic protocols such asframed slotted aloha ( fsa ) , the channel time is split into frames and a single frame is further divided into several time slots . during each frame , each tag randomly chooses a time slot and transmits its i d to the reader in that slot .the unidentified tags will transmit their ids in the next frame .it has been shown that the probabilistic fsa can achieve smaller tag identification delay than its deterministic counterpart . in the literature thereexist many works which have been independently developed by different researchers and engineers to enhance the tag identification performance of a rfid system .some of the representative works are available in . 
based on the scope of their design, they can be categorized into ( i ) pure advancement in the link - layer anticollision protocols , and ( ii ) pure advancement in the physical layer rf transmission / reception models . the fundamental approach behind the first category of enhancements is to dynamicallyadjust the frame length of the probabilistic fsa protocols to its optimal value in each interrogation round ( resulting in new protocol referred to as dynamic fsa or dfsa ) , or to optimize tree search algorithm in the deterministic bt protocols taking advantage of inherent correlatedness among the tag ids .the latter category of enhancement , on the other hand , uses multiuser - multiple input multiple output ( mu - mimo ) technique along with efficient blind signal separation algorithms to realize multi - packet reception ( mpr ) capable rf reception model at the reader . due to the mpr capability at reader ,simultaneously transmitted signals from several tags can be separated and the transmitting tags can be correctly identified ( which otherwise would have been treated as being collided ) . it has been shown in that mpr capability at reader has potential to substantially increase the read rate and decrease identification delay of fsa and bt anticollision protocols , respectively . however , how to ascertain optimal tag reading performance in a rfid system with mpr capability is remained as an open research problem . to this end , we derive an optimality criterion and present a method to adopt such a criterion in the probabilistic dfsa anticollision protocol in a rfid system with mpr capability . to the best of our knowledge ,it is the first work in this regard .the rest of the paper is organized as follows .section ii presents the system model while section iii presents analytical derivation of a criterion for achieving optimal tag reading efficiency .section iv provides detail information about simulations environment , performance metrics and evaluation methodology .finally , section v concludes this work .we consider a rfid system where number of tags with single antenna communicates with a reader equipped with array of antennas . under such mu - mimo setting , it is assumed that spatially multiplexed backscattered signals from multiple tags can be separated at the reader using advanced signal processing algorithms unless the number of multiplexed signals exceeds .dacuna et al have recently demonstrated the feasibility of such assumption in uhf rfid systems .dfsa is used as the anticollision protocol .the operation procedures of dfsa at the reader and tag are described below : + * reader side : * ( 1 ) set initial frame length .( 2 ) initiate interrogation round by broadcasting the frame length information .( 3 ) in each slot of the frame , check whether there are any backscattered rf signals from the tags .mark the slot as an empty slot if no backscattered rf signal is detected .if rf signals are detected , use the advanced signal separation algorithm to separate the multiplexed backscatter rf signals .based on the outcome of the signal separation operation , mark the slot as a collided slot if none of the transmitting tags are identified , and mark it as a successful slot if any of the tags are identified .also record the number of identified tags in the successful slot .( 4 ) after the completion of the frame , check whether any slot within that frame is marked as the collided slot .it is the indication whether any tags are left to be interrogated or not . 
if none of the slots are marked as the collided slot , terminates the interrogation processotherwise , prepare for the next interrogation round .( 5 ) estimate the total number of contending tags in the last frame using _ maximum a posteriori _ based estimation method in eq .as to be elaborated in the next section , the map estimation mechanism utilizes the statistics of the collided , successful , and idle slots to perform estimation .( 6 ) determine the optimal frame length for next interrogation round using eq .( 12 ) and go to step ( 2 ) . +* tag side : * ( 1 ) wait for interrogation signal from the reader .( 2 ) obtain the frame length information .( 3 ) randomly select any of the slot within the frame and backscatter its i d in the selected slot . ( 4 ) if the transmission is inferred to be unsuccessful , wait for interrogation signal for the next round .in this section , we derive a theoretical criterion for achieving optimal tag reading performance at the reader with mpr capability and present a method to use such criterion in the practical rfid systems .consider the rfid system described in the previous section with tags to be read .the frame used in an interrogation round initiated by the reader consists of time slots .so , the probability that tags among tags occupy a slot can be expressed by the binomial distribution with parameters and as if the frame length is sufficiently large , eq .( 1 ) can be approximated by the poisson distribution with mean .accordingly , the probabilities that a slot is found to be empty ( no tags use the slot ) , successful ( or less number of tags use the slot ) , and collided ( more than number of tags use the slot ) are given by based on eq .( 3 ) , the expected value of the number of successful slots in the frame with slots is =l\cdot e^{-n / l}\sum_{j=1}^{m}\frac{(n / l)^j}{j!}.\ ] ] to maximize read rate ( number of successful tags per unit time ) of the reader it should be ensured that the shared channel should be used as efficiently as possible .this implies that a criterion that maximizes the channel usage efficiency ( defined as a ratio of expected value of the number of successful slots to the frame length ) also maximizes the read rate . since is a concave - downward function of , the criterion that maximizes can be obtained by equating the derivative of with respect to to zero as further simplification of eq .( 6 ) yields solving eq .( 7 ) , the criterion ( i.e. optimal frame length ) that maximizes is found to be if the number of tags to be interrogated is known in advance , a value of the frame length for the optimal usage of dfsa can be set to the value obtained from eq .however , the cardinality of tags to be interrogated is not known in advance .hence , for each frame , except for the initial frame , remaining tags to be interrogated should be estimated on - the - fly .chen has previously proposed a maximum a posteriori ( map)-based tag estimation method and showed that it is more accurate than its predecessors such as vogt s method .chen however did not consider mpr capabilities in the reader and hence his tag estimation formula is applicable for single packet reception model only ( i.e. ) . inwhat follows , we extend chen s formula for all possible values of . in a frame with slots , the joint probability mass function for finding empty slots , successful slots and collision slots can be represented using the following trinomial distribution where , and are previously defined in eq .( 2 ) , ( 3 ) , and ( 4 ) , respectively . 
hence , when the reader finds empty slots , successful slots , and collision slots in a frame , a posteriori probability distribution of having tags in the system is ^s \notag \\ & \times & \left[e^{-k / l}\left(e^{k / l}-t_m(k / l)\right)\right]^c,\end{aligned}\ ] ] where is the taylor polynomial of of order . based on the _ posterior _ probability distribution in eq .( 10 ) , the reader determine the total number of estimated tags as once the number of tags in frame is estimated using eq .( 11 ) , the optimal frame length in the next frame for interrogating the remaining tags will be where is the number of successfully identified tags in the frame .1 shows the posteriori probability distribution for tags when 1 empty slot , 6 collision slots , and 3 success slots are observed in a frame with 10 slots for three different cases of ( viz . , and ) . for each case of , the value corresponding to the peak of the distribution curve is the estimated number of tags .it is noteworthy to mention that while implementing the map - based estimation method in the reader , the first constant factor ( involving factorial ) in can be removed as it is only responsible in scaling the probability mass function .there will be no difference in the estimation result but significant computation burden from the reader can be reduced , especially when is large .we analyzed the performance of the mpr capable rfid system described in section ii for varying , and using monte carlo simulations .average results of 500 simulation trials are presented in terms of two metrics defined below : ( a ) read rate : number of tags identified per unit time , and ( b ) identification delay : total time required to read all the tags in the system .we considered the duration of a slot to be a basic unit of time , and hence the read rate is expressed in terms of tag / slot ( number of tags per slot ) and identification delay in terms of number of slots .2a ( left ) shows read rate of a fsa anticollision algorithm and dfsa anticollision algorithm with varying mpr capabilities ( 1 , 2 , 3 and 4 ) when the initial frame length was set to 128 .it is evident from the figure that read rate substantially increases with increase in the value of .this is attributed to the reduction in the number of tag - collision events due to mpr capability .read rate reaches its peak value of 1.9 tags / slot for the case of 4 , which in the conventional single packet reception capable reader ( i.e. , 1 ) is caped to 0.36 tag / slot .note that dfsa s peak read rate in the single packet reception capable reader agrees well to the previously established theoretical network throughput bound of ( 0.37 ) in any aloha based random access systems . in the figure , it is also evident that by merely using fsa it is not possible to attain the read rate closer to in the single packet reception capable reader .2a ( right ) shows the identification delay of fsa anticollision algorithm and dfsa anticollision algorithm with varying mpr capabilities . 
from the figureone can see that the increased read rate due to mpr capabilities ( observed in fig .2a ( right ) ) translates to the reduction in the identification delay .for example , when there were around 350 tags in the rfid system , nearly 5.5 fold decrease in the identification delay ( from 1011 slots to 184 slots ) was observed when the single packet reception capable reader was replaced with mpr - capable reader with 4 .2b shows that the initial frame length affects the performance of dfsa both in terms of read rate and identification delay , especially when the reader has high - order mpr capabilities and the number of tags to be interrogated is small . from the figure it is evident that the read rates for three different cases of initial frame lengths ( 32 , 64 and 128 ) appear to converge to a rate close to the peak read rate with increase in the number of tags in the systemthis implies that the effects of the initial frame length on read rate tends to vanish with increase in the number of tags .similarly , the difference in identification delay for different frame length values shrinks for larger number of contending population size .next , we measured the accuracy of the map - based tag estimation method used in our previous simulations . for that we calculated the estimation error ( in % ) as , where where is the estimated number of tags when there were tags in the systemthe lower value of the estimation error corresponds to the higher estimation accuracy .3 depicts estimation errors for four different cases of ( 1 , 2 , 3 , and 4 ) when the frame length was set to 128 in the simulations . from the figure it is evident that the estimation error increases with increase in the value of , but only up to a certain tag population size . beyond that tag population size , estimation error for higher remains lower .importantly , for all four different cases of , the estimation error remains lower than 6% regardless of the number of considered tags .in this paper , we have derived a general criterion to achieve the optimal performance of a probabilistic dfsa based anticollision algorithm in rfid system with mpr capable reader .previously , only the criterion for the single packet reception capable reader was known .further , we have provided a simple method to adopt such a criterion in practical rfid systems . through rigorous computer simulations ,we have shown the performance implications of that optimal criterion in terms of increased tag reading rate and reduced identification delay .j. myung , w. lee , j. srivastava , and t. k. shin , tag - splitting : adaptive collision arbitration protocols for rfid tag identification , " _ ieee trans .parallel distrib .763 - 775 , june 2007 .s. kim , s. kwack , s. choi , and b. g. lee , enhanced collision arbitration protocol utilizing multiple antennas in rfid systems , " in proc .asia - pacific conf .on communications , pp .925 - 929 , october 2011 .w. chen , an accurate tag estimate method for improving the performance of an rfid anticollision algorithm based on dynamic frame length aloha , " _ ieee trans ._ , vol . 6 .9 - 15 , january 2009 .
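to make the reader - side computations of the preceding sections concrete , the following python sketch implements ( i ) the map tag estimate of eqs . ( 10)-(11 ) , ( ii ) the optimal frame length rule , and ( iii ) a single monte carlo frame with an m - mpr reader of the kind used in the simulations above . the closed form used for the optimal load , n / l = ( m!)^(1 / m ) , is my own solution of the stated first - order condition under the poisson approximation ( it reduces to l = n for m = 1 ) and should be checked against eq . ( 8) ; all numerical values in the example are illustrative .

```python
import numpy as np
from math import factorial, exp, log

def optimal_frame_length(n_est, m):
    # Optimal load under the Poisson approximation: rho = n/L with rho**m = m!,
    # hence L* = n / (m!)**(1/m); for m = 1 this is the classical L* = n.
    return max(1, int(round(n_est / factorial(m) ** (1.0 / m))))

def log_posterior(k, e, s, c, L, m):
    # Unnormalized log a-posteriori probability of k contending tags given e empty,
    # s successful and c collided slots (constant multinomial factor dropped).
    rho = k / L
    p_succ = exp(-rho) * sum(rho ** j / factorial(j) for j in range(1, m + 1))
    p_coll = max(1.0 - exp(-rho) - p_succ, 1e-300)
    return -rho * e + s * log(max(p_succ, 1e-300)) + c * log(p_coll)

def map_estimate(e, s, c, L, m, k_max=2000):
    return max(range(max(s, 1), k_max + 1),
               key=lambda k: log_posterior(k, e, s, c, L, m))

def simulate_frame(n_tags, L, m, rng):
    # One DFSA frame: each tag picks a slot uniformly; a slot holding 1..m tags is
    # successful (all its tags are read), a slot holding more than m tags collides.
    occ = np.bincount(rng.integers(0, L, size=n_tags), minlength=L)
    e = int(np.sum(occ == 0))
    s = int(np.sum((occ >= 1) & (occ <= m)))
    c = int(np.sum(occ > m))
    read = int(np.sum(occ[(occ >= 1) & (occ <= m)]))
    return e, s, c, read

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_tags, L0 = 350, 128                       # illustrative population and initial frame
    for m in (1, 2, 3, 4):
        e, s, c, read = simulate_frame(n_tags, L0, m, rng)
        n_hat = map_estimate(e, s, c, L0, m)
        L_next = optimal_frame_length(max(n_hat - read, 0), m)
        print(f"m={m}: {read} tags read ({read / L0:.2f} tags/slot), "
              f"estimate {n_hat} (true {n_tags}), next frame length {L_next}")
```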
|
maximizing the tag reading rate of a reader is one of the most important design objectives in rfid systems , as the tag reading rate is inversely proportional to the time required to completely read all the tags within the reader s radio field . to this end , numerous techniques have been independently suggested so far , and they can be broadly categorized into pure advancements in the link - layer tag anticollision protocols and pure advancements in the physical - layer rf transmission / reception model . in this paper , we show by rigorous mathematical analysis and monte carlo simulations how these two independent approaches can be coupled to attain the optimum tag reading efficiency in an rfid system , considering a dynamic frame slotted aloha based link - layer anti - collision protocol at the tags and a multi - packet reception capable rf reception model at the reader .
index terms : tag anticollision protocol , maximum a posteriori tag estimation , multi packet reception , rfid system
|
exotic species , commonly referred to as invasive " species , are defined as any species , capable of propagating into a nonnative environment .if established , they can be extremely difficult to eradicate , or even manage .there are numerous cases of environmental , economic and ecological losses attributed to invasive species .some well - known examples of these species include the burmese python in the united states and the cane toad in australia , both of which have been wreaking havoc on indigenous ecosystems . despite the magnitude of threats posed by invasive species ,there have been very few conclusive results to eradicate or contain these species in real scenarios , in the wild .biological control is an adopted strategy to limit invasive populations .it works on the so called enemy release hypothesis " , in which natural enemies of the target species / pest , are released against it in a controlled fashion. these can be in the form of predators , parasitoids , pathogens or combinations thereof .it is a controversial yet fascinating area , with much debate and regulation .an interesting problem in the field , is the biological control paradox " .this has attracted much research attention , see and the references within .the essential paradox here is that if one models a predator - pest system , via the holling type ii functional response , we can not obtain a stable coexistence equilibrium , where the pest density is low .however , in reality many predators introduced for biological control purposes , are able to keep pest densities down to low levels .yet another interesting conundrum , also of a paradoxical nature , pertains to the effectiveness of generalist predators as biological controls . in theory , generalist predators ,are considered poor agents for biological control .this is due to many factors , such as lack of specific prey targets , highly frequent interference amongst themselves , and their interference in the search of other specialist predators .however , there is a large body of growing field evidence , showing that generalist predators , are actually quite effective in regulating pest densities .thus there is an apparent discrepancy between what theory predicts , and what is actually seen in empirical observations .the primary objective of the current manuscript , is to corroborate these empirical observations , by proposing an alternate theory to understand the effect of generalist predator interferences , as it effects their efficiency as biological controls .biological control is risky business " .for example , the introduced species might attack a variety of species , other than those it was released to control .this phenomena is referred to as a _ non - target effect _ , and is common in natural enemies that are generalist predators .some well known example of this are the cane toad ( _ bufo marinus _ ) and the nile perch ( _ lates niloticus _ ) .the secondary objective of the current manuscript , is to use our proposed theory to explain why certain species such as the cane toad , that were originally introduced as a biological control , have had an explosive increase in population .before we delve further into these aspects we briefly survey some of the relevant literature on mutual interference .mutual interference is defined as the behavioral interactions among feeding organisms , that reduce the time that each individual spends obtaining food , or the amount of food each individual consumes .it occurs most commonly where the amount of food is scarce , 
or when the population of feeding organisms is large . food chain models incorporating mutual interference have a long history , and were first proposed by hassell , and roger and hassell , to model insect parasites and predator searching behaviour . three species food chain models with mutual interference , and time delays , were proposed by freedman and his group and they studied the trade - off between mutual interference among predators , and time delays due to gestation . they observed that mutual interference is acting as a stabilizing factor and time delay does not necessarily destabilize the system , but increasing delay may cause a bifurcation into periodic solutions . for a delayed predator - prey model with a mutual interference parameter , wang and zu have obtained some sufficient conditions for the permanence and global attractivity . comparing this with empirical / statistical evidence from 19 predator - prey systems , skalski and gilliam pointed out that the predator dependent functional responses ( hassell - varley type , beddington - deangelis type and crowley - martin type ) could provide a better description of predator feeding , over a range of predator - prey abundances , and in some cases the beddington - deangelis ( bd ) type functional response performed even better . upadhyay and iyengar have pointed out that if predators do not waste time interacting with one another , or if the attacks are always successful and instantaneous , then the response changes into a holling type ii functional response , and the predators benefit from co - feeding . in light of our objectives we ask the following questions : * how does mutual interference amongst generalist predators affect their efficiency as biological controls ? * specifically , can the interference delay or exacerbate controlling the target species ? * can interference cause a population explosion in certain generalist predator biological agent populations ? to answer these we investigate the three species model proposed in , where the interaction between the intermediate predator and the top predator is modeled according to the beddington - deangelis type functional response . this response models predator interference in the top predator . we first introduce a concept central to our investigations . we introduce the dynamics of finite time blow - up to address our question on interference , via the following connected definitions : given a mathematical model for a nonlinear process , say through a partial differential equation ( pde ) , one says finite time blow - up occurs if $\lim_{t \to t^{*}} \| u \|_{x} = \infty$ , where $x$ is a certain function space with a norm , $u$ is the solution to the pde in question , and $t^{*}$ is the blow - up time . in the case of an ordinary differential equation ( ode ) model the function space is simply the real numbers . if blow - up does not occur , that is $t^{*} = \infty$ , we say there is global existence , that is , a solution exists for all time . consider a mathematical model ( pde or ode ) for the population dynamics of a certain species , introduced as a biological control . if the model blows - up in finite time , that is $t^{*} < \infty$ , then we say the population has reached an `` excessive '' level .
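as a purely illustrative example of this definition ( a toy scalar ode , not the model analysed in this paper ) , dr / dt = r^2 has the exact blow - up time t * = 1 / r ( 0 ) ; the sketch below shows how finite time blow - up is typically detected numerically , by integrating until the solution crosses a large threshold .

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # toy quadratic growth: dr/dt = r**2, exact blow-up time is 1/r(0)
    return y**2

def numerical_blowup_time(r0, threshold=1e8, t_max=10.0):
    """Return the time at which the solution first exceeds `threshold`,
    or None if it stays bounded up to t_max (numerical proxy for T*)."""
    def hit(t, y):
        return y[0] - threshold
    hit.terminal, hit.direction = True, 1.0
    sol = solve_ivp(rhs, (0.0, t_max), [r0], events=hit, rtol=1e-10, atol=1e-12)
    return sol.t_events[0][0] if sol.t_events[0].size else None

if __name__ == "__main__":
    r0 = 2.0
    print("numerical estimate of T*:", numerical_blowup_time(r0))
    print("exact blow-up time 1/r0 :", 1.0 / r0)
```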
in these excessive numbers it is able to wipe out the target pest almost with certainty . there is a rich history of blow - up problems in pde theory and its interpretations in physical phenomena . for example , blow - up may be interpreted as the failure of certain constitutive materials leading to gradient catastrophe or fracture , or it may be interpreted as an uncontrolled feedback loop such as in thermal runaway , leading to explosion . it might also be interpreted as a sudden change in physical quantities such as pressure or temperature , such as during a shock or in the ignition process . the interested reader is referred to . blow - up in population dynamics is usually interpreted as excessively high concentrations in small regions of space , such as seen in chemotaxis problems . in the current manuscript , blow - up in a population of bio - control agents is interpreted as the population becoming excessive , which then enables it to easily control the target pest , given that it has excessive numbers . this leads us to equate finite time blow - up with the success of the biological control . however , there might also be various negative effects associated with this . for example the following chain of events might occur . via this formulation , it is conceivable to see how this is what might have occurred with the cane toad in australia . the cane toad was introduced in australia in 1935 to control the cane beetle . however , the toad being a generalist predator , attacked various other species . in addition , the toad is highly poisonous and therefore other predators shy away from eating it . this has enabled the toad population to grow virtually unchecked , and it is today considered one of australia s worst invasive species . the contributions of the current manuscript are : * we introduce a new concept to measure the success of a biological control , by equating finite time blow - up of the control agent population with successful control of the target pest . * we show the three species model proposed in , and its spatially explicit version , can blow - up in finite time , for sufficiently large initial data , via theorem [ t1 ] and corollary [ ct1 ] . * we show predator interference _ is the sole factor _ in inducing blow - up , when there is global existence in the no interference case . this is demonstrated via theorem [ t2 ] and corollary [ ct2 ] . * the spatially explicit form of the three species model proposed in , possesses spatio - temporal chaos . also time delays in the temporal model affect both the blow - up dynamics as well as the chaotic dynamics . * based on the above results , we propose that generalist predator interference _ might be a cause of their success _ in controlling target pests . however , predator interference may also be a cause of the population explosion of certain species , introduced originally for biological control purposes , such as the cane toad in australia . we provide details of the model formulation next . we consider an ecosystem where a specialist predator / pest , invasive or otherwise , is depredating on a prey species . in order to control the pest , a generalist predator is released into the same ecosystem . the goal is that the generalist predator will hunt and depredate on the pest , its favorite food , thus lowering the pest population . the dynamical interaction between the prey and the pest is modeled via the standard rosenzweig - mcarthur scheme , while the dynamics between the pest and the generalist predator is modeled via a leslie - gower formulation , where interference in the top predator is assumed , and this is modeled via a beddington - deangelis functional response . upadhyay et al . have proposed a tri - trophic hybrid food chain model to study such a system .
herethe prey population density serves as the only food for the intermediate specialist predator population of density .this population serves as a favorite food for the generalist top predator population of density r. now , we present a brief sketch of the construction of the model which is biologically motivated . *behavior of the entire community is assumed to arise from the coupling of these interacting species where population prey on and only on and the top predator preys on favorite food but it has other options for food in case of scarcity or short supply . *the rate of change of population size for prey and intermediate predator has been written according to the rosenzweig - mcarthur scheme i.e. predator population dies out exponentially in the absence of its prey and prey population density grows according to the famous logistic growth rate . *the top predator is a sexually reproducing species .the interaction between the intermediate predator and the top predator is according to the beddingto - deangelis type functional response .this response models predator interference in the top predator .we impose different assumptions from the ones assumed in to formulate the spatially explicit form of the differential equations which describe the model .the detailed meaning of the different parameters is given in .further , we also assume that all the three populations perform active movements in space .random movement of animals occurs because of various requirements and necessities like , search for better food , better opportunity for social interactions such as finding mates .food availability and living conditions demand that these animals migrate to other spatial locations . in the proposed model , we have included diffusion terms assuming that the animal movements are uniformly distributed in all directions .the model is described by the following set of partial differential equations . with suitable initial conditions and neumann boundary conditions , the physical domain for the problem is some bounded set , which is a subset of . for numerical simulations ,we restrict ourselves to , here we take ] and and and then ( i ) is positively invariant ( ii ) all non negative solutions of system are uniformly bounded and they eventually enter the attracting set ( iii ) the model system is dissipative the primary issue with theorem [ thm : u ] is that an attracting set is invariant , but an invariant set * may not * be attracting .although the set is invariant , that is if we start in we remain in for all time , we will show it is not attracting for large initial conditions . in particular for large enough initial conditions , system can actually blow - up in finite time .next we demonstrate finite time blow - up in .[ t1 ] consider the three species food chain model given by , for any choice of parameters , including the ones satisfying theorem [ thm : u ] , and a , such that . given any initial data , there exists initial data , such that if this data meets the largeness condition then will blow - up in finite time , that is here the blow - up time consider the equation for the top predator in the event that , blow - up is obvious . if , where or , blow - up is far from obvious .however still possible for large data . to see this note , if then will blow - up in finite time in comparison with the tricky part here is that can switch sign , and this is dependent on the dynamics of the middle predator , which changes in time . 
in order to guarantee blow - up , we must have that or equivalently we must guarantee that to this end we will work with the equation for here the interference term will come to our aid .note as trivially one has thus using the above we obtain multiplying the above through by , and integrating in the time interval ] to negative for some , we basically need to find the minimum referred to as the minimum turing point ( ) such that .this minimum turing point occurs when which when solved for we obtain which ensures is real and positive such that , by which we require either which ensures that therefore , if at hence - are necessary and sufficient conditions for to produce diffusion driven instability , which leads to emergence of patterns . also to first establish stability when , in each case has to be positive . herewe demonstrate turing patterns that form in 1d .the initial condition used is a small perturbation around the positive homogeneous steady state given as where .[ app1 ] we choose parameters , and simulate to obtain spatiotemporal patterns as seen in fig . [ 1 t ] .we next choose parameter values the parameters are : and simulate to obtain spatial patterns as seen in fig .[ 2 t ] . in fig[ 3 t ] we see that increasing interference causes fewer modes to become unstable and thus effects the turing instability .the goal of this section is to investigate spatio - temporal chaos in the model .spatio - temporal chaos is usually defined as deterministic dynamics in spatially extended systems that are characterized by an apparent randomness in space and time .there is a large literature on spatio - temporal chaos in pde , in particular there has been a recent interest on spatially extended systems in ecology exhibiting spatio - temporal chaos .however , most of these works are on two species models , and there is not much literature in the three - species case . note , that the appearance of a jagged structure in the species density , as seen in , which seems to change in time in an irregular way , does not necessarily mean that the dynamics are chaotic .one rigorous definition of chaos means sensitivity to initial conditions .thus two initial distributions , close together , should yield an exponentially growing difference in the species distribution at later time . in order to confirm this in, we perform a number of tests as in .we run from a number of different initial conditions , that are the same modulo a small perturbation .we then look at the difference of the two densities , at each time step in both the and norms .+ the simulations use two different ( but close together in norms ) initial conditions . the first simulation ( which we call )is a perturbation of by .the second simulation ( which we call ) is a perturbation of by .the densities of the species are calculated up to the time at each time step in the simulation we compute where are used .then , is plotted on a log scale . in doing so, we observe the exponential growth of the error .this grows at an approximate rate of .since this is positive then this is an indicator of spatio - temporal chaos .these numerical tests provide experimental evidence for the presence of spatio - temporal chaos in the classical model .figure [ contourchaos ] shows the densities of the populations in the -plane while figure [ contourchaoserror ] gives the error and its logarithm till .is shown as contour plots in the -plane .the long - time simulation yields spatio - temporal chaotic patterns . 
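the two - run sensitivity test described above can be sketched for a generic ode system as follows ; the lorenz system is used here only as a stand - in for the spatially discretised model , and the perturbation size and fitting window are arbitrary choices .

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, y, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, v, z = y
    return [sigma * (v - x), x * (rho - z) - v, x * v - beta * z]

def separation_growth_rate(y0, delta=1e-8, t_end=20.0, n=2000):
    """Integrate from y0 and from y0 + delta, then fit the exponential
    growth rate of the L2 distance between the two trajectories."""
    t_eval = np.linspace(0.0, t_end, n)
    a = solve_ivp(lorenz, (0, t_end), y0, t_eval=t_eval, rtol=1e-10, atol=1e-12)
    b = solve_ivp(lorenz, (0, t_end), np.asarray(y0) + delta, t_eval=t_eval,
                  rtol=1e-10, atol=1e-12)
    err = np.linalg.norm(a.y - b.y, axis=0)          # L2 distance at each time
    mask = (err > 0) & (t_eval < 0.5 * t_end)        # early, exponential stage
    return np.polyfit(t_eval[mask], np.log(err[mask]), 1)[0]

if __name__ == "__main__":
    print("approximate separation growth rate:",
          separation_growth_rate([1.0, 1.0, 1.0]))
```

a positive fitted slope of the logarithmic separation is the signature of sensitive dependence on initial conditions used in the text .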
( figures [ contourchaos ] , [ contourchaoserror ] : species densities in the x - t plane from the long - time simulation , and the error between the two runs together with its logarithm ; the error grows at an approximately constant exponential rate , confirming spatio - temporal chaos . ) in this section we will try to unravel the effect of delay both on the finite time blow - up dynamics and on the chaotic dynamics . there is a large literature on the effect of delay on both two and three species predator - prey models . upadhyay and agrawal investigated the effect of mutual interference on the dynamics of a delay induced predator - prey system , and determined the conditions under which the model becomes globally asymptotically stable around the nonzero equilibria . recently , jana et al . have made an attempt to understand the role of top predator interference and gestation delay on the dynamics of a three species food chain model . interaction between the prey and the middle predator follows the volterra scheme , while that between the top predator and its prey depends on the beddington - deangelis type functional response . upadhyay et al . studied the three species food chain model with a generalist type top predator and obtained that increasing the top predator interference stabilizes the system , while increasing the normalization of the residual reduction in the top predator population destabilizes the system . in our current investigations we choose a constant time delay , in various forms . these are demonstrated next . we perform all our simulations using the standard matlab routine dde23 , for delay differential equations . in this section we attempt to numerically investigate the effect of a constant time delay on the chaotic dynamics the system possesses . for this we choose to place the time delay in different parts of the functional response , of the top predator equation only . the first delayed model we consider is the following ; the equations for remain the same . for the investigations we have chosen in figure [ 1de ] and in fig . [ 2de ] . ( figures [ 1de ] , [ 2de ] : a small delay maintains a similar structure , while a larger delay takes the chaotic state ( with no delay ) into a stable focus . ) therefore via fig . [ 1de ] , fig . [ 2de ] we see that for a small time delay , the dynamics remain the same , but with an increase in the delay we can observe a radical change in the dynamics . next we aim to observe the effect of delay on the blow - up dynamics in the model . here we incorporated delay in its growth term . this seems plausible due to the gestation effect , as well as because this is the term that causes finite time blow - up . after the introduction of delay into the model we have ; ( figure : the delay takes a chaotic state ( no delay ) into a limit cycle . ) we have chosen a parameter set for which we have blow - up in the ode model without delay . upon introducing the delay ( ) in , we found that the system does not exhibit blow - up .
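the delay simulations in the text use matlab s dde23 ; the same kind of experiment can be mimicked with a simple fixed - step integrator that stores the history and looks the delayed state up from it . the toy equation below ( a logistic growth term with a constant delay ) and all parameter values are illustrative assumptions , not the delayed three species model ; it only shows how a constant gestation - type delay can change the long - time dynamics .

```python
import numpy as np

def delayed_logistic(c=1.0, K=1.0, tau=2.0, r_hist=0.1, dt=1e-3, t_end=60.0):
    """Hutchinson's delayed logistic dr/dt = c*r(t)*(1 - r(t - tau)/K),
    integrated with forward Euler and a constant pre-history r_hist."""
    n_delay = int(round(tau / dt))
    steps = int(round(t_end / dt))
    r = np.empty(steps + 1)
    r[0] = r_hist
    for k in range(steps):
        r_lag = r_hist if k < n_delay else r[k - n_delay]
        r[k + 1] = r[k] + dt * c * r[k] * (1.0 - r_lag / K)
    return np.linspace(0.0, t_end, steps + 1), r

if __name__ == "__main__":
    for tau in (0.5, 2.0):   # small vs large delay (assumed values)
        t, r = delayed_logistic(tau=tau)
        tail = r[-5000:]     # last few time units
        print(f"tau={tau}: late-time min/max = {tail.min():.3f} / {tail.max():.3f}")
```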
the chosen parameter set for all of our simulations is . in the current manuscript we propose an alternate theory that provides a partial answer to the paradox of the generalist predator . generalist predators are considered poor for biological control purposes , primarily due to mutual interference and their interference in the search of other specialist predators . how then might they be effective in controlling pest densities , as suggested by real field data ? we suggest that the interference might actually be a cause in their population explosion , enabling them in these excessive numbers to control the target pest . from a biological point of view , we think the crucial point is that by interfering in the search of specialist predators , they _ indirectly keep the pest density high enough _ , for themselves to excessively harvest the pest , resulting in a sharp growth of their own population , described mathematically by finite time blow - up . thus there seems to be an underlying feedback mechanism , between this indirect interference and their own harvesting . this is a subtle point that warrants further investigation . our work also opens an alternate approach to understanding the population explosion of species such as the cane toad , introduced originally for biological control . perhaps the toad , due to its excessive interference with other predators , was able to keep the population of its food source high enough , so that it could feed enough and grow unchecked . this explosive growth was also helped by the fact that the toad , being poisonous , was not easily predated upon . finite time blow - up here should be viewed as a mathematical construct , that is a conduit to understanding population explosions . one should not consider it in the literal sense , as a population can not become infinite in finite time . however , an equation describing populations , with such emergent behavior , can be a tool to understand excessive increases in population , such as precisely the situation with the cane toad . all in all we hope that the alternate approach we provide will help reconcile the conflict between theory and data , as concerns the effectiveness of the generalist predator as a biological control . bryan , m.b . ; zalinski , d. ; filcek , k.b . ; libants , s. ; li , w. ; scribner , k.t . , _ patterns of invasion and colonization of the sea lamprey in north america as revealed by microsatellite genotypes _ , molecular ecology , 14 ( 2005 ) , 3757 - 3773 . dorcas , m.e . willson , r.n . reed , r.w . snow , m.r . rochford , m.a . miller , w.e . mehsaka , jr . andreadis , f.j . mazzotti , c.m . romagosa and k.m . , _ severe mammal declines coincide with proliferation of invasive burmese pythons in everglades national park _ , proceedings of the national academy of sciences 109 ( 2012 ) , 2418 - 2422 . letnic , m. , webb , j. and shine , r. _ invasive cane toads ( bufo marinus ) cause mass mortality of freshwater crocodiles ( crocodylus johnstoni ) in tropical australia _ , biological conservation , 141 ( 2008 ) , 1773 - 1782 . lang , a. 1997 . invertebrate epigeal predators in arable land : population densities , biomass and predator prey interactions in the field with special reference to ground beetles and wolf spiders . ph.d . dissertation , ludwig - maximilians - universitat , munchen , germany . parshad , r. d. , kumari , n. and kouachi , s.
, _ a remark on study of a leslie - gower - type tritrophic population model [ chaos , solitons and fractals 14 ( 2002 ) 1275 - 1293 ] _ , chaos , solitons fractals , 71 ( 2015 ) , 22 - 28 .rodda , g.h . ; jarnevich , c.s . ; reed , r.n . , _what parts of the us mainland are climatically suitable for invasive alien pythons spreading from everglades national park ? _ , molecular ecology , 14 ( 2005 ) , 3757 - 3773 .emsens , w. , hirch , b. , kays , r. and jansen , p. _ prey refuges as predator hotspots : ocelot ( _ leopardus pardalis _ ) attraction to agouti ( _ dasyprocta punctata _ ) dens _ acta theriologica , vol 59 ( 2014 ) , 257 - 262 .white , k. a. j. and gilligan , c.a . , _ spatial heterogeneity in three species , plant parasite hyperparasite , systems _ , philosophical transactions of the royal society of london .series b : biological sciences 353.1368 ( 1998 ) , 543 - 557 .
|
an interesting conundrum in biological control questions the efficiency of generalist predators as biological control agents . theory suggests , generalist predators are poor agents for biological control , primarily due to mutual interference . however field evidence shows they are actually quite effective in regulating pest densities . in this work we provide a plausible answer to this paradox . we analyze a three species model , where a generalist top predator is introduced into an ecosystem as a biological control , to check the population of a middle predator , that in turn is depredating on a prey species . we show that the inclusion of predator interference alone , can cause the solution of the top predator equation to blow - up in finite time , while there is global existence in the no interference case . this result shows that interference could actually cause a population explosion of the top predator , enabling it to control the target species , thus corroborating recent field evidence . our results might also partially explain the population explosion of certain species , introduced originally for biological control purposes , such as the cane toad ( _ bufo marinus _ ) in australia , which now functions as a generalist top predator . we also show both turing instability and spatio - temporal chaos in the model . lastly we investigate time delay effects . suman bhowmick , emmanuel quansah , aladeen basheer , and rana d. parshad department of mathematics , clarkson university , potsdam , new york 13699 , usa . ranjit kumar upadhyay department of applied mathematics , indian school of mines , dhanbad 826004 , jharkhand , india .
|
the sensitivities of second - generation ( advanced ligo , advanced virgo , geo - hf , and lcgt ) and third - generation ( einstein telescope ) interferometric gravitational wave detectors will be partly limited by thermal fluctuations in the mirrors .the pioneering articles on this issue were dedicated to the investigation of the mirror _ substrate _ fluctuation : brownian thermal noise and thermo - elastic noise .fundamental thermal motion ( brownian motion ) of material atoms or molecules causes brownian noise .fundamental thermodynamic fluctuations of temperature lead to thermo - elastic noise through the material s thermal expansion .similarly thermo - refractive noise is caused by temperature fluctuations leading to fluctuations of the refractive index and therefore fluctuations of the optical path length inside the material .these results were obtained for the model of an infinite test mass , i.e. the mirror was considered to be an elastic layer with infinite width and finite thickness .all of these results were generalized for a finite - size mirror model .very soon the importance of mirror _ coating _ thermal noise was realized as its parameters may differ considerably from the mirror substrate parameters . despite its low thickness , the very high loss angle of the mirror coating materials ( usually sio and ta ) makes the coating brownian noise the most significant one among all kinds of mirror thermal noise .the thermo - elastic noise of the coatings only has a small contribution in the total noise budget .later , kimble proposed the idea of thermal noise compensation which was explored carefully in for a particular case of thermo - elastic and thermo - refractive noise . the coating brownian noise is still one of the main contributions to the noise spectra of gravitational wave observatories .one of the most promising approaches aimed to decrease its level was offered by khalili who proposed to replace the end mirror in the interferometer arm with a short fabry - prot cavity tuned to anti - resonance ( see center panel of figure [ fig : schematics ] ) . 
in practicemost light is reflected from only a few first layers ( farest from substrate ) and all others ( located closer to substrate ) only reflect a small part of light .however , since the thermal fluctuations are proportional to the total thickness of the coating and the inner layers of coating are the main contribution to the phase fluctuations of the reflected light , the transmittance of each mirror can be higher to realize the same reflectivity of the system as a compound end mirror .the total thickness of coatings in such _ khalili cavity _mirrors is the same as the thickness of the conventional mirror , while the brownian noise of the end mirror of a khalili cavity is significantly reduced because the thickness fluctuations of the second end mirror coating ( eem in figure [ fig : schematics ] ) do not influence the fluctuations of the input mirror coating ( iem in figure [ fig : schematics ] ) .moreover , using a rigidly controlled khalili cavity allows a reduction of coating brownian noise .one of the main problems in the khalili cavity is to establish a low - noise control of the mirror positions ( see detailed explanation in sec .[ thermlens ] ) .a potentially easier way is to use a _ khalili etalon _ ( ke ) instead of a khalili cavity ( kc ) or a simple conventional mirror ( cm ) .the idea is to use a single mirror but to split the coating into two parts ( see right hand panel of figure ( eem in figure [ fig : schematics ] ) : _ the front coating _ ( on the front substrate surface ) features just a few layers and _ the rear coating _ ( on the rear substrate surface ) consists of the rest of the required coating layers .the purpose of this article is to develop an idea of the khalili etalon , to calculate the total mirror thermal noise arising in a ke and in a cm , and to compare them .we investigate the idea of using a ke in the einstein telescope ( et ) and advanced ligo ( aligo ) . in sec .[ optim ] we describe the mirror parameter optimization procedure , namely the optimal number of layer pairs in the front coating . in sec .[ tninke ] we describe the details of the thermal noises arising in the ke and cm calculations .section [ thermlens ] is dedicated to the problem of thermal lensing which is much more important in a ke than in a cm . in sec .[ conc ] we discuss the obtained results and draw the conclusions . finally , some calculation details are provided in the appendices [ appa]-[appb ] .the main idea of using a ke is to reduce the mirror s total thermal noise without reducing its reflectivity . by _total thermal noise spectral density _we mean the sum of the brownian , thermo - elastic and thermo - refractive noise spectral densities . 
coating brownian noise is caused mostly by the fluctuations of the entire coating thickness .it would then seem evident that brownian noise be lower when the front coating contains less layers and hence the lowest noise be achieved for the coating totally displaced to the rear mirror surface .this is in principle true but at the same time some other noises , such as substrate thermo - refractive noise , rise dramatically causing the total noise level to rise also .moreover , the less layers one puts onto the front coating , the higher will be the absorption in the substrate .so there has to be an optimum of how to best distribute the coating layers between the front and back surfaces in order to obtain minimal total thermal noise and not too much of absorption in substrate .the aim of this section is to find this optimum configuration .the only way to find the optimal number of front coating layers , , is to compare the thermal noise for every .this requires the calculation of the different noise contributions as functions of the front coating layers number .the most basic principles we used are : ( i ) the total number of ta and sio layers , , is fixed , i.e. we used the coating structure planned for both et and aligo and modified it to fit the double coating paradigm : 20 ta and 18 sio quarter - wave layers plus the substrate ( it is considered as an ordinary but `` slightly '' thicker coating layer ) and plus two caps consisting of a half - wavelength sio layer ( for the cm it would have been 20 ta layers and 19 sio layers plus one cap ) ; ( ii ) a quarter - wavelength ta layer and a quarter - wavelength sio layer are alternately coated on the front or rear surface so that there are always an odd number of front coating layers ( 1 , 3 , 5 etc . ) and also an odd number of layers of the rear coating ( 37 , 35 , 33 etc . ) .please note that the substrate and caps are not included in these numbers ; ( iii ) the number of layers of the front coating is the argument and the total thermal noise driven mirror displacement is the function of it ; ( iv ) we consider only brownian , thermo - elastic and thermo - refractive noises ( being the most significant contributions ) , and ( v ) we used the mirror of a finite - size cylinder , the model of which has been developed in ref . , and calculated all noises numerically using the fluctuation - dissipation theorem ( fdt ) as it is briefly described in secs .[ optbn]-[optabs ] .the optimal number of front coating layers appeared to be , i.e. ta layers and sio layer plus a cap in the front coating and ta layers and sio layers plus a cap in the rear coating . with the technical feasibility taken into account ( see sec .[ optabs ] ) , however , it turns out that the system with layers on the front surface ( i.e. ta layers and sio layers plus a cap on the front mirror surface and ta layers and sio layers plus a cap on the rear surface ) will be better and we analyze the system with in detail . 
in this case the mirror thermal noise does not reach its minimum but it is only about % higher . the total coating thermal noise of the etalon will be the sum of the noise on the front surface and the noise on the back surface : here is the displacement of the front surface of the mirror and the displacement of the boundary surface between the rear surface of the mirror substrate and the coating on it . considering the ke as a fabry - perot cavity consisting of two mirrors with amplitude reflectivities ( front coating ) and ( rear coating ) tuned to anti - resonance , one can calculate the coefficients and ( see details in ref . ) : $$\epsilon_1 = \frac{\dots + r_2 ( 1 - n_s ) }{ ( 1 + r_1 r_2 )^{2} } \ , , \qquad \epsilon_2 = \frac{ n_s r_2 ( 1 - r_1^{2} ) }{ ( 1 + r_1 r_2 )^{2} } \ . \label{e1e2}$$ here $n_s$ is the substrate refractive index . note that and are functions of the number of front and rear coating layers . in particular , we have the following formulas for and as functions of the number of the front coating layers ( is the number of the rear coating layers ) : where and are the ta and sio coating layer refractive indices . hence , in order to calculate the spectral density of the displacement caused by thermal noise using the fdt , one has to apply the forces to the front and rear coatings correspondingly and to calculate the total dissipated power . for the calculation of the spectral density of brownian coating noise the dissipated power may be calculated through the elastic energy stored in each -th layer ( of the front or rear coating ) : where and are the strain and stress tensor components ( only the non - zero components are shown in the formula above ) , is the mirror radius and is the thickness of the -th layer . the components and are calculated as described in detail in . then the brownian noise spectral density may be evaluated as follows : where is boltzmann s constant , is the absolute temperature and is the loss angle describing structural losses in the -th layer . the sum is taken over all layers , i.e. is the number of layers without the substrate and the caps . the total number of summands is therefore ; layers in the front and rear coatings , plus layer - caps and layer - substrate . so the index refers to the front coating cap , the indices refer to the front coating quarter wavelength ( qwl ) layers , the index refers to the substrate , the indices refer to the rear coating qwl layers and the index refers to the rear coating cap . there are summands ; the ones with are to be considered for coating brownian noise and the one with index is to be considered for substrate brownian noise : here is the loss angle of the substrate , while and represent the loss angles of the ta and sio layers , respectively . the values are presented in table [ paramphys ] . the thermo - elastic ( te ) noise calculations for the substrate and for the coating are similar . in order to calculate the dissipated power one should calculate the diagonal components of the strain tensor for each layer ( including the substrate and the caps ) and take the trace : then one may find the power dissipated through the te mechanism : where is the thermal conductivity , is the thermal capacity per unit volume , is the young s modulus , is the poisson s ratio , is the thermal expansion coefficient , is the density , and the index denotes the number of the layer .
therefore , the te noise spectral density will simply be : similar to the brownian noise calculations , the summands with the indices are relevant for the coating te noise while the one with the index needs to be considered for the te noise of the substrate : tr noise originates from thermodynamic fluctuations of the temperature in the substrate , producing phase fluctuations of the reflected wave phase via the temperature dependence of the substrate s refraction index .likewise , the phase fluctuations may be recalculated into effective fluctuations of mirror surface displacement where the coefficient introduced in ( [ e1e2 ] ) , characterizes the light amplitude circulating inside the substrate and is the thermo - optic coefficient of the substrate .we calculate the thermo - refractive ( tr ) noise in the substrate using the model of an infinitely large plane in the transverse directions with thickness of .the spectral density of the temperature fluctuations in this model is shown in see eq .( e8 ) : where is the radius of the light spot ( intensity decreases with distance from center as ) and the parameters with subscript refer to the substrate .the tr noise spectral density for the substrate ( recalculated to displacement ) becomes benthem and levin have pointed out that some corrections should be applied to this formula .this corrections are based on the account of the fact that light inside the arm froms the standing wave and not a traveling wave .we can rewrite eq .( 2 ) of ref . in a simpler form with only the normal incidence and the circular beam being considered : in addition we have to consider the tr noise present in the coatings .for its estimate we use the following formula where is the wavelength of light in vacuum , is the averaged thermo - optic coefficient of the entire coating , and and are the thermo - optic coefficients of ta and sio layers , respectively .this formula is based on the assumption that only the first few layers contribute considerably to the thermo - refractive loss mechanism .it is obtained for a mirror with an infinite radial dimension and a finite height .this model is valid with good accuracy for cm .however , for ke we use the same formula as an order - of - magnitude estimation . in this subsectionwe present the results of our optimization process .using the proposed parameters we obtained numerical estimates of all noise sources discussed above for et and aligo .all geometrical design parameters for these interferometers are presented in table [ paramgeom ] .the physical constants and material parameters are summarized in table [ paramphys ] .first of all , we analyze the spectral density of the displacement noise for a khalili etalon ( ke ) as a function of the number of front layers .this noise analysis considers the sum of the noise sources listed in sec .[ n1opt ] : brownian , te and tr noises which are divided into a coating and a substrate contribution each .the ke total thermal noise is then compared to the results for a conventional mirror ( cm ) using a _ gain _ parameter which is defined as : this gain has to be maximized in order to enhance the detector sensitivity . 
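the layer - by - layer fluctuation - dissipation bookkeeping described above can be written compactly . the sketch below uses levin s standard result for structural damping , s_x ( f ) = 4 k_b t ( sum over layers of phi_j u_j ) / ( pi f f_0 ^ 2 ) , where u_j is the elastic energy stored in layer j under an oscillating probe force of amplitude f_0 ; the energies and loss angles entered below are placeholder numbers ( representative literature values , not the entries of table [ paramphys ] ) , so the printed figure only demonstrates the bookkeeping , not a prediction for et or aligo .

```python
import math

K_B = 1.380649e-23      # Boltzmann constant [J/K]
T   = 290.0             # assumed operating temperature [K]

def brownian_psd(f, layer_energies, loss_angles, f0=1.0):
    """Levin's FDT estimate of the read-out displacement PSD [m^2/Hz].

    layer_energies: elastic energy U_j [J] stored in layer j when an
    oscillating force of amplitude f0 [N] (with the beam's intensity
    profile) is applied to the probed face.
    loss_angles:    structural loss angle phi_j of each layer.
    """
    weighted_energy = sum(u * phi for u, phi in zip(layer_energies, loss_angles))
    return 4.0 * K_B * T * weighted_energy / (math.pi * f * f0**2)

if __name__ == "__main__":
    # placeholder per-layer energies and typical coating loss angles (assumed)
    U   = [2e-10, 1.5e-10, 1e-10, 1e-10]        # J
    phi = [2.3e-4, 4e-5, 2.3e-4, 4e-5]          # tantala / silica, alternating
    s_x = brownian_psd(100.0, U, phi)
    print("sqrt(S_x) at 100 Hz:", math.sqrt(s_x), "m/rtHz")
```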
in fig . [ gain ] we plot the gain as a function of the number of front coating layers for both et and aligo . please recall that the number of rear coating layers is constrained by the total number of coatings . ( fig . [ gain ] : the gain ( see ) in et ( blue stars ) and aligo ( red stars ) as a function of the number of front coating layers ; the number of rear coating layers results from the total number of layers minus the number of front layers ; parameters as in tables [ paramgeom ] and [ paramphys ] ; for et a maximum gain of appears for front coating layers , and for aligo the maximum gain of is also obtained for . ) one can see that for both detectors , et and aligo , the gain is obviously maximized for the case of front coating layers . then the maximum gain appears to be for et and for aligo . another important parameter to be taken into account is the substrate absorption . it describes the portion of light energy which is absorbed in the substrate with respect to the light incident to the mirror . the value of is accessible via the light power circulating inside the substrate . we know that this light power inside the etalon will be a factor of lower than the light power incident to the mirror . finally , the value may be evaluated using the absorption coefficient of the substrate material ( in ppm per cm ) and the substrate thickness ( in cm ) : note the factor of 2 in front of the substrate thickness , which occurs as the light passes the substrate twice : once forward and once backward . also keep in mind that the reflectivities and are functions of the number of front and rear coating layers , see formulas ( [ r1r2 ] ) . ( fig . [ losses ] : substrate absorption in ppm in et ( blue stars ) and aligo ( red stars ) as a function of the number of front coating layers ; the number of rear coating layers results from the total number of layers minus the number of front layers ; parameters as in tables [ paramgeom ] and [ paramphys ] . ) as fig . [ losses ] illustrates , decreases exponentially with an increasing number of front coating layers . for the optimum number of front coating layers the substrate absorptions in et and aligo equal ppm and ppm , respectively . it seems reasonable to assume that a loss coefficient of ppm is admissible . in this case we have to choose . indeed , using formula ( [ r1r2 ] ) with and coating parameters listed in tables [ paramgeom ] and [ paramphys ] we obtain : it means that for aligo ( circulating power mw ) the absorbed power is about w , while for et ( mw ) it is about w . consequently , everywhere below in this article we assume the number of front coating layers to be . this choice allows the gain to be for et and for aligo . ( tables [ paramgeom ] and [ paramphys ] : geometrical design parameters ( mirror dimensions , circulating power , coating structure with caps and sio layer ) and material parameters of the substrate , sio and ta layers ( temperature , absorption in ppm / m , thermal expansion , density , young s modulus , thermal conductivity and heat capacity ) ; the refractive indices are 1.45 ( sio ) , 2.035 ( ta ) and 1.45 ( substrate ) . )
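a rough numerical illustration of how the reflectivities and the substrate absorption depend on the coating split can be built from the standard quarter - wave admittance transformation ; this is not the closed - form expression ( [ r1r2 ] ) of the paper , and the layer ordering , layer split , substrate thickness and absorption coefficient used below are assumptions made only for the example .

```python
def qw_stack_reflectivity(layer_indices, n_in=1.0, n_exit=1.45):
    """Amplitude reflectivity |r| of a stack of quarter-wave layers at the
    design wavelength (normal incidence, lossless layers).

    layer_indices are ordered from the incident medium towards the exit
    medium; each quarter-wave layer transforms the load admittance y into
    n**2 / y (standard characteristic-matrix result). A half-wave cap is
    an 'absentee' layer at this wavelength and is simply omitted here.
    """
    y = n_exit
    for n in reversed(layer_indices):     # walk from the exit medium outwards
        y = n * n / y
    return abs((n_in - y) / (n_in + y))

def substrate_absorption_ppm(r1, r2, alpha_ppm_per_cm, thickness_cm):
    """Rough absorbed power fraction, assuming the circulating power inside
    the anti-resonant etalon is P_in*(1 - r1**2)/(1 + r1*r2)**2 (an assumed
    factor) and a double pass of length 2*h through the substrate."""
    circulating = (1.0 - r1**2) / (1.0 + r1 * r2)**2
    return circulating * 2.0 * thickness_cm * alpha_ppm_per_cm

if __name__ == "__main__":
    n_ta, n_si = 2.035, 1.45                  # indices quoted in the tables
    front = [n_ta, n_si, n_ta, n_si, n_ta]    # assumed 5-layer front split
    rear  = [n_ta, n_si] * 16 + [n_ta]        # assumed 33-layer rear stack
    r1 = qw_stack_reflectivity(front, n_in=1.0, n_exit=1.45)
    r2 = qw_stack_reflectivity(rear, n_in=1.45, n_exit=1.0)
    print("r1, r2 =", round(r1, 4), round(r2, 6))
    print("substrate absorption [ppm] =",
          round(substrate_absorption_ppm(r1, r2,
                                         alpha_ppm_per_cm=0.25,   # assumed
                                         thickness_cm=20.0), 2))  # assumed
```

with these assumed numbers the absorbed fraction comes out near the ppm level , of the order discussed above .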
for numerical estimateswe use the parameter data listed in tables [ paramgeom ] and [ paramphys ] .one clearly realizes that brownian thermal noise dominates the mirror thermal noise at almost all the frequency range from hz to khz .layers ( plus a cap ) and a rear coating of layers ( plus a cap ) .parameters used for this calculation are presented in table [ paramgeom ] and table [ paramphys].,scaledwidth=50.0% ] layers ( plus a cap ) and a rear coating of layers ( plus a cap ) .parameters used for this calculation are presented in table [ paramgeom ] and table [ paramphys].,scaledwidth=50.0% ]seperately , we also present the numerical results for all noise sources at a single frequency of hz in table [ noisesetal ] .we choose this frequency as a round number located in the frequency range with the highest sensitivity .this frequency value has already been used previously in this article in sec .[ optres ] for numerical estimates . ' '' '' + khalili etalon ( ke ) : & & + coating brown . , & & + substrate brown . , & & + coating tr , & & + substrate tr , & & + substrate te , & & + coating te , & & + ke total , & & + conv .mirror ( cm ) : & & + coating brown . , & & + substrate brown . , & & + coating tr , & & + substrate te , & & + coating te , & & + cm total , & & + brownian noise is the main object of our investigations as it dominates the sensitivies of both detectors ( et and aligo ) almost in the whole detection band .it can be calculated accurately using the model developed in .shortly , a calculation method is presented in sec .[ optbn ] .a further inspection of the spectral noise sensitivity plots reveals substrate brownian noise to be the second important noise process .thus , brownian noise dominates over te and tr noise absolutely and is the main factor limiting both interferometers sensitivities in the frequency domain near hz . taking the same noise sources into account as for our ke investigation ( see sec .[ optbn]-[opttrn ] ) we have calculated the numerical values for the spectral noise density of a corresponding cm . for clarity we do not present the single contributions but only the total mirror thermal noise in this section . using thermal noise values for ke and cm ( and ) and the definition of the gain parameter we compare ke and cm to estimate the benefit of using a ke . for the spectral densities calculated in table [ noisesetal ] we arrive at a gain of the sligtly larger gain value for aligo parameters may be qualitatively explained by the mirror geometry .so the et mirror is more sensitive to membrane deformations than the aligo mirror .this property allows to transfer brownian fluctuations from the rear coating to the front surface more effectively .indeed , the geometrical factor ( the fraction of diameter to thickness of mirror ) for et is larger than for aligo : in this subsection we would like to present a way to simply estimate the gain . 
for an order of magnitude estimate we may approximate the total thermal noise by brownian coating noise that prevails at all frequency ranges as we have seen .moreover , the thickness of sio layers and ta layers differs only about % while the ta loss angle is times higher than the loss angle of sio .therefore , we may very roughly approximate the total thermal noise with the sum of ta coating layers brownian noise ( recall that all of them are uncorrelated ) .in a first approximation one could assume that the front coating is responsible for the main contribution to the noise level .thus , the total thermal noise level should be proportional to the number of front coating ta layers only . for a cm this number is and for a ke .the ratio of the values should represent the gain of a ke : the estimated gain is larger compared to the accurate results ( [ getav ] ) .it may be explained by the fact that we do not account for elastic coupling ( through substrate ) between rear coating layers motion and front coating layers motion , i.e. the displacement of the front coating due to a deformation of the rear coating layers .one could say , the rear coating layers motion is _transferred _ to the front coating through the substrate .this coupling is moderated by the elastic properties of the latter .we introduce a transfer ratio to account for this elastic coupling .the variable ranges from for a khalili cavity ( kc ) to for a cm or zero-thickness substrate .we can calculate using the simple model of a cylindrical mirror whose front and rear surface are covered by equal layers ( same thickness and same elastic parameters ) .let us apply a single force at the front surface and keep the rear surface free of forces .one can calculate the elastic energies in the front layer and in the rear layer .obviously , the transfer ratio may be calculated as the ratio of both energies .this estimate gives : the ratio for aligo is smaller than for et .again this behaviour can be explained by the different geometry factors ( see estimates ( [ g ] ) ) . now instead of eq .( [ gappkc ] ) we can state a more accurate formula for the gain estimate taking into account the elastic coupling of the rear coating layers : [ gappke ] we see that the approximated gain values coincide with the accurate values ( [ getav ] ) within an accuracy of about % .real gains in et and aligo are lower than the expected approximated values because of other noise sources that were omitted here ( brownian substrate and brownian coating of the sio layers , substrate and coating te and tr noise ) . note that the elastic coupling does not take place in a kc where both coatings are mechanically separated by vacuum . consequently for both detectors , et and aligo , the usage of a kc instead of a cm is expected to show a gain value of ( as ) . in this sectionwe quantitatively analyse the overall sensitivity improvement potentially achievable by replacing the conventional end mirrors by ke in aligo and et .figure [ aligo_comparison ] shows the potential sensitivity improvement of aligo for the use of kc as end mirrors .the sensitivity curves have been created using the gwinc software and for a signal recycling configuration that is optimised for the detection of binary neutron star inspirals ( see configuration 2 in ) . 
only the two main noise contributions are shown : quantum noise ( black trace ) and coating brownian noise ( sum of all test masses ) ( red trace ) , as well as the total noise ( blue traces ) . please note that all other relevant noise sources have been included in the calculations of the total noise traces , but have been omitted from the plot for clarity . the dashed lines indicate the strain levels for the standard aligo design , while the solid lines show the potentially reduced noise levels originating from the application of ke as end test masses , as described in this article . the main difference between these two scenarios originates from the reduction of coating brownian noise by a factor 2.18 , as described by the values in the right hand column of table [ noisesetal ] . please note that thermal noise contributions from the input mirrors stay identical for the two scenarios . the corresponding increase in the binary inspiral range ( 1.4 solar masses , snr of 8 , averaged sky location ) is about 15% and therefore yields a relative increase of the binary neutron star inspiral event rate of about 50% . figure [ et_comparison ] shows the sensitivity improvement of a potential et high frequency detector as described in for the replacement of the conventional end mirrors by kes . following the values given in table [ noisesetal ] we considered a flat coating brownian noise reduction factor of 1.76 for the end test masses , while again we assumed the thermal noise of the input test masses to stay constant . this yields an overall reduction of the total thermal noise of all test masses of about 25% and an increase in the observatory sensitivity of up to 20% in the most sensitive frequency band between 50 and 400 hz . we find an increase in the binary neutron star inspiral range of 15% from 1593 to 1833 mpc . this corresponds to an increase in the binary neutron star inspiral event rate of about 50% . figure [ fig : schematics ] shows the simplified schematics of an aligo or et interferometer with different end mirror configurations . replacing the conventional end mirrors by kcs would have a significant impact on the required hardware .
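the step from a 15% increase in inspiral range to roughly 50% more detections follows from the usual assumption that detectable sources are distributed uniformly in volume , so the event rate scales with the cube of the range ; a one - line check :

```python
# event rate ~ (accessible volume) ~ range**3 under a uniform source density
for label, gain in (("aligo (+15%)", 1.15), ("et (1833/1593)", 1833.0 / 1593.0)):
    print(f"{label}: event-rate factor ~ {gain**3:.2f}")   # ~1.5, i.e. about +50%
```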
instead of a single end mirror suspended from a single seismic isolation system per end mirror , in the case of the kc two mirrors with two full seismic isolation systems are required at the end of each arm cavity .this means that with kcs there are six ( 2x i m , 2x iem , 2x eem ) instead of four optical elements ( 2x i m , 2 em ) , which require the maximal seismic isolation .the concept of the ke allows us to still significantly reduce the thermal noise contribution of the end mirrors , while being compatible with the already available seismic isolation systems .therefore , it is in principle possible to upgrade a 2nd or 3rd generation gravitational wave detector by replacing conventional end mirrors by kes without altering or extending the vacuum systems and seimsic isolation systems .in addition to the reduced hardware requirements , the main advantage of the kes with respect to kcs is the potential simplification of several aspects related to the interferometric sensing and control .upgrading aligo or an et interferometer from its standard configuration to employ kcs increases the length degrees of freedom of the main interferometer from five ( darm ( differential arm length ) , mich ( michelson cavity length ) , srcl ( signal recycling cavity length ) , carm ( common arm length ) , prcl ( power recycling cavity arm length ) ) to a total of seven .it is worth mentioning that the additional two degrees of freedom actually have a very strong coupling to the differential arm length channel of the interferometer .for the example of the coating distribution discussed in this article , the length of the kc needs to be stabilized with an accuracy of only a factor 10 less than what is required for the main arm cavities .that means the length of the kc needs to be orders of magnitude more stable than for example the differential arm length of the central michelson interferometer . in order to achieve this demanding stability of the kc onehas to make use of highly dedicated readout and control schemes .special care needs to be taken to avoid introducing potential control noise at the low frequency end , which could potentially spoil the overall sensitivity of the gravitational wave detector . substituting the kc , consisting of two individual mirrors potentially encountering independent driven motion ( for example seismic ) , by the proposed ke would ensure that both relevant mirror surfaces would be rigidly coupled via the etalon substrate .therefore , the length of the ke would be much less susceptible to seismic disturbances or gravity gradient noise , as compared to the length of kc .also in terms of potential control noise the ke is advantageous over the kc . in case of the kcthe mirror positions would have to be controlled by means of coil magnet actuators or electro - static actuators , which can potentially introduce feedback noise at frequencies within the detection band of the gravitational wave detector .in contrast the length of the ke can be locked by controlling the etalon s substrate temperature ( using the temperature dependency of the index of refraction ) .since the etalon substrate acts as a thermal low pass , the etalon length will be extremely constant for all frequencies within the detection band of the interferometer . however ,not only the length sensing and control is highly demanding in case of a kc , but also the alignment sensing and control . 
again the key point here is to find a high signal to noise error - signal and then applying low noise feedback systems to keep the mirrors of the kc aligned in pitch and yaw . as the kc would be rather short compared to the main arm cavities , the kcs would unfortunately feature a high mode degeneracy , i.e. it would not only be resonant for the desired tem mode , but also for higher order modes , which would further increase the alignment requirements . using a ke would potentially allow us to transfer the alignment control problem from the detector operation to the manufacturing process of the etalon . if it would be possible to manufacture an ke with sufficiently parallel front and back surface , we would not need to actively control the relative alignment of the ke surfaces during operation .the two parameters that would be most relevant are the relative curvature mismatch of the etalon front and rear surfaces as well as the parallelism of the two surfaces .as we have shown in the curvature mismatch is the dominating factor for the etalon s performance . in this sectionwe will compare the thermal lensing of the kc configuration to the one of a ke . in the case of the kcwe have the following absorption processes : ( i ) iem front coating , ( ii ) iem substrate , ( iii ) iem anti - reflex coating on its rear surface and ( iv ) front coating of eem . in the case of the proposed kethe situation is pretty similar apart from process ( iii ) , which does not exist . in the following we will show by means of fem that the actual thermal lensing induced into the ke is of the same order , but slightly smaller than in the case of the kc . the fem used here treats the mirror as a substrate . the coatings and the laser beam are included as heat sources . for the reflective coatings we assume an absorption of 0.5 ppm , and 1 ppm for the anti - reflective coating in the kc .the fem assumes an emissivity , an ambient temperature of 300 k and uses the parameters from table [ paramgeom ] and [ paramphys ] for the values of aligo . after computing the temperature and displacements of the finite elements , the optical path difference ( opd ) is derived . for the opd we included the temperature dependence of the refractive index ( which is the dominant thermal lensing effect in fused silica ) and the expansion , while we omitted the elasto - optic effect .we also did not include surface to surface radiation in the kc .this would make the thermal lens worse , and is therefore safe to exclude in order to make a conservative comparison between kc and ke .[ fig : fem ] shows the temperature distribution in a kc and a ke for the aligo parameters with n=5 , while fig .[ fig : therm_lens ] presents the corresponding opd for a single pass due to the thermo - optic effect and expansion of the substrate , as well as a fit of an opd that would be caused by an ideal thin lens .the fits are least square fits , weighted by the beam intensity . for the aligo parameters with n=5 , the thermal lensing in the kc can be described by a thermal lens with a focal length of m , while the opd in the ke can be fitted by a thermal lens with a focal length of m. 
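the intensity - weighted thin - lens fit used for these focal - length numbers can be sketched as follows ; the opd profile generated below is synthetic ( in the real analysis it comes from the fem ) , and the beam radius , mirror radius and target focal length are placeholder values , so the sketch only illustrates the fitting step opd ( r ) = a - r^2 / ( 2 f ) with gaussian - beam weights exp ( -2 r^2 / w^2 ) .

```python
import numpy as np

def fit_thin_lens(r, opd, w):
    """Weighted least-squares fit of OPD(r) ~ a - r**2 / (2 f).

    Weights follow the beam intensity exp(-2 r**2 / w**2). Returns the
    fitted focal length f in the same length unit as r and opd."""
    weights = np.sqrt(np.exp(-2.0 * r**2 / w**2))
    # linear model OPD = a + b*r**2 with b = -1/(2 f)
    A = np.vstack([np.ones_like(r), r**2]).T
    coeff, *_ = np.linalg.lstsq(weights[:, None] * A, weights * opd, rcond=None)
    a, b = coeff
    return -1.0 / (2.0 * b)

if __name__ == "__main__":
    r = np.linspace(0.0, 0.17, 200)            # m, assumed mirror radius
    opd_true = 1e-7 - r**2 / (2.0 * 8000.0)    # synthetic 8 km lens
    opd = opd_true + 2e-10 * np.sin(40.0 * r)  # mild non-parabolic residue
    print("fitted focal length [m]:", round(fit_thin_lens(r, opd, w=0.06), 1))
```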
the respective values for et are a focal length of 1797 m for the kc and 1838 m for kes .it follows that the induced thermal lensing in the kc and ke is of similar strength .as we have shown above , the thermal lensing for the ke is slightly weaker than for the kc .as one can see from the magnitude of the induced thermal lensing , the compensation of this effect will be extremely challenging in both cases .potential ways of mitigating the thermal lensing could include innovative approaches such as radiative cooling or pre - shaped mirror or etalon substrates , which feature the wrong curvature , when being cold , but develop the correct shape when operated at the designed optical power .the compensation of thermal lensing has turned out to be more challenging than anticipated in the first generation gravitational wave detectors .only , the practical experience that will be collected with the advanced detectors will allow us to realistically judge the feasibility of kcs as well as kes .however , the main purpose of the thermal lensing analysis presented here was to show that the thermal lens will not be worse , but slightly better for the proposed ke compared to a kc .in this article we have investigated the main thermal noise sources arising in the mirrors of the two next generation gravitational wave detectors : advanced ligo and einstein telescope .the thermal noise sources include brownian , thermo - elastic and thermo - refractive noise of the mirror coatings and the mirror substrate , among which the brownian coating noise is the largest .we applied our model developed in to study the idea of the khalili etalon to decrease the coating thermal noise and to improve the sensitivity .the optimum ke configuration minimizing the total thermal noise level was found to be with ta layers and sio layer plus a cap in the front coating and with ta layers and sio layers plus a cap in the rear coating .however , since the substrate absorption in et with such a configuration is w , and that in aligo is w , our choice is not to use the optimal but a slightly different coating distribution with , i.e. ta and sio layers plus a cap on the front surface and ta and sio layers plus a cap on the rear surface .the absorbed power in the substrate with such a configuration is w for et and w for aligo .such an absorption is around 1 ppm which seems to be reasonable price for the thermal noise enhancement .the total noise spectral density of et and aligo can be improved by the factors of and , respectively , compared with the cases of conventional end mirrors .moreover , we have checked our numerical calculations with a very simple qualitative consideration designed to make an order of magnitude estimation .this estimation shows an agreement of better than percent with the exact numerical calculations . 
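as a rough cross - check of the detection - rate figure quoted in the next paragraph : for a broadband sensitivity improvement , the distance out to which a source can be seen scales inversely with the amplitude noise , so the accessible volume , and hence the expected detection rate , grows with the cube of the sensitivity gain . the numerical gain used below is only an illustrative assumption , since the exact improvement factors are not reproduced in this copy of the text .

```python
# illustrative assumption: a ~15% broadband amplitude sensitivity improvement
sensitivity_gain = 1.145
rate_gain = sensitivity_gain**3          # detection rate scales with accessible volume
print(f"detection rate improvement: {100 * (rate_gain - 1):.0f}%")   # ~50%
```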
the use of kes instead of conventional end mirrors would improve the detection rate of binary neutron star inspirals with the future gravitational wave observatories by about 50% . we also discussed the feasibility of the khalili etalon compared with that of the khalili cavity . the ke is more advantageous in terms of the hardware requirements . we also compared the thermal lensing effects in the ke and in the kc and found that the former is slightly better because it does not have the anti - reflective coating that the kc requires on the rear surface of the front mirror . in fact , the thermal lensing problem in either case is quite severe and we should explore a way to compensate the lensing effect without imposing excess noise . in this paper we assumed that the light is reflected on the outer surface of each coating without taking into account reflections from inner layers . a more accurate analysis shown in ref . gives a value of coating brownian noise slightly lower ( about % ) . thermoelastic and thermo - refractive noises originate from a thermodynamical fluctuation of the temperature , and the correlation of the two noises can be non - trivial with a certain set of parameters . in this paper , the correlation was ignored and we treated the two noises individually , which is not a problem as one of them is much lower than the other in the case of the ke ( see table [ noisesetal ] ) . it should be noted , however , that the optimal ke configuration could be determined in such a way that thermoelastic noise and thermo - refractive noise would mutually compensate if the mechanical loss angles of the coating materials were 10 times lower than the current values . this work has been performed with the support of the european commission under the framework programme 7 ( fp7 ) capacities , project einstein telescope ( et ) design study ( grant agreement 211743 ) . a.g . gurkovsky and s.p . vyatchanin were supported by the ligo team from caltech and in part by the nsf and caltech grant phy-0967049 and grant 08 - 02 - 00580 from the russian foundation for basic research . d. heinert and r. nawrodt were supported by the german science foundation ( dfg ) under contract sfb transregio 7 . s. hild was supported by the science and technology facilities council ( stfc ) . h. wittel was supported by the max planck society . let us consider the ke as a fabry - prot interferometer with two mirrors ( namely two reflective coatings ) with amplitude transmittances and , and amplitude reflectivities and . the mirrors are separated by a medium with a refractive index , and the mean distance between the mirrors is . optical losses are assumed to be zero . the fluctuations of the coordinates of the front and rear mirrors are represented by and , respectively . the probe beam is incident on the front mirror ( coating ) and is partially reflected . we are interested now in the reflected beam . the weight coefficients and represent how much the fluctuations and contribute to the reflected beam , respectively . for a short cavity we can use a quasi - static approximation , which means that the motion of the mirrors is sufficiently slow compared with the relaxation rate of the cavity . we assume that the optical path between the mirrors is fixed to a quarter wavelength , i.e. . we can consider the cavity as a generalized mirror . obviously , the reflectivity of the generalized mirror depends on the fluctuations and . however , for the reflected beam we have to include the motion of the generalized mirror , that is , the common - mode motion of and .
the reflected beam shall be described as : we have already taken into account the fact that ( cavity is tuned in the anti - resonance ) and describes the variation of cavity length due to the mirror fluctuation .the fluctuations and are small enough compared with the cavity length so that we may expand ( [ b2 ] ) into series over and . keeping the linear terms only , we get ,\\ \epsilon_1 & = \frac{r_2(1-n_s)+r_1\big[1+(1+n_s)r_1r_2+r_2 ^ 2\big]}{\big(1+r_1r_2\big)^2},\label{e1}\\ \epsilon_2&=\frac{n_sr_2\big(1-r_1 ^ 2\big)}{\big(1+r_1r_2\big)^2},\label{e2}\end{aligned}\ ] ] which have been introduced in eq . .let us first consider a cm with altering layers of ta and sio with the refractive indices and , respectively , and the substrate with the refractive index .a multilayer coating consisting of layers with refractive indices and lengths is described by the same formulas for the transmission line consisting of ports with wave resistances and the distances .it is convenient to describe the transmission line with impedances and reflectivities .impedance of the -th layer can substitute the total impedance of all the layers between this layer and the substrate , which does not affect the other layers .it is convenient to start the calculation from the boundary of the substrate and the -th layer , and will be the equivalent impedance of all the mirror and will be the reflectivity of the entire system .there is a recurrent formula for impedances and reflectivities of neighboring layers ( neighboring ports of the transmission line ) : where is the phase shift in the -th layer . using andone may easily get the recursive formula .the substrate is considered as an infinite half - space , so its impedance is given by and hence its reflectivity is given by .then eqs . and yields each and .we are interested only in .the thickness of each layer in the high - reflective coating is a quarter - wavelength ( qwl ) , i.e. . then becomes : and thus the total coating reflectivity is : the cap does not change the impedance .the length of the cap is a half - wavelength ( hwl ) so that .using one may see that the impedance of the system with the cap is exactly the same as that without the cap : .at last , eq . for valid with or without the cap .now we are interested in reflectivities of the khalili etalon coatings ( that should be used as reflectivities and in a fabry - prot interferometer used in appendix [ appa ] ) .for both coatings we may use the same formula but with different number of layers and and different border refractive indices and : for the _ rear coating _ the vacuum plays the role of the substrate ( the rear infinite half - space ) and the substrate plays the role of the vacuum ( the front infinite half space ) .thus , , and :
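the impedance recursion sketched above is equivalent , for a lossless stack at normal incidence , to the standard characteristic - matrix ( transfer - matrix ) calculation , which is easy to implement . the sketch below is not the derivation used in the article ; the refractive indices are typical values for a ta2o5 / sio2 coating on a fused silica substrate and are assumptions for illustration . it reproduces the two statements made above : the reflectivity of a quarter - wave stack grows with the number of doublets , and a half - wave cap leaves the reflectivity unchanged .

```python
import numpy as np

def layer_matrix(n, phase):
    """characteristic matrix of one homogeneous layer (normal incidence, lossless)."""
    return np.array([[np.cos(phase), 1j * np.sin(phase) / n],
                     [1j * n * np.sin(phase), np.cos(phase)]])

def amplitude_reflectivity(layers, n_in=1.0, n_sub=1.45):
    """layers: list of (refractive index, optical thickness in units of wavelength),
    ordered from the incident medium towards the substrate."""
    m = np.eye(2, dtype=complex)
    for n, t in layers:
        m = m @ layer_matrix(n, 2.0 * np.pi * t)
    b, c = m @ np.array([1.0, n_sub])
    y = c / b                          # equivalent input admittance (impedance recursion)
    return (n_in - y) / (n_in + y)

# assumed indices: n_h ~ ta2o5, n_l ~ sio2, substrate ~ fused silica
n_h, n_l = 2.03, 1.45
qwl = 0.25                                               # quarter-wave optical thickness
stack = [(n_h, qwl), (n_l, qwl)] * 8 + [(n_h, qwl)]      # (HL)^8 H quarter-wave stack
cap = [(n_l, 0.5)]                                       # half-wave sio2 cap on top

r_stack = amplitude_reflectivity(stack)
r_capped = amplitude_reflectivity(cap + stack)
print(f"|r|^2 without cap: {abs(r_stack)**2:.6f}")
print(f"|r|^2 with hwl cap: {abs(r_capped)**2:.6f}")     # identical, as stated above
```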
|
reduction of thermal noise in dielectric mirror coatings is a key issue for the sensitivity improvement in second and third generation interferometric gravitational wave detectors . replacing an end mirror of the interferometer by an anti - resonant cavity ( a so - called khalili cavity ) has been proposed to realize the reduction of the overall thermal noise level . in this article we show that the use of a khalili etalon , which requires less hardware than a khalili cavity , yields still a significant reduction of thermal noise . we identify the optimum distribution of coating layers on the front and rear surfaces of the etalon and compare the total noise budget with a conventional mirror . in addition we briefly discuss advantages and disadvantages of the khalili etalon compared with the khalili cavity in terms of technical aspects , such as interferometric length control and thermal lensing .
|
the transient response of muscle to a sudden adjustment of its extension , or to an abrupt change in load , has been one of the most important sources of information about the mechanism of contraction for over three decades .ever since the pioneering work of , experimental data on transients has informed theoretical models of the interaction between myosin and actin , providing a more detailed picture than could be obtained from the force - velocity relation alone .the reason is that the actomyosin interaction involves several processes which occur on different time scales , and these individual components can be resolved during the transient response . the quickest process is the elastic deformation of the myosin crossbridges that link the thick and thin filaments .rapid transitions between two or more different bound states of the myosin molecule are thought to be the next fastest events , while detachment and reattachment of myosin heads occur on a slower time scale . in an experiment to determine the isometric transient response ,a muscle fiber is held at both ends to prevent it from contracting .the muscle is then suddenly shortened ( or stretched ) by a fixed amount , and the tension that it generates is measured .immediately after the imposed change of length , the tension shifts from the isometric value to a new value , which is termed .but shortly afterwards ( typically within ) , the tension adjusts to a new value , termed .subsequently , it gradually reverts to the original isometric value , and the entire transient response is usually completed in a fraction of a second .it is generally accepted that the initial response corresponds to the mechanical deformation of cross - bridges and provides a direct measure of their elasticity .the interpretation of is rather more controversial .it is often attributed to force generation by the working stroke of bound myosin molecules and this interpretation has recently gained support from x - ray interference techniques applied to shortening fibers .but alternative models suggest that the force regeneration might be due , in part , to the rapid binding of new myosin heads to the thin filament , or that it might involve the activation of the second myosin head . in this article, we wish to address a fundamental problem connected with the interpretation of force transients .the present theories are all based on the consideration of a single pair of filaments , i.e. one filament containing myosin molecules , interacting with one actin filament .the dynamics of this filament pair is generalized to that of a whole muscle fiber by assuming that all filament pairs in a fiber behave in exactly the same way .this assumption is certainly justified as long as there are no static or dynamic instabilities in the system .however , the possibility of such instabilities has been known for a long time . moreover , a stochastic model of the actomyosin cycle , based on the swinging lever - arm hypothesis , has shown that instabilities do arise when values of parameters such as the lever - arm displacement and the crossbridge elasticity are chosen to provide effective energy transduction .such instabilities would give rise to a region of negative slope in the curve of a single filament pair .several reasons have been advanced for the absence of any negative slope in the experimentally determined curve . 
argued that the power stroke is sub - divided into several small steps , and fixed the step size so that the curve had zero slope for limitingly small changes of length . in the model proposed by , the flatness of the curve was explained by a broad distribution of cross - bridge strain after attachment , combined with a specific strain - dependence of the transition rates to ensure the proper occupancies of the two bound states . a further explanation involved the compliance of the filaments and the distribution of binding sites on the thin filament in addition to a sub - divided power stroke . has suggested that the flat curve of a muscle fiber can arise despite an instability in the dynamics of a single pair of filaments , owing to the symmetry of a sarcomere . we investigate this possibility further in this article . simulation of the stochastic evolution of the system was performed using the gillespie kinetic monte carlo algorithm , which works as follows . in each simulation step the rates of all possible transitions are calculated . the time until the next event is chosen as a random number with an exponential distribution and the expectation value given by the inverse of the sum of all rates . the event itself is chosen randomly with a statistical weight proportional to its rate . in the situations with stiff ( non - compliant ) filaments and continuous binding sites the transition rates can be factorized into factors that depend only on the backbone position ( which are the same for all motors in a group ) and factors that only depend on the binding position of a motor ( which does not change with time unless that motor undergoes a transition ) . this allowed us to use a very efficient ( ) algorithm based on binary trees . we assumed complete mechanical relaxation of the system in each step , i.e. the strain of all elastic elements is equilibrated before the next transition takes place . the structure involving sarcomeres , filaments and myosin heads was described as a circuit of elements with given resting lengths and compliances . the strain of every cross - bridge was calculated , given the constraint of fixed total length of the system ( isometric conditions ) , or of fixed force acting on the ends of the system ( isotonic conditions ) . the transients were always determined after the stretch / release . all other parameter values are summarized in table [ tab : parameters ] .
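the gillespie step described in the methods paragraph above can be written down in a few lines . the sketch below is only a minimal illustration of that update rule ( exponential waiting time drawn from the total rate , event chosen with probability proportional to its rate ) ; the event list and rate values are placeholders , not the transition rates of the cross - bridge model , and it does not include the binary - tree bookkeeping mentioned above .

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_step(rates):
    """one kinetic monte carlo step: returns (time increment, index of chosen event)."""
    rates = np.asarray(rates, dtype=float)
    total = rates.sum()
    dt = rng.exponential(1.0 / total)                 # waiting time, mean = 1 / (sum of rates)
    event = rng.choice(len(rates), p=rates / total)   # event picked proportionally to its rate
    return dt, event

# placeholder example: three possible transitions with arbitrary rates [1/s]
t, history = 0.0, []
for _ in range(5):
    dt, event = gillespie_step([40.0, 2.5, 80.0])
    t += dt
    history.append((t, event))
print(history)
```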
at high values of , the model shows an interesting feature . in the isometric state the number of cross - bridges in the state a2 can be very low , although the generated force per attached cross - bridge is as high as , in agreement with single - filament measurements . this allows the muscle to support a load under isometric conditions with little atp consumption . it might at first sound paradoxical that most of the isometric force is generated by the pre - power - stroke state a1 . this is possible because the myosin heads bind with a stochastic distribution of strains . because those with a negative strain are more likely to undergo the power - stroke and then detach , the remaining ensemble produces a positive force . in this respect , the power - stroke actually serves to eliminate negatively strained cross - bridges rather than to generate positive force directly . this notion is in agreement with recent experiments , which have shown that a high phosphate concentration does reduce the isometric force significantly , but does not have a visible effect on the conformation of the myosin heads or even their catalytic domains . but let us stress again that this holds only for the isometric state . the force in a contracting muscle originates mainly from the state a2 . another important class of experiments which provides information on the actomyosin interaction involves the isotonic transient . here the applied force is initially set at the value of the stalling load , so that the fiber is prevented from contracting . the force is then suddenly changed and held constant at a different value , while the length of the fiber is recorded . some early experiments showed that following a small step change of load , damped oscillations were imposed on the steady contraction or extension of the fiber . such oscillations are particularly clear in recent experiments on single muscle fibers . one possible cause of this oscillatory response has previously been suggested on the basis of the stochastic model of the actomyosin interaction used in this article . when , the chemical cycles of myosin motors on the same filament can become synchronized at loads close to the stalling force . a pair of filaments then slides in a step - wise fashion under isotonic conditions . but during steady shortening , the motors on different filaments within the same muscle fiber operate out of phase , so that there is no macroscopic manifestation of the steps . however , an abrupt change in the load can cause the synchronization of a large fraction of the bound motors , whereupon the steps do become observable . because the correlation of the motors soon decays , the macroscopic steps fade and a damped oscillation is seen . a much stronger oscillatory response is seen in the regime where individual pairs of filaments perform oscillations in the near - isometric state , as described in sect . [ sec_osci ] . the synchronized oscillations can then be very pronounced following a small decrease in the load , as shown in fig . [ fig_forcestep]a . on the other hand , no damped oscillations are observed after a larger drop in the load , e.g. to ( fig . [ fig_forcestep]b ) , because the individual filament pairs immediately move out of the hysteretic regime . these properties are in agreement with recent experiments on the isotonic response of single muscle fibers carried out by .
[ figure : length response to step changes in load , with 50 myosin filaments acting in parallel ; the results were averaged over 125 events ( which has the same effect as simulating that number of sarcomeres in series ) . a small drop in the load synchronizes the cross - bridges and therefore causes observable macroscopic oscillations ( a ) ; the oscillations are less pronounced , but still visible , following a small increase in the load ( c ) . ] in steady isotonic conditions , another kind of instability can arise due to a hysteresis in the force - velocity relationship . the possibility of such an instability was first discussed in the context of a two - state ratchet model and subsequently in a kinetic cross - bridge model with a strain - dependent detachment rate . with the parameters we use , this hysteresis , though existent , covers a rather small range of velocities and only results in a small inflexion in the force - velocity curve .
for a range of values of the parameter close to unity, the isometric point falls in the interval where the slope of the curve is negative . in this caseindividual filament pairs oscillate with small amplitude . in the other regime , where the curve has positive slope at the isometric point , the individual filaments are stationary , apart from stochastic fluctuations .the macroscopic manifestations of these two regimes differ in few respects . because the oscillations of different filament pairs have different phases , oscillatory motion is not normally observable on the scale of a whole muscle fiber in steady conditions . however , a sudden change of load can synchronize the oscillations and thereby make them visible .the existence of damped oscillations in the isotonic transient response of single muscle fibers therefore argues in favor of the oscillating regime .we note , however , that damped oscillations can also be a manifestation of step - wise shortening , which can exist in both regimes . because the efficient transduction of energy demands , which is close to value of this parameter at the boundary of the two regimes , it is possible that both regimes exist depending on conditions such as the myosin isoform , phosphate concentration , ph , ionic strength and temperature .indeed , measurements by show a dependence of the oscillation decay on the solution ph and on muscle fatigue . further experiments , in which conditions are systematically varied , could shed more light on the mechanism of oscillation . as a final remark, we emphasize that according to our model , the curve of an individual pair of filaments differs from that of a muscle fiber .recently , assays have been developed to measure the force - velocity relation of a single filament within a half - sarcomere . in order to test our predictions, it would worthwhile to develop such high - precision techniques to measure the transient response of a single filament .a.v . would like to acknowledge support from the european union through a marie curie fellowship ( no . hpmfct-2000 - 00522 ) and from the slovenian office of science ( grant no .z1 - 4509 - 0106 - 02 ) .t.d . acknowledges support from the royal society .irving , m. , g. piazzesi , l. lucii , y. b. sun , j. j. harford , i. m. dobbie , m. a. ferenczi , m. reconditi , and v. lombardi .2000__. conformation of the myosin motor during force generation in skeletal muscle . 7:482 - 485 .piazzesi , g. , m. reconditi , m. linari , l. lucii , y. b. sun , t. narayanan , p. boesecke , v. lombardi , and m. irving .2002_a_. mechanism of force generation by myosin heads in skeletal muscle . 415:659 - 662 .
|
we investigate the isometric transient response of muscle using a quantitative stochastic model of the actomyosin cycle based on the swinging lever - arm hypothesis . we first consider a single pair of filaments , and show that when values of parameters such as the lever - arm displacement and the crossbridge elasticity are chosen to provide effective energy transduction , the curve ( the tension recovered immediately after a step displacement ) displays a region of negative slope . if filament compliance and the discrete nature of the binding sites are taken into account , the negative slope is diminished , but not eliminated . this implies that there is an instability in the dynamics of individual half - sarcomeres . however , when the symmetric nature of whole sarcomeres is taken into account , filament rearrangement becomes important during the transient : as tension is recovered , some half - sarcomeres lengthen while others shorten . this leads to a flat curve , as observed experimentally . in addition , we investigate the isotonic transient response and show that for a range of parameter values the model displays damped oscillations , as recently observed in experiments on single muscle fibers . we conclude that it is essential to consider the collective dynamics of many sarcomeres , rather than the dynamics of a single pair of filaments , when interpreting the transient response of muscle .
|
social relationships among people are composed of various weight of ties , as much as metabolic pathways or airline traffic networks .however , introducing proper weight for the relationships in social networks is not an easy task since it is hard to objectively quantify the relatedness among people .as people s activities on the web and communications via social networking service become more popular , information about the social relationships among people ( especially for famous figures , through news and blog sites ) becomes available and can be used as a source of high - throughput data .here , we suggest that the ability of search engines can be used for this task .search engines count / estimate the number of webpages including all the words in a search query , and this feature can be used to measure the relatedness between pairs of people in social networks in which we are interested .the more webpages that are found , the more popular or relevant the combination of the search query is .therefore , _ cooccurrence _ of two people in many personal webpages , news articles , blog articles , wikipedia , _ etc ._ on the web implies that they are more closely related than two random counterparts .there are several advantages of using search engines to construct social relatedness networks .first , with a list of names , one can systematically count the number of webpages containing two names simultaneously , extracted by search engines to assign the weights of all the possible pairs .this procedure enormously reduces the necessary efforts to extract social networks , compared with the traditional methods based on surveys .in addition , such automation makes analysis of enormous amount of data related to social networks possible and helps us to avoid subjective bias , such as the `` self - report '' format of personal surveys .furthermore , if one extracts social networks from a group of people on a regular basis over a certain period , the temporal change or stability of the relationship between group members in the period can be monitored .although it is possible that some error or artifacts , such as several people with the same name , are caused by this systematic approach , this can also be managed by adding extra information ( such as putting additional queries like the subjects occupations into the search engine , in such cases ) .furthermore , the _ cost _ of investigation with the search engine is much smaller .this example highlights the effectiveness , objectiveness , and accuracy of the usage of web search engines .based on the pairwise correlations extracted from google , we constructed and analyzed the weighted social networks among the senators in the 109th united states congress ( us senate ) , as well as some other social groups from academics and sports .our datasets are three representative communities with very different characteristics , i.e. , politicians , physicists , and professional baseball players .the us senate in the 109th congress ( http://www.senate.gov ) consists of senators , two for each state . among the physicists who submitted abstracts to american physical society ( aps ) march meeting 2006 , we selected the subset of authors who submitted more than two abstracts for computational tractability . finally , the list of major league baseball ( mlb ) players is the 40-man roster ( march 28 , 2006 ) with players ( http://mlb.com ) . 
to avoid the ambiguous situation where there is more than one person with the same name , the following distinguishing words or phrases were added to all the search queries for each group : the words are `` senator '' for us senators , `` physicist '' for aps authors , and `` baseball '' for mlb players . first , we recorded the number of pages searched using google for each member 's name , which were assigned as the google hits showing the fame of each individual member . the _ google correlation _ between two members of a group is defined as the number of pages searched using google when the pair of members ' names ( and the additional word ) is entered as the search query . in this case , google shows the number of searched pages including _ all the words in the search query _ . simply , this google correlation value is assigned as the link 's weight for the pair of nodes . if no searched page is found for a pair , the pair is not considered to be connected . note that the idea of using co - occurrence to quantify the correlation was presented before in systems biology or linguistics , but our work comprehensively approaches such a general concept and focuses on the digital records to extract information . the constructed weighted networks are usually densely connected : the link density , defined as the ratio of existing links to all the possible links among nodes ( , where is the number of nodes ) , is for the us senate , for aps authors , and for mlb players . [ fig . [ weightstrength ] : weight distributions for ( a ) us senate , ( b ) aps authors , and ( c ) mlb players , and strength distributions for ( d ) us senate , ( e ) aps authors , and ( f ) mlb players ; the pairs with the largest google correlation values ( a)-(c ) and the nodes with the largest strengths ( d)-(f ) are indicated . ] due to the high link density , elaborating on the weights of links or the strength ( the sum of the weights around a specific node ) of nodes to extract useful information is more important . figure [ weightstrength ] shows the weight and strength distributions for the weighted networks constructed by assigning the google correlation values as link weights . previous studies on other weighted networks show heavy tailed weight and strength distributions , and our networks also reveal such broad distributions spanning several orders of magnitude , although the details are different for each network . the degree and strength are basic quantities that estimate the importance of nodes in a weighted network . however , the weights on the links of two nodes with the same degree and strength are not necessarily identically distributed . in other words , just the number of links a node has ( degree ) and the sum of weights on the links the node has ( strength ) are not sufficient to fully capture the node 's character . for example , two central nodes in fig . [ disparity_example ] have exactly the same values of degree and strength , but the weight distributions around the nodes are totally different . quantifying such different forms of weight distributions is important because it can distinguish whether a node 's relationship with its neighboring nodes is dominated only by a small portion of neighbors or if almost all the neighbors contribute similarly to the node 's relationship . as an initial step to further investigation we are interested in the _ dispersion _ or _ heterogeneity _ of weights a node bears .
although this concept of disparity is not a new one , we suggest a more general framework of such quantities based on information theory .suppose a node has links whose weights are given by the set , where is the set of the node s neighboring nodes .the strength of the node is defined as .now , let us denote for each weight as the normalized weight . in the continuum limit of neighbor indices sorted by descending weights ( without loss of generality ) around the node whose set of weights is , ( the normalization condition becomes simply in this case ) if all the neighbor indices are re - scaled as ( meaning the entire network gets larger by the factor of , and the normalized weights become due to the normalization condition ) , the quantity ] .this scaling behavior is the same as the degree measure and , in fact , if all the weights are identical , the quantity is set to precisely become the degree .we have found a class of solutions satisfying such scaling conditions , which is the weighted sum to node , where the constant is a tunable parameter , and we denote this measure as the _ rnyi disparity_. if all the weights are equal , , which is just the degree of node , regardless of the value . as the weight distribution deviates from the uniform distribution , also deviates from the degree , the details of which depend on the parameter , of course . we will use this weighted sum as the measure of the heterogeneity in the weight distribution for each node . note that the logarithm of eq .( [ weightedsum ] ) , , coincides with the rnyi entropy in information theory , from which the name `` rnyi disparity '' originates .we have yet to decide the parameter for . in previous works , the quantity called disparity was defined for each node .its scaling behavior is that if the weights are uniformly distributed and if the weight distribution is severely heterogeneous .it is easy to see that the disparity in refs . is the reciprocal of a special case of our rnyi disparity , with the parameter , i.e. , the logarithm of this is also a special case of rnyi entropy , called the extension entropy and is related to the simple variance by .if we consider the limiting case of , we denote it as the shannon disparity of the node . in this limit, one can easily verify that one can immediately notice that the shannon disparity is the exponential of an even more familiar and widely accepted entropy in information theory , which is the shannon entropy .the scaling property of is similar to in eq .( [ y_measure ] ) and , in fact , for our three weighted networks the two quantities and are highly correlated : the pearson correlation coefficients are for us senate , for aps authors , and for mlb players . even though and are highly correlated in our example networks ,shannon disparity works better for inhomogeneous weight distribution than the rnyi disparity with .suppose the weight around a node follows the power - law relation for , where is the continuous version of the neighbor indices sorted by descending weights and the constant is set to the normalization condition . 
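the disparity measures introduced above have a compact numerical form once the weights around a node are normalized : the generalized measure with tunable parameter ( written q here , a notation introduced only for this sketch ) is the power sum of the normalized weights raised to 1/(1 - q ) , and the shannon disparity is the exponential of the shannon entropy , recovered as the q -> 1 limit . the example weight vectors below are arbitrary placeholders . for equal weights both measures reduce to the degree , as stated above .

```python
import numpy as np

def shannon_disparity(weights):
    """exp of the shannon entropy of the normalised weights (the q -> 1 limit)."""
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(np.exp(-np.sum(p * np.log(p))))

def generalized_disparity(weights, q):
    """exp of the order-q entropy of the normalised weights; equals the degree for equal weights."""
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    if np.isclose(q, 1.0):
        return shannon_disparity(p)
    return float(np.sum(p**q) ** (1.0 / (1.0 - q)))

w_equal = [1.0] * 6                 # equal weights: both measures give the degree, 6
w_skewed = [100.0, 5, 2, 1, 1, 1]   # one dominant link: effective number of neighbours close to 1
for w in (w_equal, w_skewed):
    print(generalized_disparity(w, 2.0), shannon_disparity(w))
```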
[ figure : the disparity measures as functions of the power - law exponent in the case of the power - law weight - index relation , computed from eqs . ( [ shannon_explicit ] ) and ( [ d_alpha_explicit ] ) . ] in this continuum limit , we can explicitly calculate the dependence of on the power - law exponent by direct integration . the vote correlation defined in eq . ( [ votecorrelation ] ) measures the correlation of opinions of senators and , and we can use it to infer the degree of cooperation . in fig . [ senator_mrs ] , we distinguish the links among senators with positive and negative vote correlation . from fig . [ senator_mrs ] , we observe that the positive vote correlation is almost always given to the senator pairs from the same party and the negative vote correlation to the senator pairs from different parties . among all the senator pairs , only are from different parties and have a positive vote correlation value , and with the same party and have a negative vote correlation value , which implies the partisan polarization discussed in ref . . investigating relationships via search engines is not restricted to a specific group of people . in addition , objects in a search query do not have to be restricted to people 's names . we demonstrate this fact by investigating the relatedness between politicians and large corporations , revealing possible connections between politics and business . for the sets of politicians , we selected 18 potential us presidential candidates in january 2008 and the 109th us senators . we chose the 100 largest corporations , as reported by _ fortune _ , as the set of corporations . the method of analysis is similar to the previous one , but in this case only the google correlation values _ between politicians and corporations _ are considered , in order to construct a so - called `` bipartite network . '' mrs is generated by collecting links from politicians to the corporations to which they are related most , and vice versa . another measure introduced is the normalized google correlation , which represents the relatedness with the effect of fame removed . this new measure is able to effectively prevent famous nodes from `` dictating '' the network . all the data for this analysis were collected in january 2008 . figure [ candcorp_nor_mrs ] shows the mrs from the normalized google correlation network of the us presidential candidates and the 100 corporations . john mccain , who has become the actual republican presidential candidate at the time of writing , does not have many connections with large corporations in mrs . however , the only corporation connected with him is northrop grumman , which recently won the joint tanker contract to assemble the kc-45 refueling tankers for the us air force with eads . because senator john mccain once uncovered a corrupt effort by boeing , which is northrop grumman 's rival company , the connection looks interesting . the thick bidirectional connection between senator hillary clinton and exxon mobil is likely due to the large amount of money contributed to senator clinton by the corporation .
in similar ways , such analysis might give some hints for further investigation of the relationship between politics and business . [ figure : mrs from the normalized google correlation network of the 109th us senators and the 100 corporations ; the democratic senators are colored blue , the republican senators red , and the corporations green . ] we also tried to elucidate community structures from the bipartite network between politicians and corporations as shown in fig . [ nor_corpsen_2m_comm ] . first we extracted the normalized google correlation values between the us senators and the 100 corporations . then we kept the links whose google correlation value is larger than , to obtain a sparser subnetwork for visualization . the community structure of the subnetwork was obtained by newman 's eigenvalue spectral method and the modularity is , which might reveal the subunits of politics - business connections . in this section , we provide evidence for the validity of social network construction by google correlation values . we obtained a scientific collaboration network among the authors of papers citing the five key papers in network theory . the 776 authors who wrote at least three papers were selected due to computational tractability . in this collaboration network , the pairs of authors who wrote papers together were connected and the weights were assigned as the numbers of collaborated papers . to test the reliability of the google correlation network among these authors , we constructed a weighted social network with the google correlation values . [ fig . [ coauthorgoogle ] : ( a ) google correlation values versus the number of collaborated papers for connected author pairs , with the dashed line showing a linear fit ; ( b ) the average google correlation values for each value of the shortest path length among pairs of nodes in the collaboration network , with error bars representing the standard deviations . ] the direct comparison between these two weighted networks ( the collaboration network and the google correlation network ) is nontrivial , partly because of the enormous difference in the link density , i.e. , the collaboration network is much sparser . therefore , we suggest two schemes for comparison . first , we check the correlation between the weight in the collaboration network ( the number of collaborated papers ) and the google correlation values for pairs of connected authors in the collaboration network . if the google correlation network represents the true relatedness , we expect a positive correlation between the two quantities , and fig . [ coauthorgoogle](a ) indeed shows a positive correlation . second , regardless of whether two nodes in the collaboration network are directly connected or not , the google correlation value and the shortest path length in the collaboration network for those two nodes are expected to be negatively correlated . figure [ coauthorgoogle](b ) confirms this expectation .
because the google correlation value represents the relatedness of two authors , the larger the google correlation value of the two authors , the nearer they are located in the collaboration network .these correlations , of course , are not perfect .however , we suggest that the difference does not indicate the error or limitation of the google correlation but reveals the actual difference between the collaboration and relatedness .two authors can have large google correlation value , even if they have never written papers together , if they work in the similar fields , show up at the same conferences many times , and thereby appear in the same `` participant list '' webpages of many conferences , for example . in summary, we have verified that our method actually reflects the structure of the real coauthorship network and have demonstrated the potential of our method . finally , we should mention caveats of our method .many webpages are not under the quality control and may contain misleading or alleged facts .therefore , our method should be considered as a _ proxy _ reflecting the real correlations . in other words, one has to be careful when dealing with the google correlation data and note that any conclusions drawn from the analysis should be followed by accurate follow - up investigations , like genome - wide computational predictions followed by high - quality , small - scale experiments in biology .however , in any case , we would like to emphasize that the google correlation values can be the first , useful and exploratory step towards further investigations . we also want to point out that it is possible to flexibly customize the definition of the correlation measure for different purposes , for instance , by dividing the raw cooccurrence value by their google hit values to get rid of their popularity effects whenever it is necessary , as suggested in the previous sections .another way to customize our method is to use more specific search engines .for instance , for the coauthorship relations , one can count cooccurrences from google scholar , which indexes only the scholarly literature .public relationships among politicians can be extracted more accurately by focusing on only the news articles . as an example, we constructed a network of korean politicians by counting the number of news articles from a korean online news service , and demonstrated that the two clear groups in mrs well correspond to political parties and each party s leader / influential person possessing central position with many incoming links .there is a tremendous amount of data on the web , which can prove very useful if we harness it cleverly .search engines are a basic device to classify such information and we have constructed social networks based on the google correlation values quantifying the relatedness of people .we have systematically analyzed the basic statistical properties from the viewpoint of weighted network theory , introduced a new quantity called the rnyi disparity to represent the different aspect of the weight distribution for individual nodes , and suggested mrs to elucidate the essential relatedness .we have used the us senate as a concrete example of our analysis and presented the results .the concepts of the rnyi disparity and mrs introduced in this paper are not restricted to the google correlation network , of course .the process of finding out `` hidden asymmetry '' of weighted links is applicable to other many weighted networks from various disciplines as well . 
in other words ,such concepts can be interpreted as useful characteristics in different contexts .we have also compared a real scientific collaboration network with the social network constructed by our method introduced in this paper and discussed the result .the larger google correlation values two authors have , the more papers they tend to have written together , causing them to appear to be `` closer '' in the scientific collaboration network . extracting information on the web to construct networks makes it possible not only to obtain large networks with many participants , but also to monitor the change of such networks by collecting data on a regular basis .we have verified that the network structures do not change abruptly , partly because the web plays the role of a digital `` archive , '' not a `` newspaper . '' however , during important events such as the elections for the united states senate held in november 2006 , the us senate network was significantly reformed as we have discussed in this paper . if the webpages were classified into several categories such as news articles , blog articles , _ etc . _, more information would be available .we hope that so - called web 2.0 will significantly increase the possibility to obtain such classified information with ease in the future .the proper use of the web and search engine in scientific research has already begun , for instance , in the research on the human tissue - specific metabolism , and we welcome other researchers who will join this movement in the future .we thank daniel kim for building the data extraction platform with the google search api . this work was supported by nap of korea research council of fundamental science & technology .we have removed the number of web pages from to and divided the number greater than by , based on our judgment about the _ page counting problem _ in google .see google inconsistencies ( 2003 ) http://www.searchengineshowdown.com/features/google/inconsistent.shtml .last accessed on 6/2/2010 .we observed that there exists an obvious _ gap _ between the number of and in the distributions of google hits and correlations , and found that if we process data by the removing and dividing mentioned , the distributions become smooth without any gap .this process , however , does not cause any relevant changes in the main results of our work .the list of eighteen candidates are hillary clinton , barack obama , john edwards , dennis kucinich , joe biden , chris dodd , bill richardson , mike gravel , rudy giuliani , fred thompson , john mccain , mitt romney , mike huckabee , duncan hunter , tom tancredo , sam brownback , john cox , and ron paul .newman mej ( 2006 ) modularity and community structure in networks .proc natl acad sci u s a 103 : 8577 - 8582 ; finding community structure in networks using the eigenvectors of matrices .phys rev e 74 : 036104 .naver website ( 2010 ) http://www.naver.com/. last accessed on 6/2/2010 . in south korea , the search engine naver is more popular to the general public than google , due to the many localized information and interface .moreover , it deals with the korean characters more appropriately than google .so we use it for the analysis on the korean politicians .
|
social network analysis has long been an untiring topic of sociology . however , until the era of information technology , the availability of data , mainly collected by the traditional method of personal survey , was highly limited and prevented large - scale analysis . recently , the exploding amount of automatically generated data has completely changed the pattern of research . for instance , the enormous amount of data from so - called high - throughput biological experiments has introduced a systematic or network viewpoint to traditional biology . then , is `` high - throughput '' sociological data generation possible ? google , which has become one of the most influential symbols of the new internet paradigm within the last ten years , might provide torrents of data sources for such study in this ( now and forthcoming ) digital era . we investigate social networks between people by extracting information on the web and introduce new tools of analysis of such networks in the context of statistical physics of complex systems or socio - physics . as a concrete and illustrative example , the members of the 109th united states senate are analyzed and it is demonstrated that the methods of construction and analysis are applicable to various other weighted networks .
|
computational modeling plays an increasingly important role in the characterization and understanding of a broad range of elementary chemical transformations relevant to catalytic processes .such catalytic chemical reactions can be described by microscopic kinetic models such as the coarse - grained lattice kinetic monte carlo ( kmc ) method , a computational simulation of the time evolution of some stochastic process .typically , simulation of processes arising in catalytic chemistry are carried out based upon rates for adsorption , reaction , desorption , and diffusion that are obtained from experiments , density functional theory ( dft ) , and transition state theory ( tst ) .if a system exhibits significant transport , hybrid methods for heterogeneous reaction kinetics can be constructed , which combine kmc for the chemical kinetics with finite difference methods for the continuum - level heat and mass transfer .underlying all models and algorithms for determining reaction dynamics is the master equation .the master equation describes the evolution of a multivariate probability distribution function ( pdf ) for finding a surface in any given state .the master equation , however is an infinite dimensional ordinary differential equation which can not be solved exactly , and thus a number of techniques have been developed for finding approximate solutions .one class of computational methods for catalytic processes hypothesize _ ad hoc _ rate equations , derived by physical reasoning , to construct phenomenological kinetic ( pk ) models of surface processes .such pk models start from an idealized surface geometry for binding sites and site connections .for example , on a ( 110 ) idealized surface , one can define bridge and cus sites connected by a square lattice , as shown in figure [ fig : surfaces ] .the models track the probability of finding a site of given type bound to a particular molecule , and use a maximum - entropy / well - mixed assumption to reconstruct spatially correlated information .the well - mixed assumption on surfaces can often fail , and there are many examples in which a given kinetic model fits one set of data well , but fails with additional test data .there have been two methodologies that have been developed to alleviate the deficiency of the pk models : generalized phenomenological kinetic ( gpk ) models and kinetic monte carlo ( kmc ) simulation .gpk models are similar to pk models however they add evolution equations for spatial correlations . for surface catalysis, the gpk framework was introduced in the 1990 s by mai , kuzovkov , and von niessen and others , and since then this work has both been extended and applied in a variety of ways .the primary idea behind these works is to present nested sets of kinetic equations for the probability of finding a collection of surface sites in a particular configuration .the dynamics for the collections can only be determined exactly if the state of the collections are known , which is analogous to the bbgky hierarchy of statistical physics . in principlethis leads to an infinite system of equations . in practicethe nested chain is approximated through a truncation closure at some level in the hierarchy . 
despite the advances in gpk modeling , kinetic monte carlo ( kmc ) simulations are now often the method of choice that is used to determine and verify the surface dynamics and steady state values in surface catalysis problems .kmc algorithms are stochastic realizations of the surface dynamics that probabilistically update the state of some surface with large but finite size .the advantage of kmc algorithms when compared to pk models is demonstrated in ref . ,in which the authors demonstrate the break down of pk models when compared to kmc predictions .kmc simulations are , however , far more expensive than both pk and low level gpk models , and rely on slow statistical averaging for predicting desired observable quantities .because the computational cost associated with kmc simulations is significantly larger than low level hierarchy members of gpk models , it is worth investigating why the kmc simulations have become the method of choice when determining solutions to the master equation dynamics .possible motivations include : ( 1 ) the simplicity of kmc algorithms , ( 2 ) advances in computational power that make kmc simulations feasible , and ( 3 ) straight forward convergence testing of kmc predictions . elaborating on point ( 3 ) , kmc simulations allow for convergence tests by taking larger surface domains and more statistical samples while using the same implementation , whereas gpk convergence tests require inclusion of new elements within the hierarchy , with the associated additional coding effort .we know of no work ( see , for example refs . ) in which the nested hierarchy is built beyond time evolution of pair correlations ; in these works triplet correlations are approximated from pairwise information .the problem of formulating a general procedure to construct higher - order hierarchy truncations is still unsolved .this means that although gpk techniques may be used to approximate the master equation , there is no methodology to test if this approximation has converged to the correct dynamics other than comparing the results with kmc simulation .if gpk models are to become viable techniques in determining accurate approximations to the master equation , there must come along with them a generalized methodology for constructing arbitrary elements of the hierarchy so that convergence may be examined . as shown in the supplemental material ,it is possible to obtain an inconsistent model through an inappropriate triplet closure , further highlighting the need for a formulation of gpk models that incorporates a method for testing convergence of successive truncations of the approximation hierarchy . 
to accomplish this formulation , the present work will take a slightly different approach to the nested schemes . similarly to the nested gpk schemes , we will also arrive at a generalized hierarchy of phenomenological kinetic models . our methodology begins with the observation that the simple pk models are probability distributions of a group of sites on the surface . we will denote an arbitrary grouping of sites as a ` tile ' on the surface and seek to determine the kinetics of the probability distribution of surface states on a tile . we note that the tiling is identical to the nested scheme presented in ref . ( for surfaces made up of a single site type ) , and reiterate that the triplet nested scheme , mentioned but not studied in ref . , is different from the tiling scheme ( see the supplemental material ) . the novel aspect of the present work will be the generalized construction of members of the hierarchy , which will provide consistent and increasingly accurate dynamics for larger tilings . such a framework has the potential to allow the gpk models not to rely on kmc simulation to test for accuracy , but rather to remain self - contained by comparing the results from smaller tile dynamics to larger tile dynamics , with the hope that the scheme will converge with significant computational savings . we begin this work by presenting the formalism of the tiling idea in section 2 , and provide two examples of how to construct a set of odes within the tiling framework . the second of these examples considers the oxidation of co on a face centered cubic structure 's ( 110 ) surface ( also considered in refs . ) . in section 3 we provide numerical evidence that supports and verifies the formalism based on a 1d and 2d uniform surface with a square lattice . in section 4 we test the kinetic models resulting from site ( ) and pair ( ) tilings for the surface catalysis problem of co oxidation demonstrated in refs . . we demonstrate that the pair tiling significantly reduces errors made in the pk formalism of ref . . we next examine the idea of having a mixed tiling scheme and show that improved accuracy may be efficiently obtained within this mixed tiling hierarchy . we present results for this mixed tiling and demonstrate that it better captures the dynamics predicted by kmc simulation , providing the possibility for a search algorithm in the tiling hierarchy that remains computationally inexpensive . consider a surface made of sites , each labeled as a particular type from a set . for example , on an idealized ( 100 ) surface there are atop , bridge , and 4-fold hollow sites and so , bridge , hollow , whereas on a ( 110 ) crystal surface there are bridge and cus site types and so , cus ( see figure [ fig : surfaces ] ) . each site on the surface may be in a particular state and we will call the set of possible states . for example , in the surface oxidation of co , co and o adsorb and desorb on the surface . co remains bonded upon adsorption , whereas o dissociates , so that two adjacent sites each become occupied by a single o atom .
in this casethe set of possible site states is , which corresponds to an unoccupied site , a site occupied by o , and a site occupied by co , respectively .supposing that we start with an surface , the master equation is formulated with a known transition matrix , , which prescribes the rate at which one particular system state transitions to another and may be written as where is the probability of the surface being in state , and a state is an vector describing the state of each site ( for example , refs . ) . in the discretesetting of surface reactions , each site may be in one of states , where represents the cardinality of a set .the master equation yields a set of ordinary differential equations ( odes ) . in the limit of ,the kinetics are described by a denumerable but infinite dimensional ode ( so long as ) , an intractable problem that requires truncation along with periodic boundary conditions to become solvable . even for finite values of , the size of state spaceis often intractably large for realistic choices of , and thus instead of solving the master equation directly , a stochastic realization of the surface dynamics is often considered by using kinetic monte carlo algorithms ( see for example , ref . ) . in the present work, we restrict attention to translationally invariant systems and hypothesize that they exhibit a finite correlation length captured in an subdomain we refer to as a tile within the overall domain , ( ) . as suggested by the notation, we will consider square lattices in the present work , however see no reason why the theory can not be generalized to different lattice geometries . under these assumptions , a truncated bbgky hierarchy for approximation of the master equationcan be obtained based on the dynamics of tiles that capture correlation effects .in essence the procedure is yet another phenomenological closure , but of increasing accuracy with increasing tile size .mathematically , this corresponds to construction of a stochastic reduced model of size to approximate the behavior of a larger system .the formal analysis of the approximation error will be presented in follow - on work . herewe present the overall procedure and demonstrate the practical efficiency of the approach for surface co catalysis .consider a rectangular tile of size that covers contiguous sites within the overall surface , ( see figure 1 for an illustration of and tiles ) .sites may be of various types ( e.g. , bridge or cus , cf .1 ) ; let denote the set of site types ( distinct from , the set of site states ) . for some chosen tile , let denote the set of possible tile types . 
this is usually a small subset of fixed by the overall lattice construction .for example , in a one - dimensional ( 1d ) lattice with repeating sites , the possible tiles are , a subset with four elements of the eight - element set .it is necessary to distinguish tiles of the same type that have different neighbors .for example in the 1d lattice , the are two variants of the bb tile , one with neighbors , the other with neighbors .we therefore introduce as the set of tile types with distinct positions in the lattice .having categorized each tile , we may denote the state of the tile , , as , with underlying site types .we may then assign a discrete pdf of finding tile in state , and denote this probability as .due to the assumed translational invariance of the system , all tiles identified with tile type in the lattice are assumed to have identical probability distributions throughout time .we seek to approximate these dynamics by assuming knowledge only up to a given tiling on the system , which will hold information of the pdf s .each pdf has a domain of size , there are pdf s to track , and thus the goal is to reduce the large or infinite dimensional master equation ( equation [ eqn : fullcme ] ) to a dimensional set of ordinary differential equations .because in many interesting cases ( such as the ( 110 ) and ( 100 ) surfaces described above ) , the dimensionality of the ode will often be equivalent to . to achieve this approximation , we first note that given an arbitrary tile located on the surface with site type geometry and lattice position described by , we may evaluate the dynamics of the discrete pdf over the state space on this tile based on the full description of the master equation ( equation [ eqn : fullcme ] ). below we will refer to ` the tile , ' by which we mean an arbitrary choice from all the similar tiles from the full system .we reiterate that the reason we may choose any arbitrary is due to the assumed translational invariance of the system . 
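to make this bookkeeping concrete, the following minimal python sketch (with illustrative names and values; it is not taken from the code base referenced later) enumerates the states of small tiles and stores one discrete pdf per tile type, which is exactly the data the reduced ode evolves. the printed dimension corresponds to the per-tile-type count of the number of site states raised to the number of tile sites discussed above.

```python
# minimal sketch (assumed names/values): enumerate the states of a small tile
# and store one discrete pdf per tile type, as described in the text.
from itertools import product

SITE_STATES = ["empty", "O", "CO"]                         # the per-site state set
TILE_TYPES = [("br", "br"), ("br", "cus"), ("cus", "cus")]  # example 2x1 tile types

def tile_states(n_sites, states=SITE_STATES):
    """All |states|**n_sites states of a tile with n_sites sites."""
    return list(product(states, repeat=n_sites))

# one discrete pdf per tile type; initialised uniformly here for illustration
pdfs = {}
for ttype in TILE_TYPES:
    states = tile_states(len(ttype))
    pdfs[ttype] = {s: 1.0 / len(states) for s in states}

# the reduced ode tracks every entry of every pdf, so its dimension is the sum
# over tile types of |states|**(tile size) -- 3 * 3**2 = 27 unknowns here
dimension = sum(len(p) for p in pdfs.values())
print(dimension)
```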
to approximate the dynamics of the pdf on a reduced system, we decompose the transition matrix into a sum of three matrices : a matrix that only changes site states within the tile , , a matrix that changes site states both within the tile and exterior to the tile , , and a matrix that does not change any of the site states within the tile , .thus we write the second matrix , is further decomposed by considering collections of sites that include an arbitrary number of exterior sites along with all of the sites in the tile , and in which the site states of are specified .any one of these collections will be denoted where is a collection of sites that includes the sites in , and represents the states of each site in the collection .we then define to be the matrix that encodes transitions such that ( i ) at least one site in the tile changes state , ( ii ) _ every _ site exterior to the tile in is changed , and ( iii ) _ no _ site exterior to changes state .summing over all possible choices of , both in selection of and the choice of states , we then further decompose to be because we are only interested in the local events , we then average the rate of each transition type in the tile going from state to state ( both states in ) .we describe these averaged transitions rates as [ eqn : meanfield ] where is the set of all possible states of the collection of sites , is the tile that may also be thought of as a set of sites on the surface that comprise the tile , and is a choice of that is constrained so that the states on the tile are described by .the matrix elements in the interior of each sum are the elements of the transition matrix that begin in state on the subset and finish in state on , and all exterior states in remain fixed in state . the last condition will be trivially satisfied based on the definitions of the matrix decomposition above .finally the sum is weighted by the conditional probability that the system will be in state given that we know either the state of the tile or the state of tile along with the external sites specified by and the discrete pdf s of all tiles .we then may define new transition matrices and , all of which have dimension .taken together , these matrices represent a mean field theory that accounts only for the sites that change within the tilings , averaging the influences of the system states that do not change with a given transition ; we note that in general , the reduced matrices will depend non - trivially on the pdf s of the tiling , however in the current work we will primarily focus on systems that have transition rates independent of the state of sites that do not change states . therefore ,having a fixed and , we assume that for all , which means that we may arbitrarily assign any of these elements to the reduced matrix : [ eqn : simpmeanfield ] having defined the transition rates on tiles , the master equation approximation is the first summation on the right hand side represents tile state changes that occur entirely within tile .the second and third summation capture state changes dependent on neighboring tiles being in a specific state . to close the set of equations given in equation [ eqn : mastertruncate ] ,the conditional probabilities must be computed .these are the probabilities that the neighboring sites of are in state when the tile is in state . 
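the decomposition of the transition matrix described above can be organized, in code, by classifying each elementary event relative to a chosen tile. the sketch below is only illustrative: the event and tile representations are assumed, and the labels interior/boundary/exterior are ours, standing in for the three parts of the decomposition (tile-only changes, mixed tile/exterior changes, and exterior-only changes).

```python
# sketch (assumed representation): classify elementary events with respect to
# a tile, mirroring the decomposition of the transition matrix described above.
def classify_event(event_sites, tile_sites):
    """event_sites: set of lattice sites whose state changes in the event.
    tile_sites:  set of lattice sites belonging to the tile."""
    inside = event_sites & tile_sites
    outside = event_sites - tile_sites
    if not inside:
        return "exterior"    # no tile site changes state
    if outside:
        return "boundary"    # tile and exterior sites change together
    return "interior"        # only tile sites change

tile = {(0, 0), (0, 1)}                 # a 2x1 tile given by lattice coordinates
events = [{(0, 0)},                     # adsorption on a tile site
          {(0, 1), (0, 2)},             # pair event straddling the tile edge
          {(3, 3), (3, 4)}]             # pair event far from the tile
print([classify_event(e, tile) for e in events])   # interior, boundary, exterior
```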
in this work we consider square lattices in which only a single site exterior to the tile changes state upon a reaction. we note that the second assumption is common, as many model reactions affect either one or two sites, but often no more. with this said, we note that it is possible to formally determine equation [eqn:condprob] with both assumptions relaxed; however, such cases are beyond the scope of the current work. next we say that if there is no tile that covers both the changed exterior site and the changed interior site, then the probability of finding the exterior site in a given state is independent of the interior state. along with the consistency criteria presented below (see section [sec:constraint]), this will provide a robust method for determining the probability of finding the exterior site in the desired state. as an example of the calculation of the conditional probability (equation [eqn:condprob]), consider the tile shown in figure [fig:closure] along with a reaction that changes the states of the two sites (5,2), (5,3) within the tile, but requires a neighbor site outside the tile to be in a specific state. we must determine the probability of finding the overall initial state of both tile and neighboring sites that allows for the reaction to occur. this is computed by summation of the probability of all partially overlapping tiles that have consistent states. [figure [fig:closure] caption: we integrate over all states of a secondary overlapping tile, represented with thin solid grey lines, excluding the exterior tile site that will change state, for the transition to be able to occur.] to obtain a precise computational statement of the required conditional probability, consider the tile , a tile of type , in initial state . let  denote both the site type and state at position within the tile. consider a state transition at site  that requires a neighbor site situated outside the tile to be in state . let  denote the subset of tiles that have the same site types and states as the tile in the overlap region after translation. the probability of a translated tile having the correct overlap site types and states follows from the tile pdf; within this subset there exists a further subset that also has the required state at the specified position. the conditional probability from (7) is then the ratio of the probability of this further subset to that of the overlap-consistent subset, with the numerator simply expressing the further restriction on the required state. this methodology provides a robust method for closing the approximated dynamics of the master equation with the reduced tiling system, provided that only one site exterior to a given tile changes, with arbitrarily many interior sites, based on all given transitions. as we have mentioned above, it is also straightforward to generalize this framework to cases in which multiple interior sites, or multiple exterior sites, transition; however, such reactions are not considered in the present work and we forgo this discussion presently. having established a reduced system with corresponding equations for arbitrary rectangular tilings, we next mention some simple ways in which we can take advantage of rotational symmetries on the lattice, and then introduce a generalization to a mixed tiling scheme.
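before turning to lattice symmetries and mixed tilings, it is worth noting that the simplest instance of the overlap closure just described is the pair (2x1) tile on a one-dimensional lattice of a single site type, where the conditional probability of an exterior neighbor's state given the adjacent edge site reduces to a ratio of pair-pdf entries. a minimal sketch is given below; the pair probabilities are illustrative values, not outputs of the model.

```python
# sketch: pair-tile closure on a 1d single-site-type lattice. the probability
# that the exterior neighbour of a pair tile is in state s_ext, conditioned on
# the adjacent (edge) tile site being in state s_edge, is read off the
# overlapping pair pdf -- the simplest instance of the overlap construction.
def conditional_exterior(pair_pdf, s_edge, s_ext, states):
    """pair_pdf[(a, b)] = probability of an (a, b) pair (left site, right site)."""
    marginal = sum(pair_pdf[(s_edge, s)] for s in states)
    if marginal == 0.0:
        return 0.0
    return pair_pdf[(s_edge, s_ext)] / marginal

states = ["empty", "O", "CO"]
# illustrative pair pdf (assumed values), normalised over all nine pair states
pair_pdf = {("empty", "empty"): 0.10, ("empty", "O"): 0.05, ("empty", "CO"): 0.05,
            ("O", "empty"): 0.05, ("O", "O"): 0.30, ("O", "CO"): 0.10,
            ("CO", "empty"): 0.05, ("CO", "O"): 0.10, ("CO", "CO"): 0.20}
print(conditional_exterior(pair_pdf, "O", "CO", states))  # P(neighbour=CO | edge=O)
```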
following this exposition , to ensure both the ode written in equation [ eqn : mastertruncate ] , and the generalization presented below are self - consistent , we state two consistency criteria . from here, we present a simple example , and then several concrete examples of surface catalysis , which will be used below in numerical tests . up to this point , we have discussed tilings with fixed orientations .certain lattices , however , contain symmetries that may be used to obtain higher range spatial correlations without increasing the dimension of the set of corresponding equations . in the ( 100 ) crystalabove , for example , there are four rotational symmetries of the lattice found by rotation of radians .thus we expect that for each tile , there is a corresponding tile with an equivalent pdf over site states under rotation .therefore , in reconstructing the conditional probabilities used in conjunction with , we assume that we have both and tiling types . in the case, this may lead to more detail in spatial correlations along both lattice directions , and thus a more accurate method with no additional cost .we may also consider the ( 110 ) surface described above that only has two rotational symmetries found by rotation of radians . in this case , considering tiles would provide us with spatial correlations along cus - cus or bridge - bridge sites but we would be assuming independence in the cus - bridge direction .similarly a tiling would retain spatially correlated data in the bridge - cus direction , but not along bridge - bridge or cus - cus networks . to get around thiswe could consider tiling , however the number of equations would grow exponentially , from an to a dimensional ode , where is the tile set when considering a tiling . in the case of the ( 110 ) surface , and and are equal to if is even and if is odd .instead we may consider mixed tilings , which is to say we consider the system constructed when considering the pdf s on and tiles , which causes the system to grow from a to a dimensional ode ( so long as ) . to close the conditional probabilities we utilize an identical method to that listed above, however note that in some cases we will be able to choose either an or overlapping tile that satisfies the above criteria . to make the choice uniquewe will first require that the chosen tile containing both the interior and exterior sites that will transition , contain a maximal number of overlapping sites . in the case that this choice is not unique, we require the sum of the radial distance between the overlapping tiles and the interior transitioning tiles to be minimal , where we define the radial distance in the usual sense ( i.e. the distance between site and is ) .we conjecture but are not certain that this criteria will provide a unique tiling choice , but mention that it does in all the cases we have considered below . with all of the methods listed above, we must ensure that several constraints are met to ensure the system is consistent .we expose these constraints in the following subsection .care must be taken to ensure that certain constraints are obeyed by any of the resulting reduced systems described above .for one , a system must be initialized to have normalized pdf s over each tile .analytically , the pdf s will trivially be normalized throughout all time , since what is added to one state is taken away from another .next all lower dimensional projections must be well defined . 
by thiswe mean that lower dimensional projections must agree between ( i ) different tiles and ( ii ) within a given tiling .for example , suppose we have a mixed and tiling on a ( 110 ) surface as described above , suppose we have a bridge - bridge tile , and a bridge - cus tile .we note that in the tiling system all bridge and cus sites are identical on the lattice and thus the probability of finding a bridge site in a particular state must be the same no mater how it is determined .this means that we must have [ eqn : constraints ] we conjecture that systems will be well behaved for rectangular tiling systems and for mixed tiling schemes . we note that we have numerically verified that all systems mentioned below satisfy the above criteria for a variety of test cases and projections . from this point onwe will assume all tiling systems are mixed , i.e. saying that we are working with a tiling system will mean that we are working with a mixed and tiling system . in general , equation [ eqn : mastertruncate ] may be difficult to work with explicitly , however we will show that in several relevant problems it simplifies to a corresponding hierarchy of odes that can be easily coded into a computational algorithm .we illustrate the above concepts with a simple example .suppose that is comprised of two sites each of which is identical , and each of which can be in states .the state space is then .the transition matrix is chosen as which states that a site in state 0 can transition to state 1 , but not the other way around , and that the rate with which this happens depends on the state of the other site .the four dimensional system may be written as suppose we choose a tiling and wish to determine the reduced dynamics .there is only one type of tile as both sites share identical dynamics .we find that ( and hence ) as there are no reactions effecting both sites and corresponding to the state space , which leads to dynamics given by we next demonstrate how to construct approximated systems with a more realistic example . to do this , we consider the ( 110 ) surface made up of bridge and cus sites ( see figure [ fig : surfaces ] ) . each bridge site is connected to two other bridge sites and two cus sites .we suppose that we know a collection of surface transitions and each of their rates . in the following example we consider approximate dynamics of co oxidation for and tilings with the collection of possible events that can generate state changes on the surface , given as [ eqn : surfacereactions ] where the subscripts represent the site types being occupied and the subscript represents molecules in the gas phase disconnected from the surface .the only additional constraint is that sites must be adjacent on the lattice for transitions involving two sites to occur .the classification ads / des denotes adsorption and desorption events ( depending on the reaction direction ) and surf rxn denotes surface reactions .we classify the transitions as site transitions ( e.g. equation [ eqn : siter ] ) or pair transitions ( e.g. 
equation[eqn : pairri ] ) .we may also define diffusion events as pair transitions that appear as but we do not consider these events in the present work .the reason for this omission is four - fold : ( 1 ) near steady state the systems we examine have very low probability of being in the empty state and thus diffusion in these regimes will not heavily influence the system , ( 2 ) many previous studies that have explored this work have also left out the consideration of diffusion ( see for example refs ) and thus omitting diffusion will make our work more comparable to these studies , ( 3 ) we have run simulations where we have included diffusion ( but have omitted the results from this presentation ) and find that the system dynamics are nearly identical to the case without diffusion , and ( 4 ) we note that diffusion tends to mix systems and thus smaller elements of the hierarchy may lead to accurate results in cases where diffusion is important ; thus we expect that showing that the hierarchy is still effective when diffusion is not relevant should lead to a more stringent verification of our method .there are two possible site types ( , cus ) , and each may be in one of the three possible states , , o , co .we begin with a tiling and show that this tiling leads to a similar set of odes to the pk model described in ref . .there are two possible tilings , , representing a covering of each site type .we shall describe the set of tilings as [b ] , [ c] for bridge and cus sites respectively .we next wish to determine an approximate master equation describing the probability of finding each particular site in a certain state .following the formalism of the previous section , the only possible interior transition in each tile is the site transition ( equation [ eqn : siter ] ) , and the other transitions enter by the second and third terms in equation [ eqn : mastertruncate ] .the mixed interior / exterior terms are closed via equation [ eqn : closure ] .each transition occurs stochastically with an exponential distribution in time ; this implies that taking a time step of a single event occurs with probability proportional to .multiple events are considered independent and thus occur with probability proportional to . 
in the limit as , these terms will vanish leaving a simple transition matrix that only describes events that generate state changes on a site or two adjacent sites via the site and pair transitions listed above , respectively , with transition rates that are assumed to be independent of the sites that are not effected by the transition .this observation greatly reduces the space of possible choices , and allows us to use the reduction presented in equations [ eqn : inmatrxreduct ] and [ eqn : outmatrxreduct ] .we note further that there are multiple choices for that will lead to identical reactions .for example given a tile covering a single bridge site , bridge - bridge pair transitions can effect the bridge tile either through the left or right bridge site .based on the symmetry of the system , each transition rate will be identical , and thus instead of accounting for these distinct choices for , we combine and weight them with a weight function which describes the number of adjacent tiles given that we are at the tile .this weight function has arisen naturally by summing over and will appear in the terms of equation [ eqn : mastertruncate ] .we may write the resulting six - dimensional set of equations explicitly as where .the reaction speeds and are set to zero if the reactions corresponding to the transitions within the subscript do not occur ( it is assumed in this notation that tile is adjacent to tile ) . the weight function is computed from the geometry of the system . in the current geometry, each cus has two bridge and two cus neighbors , so that |[c ] ) = w([c]|[c ] ) = 2 ] in state , and denote the statement that tile with site types ] is the weight function which gives the number of additional neighbors of type to site type , given that we have already included a neighbor of type . for example , | [ b , b ] ) = 1 ] .the reaction rates effecting a single site are described by denoting the rate at which a site of type transitions from state to state .pair reactions rates are written similarly to the case , however we now have a different way to describe the initial state as the initial states are covered by a single tile .the first sum represents reactions that effect individual sites ( i.e. adsorption / desorption of co ) .the second sum represents pair reactions ( i.e. co oxidation and adsorption / desorption of o ) that occur on the interior of the tile .the third and fourth sum represent pair reactions effect the first site of type from transition toward and away from state , and fifth and sixth sums represent pair reactions that effect the second site of type from transitioning toward and away from state .higher - order tilings , such as and tilings may be constructed similarly .note that in these higher - dimensional cases , the tiles will overlap on a greater number of sites and thus the conditional probability will account longer range spatial correlations .the steps listed above may be generalized to a computational algorithm .however the weighting functions will change for each type of tiling . 
given a latticewe may calculate the neighbor and conditional neighbor probabilities , we may construct the list of tiles along with their site types and position on the lattice , and we may construct a list of state transitions along with the transition rates which may be looped through and added in accordance with the above procedures .we construct this algorithm and test the convergence of the approximated master equation ( equation [ eqn : mastertruncate ] ) in the proceeding section .we have begun to develop a code base for this algorithm and an example is available at https://github.com/gjherschlag/mixedtiling_eg .we continue with the example above of oxidation of co on the ( 110 ) surface , however we simplify the model in two ways .first , we treat the cus sites as being inactive which reduces the system to a one dimensional lattice .next , we assume there is no difference between cus and bridge sites and set the transition rates accordingly . for both cases, we then have and along with the transitions listed in equations [ eqn : siter]-[eqn : pairrf ] . after analyzing these two test cases, we will then examine the system with differentiated bridge and cus sites and use the realistic parameters found in ref . .for the test cases , we choose test parameters with ratios that are similar ( in order of magnitude ) to parameters found for the realistic system . in non - dimensional units of time , we set , and vary ] . for and larger tilings ,the approximated master equation dynamics fall less than a quarter of the standard deviations from the means of the kmc simulations .the tiling performs very well lying within two standard deviations of the kmc statistics over all tested parameter values .we next plot the infinity norm of the absolute error over all predicted steady states between successive tilings , and we find that the developed method converges with order ( figure [ fig : errandsp1d ] ) .although we have statistically determined the steady states from the kmc simulations , we have not performed a statistical sampling for the kmc time dynamics in this test .we note , however , that the relaxation time scales appear to match precisely for the higher dimensional tiles ( ) and the kmc simulations ; a single kmc realization is compared to all of the tilings in figure [ fig:1ddyn ] , for .as expected , the execution times of the ode s are far faster than kmc simulation ( see figure [ fig : errandsp1d ] ) . in the case of the ) tiling , the approximated equations run 1690 times faster than the kmc simulations , and in the more accurate case of the ) tiling , the approximated equations run 352 times faster than the kmc simulations .we note that the tiling associated odes are solved using serial cpu execution , whereas the kmc simulations exploit parallel capabilities of gpus , and thus the true acceleration of our methods can be even greater than what we have presented ( exact performance figures depend to a large extent on computer architecture ; all tests were run on a standard early 2013 15 " macbook pro ) ..,width=302 ] next , we analyze the two dimensional system in which bridge sites are treated identically to cus sites . 
in this casewe run the kmc simulations for a periodic grid , and compare , , and tilings which yield and dimensional equations respectively .we again compare the steady states of the tiling approximations with results from kmc simulations and find that only the tiling approximation lies within two times the standard deviation of the kmc results for all parameters ( see figure [ fig:2dssandspu ] ) .the tiling does not show significant improvement over the tiling . additionally we plot the speed up in figure [ fig:2dssandspu ] .the tiling approximation runs 7.7 times faster than the kmc simulation .we do not test larger tilings as the ode matrices quickly become too large for the lsoda method to approximate the jacobian .we determine the spatial correlations where the approximations have maximal error ( ; see figure [ fig : cor2d ] ) .we find that at steady state , the spatial correlations die down over four nearest neighbors , and determine that the fifth nearest neighbors have correlations that are less than 10% of nearest neighbor correlations .thus we conjecture that or tilings to be within the mean of the kmc simulations .we do not , however , test this conjecture as the number of equations is too large for the memory requirements of the lsoda routine in constructing the approximated jacobian which would require and dimensional arrays respectively . below in the current section and in the discussion , we suggest several methodologies of reducing the number of dimensions for these larger systems , however do not formally investigate these methods in the current work .lattice sites to the right and lattice sites above .the correlations are taken at steady state and .we note that correlation decays slowly as a function of distance implying that we must take a large element in the hierarchy in order to predict accurate system dynamics.,width=302 ] we next test the tiling approximations for the catalysis problem described in refs . .in this system o , co , , cus .although previous gpk methods claim to be able to handle different site types , to our knowledge there has been no work that has examined these models on a regular lattice made up of different site types ( however there have been gpk models on lattices with randomly active / inactive sites as well as disordered heterogenous lattices ) . in the current tiling framework , however , such an extension becomes natural . using the formalism above, we compare and tiling approximations with results from kmc simulations .the tiling types may again be mapped directly to the site types which are [b],[c] for a tiling and [b , b],[b , c],[c , c] for a tiling .these approximations result in a and dimensional ode .we repeat one of the numerical experiments from ref ., in which we assume that the partial pressure of co is zero ( ) , fix the partial pressure of o to be 1atm ( ) and determine the system evolution for a variety of partial pressures of co ( ) , ranging from 0.5 to 50 atm ( 21 partial pressures evenly partitioned on a log scale ) .the temperature of the system is taken to be 600k .reaction rates are taken from ref . ..[tab : param]we present the parameters used in simulating the oxidation of co on ruo .the table is a partially reconstructed table from table 1 found ref . .parameters that differ by more than 16 orders of magnitude ( machine ) from the largest parameter values are set to zero . 
[cols="<,<",options="header " , ] to determine the accuracy of the and tilings , kmc simulations are performed on a grid and 98 runs are completed at each partial pressure . on the bridge sites the tiling approximation falls within a standard deviation of the mean kmc results for site occupations ( see figure [ fig : brsites ] ) . on cus sites the tiling approximation fails for partial pressures greater than 2 and less than 5 atm ( see figure [ fig : brsites ] ) .the tiling approximation , however , demonstrates a vast improvement from the tiling approximation ( pk ) model ; far more so than the parameters of the previous section .a similar study was performed in ref .in which the authors compared the pk model with kmc simulation and their results match ours .we also note that there is existing evidence that the pair model will perform well in this scenario from ref ., in which the authors demonstrate that by better approximating the pair probabilities from the probability distribution on cus sites for low values of , a modified pk model can perform very well in terms of predicting turnover efficiency up to the point where o and co begin to coexist on the surface .+ + to determine the approximated size of a tiling that would lead to an accurate description of the system dynamics , we again examine the length scale correlations at steady state as a function of partial pressure . within the kmc simulations at a partial pressure of , we find that the bridge - bridge correlations die off nearly completely after the nearest neighbor ; the cus - bridge pairs , however , are significantly correlated up to two neighbors away , while the cus - cus pairs are significantly correlated beyond 8 neighbors away ( see figures [ fig : covar ] and [ fig : cuscor ] ) .this data supports the observations that the predicted dynamics for bridge sites is accurate for a tiling , whereas the predicted dynamics for the cus sites is not . to accurately capture the system dynamics at this partial pressure, we would need either an unmixed tiling approximation which would result , at minimum , in a dimensional ode .we could also potentially use a mixed ( and ) system which would lead to a dimensional ode which is far more tractable .finally it is also possible to use a more complex mixed tiling system such that we use tiles for bridge - bridge and bridge - cus connections and ) tiles in the cus - cus direction , which would lead to a dimensional ode ; for , this gives a dimensional ode , which is far more tractable still .the first method leads to a set of equations which is intractably large .the second corresponding ode is numerically tractable and the typical system we have been using in the present work .the third system is yet a new tiling structure which we begin to explore by increasing the size of an tiling in the cus - cus direction until we observe the hierarchy to converge .we note that when , the mixed tiling scheme is equivalent to the scheme examined above .we check for consistency over all single site projections of the probability of finding co on cus sites for each case and verify that the consistency criteria is satisfied in these test cases .we plot the results found at steady state in figure [ fig : mixedtile ] .we find an improvement in the mixed tiling scheme , and note that the mixed tiling scheme has converged when ( found by comparing with the case ) .we note however that we do not see convergence that accurately captures the predictions of kmc simulation . 
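all of the comparisons in this section use kinetic monte carlo as the reference. for completeness, a minimal gillespie-type realization of the site and pair events on a periodic one-dimensional lattice of identical sites is sketched below; the rates are placeholders rather than the tabulated ruo2 parameters, and the implementation is deliberately naive, not the gpu code used for the results above.

```python
# sketch (illustrative rates): a minimal gillespie-type kmc realisation of the
# adsorption/desorption/oxidation events on a periodic 1d lattice of identical
# sites, of the kind used as the reference solution for the tiling hierarchy.
import math
import random

EMPTY, O, CO = 0, 1, 2
rates = {"co_ads": 1.0, "co_des": 0.2, "o_ads": 1.0, "o_des": 0.1, "rxn": 5.0}  # assumed

def enumerate_events(lat):
    """List (rate, [(site, new_state), ...]) for every event currently possible."""
    n, ev = len(lat), []
    for i in range(n):
        j = (i + 1) % n                                  # right neighbour (periodic)
        if lat[i] == EMPTY:
            ev.append((rates["co_ads"], [(i, CO)]))
        if lat[i] == CO:
            ev.append((rates["co_des"], [(i, EMPTY)]))
        if lat[i] == EMPTY and lat[j] == EMPTY:
            ev.append((rates["o_ads"], [(i, O), (j, O)]))      # dissociative O2 adsorption
        if lat[i] == O and lat[j] == O:
            ev.append((rates["o_des"], [(i, EMPTY), (j, EMPTY)]))
        if {lat[i], lat[j]} == {CO, O}:
            ev.append((rates["rxn"], [(i, EMPTY), (j, EMPTY)]))  # CO + O -> CO2 (gas)
    return ev

def kmc(n_sites=100, t_end=50.0, seed=0):
    random.seed(seed)
    lat, t = [EMPTY] * n_sites, 0.0
    while t < t_end:
        ev = enumerate_events(lat)
        total = sum(r for r, _ in ev)
        t += -math.log(1.0 - random.random()) / total    # exponential waiting time
        x = random.random() * total                      # pick an event with prob ~ rate
        for r, changes in ev:
            x -= r
            if x <= 0.0:
                for site, new_state in changes:
                    lat[site] = new_state
                break
    return lat

lat = kmc()
print("CO coverage:", lat.count(CO) / len(lat), "O coverage:", lat.count(O) / len(lat))
```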
as in the previous section ,we compare the speed up of the generalized pk models with a single kmc run and determine the average speed up over all examined partial pressures .we note that we have taken 98 kmc simulations and thus the actual average speed up in our computations is 98 times greater than what is presented .we display the speed up in figure [ fig : mixedtilespeedup ] and find that the tiling scheme runs 4500 times faster than a single kmc realization .the mixed tiling scheme yields a speed up of 4.5 for a single kmc simulation run .finally , we examine the accuracy of the predicted turnover efficiency . in applications the true quantity of interest is the rate at which co is oxidized in to co .this quantity is called the turnover efficiency ( tof ) which is a measure of how often co is oxidized on the surface . herewe define the tof as the number of reactions per two sites per second and write it as where we have used the notations defined in section [ subsec:2x1 ] .we plot and compare the tof for the kmc simulations , the pk model , the tiling and the mixed and tilings in figure [ fig : tof ] .we find that the prediction for the location and value of the peak tof is significantly improved even in the tiling scheme .we note further that for the oxygen poisoned regimes for small partial pressures of co , the tiling model performs very well , and is comparable to that of the problem specific modified pk model presented in ref . .the slight over prediction is consistent with the single site predictions , as co is over estimated on the cus sites ( see figure [ fig : mixedtile ] ) .the over prediction is thus due to the abundance of oxygen and the relatively large rate .tiling scheme , however we do not see convergence when the partial pressure of co is 3.15.,title="fig:",width=302 ] tilings.,width=181 ] with the two tiling systems that we have presented , we have confirmed that the hierarchy of kinetic equations leads to improved predictions for realistic surface dynamics and have shown that this occurs even with small improvements within the hierarchy .we have also introduced the idea of mixed tiling systems and shown how they may be used to introduce improvements in the accuracy of the dynamics .we have developed a method of approximating the master equation for systems that are assumed to be translationally invariant with finite spatial correlations . in principle, this hierarchy will always converge to the master equation and we have shown this in one- and two - dimensional examples . the computational cost of the developed method has been explored and compared with kmc simulation on the test examples and we find that for smaller elements of the hierarchy , we see significant reductions in computational time .we have also shown that the methods quickly lead improvements on a realistic example of oxidation of co on ruo(110 ) , and have observed that a modest improvement to tilings captures many of the important system dynamics within the parameters considered .although the ) tiling scheme is similar to the gpk model presented in ref . 
, the present work , to our knowledge , is the first to explore the surface reaction dynamics of gpk equations within the context of a non - uniform or non - randomized lattice .furthermore , we have introduced the concept of a mixed tiling , and have shown that mixed tiling schemes can lead to a more accurate description of the dynamics of the master equation .the ability to extend models to larger tilings provides a means to hypothesis test pk models on smaller tiles , as well hypothesis test gpk models by examine larger tilings . in any pursuitin which one hypothesizes that a gpk model provides a suitable model , the current methodology provides a fast method to justify this choice of model by testing this hypothesis with extended tiling systems. should the dynamics change significantly between the smaller and larger tilings , we can reject the method . although this provides a sufficient tool for hypothesis rejection, it is an open and interesting question to ask that if we do not see improvement between a smaller and larger tiling system , does this mean that the method has converged or are there local plateaus ( i.e. is the condition necessary ) ?for example , although we did not see convergence in the mixed tiling scheme on the realistic example , we did see progressively more accurate schemes when both spatial directions were accounted for in the tiling scheme .a rigorous framework describing the situations in which the tiling schemes will converge to the appropriate dynamics will be in an important next step in developing this work .this hierarchy of gpk models may also be used to fit parameters from observed experimental data .these parameter estimates can be similarly tested by examining larger tilings .if the parameters change , we can conclude that longer range spatial correlations play a significant role in the surface dynamics , however if the parameter estimates do not change it is still an open question as to whether or not we can conclude these are accurate surface parameters .if this question can be answered in the affirmative , we can then use transition state theory ( tst ) to predict energy barriers and energy differences between bound and unbound site states , and also predict transition rates over all temperatures .the two open questions presented in this and the above paragraph will be the subject of a future investigation . we note that it is possible for the gpk model to be too computationally expensive in order to select a corresponding tile that will guarantee convergence as is demonstrated in the 2d uniform surface example of section [ sec : idbc ] . 
to potentially resolve this issuewe have noted the possibility of introducing mixed tiling surfaces and have demonstrated the possibility for improved accuracy .we propose the idea that a mixed tiling search algorithm may be able to determine appropriate directions on which to increase a mixed tiling scheme , but save such a development for future work .we also note that speed - ups beyond what we have presented in the present work are possible by ( 1 ) solving resulting odes with a quasi - newton method rather than via direct integration of the odes from some initial condition , ( 2 ) by utilizing symbolic programing to reduce the number of degrees of freedom via the consistency restraints found in section [ sec : constraint ] .we save both of these tasks for future work , but note that we have begun to develop a code base for this algorithm and an example of the code is available at https://github.com/gjherschlag/mixedtiling_eg .these speedups and savings in memory will allow us to search larger elements of the hierarchy that may allow us to reach convergence for a wide class of problems with large computational savings .the methodology here has been tested in the context of constant rate coefficients so that equations [ eqn : inmatrxgenreduct ] and [ eqn : outmatrxgenreduct ] may be simplified to equations [ eqn : inmatrxreduct ] and [ eqn : outmatrxreduct ] . in many interesting catalysis reactions ,rate reactions will change based on local spatial correlations .although we have not investigated such mechanisms in the current work , it will be interesting to examine methods to reconstruct longer range spatial correlations that may be used to predict variable rate equations based on the current state of the tilings .we must ensure that all models will be consistent based on the ideas presented in section [ sec : constraint ] . in the current work we have not attempted to prove several natural propositions that have arisen , such as finding conditions for when a tiling scheme will be consistent . for example , the idea of mixed tilings raises an interesting mathematical question as to what type of tilings will lead to consistent dynamics .the triplet scheme that fails in the supplementary notes is a kind of mixed tiling scheme that leads to inconsistent dynamics , whereas the mixed tiling scheme presented in section [ sec : ruo2 ] leads to consistent dynamics ( tested numerically ) .we conjecture that convex tilings will be necessary for consistent mixed tiling dynamics but save such investigation for future work .we note that even if the inconsistencies of the previous gpk models could be handled , these equations are typically formulated in a system of odes coupled to an approximate pde that accounts for spatial correlations . for a member of these hierarchies considering correlations of sites, there must be pdes that must be solved ( or the number of independent combinations of states considering sites ; see for example ) ; compounding this complexity is the issue of regularized , anisotropic lattices as we have examined in the example above on ruo which would lead to a two dimensional pde for each collection of state variables . 
although it is clear that, in order for convergence to be achieved, fewer sites must be considered in a theoretically corrected von niessen hierarchy than in the one presented in the current work, it is unclear which method would be more computationally efficient to solve due to the addition of the (potentially anisotropic) pde. we have presented a generalized framework in terms of surface kinetics on square lattices. the work immediately extends to three-dimensional reaction networks and may extend to more general lattices and tiling structures. there are many other models that take the same form as pk models, such as susceptible-infected-recovered (sir) models and other ecological models; indeed, pairwise models corresponding to tilings, along with pairwise approximations for long-range pairs, have been examined in many instances, and it will be interesting to examine whether the more generalized framework presented in the current work will lead to more accurate modeling while retaining efficiency. we remark that the current methodology may have extensions to more irregular networks similar to the presentation found in ref., and we note that this is another promising continuation of the present work. gregory herschlag and sorin mitran. gh and sm were supported by nsf-dmr 0934433. guang lin would like to acknowledge support from the applied mathematics program within the doe's office of advanced scientific computing research as part of the collaboratory on mathematics for mesoscopic modeling of materials. pacific northwest national laboratory (pnnl) is operated by battelle for the doe under contract de-ac05-76rl01830.
|
we develop a hierarchy of approximations to the master equation for systems that exhibit translational invariance and finite-range spatial correlation. each approximation within the hierarchy is a set of ordinary differential equations that considers spatial correlations of varying lattice distance; the assumption is that the full system has finite spatial correlations and thus the behavior of the models within the hierarchy will approach that of the full system. we provide evidence of this convergence in the context of one- and two-dimensional numerical examples. lower levels within the hierarchy that consider shorter spatial correlations are shown to be up to three orders of magnitude faster than traditional kinetic monte carlo methods (kmc) for one-dimensional systems, while predicting similar system dynamics and steady states as kmc methods. we then test the hierarchy on a two-dimensional model for the oxidation of co on ruo2(110), showing that low-order truncations of the hierarchy efficiently capture the essential system dynamics. by considering sequences of models in the hierarchy that account for longer spatial correlations, successive model predictions may be used to establish empirical error estimates. the hierarchy may be thought of as a class of generalized phenomenological kinetic models, since each element of the hierarchy approximates the master equation and the lowest level in the hierarchy is identical to a simple existing phenomenological kinetic model.
|
explosive user traffic increase in spite of scarce wireless frequency - time resources is one of the most challenging issues for the future cellular system design .lte broadcast , also known as evolved multimedia multicast broadcast service ( embms ) in the third generation partnership project ( 3gpp ) standards , is one promising way to resolve the problem by broadcasting common requests among users so that it can save frequency - time resources .the common user requests can be easily found in , for example , popular multimedia content or software updates in smart devices . by harnessing these overlapping requests of users ,lte broadcast enhances the total resource amount _ per cell_.this plays a complementary role to the prominent small cell deployment approach providing more resource amount _ per user _ by means of reducing cell sizes . to implement this technique in practice , it is important to validate the existence of sufficiently large number of common requests . according to the investigation in ,discovering meaningful amount of common requests is viable even in youtube despite its providing a huge amount of video files .that is because most users request popular files ; for instance , 80% of user traffic may occur from the top 10 popular files . on the basis of this reason ,at&t and verizon wireless are planning to launch lte broadcast in early 2014 to broadcast sports events to their subscribers .the number of available common requests and its resultant saving amount of resources in cellular networks are investigated in , but it focuses on broadcast ( bc ) service while neglecting the effect of incumbent unicast ( uc ) service .joint optimization of the resource allocations to bc and uc are covered in in the perspectives of average throughput and spectral efficiency .the authors however restrict their scenarios to streaming multimedia services where data are packetized , which can not specify the content of data as well as the corresponding user demand of the files .leading from the preceding works , we propose a bc network framework being specifically aware of content and able to transmit generic files via either bc or uc service .the selection of the service depends on the following content characteristics : 1 ) file size , 2 ) delay tolerance , and 3 ) price discount on bc compared to uc .these characteristics are able to represent a content specified file in practice . for easier understanding ,let us consider a movie file as an example .it is likely to be large file sized , delay tolerable ( if initial playback buffer is saturated ) , and sensitive to the per - bit price of bc under usage - based pricing owing to its large file size .an update file of a user s favorite application in smart devices can be a different example , being likely to be small file sized , delay sensitive , and less price sensitive .furthermore , this study devises a policy that a base station ( bs ) solely carry out bc / uc service selection based on user demand prediction . corresponding to the policy , we maximize the network operator s revenue without user payoff degradation by jointly optimizing bc resource allocation , file scheduling , and pricing . to be more specific, the following summarizes the novelty of the proposed network framework . * * bc / uc selection policy * : a novel bc / uc selection policy is proposed where a bs solely assigns one of the services for each user by comparing his expected payoffs of bc and uc if assigned , without degrading user payoff . 
* * bc resource allocation * : optimal bc frequency allocation amount is derived in a closed form , showing the allocation is linearly increased with the number of users in a cell , and inversely proportional to uc price .* * bc pricing * : optimal bc price is derived in a closed form , proving the price is determined proportionally to the number of users until bc frequency allocation uses up the entire resources . ** bc file scheduling * : optimal bc file order is derived in an operation - applicable form as well as a closed form for a suboptimal rule suggesting smaller sized and/or more delay tolerable files should be prioritized for bc . as a consequence ,we are able to not only estimate revenue in a closed form , but also verify the revenue from the proposed network keeps increasing along with the number of users unlike the conventional uc only network where the revenue is saturated after exhausting entire frequency resources . considering 3gpp release 11 standards , we foresee up to % increase in revenue for a single lte broadcast scenario and more than a -fold increase for a multi - cell scenario .a single cellular bs simultaneously supports downlink uc and bc services with frequency bandwidth where bc files are slotted in a single queue .the bs serves number of mobile users who are uniformly distributed over the cell region .let the subscript indicate the -th user for , and define s as the locations of users .user locations are assumed to be fixed during time slots , but change at interval of independent of their previous locations .let the subscripts and represent uc and bc hereafter , and and respectively denote uc and bc usage prices per bit . in order to promote bc use, the network offers price discount on bc so that it can compensate longer delay of bc .amount of frequency is allocated for broadcast while unity is allocated for unicast during time slots , width=291 ] each user independently requests a single file at the same moment with a unit interval time slots .let the subscript represent the -th popular file for where denotes the number of all possible requests in a given region .assume user request pattern follows zipf s law ( truncated discrete power law ) as in youtube traffic .it implies the file requesting probability is given as where for .note that larger indicates user requests are more concentrated around a set of popular files .the following example sequentially describes the bs s operation to serve a typical user requesting file . 1 .* common request examination * : by inspecting user requests , bs becomes aware of the file s size as well as the number of file requests .delay tolerance examination * : user marks his requesting priority of the file as in conventional peer - to - peer ( p2p ) services ( e.g. high / low ) . 
assuming bs has the full knowledge of users quality - of - experience ( qoe ) patterns , this priority information corresponds to delay threshold , allowable delay without degrading qoe .bc frequency allocation , pricing , and file scheduling * : by inspecting , , and , bs allocates bc frequency amount , and sets bc price as well as optimizing bc file scheduling in a revenue maximizing order .* bc / uc selection * : meanwhile in 3 ) , bs assigns either bc or uc to user in order to maximize revenue without inflicting the user s payoff loss .note that the pricing scheme we consider is similar to time - dependent pricing in respect of its flattening user traffic effect by adjusting over time .the target offloading traffic by the pricing is , however , novel since the conventional scheme aims at the entire user traffic but the proposed at _ content - specific _ traffic captured by .bs allocates amount of bc frequency for handling the entire bc assigned requests . in compliance with the 3gpp release 11 , the earmarked amount can not be reallocated to uc requests during as fig .[ fig : rscalloc ] visualizes . for each uc request , bs allocates a normalized unity frequency resource , to be addressed with a realistic unit in section [ section : num ] . for region , and for ,width=154 ] let denote the payoff of user when downloading file via uc . consider the payoff has the following characteristics : logarithmically increasing with ; logarithmically decreasing with its downloading completion delay after exceeding ; and linearly decreasing with cost under usage - based pricing .define as the spectral efficiency when user is served by uc .consider delay sensitive uc users such that uc downloading completion delays always make them experience qoe degrading delays ., i.e. . additionally , we neglect any queueing delays on uc .the payoff then can be represented as follows . note that as we are only interested in the users willing to pay for at least uc service . in a similar manner ,consider indicating the payoff of user when downloading file via bc .let denote the bc spectral efficiency of user .we further define as the size of the broadcasted files until the bc downloading of file completes .this captures the effect of bc file scheduling .the payoff can be represented as below . to maximize revenue while guaranteeing at least uc payoff amount , bs compares and , and assigns either uc or bc service , to be further elaborated in section [ section : revmax_a ] .we consider distance attenuation from difference user locations , and adaptive modulation and coding ( amc ) which changes modulation and coding schemes ( mcs ) depending on wireless channel quality . while uc can adaptively adjust mcs based on its serving user s channel quality , the mcs for bc resorts to aim at the worst channel quality user because bc has to apply an identical mcs to all its users .bc average spectral efficiency is therefore not greater than the uc s . to be more specific , as fig .[ fig : channel ] illustrates , we consider a cell region divided into and .bs can provide high spectral efficiency to , but low spectral efficiency to for .let denote the area of a region .the probability that user is located within , , is given as , independent of . 
define as uc average spectral efficiency of user , represented as : similarly , average bc spectral efficiency is given as : where denotes the number of bc users .note that is because is an increasing function of .in order to maximize revenue , we optimize bc frequency bandwidth , price , and file scheduling . for more brevity , assume sufficiently large such that bc average spectral efficiency is approximated as as in .we firstly propose a bc / uc selection policy guaranteeing allowable user payoff , and then formulate the average revenue maximization problem under the policy .assume that users predict to be served by uc as default , and hence bs should guarantee at least the amount of uc payoff for every service selection . for user , revenue maximizing service selection policy is described in the following two different user payoff cases : 1 . if , bs firstly assigns uc as much as possible until uc resource allocation reaches because .after using up the entire uc resources , bs then assigns bc ; 2 . if , bs resorts to assign uc in order to avoid payoff loss .note that this policy not only maximizes revenue , but also , albeit not maximizes , enhances user payoff . for simplicity without loss of generality ,assume the required resource amount for uc user demand exceeds the entire uc resources , .as there is no more available uc resource , is set as a maximum value due to no price discount motivation on uc .it results in the revenue from uc is fixed as . by contrast , the revenue from bc still can be increased if holds . as a consequence , the average revenue in a cell region is represented as follows . + p_u \l ( w - w_b\r)t\ ] ] the left and right halves of respectively indicate the average revenues from bc and uc , and is an indicator function which becomes 1 if a condition inside the function is satisfied , otherwise 0 .unfortunately , is an analytically intractable nonlinear function due to . in order to detour the problem ,consider the following lemma .note that indicates the aggregate delay tolerance of file among users for a given and .additionally , the assumption does not imply small sized files since is a normalized value . applying in the result of lemma 1, the lower bound of , yields the corresponding problem formulation given as : & \underset{w_b , p_b , s_i } { \text{max } } \ ; \mathcal{l } \notag\\ \end{aligned } \label{eq : problem1 } \notag\\ & \text{subject to } \notag \\ & \qquad \quad 0\leqp_b \leq p_u , \notag \\ & \qquad \quad 0 \leq w_b \leq w , \notag \\ & \qquad \quad s_i > s_j \ ; \text{or } \ ; s_i < s_j , \ ; \forall i , j \in \l\ { 1 , 2 , \cdots , m\r\}. \notag\end{aligned}\ ] ] the last inequality condition means bc files are slotted in a single queue while bs transmits each file only once . 
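as a concrete illustration of the base-station-side selection policy and of how the payoffs enter it, the sketch below uses representative stand-in payoff expressions consistent with the stated characteristics (logarithmic in file size, log-penalized by delay beyond the threshold, linear in the usage-based cost); neither the functional forms nor the reading of the elided inequality conditions should be taken as the exact definitions above.

```python
# sketch: stand-in payoff functions and the bs-side bc/uc selection rule.
import math

def uc_payoff(size, se_uc, delay_thr, p_u):
    # unit uc bandwidth assumed, so downloading takes size / se_uc
    delay = size / se_uc
    return math.log(1.0 + size) - math.log(1.0 + max(delay - delay_thr, 0.0)) - p_u * size

def bc_payoff(size, se_bc, w_b, queued_before, delay_thr, p_b):
    # the user also waits for the bc files scheduled ahead of this one
    delay = (queued_before + size) / (w_b * se_bc)
    return math.log(1.0 + size) - math.log(1.0 + max(delay - delay_thr, 0.0)) - p_b * size

def select_service(u_uc, u_bc, uc_resource_left):
    """one plausible reading of the (elided) policy conditions: while unicast
    resources remain, serve via uc (priced at least as high per bit); once uc
    is exhausted, assign bc only if it does not degrade the user's payoff."""
    if uc_resource_left > 0:
        return "UC"
    return "BC" if u_bc >= u_uc else "UC"

# illustrative numbers only
u_uc = uc_payoff(size=8.0, se_uc=3.0, delay_thr=2.0, p_u=0.2)
u_bc = bc_payoff(size=8.0, se_bc=1.6, w_b=4.0, queued_before=20.0, delay_thr=2.0, p_b=0.1)
print(select_service(u_uc, u_bc, uc_resource_left=0))
```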
in respect to in , the following sections sequentially derive optimal bc network components , , , and .define as implying the average requesting file size per user , which is a given value independent of our network design .consider small and sufficiently large as assumed at the beginning of section [ section : revmax ] , we can derive a closed form solution of the optimal bc frequency allocation in the following proposition .the proposition shows the optimal bc frequency allocation is determined regardless of bc spectral efficiency and price .moreover , it provides the network design principles that the bc frequency amount is proportional to and inversely proportional to uc price .the latter is because it becomes necessary to enhance bc downloading rate by allocating more amount of frequency to bc when bc service becomes less price competitive ( smaller ) .we can derive the optimal bc price in a closed form in the following proposition .the result shows that is strictly increasing with within the range from to .it implies price increase is more effective to enhance revenue than price discount although the discount may promote more bc use .this result plays a key role to design a bc file scheduler for detouring a recursion problem in section [ section : schedule ] .in addition , it is worth mentioning that bc file scheduler affects by adjusting since therein varies along with the order of bc files , to be further elaborated in the following section .each file is tagged with a weighting factor by bs .bs examines the scheduling file priorities by comparing s .the file scheduling affects defined in section [ subsection : userpayoff ] , so we maximize in terms of as follows .( optimal scheduler ) note that is recursive since in is a function of which is also a function of .this can not be solved analytically , and therefore we resort to derive the value by simulation in section [ section : num ] . in order to provide more fundamentally intuitive understanding , we consider the following suboptimal but closed form solution .( suboptimal scheduler ) although the proposed scheduler is suboptimal , it still shows close - to - optimal behavior , to be verified by fig .[ fig : rs ] in section [ section : num ] .the suboptimal scheduler provides the following network design principle : more delay tolerable ( larger ) , more popular ( larger ) , and/or smaller files ( smaller ) should be prioritized for bc if is sufficiently small such that . in a revenue perspective , we compare the proposed bc / uc network and conventional cellular networks where only uc operates . as a performance metric , we consider _ revenue gain _ defined as the revenue of the proposed bc / uc network divided by that of the uc only network . by combining propositions 13, our proposed network framework shows the following revenue gain .interestingly , the proposed network always achieves positive revenue gain for sufficiently large files such that where defined in proposition 4 is a decreasing function of ( recall in and therein is an increasing function of by definition in section [ subsection : userpayoff ] ) .for those files , the revenue gain increases with the order of , converging to the order of for large when as the effect of diminishes .it is worth mentioning that grows even when frequency - time resources become scarce ( smaller ) thanks to the thrifty nature of bc in frequency .in addition , the result captures the design of bc file scheduler affects revenue by adjusting ( and g , a function of ) . 
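the closed forms of the optimal and suboptimal schedulers were lost with the inline math, but the stated design principle survives: more delay-tolerable, more popular, and smaller files should be broadcast earlier. a minimal sketch consistent with that principle, using a multiplicative weight as an assumed proxy for the paper's weighting factor:

```python
def suboptimal_bc_schedule(files):
    """order bc files by a priority weight consistent with the stated design
    principle (delay-tolerant, popular, small files first); the multiplicative
    combination is an assumed proxy, not the paper's closed-form weight."""
    def weight(f):
        return f["popularity"] * f["delay_tolerance"] / f["size"]
    return sorted(files, key=weight, reverse=True)

# example: the popular, patient, small request is broadcast first
files = [
    {"name": "a", "popularity": 0.5, "delay_tolerance": 600.0, "size": 50.0},
    {"name": "b", "popularity": 0.3, "delay_tolerance": 120.0, "size": 10.0},
    {"name": "c", "popularity": 0.2, "delay_tolerance": 900.0, "size": 300.0},
]
print([f["name"] for f in suboptimal_bc_schedule(files)])   # ['a', 'b', 'c']
```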
and,,width=336 ] and ,,width=336 ]we consider two different lte broadcast network scenarios in accordance with 3gpp release 11 standards .the first scenario is a typical single cell operates lte bc , having the number of users up to with the entire frequency amount given as mhz . for bc ,bs is able to allocate up to % of . for uc , bs allocates average mhz to a single uc user until the downloading completes . at , the average spectral efficiency is given as bps / hz whereas at is 45 % degraded from where .these correspond to mcs index with 64qam and the index with 16qam respectively .the number of possible requesting files in the cell is fixed as , , and the zipf s law exponent is set as as default .file sizes are uniformly distributed from to mbytes , which may correspond with to minute long 1080p resolution video content .user delay threshold is uniformly distributed from to seconds .furthermore , is set as minutes and as a normalized value having no unit .[ fig : r ] shows up to % gain in revenue for a single cell lte broadcast network , including the effect of the % increment from the suboptimal scheduler proposed in section [ section : schedule ] .moreover , scheduler design becomes more important when increases due to its increasing effect on revenue gain .in addition , the result captures the revenue gain is highly depending on user request concentration ( zipf s law exponent ) as well as the number of possible requesting file in a cell .specifically , doubling from decreases revenue gain by up to % , and from , does by % .the second scenario we consider is a multicast broadcast single frequency network ( mbsfn ) where neighboring cells are synchronized and operate lte broadcast like a single cell .assuming we neglect inter - cell interference , all the simulation settings are the same as in the single cell case except for the increased entire frequency amount by mhz and the number of users by up to , . 
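the concrete numbers of this setup (zipf exponent, file-size and delay ranges, user counts) were stripped in extraction, so the sketch below leaves them as caller-supplied parameters and only reproduces the stated distributional assumptions: zipf-distributed file popularity, uniformly distributed file sizes, and uniformly distributed user delay thresholds.

```python
import numpy as np

def generate_requests(n_users, n_files, zipf_exp, size_range_mb, delay_range_s,
                      rng=np.random.default_rng(0)):
    """synthetic request workload in the spirit of the simulation setup; all
    numeric ranges must be supplied by the caller because the paper's values
    were lost in extraction."""
    ranks = np.arange(1, n_files + 1)
    popularity = ranks ** (-zipf_exp)
    popularity /= popularity.sum()
    sizes = rng.uniform(*size_range_mb, size=n_files)             # mbytes
    requested = rng.choice(n_files, size=n_users, p=popularity)   # file index per user
    thresholds = rng.uniform(*delay_range_s, size=n_users)        # seconds
    return requested, sizes, thresholds, popularity
```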
as a result ,[ fig : rs ] shows the proposed network with the suboptimal scheduler achieves up to % revenue .the result also verifies that the revenue gain increasing rate with respect to converges to a linear scaling law when ( see fig .[ fig : p_b ] at ) as expected in section [ section : revmax_r ] the effect of gain increment by the scheduler increases as anticipated in the single cell case for small .this tendency , however , is no longer valid after exceeding , where having the maximum % revenue increment by means of the suboptimal scheduler , and the effect of scheduler diminishes along with increasing .the reason is there is no more available bc frequency since then , and thus revenue can not be increased by any operations of bs other than the increasing number of common requests due to .this behavior can be further justified by fig .[ fig : w_b ] and [ fig : p_b ] respectively representing the linear growing rates of and with increasing , as well as the convergence to the maximum values for .in this paper , we propose a bc network framework adaptively assigning bc or uc based on user demand prediction by examining content specific information such as file size , delay tolerance , and price sensitivity .for the purpose of the network operator s revenue maximization , the proposed framework jointly optimizes resource allocation , pricing , and file scheduling under a novel bc / uc selection policy .although a bs solely assigns bc or uc service without informing users of the possible selections , the proposed policy does not degrade but even enhance user payoff .in addition , this study provides closed form solutions that enables to understand the fundamental behavior of the proposed framework and give meaningful network design insights ; for instance , revenue gain scaling order becomes from as increases .we consequently observe up to % increase in revenue for a single cell and more than 7 times for 7 cell coordinated lte broadcast networks compared to the conventional networks .the future work we are heading in is to extend the proposed framework into more general multi - cell scenarios which may rigorously incorporate inter - cell interference modeling .this research was supported by the ministry of science , ict and future planning , korea , under the communications policy research center support program supervised by the korea communications agency ( kca-2013 - 001 ) ._ proof of lemma 1 : _ let denote . since s are independent of , we can apply wald s identity , yielding = np_i \e_k \l [ x_k\r]$ ] .the lower bound of is derived as follows . combining these results completes the proof . + _ proof of proposition 1 and 2 :_ the lower bound of average revenue gain is a concave function with respect to as well as .we therefore can find the unique optimal point via convex programming .let be fixed , and consider in terms of , yielding the solution given as : similarly , for a fixed , the optimal bc price is given as follows . combining and proves proposition 1 . for proposition 2 , increases with since due to where is only a function of in .this proves is an increasing function of , completing the proof. j. f. monserrat , j. calabuig , a. fernandez - aguilella , and d. gomez - barquero , `` joint delivery of unicast and e - mbms services in lte networks , '' _ ieee trans . on broadcasting _ ,58 , no . 2 , pp .157167 , 2010 .r. radhakrishnan , b. tirouvengadam , and a. 
nayak , `` channel quality - based amc and smart scheduling scheme for svc video transmission in lte mbsfn networks , '' _ proc . ieee intl . conf . on comm . _ , pp . 6514 - 6518 , jun . 2012 .
|
the long term evolution ( lte ) broadcast is a promising solution to cope with exponentially increasing user traffic by broadcasting common user requests over the same frequency channels . in this paper , we propose a novel network framework that provisions broadcast and unicast services simultaneously . for each file served to users , a cellular base station decides whether to broadcast or unicast the file based on user demand prediction , examining the file's content - specific characteristics such as file size , delay tolerance , and price sensitivity . from the network operator's revenue - maximization perspective , and without inflicting any user payoff degradation , we jointly optimize resource allocation , pricing , and file scheduling . in accordance with the state - of - the - art lte specifications , the proposed network demonstrates up to % increase in revenue for a single cell and more than a -fold increase for a cell coordinated lte broadcast network , compared to conventional unicast cellular networks . lte broadcast , embms , unicast , resource allocation , delay , scheduling , pricing , revenue maximization
|
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ one ought to recognize that the present political chaos is connected with the decay of language , and that one can probably bring about some improvement by starting at the verbal end . _ orwell , `` politics and the english language '' _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ we have entered an era where very large amounts of politically oriented text are now available online .this includes both official documents , such as the full text of laws and the proceedings of legislative bodies , and unofficial documents , such as postings on weblogs ( blogs ) devoted to politics . in some sense ,the availability of such data is simply a manifestation of a general trend of `` everybody putting their records on the internet . ''the online accessibility of politically oriented texts in particular , however , is a phenomenon that some have gone so far as to say will have a potentially society - changing effect . in the united states , for example , governmental bodies are providing and soliciting political documents via the internet , with lofty goals in mind : _ electronic rulemaking _ ( erulemaking ) initiatives involving the `` electronic collection , distribution , synthesis , and analysis of public commentary in the regulatory rulemaking process '' ,may `` [ alter ] the citizen - government relationship '' .additionally , much media attention has been focused recently on the potential impact that internet sites may have on politics , or at least on political journalism .regardless of whether one views such claims as clear - sighted prophecy or mere hype , it is obviously important to help people understand and analyze politically oriented text , given the importance of enabling informed participation in the political process .evaluative and persuasive documents , such as a politician s speech regarding a bill or a blogger s commentary on a legislative proposal , form a particularly interesting type of politically oriented text .people are much more likely to consult such evaluative statements than the actual text of a bill or law under discussion , given the dense nature of legislative language and the fact that ( u.s . ) bills often reach several hundred pages in length .moreover , political opinions are explicitly solicited in the erulemaking scenario . in the analysis of evaluative language , it is fundamentally necessary to determine whether the author / speaker supports or disapproves of the topic of discussion . 
in this paper , we investigate the following specific instantiation of this problem : we seek to determine from the transcripts of u.s .congressional floor debates whether each `` speech '' ( continuous single - speaker segment of text ) represents support for or opposition to a proposed piece of legislation .note that from an experimental point of view , this is a very convenient problem to work with because we can automatically determine ground truth ( and thus avoid the need for manual annotation ) simply by consulting publicly available voting records .[ [ section ] ] + determining whether or not a speaker supports a proposal falls within the realm of _ sentiment analysis _ , an extremely active research area devoted to the computational treatment of subjective or opinion - oriented language ( early work includes wiebe and rapaport , hearst , sack , and ; see esuli for an active bibliography ) . in particular , since we treat each individual speech within a debate as a single `` document '' , we are considering a version of _ document - level sentiment - polarity classification _ , namely , automatically distinguishing between positive and negative documents .most sentiment - polarity classifiers proposed in the recent literature categorize each document independently .a few others incorporate various measures of inter - document similarity between the texts to be labeled .many interesting opinion - oriented documents , however , can be linked through certain relationships that occur in the context of evaluative _discussions_. for example , we may find textual evidence of a high likelihood of _ agreement _ between two speakers , such as explicit assertions ( `` i second that ! '' ) or quotation of messages in emails or postings ( see but cf . ) .agreement evidence can be a powerful aid in our classification task : for example , we can easily categorize a complicated ( or overly terse ) document if we find within it indications of agreement with a clearly positive text .obviously , incorporating agreement information provides additional benefit only when the input documents are relatively difficult to classify individually .intuition suggests that this is true of the data with which we experiment , for several reasons .first , u.s .congressional debates contain very rich language and cover an extremely wide variety of topics , ranging from flag burning to international policy to the federal budget .debates are also subject to digressions , some fairly natural and others less so ( e.g. , `` why are we discussing this bill when the plight of my constituents regarding this other issue is being ignored ? '' ) second , an important characteristic of persuasive language is that speakers may spend more time presenting evidence in support of their positions ( or attacking the evidence presented by others ) than directly stating their attitudes .an extreme example will illustrate the problems involved .consider a speech that describes the u.s .flag as deeply inspirational , and thus contains only positive language .if the bill under discussion is a proposed flag - burning ban , then the speech is _ supportive _ ; but if the bill under discussion is aimed at rescinding an existing flag - burning ban , the speech may represent _ opposition _ to the legislation . given the current state of the art in sentiment analysis , it is doubtful that one could determine the ( probably topic - specific ) relationship between presented evidence and speaker opinion . 
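a concrete way to see how such relationship evidence can be folded into classification is to view each speech segment as a node that receives an individual classifier score and is tied to other segments by soft preference links (same speaker, detected agreement), and to solve the resulting labeling problem as a minimum s-t cut, which is efficiently solvable. the sketch below is that generic construction, offered only as an illustration; the capacities, the link weights, and the use of networkx are choices made here, not a description of the system evaluated later.

```python
import networkx as nx

def joint_classify(ind_scores, same_speaker_pairs, agreement_pairs,
                   w_same=2.0, w_agree=1.0):
    """generic s-t min-cut realization of joint classification with soft
    pairwise preferences: ind_scores[i] in [0, 1] is the individual
    classifier's estimate that segment i is supportive; cutting the edge
    src -> i (cost ind_scores[i]) labels i 'oppose', cutting i -> snk
    (cost 1 - ind_scores[i]) labels i 'support', and separating two linked
    segments costs the link weight."""
    g = nx.DiGraph()
    for i, p in enumerate(ind_scores):
        g.add_edge("src", i, capacity=p)
        g.add_edge(i, "snk", capacity=1.0 - p)
    for u, v in same_speaker_pairs:
        g.add_edge(u, v, capacity=w_same)
        g.add_edge(v, u, capacity=w_same)
    for u, v in agreement_pairs:
        g.add_edge(u, v, capacity=w_agree)
        g.add_edge(v, u, capacity=w_agree)
    _, (support_side, _) = nx.minimum_cut(g, "src", "snk")
    return {i: ("support" if i in support_side else "oppose")
            for i in range(len(ind_scores))}

# example: a weak individual score gets pulled up by a same-speaker link
print(joint_classify([0.9, 0.45], same_speaker_pairs=[(0, 1)], agreement_pairs=[]))
```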
[ cols="<,>,>,>,<",options="header " , ] [ [ using - relationship - information ] ] using relationship information + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + applying an svm to classify each speech segmentin isolation leads to clear improvements over the two baseline methods , as demonstrated in table [ tab : results - local ] .when we impose the constraint that all speech segmentsuttered by the same speaker receive the same label via `` same - speaker links '' , both test - set and development - set accuracy increase even more , in the latter case quite substantially so .the last two lines of table [ tab : results - local ] show that the best results are obtained by incorporating agreement information as well .the highest test - set result , 71.16% , is obtained by using a high - precision threshold to determine which agreement links to add .while the development - set results would induce us to utilize the standard threshold value of 0 , which is sub - optimal on the test set , the agreement - link policy still achieves noticeable improvement over not using agreement links ( test set : 70.81% vs. 67.21% ) .we use speech segmentsas the unit of classification because they represent natural discourse units . as a consequence ,we are able to exploit relationships at the speech - segmentlevel .however , it is interesting to consider whether we really need to consider relationships specifically between speech segmentsthemselves , or whether it suffices to simply consider relationships between the _ speakers _ of the speech segments . in particular , as an alternative to using same - speaker links , we tried a _speaker - based _ approach wherein the way we determine the initial individual - document classification score for each speech segmentuttered by a person in a given debate is to run an svm on the concatenation of _ all _ of s speech segmentswithin that debate .( we also ensure that agreement - link information is propagated from speech - segment to speaker pairs . )how does the use of same - speaker links compare to the concatenation of each speaker s speech segments ?tables [ tab : results - local ] and [ tab : results - global ] show that , not surprisingly , the svm individual - document classifier works better on the concatenated speech segmentsthan on the speech segmentsin isolation. 
however , the effect on overall classification accuracy is less clear : the development set favors same - speaker links over concatenation , while the test set does not .but we stress that the most important observation we can make from table [ tab : results - global ] is that once again , the addition of agreement information leads to substantial improvements in accuracy .recall that in in our experiments , we created finite - weight agreement links , so that speech segmentsappearing in pairs flagged by our ( imperfect ) agreement detector can potentially receive different labels .we also experimented with _ forcing _ such speech segmentsto receive the same label , either through infinite - weight agreement links or through a speech - segmentconcatenation strategy similar to that described in the previous subsection .both strategies resulted in clear degradation in performance on both the development and test sets , a finding that validates our encoding of agreement information as `` soft '' preferences .we have seen several cases in which the method that performs best on the development set does not yield the best test - set performance .however , we felt that it would be illegitimate to change the train / development / test sets in a post hoc fashion , that is , after seeing the experimental results . moreover , and crucially , it is very clear that using agreement information , encoded as preferences within our graph - based approach rather than as hard constraints , yields substantial improvements on both the development and test set ; this , we believe , is our most important finding .[ [ politically - oriented - text ] ] politically - oriented text + + + + + + + + + + + + + + + + + + + + + + + + + sentiment analysis has specifically been proposed as a key enabling technology in erulemaking , allowing the automatic analysis of the opinions that people submit .there has also been work focused upon determining the political leaning ( e.g. , `` liberal '' vs. `` conservative '' ) of a document or author , where most previously - proposed methods make no direct use of relationships between the documents to be classified ( the `` unlabeled '' texts ) .an exception is , who experimented with determining the political orientation of websites essentially by classifying the concatenation of all the documents found on that site .others have applied the nlp technologies of near - duplicate detection and topic - based text categorization to politically oriented text . [ [ detecting - agreement ] ] detecting agreement + + + + + + + + + + + + + + + + + + + we used a simple method to learn to identify cross - speaker references indicating agreement .more sophisticated approaches have been proposed , including an extension that , in an interesting reversal of our problem , makes use of sentiment - polarity indicators within speech segments . also relevantis work on the general problems of dialog - act tagging , citation analysis , and computational rhetorical analysis .we currently do not have an efficient means to encode _ disagreement _ information as hard constraints ; we plan to investigate incorporating such information in future work .[ [ relationships - between - the - unlabeled - items ] ] relationships between the unlabeled items + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + consider sequential relations between different types of emails ( e.g. 
, between requests and satisfactions thereof ) to classify messages , and thus also explicitly exploit the structure of conversations. previous sentiment - analysis work in different domains has considered inter - document similarity or explicit inter - document references in the form of hyperlinks .notable early papers on graph - based semi - supervised learning include blum and chawla , bansal , blum , and chawla , kondor and lafferty , and joachims .zhu maintains a survey of this area .recently , several alternative , often quite sophisticated approaches to _ collective classification _ have been proposed .it would be interesting to investigate the application of such methods to our problem .however , we also believe that our approach has important advantages , including conceptual simplicity and the fact that it is based on an underlying optimization problem that is provably and in practice easy to solve .in this study , we focused on very general types of cross - document classification preferences , utilizing constraints based only on speaker identity and on direct textual references between statements .we showed that the integration of even very limited information regarding inter - document relationships can significantly increase the accuracy of support / opposition classification .the simple constraints modeled in our study , however , represent just a small portion of the rich network of relationships that connect statements and speakers across the political universe and in the wider realm of opinionated social discourse . one intriguing possibility is to take advantage of ( readily identifiable ) information regarding interpersonal relationships , making use of speaker / author affiliations , positions within a social hierarchy , and so on . or , we could even attempt to model relationships between topics or concepts , in a kind of extension of collaborative filtering .for example , perhaps we could infer that two speakers sharing a common opinion on evolutionary biologist richard dawkins ( a.k.a .`` darwin s rottweiler '' ) will be likely to agree in a debate centered on intelligent design . while such functionality is well beyond the scope of our current study , we are optimistic that we can develop methods to exploit additional types of relationships in future work .we thank claire cardie , jon kleinberg , michael macy , andrew myers , and the six anonymous emnlp referees for valuable discussions and comments .we also thank reviewer 1 for generously providing additional _ post hoc _ feedback , and the emnlp chairs eric gaussier and dan jurafsky for facilitating the process ( as well as for allowing authors an extra proceedings page ) .this paper is based upon work supported in part by the national science foundation under grant no .iis-0329064 and an alfred p. sloan research fellowship .any opinions , findings , and conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views or official policies , either expressed or implied , of any sponsoring institutions , the u.s .government , or any other entity .agarwal , alekh and pushpak bhattacharyya .2005 . sentiment analysis : a new approach for effective use of linguistic knowledge and exploiting similarities in a set of documents to be classified . in _ proceedings of the international conference on natural language processing ( icon)_. bansal , nikhil , avrim blum , and shuchi chawla .correlation clustering . 
in _ proceedings of the symposium on foundations of computer science ( focs ) _ , pages 238247 .journal version in _ machine learning journal _ , special issue on theoretical advances in data clustering , 56(1 - 3):89113 ( 2004 ) .daelemans , walter and vronique hoste . 2002 .evaluation of machine learning methods for natural language processing tasks . in _ proceedings of the third international conference on language resources and evaluation ( lrec ) _ , pages 755760 .efron , miles .cultural orientation : classifying subjective documents by cociation [ sic ] analysis . in _ proceedings of the aaai fall symposium on style and meaning in language , art , music , and design_ , pages 4148 .galley , michel , kathleen mckeown , julia hirschberg , and elizabeth shriberg .2004 . identifying agreement and disagreement in conversational speech :use of bayesian networks to model pragmatic dependencies . in _ proceedings of the 42nd acl _ ,pages 669676 .goldberg , andrew b. and jerry zhu .2006 . seeing stars when there are nt many stars : graph - based semi - supervised learning for sentiment categorization . in _ textgraphs : hlt / naacl workshop on graph - based algorithms for natural language processing_. mullen , tony and robert malouf . 2006 . a preliminary investigation into sentiment analysis of informal political discourse . in _ proceedings of the aaai symposium on computational approaches to analyzing weblogs, pages 159162 .shulman , stuart , jamie callan , eduard hovy , and stephen zavestoski .language processing technologies for electronic rulemaking : a project highlight . in _ proceedings of digital government research ( dg.o )_ , pages 8788 .stolcke , andreas , noah coccaro , rebecca bates , paul taylor , carol van ess - dykema , klaus ries , elizabeth shriberg , daniel jurafsky , rachel martin , and marie meteer . 2000 .dialogue act modeling for automatic tagging and recognition of conversational speech ., 26(3):339373 .zhu , jerry .semi - supervised learning literature survey .computer sciences technical report tr 1530 , university of wisconsin - madison .available at http://www.cs.wisc.edu//pub/ssl_survey.pdf ; has been updated since the initial 2005 version .
|
we investigate whether one can determine from the transcripts of u.s . congressional floor debates whether the speeches represent support of or opposition to proposed legislation . to address this problem , we exploit the fact that these speeches occur as part of a discussion ; this allows us to use sources of information regarding relationships between discourse segments , such as whether a given utterance indicates agreement with the opinion expressed by another . we find that the incorporation of such information yields substantial improvements over classifying speeches in isolation .
|
a possible way to avoid the inconsistence of geometric brownian motion as a model of speculative markets is the assumption that the volatility is itself a time - depending random variable . within this assumption thereexist the discrete arch and garch models as well as the continuous stochastic volatility models .it is empirically established that the autocorrelation function of volatility decays very slowly with time exhibiting two different exponents of the power - law distribution : for short time scales and for the long time ones . however , common stochastic volatility models are able reproduce only one time scale exponential decay .there is empirical evidence that the trading activity is a stochastic variable with the power - law probability distribution function ( pdf ) and the long - range correlations resembling the power - law statistical properties of volatility .empirical analysis confirms that the long - range correlations in volatility arise due to those of the trading activity . on the other hand ,the trading activity can be modeled as the event flow of the stochastic point process with more evident microscopic interpretation of the observed power - law statistics . recently , we proposed the stochastic model of the trading activity in the financial markets as poissonian - like process driven by the stochastic differential equation ( sde ) . herewe present the detailed comparison of the model with the empirical data of the trading activity for 26 stocks traded on nyse .this enables us to present a more precise model definition based on the scaled equation , universal for all stocks .the proposed form of the difference equation for the intertrade time can be interpreted as a discrete iterative description of the proposed model , based on sde .we consider trades in the financial market as identical point events .such point process is stochastic and defined by the stochastic interevent time , with being the occurrence times of the events .recently we proposed to model the flow of trades in the financial markets as poissonian - like process driven by the multiplicative stochastic equation , i.e. we define the rate of this process by the continuous stochastic differential equation \frac{n^4}{(n\epsilon+1)^2}\mathrm{d } t+\frac{\sigma n^{5/2}}{(n\epsilon+1)}\mathrm{d}w .\label{eq : nstoch2}\ ] ] this sde with the wiener noise describes the diffusion of the stochastic rate restricted in some area : from the side of the low values by the term and from the side of high values by the relaxation . the general relaxation factor is keyed with multiplicative noise to ensure the power - law distribution of .the multiplicative noise is combined of two powers to ensure the spectral density of with two power law exponents .this form of the multiplicative noise helps us to model the empirical probability distribution of the trading activity , as well .a parameter defines the crossover between two areas of diffusion . for more detailssee .equation ( [ eq : nstoch2 ] ) has to model stochastic rate with two power - law statistics , i.e. 
, pdf and power spectral density or autocorrelation , resembling the empirical data of the trading activity in the financial markets .we will analyze the statistical properties of the trading activity defined as integral of in the selected time window , the poissonian - like sequence of trades described by the intertrade times can be generated by the conditional probability in the case of single exponent power law model , see , when pdf of is , the distribution of intertrade time in -space has the integral form \mathrm{d } n,\label{eq : taupdistrib}\ ] ] with defined from the normalization , .the explicit expressions of the integral ( [ eq : taupdistrib ] ) are available for the integer values of .when , pdf ( [ eq : taupdistrib ] ) is expressed through the bessel function of the second kind whereas for the more complicated structures of distribution expressed in terms of hypergeometric functions arise .now we have the complete set of equations defining the stochastic model of the trading activity in the financial markets .we proposed this model following our increasing interest in the stochastic fractal point processes .our objective to reproduce in details statistics of trading activity conditions rather complicated form of the sde ( [ eq : nstoch2 ] ) and low expectation of analytical results . in this paperwe focus on the numerical analysis and direct comparison of the model with the empirical data . in order to achieve more general description of statistics for different stocks we introduce the scaling to eq .( [ eq : nstoch2 ] ) with scaled time , scaled rate and .then eq .( [ eq : nstoch2 ] ) becomes \frac{x^4}{(x\varepsilon ' + 1)^2}+\frac{x^{5/2}}{(x\varepsilon'+1)}\mathrm{d}w_s .\label{eq : nscaled}\ ] ] the parameter specific for various stocks is now excluded from the sde and we have only three parameters to define from the empirical data of trading activity in the financial markets .we solve eq .( [ eq : nscaled ] ) using the method of discretization . introducing the variable step of integration , the differential equation ( [ eq : nscaled ] ) transforms to the difference equation \frac{x_k^3}{(x_k\epsilon ' + 1)^2}+\kappa\frac{x_k^{2}}{(x_k\epsilon'+1)}\varepsilon_k , \\t_{k+1 } & = & t_k+\kappa^2/x_k \label{eq : difference}\end{aligned}\ ] ] with being a small parameter and defining gaussian noise with zero mean and unit variance . with the change of variables one can transform eq .( [ eq : nstoch2 ] ) into \frac{1}{(\epsilon+\tau)^2}dt+\sigma\frac{\sqrt { \tau}}{\epsilon+\tau}\mathrm{d}w \label{eq : taustoch}\ ] ] with limiting time .we will show that this form of driving sde is more suitable for the numerical analysis .first of all , the powers of variables in this equation are lower and the main advantage is that the poissonian - like process can be included into the procedure of numerical solution of sde .we introduce a scaling of eq .( [ eq : taustoch ] ) with the nondimensional scaled time , scaled intertrade time and . then( [ eq : taustoch ] ) becomes \frac{1}{(\epsilon'+y)^2}\mathrm{d}t_s+\frac{\sigma}{\tau_0 } \frac{\sqrt{y}}{\epsilon'+y}\mathrm{d}w_s .\label{eq : tauscaled}\ ] ] as in the real discrete market trading we can choose the instantaneous intertrade time as a step of numerical calculations , , or even more precisely as the random variables with the exponential distribution . 
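returning to the difference equation for the scaled rate given above: the surviving latex fragments fix the drift structure x_k^3 / ( x_k \epsilon' + 1 )^2, the noise term \kappa x_k^2 / ( x_k \epsilon' + 1 ) \varepsilon_k, and the variable physical-time step \kappa^2 / x_k, but the bracketed drift factor itself was stripped. the sketch below therefore takes that bracket as a caller-supplied function; its placement relative to \kappa^2 and the small positivity floor are assumptions made only to keep the sketch runnable.

```python
import numpy as np

def iterate_scaled_rate(x0, n_steps, eps, kappa, drift_bracket,
                        rng=np.random.default_rng(1)):
    """variable-step iteration of the scaled rate equation following the
    surviving structure of the difference equation; `drift_bracket(x)` stands
    in for the bracketed factor that was stripped from the text."""
    x, t = x0, 0.0
    xs, ts = [x0], [0.0]
    for _ in range(n_steps):
        noise = rng.standard_normal()
        x = x + kappa**2 * drift_bracket(x) * x**3 / (x * eps + 1.0)**2 \
              + kappa * x**2 / (x * eps + 1.0) * noise
        x = max(x, 1e-6)            # keep the rate positive numerically
        t += kappa**2 / x           # variable physical-time step
        xs.append(x)
        ts.append(t)
    return np.array(ts), np.array(xs)

# placeholder usage only; the constant bracket below is not the model's expression:
# ts, xs = iterate_scaled_rate(1.0, 10000, eps=0.1, kappa=0.05, drift_bracket=lambda x: 1.0)
```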
then we get the iterative equation resembling tick by tick trades in the financial markets , \frac{h_k}{(\epsilon'+y_k)^2}+\frac{\sigma}{\tau_0 } \frac{\sqrt{y_k h_k } } { \epsilon'+y_k}\varepsilon_k .\label{eq : tauiterat}\ ] ] in this numerical procedure the sequence of gives the modulating rate and the sequence of is the poissonian - like intertrade times .seeking higher precision one can use the milshtein approximation instead of eq .( [ eq : tauiterat ] ) .we will analyze the tick by tick trades of 26 stocks on nyse traded for 27 months from january , 2005 .an example of the empirical histograms of and and power spectrum of ibm trade sequence are shown on figure 1 .we will adjust the parameters of the poissonian - like process driven by sde eq .( [ eq : nstoch2 ] ) or eq .( [ eq : tauiterat ] ) to reproduce numerically the empirical trading statistics . of the intertrade time sequence ; b ) histogram of trading activity calculated in the time interval ; c ) power spectral density of the sequence of trades , straight lines approximate power spectrum with and .,title="fig : " ] of the intertrade time sequence ; b ) histogram of trading activity calculated in the time interval ; c ) power spectral density of the sequence of trades , straight lines approximate power spectrum with and .,title="fig : " ] of the intertrade time sequence ; b ) histogram of trading activity calculated in the time interval ; c ) power spectral density of the sequence of trades , straight lines approximate power spectrum with and .,title="fig : " ] the histograms and power spectrum of the sequences of trades for all 26 stocks are similar to ibm shown on fig . [ fig:1 ] . from the histogram define a model parameter for every stock .one can define the exponent from the power - law tail of the histogram .the power spectrum exhibits two scaling exponents and when approximated by power - law .the empirical values of , , and for are presented in table [ tab1 ] ..the empirical values of , , and . [ cols="^,^,^,^,^,^,^,^,^,^",options="header " , ] values of and fluctuate around and , respectively , as in the separate stochastic model realizations .the crossover frequency of two power - laws exhibits some fluctuations around the value as well .one can observe considerable fluctuations of the exponent around the mean value . notice that value of exponent for integrated trading activity is higher than for .our analysis shows that the explicit form of the , eq .( [ eq : taupdistrib ] ) with and , fits empirical histogram of for all stocks very well and fitting parameter can be defined for every stock .values of are presented on table [ tab1 ] . from the point of view of the proposed modelthe parameter is specific for every stock and reflects the average trading intensity in the calm periods of stock exchange .we eliminate these specific differences in our model by scaling transform of eq .( [ eq : taustoch ] ) arriving to the nondimensional sde ( [ eq : tauscaled ] ) and its iterative form ( [ eq : tauiterat ] ) .these equations and parameters , , and define our model , which has to reproduce in details power - law statistics of the trading activity in the financial markets . from the analysis based on the research of fractal stochastic pointprocesses and by fitting the numerical calculations to the empirical data we arrive at the collection of parameters , , . 
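given a simulated or empirical sequence of trade times, the statistics compared throughout this section, namely the trading activity counted in windows of a chosen length and the power spectral density of the event flow, can be estimated as in the following sketch; the binning and normalization choices are analysis conventions rather than model parameters.

```python
import numpy as np

def activity_and_spectrum(event_times, window, n_bins=2**16):
    """(i) trading activity: number of events in consecutive windows;
    (ii) periodogram estimate of the power spectral density of the event
    flow, computed from finely binned event counts."""
    t_max = float(event_times[-1])
    counts, _ = np.histogram(event_times, bins=np.arange(0.0, t_max, window))
    dt = t_max / n_bins
    binned, _ = np.histogram(event_times, bins=n_bins, range=(0.0, t_max))
    spectrum = np.abs(np.fft.rfft(binned - binned.mean()))**2 * dt / n_bins
    freqs = np.fft.rfftfreq(n_bins, d=dt)
    return counts, freqs[1:], spectrum[1:]
```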
in figure [ fig:2 ]we present histogram of the sequence of , ( a ) , and the power spectrum of the sequence of trades as point events , ( b ) , generated numerically by eq .( [ eq : tauiterat ] ) with the adjusted parameters . ,( a ) , and power spectrum , ( b ) , of the sequence of point events calculated from eq .( [ eq : tauiterat ] ) with the adjusted parameters , , .,title="fig : " ] , ( a ) , and power spectrum , ( b ) , of the sequence of point events calculated from eq .( [ eq : tauiterat ] ) with the adjusted parameters , , .,title="fig : " ] for every selected stock one can easily scale the model sequence of intertrade times by empirically defined to get the model sequence of trades for this stock .one can scale the model power spectrum by for getting the model power spectrum for the selected stock . of intertrade time for ibm and mmm stocks ; empirical histogram , tick gray line , modeled poissonian - like distribution , solid line , distribution of driving in eq .( [ eq : tauiterat ] ) , dashed line ., title="fig : " ] of intertrade time for ibm and mmm stocks ; empirical histogram , tick gray line , modeled poissonian - like distribution , solid line , distribution of driving in eq .( [ eq : tauiterat ] ) , dashed line ., title="fig : " ] .,title="fig : " ] .,title="fig : " ] we proposed the iterative eq .( [ eq : tauiterat ] ) as quite accurate stochastic model of trading activity in the financial markets . nevertheless , one has to admit that real trading activity often has considerable trend as number of shares traded and the whole activity of the markets increases .this might have considerable influence on the empirical long range distributions and power spectrum of the stocks in consideration .the trend has to be eliminated from the empirical data for the detailed comparison with the model .only few stocks from the selected list in table [ tab1 ] have stable trading activity in the considered period . in figures[ fig:3 ] , [ fig:4 ] and [ fig:5 ] we provide the comparison of the model with the empirical data of those stocks . as we illustrate in figure [ fig:3 ], the model poissonian - like distribution can be easily adjusted to the empirical histogram of intertrade time , with for ibm trade sequence and with for mmm trading .the comparison with the empirical data is limited by the available accuracy , , of stock trading time .the probability distribution of driving eq . 
( [ eq : tauiterat ] ) , dashed line , illustrates different market behavior in the periods of the low and high trading activity .the poissonian nature of the stochastic point process hides these differences by considerable smoothing of the pdf .figure [ fig:4 ] illustrates that the long range memory properties of the trading activity reflected in the power spectrum are universal and arise from the scaled driving sde ( [ eq : nscaled ] ) and ( [ eq : tauscaled ] ) .one can get power spectrum of the selected stock trade sequence scaling model spectrum , figure [ fig:2 ] ( b ) , with .the pdf of integrated trading activity is more sensitive to the market fluctuations .nevertheless , as we demonstrate in figure [ fig:5 ] , the model is able to reproduce the power - law tails very well .we proposed the generalization of the point process model as the poissonian - like sequence with slowly diffusing mean interevent time and adjusted the parameters of the model to the empirical data of trading activity in the financial markets .a new form of scaled equations provides the universal description with the same parameters applicable for all stocks .the proposed new form of the continuous stochastic differential equation enabled us to reproduce the main statistical properties of the trading activity and waiting time , observable in the financial markets . in proposed modelthe fractured power - law distribution of spectral density with two different exponents arise .this is in agreement with the empirical power spectrum of the trading activity and volatility and implies that the market behavior may be dependent on the level of activity .one can observe at least two stages in market behavior : calm and excited .ability to reproduce empirical pdf of intertrade time and trading activity as well as the power spectrum in every detail for various stocks provides a background for further stochastic modeling of volatility .r. engle , econometrica 50 ( 1982 ) 987 .t. bollersev , journal of econometrics 31 ( 1986 ) 307 .r. baillie , t. bollersev and h.o .mikkelsen , journal of econometrics 74 ( 1996 ) 3 .r. engle and a. patton , quant .finance 1 ( 2001 ) 237 .j. perello and j. masoliver , phys .e 67 ( 2003 ) 037102 .j. perello , j. masoliver and n. anento , physica a 344 ( 2004 ) 134 .a. dragulescu , v. yakovenko , quant .finance 2 ( 2002 ) 443 .hull , a. white , the journal of finance 42 ( 1987 ) 281 .mandelbrot , j. business 36 ( 1963 ) 394 .t. lux , appl . fin .6 ( 1996 ) 463 .v. plerou , p. gopikrishnan , x. gabaix , l.a.n . amaral and h.e .stanley , quant .finance 1 ( 2001 ) 262 .x. gabaix , p. gopikrishnan , v. plerou , h.e .stanley , nature 423 ( 2003 ) 267 .v. gontis and b. kaulakys , j. stat .( 2006 ) p10016 .v. gontis and b. kaulakys , physica a 382 ( 2007 ) 114. v. gontis and b. kaulakys , physica a 343 ( 2004 ) 505 .v. gontis and b. kaulakys , physica a 344 ( 2004 ) 128 .b. kaulakys , v. gontis and m. alaburda , phys .e 71 ( 2005 ) 051105 .b. kaulakys , j. ruseckas , v. gontis and m. alaburda , physica a 365 ( 2006 ) 217 .
|
we propose a point process model as a poissonian - like stochastic sequence with a slowly diffusing mean rate and adjust the parameters of the model to the empirical data of trading activity for 26 stocks traded on nyse . the proposed scaled stochastic differential equation provides a universal description of the trading activity , with the same parameters applicable for all stocks . financial markets , trading activity , stochastic equations , point processes , 89.65.gh , 02.50.ey , 05.10.gg
|
let us model a physical body by a bounded set , , with smooth boundary , and the following spatially varying quantities : heat capacity , density , electric conductivity , and ( possibly anisotropic ) thermal conductivity , each defined for .consider applying a spatially and temporally variable electrical voltage distribution at the boundary starting at time .then , if there are no sinks or sources of current inside , the electric potential inside the body satisfies the conductivity equation equation is often used as a mathematical model for electrical impedance tomography ( eit ) , where one measures the current through the boundary caused by a family of static voltage distributions and recovers from such voltage - to - current map . here is the unit outer normal to .we refer to for an extensive survey of the mathematical developments in eit .see also for results in the two - dimensional case , and for results in higher dimensional cases . for counterexamples to uniqueness of time - harmonic inverse problems involvingvery anisotropic and degenerate material parameters , leading to the phenomenon of invisibility , see .our aim here is somewhat different as we wish to couple _heat conduction _ to the problem .let us denote the electrical power density inside by : now acts as a source of heat inside .assuming that the body is at a constant ( zero ) temperature at the time when the voltage is first applied , and the surface of the body is kept at that temperature at all times , the temperature distribution inside satisfies the following heat equation : where .the model , , is based on the physical assumption that the heat transfer is so slow that the quasistatic ( dc ) model for the electric potential is realistic .associated to the coupled system , , , we introduce the voltage - to - heat flow map defined by the idea is to measure the heat flow through the boundary caused by the heat from the electric current resulting from the applied voltage distribution .our main result is theorem [ thm_main ] below , stating that under certain smoothness assumptions , the coefficients , , and are uniquely determined from the knowledge of the voltage - to - heat flow map .the method of proof of theorem [ thm_main ] also outlines a constructive reconstruction procedure for recovering conductivity from .namely , it turns out that applying a temporally static voltage distribution and studying at thermal equilibrium ( ) yields the knowledge of the dirichlet - to - neumann map related to the eit problem. 
then one can recover using nachman s reconstruction result .notice that various hybrid imaging methods have been proposed and analyzed recently .examples include thermoacoustic and photoacoustic imaging , combination of electrical and magnetic probing , electrical and acoustic imaging and magnetic and acoustic imaging .theorem [ thm_main ] suggests a new hybrid imaging method , utilizing two diffuse modes of propagation : electrical prospecting and heat transfer - based probing .we emphasize that the proposed method recovers complementary information about three different physical properties .we also note that in many applications where one wants to reconstruct the heat transfer parameters and , the use of electric boundary sources may be easier than controlling the temperature or the heat flux at the boundary .concerning inverse problems for the heat equation , we refer to .we remark that in practice one might use a measurement setup shown in figure [ fig : electrodes ] .however , analysis of such discrete measurements is outside the scope of this paper , and in the mathematical results below we work with the continuum models , , , and .( 300,130 ) ( -10,5 ) ( 2,50) ( 50,80) ( 50,60) ( 50,40) ( 90,94)(1,0)41 ( 134,91)electrode for applying voltage ( 98,44)(1,0)33 ( 134,41)heat flow sensor this paper is organized as follows . in section [ sec : statement ] we state our assumptions and results in a mathematically precise form . in section [ sec_auxiliary ]we give an auxiliary density result for the conductivity equation .section [ sec : eit ] is devoted to the reconstruction of the conductivity .the proof of theorem [ thm_main ] is completed in section [ sec_a_kappa ] , where we show the identifiability of the heat parameters and .finally , appendix a is devoted to the recovery of the boundary values of the matrix from interior to boundary measurements , associated to a suitable elliptic boundary value problem .this result may be of an independent interest .let , , be a bounded domain with boundary .let be a strictly positive function on . then given , ,on the boundary at time , there exists a unique , which solves the boundary value problem see .we have provided that is taken large enough , say .in what follows we shall always choose the sobolev index in this way .consider the anisotropic heat equation here is a real symmetric matrix with , and there exists such that we shall assume that .the operator is formally self - adjoint in and we have we also let denote the friedrichs extension of the operator on , so that the domain of the positive self - adjoint operator is .the solution of is given by the duhamel formula see .associated to the coupled system , , and , we consider the voltage - to - heat flow map , the main result of the paper is as follows .[ thm_main ] assume that , , and are real symmetric matrices with entries , satisfying , for .if , then , and . it turns out that in the course of the proof of theorem [ thm_main ] , we establish a result for the anisotropic heat equation , which may be of independent interest . in order to state the result , consider the inhomogeneous initial boundary value problem for the anisotropic heat equation with an arbitrary source . 
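a concrete, heavily simplified illustration of the voltage-to-heat-flow measurement may be helpful before the proofs. the sketch below works in one space dimension: it solves the conductivity equation for the potential, forms the joule heating \sigma (u')^2, steps the heat equation with that source and zero boundary temperature, and records one-sided approximations of the boundary heat fluxes. the reduction to one dimension, the uniform grid, and the explicit time stepping (stable only for a sufficiently small step) are simplifications for illustration, not part of the theorem's setting.

```python
import numpy as np

def forward_1d(sigma, kappa, c_rho, voltage, t_final, n_steps, length=1.0):
    """one-dimensional toy analogue of the coupled model on (0, length):
    (sigma u')' = 0 with u(0) = 0, u(length) = voltage, joule heating
    h = sigma * (u')**2, then c_rho * dT/dt = d/dx(kappa dT/dx) + h with
    T = 0 at both ends and T(x, 0) = 0; returns boundary heat-flux histories."""
    n = len(sigma)
    dx = length / (n - 1)
    resistance = dx * np.sum(0.5 * (1.0 / sigma[:-1] + 1.0 / sigma[1:]))
    current = voltage / resistance              # constant value of sigma * u'
    heating = current**2 / sigma                # sigma * (u')**2
    temperature = np.zeros(n)
    dt = t_final / n_steps
    kappa_face = 0.5 * (kappa[:-1] + kappa[1:]) # conductivity at cell faces
    flux_left, flux_right = [], []
    for _ in range(n_steps):
        grad = np.diff(temperature) / dx
        div = np.diff(kappa_face * grad) / dx
        temperature[1:-1] += dt * (div + heating[1:-1]) / c_rho[1:-1]
        temperature[0] = temperature[-1] = 0.0
        flux_left.append(-kappa[0] * (temperature[1] - temperature[0]) / dx)
        flux_right.append(-kappa[-1] * (temperature[-1] - temperature[-2]) / dx)
    return np.array(flux_left), np.array(flux_right)
```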
define the map , where is the solution of .[ thm_main_2 ] let , , be a bounded domain with boundary .assume that , and are real symmetric matrices with entries , satisfying , for .if , then and .let , , be a bounded domain with boundary and let be a strictly positive function on .we shall need the following density result , which is a quite straightforward consequence of .[ prop_density_gradients ] the set is dense in .let be such that for any solution of the conductivity equation we have it follows from , see also , that for any satisfying and large enough , the conductivity equation has a solution where satisfies here the constant depends on , , and a finite number of derivatives of .given and , according to , there exist such that , and , . for the solutions and of the form, we get in view of , we may substitute the latter expression into and let . using , we obtain that here we shall view as a strictly positive function on , which is equal to a positive constant near infinity .the identity is equivalent to where denotes the fourier transformation and is the characteristic function of .it follows that is a solution of a second order elliptic equation on with smooth coefficients .since it is compactly supported , by unique continuation we conclude that in .this completes the proof .the purpose of this section is to make the first step in the proof of theorem [ thm_main ] , by establishing the following result . recall that here .[ prop_cond ] the voltage - to - heat flow map determines the conductivity uniquely .when proving proposition [ prop_cond ] , we let ) ] and for , we define so that , the dirac measure at , as .let , for large enough .then the solution of the problem satisfies , where solves let and denote by the solution of with , and , .set thus , is a solution of the following inhomogeneous initial boundary value problem , since is a self - adjoint positive operator in with the domain , the spectrum of is discrete , accumulating at , consisting of eigenvalues of finite multiplicity , .associated to the eigenvalues we have the eigenfunctions , which form an orthonormal basis in . in what follows we shall assume , as we may , that the eigenfunctions are real - valued .hence , with convergence in .therefore , with convergence in , for each fixed .next we notice that thus , since , it follows that for all , here we would like to let . in order to do so, it will be convenient to obtain an explicit representation of the fourier coefficients .set , where the scalar product is taken in the space .it follows from that hence , and is uniformly bounded in , . for and ,fixed , we get using , we may let in , and conclude that in what follows we shall have to distinguish the eigenvalues of the operator , .in order to do that let us continue to denote by the sequence of _ distinct _ eigenvalues of and let denote the multiplicity of .let be an orthonormal basis of the eigenfunctions , corresponding to the eigenvalue . by the uniqueness of the dirichlet series , see , and , we obtain the following result .[ prop_almost_spec ] assume that .then for all , we have and here and are arbitrary functions .let us introduce the following linear continuous operators proposition [ prop_density_gradients ] together with implies that on a dense subset of , and hence , everywhere . on the level of the distribution kernels, we obtain that for all , we shall next need the following result , see . 
since the argument is short , for the convenience of the reader , we give it here .[ lem_alg_1 ] the functions are linearly independent , .assume that there are such that then satisfies by the unique continuation principle , we get in .thus , for .this proves the lemma .our next goal is to analyze the consequences of , and the key step here is the following algebraic result which is similar to ( * ? ? ?* lemma 2.3 ) .[ lem_alg_2 ] let , , , and , , , be such that moreover , assume that the systems , , and are all linearly independent .then and there exists an invertible matrix with real entries such that here we use the notation as is not identically zero in , there exists such that . assuming that we getthat are linearly dependent which contradicts the assumptions of the proposition .thus , there exists such that continuing in the same way , we find points such that the matrix is invertible .it follows from that for any , where thus , for any , where , and therefore , .similarly , using the fact that is linearly independent , we have for any , and . hence , .furthermore , since the system is linearly independent , in the same way as above , we see that there are points such that the vectors form a basis in .thus , implies that and therefore , is invertible .similarly , one can see that for all with an invertible matrix .it follows from that and therefore , since there exist points and such that the vectors ( respectively , ) form a basis in , we get .this proves the claim . if follows from lemma [ lem_alg_2 ] together with and lemma [ lem_alg_1 ] that , and there exists an invertible matrix such that and using and , we have the next step is to show that the matrix is in fact orthogonalthis will follow once we establish the following result . if then .consider the following elliptic boundary value problem , where ( respectively , ) is the solution to the problem with the boundary source ( respectively , ) , large enough .we shall now return to the original notation , where each eigenvalue of the operator is repeated according its multiplicity . since , the problem has the unique solution with convergence in .thus , and therefore , it follows from proposition [ prop_almost_spec ] that if , then define the continuous map is a solution to the problem it follows from together with proposition [ prop_density_gradients ] that on a dense subset of , and thus , everywhere .hence , proposition [ prop_a - boundary ] in appendix a implies that . 
now going back to equation , using lemma [ lem_alg_1 ] we obtain that is an orthogonal matrix .proposition [ prop_almost_spec ] together with gives the following result .[ prop_spectral ] assume that .let be an orthonormal basis in of the dirichlet eigenfunctions of the operator .then the dirichlet eigenvalues ( respectively , ) of the operator ( respectively , ) , counted with multiplicities , satisfy for all and there exists an orthonormal basis in of the dirichlet eigenfunctions of the operator such that for all .we shall next show that proposition [ prop_spectral ] yields that .indeed , let us write where the fourier coefficients are given by thus , .it follows that , for any , and we get .the proof of theorem [ thm_main ] is complete .theorem [ thm_main_2 ] can be proven by exactly the same arguments presented in this section applied to the problem with the right hand sides of the form where is given by .let , , be a bounded domain with boundary , and , , be a real symmetric matrix with .assume that there exists such that consider the following elliptic boundary value problem , for any , the problem has a unique solution and one can define the map , where is the unit outer normal to the boundary .we note that the map is sometimes used to model boundary measurements for optical tomography with diffusion approximation , .we have the following proposition , which is closely related to the earlier boundary reconstruction results of a riemannian metric from the dirichlet to neumann map .[ prop_a - boundary ] the knowledge of the map given by determines the values of on the boundary .we shall recover the values of on the boundary by analyzing the distribution kernel of the map , obtained by constructing a right parametrix for the boundary value problem .let us denote .since is a positive definite matrix , smoothly depending on , we can view as a riemannian manifold with boundary , equipped with the metric , , . to construct a parametrix for, we shall work locally near a boundary point .let and introduce the boundary normal coordinates , , centered at .here stands for some open neighborhood of in . in terms of the boundary normal coordinates , locally near , the boundary defined by , and if and only if . in what follows , we shall write again for the boundary normal coordinates . in the boundary normal coordinates , the metric has the form see , and the principal symbol of the operator is given by therefore , the equation , , has the solutions , we can view as a linear continuous map in the space . in the boundary normal coordinates , the problem has the following form , let where , for and near .the operator is a rough parametrix for the operator , which will be sufficient for our purposes .here we are using the classical quantization of a symbol , which is given by as usual , we say that if locally uniformly in , we have let be the operation of restriction from to and let be the operation of extension by zero from to .we shall construct a right parametrix for the boundary value problem in the following form here and will be constructed as a right parametrix for the boundary value problem in what follows we shall suppress the operator from the notation , as this will cause no confusion .when constructing the operator , we shall follow the standard approach in the theory of elliptic boundary value problems , see . to this end , let be such that for .notice that let be a simple closed smooth curve in the upper half - plane , which encircles the root in the positive sense . 
inwhat follows we may and will choose so that it is independent of , depending on only , i.e. . when , we define the operator where is the fourier transform of . by a contour deformation argument in the complex ,we have we get therefore , since the operator is local .we shall take , for some function , defined locally near , to be found from the boundary condition , i.e. to this end , we shall now prove that is an elliptic pseudodifferential operator on the boundary and compute its principal symbol . by the residue calculus , using , we get where we introduce next a rough parametrix of , given by , where is such that to satisfy , we choose this choice of completes the construction of a rough parametrix for the boundary value problem , given by hence , the parametrix for the problem has the form in the boundary normal coordinates , the operator is given by and therefore , to obtain the claim of the proposition it suffices to analyze the distribution kernel of the operator given by let us first consider the schwartz kernel of the operator which is given by recall that here . restricting the attention to the region , by a contour deformation argument to the lower half plane , we find that and therefore , where next , the operator is a pseudodifferential operator on , given by the principal symbol of the operator is therefore the operator is also a pseudodifferential operator on and its principal symbol is given by , large enough .hence , the principal symbol of the operator is , and therefore , its kernel is given by where .finally , the kernel of the operator is given by here as usual we use the residue calculus , where only the pole in the lower half plane contributes . hence , the kernel of the composition is given by where here occurs as a parameter .now for large and satisfies where is large enough .hence , , depending on the parameter , is the symbol of the composition of two pseudodifferential operators in the tangential directions . by the standard results on pseudodifferential operators ,see , it has the following asymptotic expansion , with the leading term .the knowledge of the operator implies the knowledge of the kernel for any and .this implies the knowledge of the leading term of the latter expression is given by varying , we recover .the proof is complete .the research of k.k . was financially supported by the academy of finland ( project 125599 ) . the research of m.l . and s.s .was financially supported by the academy of finland ( center of excellence programme 213476 and computational science research programme , project 134868 ) .this project was partially conducted at the mathematical sciences research institute , berkeley , whose hospitality is gratefully acknowledged .chazarain , j. , piriou , a. , _ introduction to the theory of linear partial differential equations _ , translated from the french .studies in mathematics and its applications , 14 .north - holland publishing co. , amsterdam - new york , 1982 .hielscher , a. , jacques , s. , wang , l. , and tittel , f. , _ the influence of boundary conditions on the accuracy of diffusion theory in time - resolved reflectance spectroscopy of biological tissue _ , phys .40*(1995 ) , 19571975 .kwon , o. , woo , e. , yoon , j. , and seo , j. , _ magnetic resonance electrical impedance tomography ( mreit ) : simulation study of j - substitution algorithm _ , ieee trans . biomed .eng . , * 49 * ( 2002 ) , no .2 , 160167 .ma , q. , and he , b. 
, _ investigation on magnetoacoustic signal generation with magnetic induction and its application to electrical conductivity reconstruction _ , phys . med . biol . , * 52 * ( 2007 ) , 5085 - 5099 .
|
let , , be a smooth bounded domain and consider a coupled system in consisting of a conductivity equation and an anisotropic heat equation . it is shown that the coefficients , and are uniquely determined from the knowledge of the boundary map , where is the unit outer normal to . the coupled system models the following physical phenomenon . given a fixed voltage distribution , maintained on the boundary , an electric current distribution appears inside . the current in turn acts as a source of heat inside , and the heat flows out of the body through the boundary . the boundary measurements above then correspond to the map taking a voltage distribution on the boundary to the resulting heat flow through the boundary . the presented mathematical results suggest a new hybrid diffuse imaging modality combining electrical prospecting and heat transfer - based probing . * keywords : * electrical impedance tomography , heat transfer , inverse problem , coupled systems * ams subject classification : * 35k20 , 35j25 , 35r30 , 80a23
|
digital library architectures that are concerned with long - term access to digital materials face interesting challenges regarding the representation and actual storage of digital objects and their constituent datastreams . with regard to representation of digital objects, a trend can be observed that converges on the use of xml - based complex object formats such as the mpeg-21 digital item declaration language or mets . in these approaches ,the open archival information system archival information package ( oais aip ) that represents a digital object is an xml - wrapper document that contains a variety of metadata pertaining to the digital object , and that provides the constituent datastreams of the digital object either by - value ( base64-encoded datastream inline in the xml - wrapper ) or by - reference ( pointer to the datastream inline in the xml - wrapper ) .this choice for xml is not surprising .indeed , both its platform - independence nature and the broad industry support provide some guarantees regarding longevity or , eventually , migration paths . moreover , a broad choice of xml processing tools is available , including tools that facilitate the validation of xml documents against schema definitions that specify compliance with regard to both structure and datatypes .however , the choice of an xml - based aip format is only part of the solution .the digital objects - represented by means of xml - wrapper documents - and their constituent datastreams still need to be stored . with this respect , less convergence is observed in digital library architectures , and the following approaches have been explored or are actively used : * storage of the xml - wrapper documents as individual files in a file system : on most operating systems , this approach is penalized by poor performance regarding access , and especially back - up / restore .also , the oais reference model recommends against the storage of preservation description information and content information using directory or file - based naming conventions . *storage of the xml - wrapper documents in sql or native xml databases : this approach provides a flexible storage approach , but it raises concerns for long - term storage because , in database systems , the data are crucially dependent on the underlying system . * storage of the xml - wrapper documents by concatenating many such documents into a single file such as tar , zip , etc . :this approach is appealing because it builds on the simplest possible storage mechanism - a file - and it alleviates the problems of the `` individual file '' approach mentioned before .however , off - the - shelf xml tools are not efficient to retrieve individual xml - wrapper documents from such a concatenation file .the internet archive has devised the arc file , a file - based approach to store the datastreams that result from web crawling .in essence , an arc file is the concatenation of many datastreams , whereby a separation between datastreams is created by a section that provides mainly crawling - related metadata in a text - only format .indexing tools are available to allow rapid access to datastreams based on their identifiers . 
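to make the arc idea concrete, the sketch below scans a file in an arc-like layout and records a byte-offset index keyed on the record identifier. the header layout used here (a single line with an identifier, a date and a length, followed by that many bytes of payload) is a simplification of our own for illustration, not the exact arc 1.0 specification.

```python
def index_arc_like_file(path):
    """Build {identifier: (offset, length)} for a simplified ARC-style file.

    Assumed record layout (illustrative only): a header line of the form
    '<identifier> <date> <length>\n' followed by <length> bytes of payload
    and a blank separator line.
    """
    index = {}
    with open(path, "rb") as fh:
        while True:
            offset = fh.tell()
            header = fh.readline()
            if not header:
                break                       # end of file
            identifier, _date, length = header.split()
            length = int(length)
            fh.seek(length, 1)              # skip the payload bytes
            fh.readline()                   # skip the blank separator line
            index[identifier.decode()] = (offset, length)
    return index


def read_record(path, index, identifier):
    """Return the payload bytes of one record using the offset index."""
    offset, length = index[identifier]
    with open(path, "rb") as fh:
        fh.seek(offset)
        fh.readline()                       # skip the header line
        return fh.read(length)
```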
while the file - based approach to store a collection of datastreams is attractive , the arc file format has limited capabilities for expressing metadata .even the result of the ongoing revision of the arc file format , in which the authors are involved , will probably not allow expressing the extensive metadata that is typical for archival information packages in digital libraries . by all means , it is not clear how various constituent datastreams of a digital object could be tied together in the arc file format , or how their structural relationships could be expressed .moreover , arc files do not provide the validation capabilities that are part of what makes xml - based representation and storage attractive . in this paperwe introduce a representation and storage approach for digital objects that was pioneered in the adore repository effort of the research library of the los alamos national laboratory ( lanl ) .the approach combines the attractive features of the aforementioned techniques by building on two interconnected file - based storage approaches , xmltapes and arc files .these file formats are proposed as a long - term storage mechanism for digital objects and their constituent datastreams .the proposed storage mechanism is independent of the choice of an xml - based complex object format to represent digital objects .it is also independent of the indexing technologies that are used to access embedded digital objects or constituent datastreams : as technologies evolve , new indexing mechanisms can be introduced , while the file - based storage mechanism itself remains unchanged .over the last 2 years , the digital library research and prototyping team of the lanl research library has worked on the design of the adore repository architecture aimed at ingesting , storing , and making accessible to downstream applications a multi - tb heterogeneous collection of digital scholarly assets . as is the case in most digital libraries , assets stored in adore are _ complex _ in the sense that they consist of multiple individual datastreams that jointly form a single logical unit .that logical unit can , for example , be a scholarly publication that consists of a research paper in pdf format , metadata describing the paper expressed in xml , and auxiliary datastreams such as images and videos in various formats , including tiff , jpeg and mpeg . for reasons of clarity , this paper will refer to an asset as a _ digital object _ , and to the individual datastreamsof which the asset consists as _ constituent datastreams_. 
the complex nature of the assets to be ingested into adore led to an interest in representing assets by means of xml wrappers , which itself resulted in the selection of the mpeg-21 didl as the sole way to represent the asset by means of xml documents called didl documents .the actual use of the mpeg-21 didl in adore is described in some detail in the slightly outdated and the more recent .although this paper will illustrate the xmltape / arc storage mechanism for the case where mpeg-21 didl is used to represent digital objects , it will become clear that the approach is independent of the choice of a specific xml - based complex object format .hence , it could also be used when digital objects are represented using mets or ims / cp .an important , oais - inspired , characteristic of the adore environment is its write - once / read - many strategy .indeed , whenever a new version of a previously ingested digital object needs to be ingested , a new didl document is created ; existing didl documents are never updated or edited .the distinction between multiple versions of a digital object is achieved through the use of 2 types of identifiers that are clearly recognizable , and expressed as uris in didl documents : content identifiers : : content identifiers corresponds to what the oais categorizes as content information identifiers .content identifiers are directly related to identifiers that are natively attached to digital objects before their ingestion into adore .indeed , in many cases such digital objects , or their constituent datastreams , have identifiers that were associated with them when they were created or published , such as digital object identifiers for scholarly papers .different versions of a digital object have the same content identifier .package identifiers : : a didl document that represents a digital object functions as an oais aip in adore . during the ingestion process, this didl document itself is accorded a globally unique identifier , which the oais categorizes as an aip identifier .values for package identifier are constructed using the uuid algorithm ; they are expressed as uris in a reserved sub - namespaces of the info : lanl - repo/ namespace , which the lanl research library has registered under the info uri scheme .a separate component in the adore architecture , the identifier locator , keeps track of all versions of a digital object .the adore environment shares two important characteristics with the internet archive : * data feeds in adore are typically received in batches , each of which can contain anywhere between 1,000 and 1,000,000 digital objects .* ingestion of a previously ingested digital object does not result in editing of that previously ingested version , but rather to a from - scratch ingestion of the new version .these characteristics suggest that a file - based , write - once / read - many storage approach should be as appealing to adore as it is to the internet archive .however , internet archive arc files have only limited capabilities to express metadata pertaining to datastreams and to the ingestion process , and they have no obvious way to express structure of a digital object with multiple constituent datastreams . therefore , in adore , an approach has been devised that combines two interconnected file - based storage mechanisms : xmltapes and arc files .an xmltape is an xml file that concatenates the xml - based representation of multiple digital objects . 
in the adore implementation of the xmltape ,the xml - based representations of digital objects are didl documents compliant with the mpeg-21 didl standard . in order to keep these didl documents small and hence easy to process ,they typically contain : by - value : : the metadata pertaining to the digital object , its constituent datastreams , and the ingestion process . by - reference: : the constituent datastreams of the represented digital object . the embedded reference in the didl document points to the datastream that is stored in an arc file that is associated with the xmltape. the nature of the reference and the access mechanism will be explained in section 3.3 and section 3.4 , respectively .the structure of xmltapes is defined by means of an xml schema : * an xmltape starts off with a section that allows for the inclusion of administrative information pertaining to the xmltape itself .typical information includes provenance information of the contained batch of digital objects , identification of the processing software , processing time , etc . *the xmltape - level administrative section is followed by the concatenation of records , each of which has administrative information attached to it .while allowing for the inclusion of a variety of record - level administrative information , the xmltape has two strictly defined administrative elements : the identifier and creation datetime of the contained record .this allows for the use of a generic xmltape processing tool that is independent of the nature of the actual included records . in adore ,these strictly defined administrative information elements translate to the package identifier and the creation datetime of the didl document that is a record in the xmltape .* the records provided in an xmltape can be from any xml namespace .in adore , they are didl documents compliant with the mpeg-21 didl xml schema . *the xmltape itself is a valid and well - formed xml file that can be handled by off - the - shelf xml tools for validation or parsing . in order to interpret an xml file ,it is generally necessary to parse and load the complete file . in case of xmltapes ,such an approach would forbid fast retrieval of the embedded xml documents .therefore , in order to optimize access , two indexes are created .the indexes correspond with the mandatory record - level administrative information , and have identifier and creation datetime of the embedded records as their respective keys .as will be explained in section 3.5 , these indexes facilitate oai - pmh access to the xml documents contained in the xmltape .in addition to these identifier and datetime keys , each index stores the byte - offset and byte - count per matching record . when retrieving a record from an xmltape ,first a lookup in an index file is required to fetch a record position , followed by a seek into the xmltape to return the required record . 
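a minimal sketch of the two-step access pattern described above - build byte-offset / byte-count indexes over the records of an xmltape, then retrieve a record by seeking - is given below; the element and attribute names in the toy tape are invented for illustration and do not reproduce the actual xmltape schema.

```python
import io

# a toy "xmltape": tape-level admin section followed by concatenated records.
# the tag and attribute names below are illustrative, not the actual schema.
TOY_TAPE = (
    b'<?xml version="1.0"?>\n'
    b'<tape>\n'
    b'<tape-admin creator="demo"/>\n'
    b'<record id="info:lanl-repo/demo-0001" datetime="2005-01-01T00:00:00Z">'
    b'<didl>first object</didl></record>\n'
    b'<record id="info:lanl-repo/demo-0002" datetime="2005-01-02T00:00:00Z">'
    b'<didl>second object</didl></record>\n'
    b'</tape>\n'
)

def build_indexes(tape_bytes):
    """Return two dicts, keyed on identifier and on datetime, each mapping to
    the (byte offset, byte count) of the enclosing <record> element."""
    by_id, by_datetime = {}, {}
    start = 0
    while True:
        offset = tape_bytes.find(b"<record", start)
        if offset == -1:
            break
        end = tape_bytes.find(b"</record>", offset) + len(b"</record>")
        chunk = tape_bytes[offset:end].decode()
        identifier = chunk.split('id="', 1)[1].split('"', 1)[0]
        datetime = chunk.split('datetime="', 1)[1].split('"', 1)[0]
        by_id[identifier] = (offset, end - offset)
        by_datetime[datetime] = (offset, end - offset)
        start = end
    return by_id, by_datetime

def get_record(tape, by_id, identifier):
    """Seek into the tape and return one record without parsing the whole file."""
    offset, count = by_id[identifier]
    tape.seek(offset)
    return tape.read(count)

by_id, by_datetime = build_indexes(TOY_TAPE)
tape = io.BytesIO(TOY_TAPE)          # stands in for an open tape file on disk
print(get_record(tape, by_id, "info:lanl-repo/demo-0001").decode())
```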
in some scenarios, it can make sense to physically embed certain constituent datastreams of a digital object in the didl document that is contained in an xmltape .for example , embedding descriptive metadata or image thumbnails may improve access speed for downstream applications .however , in other scenarios , such embedding is neither optimal nor realistic .indeed , the mere size of a constituent datastream , worsened by the required base64 encoding , leads to large didl documents that may cause serious xml processing challenges at the time of dissemination .the arc file format is used by internet archive to store datastreams resulting from large - scale web crawling .the arc file format is structured as follows : * an arc file has a file header that provides administrative information about the arc file itself . *the file header is followed by a sequence of document records .each such record starts with a header line containing some , mainly crawl - related , metadata .the most important fields of the header line are the uri of the crawled document , the timestamp of acquisition of the data , and the size of the data block that follows the header line .the header line is followed by the response to a protocol request such as an http get .tools such as those from netarchive.dk are available to generate and consult an index external to the arc file that facilitates rapid access to contained records , using their uri as the key . as will become clear from section 3.3 , these tools play a core role when connecting xmltapes and associated arc files .both the xmltape and its associated arc files are created during the ingestion process . an insight in the ingestion flow is given here : * when a feed of digital objects is obtained from an information provider , the ingestion process creates a didl document per obtained digital object ; each didl document receives a globally unique package identifier .* typically , all didl documents for a given batch are stored in a single xmltape that can easily store over 1,000,000 didl documents .an xmltape itself also receives a globally unique xmltape identifier .* depending on the size of the constituent datastreams of the digital objects in a given feed , one or more arc files are created during the ingestion process .each arc file is given a globally unique arc file identifier , and arc files are associated with the xmltape by including these arc file identifiers in the xmltape - level administrative section .* for each didl document written to the xmltape :* * each constituent datastream of the represented digital object is accorded a globally unique datastream identifier that has no relation to the aforementioned package identifiers or content identifiers . * * the constituent datastream is written to an arc file ; the uri field of the arc file record header receives the datastream identifier as its value . * * a reference to the constituent datastream is written in the didl document .core elements in this reference are the arc file identifier and the datastream identifier .as will be explained in the next section , these references are encoded in a manner compliant with the niso openurl standard .* indexes are created for both the xmltape and its associated arc files : * * for the xmltape , two indexes are created , with the package identifier and the creation datetime of didl documents as their respective keys . * * per arc file , an index is created that has the datastream identifier as its key . 
*all globally unique identifiers accorded during the ingestion process are created based on the uuid algorithm .the features described in the previous sections allow for a persistent standards - based access to both file - based storage mechanisms .each xmltape is exposed as an autonomous oai - pmh repository with the following characteristics ( protocol elements are shown in * bold * ) : * it has a * baseurl(xmltape ) * , which is an http address that contains the xmltape identifier to ensure its uniqueness .* contained * records * are didl documents only . * the * identifier * used by the oai - pmh is the package identifier .* the * datestamp * used by the oai - pmh is the creation datetime of the didl document . *access based on * identifier * and * datestamp * is enabled via the 2 aforementioned indexes created per xmltape . *the only supported metadata format is didl , with * metadataprefix * didl , defined by the mpeg-21 didl xml schema .* the supported oai - pmh harvesting * granularity * is seconds - level .each arc file is exposed as an openurl resolver : * the openurl resolver has a * baseurl(arc file ) * , which is an http address that contains the arc file identifier to ensure its uniqueness .* references embedded in the didl documents are compliant with the niso openurl standard .as a matter of fact , the reference uses the http - based , key / encoded - value inline openurl .the referent of this openurl is a datastream stored in an arc file , and this datastream is described on the openurl by means of its datastream identifier . * the sole service provided by the openurl resolver is delivery of the datastream referenced on the openurl .this service is enabled by the index that is created per arc file .this section explains how digital objects and constituent datastreams are accessed in the adore environment in which the xmltape / arc approach is used as file - based storage mechanism .figure [ fig : fig1 ] is provided to support a better understanding of the flow . in what follows ,protocol elements are shown in * bold * , while argument values are shown in _italic_. * in a typical scenario , an agent requests a digital object from adore by means of its content identifier .the identifier locator , not described in this paper , contains information on the locations of all version of a digital object with a given content identifier .when queried , it returns a list of package identifiers of didl documents that represent the given digital object , and for each returned package identifier the oai - pmh * baseurl*(xmltape identifier ) of the xmltape in which the didl document resides . * next , the requesting agent selects a specific version of a digital object , thereby implicitly selecting a specific xmltape identifier and the oai - pmh * baseurl*(xmltape identifier ) of the xmltape in which the chosen didl document resides .this didl document can be obtained using the oai - pmh request : + + [ cols= " < , < " , ] * issuing this openurl request results in a look - up of the datastream identifier in the uri - based index that was created for the targeted arc file .this look - up reveals the byte - offset and byte - count of the required datastream in the arc file .given this information , a process can access the datastream in the arc file and return it to the agent .all xmltape and arc file components are implemented in java . 
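the protocol-level access just described can be illustrated with the short sketch below, which assembles an oai-pmh getrecord request against a tape's baseurl and an openurl-style request against an arc file's resolver. the oai-pmh verb and arguments are standard; the openurl query keys shown here follow the general key/encoded-value pattern but are illustrative placeholders rather than the exact keys used in adore.

```python
from urllib.parse import urlencode

def oai_getrecord_url(tape_base_url, package_identifier):
    """OAI-PMH GetRecord request for one DIDL document stored in an XMLtape.
    verb, identifier and metadataPrefix are standard OAI-PMH arguments."""
    query = urlencode({
        "verb": "GetRecord",
        "identifier": package_identifier,
        "metadataPrefix": "didl",
    })
    return f"{tape_base_url}?{query}"

def openurl_datastream_url(arc_base_url, datastream_identifier):
    """OpenURL-style request resolving a datastream stored in an ARC file.
    url_ver and rft_id are standard OpenURL 1.0 KEV keys; the svc_id value
    below is a hypothetical service identifier used only for illustration."""
    query = urlencode({
        "url_ver": "Z39.88-2004",
        "rft_id": datastream_identifier,
        "svc_id": "info:lanl-repo/svc/getDatastream",   # hypothetical value
    })
    return f"{arc_base_url}?{query}"

# hypothetical identifiers and base URLs, for illustration only
print(oai_getrecord_url("http://repository.example.org/oai/tape-0001",
                        "info:lanl-repo/i/demo-0001"))
print(openurl_datastream_url("http://repository.example.org/openurl/arc-0001",
                             "info:lanl-repo/ds/demo-0001"))
```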
due to the standards - based approach, several off - the - shelf components have been used .the xmltape indexes are implemented with berkeley db java edition , while oai - pmh access is facilitated by oclc s oaicat software which , in collaboration with oclc , was extended to support access to multiple oai - pmh repositories in a single installation .creation , indexing and access to arc files are implemented using the netarchiv.dk toolset .the performance and scalability of arc files are demonstrated by the internet archive and its wayback machine , which stores more than 400 terabytes of data .the performance of the xmltape solution depends on the choice of the underlying indexing and retrieval tools .the file - based nature of both xmltapes and arc files makes it straightforward to distribute content over multiple disks and servers .two aspects of the reported work will require future updating of the xmltape / arc approach : * first , a problem related to the indexing of xmltapes must be resolved .many xml parsers do not support byte - level processing .however , correct byte - level location is essential to yield a waterproof solution for the two indexes that are created for xmltapes , both of which are based on byte - count and byte - offset .this problem currently limits the choice of xml tools that can be used for the indexing process .a fundamental solution to this problem should come from support for the dom level 3 api in xml tools , as this api requires support for byte - level location . * second , under the umbrella of the international internet preservation consortium ( iipc ) , a conglomerate of the internet archive and national libraries , the arc file format is undergoing a revision .formal requirements for the revised format have been specified , including oais compliance , ability to deal with all internet protocols , support of metadata , and capability to verify data integrity .the authors are involved in this effort , and have provided input , some of which is aimed at making the revised file format even more suitable for the use case of storing local content , in addition to the typical web crawling use case . at the time of writing , a draft proposal for a warc file format is available and awaiting further comments .once a new format is accepted , existing arc files in adore will be converted , and new tools compliant with the new format will be put in place .this paper has described a storage approach for digital objects and its constituent datastreams that has been pioneered in the context of the adore repository effort by the lanl research library .the approach combines two interconnected file - based storage mechanisms that are made accessible in a protocol - based manner : * xmltapes concatenate xml - based representations of multiple digital objects , and are made accessible through the oai - pmh . *arc files concatenate constituent datastreams of digital objects , and are made accessible through an openurl resolver . * the interconnection between both is provided by conveying the identifiers of arc files associated with an xmltape as administrative information in the xmltape , and by including openurl references to constituent datastreams of a digital object in the xml - based representation of that digital object .the approach is appealing for several reasons : * the file - based approach is inherently simple , and dramatically reduces the dependency on other components as it exists with database - oriented storage . 
* the disconnection of the indexes required for access from the file - based storage approach allows retaining the files over time , while the indexes can be created using other techniques as technologies evolve .* the protocol - based nature of the access further increases the flexibility in light of evolving technologies as it introduces a layer of abstraction between the access method and the technology by which actual access is implemented . *the xmltape approach is inspired by the arc file format , but provides several additional attractive features .it provides a native mechanism to store xml - based representations of digital objects that are increasingly being used in digital library architectures .this yields the ability to use of off - the - shelf xml processing tools for tasks such as validating and parsing .it also provides the flexibility to easily deal with digital objects that have multiple constituent datastreams , and to attach a wide variety of metadata to both those digital objects and their datastreams .of special interest for preservation purposes is the ability to include xml signatures for constituent datastreams ( stored themselves outside of the xmltape ) as metadata within the xml - based representation of a digital object stored in the xmltape . * used in this dual file - based storage approach , arc files keep the appeal they have in the context of the internet archive . for adore , they are appealing for additional reasons , including the existence of off - the - shelf processing tools , the proven use in a large - scale environment , and the prospect of the format or a new version thereof being used in the international context of the international internet preservation consortium that groups the internet archive and national libraries worldwide . as can be understood , the proposed xmltape / arc approachis not tied to adore s choice of mpeg-21 didl as the complex object format to represent digital objects .the approach can also be used when digital objects are represented using other formats such as mets or ims / cp . as a matter of fact , at lanl , the xmltape approach is even used to store the results from oai - pmh harvesting of dublin core records , in which case the record - level administrative information contains the oai - pmh identifier and datestamp of the dublin core record to which it is attached . while currently untested , the proposed approach could also be used as a mechanism to transport large archives encoded as xmltape / arc collections from one system to another .the authors would like to thank jeff young from oclc for his willingness to update the oaicat software to accommodate the multiple repository use case . andmany thanks to our lanl research library colleagues jeroen bekaert , mariella di giacomo , and thorsten schwander for their input in devising many facets of the adore architecture . finally thanks to michael l. nelson for proofreading a draft of this paper . van de sompel , h. , bekaert , j. , liu , x. , balakireva , l. , schwander , t. ( accepted submission ) : adore : a modular , standards - based digital object repository . the computer journal ( 2005 ) .preprint at http://arxiv.org/abs/cs.dl/0502028 bekaert , j. , hochstenbach , p. , van de sompel , h. : using mpeg-21 didl to represent complex digital objects in the los alamos national laboratory digital library .d - lib magazine , 9(11 ) ( 2003 , november ) retrieved from http://dx.doi.org/10.1045/november2003-bekaert van de sompel , h. , hammond , t. , neylon , e. 
, weibel , s. : the `` info '' uri scheme for information assets with identifiers in public namespaces ( 2nd ed . )( 2005 , january 12 ) retrieved from http://info-uri.info/registry/docs/drafts/draft-vandesompel-info-uri-03.txt lagoze , c. , van de sompel , h. , nelson , m. l. , warner , s. ( eds . ) : the open archives initiative protocol for metadata harvesting ( 2nd ed . )( 2002 , june ) retrieved from http://www.openarchives.org/oai/openarchivesprotocol.htm
|
this paper introduces the write - once / read - many xmltape / arc storage approach for digital objects and their constituent datastreams . the approach combines two interconnected file - based storage mechanisms that are made accessible in a protocol - based manner . first , xml - based representations of multiple digital objects are concatenated into a single file named an xmltape . an xmltape is a valid xml file ; its format definition is independent of the choice of the xml - based complex object format by which digital objects are represented . the creation of indexes for both the identifier and the creation datetime of the xml - based representation of the digital objects facilitates oai - pmh - based access to digital objects stored in an xmltape . second , arc files , as introduced by the internet archive , are used to contain the constituent datastreams of the digital objects in a concatenated manner . an index for the identifier of the datastream facilitates openurl - based access to an arc file . the interconnection between xmltapes and arc files is provided by conveying the identifiers of arc files associated with an xmltape as administrative information in the xmltape , and by including openurl references to constituent datastreams of a digital object in the xml - based representation of that digital object .
|
grover s algorithm allows one to find a marked item in an unsorted database quadratically faster compared with the best classical algorithm .it is the paradigm for many quantum algorithms that use exhaustive search .the main technique used in grover s algorithm , called amplitude amplification , can be applied in many computational problems providing gain in time complexity .a related problem is to find a marked location in a spatial , physical region .benioff asked how many steps are necessary for a quantum robot to find a marked vertex in a two - dimensional grid with vertices . in his model, the robot can move from one vertex to an adjacent one spending one time unit .benioff showed that a direct application of grover s algorithm does not provide improvements in the time complexity compared to a classical robot , which is . using a different technique , called _ abstract search algorithms _ , ambainis et . showed that it is possible to find the marked vertex with steps .tulsi was able to improve this algorithm obtaining the time complexity .the time needed to find a marked vertex depends on the spatial layout .the abstract search algorithm is a technique that can be applied to any regular graph .it is based on a modification of the standard discrete quantum walk .the coin is the grover operator for all vertices except for the marked one which is .the choice of the initial condition is also essential. it must be the uniform superposition of all states of the computational basis of the coin - position space .this technique was applied with success on higher dimensional grids , honeycomb networks , regular lattices and triangular networks . spatial search in hanoi network of degree 3 ( hn3 )was analyzed in ref .recently , the abstract search algorithm was applied for spatial search on sierpinski gasket . in this work ,we analyze spatial search algorithms on the hanoi network of degree 4 ( hn4 ) extending the analysis performed for hn3 .hn4 is a special case of _ small world networks _ , which are being used in many contexts including quantum computing .we also analyze the use of a modified coin operator instead of the grover coin , which is used in the standard form of the abstract search algorithm , to take advantage of the small world structure .our results are based on numerical simulations , but the hierarchical structure of hn4 indicates that analytical results can also be obtained .hanoi networks have a hierarchical structure of fractal type that helps to gain insights of spatial search algorithms in graphs that are not translational invariant .recently , there has some effort in this direction the structure of the paper is as follows . sec . [sec : hs ] introduces the degree-4 hanoi network .[ sec : qw ] describes the standard coined discrete quantum walk on hn4 .[ sec : asa ] reviews the basics of the abstract search algorithm and tulsi s method .[ sec : mm ] describes the modification on the coin operator we propose to enhance the time complexity of quantum search algorithms on hn4 .[ sec : results ] describes the main results based on numerical simulations . 
finally , we present our final remarks in sec .[ sec : conclusions ] .the hanoi network has a cycle with vertices as a backbone structure , that is , each vertex is adjacent to 2 neighboring vertices in this structure and extra , long - range edges are introduced with the goal of obtaining a small - world hierarchy .the labels of the vertices can be factorized as where denotes the level in the hierarchy and labels consecutive vertices within each hierarchy . in any level ,one links the vertices with consecutive values of keeping the degree constraint .when , the values of are the odd integers . for hn4 , we link 1 to 3 , 3 to 5 , 5 to 7 and so on .the vertex with label 0 , not being covered by eq .( [ eq : k ] ) , has a loop as well as vertex of label .[ fig : nh4 ] shows all edges for hn4 when the number of vertices is . using this figure, one can easily build hn4 recursively , each time doubling the number of vertices .our analysis will be performed for a generic value of to allow us to determine the computational cost as function of .hn4 has a small - world structure because the diameter of the network only increases with , less fast than the number of vertices .yet , hn4 is a regular graph of fixed degree at each vertex .a coined quantum walk in hn4 with vertices has a hilbert space , where is the -dimensional coin subspace and the -dimensional position subspace .a basis for is the set for and is spanned by the set with , .we use the decomposition , given by eq .( [ eq : k ] ) , when convenient . a generic state of the discrete quantum walker in hn4 is the evolution operator for the standard quantum walk is where is the identity in and is the shift operator defined by the following equations , and the arithmetical operations on the second ket is performed modulo .the shift operator obeys . is a unitary coin operation in . in the standardwalk , is the grover coin , denoted by , which is the most diffusive coin .the dynamics of the standard quantum walk is given by where is the initial condition .after steps of unitary evolution , we perform a position measurement which yields a probability distribution given by _ abstract search algorithm _ is based on a modified evolution operator , obtained from the standard quantum walk operator by replacing the coin operation with a new unitary operation which is not restricted to and acts differently on the searched vertex .the modified coin operator is where is the marked vertex in a regular graph and is the grover coin , the dimension of which depends on the degree of the graph .ambainis et . have shown that the time complexity of the spatial search algorithm can be obtained from the spectral decomposition of the evolution operator of the unmodified quantum walk , which is usually simpler than that of .the initial condition is the uniform superposition of all states of the computational basis of the whole hilbert space .this can be written as the tensor product of the uniform superposition of the computational basis of the coin space with the uniform superposition of the position space .usually , this initial condition can be obtained in time , where is the number of vertices .the evolution operator is applied recursively starting with the initial condition . if is the running time of the algorithm , the state of the system just before measurement is .if one analyzes the probability of obtaining the marked vertex as function of time since the beginning of the algorithm , one gets an oscillatory function with the first maximum close to . 
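the coin operators entering the search can be written down concretely. in the abstract search algorithm the coin applied at the marked vertex is usually taken to be the negative identity, and the sketch below builds the degree-4 grover coin and the position-dependent coin operator under that assumption; the shift operator, which encodes the hn4 edges, is left abstract here.

```python
import numpy as np

def grover_coin(d):
    """Grover diffusion coin on a degree-d vertex: G = 2|s><s| - I,
    with |s> the uniform superposition of the d coin states."""
    return 2.0 / d * np.ones((d, d)) - np.eye(d)

def search_coin(num_vertices, marked, d=4):
    """Coin part of the modified evolution operator of the abstract search
    algorithm, acting on the coin (x) position space:
    C' = G (x) (I - |m><m|) + (-I_d) (x) |m><m|.
    The -I coin on the marked vertex is the standard choice and is assumed here."""
    G = grover_coin(d)
    P_m = np.zeros((num_vertices, num_vertices))
    P_m[marked, marked] = 1.0
    return np.kron(G, np.eye(num_vertices) - P_m) + np.kron(-np.eye(d), P_m)

# one step of the walk would then be U' = S @ C', with S the (abstract) shift
# operator permuting coin-position basis states along the hn4 edges.
C = search_coin(num_vertices=8, marked=3)
print(C.shape)                              # (32, 32)
print(np.allclose(C @ C.T, np.eye(32)))     # the coin is real orthogonal (unitary)
```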
after a little algebra , the evolution operator can be converted into the form , where , is given by eq . ( [ evol ] ) , and is the uniform superposition of the computational basis of the coin space . using this expression as a starting point , tulsi proposed a new version of the search algorithm , which requires an extra register ( an ancilla qubit ) used as a control for the operators and . the operators acting on the ancilla register are described in figure [ fig : circ ] , where is the negative of pauli s operator and the value of is the one that optimizes the cost of the algorithm . for two - dimensional lattices , tulsi showed that . [ figure [ fig : circ ] : quantum circuit for the controlled operations acting on the ancilla register in tulsi s method . ] tulsi s evolution operator is , where and are the controlled operations shown in figure [ fig : circ ] and is the identity operator in . we want to determine how many times must be iterated , taking as the initial condition , in order to maximize the overlap with the searched element . the coin in a quantum walk is used to determine the direction of the movement . the grover coin is an isotropic operator regarding all outgoing edges from a vertex . it is useful in networks that have no special directions , such as two - dimensional grids and hypercubes . the hanoi network , on the other hand , has a special direction that creates the small world structure . any edge that takes the walker outside the circular backbone provides an interesting opportunity in terms of searching . the strategy is to have a parameter that can control the probability flux among the edges , reinforcing or decreasing the flux outwards or inwards the circular backbone . instead of using the grover coin of the _ abstract search algorithms _ , we analyze the use of a modified coin given by , where is the degree at each vertex . when , the grover coin is recovered . when , the probability flux along small - world edges ( labels 0 and 1 ) which escapes from the circular backbone is weakened . when , the probability flux off the backbone is reinforced , allowing the walker to use the small world structure with higher efficiency . hence , this new coin controls the bias to escape off the circular backbone of hn4 through the parameter . the _ abstract search algorithms _ use a uniform distribution as initial condition . we change this recipe . the initial condition is , where is the uniform superposition on the position space . when , the initial condition is the uniform superposition of the coin - position space . we want to check whether it is possible to improve the abstract search algorithm by modifying the coin operator for hn4 in such a way that we can tune parameter to obtain the best rate of probability flux between the circle backbone and small - world edges . in a previous paper , we have concluded that , for hn3 without using tulsi s method , it is better to choose . after analyzing this issue further , we have to reconsider this conclusion , mainly when one uses tulsi s method , which seems to favor the grover coin even in nonhomogeneous graphs .
in this work , we consider three different methods : 1 ) the abstract search algorithm , 2 ) tulsi s method , and 3 ) the modified method . the analysis of the evolution of the quantum search algorithm using the new coin and initial condition is far more complex than the standard one . our conclusions here are based on numerical simulations . fig . [ fig : prob - vs - t ] shows the oscillatory behavior of the probability of finding the walker at the marked vertex using the modified method with ( lower curve ) and the _ abstract search algorithm _ using tulsi s method ( higher curve ) . initially , the probability is close to zero , because the initial condition is a state that is close to the uniform superposition of all vertices . the running time of the algorithm is the value of for which the probability is close to its first maximum . note that , without using tulsi s method , the maximum value of the probability is smaller than that of the abstract search algorithm with tulsi s method . in either case , the maximum value of the probability is not close to 1 , as one would expect in order to have high probability to find the marked vertex . this means that the algorithm must be rerun many times to amplify the success probability . for the lower curve , the number of repetitions is large , in fact , it scales with , which has a strong impact on the total cost of the algorithm . we call _ success probability _ the value of the probability at the first peak of fig . [ fig : prob - vs - t ] . the marked vertex used in all simulations is , but the conclusions will not depend on the hierarchy of the target vertex . [ figure [ fig : prob - vs - t ] : probability at the marked vertex as a function of time using the modified method and the abstract search algorithm with tulsi s method . ] now let us try to answer the following question about parameter : what is the best value of for the spatial search algorithm ? fig . [ fig : prob - vs - eps - hn4 ] shows the success probability as function of for three values of , both for the modified method ( lower curves ) and the abstract search algorithm using tulsi s method ( higher curves ) . the curves are very flat around , which corresponds to the grover coin . this shows that the modified method does not play an important role in improving the efficiency of the algorithm . these curves do not provide enough clues for choosing . the final answer can be achieved by analyzing the effect of on the total cost of the algorithm . we measure the cost as the number of times the evolution operator is applied , or equivalently the number of oracle queries , considering the repetitions necessary for amplitude amplification . [ figure [ fig : prob - vs - eps - hn4 ] : success probability as a function of for three values of , using both the modified coin and tulsi s method on top of the abstract search algorithm . ] fig . [ fig : cost - vs - n - nh4 ] shows the total cost of the search algorithm as function of for two values of , both for the modified method ( higher curves ) and the abstract search algorithm using tulsi s method ( lower curves ) . the curves are very flat around as before , but we can conclude that the best values are for the modified method and ( grover coin ) for the abstract search algorithm using tulsi s method . from now on , we will take these values of for the rest of this paper . the fact that the algorithm can not be improved by modifying the coin after tulsi s method seems to show that the algorithm has achieved its best performance using the grover coin . this conclusion , for a graph that has non - homogeneous vertices and non - isotropic edges , was not the one expected by us at the beginning and it is quite surprising .
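the cost bookkeeping used in the comparisons can be summarized in a few lines : one run costs roughly the number of steps to the first probability peak , and when the peak probability p is small the run is repeated on the order of 1/sqrt(p) times by amplitude amplification . the sketch below encodes this accounting ; the numbers are placeholders , not measured data .

```python
import math

def total_cost(steps_to_peak, peak_probability):
    """Oracle-query cost of the search: steps per run times the number of
    amplitude-amplification repetitions, taken as O(1/sqrt(p)) for peak
    probability p (the constant factor is ignored in this rough estimate)."""
    repetitions = max(1, math.ceil(1.0 / math.sqrt(peak_probability)))
    return steps_to_peak * repetitions

# illustrative placeholder values only
print(total_cost(steps_to_peak=200, peak_probability=0.9))    # high peak: 2 repetitions
print(total_cost(steps_to_peak=200, peak_probability=0.01))   # low peak: ~10 repetitions
```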
[ figure [ fig : cost - vs - n - nh4 ] : total cost of the search algorithm as a function of for two values of , using both the modified coin and tulsi s method on top of the abstract search algorithm . ] fig . [ fig : prob - vs - n ] shows the success probability after a single run of the search algorithm as a function of the network size in log - scale for the modified method ( lower points ) and the abstract search algorithm using tulsi s method ( higher points ) . from the inclination of the best fitting line , we conclude that the success probability of the modified method decays approximately as . using the technique of amplitude amplification , the algorithm must be rerun around times in order to ensure a final probability close to . this produces a high impact on the total cost of the algorithm . recall that for the two - dimensional grid , the number of repetitions is . this result shows that tulsi s method plays an important role in terms of computational complexity . the best value of in eq . ( [ eq : xdelta ] ) for hn4 seems to be . in fig . [ fig : prob - vs - n ] , we see that the success probability after a single run of the abstract search algorithm using tulsi s method does not depend on . in this case , we can rerun the algorithm a fixed number of times to obtain an overall probability very close to 1 . [ figure [ fig : prob - vs - n ] : success probability after a single run as a function of the network size ; the fitting shows that for the modified method . ] fig . [ fig : cost - vs - n ] shows the computational cost of the search algorithm as a function of the network size both for the modified method ( cross points ) and the abstract search algorithm using tulsi s method ( x points ) . we have not used the method of amplitude amplification in this experiment . we have displayed the best fitting lines for both cases , which scale as . for the tulsi method , this is the total cost of the algorithm , because the success probability is high ( see fig . [ fig : prob - vs - n ] ) . for the modified method , the total cost is , because we must use the method of amplitude amplification , which puts an overhead of , where ( see fig . [ fig : prob - vs - n ] ) . [ figure [ fig : cost - vs - n ] : computational cost as a function of the network size in log scale ; the dotted curves are fitting curves . for the modified method , the best fitting is . for the tulsi method , the best fitting is . ] it is important to note that the data in fig . [ fig : cost - vs - n ] is not precise enough to detect the presence of terms in the expression of the total cost . we are using results with small to draw conclusions about the asymptotic behavior . there are imprecisions in the simulations that come from the discrete nature of the spatial layout . for instance , the lower curve in fig . [ fig : prob - vs - t ] has quick oscillations that have some impact on the impreciseness of the total cost . usually , the results using tulsi s method are more stable . the scaling of the total cost has a slight variation when we change the position of the target ; on the other hand , the prefactor changes . in terms of graph structure , hn4 is non - homogeneous , because the vertices can be divided into hierarchical levels . this non - homogeneity does not play an important role in terms of the cost of finding a vertex . the same kind of conclusions hold for hn3 , analyzed in ref .
.we have redone and extended the simulations for hn3 , using a new implementation .the scaling for the total cost in terms of is ( 1 ) using tulsi s method , and ( 2 ) using the modified method with applying amplitude amplification .there are two important conclusions we draw from the comparison between hn3 and hn4 , one is about the graph degree and the other about the value of .the degree of the graph seems to play no important role in terms of efficiency .similar conclusions were draw in ref . , which compared the efficiency of quantum search algorithms in triangular , square , and hexagonal lattices that have degrees 3 , 4 , and 6 , respectively .the scaling of the cost as a function of is the same for all of them .the optimal value of for hn3 is larger than 1 using the modified method , which means that the probability flux toward the edges leaving the circle backbone is enhanced . for hn4 ,the optimal value of is 0.75 , smaller than 1 .we can not say that long range connections ( in term of hierarchical level ) helps in the quantum search .in fact , for hn4 we have to decrease the probability flux in the edges leaving the circle backbone .this can be interpreted , when we take into account that hn4 is really small - world and mean - field like in terms of the average distance between any two vertices , which scales logarithmically with system size , whereas the average distance scales as for hn3 .that means , in hn4 it is less significant to take long - range jumps , because a random mix is already enough to get to most other sites ; whereas in hn3 , if the walker does not take more long - range jumps than nearest - neighbor jumps , it is difficult to go very far .we have analyzed spatial search algorithms on degree-4 hanoi networks with the goal of extending the abstract - search - algorithm technique for nonhomogeneous graph structures with fractal nature .we have proposed a modification of the abstract search algorithm by choosing a coin that takes advantage of the edge asymmetry of hn4 .we have obtained a faster algorithm by tuning numerically parameter .the cost of this algorithm is in terms of number of oracle queries .the algorithm uses the standard method of amplitude amplification on top of the modified abstract search algorithm with .this value of tells us that the probability flux is higher on the circle backbone of hn4 than on the edges that produces the small world structure .we have also analyzed tulsi s method on top of the abstract search algorithm . in this case, the grover coin ( ) seems to be the best option and the modified method does not improve the algorithm .the cost of the algorithm is .this is above the lower bound , which is , and above the cost of searching a marked vertex on two - dimensional lattices , which is .we have used numerical methods to estimate the cost scale .this means that factors may be lost , which could decrease the scale of in the total cost .
|
we use the _ abstract search algorithm _ and its extension due to tulsi to analyze a spatial quantum search algorithm that finds a marked vertex in hanoi networks of degree 4 faster than classical algorithms . we also analyze the effect of using non - groverian coins that take advantage of the small world structure of the hanoi networks . we obtain the scaling of the total cost of the algorithm as a function of the number of vertices . we show that tulsi s technique plays an important role to speed up the searching algorithm . we can improve the algorithm s efficiency by choosing a non - groverian coin if we do not implement tulsi s method . our conclusion is based on numerical implementations .
|
image deblurring is a classical ill - conditioned problem in many fields of applied sciences , including astronomy imaging and biomedical imaging . during the recording of a digital image, blurring artifacts always arise due to some unavoidable causes , e.g. , the optical imaging system in a camera lens may be out of focus , in astronomy imaging the incoming light in the telescope may be slightly bent by turbulence in the atmosphere , and the same problem appears due to the diffraction of light in the fluorescence microscopy .mathematically , image blurring process in such applications can often be described as follows . for simplificationwe denote the image as a one - dimensional vector in by concatenating their columns .let be the original image .the degradation model is described by where is the observed image and is a linear blurring operator .since the linear operator can not be inverted , and is also possibly contaminated by random noises , the recovery of from the noisy version of the blurred observation is a ill - posed problem .variational image restoration methods based on the regularization technique are the most popular approach for solving this problem .typically , the variational model corresponds to solving the following minimization problem where is a data fidelity term which is derived from the the noise distribution , and is a regularization term for imposing the prior on the unknown image . generally , the data fidelity term controls the closeness between the original image and the observed image .it takes different forms depending on the type of noise being added .for example , it is well known that the -norm fidelity is used for the additive white gaussian noise .such fidelity term is mostly considered in literature for its good characterization of noise of a optical imaging system .however , non - gaussian noises are also presented in the real imaging , e.g. , poisson noise is generally observed in photon - limited images such as electronic microscopy , positron emission tomography and single photon emission computerized tomography . due to its important applications in medical imaging ,linear inverse problems in presence of poisson noise have received much interest in literature .the likelihood probability of the poisson noisy data is given by based on the statistics of poisson noise and maximum a posterior ( map ) likelihood estimation approach , a generalized kullback - leibler ( kl)-divergence arises as the fidelity term for poisson deconvolution variational model , i.e. , besides the fidelity term , a regularization term is also needed to restrain the noise amplification and avoid other artifacts in the recovered image .a simple but efficient idea is to use sparse representation in some transform domain of the unknown image .the choice of transform domain is crucial to obtain a suitable solution , and one popular choice is the total variation ( tv ) due to the strong edge - preserving ability . in this case , we obtain the classical tv - kl model for poisson image deblurring : where is a regularization parameter , and with is the total variation regularization .since the pixel values of images represent the number of discrete photons incident over a given time interval in this application , we demand that in model ( [ equ1.6 ] ) .another selection for the regularizer term is the wavelet tight framelets , which have also been proved to be efficient but may need more computational cost associated with the wavelet transform and inverse transform . 
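since the displayed formulas in this introduction did not survive extraction, the following block records the standard forms these expressions usually take in the poisson deblurring literature ; they are given as a hedged reconstruction consistent with the surrounding text, not as a verbatim copy of the paper's equations.

```latex
% poisson likelihood of the observation f given the blurred image Hu
% (standard form, assumed here because the original display is missing):
\[
p(f \mid u) \;=\; \prod_{i} \frac{\big[(Hu)_i\big]^{f_i}\, e^{-(Hu)_i}}{f_i!},
\]
% generalized kullback-leibler fidelity obtained from the negative log-likelihood:
\[
D_{\mathrm{KL}}(Hu, f) \;=\; \sum_{i} \Big[ (Hu)_i - f_i - f_i \log \frac{(Hu)_i}{f_i} \Big],
\]
% and the resulting tv-kl model with nonnegativity constraint:
\[
\min_{u \ge 0}\; D_{\mathrm{KL}}(Hu, f) \;+\; \lambda\, \| u \|_{TV}.
\]
```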
in the last several years , the relationship between the total variation and wavelet framelet has also been revealed . in this paper , we focus our attention on the tv - kl model ( [ equ1.6 ] ) . due to the complex form of the fidelity term ( [ equ1.5 ] ) , the ill - posed inverse problem in presence of poisson noise has attracted less interest in literature than their gaussian counterpart .recently , sawatzky et al . proposed an em - tv algorithm for poisson image deblurring which has been shown to be more efficient than earlier methods , such as tv penalized richardson - lucy algorithm .s. bonettini et al . also developed gradient projection methods for tv - based image restoration .later on , the augmented lagrangian framework , which has been successfully applied to various image processing tasks , has been used for solving the tv - kl model .in particular , in a very effective alternating direction method of multipliers ( admm ) called pidal was proposed for image deblurring in presence of poisson noise , where a tv denoising problem is solved by chambolle s algorithm in each iteration .it has been proved to be more efficient than the admm algorithm proposed in .the relation between the two admms with and without nested iteration has been analyzed in .although the augmented lagrangian methods have been shown to be very useful , inner iterations or inverse operators involving the linear operator and laplacian operator are required in each iteration . besides , at least three auxiliary variables , which may reduce the convergence speed of the iterative algorithm , need to be introduced in the augmented lagrangian method due to the fidelity term is non - quadratic . in order to further improve the efficiency of the augmented lagrangian method, alternating direction minimization methods based on the linearized technique have been widely investigated very recently .the key idea of these methods is to use the proximal linearized term instead of the whole or part of the augmented lagrangian function . as a result ,sub - minimization problems which have closed solutions are obtained in the iteration process . in literature , an efficient optimization algorithm using the linearized alternating direction method was proposed , and further applied to solve the tv minimization problem for multiplicative gamma noise removal .numerical examples demonstrate that it is more efficient than the augmented lagrangian algorithms in this application .the primal - dual hybrid gradient ( pdhg ) method proposed by zhu et al . is another efficient iterative algorithm .the core idea is to alternately update the primal and dual variables by the gradient descent scheme and the gradient ascend scheme .the recent study on variants of the original pdhg algorithm , and on the connection with the linearized version of admm reveals the equivalence relation between the two algorithms framework . 
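the alternating primal/dual updates mentioned here can be stated compactly . the sketch below follows the chambolle - pock formulation of the primal - dual step ( with over - relaxation parameter theta ) applied to a generic problem min_u F(Ku) + G(u) ; it is meant only to convey the structure of such methods , not to reproduce the specific scheme of zhu et al .

```python
def pdhg(K, Kt, prox_F_star, prox_G, x0, y0, sigma, tau, theta=1.0, iters=100):
    """Generic primal-dual hybrid gradient iteration for min_u F(Ku) + G(u).

    K, Kt        : callables applying the linear operator and its adjoint
    prox_F_star  : callable (y, sigma) -> prox of sigma * F^* at y (dual step)
    prox_G       : callable (x, tau)   -> prox of tau * G at x (primal step)
    """
    x, y = x0, y0
    x_bar = x0
    for _ in range(iters):
        y = prox_F_star(y + sigma * K(x_bar), sigma)   # dual ascent-type step
        x_new = prox_G(x - tau * Kt(y), tau)           # primal descent-type step
        x_bar = x_new + theta * (x_new - x)            # over-relaxation
        x = x_new
    return x
```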
For more details refer to and the references cited therein. However, in the previous linearized alternating direction methods, the second-order derivative (or the Hessian matrix) of the objective function of the sub-minimization problem is approximated simply by an identity matrix multiplied by some constant. This approximation is obviously not exact in most cases, and therefore may reduce the convergence speed of the iterative algorithms. In fact, from the numerical comparison shown in section [sec4] we find that the convergence rate of the linearized alternating direction method proposed in is visibly affected by this inexact linearized approximation when it is applied to solve the TV-KL model for Poisson image deblurring. It is observed that the computational efficiency of the linearized alternating direction method is even lower than that of earlier augmented Lagrangian algorithms such as the PIDAL algorithm; refer to the experiments below for details. The main contribution of this work is to propose a novel inexact alternating direction method utilizing the second-order information of the objective function. Specifically, in one sub-minimization problem of the proposed algorithm, the solution is obtained by a one-step iteration, in a way reminiscent of Newton descent methods. In other words, the second-order derivative of the corresponding objective function in the sub-minimization problem is approximated by a proximal Hessian matrix which can be computed easily, rather than by a constant multiple of the identity matrix. The improved iterative algorithm is shown to be more efficient than the current state-of-the-art methods when applied to Poisson image deblurring, including the PIDAL algorithm and the linearized alternating direction methods. The rest of this paper is organized as follows. In section [sec2] we briefly review the recently proposed proximal linearized alternating direction (PLAD) method. In section [sec3], in order to overcome the drawback of the previous linearized alternating direction method, we develop an inexact alternating direction method based on the Newton descent algorithm. The updating strategy for the proximal Hessian matrix in the Newton descent algorithm is also discussed, and the convergence of the proposed algorithms is then investigated under certain conditions. In section [sec4] numerical examples on the Poisson image deblurring problem are reported to compare the proposed algorithms with recent state-of-the-art algorithms.

In this section, we briefly review the PLAD method proposed in , and through further investigation we find that the PLAD method can be regarded as a linearized version of another widely used iterative algorithm, the primal-dual hybrid gradient (PDHG) algorithm, which was first proposed by Zhu et al. First of all, we consider the following TV regularized minimization problem where ^{n}] is the relaxed parameter. The iterative formula ([equ3.2rev2]) can be reformulated as Here we assume that the Hessian matrix is invertible. In what follows we consider the special case of the TV-KL model for Poisson image deblurring. In this case we have , and ([equ3.2]) can be reformulated as However, the computation of the inverse of the operator is difficult in the update formula ([equ3.3]).
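The difficulty just noted, inverting an operator that couples the blurring operator with the quadratic term, is exactly what the block-circulant construction described next is designed to sidestep. The sketch below contrasts, on a toy quadratic model, the scaled-identity surrogate used by linearized methods with a circulant surrogate of the form c1*I + c2*H^T H that is diagonalized by the FFT under periodic boundary conditions; the surrogate form, parameter values and names are our illustrative assumptions.

import numpy as np
from numpy.fft import fft2, ifft2

def solve_identity_surrogate(rhs, delta):
    # linearized-ADMM style step: the Hessian is replaced by (1/delta) * I,
    # so applying its inverse is just a multiplication by delta
    return delta * rhs

def solve_circulant_surrogate(rhs, kernel, c1, c2):
    # Newton-type step with a proximal Hessian of the form c1*I + c2*H^T H,
    # where H is a periodic convolution; the solve is a pointwise division
    # in the Fourier domain
    K = fft2(kernel)
    return np.real(ifft2(fft2(rhs) / (c1 + c2 * np.abs(K) ** 2)))

# toy comparison on the quadratic model 0.5*||H x - y||^2 + 0.5*c1*||x||^2
rng = np.random.default_rng(1)
kernel = np.zeros((64, 64)); kernel[:5, :5] = 1.0 / 25.0           # 5x5 uniform blur
H = lambda v: np.real(ifft2(fft2(v) * fft2(kernel)))
Ht = lambda v: np.real(ifft2(fft2(v) * np.conj(fft2(kernel))))
x_true = rng.random((64, 64)); y = H(x_true)
x_newton = solve_circulant_surrogate(Ht(y), kernel, c1=1e-3, c2=1.0)  # exact minimizer via FFT
x_linear = solve_identity_surrogate(Ht(y), delta=0.5)                 # one scaled-gradient step from zero
print(np.linalg.norm(H(x_newton) - y), np.linalg.norm(H(x_linear) - y))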
one simple strategy is to use a proximal hessian matrix , which is a block - circulant matrix with periodic boundary conditions and hence can be easily computed by fast fourier transforms ( ffts ) , instead of the original operator . in this situation , we obtain the following inexact alternating direction method based on the newton descent algorithm with adaptive parameters ( iadmnda ) : in the proposed iadmnda algorithm ( [ equ3.4 ] ) , a parameter is used to approximate the term in the operator , and therefore is replaced by a simple block - circulant matrix . in what follows, we further discuss the selection of the parameters and in the iadmnda algorithm .for the relaxed parameter , we choose it to satisfy that for guaranteeing the convergence of the proposed algorithm . here in the poisson image deblurring problem , we always choose , and hence the condition ( [ equ3.3rev2 ] ) comes into existence while choosing monotone non - increasing and small enough . in this setting, the projection operator can be removed from the first formula of ( [ equ3.4 ] ) . for the parameter ,one strategy is to update its value in the iterative step according to the widely used barzilai - borwein ( bb ) spectral approach .let , , and .the parameter is chosen such that mimics the hessian matrix over the most recent step .specifically , we require that and immediately get the whole process of the proposed algorithm is summarized as algorithm 1 .it is observed that the update of introduces the extra convolution operation including in .therefore , one simple strategy is to use a unchanged value for during the iteration , i.e. , , where is a constant . in this setting, we abbreviate the proposed algorithm as iadmnd . observation ; regularization parameter ; parameters and ; inner iteration number . + * initialization * : ; ; ; ; ; .+ * iteration * : + ( 1 ) update : + ; + update according to ( [ equ3.3rev2 ] ) ; + ; + ( 2 ) update : + ; + ( 3 ) update : + ; + update according to ( [ equ3.6 ] ) ; + ( 4 ) ; + until some stopping criterion is satisfied . +* output * the recovered image . in this subsection, we further investigate the global convergence of the proposed iadmnd(a ) algorithms for poisson image deblurring under certain conditions .the bound constrained tv regularized minimization problem ( [ equ2.2 ] ) can be reformulated as where denotes the indicator function of set ,i.e. , if and otherwise .assume that is one solution of the above bound constrained optimization problem corresponding to tv - kl model with , and is the corresponding lagrangian multiplier .then the point is a karush - kuhn - tucker ( kkt ) point of problem ( [ equ3.50c ] ) , i.e. , it satisfies the following conditions : where denotes the set of the subdifferential of at , and denotes the set of the subdifferential of at . from literature we know that is also equal to the normal cone at .besides , assume that the convex function satisfies : for any , where and are two positive constants ( the estimation of and is discussed at the end of this section ) .[ the1 ] ( the convergence of the proposed iadmnda algorithm ) let be the sequence generated by the iadmnda algorithm with , , and .then converges to a solution of the minimization problem ( [ equ2.1 ] ) .denote . 
according to the iterative formula with respect to we know that is the solution of the minimization problem therefore , the sequence generated by the iadmnda algorithm satisfies where denotes the set of the subdifferential of at .due to is one solution of ( [ equ3.50c ] ) , it is also the kkt point that satisfies : denote the errors by , , and .subtracting ( [ equ3.11 ] ) from ( [ equ3.10 ] ) , and taking the inner product with , and on both sides of the three equations , we obtain that where , , , and .utilizing , and , we can reformulate the third equation of ( [ equ3.12 ] ) as therefore , summing three formulas in ( [ equ3.12 ] ) we can obtain that due to ( ] denoting the line segment between and ) , we have besides , we also have , and based on the above three relations , expression ( [ equ3.14 ] ) can be reformulated as due to , by the convexity of and the definition of the subdifferential we conclude that . similarly , by the convexity of the function we also have that . due to , we get therefore , removing the first two non - negative terms in ( [ equ3.15 ] ) , and utilizing the inequalities ( [ equ3.17 ] ) we obtain that according to the definition of in algorithm 1 , we know that is monotone non - increasing , and hence there exists such that . denote . by the boundedness of and we have that since , we know that according to the condition ( [ equ3.21 ] ) . therefore , by the definition of we further have based on ( [ equ3.18 ] ) and ( [ equ3.16 ] ) we immediately get according to ( [ equ3.6rev2 ] ) we can easily conclude that , there exists some such that in the next , summing ( [ equ3.18 ] ) from some to we obtain that which implies that due to , we get due to , we further have , which implies that converges to a solution of the minimization problem ( [ equ2.1 ] ) [ the2 ] ( the convergence of the proposed iadmnd algorithm ) let be the sequence generated by the iadmnd algorithm with , and .then converges to a solution of the minimization problem ( [ equ2.1 ] ) .the proof is analogous to that presented in theorem [ the1 ] , and the only difference lies in that is replaced by a constant .here we neglect the proof due to limited space . in the above proof , we observe that the constants and in ( [ equ3.21 ] ) are crucial for the convergence of the proposed algorithms , since they decide the range of the parameters in iadmnda algorithm and in iadmnd algorithm respectively .for the poisson image deblurring problem , we have .then we can obtain that where denotes a diagonal matrix with diagonal elements of the components of .due to and are upper and lower bounds of , they are also some estimation of and . 
In this extreme case, the value of is too large and the value of is too small; thus the parameter can be too large, and the "step size" can be too small. This may cause the proposed algorithms to converge very slowly. Therefore, similarly to , we use the average of the second derivative instead of the worst-case estimates of and , which implies that a smaller can be selected during the implementation of the proposed algorithms. Assume that is a collection of the image region with . When is sufficiently close to the unknown image, where the third approximation uses the relation , and the last approximation is obtained from the second-order Taylor expansion of the function . The rough estimates of and shown in ([equ3.23]) depend on the mean and variance of the unknown blurred image. However, in general we have . Therefore, and can be simply approximated by because , which implies that a larger is required for images with smaller intensity (corresponding to images with a higher noise level). Besides, in the proof of convergence of the proposed IADMNDA algorithm we require that and for any . This condition can be satisfied by modifying the update formula of as follows: where is computed by formula ([equ3.6]) at the -th iteration, and is a large positive number. However, in our experiments we observe that the IADMNDA algorithm still converges without the monotone decreasing condition; in fact, the IADMNDA algorithm using the new update strategy in ([equ3.24e]) does not noticeably improve the convergence speed compared with the counterpart using the original update strategy in ([equ3.6]).

In this section, we evaluate the performance of the proposed algorithms by numerical experiments on the Poissonian image deblurring problem. First, the convergence of the proposed algorithms, which was investigated in section [sec3] under certain conditions, is further verified through several experimental examples, and at the same time the influence of the parameter on the rate of convergence is investigated. Second, the proposed algorithms are compared with the widely used augmented Lagrangian methods for Poisson image restoration and with the recently proposed PLAD algorithm, which can also be understood as a linearized PDHG algorithm. The codes of the proposed algorithms and of the methods used for comparison are written entirely in MATLAB, and all the numerical examples are run under Windows XP and MATLAB 2009 on a laptop with an Intel Core i5 CPU (2.8 GHz) and 8 GB memory. In the following experiments, six standard natural images (see figure [fig4.1]), which contain complex components at different scales and with different patterns, are used for our tests. Among them, the size of the boat image is , and the size of the other images is .

In the proposed IADMNDA algorithm, two parameters need to be manually adjusted: one is the regularization parameter , the other is the penalty parameter . It is well known that is determined by the noise level, and the value of does influence the convergence speed of the proposed algorithm. Here we use a strategy similar to that adopted in to choose , i.e., we set , where denotes the maximum intensity of the original image. Moreover, for the step parameter , we choose the largest value that satisfies condition ([equ3.3rev2]).
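A minimal sketch of how the Barzilai-Borwein-type update of the proximal parameter, together with a safeguard of the kind required by the convergence analysis, might be implemented is given below. The secant-type formula follows the description above, but the particular clipping rule, the bounds and the variable names are our assumptions; they are not the paper's formulas ([equ3.6]) and ([equ3.24e]).

import numpy as np

def bb_parameter(s, t, eps=1e-12):
    # Barzilai-Borwein-type scalar: alpha*I mimics the Hessian over the most
    # recent step, with s the change in the iterate and t the change in the
    # (generalized) gradient, via the secant condition alpha*s ~ t
    return float(np.vdot(s, t).real / max(float(np.vdot(s, s).real), eps))

def safeguarded_alpha(alpha_bb, alpha_prev, alpha_min=1e-4, alpha_max=1e4):
    # one possible safeguard: clip to [alpha_min, alpha_max] and keep the
    # sequence monotone non-increasing, as the convergence analysis requires;
    # this is an illustrative rule, not the paper's exact formula
    return min(alpha_prev, min(max(alpha_bb, alpha_min), alpha_max))

# toy usage inside a generic iteration
alpha = 1.0e3                                   # large initial value
x_old, g_old = np.zeros(16), np.ones(16)
x_new, g_new = 0.1 * np.ones(16), 1.3 * np.ones(16)
alpha = safeguarded_alpha(bb_parameter(x_new - x_old, g_new - g_old), alpha)
print(alpha)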
In what follows, we further investigate the influence of the parameter on the rate of convergence of the proposed IADMND algorithm. Two images, "cameraman" and "barbara" (see figure [fig4.1]), are used for the test. Here we consider two types of blur with different levels of Poisson noise: the cameraman image is scaled to a maximum value of and respectively, and blurred with a uniform blur kernel; the barbara image is scaled to the same range and convolved with a Gaussian kernel of unit variance. The blurred images are then contaminated by Poisson noise. Figure [fig4.2] depicts the evolution curves of the relative error for different values of . It is observed that the value of the parameter does influence the convergence speed. Generally speaking, the proposed algorithm with a small converges faster; however, if is too small, convergence cannot be guaranteed. In our experiments we observe that is not suitable for images with , and in this case the IADMND algorithm becomes unstable. This is also consistent with the analysis in section [subsec3.2]. Besides, the plots in figure [fig4.2] implicitly verify that convergence of the proposed IADMND algorithm is indeed guaranteed for suitable values of . In the proposed IADMNDA algorithm we use formula ([equ3.6]) to update the parameter in each iteration, and hence we further discuss the selection of the initial value . Two images, "cameraman" and "bridge" (see figure [fig4.1]), are adopted here. Figure [fig4.3] shows the evolution curves of the SNR (signal-to-noise ratio) values for different . Note that there is almost no difference between the results with different values of , except for the SNR values in the first several iteration steps. Therefore, we set to a fixed constant in the following experiments. Finally, we compare the performance of the IADMNDA algorithms that use the update formulas ([equ3.6]) and ([equ3.24e]) for , respectively. In formula ([equ3.24e]), the values of and are estimated by . Table [tab4.3] lists the SNR values and the iteration numbers of the IADMNDA algorithms with the different update formulas for . Here we use the parameter settings of table [tab4.1] for the IADMNDA algorithms, and the stopping criterion requires the relative error to fall below some small constant, i.e., here we choose . In this table, the serial numbers "1" and "2" denote the results of the IADMNDA algorithms with the update formulas ([equ3.6]) and ([equ3.24e]) respectively, and denotes the SNR values and iteration numbers, in that order. It is observed that the performance of the algorithms with both update formulas is almost the same. Therefore, in the following comparison experiments we use the update formula ([equ3.6]) for the IADMNDA algorithm. [Table tab4.3]

In this subsection, we further compare the proposed algorithms with the current state-of-the-art algorithms, including the PIDAL algorithm proposed in and the recently proposed PLAD algorithm. Note that several parameters need to be manually adjusted in the compared algorithms: the regularization parameter and the penalty parameter for all these algorithms; the step parameter (see ([equ2.6])) for the PLAD algorithm; and the parameter for the proposed IADMND algorithm.
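For completeness, the following sketch reproduces the kind of test-data generation and evaluation metrics used in these experiments: an image is scaled to a prescribed peak intensity, blurred, corrupted by Poisson noise, and then assessed by the SNR and by the relative-error quantity entering the stopping criterion ([equ4.2]). The kernel size, the decibel SNR definition and the function names are assumptions about standard practice rather than values taken from the tables.

import numpy as np
from numpy.fft import fft2, ifft2

def periodic_blur(img, kernel):
    # blur by periodic convolution with a kernel embedded in an array of the
    # same size as the image (kernel mass placed in the top-left corner)
    return np.real(ifft2(fft2(img) * fft2(kernel)))

def make_poisson_data(x, peak, kernel, seed=0):
    # scale x to maximum intensity `peak`, blur it, and add Poisson noise
    x_scaled = x * (peak / x.max())
    y_clean = np.maximum(periodic_blur(x_scaled, kernel), 0.0)
    return x_scaled, np.random.default_rng(seed).poisson(y_clean).astype(float)

def snr_db(x_ref, x_est):
    # signal-to-noise ratio of an estimate, in dB
    return 10.0 * np.log10(np.sum(x_ref ** 2) / np.sum((x_ref - x_est) ** 2))

def relative_error(x_new, x_old):
    # quantity used in the stopping criterion ||x^k - x^{k-1}|| / ||x^k||
    return np.linalg.norm(x_new - x_old) / max(np.linalg.norm(x_new), 1e-12)

# toy usage: 9x9 uniform blur at peak intensity 255 (sizes are assumptions)
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
kernel = np.zeros_like(img); kernel[:9, :9] = 1.0 / 81.0
x_scaled, y = make_poisson_data(img, peak=255.0, kernel=kernel)
print(snr_db(x_scaled, y), relative_error(y, x_scaled))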
Through many trials we arrived at the following rules of thumb: in the PIDAL algorithm is set to , while in the other algorithms it is chosen to be ; the initial value in the proposed IADMNDA algorithm is fixed as ; the other parameter settings are summarized in table [tab4.1] and were found to guarantee convergence and achieve satisfactory results. Moreover, a Rudin-Osher-Fatemi (ROF) denoising problem is included in each iteration of the PIDAL algorithm, and it is solved using a small, fixed number of iterations (just 5) of Chambolle's algorithm; for more details refer to . [Table tab4.1]

In the following numerical experiments, the stopping criterion in ([equ4.2]) is used for all the algorithms. Table [tab4.2] lists the SNR values, the number of iterations and the CPU time of the different algorithms for images with different blur kernels and noise levels. In this table, "gaussian" and "uniform" denote a Gaussian kernel of unit variance and a uniform blur kernel, respectively; the two cases can be seen as examples of mild blur and strong blur. Besides, "" denotes the SNR values, iteration numbers and CPU time, in that order. Note that the iteration numbers reported for the PIDAL algorithm are outer iteration numbers. From the results in table [tab4.2] we observe that the proposed algorithms are much faster than the PIDAL and PLAD algorithms, while the SNR values of the images recovered with the proposed algorithms are comparable to those achieved with the PIDAL and PLAD algorithms. This verifies that the strategy of using a proximal Hessian matrix to approximate the second-order derivatives is more efficient than the simple approximation by an identity matrix multiplied by some constant used in the PLAD algorithm. It is also noted that the iteration numbers of the IADMNDA algorithm are the smallest in most cases; however, the update of incurs extra computational cost in the IADMNDA algorithm, which makes its running time longer than that of the IADMND algorithm in some cases. Figures [fig4.4][fig4.9] show the recovery results of the PIDAL method, the PLAD algorithm and the proposed IADMND and IADMNDA algorithms for the cameraman, bridge and boat images, respectively. It is observed that the visual quality of the images generated by these algorithms is more or less the same. Finally, we consider two MRI images called "rkknee" and "chest". Table [tab4.4] lists the SNR values, the number of iterations and the CPU time of the different algorithms for these images with different blur kernels and noise levels. The regularization parameter is set to and for images with , convolved with Gaussian and uniform blur kernels, respectively. Similarly, we notice that the proposed algorithms require the least computational time. Some recovery results are shown in figure [fig4.10]; we find that the quality of the images recovered by these algorithms is very similar. [Table tab4.2] [Table tab4.4]

In this article, through further analysis of the drawback of the recently proposed linearization techniques for image restoration, we have developed an inexact alternating direction method based on a proximal Hessian matrix. Compared with the existing algorithms, the main difference is that in the proposed algorithm the second-order derivative of the objective function is approximated by a proximal Hessian matrix rather than by an identity matrix multiplied by a constant.
Besides, we also propose a strategy for updating the proximal Hessian matrix. The convergence of the proposed algorithms is investigated under certain conditions, and numerical experiments demonstrate that the proposed algorithms outperform the widely used linearized augmented Lagrangian methods in computational time. This work was supported in part by the National Natural Science Foundation of China under grants 61271014 and 61401473.
|
The recovery of images from observations that are degraded by a linear operator and further corrupted by Poisson noise is an important task in modern imaging applications such as astronomical and biomedical ones. Gradient-based regularizers involving the popular total variation semi-norm have become standard techniques for Poisson image restoration due to their edge-preserving ability. Various efficient algorithms have been developed for solving the corresponding minimization problem with non-smooth regularization terms. In this paper, motivated by the alternating direction minimization framework and by Newton's method with its superior convergence rate, we propose inexact alternating direction methods that utilize the proximal Hessian matrix information of the objective function, in a way reminiscent of Newton descent methods. Besides, we investigate the global convergence of the proposed algorithms under certain conditions. Finally, we illustrate through numerical experiments on Poisson image deblurring that the proposed algorithms outperform the current state-of-the-art algorithms. Keywords: image deblurring; proximal Hessian matrix; inexact alternating direction method; total variation; Poisson noise.
|
queueing models with many - servers are prevalent in modeling call centers and other large - scale service systems .they are used for optimizing staffing and making dynamic control decisions .the complexity of the underlying queueing model renders such optimization problems intractable for exact analysis , and one needs to resort to approximations .a prominent mode of approximate analysis is to study such systems in the so - called halfin whitt ( hw ) heavy - traffic regime ; cf . . roughly speaking , the analysis of a queueing system in the hw regime proceeds by scaling up the number of servers and the arrival rate of customers in such a way that the system load approaches one asymptotically . to be more specific , instead of considering a single system ,one considers a sequence of ( closely related ) queueing systems indexed by a parameter along which the arrival rates and the number of servers scale up so that the system traffic intensity satisfies in the context of dynamic control , passing to a formal limit of the ( properly scaled ) system dynamics equations as gives rise to a _ limit _ diffusion control problem , which is often more tractable than the original dynamic control problem it approximates .the approximating diffusion control problem typically provides useful structural insights and guides the design of good policies for the original system .once a candidate policy is proposed for the original problem of interest , its asymptotic performance can be studied in the hw regime .the ultimate goal is to establish that the proposed policy performs well . to this end ,a useful criterion is the notion of asymptotic optimality , which provides assurance that the optimality gap associated with the proposed policy vanishes asymptotically _ under diffusion scaling _ as .hence , asymptotic optimality in this context is equivalent to showing that the optimality gap is .a central reference for our purposes is the recent paper by atar , mandelbaum and reiman , where the authors apply all steps of the above scheme to the important class of problems of dynamically scheduling a multiclass queue with many identical servers in the hw regime .specifically , considers a sequence of systems indexed by the number of servers , where the number of servers and the arrival rates of the various customer classes increase with such that the heavy - traffic condition holds ; cf .equation ( [ eqhtcond ] ) . following the scheme described above, the authors derive an approximate diffusion control problem through a formal limiting argument .they then show that the diffusion control problem admits an optimal markov policy , and that the corresponding hjb equation ( a semilinear elliptic pde ) has a unique classical solution . using the markov control policy and the hjb equation , the authors propose scheduling control policies for the original ( sequence of ) queueing systems of interest . finally , they prove that the proposed sequence of policies is asymptotically optimal under diffusion scaling .namely , the optimality gap of the proposed policy for the system is .a similar approach is applied to more general networks in . in this paper , we study a similar queueing system ( see section [ secmodel ] ). our goal , however , is to provide an improved optimality gap which , in turn , requires a substantially different scheme than the one alluded to above . approximations in the hw regime for performance analysis have been used extensively for the study of fixed policies . 
given a particular policy , it may often be difficult to calculate various performance measures in the original queueing system .fortunately , the corresponding approximations in the hw regime are often more tractable .the machinery of strong approximations ( cf .csrgo and horvth ) often plays a central role in such analysis . in the context of many - server heavy - traffic analysis , with strong approximations ,the arrival and service processes ( under suitable assumptions on the inter - arrival and service times ) can be approximated by a diffusion process so that the approximation error on finite intervals is ( where is the number of servers as before ) .therefore , it is natural to expect that , under a given policy , the error in the diffusion approximations of the various performance metrics is , which is indeed verified for various settings in the literature ( see , e.g. , ) .a natural question is then whether one can go beyond the analysis of fixed policies and achieve an optimality gap that is logarithmic in also under dynamic control , improving upon the usual optimality gap of .more specifically , can one propose a sequence of policies ( one for each system in the sequence ) where the optimality gap for the policy ( associated with the system ) is logarithmic in ?while one hopes to get logarithmic optimality gaps as suggested by strong approximations , it is not a priori clear if this can be achieved under dynamic control .the purpose of this paper is to provide a resolution to this question .namely , we study whether one can establish such a strong notion of asymptotic optimality and if so , then how should one go about constructing policies which are asymptotically optimal in this stronger sense .our results show that such strengthened bounds on optimality gaps can be attained .specifically , we construct a sequence of asymptotically optimal policies , where the optimality gap is logarithmic in .our analysis reveals that identifying ( a sequence of ) candidate policies requires a new approach .to be specific , we advance a sequence of diffusion control problems ( as opposed to just one ) where the diffusion coefficient in each system depends on the state and the control .this is contrary to the existing work on the asymptotic analysis of queueing systems in the hw regime . in that stream of literature , the diffusion coefficient is typically a ( deterministic ) constant .indeed , borkar views the constant diffusion coefficient as a characterizing feature of the problems stemming from the heavy - traffic approximations in the hw regime .interestingly , it is essential in our work to have the diffusion coefficient depend on the state and the control for achieving the logarithmic optimality gap .in essence , incorporating the impact of control on the diffusion coefficient allows us to track the policy performance in a more refined manner .while the novelty of having the diffusion coefficient depend on the control facilitates better system performance , it also leads to a more complex diffusion control problem .in particular , the associated hjb equation is fully nonlinear ; it is also nonsmooth under a linear holding cost structure . 
inwhat follows , we show that each of the hjb equations in the sequence has a unique smooth solution on bounded domains and that each of the diffusion control problems ( when considered up to a stopping time ) admits an optimal markov control policy .interpreting this solution appropriately in the context of the original problem gives rise to a policy under which the optimality gap is logarithmic in . as in the performance analysis of fixed policies , strong approximations will be used in the last step , where we propose a sequence of controls for the original queueing systems , and show that we achieve the desired performance .however , it is important to note that strong approximation results alone are not sufficient for our results .rather , for the improved optimality gaps we need the refined properties of the solutions to the hjb equations . specifically , gradient estimates for the sequence of solutions to the hjb equations ( cf .theorem [ thmhjb1sol ] ) play a central role in our proofs .our analysis restricts attention to a linear holding cost structure .however , we expect the analysis to go through for some other cost structures including convex holding costs. indeed , the analysis of the convex holding cost case will probably be simpler as one tends to get `` interior '' solutions in that case as opposed to the corner solutions in the linear cost case , which causes nonsmoothness .one could also enrich the model by allowing abandonment .we expect the analysis to go through with no major changes in these cases as well ; see the discussion of possible extensions in section [ secconclusions ] . for purposes of clarity, however , we chose not to incorporate these additional / alternative features because we feel that the current set - up enables us to focus on and clearly communicate the main idea : the use of a novel brownian model with state / control dependent diffusion coefficient to obtain improved optimality gaps .section [ secmodel ] formulates the model and states the main result .section [ secresult ] introduces a ( sequence of ) brownian control problem(s ) , which are then analyzed in section [ secadcp ] .a performance analysis of our proposed policy appears in section [ sectracking ] .the major building blocks of the proof are combined to establish the main result in section [ seccombining ] and some concluding remarks appear in section [ secconclusions ] .we consider a queueing system with a single server - pool consisting of identical servers ( indexed from 1 to ) and a set of job classes as depicted in figure [ figv ] .jobs of class- arrive according to a poisson process with rate and wait in their designated queue until their service begins . once admitted to service , the service time of a class- job is distributed as an exponential random variable with rate .all service and interarrival times are mutually independent .we consider a sequence of systems indexed by the number of servers .the superscript will be attached to various processes and parameters to make the dependence on explicit .( it will be omitted from parameters and other quantities that do not change with . )we assume that for all , where is the total arrival rate and for with .this assumption is made for simplicity of notation and presentation .nothing changes in our results if one assumes , instead , that and as where . 
the nominal load in the system is then given by so that defining ^{-1} ] denote the expectation with respect to the initial condition and an admissible ratio control .given a ratio control and initial conditions , the expected infinite horizon discounted cost in the system is given by ,\ ] ] where is the strictly positive vector of holding cost rates and is the discount rate . for , the value functionis given by .\ ] ] we next state our main result .[ thmmain ] fix a sequence such that and for all and some .then , there exists a sequence of tracking functions together with constants ( that do not depend on ) such that where is the ratio control associated with the -tracking policy .the constant in our bound may depend on all system and cost parameters but not on .in particular , it may depend on and .its value is explicitly defined after the statement of theorem [ thmhjb1sol ] .theorem [ thmmain ] implies , in particular , that the optimal performance for nonpreemptive policies is close to that among the larger family of preemptive policies .indeed , we identify a nonpreemptive policy ( a tracking policy ) in the queueing model whose cost performance is close to the optimal value of the preemptive control problem .the rest of the paper is devoted to the proof of theorem [ thmmain ] , which proceeds by studying a sequence of auxiliary brownian control problems .the next subsection offers a heuristic derivation and a justification for the relevance of the sequence of brownian control problems to be considered in later sections .we proceed by deriving a sequence of approximating brownian control problems heuristically , which will be instrumental in deriving a near - optimal policy for our original control problem .it is important to note that we derive an approximating brownian control problem for each as opposed to deriving a single approximating problem ( for the entire sequence of problems ) .this distinction is crucial for achieving an improved optimality gap for large because it allows us to tailor the approximation to each element of the sequence of systems . to this end , let fixing an admissible control for the system [ and centering as in ( [ eqtildexdefin ] ) ] , we can then write ( [ eqdynamics2 ] ) as where \\[-8pt ] & & { } - { \mathcal{n}}_i^d\biggl(\mu_i\int_0^t \bigl ( \check{x}_i{^n}(s)+\nu_i n - u^n_i(s)\bigl ( e\cdot\check{x}{^n}(s)\bigr)^+ \bigr ) \,ds \biggr).\nonumber\end{aligned}\ ] ] in words , captures the deviations of the poisson processes from their means .it is natural to expect that an approximation result of the following form will hold : can be approximated by where and are -dimensional independent standard brownian motions .moreover , by a time - change argument we can write ( see , e.g. , theorem 4.6 in ) \\[-8pt ] & & { } + \int_0^t \sqrt{\lambda_i^n+\mu_i\bigl ( \hat { x}_i{^n}(s)+\nu_i n - u^n_i(s)\bigl ( e\cdot\hat{x}{^n}(s)\bigr)^+ \bigr)}\,db_i(s),\nonumber\hspace*{-30pt}\end{aligned}\ ] ] where is an -dimensional standard brownian motion constructed by setting taking a leap of faith and arguing heuristically , we next consider a brownian control problem with the system dynamics where will be an admissible control for the brownian system and and note that the brownian control problem will only be used to propose a candidate policy , whose near optimality will be verified from first principles without relying on the heuristic derivations of this section . 
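To fix ideas about the heuristic dynamics above, the following sketch runs an Euler-Maruyama discretization of a diffusion whose drift and, crucially, whose diffusion coefficient depend on the state and on the ratio control, and it accumulates the discounted linear holding cost until the process leaves a ball. The drift and variance expressions follow the form suggested by the displayed (partially garbled) formulas, with l_i = lambda_i - mu_i*nu_i*n as the centering constant; the specific control rule, the parameter values and all names are illustrative assumptions.

import numpy as np

def simulate_discounted_cost(x0, lam, mu, nu, n, cost, gamma, radius,
                             control, dt=1e-3, horizon=50.0, seed=0):
    # Euler-Maruyama sketch of the controlled diffusion
    #   dX_i = (l_i - mu_i X_i + mu_i u_i(X) (e.X)^+) dt
    #          + sqrt(lam_i + mu_i (X_i + nu_i n - u_i(X) (e.X)^+)) dB_i,
    # stopped when |X| leaves a ball of the given radius; returns the
    # accumulated discounted holding cost of the queue vector u(X) (e.X)^+
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    l = lam - mu * nu * n                  # centering constant (assumption on l_i)
    total, t = 0.0, 0.0
    while t < horizon and np.linalg.norm(x) < radius:
        u = control(x)                     # ratio vector, entries in [0, 1], summing to 1
        queue = u * max(x.sum(), 0.0)      # queue lengths implied by the ratio control
        drift = l - mu * x + mu * queue
        var = np.maximum(lam + mu * (x + nu * n - queue), 1e-12)
        x = x + drift * dt + np.sqrt(var * dt) * rng.standard_normal(x.size)
        total += np.exp(-gamma * t) * float(cost @ queue) * dt
        t += dt
    return total

# toy usage with a hypothetical static-priority ratio control
n = 100
mu = np.array([1.0, 2.0])
nu = np.array([0.4, 0.6])                  # nominal server split (assumption)
lam = mu * nu * n                          # balanced arrival rates, so l ~ 0
cost = np.array([3.0, 1.0])
control = lambda x: np.array([0.0, 1.0]) if x.sum() > 0 else np.array([0.5, 0.5])
print(simulate_discounted_cost(np.zeros(2), lam, mu, nu, n, cost,
                               gamma=0.1, radius=30.0, control=control))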
to repeat , the preceding definition is purely formal and provided only as a means of motivating our approach . in what follows , we will directly state and analyze an auxiliary brownian control problem motivated by the above heuristic derivation .the analysis of the auxiliary brownian control problem lends itself to constructing near optimal policies for our original control problem . to be more specific ,the system dynamics equation ( [ eqnbmdefn1 ] ) , and in particular , the fact that its variance is state and control dependent , is crucial to our results .indeed , it is this feature of the auxiliary brownian control problems that yields an improved optimality gap .needless to say , one needs to take care in interpreting ( [ eqnbmdefn1])([eqnbmdefn3 ] ) , which are meaningful only up to a suitably defined hitting time .in particular , to have well defined , we restrict attention to the process while it is within some bounded domain .actually , it suffices for our purposes to fix and and consider the brownian control problem only up to the hitting time of a ball of the form where denotes the euclidian norm .we will fix the constant throughout and suppress the dependence on from the notation . setting diffusion coefficient is strictly positive for all and .note that , for all , and , so that for all sufficiently large and , consequently , . in what follows , and , in particular , through the proof of theorem [ thmmain ] , the reader should note that while choosing the size of the ball to be ( with small enough ) would suffice for the nondegeneracy of the diffusion coefficient , that choice would be too large for our optimality gap proofs .motivated by the discussion in the preceding section , we define admissible systems as follows . [ definadmissiblesystembrownian ] fix , and .we refer to as an admissible -system if : is a complete filtered probability space . is an -dimensional standard brownian motion adapted to . is -valued , -measurable and progressively measurable . the process is said to be the control associated with .we also say that is a controlled process associated with the initial data and an admissible system if is a continuous -adapted process on such that , almost surely , for , where and are as defined in ( [ eqnbmdefn30 ] ) and ( [ eqnbmdefn3 ] ) , respectively , and .given and , we let be the set of admissible -systems . the brownian control problem then corresponds to optimally choosing an admissible -system with associated control that achieves the minimal cost in the optimization problem ,\ ] ] where ] . in the statement of the following theorem, is the -dimensional vector with in the place and zeros elsewhere . also , , and are as defined in ( [ eqbkappadefin ] ) and ( [ eqnkappa ] ) , respectively .[ thmhjb1sol ] fix and . then , there exists ( that does not depend on ) and a unique classical solution to the hjb equation ( [ eqhjb1simp ] ) on with the boundary condition on . furthermore , there exists a constant ( that does not depend on ) such that where . in turn , for any , with and .also , \\[-8pt ] & & \qquad\leq\frac{c}{1-\vartheta}\log^{k_1}n\nonumber\end{aligned}\ ] ] for all with .note that ( [ eqgradients1 ] ) follows immediately from ( [ eqgradients0 ] ) through the definition of the operation in ( [ equstardefin ] ) . 
henceforth , we will use for the values given in the statement of theorem [ thmhjb1sol ] .moreover , the constant appearing in the statement of theorem [ thmmain ] is equal to .theorem [ thmhjb1sol ] facilitates a verification result , which we state next followed by the proof of theorem [ thmhjb1sol ] .below , is the value function of the adcp ; cf . equation ( [ eqoptbrownian ] ) .[ thmbrownianverification ] fix and .let be the unique solution to the hjb equation ( [ eqhjb1simp ] ) on with the boundary condition on .then , for all .moreover , there exists a markov control which is optimal for the adcp on .the tracking function associated with this optimal markov control is defined by , where the hjb equation ( [ eqhjb1simp ] ) has two sources of nondifferentiability .the first source is the minimum operation and the second is the nondifferentiability of the term .the first source of nondifferentiability is covered almost entirely by the results in . to deal with the nondifferentiability of the function , we use a construction by approximations .the proof of existence and uniqueness in theorem [ thmhjb1sol ] follows an approximation scheme where one replaces the nonsmooth function by a smooth ( parameterized by ) function .we show that the resulting `` perturbed '' pde has a unique classical solution and that as the corresponding sequence of solutions converges , in an appropriate sense , to a solution to ( [ eqhjb1simp ] ) which will be shown to be unique .note that this argument is repeated for each fixed and . to that end , given , define replacing with in ( [ eqhjb1simp ] ) gives the following equation : \\[-8pt ] & & { } + \sum_{i\in{\mathcal{i } } } ( l_i{^n}-\mu_ix_i)\phi_i(x ) + \frac{1}{2}\sum_{i\in{\mathcal{i } } } \bigl(\lambda_i^n+\mu_i(\nu _ in+x_i)\bigr)\phi_{ii}(x).\nonumber\end{aligned}\ ] ] to simplify this further , let and for all , define the function =\min\{f_a^1[y],\ldots , f_a^i[y]\},\ ] ] where for and , &=&f_a(e\cdot x ) \biggl[c_k+\mu_kp_k-\frac{1}{2}\mu_kr_{kk}\biggr]+\sum_{i\in{\mathcal{i}}}(l_i{^n}-\mu_ix_i)p_i \nonumber\\[-8pt]\\[-8pt ] & & { } + \frac{1}{2}\sum_{i\in{\mathcal{i } } } \bigl(\lambda_i^n+\mu_i(\nu _in+x_i)\bigr)r_{ii}-\gamma z.\nonumber\end{aligned}\ ] ] then , ( [ eqhjb2 ] ) can be rewritten as =0.\ ] ] in the following statement we use the gradient notation introduced at the beginning of this section . [ propsolphjb ]fix , and . a unique classical solution exists for the pde ( [ eqhjb2 ] ) on with the boundary condition on .moreover , for where and do not depend on and and does not depend on .also , is lipschitz continuous on the closure with a lipschitz constant that does not depend on ( but can depend on and ) .we postpone the proof of proposition [ propsolphjb ] to the and use it to complete the proof of theorem [ thmhjb1sol ] , followed by the proof of theorem [ thmbrownianverification ] .since we fix and , they will be suppressed below .we proceed to show the existence by an approximation argument . to that end , fix a sequence with as and let be the unique solution to ( [ eqhjb2 ] ) as given by proposition [ propsolphjb ] .the next step is to show that has a subsequence that converges in an appropriate sense to a function , which is , in fact , a solution to the hjb equation ( [ eqhjb1simp ] ) . to that end , let then , is a banach space ( see , e.g. , exercise 5.2 in ) .since the bound in ( [ eqgradients ] ) is independent of , we have that is a bounded sequence in and hence , contains a convergent subsequence . 
let be a limit point of the sequence . since the gradient estimates in proposition [ propsolphjb ] are independent of , they hold also for the limit function , that is , latexmath:[\[\label{equ - bound } for constants and that are independent of .proposition [ propsolphjb ] also guarantees that the global lipschitz constant is independent of so that we may conclude that and that on .we will now show that solves ( [ eqhjb1simp ] ) uniquely . to show that solves ( [ eqhjb1simp ] ), we need to show that =0 ] is defined similar to ] ) .fix and let .note that since in we have , in particular , the convergence of uniformly in .the equicontinuity of the function ] for all and since was arbitrary we have that =0 ] for all .we already argued that on , so that solves ( [ eqhjb1simp ] ) on with on .this concludes the proof of existence of a solution to ( [ eqhjb1simp ] ) that satisfies the gradient estimates ( [ eqgradients0 ] ) .finally , the uniqueness of the solution to ( [ eqhjb1simp ] ) follows from corollary 17.2 in noting that the function ] .[ remssc ] typically one establishes a stronger state - space collapse result showing that the actual queue and the desired queue values are close in supremum norm .the difficulty with the former approach is that the tracking functions here are nonsmooth .while it is plausible that one can smooth these functions appropriately ( as is done , e.g. , in ) , such smoothing might compromise the optimality gap .fortunately , the weaker integral criterion implied by theorem [ thmssc ] suffices for our purposes .fix and let be the solution to ( [ eqhjb1simp ] ) on ( see theorem [ thmhjb1sol ] ) .we start with the following lemma where and are as in ( [ eqnbmdefn30 ] ) and ( [ eqnbmdefn3 ] ) , respectively .[ lemito ] let be an admissible ratio control and let , be the queueing process associated with .fix and and let then , there exists a constant that does not depend on ( but may depend on , and ) such that &\leq&\phi_{\kappa}^n(\check{x}^n(0))+ { \mathbb{e}}\biggl[\int_0^{\tau_{\kappa',t}^n } e^{-\gamma s } a_{u^n(s)}^n\phi_{\kappa}^n(\check{x}^n(s))\,ds\biggr]\\ & & { } -\gamma{\mathbb{e}}\biggl[\int_0^{\tau_{\kappa',t}^n } e^{-\gamma s } \phi_{\kappa } ^n(\check{x}^n(s))\,ds\biggr ] + c\log^{k_1 + 1 } n\\ & \leq & { \mathbb{e}}[e^{-\gamma\tau_{\kappa',t}^n}\phi_{\kappa}^n(\check { x}^n(\tau_{\kappa',t}^n))]+2c\log^{k_1 + 1}n.\end{aligned}\ ] ] we will also use the following lemma where are the cost coefficients ( see section [ secmodel ] ) .[ lemafterstop ] let be as in the conditions of theorem [ thmmain ] .then , there exists a constant that does not depend on such that \leq c\log^2n\ ] ] and \leq c\log^2 n\ ] ] for all and any admissible ratio control .we postpone the proof of lemma [ lemito ] to the end of the section and that of lemma [ lemafterstop ] to the and proceed now to prove the main result of the paper .let be the ratio function associated with the optimal markov control for the adcp ( as in theorem [ thmhjb1sol ] ) .since is fixed we omit the subscript and use . let be the ratio associated with the -tracking policy .the proof will proceed in three main steps .first , building on theorem [ thmssc ] we will show that \leq\phi _ { \kappa}^n(\check{x}^n(0))+c\log^{k_0 + 3 } n.\ ] ] using lemma [ lemafterstop ] , this implies \nonumber\\[-8pt]\\[-8pt ] & \leq&\phi_{\kappa}^n(\check { x}^n(0))+c\log^{k_0 + 3 } n.\nonumber\end{aligned}\ ] ] finally , we will show that for any ratio control , +c\log^{k_1 + 1}n,\ ] ] where we recall that . 
in turn , which establishes the statement of the theorem .we now turn to prove each of ( [ eqinterim2 ] ) and ( [ eqinterim303 ] ) . to simplify notation we fix throughout and let . using lemma [ lemito ]we have \nonumber \\ & & \qquad\leq \phi_{\kappa}^n(\check{x}^n(0))+ { \mathbb{e}}\biggl[\int_0^{\tau_{\kappa',t}^n } e^{-\gamma s } a_{u_h^n(s)}^n\phi_{\kappa}^n(\check{x}^n(s))\,ds\biggr]\\ & & \qquad\quad{}-\gamma{\mathbb{e}}\biggl[\int_0^{\tau_{\kappa',t}^n } e^{-\gamma s }\phi _ { \kappa}^n(\check{x}^n(s))\,ds\biggr]+ c\log^{k_1 + 1 } n .\nonumber\end{aligned}\ ] ] from the definition of as a minimizer in the hjb equation we have that \\ & & { } -\gamma{\mathbb{e}}\biggl[\int_0^{\tau_{\kappa',t}^n } e^{-\gamma s } \phi _ { \kappa}^n(\check{x}^n(s))\,ds\biggr]\\ & & { } + { \mathbb{e}}\biggl[\int _ 0^{\tau_{\kappa',t}^n } e^{-\gamma s } l(\check{x}^n(s),h^n(\check { x}^n(s)))\,ds\biggr ] .\end{aligned}\ ] ] by theorem [ thmssc ] we then have that \nonumber\\[-2pt ] & & { } -\gamma{\mathbb{e}}\biggl[\int_0^{\tau_{\kappa',t}^n } e^{-\gamma s } \phi_{\kappa}^n(\check{x}^n(s))\,ds\biggr]\nonumber\\[-9pt]\\[-9pt ] & & { } + { \mathbb{e}}\biggl[\int_0^{\tau_{\kappa',t}^n } e^{-\gamma s } l(\check { x}^n(s),u_h^n(s))\,ds\biggr]\nonumber\\[-2pt ] & \geq&0.\nonumber\end{aligned}\ ] ] since is nonnegative , combining ( [ eqinterim1 ] ) and ( [ eqinterim101 ] ) we have that \leq\phi_{\kappa}^n(\check { x}^n(0))+c\log^{k_0 + 3 } n,\ ] ] which concludes the proof of ( [ eqinterim2 ] ) .we now show that . to that end , fix an arbitrary ratio control and recall that by the hjb equation , for all and . in turn , using the second inequality in lemma [ lemito ] we have that \\[-2pt ] & & \qquad\geq \phi_{\kappa}^n(\check{x}^n(0 ) ) -{\mathbb{e}}\biggl[\int_0^{\tau_{\kappa',t}^n } e^{-\gamma s } l(\check { x}^n(s),u^n(s))\,ds\biggr]\\[-2pt ] & & \qquad\quad { } -2c\log^{k_1 + 1 } n.\end{aligned}\ ] ] using lemma [ lemafterstop ] , we have , however , that \leq c\log^{2 } n\ ] ] for a redefined constant so that \\[-2pt ] & & { } -2c\log^{k_1 + 1 } n\\&\geq & \phi_{\kappa } ^n(\check{x}^n(0))- { \mathbb{e}}\biggl[\int_0^{\infty } e^{-\gamma s } l(\check { x}^n(s),u^n(s))\,ds\biggr]\\[-2pt ] & & { } -2c\log^{k_1 + 1 } n\end{aligned}\ ] ] and , finally , +c\log^{k_1 + 1}n\vadjust{\goodbreak}\ ] ] for a redefined constant .this concludes the proof of ( [ eqinterim303 ] ) and of the theorem .we end this section with the proof of lemma [ lemito ] in which the following auxiliary lemma will be of use .[ lemmartingales ] fix and an admissible ratio control and let be the corresponding queueing process .let and be as defined in ( [ eqwtildedefin ] ) .then , for each , the process is a square integrable martingale w.r.t to the filtration as are the processes and lemma [ lemmartingales ] follows from basic results on martingales associated with time - changes of poisson processes .the detailed proof appears in the .note that , as in ( [ eqcheckxdynamics ] ) , satisfies and is a semi martingale .applying it s formula for semimartingales ( see , e.g. 
, theorem 5.92 in ) we have for all , that \\ & & { } - \sum_{i\in{\mathcal{i}}}\sum_{s\leq t\dvtx |\delta\check{x}^n(s)| > 0 } e^{-\gamma s}(\phi_{\kappa})_i^n(\check{x}^n(s))\delta\check{x}_i^n(s)\\ & & { } + \sum_{i\in{\mathcal{i}}}\int_0^t e^{-\gamma s } ( \phi_{\kappa}^n)_i(\check { x}^n(s-))b_i^n(\check{x}^n(s),u^n(s))\,ds\\ & & { } -\gamma\int_0^t e^{-\gamma s } \phi_{\kappa}^n(\check{x}^n(s))\,ds\end{aligned}\ ] ] and , after rearranging terms , that where .\end{aligned}\ ] ] setting as defined in the statement of the lemma and taking expectations on both sides we have \nonumber\\ & & \qquad=\phi_{\kappa}^n(\check{x}^n(0 ) ) + \sum_{i\in{\mathcal{i}}}{\mathbb{e}}\biggl[\int_0^{\tau_{\kappa',t}^n } e^{-\gamma s } ( \phi_{\kappa}^n)_i(\check{x}^n(s- ) ) b_i^n(\check { x}^n(s),u^n(s))\,ds\biggr]\nonumber\\[-8pt]\\[-8pt ] & & \qquad\quad{}+\frac{1}{2}\sum _ { i\in{\mathcal{i}}}{\mathbb{e}}\biggl[\sum_{s\leq t\dvtx |\delta\check { x}^n(s)|>0}e^{-\gamma s } ( \phi_{\kappa}^n)_{ii}(\check { x}^n(s-))(\delta\check{x}_i^n(s))^2\biggr]\nonumber\\ & & \qquad\quad{}+ { \mathbb{e}}[c^n(\tau_{\kappa',t}^n)]-\gamma{\mathbb{e}}\biggl[\int_0^{\tau_{\kappa ' , t}^n } e^{-\gamma s } \phi_{\kappa}^n(\check{x}^n(s))\,ds \biggr].\nonumber\end{aligned}\ ] ] we will now examine each of the elements on the right - hand side of ( [ eqinterim404 ] ) . first , note that and , in particular , \\ & & \qquad= { \mathbb{e}}\biggl[\sum_{s\leq\tau_{\kappa',t}^n\dvtx |\delta\check { x}^n(s)|>0}e^{-\gamma s } ( \phi_{\kappa}^n)_{ii}(\check { x}^n(s-))(\delta\check{w}_i^n(s))^2\biggr].\end{aligned}\ ] ] using the fact that , as defined in lemma [ lemmartingales ] , is a martingale as well as the fact that and its derivative processes are bounded up to , we have that the processes and are themselves martingales with and in turn , by optional stopping , that ={\mathbb{e}}[\bar { \mathcal{m}}_i^n(\tau_{\kappa',t}^n)] ] .finally , by ( [ eqfijderiv ] ) we have that thus , where we used the fact that is continuously differentiable with lipschitz constant ( independently of ) .finally , so that also , note that so that , & \quad if ,\vspace*{2pt}\cr 0 , & \quad otherwise . } \ ] ] combining the above gives for suitably redefined which concludes the proof that the conditions ( [ eqfcond1])([eqfcond3 ] ) hold with , and .having verified these conditions , the existence and uniqueness of the solution to ( [ eqhjb2 ] ) now follows from theorem 17.18 in . to obtain the gradient estimates in ( [ eqgradients ] )we first outline how the solution is obtained in as a limit of solutions to smoothed equations ( we refer the reader to , page 466 , for the more elaborate description ) . to that end , let be as defined in ( [ eqfdefin ] ) and for define =g_{h}(f^1_a[y],\ldots , f^i_a[y]),\ ] ] where and and is a mollifier on ( see , page 466 ) . satisfies all the bounds in ( [ eqfcond1])([eqfcond3 ] ) uniformly in ; cf . , page 466 .then , there exists a unique solution for the equations =0\ ] ] on with on . 
the solution is now obtained as a limit of in the space as defined in ( [ eqcstar ] ) .moreover , since the gradient bounds are shown in to be independent of , it suffices for our purposes to fix and focus on the construction of the gradient bounds .our starting point is the bound at the bottom of page 461 of by which where {j,{\mathcal{b}}}^* ] , are as defined in section [ secadcp ] .the constant depends only on the number of classes and on ( see , top of page 461 ) and this fraction equals , in our context , to and is thus constant and independent of and . we will address the constant shortly .we first argue how one proceeds from ( [ eqonemoreinterim ] ) .fix , let and ( see , top of page 132 ) . then , applying an interpolation inequality ( see , bottom of page 461 and lemma 6.32 on page 130 ) , it is obtained that plugging this back into ( [ eqonemoreinterim ] ) one then has for a constant that depends only on and . in turn , for a constant that does not depend on or .hence , to obtain the required bound in ( [ eqgradients ] ) it remains only tobound .following , building on equation ( 17.51 ) of , is the ( minimal ) constant that satisfies where ( as stated in , bottom of page 460 ) the ( redefined ) constant depends only on the number of class and on .the constants and are defined in and we will explicitly define them shortly . hereone should not confuse with the average service rate in our system . inwhat follows will only be used as the constant in .we now bound constants and .these are defined by where is a constant that depends only on the number of classes , is arbitrary and fixed ( independent of and ) and .the constants , and are defined in , pages 456460 , and and are as on page 461 there .we note that is a constant , is bounded by for some constant [ see ( [ eqgradients ] ) ] that depends only on and , by ( [ eqfx ] ) , . in turn , . arguing similarly for and we find that there exists a constant ( that does not depend on and ) such that which in turn implies the existence of a redefined constant such that and the proof of the bound is concluded by plugging these back into ( [ eqcandefin ] ) and setting there to get that for some that does not depend on and .the constant on the right - hand side of ( [ eqgradients ] ) ( which can depend on but does not depend on ) is argued as in the proof of theorem 17.17 in and we conclude the proof by noting that the global lipschitz constant ( that we allow to depend on ) follows from theorem 7.2 in .we next turn to proof of theorem [ thmssc ] .first , we will explicitly construct the queueing process under the -tracking policy and state a lemma that will be of use in the proof of the theorem .define so that is the arrival process of class- customers .given a ratio control and the associated queueing process , is as defined in ( [ eqwtildedefin ] ) .also , we define that is , is the total number of service completions by time in the system .for the construction of the queueing process under the tracking policy we define a family of processes as follows : let be a family of i.i.d uniform ] is an interval such that for all ] . indeed , by the definition of , we have that for all in ] where we used the fact that .since , for each , there exists such that we have , by the definition of that for such so that ( [ eqinterim747 ] ) now follows from ( [ eqmbound3 ] ) . 
finally , recall that for all and that so that by ( [ eqinterim747 ] ) plugging this into ( [ eqcheckpsiinterim ] ) together with ( [ eqitai1 ] ) and ( [ eqitai2 ] ) we then have that , on , this argument is repeated for each . to complete the proof of ( [ eqssc1 ] ) note that , using ( [ eqmboun4 ] ) together with , we have that . applying hlder s inequality we have that \\ & & \qquad\leq { \mathbb{e}}\bigl[\sup_{1\leq l\leq r^n+1}\sup_{\tau_{l-1}^n\leq s < ( \tau _ { l-1}^n+\eta^n)}|\check{\psi}^n(s)|1\{\tilde{\omega}(k)\ } \bigr]\\ & & \qquad\quad{}+ { \mathbb{e}}\bigl[\max_{1\leql\leq r^n+1}\sup_{\tau_{l-1}^n\leq s < ( \tau _ { l-1}^n+\eta^n)}|\check{\psi}^n(s)|1\{\tilde{\omega}(k)^c\ } \bigr]\\ & & \qquad\leq c\log^{k_0 + 2 } n + c\sqrt{n}\log^{k_1+m}n c_1e^{-(c_2 { k}/{2})\log n}\end{aligned}\ ] ] for redefined constants and ( [ eqssc1 ] ) now follows by choosing large enough .we turn to prove ( [ eqssc2 ] ) . rearranging terms in ( [ eqpsicheckdefin ] )we write so that equation ( [ eqssc2 ] ) now follows directly from ( [ eqsscinterim ] ) and ( [ eqmboun4 ] ) through an application of hlder s inequality . finally , to establish ( [ eqssc3 ] ) , note that from the definition of , in turn , \leq c\log^{k_1+m}n = c\log^{k_0}n.\ ] ] we have thus proved ( [ eqssc1])([eqssc3 ] ) and to conclude the proof of the theorem it remains only to establish ( [ eqsscinterim ] ) . to that end , let , and be as in ( [ eqchecktaudefin])([eqydefin ] ) .fix an interval such that for all . by the definition of the tracking policy , ( [ eqqtrack ] ) holds on this interval so that , on , \\[-8pt ] & \leq & -\frac{\epsilon_i}{4}n(t - s)+2k\log n.\nonumber\end{aligned}\ ] ] equation ( [ eqsscinterim ] ) now follows directly from ( [ equntilwhen3 ] ) .indeed , note for all , .hence , . in turn , using ( [ equntilwhen3 ] ) and assuming that we have that for some time with as defined in ( [ eqetandefin ] ) . also , let and note that ( [ equntilwhen3 ] ) applies to any subinterval of . in turn, would constitute a contradiction to ( [ equntilwhen3 ] ) so that we must have that for all with . finally , note that can be taken to be if .let , and be as in the statement of the lemma .we first prove ( [ eqafterstop2 ] ) . to that end , we claim that , for all large enough , \leq c\log^2n\ ] ] for some and all .this is a direct consequence of lemma 3 in that , in our notation , guarantees that \leq c\bigl(1+|x^n|+ \sqrt{n}(t+t^2)\bigr)\ ] ] for all and some constant .we use ( [ eqthisisjustonemore ] ) to prove lemma [ lemafterstop ] .the assertion of the lemma will be established by showing that \leq c\log^2n.\ ] ] to that end , applying hlder s inequality , we have \nonumber\\ & & \qquad\leq{\mathbb{e}}_{x^n , q^n}\bigl[(2t\log n-\tau_{\kappa',t}^n)^+\sup_{0\leq t\leq2t\log n}(e\cdot c)\bigl(e\cdot \check{x}^n(t)\bigr)^+\,ds\bigr]\nonumber\\[-8pt]\\[-8pt ] & & \qquad\leq \sqrt{{\mathbb{e}}_{x^n , q^n}\bigl[\bigl((2t\log n-\tau_{\kappa ' , t}^n)^+\bigr)^2\bigr]}\nonumber\\ & & \qquad\quad{}\times\sqrt { { \mathbb{e}}_{x^n , q^n}\bigl[\bigl(\sup_{0\leq t\leq2t\log n}(e\cdot c)\bigl(e\cdot\check{x}^n(t)\bigr)^+\,ds\bigr)^2\bigr]}.\nonumber\end{aligned}\ ] ] using lemma [ lemstrongappbounds ] we have that \leq cn\log^6n\ ] ] for some ( that can depend on ) . 
also , since , choosing ( and in turn large enough ) we then have , using lemma [ lemstrongappbounds ] , that and hence , that \leq c.\ ] ] plugging ( [ eqinterim9 ] ) and ( [ eqinterim10 ] ) into ( [ eqinterim8 ] ) we then have that \leq c\log ^2 n.\ ] ] to conclude the proof we will show that ( [ eqafterstop1 ] ) follows from our analysis thus far .indeed , \\ & & \qquad\leq { \mathbb{e}}_{x^n , q^n}^{u}\biggl[\int_{\tau_{\kappa',t}^n}^{2t\log n}e^{-\gamma s } \sup_{0\leq s\leq2t\log n}(e\cdot c)\bigl(e\cdot\check { x}^n(s)\bigr)^+ \,ds\biggr].\end{aligned}\ ] ] the right - hand side here is bounded by by the same argument that leads to ( [ eqafterstopbound1 ] ) .recall that is defined by , where the fact that each of the processes and are square integrable martingales with respect to the filtration follows as in section 3 of and specifically as in lemma 3.2 there .since , with probability 1 , there are no simultaneous jumps of and , the quadratic variation process satisfies &=&[m_{i,1}^n]_t+[m_{i,2}^n]_t\\ & = & \sum_{s\leq t}(\delta m_{i,1}^n(s))^2+\sum_{s\leq t}(\delta m_{i,2}^n(s))^2,\end{aligned}\ ] ] where the last equality follows again from lemma 3.1 in ( see also example 5.65 in ) .finally , the predictable quadratic variation process satisfies where the second equality follow again follows from lemma 3.1 in and the last equality from the definition of [ see ( [ eqnbmdefn3 ] ) ] . by theorem 3.2 in , t\geq0]) ] are both martingales with respect to . in turn , by the optional stopping theorem so are the processes and as defined in the statement of the lemma .finally , it is easy to verify that these are square integrable martingales using the fact the time changes are bounded for all finite .
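The tracking-policy construction above can be made concrete with a small simulation. The following is a minimal sketch under our own assumptions, not the paper's exact construction (which is built from the uniform routing variables and time changes described above): a multi-class many-server queue in which an idle server always admits a customer from the class whose queue length most exceeds its target share nu_i of the total queue. All parameter values and the specific admission rule are illustrative.

```python
import numpy as np

# minimal sketch (assumed policy, not the paper's exact construction): a
# multi-class M/M/n queue in which an idle server takes a customer from the
# class whose queue length most exceeds its target share nu_i of the total
# queue.  lam, mu, nu and n_servers are illustrative values.
rng = np.random.default_rng(0)

def simulate_nu_tracking(lam, mu, nu, n_servers, t_end):
    I = len(lam)
    q = np.zeros(I, dtype=int)       # customers waiting in each class
    busy = np.zeros(I, dtype=int)    # servers currently serving each class
    t = 0.0
    while t < t_end:
        rates = np.concatenate([lam, mu * busy.astype(float)])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        e = rng.choice(2 * I, p=rates / total)
        if e < I:                    # arrival of a class-e customer
            q[e] += 1
        else:                        # service completion in class e - I
            busy[e - I] -= 1
        # admission rule: fill idle servers from the most "over-ratio" class
        while busy.sum() < n_servers and q.sum() > 0:
            imbalance = q - nu * q.sum()
            imbalance[q == 0] = -np.inf
            i = int(np.argmax(imbalance))
            q[i] -= 1
            busy[i] += 1
    return q, busy

print(simulate_nu_tracking(lam=np.array([40.0, 60.0]),
                           mu=np.array([1.0, 1.5]),
                           nu=np.array([0.3, 0.7]),
                           n_servers=100, t_end=50.0))
```

Under such a rule the vector of queue lengths stays close to nu times the total queue, which is, roughly, the behaviour that the state-space-collapse bounds above quantify.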
|
we consider optimal control of a multi - class queue in the halfin whitt regime , and revisit the notion of asymptotic optimality and the associated optimality gaps . the existing results in the literature for such systems provide asymptotically optimal controls with optimality gaps of where is the system size , for example , the number of servers . we construct a sequence of asymptotically optimal controls where the optimality gap grows logarithmically with the system size . our analysis relies on a sequence of brownian control problems , whose refined structure helps us achieve the improved optimality gaps .
|
the addition of noise in mathematical models of population dynamics can be useful to describe the observed phenomenology in a realistic and relatively simple form .this noise contribution can give rise to non trivial effects , modifying sometimes in an unexpected way the deterministic dynamics .examples of noise induced phenomena are stochastic resonance , noise delayed extinction , temporal oscillations and noise - induced pattern formation .biological complex systems can be modelled as open systems in which interactions between the components are nonlinear and a noisy interaction with the environment is present .recently it has been found that nonlinear interaction and the presence of multiplicative noise can give rise to pattern formation in population dynamics of spatially extended systems .the real noise sources are correlated and their effects on spatially extended systems have been investigated in refs . ( see cited refs . there ) and . in this paperwe study the spatio - temporal evolution of an ecosystem of three interacting species : two competing preys and one predator , in the presence of a colored multiplicative noise . we find a nonmonotonic behavior of the average size of the patterns as a function of the noise intensity .the effects induced by the colored noise , in comparison with the white noise case , are : ( i ) pattern formation with a greater dimension of the average area , ( ii ) a shift of the maximum of the area of the patterns towards higher values of the multiplicative noise intensity .to describe the dynamics of our spatially distributed system , we use a coupled map lattice ( cml ) with a multiplicative noise + z_{i , j}^n z_{i , j}^n + d\sum_p ( z_{p}^n - z_{i , j}^n ) , \label{eqset}\end{aligned}\ ] ] where , and are respectively the densities of preys , and the predator in the site at the time step . here and are the interaction parameters between preys and predator , is the diffusion coefficient , and are scale factors . indicates the sum over the four nearest neighbors in the map lattice . are ornstein - uhlenbeck processes with the statistical properties and where is the correlation time of the process , is the noise intensity , and represents the three continuous stochastic variables ( ) , taken at time step .the boundary conditions are such that no interaction is present out of lattice . because of the environment temperature, the interaction parameter between the two preys can be modelled as a periodical function of time here , and .the interaction parameter oscillates around the critical value in such a way that the dynamical regime of lotka - volterra model for two competing species changes from coexistence of the two preys ( ) to exclusion of one of them ( ) .the parameters used in our simulations are the same of , in order to compare the results with the white noise case .specifically they are : ; ; , and .the noise intensity varies between and . with this choice of parameters the intraspecies competition among the two prey populationsis stronger compared to the interspecies interaction preys - predator ( ) , and both prey populations can therefore stably coexist in the presence of the predator . to evaluate the species correlation over the grid we consider the correlation coefficient between a couple of them at the step as ^{1/2 } } , \label{r}\ ] ] where is the number of sites in the grid ( ) , the symbols represent one of the three species concentration , and are the mean values of the same quantities in all the lattice at the time step . 
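The coupled-map-lattice dynamics just described can be sketched in a few lines of code. The local map of eq. ([eqset]) is only partially reproduced above and the parameter values are not shown, so the local interaction terms, the numerical values, the boundary conditions (periodic here, for brevity, instead of the no-interaction boundaries used in the paper) and the discrete Ornstein-Uhlenbeck update below are all illustrative assumptions rather than the authors' exact choices.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 100                                     # lattice side
mu, alpha, gamma, D = 2.0, 0.05, 0.3, 0.1   # assumed local-map parameters
sigma, tau_c = 1e-8, 10.0                   # noise intensity and correlation time
beta_c, eps_b, period = 1.0, 0.1, 600       # assumed oscillation of beta(t)

def beta(n):
    # prey-prey interaction oscillating around the critical value beta_c
    return beta_c + eps_b * np.sin(2.0 * np.pi * n / period)

def lap(f):
    # coupling with the four nearest neighbours (periodic boundaries for brevity)
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

def ou_step(xi):
    # one-step update of an Ornstein-Uhlenbeck process with correlation time
    # tau_c and stationary variance sigma / (2 * tau_c)  (assumed form)
    a = np.exp(-1.0 / tau_c)
    s = np.sqrt(sigma / (2.0 * tau_c) * (1.0 - a * a))
    return a * xi + s * rng.standard_normal(xi.shape)

# homogeneous initial condition, as in the examples discussed in the text
x = np.full((L, L), 0.25)
y = np.full((L, L), 0.25)
z = np.full((L, L), 0.25)
xix, xiy, xiz = (np.zeros((L, L)) for _ in range(3))

for n in range(2000):
    b = beta(n)
    xix, xiy, xiz = ou_step(xix), ou_step(xiy), ou_step(xiz)
    nx = mu * x * (1.0 - x - b * y - alpha * z) + x * xix + D * lap(x)
    ny = mu * y * (1.0 - y - b * x - alpha * z) + y * xiy + D * lap(y)
    nz = mu * z * (gamma * (x + y) - z) + z * xiz + D * lap(z)  # assumed predator map
    x, y, z = (np.clip(v, 0.0, None) for v in (nx, ny, nz))

print("mean densities after 2000 steps:", x.mean(), y.mean(), z.mean())
```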
from the definition ( [ r ] ) it follows that .we quantify our analysis by considering the maximum patterns , defined as the ensemble of adjoining sites in the lattice for which the density of the species belongs to the interval $ ] , where is the absolute maximum of density in the specific grid .the various quantities , such as pattern area and correlation parameter , have been averaged over 50 realizations , obtaining the mean values below reported .we evaluated for each spatial distribution , in a temporal step and for a given noise intensity value , the following quantities referring to the maximum pattern ( mp ) : mean area of the various mps found in the lattice and correlation between two preys , and between preys and predator . from the deterministic analysis we observe : ( i ) for ( ) a coexistence regime of the two preys , characterized in the lattice by a strong correlation between them and the predator lightly anti - correlated with the two preys ; ( ii ) for ( ) wide exclusion zones in the lattice , characterized by a strong anti - correlation between preys .because of the periodic variation of the interaction parameter , an interesting activation phenomenon for takes place : the two preys , after an initial transient , remain strongly correlated for all the time , in spite of the fact that the parameter takes values greater than during the periodical evolution .we focus on this dynamical regime to analyze the effect of the noise .we found that the noise acts as a trigger of the oscillating behavior of the species correlation giving rise to periodical alternation of coexistence and exclusion regime .even a very small amount of noise is able to destroy the coexistence regime periodically in time .this gives rise to a periodical time behavior of the correlation parameter , with the same periodicity of the interaction parameter ( see eq.([betat ] ) ) , which turns out almost independent of the noise intensity and of the correlation time ( see fig.[cor](a ) ) .this periodicity reflects the periodical time behavior of the mean area of the patterns .a nonmonotonic behavior of the pattern area as a function of time is observed for all values of noise intensity investigated .this behavior becomes periodically in time for lower values of noise intensity , when higher values of correlation time are considered . 
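The two diagnostics used in the analysis (the species-species correlation coefficient of eq. ([r]) and the mean area of the maximum patterns) can be computed as in the sketch below. The lower edge of the density interval defining a maximum pattern is written here as a fraction f_mp of the absolute maximum; the exact fraction used by the authors is not visible in the text above, so the value 0.75 is an assumption.

```python
import numpy as np
from scipy.ndimage import label

def grid_correlation(a, b):
    # correlation coefficient of eq. (r) between two species over the grid
    da, db = a - a.mean(), b - b.mean()
    return (da * db).sum() / np.sqrt((da * da).sum() * (db * db).sum())

def mean_max_pattern_area(field, f_mp=0.75):
    # mean area of the "maximum patterns": 4-connected clusters of sites whose
    # density lies between f_mp * max and max (f_mp is an assumed value)
    mask = field >= f_mp * field.max()
    labels, n_clusters = label(mask)
    if n_clusters == 0:
        return 0.0
    areas = np.bincount(labels.ravel())[1:]   # skip the background label 0
    return float(areas.mean())

# usage with the lattices x, y, z produced by the coupled-map sketch above:
# print(grid_correlation(x, y), mean_max_pattern_area(x))
```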
in figs.[cor](b - d ) we show the time evolution of the mean area of the maximum patterns , for and for three values of correlation time , namely .the periodicity of the nonmonotonic behavior of the area of mps is clearly observed .-0.5 cm and for .the correlation plot ( a ) is quite the same for all the investigated.,title="fig:",height=302 ] -0.3 cm -0.5 cm [ cor ] to analyze the noise induced pattern formation we focus on the correlation regime between preys , where pattern formation appears .in fact when the preys are highly anticorrelated with species correlation parameter , a big clusterization of preys is observed , with large patches of preys enlarging to all the available space of the lattice .this scenario , observed also in the white noise case , is confirmed by the analysis of the time series of the species .these large patches appear , in the anticorrelation regime corresponding to the exclusion regime of the two preys , with smooth contours and low intensity of species density for lower noise intensities and higher correlation time values .the study of the area of the pattern formation as a function of noise intensity with colored noise shows two main effects : 1 ) the increase of the pattern dimension and 2 ) a shift of the maximum toward higher values of the noise intensity .as expected , for low values of the correlation time we observe the same results than in the white noise case .these effects are well visible in fig .[ aree ] where the three curves show the nonmonotonic behavior of the area of the maximum pattern as a function of noise intensity .the interaction step here considered is 1400 , which correspond to the biggest pattern area found in our calculations .the first curve ( ) is quite the same found in the white noise case .the value of maximum in the third curve ( ) is not so different from the previous one ( ) , because its value is approaching the maximum possible value of 10.000 into the used grid. for the three correlation time here reported .see the text for the values of the other parameters.,title="fig:",height=226 ] -0.3 cm -0.5 cm [ aree ] for and noise intensity : . represent respectively prey - prey , prey1-predator , prey2-predator and total preys - predator correlation .see the text for the values of the other parameters.,title="fig:",height=340 ] -0.3 cm -0.8 cm [ pat ] the pattern formation is visible in fig .[ pat ] , where we report three patterns of the two preys and the predator for the following values of noise intensity : and .the initial spatial distribution is homogeneous and equal for all species , that is for all sites ( ) .we see that a spatial structure emerges with increasing noise intensity . at very low noise intensity ( ), the spatial distribution appears almost homogeneous without strong pattern formation ( see fig . [we considered here only structured pattern , avoiding big clusterization of density visible in the case of anticorrelated preys . at intermediate noise intensity ( )spatial patterns appear .as we can see the structure disappears by increasing the noise intensity ( see fig . [pat]c ) . 
consistently with fig .[ aree ] , we find that for higher correlation time the qualitative shape of the patterns shown in fig .[ pat ] are repeated , but with a shift of the maximum area ( darkest patterns ) toward higher values of the noise intensity .the noise - induced pattern formation in a coupled map lattice of three interacting species , described by generalized lotka - volterra equations in the presence of multiplicative colored noise , has been investigated .we find nonmonotonic behavior of the mean area of the maximum patterns as a function of noise intensity for all the correlation time investigated . for increasing values of the correlation time observe an increase of the area of the pattern and a shift of the maximum value towards higher values of the noise intensity . the nonmonotonic behavior is also found for the area of the patterns as a function of the evolution time .0.2 cm this work was supported by , by infm and miur .garca lafuente j. , garca a. , mazzola s. , quintanilla l. , delgado j. , cuttitta a. and patti b. _ hydrographic phenomena influencing early life stages of the sicilian channel anchovy , fishery oceanography _ * 11*(1 ) 3144 ( 2002 ) .
|
a coupled map lattice of generalized lotka - volterra equations in the presence of colored multiplicative noise is used to analyze the spatiotemporal evolution of three interacting species : one predator and two preys symmetrically competing with each other . the correlation of the species concentration over the grid as a function of time and of the noise intensity is investigated . the presence of noise induces pattern formation , whose dimensions show a nonmonotonic behavior as a function of the noise intensity . the colored noise induces a greater dimension of the patterns with respect to the white noise case and a shift of the maximum of its area towards higher values of the noise intensity .
|
magnetoencephalographic ( meg ) measurements record magnetic fields generated from small currents in the neural system while information is being processed in the brain . in the classical cortical distributed model , the activation of neurons in the cortex is represented by sources of currents whose distribution approximates the cortex structure , and meg measurements provide information on the current distribution for a specific brain function . in practice ,given an set of current sources , and a set of magnetic field detectors labeled by , the relation between the field strengths measure by the detectors and the sources can be expressed as where * a * is a matrix whose elements are known functions of the geometric properties of the sources and the detectors , as determined by the biot - savart law , and indicates noise , to be ignored here .the detail form of applicable to the present study is given in .in tensor analysis notation eq .( [ ill - posed eq ] ) may be simply expressed as . in what follows, we adopt the convention of summing over repeated index ( in eq .( [ ill - posed eq ] ) ) . in standard meg eq .( [ ill - posed eq ] ) appears as an inverse problem : the measured field strengths * m * are given and the unknowns are * j*. since the total number of detectors that can be deployed in a practical meg measurement is far less than the number of current sources , the answer to eq .( [ ill - posed eq ] ) is not unique and the inverse problem is ill - posed . a number of methodshave been proposed to solve eq .( [ ill - posed eq ] ) , including the least - square norm , the bayesian approach , and the maximum entropy approach . in the method of maximum entropy ( me ) the meg data , in the form of the constraints * m * - * a* , is used to obtain a _posterior _ probability distribution for neuron current intensities from a given _ prior _ ( distribution ) .in , the method is implemented by introducing a hidden variable denoting the grouping property of firing neurons . herewe develop an approach such that me becomes a tool for updating the probability distribution .* me updating procedure*. let the set * r * to be current intensities caused by neuron activities in the cortex at sites , and be the probability current intensity distribution at site .assuming the n current sources to be uncorrelated , we define the joint probability distribution as .the current at site is then .suppose we have prior knowledge about neuron activities expressed in terms of the joint prior .the implication is that would produce currents * j * that does not satisfy eq .( [ ill - posed eq ] ) ( here without noise ) .our goal is to update from this prior to a posterior that does satisfy eq .( [ ill - posed eq ] ) .the me method states that given and the meg data , the preferred posterior is the one that maximizes relative entropy ] and a set } ] denotes the updating step and } a ] is updated to } ] . then the current intensities will be fix - points such that . in practice the fix - point may not be reached with infinite accuracy within finite time , and the updating may be terminated when the quantity attains a predetermined value .it is important to stress that unless the prior properly reflects sufficient knowledge about neuron activities , there is no guarantee that the fix - point is closely related to the actual current intensities .[ fig1 ] * sources with gaussian distributed intensities*. 
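Before the mixture prior is specified below, it may help to see the ME update in the simplest case of a single Gaussian prior per site with no hidden activation variables. In that case, maximizing the relative entropy subject to the linear constraints A<r> = m shifts the prior means by an amount proportional to A-transpose times the multipliers, and the multipliers solve a linear system, so the fixed point is reached in one step. This is only an illustrative sketch under that simplifying assumption; the variable names and the goodness-of-fit measure below are ours and are not necessarily the quantity B of eq. ([bmse]).

```python
import numpy as np

def me_gaussian_update(A, m, mu, s2):
    # single-Gaussian-prior special case (an assumption): the multipliers solve
    #   A (mu - s2 * A^T lam) = m ,
    # and the posterior means are the shifted prior means.
    lam = np.linalg.solve(s2 * A @ A.T, A @ mu - m)
    return mu - s2 * (A.T @ lam), lam

def fit_quality(A, m, r):
    # one natural convergence measure: -log10 of the relative squared residual
    res = m - A @ r
    return -np.log10((res @ res) / (m @ m))

# toy usage: 64 detectors, 1024 sources, random lead field and data (assumed)
rng = np.random.default_rng(2)
A = rng.normal(size=(64, 1024))
m = rng.normal(size=64)
r_post, lam = me_gaussian_update(A, m, mu=np.zeros(1024), s2=25.0)
print("residual measure:", fit_quality(A, m, r_post))
```

With the mixture prior used in the text, the posterior means depend on the multipliers nonlinearly, which is why an iterative fixed-point scheme is needed instead of the one-shot solve above.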
pertinent general information on the geometric structure of the cortex and neuron activities , readily obtained from experiments such as functional magnetic resonance imaging ( fmri ) , positron emission tomography ( pet ) , etc . , is incorporated in a distributed model in which current sources , modeled by magnetic dipoles , are distributed in regions below the scalp .a schematic coarse - grained representation of this model is shown in fig.[fig1]a , where 1024 dipoles are placed on 16 planar patches , 64 dipole to a patch .eight of the which are parallel to the scalp and the other eight normal .regardless of the orientation of the patch , all dipoles are normal to the cortical surface with the positive direction pointing away from the cortex .information contained in a prior may be qualitative instead of quantitative . here , our prior will include the information that the activation resides in a part of the motor cortex that in fig.[fig1]a is represented by the patches 8 and 9 , and utilize this prior information by placing a higher concentration of field detectors in the area nearest to those in fig.[fig1]b .there is additional information such as neuronal grouping property .we follow amblard _ et al ._ and group dipoles into cortical regions , , each containing dipoles with the s satisfying . associated with is a hidden variable that expresses regional activation status : =1 denotes an `` excitatory state '' , or a state of out - going current ; =-1 denotes an `` inhibitory state '' ( in - going current ) ; =0 denotes a `` silent state '' ( no current ) . with this grouping ,the prior reduces to a sum of probability distributions over all possible configurations of : where specifies the current densities of the sources in ; is the conditional joint probability of the dipoles in being in state and having current densities ; is the probability of the region being in activation state .for we adopt a gaussian distribution for activated states : })^2 \right ] .\label{prior - conditional gauss}\ ] ] for simplicity , all current distributions have the same standard deviation .current sources at different sites have different mean intensities } ] and }=\alpha _ { k[1]} ] . in the absence of any other prior information we take to be a random number ( between zero and one ) , }\vert ]. however , the inverse problem being ill - posed , and since the prior contains no activation information , the above strategy produces poor results as expected .* better priors by coarse graining*. in the absence of prior information on activation pattern , one way to acquire some `` prior '' information from the meg data itself is by coarse graining the current source .coarse graining reduces the severity of the ill - posedness because the closer the number of current sources to the number of detectors , the less ill - posed the inverse problem . within the framework of the me procedure described above, coarse graining can be simply achieved by setting for all in a given region to be the same . herewe choose to take an intermediate step that disturbs the standard me procedure even less , by replacing the second relation in eq .( [ e : finetune_r ] ) by }= \langle r_{\eta } \rangle _ { [ i-1]}-\sigma \bar{\nu}_{\eta [ i-1]}. \label{e : coarse - grain}\ ] ] note that in eq .( [ e : finetune_r ] ) depends on the probability common to region , whereas does not explicitly . 
by replacing } ] on the right hand side of eq .( [ e : coarse - grain ] ) , we force the updated in each iteration to be more similar ( although not necessarily identical ) . in practicewe only use this modified me to get information on the activity pattern , rather than the intensity , of the sources .let be the current intensity set obtained after a convergence criterion set by requiring ( eq . ( [ bmse ] ) ) .we now define a better prior set of gaussian means , where , in units of na , these quantities , together with the obtained probabilities for the regions , define a prior probability , which may then be fed into the standard me procedure for computing .this procedure may be repeated by requiring to be not less than a succession of threshold values , , such that a successive level of better priors , , , , and , , , , may be obtained .eventually a point of diminishing return is reached . in this workwe find the second level prior is qualitatively better than the first , and the third is not significantly better than the second .in the following two examples , the 1024 current courses are partitioned into 16 patches , eight ( 4 cm wide and 3.3 cm long ) parallel and eight ( 4 cm wide and 2.3 cm deep ) normal to the scalp ( fig.[fig1]a ) . on each patchlies a 8 rectangular array of sources that are divided into 16 four - source groups ; that is , =256 .the interstitial distances on the horizontal ( vertical ) patches are 0.57 and 0.47 cm ( 0.57 and 0.33 cm ) , respectively .the distance between the adjacent vertical patches are normally 0.55 cm , but the distance will be varied for testing , see below .the detectors are arranged in a hemisphere surrounding the scalp as indicated in fig.[fig1]b .the matrix * a * of eq .( [ ill - posed eq ] ) is given in .the me procedure is insensitive to in the range 5 . in the coarse graining procedure we set =100 and =150 .as noted previously , coarse graining a third time did not produce meaningful improvement on the prior .in the two examples , artificial meg data are generated by having the sources on patch have uniform and varied current intensities , respectively .* uniform activation on patch 8*. in this case the `` actual '' activity pattern is : the 64 sources on patch 8 each has a current of 10 na , and all other sources are inactive ( fig.[fig2]a . ) . with the distance set to be 0.55 cm , the results in the first and second rounds of searching for a better prior , and in the final me procedure properare shown in fig.[fig2]b . in the plots , is the defined in eq .( [ bmse ] ) and is defined as } - \langle{\bf r}\rangle \right\vert ^{2}/ \left\vert \langle{\bf r}\rangle \right\vert ^{2}\right ) , \label{e : mse}\ ] ] ) .b ( right panel ) : ( top panel ) and ( bottom panel ) _vs_. iteration number.,title="fig:",height=172 ] ) .b ( right panel ) : ( top panel ) and ( bottom panel ) _ vs_. iteration number.,title="fig:",height=192 ] where represents the actual source current intensity and the index indicates the iteration number .the solid triangles , squares , and crosses , respectively , give results from me iteration procedures for constructing the first prior , second prior , and posterior .it is seen that rises rapidly in the search for the first prior ( solid triangles ) ; four iterations were needed for to reach 100 . 
is less than 100 at the beginning of the second prior search because the prior values for this search is not the posterior of the previous search , but is related to it by eq .( [ e : firing_pattern ] ) .the same goes with the the relation between the beginning of the me proper ( crosses ) and the end of second prior search ( squares ) . in the search for the second prior, increases slowly after the seventh iteration , but eventually reaches 150 at the 12 iteration .this already suggests that a round of search for a still better prior will not be profitable . in the me procedureproper , reaches 150 quickly at the fourth iteration , followed by a slow rise . after reaching 190 at the 14 iterationthe rise is very slow ; the final value at the 26 iteration is 195 .the dependence of the value on the me procedures and the iteration numbers essentially mirrors that of .the for the final posterior is 68 , which corresponds to an average of 3.3% error on the current intensities . * resolving power as a function of *. we tested the resolving power of our me procedure as a function of . with uniform activation on patch 8 ,the computed values versus are plotted in fig.[fig6_7]a .the general trend is that decreases with decreasing as expected : when .044 cm ; drops sharply when is less than 0.04 cm ; is less than 8 when is less than 0.0044 cm . in the last instancethe me procedure loses its resolving power because the error on the current intensity is about 70% . on the other hand, implies an error of 5.6.6% .this means that if an error of no more than 8% is acceptable , the me method should be applicable to a source array whose density is up to one hundred times higher than that used in the present study .value _ vs. _ the separation between patches 8 and 9 .the distances =0.55 , 0.0275 and 0.0035 cm are marked out and labeled ( 1 ) , ( 2 ) and ( 3 ) , respectively .b ( right panel ) : reconstructed _ vs. _ source number for the cases ( 1 ) ( solid line ) and case ( 2 ) ( dashed line ) in a. , title="fig:",height=192 ] value _ vs. _ the separation between patches 8 and 9 .the distances =0.55 , 0.0275 and 0.0035 cm are marked out and labeled ( 1 ) , ( 2 ) and ( 3 ) , respectively .b ( right panel ) : reconstructed _ vs. _ source number for the cases ( 1 ) ( solid line ) and case ( 2 ) ( dashed line ) in a. , title="fig:",height=192 ] * resolving power as a function of depth*. signals from sources deeper in the cortex are in general weaker at the detectors and are harder to resolve .this effect is shown in fig.[fig6_7]b .the abscissa gives the source numbers on patch 8 ( 449 to 512 ) and patch 9 ( 513 to 576 ) .the sources are arranged in equally spaced rows of eight , such that 449 - 456 and 513 - 520 are just below the scalp , 457 - 464 and 521 - 528 are 0.328 cm from the scalp , and so on .fig.[fig6_7]b shows that when =0.55 cm ( solid line ) , the me procedure can resolve all sources ( up to a maximum depth of 2.3 cm ) . 
this resolving power decreases with decreasing .when =0.0275 cm the me procedure fails for sources at a depth of 2 cm or greater ( that is , sources 496 - 512 and 561 - 576 on patches 8 and 9 , respectively ) .and values for the case of varied activation on patch 8 ( see text ) .b ( right panel ) : reconstructed ( dash line ) and ( artificially generated ) actual ( black line ) .current source number on patches 8 and 9.,title="fig:",height=201 ] and values for the case of varied activation on patch 8 ( see text ) .b ( right panel ) : reconstructed ( dash line ) and ( artificially generated ) actual ( black line ) .current source number on patches 8 and 9.,title="fig:",height=201 ] * varied activation on patch 8*. we tested the me procedure in a case with a slightly more complex activation pattern ( still unknown to the prior ) : with =0.55 cm , all current sources on patch 8 are activated , with those near the center of the patch having higher intensities than those in the peripheral .all other sources are silent .we used random source current densities as zeroth order prior , employed coarse graining twice , then used the standard me to obtain the final reconstructed current intensities . with =0.55cm , the dependence of and on the iteration number is shown in fig.[fig8]a .interestingly for the me procedure proper ( crosses ) , only the improves with iteration , whereas the value remains a constant at about 18 .this value is large compared to the value of 60 obtained for the case of uniform activation ( fig.[fig2]b ) .the solid and dashed lines in fig.[fig8]b indicate the actual and reconstructed current intensities , respectively , for the sources on patches 8 and 9 .these show that the poor result is caused by reconstructed false activation of sources 550 to 580 on patch 9 . when sources other than those on patch 8 are forcibly forbidden to activate , increases to 30 ( bottom plot in fig.[fig8]a ) , corresponding to a 22% error , suggesting that better results may be obtained when more reliable priors are given .this remains to be investigated .* acknowledgments*. this work is partially supported by grants nsc 93 - 2112-m-008 - 031 to hcl and nsc 94 - 2811-m-008 - 018 to cyt from national science council , roc .we thank jean - marc lina for his contribution during the early stages of this work .j. skilling , the axioms of maximum entropy in _ maximum entropy and bayesian methods in science and engineering _ , ed . by g. j. erickson and c. r. smith , kluwer , dordrecht , 1988 , pp.173187 ; classic maximum entropy , in _ maximum entropy and bayesian methods in science and engineering _ , edited . by j. skilling , kluwer , dordrecht , 1989 , pp .a. caticha , `` relative entropy and inductive inference '' in _bayesian inference and _ _ maximum entropy in science and engineering _ , ed . by g. erickson and y. zhai , aip conf .new york , 2004 , pp .tseng and a. caticha , `` maximum entropy and the variational method in statistical mechanics : an application to simple fluids '' ( under reviewing for phys .e , 2004 ) ; `` maximum entropy approach to the theory of simple fluids '' in _bayesian inference and _ _ maximum entropy in science and engineering _ , ed .by g. erickson and y. zhai , aip conf .new york , 2004 , pp .
|
magnetoencephalographic ( meg ) measurements record magnetic fields generated from neurons while information is being processed in the brain . the inverse problem of identifying sources of biomagnetic fields and deducing their intensities from meg measurements is ill - posed when the number of field detectors is far less than the number of sources . this problem is less severe if there is already a reasonable prior knowledge in the form of a distribution in the intensity of source activation . in this case the problem of identifying and deducing source intensities may be transformed to one of using the meg data to update a prior distribution to a posterior distribution . here we report on some work done using the maximum entropy method ( me ) as an updating tool . specifically , we propose an implementation of the me method in cases when the prior contain almost no knowledge of source activation . two examples are studied , in which part of motor cortex is activated with uniform and varying intensities , respectively .
|
in the past three decades , there has been a growing effort of the scientific community for studying and understanding the principles that govern the folding process of a sequence of amino acids in the corresponding native structure . in recent years , several proteins ,in particular those folding via a two - state mechanism have provided an extraordinary benchmark for experimental and theoretical characterization of the folding pathways .the significant amount of experimental data available for several structurally unrelated proteins , has opened the possibility to identify and isolate the factors that influence the folding rate . besides considering detailed chemical interaction , such as those affecting free - energy barriers ,an appealing and elegant line of investigation has focused on the effects of the native state structure on the folding process . from a qualitative point of view, the influences of structural effects was traditionally summarised in the tenet that proteins with high helical content fold faster than proteins with mixed alpha / beta content , the slowest folding being for the all - beta ones .this useful and intuitive rule of thumb , fails to account for the very different rates observed between proteins in each of the alpha , alpha / beta or beta families .a deep insight into this problem was provided by the work of plaxco _ et al ._ who introduced the concept of contact order , which captures , quantitatively , features beyond the mere secondary structure motifs .the highly significant correlation of contact order and experimental folding rates shows the extent to which the mere topology of native state can influence the folding process .however , the highly organised native structure of proteins is too rich to be captured by a single parameter such as the contact order .indeed , the latter can not account in the same satisfactory way for the transition state placement , three - state folding rates or the diversity of folding rates among structurally similar proteins . 
in the present studywe investigate how the topology of the native state can be further exploited to provide optimised predictions for protein folding rates and the transition state placement .to do so we consider , among others , one particular topological descriptor that is crucial for characterising the connection and interactions of native contacts : the clustering coefficient , or cliquishness .such parameter , heavily studied in the context of graph theory is shown to have highly significant correlation with folding rates .the advantage of using this topologic descriptor is that it allows to capture the cooperative formation of native interactions , as proved by its statistically relevant correlations with the transition state placement .further , we discuss how the different topologic aspects captured by the cliquishness and contact order can be combined to yield optimal correlations higher than for the individual descriptors .customarily , at the heart of theoretical or numerical studies of topology - based folding models is the contact matrix ( or map ) which will be used extensively also in the present context .the generic entry of the contact map , , takes on the value 1 if residues and are in contact and zero otherwise .several criteria can be adopted to define a contact ; in the present study we consider two amino acids in interaction if any pair of heavy atoms in the two amino acids are at a distance below a certain cutoff , .all values of between 3.5 and 8 have been considered and reported .the contact map provides a representation for the spatial distribution of contacts in the native structures that is both concise and often reversible ( since native structures can be recovered when appropriate values of are used ) .plaxco and coworkers have used the contact map to describe and characterize the presence and organization of secondary motifs in protein structures .the parameter that was introduced , the relative contact order , provides a measure of the average sequence separation of contacting residues and is defined as where and run over the sequence indeces , is the contact degeneracy ( i.e. the number of pairs of heavy atoms in interaction ) and is the protein length .remarkably , the contact order was shown have a highly significant linear correlation with experimental folding rates .the result of plaxco and coworkers can be explained , _ a posteriori _ , with intuitive arguments : a high contact order corresponds to few local interactions .one may thus expect that the route from the unfolded ensemble to the native state is slow , being hindered by the overcoming of several barriers due to spacial restraints , as recently analysed by debe and goddard and previously by chan and dill and also observed in topology - based numeric studies .these considerations are based purely on geometric arguments and do not take into account the influence of specific interactions between the residues . in principle , the latter may well override the topological influence on the folding process , but surprisingly , as remarked in a recent review article this is often not the case . our aim is to exploit as much as possible the topologic information contained in the native state to improve both the accuracy of predictions for folding rates and gain more fundamental insight into the process . to this purposewe have considered additional topologic descriptors besides the contact order .the one that appeared most significant is a parameter termed cliquishness or clustering coefficient . 
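Before turning to the cliquishness, the contact map and the two flavours of contact order can be computed as in the sketch below. The paper defines contacts through all heavy-atom pairs and weights each contact by its degeneracy; the residue-level simplification used here (one coordinate per residue) is an assumption made for brevity.

```python
import numpy as np

def contact_map(coords, cutoff=6.0):
    # coords: (L, 3) array of residue coordinates; contacts below `cutoff`
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cmap = (d < cutoff).astype(int)
    np.fill_diagonal(cmap, 0)
    return cmap

def contact_orders(cmap):
    # absolute contact order = mean sequence separation of native contacts;
    # relative contact order = absolute contact order / chain length
    i, j = np.triu_indices_from(cmap, k=1)
    sep = (j - i)[cmap[i, j] == 1]
    abs_co = sep.mean()
    return abs_co / cmap.shape[0], abs_co
```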
for a given site , , the cliquishness is defined as : where is the number of contacts to which site takes part to .as for the contact order , also the cliquishness has an intuitive meaning ; in fact it provides a measure of the extent to which different sites interacting with are also interacting with each other .of course , the cliquishness is properly defined only if site is connected to , at least , two other sites . to ensure this, we included also the covalently bonded interactions $ ] in ( [ eqn : cliq ] ) .the importance of taking the cliquishness into account for discriminating fast / slow folders can be anticipated since a higher interdependency of contacts ( large cliquishness ) will likely result in a more cooperative folding process .in fact , the formation of a fraction of interactions will result in the establishment of a whole network of them . consistently with this intuitive picture one should also expect that a large / small cliquishness will affect in different ways the amount of native - like content of the transition state .we have tested and verified these expectations by calculating the average cliquishness for 40 proteins for which folding rates and transition state placement , , have been measured . is deduced from the variation of folding / refolding rates upon change of denaturant concentration ( and ) and provides an indirect indication of how much the solvent - exposed surface of the transition state is similar to that of the native one . ranges between 0 and 1 ; higher values denote stronger similarity with the native state .it is worth pointing out that , although the model underlying the calculation of relies on a two - state analysis , an effective can be inferred for three - state folders as well .since reliable s are not available for all proteins , the number of entries used to correlate the cliquishness and ( see tables [ tab : list1 ] and [ tab : list2 ] ) is slightly smaller than that used for tge logarigthm of refolding rates , .the set of proteins used , shown in tables [ tab : list1 ] and [ tab : list2 ] , was built up from experimental data collected in previous studies and predictions ( often topology - based ) of folding rates .as indicated , the entries include both two - state and three - state folders , proteins belonging to the same structural family as well as proteins under different experimental conditions .this allows to examine to what extent predicted folding rates are consistent with the wide variations of folding velocities observed in structurally - related proteins and in different experimental conditions .as discussed in detail below , when the comprehensive set of table [ tab : list1 ] and [ tab : list2 ] is used , the correlation found between cliquishness and folding rates is 0.71 , with a statistical significance of , more relevant than the one between a suitably defined contact order and folding velocities ( , ) .as will be shown , the predicting power of the two quantities can be combined to achieve the optimal correlation of 0.74 .the prediction of the transition state placement , turns out to be more difficult when either of the two topologic parameters is used .while for the contact order it is equal to 0.23 , the cliquishness yields the value of 0.48 which is not significantly improved by combining the two descriptors . 
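A corresponding sketch of the clustering coefficient of eq. ([eqn:cliq]), computed from the same residue-level contact map (again a simplification with respect to the heavy-atom definition used in the paper):

```python
import numpy as np

def cliquishness(cmap):
    # covalent (i, i+1) links are added so that every residue has at least two
    # neighbours, as prescribed in the definition above
    c = cmap.copy()
    idx = np.arange(len(c) - 1)
    c[idx, idx + 1] = 1
    c[idx + 1, idx] = 1
    cliq = np.zeros(len(c))
    for i in range(len(c)):
        nb = np.flatnonzero(c[i])
        k = len(nb)
        if k < 2:
            continue
        links = c[np.ix_(nb, nb)].sum() / 2.0   # contacts among the neighbours of i
        cliq[i] = links / (k * (k - 1) / 2.0)
    return cliq

# usage: average cliquishness of a structure, to be correlated with folding data
# coords = ...   # (L, 3) coordinates parsed from a PDB file (not shown here)
# print(cliquishness(contact_map(coords, cutoff=4.1)).mean())
```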
though the linear correlation of the clustering coefficient and the transition state placement is not as high as for the folding - rate case it is nevertheless statistically meaningful , having a probability of 0.004 to have arisen by chance . before considering the more general case of all entries in tables [tab : list1 ] and [ tab : list2 ] , we focus on two - state folders , i.e. proteins with a cooperative ( all - or - none ) transition between the unfolded and folded states .the neatness of this process , due to the absence of any significantly populated intermediate state , makes them ideal candidates for identifying and isolating the factors that influence the folding rate . in the present contextthis separate test is important since it appears that the relative contact order is a much stronger descriptor for two - state folders , than for the general case . as a matter of fact ,when both two- and three - state folders are considered , the influence of the average sequence separation of native contacts on folding properties is better captured by a different version of the contact order , which we shall term `` absolute '' , obtained when the r.h.s .( [ eqn : co ] ) is not divided by : in the following we shall report and compare the performance of both parameters ; furthermore we shall always consider the absolute value of the linear correlation coefficients , , without regard to its sign , which can be easily inferred from the plots .the original definition of contact order has an unrivaled performance in the prediction of folding rates for the two - state folders of table [ tab : list1 ] . as visible in fig .[ fig:2fr ] , it gives a stable correlation for cutoffs in the range 5 7 , with the maximum value of for the cutoff .the statistical significance of such correlation can be quantified through a calculation of the probability , , to observe by pure chance a correlation higher than the measured one ( in modulus ) .the standard model underlying such estimates relies on the hypothesys of normal distribution of the deviates of the correlated quantities . as a rule of thumb, the upper value of is taken as a threshold for statistically meaningful correlations .for the value of reported above , this probability is , which is , therefore , extremely significant .consistently with previous results , we found that the transition state placement is a much more elusive quantity to predict than folding rates .in fact , all topologic descriptors yield a poorer correlation compared to ( see fig . [ fig:2theta ] ) . for the relative contact order , the best is 0.48 ( for ) with an associated . as anticipated , the performance of the absolute contact order in this particular context is significantly inferior then the relative one ( see figs .[ fig:2fr ] and [ fig:2theta ] ) and hence will not be further commented . concerning the performance of the novel parameter under scrutiny , the cliquishness, it can be seen from figs .[ fig:2fr ] and [ fig:2theta ] that it is statistically meaningful for both folding rates and transition state placement .there are , however , significant differences with respect to the contact order . 
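The significance values quoted here and below can be reproduced with standard tools: under the normal-deviates model mentioned above, the two-sided chance probability of a correlation at least as large in modulus as the observed one is what a standard Pearson test returns. The data in the sketch below are random placeholders, not the experimental values of the tables.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
log_kf = rng.normal(size=40)                                 # placeholder ln(k_f) values
descriptor = 0.7 * log_kf + rng.normal(scale=0.7, size=40)   # placeholder descriptor

r, p = stats.pearsonr(descriptor, log_kf)
print(f"r = {r:.2f}, chance probability P = {p:.1e}")
```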
for folding ratesthe optimal is 0.67 ( ) and the associates value of is 5 , one order of magnitude larger than for the relative contact order .for the situation is reversed since the optimal value of ( for ) has the statistical relevance of , with a marked improvement over the previous case .it is also interesting to note that cliquishness - based correlations have a non - trivial dependence on the cutoff .in fact , due to the overall compactness and steric effects , the degree of dispersion of the cliquishness values for different sites in the same or different proteins is much more limited compared , e.g. to the average sequence separation of contacts .this leads to the observed decay of the correlations when the cutoff is increased .the applicability of topology - based models are not limited to two - state folder , but can be extended to include three - state folders as well . despite the addition of the 11 entries corresponding to three - state folders ,the performance of cliquishness - based predictions for folding rates and improves from the values reported for two - state folders . as shown in figs [ fig:2_3fr ] and [ fig:2_3theta ] the associated optimal correlations for and and 0.49 , again observed for the same cutoff values ( ) mentioned for the two - state case .the corresponding statistical significances are now , and , which , despite the enlargement of the experimental set , show even an improvement over the two - state case . from figs .[ fig:2_3fr ] and [ fig:2_3theta ] it can be noticed that the performance of the relative contact order is noticeably poorer than the absolute contact order which , being a much better descriptor , becomes the focus of our analysis .the corresponding measured correlations are , in fact , for and for with corresponding values of and . 
a direct comparison of how the clustering coefficient and the absolute contact order correlate with and can be made by inspecting the plots of figs .[ fig : best_fr ] and [ fig : best_theta ] .it is worth pointing out that the analysis of the deviations from the linear trends of figs .[ fig : best_fr ] and [ fig : best_theta ] reveals that a particular protein , 1urn , is among the top outliers for both cliquishness and contact order - based analysis , although no simple explanations is available for this singular behaviour .although for both folding parameters the cliquishness gives a more significant correlation than contact order , the difference is particularly dramatic for the transition state placement which is notoriously difficult to capture with topology - based predictions .an important conclusion stemming out of this observation is that the transition state structure ( and hence ) is more influenced by the degree of interdependency of native contacts than their average sequence separation .this is in accord with the intuition that highly interdependent contacts may mutually enhance their probability of formation , thus facilitating the progress towards the native state during the folding process .this is , indeed , consistent with the negative correlation observed between cliquishness and native content , , at the transition state .it is important to stress that the presence and effects of the cooperative formation of native interactions can not be captured by parameters based on measures of contact locality .this highlights the importance of considering all viable topologic descriptors to characterize the folding process , since they do not impact in the same way on various folding properties .a natural question that arises is whether it is possible to combine the predicting power of cliquishness and contact order to achieve correlations with experimental folding rates and transition state placements that are better than the individual cases . indeed , as shown in appendix a, it is straightforward to combine in an optimal linear way the two quantities to improve the prediction accuracy .the quantitative increment in the correlation is clearly related to the amount of independent information contained in the two topologic descriptors .hence , an important issue is to what extent cliquishness and contact order are mutually correlated .if , in place of a physical contact map , , one uses a random symmetric matrix , no meaningful correlation will be found .the contact maps of real proteins , however , display features that are highly non - random which reflect both ( i ) the physical constraints to which a compact three - dimensional chain is subject and ( ii ) the presence and organisation of secondary motifs . with the aid of numeric simulations it was possible to assess the degree of interdependency of clustering coefficient of native contacts and their average sequence separation resulting from the first of the mentioned effects .this was accomplished by considering , in place of the proteins of tables [ tab : list1 ] and [ tab : list2 ] , 150 computer - generated compact structures respecting basic steric constraints found in real proteins ( details can be found in the methods section ) . 
as visible in the plot of fig .[ fig : decoys ] the level of mutual contact order - cliquishness correlation observed in these artificial structures is which is significantly smaller than the actual correlation of the two quantities found in real proteins .in fact , the typical correlation for cliquishness and contact order ( either relative or absolute ) is around 0.65 .such non trivial correlation can been ascribed to the special topologic properties of naturally occurring proteins whose ramifications have been investigated in a variety of contexts .thus , the very presence and organization of secondary motifs in proteins makes it possible , on one hand , to exploit the native topology to predict e.g. folding rates , while on the other it limits the amount of independent information contained in different topologic descriptors . nevertheless , since the mutual correlation is not perfect , it is still possible to achieve , by definition , better predictions by combining cliquishness and contact order .the degree of enhancement depends also on the statistical significance of the individual starting correlations . for these reasons ,the improvement is noticeable for folding rates , while it is not significant for transition state placement . for the case of two - state folders , the optimal combination yields correlations of while for the more general case of two and three - state folding rates one has which leads to a discernible improvement over previous cases , as visible in fig .[ fig : combined ] . to the best of our knowledge , this is the highest correlation recorder among similar studies involving a comparable number of entries ( also including non - linear prediction schemes ) . due to the fact that the optimal combined correlations are found _ a posteriori _ , the associated values of are no more meaningful indicators of statistical significance . 
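The optimal combination referred to here, and derived in appendix A below, amounts (after standardizing the two descriptors and the reference data to zero mean and unit norm) to choosing the weight alpha that maximizes the correlation of x + alpha*y with z. The closed-form optimum used in this sketch is the standard result for that maximization; since the explicit formulas of the appendix are not legible above, it should be read as our own derivation rather than a verbatim transcription, and the data below are random placeholders in place of the real cliquishness, contact-order and ln(k_f) values.

```python
import numpy as np

def standardize(v):
    v = v - v.mean()
    return v / np.linalg.norm(v)

def best_combination(x, y, z):
    # optimal linear combination of two standardized descriptors against z
    x, y, z = map(standardize, (x, y, z))
    a, b, c = x @ z, y @ z, x @ y            # the three pairwise correlations
    alpha = (b - a * c) / (a - b * c)        # standard closed-form optimum
    w = standardize(x + alpha * y)
    return alpha, w @ z                      # weight and combined correlation

rng = np.random.default_rng(4)
z = rng.normal(size=40)                          # stand-in for ln(k_f)
x = 0.7 * z + rng.normal(scale=0.7, size=40)     # stand-in for cliquishness
y = 0.6 * z + rng.normal(scale=0.8, size=40)     # stand-in for contact order
print(best_combination(x, y, z))
```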
besides the cliquishness, we have investigated other parameters that are routinely used to characterise general networks ( networks of contacts in our case ) .in particular , we considered the `` diameter '' of the contact map , defined as the largest degree of separation between any two residues , and also its average value .the diameter measures the maximum number of contact that need to be traversed to connect an arbitrary pair of distinct residues .although the contact - map diameter is an abstract object , it conveys relevant topological information about protein structure , since it measures the long - range structural organisation .we found , a posteriori , that neither the maximum , nor the average diameter , correlate in a significant manner with the folding rate or transition state placement .we have analysed important topological descriptors of organised networks ( in our case the spatial network of native contacts ) that could be used , individually , or in mutual conjunction , to describe and predict experimental parameters used to characterize the folding process .it is found that , besides the previously introduced contact order , a topologic parameter , termed cliquishness or clustering coefficient , is a powerful indicator of both the folding velocity and the transition state placement for two- and three - state folders .the predicting power of the cluquishness is that it takes into account the presence and organisation of clusters of interdependent contacts that are putatively responsible for the cooperative formation of native - like regions .this property appears well - suited to reproduce important features in the transition state that are otherwise elusive to other topologic analysis .the high statistical significance of the observed correlations testifies the strong influence of geometric structural issues on the folding process .the maximum predicting power is obtained when the topologic information contained in the cliquishness is used in combination with the contact order ; this allows to reach a linear correlation as high as 0.74 with experimental folding rates recorder in 40 experimental measurements .the linear correlation between two sets of data , and is obtained from the normalised scalar products of the covariations : without loss of generality , in the following we shall consider the sets of data to be with zero average and with unit norm , so that the expression of the correlation simplifies we now formulate the following problem .two sets of data , and have linear correlation and respectively with a third ( reference set ) , . what is the maximum and minimum correlations we can expect between sets and ?we assume that and are positive since this condition can always be met by changing sign , if necessary , to the vector components .the answer is easily found by decomposing and into their components parallel and orthogonal to : since is equal to , and hence is fixed , the maximum [ minimum ] correlation is found when and are [ anti]parallel .thus , now we turn to a different , but related problem. how can we combine linearly and , so to have the maximum correlation with . 
the generic linear combination, leads to the following correlations the maximum is achieved for which yields to generate the thirty randomly - collapsed structures used in the comparison of fig .[ fig : decoys ] , we adopted a monte carlo technique .the length of the artificial proteins ranged uniformly in the interval 80 - 110 .starting from an open conformation , each structure was modified under the action of typical mc moves ( single - bead , crankshaft , pivot ) .a newly generated modified configuration is accepted according to the ordinary metropolis rule .the energy scoring function is composed of two terms : the first one contains a homopolymeric part that rewards the establishment of attractive interactions ( cutoff of 6.0 ) between any pair of non - consecutive residues .the second term is introduced to penalise structure realisations with radii of gyration larger than that found in naturally - occurring proteins with the same length .the monte carlo evaluation is embedded in a simulated annealing scheme which allows to minimise efficiently the scoring function by slowly decreasing a temperature - like control parameter .we are indebted with amos maritan for several illuminating discussions and with fabio cecconi and alessandro flammini for a careful reading of the manuscript .support from infm and murst cofin99 is acknowledged ..list of proteins known to fold via a two - state mechanism .the experimental quantities ( s ) and are desumed from the cited references .the reported cliquishness values are calculated for the cutoff yielding optimal correlations against folding rates .[ cols= " < , < , < , < , < , < " , ] * fig .1 . correlation of cliquishness , relative and absolute contact order against folding rates of two - state folders .the values of the correlation coefficients are plotted as a function of the cutoff , , used in the definition of the contact map . *2 . correlation of cliquishness , relative and absolute contact order against transition state placement of two - state folders .the values of the correlation coefficients are plotted as a function of the interaction cutoff , . *3 . correlation of cliquishness , relative and absolute contact order against folding rates of two- and three - state folders .the values of the correlation coefficients are plotted as a function of the cutoff , , used in the definition of the contact map . *4 . correlation of cliquishness , relative and absolute contact order against transition state placement of two- and three - state folders .the values of the correlation coefficients are plotted as a function of the interaction cutoff , . *scatter plot of cliquishness ( left ) and absolute contact order ( right ) versus folding rates of the 40 entries of tables 1 and 2 .the used values of are the optimal ones reported in the text .filled circles , open squares and starred points denote proteins belonging to the , and families , respectively . * fig .scatter plot of cliquishness ( left ) and absolute contact order ( right ) versus of the entries of tables 1 and 2 .the used values of are the optimal ones reported in the text .filled circles , open squares and starred points denote proteins belonging to the , and families , respectively . 
* fig .scatter plot of the logarithm of folding rates for the entries of tables 1 and 2 , against data from optimally combined cliquishness and contact order .the optimal linear superposition , see methods , is obtained for , ( and being the cliquishness and contact order data respectively .filled circles , open squares and starred points denote proteins belonging to the , and families , respectively .scatter plot of average cliquishness versus absolute contact order , for randomly collapsed structures generated by stochastic numerical methods .
|
a variety of experimental and theoretical studies have established that the folding process of monomeric proteins is strongly influenced by the topology of the native state . in particular , folding times have been shown to correlate well with the contact order , a measure of contact locality . our investigation focuses on identifying additional topologic properties that correlate with experimentally measurable quantities , such as folding rates and transition state placement , for both two- and three - state folders . the validation against data from forty experiments shows that a particular topologic property which measures the interdepedence of contacts , termed cliquishness or clustering coefficient , can account with significant accuracy both for the transition state placement and especially for folding rates , the linear correlation coefficient being . this result can be further improved to , by optimally combining the distinct topologic information captured by cliquishness and contact order .
|
in the first paper of this suite , we have considered coexistence at a fixed point of population dynamics . this is justified for some of the simplest population models , where it can be shown that the fixed point is both locally and globally stable , such that the asymptotic dynamics converges to it . however , the dynamics of more complex ecological models wanders on periodic or chaotic attractors . even when the trajectory would tend asymptotically to a fixed point , the time necessary to reach it may be very large , so that disturbances such as immigrations , speciations or environmental variations can take place before the system effectively attains equilibrium . in the present paper , we consider coexistence of competing species in the framework of models of species assembly , in which the ecological community is continuously perturbed through immigration , speciation and extinction events that build up its biodiversity ( macarthur and wilson , 1967 ) . we argue that the relationship between the competition matrix and the productivity distribution derived for static ecosystems can be generalized to the slow assembly regime , in which new species arrive in the ecosystem over time scales much larger than those of population dynamics . in a previous work ( bastolla _ et al . _ , 2001 ) , we have modeled an insular ecosystem characterized by a constant immigration rate and by extinction produced by population dynamics . after a transient time , the model ecosystem reaches a statistically stationary state where the extinction rate and the immigration rate balance , as predicted by the equilibrium theory of island biogeography ( macarthur and wilson , 1967 ) . we have shown that the model yields in a natural way species area relationships in qualitative agreement with field observations . despite the fact that space is not represented explicitly in our model , we represent the area of the island as an effective parameter influencing both the immigration rate and the threshold density at which extinction takes place . as pointed out by macarthur and wilson ( 1967 ) , the immigration rate is expected to increase with the size of the island . we assume that . the case corresponds to an immigration rate proportional to the perimeter . we use it to model immigrations from a continent to an archipelago . the case in which the immigration rate does not depend on area is used to describe immigration coming from nearby islands in the same archipelago , since in this case the immigration rate is expected to depend mainly on the distance from the closest island . the other parameter depending on area is the threshold density . we assume that the number of individuals in the population is relevant for extinction , so that the critical density is inversely proportional to the area , or . under the above assumptions , the model reproduces a broad range of observed species area relationships . the logarithmic species area law , observed for the central islands of the solomon archipelago ( diamond and mayr , 1976 ) , is reproduced under the hypothesis that the immigration flux is independent of area , . the power law , observed by adler ( 1992 ) for the number of bird species on archipelagos versus their area , is reproduced assuming that , a plausible assumption for archipelagos .
in this paper , we generalize our previous model by considering speciation events besides immigrations . we show that simulations of the new model reproduce qualitatively the distributions of the ecological overlap measured for three large natural food webs , a quantity that we define here and that allows the characterization of the food web structure and of the interspecific competition . we then define and study an effective model for the biodiversity profile in food webs . in the previous paper , we have shown that environmental fluctuations on time scales much shorter than those of population dynamics , combined with a coexistence condition for competing species , limit the maximal biodiversity the system can host . this role of rapid environmental fluctuations complements the result that fluctuations on slower time scales can promote biodiversity through mechanisms such as the storage effect and the non - linearities in the environmental response ( chesson , 2000 ) . our effective model of biodiversity consists of the condition on the maximum allowed biodiversity at each trophic level , combined with equations obtained from population dynamics for the across - level variation of the competition overlap , the biomass density and the fluctuations in rescaled growth rates . this effective model produces a profile of biodiversity versus trophic level presenting a maximum at intermediate level , in qualitative agreement with field observations ( cohen _ et al . _ , 1990 ) . a mean field study of the model was preliminarily reported in ( lässig _ et al . _ , 2001 ) . here , we generalize our previous species assembly model ( bastolla _ et al . _ , 2001 ) , including speciation events in it . some features of this new model have been described in ( bastolla _ et al . _ , 2002 ) . for a recent review of several models of food web structure , dynamics and assembly , see ( drossel and mckane , 2003 ) . in our model , biodiversity arises from a balance between species origination through immigration and speciation events , and extinction of species resulting from population dynamics . the ecosystem is continuously maintained far from the fixed point of population dynamics through species origination events that occur regularly , at time intervals equal to . eventually , a state of statistical equilibrium is reached where the average properties do not vary with time . as described in the companion paper , the population dynamics equations have the form of generalized lotka - volterra equations , \frac{1}{n_i^{(l)}}\frac{d n_i^{(l)}}{dt} = \sum_j \gamma_{ij}^{(l)} n_j^{(l-1)} - \alpha_i^{(l)} - \sum_j \rho_{ij}^{(l)} n_j^{(l)} - \sum_j \gamma_{ji}^{(l+1)} n_j^{(l+1)} , [ web_eq ] where the superindex l stands for the level to which species i belongs . the dynamical variables are rescaled population densities , , where is the population density and , defined in the first paper , is proportional to the inverse of the carrying capacity , . the coefficient is the efficiency of conversion of prey biomass into predator biomass , and it is assumed to be independent of level . the coefficients of the predator functional response , , and the death rates have been rescaled dividing them by .
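as a minimal numerical illustration of this kind of dynamics , the sketch below integrates rescaled lotka - volterra equations for a small two - level community feeding on a single external resource , with a threshold density below which species are removed . all quantities follow the structure of eq . ( [ web_eq ] ) , but the interaction matrices , the constant off - diagonal competition overlap and every parameter value are illustrative assumptions , not the values used in the simulations reported here .

    import numpy as np

    rng = np.random.default_rng(1)

    # illustrative two-level community feeding on one external resource
    S1, S2 = 6, 4                                   # species at levels 1 and 2
    gamma1 = rng.uniform(0.5, 1.0, size=S1)         # functional response on the resource
    gamma2 = rng.uniform(0.2, 0.6, size=(S2, S1)) * (rng.random((S2, S1)) < 0.5)
    alpha1, alpha2 = 0.1, 0.2                       # rescaled death rates (assumed)
    rho = 0.3                                       # off-diagonal competition overlap (assumed)
    n_c = 1e-3                                      # extinction threshold density
    R = 5.0                                         # rescaled density of the external resource

    def growth_rates(n1, n2):
        comp1 = (1.0 - rho) * n1 + rho * n1.sum()   # self-damping term has coefficient one
        comp2 = (1.0 - rho) * n2 + rho * n2.sum()
        g1 = gamma1 * R - alpha1 - comp1 - gamma2.T @ n2
        g2 = gamma2 @ n1 - alpha2 - comp2
        return g1, g2

    n1 = np.full(S1, 0.1)
    n2 = np.full(S2, 0.1)
    dt = 0.005
    for _ in range(100000):                         # simple euler integration
        g1, g2 = growth_rates(n1, n2)
        n1 = np.clip(n1 + dt * n1 * g1, 0.0, None)
        n2 = np.clip(n2 + dt * n2 * g2, 0.0, None)
        n1[n1 < n_c] = 0.0                          # species below the threshold go extinct
        n2[n2 < n_c] = 0.0

    print("surviving level-1 species:", int((n1 > 0).sum()), "of", S1)
    print("surviving level-2 species:", int((n2 > 0).sum()), "of", S2)

in the full assembly model the same integration is interrupted at regular intervals by the introduction of new species , as described below .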
using rescaled variables, the competition overlaps are dimensionless parameters with .we assume that for is proportional to the predation overlap , , where is defined as the fraction of common preys shared by species and .introducing a predation matrix , such that equals one if is a prey for , and zero otherwise , the predation overlap is formally defined as q_ij= .[over ] this definition guarantees that is one if and only if species and share exactly the same preys .since competition for common preys is already implicitly represented through the prey dynamics , the coefficients model competition for resources not explicitly included in the ecosystem .the reason for the proportionality between the non diagonal elements of the competition matrix and the predation overlap is that we expect that species sharing more preys are more closely related ecologically , so that their overall requirements are more similar .the population dynamics equations are complemented by a threshold density below which a species is considered extinct and is eliminated from the system .the community is maintained by a number of external resources , which are represented as extra populations with intrinsic growth rate and predators only . the dimensionless parameter , ratio between the carrying capacity determined by the external resources and the density threshold for extinction , plays an important role in controlling the biodiversity in the model .the introduction of new species is modelled as follows .first , we choose at random one of the species present which acts as `` mother species '' for the new one , with label ( old species with are renumbered accordingly ) .three parameters define the similarity between and regarding their preys and predators .each link of the mother is ( i ) either deleted from the daughter species with probability , ( ii ) or copied with probability , ( iii ) or redirected to another species with the complementary probability .after this is done , with probability a new link is added , such that gets a new prey or a new predator . the links that are copied mutate their strength with respect to that of the mother species according to the stochastic rule , where ] .new preys are extracted only in the set of species with , while new predators are extracted in the set of species with .this condition is imposed in analogy with the cascade model ( cohen _ et al ._ , 1990 ) , and prevents the formation of feeding loops . in the limit , the introduction of new species proceeds through pure immigration , as in our earlier model ( bastolla _ et al ._ , 2001 ) . when the daughter species are most similar to their mothers , apart from deletions and additions of links and small mutations in the link strengththis mimics a system where biodiversity is maintained by speciation rather than immigration events .in our simulations , population dynamics never reaches a fixed point between two immigration events : the system contains species with a positive growth rate as well as species with a negative growth rate , which are slowly driven towards extinction .these can be either unsuccessful immigrants or resident species outcompeted by newly arrived ones .as in our earlier model ( bastolla _ et al . _ , 2001 ) , the system reaches a stationary state where the average biodiversity does not vary with time .this stationary biodiversity increases as a power law of the immigration rate and as the logarithm of the external resources . 
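the two ingredients just described , namely the predation overlap of eq . ( [ over ] ) and the link - copying rule used to introduce daughter species , can be sketched in a few lines of python . two explicit assumptions are made : the overlap below is normalised by the larger of the two prey counts , which is one choice consistent with the requirement that it equals one exactly when the two species share the same preys ( the elided normalisation of eq . ( [ over ] ) may differ ) , and the deletion / copy / redirection / addition probabilities are placeholders for the elided parameter values ; only prey links of the mother are mutated here , predator links being handled analogously in the full model .

    import numpy as np

    rng = np.random.default_rng(2)

    def predation_overlap(a):
        # a[i, k] = 1 if k is a prey of consumer i; the normalisation by the
        # larger prey count is an assumption (see the note above)
        shared = a @ a.T
        counts = a.sum(axis=1)
        q = shared / np.maximum(np.maximum.outer(counts, counts), 1)
        np.fill_diagonal(q, 1.0)
        return q

    def speciate(a, g, mother, p_del=0.2, p_copy=0.6, p_new=0.3, eps=0.1):
        # daughter row: each maternal prey link is deleted, copied (with its
        # strength multiplied by a random factor in [1-eps, 1+eps]) or redirected
        # to a random prey; with probability p_new one extra prey link is added
        n_prey = a.shape[1]
        row_a = np.zeros(n_prey, dtype=int)
        row_g = np.zeros(n_prey)
        for k in np.flatnonzero(a[mother]):
            r = rng.random()
            if r < p_del:
                continue
            target = k if r < p_del + p_copy else rng.integers(n_prey)
            row_a[target] = 1
            row_g[target] = g[mother, k] * rng.uniform(1 - eps, 1 + eps)
        if rng.random() < p_new:
            extra = rng.integers(n_prey)
            row_a[extra], row_g[extra] = 1, rng.uniform(0.5, 1.0)
        return np.vstack([a, row_a]), np.vstack([g, row_g])

    # toy usage: 5 consumers preying on 8 species of the level below
    a = (rng.random((5, 8)) < 0.4).astype(int)
    g = a * rng.uniform(0.5, 1.0, a.shape)
    print(np.round(predation_overlap(a), 2))
    a2, g2 = speciate(a, g, mother=0)
    print("daughter preys:", np.flatnonzero(a2[-1]))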
to get analytical insight on this species assembly model, we note that in the stationary state the typical time required for the extinction of one species must coincide with the time between arrivals of new species , .species that get extinct more rapidly than this do not contribute to the stationary biodiversity .this implies the following condition for species that belong to the instantaneous transient community : - 1 t _ .[ eq - mig ] this equation generalizes the fixed point equations that we studied in the first paper , which correspond to the limit .we can apply this condition to one - layer communities or structured food webs , as we already did in the case of fixed point coexistence . applying a mean field approximation to the effective competition matrix , the condition of coexistence in transient communities can be generalized to + 1pt _ , [ coe - mig ] where is the effective rescaled growth rate arising both from preys and predators of species , after eliminating the effective competition with species with shared preys ( see the companion paper ) . here and elsewhere , angular brackets denote averaging over species at the same trophic level .if the quantity is large , i.e. for slow immigration rates , the system can get close to the fixed point , and the above equation modifies only slightly the result for static systems ( ) presented in the previous paper , which is equivalent to a previous result by chesson ( 1994 ) .therefore , in the slow immigration regime the variance of the distribution of the s decreases as , as for systems at the fixed point . for more frequent immigration ( smaller ) ,the variance of the productivity distribution increases .thus it becomes easier to pack a larger number of species in the ecosystem , in agreement with the results of our simulations , where the stationary biodiversity increases as a power law of the immigration rate ( bastolla _ et al ._ , 2001 ) , and consistently with the predictions of the theory of island biogeography ( macarthur and wilson , 1967 ) .we show in fig .[ fig : prod ] the productivity distribution for the first trophic level of the simulated ecosystem .as expected , the distribution is narrow , and its variance decreases with the number of species ( see insert ) , the inverse of the variance being well fitted with a linear function of , as predicted by eq.([coe - mig ] ) .in addition to the dependence of biodiversity on the immigration rate , the number of species at the stationary state also increases as the fraction of speciation events gets larger ( growing ) . also this behavior is easy to rationalize through eq .( [ coe - mig ] ) .in fact , new species originated through speciation have a higher probability of remaning in the ecosystem , since all of their ecological parameters are similar to those of their mother species , which have been already selected through the ecological dynamics .thus a larger fraction of speciation events implies a higher effective rate of appearance of new species .to characterize the structure of food webs , we have studied the distribution of the ecological overlap , defined in eq .( [ over ] ) .the overlap distribution is a property that bears the fingerprint of the topology of the species network . 
in the framework of species assembly models ,this distribution is influenced both by the process of species origination , either through immigration or through speciation , and by the extinctions driven by population dynamics .furthermore , the overlap distribution can be measured in real food webs for which sufficiently detailed information is available , and in this way it allows to compare the results of our model with empirical observations .we show in fig .[ fig : over ] the overlap distribution obtained from simulations of our model for non - basal species above the first trophic level . to better compare different ecosystems , the delta function at overlap equal to zerois eliminated and the continuous part of the distribution is normalized to one .the peaks that one sees arise from the discreteness of the system : the number of prey per species is a small integer number .peaks at high overlap are produced by speciation events , while peaks at small overlap are due to distantly related species . in the insert of fig .[ fig : over ] , we notice that the fraction of species with overlap equal to zero increases with the number of possible preys at trophic level one , .this is expected on the ground of the following simple calculation , based on a mean - field argument .we assume that all species at level two have preys at level one , and that these preys are chosen at random . neglecting terms of higher order in , we can compute the average predation overlap as . under these assumptions ,the distribution of the overlap is expected to be poissonian , so that the expected fraction of pairs with zero overlap is given by , which is an increasing function of .we have considered three of the largest food webs analyzed in field studies : a freshwater marine interface ( ythan estuary , see huxham _et al . _ , 1996 ) , a lake ( little rock , see martinez , 1991 ) , and a community associated to a single plant ( silwood park , see memmott __ , 2000 ) .they have been studied in enough detail to allow a statistical characterization of their network structure ( montoya and sol , 2002 ) . for these three large food webs, we have calculated the overlap between all pairs of predators as defined in eq .( [ over ] ) , and we have obtained the overlap distribution and the average overlap , . the ythan estuary food web , described in ( huxham _ et al . _ , 1996 ) , is formed by species and contains 592 links from predators to preys . 
of these species ,42 are metazoan parasites contributing to a total of 52 top species .only 5 species are basal .the average number of preys per predator is and the number of predators per prey .the average overlap for this food web is .silwood park network is constituted by trophic interactions between herbivores , parasitoids , predators , and pathogens associated with a single plant , the broom _ cytisus scoparius _ ( memmott _ et al ._ , 2000 ) .this web is formed by 154 species , of which 66 are parasitoids , and 60 predators .there are 117 top species and a total of 370 links : the average number of preys per predator is , the number of predators per prey is 10 , and the average overlap between predators is .finally , the study of little rock lake ( martinez , 1991 ) reports a total of 182 consumer , producer , and decomposer taxa .this is a highly lumped food web : in little rock , 63% of `` species '' correspond to genera - level nodes .this lack of resolution is probably responsible for systematic statistical deviations , as the fact that some `` species '' have a very large number of predators or preys .the network has 2430 links from predators to preys , 63 basal species and a single top species .the average number of preys per predator is , and the number of predators per prey is .the average overlap between predators is . in fig .[ fig : over ] we represent the distributions of overlaps for the three natural food webs described above . for the sake of comparison, we also show a distribution obtained in our simulations , with the parameters shown in the figure caption .the comparison shows that our model is able to reproduce overlap distributions in good agreement with field observations , at least in some range of its space of parameters .the probability that the overlap is zero is also in reasonable agreement with field data : its value is in the ythan and little rock food webs , and in the silwood food web .this values are quite comparable with those shown in the insert of fig.[fig : over ] for the model ecosystems .we have shown in the companion paper that the combination of a general condition for coexistence of species competing at trophic level and an effective model of short time scale environmental fluctuations yields the following limit on biodiversity : [ s - delta ] s_l1+(1-_l_l ) ( 1-_l - n_c / n_l_l ) , where is the typical competition overlap between a pair of distinct species at level , is the average rescaled density of the competing species , is the threshold density below which extinction takes place , and represents the minimal width of the productivity distribution at level compatible with environmental fluctuations .the variability is considered level dependent , since fluctuations in productivity propagate along the trophic chain and are expected to increase at higher levels ( see below ) .this is important for characterizing the variation of biodiversity with trophic levels and the length of food webs . in ( lssig _et al . _ , 2001 ) we have used eq .( [ s - delta ] ) , with level - independent , in order to get an analytical insight on the biodiversity of a hierarchical trophic web .we assumed that the biodiversity at level is the maximal one allowed by eq .( [ s - delta ] ) .the validity of this assumption depends on the species assembly process , and we think that it is plausible for mature food webs , where there was enough time for filling all ecological niches . 
using the above assumption , we can define an effective model for the biodiversity profile across the trophic levels of hierarchical food webs .being extremely simplified , this model presents the advantage that it can be solved analytically through some further approximation , and that the main processes responsible for the biodiversity profile can be individuated rather clearly .the model predicts under general conditions that biodiversity has a maximum at an intermediate trophic level , as observed in real food webs . for fixed biodiversities , we can calculate the average rescaled densities through a mean field approximation of the generalized lotka - volterra equations describing the population dynamics on the trophic web : n^(l ) , where is the average number of preys per predator , which is assumed to be independent of level , is the resulting average number of predators per prey, is the efficiency of conversion of prey biomass into predator biomass , also assumed to be independent of level , is the average rescaled rate at which preys at level are consumed for unit of predator at level , and is the average death rate or energy consumption rate of species at level . in the calculations , for simplicity, the two last quantities were assumed to be independent of . inserting the densities in eq .( [ s - delta ] ) , we obtain the maximum allowed biodiversities .this procedure is applied iteratively , until convergence to a stable profile and that solves simultaneously the maximum coexistence condition and the mean - field equations for the densities . for all parameterssets we studied , the resulting decreases approximately as a negative exponential of , as a result of metabolic energy dissipation along the food chain . in order to improve the analytical understanding of the model , we adopt in the following this phenomenological relationship , assuming that n^(l)r (-l / l_0 ) .[ eq - n ] aside the decrease across levels of the rescaled biomass density , the other effect that limits the length of the food web in this model is the propagation of the fluctuations along the chain , which determine an increase of the width of the productivity distribution as _ l _ 0(l / l _ ) .[ eq - d ] a justification of this ansatz is provided in next section . to fully define the model , we still need an effective model for the variation of the overlap across the level . for this purpose , we assume that each of the species at level is coupled to species at the level below , provided there are more than species at that level ; otherwise it is coupled to all species : .we consider two different ways in which these connections are drawn , leading to two different models for the overlap : 1 .the connections are drawn at random . in this case , the fraction of common links between two species at level is .we further assume that the competitive overlap is proportional to the link overlap : , with .we interpret as the fraction of limiting factors that are represented by the species at the level .we thus have + & & _ l = c_l / s_l-1 + & & s_l=1+(s_l-1c_l-1 ) ( 1-_l-^l / l_0_l ) 2 . in the second case , we consider that the species are divided into clusters of size . species in different clusters are not in competition .species in the same cluster compete with the maximal possible overlap .we get + [ 28a ] & & _ l=1+(1 - 1 ) ( 1-_l-^l / l_0 _ l ) + & & s_l = s_l-1/c_l _ l . in both cases , at small and for a broad range of parameters , biodiversity increases at low levels : . 
at high levels , the second term in brackets on the rhs of eq([28a ] ) becomes small and the biodiversity decreases with the level , either because decreases with , eq .( [ eq - n ] ) , or because the minimal width of the productivity distribution , , grows with , eq .( [ eq - d ] ) . thus our model food webs present a maximum in the distribution of the biodiversity per level in a broad region of parameter space ( lssig _ et al ._ , 2001 ) .this result is consistent with studies of real food webs , where the maximum of biodiversity is attained at the second or third trophic level ( cohen _ et al ._ , 1990 ) .eventually , biodiversity is limited by either of the two mechanisms to just one species .this defines the maximum food web length in our model .the qualitative description outlined above is supported by numerical computations of the full effective model , and by simulations of the species assembly model .summarizing , in the framework of this model the biodiversity profile is shaped by two very simple processes : horizontal ( within level ) competition , limiting the maximum biodiversity at each trophic level , and vertical ( across level ) hardening of competition , either due to the propagation of fluctuations ( the growth of with the level ) , or to energy dissipation ( the decrease of with the level ) .here we justify the assumption that the minimal width of the distribution of rescaled growth rates increases for higher levels along a food web : .this assumption was used in the previous section to yield a limitation on biodiversity at high levels , and ultimately to constrain the maximal length of food webs . for simplicity, we consider a food chain with just one species per level . in this way, we do not have to consider the number of species at each level as an additional unknown parameter coupled to through eq([s - delta ] ) .we start from the system of equations that determine the fixed point of a food chain with linear prey dependent functional responses , _l n_l-1-_l - n_l-_l+1 n_l+1=0 . as usual ,the level specific densities andthe parameters ( coefficients of the functional response ) and ( death rate ) have been rescaled so to that the coefficient of the self - damping term equals one .the equations can be solved iteratively starting from the lowest level in the form n_l= p_l- where the rescaled growth rates and the rescaled self - damping terms are recursively given by p_l= + b_l=1 + we now consider a perturbation that changes the ( fictitious ) growth rate at level zero by a relative amount .this perturbation propagates along the food chain , leading to relative changes in the growth rates equal to _ l= > _ l-1 .this is larger than because all the factors in the denominator are strictly positive and , moreover , is smaller than one .since decreases at higher levels , the factor also increases with the level , so that increases even faster than exponentially with .this rapid amplification of perturbations along the food chain justifies our expectation that the distribution of rescaled growth rates becomes broader with the level .in this paper , we have generalized to transient ecological communities far from fixed points the coexistence conditions derived in the companion paper for systems at the fixed point .also in the general case , species with rescaled growth rates much lower than average will disappear very rapidly and will not be observed . 
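a schematic numerical version of the effective model discussed in the last two sections is given below . the closed form used for the per - level cap , s_l = 1 + ( 1 / rho_l - 1 ) ( 1 - delta_l - ( n_c / r ) e^{ l / l_0 } ) with rho_l = lambda c / s_{l-1} and delta_l = delta_0 e^{ l / l_* } , as well as every parameter value , are assumptions introduced only to show how the combination of growing fluctuations ( eq . [ eq - d ] ) and decaying rescaled densities ( eq . [ eq - n ] ) can produce a maximum of biodiversity at an intermediate level ; they are not the exact expressions of eqs . ( [ s - delta ] ) and ( [ 28a ] ) .

    import numpy as np

    c = 4              # preys per predator (assumed value)
    lam = 0.1          # fraction of limiting factors represented by the level below (assumed)
    delta0, l_star = 0.1, 2.0    # width of the productivity distribution, eq. (eq-d)
    l_0 = 3.0                    # decay scale of the rescaled density, eq. (eq-n)
    nc_over_r = 1e-3             # extinction threshold over resource scale (assumed)

    def profile(s1=5.0, max_levels=12):
        s = [s1]
        for l in range(2, max_levels + 1):
            rho = min(1.0, lam * c / s[-1])                    # overlap model 1: rho_l = lam*c/s_{l-1}
            delta = delta0 * np.exp(l / l_star)                # fluctuations grow along the chain
            room = 1.0 - delta - nc_over_r * np.exp(l / l_0)   # shrinking "room" for coexistence
            s_next = 1.0 + (1.0 / rho - 1.0) * max(room, 0.0)  # assumed closed form of the cap
            s.append(s_next)
            if s_next <= 1.0:
                break
        return np.array(s)

    for l, s_l in enumerate(profile(), start=1):
        print(f"level {l}:  S_l ~ {s_l:5.1f}")

with these illustrative numbers the printed profile rises over the first levels , peaks at an intermediate level and then collapses to a single species , which is the qualitative behaviour described in the text .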
for systems maintained out of equilibrium through immigration , the relevant time scale is given by the inverse of the immigration rate . a non - zero ratio between the immigration rate and typical growth rates of the population dynamics , , makes it easier to fulfill the coexistence condition . this analytical result leads to the prediction that the biodiversity in the stationary state of the species assembly model increases with the immigration rate , as observed in the simulations . coupling the coexistence condition with the unavoidable fluctuations in productivity values ( due to environmental noise with time scale much smaller than that of population dynamics ) , we predicted in our previous paper that competition and fluctuations limit the maximum biodiversity that can be hosted in a trophic level . this result complements , but does not contradict , the prediction that environmental fluctuations with time scale comparable to that of population dynamics enhance species coexistence ( chesson , 2003a ; 2003b ) . it would be desirable to develop a more general theory of the interaction between environmental fluctuations and population dynamics from which the two results can be derived . the coexistence condition also depends on the typical competition overlap between species at the same trophic level . we have defined the competition overlap to be proportional to the predation overlap , defined through eq . ( [ over ] ) . the distribution of the overlap is a useful property for characterizing the structure of ecological networks . our modified model of species assembly through immigration and speciation yields overlap distributions in good agreement with those obtained from three well - studied natural food webs : the ythan estuary , the little rock lake , and the silwood park system . these steps allowed us to define an effective model for the variation of biodiversity across the levels of a hierarchical food web . in our model , two main processes control biodiversity : competition , in the horizontal within - level direction ; and modulation of competition , in the vertical across - level direction . in the framework of the effective model , this last process controls the decay of the number of species across higher levels , and therefore the length of food webs , an issue that has received considerable attention in the ecological literature ( see for instance post , 2002 for a recent review ) . accommodating more competing species becomes harder at higher levels , because of two complementary mechanisms : the dissipation of metabolic energy across the food web , which makes energetic constraints more difficult to fulfill , and the propagation of environmental perturbations across the food web , which makes it more difficult to fine - tune ecological parameters in order to accommodate new species . the first mechanism is reminiscent of the so - called productivity hypothesis for the length of food webs , which goes back almost 80 years ( elton , 1927 ) . however , weak or no correlation was found between food chain length and primary productivity in field studies ( briand and cohen , 1987 ; post et al . , 2000 ) . these and other results suggest that resources limit the length of food chains below some threshold level , above which other factors come into play ( post , 2002 ) .
the other mechanism proposed here , relating food chain length to the amplification of environmental perturbations across the chain , is a novel variant of the stability hypothesis , which states that environmental disturbance limits the length of food webs ( menge and sutherland , 1987 ) . this hypothesis was originally founded on the observation that the dynamical stability of model ecosystems decreases as chain length increases ( pimm and lawton , 1977 ) . however , the generality of this model result was later questioned ( sterner et al . , 1997 ) . the mechanism proposed here constitutes a new theoretical justification for the disturbance hypothesis , which is supported by some empirical evidence , but only indirectly ( post , 2002 ) . in addition , simulations of the species assembly model provide a third mechanism that may limit the length of food webs . in the simulations , longer food webs can be generated by increasing the immigration rate , which makes the coexistence condition more permissive and increases the overall biodiversity , therefore allowing more opportunities for dynamically generating longer networks . a positive relation between colonization and food chain length was also suggested in another model of species assembly ( holt , 1996 ) . assuming a relation between the size of the ecosystem and the immigration rate , the effect of the immigration rate may explain the observed positive correlation between food chain length and ecosystem size ( post , 2000 ) , to date the strongest empirical determinant of food chain length found in field studies . this work can be extended in several directions . the most important , in our opinion , would be to build a mechanistic model in which environmental fluctuations are explicitly modelled , instead of including them in an effective way as we have done here . this might permit a more quantitative comparison between model results and parameters and the relevant mechanisms and variables operating in natural ecosystems . ub , ml and scm acknowledge hospitality and support by the max planck institute of colloids and interfaces during part of this work . ub was also supported by the i3p program of the spanish csic , cofunded by the european social fund . scm benefits from a ryc fellowship of mec , spain . u. bastolla , m. lässig , s. c. manrubia , and a. valleriani , 2002 . dynamics and topology of species networks . in : _ biological evolution and statistical physics _ , m. lässig and a. valleriani ( eds . ) , springer - verlag . f. briand and j. e. cohen , 1987 . environmental correlates of food chain length . _ science _ * 238 * : 956 - 960 . p. chesson , 1994 . multispecies competition in variable environments . _ theor . pop . biol . _ * 45 * : 227 - 276 .
|
this is the second of two papers dedicated to the relationship between population models of competition and biodiversity . here we consider species assembly models where the population dynamics is kept far from fixed points through the continuous introduction of new species , and generalize to such models the coexistence condition derived for systems at the fixed point . the ecological overlap between species with shared preys , that we define here , provides a quantitative measure of the effective interspecies competition and of the trophic network topology . we obtain distributions of the overlap from simulations of a new model based both on immigration and speciation , and show that they are in good agreement with those measured for three large natural food webs . as discussed in the first paper , rapid environmental fluctuations , interacting with the condition for coexistence of competing species , limit the maximal biodiversity that a trophic level can host . this horizontal limitation to biodiversity is here combined with either dissipation of energy or growth of fluctuations , which in our model limit the length of food webs in the vertical direction . these ingredients yield an effective model of food webs that produce a biodiversity profile with a maximum at an intermediate trophic level , in agreement with field studies . -1 cm -0.15 cm _ centro de astrobiologa , inta - csic , ctra . de ajalvir km . 4 , 28850 torrejn de ardoz , madrid , spain . institut fr theoretische physik , universitt zu kln , zlpicher strasse 77 , 50937 kln , germany . max planck institute of colloids and interfaces , 14424 potsdam , germany .
|
high - precision parameter estimation is fundamental throughout science . quite generally , a number of probe particles are prepared , then subjected to an evolution which depends on the quantity of interest , and finally measured . from the measurement results an estimate is then extracted . when the particles are classically correlated and non - interacting , as a consequence of the central limit theorem , the mean - squared error of the estimate decreases as , where is the number of particles ( probe size ) . this best scaling achievable with a classical probe is known as the _ standard quantum limit _ ( sql ) . quantum metrology aims to improve estimation by exploiting quantum correlations in the probe . in an ideal setting without noise , it is well known that quantum resources allow for a quadratic improvement in precision over the sql ; i.e. , the mean - squared error of the estimate after a sufficient number of experimental repetitions can scale as , yielding the so - called _ heisenberg limit _ . realistic evolution , however , always involves noise of some form , and although quantum metrology has been demonstrated experimentally , e.g. , for atomic magnetometry , spectroscopy , and clocks , there is currently much effort to determine exactly when , and by how much , quantum resources allow estimation to be improved in the presence of decoherence . it is known that for most types of uncorrelated noise ( acting independently on each probe particle ) the asymptotic scaling is constrained to be sql - like . specifically , when estimating a parameter , the mean - squared error obeys , where is the number of repetitions and is a constant which depends on the evolution . if the evolution which each probe particle undergoes is independent of , the scaling is constrained to be sql - like . however , for frequency estimation this is not necessarily the case . in frequency estimation scenarios , such as those of atomic magnetometry , spectroscopy , and clocks , there are two relevant resources , the total number of probe particles and the total time available for the experiment . the experimenter is free to choose the interrogation time and , in particular , may be adapted to . in this case , the time over which unitary evolution and decoherence act is different for each and thus the evolution is not independent of . schematically , the no - go results for noisy evolution in this case become with . thus , if for some optimal choice of the coefficient decreases with , although the no - go results may hold for any fixed evolution time , the bound does not imply sql - like scaling . note that the bound eq . ( [ eq.sqllike ] ) is always achievable in the many - repetitions limit , which corresponds to . although without noise it is optimal to take as large as possible , i.e. , , for any noisy evolution the optimal becomes finite because of noise dominating at large times . so the many - repetitions regime can always be ensured by considering sufficiently large . in frequency estimation scenarios , for the asymptotic scaling to be superclassical , must vanish as , which is only possible if the evolution is such that decoherence can be neglected at short time scales , and the no - go theorems then do not apply . this is also necessary for error - correction techniques , which utilise ancillary particles not sensing the parameter or employ correcting pulses during the evolution , to surpass the sql .
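the role of the interrogation time can be made concrete with a toy calculation . the sketch below considers n uncorrelated probes subject to dephasing acting parallel to the encoding , a model chosen only because its per - experiment fisher information has the simple closed form n t^2 e^{-2 gamma t} when the single - qubit coherence decays as e^{-gamma t} ; it is not the transversal model studied in this paper . the total - time constraint enters through nu = t_total / t repetitions , and the optimal interrogation time is found numerically to be finite , with a best variance scaling as 1/(n t_total) , i.e. sql - like in n .

    import numpy as np

    gamma = 1.0        # dephasing rate (arbitrary units, assumed)
    T_total = 1000.0   # total duration of the experiment
    N = 100            # number of uncorrelated probe particles

    # per-experiment Fisher information of N independent qubits (toy parallel-dephasing model)
    t = np.linspace(1e-3, 10.0, 100000)
    fisher = N * t**2 * np.exp(-2.0 * gamma * t)
    # Cramer-Rao bound with nu = T_total / t repetitions
    variance = 1.0 / ((T_total / t) * fisher)

    i = np.argmin(variance)
    print(f"optimal interrogation time t = {t[i]:.3f}  (analytic value 1/(2*gamma) = {0.5 / gamma:.3f})")
    print(f"minimal variance = {variance[i]:.3e}, proportional to 1/(N*T_total)")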
without such additional resources considering just interrogation - time optimisation the possibility of superclassical scaling has been demonstrated for non - markovian evolutions ( for which the effective decoherence strength vanishes as ) , as well as for dephasing directed along a direction perpendicular to the unitary evolution . in the latter case , it was shown that an optimal variance scaling of can be obtained by choosing .this result was based on numerical analysis of the _ quantum fisher information _ ( qfi ) and was shown to be saturable by greenberger - horne - zeilinger ( ghz ) states .however , ghz states of many particles are not easily generated in practice , and the fisher information approach does not explicitly provide the required measurements .thus , the question of whether the scaling is achievable in practically implementable metrology was left open . in this paper , we argue that the transversal - noise model applies to atomic magnetometry , in particular the experimental setting of , and study the quantum advantage attainable with use of _ one axis - twisted spin - squeezed states _ ( oatsss ) and _ ramsey - interferometry - like measurements _ , both of which are accessible with current experimental techniques . we explicitly show that the setup geometry plays an important role for the achievable quantum enhancement .a suboptimal choice leads to a constant factor of quantum enhancement , while superclassical precision scaling can be maintained for a more appropriate choice .we study the enhancement achievable with the numbers of the experiment and demonstrate the advantage of modifying the geometry .we further consider the case of noise which is not perfectly transversal and find that , although the asymptotic precision scaling is then again sql - like , the precision may be substantially enhanced by optimising the geometry .as the previous results were based on numerics , we also provide an analytical proof of the scaling for ghz states in appendix [ app.ghz_states ] .we consider a scheme in which two - level quantum systems are used to sense a frequency parameter in an experiment of total duration , divided into rounds of interrogation time .we keep in mind that this can correspond to atomic magnetometry , in which the particles then represent the atoms with a spin precessing in a magnetic field at a frequency proportional to the field strength . as in ref . 
, we describe the noisy evolution by a master equation of lindblad form .
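for concreteness , a minimal single - qubit version of transversal dephasing can be written as d rho / dt = -i ( omega / 2 ) [ sigma_z , rho ] + ( gamma / 2 ) ( sigma_x rho sigma_x - rho ) : the frequency omega is encoded by a rotation about z while the noise acts along the perpendicular x axis . the sketch below integrates this equation numerically for the usual ramsey probe state ; the single - particle form , the integrator and all parameter values are only illustrative , the scheme analysed in this paper acting on n probe particles .

    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    omega, gamma = 1.0, 0.1        # illustrative encoding frequency and noise rate

    def rhs(rho):
        unitary = -0.5j * omega * (sz @ rho - rho @ sz)    # encoding: rotation about z
        noise = 0.5 * gamma * (sx @ rho @ sx - rho)        # dephasing along the transversal x axis
        return unitary + noise

    plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)   # ramsey probe state |+>
    rho = np.outer(plus, plus.conj())

    dt = 1e-3
    for step in range(1, 20001):
        # fourth-order runge-kutta integration of the master equation
        k1 = rhs(rho)
        k2 = rhs(rho + 0.5 * dt * k1)
        k3 = rhs(rho + 0.5 * dt * k2)
        k4 = rhs(rho + dt * k3)
        rho = rho + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        if step % 4000 == 0:
            t = step * dt
            print(f"t = {t:4.1f}   <sigma_x> = {np.trace(sx @ rho).real:+.4f}   |coherence| = {abs(rho[0, 1]):.4f}")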
|
under ideal conditions , quantum metrology promises a precision gain over classical techniques scaling quadratically with the number of probe particles . at the same time , no - go results have shown that generic , uncorrelated noise limits the quantum advantage to a constant factor . in frequency estimation scenarios , however , there are exceptions to this rule and , in particular , it has been found that transversal dephasing does allow for a scaling quantum advantage . yet , it has remained unclear whether such exemptions can be exploited in practical scenarios . here , we argue that the transversal - noise model applies to the setting of recent magnetometry experiments and show that a scaling advantage can be maintained with one - axis - twisted spin - squeezed states and ramsey - interferometry - like measurements . this is achieved by exploiting the geometry of the setup that , as we demonstrate , has a strong influence on the achievable quantum enhancement for experimentally feasible parameter settings . when , in addition to the dominant transversal noise , other sources of decoherence are present , the quantum advantage is asymptotically bounded by a constant , but this constant may be significantly improved by exploring the geometry .
|
recently , beyond the traditional object detection and semantic segmentation tasks , instance - level object segmentation has attracted much attention .it aims at joint object detection and semantic segmentation , and requires the pixel - wise semantic labeling for each object instance in the image .therefore , it is very challenging for existing computer vision techniques since instances of a semantic category may present arbitrary scales , various poses , heavy occlusion or obscured boundaries .most of the recent advances in instance - level object segmentation are driven by the rapidly developing object proposal methods .a typical pipeline of solving this task starts with an object proposal generation method and then resorts to tailored convolutional neural networks ( cnn ) architectures and post - processing steps ( graphical inference ) . as a result ,the network training and the accuracy of segmentation results are largely limited by the quality of object proposals generated by existing methods .some efforts have been made in refining the object proposals by bounding box regressions and iterative localizations during testing .however , their strategies did not explicitly utilize additional information such as more fine - grained segmentation masks during training to boost the network capability .intuitively , object proposal refinement and proposal - based segmentation should be jointly tackled as they are complementary to each other . specifically , the semantic category information and pixel - wise semantic labeling can provide more high - level cues and local details to learn more accurate object proposal localizations , while the refined object proposals with higher recall rates would naturally lead to more accurate segmentation masks with an improved segmentation network .in addition , as illustrated in figure [ fig : rnn ] , different object proposals may require different extent of refinement depending on their initial localization precision and interactions with neighboring objects .therefore the recursive refinement should be able to adaptively determine the optimal number of iterations for each proposal as opposed to performing a fixed number of iterations for all the proposals as in those previous methods .motivated by the above observations , in this work we propose a novel reversible recursive framework for instance - level object segmentation ( r2-ios ) .r2-ios integrates the instance - level object segmentation and object proposal refinement into a unified framework .inspired by the recent success of recurrent neural network on visual attention , our r2-ios updates instance - level segmentation results and object proposals by exploiting the previous information recursively .as illustrated in figure [ fig : framework ] , the instance - level segmentation sub - network produces the foreground mask of the dominant object in each proposal , while the proposal refinement sub - network predicts the confidences for all semantic categories as well as the bounding box offsets for refining the object proposals . 
to make the two sub - networks complementary to each other , the rich information in pixel - wise segmentation is utilized to update the proposal refinement sub - network by constructing a powerful segmentation - aware feature representation .the object proposals are therefore refined given the inferred bounding box offsets by the updated sub - networks and the previous locations , which are in turn fed into the two sub - networks for further updating .r2-ios can be conveniently trained by back - propagation after unrolling the sub - networks and sharing the network parameters across different iterations . to obtain a better refined bounding box for each proposal ,the proposal refinement sub - network adaptively determines the number of iterations for refining each proposal in both training and testing , which is in spirit similar to the early stopping rules for iteratively training large networks .r2-ios first recursively refines the proposal for all iterations , and then the reversible gate would be activated at the optimal refinement iteration where the highest category - level confidence is obtained across all iterations .the final results of the proposal can thus be obtained by reversing towards the results of the optimal iteration number .the optimization of the proposal will be stopped at the optimal iteration when the reversible gate is activated during training , and similarly the generated results in that iteration will be regarded as the final outputs during testing .one major challenge in proposal - based instance segmentation methods is that there might be multiple overlapped objects , in many cases belonging to the same category and sharing similar appearance , in a single proposal .it is critical to correctly extract the mask of the dominant object with clear instance - level boundaries in such a proposal in order to achieve good instance - level segmentation performance . to handle this problem ,a complete view of the whole proposal region becomes very important . 
in this work, an instance - aware denoising autoencoder embedded in the segmentation sub - network is proposed to gather global information to generate the dominant foreground masks , in which the noisy outputs from other distracting objects are largely reduced .the improved segmentation masks can accordingly further help update the proposal refinement sub - network during our recursive learning .the main contributions of the proposed r2-ios can be summarized as : 1 ) to the best of our knowledge , our r2-ios is the first research attempt to recursively refine object proposals based on the integrated instance - level segmentation and reversible proposal refinement sub - networks for instance - level object segmentation during both training and testing .2 ) a novel reversible proposal refinement sub - network adaptively determines the optimal number of recursive refinement iterations for each proposal .3 ) the instance - aware denoising autoencoder in the segmentation sub - network can generate more accurate foreground masks of dominant instances through global inference .4 ) extensive experiments on the pascal voc 2012 benchmark demonstrate the effectiveness of r2-ios which advances the state - of - the - art performance from to .* object detection .* object detection aims to recognize and localize each object instance with a bounding box .generally , most of the detection pipelines begin with producing object proposals from the input image , and then the classification and the bounding box regression are performed to identify the target objects .many hand - designed approaches such as selective search , edge boxes and mcg , or cnn - based methods such as deepmask and rpn have been proposed for object proposal extraction .those detection approaches often treat the proposal generation and object detection as two separate techniques , yielding suboptimal results .in contrast , the proposed r2-ios adaptively learns the optimal number of refinement iterations for each object proposal .meanwhile , the reversible proposal refinement and instance - level segmentation sub - networks are jointly trained to mutually boost each other. * instance - level object segmentation . * recently , several works have developed algorithms on the challenging instance - level object segmentation .most of these works take the object proposal methods as the prerequisite .for instance , hariharan proposed a joint framework for both object detection and instance - level segmentation .founded on , complex post - processing methods , category - specific inference and shape prediction , were proposed by chen to further boost the segmentation performance .in contrast to these previous works that use fixed object proposals based on a single - pass feed - forward scheme , the proposed r2-ios recursively refines the bounding boxes of object proposals in each iteration .in addition , we proposed a new instance - level segmentation sub - network with an embedded instance - aware denoising autoencoder to better individualize the instances .there also exist some works that are independent of the object proposals and directly predict object - level masks . particularly , liang predicted the instance numbers of different categories and the pixel - level coordinates of the object to which each pixel belongs .however , their performance is limited by the accuracy of instance number prediction , which is possibly low for cases with small objects . 
on the contrary , our r2-ios can predict category - level confidences and segmentation masks for all the refined proposals , and better covers small objects . as shown in figure [ fig : framework ] , built on the vgg-16 imagenet model , r2-ios takes an image and initial object proposals as inputs . an image first passes several convolutional layers and max pooling layers to generate its convolutional feature maps . then the segmentation and reversible proposal refinement sub - networks take the feature maps as inputs , and their outputs are combined to generate instance - level segmentation results . to get the initial object proposals , the selective search method is used to extract around 2,000 object proposals in each image . in the following , we explain the key components of r2-ios , including the instance - level segmentation sub - network , reversible proposal refinement sub - network , recursive learning and testing phase in more detail . * sub - network structure . * the structure of the segmentation sub - network is built upon the vgg-16 model . the original vgg-16 includes five max pooling layers . to retain more local details , we remove the last two max pooling layers in the segmentation sub - network . following the common practice in semantic segmentation , we replace the last two fully - connected layers in vgg-16 with two fully - convolutional layers in order to obtain convolutional feature maps for the whole image . padding is added when necessary to keep the resolution of feature maps . then the convolutional feature maps of each object proposal pass through a region of interest ( roi ) pooling layer to extract fixed - scale feature maps ( in our case ) for each proposal . several convolutional filters are then applied to generate confidence maps for foreground and background classes . an instance - aware autoencoder is further appended to extract global information contained in the whole convolutional feature maps to infer the foreground mask of the dominant object within the object proposal . * instance - aware denoising autoencoder . * in real - world images , multiple overlapping object instances ( especially those with similar appearances and in the same category ) may appear in an object proposal . in order to obtain good instance - level segmentation results , it is very critical to segment out the dominant instance with clear instance - level boundaries and remove the noisy masks of other distracting instances for a proposal . specifically , when an object proposal contains multiple object instances , we regard the mask of the object that has the largest overlap with the proposal bounding box as the dominant foreground mask . for example , in figure [ fig : framework ] , there are three human instances included in the given proposal ( red rectangle ) . apparently the rightmost person is the dominant instance in that proposal . we thus would like the segmentation sub - network to generate a clean binary mask over that instance as shown in figure [ fig : framework ] . such appropriate pixel - wise prediction requires a global perspective on all the instances in the proposal to determine which instance is the dominant one . however , traditional fully - convolutional layers can only capture local information , which makes it difficult to differentiate instances of the same category .
to close this gap ,r2-ios introduces an instance - aware denoising autoencoder to gather global information from confidence maps to accurately identify the dominant foreground mask within each proposal .formally , we vectorize to a long vector of with a dimension of . then the autoencoder takes as the input and maps it to a hidden representation , where denotes a non - linear operator .the produced hidden representation is then mapped back ( via a decoder ) to a reconstructed vector as .the compact hidden representation extracts global information based on the predictions from convolutional layers in the encoder , which guides the reconstruction of a denoised foreground mask of the dominant instance in the decoder . in our implementation , we use two fully connected layers along with relu non - linear operators to approximate the operators and .the number of output units in the fully - connected layer for is set as and that of the fully - connected layer for is set as 3200 .finally the denoised prediction of is reshaped to a map with the same size as . a pixel - wise cross - entropy loss on is employed to train the instance - level segmentation sub - network .* sub - network structure . * the structure of the proposal refinement sub - network is built upon the vgg-16 model .given an object proposal , the proposal refinement sub - network aims to refine the category recognition and the bounding box locations of the object , and accordingly generates the confidences over categories , including semantic classes and one background class , as well as the bounding - box regression offsets .following the detection pipeline in fast - rcnn , an roi pooling layer is added to generate feature maps with a fixed size of .the maps are then fed into two fully - connected layers .different from fast r - cnn , segmentation - aware features are constructed to incorporate guidance from the pixel - wise segmentation information to predict the confidences and bounding box offsets of the proposal , as indicated by the dashed arrow in figure [ fig : framework ] .the foreground mask of the dominant object in each proposal can help better depict the boundaries of the instances , leading to better localization and categorization of each proposal .thus , connected by segmentation - aware features and recursively refined proposals , the segmentation and proposal refinement sub - networks can be jointly optimized and benefit each other during training . specifically , the segmentation - aware features are obtained by concatenating the confidence maps from the instance - aware autoencoder with the features from the last fully - connected layer in the proposal refinement sub - network .two output layers are then appended to these segmentation - aware features to predict category - level confidences and bounding - box regression offsets .the parameters of these predictors are optimized by minimizing soft - max loss and smooth loss . * reversible gate . * the best bounding box of each object proposal and consequently the most accurate segmentation mask may be generated at different iterations of r2-ios during training and testing , depending on the accuracy of its initial bounding box and the interactions with other neighboring or overlapped instances . in the -th iterationwhere , the reversible gate is therefore introduced to determine the optimal number of refinement iterations performed for each proposal . 
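Before the reversible gate is detailed in the next paragraph, the instance-aware denoising autoencoder just described can be sketched as two fully-connected layers with ReLU wrapped around the vectorized confidence maps. Only the 3200-dimensional reconstruction size is stated in the text, so the hidden width below is an assumption.

```python
import torch.nn as nn


class InstanceAwareAutoencoder(nn.Module):
    """Denoising autoencoder over the vectorized confidence maps of one proposal.
    The hidden width is an assumption; only the 3200-dim output is fixed by the text."""

    def __init__(self, in_dim=3200, hidden_dim=512):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(inplace=True))
        self.decode = nn.Sequential(nn.Linear(hidden_dim, in_dim), nn.ReLU(inplace=True))

    def forward(self, conf_maps):
        # conf_maps: (K, 2, H, W) noisy foreground/background confidences per proposal
        k, c, h, w = conf_maps.shape
        hidden = self.encode(conf_maps.flatten(1))   # compact, global representation
        return self.decode(hidden).view(k, c, h, w)  # denoised dominant-instance mask


# training signal: pixel-wise cross-entropy against the dominant ground-truth mask
# loss = nn.CrossEntropyLoss()(denoised_maps, gt_mask)   # gt_mask: (K, H, W) in {0, 1}
```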
while we can check the convergence of predicted bounding box offsets in each iteration, in practice we found that the predicted confidence of the semantic category is an easier and better indicator of the quality of each proposal .all the reversible gates are initialized with 0 which means an inactivated state . after performing all the iterations for refining each proposal ,the iteration with the highest category - level confidence score is regarded as the optimal iteration .its corresponding reversible gate is then activated .accordingly , we adopt the refinement results of the proposal at the -th iteration as the final results .we apply the reversible gate in both training and testing . during training ,only the losses of this proposal in the first iterations are used for updating the parameters of the unrolled sub - networks , while the losses in the rest iterations are discarded .the recursive learning seamlessly integrates instance - level object segmentation and object proposal refinement into a unified framework .specifically , denote the initial object proposal as where contains the pixel coordinates of the center , width and height of the proposed bounding box .assume each object proposal is labeled with its ground - truth location of the bounding - box , denoted as . in the -th iteration, the bounding box location of the input proposal is denoted as , produced by the two sub - networks in the -th iteration . after passing the input image and the object proposal into two sub - networks, the proposal refinement sub - network generates the predicted bounding box offsets for each of the object classes , and the category - level confidences for categories . the ground - truth bounding box offsets are transformed as .we use the transformation strategy given in to compute , in which specifies a scale - invariant translation and log - space height / width shift relative to each object proposal .the segmentation sub - network generates the predicted foreground mask of the dominant object in the proposal as .we denote the associated ground - truth dominant foreground mask for the proposal as .we adopt the following multi - loss for each object proposal to jointly train the instance - level segmentation sub - network and the proposal refinement sub - network as j_\text{loc}(\mathbf{o}_{t , g } , \tilde{\mathbf{o}}_t ) + \bm{1}[g \geq 1 ] j_\text{seg } ( \mathbf{v}_t , \tilde{\mathbf{v}_t } ) , \label{eq : loss } \end{split}\ ] ] where is the log loss for the ground truth class , is a smooth loss proposed in and is a pixel - wise cross - entropy loss . the indicator function $ ] equals 1 when and 0 otherwise . for proposals that only contain background (g = 0 ) , and are set to be 0 . following ,only the object proposals that have at least 0.5 intersection over union ( iou ) overlap with a ground - truth bounding box are labeled with a foreground object class , .the remaining proposals are deemed as background samples and labeled with .the refined bounding box of the proposal can be calculated as , where represents the inverse operation of to calculate the refined bounding box given and .note that our r2-ios adaptively adopts the results obtained by performing different number of refinement iterations for each proposal .if the reversible gate is activated at the -th iteration as described in sec .[ sec : proposal ] , the final refinement results for the proposal will be reversed towards the results of -th iteration . 
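A compact sketch of the box parameterization, the smooth-L1 localization penalty, and the reversal rule used by the gate is given below for concreteness; the offset convention follows the standard Fast R-CNN transformation referenced above.

```python
import numpy as np


def box_to_offsets(proposal, gt):
    """Scale-invariant translation and log-space width/height shift of a ground-truth
    box relative to a proposal (the standard Fast R-CNN parameterization assumed)."""
    px, py, pw, ph = proposal                      # center x, center y, width, height
    gx, gy, gw, gh = gt
    return np.array([(gx - px) / pw, (gy - py) / ph, np.log(gw / pw), np.log(gh / ph)])


def offsets_to_box(proposal, t):
    """Inverse transform: the refined bounding box fed to the next iteration."""
    px, py, pw, ph = proposal
    return np.array([px + t[0] * pw, py + t[1] * ph, pw * np.exp(t[2]), ph * np.exp(t[3])])


def smooth_l1(x):
    """Smooth-L1 penalty used for the localization term of the multi-task loss."""
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * x ** 2, ax - 0.5).sum()


def reverse_to_best_iteration(scores, boxes, masks):
    """Reversible gate: keep the refinement iteration with the highest class confidence.
    scores: (T,), boxes: (T, 4), masks: (T, H, W) for one proposal over T iterations."""
    t_star = int(np.argmax(scores))
    return t_star, boxes[t_star], masks[t_star]    # during training, only the losses of
                                                   # iterations 0..t_star are kept
```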
thus r2-ios updates the network parameters by adaptively minimizing the different number of multi - loss in eqn .( [ eq : loss ] ) for each proposal .the global loss of the proposal to update the networks is accordingly computed as .r2-ios can thus specify different number of iterations for each proposal to update the network capability and achieve better instance - level segmentation results . during training ,using the reversible gates requires a reliable start of the prediction of category - level confidences for each proposal to produce the optimal iteration number for the refinement .we therefore first train the network parameters of r2-ios without using the reversible gates in which the results after performing all iterations of the refinement are adopted for all proposals .then our complete r2-ios is fine - tuned on these pre - trained network parameters by using the reversible gates for all proposals .r2-ios first takes the whole image and the initial object proposals with locations as the input , and recursively passes them into the proposal refinement and segmentation sub - networks . in the -th iteration , based on the confidence scores of all categories , the category for each proposal predicted by taking the maximum of the . for the proposals predicted as background , the locations of proposals are not updated . for the remaining proposals predicted as a specific object class , the locations of object proposals refined by the predicted offsets and previous location .based on the predicted confidence scores of the refined proposal in all iterations , the optimal number of refinement iterations for each proposal can be accordingly determined .we denote the optimal number of refinement iterations of each proposal as .the final outputs for each object proposal can be reversed towards the results at the -th iteration , including the predicted category , the refined locations and the dominant foreground mask .the final instance - level segmentation results can be accordingly generated by combining the outputs of all proposals ..comparison of instance - level segmentation performance with two state - of - the - arts using mean metric over 20 classes at 0.5 and 0.7 iou , when evaluated with the ground - truth annotations from sbd dataset .all numbers are in % . [cols="<,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^",options="header " , ] [ mpr_vol ] * dataset and evaluation metrics . * to make fair comparison with four state - of - the - art methods , we evaluate the proposed r2-ios framework on the pascal voc 2012 validation segmentation benchmark . for comparing with , we evaluate the performance on voc 2012 main validation set , including 5732 images .the comparison results are reported in table [ mprvoc ] . for comparing with ,the results are evaluated on voc 2012 segmentation validation set , including 1449 images , and reported in table [ mpr ] and table [ mpr_vol ] .note that , voc 2012 provides very elaborated segmentation annotations for each instance ( carefully labeled skeletons for a bicycle ) while sbd just gives the whole region ( rough region for a bicycle ) . since chen et al . 
re - evaluated the performance of the method in with the annotations from voc 2012 validation set , most of our evaluations are thus performed with the annotations from voc 2012 segmentation validation set when comparing with .we use standard metric for evaluation , which calculates the average precision under different iou scores with the ground - truth segmentation map .* implementation details . *we fine - tune the r2-ios based on the pre - trained vgg-16 model and our code is based on the publicly available fast r - cnn framework on caffe platform . during fine - tuning ,each sgd mini - batch contains 64 selected object proposals from each training image .following , in each mini - batch , 25% of object proposals are foreground that have iou overlap with a ground truth bounding box of at least 0.5 , and the rest are background . during training ,images are randomly selected for horizontal flipping with a probability of 0.5 to augment the training set .the maximal number of refinement iterations for all proposals is set as , since only minor improvement with more iterations is observed . in the reversible proposal refinement sub - network ,parameters in the fully - connected layers used for softmax classification and bounding box regression are randomly initialized with zero - mean gaussian distributions with standard deviations of 0.01 and 0.001 , respectively . in the segmentation sub - network ,the last two convolutional layers used for pixel - wise semantic labeling and the fully - connected layers in the instance - aware denoising autoencoderare all initialized from zero - mean gaussian distributions with standard deviations 0.001 .all values of initial bias are set as 0 .the learning rate of pre - trained layers is set as 0.0001 .for training , we first run sgd for 120k iterations for training the network parameters of r2-ios without using reversible gates on a nvidia geforce titan x gpu and intel core i7 - 4930k cpu .40ghz .then our r2-ios with the reversible gates is fine - tuned on the pre - trained network paramters for 100k iterations . for testing , on average , the r2-ios framework processes one image within 1 second ( excluding object proposal time ) .table [ mprvoc ] provides the results of sds , hc and our r2-ios for instance - level segmentation with the annotations from sbd dataset .r2-ios outperforms the previous state - of - the - art approaches by a significant margin , in average 19.1% better than sds and 8.8% better than hc in terms of mean metric at 0.5 iou score . when evaluating on 0.7 iou score , improvement in can be observed when comparing our r2-ios with hc .we can only compare the results evaluated at 0.5 to 0.7 iou scores , since no other results evaluated at higher iou scores have been reported for the baselines . when evaluated with the annotations from voc 2012 dataset , table [ mpr ] and table [ mpr_vol ]present the comparison of the proposed r2-ios with three state - of - the - art methods using metric at iou score 0.5 , 0.6 and 0.7 , respectively . evaluating with much higheriou score requires high accuracy for predicted segmentation masks of object instances .r2-ios significantly outperforms the three baselines : 66.7% vs 43.8% of sds , 46.3% of chen et al . and 58.7% of pfn in mean metric .furthermore , table [ mpr_vol ] shows that r2-ios also substantially outperforms the three baselines evaluated at higher iou scores 0.6 and 0.7 . 
in general, r2-ios shows dramatically higher performance than the baselines , demonstrating its superiority in predicting accurate instance - level segmentation masks benefiting from its coherent recursive learning .several examples of the instance - level segmentation results ( with respect to the ground truth ) are visualized in figure [ fig : result ] .because no publicly released codes are available for other baselines , we only compare with visual results from sds .generally , r2-ios generates more accurate segmentation results for object instances of different object categories , various scales and heavy occlusion , while sds may fail to localize and segment out the object instances due to the suboptimal localized object proposals . for example , in the first image of the second row , the region of the leg is wrongly included in the predicted mask of the cat by sds , while r2-ios precisely segments out the mask of the cat without being distracted by other object instances .we further evaluate the effectiveness of the four important components of r2-ios , the recursive learning , the reversible gate , the instance - aware denoising autoencoder and the segmentation - aware feature representation .the performance over all 20 classes from eight variants of r2-ios is reported in table [ mpr ] . * recursive learning . * the proposed r2-ios uses the maximal iterations to refine all object proposals . to justify the necessity of using multiple iterations, we evaluate the performance of r2-ios with different numbers of iterations during training and testing stages .note that all the following results are obtained without using the reversible gates . in our experimental results , r2-ios recursive 1 " indicates the performance of using only 1 iteration , which is equivalent to the model without any recursive refinement . r2-ios recursive 2 and r2-ios recursive 3 " represents the models of using 2 and 3 iterations . by comparing r2-ios recursive 4 " with the three variants , one can observe considerable improvement on segmentation performance when using more iterations .this shows that r2-ios can generate more precise instance - level segmentation results benefiting from recursively refined object proposals and segmentation predictions .we do not observe a noticeable increase in the performance by adding more iterations , thus the setting of 4 iterations is employed throughout our experiments .in addition , we also report the results of the r2-ios variant where the recursive process is only performed during testing and no recursive training is used , as r2-ios recursive only testing " . by comparing with r2-ios recursive 4 " , a decrease is observed , which verifies the advantage of using recursive learning during training to jointly improve the network capabilities of two sub - networks .we also provide several examples for qualitative comparison of r2-ios variants with different numbers of iterations in figure [ fig : recursive ] .we can observe that the proposed r2-ios is able to gradually produce better instance - level segmentation results with more iterations .for instance , in the first row , by using only 1 iteration , r2-ios can only segment out one part of the sofa with salient appearance with respect to background . 
after refining object proposals with 4 iterations ,the complete sofa mask can be predicted by r2-ios .similarly , significant improvement by r2-ios with more iterations can be observed in accurately locating and segmenting the object with heavy occlusion ( in the second row ). * reversible gate .* we also verify the effectiveness of the reversible gate to adaptively determine the optimal number of refinement iterations for each proposal . r2-ios ( ours ) " offers a increase by incorporating the reversible gates into the reversible proposal refinement sub - network , compared to the version r2-ios recursive 4 " .this demonstrates that performing adaptive number of refinement iterations for each proposal can help produce more accurate bounding boxes and instance - level object segmentation results for all proposals .similar improvement is also seen at 0.6 and 0.7 iou scores , as reported in table [ mpr_vol ] .* instance - aware autoencoder .* we also evaluate the effectiveness of using the instance - aware denoising autoencoder to predict the foreground mask for the dominant object in each proposal . in table[ mpr ] , r2-ios ( w / o autoencoder ) " represents the performance of the r2-ios variant without the instance - aware autoencoder where the dominant foreground mask for each proposal is directly generated by the last convolutional layer . as shown by r2-ios ( w / o autoencoder ) " and r2-ios ( ours ) " , using the instance - aware autoencoder , over 12.5% performance improvement can be observed . this substantial gain verifies that the instance - aware autoencoder can help determine the dominant object instance by explicitly harnessing global information within each proposal .in addition , another alternative strategy of gathering global information is to simply use fully - connected layers .we thus report the results of the r2-ios variant using two fully - connected layers with 3200 outputs stacked on the convolutional layers , named as r2-ios ( fully w / o autoencoder ) " .our r2-ios also gives favorable performance over r2-ios ( fully w / o autoencoder ) " , showing that using intermediate compact features within the instance - aware autoencoder can help introduce more discriminative and higher - level representations for predicting the dominant foreground mask .figure [ fig : denoise ] shows some segmentation results obtained by r2-ios ( w / o autoencoder ) " and r2-ios ( ours ) " . r2-ios ( w / o autoencoder ) " often fails to distinguish the dominant instances among multiple instances in an object proposal , and wrongly labels all object instances as foreground .for example , in the first row , the instance - aware autoencoder enables the model to distinguish the mask of a human instance from a motorcycle .* segmentation - aware feature representation . 
* the benefit of incorporating the confidence maps predicted by the segmentation sub - network as part of the features in the reversible proposal refinement sub - network can be demonstrated by comparing r2-ios ( w / o seg - aware ) " with r2-ios ( ours ) " .the improvement shows that the two sub - networks can mutually boost each other and help generate more accurate object proposals and segmentation masks .in this paper , we proposed a novel reversible recursive instance - level object segmentation ( r2-ios ) framework to address the challenging instance - level object segmentation problem .r2-ios recursively refines the locations of object proposals by leveraging the repeatedly updated segmentation sub - network and the reversible proposal refinement sub - network in each iteration . in turn ,the refined object proposals provide better features of each proposal for training the two sub - networks .the reversible proposal refinement sub - network adaptively determines the optimal iteration number of the refinement for each proposal , which is a very general idea and can be extended to other recurrent models .an instance - aware denoising autoencoder in the segmentation sub - network is proposed to leverage global contextual information and gives a better foreground mask for the dominant object instance in each proposal . in future, we will utilize long short - term memory ( lstm ) recurrent networks to leverage long - term spatial contextual dependencies from neighboring objects and scenes in order to further boost the instance - level segmentation performance .
|
in this work , we propose a novel reversible recursive instance - level object segmentation ( r2-ios ) framework to address the challenging instance - level object segmentation task . r2-ios consists of a reversible proposal refinement sub - network that predicts bounding box offsets for refining the object proposal locations , and an instance - level segmentation sub - network that generates the foreground mask of the dominant object instance in each proposal . by being recursive , r2-ios iteratively optimizes the two sub - networks during joint training , in which the refined object proposals and improved segmentation predictions are alternately fed into each other to progressively increase the network capabilities . by being reversible , the proposal refinement sub - network adaptively determines an optimal number of refinement iterations required for each proposal during both training and testing . furthermore , to handle multiple overlapped instances within a proposal , an instance - aware denoising autoencoder is introduced into the segmentation sub - network to distinguish the dominant object from other distracting instances . extensive experiments on the challenging pascal voc 2012 benchmark well demonstrate the superiority of r2-ios over other state - of - the - art methods . in particular , the over classes at iou achieves , which significantly outperforms the results of by pfn and by .
|
_ should we perform measurement or not ? _ this question appears to be critical in quantum physics , particularly in quantum information science . for quantum computation , for instance, it is of essential importance to study differences between the conventional closed - system approach and the measurement - based one ( i.e. the so - called one - way computation ) .this paper focuses on a specific aspect of this abstract and broad question ; we will consider feedback control problems .that is , for a given open system ( plant ) , we want to engineer another system ( controller ) connected to the plant so that the plant or the whole system behaves in a desirable way .the fundamental question is then , in our case , as follows ; _ should we measure the plant or not , for engineering a closed - loop system ? _ more precisely , in the former case , we measure the plant s output and engineer a classical controller that manipulates the plant using the measurement result this is called the _ measurement - based feedback ( mf ) _ approach . in the latter case ,we do not measure it , but rather connect a fully quantum controller directly to the plant system in a feedback manner this is called the _ coherent feedback ( cf ) _ approach . a typical example is shown in fig .[ mf vs cf intro ] ; the plant is an open mechanical oscillator coupled to a ring - type optical cavity , and the control goal is to minimize the energy of the oscillator , or equivalently to cool the oscillator towards its motional ground state . as mentioned above , there are two feedback control strategies .one is the mf controller ( fig .[ mf vs cf intro ] ( a ) ) that measures the output field by for instance a homodyne detector ; then , using the continuous - time measurement results , it produces the control signal for modulating the input field .the other option is the cf control ( fig .[ mf vs cf intro ] ( b ) ) , where we construct another fully quantum system that feeds the output field back to the input field , without involving any measurement component .the question is then about how to design a mf / cf controller that cools the oscillator most effectively .controller synthesis for a quantum system is in general non - trivial , but researchers longstanding efforts have built a solid mathematical framework for dealing with those problems . 
for the mf case , actually there exists a beautiful _ quantum feedback control theory _ that was developed based on the _ quantum filtering _ together with the classical control theory .in fact , the above - described cooling problem can be formulated as a quantum linear quadratic gaussian ( lqg ) feedback control problem and explicitly solved .also the theory has been applied to various control problems in quantum information science such as error correction .notably , experiment of mf control is now within the reach of current technologies .the cf control , on the other hand , has still a relatively young history though its initial concept was found in back in 1994 ; but recently it has attracted increasing attention , leading as a result development of the basic control theory and applications .some experimental demonstrations of cf control also warrant special mention ; in fact , one of the main advantages of cf is in its experimental feasibility compared to the mf approach .let us return to our question ; which controller , mf or cf , is better ?now note that a cf controller is a fully quantum system whose random variables are in general represented by non - commutative operators , while a mf controller is a classical system with commutative random variables .hence from a mathematical viewpoint the class of mf controllers is completely included in that of cf controllers .thus our question is as follows ; _ in what situation is a cf controller better than a mf controller ?_ actually there have been several studies exploring answers to this question ; most of these studies discussed problems of minimizing a certain cost function such as energy of an oscillator or the time required for state transfer . in particular in , the authors studied the problem discussed in the second paragraph and clarified that a certain cf controller outperforms any mf controller when the total mean phonon number of the oscillator is in the quantum regime ; in other words , the two types of controllers do not show a clear difference in their performance for cooling , in a classical situation .this in more broad sense implies that a cf controller would outperform a mf controller only in a purely quantum regime .consequently , our question can be regarded as a special case of the fundamental problem in physics asking in what situation a fully quantum device ( such as a quantum computer ) outperforms any classical one ( such as a classical computer ) . 
towards shedding a new light on the above - mentioned fundamental problem ,this paper attempts to clarify a boundary between the cf and mf controls for specific control problems .the problems are not what aim to minimize a cost function , but we will consider the following three ; ( i ) realization of a back - action evasion ( bae ) measurement , ( ii ) generation of a quantum non - demolished ( qnd ) variable , and ( iii ) generation of a decoherence - free subsystem ( dfs ) .the followings are brief descriptions of these notions in the input - output formalism .first , if a measurement process is subjected only to a single noise quadrature ( shot noise ) and not to its conjugate ( back - action noise ) , then it is called the bae measurement ; as a result bae may beat the so - called standard quantum limit ( sql ) and enables high - precision detection for a tiny signal such as a gravitational wave force .next , a qnd variable is a physical quantity that can be measured without being disturbed ; more precisely , it is not affected by an input probe field but still appears in the output field , which can be thus measured repeatedly .lastly , a dfs is a subsystem that is completely isolated from surrounding environment ; that is , it is a subsystem whose variables are not affected by any input probe / environment field , and further , they do not appear in the corresponding output fields .hence , a dfs can be used for quantum computation or memory .these three notions play crucial roles especially in quantum information science , thus their realizations are of essential importance .indeed we find in the literature some feedback - based approaches realizing bae , qnd , and dfs . , where the corresponding quantum variables are given by . and denote the controllable and observable subspaces , respectively .the colored region represents the set of qnd variables ( middle ) and the set of variables in a dfs ( right ) ., width=328 ] another feature of this paper is that we focus on general open _ linear quantum systems _ ; this is a wide class of systems containing for instance optical devices , mechanical oscillators , and large atomic ensembles .linear systems are typical continuous - variables ( cv ) systems , which are applicable to several cv quantum information processing both in gaussian case and non - gaussian case . in both classical and quantum cases , for linear systems , the so - called _ controllability _ and _observability _ properties can be well defined ; further , those properties have equivalent representations in terms of a _ transfer function _ , which explicitly describes the relation between input and output .in fact a main advantage of focusing on linear systems is that we can have systematic characterizations of bae , qnd , and dfs in terms of the controllability and observability properties or transfer functions , which are consistent with the standard definitions found in the literature .figure [ goals ] is an at a glance overview of those characterizations , showing unification of the notions .indeed this is the key idea to obtain all the results in this paper .[ the no - go theorems ] therefore our problem is , for a given open linear system , to design a cf / mf controller to realize bae , qnd , or dfs . 
for this problem ,the results summarized in table [ the no - go theorems ] are obtained .that is , no mf controller can achieve any of the control goals for general linear systems ( there are two kinds of general configurations for feedback control , as indicated by type " in table [ the no - go theorems ] ) .in contrast to these no - go theorems , for every category in the table we can find an example of cf controller achieving the goal . from the viewpoint of the above - mentioned fundamental question asking differences of the ability of quantum and classical devices , therefore , these results imply that bae , qnd , and dfs are the properties that can only be realized in a fully quantum device .this paper is organized as follows .section ii reviews some useful facts in classical linear systems theory and describes a general linear quantum system with some examples . in sec .iii we discuss the three control goals , bae , qnd , and dfs , in the general input - output formalism and give their systematic characterizations in terms of the controllability - observability properties and also transfer functions ; again , these new characterizations are special feature of this paper. then the proofs of the no - go theorems are given in secs .iv and v , each of which are devoted to the proofs for the type-1 and the type-2 mf control configuration , respectively .sections vi and vii demonstrate systematic engineering of a cf controller achieving the control goal .in particular , in the type-2 case , we will study a michelson s interferometer composed of two mechanical oscillators , which is used for gravitational wave detection .* notations : * for a matrix , the _ kernel _ and the _ range _ are defined by and , respectively .the complement of a linear space is denoted by . means the null space . in this paperwe do not use the terminology observable " to represent a measurable physical quantity ( i.e. a self adjoint operator ) , because it has a different meaning in systems theory ; a physical quantity is called a variable " , e.g. a qnd variable rather than a qnd observable .a standard form of classical linear systems is given by is a vector of c - number variables . and are vectors of real - valued input and output signals , respectively . , and are real matrices with appropriate dimensions . in this paper , the following three questions are important ; ( i ) which components of can be controlled by , ( ii ) which components of can be observed from , and ( iii ) in what condition does not appear in ? the answers are briefly described below .see for more detailed discussion .the first problem can be explicitly solved by examining the following _ controllability matrix _ : .\ ] ] indeed this matrix fully characterizes the controllable and uncontrollable variables with respect to ( w.r.t . ) . to see this fact , suppose and let and be independent vectors spanning and , respectively .further let us define ] .then , as is spanned by , there exists a matrix satisfying . on the other hand in general spanned by all the vectors ; i.e. .note also that there exists a matrix satisfying .these relations are summarized in terms of the invertible square matrix ] ; in particular , due to , the uncontrollable variable is characterized by .also the controllable one is defined in .hence we call these sets the _ uncontrollable subspace _ and the _ controllable subspace _ , respectively . 
and , respectively .but in the quantum case a variable of interest is an infinite - dimensional operator and does not live in either of these subspaces ; rather it is always of the form and thus can be well characterized by the dual vector .this is the reason why we define the controllable and uncontrollable subspaces in the dual space as and , respectively . ]the following fact is especially useful in this paper : the system has an uncontrollable variable iff the answer to the second question is obtained in a similar fashion .let us define the _ observability matrix _ ^\top.\ ] ] assume .then , there exists a linear transformation ^\top ] .then , the ccr ( we assume ) is represented by . \end{aligned}\ ] ] is a block diagonal matrix ; we often omit the subscript .the system is driven by the hamiltonian where .further , it couples to environment / probe fields through the hamiltonian , where ( , ) .also is the annihilation operator on the field , which under the markovian approximation satisfies =\delta_{ij}\delta(t - t') ] , where further , the field variables change to the set of equations and is the most general form of open _ linear quantum systems_. all the elements of the vector in eq .can not be measured simultaneously , because they do not commute with each other .in fact , without introducing additional noise fields as explained just later , we can measure only at most half of them ; that is , the output equation associated with a _ linear measurement _ , which is realized by a homodyne detector , is of the form where is a real matrix satisfying and .actually , all the elements of are classical signals commuting with each other as well as with those of for all times ; i.e. = 0,~\forall i , j , ~\forall t , t'.\ ] ] let us further introduce with matrix such that ] and performing homodyne measurement on the joint fields composed of and ; that is , the output equation is given by = m_1 \left [ \begin{array}{c } c \\ 0 \\ \end{array}\right ] \hat x + m_1 \left [ \begin{array}{c } \hat{\cal w } \\\hat{\cal v } \\\end{array}\right],\ ] ] where in this case is with the size and it satisfies , etc .we thus have measurement outcomes , though they are subjected to the additional noise .note that , by simply replacing and by ^\top ] , this dual homodyne detection scheme can be represented by eqs . and .hence in what follows , without loss of generality , we use eq . to represent the most general linear measurement . *( i ) * a simple open linear system is an empty optical cavity with two input and output fields , depicted in fig .[ examples ] ( a ) .the system equations are given by is the annihilation operator of the cavity mode . and are the white noise operators of the incoming and the outgoing optical fields , respectively . is the coupling strength between and the field , which is proportional to the transmissivity of the coupling mirror . in this paperwe express the variables in the quadrature form , which in this case are defined as ^\top ] with the field quadratures .then , the above system equations are rewritten as typically this system works as a low - pass filter ; that is , for the noisy input field , the corresponding mode - cleaned output field is generated , which will be used later for e.g. some quantum information processing . to attain this goal , is measured to detect the error signal for locking the optical path length in the cavity .note that is a vacuum field .that is , in this case , the two input - output fields have different roles . 
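The controllability and observability tests introduced above recur throughout the paper; before turning to the remaining examples, here is a small numerical rendering of them, together with the characterization of Fig. [goals] for a candidate variable of the form v^T x: a QND variable is one that the input cannot drive but that still reaches the output, and a DFS variable is decoupled from both. The dual-space bookkeeping follows the kernel conditions used later in the no-go proofs; the helper names are ours.

```python
import numpy as np


def ctrb(A, B):
    """Controllability matrix  C_n = [B, AB, ..., A^(n-1) B]."""
    blocks, n = [B], A.shape[0]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)


def obsv(A, C):
    """Observability matrix  O_n = [C; CA; ...; C A^(n-1)]."""
    blocks, n = [C], A.shape[0]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)


def is_qnd_variable(A, B, C, v, tol=1e-9):
    """v^T x is not driven by the input but still reaches the output
    (the kernel conditions used in the type-1/type-2 proofs)."""
    undisturbed = np.allclose(v @ ctrb(A, B), 0, atol=tol)
    reaches_output = not np.allclose(obsv(A, C) @ v, 0, atol=tol)
    return undisturbed and reaches_output


def is_dfs_variable(A, B, C, v, tol=1e-9):
    """v^T x is decoupled from both the input and the output fields."""
    return (np.allclose(v @ ctrb(A, B), 0, atol=tol)
            and np.allclose(obsv(A, C) @ v, 0, atol=tol))
```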
*( ii ) * the mechanical oscillator shown in fig .[ examples ] ( b ) can also be modeled as a linear system .this system is composed of a mechanical oscillator with mode and a cavity with mode .the cavity couples to a probe field ^\top ] is obtained as \hat x - \sqrt{2\gamma } \left [ \begin{array}{cc } 0 & 0 \\ 0 & 0 \\ \hline 1 & 0 \\ 0 & 1 \\ \end{array}\right]\hat{w } , \nonumber \\ & & \hspace*{-1em } \hat{w}^{\rm out } = \sqrt{2\gamma } \left [ \begin{array}{cc|cc } 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array}\right ] \hat x + \hat{w}. \end{aligned}\ ] ] and are the mass and the resonant frequency of the oscillator . is the coupling constant between the oscillator and the cavity field , which is proportional to the strength of radiation pressure force . is the coupling constant between the cavity and the probe field . as indicated from the equations , it is possible to extract some information about the oscillator s behavior by measuring the probe output field .a typical situation is that the oscillator is pushed by an external force with unknown strength ; we attempt to estimate this value , by measuring .the oscillator s motion is usually much slower than that of the cavity field , thus we can adiabatically eliminate the cavity mode and have a reduced dynamical equation of only the oscillator : \hat x + \sqrt{\lambda } \left [ \begin{array}{c } 0 \\ 1 \\\end{array}\right]\hat{q } + \left [ \begin{array}{c } 0 \\ 1 \\\end{array}\right]\hat{f } , \nonumber \\ & & \hspace*{0em } \hat{w}^{\rm out } = \left [ \begin{array}{c } \hat{q}^{\rm out } \\\hat{p}^{\rm out } \\\end{array}\right ] = \sqrt{\lambda } \left [ \begin{array}{cc } 0 & 0 \\ 1 & 0 \\ \end{array}\right ] \hat x + \left [ \begin{array}{c } \hat{q } \\ \hat{p } \\\end{array}\right ] , \end{aligned}\ ] ] where represents the strength of the direct coupling between the oscillator and the probe field .this equation clearly shows that only contains the information about the oscillator and accordingly ; thus should be measured , implying ] , where is the optical path length in the interferometer .hence under the assumption , the normalized signal containing is given by = \frac{y[i\omega]}{2\sqrt{\lambda}l } = \hat{g}[i\omega ] + \frac{\sqrt{\lambda}}{ml\omega^2}\hat{q}_2[i\omega ] + \frac{1}{2\sqrt{\lambda}l}\hat{p}_2[i\omega].\ ] ] the noise power of is bounded from below by the following _ standard quantum limit ( sql ) _ : ={\langle{|\tilde{y}-\hat{g}|^2}\rangle } = \frac{\lambda}{m^2l^2\omega^4}{\langle{|\hat{q}_2|^2}\rangle } + \frac{1}{4\lambda l^2}{\langle{|\hat{p}_2|^2}\rangle } \nonumber \\ & & \hspace*{0em } \geq 2\sqrt { \frac{{\langle{|\hat{q}_2|^2}\rangle}{\langle{|\hat{p}_2|^2}\rangle } } { 4m^2 l^4 \omega^4 } } \geq \frac{1}{2ml^2 \omega^2 } = s_{\rm sql}[i\omega].\end{aligned}\ ] ] the last inequality is due to the heisenberg uncertainty relation .( for the simple notation , the power spectrum is defined without involving the delta function . 
)the sql appears because the output contains the ba noise in addition to the shot noise .thus , towards high - precision detection of , a special system configuration should be devised so that is free from .that is , we need bae .in fact , if bae is realized , then by injecting a -squeezed light field into the dark port , we can possibly reduce the noise power below the sql and may have chance to detect ; for some specific configurations achieving bae , see .the above discussion can be generalized for the system and .let us assume that the signal to be detected is contained in the output : hence , is the shot noise , which must appear in .the ba noise is then given by the conjugate .note that these are vectors of operators : ^\top ] .the matrices and satisfy several conditions ; in particular holds and leads to .hence eq . is rewritten as bae is realized , if the output does not contain the ba noise .( we will not consider the so - called variational measurement approach , in which case is frequency dependent . ) in the language of linear systems theory , as stated in eq ., this condition means that there is no subsystem that is controllable w.r.t . and observable w.r.t . ; i.e. where is the controllability matrix generated from and is the observability matrix generated from .further , again as described in eq . , the condition is equivalent to under this condition , the system equations and are represented in a transformed coordinate by = \left [ \begin{array}{cc } a_{11 } & 0 \\ a_{21 } & a_{22 } \\\end{array}\right ] \left [ \begin{array}{c } \hat x'_1 \\\hat x'_2 \\ \end{array}\right ] + \left [ \begin{array}{c } b_{11 } \\ b_{21 } \\ \end{array}\right]\hat{\cal q } + \left [ \begin{array}{c } 0 \\ b_{22 } \\\end{array}\right]\hat{\cal p } , \nonumber \\ & & \hspace*{-1em } y=[c_1,~0 ] \left [ \begin{array}{c } \hat x'_1 \\\hat x'_2 \\\end{array}\right ] + \hat{\cal q } , \end{aligned}\ ] ] showing that actually there is no signal flow from to .it is also obvious from this equation that , similar to the classical case , the equivalent characterization to eq . in terms of the transfer functionis given by =0,~~\forall s.\ ] ] finally , note that achieving the above bae condition or itself does not necessarily mean the improvement of signal sensitivity ; actually in the case of gw force sensing discussed in sec .vii - b , we need squeezing of the input field in addition to the bae property for realizing such operational improvement . 
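The kernel/transfer-function characterization of BAE above amounts to a one-line numerical test. In the sketch below, B_ba collects the input columns driven by the back-action quadratures and C_meas the measured output rows; since there is no direct feedthrough from the back-action noise to the measured quadratures, vanishing of the products C_meas A^k B_ba is equivalent to a vanishing transfer function.

```python
import numpy as np
from numpy.linalg import matrix_power


def is_bae(A, B_ba, C_meas, tol=1e-9):
    """Back-action evasion: the transfer function from the back-action quadratures
    (input matrix B_ba) to the measured output (output matrix C_meas) vanishes.
    With no direct feedthrough this is  C_meas A^k B_ba = 0  for k = 0, ..., n-1,
    i.e. the product of the observability and controllability matrices is zero."""
    n = A.shape[0]
    return all(np.allclose(C_meas @ matrix_power(A, k) @ B_ba, 0, atol=tol)
               for k in range(n))
```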
the idea of the third control goal , generation of a dfs , can be clearly seen from the work , which studies a quantum memory served by an atomic ensemble in a cavity .each atom has -type energy levels , constituted by two metastable ground states and an excited state .the state transition between and is naturally coupled to the cavity mode with strength ( denotes the number of atoms ) , while the transition is induced by a classical magnetic field with time - varying rabi frequency .the system variables are the polarization operator and the spin - wave operator , where is the collective lowering operator ; in a large ensemble limit , they can be well approximated by annihilation operators .consequently the system dynamics is given by = \left [ \begin{array}{ccc } -\kappa & ig\sqrt{n } & 0 \\ig\sqrt{n } & -i\delta & i\omega \\ 0 & i\omega^ * & 0 \\ \end{array } \right ] \left [ \begin{array}{c } \hat{a}_1 \\ \hat{a}_2 \\ \hat{a}_3 \\\end{array } \right ] - \left [ \begin{array}{c } \sqrt{2\kappa } \\ 0 \\ 0 \\ \end{array } \right ] \hat a , \nonumber \\ & & \hspace*{-1.2em } \hat{a}^{\rm out } = \sqrt{2\kappa}\hat{a}_1 + \hat a,\end{aligned}\ ] ] where denotes the cavity decay rate and is the detuning between the cavity center frequency and the transition frequency .this system works as a quantum memory in the following way .first , a state to be stored is carried by an appropriately shaped optical pulse on the input field , and it is transferred to the metastable state ; the rabi frequency is suitably designed throughout this writing process . in the storage stage , the classical magnetic field is turned off , i.e. .it is seen from eq . that the spin - wave operator is then completely decoupled from the fields and ; that is , constitutes a linear dfs , and ideally its state is perfectly preserved . in the language of systems theory, this dfs is uncontrollable w.r.t . and unobservable w.r.t . . note that is not a variable on the so - called decoherence - free _ subspace _ , which though has the same abbreviation .in general , if the system s hilbert space can be decomposed to and is free from external noise , then it is called the df subsystem and particularly when it is called the df subspace ; now we are dealing with the case where and live in and , respectively , while . for other examples of such an infinite dimensional dfs , see .in this paper , we study a general linear system having multi input and multi output fields ( it is called a mimo system ) .the first essential question is about which input and output fields should be used for feedback .we define the _type-1 control _ as a configuration where at most all the input and output fields can be used for this purpose .note that , if the system has single input - output channel such as the one shown in sec .ii - c ( ii ) , the control configuration must be of type-1 .figure [ siso general fb ] illustrates the general configuration of type-1 mf control .that is , at most all the plant s output fields can be measured , and the measurement results are then processed in a classical system ( controller ) that produces a control signal . 
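As a concrete check of the uncontrollable/unobservable characterization, the storage-stage DFS of the quantum-memory example above can be verified numerically, this time directly in the annihilation-operator (mode) form of the dynamics; the values of the cavity decay, coupling and detuning below are placeholders, and both tests pass precisely because the classical drive is switched off.

```python
import numpy as np
from numpy.linalg import matrix_power

kappa, g_rootN, Delta, Omega = 1.0, 2.0, 0.3, 0.0            # placeholders; Omega = 0 in storage
A = np.array([[-kappa,          1j * g_rootN,        0.0],
              [1j * g_rootN,   -1j * Delta,          1j * Omega],
              [0.0,             1j * np.conj(Omega), 0.0]])
B = np.array([[-np.sqrt(2 * kappa)], [0.0], [0.0]])           # drive by the input field
C = np.array([[np.sqrt(2 * kappa), 0.0, 0.0]])                # a_out = sqrt(2*kappa) a_1 + a_in
v = np.array([0.0, 0.0, 1.0])                                 # the spin-wave mode a_3

uncontrollable = all(np.allclose(v @ matrix_power(A, k) @ B, 0) for k in range(3))
unobservable   = all(np.allclose(C @ matrix_power(A, k) @ v, 0) for k in range(3))
print(uncontrollable, unobservable)                           # both True only when Omega = 0
```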
from the standpoint comparing mf and cf , we assume that the control is carried out by modulating the input probe fields , which can be physically implemented using an electric optical modulator on the optical field ; in the type-1 case , hence , at most all the plant s input fields can be modulated using the control signal .this section studies the type-1 mf control and shows the no - go theorems given in the left column of table i. as described above , the mf control is carried out by modulating the input probe fields .this mathematically means that the input field is replaced by , where ^\top ] as follows ; \hat x_e \nonumber \\ & & \hspace*{5em } \mbox { } + \left [ \begin{array}{c } \sigma c^\top \sigma \\b_k m_1\\ \end{array}\right]\hat{\cal w } , \\ & & \hspace*{-2em } \label{type 1 closed - loop output } y=[m_1c , m_1c_k ] \hat x_e + \hat{\cal q}.\end{aligned}\ ] ] hence , is the shot noise .equation can be expressed in terms of the quadratures and as : \hat x_e \nonumber \\ & & \hspace*{1em } + \left [ \begin{array}{c } \sigma c^\top \sigma m_1^\top \\ b_k \\\end{array}\right]\hat{\cal q } + \left [ \begin{array}{c } \sigma c^\top \sigma m_2^\top \\ 0 \\\end{array}\right]\hat{\cal p } , \end{aligned}\ ] ] due to .we aim to find a set of matrices that achieves the control goals described in sec .iii ; but as shown below , it is impossible to accomplish those tasks .suppose that bae holds for the closed - loop dynamics with output ; that is , the condition holds for this system , which is now .( equivalently , the transfer function of the closed - loop system satisfies =0 , \forall s ] .thus the contrapositive of this result yields the following theorem .* theorem 1 : * if the original plant system does not have the bae property , then , any type-1 mf control can not realize bae for the closed - loop system .first of all , let us consider the case where the closed - loop system and has a qnd variable .this should be purely quantum " , meaning that is composed of only the quantum variables ^\top ] . as described in eq ., this means , with and the controllability and observability matrices of the system and . to prove the no - go theorem , the following two facts are useful .first , means that \left [ \begin{array}{cc } a & \sigma c^\top \sigma c_k \\b_k m_1 c & a_k + b_k m_1 c_k\\ \end{array}\right]^k \left [ \begin{array}{c } \sigma c^\top \sigma \\ b_k m_1\\ \end{array}\right ] = 0,\ ] ] for all .it follows from a similar procedure as in the bae case that this is equivalent to , ; i.e. 
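The closed-loop matrices appearing in this proof can be assembled numerically, which makes Theorems 1-3 easy to test on concrete examples. The block structure below follows the closed-loop equations of this section; Sigma and Sigma_f denote the system and field symplectic matrices, and B = Sigma C^T Sigma_f is the plant input matrix.

```python
import numpy as np


def type1_mf_closed_loop(A, C, Sigma, Sigma_f, A_K, B_K, C_K, M1, M2):
    """Assemble the type-1 MF closed loop: plant (A, C) with B = Sigma C^T Sigma_f,
    classical controller (A_K, B_K, C_K), homodyne matrices (M1, M2).  Returns the
    matrices needed to test the kernel conditions of Theorems 1-3 numerically."""
    B = Sigma @ C.T @ Sigma_f
    nk = A_K.shape[0]
    A_e = np.block([[A,            B @ C_K],
                    [B_K @ M1 @ C, A_K + B_K @ M1 @ C_K]])
    B_q = np.vstack([B @ M1.T, B_K])                          # shot-noise quadratures
    B_p = np.vstack([B @ M2.T, np.zeros((nk, M2.shape[0]))])  # back-action quadratures
    C_y = np.hstack([M1 @ C, M1 @ C_K])                       # measured output
    return A_e, B_q, B_p, C_y


# Theorem 1, numerically: the product of the observability matrix of (A_e, C_y) and the
# controllability matrix of (A_e, B_p) can vanish only if the corresponding product
# already vanishes for the plant alone, for any choice of (A_K, B_K, C_K).
```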
with the controllability matrix of the _ original _ plant system and .second , is expressed by \left [ \begin{array}{cc } a & \sigma c^\top \sigma c_k \\ b_k m_1 c & a_k + b_k m_1 c_k \\ \end{array}\right]^k \left [ \begin{array}{c } v \\ 0 \\ \end{array}\right]=0\ ] ] for all .this is equivalent to , , meaning that for the original plant system .now we prove the theorem .suppose that the original plant system and does not have a qnd variable ; hence for any variable , the vector satisfies or for the original plant system .in particular , since the unobservability property does not depend on the choice of a specific coordinate , the latter condition is equivalently converted to .but as proven above , these two conditions are equivalent to or for the closed - loop system ; that is , the closed - loop system does not have a qnd variable of the form .thus the following result is obtained .* theorem 2 : * if the original plant system does not have a qnd variable , then , any type-1 mf control can not generate a qnd variable in the closed - loop system .finally we prove the no - go theorem for generating a dfs via the type-1 mf control .let us assume that the closed - loop dynamics with the output field \left [ \begin{array}{c } \hat x \\x_k \\\end{array}\right ] + \hat{\cal w}\ ] ] contains a dfs composed of purely quantum " variables of the form .then , it follows from the statement below eq . that and hold .as proven in the qnd case , the first condition equivalently leads to for the original plant system and .also in almost the same way we can prove that the second condition is equivalent to for the original plant system .these two conditions on mean that the original plant system and has a dfs , thus the contraposition yields the following theorem .* theorem 3 : * if the original plant system does not have a dfs , then , any type-1 mf control can not generate a dfs in the closed - loop system .in the type-1 case , it is assumed that at most all the plant s output fields can be used for feedback control and they are equally evaluated .for example , in the type-1 bae case , the ba noise must not appear in _ all _ the elements of .but it is sometimes more reasonable to give different roles to the output fields ; such a control schematic in the mf case is illustrated in fig .[ mimo general fb ] , which we call the _type-2 control _ configuration . in this case , at most all the components of can be used for feedback control , while those of are for evaluation ; that is , they will be measured to extract some information about the system or will be kept untouched for later use . for instance , we attempt to design a mf control based on the measurement of , so that the ba noise does not appear in the measurement output of .however , we will see that such a mf control strategy does not work to achieve any of the control goals .that is , in this section , the type-2 no - go theorems in table i will be proven . 
as in the case of type-1 control ,we study the situation where the feedback control is performed by modulating the input fields .the plant system driven by the modulated fields obeys the following dynamical equation : and are the vectors of control signals that represent the time - varying amplitude of the input fields and , respectively .note that in general the size of and need not to be equal .the output field is measured by a set of dyne detectors , which yield is the symplectic matrix , representing which quadratures of is measured .the measurement result is sent to a classical feedback controller of the form note that is allowed to contain the direct term from , i.e. ; but this modification does not change the results shown below , thus for simplicity we assume . combining all the above equations, we end up with the closed - loop dynamics of ^\top ] .the above result implies = \xi_{\hat{\cal w}_1\rightarrow z}^{(o)}[s]\hat { \cal w}_1[s ] + \xi_{\hat{\cal q}\rightarrow z}^{(o)}[s]\hat { \cal q}[s] ] , which leads to = \xi_{\hat{\cal w}_1\rightarrow y}^{(o)}[s]\hat { \cal w}_1[s ] + \xi_{\hat{\cal q}\rightarrow y}^{(o)}[s]\hat { \cal q}[s] ] and ] does not depend on the matrix , representing which quadratures of are measured .this means that the above equality holds for other choice of measurement , say .thus we have \sigma \big(\xi_{\hat{\cal w}_1\rightarrow z}^{(o)}\big)^\top = \left [ \begin{array}{c } m \\\tilde{m } \\\end{array}\right ] \xi_{\hat{\cal w}_1\rightarrow \hat{\cal w}_1^{\rm out}}^{(o ) } \sigma \big(\xi_{\hat{\cal w}_1\rightarrow z}^{(o)}\big)^\top \nonumber \\ & & \hspace*{9.7em } = 0. \nonumber \end{aligned}\ ] ] is chosen so that ] is also invertible , =0 ] , , this means that bae holds for the original plant system .consequently , we have the following result : * theorem 4 : * if the original plant system does not have the bae property , then , any type-2 mf control can not realize bae for the closed - loop system .the idea for the proof is the same as that taken in the type-1 case .again , a qnd variable is of the form with ^\top ] and it generates the measurement outputs ^\top ] , , ] , and it has two input fields ^\top ] .the controller s system matrices are chosen so that they satisfy ,~~ \sigma g_k = \left [ \begin{array}{cc } & -\omega\\ \omega & \\ \end{array}\right],\ ] ] which leads to ,~~ g_k = \left [ \begin{array}{cc } -\omega & 0 \\ 0 & -\omega \\ \end{array}\right].\ ] ] physical implementation of the controller specified by these matrices will be discussed in the end of this subsection .together with the term , which directly acts on , the dynamics of the closed - loop system is given by where ,~ b_e = c_e^\top , \nonumber \\ & & \hspace*{-1em } c_e = \sqrt{2\gamma } \left [ \begin{array}{cc|cc|cc } 0 & & 1 & & 0 & \\ & 0 & & 1 & & 0 \\ \end{array}\right],~~ b_f=[0 , 1 , 0 , 0 , 0 , 0]^\top.\end{aligned}\ ] ] since does not contain any information about , we need to measure , implying that the output signal is given by with ] , .together with the fact that are independent from other variables , this means that they are essentially classical variables which are detectable from the output field .in general , if a quantum system contains a subsystem whose variables are all commutative , then it is called a _ classical subsystem_ ; thus we now found that the cf - controlled opto - mechanical system contains a classical subsystem .to show that the type-1 cf control has capability of generating a dfs , let us return to the general closed - loop system 
.suppose now that the original plant system and does not have a dfs , and further that a quantum controller with parameters and can be engineered .hence , the plant and the controller have the same number of modes. takes the following form : ,~ b_e = \left [ \begin{array}{c } \sigma c^\top \sigma \\ \sigma c^\top \sigma \\\end{array}\right ] , \nonumber \\ & & \hspace*{-0.5em } c_e = [ c,~c].\end{aligned}\ ] ] now we prove that this system contains a dfs , i.e. a subsystem that is uncontrollable w.r.t . and unobservable w.r.t . . first , for the vector ^\top ] the controllability matrix . also , \left [ \begin{array}{c } ( \sigma g)^k v \\-(\sigma g)^k v \\ \end{array}\right ] = 0 , ~~\forall k\geq 0\ ] ] holds , implying with the observability matrix ^\top ] .thus this dfs is composed of variables .in this section we study the type-2 cf control for realizing bae , qnd , and dfs . as in the type-1 case , a specific system achieving each control goal will be shown . and detuning .hwp : half wave plate ., width=211 ] to demonstrate that the type-2 cf is capable of realizing bae , we here study the michelson s interferometer as a plant system , which is described in sec .ii - c ( iii ) with fig .[ examples ] ( c ) .the system is composed of two oscillators driven by an unknown force along opposite directions .the oscillators dynamical motion is described by eq . , which is specified by the following system matrices : and ,~ c_2 = \sqrt{\lambda } \left [ \begin{array}{cc|cc } & 0 & & 0 \\ 1 & & -1 & \end{array}\right].\ ] ] this system works as a sensor for detecting the force ; but as explained before , the noise power of the output signal is bounded from below by the sql .hence the purpose here is to design a cf controller that realizes bae and as a result beats the sql .actually , the plant system has two input - output ports , hence it can be treated within the type-2 cf control framework .here we consider the cf configuration described in the previous subsection .that is , and are optically connected through cf . in particular , as a cf controller , let us take a single input - output optical cavity , whose dynamical equation is specified by the following matrices : ,~~ c_k = \sqrt{2\epsilon } \left [ \begin{array}{cc } 1 & 0 \\ 0 & 1 \end{array}\right],~~ s = \left [ \begin{array}{cc } 0 & 1 \\ -1 & 0 \end{array}\right],\ ] ] where is the coupling constant between the field and the cavity mode .later we will set , which thus represents the detuning . represents a phase shift acting on the input optical field in the form .thus the closed - loop system is a 3-modes single input - output linear system , depicted in fig .[ gw with cf ] . 
with the above setup, the closed - loop system takes the following form : , \nonumber \\ & & \hspace*{-1.1em } b_e = \sigma c_e^\top \sigma = [ b_1,~b_2 ] = \left [ \begin{array}{cc } 0 & 0\\ \sqrt{\lambda } & -\sqrt{\lambda } \\ \hline 0 & 0 \\ -\sqrt{\lambda } & -\sqrt{\lambda } \\\hline -\sqrt{2\epsilon } & 0 \\ 0 & -\sqrt{2\epsilon } \\ \end{array}\right ] , \nonumber \\ & & \hspace*{-1.1em } b_f = \left [ \begin{array}{cc|cc|cc } 0 & 1 & 0 & -1 & 0 & 0 \\ \end{array}\right]^\top , \nonumber \\ & & \hspace*{-1.1em } c_e = \left [ \begin{array}{c } c_1^\top \\ c_2^\top \end{array}\right ] = \left [ \begin{array}{cc|cc|cc } \sqrt{\lambda } & 0 & \sqrt{\lambda } & 0 & \sqrt{2\epsilon } & 0 \\ \sqrt{\lambda } & 0 & -\sqrt{\lambda } & 0 & 0 & \sqrt{2\epsilon } \\ \end{array}\right ] , \nonumber \\ & & \hspace*{-1.1em } \hat{w}_1 ' = s\hat{w}_1=[\hat p_1 , -\hat q_1]^\top.\end{aligned}\ ] ] let us seek the parameters that achieve bae .first , it is easy to see , or equivalently ; that is , does not contain any information about .thus we measure implying that is the shot noise while is the ba noise .thus the parameters should be chosen so that the bae condition i.e. is satisfied , which is carried out by examining the equivalent condition : . the case is already satisfied . to see the case , let us focus on ,~~ a_e^2 b_1 = \left [ \begin{array}{c } 0 \\ -(\omega^2 + 2\epsilon^2)\sqrt{\lambda } \\\hline 0 \\ ( \omega^2+\epsilon^2/2)\sqrt{\lambda } \\\hline ( \alpha \beta+\epsilon^2)\sqrt{2\epsilon } \\ 0 \\\end{array}\right],\ ] ] where the proportional part to are subtracted .then , the condition is satisfied if we impose and , which yield let us especially take the parameter , implying that the cf controller is an optical cavity with negative detuning .the parameters are then explicitly given by when , they are approximated by and , respectively .actually under the condition , the output is described in the laplace domain by = -\frac{s^3 - \epsilon s^2 + \omega^2 s + \epsilon(\omega^2 + 2\epsilon^2 ) } { ( s^2+\omega^2)(s+\epsilon)}\hat{q}_1[s ] \nonumber \\ & & \hspace*{4.5em } \mbox { } + \frac{2\sqrt{\lambda}}{m(s^2+\omega^2 ) } \hat{f}[s],\end{aligned}\ ] ] which is free from the ba noise ] .then , under the assumption , the normalized signal is given by = \frac{y[i\omega]}{2\sqrt{\lambda}l } = \hat{g}[i\omega ] - \frac{i\omega^3 - \epsilon\omega^2 - 2\epsilon^3 } { 2\sqrt{\lambda}l\omega^2 ( i\omega + \epsilon ) } \hat{q}_1[i\omega].\ ] ] using we obtain ={\langle{|\tilde{y}-\hat{g}|^2}\rangle } = \big ( \frac{\lambda}{m^2l^2\omega^4 } + \frac{1}{4\lambda l^2 } \big ) { \langle{|\hat{q}_1|^2}\rangle},\ ] ] which has the same form as that of the non - controlled scheme in eq ., except that the ba noise is replaced by the shot noise . therefore , by injecting a -squeezed light field into the _ first _ input port ( i.e. 
the bright port ) , we can realize a broadband noise reduction below the sql in the output noise power .it should be noted again that , without squeezing of the input field , the output noise power of the cf - controlled interferometer having the bae property reproduces the sql .this means that achieving bae itself does not necessarily result in the increased force sensitivity ; in fact we need to combine the bae property and squeezing of the input .note that , while we have found a cf controller achieving bae for high - precision detection of below the sql , the result obtained here does not mean to emphasize that the proposed schematic is an alternative configuration for gravitational wave detection .actually , the schematic is very different from several effective methods , particularly in that the second output port is not anymore a dark port .hence the amplitude component must be subtracted from the output field , which though can not be carried out perfectly ; thus the above - described ideal detection of below the sql would be a difficult task in a practical situation .rather the main purpose here is to prove the capability of a type-2 cf controller for realizing bae .also , as demonstrated above , it is remarkable that the problem for designing bae can be solved , by a system theoretic approach based on the controllability / observability notion ; this approach might shed a new light on the engineering problems for gravitational wave detection .we here see that the closed - loop system studied in the previous subsection contains qnd variables .note that the original interferometer does not have a qnd variable .first let us calculate the controllability matrix ] is the cavity mode quadratures , is the control signal representing the amplitude modulation , and is the coupling strength between the cavity and the probe field . 
note that this modulation effect does not appear in the output .the direct feedback considered in is of the form , which enables us to modify the system dynamics so that evolves in time with the following linear equation : = -\sqrt{\kappa } \left [ \begin{array}{c } 0 \\ 1 \\\end{array}\right ] \hat p,~~~~ y = \sqrt{\kappa } \hat q + \hat q.\ ] ] clearly , is not disturbed by the noise while it appears in the output signal , implying that we can measure without disturbing it .that is , is a qnd variable .the above result means that the type-1 no - go theorem for qnd does not hold , if an ideal direct mf can be employed .however , we should note a critical assumption that an ideal direct mf controller has infinite bandwidth .hence let us further examine a practical case where the feedback circuit has a finite bandwidth and its dynamics is given by where represents the time constant and is the internal variable of the circuit .actually the transfer function from to is given by =\sqrt{\kappa}/(1+\tau s) ] , in which more than half the power of the signal is allowed to pass through the circuit .this clearly shows that the mf is only available in the infinite bandwidth limit .we can also see the finite bandwidth effect on the ideal qnd variable as follows ; the combined system dynamics of the cavity and the circuit is given by = \left [ \begin{array}{cc } -\kappa & \sqrt{\kappa } \\\sqrt{\kappa}/\tau & -1/\tau \\ \end{array}\right ] \left [ \begin{array}{c } \hat q \\x_k \\\end{array}\right ] + \left [ \begin{array}{c } -\sqrt{\kappa } \\ 1/\tau\\ \end{array}\right]\hat q,\ ] ] which yields = \frac{-\sqrt{\kappa } \tau}{(\kappa \tau+1)+\tau s}.\ ] ] thus , actually in the ideal limit , the variable becomes qnd . in other words ,a practical direct mf does not generate a qnd variable .note that controlling via the field modulation together with the finite - bandwidth mf controller is exactly the type - i mf , meaning that the no - go theorem is applied to this practical case .we should rather have an understanding that the controller is an effective mf realizing an approximated qnd variable in the scenario discussed in sec .viii .r. vijay , c. macklin , d. h. slichter , s. j. weber , k. w. murch , r. naik , a. n. korotkov , and i. siddiqi , stabilizing rabi oscillations in a superconducting qubit using quantum feedback , nature * 490 * , 77 ( 2012 ) j. kerckhoff , h. i. nurdin , d. pavlichin , and h. mabuchi , designing quantum memories with embedded control : photonic circuits for autonomous quantum error correction , phys .* 105 * , 040502 ( 2010 ) o. crisafulli , n. tezak , d. b. s. soh , m. a. armen , and h. mabuchi , squeezed light in an optical parametric oscillator network with coherent feedback quantum control , optics express * 21 * -15 , 18372 ( 2013 ) c. m. caves , k. s. thorne , r. w. p. drever , v. d. sandberg , and m. zimmermann , on the measurement of a weak classical force coupled to a quantum - mechanical oscillator , i , issues of principle , rev .phys . * 52 * , 341 ( 1980 ) l. bouten , j. k. stockton , g. sarma , and h. mabuchi , scattering of polarized laser light by an atomic gas in free space : a quantum stochastic differential equation approach phys .a * 75 * , 052111 ( 2007 )
|
to control a quantum system via feedback , we generally have two options in choosing the control scheme . one is coherent feedback , which feeds the output field of the system , through a fully quantum device , back to manipulate the system without involving any measurement process . the other is measurement - based feedback , which measures the output field and performs a real - time manipulation on the system based on the measurement results . both schemes have advantages and disadvantages , depending on the system and the control goal , hence their comparison in several situations is important . this paper considers a general open linear quantum system with the following specific control goals : back - action evasion ( bae ) , generation of a quantum non - demolition ( qnd ) variable , and generation of a decoherence - free subsystem ( dfs ) , all of which have important roles in quantum information science . some no - go theorems are then proven , clarifying that those goals can not be achieved by any measurement - based feedback control . on the other hand , it is shown that , for each control goal , there exists a coherent feedback controller accomplishing the task . the key idea behind all the results is a system - theoretic characterization of bae , qnd , and dfs in terms of controllability and observability properties or transfer functions of linear systems , consistent with their standard definitions .
|
floating point arithmetic is based on the idea that at each step of a computation , a number is rounded to a prescribed precision , which in standard ieee arithmetic is 53 bits or about 16 digits . by rounding at every step in this fashion ,one eliminates the combinatorial explosion of the lengths of numerators and denominators that would occur in exact rational arithmetic : all numbers are represented to the same relative accuracy and require the same storage .it is ultimately for this reason that virtually all computational science is carried out in floating point arithmetic .chebfun and other related software systems that have arisen in the past fifteen years are based on implementing an analogous principle for functions as opposed to numbers .if is a lipschitz continuous function on ] other than ] .the plot appears in figure [ functionf ] . ....> > f = chebfun(@(x ) 3*exp(-1./(x+1))-(x+1 ) ) ; > >plot(f ) > >roots(f ) ans = -1.000000000000000-0.338683188672833 0.615348950784159 > > max(f ) ans = 0.108671573231256 .... above , the function of ( [ examplefun ] ) represented in chebfun by a polynomial of degree .below , the error at equally spaced points in ].,title="fig:",width=302 ] -1.5em figure [ functionf ] also shows the error in the chebfun approximation to this function at equally spaced points in ] . if , the constructor is `` happy '' and coeffs should be chopped to length cutoff .if , the constructor is `` unhappy '' and a longer coefficient sequence is needed .note that matlab s convention of beginning indexing at 1 rather than 0 is a potential source of confusion .the sequence coeffs is indexed from 1 to , so the description below is framed in those terms . in the standard applicationthis will correspond to a chebyshev series with coefficients from degree to , and the final index cutoff to be retained will correspond to degree .standardchop proceeds in three steps ._ step .compute the upper envelope of coeffs and normalize . _the input sequence coeffs is replaced by the nonnegative , monotonically nonincreasing sequence .( note that the use of the absolute value makes the chopping algorithm applicable to complex functions as well as real ones . )if , this sequence is then normalized by division by to give a nonnegative , monotonically nonincreasing sequence whose first element is .the output of standardchop will depend only on envelope , so this first step entails a substantive algorithmic decision : to assess a sequence only on its rate of decay , ignoring any oscillations along the way ._ step .search for a plateau . _the algorithm now searches for a sufficiently long , sufficiently flat plateau of sufficiently small coefficients .if no such plateau is found , the construction process is unhappy : cutoff is set to and the algorithm terminates. a plateau can be as high as if it is perfectly flat but need not be flat at all if it is as low as .precisely , a plateau is defined as a stretch of coefficients with and with the property the integer plateaupoint is set to , where is the first point that satisfies these conditions ._ step .chop the sequence near the beginning of the plateau . 
_having identified an index plateaupoint that is followed by a plateau , one might think that the code would simply set and terminate .such a procedure works well most of the time .however , exploring its application to hundreds of functions reveals some examples where plateaupoint does nt catch the true `` elbow '' of the envelope curve .sometimes , it is clear to the eye that a little more accuracy could be achieved at low cost by extending the sequence a little further ( algorithmic aim 7 in the list of the last section ) .other times , if a plateau is detected just below the highest allowed level , it is clear to the eye that the plateau actually begins at an earlier point slightly higher than . to adjust for these cases , step 3 sets cutoff not simply to plateaupoint , but to the index just before the lowest point of the envelope curve as measured against a straight line on a log scale titled downward with a slope corresponding to a decrease over the range by the factor .figure [ stepsfig ] gives the idea , and for precise details , see the code listing in the appendix .sketch of the chebfun construction process for .( a ) after rejecting the - , - , and -point chebyshev grids , chebfun computes coefficients on the -point grid .( b ) in step 1 of standardchop , the monotonically nonincreasing normalized envelope of the coefficients is constructed , and the plateau is found to be long , low , and level enough for chopping .step 2 picks as the last point before the plateau , marked by a triangle at position 70 on the axis ( since the corresponding degree is 70 ) .( c ) step 3 finds the lowest coefficient relative to a line tilted slightly downward , giving , marked by a circle at position 74 . for this functionthe net effect of extending the series through rather than is an improvement in accuracy by about one bit . ]we now present four figures each containing four plots , for a total of sixteen examples selected to illustrate various issues of chebfun construction .we present these in the context of chebfuns constructed from scratch , as in the example ` f = chebfun(@(x ) exp(x)./(1+x.^2 ) ) ` , but approximately the same results would arise in computation with functions as in ` x = chebfun(@(x ) x ) ` , ` f = exp(x)./(1+x.^2 ) ` , as discussed in section 8 . in each casewe use the default interval ] , for example , consists of coefficients multiplying exponentials .the decision of when a series is `` happy '' and where to chop it is made by standardchop applied to the sequence , ( repeated twice ) , ( repeated twice ) , and so on .the reason for this duplication of each coefficient is so that fourier series will be treated by essentially the same parameters as chebyshev series , with 17 values being the minimal number for happiness .an example of fourier coefficients of a trigfun is shown in figure [ doublelengthtrig ] , which revisits the function of figure [ doublelength ] now in fourier mode .repetition of figure [ doublelength ] for the function , but now with chebfun called with the trig flag , producing a fourier rather than chebyshev representation . ] -1em one of the most important capabilities of chebfun for users is the solution of ordinary differential equations ( odes ) , both boundary - value problems ( bvps ) and initial - value problems ( ivps ) , which can be linear or nonlinear . 
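read together , steps 1 , 2 , and 3 fit in a few dozen lines . the following python sketch is an unofficial paraphrase of the rule ( the reference matlab implementation used by chebfun is reproduced in the appendix below ) ; it follows the same envelope construction , plateau test , and tilted - line minimum , up to matlab / python indexing differences .

....
import numpy as np

def standard_chop(coeffs, tol=2.0**-52):
    # returns the last index (1-based) of coeffs to keep, or len(coeffs)
    # if no satisfactory chopping point is found ("unhappy")
    n = len(coeffs)
    cutoff = n
    if n < 17:
        return cutoff
    # step 1: monotonically nonincreasing envelope, normalized to start at 1
    env = np.maximum.accumulate(np.abs(np.asarray(coeffs))[::-1])[::-1]
    if env[0] == 0:
        return 1                               # all coefficients vanish
    env = env / env[0]
    # step 2: scan for a plateau
    for j in range(2, n + 1):                  # 1-based index j
        j2 = int(1.25 * j + 5 + 0.5)           # round half away from zero
        if j2 > n:
            return cutoff                      # no plateau: unhappy
        e1, e2 = env[j - 1], env[j2 - 1]
        r = 3 * (1 - np.log(e1) / np.log(tol)) if e1 > 0 else 0.0
        if e1 == 0 or e2 / e1 > r:
            plateau_point = j - 1              # 1-based
            break
    # step 3: chop at the minimum of the envelope plus a tilted line
    if env[plateau_point - 1] == 0:
        return plateau_point
    j3 = int(np.sum(env >= tol**(7.0 / 6.0)))
    if j3 < j2:
        j2 = j3 + 1
        env = env.copy()
        env[j2 - 1] = tol**(7.0 / 6.0)
    cc = np.log10(env[:j2]) + np.linspace(0, (-1.0 / 3.0) * np.log10(tol), j2)
    d = int(np.argmin(cc)) + 1                 # 1-based position of the minimum
    return max(d - 1, 1)

# example from the comment block of the matlab listing:
print(standard_chop(10.0 ** -np.arange(1, 51)))   # 18
....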
to solve a linear bvp, chebfun discretizes the problem on chebyshev grids of sizes approximately and checks for happiness on each grid .apart from the modified grid sequence , which is based on half - integer as well as integer powers of 2 , this process differs from standard construction in two ways .one is that this is not just a matter of `` sampling '' a fixed function , since the value at a grid point such as , for example , will change as the gridding is refined . in chebfun terminology , this means bvps are constructed in a `` resampling '' mode , with function values already obtained on a coarse grid recomputed on each finer grid .the other difference is that for solving bvps , standardchop is called with , where the parameter bvptol is by default set to rather than machine epsilon .one reason for this is that solution of bvps on fine grids is expensive , with complexity on a grid of points , so pushing to full machine precision may be slow .in addition , the matrices involved in the solution process are often ill - conditioned , so setting would sometimes be problematic .figure [ figode ] illustrates the chebfun solution of for ] .( b ) chebyshev coefficients for standard construction with the default tolerance .( c ) more accuracy achieved by tightening the tolerance to .,width=307 ] -1.5em if a bvp is nonlinear , chebfun embeds the construction process just described in a newton or damped - newton iteration , with the necessary derivatives formulated in a continuous setting as frchet derivative operators constructed by automatic differentiation .the correction functions will eventually become very small , and it is necessary for the chebyshev series involved in their construction to be judged relative to the scale of the overall function , not that of the correction .accordingly , standardchop is called with its value of tol increased by a factor analogous to vscaleglobal / vscalelocal , as described in 8.1 .the newton iteration is exited when its estimated error falls below .ivps in chebfun are solved differently from bvps , by marching rather than a global spectral discretization .this solution process has nothing a priori to do with chebyshev grids , and it is carried out by standard matlab ode codes : ode113 by default , which can be changed e.g. to ode15s for a stiff problem . as with bvps , chebfun aims by default for about 12 digits rather than 16 . to be precise , ode113 by default is called with abstol = 1e5*macheps and reltol = 1e2*macheps ( see footnote 2 ) . the computed function that results is then converted to a chebfun by a call to standardchop with tol set to the maximum of reltol and abstol / vscale .a quasimatrix is a chebfun with more than one column , that is , a collection of several functions defined on the same interval $ ] . by default, chebfun constructs each column of a quasimatrix independently , calling standardchop with as usual . for some applications , however , it is appropriate for the columns to be constructed relative to a single global scale , for example if they correspond to pieces of a decomposition of a single function . 
for such applications a user can specify the flag ` ' globaltol ' ` , andthen standardchop will be called with tol appropriately adjusted for the various columns as described in 8.1 and 8.3 .the next subsection gives an example .chebyshev coefficients of the rows of the chebfun2 of , corresponding to the part of the bivariate representation .the curves for the 12 rows begin at different heights but end at approximately the same height , reflecting calls to standardchop with the globaltol flag set so that tolerances are adjusted to a global scale.,width=297 ] -1.6em finally we mention that chebfun can also compute with smooth functions in two dimensions , since the release of chebfun2 in 2012 , and soon in 3d with the upcoming release of chebfun3 . in both cases ,functions are represented by low - rank approximations constructed from outer products of 1d functions , which in turn are represented by the usual chebfun chebyshev series , or fourier series in periodic directions .these series are constructed with standardchop in the ` ' globaltol ' ` mode described above .to illustrate , figure [ figquasi ] plots the chebyshev coefficients of the 12 rows representing the -dependence of the chebfun2 representation of .although chebfun has been chopping chebyshev series for more than a decade , and more recently also fourier series , the details have been inconsistent and ad hoc until lately .this paper has described the new algorithm standardchop , which unifies these processes with a clear structure .other related projects for numerical computation with functions face the same challenge of `` rounding '' of functions .a list of chebfun - related projects can be found under the `` about '' tab at www.chebfun.org , currently including approxfun.jl , pychebfun , fourfun , chebint , pacal , sincfun , a collection of lisp codes by fateman , libchebfun , and rktoolbox .the introduction of standardchop has made the foundations of the chebfun project more secure .at the same time , we reiterate that we make no claim that this algorithm is optimal , or even represents the only reasonable approach to this problem . just as it took decades for floating - point arithmetic of real numbers to reach a reasonably settled state with the introduction of the ieee floating point standard in the mid-1980s , perhaps it will take a long time for consensus to emerge as to the best ways to realize the analogue of floating - point arithmetic for numerical computation with functions .we have benefitted from extensive discussions with others in the chebfun team including anthony austin , sgeir birkisson , toby driscoll , nick hale , behnam hashemi , mohsin javed , hadrien montanelli , mikael slevinsky , alex townsend , grady wright , and kuan xu .we also acknowledge insightful suggestions from folkmar bornemann of tu munich .this work was supported by the european research council under the european union s seventh framework programme ( fp7/20072013)/erc grant agreement no .291068 .the views expressed in this article are not those of the erc or the european commission , and the european union is not liable for any use that may be made of the information contained here ..... 
function cutoff = standardchop(coeffs , tol ) % standardchop a sequence chopping rule of " standard " ( as opposed to " loose " or % " strict " ) type , that is , with an input tolerance tol that is applied with some % flexibility .this code is used in all parts of chebfun that make chopping % decisions , including chebfun construction ( chebtech , trigtech ) , solution of % ode bvps ( solvebvp ) , solution of ode ivps ( odesol ) , simplification of chebfuns % ( simplify ) , and chebfun2 .see j. l. aurentz and l. n. trefethen , " chopping a % chebyshev series " , arxiv , december 2015 .% % input : % % coeffs a nonempty row or column vector of real or complex numbers % which typically will be chebyshev or fourier coefficients .% % tol a number in ( 0,1 ) representing a target relative accuracy .% tol will typically will be set to the chebfun eps parameter , % sometimes multiplied by a factor such as vglobal / vlocal in % construction of local pieces of global chebfuns .% default value : machine epsilon ( matlab eps ) .% % output : % % cutoff a positive integer .% if cutoff = = length(coeffs ) , then we are " not happy " : % a satisfactory chopping point has not been found .% if cutoff < length(coeffs ) , we are " happy " and cutoff % represents the last index of coeffs that should be retained .% % examples : % % coeffs = 10.^-(1:50 ) ; random = cos((1:50).^2 ) ; % standardchop(coeffs ) % = 18 % standardchop(coeffs + 1e-16*random ) % = 15 % standardchop(coeffs + 1e-13*random ) % = 13 % standardchop(coeffs + 1e-10*random ) % = 50 % standardchop(coeffs + 1e-10*random , 1e-10 ) % = 10 % jared aurentz and nick trefethen , july 2015 .% % copyright 2015 by the university of oxford and the chebfun developers .% see http://www.chebfun.org/ for chebfun information .% standardchop normally chops coeffs at a point beyond which it is smaller than % tol^(2/3 ) .coeffs will never be chopped unless it is of length at least 17 and % falls at least below tol^(1/3 ) .it will always be chopped if it has a long % enough final segment below tol , and the final entry coeffs(cutoff ) will never % be smaller than tol^(7/6 ) .all these statements are relative to % max(abs(coeffs ) ) and assume cutoff > 1 .these parameters result from % extensive experimentation involving functions such as those presented in % the paper cited above .they are not derived from first principles and % there is no claim that they are optimal .% make sure coeffs has length at least 17 : n = length(coeffs ) ; cutoff = n ; if ( n < 17 ) return end % step 1 : convert coeffs to a new monotonically nonincreasing % vector envelope normalized to begin with the value 1 .% step 2 : scan envelope for a value plateaupoint , the first point j-1 , if any , % that is followed by a plateau .a plateau is a stretch of coefficients % envelope(j), ... ,envelope(j2 ) , j2 = round(1.25*j+5 ) < = n , with the property % that envelope(j2)/envelope(j ) >r. the number r ranges from r = 0 if % envelope(j ) = tol up to r = 1 if envelope(j ) = tol^(2/3 ) .thus a potential % plateau whose starting value is envelope(j ) ~ tol^(2/3 ) has to be perfectly % flat to count , whereas with envelope(j ) ~ tol it does n't have to be flat at % all . if a plateau point is found , then we know we are going to chop the % vector , but the precise chopping point cutoff still remains to be determined % in step 3 . 
for j= 2:n j2 = round(1.25*j + 5 ) ; if ( j2 > n ) % there is no plateau : exit return end e1 = envelope(j ) ; e2 = envelope(j2 ) ; r = 3*(1 - log(e1)/log(tol ) ) ; plateau = ( e1 = = 0 ) | ( e2/e1 > r ) ; if ( plateau ) % a plateau has been found : go to step 3 plateaupoint = j - 1 ; break end end % step 3 : fix cutoff at a point where envelope , plus a linear function % included to bias the result towards the left end , is minimal .% % some explanation is needed here .one might imagine that if a plateau is % found , then one should simply set cutoff = plateaupoint and be done , without % the need for a step 3 . however , sometimes cutoff should be smaller or larger % than plateaupoint , and that is what step 3 achieves .% % cutoff should be smaller than plateaupoint if the last few coefficients made % negligible improvement but just managed to bring the vector envelope below the % level tol^(2/3 ) , above which no plateau will ever be detected .this part of % the code is important for avoiding situations where a coefficient vector is % chopped at a point that looks " obviously wrong " with plotcoeffs .% % cutoff should be larger than plateaupoint if , although a plateau has been % found , one can nevertheless reduce the amplitude of the coefficients a good % deal further by taking more of them .this will happen most often when a % plateau is detected at an amplitude close to tol , because in this case , the % " plateau " need not be very flat .this part of the code is important to % getting an extra digit or two beyond the minimal prescribed accuracy when it % is easy to do so .if ( envelope(plateaupoint ) = = 0 ) cutoff = plateaupoint ; else j3 = sum(envelope > = tol^(7/6 ) ) ; if ( j3 < j2 ) j2 = j3 + 1 ; envelope(j2 ) = tol^(7/6 ) ; end cc = log10(envelope(1:j2 ) ) ; cc = cc ( : ) ; cc = cc + linspace(0 , ( -1/3)*log10(tol ) , j2 ) ' ; [ ~ , d ] = min(cc ) ; cutoff = max(d - 1 , 1 ) ; end k. poppe and r. cools , chebint : a matlab / octave toolbox for fast multivariate integration and interpolation based on chebyshev approximations over hypercubes , _ acm trans . math ._ 40 ( 2013 ) , 2:12:13 .
|
chebfun and related software projects for numerical computing with functions are based on the idea that at each step of a computation , a function defined on an interval [ a , b ] is `` rounded '' to a prescribed precision by constructing a chebyshev series and chopping it at an appropriate point . designing a chopping algorithm with the right properties proves to be a surprisingly complex and interesting problem . we describe the chopping algorithm introduced in chebfun version 5.3 in 2015 after many years of discussion and the considerations that led to this design .
|
robustness is one of the key issues for network maintenance and design .the representation of complex systems has been limited to single networks for a long time .in many cases , however , coupling between several networks takes place .an important case is that of interdependency where there are two kinds of links : connectivity and dependency links .an example of interdependent networks is the ensemble of the internet and the power supply grid where telecommunication is used to control power plants and electric power is needed to supply communication devices .connectivity links model the relation of the entities within the same sector , spanning in the above example a power supply network and a telecommunication network .dependency links depict the basic supplies an entity depends on which are supplied by entities in the other network .if a supplier fails its dependent nodes fail as well .the system is viable if a giant component of interconnected units exists in both networks . in the 28 september 2003 blackout in italy it came to evidence that the interdependency of the two networks makes them more vulnerable than ever thought before .similar relations occur in the economics between banks and firms or funds .banks are related through interbank loans , firms through supply chains and the interdependence comes from loans and securities .inappropriate asset proportions can also lead to global avalanches as seen in the subprime mortgage crisis . interconnecting similar subsystems used to increase capacitywas shown beneficial as long as it does not open pathways to cascades . however , in interdependent networks , the aspect of robustness was considered with the conclusion that broadening the degree distribution of the initial networks enhances vulnerability . a cost - intensive intervention to strengthen robustness is to upgrade nodes to be autonomous on some resources .because failures propagate rapidly in infrastructure networks , they can not be stopped by installing backup devices during the spreading of the damage .but rather they require already existing systems .after the cascade of failures , damaged devices or elements can be replaced by new , functioning ones _ identical _ to the originals .in contrast to engineered systems , social or economic networks are highly responsive and may react quickly .when a failure occurs considerable effort is made to reorganize the network and rearrange the load of failing elements among functioning ones .the role of the failing entities is taken over by _ similar _ participants .such processes can be modeled by healing , i.e. , substituting some of the failed elements by new ones .the timescale of an economic crisis is wide enough for the network to completely restructure itself .so far such mechanisms have only been studied for simple networks . herewe extend the original model of cascading failures of interdependent networks .after each removal , the healing process attempts to bypass the removed node with a new connectivity link ( see fig . [fig : cascade_schema ] ) . in this paper , we demonstrate how healing acts on interdependent networks ._ a ) _ failures , represented by red dots , affect the nodes one by one in a random order .whenever a node fails , its counterpart , that is , the node in the other network which depends on it , fails as well . 
in both networks ,only the largest connected component ( lcc ) survives .this constraint can cause further nodes to fail in both networks , which trigger further shrinking of the lcc , and so on , illustrated by the shaded areas . _b ) _ the neighbors of a failing node try to heal the network , such that two functioning neighbors of a removed node establish a connectivity link with probability . ] the outline of the paper is as follows . in sec .[ sec : model ] we define the node failure process in a dynamic way .we introduce initial failures one by one to be able to apply healing at every failure event .then we relate the original version of cascading failures to our model as a special case and give formulas for comparing the order parameter of the two models .the scaling properties of the healing are explained along with the numeric results in sec .[ sec : scaling ] . in sec .[ sec : cascades ] we discuss the properties of the cascades with microscopic insight to the model .finally we conclude our findings in sec .[ sec : conclusion ] .in the standard model of interdependent networks the computer - generated model - system is built up of two topologically identical networks and , e.g. , square lattices of size , where each node has _ connectivity _links within the same network . in addition , _ dependency _ links couple between the networks , which are bidirectional one - to - one relationships connecting randomly selected pairs of nodes from the two networks .if any of the nodes fails its dependent pair fails too .a node in any network can function only if it is connected to the largest connected component of that network the node which it depends on is also functional , otherwise it fails , i.e. , it is removed from the network .the existence of a macroscopic connected component in a single network is treated by percolation theory . in the usual case ,for a lattice it describes a second - order phase transition between the phases with and without the existence of a giant component .adding interdependency allows cascades of failures to propagate between the two networks .the threshold the network can survive without collapse decreases considerably in this setting .the collapse due to cascades was shown to be a first order transition if the dependency links have unlimited range while the transition is of second order if the range is less than a critical length . moreover, the first order transition has a hybrid character with scaling on one of its sides .part of network of a simulated system at at _ a ) _ no healing ( ) _ b ) _ below the critical healing ( , the average degree stays below ) and _ c ) _ slightly above the critical healing ( ) . _d ) _ this latter system is also represented at where one can observe that the nodes get more and more connected and the healing process establishes links between distant nodes . ]as mentioned in the introduction we first introduce a dynamic process on the interdependent network model . in the setting of two interdependent networks of general topology this dynamic process consists of the repetition of attacks and relaxations to a rest via cascades .( see fig .[ fig : cascade_schema ] . 
)let us suppose that failures affect the nodes one by one in a random order which defines a timeline .one time step is identified with the external attack of one node .time is measured by the number of time steps normalized by for systems of different sizes to be comparable : the externally introduced failure in network may separate the largest connected component ( lcc ) into two or more parts where only the largest one survives .all the failed nodes have dependency connections to nodes of the network causing their failure .again , the lcc of may get fragmented and only the largest part survives .this cascading procedure is repeated until no more failures happen .of course , our model can easily be generalized to any number of interdependent networks and any density of dependency links .our aim is to introduce healing into this dynamic model .the procedure is as follows : after an externally introduced failure ( which may cut off a part of the lcc ) the healing step follows . two remaining , functioning neighbors of a removed nodeestablish a connectivity link with an independent probability .( see part _b ) _ in fig .[ fig : cascade_schema ] . )then the dependent nodes of the removed nodes are removed from the other network .after the propagation of the failure there , again , two functioning neighbors of a removed node establish a connectivity link probability .due to the separation of small components , further damages might propagate back and forth within the network , always followed by a healing step . here, the healing step means that all pairs of neighbors of each failed node is considered as a candidate for a new connectivity link with an independent probability , then , after having selected the candidates , the connectivity links are established simultaneously .the process goes on until no more separation of components occurs .the healing links may change the topology considerably , bridging larger and larger distances as the time goes on ( fig .[ fig : insight ] ) .once a critical fraction of nodes are removed , a catastrophic cascade destroys the remaining system .the case is simply the dynamic version of the well studied model of li _et al._. in a fraction of the original network is destroyed in the first step then the size of the giant component after the relaxation of cascades is traced as a function of .the important difference between this procedure and ours is that in the version of li _ et al . _nodes may be accidentally attacked , which already fail in our step - by - step ( dynamic ) model .let denote the fraction of remaining nodes as a function of the fraction of attacked nodes in the step - by - step model .the number of unattacked but disconnected nodes is ] . 
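a minimal python sketch of this step - by - step dynamics is given below ( an illustration only , not the code used for the simulations reported here ) . it uses networkx , takes two square lattices with the dependency pairing fixed to the identity relabelling , and compresses the back - and - forth propagation into repeated intersections of the two largest connected components , with a healing pass around every freshly failed node ; the lattice size and the healing probability w are arbitrary demonstration values .

....
import random
import networkx as nx

def heal(G, functioning, just_failed, w):
    # for every failed node, each pair of its still-functioning neighbours
    # is bridged by a new connectivity link with independent probability w
    for u in just_failed:
        nbrs = [v for v in G[u] if v in functioning]
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                if random.random() < w:
                    G.add_edge(nbrs[i], nbrs[j])

def giant(G, alive):
    # labels of the largest connected component of the subgraph on `alive`
    if not alive:
        return set()
    return set(max(nx.connected_components(G.subgraph(alive)), key=len))

def attack_one(GA, GB, alive, w):
    # one external failure followed by the full cascade with healing;
    # with the identity dependency pairing a label functions only if it
    # lies in the giant component of both networks
    just_failed = {random.choice(sorted(alive))}
    while just_failed:
        heal(GA, alive - just_failed, just_failed, w)
        heal(GB, alive - just_failed, just_failed, w)
        alive = alive - just_failed
        surviving = giant(GA, alive) & giant(GB, alive)
        just_failed = alive - surviving
    return alive

L, w = 20, 0.4
GA = nx.grid_2d_graph(L, L, periodic=True)
GB = nx.grid_2d_graph(L, L, periodic=True)
alive = set(GA.nodes())
for _ in range(int(0.3 * L * L)):      # attack 30% of the nodes one by one
    if len(alive) < 2:
        break
    alive = attack_one(GA, GB, alive, w)
print("surviving fraction:", len(alive) / (L * L))
....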
for the purpose of precise measurement we created simulation data for all system sizes with step size for
|
interdependent networks are characterized by two kinds of interactions : the usual connectivity links within each network and the dependency links coupling nodes of different networks . due to the latter links , such networks are known to suffer from cascading failures and catastrophic breakdowns . when modeling these phenomena , usually one assumes that a fraction of nodes gets damaged in one of the networks , which is possibly followed by a cascade of failures . in real life the initiating failures do not occur at once and effort is made to replace the ties eliminated due to the failing nodes . here we study a dynamic extension of the model of interdependent networks and introduce the possibility of link formation with a probability , called healing , to bridge non - functioning nodes and enhance network resilience . a single random node is removed , which may initiate an avalanche . after each removal step , healing sets in , resulting in a new topology . then a new node fails and the process continues until the giant component disappears either in a catastrophic breakdown or in a smooth transition . simulation results are presented for square lattices as starting networks under random attacks of constant intensity . we find that the shift in the position of the breakdown has a power - law scaling as a function of the healing probability with an exponent close to . below a critical healing probability , catastrophic cascades form and the average degree of surviving nodes decreases monotonically , while above this value there are no macroscopic cascades and the average degree first increases and decreases only at the very late stage of the process . these findings facilitate planning interventions in crisis situations by describing the efficiency of the healing efforts needed to suppress cascading failures .
|
in knowledge representation , rules play a prominent role . default rules of the form _ if a then normally b _ are being investigated in nonmonotonic reasoning , and various semantical approaches have been proposed for such rules . since it is not possible to assign a simple boolean truth value to such default rules , a semantical approach is to define when a rational agent accepts such a rule .we could say that an agent accepts the rule _ birds normally fly _ if she considers a world with a flying bird to be less surprising than a world with a nonflying bird . at the same time, the agent can also accept the rule _ penguin birds normally do not fly _ ; this is the case if she considers a world with a nonflying penguin bird to be less surprising than a world with a flying penguin bird .the informal notions just used can be made precise by formalizing the underlying concepts like default rules , epistemic state of an agent , and the acceptance relation between epistemic states and default rules . in the following, we deal with qualitative default rules and a corresponding semantics modelling the epistemic state of an agent . while a full epistemic state could compare possible worlds according to their possibility , their probability , their degree of plausibility , etc . ) , we will use ordinal conditional functions ( ocfs ) , which are also called ranking functions .to each possible world , an ocf assigns a natural number indicating its degree of surprise : the higher , the greater is the surprise for observing . in a criterion when a ranking function respects the conditional structure of a set conditionals is defined , leading to the notion of c - representation for , and it is argued that ranking functions defined by c - representations are of particular interest for model - based inference . in a system that computes a c - representation for any such that is consistentis described , but this c - representation may not be minimal .an algorithm for computing a minimal ranking function is given in , but this algorithm fails to find all minimal ranking functions if there is more than one minimal one . in extension of that algorithm being able to compute all minimal c - representations for is presented .the algorithm developed in uses a non - declarative approach and is implemented in an imperative programming language . while the problem of specifying all c - representations for is formalized as an abstract , problem - oriented constraint satisfaction problem in , no solving method is given there . in this paper, we present a high - level , declarative approach using constraint logic programming techniques for solving the constraint satisfaction problem for any consistent . in particular , the approach developed here supports the generation of all minimal solutions ; these minimal solutions are of special interest as they provide a preferred basis for model - based inference from .the rest of this paper is organized as follows : after recalling the formal background of conditional logics as it is given in and as far as it is needed here ( section [ sec_background ] ) , we elaborate the birds - penguins scenario sketched above as an illustration for a conditional knowledge base and its semantics in section [ sec_example ] . the definition of the constraint satisfaction problem and its solution set denoting all c - representations for is given in sec .[ sec_c_representations ] . 
in section [ sec_clp_approach ] , a declarative , high - level clp program solving developed , observing the objective of being as close as possible to , and its realization in prolog is described in detail ; in section [ sec_example_applications_and_first_evaluation ] , it is evaluated with respect to a series of some first example applications . section [ sec_conclusions ] concludes the paper and points out further work .we start with a propositional language , generated by a finite set of atoms .the formulas of will be denoted by uppercase roman letters .for conciseness of notation , we will omit the logical _ and_-connective , writing instead of , and overlining formulas will indicate negation , i.e. means .let denote the set of possible worlds over ; will be taken here simply as the set of all propositional interpretations over and can be identified with the set of all complete conjunctions over . for , means that the propositional formula holds in the possible world . by introducing a new binary operator , we obtain the set of _ conditionals _ over . formalizes `` _ _ if then ( normally ) _ _ '' and establishes a plausible , probable , possible etc connection between the _ antecedent _ and the _ consequence _ . here, conditionals are supposed not to be nested , that is , antecedent and consequent of a conditional will be propositional formulas .a conditional is an object of a three - valued nature , partitioning the set of worlds in three parts : those worlds satisfying , thus _ verifying _ the conditional , those worlds satisfying , thus _ falsifying _ the conditional , and those worlds not fulfilling the premise and so which the conditional may not be applied to at all .this allows us to represent as a _ generalized indicator function _ going back to ( where stands for _ unknown _ or _ indeterminate _ ) : to give appropriate semantics to conditionals , they are usually considered within richer structures such as _ epistemic states_. besides certain ( logical ) knowledge , epistemic states also allow the representation of preferences , beliefs , assumptions of an intelligent agent .basically , an epistemic state allows one to compare formulas or worlds with respect to plausibility , possibility , necessity , probability , etc .well - known qualitative , ordinal approaches to represent epistemic states are spohn s _ ordinal conditional functions , ocfs _ , ( also called _ ranking functions _ ) , and _ possibility distributions _ , assigning degrees of plausibility , or of possibility , respectively , to formulas and possible worlds . in such qualitative frameworks ,a conditional is valid ( or _ accepted _ ) , if its confirmation , , is more plausible , possible , etc . than its refutation , ; a suitable degree of acceptance is calculated from the degrees associated with and . 
in this paper , we consider spohn s ocfs .an ocf is a function expressing degrees of plausibility of propositional formulas where a higher degree denotes `` less plausible '' or `` more suprising '' .at least one world must be regarded as being normal ; therefore , for at least one .each such ranking function can be taken as the representation of a full epistemic state of an agent .each such uniquely extends to a function ( also denoted by ) mapping sentences and rules to and being defined by for sentences and by for conditionals .note that since any satisfying also satisfies and therefore .the belief of an agent being in epistemic state with respect to a default rule is determined by the satisfaction relation defined by : thus , is believed in iff the rank of ( verifying the conditional ) is strictly smaller than the rank of ( falsifying the conditional ) .we say that _ accepts _ the conditional iff .in order to illustrate the concepts presented in the previous section we will use a scenario involving a set of some default rules representing common - sense knowledge .[ example : penguins_1 ] suppose we have the propositional atoms - _ flying _ , - _ birds _ , - _ penguins _ , - _ winged _ animals , - _ kiwis_. let the set consist of the following conditionals : + [ cols= " < , < , < , < " , ] + where worlds are represented as complete conjunctions of literals over , using the representation described above . using these predicates , in the following subsections we will present the complete source code of the constraint logic program ` genocf`solving .the particular program code given here uses the sicstus prolog system and its clp(fd ) library implementing constraint logic programming over finite domains .the main predicate expecting a knowledge base of conditionals and yielding a vector of values as specified by ( [ eq_kappa_accepts_r_with_kappaiminus ] ) is presented in fig .[ fig_main_kappa ] .kappa(kb , k ) : - consult(kb ) , indices(is ) , length(is , n ) , length(k , n ) , domain(k , 0 , n ) , constrain_k(is , k ) , labeling ( [ ] , k ) .after reading in the knowledge base and getting the list of indices , a list of free constraint variables , one for each conditional , is generated .in the two subsequent subgoals , the constraints corresponding to the formulas ( [ eq_kappaiminus_lower_upper_bound ] ) and ( [ eq_kappa_accepts_r_with_kappaiminus ] ) are generated , constraining the elements of accordingly .finally , yields a list of values . upon backtracking, this will enumerate all possible solutions with an upper bound of as in ( [ eq_kappaiminus_lower_upper_bound ] ) for each .later on , we will demonstrate how to modify in order to take minimality into account ( sec .[ sec_generation_of_minimal_solutions ] ) . 
how the subgoal in generates a constraint for each index according to ( [ eq_kappa_accepts_r_with_kappaiminus ] ) is defined in fig .[ fig_constrain_k ] .constrain_k ( [ ] , _ ) .constrain_k([i|is],k ) : - constrain_ki(i , k ) , constrain_k(is , k ) .constrain_ki(i , k ) : - verifying_worlds(i , vworlds ) , falsifying_worlds(i , fworlds ) , list_of_sums(i , k , vworlds , vs ) , list_of_sums(i , k , fworlds , fs ) , minimum(vmin , vs ) , minimum(fmin , fs ) , element(i , k , ki ) , ki # > vmin - fmin .given an index , determines all worlds verifying and falsifying the -th conditional ; over these two sets of worlds the two expressions in ( [ eq_kappa_accepts_r_with_kappaiminus ] ) are defined .two lists and of sums corresponding exactly to the first and the second sum , repectively , in ( [ eq_kappa_accepts_r_with_kappaiminus ] ) are generated ( how this is done is defined in fig . [ fig_list_of_sums ] and will be explained below ) . with the constraint variables and denoting the minimum of these two lists , the constraint ` ki # > vmin - fmin ` given in the last line of fig .[ fig_constrain_k ] reflects precisely the restriction on given by ( [ eq_kappa_accepts_r_with_kappaiminus ] ) . for an index , a kappa vector , and a list of worlds , the goal ( cf . fig .[ fig_list_of_sums ] ) yields a list of sums such that for each world in , there is a sum in that is generated by where is the list of indices . in the goal , corresponds exactly to the respective sum expression in ( [ eq_kappa_accepts_r_with_kappaiminus ] ) , i.e. , it is the sum of all such that and falsifies the -th conditional. list_of_sums ( _ , _ , [ ] , [ ] ) .list_of_sums(i , k , [ w|ws ] , [ s|ss ] ) : - indices(js ) , sum_kappa_j(js , i , k , w , s ) , list_of_sums(i , k , ws , ss ) . sum_kappa_j ( [ ] , _ , _ , _ , 0 ) . sum_kappa_j([j|js ] , i , k , w , s ) : - sum_kappa_j(js , i , k , w , s1 ) , element(j , k , kj ) , ( ( j = i , falsify(j , w ) ) -# = s1 + kj ; s # = s1 ) .[ ex_kb_birds_clp_output ]suppose that is a file containing the conditionals of the knowledge base given in ex .[ eq_multiple_minimal_solutions ] .then the first five solutions generated by the program given in figures [ fig_main_kappa ] [ fig_list_of_sums ] are : .... | ? - kappa('kb_birds.pl ' , k ) .k = [ 1,0,1 ] ? ; k = [ 1,0,2 ] ? ; k = [ 1,0,3 ] ? ; k = [ 1,1,0 ] ? ; k = [ 1,1,1 ] ?.... note that the first and the fourth solution are the minimal solutions .[ example : penguins_1_clp_output ] if is a file containing the conditionals of the knowledge base given in ex .[ example : penguins_1 ] , the first six solutions generated by ` kappa/2 ` are : .... | ? - kappa('kb_penguins.pl ' , k ) .k = [ 1,2,2,1,1 ] ? ; k = [ 1,2,2,1,2 ] ? ; k = [ 1,2,2,1,3 ] ? ; k = [ 1,2,2,1,4 ] ? ; k = [ 1,2,2,1,5 ] ? ; k = [ 1,2,2,2,1 ]? .... the enumeration predicate of sicstus prolog allows for an option that minimizes the value of a cost variable .since we are aiming at minimizing the sum of all , the constraint introduces such a cost variable .thus , exploiting the sicstus prolog minimization feature , we can easily modify to generate a minimal solution : we just have to replace the last subgoal in fig .[ fig_main_kappa ] by the two subgoals : .... sum(k , # = , s ) , % introduce constraint variable s % for sum of kappa_i minimize(labeling([],k ) , s ) .% generate single minimal solution .... with this modification , we obtain a predicate that returns a single minimal solution ( and fails on backtracking ) . 
hence calling similar as in ex .[ ex_kb_birds_clp_output ] yields the minimal solution . however , as pointed out in sec .[ sec_c_representations ] , there are good reasons for considering not just a single minimal solution , but all minimal solutions .we can achieve the computation of all minimal solutions by another slight modification of .this time , the enumeration subgoal in fig .[ fig_main_kappa ] is preceded by two new subgoals as in in fig .[ fig_main_kappa_min_all ] .kappa_min_all(kb , k ) : - consult(kb ) , indices(is ) , length(is , n ) , length(k , n ) , domain(k , 0 , n ) , constrain_k(is , k ) , sum(k , # = , s ) , min_sum_kappas(k , s ) , labeling ( [ ] , k ) .min_sum_kappas(k , min ) : - once((labeling([up],[min ] ) , labeling([],k ) ) ) .the first new subgoal introduces a constraint variable just as in . in the subgoal, this variable is constrained to the sum of a minimal solution as determined by .these two new subgoals ensure that in the generation caused by the final subgoal , exactly all minimal solutions are enumerated .[ ex_kb_birds_clp_output_all_min ] continuing example [ ex_kb_birds_clp_output ] , calling .... | ? - kappa_min_all('kb_birds.pl ' , k ) .k = [ 1,0,1 ] ? ; k = [ 1,1,0 ] ? ; no .... yields the two minimal solutions for .[ example : penguins_1_clp_output_all_min ] for the situation in ex .[ example : penguins_1_clp_output ] , ` kappa_min_all/2 ` reveals that there is a unique minimal solution : .... | ? - kappa_min_all('kb_penguins.pl ' , k ) .k = [ 1,2,2,1,1 ] ? ; no .... determining the ocf induced by the vector according to ( [ eq_conditional_indifference_no_kappaplus ] ) yields the ranking function given in fig .[ figure_kappa_after_initialzation ] .although the objective in developing ` genocf ` was on being as close as possible to the abstract formulation of the constraint satisfaction problem , we will present the results of some first example applications we have carried out . for , we generated synthetic knowledge bases according to the following schema : using the variables , ` kb_synth < n>_c<2n\!\!-\!\!1>.pl ` contains the conditionals given by : : for instance , ` kb_synth4_c7.pl ` uses the five variables and contains the seven conditionals : the basic idea underlying the construction of these synthetic knowledge bases is to establish a kind of subclass relationship between and for each on the one hand , and to state that every is exceptional to with respect to its behaviour regarding , again for each .this sequence of pairwise exceptional elements will force any minimal solution of to have at least one value of size greater or equal to . from `kb_synth<\ensuremath{n}>_c < m>.pl ` , the knowledge bases ` kb_synth<\ensuremath{n}>_c< m\!\!-\!\!j>.pl ` are generated for by removing the last conditionals .for instance , ` kb_synth4_c5.pl ` is obtained from ` kb_synth4_c7.pl ` by removing the two conditionals , .figure [ fig_sicstus_results ] shows the time needed by ` genocf ` for computing all minimal solutions for various knowledge bases . 
the execution time is given in seconds where the value 0 stands for any value less than 0.5 seconds .measurements were taken for the following environment : sicstus 4.0.8 ( x86-linux - glibc2.3 ) , core 2 duo e6850 3.00ghz .while the number of variables determines the set of possible worlds , the number of conditionals induces the number of contraints .the values in the table in fig .[ fig_sicstus_results ] give some indication on the influence of both values , the number of variables and the number of conditionals in a knowledge base .for instance , comparing the knowledge base ` kb_synth7_c10.pl ` , having 8 variables and 10 conditionals , to the knowledge base ` kb_synth8_c10.pl ` , having 9 variables and also 10 conditionals , we see an increase of the computation time by a factor 2.3 . increasing the number of conditionals , leads to no time increase from ` kb_synth7_c10.pl ` to ` kb_synth7_c11.pl ` , and to a time increase factor of about 1.6 when moving from ` kb_synth8_c10.pl ` to ` kb_synth8_c11.pl ` , while for moving from` kb_synth8_c10.pl ` to ` kb_synth9_c10.pl ` and ` kb_synth10_c10.pl ` , we get time increase factors of 3.3 and 11.0 , respectively .of course , these knowledge bases are by no means representative , and further evaluation is needed .in particular , investigating the complexity depending on the number of variables and conditionals and determining an upper bound for worst - case complexity has still to be done .furthermore , while the code for ` genocf ` given above uses sicstus prolog , we also have a variant of ` genocf`for the swi prolog system . in our further investigations ,we want to evaluate ` genocf`also using swi prolog , to elaborate the changes required and the options provided when moving between sicstus and swi prolog , and to study whether there are any significant differences in execution that might depend on the two different prolog systems and their options .while for a set of probabilistic conditionals }}}}$ ] the principle of maximum entropy yields a unique model , for a set of qualitative default rules there may be several minimal ranking functions . in this paper, we developed a clp approach for solving , realized in the prolog program ` genocf ` .the solutions of the constraint satisfaction problem are vectors of natural numbers that uniquely determine an ocf accepting all conditionals in .the program ` genocf ` is also able to generate exactly all minimal solutions of ; the minimal solutions of are of special interest for model - based inference . among the extentions of the approach described here we are currently working on ,is the investigation and evaluation of alternative minimality criteria . instead of ordering the vectors by the sum of their components, we could define a componentwise order on by defining iff for , yielding a partial order on .still another alternative is to compare the full ocfs induced by according to ( [ eq_conditional_indifference_no_kappaplus ] ) , yielding the ordering on defined by iff for all . 
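for knowledge bases of this small size the constraint satisfaction problem can also be brute - forced , which gives an independent cross - check of the prolog output . the following python sketch ( not part of ` genocf ` ) enumerates all vectors with entries in 0 .. n and keeps those satisfying the constraint behind ` constrain_ki ` , i.e. kappa_i strictly greater than the difference of the two minima taken over verifying and falsifying worlds ; the three - conditional birds / penguins base used as input is only an illustration and is not claimed to coincide with ` kb_birds.pl ` .

....
from itertools import product

atoms = ["b", "p", "f"]                       # birds, penguins, flying
R = [
    (lambda w: w["b"], lambda w: w["f"]),     # (f | b)     birds normally fly
    (lambda w: w["p"], lambda w: w["b"]),     # (b | p)     penguins are birds
    (lambda w: w["p"], lambda w: not w["f"]), # (not f | p) penguins do not fly
]
worlds = [dict(zip(atoms, bits))
          for bits in product([False, True], repeat=len(atoms))]
n = len(R)

def verifies(i, w):
    A, B = R[i]
    return A(w) and B(w)

def falsifies(i, w):
    A, B = R[i]
    return A(w) and not B(w)

def admissible(kappa):
    # kappa_i > min over verifying worlds of sum_{j != i, w falsifies j} kappa_j
    #           - min over falsifying worlds of the same sum
    for i in range(n):
        s = lambda w: sum(kappa[j] for j in range(n) if j != i and falsifies(j, w))
        vmin = min(s(w) for w in worlds if verifies(i, w))
        fmin = min(s(w) for w in worlds if falsifies(i, w))
        if not kappa[i] > vmin - fmin:
            return False
    return True

solutions = [k for k in product(range(n + 1), repeat=n) if admissible(k)]
best = min(sum(k) for k in solutions)
print([k for k in solutions if sum(k) == best])   # sum-minimal vectors
....

for this toy base the enumeration yields ( 1 , 2 , 2 ) as the only sum - minimal vector ; whether it coincides with the output shown earlier for ` kb_birds.pl ` depends on the exact conditionals in that file , which are not reproduced in the text above .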
in general, it is an open problem how to strengthen the requirements defining a c - representation so that a unique solution is guaranteed to exist .the declarative nature of constraint logic programming supports easy constraint modification , enabling the experimentation and practical evaluation of different notions of minimality for and of additional requirements that might be imposed on a ranking function .furthermore , in the framework of default rules concidered here is extended by allowing not only default rules in the knowledge base , but also strict knowledge , rendering some worlds completely impossibe .this can yield a reduction of the problem s complexity , and it will be interesting to see which effects the incorporation of strict knowledge will have on the clp approach presented here . c. beierle and g. kern - isberner . a verified asml implementation of belief revision . in e.brger , m. butler , j. p. bowen , and p. boca , editors , _ abstract state machines , b and z , first international conference , abz 2008 , london , uk , september 16 - 18 , 2008 . proceedings _ ,volume 5238 of _ lncs _ , pages 98111 .springer , 2008 . c. beierle and g. kern - isberner . on the computation of ranking functions for default rules a challenge for constraint programming . in _ proc .deklarative modellierung und effiziente optimierung mit constraint - technologie .workshop at gi jahrestagung 2011 _ , 2011 .( to appear ) . c. beierle , g. kern - isberner , and n. koch . a high - level implementation of a system for automated reasoning with default rules ( system description ) .in a. armando , p. baumgartner , and g. dowek , editors , _ proc . of the 4th international joint conference on automated reasoning ( ijcar-2008 ) _ , volume 5195 of _ lncs _ , pages 147153 .springer , 2008 .s. benferhat , d. dubois , and h. prade . representing default rules in possibilistic logic . in _proceedings 3th international conference on principles of knowledge representation and reasoning kr92 _ , pages 673684 , 1992 .m.carlsson , g. ottosson , and b. carlson . an open - ended finite domain constraint solver . in h.glaser , p. h. hartel , and h. kuchen , editors , _ programming languages : implementations , logics , and programs , ( plilp97 ) _ , volume 1292 of _ lncs _ , pages 191206 .springer , 1997 . w. spohn .ordinal conditional functions : a dynamic theory of epistemic states . in w.l .harper and b. skyrms , editors , _ causation in decision , belief change , and statistics , ii _ , pages 105134 .kluwer academic publishers , 1988 .
|
in order to give appropriate semantics to qualitative conditionals of the form _ if a then normally b _ , ordinal conditional functions ( ocfs ) ranking the possible worlds according to their degree of plausibility can be used . an ocf accepting all conditionals of a knowledge base r can be characterized as the solution of a constraint satisfaction problem . we present a high - level , declarative approach using constraint logic programming techniques for solving this constraint satisfaction problem . in particular , the approach developed here supports the generation of all minimal solutions ; these minimal solutions are of special interest as they provide a basis for model - based inference from r.
|
quantum information processing may potentially revolutionize classical computation based on ordinary bits . however, current constructions of universal quantum computer still can not compete with classical computational machines .one of the main difficulties lies in the fact that systems on a quantum scale are extremely susceptible to any kind of external noise , as well as to erroneous action of quantum gates in a circuit .therefore , to handle qubits effectively , there is a need for methods protecting quantum information against all possible disturbances .two general approaches to this problem have been developed .the first one is based on the so - called _ decoherence - free subspaces _ and exploits particular states of the hilbert space that are immune to certain errors - a readable review of this methods can be found in refs .an alternative technique is based on _ quantum error - correcting codes _ ( qecc ) , which are quantum counterparts of the classical error - correcting codes .quantum error - correcting codes are constructs which protect quantum information against some specified errors .this method of error - correction has been extensively studied in the case of unitary noise operations - see ref . for a comprehensive introduction to this field. however , any real quantum operation can be only _ approximately _ unitary and thus one has to consider non - unitary noise operations . here the progess is slower than in the unitary case , mainly because of the increased complexity of the problem .for instance , in the unitary case , any product of two kraus operators is normal , so its numerical range is determined by the spectrum , which is not longer true in the general case .the need for a constructive method of finding quantum error correction codes for non - unitary noise models provides a motivation toward this work .the main aim of this paper is to propose a method of finding quantum error correction codes for a class of non - unitary noise operators .the method , in the form that is presented here , can be effectively applied to noise models with short kraus decomposition ( consisting of _ two _ operators ) .the main advantage of this method is that it allows to solve an algebraic problem ( often untractable with other approaches ) using an elementary geometrical construction .the paper is organized as follows . in sect .[ sec:1 ] we review some basic notions related to quantum error correction and specify the general form of a non - unitary quantum channel . in sect .[ sec:2 ] we recall definitions of generalized numerical range , introduce the concept of _ nuclear numerical range _ and present some of its properties . in sect .[ sec:3 ] we describe a geometric method of obtaining quantum error correction codes for block - diagonal kraus operators .precisely , using compression formalism based on the knill - laflamme conditions and the notion of nuclear numerical range , we obtain projectors on the subspaces of the quantum error correction code .this is the main contribution of this work . 
in sect .[ sec:4 ] we provide two examples of block - diagonal channels and obtain the corresponding quantum error correction codes using method described in sect .[ sec:3 ] .we conclude this paper with a summary of results obtained and discuss possible generalizations of the method to higher - dimensional problems .in this section we recall the definition of kraus operators and their role in description of noise in quantum systems .we invoke the _ knill - laflamme _conditions for an error correction code of a particular noise model . in the end of this sectionwe present a block - diagonal model of quantum noise that will be explored further in this work .consider a quantum system described in an -dimensional hilbert space .assume that the system evolves according to some given error process ( noise channel ) represented by a superoperator acting on . according to the _ kraus representation theorem _ any such superoperator can be written as the sum of matrix operators , where is the space of all complex - valued matrices of order : = \sum_{i=1}^m a^{\dagger}_i[\cdot]a_i,\ ] ] where denotes the adjoint operator .matrices are called _ kraus operators _ , and they satisfy the trace preserving condition : . in this paper we consider ( unless otherwise stated ) models of noise acting on two - qubit systems described by two kraus operators .the dimension of matrices is thus and there are of them . to solve the quantum error correction problem for a map , one has to look for subspaces , which satisfy _ knill - laflamme conditions _error correcting code , labeled by , is itself a quantum state in subspace of dimension .we denote by a particular basis of this subspace ( correction code basis ) , so that and by the projection operator on . according to following conditions are sufficient to reconstruct information about the system subjected to errors described by the set of kraus operators : where are called _ compression values _ of the error correcting code .the problem of determining the projectors is related to an algebraic _ compression problem _ . in our casethe invariant subspace of code , , is two - dimensional , so is a projector on a two - dimensional subspace ( ) . denoting can write : determining quantum error - correction code for the error model described by the set of kraus operators is equivalent to finding a subspace that satisfies the above set of equations .the problem we adress in this paper involves finding correction subspace , which amounts to determining projections from the last equation .the main difficulty lies in the fact that has to satisfy the knill - laflamme conditions ( [ eq:3 ] ) for all operators simultaneously . in what followswe consider the kraus operators with a block - diagonal structure .it will be convenient to introduce a short - hand notation : where .note that if eq .( [ eq:3 ] ) is fulfilled for , it is also fulfilled for the adjoint . moreover , due to the normalization condition we can write . thus , we effectively look for simultaneous solutions of the compression problem for two operators and .there are several approaches of solving problems of this kind if matrices are normal ( see for example the _ eigenvector - pairing method _ introduced in ) . herewe consider a broader range of models of non - unitary kraus operators , for which matrices need not be normal . thus , the techinques developed in literature can not be applied here in a straightforward manner . 
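before developing the geometric method, note that the two requirements stated above, trace preservation of the kraus set and the compression equations of ( [ eq:3 ] ), are easy to verify numerically for any candidate projector. a minimal numpy sketch with placeholder inputs rather than the channels studied later:
....
import numpy as np

def is_trace_preserving(kraus, tol=1e-9):
    # check sum_i A_i^dagger A_i = identity
    d = kraus[0].shape[0]
    total = sum(A.conj().T @ A for A in kraus)
    return np.allclose(total, np.eye(d), atol=tol)

def satisfies_compression(kraus, P, tol=1e-9):
    # check P A P = lambda * P for every Kraus operator A; the compression
    # value lambda is read off from the trace, since tr(P A P) = lambda tr(P)
    k = np.trace(P).real
    for A in kraus:
        lam = np.trace(P @ A @ P) / k
        if not np.allclose(P @ A @ P, lam * P, atol=tol):
            return False
    return True
....
both helpers expect a list of equal-sized complex numpy arrays and a hermitian projector of matching dimension; for the block-diagonal channels considered below the projector would be of rank two.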
in sect .[ sec:3 ] we will develop a new method for solving this type of problems .in this section we review the notion of _ higher order numerical range _ and introduce the concept of nuclear numerical range .we state several its properties that will be further explored to determine the correction code subspaces for models considered in sect .[ sec:3 ] and [ sec:4 ] .the algebraic form of equation ( [ eq:3 ] ) suggests that the problem can be approached using the so - called _ _ rank-__ numerical range of matrix , introduced in , where is the set of all rank- projections on space .unit vectors which yield compression value are called _ generating vectors _ ( or generators ) of .the two special cases ( and ) are of particular interest in this paper .notice that setting yields the standard numerical range , often denoted by : on the other hand , the case gives the numerical range of rank two : based on the definition of higher rank numerical range , one can derive the following properties : ( p1 ) : : for any , ( p2 ) : : for any unitary , ( p3 ) : : if a is hermitian and is an ordered set of eigenvalues of a matrix , then ] , then .the proof of ( n ) is straightforward from the definition of and properties ( p ) from previous subsection . to prove property ( n )note that always contains the point .thus , and is not empty . in order to prove property ( n ) we use the following theorem , proved in appendix a. [ t1 ] let be a normal matrix of order two with real - valued entries .furthermore , let be an arbitrary complex - valued matrix of order two .consider : then , there exists a set of normalized states , , parametrized by a phase and a real number , which satisfies the following set of simultaneous equations : where forms an elliptic disk in the complex plane parametrized by and : .\end{aligned}\ ] ] variables , , , and are defined in eq .( [ leq:4 ] ) .the family of states is given by : where and . from above theoremwe can deduce the following simple corollary : [ c1 ] , where and is given by eq .( [ teq:2 ] ) .the proof of property ( n ) follows directly from theorem [ t1 ] if we take . in order to prove property ( n )note that by the elliptic range theorem , the set forms an elliptic disk with foci at eigenvalues of , that is and . by the convexity property , must also contain a line with endpoints . since , the line either passes through the origin or is a singular point at the origin .thus , there exists such that and is not an empty set .property ( n ) can be proven after noticing that if one allows to take all possible real values between the two eigenvalues of , then there is no restriction on vector ( cosine of azimuthal angle given by eq .( [ leq:2 ] ) in the appendix takes all real values between and ) .this means that is the full numerical range .+ + it is worth to emphasise that the unitary invariance of standard numerical range does not hold in the case of nuclear numerical range , that is .it can be easly seen by considering : computing yields a single point , whereas .thus , for a general matrix , .similary we can find .in this section we describe a method of constructing projectors onto the code subspace for not - normal matrices .let us now return to the basic problem and recall equation ( [ eq:3 ] ) . 
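the sets introduced above can also be explored numerically. reading the "nucleus" of the auxiliary operator b as the set of unit vectors whose expectation value of b vanishes, which is the reading suggested by the definition above, a monte carlo approximation of the nuclear numerical range is simple rejection sampling over random pure states; shifting b by lambda times the identity gives the fixed-expectation sets used in the next section. the sample count and tolerance below are arbitrary choices for a sketch, not tuned values.
....
import numpy as np

def random_state(d, rng):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def nuclear_range_samples(A, B, lam=0.0, samples=100000, tol=1e-2, seed=0):
    # approximate { <x|A|x> : <x|(B - lam*I)|x> = 0, |x| = 1 } by keeping
    # random pure states whose expectation of B is close enough to lam
    rng = np.random.default_rng(seed)
    d = A.shape[0]
    points = []
    for _ in range(samples):
        x = random_state(d, rng)
        if abs(np.vdot(x, B @ x) - lam) < tol:
            points.append(np.vdot(x, A @ x))
    return np.array(points)
....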
because of the block - diagonal structure of we expect that will have a similar block - diagonal structure .we emphasize that this choice of is not the most general possible and reduces the total set of possible correction codes we can obtain .let us call the upper and lower of the blocks of by and , respectively .this allows us to write : using property ( p ) and setting we can reduce the initial problem of determining one -by- projection matrix to a problem of finding two -by- projection matrices and . writing explicitly : can rewrite eq .( [ eq:3 ] ) in the matrix notation : by definition , and are points in respective standard numerical ranges .the above equality is satisfied only if the condition holds for all values of and .this means that both points , and lie in the intersection for every choice of . in order to determine and ( which are projections onto points in that intersection ) note that we can rewrite them in respective correction code bases as : where both vectors are normalized .note that the states and are both generators of the set of compression values , for .the intersections that we are interested in , , are sets of points for which the following statement holds : due to the structure assumed in eq .( [ eq:4 ] ) not all of the above equations are independent . having this in mind , we are left with the following set of two equations : to solve this set of equations we use the concept of nuclear numerical range discussed in sect . [ sec:3 ] .consider the following two sets : notice that points in the complex plane which satisfy eqs .( [ eq:14 ] ) are exactly the ones that constitute the intersection of above nuclear numerical ranges .let us label this intersection by : clearly , .since both matrices and are normal , by property ( n ) we conclude that for a given value of these two sets are elliptic curves in the complex plane .if we treat as a parameter whose range is the appropriate line segment , then by property ( n ) we have .similar statement holds for a pair and . in our problem , in order to satisfy eq .( [ eq:14 ] ) , the number must be contained in both and .thus must be contained in the intersection , which is simply a line segment . if we denote the sets of eigenvalues of matrices and by and respectively , then in order to fulfill the first condition from eq .( [ eq:14 ] ) , one has to satisfy : let us label by the set of all allowable values of .then , the set is not empty if and only if : ^{1/2 } \gtrless \tr f_{11 } \mp \left[\left(\tr f_{11}\right)^2 - 4\det f_{11 } \right]^{1/2},\end{aligned}\ ] ] where the symbol corresponds to and signs respectively , so eq . ( [ eq:15b ] ) contains two inequalities .the above equation follows from the fact that the two eigenvalues of a -by- matrix a are given by a formula . by corollary [ c1 ]we can conclude that and , where function is defined in eq .( [ teq:2 ] ) . the intersection is determined by : in order to determine the vectors and , which are useful to construct projectors on the respective correction code bases ( recall eq .( [ eq:12 ] ) ) , one can in principle find points in determined by and their phase angles and . following the proof of theorem [ t1 ] , one diagonalizes and using orthogonal matrices and : now one introduces the transformed states : and , where and are given by : where denotes the complex imaginary unit . 
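the inequality ( [ eq:15b ] ) did not survive extraction cleanly; read together with the interval for lambda written just below it, it amounts to requiring that the eigenvalue intervals of the two relevant 2x2 blocks overlap. a small numpy sketch of that test, using the quadratic formula for the eigenvalues of a 2x2 matrix quoted in the text; the two arguments are placeholders for the pair of blocks whose spectra must overlap (f_{11} appears explicitly in ( [ eq:15b ] ), the name of the second block was lost in extraction).
....
import numpy as np

def eig_interval(F):
    # eigenvalues of a 2x2 matrix via (tr F +/- sqrt(tr^2 - 4 det F)) / 2,
    # returned as an ordered pair; real spectra are assumed, as for the
    # normal blocks considered in the text
    t, d = np.trace(F), np.linalg.det(F)
    r = np.sqrt(t * t - 4 * d + 0j)
    lo, hi = (t - r) / 2, (t + r) / 2
    return sorted([lo.real, hi.real])

def allowable_lambda(block_a, block_b):
    # interval of lambda admissible for both blocks; None if the eigenvalue
    # intervals do not overlap, i.e. the non-emptiness condition fails
    a, b = eig_interval(block_a), eig_interval(block_b)
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None
....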
in order to determine these projection vectorsone has to find angles for in terms of a point in the complex plane and a real parameter .let us suppose that we have found a point for some value of .we shall consider only the case for matrices and since for the matrices and the same pattern can be applied .our current task is to find the projection vectors .( [ leq:2 ] ) one can determine the azimuthal angle for projection in terms of : where denotes the eigenvalues of matrix ( defined in appendix a , in the text above eq .[ leq:1 ] ) . using the definition of given in eq .( [ leq:5 ] ) with the following substitution : , one finds : using the above expressions and eqs .( [ leq:6a ] ) and ( [ leq:6b ] ) one can determine the polar angle , - r_1\left[\widetilde{y } - y_0(\lambda))\right]}{q_1 r_2 - r_1 q_2}.\end{aligned}\ ] ] the angle can be computed by recalling that is related by eq .( [ leq:1aa ] ) to elements of matrix .the family of states is then given by : in a similar manner we can determine the projection vectors for matrices and . thus , our initial problem of finding error correction code and solving the compression eqs .( [ eq:3 ] ) , equivalent to the set of eqs .( [ eq:14 ] ) for projectors and , is reduced into a geometric problem of finding intersection points of two elliptic curves in the complex plane .the quantum code subspace - the projection from eq .( [ eq:3 ] ) - is then given by a matrix of order four , given by the direct sum of two projectors of size two : this section we present two examples of non - unitary quantum channels and determine their quantum error correction code subspaces using the method described in the previous section .the amplitude - damping channel ( ad channel ) is an important channel describing effects due to loss of energy of a quantum system .here we consider two - level systems ( qubits ) , but channels describing arbitrary -level systems are also known . moreover , an interesting study of generalized amplitude - damping channels based on approximate quantum error - correction schemes appeared recently .exemplary physical processes which can be described by this channel include the relaxation of atom from its excited state to the ground state , sending a quantum state from one location to another using a spin chain and attenuation of a photon in a cavity .the kraus representation of one - qubit amplitude damping channel acting on state with probability , where , is given by : where the kraus operators and are defined : to extend this channel into the two - qubit system one can consider a product of two one - qubit channels : this channel is given by four kraus operators .the bi - partite channel will be described by a sum of four new kraus operators ( all possible tensor products of and ) acting on a two - qubit state . assuming that the damping occurs with probability on the first qubit and on the second , we may write : to simplify the model we consider a two - qubit channel defined by two kraus operators : and , where is defined up to a unitary transformation .we make the following choice of kraus operators : the trace - preserving channel analyzed here is then given by : one can determine the quantum error - correction code for this channel by solving the compression problem given in eq .( [ eq:3 ] ) .let us solve it using the geometrical method presented in sect .[ sec:3 ] . 
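the explicit matrices of the amplitude-damping kraus pair were lost in extraction; the sketch below uses the standard textbook form of the one-qubit channel and builds the four-operator two-qubit product channel from it. note that this is the full product channel, not the simplified two-operator channel (defined only up to a unitary transformation) that is actually analyzed in the remainder of this section.
....
import numpy as np

def ad_kraus(gamma):
    # standard textbook single-qubit amplitude-damping Kraus pair
    A0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
    A1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
    return A0, A1

def two_qubit_product_channel(gamma1, gamma2):
    # all four tensor products of two independent one-qubit AD channels
    return [np.kron(a, b) for a in ad_kraus(gamma1) for b in ad_kraus(gamma2)]

kraus = two_qubit_product_channel(0.3, 0.5)
check = sum(K.conj().T @ K for K in kraus)
print(np.allclose(check, np.eye(4)))  # True: the product channel preserves trace
....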
in the first stepwe compute matrices and given in eq .( [ eq:4 ] ) , from which we obtain matrices and for : our aim is to find projection operator from eq .( [ eq:11 ] ) , where and are given by ( [ eq:12 ] ) , which is equivalent to set of equations ( [ eq:14 ] ) . to find these operators we use notion of nuclear numerical range .we first compute the intersection , where and from this intersection we determine projection operators and .[ fig:4 ] ( -1.0,0 ) ( 10.0,0 ) ; ( 1.5 , -0.12 ) ( 1.5 , 0.12 ) ; at ( 1.5,-0.4 ) ; ( 4.5 , -0.12 ) ( 4.5 , 0.12 ) ; at ( 4.5,-0.4 ) ; ( 7.5 , -0.12 ) ( 7.5 , 0.12 ) ; at ( 7.5,-0.4 ) ; ( 9 , -0.12 ) ( 9 , 0.12 ) ; at ( 9,-0.4 ) ; ( 1.5,.5 ) ( 9,.5 ) ; ( 4.5,-0 ) ( 9,-0 ) ; ( 1.5,0 ) ( 1.5,1 ) ; ( 4.5,0 ) ( 4.5,1 ) ; ( 9,0 ) ( 9,1 ) ; ( 4.5,0 ) rectangle ( 9,.5 ) ; \(a ) at ( 4.5 , .5 ) ; ( b ) at ( 6.1 , -0.75 ) ; \(c ) at ( 1.5 , 1.5 ) ; ( d ) at ( 6.65 , 0 ) ; ( c ) edge ( d ) ; \(x ) at ( 9.8,0.3 ) ; let us introduce kets and , where diagonalizing unitary matrices and are given by eq .( [ eq:16 ] ) .since and are already diagonal we may assume that and thus and , where and are yet arbitrary and parametrized according to eq .( [ eq:17 ] ) . in order to find the azimuthal angles and one has to assure that the set is not empty , which means that the overlapping condition ( [ eq:15a ] ) holds .in this case this condition reduces to the following two expressions : both of above conditions are satisfied by and .we can now write respective standard numerical ranges as : this allows us to conclude that the set is given by the intersection of two sets , ] can be treated as a free parameter .first of the above two sets is a line segment placed on the real axis while the second one forms a circle centered at zero .the nontrivial intersection of these two sets is possible only if and : [ fig:5 ] ( -5.0,0 ) ( 5.0,0 ) ; ( 0,-5.0 ) ( 0,5.0 ) ; \(x ) at ( 4.8,0.3 ) ; ( y ) at ( -0.4,4.8 ) ; ( 0,0 ) circle ( 3.5 ) ; ( 0,0 ) circle ( 2.4 ) ; ( 10,0 ) ( 13,0 ) ; \(a ) at ( -3.5 , 4 ) ; ( b ) at ( -0.5 , 2.5 ) ; \(c ) at ( + 6 , 3 ) ; ( d ) at ( 1.6 , -0.2 ) ; ( 3.55 , -0.12 ) ( 3.55 , 0.12 ) ; at ( 3.55,-0.4 ) ; ( 2.366 , -0.12 ) ( 2.366 , 0.12 ) ; at ( 2.366,-0.4 ) ; ( 1.183 , -0.12 ) ( 1.183 , 0.12 ) ; at ( 1.183,-0.4 ) ; at ( 0.0,-0.4 ) ; ( e ) at ( 4.5 , 2 ) ; ( f ) at ( 2.49 , -0.27 ) ; ( 2.366 , 0 ) circle ( 0.1 ) ; ( a ) edge ( b ) ; ( c ) edge ( d ) ; ( e ) edge ( f ) ;thus , the error correction code for this specific model is determined by the projector of the form ( [ eq:21 ] ) with vectors and defined in eq .( [ eq:17 ] ) with the following horizontal angles : the polar angle can be chosen arbitrarily : , as shown in fig .2 . let us now consider a more general case of a quantum channel of length two ( ) acting on system consisting of two qubits : our motivation is to find the most general noise model with a maximal number of free parameters , whose kraus representation consists of two block - diagonal matrices .operators in general contain 16 free variables , where and .the condition that preserves the trace : imposes additional six constraints ( there are eight equations for nonzero block - diagonal elements , from which two are not independent ) , so that we have 10 free parameters in total . without loss of generality we can choose any 6 parameters from the set andexpress them in terms of the remaining ones .we label them by for so that are all dependent . let us label the vector of free parameters by . 
having this in mind , we can write kraus operators in the following form : where the trace - preserving condition ( [ eq : ex1 ] ) implies : to simplify notation we introduce variables which are functions of the independent parameters : in order to find qecc for the map we proceed with the method described in sect .[ sec:3 ] .let us begin by computing matrices and .defining : one can write matrices and as : matrices , , and are then : once again our aim is to find projection operator satisfying eq . ([ eq:3 ] ) .we will do it by first computing the intersection of nuclear numerical ranges as explained in sect .[ sec:3 ] . to doso we first compute the intersection which , according to eq .( [ eq:15a ] ) , is completely determined by the eigenvalues of matrices and , denoted by , and , , respectively . without loss of generality we may assume and .the set is then given by : .\end{aligned}\ ] ] this set is not empty if condition ( [ eq:15a ] ) holds for matrices and .let us denote by a free parameter contained in .in order to find an appropriate qecc one can determine the set defined in eq .( [ eq:15c ] ) .we plot the set for some convenient choice of parameters in fig .[ fig:7 ] ( -5.0,0 ) ( 5.0,0 ) ; ( 0,-5.0 ) ( 0,5.0 ) ; \(x ) at ( 4.8,0.3 ) ; ( y ) at ( -0.4,4.8 ) ; ( .8,1.5 ) ellipse ( 2.6 and 1.0 ) ; ( -1.0,1.6 ) ellipse ( 1.0 and 3.0 ) ; ( -1,0.5 ) ellipse ( .6 and 1.6 ) ; ( 1.4,1.5 ) ellipse ( 1.4 and .7 ) ; ( 3.55 , -0.12 ) ( 3.55 , 0.12 ) ; at ( 3.55,-0.4 ) ; ( 3.55 , -0.12 ) ( 3.55 , 0.12 ) ; at ( 3.55,-0.4 ) ; ( -0.12 , 3.55 ) ( 0.12 , 3.55 ) ; at ( -0.4,3.55 ) ; at ( -0.2,-0.4 ) ; \(e ) at ( -2.5 , 1 . ) ; ( ea ) at ( -1.5 , 0.5 ) ; \(a ) at ( 5.5 , 3.5 ) ; ( b ) at ( .8 , 2.91 ) ; \(f ) at ( 2,-2.0 ) ; ( fa ) at ( 3,-1.0 ) ; \(c ) at ( -1.0 , -1.2 ) ; ( d ) at ( 2 , -0.35 ) ; \(f ) edge ( fa ) ; ( e ) edge ( ea ) ; ( a ) edge ( b ) ; ( c ) edge ( d ) ; ( 0.37 , 1.5 ) circle ( 0.1 ) ; ( -0.37 , 1 ) circle ( 0.1 ) ; ( -2.97 , 4.32 ) circle ( 0.1 ) ; \(o ) at ( -2.5 , 4 ) ; following the reasoning from sect . [ sec:3 ] and methodology present in the proof of theorem [ t1 ] we conclude that for a given value of one can construct elliptic curves in the complex plane and determine their intersection points using theorem .having obtained the set and using eq .( [ eq:18 ] ) and eq .( [ eq:19 ] ) one can then determine the azimuthal and polar angles for , and analogous angles for vector , respectively .the projection vectors and , which form the projection operator , can be computed using eq .( [ eq:20 ] ) .the correction code subspace for this particular noise model is then given by eq .( [ eq:21 ] ) .in this work we have introduced the notion of nuclear numerical range of an operator with respect to an auxiliary operator and demonstrated that it allows one to find quantum error - correction codes protecting against noise . in particular, this technique works for models of quantum errors with non - unitary noise operators . 
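numerically, the geometric step just described, intersecting two elliptic curves for a fixed lambda, can be approximated by sampling both curves and collecting near-coincident points. the sketch below uses the generic parametrization z(phi) = center + u cos(phi) + v sin(phi) with complex half-axes u and v; the specific coefficients z_0, w, p(lambda), q, r derived in the appendix are not reproduced here, and a practical implementation would cluster or refine the matches with a root finder.
....
import numpy as np

def sample_ellipse(center, u, v, n=2000):
    # points z(phi) = center + u*cos(phi) + v*sin(phi), with complex
    # half-axes u and v; this is the generic shape of the curves E(lambda)
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return center + u * np.cos(phi) + v * np.sin(phi)

def approximate_intersections(curve1, curve2, tol=1e-3):
    # brute-force search for near-coincident points of two sampled curves
    dist = np.abs(curve1[:, None] - curve2[None, :])
    i, j = np.where(dist < tol)
    return (curve1[i] + curve2[j]) / 2.0
....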
using a simple geometric construction involving an intersection of two ellipses in the complex plane we found such a quantum error - correction code for a simplified model of two - qubit amplitude damping channel and a general noise model with two kraus operators of size with block - diagonal structure .note that the method used here for the two - qubit system is straightforward to generalize for larger dimensions .we expect that further development of this technique will allow for effective construction of quantum error - correction codes protecting information against more general non - unitary noise models .in this appendix a proof of theorem [ t1 ] is presented .consider : let us start by diagonalizing the matrix . to do so , we first subtract half of the trace from the diagonal to obtain a traceless matrix with eigenvalues ^{1/2} ] . thus , assuming that is non - degenerate , we can express the parametrization angle in terms of the compression value as : consider now the second eq . from ( [ teq:1 ] ) . in our current parametrizationit reads : + \\ \cos 2 \alpha \left [ \frac{d - h}{2 } \cos \theta - \frac{f+g}{2 } \sin \theta \cos \varphi \right ] - i \frac{f - g}{2 } \sin \varphi.\end{aligned}\ ] ] by substituting into above expression and using eq .( [ leq:2 ] ) we obtain : \\+ \left[b(d - h)-(f+g)(a - c)\right ] \left[\epsilon^2 - ( a+c-\lambda)^2\right]^{1/2 } \cos \varphi \\- i ( f - g ) \operatorname{sgn}(a - c ) \epsilon\sin \varphi \bigg \}.\end{gathered}\ ] ] let us simplify the notation by denoting : z_0 & = \frac{d+h}{2 } - \frac{\operatorname{sgn}(a - c)}{2\epsilon^2 } \big \ { ( a+c)[b(f+g)+(a - c)(d - h)]\big \},\\ w & = \frac{\operatorname{sgn}(a - c)}{2 \epsilon^2}\left[b(f+g ) + ( a - c)(d - h)\right ] , \\p(\lambda ) & = \frac{\operatorname{sgn}(a - c)}{2 \epsilon^2 } \left[\epsilon^2 - ( a - c-\lambda)^2\right]^{1/2 } , \\ q & = b(d - h)-(a - c)(f+g ) , \quad r = -i \epsilon ( f - g ) \operatorname{sgn}(a - c ). \end{aligned}\ ] ] using the above we can rewrite eq .( [ leq:3 ] ) in the following form : .\end{aligned}\ ] ] eq .( [ leq:5 ] ) defines an entire family of ellipses in the complex plane which belong to .in particular , if we let to run over its available range , that is ] , where .let us denote , , , , , and choose and in the following way : recall that the equation of an ellipse centered at zero is given by : let us now plug - in the coordinates and into above equation .making use of simple trigonometric identities the above equation of the ellipse can be rewritten as : \nonumber \\ + & \left[\alpha(q_1 ^ 2 - r_1 ^ 2 ) + \beta ( q_2 ^ 2 - r_2 ^2 ) + \gamma ( q_1 q_2 - r_1 r_2 ) \right ] \cos 2 \varphi \nonumber \\ + & \left[2 q_1 r_1 \alpha + 2 q_2 r_2 \beta + \gamma ( q_2 r_1 + q_1 r_2 ) \right]\sin 2 \varphi = 2p^2.\end{aligned}\ ] ]this can be satisfied for all values of only if the following set of equations is satisfied : by solving the above set of equations we can easly determine coefficients and , a simple calculation shows that discriminant of eq .( [ leq:7 ] ) is negative : since all variables in above expression are real .if we now rescale and by a factor of and shift it accordingly by and so that and , we get .since an affine transformation sends an ellipse to an ellipse , also describes an ellipse .this completes the proof that defines an ellipse . in order to determine states , where is given by eq .( [ leq:1a ] ) , one has to find angles in terms of the point in the complex plane and a real parameter . 
from eq .( [ leq:2 ] ) one determines the azimuthal angle in terms of . using the definition of onethen finds : using above expressions and eqs .( [ leq:6a ] ) and ( [ leq:6b ] ) one can determine the polar angle : - r_1\left[\widetilde{y } - y_0(\lambda))\right]}{q_1 r_2 - r_1 q_2}.\end{aligned}\ ] ] the angle can be computed by recalling that .the family of states is then given by which completes the proof of the second part of the theorem .we are grateful to the anonymous referee for several constructive comments which allowed us to improve our work .this work has been supported by the polish national science center under the project number dec-2015/18/a/ st2/00274 and by the john templeton foundation under the project no .56033 lidar , d. a. , whaley , b. k. : irreversible quantum dynamics : decoherence - free subspaces and subsystems , pp 83 - 120 , springer , berlin heidelberg ( 2003 ) lidar , d. a. : quantum information and computation for chemistry : review of decoherence - free subspaces , noiseless subsystems , and dynamical decoupling , pp 295 - 354 , wiley & sons , inc .( 2014 ) lidar , d.a . ,brun , t.a . : quantum error correction , springer , cambridge university press ( 2013 ) knill , e. , laflamme , r. : theory of quantum error - correcting codes , phys .a 55 , 900 ( 1997 ) bennet , c. h. , divincenzo , d. p. , smolin , j. a. , wootters , w. k. : mixed state entanglement and quantum error correction , phys . rev . a 54 , 3824 ( 1996 ) nielsen , m. a. , chuang , i. l. : quantum computation and quantum information : 10th anniversary edition , cambridge ( 2011 ) choi , m. d. , kribs , d. w. , yczkowski , k. : quantum error correcting codes from the compression formalism , rep .phys . , 58 , 77 - 91 ( 2006 ) choi , m. d. , kribs , d. w. , yczkowski , k. : higher - rank numerical ranges and compression problems , linear algebra appl . , 2 - 3 , 828 - 839 ( 2006 ) horn , r. a. , johnson , c. r. : matrix analysis , cambridge university press ( 1986 ) gawron , p. , puchaa , z. , miszczak , j. a. , skowronek , . ,yczkowski , k. : restricted numerical range : a versatile tool in the theory of quantum information , j. math .phys . , 51 , 102204 ( 2010 )li , c .- k . : a simple proof of the elliptical range theorem , proc .soc . , 124 , 1985 - 1986 ( 1996 ) grassl , m. , wei , z. , yin , z. q. , zeng , b. : 2014 ieee international symposium on information theory : quantum error - correcting codes for amplitude damping , 906 - 910 ( 2014 ) audretsch , j. : entangled systems : new directions in quantum physics , wiley ( 2007 ) giovannetti , v. , fazio , r. : information - capacity description of spin - chain correlations , phys .a , 71 , 032314 ( 2005 ) bose , s. : quantum communication through an unmodulated spin chain , phys .lett . , 91 , 207901 ( 2003 ) hong - yi , f. , rui , h. : photon - added chaotic field and its damping in a thermo enviroment , can .j. phys . , 93 , 456 - 459 ( 2015 ) cafaro , c. , van loock , p. : approximate quantum error correction for generalized amplitude - damping errors , phys .a 89 , 022316 ( 2014 )
|
we introduce the notion of nuclear numerical range, defined as the set of expectation values of a given operator among normalized pure states that belong to the nucleus of an auxiliary operator. this notion proves useful for investigating models of quantum noise whose kraus operators have a block-diagonal structure. the problem of constructing a suitable quantum error correction code for such a model can be restated as the geometric problem of finding intersection points of certain sets in the complex plane. this technique, worked out here for two-qubit systems, can be generalized to larger dimensions.
|
quantum entanglement underlines the intrinsic order of statistical relations between subsystems of a compound quantum system . due to this surprising feature ,systems of entangled qubits are central to most protocols for transmitting and processing quantum information , such as the quantum key distribution , quantum teleportation , quantum secure direct communication , quantum dense coding , and quantum state sharing .all the applications require the entangled qubit systems to setup the quantum channel . in the applications , photons are the best long - range carriers of quantum information , for photons have long decoherence time , and are relatively easy to manipulate . among various entanglement forms ,the single - photon entanglement with the form of is the simplest one .the single - photon entanglement describes a superposition state , in which the single photon is in two different locations a and b. the single - photon entanglement has wide applications in the qip tasks .for example , the well known duan - lukin - cirac - zoller ( dlcz ) repeater protocol requires the quantum state with the form of , where the and represent the excited state and the ground state of the atomic ensembles , respectively . in 2005 , chou __ observed the spatial entanglement between two atomic ensembles located in distance .it is essentially the creation of the single - photon spatial entanglement by storing the entanglement into the atomic - ensemble - based quantum memory .recently , gottesman _proposed an interesting protocol for constructing an interferometric telescope based on the single - photon entanglement . with the help of the single - photon entanglement ,the protocol has the potential to eliminate the baseline length limit , and realize the interferometers with arbitrarily long baselines in principle .unfortunately , in the practical applications , the environmental noise can make the photonic quantum system decoherence , which may cause the maximally entangled single - photon entangled state degrade to a mixed state or a pure less - entangled state .such less - entangled state may decrease after the entanglement swapping and can not ultimately set up the high quality quantum entanglement channel .therefore , in practical applications , we need to recover the mixed state or the pure less - entangled state into the maximally entangled state .the pure less - entangled state , which will be detailed here , can be recovered into the maximally entangled state by the method of entanglement concentration . in 1996 , bennett _ et al ._ , proposed the first entanglement concentration protocol , which is called the schimidit projection method .it is a great start for the entanglement concentration .later , the ecps based on entanglement swapping and the unitary transformation have been put forward successively . in 2001 , zhao _ et al ._ and yamamoto _ et al . _proposed two similar ecps independently with linear optical elements , both of which were realized in experiment . in 2010 , sheng _ et al ._ described an approach for concentrating the spe . in each concentrationround , they require two pairs of less - entangled states to complete the task . recently , zhou and sheng proposed a simplified ecp for spe .only one pair of less - entangled state and local single photons were required . 
up to now , though several ecps for spe were discussed .they do not consider the exact information encoded in the single photon , which will limit its practical application .moreover , existing ecps for spe are not suitable for the case that the less - entangled spe contains the unknown polarized information in the single photon . because existed ecps all depend on the hong - ou - mandel interference , which requires the two photons are identical .unfortunately , if the polarization information of the spe is unknown , they can not prepare the identical auxiliary single photon to complete the task . in this paper, we put forward two efficient ecps .both they not only can distill the maximally entangled single - photon entangled state from arbitrary less - entangled single - photon entangled state , but also can protect its polarization characteristics .the first ecp is based on linear optics , which is feasible in current experiment .moreover , the second one can be used to repeated to obtain a high success probability .the paper is organized as follows : in sec .ii , we explain the first ecp for the single - photon entangled state . in sec .iii , we explain the second ecp with the help of cross - kerr nonlinearity . in sec .iv , we make a discussion . finally , in sec .v , we present a conclusion .) photon and reflect the vertical polarized ( ) photon .the vbs can adjust the coefficients of the entangled state . in the ecp ,alice and bob share a less - entangled single - photon polarization qubit . with the help of some auxiliary single photons, they can finally distill the maximally entangled single - photon state , while preserving its polarization characteristics.,width=302 ] the basic principle of our first ecp is shown in fig .1 . we suppose a single photon source s1 emits a single - photon entangled state , and sends it to alice and bob in the spatial modes a1 and b1 , respectively .the single - photon entangled state can be described as where and are the coefficients of the initial entangled state , .we consider the polarization of the single - photon quibit can be written as where and represent the horizontal and vertical polarization of the single photon . and are the coefficients of the polarization state , .therefore , the single - photon entangled state can be fully described as bob makes the photon in the b1 mode pass through the polarization beam splitter ( pbs ) , here named pbs1 , which can fully transmit the photon and reflect the photon .it can be easily found that the item will make the single photon in the upper spatial mode b2 , while the item will make the single photon in the lower spatial mode b3 . in this way , after the pbs1 , eq .( [ whole1 ] ) can evolve to , and .afterwards , and can be individually concentrated by the similar process . 
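the bookkeeping of this first step is easy to mirror numerically: the initial state ( [ whole1 ] ) is a product of a spatial qubit (modes a1 and b1, amplitudes alpha and beta) and a polarization qubit (amplitudes a and b), and pbs1 merely reroutes bob's h component into b2 and his v component into b3. the dictionary representation below is only an illustration of that routing, not of the optical hardware; the numbers chosen satisfy the normalization conditions stated above.
....
import numpy as np

def initial_amplitudes(alpha, beta, a, b):
    # amplitudes of (alpha|a1> + beta|b1>) (x) (a|H> + b|V>),
    # keyed by (spatial mode, polarization)
    return {('a1', 'H'): alpha * a, ('a1', 'V'): alpha * b,
            ('b1', 'H'): beta * a,  ('b1', 'V'): beta * b}

def after_pbs1(state):
    # pbs1 on bob's side: transmit H from b1 into b2, reflect V into b3
    out = dict(state)
    out[('b2', 'H')] = out.pop(('b1', 'H'))
    out[('b3', 'V')] = out.pop(('b1', 'V'))
    return out

state = after_pbs1(initial_amplitudes(0.8, 0.6, 0.6, 0.8))
weight_h = sum(abs(v) ** 2 for (m, p), v in state.items() if p == 'H')
weight_v = sum(abs(v) ** 2 for (m, p), v in state.items() if p == 'V')
print(weight_h, weight_v)  # |a|^2 and |b|^2, the weights of the two branches
....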
here, we first explain the concentration process of .a single photon source s2 emits an auxiliary single photon in the polarization , and sends it to bob in the spatial mode b4 .bob makes it pass through a variable beam splitter ( vbs ) with the transmission of , here named vbs1 .after the vbs1 , the quantum state of the auxiliary single photon can be written as in this way , combined with evolves to then , bob makes the photons in the b2 and b5 modes pass through an beam splitter ( bs ) , here named bs1 , which can make after the bs1 , eq .( [ whole2 ] ) will evolve to it can be easily found if only the detector d1 detects exactly one photon , eq .( [ bs1 ] ) will collapse to while if only the detector d2 detects exactly one photon , eq .( [ bs1 ] ) will collapse to there is only a phase difference between eq .( [ max1 ] ) and eq .( [ max2 ] ) .( [ max2 ] ) can be easily converted to eq .( [ max1 ] ) by the phase flip operation . meanwhile , if a suitable vbs with the transmission can be provided , eq . ( [ max1 ] ) can evolve to latexmath:[\[\begin{aligned } so far , the concentration for is completed , and the success probability for getting eq .( [ max3 ] ) is .the concentration process for is similar with that for .first , a single photon source s3 emits an auxiliary single photon in the polarization and sends it to bob in the b7 mode .bob makes this photon pass through the vbs2 with the transmission of , which makes it as then bob makes the photons in the b3 and b8 modes pass through bs2 , which can make after the bs2 , combined with the auxiliary single photon can evolve to in this way , if only the photon detector d3 detects exactly one photon , eq .( [ bs2 ] ) will collapse to if only the photon detector d4 detects exactly one photon , eq .( [ bs2 ] ) will collapse to which can be converted to eq .( [ max4 ] ) by the phase flip operation . under the condition that the transmission of vbs2 is , eq .( [ max4 ] ) can ne rewritten as latexmath:[\[\begin{aligned } so far , we have successfully concentrated to eq .( [ max5 ] ) , with the probability of .finally , bob makes the photons in the b6 and b9 modes pass through the pbs2 , then the whole single photon state can evolve to which can be normalized as ) , it can be found that by operating our ecp , we can successfully concentrate the less - entangled single - photon state while preserving its polarization characteristics .the total success probability of our ecp can be written as the second ecp , we adopt the cross - kerr nonlinearity to construct the quantum nondemolition detector ( qnd ) . in this way , before we start to explain the ecp , we first briefly introduce the cross - kerr nonlinearity .the cross - kerr nonlinearity provides a good way to construct the qnd , which has played an important role in the fields of quantum entanglement , quantum logic gate , quantum teleportation , purification and concentration , and so on .the cross - kerr nonlinearity has a hamiltonian of the form where is the coupling strength of the nonlinearity , which depends on the cross - kerr material . and are the photon number operators for mode a and mode b. in the process of cross - kerr interaction , a laser pulse in the coherent state interacts with the photons through a proper cross - kerr material. the interaction process can be written as we note that and are the number of the photons . if a photon is presented , the interaction will induce the coherent state pick up a phase shift of , otherwise , the coherent state pick up no phase shift . 
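in the idealized model just described, the only effect of the interaction on the probe is a rotation of its coherent amplitude by an angle proportional to the signal photon number, with theta = chi t per photon. a short sketch of that relation (the +theta / -theta sign convention of the two arms of the qnd discussed next is not modelled here):
....
import numpy as np

def cross_kerr_probe(alpha, n_photons, theta):
    # idealized cross-Kerr action on the probe: |alpha> -> |alpha e^{i n theta}>
    return alpha * np.exp(1j * n_photons * theta)

alpha, theta = 2.0, 0.05
for n in (0, 1, 2):
    print(n, np.angle(cross_kerr_probe(alpha, n, theta)))  # phase grows with n
....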
in this way, it can be found that the phase shift is directly proportional to the number of photons . as the phase shift can be measured by the homodyne measurement, the photon number in each spatial mode can be detected without destroying the photons ., while the single photon in the mode a2 will induce it pick up .,width=302 ] in the second ecp , the schematic drawing of the qnd is shown in fig .2 . it can be found that if a photon is presented in the spatial mode a1 , the coherent state will pick up a phase shift of , while if a photon is in the spatial mode a2 , it will pick up a phase shift of .the schematic drawing of the second ecp is shown in fig .we also suppose that alice and bob share a less - entangled single photon polarization qubit in the spatial mode a1 and b1 as eq .( [ whole1 ] ) .bob makes the photon in the b1 mode pass through the pbs1 , which leads to eq .( [ upper ] ) in the spatial modes a1 and b2 with the probability of , and eq .( [ lower ] ) in the spatial modes a1 and b3 , with the probability of . here , we take the concentration process for eq .( [ upper ] ) as an example .a single photon source s2 emits an auxiliary photon in the polarization and sends it to bob in the b4 mode .bob makes the auxiliary photon pass through vbs1 with the transmission of .after the vbs1 , the auxiliary single photon state can be described as then , bob makes the photons in the b2 and b5 modes pass through the qnd1 . in this way ,( [ upper ] ) can evolve to as the phase shift of can not be distinguished by the homodyne measurement , bob selects the items which make the coherent state pick up the phase shift of . and( [ qnd1 ] ) will collapse to with the probability of then , bob makes the photons in the b1 and b4 modes enter the bs1 , which can make after the bs1 , eq .( [ select1 ] ) will evolve to finally , the photons in the d1 and d2 modes are detected by the photon detector d1 and d2 , respectively .it can be found that if d1 detects exactly one photon , eq .( [ 2bs1 ] ) will collapse to while if the d2 detects exactly one photon , eq .( [ 2bs1 ] ) will collapse to if they get eq .( [ 2max1 ] ) , alice or bob can easily convert it to eq .( [ 2max ] ) by the phase flip operation . based on eq .( [ 2max ] ) , if the transmission of vbs1 meets , eq .( [ 2max ] ) can be converted to eq .( [ max3 ] ) .so far , the concentration process for eq .( [ upper ] ) is completed , and eq .( [ upper ] ) can be finally converted to eq .( [ max3 ] ) with the success probability of the concentration process for eq .( [ lower ] ) in the spatial modes a1 and b3 are quite similar .the single photon source s3 emits an auxiliary photon in the state and sends to bob in the b7 mode .based on the concentration steps described above , bob firstly makes the auxiliary photon pass through the vbs2 with the transmission of . then , he lets the photons in the b3 and b8 modes enter the qnd and selects the items which make the coherent state take a phase shift of . in this way ,( [ lower ] ) can finally collapse to with the success probability of . in order to get the maximally entangled single photon state, bob makes the photons in the b3 and b8 modes pass through the bs3 , and then detected by the single photon detector d5 and d6 . under the cases that d5 or d6 exactly detects one photon , eq .( [ 2select ] ) can finally evolve to it is obvious that if a suitable vbs2 with can be provided , eq . ( [ 2bs3 ] ) can be ultimately converted to eq .( [ max5 ] ) . 
until now ,the concentration process for eq .( [ lower ] ) is completed , and its success probability is finally , bob makes the photons in the b6 and b9 modes pass through the pbs2 . after the pbs2 , the output photon state can be written as which is the same as that of the first ecp .interestingly , we can prove that both the concentration process for eq .( [ upper ] ) and eq .( [ lower ] ) can be repeated . here, we also take the concentration for eq .( [ upper ] ) as an example . after the concentration process, we can find under the case that , the discarded items in eq .( [ qnd1 ] ) which make the coherent state pick up no phase shift can be written as then , with the help of the optical switch ( os ) , bob makes the photons in the b5 and b6 modes pass through another bs , here named bs2 , which can make after bs2 , eq .( [ discard1 ] ) can evolve to then the photons in d3 and d4 modes are detected by the detectors d3 and d4 , respectively .if d3 detects exactly one photon , eq .( [ 2bs2 ] ) will collapse to ) will collapse to ) can be converted to eq .( [ new ] ) by the phase flip operation from alice or bob .it can be found that the has the similar form with eq .( [ upper ] ) , that is to say , eq .( [ new ] ) is a new less - entangled single photon state and can be reconcentrated for the next round . in the second concentration round, bob needs to select another vbs1 with the transmission of .the single photon source s2 emits another auxiliary photon in . by making it pass through the vbs1, the auxiliary single photon state can be described as then bob also makes the photons in the b2 and b5 mode pass through the qnd1 .the whole state of the combined with can evolve to bob still selects the items which make the coherent state pick up the phase shift of and makes the photons in the b2 and b5 modes enter the bs1 .after bs1,the photons in the output modes are detected by the detectors d1 and d2 . in this way , they can finally obtain under the case that the transmission , eq .( [ maxn ] ) will finally be converted to eq .( [ max2 ] ) . on the other hand ,the discarded items in the second concentration round can be described as by making the photons in the b5 and b6 modes pass through the bs2 , eq .( [ discard2 ] ) can finally collapse to in each concentration round , where k is the iteration number , the concentration process can be repeated to further concentrate the discarded items to eq .( [ max2 ] ) . similarly , by providing a suitable vbs2 with the transmission of in the gth concentration round , the concentration process of eq .( [ lower ] ) can also be repeatedly to obtain the eq .( [ max5 ] ) .in the paper , we put forward two efficient ecps for arbitrary less - entangled single - photon polarization qubit .both ecps only require one pair of less - entangled single - photon polarization qubit and some auxiliary single photons .moreover , both ecps only require local operations .bob can operate the ecps alone .after the concentration , he only needs to tell alice to remain or discard her photon according to his measurement results . after the concentration process , they can distill the maximally spatial entangled single - photon state while preserve its polarization characteristics .the first ecp is operated with the linear optical elements , which makes it can be easily realized under current experimental conditions .the second ecp is an improved ecp . 
in the second ecp, we adopt the cross - kerr nonlinearities to construct the qnd , which makes this ecp can be used repeatedly to further concentrate the less - entangled state . in both two ecps, we need to know the exact value of the initial entanglement coefficients and . in the experimental process, we can get the values of the two entanglement coefficients by measuring enough amount of initial less - entangled single - photon states .the vbs is the key element to perform the two protocols . especially in the second ecp , they require the vbss with different transmission in each concentration round .the vbs is a common linear optical element in current technology .recently , osorio _ et al ._ reported their results about the heralded photon amplification for quantum communication with the help of the vbs .they used their setup to increase the probability of the single photon from a mixed state . in their experiment , they can adjust the splitting ratio of vbs from 50:50 to 90:10 to increase the visibility from 46.7 3.1% to 96.3 3.8% .based on their results , our requirement for the vbs can be easily realized in practical experiment . in the second ecp , the cross - kerr nonlinearity is also of vice importance . in the practical applications ,the cross - kerr nonlinearity has been regarded as a controversial topic for a long time .the reason is that during the homodyne detection process , the decoherence is inevitable , which may lead the qubit states degrade to the mixed states .meanwhile , the natural cross - kerr nonlinearity is extremely weak so that it is difficult to determine the phase shift due to the impossible discrimination of two overlapping coherent states in homodyne detection .fortunately , according to ref . , the decoherence can be extremely reduced simply by an arbitrary strong coherent state associated with a displacement d( ) performed on the coherent state . moreover , several theoretical works have proved that with the help of weak measurement , it is possible for the phase shift to reach an observable value . ) of our two ecps altered with the initial entanglement coefficient .as the second ecp can be repeated to further concentrate the less - entangled single - photon state , and the p of the first ecp equals to the success probability of the second ecp in the first concentration round , we choose its iteration time for numerical simulation . considering the effect of the single photon detection efficiency ( ) on the the p , we for approximation .it is obvious that the higher initial entanglement lead to the higher p .moreover , by repeating the second ecp , the p be largely increased.,width=302 ] finally , it is interesting to calculate the success probability of the two ecps . as in both two ecps , the single photon detection play prominent role ,it is necessary for us to consider the effect of the single photon detection efficiency ( ) on the success probability of the ecp . in this way, the total success probability of the first ecp can be written as on the other hand , as the second ecp can be repeated to further concentrate the less - entangled state , we can calculate the success probability in each concentration round as where the subscript `` 1'',``2'',,``k '' represent the iteration number . 
in theory ,the second ecp can be reused indefinitely , so that its total success probability equals the sum of the success probability in each concentration round .the total success probability can be written as in practical experiment , the single photon detection has been a big difficulty , due to the quantum decoherence effect of the photon detector .in the optical range , is usually less than . in 2008 , lita _ et al ._ reported their experimental result about the near - infrared single - photon detection .they showed the at 1556 nm can reach . based on their research results, we can make the numerical simulation on the total success probability ( p ) of both the two ecps .4 shows the as a function of the entanglement coefficient . in fig .4 , we assume . as the of the first ecp equals that of the second ecp . in the second ecp , we choose the repeating times for approximation .it is obvious that the is largely dependent on the initial entanglement coefficients .the main reason is that the essence of the entanglement concentration is the entanglement transformation .the entanglement of the concentrated state comes from the initial less - entangled state .moreover , it can be found that by repeating the second ecp , the can be largely increased .in conclusion , we propose two efficient ecps for arbitrary less - entangled single - photon polarization qubit .the first ecp is operated with the linear optical elements , and the second ecp adopts the cross - kerr nonlinearities to construct the qnd , which makes it can be used repeatedly to further concentrate the discarded items of the first ecp .our ecps have some attractive advantages .first , both the two ecps can preserve the polarization characteristics of the single photon qubit , which can preserve the information encoded in the polarization qubit .so far , all the other existing ecps for single photon state do not have this property .second , both the two ecps only require one pair of the less entangled single - photon state and some auxiliary single photons . as the entanglement source is quite precious ,our two ecps are economical .third , our two ecps only require local operations , which can simplify the experimental operations largely . especially , by repeating the second ecp , it can get a high success probability .based on above properties , our two ecps , especially the second ecp may be useful in current quantum communication .this work is supported by the national natural science foundation of china under grant nos .11474168 and 61401222 , the qing lan project in jiangsu province , and the project funded by the priority academic program development of jiangsu higher education institutions .m. hillery , v. bu , and a. berthiaume , phys .a * 59 * , 1829 ( 1999 ) .a. karlsson , m. koashi , and n. imoto , phys .a * 59 * , 162 ( 1999 ) .l. xiao , g. l. long , f. g. deng , j. w. pan , phys . rev .a * 69 * ( 2004 ) 052307 .b. he , j.a .bergou , y.h .ren , phys .a 76 ( 2007 ) 032301 ; b. he , m. nadeem , j.a .bergou , phys .a 79 ( 2009 ) 035802 ; b. he , y.h .ren , j.a .bergou , phys . rev . a 79 ( 2009 ) 052323 ; j. phys .b 43 ( 2010 ) 025502 .q. lin , b. he , phys .a 80 ( 2009 ) 042310 ; q. lin , b. he , j.a .bergou , y.h .ren , phys .a 80 ( 2009 ) 042311 ; q. lin , b , he , phys . rev . a 80 ( 2009 ) 062312 ; q. lin , b. he , phys .a 82 ( 2010 ) 022331 ; q. lin , b. he , phys . rev . a 82 ( 2010 ) 064303 . v. dauria , n. lee , t. amri , c. fabre , j. laurat , phys( 2011 ) 050504 .d. henrich , l. rehm , s. , m. hofherr , k. 
A. E. Lita, A. J. Miller, S. W. Nam, Optics Express 16, 3032 (2008).
|
We propose two efficient entanglement concentration protocols (ECPs) for arbitrary less-entangled single-photon entanglement (SPE). Different from all previous ECPs, these protocols not only obtain the maximally entangled SPE, but also protect the single-qubit information encoded in the polarization degree of freedom. These protocols require only one pair of the less-entangled single-photon entangled state and some auxiliary single photons, which makes them economical. The first ECP is operated with linear optical elements and can be realized with current experimental technology. The second ECP adopts the cross-Kerr nonlinearities. Moreover, the second ECP can be repeated to concentrate the states discarded in some conventional ECPs, so that it can achieve a high success probability. Based on the above properties, our ECPs may be useful in current and future quantum communication.
|
Recently, we investigated the use of a dynamical genetic programming representation scheme (DGP) within learning classifier systems (LCS). It was shown that LCS are able to evolve ensembles of random Boolean networks (RBN) to solve a number of discrete-valued computational tasks. Additionally, it was shown possible to exploit memory existing inherently within the DGP representation. Moreover, the networks in DGP are updated asynchronously, a potentially more realistic model of genetic regulatory networks (GRN) in general. Fuzzy set theory is a generalization of Boolean logic in which continuous variables can partially belong to sets. A fuzzy set is defined by a membership function, typically within the range [0, 1]. In the fuzzy DGP representation, each node's inputs are specified by a connection list, in which an entry of 0 represents no input to be received on that connection. Each integer in the connection list, along with the node function, is subjected to mutation on reproduction at the self-adapting rate for that rule. The output nodes provide a real-numbered output in the range [0, 1]. After building the match set [M], the action set [A] is built by selecting a single classifier from [M]; however, similar to XCSF, the fitness adjustment takes place in [A]. Exploitation functions by selecting the single rule with the highest prediction multiplied by accuracy from [M]. A frog is given the learning task of jumping to catch a fly that is at some distance from the frog. The frog receives a sensory input reflecting that distance before jumping a chosen distance, and receives a reward based on its new distance from the fly. The parameters used here are the same as those used in the earlier studies. [fig:performance] illustrates the performance of fDGP-XCSF in the continuous-action frog problem. It can be seen that greater than 99% performance is achieved in fewer than 4,000 trials, which is faster than previously reported systems (>99% after 30,000 trials and >95% after 10,000 trials, respectively), and with minimal changes resulting in none of the drawbacks; i.e.,
exploration is here conducted with roulette-wheel selection on prediction instead of deterministically selecting the highest-predicting rule, enabling true reinforcement learning. Furthermore, in the compared approach, the action weights update component includes the evaluation of the offspring on the last input/payoff before it is discarded if the mutant offspring is not more accurate than the parent; therefore, additional evaluations are performed there which are not reflected in the number of trials reported.
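To make the exploration step concrete, a minimal sketch of roulette-wheel selection on prediction follows; the prediction values and the plain-list representation of the rule set are illustrative placeholders, not the system's actual data structures.

import random

# Minimal sketch: roulette-wheel action selection on prediction.  During
# exploration a rule is chosen with probability proportional to its
# prediction, rather than deterministically picking the highest predictor.
# The prediction values below are illustrative placeholders.

def roulette_wheel_select(predictions):
    """Return the index of a rule chosen proportionally to its prediction."""
    total = sum(predictions)
    threshold = random.uniform(0.0, total)
    running = 0.0
    for i, p in enumerate(predictions):
        running += p
        if running >= threshold:
            return i
    return len(predictions) - 1  # guard against floating-point round-off

if __name__ == "__main__":
    preds = [10.0, 55.0, 35.0]  # hypothetical rule predictions
    picks = [roulette_wheel_select(preds) for _ in range(100000)]
    print([round(picks.count(i) / len(picks), 2) for i in range(len(preds))])
    # roughly [0.10, 0.55, 0.35]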
[fig:performance]: fDGP-XCSF performance, error, and macro-classifiers/2000 in the continuous-action frog problem, plotted against the number of trials (0 to 100,000); the vertical axis ranges from 0 to 1.
324.00,173.17)(0.500,-1.000)2 ' '' '' ( 322.0,174.0 ) ' '' '' ( 325,173 ) ( 325,172.67 ) ' '' '' ( 325.00,173.17)(0.500,-1.000)2 ' '' '' ( 325.0,173.0 ) ( 326,173 ) ( 326,171.67 ) ' '' '' ( 326.00,172.17)(0.500,-1.000)2 ' '' '' ( 327,172 ) ( 327.0,172.0 ) ' '' '' ( 330.0,171.0 ) ( 330.0,171.0 ) ( 331.0,171.0 ) ( 331.0,172.0 ) ( 332.0,171.0 ) ( 334,169.67 ) ' '' '' ( 334.00,170.17)(0.500,-1.000)2 ' '' '' ( 332.0,171.0 ) ' '' '' ( 335,170 ) ( 335.0,170.0 ) ( 336.0,169.0 ) ( 337,168.67 ) ' '' '' ( 337.00,168.17)(0.500,1.000)2 ' '' '' ( 336.0,169.0 ) ( 338.0,169.0 ) ( 338.0,169.0 ) ' '' '' ( 341.0,168.0 ) ( 341.0,168.0 ) ( 342.0,168.0 ) ( 344,167.67 ) ' '' '' ( 344.00,168.17)(0.500,-1.000)2 ' '' '' ( 342.0,169.0 ) ' '' '' ( 345,168 ) ( 346,167.67 ) ' '' '' ( 346.00,167.17)(0.500,1.000)2 ' '' '' ( 345.0,168.0 ) ( 347.0,168.0 ) ( 347.0,168.0 ) ' '' '' ( 349.0,167.0 ) ( 349.0,167.0 ) ( 349.0,168.0 ) ' '' '' ( 351.0,167.0 ) ( 355,165.67 ) ' '' '' ( 355.00,166.17)(0.500,-1.000)2 ' '' '' ( 351.0,167.0 ) ' '' '' ( 356,166 ) ( 356.0,166.0 ) ' '' '' ( 361,164.67 ) ' '' '' ( 361.00,164.17)(0.500,1.000)2 ' '' '' ( 361.0,165.0 ) ( 362,166 ) ( 362.0,165.0 ) ( 365,164.67 ) ' '' '' ( 365.00,164.17)(0.500,1.000)2 ' '' '' ( 362.0,165.0 ) ' '' '' ( 366,166 ) ( 366.0,166.0 ) ( 367.0,165.0 ) ( 367.0,165.0 ) ( 368.0,165.0 ) ( 368.0,165.0 ) ( 368.0,165.0 ) ' '' '' ( 373,163.67 ) ' '' '' ( 373.00,163.17)(0.500,1.000)2 ' '' '' ( 373.0,164.0 ) ( 374.0,163.0 ) ' '' '' ( 374.0,163.0 ) ( 374.0,164.0 ) ' '' '' ( 378,163.67 ) ' '' '' ( 378.00,164.17)(0.500,-1.000)2 ' '' '' ( 378.0,164.0 ) ( 379,164 ) ( 379.0,164.0 ) ( 380.0,164.0 ) ( 380,163.67 ) ' '' '' ( 380.00,163.17)(0.500,1.000)2 ' '' '' ( 380.0,164.0 ) ( 381.0,164.0 ) ( 387,162.67 ) ' '' '' ( 387.00,163.17)(0.500,-1.000)2 ' '' '' ( 381.0,164.0 ) ' '' '' ( 388,163 ) ( 388,162.67 ) ' '' '' ( 388.00,162.17)(0.500,1.000)2 ' '' '' ( 389,164 ) ( 389,164 ) ( 389,162.67 ) ' '' '' ( 389.00,163.17)(0.500,-1.000)2 ' '' '' ( 390,163 ) ( 390.0,163.0 ) ( 391,162.67 ) ' '' '' ( 391.00,163.17)(0.500,-1.000)2 ' '' '' ( 391.0,163.0 ) ( 392,163 ) ( 392,163 ) ( 394,161.67 ) ' '' '' ( 394.00,162.17)(0.500,-1.000)2 ' '' '' ( 392.0,163.0 ) ' '' '' ( 395,161.67 ) ' '' '' ( 395.00,162.17)(0.500,-1.000)2 ' '' '' ( 395.0,162.0 ) ( 396.0,162.0 ) ( 398,161.67 ) ' '' '' ( 398.00,162.17)(0.500,-1.000)2 ' '' '' ( 396.0,163.0 ) ' '' '' ( 399.0,162.0 ) ( 403,161.67 ) ' '' '' ( 403.00,162.17)(0.500,-1.000)2 ' '' '' ( 399.0,163.0 ) ' '' '' ( 404,162 ) ( 404,162 ) ( 405,161.67 ) ' '' '' ( 405.00,161.17)(0.500,1.000)2 ' '' '' ( 404.0,162.0 ) ( 406.0,162.0 ) ( 406.0,162.0 ) ( 407.0,162.0 ) ( 407.0,162.0 ) ( 407.0,162.0 ) ' '' '' ( 410.0,162.0 ) ( 410,161.67 ) ' '' '' ( 410.00,161.17)(0.500,1.000)2 ' '' '' ( 410.0,162.0 ) ( 411.0,162.0 ) ( 411.0,162.0 ) ' '' '' ( 419.0,161.0 ) ( 420,160.67 ) ' '' '' ( 420.00,160.17)(0.500,1.000)2 ' '' '' ( 419.0,161.0 ) ( 421.0,161.0 ) ( 422,160.67 ) ' '' '' ( 422.00,160.17)(0.500,1.000)2 ' '' '' ( 421.0,161.0 ) ( 423,162 ) ( 423,160.67 ) ' '' '' ( 423.00,161.17)(0.500,-1.000)2 ' '' '' ( 424,160.67 ) ' '' '' ( 424.00,161.17)(0.500,-1.000)2 ' '' '' ( 424.0,161.0 ) ( 425,161 ) ( 425,161 ) ( 426,160.67 ) ' '' '' ( 426.00,160.17)(0.500,1.000)2 ' '' '' ( 425.0,161.0 ) ( 427.0,161.0 ) ( 427.0,161.0 ) ' '' '' ( 429.0,161.0 ) ( 429.0,162.0 ) ' '' '' ( 431,160.67 ) ' '' '' ( 431.00,160.17)(0.500,1.000)2 ' '' '' ( 431.0,161.0 ) ( 432,162 ) ( 432.0,161.0 ) ( 432.0,161.0 ) ' '' '' ( 441.0,161.0 ) ( 441,160.67 ) ' '' '' ( 441.00,160.17)(0.500,1.000)2 ' '' '' ( 441.0,161.0 ) ( 442,162 ) 
( 442,160.67 ) ' '' '' ( 442.00,161.17)(0.500,-1.000)2 ' '' '' ( 443,161 ) ( 443.0,161.0 ) ( 444,159.67 ) ' '' '' ( 444.00,159.17)(0.500,1.000)2 ' '' '' ( 444.0,160.0 ) ( 445,161 ) ( 445.0,161.0 ) ( 446,159.67 ) ' '' '' ( 446.00,159.17)(0.500,1.000)2 ' '' '' ( 446.0,160.0 ) ( 447.0,160.0 ) ( 447,159.67 ) ' '' '' ( 447.00,160.17)(0.500,-1.000)2 ' '' '' ( 447.0,160.0 ) ( 448,160 ) ( 448,159.67 ) ' '' '' ( 448.00,159.17)(0.500,1.000)2 ' '' '' ( 449,161 ) ( 449,159.67 ) ' '' '' ( 449.00,160.17)(0.500,-1.000)2 ' '' '' ( 450,160 ) ( 450,160 ) ( 450.0,160.0 ) ' '' '' ( 452,159.67 ) ' '' '' ( 452.00,160.17)(0.500,-1.000)2 ' '' '' ( 452.0,160.0 ) ( 453,160 ) ( 453,160 ) ( 455,159.67 ) ' '' '' ( 455.00,159.17)(0.500,1.000)2 ' '' '' ( 453.0,160.0 ) ' '' '' ( 456.0,160.0 ) ( 456.0,160.0 ) ' '' '' ( 460,159.67 ) ' '' '' ( 460.00,160.17)(0.500,-1.000)2 ' '' '' ( 460.0,160.0 ) ( 461,159.67 ) ' '' '' ( 461.00,160.17)(0.500,-1.000)2 ' '' '' ( 461.0,160.0 ) ( 462,160 ) ( 462,160 ) ( 462.0,160.0 ) ' '' '' ( 465,158.67 ) ' '' '' ( 465.00,158.17)(0.500,1.000)2 ' '' '' ( 465.0,159.0 ) ( 466,160 ) ( 466.0,160.0 ) ' '' '' ( 469.0,159.0 ) ( 469.0,159.0 ) ( 470,158.67 ) ' '' '' ( 470.00,159.17)(0.500,-1.000)2 ' '' '' ( 470.0,159.0 ) ( 471.0,159.0 ) ( 472,158.67 ) ' '' '' ( 472.00,159.17)(0.500,-1.000)2 ' '' '' ( 471.0,160.0 ) ( 473,159 ) ( 477,158.67 ) ' '' '' ( 477.00,158.17)(0.500,1.000)2 ' '' '' ( 473.0,159.0 ) ' '' '' ( 478.0,159.0 ) ( 479,158.67 ) ' '' '' ( 479.00,158.17)(0.500,1.000)2 ' '' '' ( 478.0,159.0 ) ( 480,160 ) ( 480,160 ) ( 480,158.67 ) ' '' '' ( 480.00,159.17)(0.500,-1.000)2 ' '' '' ( 481,159 ) ( 482,158.67 ) ' '' '' ( 482.00,158.17)(0.500,1.000)2 ' '' '' ( 481.0,159.0 ) ( 483,160 ) ( 483,160 ) ( 483,158.67 ) ' '' '' ( 483.00,159.17)(0.500,-1.000)2 ' '' '' ( 484.0,159.0 ) ( 484.0,160.0 ) ' '' '' ( 487.0,159.0 ) ( 487.0,159.0 ) ' '' '' ( 489.0,159.0 ) ( 489.0,160.0 ) ( 490.0,159.0 ) ( 490.0,159.0 ) ( 491.0,159.0 ) ( 491.0,160.0 ) ' '' '' ( 493.0,159.0 ) ( 499,157.67 ) ' '' '' ( 499.00,158.17)(0.500,-1.000)2 ' '' '' ( 493.0,159.0 ) ' '' '' ( 500,158 ) ( 500,157.67 ) ' '' '' ( 500.00,157.17)(0.500,1.000)2 ' '' '' ( 501,159 ) ( 502,157.67 ) ' '' '' ( 502.00,158.17)(0.500,-1.000)2 ' '' '' ( 501.0,159.0 ) ( 503,158 ) ( 503.0,158.0 ) ( 504,157.67 ) ' '' '' ( 504.00,158.17)(0.500,-1.000)2 ' '' '' ( 504.0,158.0 ) ( 505,158 ) ( 505,158 ) ( 506,157.67 ) ' '' '' ( 506.00,157.17)(0.500,1.000)2 ' '' '' ( 505.0,158.0 ) ( 507,159 ) ( 507,157.67 ) ' '' '' ( 507.00,158.17)(0.500,-1.000)2 ' '' '' ( 508,158 ) ( 508,157.67 ) ' '' '' ( 508.00,158.17)(0.500,-1.000)2 ' '' '' ( 508.0,158.0 ) ( 509,158 ) ( 509.0,158.0 ) ' '' '' ( 511.0,158.0 ) ( 512,157.67 ) ' '' '' ( 512.00,158.17)(0.500,-1.000)2 ' '' '' ( 511.0,159.0 ) ( 513,158 ) ( 519,157.67 ) ' '' '' ( 519.00,157.17)(0.500,1.000)2 ' '' '' ( 513.0,158.0 ) ' '' '' ( 520,159 ) ( 520.0,158.0 ) ( 520.0,158.0 ) ' '' '' ( 528.0,157.0 ) ( 528.0,157.0 ) ( 529.0,157.0 ) ( 529.0,158.0 ) ' '' '' ( 533.0,157.0 ) ( 533.0,157.0 ) ( 534.0,157.0 ) ( 534.0,158.0 ) ' '' '' ( 537,157.67 ) ' '' '' ( 537.00,158.17)(0.500,-1.000)2 ' '' '' ( 537.0,158.0 ) ( 538,158 ) ( 538,158 ) ( 539,156.67 ) ' '' '' ( 539.00,157.17)(0.500,-1.000)2 ' '' '' ( 538.0,158.0 ) ( 540.0,157.0 ) ( 541,156.67 ) ' '' '' ( 541.00,157.17)(0.500,-1.000)2 ' '' '' ( 540.0,158.0 ) ( 542,157 ) ( 542.0,157.0 ) ( 543.0,157.0 ) ( 543.0,158.0 ) ( 544.0,157.0 ) ( 544.0,157.0 ) ( 544.0,158.0 ) ' '' '' ( 548.0,157.0 ) ( 548.0,157.0 ) ( 549.0,157.0 ) ( 549.0,158.0 ) ' '' '' ( 553.0,157.0 ) ( 553.0,157.0 ) ' '' '' ( 557,156.67 ) ' '' 
'' ( 557.00,157.17)(0.500,-1.000)2 ' '' '' ( 557.0,157.0 ) ( 558,156.67 ) ' '' '' ( 558.00,157.17)(0.500,-1.000)2 ' '' '' ( 558.0,157.0 ) ( 559,156.67 ) ' '' '' ( 559.00,157.17)(0.500,-1.000)2 ' '' '' ( 559.0,157.0 ) ( 560.0,157.0 ) ( 560.0,158.0 ) ( 561.0,157.0 ) ( 561.0,157.0 ) ' '' '' ( 566.0,157.0 ) ( 566.0,158.0 ) ' '' '' ( 568.0,157.0 ) ( 568.0,157.0 ) ' '' '' ( 572,155.67 ) ' '' '' ( 572.00,155.17)(0.500,1.000)2 ' '' '' ( 572.0,156.0 ) ( 573,156.67 ) ' '' '' ( 573.00,157.17)(0.500,-1.000)2 ' '' '' ( 573.0,157.0 ) ( 574,157 ) ( 574.0,157.0 ) ' '' '' ( 576,156.67 ) ' '' '' ( 576.00,157.17)(0.500,-1.000)2 ' '' '' ( 576.0,157.0 ) ( 577,157 ) ( 578,156.67 ) ' '' '' ( 578.00,156.17)(0.500,1.000)2 ' '' '' ( 577.0,157.0 ) ( 579,156.67 ) ' '' '' ( 579.00,156.17)(0.500,1.000)2 ' '' '' ( 579.0,157.0 ) ( 580,158 ) ( 580,156.67 ) ' '' '' ( 580.00,157.17)(0.500,-1.000)2 ' '' '' ( 581,157 ) ( 581,157 ) ( 585,156.67 ) ' '' '' ( 585.00,156.17)(0.500,1.000)2 ' '' '' ( 581.0,157.0 ) ' '' '' ( 586.0,157.0 ) ( 586.0,157.0 ) ' '' '' ( 591,156.67 ) ' '' '' ( 591.00,157.17)(0.500,-1.000)2 ' '' '' ( 591.0,157.0 ) ( 592,157 ) ( 592.0,157.0 ) ' '' '' ( 594,155.67 ) ' '' '' ( 594.00,155.17)(0.500,1.000)2 ' '' '' ( 594.0,156.0 ) ( 595,157 ) ( 595,155.67 ) ' '' '' ( 595.00,156.17)(0.500,-1.000)2 ' '' '' ( 596.0,156.0 ) ( 596.0,157.0 ) ( 597.0,156.0 ) ( 600,155.67 ) ' '' '' ( 600.00,155.17)(0.500,1.000)2 ' '' '' ( 597.0,156.0 ) ' '' '' ( 601,157 ) ( 607,155.67 ) ' '' '' ( 607.00,156.17)(0.500,-1.000)2 ' '' '' ( 601.0,157.0 ) ' '' '' ( 608,156 ) ( 608,155.67 ) ' '' '' ( 608.00,156.17)(0.500,-1.000)2 ' '' '' ( 608.0,156.0 ) ( 609.0,156.0 ) ( 616,155.67 ) ' '' '' ( 616.00,156.17)(0.500,-1.000)2 ' '' '' ( 609.0,157.0 ) ' '' '' ( 617,156 ) ( 617.0,156.0 ) ( 617.0,157.0 ) ( 618.0,156.0 ) ( 618.0,156.0 ) ' '' '' ( 620.0,156.0 ) ( 620.0,156.0 ) ( 620.0,156.0 ) ( 621.0,156.0 ) ( 621.0,157.0 ) ' '' '' ( 623.0,156.0 ) ( 625,155.67 ) ' '' '' ( 625.00,155.17)(0.500,1.000)2 ' '' '' ( 623.0,156.0 ) ' '' '' ( 626.0,156.0 ) ( 627,155.67 ) ' '' '' ( 627.00,155.17)(0.500,1.000)2 ' '' '' ( 626.0,156.0 ) ( 628.0,156.0 ) ( 628.0,156.0 ) ' '' '' ( 630.0,155.0 ) ( 630.0,155.0 ) ( 630.0,156.0 ) ' '' '' ( 636.0,156.0 ) ( 636,155.67 ) ' '' '' ( 636.00,155.17)(0.500,1.000)2 ' '' '' ( 636.0,156.0 ) ( 637,157 ) ( 637.0,157.0 ) ' '' '' ( 639.0,156.0 ) ( 639.0,156.0 ) ( 640,155.67 ) ' '' '' ( 640.00,156.17)(0.500,-1.000)2 ' '' '' ( 640.0,156.0 ) ( 641,156 ) ( 641.0,156.0 ) ( 642,155.67 ) ' '' '' ( 642.00,156.17)(0.500,-1.000)2 ' '' '' ( 642.0,156.0 ) ( 643,155.67 ) ' '' '' ( 643.00,156.17)(0.500,-1.000)2 ' '' '' ( 643.0,156.0 ) ( 644,156 ) ( 644,155.67 ) ' '' '' ( 644.00,155.17)(0.500,1.000)2 ' '' '' ( 645.0,156.0 ) ( 645.0,156.0 ) ( 645.0,157.0 ) ( 646,155.67 ) ' '' '' ( 646.00,155.17)(0.500,1.000)2 ' '' '' ( 646.0,156.0 ) ( 647,157 ) ( 647,155.67 ) ' '' '' ( 647.00,156.17)(0.500,-1.000)2 ' '' '' ( 648,156 ) ( 648,155.67 ) ' '' '' ( 648.00,156.17)(0.500,-1.000)2 ' '' '' ( 648.0,156.0 ) ( 649,156 ) ( 649.0,156.0 ) ' '' '' ( 657.0,156.0 ) ( 657.0,157.0 ) ( 658.0,156.0 ) ( 658.0,156.0 ) ( 659,154.67 ) ' '' '' ( 659.00,154.17)(0.500,1.000)2 ' '' '' ( 659.0,155.0 ) ( 660,156 ) ( 660,156 ) ( 660,155.67 ) ' '' '' ( 660.00,155.17)(0.500,1.000)2 ' '' '' ( 661,157 ) ( 661,155.67 ) ' '' '' ( 661.00,156.17)(0.500,-1.000)2 ' '' '' ( 662,156 ) ( 662,155.67 ) ' '' '' ( 662.00,155.17)(0.500,1.000)2 ' '' '' ( 663,157 ) ( 663,157 ) ( 665,155.67 ) ' '' '' ( 665.00,156.17)(0.500,-1.000)2 ' '' '' ( 663.0,157.0 ) ' '' '' ( 666,156 ) ( 666,156 ) ( 670,155.67 ) ' '' 
'' ( 670.00,155.17)(0.500,1.000)2 ' '' '' ( 666.0,156.0 ) ' '' '' ( 671,157 ) ( 671.0,157.0 ) ( 672.0,156.0 ) ( 672.0,156.0 ) ' '' '' ( 674,154.67 ) ' '' '' ( 674.00,154.17)(0.500,1.000)2 ' '' '' ( 674.0,155.0 ) ( 675,156 ) ( 675.0,155.0 ) ( 675.0,155.0 ) ( 676.0,155.0 ) ( 676.0,156.0 ) ' '' '' ( 680,154.67 ) ' '' '' ( 680.00,154.17)(0.500,1.000)2 ' '' '' ( 680.0,155.0 ) ( 681,156 ) ( 681,156 ) ( 681.0,156.0 ) ' '' '' ( 683,155.67 ) ' '' '' ( 683.00,156.17)(0.500,-1.000)2 ' '' '' ( 683.0,156.0 ) ( 684,156 ) ( 684,156 ) ( 684,154.67 ) ' '' '' ( 684.00,155.17)(0.500,-1.000)2 ' '' '' ( 685.0,155.0 ) ( 685.0,156.0 ) ( 686.0,155.0 ) ( 686.0,155.0 ) ( 687,154.67 ) ' '' '' ( 687.00,155.17)(0.500,-1.000)2 ' '' '' ( 687.0,155.0 ) ( 688,155 ) ( 689,154.67 ) ' '' '' ( 689.00,154.17)(0.500,1.000)2 ' '' '' ( 688.0,155.0 ) ( 690,156 ) ( 690,154.67 ) ' '' '' ( 690.00,154.17)(0.500,1.000)2 ' '' '' ( 690.0,155.0 ) ( 691,156 ) ( 691,154.67 ) ' '' '' ( 691.00,155.17)(0.500,-1.000)2 ' '' '' ( 692,155 ) ( 692.0,155.0 ) ( 693,154.67 ) ' '' '' ( 693.00,155.17)(0.500,-1.000)2 ' '' '' ( 693.0,155.0 ) ( 694,155 ) ( 694,155 ) ( 695,154.67 ) ' '' '' ( 695.00,154.17)(0.500,1.000)2 ' '' '' ( 694.0,155.0 ) ( 696,154.67 ) ' '' '' ( 696.00,154.17)(0.500,1.000)2 ' '' '' ( 696.0,155.0 ) ( 697,156 ) ( 697,156 ) ( 697.0,156.0 ) ( 698,154.67 ) ' '' '' ( 698.00,154.17)(0.500,1.000)2 ' '' '' ( 698.0,155.0 ) ( 699,154.67 ) ' '' '' ( 699.00,154.17)(0.500,1.000)2 ' '' '' ( 699.0,155.0 ) ( 700,154.67 ) ' '' '' ( 700.00,154.17)(0.500,1.000)2 ' '' '' ( 700.0,155.0 ) ( 701,156 ) ( 701.0,156.0 ) ' '' '' ( 703.0,155.0 ) ( 703.0,155.0 ) ( 704,153.67 ) ' '' '' ( 704.00,153.17)(0.500,1.000)2 ' '' '' ( 704.0,154.0 ) ( 705,155 ) ( 705,154.67 ) ' '' '' ( 705.00,154.17)(0.500,1.000)2 ' '' '' ( 706.0,155.0 ) ( 706.0,155.0 ) ( 707.0,155.0 ) ( 708,154.67 ) ' '' '' ( 708.00,155.17)(0.500,-1.000)2 ' '' '' ( 707.0,156.0 ) ( 709,155 ) ( 709,154.67 ) ' '' '' ( 709.00,155.17)(0.500,-1.000)2 ' '' '' ( 709.0,155.0 ) ( 710,154.67 ) ' '' '' ( 710.00,155.17)(0.500,-1.000)2 ' '' '' ( 710.0,155.0 ) ( 711,155 ) ( 711.0,155.0 ) ' '' '' ( 718.67,154 ) ' '' '' ( 718.17,154.00)(1.000,1.000)2 ' '' '' ( 719.0,154.0 ) ( 720,156 ) ( 722,154.67 ) ' '' '' ( 722.00,155.17)(0.500,-1.000)2 ' '' '' ( 720.0,156.0 ) ' '' '' ( 723,155 ) ( 723,154.67 ) ' '' '' ( 723.00,154.17)(0.500,1.000)2 ' '' '' ( 724,156 ) ( 724.0,155.0 ) ( 724.0,155.0 ) ( 725.0,155.0 ) ( 726,154.67 ) ' '' '' ( 726.00,155.17)(0.500,-1.000)2 ' '' '' ( 725.0,156.0 ) ( 727,155 ) ( 727,155 ) ( 728,154.67 ) ' '' '' ( 728.00,154.17)(0.500,1.000)2 ' '' '' ( 727.0,155.0 ) ( 729,156 ) ( 729,154.67 ) ' '' '' ( 729.00,155.17)(0.500,-1.000)2 ' '' '' ( 730,155 ) ( 730,153.67 ) ' '' '' ( 730.00,153.17)(0.500,1.000)2 ' '' '' ( 730.0,154.0 ) ( 731,155 ) ( 734,153.67 ) ' '' '' ( 734.00,154.17)(0.500,-1.000)2 ' '' '' ( 731.0,155.0 ) ' '' '' ( 735.0,154.0 ) ( 738,153.67 ) ' '' '' ( 738.00,154.17)(0.500,-1.000)2 ' '' '' ( 735.0,155.0 ) ' '' '' ( 739.0,154.0 ) ( 739.0,155.0 ) ' '' '' ( 742.0,155.0 ) ( 742.0,156.0 ) ( 743.0,155.0 ) ( 743.0,155.0 ) ( 744,153.67 ) ' '' '' ( 744.00,153.17)(0.500,1.000)2 ' '' '' ( 744.0,154.0 ) ( 745,155 ) ( 745,155 ) ( 746,153.67 ) ' '' '' ( 746.00,154.17)(0.500,-1.000)2 ' '' '' ( 745.0,155.0 ) ( 747.0,154.0 ) ( 747.0,155.0 ) ' '' '' ( 749.0,154.0 ) ( 750,153.67 ) ' '' '' ( 750.00,153.17)(0.500,1.000)2 ' '' '' ( 749.0,154.0 ) ( 751,155 ) ( 751,155 ) ( 752,153.67 ) ' '' '' ( 752.00,154.17)(0.500,-1.000)2 ' '' '' ( 751.0,155.0 ) ( 753,154 ) ( 753,153.67 ) ' '' '' ( 753.00,153.17)(0.500,1.000)2 ' '' '' 
( 754,155 ) ( 754,155 ) ( 754,153.67 ) ' '' '' ( 754.00,154.17)(0.500,-1.000)2 ' '' '' ( 755,154 ) ( 755,153.67 ) ' '' '' ( 755.00,153.17)(0.500,1.000)2 ' '' '' ( 756,155 ) ( 759,153.67 ) ' '' '' ( 759.00,154.17)(0.500,-1.000)2 ' '' '' ( 756.0,155.0 ) ' '' '' ( 760.0,154.0 ) ( 765,154.67 ) ' '' '' ( 765.00,154.17)(0.500,1.000)2 ' '' '' ( 760.0,155.0 ) ' '' '' ( 766,154.67 ) ' '' '' ( 766.00,154.17)(0.500,1.000)2 ' '' '' ( 766.0,155.0 ) ( 767.0,155.0 ) ( 767.0,155.0 ) ' '' '' ( 776,154.67 ) ' '' '' ( 776.00,155.17)(0.500,-1.000)2 ' '' '' ( 776.0,155.0 ) ( 777,155 ) ( 778,153.67 ) ' '' '' ( 778.00,154.17)(0.500,-1.000)2 ' '' '' ( 777.0,155.0 ) ( 779,154 ) ( 779,153.67 ) ' '' '' ( 779.00,154.17)(0.500,-1.000)2 ' '' '' ( 779.0,154.0 ) ( 780.0,154.0 ) ( 783,153.67 ) ' '' '' ( 783.00,154.17)(0.500,-1.000)2 ' '' '' ( 780.0,155.0 ) ' '' '' ( 784,154 ) ( 785,153.67 ) ' '' '' ( 785.00,153.17)(0.500,1.000)2 ' '' '' ( 784.0,154.0 ) ( 786.0,154.0 ) ( 788,152.67 ) ' '' '' ( 788.00,153.17)(0.500,-1.000)2 ' '' '' ( 786.0,154.0 ) ' '' '' ( 789.0,153.0 ) ( 789.0,154.0 ) ( 790.0,154.0 ) ( 790.0,155.0 ) ( 791.0,154.0 ) ( 791.0,154.0 ) ( 791.0,155.0 ) ( 792,153.67 ) ' '' '' ( 792.00,153.17)(0.500,1.000)2 ' '' '' ( 792.0,154.0 ) ( 793,155 ) ( 793.0,155.0 ) ( 794.0,154.0 ) ( 794.0,154.0 ) ( 794.0,155.0 ) ( 795.0,154.0 ) ( 795.0,154.0 ) ( 796.0,154.0 ) ( 797,153.67 ) ' '' '' ( 797.00,154.17)(0.500,-1.000)2 ' '' '' ( 796.0,155.0 ) ( 798,154 ) ( 798,153.67 ) ' '' '' ( 798.00,153.17)(0.500,1.000)2 ' '' '' ( 799.0,154.0 ) ( 801,153.67 ) ' '' '' ( 801.00,153.17)(0.500,1.000)2 ' '' '' ( 799.0,154.0 ) ' '' '' ( 802,155 ) ( 802,153.67 ) ' '' '' ( 802.00,154.17)(0.500,-1.000)2 ' '' '' ( 803,154 ) ( 803,154 ) ( 803,153.67 ) ' '' '' ( 803.00,153.17)(0.500,1.000)2 ' '' '' ( 804,155 ) ( 804.0,155.0 ) ( 805.0,154.0 ) ( 805.0,154.0 ) ' '' '' ( 807.0,154.0 ) ( 807.0,155.0 ) ( 808,153.67 ) ' '' '' ( 808.00,153.17)(0.500,1.000)2 ' '' '' ( 808.0,154.0 ) ( 809,155 ) ( 809,155 ) ( 812,154.67 ) ' '' '' ( 812.00,154.17)(0.500,1.000)2 ' '' '' ( 809.0,155.0 ) ' '' '' ( 813.0,155.0 ) ( 816,153.67 ) ' '' '' ( 816.00,154.17)(0.500,-1.000)2 ' '' '' ( 813.0,155.0 ) ' '' '' ( 817,154 ) ( 817.0,154.0 ) ( 818,153.67 ) ' '' '' ( 818.00,154.17)(0.500,-1.000)2 ' '' '' ( 818.0,154.0 ) ( 819.0,154.0 ) ( 819.0,155.0 ) ( 820.0,154.0 ) ( 820.0,154.0 ) ( 821.0,154.0 ) ( 821.0,155.0 ) ' '' '' ( 824.0,154.0 ) ( 824.0,154.0 ) ' '' '' ( 828.0,153.0 ) ( 828.0,153.0 ) ( 828.0,154.0 ) ' '' '' ( 830,153.67 ) ' '' '' ( 830.00,154.17)(0.500,-1.000)2 ' '' '' ( 830.0,154.0 ) ( 831,154 ) ( 831,154 ) ( 831.0,154.0 ) ' '' '' ( 834.0,154.0 ) ( 834.0,155.0 ) ( 835.0,154.0 ) ( 835.0,154.0 ) ( 836.0,153.0 ) ( 836.0,153.0 ) ( 837.0,153.0 ) ( 841,153.67 ) ' '' '' ( 841.00,153.17)(0.500,1.000)2 ' '' '' ( 837.0,154.0 ) ' '' '' ( 842.0,154.0 ) ( 849,152.67 ) ' '' '' ( 849.00,153.17)(0.500,-1.000)2 ' '' '' ( 842.0,154.0 ) ' '' '' ( 850,153 ) ( 850.0,153.0 ) ( 851.0,153.0 ) ( 851.0,154.0 ) ' '' '' ( 853.0,153.0 ) ( 853.0,153.0 ) ( 854,152.67 ) ' '' '' ( 854.00,153.17)(0.500,-1.000)2 ' '' '' ( 854.0,153.0 ) ( 855.0,153.0 ) ( 855.0,154.0 ) ( 856.0,153.0 ) ( 856.0,153.0 ) ( 857.0,153.0 ) ( 860,152.67 ) ' '' '' ( 860.00,153.17)(0.500,-1.000)2 ' '' '' ( 857.0,154.0 ) ' '' '' ( 861,152.67 ) ' '' '' ( 861.00,153.17)(0.500,-1.000)2 ' '' '' ( 861.0,153.0 ) ( 862,153 ) ( 862,152.67 ) ' '' '' ( 862.00,152.17)(0.500,1.000)2 ' '' '' ( 863.0,153.0 ) ( 863.0,153.0 ) ( 864.0,153.0 ) ( 864.0,154.0 ) ' '' '' ( 866,152.67 ) ' '' '' ( 866.00,152.17)(0.500,1.000)2 ' '' '' ( 866.0,153.0 ) ( 867,154 ) 
( 867.0,153.0 ) ( 868,152.67 ) ' '' '' ( 868.00,152.17)(0.500,1.000)2 ' '' '' ( 867.0,153.0 ) ( 869,154 ) ( 869.0,154.0 ) ( 870,152.67 ) ' '' '' ( 870.00,152.17)(0.500,1.000)2 ' '' '' ( 870.0,153.0 ) ( 871,154 ) ( 871.0,154.0 ) ( 872,152.67 ) ' '' '' ( 872.00,152.17)(0.500,1.000)2 ' '' '' ( 872.0,153.0 ) ( 873,154 ) ( 873,154 ) ( 873.0,154.0 ) ( 874,152.67 ) ' '' '' ( 874.00,152.17)(0.500,1.000)2 ' '' '' ( 874.0,153.0 ) ( 875,154 ) ( 875.0,154.0 ) ( 876.0,154.0 ) ( 876.0,155.0 ) ( 877.0,154.0 ) ( 880,152.67 ) ' '' '' ( 880.00,153.17)(0.500,-1.000)2 ' '' '' ( 877.0,154.0 ) ' '' '' ( 881,153 ) ( 882,152.67 ) ' '' '' ( 882.00,152.17)(0.500,1.000)2 ' '' '' ( 881.0,153.0 ) ( 883,154 ) ( 883.0,154.0 ) ' '' '' ( 885.0,153.0 ) ( 885,152.67 ) ' '' '' ( 885.00,153.17)(0.500,-1.000)2 ' '' '' ( 885.0,153.0 ) ( 886.0,153.0 ) ( 886.0,154.0 ) ( 887.0,153.0 ) ( 888,152.67 ) ' '' '' ( 888.00,152.17)(0.500,1.000)2 ' '' '' ( 887.0,153.0 ) ( 889,154 ) ( 889.0,154.0 ) ' '' '' ( 893.0,153.0 ) ( 893.0,153.0 ) ' '' '' ( 895.0,153.0 ) ( 895,152.67 ) ' '' '' ( 895.00,152.17)(0.500,1.000)2 ' '' '' ( 895.0,153.0 ) ( 896.0,153.0 ) ( 896.0,153.0 ) ' '' '' ( 898,152.67 ) ' '' '' ( 898.00,153.17)(0.500,-1.000)2 ' '' '' ( 898.0,153.0 ) ( 899,153 ) ( 899.0,153.0 ) ( 900,152.67 ) ' '' '' ( 900.00,153.17)(0.500,-1.000)2 ' '' '' ( 900.0,153.0 ) ( 901,153 ) ( 901,153 ) ( 906,152.67 ) ' '' '' ( 906.00,152.17)(0.500,1.000)2 ' '' '' ( 901.0,153.0 ) ' '' '' ( 907,154 ) ( 907,154 ) ( 907.0,154.0 ) ' '' '' ( 909,152.67 ) ' '' '' ( 909.00,152.17)(0.500,1.000)2 ' '' '' ( 909.0,153.0 ) ( 910,154 ) ( 910,154 ) ( 910.0,154.0 ) ( 911.0,153.0 ) ( 913,152.67 ) ' '' '' ( 913.00,152.17)(0.500,1.000)2 ' '' '' ( 911.0,153.0 ) ' '' '' ( 914.0,153.0 ) ( 914.0,153.0 ) ( 915,152.67 ) ' '' '' ( 915.00,153.17)(0.500,-1.000)2 ' '' '' ( 915.0,153.0 ) ( 916,153 ) ( 916,153 ) ( 916,152.67 ) ' '' '' ( 916.00,152.17)(0.500,1.000)2 ' '' '' ( 917.0,153.0 ) ( 917.0,153.0 ) ' '' '' ( 919.0,153.0 ) ( 920,152.67 ) ' '' '' ( 920.00,153.17)(0.500,-1.000)2 ' '' '' ( 919.0,154.0 ) ( 921,153 ) ( 921.0,153.0 ) ( 922.0,153.0 ) ( 923,152.67 ) ' '' '' ( 923.00,153.17)(0.500,-1.000)2 ' '' '' ( 922.0,154.0 ) ( 924,153 ) ( 925,152.67 ) ' '' '' ( 925.00,152.17)(0.500,1.000)2 ' '' '' ( 924.0,153.0 ) ( 926,154 ) ( 926.0,154.0 ) ' '' '' ( 928.0,153.0 ) ( 930,152.67 ) ' '' '' ( 930.00,152.17)(0.500,1.000)2 ' '' '' ( 928.0,153.0 ) ' '' '' ( 931,154 ) ( 931,152.67 ) ' '' '' ( 931.00,152.17)(0.500,1.000)2 ' '' '' ( 931.0,153.0 ) ( 932.0,153.0 ) ( 932.0,153.0 ) ' '' '' ( 934,152.67 ) ' '' '' ( 934.00,153.17)(0.500,-1.000)2 ' '' '' ( 934.0,153.0 ) ( 935,153 ) ( 940,152.67 ) ' '' '' ( 940.00,152.17)(0.500,1.000)2 ' '' '' ( 935.0,153.0 ) ' '' '' ( 941.0,153.0 ) ( 941.0,153.0 ) ( 942,152.67 ) ' '' '' ( 942.00,153.17)(0.500,-1.000)2 ' '' '' ( 942.0,153.0 ) ( 943,152.67 ) ' '' '' ( 943.00,153.17)(0.500,-1.000)2 ' '' '' ( 943.0,153.0 ) ( 944,153 ) ( 944.0,153.0 ) ( 945,152.67 ) ' '' '' ( 945.00,153.17)(0.500,-1.000)2 ' '' '' ( 945.0,153.0 ) ( 946,153 ) ( 946,153 ) ( 947,152.67 ) ' '' '' ( 947.00,152.17)(0.500,1.000)2 ' '' '' ( 946.0,153.0 ) ( 948.0,153.0 ) ( 951,152.67 ) ' '' '' ( 951.00,152.17)(0.500,1.000)2 ' '' '' ( 948.0,153.0 ) ' '' '' ( 952,154 ) ( 952,154 ) ( 952,152.67 ) ' '' '' ( 952.00,153.17)(0.500,-1.000)2 ' '' '' ( 953,153 ) ( 953,151.67 ) ' '' '' ( 953.00,152.17)(0.500,-1.000)2 ' '' '' ( 954.0,152.0 ) ( 955,152.67 ) ' '' '' ( 955.00,152.17)(0.500,1.000)2 ' '' '' ( 954.0,153.0 ) ( 956.0,153.0 ) ( 956.0,153.0 ) ' '' '' ( 958.0,153.0 ) ( 958.0,154.0 ) ( 959.0,153.0 ) ( 
959.0,153.0 ) ' '' '' ( 963.0,153.0 ) ( 964,152.67 ) ' '' '' ( 964.00,153.17)(0.500,-1.000)2 ' '' '' ( 963.0,154.0 ) ( 965,153 ) ( 965,153 ) ( 966,151.67 ) ' '' '' ( 966.00,152.17)(0.500,-1.000)2 ' '' '' ( 965.0,153.0 ) ( 967.0,152.0 ) ( 967.0,153.0 ) ' '' '' ( 970,152.67 ) ' '' '' ( 970.00,153.17)(0.500,-1.000)2 ' '' '' ( 970.0,153.0 ) ( 971,153 ) ( 971,153 ) ( 971.0,153.0 ) ' '' '' ( 973,151.67 ) ' '' '' ( 973.00,151.17)(0.500,1.000)2 ' '' '' ( 973.0,152.0 ) ( 974.0,152.0 ) ( 974.0,152.0 ) ( 975.0,152.0 ) ( 975.0,153.0 ) ( 976,152.67 ) ' '' '' ( 976.00,153.17)(0.500,-1.000)2 ' '' '' ( 976.0,153.0 ) ( 977,153 ) ( 977,153 ) ( 977.0,153.0 ) ( 978,152.67 ) ' '' '' ( 978.00,153.17)(0.500,-1.000)2 ' '' '' ( 978.0,153.0 ) ( 979,153 ) ( 979.0,153.0 ) ' '' '' ( 983,152.67 ) ' '' '' ( 983.00,153.17)(0.500,-1.000)2 ' '' '' ( 983.0,153.0 ) ( 984,153 ) ( 986,152.67 ) ' '' '' ( 986.00,152.17)(0.500,1.000)2 ' '' '' ( 984.0,153.0 ) ' '' '' ( 987.0,153.0 ) ( 130,859)(0,0) ( 388,163)(0,0) ( 646,157)(0,0) ( 904,153)(0,0) ( 909,475)(0,0) ( 987.0,153.0 ) ' '' '' ( 849,434)(0,0)[r]mutation rate ( 869.0,434.0 ) ' '' '' ( 130,495 ) ( 130,493.67 ) ' '' '' ( 130.00,494.17)(0.500,-1.000)2 ' '' '' ( 131,494 ) ( 134,492.67 ) ' '' '' ( 134.00,493.17)(0.500,-1.000)2 ' '' '' ( 131.0,494.0 ) ' '' '' ( 135,493 ) ( 138,491.67 ) ' '' '' ( 138.00,492.17)(0.500,-1.000)2 ' '' '' ( 135.0,493.0 ) ' '' '' ( 139,492 ) ( 139,492 ) ( 139.0,492.0 ) ( 140.0,491.0 ) ( 140.0,491.0 ) ' '' '' ( 142.0,490.0 ) ( 142.67,488 ) ' '' '' ( 142.17,489.00)(1.000,-1.000)2 ' '' '' ( 142.0,490.0 ) ( 144,487.67 ) ' '' '' ( 144.00,488.17)(0.500,-1.000)2 ' '' '' ( 144.0,488.0 ) ( 145,485.67 ) ' '' '' ( 145.00,486.17)(0.500,-1.000)2 ' '' '' ( 145.0,487.0 ) ( 146,486 ) ( 146,484.67 ) ' '' '' ( 146.00,485.17)(0.500,-1.000)2 ' '' '' ( 147,485 ) ( 147,483.67 ) ' '' '' ( 147.00,484.17)(0.500,-1.000)2 ' '' '' ( 148,484 ) ( 148.0,483.0 ) ( 148.0,483.0 ) ( 149,480.67 ) ' '' '' ( 149.00,481.17)(0.500,-1.000)2 ' '' '' ( 149.0,482.0 ) ( 150.0,480.0 ) ( 150.0,480.0 ) ( 151,477.67 ) ' '' '' ( 151.00,478.17)(0.500,-1.000)2 ' '' '' ( 151.0,479.0 ) ( 152,478 ) ( 152,476.67 ) ' '' '' ( 152.00,477.17)(0.500,-1.000)2 ' '' '' ( 153,477 ) ( 153,475.67 ) ' '' '' ( 153.00,476.17)(0.500,-1.000)2 ' '' '' ( 154,476 ) ( 154,473.67 ) ' '' '' ( 154.00,474.17)(0.500,-1.000)2 ' '' '' ( 154.0,475.0 ) ( 155,471.67 ) ' '' '' ( 155.00,472.17)(0.500,-1.000)2 ' '' '' ( 155.0,473.0 ) ( 156,472 ) ( 156,470.67 ) ' '' '' ( 156.00,471.17)(0.500,-1.000)2 ' '' '' ( 157,467.67 ) ' '' '' ( 157.00,468.17)(0.500,-1.000)2 ' '' '' ( 157.0,469.0 ) ' '' '' ( 158,465.67 ) ' '' '' ( 158.00,466.17)(0.500,-1.000)2 ' '' '' ( 158.0,467.0 ) ( 159,463.67 ) ' '' '' ( 159.00,464.17)(0.500,-1.000)2 ' '' '' ( 159.0,465.0 ) ( 160,460.67 ) ' '' '' ( 160.00,461.17)(0.500,-1.000)2 ' '' '' ( 160.0,462.0 ) ' '' '' ( 161.0,460.0 ) ( 161.0,460.0 ) ( 162,456.67 ) ' '' '' ( 162.00,457.17)(0.500,-1.000)2 ' '' '' ( 162.0,458.0 ) ' '' '' ( 163,454.67 ) ' '' '' ( 163.00,455.17)(0.500,-1.000)2 ' '' '' ( 163.0,456.0 ) ( 164,450.67 ) ' '' '' ( 164.00,451.17)(0.500,-1.000)2 ' '' '' ( 164.0,452.0 ) ' '' '' ( 165,448.67 ) ' '' '' ( 165.00,449.17)(0.500,-1.000)2 ' '' '' ( 165.0,450.0 ) ( 166,446.67 ) ' '' '' ( 166.00,447.17)(0.500,-1.000)2 ' '' '' ( 166.0,448.0 ) ( 167,442.67 ) ' '' '' ( 167.00,443.17)(0.500,-1.000)2 ' '' '' ( 167.0,444.0 ) ' '' '' ( 168,439.67 ) ' '' '' ( 168.00,440.17)(0.500,-1.000)2 ' '' '' ( 168.0,441.0 ) ' '' '' ( 169,437.67 ) ' '' '' ( 169.00,438.17)(0.500,-1.000)2 ' '' '' ( 169.0,439.0 ) ( 170,433.67 ) ' '' '' ( 
170.00,434.17)(0.500,-1.000)2 ' '' '' ( 170.0,435.0 ) ' '' '' ( 171,431.67 ) ' '' '' ( 171.00,432.17)(0.500,-1.000)2 ' '' '' ( 171.0,433.0 ) ( 172,428.67 ) ' '' '' ( 172.00,429.17)(0.500,-1.000)2 ' '' '' ( 172.0,430.0 ) ' '' '' ( 172.67,424 ) ' '' '' ( 172.17,425.00)(1.000,-1.000)2 ' '' '' ( 173.0,426.0 ) ' '' '' ( 174,420.67 ) ' '' '' ( 174.00,421.17)(0.500,-1.000)2 ' '' '' ( 174.0,422.0 ) ' '' '' ( 175,417.67 ) ' '' '' ( 175.00,418.17)(0.500,-1.000)2 ' '' '' ( 175.0,419.0 ) ' '' '' ( 175.67,413 ) ' '' '' ( 175.17,414.00)(1.000,-1.000)2 ' '' '' ( 176.0,415.0 ) ' '' '' ( 176.67,409 ) ' '' '' ( 176.17,410.00)(1.000,-1.000)2 ' '' '' ( 177.0,411.0 ) ' '' '' ( 178,405.67 ) ' '' '' ( 178.00,406.17)(0.500,-1.000)2 ' '' '' ( 178.0,407.0 ) ' '' '' ( 178.67,400 ) ' '' '' ( 178.17,401.00)(1.000,-1.000)2 ' '' '' ( 179.0,402.0 ) ' '' '' ( 179.67,397 ) ' '' '' ( 179.17,398.00)(1.000,-1.000)2 ' '' '' ( 180.0,399.0 ) ( 180.67,392 ) ' '' '' ( 180.17,393.00)(1.000,-1.000)2 ' '' '' ( 181.0,394.0 ) ' '' '' ( 181.67,387 ) ' '' '' ( 181.17,388.00)(1.000,-1.000)2 ' '' '' ( 182.0,389.0 ) ' '' '' ( 182.67,383 ) ' '' '' ( 182.17,384.00)(1.000,-1.000)2 ' '' '' ( 183.0,385.0 ) ' '' '' ( 184,379.67 ) ' '' '' ( 184.00,380.17)(0.500,-1.000)2 ' '' '' ( 184.0,381.0 ) ' '' '' ( 184.67,374 ) ' '' '' ( 184.17,375.00)(1.000,-1.000)2 ' '' '' ( 185.0,376.0 ) ' '' '' ( 185.67,370 ) ' '' '' ( 185.17,371.00)(1.000,-1.000)2 ' '' '' ( 186.0,372.0 ) ' '' '' ( 186.67,366 ) ' '' '' ( 186.17,367.00)(1.000,-1.000)2 ' '' '' ( 187.0,368.0 ) ' '' '' ( 188,360.67 ) ' '' '' ( 188.00,361.17)(0.500,-1.000)2 ' '' '' ( 188.0,362.0 ) ' '' '' ( 188.67,356 ) ' '' '' ( 188.17,357.00)(1.000,-1.000)2 ' '' '' ( 189.0,358.0 ) ' '' '' ( 190,352.67 ) ' '' '' ( 190.00,353.17)(0.500,-1.000)2 ' '' '' ( 190.0,354.0 ) ' '' '' ( 190.67,347 ) ' '' '' ( 190.17,348.00)(1.000,-1.000)2 ' '' '' ( 191.0,349.0 ) ' '' '' ( 191.67,343 ) ' '' '' ( 191.17,344.00)(1.000,-1.000)2 ' '' '' ( 192.0,345.0 ) ' '' '' ( 192.67,339 ) ' '' '' ( 192.17,340.00)(1.000,-1.000)2 ' '' '' ( 193.0,341.0 ) ' '' '' ( 193.67,333 ) ' '' '' ( 193.17,334.00)(1.000,-1.000)2 ' '' '' ( 194.0,335.0 ) ' '' '' ( 194.67,329 ) ' '' '' ( 194.17,330.00)(1.000,-1.000)2 ' '' '' ( 195.0,331.0 ) ' '' '' ( 195.67,325 ) ' '' '' ( 195.17,326.00)(1.000,-1.000)2 ' '' '' ( 196.0,327.0 ) ' '' '' ( 196.67,319 ) ' '' '' ( 196.17,320.00)(1.000,-1.000)2 ' '' '' ( 197.0,321.0 ) ' '' '' ( 197.67,316 ) ' '' '' ( 197.17,317.00)(1.000,-1.000)2 ' '' '' ( 198.0,318.0 ) ( 198.67,312 ) ' '' '' ( 198.17,313.00)(1.000,-1.000)2 ' '' '' ( 199.0,314.0 ) ' '' '' ( 199.67,307 ) ' '' '' ( 199.17,308.00)(1.000,-1.000)2 ' '' '' ( 200.0,309.0 ) ' '' '' ( 200.67,303 ) ' '' '' ( 200.17,304.00)(1.000,-1.000)2 ' '' '' ( 201.0,305.0 ) ' '' '' ( 201.67,299 ) ' '' '' ( 201.17,300.00)(1.000,-1.000)2 ' '' '' ( 202.0,301.0 ) ' '' '' ( 202.67,293 ) ' '' '' ( 202.17,294.00)(1.000,-1.000)2 ' '' '' ( 203.0,295.0 ) ' '' '' ( 203.67,290 ) ' '' '' ( 203.17,291.00)(1.000,-1.000)2 ' '' '' ( 204.0,292.0 ) ( 204.67,286 ) ' '' '' ( 204.17,287.00)(1.000,-1.000)2 ' '' '' ( 205.0,288.0 ) ' '' '' ( 205.67,280 ) ' '' '' ( 205.17,281.00)(1.000,-1.000)2 ' '' '' ( 206.0,282.0 ) ' '' '' ( 206.67,276 ) ' '' '' ( 206.17,277.00)(1.000,-1.000)2 ' '' '' ( 207.0,278.0 ) ' '' '' ( 207.67,272 ) ' '' '' ( 207.17,273.00)(1.000,-1.000)2 ' '' '' ( 208.0,274.0 ) ' '' '' ( 209,265.67 ) ' '' '' ( 209.00,266.17)(0.500,-1.000)2 ' '' '' ( 209.0,267.0 ) ' '' '' ( 210,261.67 ) ' '' '' ( 210.00,262.17)(0.500,-1.000)2 ' '' '' ( 210.0,263.0 ) ' '' '' ( 211,259.67 ) ' '' '' ( 
211.00,260.17)(0.500,-1.000)2 ' '' '' ( 211.0,261.0 ) ( 211.67,256 ) ' '' '' ( 211.17,257.00)(1.000,-1.000)2 ' '' '' ( 212.0,258.0 ) ' '' '' ( 213,253.67 ) ' '' '' ( 213.00,254.17)(0.500,-1.000)2 ' '' '' ( 213.0,255.0 ) ( 214,251.67 ) ' '' '' ( 214.00,252.17)(0.500,-1.000)2 ' '' '' ( 214.0,253.0 ) ( 214.67,246 ) ' '' '' ( 214.17,247.00)(1.000,-1.000)2 ' '' '' ( 215.0,248.0 ) ' '' '' ( 215.67,243 ) ' '' '' ( 215.17,244.00)(1.000,-1.000)2 ' '' '' ( 216.0,245.0 ) ( 217,240.67 ) ' '' '' ( 217.00,241.17)(0.500,-1.000)2 ' '' '' ( 217.0,242.0 ) ( 218,236.67 ) ' '' '' ( 218.00,237.17)(0.500,-1.000)2 ' '' '' ( 218.0,238.0 ) ' '' '' ( 219,234.67 ) ' '' '' ( 219.00,235.17)(0.500,-1.000)2 ' '' '' ( 219.0,236.0 ) ( 220,235 ) ( 219.67,233 ) ' '' '' ( 219.17,234.00)(1.000,-1.000)2 ' '' '' ( 221,229.67 ) ' '' '' ( 221.00,230.17)(0.500,-1.000)2 ' '' '' ( 221.0,231.0 ) ' '' '' ( 221.67,227 ) ' '' '' ( 221.17,228.00)(1.000,-1.000)2 ' '' '' ( 222.0,229.0 ) ( 222.67,224 ) ' '' '' ( 222.17,225.00)(1.000,-1.000)2 ' '' '' ( 223.0,226.0 ) ( 224,224 ) ( 223.67,220 ) ' '' '' ( 223.17,221.50)(1.000,-1.500)2 ' '' '' ( 224.0,223.0 ) ( 225,217.67 ) ' '' '' ( 225.00,218.17)(0.500,-1.000)2 ' '' '' ( 225.0,219.0 ) ( 226,218 ) ( 226,216.67 ) ' '' '' ( 226.00,217.17)(0.500,-1.000)2 ' '' '' ( 227,213.67 ) ' '' '' ( 227.00,214.17)(0.500,-1.000)2 ' '' '' ( 227.0,215.0 ) ' '' '' ( 228,211.67 ) ' '' '' ( 228.00,212.17)(0.500,-1.000)2 ' '' '' ( 228.0,213.0 ) ( 229.0,211.0 ) ( 229.0,211.0 ) ( 230,208.67 ) ' '' '' ( 230.00,209.17)(0.500,-1.000)2 ' '' '' ( 230.0,210.0 ) ( 231.0,207.0 ) ' '' '' ( 231.0,207.0 ) ( 232.0,206.0 ) ( 232.0,206.0 ) ( 233.0,205.0 ) ( 233.0,205.0 ) ( 234.0,203.0 ) ' '' '' ( 234.0,203.0 ) ( 235.0,201.0 ) ' '' '' ( 235.0,201.0 ) ( 236.0,200.0 ) ( 236.0,200.0 ) ( 237,197.67 ) ' '' '' ( 237.00,198.17)(0.500,-1.000)2 ' '' '' ( 237.0,199.0 ) ( 238.0,197.0 ) ( 238.0,197.0 ) ( 239,193.67 ) ' '' '' ( 239.00,194.17)(0.500,-1.000)2 ' '' '' ( 239.0,195.0 ) ' '' '' ( 240,191.67 ) ' '' '' ( 240.00,192.17)(0.500,-1.000)2 ' '' '' ( 240.0,193.0 ) ( 241,192 ) ( 241.0,192.0 ) ( 242.0,191.0 ) ( 242.0,191.0 ) ( 243,188.67 ) ' '' '' ( 243.00,189.17)(0.500,-1.000)2 ' '' '' ( 243.0,190.0 ) ( 244.0,188.0 ) ( 245,186.67 ) ' '' '' ( 245.00,187.17)(0.500,-1.000)2 ' '' '' ( 244.0,188.0 ) ( 246,187 ) ( 246.0,186.0 ) ( 247,184.67 ) ' '' '' ( 247.00,185.17)(0.500,-1.000)2 ' '' '' ( 246.0,186.0 ) ( 248,182.67 ) ' '' '' ( 248.00,183.17)(0.500,-1.000)2 ' '' '' ( 248.0,184.0 ) ( 249,183 ) ( 249,183 ) ( 249,181.67 ) ' '' '' ( 249.00,182.17)(0.500,-1.000)2 ' '' '' ( 250,182 ) ( 250,180.67 ) ' '' '' ( 250.00,181.17)(0.500,-1.000)2 ' '' '' ( 251,178.67 ) ' '' '' ( 251.00,179.17)(0.500,-1.000)2 ' '' '' ( 251.0,180.0 ) ( 252.0,179.0 ) ( 253,178.67 ) ' '' '' ( 253.00,179.17)(0.500,-1.000)2 ' '' '' ( 252.0,180.0 ) ( 254,176.67 ) ' '' '' ( 254.00,177.17)(0.500,-1.000)2 ' '' '' ( 254.0,178.0 ) ( 255.0,175.0 ) ' '' '' ( 256,173.67 ) ' '' '' ( 256.00,174.17)(0.500,-1.000)2 ' '' '' ( 255.0,175.0 ) ( 257,174 ) ( 257,172.67 ) ' '' '' ( 257.00,173.17)(0.500,-1.000)2 ' '' '' ( 258.0,173.0 ) ( 258,171.67 ) ' '' '' ( 258.00,172.17)(0.500,-1.000)2 ' '' '' ( 258.0,173.0 ) ( 259.0,171.0 ) ( 259.0,171.0 ) ( 260.0,170.0 ) ( 262,169.67 ) ' '' '' ( 262.00,169.17)(0.500,1.000)2 ' '' '' ( 260.0,170.0 ) ' '' '' ( 263.0,170.0 ) ( 264,168.67 ) ' '' '' ( 264.00,169.17)(0.500,-1.000)2 ' '' '' ( 263.0,170.0 ) ( 265,169 ) ( 265,168.67 ) ' '' '' ( 265.00,168.17)(0.500,1.000)2 ' '' '' ( 266.0,169.0 ) ( 266.0,169.0 ) ( 267.0,168.0 ) ( 267.67,166 ) ' '' '' ( 
267.17,167.00)(1.000,-1.000)2 ' '' '' ( 267.0,168.0 ) ( 269,166 ) ( 269.0,166.0 ) ( 270,163.67 ) ' '' '' ( 270.00,164.17)(0.500,-1.000)2 ' '' '' ( 270.0,165.0 ) ( 271,164 ) ( 271,162.67 ) ' '' '' ( 271.00,163.17)(0.500,-1.000)2 ' '' '' ( 272,163 ) ( 272.0,163.0 ) ( 273.0,163.0 ) ( 273.0,164.0 ) ( 274.0,164.0 ) ( 274.0,165.0 ) ' '' '' ( 276,162.67 ) ' '' '' ( 276.00,163.17)(0.500,-1.000)2 ' '' '' ( 276.0,164.0 ) ( 277,163 ) ( 278,161.67 ) ' '' '' ( 278.00,162.17)(0.500,-1.000)2 ' '' '' ( 277.0,163.0 ) ( 279,162 ) ( 279,162 ) ( 280,160.67 ) ' '' '' ( 280.00,161.17)(0.500,-1.000)2 ' '' '' ( 279.0,162.0 ) ( 281,161 ) ( 281.0,161.0 ) ( 282.0,160.0 ) ( 282.0,160.0 ) ( 283.0,159.0 ) ( 284,157.67 ) ' '' '' ( 284.00,158.17)(0.500,-1.000)2 ' '' '' ( 283.0,159.0 ) ( 285,158 ) ( 285,158 ) ( 286,156.67 ) ' '' '' ( 286.00,157.17)(0.500,-1.000)2 ' '' '' ( 285.0,158.0 ) ( 287,157 ) ( 287.0,157.0 ) ( 288.0,157.0 ) ( 288.0,158.0 ) ( 289.0,157.0 ) ( 289.0,157.0 ) ( 290.0,156.0 ) ( 290.0,156.0 ) ( 291.0,156.0 ) ( 291.0,157.0 ) ( 292.0,156.0 ) ( 293,155.67 ) ' '' '' ( 293.00,155.17)(0.500,1.000)2 ' '' '' ( 292.0,156.0 ) ( 294.0,156.0 ) ( 294.0,156.0 ) ( 295,156.67 ) ' '' '' ( 295.00,156.17)(0.500,1.000)2 ' '' '' ( 295.0,156.0 ) ( 296.0,158.0 ) ( 296.0,159.0 ) ( 297,157.67 ) ' '' '' ( 297.00,157.17)(0.500,1.000)2 ' '' '' ( 297.0,158.0 ) ( 298.0,159.0 ) ( 298.0,159.0 ) ( 298.0,159.0 ) ( 299.0,158.0 ) ( 299.0,158.0 ) ' '' '' ( 301.0,156.0 ) ' '' '' ( 302,154.67 ) ' '' '' ( 302.00,155.17)(0.500,-1.000)2 ' '' '' ( 301.0,156.0 ) ( 303,155 ) ( 303,153.67 ) ' '' '' ( 303.00,154.17)(0.500,-1.000)2 ' '' '' ( 304.0,154.0 ) ( 304,153.67 ) ' '' '' ( 304.00,153.17)(0.500,1.000)2 ' '' '' ( 304.0,154.0 ) ( 305,153.67 ) ' '' '' ( 305.00,153.17)(0.500,1.000)2 ' '' '' ( 305.0,154.0 ) ( 306,155 ) ( 306.0,155.0 ) ( 307.0,154.0 ) ( 307.0,154.0 ) ' '' '' ( 309,153.67 ) ' '' '' ( 309.00,154.17)(0.500,-1.000)2 ' '' '' ( 309.0,154.0 ) ( 310,154 ) ( 310,154 ) ( 310.0,154.0 ) ( 311.0,154.0 ) ( 311.0,155.0 ) ( 312.0,154.0 ) ( 312.0,154.0 ) ( 313.0,153.0 ) ( 315,152.67 ) ' '' '' ( 315.00,152.17)(0.500,1.000)2 ' '' '' ( 313.0,153.0 ) ' '' '' ( 316.0,153.0 ) ( 316.0,153.0 ) ' '' '' ( 318,151.67 ) ' '' '' ( 318.00,151.17)(0.500,1.000)2 ' '' '' ( 318.0,152.0 ) ( 319,153 ) ( 319,153 ) ( 319.0,153.0 ) ( 320.0,153.0 ) ( 320.0,154.0 ) ( 321,153.67 ) ' '' '' ( 321.00,154.17)(0.500,-1.000)2 ' '' '' ( 321.0,154.0 ) ( 322,154 ) ( 322,154 ) ( 322,152.67 ) ' '' '' ( 322.00,153.17)(0.500,-1.000)2 ' '' '' ( 323,153 ) ( 323,152.67 ) ' '' '' ( 323.00,152.17)(0.500,1.000)2 ' '' '' ( 324.0,154.0 ) ( 324.0,155.0 ) ( 325.0,154.0 ) ( 325.0,154.0 ) ( 326,151.67 ) ' '' '' ( 326.00,152.17)(0.500,-1.000)2 ' '' '' ( 326.0,153.0 ) ( 327,152 ) ( 329,150.67 ) ' '' '' ( 329.00,151.17)(0.500,-1.000)2 ' '' '' ( 327.0,152.0 ) ' '' '' ( 330,151.67 ) ' '' '' ( 330.00,151.17)(0.500,1.000)2 ' '' '' ( 330.0,151.0 ) ( 331,150.67 ) ' '' '' ( 331.00,151.17)(0.500,-1.000)2 ' '' '' ( 331.0,152.0 ) ( 332,151 ) ( 332.0,151.0 ) ( 333.0,151.0 ) ( 333.0,152.0 ) ( 334,150.67 ) ' '' '' ( 334.00,150.17)(0.500,1.000)2 ' '' '' ( 334.0,151.0 ) ( 335,152 ) ( 335.0,152.0 ) ( 336,150.67 ) ' '' '' ( 336.00,150.17)(0.500,1.000)2 ' '' '' ( 336.0,151.0 ) ( 337,152 ) ( 337,152 ) ( 337.67,152 ) ' '' '' ( 337.17,152.00)(1.000,1.000)2 ' '' '' ( 337.0,152.0 ) ( 339.0,153.0 ) ( 340,151.67 ) ' '' '' ( 340.00,152.17)(0.500,-1.000)2 ' '' '' ( 339.0,153.0 ) ( 341,152 ) ( 342,151.67 ) ' '' '' ( 342.00,151.17)(0.500,1.000)2 ' '' '' ( 341.0,152.0 ) ( 343,153 ) ( 343.0,153.0 ) ( 343.0,154.0 ) ( 344,151.67 ) ' '' '' 
( 344.00,152.17)(0.500,-1.000)2 ' '' '' ( 344.0,153.0 ) ( 345.0,152.0 ) ( 345.0,153.0 ) ( 346.0,153.0 ) ( 346.0,154.0 ) ( 347.0,154.0 ) ( 347.0,155.0 ) ( 348.0,154.0 ) ( 348.0,154.0 ) ' '' '' ( 351.0,153.0 ) ( 351.0,153.0 ) ( 352.0,151.0 ) ' '' '' ( 352.0,151.0 ) ( 352.67,150 ) ' '' '' ( 352.17,151.00)(1.000,-1.000)2 ' '' '' ( 353.0,151.0 ) ( 354,150 ) ( 358,149.67 ) ' '' '' ( 358.00,149.17)(0.500,1.000)2 ' '' '' ( 354.0,150.0 ) ' '' '' ( 359,151 ) ( 363,150.67 ) ' '' '' ( 363.00,150.17)(0.500,1.000)2 ' '' '' ( 359.0,151.0 ) ' '' '' ( 364.0,151.0 ) ( 365,150.67 ) ' '' '' ( 365.00,150.17)(0.500,1.000)2 ' '' '' ( 364.0,151.0 ) ( 366,151.67 ) ' '' '' ( 366.00,152.17)(0.500,-1.000)2 ' '' '' ( 366.0,152.0 ) ( 367.0,151.0 ) ( 367.0,151.0 ) ( 368.0,150.0 ) ( 368.0,150.0 ) ( 368.0,151.0 ) ' '' '' ( 371.0,150.0 ) ( 371.0,150.0 ) ( 371.0,151.0 ) ( 372.0,151.0 ) ( 372.0,152.0 ) ( 373.0,151.0 ) ( 374,150.67 ) ' '' '' ( 374.00,150.17)(0.500,1.000)2 ' '' '' ( 373.0,151.0 ) ( 375,150.67 ) ' '' '' ( 375.00,150.17)(0.500,1.000)2 ' '' '' ( 375.0,151.0 ) ( 376,152 ) ( 376,150.67 ) ' '' '' ( 376.00,151.17)(0.500,-1.000)2 ' '' '' ( 377,151 ) ( 377,151 ) ( 377.0,151.0 ) ' '' '' ( 380.0,151.0 ) ( 380.0,152.0 ) ' '' '' ( 382,150.67 ) ' '' '' ( 382.00,150.17)(0.500,1.000)2 ' '' '' ( 382.0,151.0 ) ( 383,152 ) ( 383,152 ) ( 383,150.67 ) ' '' '' ( 383.00,151.17)(0.500,-1.000)2 ' '' '' ( 384.0,151.0 ) ( 384.0,152.0 ) ( 385,150.67 ) ' '' '' ( 385.00,150.17)(0.500,1.000)2 ' '' '' ( 385.0,151.0 ) ( 386.0,151.0 ) ( 387,150.67 ) ' '' '' ( 387.00,150.17)(0.500,1.000)2 ' '' '' ( 386.0,151.0 ) ( 388,150.67 ) ' '' '' ( 388.00,150.17)(0.500,1.000)2 ' '' '' ( 388.0,151.0 ) ( 389,152 ) ( 389,152 ) ( 389.0,152.0 ) ( 390,151.67 ) ' '' '' ( 390.00,152.17)(0.500,-1.000)2 ' '' '' ( 390.0,152.0 ) ( 391,152 ) ( 391.0,152.0 ) ( 392.0,152.0 ) ( 393,151.67 ) ' '' '' ( 393.00,152.17)(0.500,-1.000)2 ' '' '' ( 392.0,153.0 ) ( 394,152 ) ( 394,150.67 ) ' '' '' ( 394.00,151.17)(0.500,-1.000)2 ' '' '' ( 395.0,150.0 ) ( 395,149.67 ) ' '' '' ( 395.00,150.17)(0.500,-1.000)2 ' '' '' ( 395.0,150.0 ) ( 396,149.67 ) ' '' '' ( 396.00,150.17)(0.500,-1.000)2 ' '' '' ( 396.0,150.0 ) ( 397.0,150.0 ) ( 397.0,151.0 ) ( 398.0,151.0 ) ( 398.0,152.0 ) ( 399.0,152.0 ) ( 400,151.67 ) ' '' '' ( 400.00,152.17)(0.500,-1.000)2 ' '' '' ( 399.0,153.0 ) ( 401,152 ) ( 401,151.67 ) ' '' '' ( 401.00,152.17)(0.500,-1.000)2 ' '' '' ( 401.0,152.0 ) ( 402,152 ) ( 401.67,150 ) ' '' '' ( 401.17,151.00)(1.000,-1.000)2 ' '' '' ( 403,150 ) ( 403,148.67 ) ' '' '' ( 403.00,149.17)(0.500,-1.000)2 ' '' '' ( 404.0,149.0 ) ( 404.0,150.0 ) ( 405.0,150.0 ) ( 406,149.67 ) ' '' '' ( 406.00,150.17)(0.500,-1.000)2 ' '' '' ( 405.0,151.0 ) ( 407.0,150.0 ) ( 407.0,151.0 ) ' '' '' ( 410,149.67 ) ' '' '' ( 410.00,149.17)(0.500,1.000)2 ' '' '' ( 410.0,150.0 ) ( 411,151 ) ( 411.0,151.0 ) ' '' '' ( 413,149.67 ) ' '' '' ( 413.00,149.17)(0.500,1.000)2 ' '' '' ( 413.0,150.0 ) ( 414,151 ) ( 415,149.67 ) ' '' '' ( 415.00,150.17)(0.500,-1.000)2 ' '' '' ( 414.0,151.0 ) ( 416.0,149.0 ) ( 416.0,149.0 ) ' '' '' ( 417,150.67 ) ' '' '' ( 417.00,150.17)(0.500,1.000)2 ' '' '' ( 416.0,151.0 ) ( 418,151.67 ) ' '' '' ( 418.00,152.17)(0.500,-1.000)2 ' '' '' ( 418.0,152.0 ) ( 419,152.67 ) ' '' '' ( 419.00,153.17)(0.500,-1.000)2 ' '' '' ( 419.0,152.0 ) ' '' '' ( 420,153 ) ( 421.67,151 ) ' '' '' ( 421.17,152.00)(1.000,-1.000)2 ' '' '' ( 420.0,153.0 ) ' '' '' ( 423,152.67 ) ' '' '' ( 423.00,153.17)(0.500,-1.000)2 ' '' '' ( 423.0,151.0 ) ' '' '' ( 423.67,150 ) ' '' '' ( 423.17,151.00)(1.000,-1.000)2 ' '' '' ( 424.0,152.0 ) ( 
425,150.67 ) ' '' '' ( 425.00,150.17)(0.500,1.000)2 ' '' '' ( 425.0,150.0 ) ( 426,150.67 ) ' '' '' ( 426.00,150.17)(0.500,1.000)2 ' '' '' ( 426.0,151.0 ) ( 427,152 ) ( 427.0,152.0 ) ' '' '' ( 429.0,152.0 ) ( 429.0,153.0 ) ' '' '' ( 431,150.67 ) ' '' '' ( 431.00,150.17)(0.500,1.000)2 ' '' '' ( 431.0,151.0 ) ' '' '' ( 432,149.67 ) ' '' '' ( 432.00,150.17)(0.500,-1.000)2 ' '' '' ( 432.0,151.0 ) ( 433,150 ) ( 433,149.67 ) ' '' '' ( 433.00,149.17)(0.500,1.000)2 ' '' '' ( 434,151 ) ( 434.0,151.0 ) ( 435.0,150.0 ) ( 435,150.67 ) ' '' '' ( 435.00,151.17)(0.500,-1.000)2 ' '' '' ( 435.0,150.0 ) ' '' '' ( 436,150.67 ) ' '' '' ( 436.00,151.17)(0.500,-1.000)2 ' '' '' ( 436.0,151.0 ) ( 437,149.67 ) ' '' '' ( 437.00,149.17)(0.500,1.000)2 ' '' '' ( 437.0,150.0 ) ( 438.0,151.0 ) ( 441,150.67 ) ' '' '' ( 441.00,151.17)(0.500,-1.000)2 ' '' '' ( 438.0,152.0 ) ' '' '' ( 442,151 ) ( 442,149.67 ) ' '' '' ( 442.00,150.17)(0.500,-1.000)2 ' '' '' ( 443,150 ) ( 444,149.67 ) ' '' '' ( 444.00,149.17)(0.500,1.000)2 ' '' '' ( 443.0,150.0 ) ( 445,148.67 ) ' '' '' ( 445.00,148.17)(0.500,1.000)2 ' '' '' ( 445.0,149.0 ) ' '' '' ( 446,149.67 ) ' '' '' ( 446.00,150.17)(0.500,-1.000)2 ' '' '' ( 446.0,150.0 ) ( 447.0,150.0 ) ( 448,149.67 ) ' '' '' ( 448.00,150.17)(0.500,-1.000)2 ' '' '' ( 447.0,151.0 ) ( 449,150 ) ( 449.0,150.0 ) ( 450.0,150.0 ) ( 450.0,150.0 ) ( 451,149.67 ) ' '' '' ( 451.00,149.17)(0.500,1.000)2 ' '' '' ( 450.0,150.0 ) ( 452.0,151.0 ) ( 452.0,152.0 ) ( 453.0,151.0 ) ( 455,149.67 ) ' '' '' ( 455.00,150.17)(0.500,-1.000)2 ' '' '' ( 453.0,151.0 ) ' '' '' ( 456,150 ) ( 456,150 ) ( 456.0,150.0 ) ' '' '' ( 458,149.67 ) ' '' '' ( 458.00,150.17)(0.500,-1.000)2 ' '' '' ( 458.0,150.0 ) ( 459,150 ) ( 459,150 ) ( 459,149.67 ) ' '' '' ( 459.00,149.17)(0.500,1.000)2 ' '' '' ( 460,151 ) ( 460.0,151.0 ) ( 461,150.67 ) ' '' '' ( 461.00,151.17)(0.500,-1.000)2 ' '' '' ( 461.0,151.0 ) ( 462,151 ) ( 462.0,150.0 ) ( 464,148.67 ) ' '' '' ( 464.00,149.17)(0.500,-1.000)2 ' '' '' ( 462.0,150.0 ) ' '' '' ( 465,149 ) ( 465.0,149.0 ) ( 465.0,150.0 ) ( 466.0,150.0 ) ( 468,149.67 ) ' '' '' ( 468.00,150.17)(0.500,-1.000)2 ' '' '' ( 466.0,151.0 ) ' '' '' ( 469,150 ) ( 469.0,150.0 ) ( 470.0,149.0 ) ( 470.0,149.0 ) ( 471,148.67 ) ' '' '' ( 471.00,149.17)(0.500,-1.000)2 ' '' '' ( 471.0,149.0 ) ( 472,149 ) ( 472,147.67 ) ' '' '' ( 472.00,148.17)(0.500,-1.000)2 ' '' '' ( 473.0,148.0 ) ( 473.0,149.0 ) ( 474,149.67 ) ' '' '' ( 474.00,149.17)(0.500,1.000)2 ' '' '' ( 474.0,149.0 ) ( 475,151 ) ( 475,149.67 ) ' '' '' ( 475.00,150.17)(0.500,-1.000)2 ' '' '' ( 476,150 ) ( 476,149.67 ) ' '' '' ( 476.00,149.17)(0.500,1.000)2 ' '' '' ( 477,151 ) ( 477.0,151.0 ) ( 480,151.67 ) ' '' '' ( 480.00,151.17)(0.500,1.000)2 ' '' '' ( 477.0,152.0 ) ' '' '' ( 481,153 ) ( 481.0,153.0 ) ' '' '' ( 483.0,153.0 ) ( 483.0,153.0 ) ( 483.0,153.0 ) ' '' '' ( 485.0,152.0 ) ( 486,151.67 ) ' '' '' ( 486.00,151.17)(0.500,1.000)2 ' '' '' ( 485.0,152.0 ) ( 487,153 ) ( 487,151.67 ) ' '' '' ( 487.00,152.17)(0.500,-1.000)2 ' '' '' ( 488,152 ) ( 488,150.67 ) ' '' '' ( 488.00,151.17)(0.500,-1.000)2 ' '' '' ( 489,149.67 ) ' '' '' ( 489.00,149.17)(0.500,1.000)2 ' '' '' ( 489.0,150.0 ) ( 490,151 ) ( 490.0,151.0 ) ( 491,151.67 ) ' '' '' ( 491.00,151.17)(0.500,1.000)2 ' '' '' ( 491.0,151.0 ) ( 492,153 ) ( 492,150.67 ) ' '' '' ( 492.00,151.17)(0.500,-1.000)2 ' '' '' ( 492.0,152.0 ) ( 493,151 ) ( 493.0,151.0 ) ' '' '' ( 495,148.67 ) ' '' '' ( 495.00,149.17)(0.500,-1.000)2 ' '' '' ( 495.0,150.0 ) ( 496,149 ) ( 496,149 ) ( 496,148.67 ) ' '' '' ( 496.00,148.17)(0.500,1.000)2 ' '' '' ( 497,150 ) ( 
497,149.67 ) ' '' '' ( 497.00,149.17)(0.500,1.000)2 ' '' '' ( 498,151 ) ( 498,149.67 ) ' '' '' ( 498.00,150.17)(0.500,-1.000)2 ' '' '' ( 499,150 ) ( 499,150 ) ( 499,149.67 ) ' '' '' ( 499.00,149.17)(0.500,1.000)2 ' '' '' ( 500,151 ) ( 500,149.67 ) ' '' '' ( 500.00,150.17)(0.500,-1.000)2 ' '' '' ( 501,149.67 ) ' '' '' ( 501.00,150.17)(0.500,-1.000)2 ' '' '' ( 501.0,150.0 ) ( 502.0,150.0 ) ( 503,150.67 ) ' '' '' ( 503.00,150.17)(0.500,1.000)2 ' '' '' ( 502.0,151.0 ) ( 504.0,151.0 ) ( 504.0,151.0 ) ( 504.67,150 ) ' '' '' ( 504.17,151.00)(1.000,-1.000)2 ' '' '' ( 505.0,151.0 ) ( 506,150 ) ( 506,149.67 ) ' '' '' ( 506.00,149.17)(0.500,1.000)2 ' '' '' ( 507.0,149.0 ) ' '' '' ( 507.0,149.0 ) ( 508.0,148.0 ) ( 508.0,148.0 ) ' '' '' ( 510.0,148.0 ) ( 510.0,149.0 ) ( 511.0,149.0 ) ( 511.0,149.0 ) ( 511.67,149 ) ' '' '' ( 511.17,149.00)(1.000,1.000)2 ' '' '' ( 511.0,149.0 ) ( 513,151 ) ( 513,149.67 ) ' '' '' ( 513.00,150.17)(0.500,-1.000)2 ' '' '' ( 514,150 ) ( 514,150 ) ( 514,149.67 ) ' '' '' ( 514.00,149.17)(0.500,1.000)2 ' '' '' ( 515,151 ) ( 515,149.67 ) ' '' '' ( 515.00,150.17)(0.500,-1.000)2 ' '' '' ( 516.0,150.0 ) ( 516.0,151.0 ) ( 517,149.67 ) ' '' '' ( 517.00,149.17)(0.500,1.000)2 ' '' '' ( 517.0,150.0 ) ( 518,151.67 ) ' '' '' ( 518.00,151.17)(0.500,1.000)2 ' '' '' ( 518.0,151.0 ) ( 519,153 ) ( 519,151.67 ) ' '' '' ( 519.00,152.17)(0.500,-1.000)2 ' '' '' ( 520,152 ) ( 520.0,151.0 ) ( 522,150.67 ) ' '' '' ( 522.00,150.17)(0.500,1.000)2 ' '' '' ( 520.0,151.0 ) ' '' '' ( 523.0,151.0 ) ( 525,149.67 ) ' '' '' ( 525.00,150.17)(0.500,-1.000)2 ' '' '' ( 523.0,151.0 ) ' '' '' ( 526.0,150.0 ) ( 526.0,150.0 ) ( 526.0,150.0 ) ( 527.0,149.0 ) ( 527.67,149 ) ' '' '' ( 527.17,149.00)(1.000,1.000)2 ' '' '' ( 527.0,149.0 ) ( 529.0,149.0 ) ' '' '' ( 529.0,149.0 ) ( 530,147.67 ) ' '' '' ( 530.00,147.17)(0.500,1.000)2 ' '' '' ( 530.0,148.0 ) ( 531,149 ) ( 531.0,149.0 ) ( 532.0,149.0 ) ( 532.0,150.0 ) ' '' '' ( 534,148.67 ) ' '' '' ( 534.00,148.17)(0.500,1.000)2 ' '' '' ( 534.0,149.0 ) ( 535,150 ) ( 535,150.67 ) ' '' '' ( 535.00,150.17)(0.500,1.000)2 ' '' '' ( 535.0,150.0 ) ( 536,149.67 ) ' '' '' ( 536.00,150.17)(0.500,-1.000)2 ' '' '' ( 536.0,151.0 ) ( 537,149.67 ) ' '' '' ( 537.00,150.17)(0.500,-1.000)2 ' '' '' ( 537.0,150.0 ) ( 538,150 ) ( 538,150 ) ( 538.0,150.0 ) ' '' '' ( 540.0,150.0 ) ( 541,150.67 ) ' '' '' ( 541.00,150.17)(0.500,1.000)2 ' '' '' ( 540.0,151.0 ) ( 542,149.67 ) ' '' '' ( 542.00,150.17)(0.500,-1.000)2 ' '' '' ( 542.0,151.0 ) ( 543,150 ) ( 543,149.67 ) ' '' '' ( 543.00,149.17)(0.500,1.000)2 ' '' '' ( 544,149.67 ) ' '' '' ( 544.00,149.17)(0.500,1.000)2 ' '' '' ( 544.0,150.0 ) ( 545,151 ) ( 545,150.67 ) ' '' '' ( 545.00,150.17)(0.500,1.000)2 ' '' '' ( 546,150.67 ) ' '' '' ( 546.00,150.17)(0.500,1.000)2 ' '' '' ( 546.0,151.0 ) ( 547,152 ) ( 547.0,151.0 ) ( 547.0,151.0 ) ' '' '' ( 549,149.67 ) ' '' '' ( 549.00,149.17)(0.500,1.000)2 ' '' '' ( 549.0,150.0 ) ( 550.0,151.0 ) ( 550.0,152.0 ) ( 551,150.67 ) ' '' '' ( 551.00,150.17)(0.500,1.000)2 ' '' '' ( 551.0,151.0 ) ( 552.0,151.0 ) ( 552.0,151.0 ) ( 553.0,150.0 ) ( 553.0,150.0 ) ( 554.0,149.0 ) ( 554.0,149.0 ) ' '' '' ( 556,147.67 ) ' '' '' ( 556.00,147.17)(0.500,1.000)2 ' '' '' ( 556.0,148.0 ) ( 557.0,148.0 ) ( 557.0,148.0 ) ( 558,147.67 ) ' '' '' ( 558.00,148.17)(0.500,-1.000)2 ' '' '' ( 558.0,148.0 ) ( 559,148.67 ) ' '' '' ( 559.00,148.17)(0.500,1.000)2 ' '' '' ( 559.0,148.0 ) ( 560,150 ) ( 560,150 ) ( 562,149.67 ) ' '' '' ( 562.00,149.17)(0.500,1.000)2 ' '' '' ( 560.0,150.0 ) ' '' '' ( 563.0,150.0 ) ( 563,149.67 ) ' '' '' ( 
563.00,150.17)(0.500,-1.000)2 ' '' '' ( 563.0,150.0 ) ( 564,150 ) ( 564.0,150.0 ) ' '' '' ( 566.0,149.0 ) ( 566.0,149.0 ) ( 567,147.67 ) ' '' '' ( 567.00,147.17)(0.500,1.000)2 ' '' '' ( 567.0,148.0 ) ( 568,149 ) ( 568,148.67 ) ' '' '' ( 568.00,148.17)(0.500,1.000)2 ' '' '' ( 569,150 ) ( 569,150 ) ( 570,149.67 ) ' '' '' ( 570.00,149.17)(0.500,1.000)2 ' '' '' ( 569.0,150.0 ) ( 571,151 ) ( 570.67,149 ) ' '' '' ( 570.17,150.00)(1.000,-1.000)2 ' '' '' ( 572,148.67 ) ' '' '' ( 572.00,149.17)(0.500,-1.000)2 ' '' '' ( 572.0,149.0 ) ( 573,148.67 ) ' '' '' ( 573.00,149.17)(0.500,-1.000)2 ' '' '' ( 573.0,149.0 ) ( 574.0,149.0 ) ( 576,149.67 ) ' '' '' ( 576.00,149.17)(0.500,1.000)2 ' '' '' ( 574.0,150.0 ) ' '' '' ( 577,151 ) ( 577,150.67 ) ' '' '' ( 577.00,150.17)(0.500,1.000)2 ' '' '' ( 578,152 ) ( 578,152 ) ( 578,151.67 ) ' '' '' ( 578.00,151.17)(0.500,1.000)2 ' '' '' ( 579.0,152.0 ) ( 579.0,152.0 ) ( 580.0,151.0 ) ( 580.0,151.0 ) ( 581,148.67 ) ' '' '' ( 581.00,149.17)(0.500,-1.000)2 ' '' '' ( 581.0,150.0 ) ( 582,148.67 ) ' '' '' ( 582.00,149.17)(0.500,-1.000)2 ' '' '' ( 582.0,149.0 ) ( 583,149 ) ( 583,148.67 ) ' '' '' ( 583.00,148.17)(0.500,1.000)2 ' '' '' ( 584,150 ) ( 584,150 ) ( 584,148.67 ) ' '' '' ( 584.00,149.17)(0.500,-1.000)2 ' '' '' ( 585,149 ) ( 585,147.67 ) ' '' '' ( 585.00,148.17)(0.500,-1.000)2 ' '' '' ( 586,148.67 ) ' '' '' ( 586.00,148.17)(0.500,1.000)2 ' '' '' ( 586.0,148.0 ) ( 587.0,149.0 ) ( 587.0,149.0 ) ( 588,148.67 ) ' '' '' ( 588.00,149.17)(0.500,-1.000)2 ' '' '' ( 587.0,150.0 ) ( 589,149 ) ( 589.0,149.0 ) ' '' '' ( 592,148.67 ) ' '' '' ( 592.00,149.17)(0.500,-1.000)2 ' '' '' ( 592.0,149.0 ) ( 593,149 ) ( 593.0,149.0 ) ( 593.0,150.0 ) ( 594,147.67 ) ' '' '' ( 594.00,147.17)(0.500,1.000)2 ' '' '' ( 594.0,148.0 ) ' '' '' ( 595,149 ) ( 595,148.67 ) ' '' '' ( 595.00,148.17)(0.500,1.000)2 ' '' '' ( 596,150 ) ( 596,150 ) ( 596,148.67 ) ' '' '' ( 596.00,149.17)(0.500,-1.000)2 ' '' '' ( 597,149 ) ( 597,147.67 ) ' '' '' ( 597.00,148.17)(0.500,-1.000)2 ' '' '' ( 598,147.67 ) ' '' '' ( 598.00,148.17)(0.500,-1.000)2 ' '' '' ( 598.0,148.0 ) ( 599,148 ) ( 599,148 ) ( 599,147.67 ) ' '' '' ( 599.00,147.17)(0.500,1.000)2 ' '' '' ( 600.0,149.0 ) ( 600.0,150.0 ) ( 601.0,149.0 ) ( 601.0,149.0 ) ( 602,146.67 ) ' '' '' ( 602.00,146.17)(0.500,1.000)2 ' '' '' ( 602.0,147.0 ) ' '' '' ( 603,148 ) ( 603,146.67 ) ' '' '' ( 603.00,147.17)(0.500,-1.000)2 ' '' '' ( 604.0,147.0 ) ' '' '' ( 604.0,149.0 ) ( 605,149.67 ) ' '' '' ( 605.00,150.17)(0.500,-1.000)2 ' '' '' ( 605.0,149.0 ) ' '' '' ( 606.0,149.0 ) ( 608,148.67 ) ' '' '' ( 608.00,148.17)(0.500,1.000)2 ' '' '' ( 606.0,149.0 ) ' '' '' ( 609.0,149.0 ) ( 609.0,149.0 ) ( 610,149.67 ) ' '' '' ( 610.00,149.17)(0.500,1.000)2 ' '' '' ( 610.0,149.0 ) ( 611,151 ) ( 611,151 ) ( 611,150.67 ) ' '' '' ( 611.00,150.17)(0.500,1.000)2 ' '' '' ( 612,152 ) ( 612,150.67 ) ' '' '' ( 612.00,151.17)(0.500,-1.000)2 ' '' '' ( 613.0,149.0 ) ' '' '' ( 613.0,149.0 ) ( 614,149.67 ) ' '' '' ( 614.00,150.17)(0.500,-1.000)2 ' '' '' ( 614.0,149.0 ) ' '' '' ( 615,150 ) ( 615.0,150.0 ) ( 616.0,150.0 ) ( 616.0,151.0 ) ( 617.0,150.0 ) ( 617,149.67 ) ' '' '' ( 617.00,150.17)(0.500,-1.000)2 ' '' '' ( 617.0,150.0 ) ( 618,149.67 ) ' '' '' ( 618.00,150.17)(0.500,-1.000)2 ' '' '' ( 618.0,150.0 ) ( 619,149.67 ) ' '' '' ( 619.00,150.17)(0.500,-1.000)2 ' '' '' ( 619.0,150.0 ) ( 620,150 ) ( 620,150 ) ( 620,149.67 ) ' '' '' ( 620.00,149.17)(0.500,1.000)2 ' '' '' ( 621.0,150.0 ) ( 621.0,150.0 ) ( 622.0,150.0 ) ( 623,149.67 ) ' '' '' ( 623.00,150.17)(0.500,-1.000)2 ' '' '' ( 622.0,151.0 ) ( 624,150 ) 
the average number of ( non - unique ) macro - classifiers used by fdgp - xcsf ( fig . [ fig : performance ] ) rapidly increases to approximately 1400 after 3,000 trials , before converging to around 150 ; this is more compact than xcsf with interval conditions ( ) , showing that fdgp - xcsf can provide strong generalisation . the networks grow , on average , from 3 nodes to 3.5 , and the average connectivity remains static around 2.1 , while the average value of increases by from 28.5 to 31.5 ( not shown ) . the average mutation rate declines from 50% to 2% over the first 15,000 trials before converging to around 1.2% ( fig . [ fig : performance ] ) . it has been shown that xcsf is able to design ensembles of dynamical fuzzy logic networks whose emergent behaviour can be collectively exploited to solve a continuous - valued task via reinforcement learning , where performance in the continuous frog problem was superior to those reported previously in , and . l. bull and r. j. preen . on dynamical genetic programming : random boolean networks in learning classifier systems . in _ proceedings of the 12th european conference on genetic programming _ , eurogp 09 , pages 37 - 48 , berlin , heidelberg , 2009 . springer - verlag . t. kok and p. wang . a study of 3-gene regulation networks using nk - boolean network model and fuzzy logic networking . in c. kahraman , editor , _ fuzzy applications in industrial engineering _ , volume 201 of _ studies in fuzziness and soft computing _ , pages 119 - 151 . springer berlin / heidelberg , 2006 . r. j. preen and l. bull . discrete dynamical genetic programming in xcs . in _ proceedings of the 11th annual conference on genetic and evolutionary computation _ , gecco 09 , pages 1299 - 1306 , new york , ny , usa , 2009 . j. ramirez - ruiz , m. valenzuela - rendon , and h. terashima - marin . qfcs : a fuzzy lcs in continuous multi - step environments with continuous vector actions . in _ proceedings of the 10th international conference on parallel problem solving from nature : ppsn x _ , pages 286 - 295 , berlin , heidelberg , 2008 . springer - verlag . h. t. tran , c. sanza , y. duthen , and t. d. nguyen . xcsf with computed continuous action . in _ proceedings of the 9th annual conference on genetic and evolutionary computation _ , gecco 07 , pages 1861 - 1869 , new york , ny , usa , 2007 . s. w.
wilson . three architectures for continuous action . in _ proceedings of the 2003 - 2005 international conference on learning classifier systems _ , iwlcs03 - 05 , pages 239 - 257 , berlin , heidelberg , 2007 . springer - verlag .
|
a number of representation schemes have been presented for use within learning classifier systems , ranging from binary encodings to neural networks , and more recently dynamical genetic programming ( dgp ) . this paper presents results from an investigation into using a fuzzy dgp representation within the xcsf learning classifier system . in particular , asynchronous fuzzy logic networks are used to represent the traditional condition - action production system rules . it is shown possible to use self - adaptive , open - ended evolution to design an ensemble of such fuzzy dynamical systems within xcsf to solve several well - known continuous - valued test problems . [ knowledge acquisition , parameter learning ]
|
the three body problem is one of the most fascinating topics in mathematics and celestial mechanics . the basic definition of the problem is as follows : three point masses ( or bodies of spherical symmetry ) move in space , under their mutual gravitational attraction ; given their initial conditions , we want to determine their subsequent motion .like many mathematical problems , it is not as simple as it sounds .although the two body problem can be solved in closed form by means of elementary functions and hence we can predict the quantitative and qualitative behaviour of the system , the three body problem is a complicated nonlinear problem and no similar type of solution exists .more precisely , the former is integrable but the latter is not ( if a system with n degrees of freedom has n independent first integrals in involution , then it is integrable ; that is not the case for the three body problem ) .one issue that is of great interest in the three body problem , is the stability ( and instability ) of triple systems .the stability ( and instability ) of triple systems is an intriguing problem which remains unsolved up to date .it has been a subject of study by many people , not only because of the intellectual challenge that poses , but also because of its importance in many areas of astronomy and astrophysics , e.g. planetary and star cluster dynamics . in this work ,we review the three body stability criteria that have been derived over the past few decades .we deal with the gravitational non - relativistic three - body problem and we concentrate on hierarchical triple systems . by hierarchical , we mean systems in which we can distinguish two different motions : two of the bodies form a binary and move around their centre of mass , while the third body is on a wider orbit with respect to the binary barycentre. this may not be the most strict definition of a hierarchical triple system ( e.g. see eggleton and kiseleva 1995 ) , but we use that one in order to cover as many triple system configurations as possible .we would also like to point out that some of the criteria may apply to systems that are not hierarchical or they are marginally hierarchical ( e.g. wisdom s criterion for resonance overlap ) , according to the definition given in the previous paragraph .however , as they are related to other criteria that refer to hierarchical systems , we felt that we should mention them too .there are two main types of stability criteria , depending on how they were derived : analytical and numerical . following that classification, we are going to present the analytical criteria first and then we will discuss the criteria that have been derived from numerical integrations .finally , we present criteria that are based on the concept of chaos . 
throughout the next paragraphs, we decided that it would be better if we kept the notation that each author used ( with a few exceptions for the benefit of the reader ) .the derivation of analytical stability criteria in the three body problem has been dominated by the generalisation of the concept of surfaces of zero velocity of the restricted three - body problem , first introduced by hill ( 1878a,1878b,1878c ) .it is known that in the circular restricted three body problem , there are regions in physical space where motion can and can not occur .these regions are determined by means of the only known integral of the circular restricted problem , the so called jacobi constant .this notion has been extended to the general three body problems by several authors : golubev ( 1967 , 1968a , 1968b ) , saari ( 1974 ) , who used an inequality similar to sundman s , marchal and saari ( 1975 ) , who used sundman s inequality , bozis ( 1976 ) , who used algebraic manipulations of the integrals of motion in the planar three body problem , zare ( 1976 , 1977 ) , who made use of hamiltonian dynamics ; saari ( 1984 , 1987 ) , who produced the best possible configurational velocity surfaces. also , sergysels ( 1986 ) , derived zero velocity surfaces for the general three dimensional three body problem , by using the method of bozis ( 1976 ) and a rotating frame that does not take into account entirely the rotation of the three body system .finally , ge and leng ( 1992 ) produced the same result as saari ( 1987 ) , using a modified version of the transformation given in zare ( 1976 ) .easton ( 1971 ) , tung ( 1974 ) and mialni and nobili ( 1983 ) also discussed the topology of the restrictive surfaces . the quantity , where is the angular momentum and is the energy of the three body system , controls the topology of the restrictive surfaces and it is the analog of the jacobi constant of the circular restricted problem .szebehely ( 1977 ) and szebehely and zare ( 1977 ) , using two body approximations , produced an expression for , which involved the masses , the semi - major axes and the eccentricities of the system .then , that expression was compared with the critical value at the collinear lagrangian points , which determine the openings and closings of the zero velocity surfaces .if the value of for a given triple configuration was smaller than the one at the inner lagrangian point , then there could be no exchange of bodies , i.e the system was hill stable .although there was some discussion on the effect of the inclination , the derivation was for coplanar orbits .marchal and his collaborators ( marchal and saari 1975 , marchal and bozis 1982 ) , produced a generalisation of the hill curves to the general three dimensional three body problem by using the quantity as the controlling parameter of the restrictive surfaces , where is the mean quadratic distance , is the mean harmonic distance and they are defined by the following equations : where and is the distance between and .walker et al .( 1980 ) derived the critical surfaces in terms of the parameters with ( is the distance between and , is the distance between the centre of mass of and , and ) . 
measures the disturbance of by the binary , while is a measure of the disturbance of the binary by .thus , for a given triple configuration , they evaluated the quantities and determined whether the system was hill stable or not .walker and roy ( 1981 ) investigated the effect that the eccentricities had on the stability limit , as the walker et al .( 1980 ) derivation applied only for coplanar , initially circular and corotational triple systems .they paid particular attention to the initial orbital phases of the system and they found that the critical value of ( being the semi - major axis ratio of the two orbits ) could be affected by up to .similar work was also done in valsecchi et al .( 1984 ) , but instead of using two body expressions for the angular momentum and energy of the system as walker et al .( 1980 ) did , they used the exact expressions ; however the disagreement between the two methods was very small .this was also confirmed by kiseleva et al .( 1994b ) , who used the exact expressions for the angular momentum and energy to evaluate ( the critical initial semi - major axis for the szebehely - zare criterion ) .they found that their value was always larger by at most compared to the one obtained by two body approximations .roy et al . ( 1984 ) computed the distance of the closest approach of to for a coplanar , corotational , hierarchical three body system ( with for the inner binary ) and derived a condition for stability by manipulating the angular momentum and energy integrals .they ended up with the following inequality : [\mu\frac{(1-\mu)^{2}}{1-k}+\frac{\mu\mu_{3}}{k}+ \frac{\mu_{3}(1-\mu)^{2}}{1-k\mu}]^{2},\ ] ] where is defined by the relation ( and are the magnitudes of the two jacobian vectors of the hierarchical triple system ) and it represents the distance of closest approach of to .if there exists a dynamical barrier between and , then , there will be values of for which inequality ( [ roy1984 ] ) will not be satisfied .the largest of these values will give the measure of the closest approach of the two orbits .their result was in agreement with the criterion .the concept of hill type surfaces that pose restrictions to the motion of three body systems , has also been used to study the motion in special cases .szebehely ( 1978 ) , in the context of the circular restricted three body problem , derived a simple condition for a satellite to remain in orbit around the smaller primary in presence of the perturbations of the larger one .the condition is : being the radius of the satellite circular motion around its primary and .the above condition is valid for both prograde and retrograde motion .markellos and roy ( 1981 ) obtained a more accurate result for the same problem : +o(\mu)\ ] ] for prograde orbits and +o(\mu)\ ] ] for retrograde orbits , where corresponds to of szebehely ( 1978 ) and again , . walker ( 1983 ) investigated the hill - type stability of a coplanar , with initially circular orbits , hierarchical three body system , where the total mass of the binary was small compared to the mass of the external body ( e.g. satellite - planet - star ) .his results were in good agreement with szebehely ( 1978 ) and markellos and roy ( 1981 ) .donnison and williams ( 1983 , 1985 ) used the condition to determine the hill stability of coplanar hierarchical three body systems with ( and form the inner binary ) . 
using two body approximations for the angular momentum and the energy of the system and taking advantage of the fact that one of the masses was much greater that the other two , they concluded that their system was stable ( in terms of exchange ) when the following condition was satisfied : where and are the semi - major axes of the inner and outer orbit respectively ; the plus sign corresponds to prograde motion , while the minus sign to retrograde motion.finally , is the largest of either inner or outer eccentricity .donnison ( 1988 ) , using the same approach mentioned above , investigated the stability of low mass binary systems moving on elliptical orbits in the presence of a large third mass , i.e. . brasser ( 2002 ) dealt with systems where was smaller that the other two masses , which were of comparable size ( and form the inner binary ) .gladman ( 1993 ) , based on the work done by marchal and bozis ( 1982 ) , produced analytical formulae for the critical separation that two planets and , orbiting a star , should have in order to be hill stable .he derived the following formulae ( to lowest order ) : \(i ) for initially circular orbits ( ii ) equal mass planets , small eccentricities ( iii ) equal mass planets , equal but large eccentricities where and and and are the eccentricities of the inner and outer orbit respectively . veras and armitage ( 2004 ) , generalising gladman s result , derived a criterion for two equal mass planets on initially circular inclined orbits to achieve hill stability .they found that the planets were hill stable if their initial separation was greater than + ... ,\end{aligned}\ ] ] where is the mass of the star , is the mass of the planets and the inclination of the orbits . finally , in a series of papers , donnison ( 1984a , 1984b , 2006 ) made use of the criterion to determine the stability of triple systems , where the outer body moved on a parabolic or hyperbolic orbit with respect to the centre of mass of the other two bodies .the first two papers dealt with coplanar systems , while the latest one examined systems with inclined orbits . in each paper , there was discussion about some special cases ( equal masses and large in paper i , equal and unequal binary masses in paper ii , equal masses , unequal binary masses , large in paper iii ; in all cases belonged to the inner binary ) .the main disadvantage of the criterion is that it is a sufficient but not a necessary condition for stability .exchange might not occur even when the condition is violated but it certainly can not occur when the condition is satisfied .the lobes could also be open to infinity , but the bodies may or may not escape to infinity .finally , things are not clear again when the third body is started outside ( inside ) the lobes , since the criterion can not give any information whether the third body will escape or not from the system ( will keep orbiting the binary or form a binary with one of the other masses ) . the situation where one member of a triple system escapes to infinity was investigated by several authors .they derived sufficient conditions for the motion to be of hyperbolic - elliptic type , i.e. 
conditions for the distance between one body and the centre of mass of the two other bodies to increase indefinitely as time goes to infinity , while the distance between the other two bodies remains bounded .such conditions can be found in standish ( 1971 ) , yoshida ( 1972 ) , griffith and north ( 1973 ) , marchal ( 1974 ) .yoshida ( 1974 ) derived another criterion for hyperbolic - elliptic motion under the condition that the magnitude of the angular momentum of the three body system was above a certain level and bozis ( 1981 ) , in a paper closely related to the one of yoshida ( 1974 ) , he considered conditions for the smallest mass of a triple system to escape to infinity . finally , a stronger escape criterion has been proposed by marchal and his collaborators ( marchal et al .1984a , 1984b ) .references to criteria before 1970 , can be found in the above mentioned papers . usually , those criteria required that the distance and radial velocity of the potential escaper ( with respect to the barycentre of the binary formed by the other two bodies ) were above certain values at some time . however , for large distances , there is little difference between the criteria ( anosova 1986 ) .it should also be added here , that , in addition to the sufficient conditions for escape of one body , some of the above mentioned authors also gave sufficient conditions for ejection without escape ; in such a situation , the ejected mass reaches a bounded distance and falls back toward the other two masses .such conditions can be found in standish ( 1972 ) , griffith and north ( 1973 ) and marchal ( 1974 ) .the numerical work involves a wide range of simulations of triple systems .several authors set up numerical experiments and investigated the orbital evolution of hierarchical triple systems .harrington ( 1972 , 1975 , 1977 ) , in a series of papers , carried out numerical integrations of hierarchical triple systems with stellar and planetary mass ratios . in his first paper , he integrated equal mass systems with different initial conditions in order to determine their stability .he considered a system to be stable if there had been no change in the orbital elements during the period of integration , particularly in the semi - major axes or the eccentricities .the following situations were also defined as unstable : escape of one body , collision , i.e. two components got sufficiently close that it could be assumed that there were tidal or material interactions between the bodies involved , change to which bodies comprise the inner binary .a total of 420 orbits were integrated for 10 to 20 revolutions of the outer orbit .it was found that stability was insensitive to the eccentricity of the inner binary , for moderate eccentricity , to the argument of periastron of either orbit and to the mutual inclination of the two orbits ( except when the inclination was within a few degrees of a perpendicular configuration ) . as a measure of stability, he used the quantity ( was the outer periastron distance and the inner semi - major axis ) and he found that stability existed above for prograde and for retrograde orbits . 
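as a concrete ( if very simple ) illustration of how a criterion of this type is applied , the python sketch below evaluates the ratio of the outer periastron distance to the inner semi - major axis for a nominal triple ; the orbital elements and the critical threshold are placeholder values chosen for the example , not values taken from harrington s integrations .

# illustrative evaluation of a periastron - ratio stability measure of the
# kind used by harrington ( 1972 ) . the critical threshold is a free
# parameter supplied by the user , not a value quoted from the paper .

def periastron_ratio(a_in, a_out, e_out):
    """ratio of the outer periastron distance q = a_out * (1 - e_out)
    to the inner semi - major axis a_in ."""
    return a_out * (1.0 - e_out) / a_in

def is_stable(a_in, a_out, e_out, critical_ratio):
    """empirically ' stable ' if the ratio exceeds the supplied critical value ."""
    return periastron_ratio(a_in, a_out, e_out) >= critical_ratio

if __name__ == "__main__":
    # hypothetical triple : inner binary with a_in = 1 au , outer companion
    # with a_out = 5 au and e_out = 0.3 , so q = 3.5 au and q / a_in = 3.5
    print(periastron_ratio(1.0, 5.0, 0.3))
    print(is_stable(1.0, 5.0, 0.3, critical_ratio=3.5))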
in his second paper ,harrington integrated coplanar systems with unequal masses ( with the largest mass ratio never exceeding ) and based on his numerical results , he derived the following limiting condition for stability : \log{[1+m_{3}/(m_{1}+m_{2})]},\ ] ] where is the outer periastron distance , is the inner semi major axis and is the parameter limit for equal masses .the above condition was improved in the last of the three papers , in which harrington performed numerical simulations for systems which consisted of a stellar binary and a body of planetary mass ( equation [ harr1 ] does not apply in this case ) .the new empirical condition for stability was ( regardless of which of the components the planet was ) : +k.\ ] ] and were determined empirically , with being the limit on for the equal mass case and it was taken directly from the results of the first paper and was then determined by a least square fit to the unequal mass cases ; is if this is to be a mean fit and is approximately if it is to be an upper limit .for coplanar prograde orbits , and and for retrograde , and .harrington also found that retrograde orbits were more stable than the prograde ones , a result which is in contrast with szebehely s and zare s predictions , as they found that prograde orbits were more stable than retrograde orbits .however , the results for equal masses and direct orbits were in good agreement , although szebehely s results allow a slightly closer outer orbit . of course , it should be borne in mind that the criterion is a sufficient stability condition , based on the possibility of exchange of bodies .it should also be pointed out that the definition of stability given by harrington is a bit ambiguous .he classifies a triple system as stable if there is no `` significant change '' in the orbital elements during the period of integration , and particularly in the semi - major axes and eccentricities .another point that raises some concern is that the integrations were performed for only 10 or 20 outer orbital periods .this could prove inadequate , although harrington suggested that instabilities of this kind ( exchange etc . 
)set in very quickly .graziani and black ( 1981 ) , in the context of planet formation and extrasolar planets , used numerical integrations to model planetary systems ( star and two planets , which had the same mass in most of the numerical simulations ) with prograde , coplanar and initially circular orbits .the systems were integrated for at least 100 revolutions of the longest period planet , or until instability was evident .the authors classified a system as unstable if there was clear evidence for secular changes in any of the orbits during the numerical integration .based on their results , they obtained the following condition for stability : where the planets and orbit the star .the parameter gives the minimum initial separation between the companions in units of their mean distance from the central star , while is the mean mass of the two companions in units of the mass of the star .more specifically , with and being the semi - major axes of the inner and outer orbit respectively .systems with became unstable within a few tens of planetary orbits .black ( 1982 ) modified the above condition to apply for .the modified stability condition is : both the above stability conditions were confirmed by more integrations ( pendleton and black 1983 ) .however , equations ( [ black1 ] ) and ( [ black2 ] ) were in disagreement with equation ( [ harr2 ] ) , except a narrow range around .donnison and mikulskis ( 1992 ) produced a modified version of equations ( [ black1 ] ) and ( [ black2 ] ) , based on numerical integrations of circular , coplanar and prograde systems .a system was considered to be unstable when there was a change of more than in either of the semi - major axes or / and either of the eccentricities altered by more than 0.1 .each numerical model was integrated for at least 1000 inner binary orbits or until the existence of instability was evident ( which usually happened within the first 100 orbits ) .they derived the following values for : and donnison and mikulskis ( 1994 ) , following the same procedure as above , produced the following formulae for in the case of retrograde orbits : and the results of donnison and mikulskis ( 1992 , 1994 ) were in good agreement with the results of black and his collaborators ( for prograde orbits of course ) , but quite different from harrington s results , except in the equal mass case .there was also agreement with the theory of szebehely and zare ( 1977 ) , but only for prograde orbits .dvorak ( 1986 ) investigated the stability of p - type orbits in stellar binary systems , i.e. 
planet orbiting the binary system , in the context of the elliptic restricted three body problem .he performed numerical integrations of planets on initially circular orbits orbiting an equal mass binary system .the integration time span was 500 binary periods and a planetary orbit was classified as stable if its eccentricity remained smaller than throughout the whole integration time .his results showed a region of stability far away from the primaries , a region of instability closer to the primaries and a chaotic ( in the sense of unpredictability ) zone between those two regions .this chaotic zone was limited by the lower critical orbit ( lco ) , defined as the largest unstable orbit for all starting positions of the planet , and the upper critical orbit ( uco ) , defined as the orbit with the smallest semimajor axis for which the system was stable for all starting positions .a least squares parabolic fit to the numerical integration results yielded : where is the eccentricity of the primaries and the distance is given in au .each coefficient is listed along with its formal uncertainty . although the above formulae were derived for systems where the primaries had equal masses , additional numerical integrations of p - type orbits in systems with unequal mass primaries ( dvorak et al .1989 ) showed no dependence of the critical orbits on the mass ratio of the primaries .finally , concerning p - type orbits , pilat - lohinger et al .( 2003 ) investigated the stability of such orbits in three dimensional space .they integrated initially circular planetary orbits in equal mass binary systems , with a binary eccentricity varying from 0 to 0.5 .the mutual inclination of the orbits was in the range .the orbits were classified as in dvorak ( 1986 ) , i.e. stable , chaotic and unstable , where stable meant that the planet did not suffer from a close encounter with one of the primaries for the whole integration time span ( 50000 periods of the primaries ) .it turned out that the inclination did not affect the stability limit significantly .rabl and dvorak ( 1988 ) , by using numerical integrations , established stability zones for s - type orbits in stellar binary systems ( planet orbiting one of the stars of the binary system ) .the setup of their systems was similar to the one in dvorak ( 1986 ) , i.e. initially circular orbit for the massless particle and equal mass primaries .the maximum binary eccentricity considered was 0.6 .an initially circular s - type orbit was classified as stable , if it remained elliptical with respect to its mother primary during the whole integration time of 300 periods of the primary bodies .based on their results , they derived the following formulae : where is the eccentricity of the stellar binary .note that the meaning of lco and uco is different compared to the p - type orbit case ( the stable orbits lie inside lco , while the unstable ones outside uco ) . 
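to make the kind of numerical experiment described in this section more concrete , the python sketch below integrates a test particle on an initially circular circumbinary orbit in the planar circular restricted three body problem and classifies it with a crude close - encounter / escape test . it is only a minimal sketch in the spirit of such surveys : the integrator settings , the classification thresholds , the number of sampled longitudes and the integration length are all placeholder assumptions and are not the choices made by dvorak ( 1986 ) , rabl and dvorak ( 1988 ) or holman and wiegert ( 1999 ) .

# minimal stability - map sketch for p - type ( circumbinary ) orbits in the
# planar circular restricted three body problem . units : G (m1 + m2) = 1 ,
# binary separation = 1 , so the binary period is 2 * pi . all thresholds ,
# grid choices and the classification rule are illustrative assumptions .
import numpy as np
from scipy.integrate import solve_ivp

MU = 0.5          # m2 / (m1 + m2) ; equal - mass binary as in dvorak ( 1986 )
N_PERIODS = 100   # integration length in binary periods ( placeholder )

def binary_positions(t):
    # stars on a circular orbit about the barycentre , separation 1
    x1 = -MU * np.cos(t);       y1 = -MU * np.sin(t)
    x2 = (1 - MU) * np.cos(t);  y2 = (1 - MU) * np.sin(t)
    return (x1, y1), (x2, y2)

def rhs(t, s):
    # gravitational acceleration on the test particle from both stars
    x, y, vx, vy = s
    (x1, y1), (x2, y2) = binary_positions(t)
    r1 = np.hypot(x - x1, y - y1)
    r2 = np.hypot(x - x2, y - y2)
    ax = -(1 - MU) * (x - x1) / r1**3 - MU * (x - x2) / r2**3
    ay = -(1 - MU) * (y - y1) / r1**3 - MU * (y - y2) / r2**3
    return [vx, vy, ax, ay]

def survives(a, phase):
    # start on a circular keplerian orbit of radius a about the barycentre
    x0, y0 = a * np.cos(phase), a * np.sin(phase)
    v = np.sqrt(1.0 / a)
    vx0, vy0 = -v * np.sin(phase), v * np.cos(phase)
    t_end = 2 * np.pi * N_PERIODS

    def encounter(t, s):   # close approach to either star -> unstable
        (x1, y1), (x2, y2) = binary_positions(t)
        return min(np.hypot(s[0]-x1, s[1]-y1), np.hypot(s[0]-x2, s[1]-y2)) - 0.05
    encounter.terminal = True

    def escape(t, s):      # wanders far from the binary -> unstable
        return np.hypot(s[0], s[1]) - 5.0 * a
    escape.terminal = True

    sol = solve_ivp(rhs, (0, t_end), [x0, y0, vx0, vy0], rtol=1e-9, atol=1e-9,
                    events=[encounter, escape], max_step=0.1)
    return sol.t[-1] >= t_end - 1e-6   # survived the whole integration

if __name__ == "__main__":
    for a in np.arange(1.8, 3.01, 0.1):
        ok = all(survives(a, ph) for ph in np.linspace(0, 2*np.pi, 4, endpoint=False))
        print(f"a = {a:.2f} : {'stable' if ok else 'unstable'}")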
as in dvorak ( 1986 ) , the results showed the existence of a grey ( chaotic ) area between lco and uco . pilat - lohinger and dvorak ( 2002 ) performed more numerical experiments on s - type orbits . their models took into consideration varying binary mass ratios ( ) and , besides a varying primary eccentricity , the planetary orbit had an eccentricity from 0 to 0.5 . the integration time was 1000 binary periods . they found that an increase in the eccentricities reduced the stability zone ( the planetary eccentricity had less influence than the binary eccentricity , but it reduced the stability zone in a similar way ) . the results were also in agreement with the results of rabl and dvorak ( 1988 ) . however , a quick inspection of the result tables in pilat - lohinger and dvorak ( 2002 ) may suggest that the primary mass ratio has an effect on the stability zones , in contrast to what was mentioned above in the case of p - type orbits . holman and wiegert ( 1999 ) also investigated the stability of p - type and s - type orbits in stellar binary systems . they performed numerical simulations of particles on initially circular and prograde orbits around the binary or around one of the stars , in the binary plane of motion and with different initial orbital longitudes . the binary mass ratio was taken in the range and the binary eccentricity in the range . the integrations lasted for binary periods . if a particle survived the whole integration time at all initial longitudes , then the system was classified as stable . using a least squares fit to their data , they obtained : ( i ) for the inner region ( s - type orbit ) : ( ii ) for the outer region ( p - type orbit ) : where is the critical semi - major axis , is the binary semi - major axis , is the binary eccentricity and . equation ( [ hol1 ] ) is valid to typically and to in the worst case over the range of and , while equation ( [ hol2 ] ) is valid to typically and to in the worst case over the same ranges . an interesting finding was that , in the outer region , ` islands ' of instability existed outside the inner stable region ; this phenomenon was attributed to mean motion resonances and indicated that there was not a sharp boundary between stable and unstable regions . it should be mentioned here that equation ( [ hol2 ] ) , as presented in the paper of holman and wiegert , appears not to depend on at all . however , this is probably a misprint , as equation ( [ hol1 ] ) might suggest . the results of holman and wiegert are in good agreement with the results of dvorak ( 1986 ) and rabl and dvorak ( 1988 ) . figure 1 demonstrates that agreement . figure 1 : critical semi - major axis against binary eccentricity for a particle orbiting the binary . the top graph is for p - type orbits and the bottom one is for s - type orbits . the continuous lines come from the results obtained from dvorak ( 1986 ) and rabl and dvorak ( 1988 ) , while the holman - wiegert results are shown with the dashed lines . for both graphs the stars have equal masses and the binary semi - major axis is 1 au .
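for readers who want to apply fits of this kind directly , the python sketch below implements the holman - wiegert critical semi - major axis expressions in the form they are commonly quoted in the literature ( central values of the fitted coefficients only , without the published uncertainties ) ; the numerical coefficients are not taken from the garbled equations above and should be checked against holman and wiegert ( 1999 ) before any serious use .

# commonly quoted holman - wiegert ( 1999 ) least - squares fits for the
# critical semi - major axis , central values only . mu = m2 / (m1 + m2) ,
# e = binary eccentricity , a_b = binary semi - major axis . the coefficients
# below are quoted from the published paper and should be verified there .

def a_crit_s_type(mu, e, a_b=1.0):
    """largest stable orbit around one of the stars ( inner / s - type region )."""
    return a_b * (0.464 - 0.380 * mu - 0.631 * e
                  + 0.586 * mu * e + 0.150 * e**2 - 0.198 * mu * e**2)

def a_crit_p_type(mu, e, a_b=1.0):
    """smallest stable orbit around the whole binary ( outer / p - type region )."""
    return a_b * (1.60 + 5.10 * e - 2.22 * e**2 + 4.12 * mu
                  - 4.27 * e * mu - 5.09 * mu**2 + 4.61 * e**2 * mu**2)

if __name__ == "__main__":
    # equal - mass binary ( mu = 0.5 ) with e = 0.0 and a_b = 1 au ,
    # roughly the configuration of the comparison figure above
    print(a_crit_s_type(0.5, 0.0))   # ~ 0.27 au
    print(a_crit_p_type(0.5, 0.0))   # ~ 2.4 au

evaluated for an equal mass circular binary these fits give a critical p - type orbit of roughly 2.4 binary separations and a critical s - type orbit of roughly 0.27 binary separations , which is consistent with the comparison shown in figure 1 .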
kiseleva and her collaborators performed numerical integrations of hierarchical triple systems with coplanar , prograde and initially circular orbits ( kiseleva et al . 1994a , 1994b ) . the mass ratios were within the range . a system was classified as stable if it preserved its initial hierarchical configuration during the whole of the integration time span , which was normally 100 outer binary orbital periods , but certain cases were followed for 1000 or even for 10000 outer orbits ( however , it appeared that the longer integration time had little effect on the stability boundary ) . these numerical calculations were later extended to eccentric binaries , inclined orbits ( from to ) and different initial phases , and an empirical condition for stability was derived ( eggleton and kiseleva 1995 ) : where is the critical initial ratio of the periastron distance of the outer orbit to the apastron distance of the inner orbit , is related to the critical initial period ratio by the following relation : where and are the eccentricities of the inner and outer orbit respectively . the coefficients of equation ( [ egg1 ] ) were obtained rather empirically , based on the numerical results that the authors had at their disposal . as for the effect of certain characteristics on the stability boundary , such as the orbital eccentricities , it was determined by the examination of a small number of mass ratios that the authors believed to be reasonably representative . the criterion appears to be reliable to about for a wide range of circumstances , which is not very bad , considering the number of parameters and the complex nature of the critical surface . it probably does not work very well in situations where there is a resonance or commensurability , but these are more common in systems with extreme mass ratios ( e.g. star and planets ) , while the intention of the authors ( as stated in their paper ) was to investigate triple systems of comparable masses . it should be pointed out here that there is a misprint in formula ( [ egg1 ] ) as given in eggleton and kiseleva ( 1995 ) : the sign of the term is plus , while it should be minus ( aarseth 2003 ) . in the two previous sections , we presented stability criteria that were derived either analytically or based on results from numerical simulations . in this section , we discuss criteria that are based on the concept of chaos . wisdom ( 1980 ) applied the chirikov resonance overlap criterion for the onset of stochastic behaviour ( chirikov 1979 ) to the planar circular restricted three body problem . he derived the following estimate of when resonances should start to overlap ( the derivation holds for small eccentricities ) : where . by using kepler s third law , this can be expressed in terms of the semi - major axis separation as ( murray and dermott 1999 ) where is the semi - major axis of the perturber . hence , when the particle is in the region , the orbit is chaotic .
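as a rough numerical illustration of the overlap criterion , the python sketch below evaluates the widely quoted form of the chaotic zone half - width , which scales as the 2/7 power of the perturber mass ratio ; the coefficient of about 1.3 is the value usually attributed to wisdom ( 1980 ) and quoted by murray and dermott ( 1999 ) , and is used here as an assumption rather than read off the garbled equation above .

# illustrative evaluation of the wisdom ( 1980 ) resonance - overlap scaling :
# the chaotic zone around a perturber of mass ratio mu extends over
# |a - a_pert| < C * mu**(2/7) * a_pert , with C ~ 1.3 as commonly quoted
# ( e.g. murray and dermott 1999 ) ; C is an assumed value here .

def chaotic_zone_half_width(mu, a_pert, coeff=1.3):
    return coeff * mu**(2.0 / 7.0) * a_pert

if __name__ == "__main__":
    # jupiter - like perturber : mu ~ 1e-3 , a_pert = 5.2 au
    da = chaotic_zone_half_width(1e-3, 5.2)
    print(f"half - width ~ {da:.2f} au")   # ~ 0.94 au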
a similar result to the one of wisdom was obtained through the use of a mapping , which was based on the approximation that perturbations to the massless body are localised near conjunction with the perturber ( duncan et al . 1989 ) . it was found that which is in agreement with equation ( [ wis ] ) . mardling and aarseth ( 1999 ) approached the stability problem in a different way , by noticing that stability against escape in the three body problem is analogous to stability against chaotic energy exchange in the binary - tides problem . the way energy and angular momentum are exchanged between the two orbits of a stable ( unstable ) hierarchical triple system is similar to the way they are exchanged in a binary undergoing normal ( chaotic ) tide - orbit interaction . having that in mind , they derived the following semi - analytical formula for the critical value of the outer pericentre distance : r_{p}^{crit}=c\left[(1+q_{out})\frac{1+e_{out}}{\sqrt{1-e_{out}}}\right]^{\frac{2}{5}}a_{in} , where q_{out} is the mass ratio of the outer binary , e_{out} is the outer binary eccentricity and a_{in} is the inner semi - major axis . if the actual outer pericentre distance exceeds this critical value , then the system is considered to be stable . the above formula is valid for prograde and coplanar systems and it applies to escape of the outer body . c was determined empirically and it was found to be 2.8 . a small heuristic correction of up to was then applied for inclined ( non - coplanar ) orbits , to account for the increased stability ( aarseth and mardling 2001 , aarseth 2004 ) . also , as stated in aarseth and mardling ( 2001 ) , the criterion ignores a weak dependence on the inner eccentricity and inner mass ratio . finally , we should mention here that numerical tests have shown that the criterion works well for a wide range of parameters , but it has not been tested for systems with planetary masses so far ( aarseth 2004 ) , probably because the authors were mainly interested in using the formula in star cluster simulations . we would like to mention here that mardling ( 2007 ) has derived a resonance overlap criterion for the general three body problem . we should point out that the presence of chaos does not necessarily indicate instability , e.g. see murray ( 1992 ) , gladman ( 1993 ) . the reader should also recall the results of dvorak ( 1986 ) and rabl and dvorak ( 1988 ) , with the zones of unpredictability between the stable and unstable orbits . however , that kind of behaviour appears to depend on various parameters of the system , such as the mass ratios . for example , mudryk and wu ( 2006 ) , in their study of a planet orbiting one of the components of a stellar binary system , found little evidence of bound chaos near the instability boundary ( except in the case where the perturber is very small compared to the star , i.e. the case discussed by gladman or covered by wisdom s criterion ) and , as a result of that , they adopted the boundary of resonance overlap as the boundary of instability . that appears to be the case with mardling and aarseth too . a nice discussion on resonances and instability can be found in mardling ( 2001 ) . we have attempted to collect and present the various criteria that have been derived for the stability of hierarchical triple systems over the past few decades . tables 1 , 2 and 3 present the various criteria in a rather concise manner . each table consists of four columns , i.e.
the name column , which gives the name of the relative paper(s ) , the model / restrictions column , which gives a brief description of the systems for which the criterion is applicable ( a blank line indicates that the criterion applies to the general case , without any restrictions ) , the stability type column , which states what stability means for a specific criterion and finally the comments column , where we give any extra information we consider important .table 1 lists the criteria that were derived analytically .most of them were based on a generalisation of the concept of zero velocity surfaces of the circular restricted three body problem , with the quantity playing the role of the jacobi constant . as stated in the corresponding section ,the criterion is a sufficient condition and therefore , no conclusion can be drawn when it is violated .the marchal and bozis ( 1982 ) criterion is a good choice for one who intends to use a criterion from that specific category .however , depending on the system investigated , the other criteria could also be a useful alternative and even easier to apply .table 1 also lists sufficient criteria for escape of one of the bodies .although those criteria are not very useful on their own , because of their nature ( they require some conditions to be satisfied at a moment ) , they could be used as part of a computer code ( e.g. for cluster simulations ) ; however , their sufficient nature is a major disadvantage for that type of use .table 2 presents criteria that were based on results from numerical integrations . a task that is not particularly easy , as a triple system has many parameters to be taken into consideration ( mass ratios and orbital parameters ) and covering the whole of the parameter space at once is a rather difficult thing .sometimes the various criteria were in agreement with each other , sometimes they were not .this can be attributed to many factors .the main one , in our opinion , is the different meaning that stability may have for different people .szebehely ( 1984 ) gave 47 different definitions for stability in his dictionary of stability. as the reader has probably noticed , almost each author mentioned in section ( 2.2 ) , gave a different definition of what he considered as stable system .another issue that raises concern is the integration time span .a system may appear to be stable for a certain time span , but becomes unstable when the integration is extended over longer timescales .also , the choice of initial conditions may have an effect on the outcome .finally , as stated in kiseleva et al .( 1994a ) , a matter of concern about those criteria is the fact that they involve instantaneous and not mean orbital parameters .the last two criteria of the table are probably the best from the numerical ones , the eggleton - kiseleva for stellar systems and the holman - wiegert for planets in binary systems ( keep in mind that the planets are on intially circular orbits ) .we would like to open a parenthesis here and mention that the stability of planets in binary systems is an area of research that is expected to become more and more important in the future , as there is an increasing number of exoplanets that are members of binary or multiple stellar systems ( e.g. 
see eggenberger et al .it appears that none of the above mentioned stability criteria , analytical or numerical , can cover the issue on its own .for instance , the planetary eccentricity is an important parameter not appearing in the criteria , although many exoplanets have eccentric orbits ( of course most of the criteria were developed when none or very few exoplanets had been discovered by that time ) .therefore , at the moment , one should choose the criterion ( or a combination of different criteria ) that fits the system he investigates better .finally , table 3 lists criteria that involve the concept of chaos . in that context ,instability in a three body system was thought to be the consequence of the overlap of sub - resonances within mean motion resonances .it was also mentioned that the presence of chaos in some cases , would not necessarily indicate instability .the author wants to thank the institute for materials and processes at the school of engineering and electronics of edinburgh university , where most of this work took place .the author also thanks the anonymous referees for their useful comments on various aspects of this work .aarseth , s.j .: gravitational n - body simulations : tools and algorithms . cambridge university press , cambridge pp . 295( 2003 ) + aarseth , s.j .: formation and evolution of hierarchical systems .revmexaa * 21 * , 156 - 162 ( 2004 ) + aarseth , s.j . , mardling , r.a .: the formation and evolution of multiple systems . in : podsiadlowski p. , rappaport s. , king a. r. , dantona f. , burderi l. ( eds ) evolution of binary and multiple star systemsasp conference series , vol .77 - 88 ( 2001 ) + anosova , j.p . : dynamical evolution of triple systems .ap ss * 124 * , 217 - 241 ( 1986 ) + black , d.c . : a simple criterion for determining the dynamical stability of three - body systems .aj * 87 * , 1333 - 1337 ( 1982 ) + bozis , g. : zero velocity surfaces for the general planar three - body problem .ap ss * 43 * , 355 - 368 ( 1976 ) + bozis , g. : escape of the smallest mass of a triple system .pasj * 33 * , 67 - 75 ( 1981 ) + brasser , r. : hill stability of a triple system with an inner binary of large mass ratio .mnras * 332 * , 723 - 728 ( 2002 ) + chirikov , b.v .: universal instability of many - dimensional oscillator systems .rep . * 52 * , 263 - 379 ( 1979 ) + donnison , j.r . : the stability of masses during three - body encounters .. mech . * 32 * , 145 - 162 ( 1984a ) + donnison , j.r . : the stability of binary star systems during encounters with a third star .mnras * 210 * , 915 - 927 ( 1984b ) + donnison , j.r . :the effects of eccentricity on the hierarchical stability of low - mass binaries in the three - body systems .mnras * 231 * , 85 - 95 ( 1988 ) + donnison , j.r . : the hill stability of a binary or planetary system during encounters with a third inclined body .mnras * 369 * , 1267 - 1280 ( 2006 ) + donnison , j.r . , mikulskis , d.f . : three - body orbital stability criteria for circular orbits .mnras * 254 * , 21 - 26 ( 1992 ) + donnison , j.r . , mikulskis , d.f . : three - body orbital stability criteria for circular retrograde orbits .mnras * 266 * , 25 - 30 ( 1994 ) + donnison , j.r . , williams , i.p . : the stability of coplanar three - body systems with applications to the solar system . cel. mech . * 31 * , 123 - 128 ( 1983 ) + donnison , j.r . ,williams , i.p . : the hierarchical stability of satellite systems .mnras * 215 * , 567 - 573 ( 1985 ) + duncan , m. , quinn , t. , tremaine , s. 
: the long - term evolution of orbits in the solar system - a mapping approach .icarus * 82 * , 402 - 418 ( 1989 ) + dvorak , r. : critical orbits in the elliptic restricted three - body problem . a a * 167 * , 379 - 386 ( 1986 ) + dvorak , r. , froeschl , ch . ,froeschl , cl .: stability of outer planetary orbits ( p - types ) in binaries .a a * 226 * , 335 - 342 ( 1989 ) + easton , r. : some topology of 3-body problems .j. differ .equations * 10 * , 371 - 377 ( 1971 ) + egenberger , a. , udry , s. , mayor , m. : statistical properties of exoplanets iii .planet properties and stellar multiplicity . a a * 417 * , 353 - 360 ( 2004 ) + eggleton , p. , kiseleva , l. : an empirical condition for stability of hierarchical triple systems .apj * 455 * , 640 - 645 ( 1995 ) + ge , y. , leng , x. : an alternative deduction of the hill - type surfaces of the spatial3-body problem .. astron . * 53 * , 233 - 254 ( 1992 ) + gladman , b. : dynamics of systems of two close planets .icarus * 106 * , 247 - 263 ( 1993 ) + golubev , v.g . :regions where motion is impossible in the three body problem .sssr * 174 * , 767 - 770 ( 1967 ) + golubev , v.g .: hill stability in the unrestricted three - body problem .* 13 * , 373 - 375 ( 1968a ) + golubev , v.g .: hill stability in the unbounded three - body problem .sssr , * 180 * , 308 - 311 ( 1968b ) + graziani , f. , black , d.c . : orbital stability constaints on the nature of planetary systems .apj * 251 * , 337 - 341 ( 1981 ) + griffith , j.s . ,north , r.d .: escape or retention in the three body problem .* 8 * , 473 - 479 ( 1974 ) + harrington , r.s .: stability criteria for triple stars .* 6 * 322 - 327 ( 1972 ) + harrington , r.s . :production of triple stars by the dynamical decay of small stellar systems .aj * 80 * , no . 12 , 1081 - 1086 ( 1975 ) + harrington , r.s . :planetary orbits in binary stars .aj * 82 * , no . 9 , 753 - 756 ( 1977 ) + hill , g. w. : researches in the lunar theory. am . j. math . * 1 * , 5 - 26 ( 1878a ) + hill , g. w. : researches in the lunar theory . am . j. math .* 1 * , 129 - 147 ( 1878b ) + hill , g. w. : researches in the lunar theory .am . j. math .* 1 * , 245 - 261 ( 1878c ) + holman , m.j . ,wiegert , p.a . : long - term stability of planets in binary systems .aj * 117 * , 621 - 628 ( 1999 ) + kiseleva , l.g . , eggleton , p.p . ,anosova , j.p . : a note on the stability of hierarchical triple stars with initially circular orbits .mnras * 267 * , 161 - 166 ( 1994a ) + kiseleva , l.g . , eggleton , p.p . , orlov , v.v .: instability of close triple systems with coplanar initial doubly circular motion .mnras * 270 * , 936 - 946 ( 1994b ) + marchal , c. : sufficient conditions for hyperbolic - elliptic escape and for ejection without escape in the three body problem .* 9 * , 381 - 393 ( 1974 ) + marchal , c. , bozis , g. : hill stability and distance curves for the general three - body problem .* 26 * , 311 - 333 ( 1982 ) + marchal , c. , saari , d.g .: hill regions for the general three - body problem .* 12 * , 115 - 129 ( 1975 ) + marchal , c. , yoshida , j. , yi - sui , s. : a test of escape valid even for very small mutual distances i. the acceleration and the escape velocities of the third body .. mech . * 33 * , 193 - 207 ( 1984a ) + marchal , c. , yoshida , j. , yi - sui , s. : three - body problem .. mech . * 34 * , 65 - 93 ( 1984b ) + mardling , r.a . : stability in the general three - body problem . in : podsiadlowski p. , rappaport s. , king a. r. , dantona f. , burderi l. 
( eds ) evolution of binary and multiple star systems .asp conference series , vol .101 - 116 ( 2001 ) + mardling , r.a . : resonance , chaos and stability in the general three - body problem . in vesperinie. , giersz m. , sills a. ( eds ) dynamical evolution of dense stellar systems .iau symposium 246 , 2007 , preprint .+ mardling , r.a . ,aarseth , s.j . : dynamics and stability of three - body systems . in : steves b.a . , roy a.e .( eds ) the dynamics of small bodies in the solar system , a major key to solar system studies , nato asi , vol .385 - 392 kluwer , dordrecht ( 1999 ) + markellos , v.v . ,roy , a.e . : hill stability of satellite orbits .* 23 * , 269 - 275 ( 1981 ) + milani , a. , nobili , a.m. : on topological stability in the general three - body problem .. mech . * 31 * , 213 - 240 ( 1983 ) + mudryk , l.r . ,wu , y. : resonance overlap is responsible for ejecting planets in binary systems .apj * 639 * , 423 - 431 ( 2006 ) + murray , c.d . : solar system - wandering on a leash .nature * 357 * , 542 - 543 ( 1992 ) + murray c. d. , dermott s. f. : solar system dynamics .cambridge university press , cambridge ( 1999 ) + pendleton , y.j . ,black , d.c .: further studies on criteria for the onset of dynamical instability in general three - body systems .aj * 88 * , no . 9 ,1415 - 1419 ( 1983 ) + pilat - lohinger , e. , dvorak , r. : stability of s - type orbits in binaries .. astron . * 82 * , 143 - 153 ( 2002 ) + pilat - lohinger , e. , funk , b. , dvorak , r. : stability limits in double stars - a study of inclined planetary orbits . a a * 400 * , 1085 - 1094 ( 2003 ) + rabl , g. , dvorak ,r. : satellite - type planetary orbits in double stars : a numerical approach . a a * 191 * , 385 - 391 ( 1988 ) + roy , a.e . ,carusi , a. , valsecchi , g.b . ,walker , i.w . : the use of the energy and angular momentum integrals to obtain a stability criterion in the general hierarchical three - body problem . a a * 141 * , 25 - 29 ( 1984 ) +saari , d.g .: restrictions on the motion of the three - body problem .siam j. appl* 26 * , no . 4 , 806 - 815 ( 1974 ) + saari , d.g .: from rotations and inclinations to zero configurational velocity surfaces.i - a natural rotating coordinate system .. mech . * 33 * , 299 - 318 ( 1984 ) + saari , d.g . : from rotations and inclinations to zero configurational velocity surfaces , ii .the best possible configurational velocity surfaces .40 * , 197 - 223 ( 1987 ) + sergysels , r. : zero velocity hypersurfaces for the general three - dimensional three - body problem .. mech . * 38 * , 207 - 214 ( 1986 ) + standish , e.m.jr . : sufficient conditions for escape in the three - body problem .. mech . * 4 * , 44 - 48 ( 1971 ) + standish , e.m.jr . : sufficient conditions for return in the three - body problem .* 6 * , 352 - 355 ( 1972 ) + szebehely , v. : analytical determination of the measure of stability of triple stellar systems .. mech . * 15 * , 107 - 110 ( 1977 ) + szebehely , v. : stability of artificial and natural satellites .* 18 * , 383 - 389 ( 1978 ) + szebehely , v. : review of concepts of stability .. mech . * 34 * , 49 - 64 ( 1984 ) + szebehely , v. , zare , k. : stability of classical triplets and of their hierarchy . a * 58 * , 145 - 152 ( 1977 ) + tung , c.c . : some properties of classical integrals of general problem of three bodies .scientia sinica * 17 * , 306 - 330 ( 1974 ) + valsecchi , g.b . , carusi , a. , roy , a.e . 
: the effect of orbital eccentricities on the shape of the hill - type analytical stability surfaces in the general three - body problem .. mech . * 32 * , 217 - 230 ( 1984 ) + veras , d. , armitage , p.j . : the dynamics of two massive planets on inclined orbits .icarus * 172 * , 349 - 371 ( 2004 ) + walker , i.w . : on the stability of close binaries in hierarchical three - body systems .* 29 * , 215 - 228 ( 1983 ) + walker , i.w . ,emslie , a.g . ,roy , a.e .: stability criteria in many - body systems i. an empirical stability criterion for co - rotational three body systems .* 22 * , 371 - 402 ( 1980 ) + walker , i.w . ,roy , a.e .: stability criteria in many - body systems ii . on a sufficient condition for the stability of coplanar hierarchical three body systems .* 24 * , 195 - 225 ( 1981 ) + wisdom , j. : the resonance overlap criterion and the onset of stochastic behavior in the restricted three - body problem .aj * 85 * , no . 8 , 1122 - 1133 ( 1980 ) + yoshida , j. : improved criteria for hyperbolic - elliptic motion in the general three - body problem .pasj * 24 * , 391 - 408 ( 1972 ) + yoshida , j. : improved criteria for hyperbolic - elliptic motion in the general three - body problem .ii . pasj * 26 * , 367 - 377 ( 1974 ) + zare , k. : the effects of integrals on the totality of solutions of dynamical systems .* 14 * , 73 - 83 ( 1976 ) + zare , k. : bifurcation points in the planar problem of three bodies .* 16 * , 35 - 38 ( 1977 ) +
|
In this paper we summarise the stability criteria that have been derived for hierarchical triple systems over the past few decades. We first describe the criteria based on generalising the concept of the zero-velocity surfaces of the restricted three-body problem to the general problem, and then present criteria concerned with the escape of one of the bodies. Next we discuss criteria derived from numerical integrations, and finally criteria that involve the concept of chaos. In all cases, wherever possible, we discuss the advantages and disadvantages of the criteria and of the methods on which their derivation is based, and in several cases the criteria are compared with one another.
|
let be an independent and identically distributed random sample of a random vector , with joint cumulative distribution function and marginal distribution functions and .let represent a multiplicative kernel distribution function ; i.e. , and denote a bandwidth sequence .the transformation kernel estimator of copulas introduced in is defined as follows : where is an increasing transformation and , are pseudo - observations .it is customary in coplua estimation to take , , where and are the empirical marginal cumulative distribution functions .this estimator presents an advantage comparatively to the estimator proposed by fermanian _(2004 ) , as it does not depend on the marginal distributions .taking equal to the standard gaussian distribution leads to the probit transformation proposed , for instance , in marron and ruppert ( 1994 ) . for nonparametric kernel estimation for the copula density using the probit transformation, we refer to geenens _ ( 2014 ) and references therein .+ in this paper we are concerned with kernel estimation for the copula function , avoiding as such the inconsistency problem due to many unbounded copula densities .however , there is a boundary bias problem which may be solved by using the transformation kernel estimator , with a suitable bandwidth .since the choice of the bandwidth is problematic for , as pointed out in , we shall deal with a variable bandwidth that may depend either on the data or the location point .thus , we define the following estimator : we shall assume that is the integral of a symmetric bounded kernel supported on ] . these results enable us to apply various methods of bandwidth selection and obtain the consistency of estimators , under certain conditions on . + the rest of the paper is organized as follows . in section 2 ,we state our main theoreticla results and give their proofs . in section 3 , we present a practical method for seclecting the bandwith , which is based on a cross - validation criterion . in section 4 , we make a simulation study using data generated with the frank copula .finally , the paper is ended by an appendix .we state our theoretical results in this section [ t1 ] suppose that the copula function has bounded first - order partial derivatives on and the transformation admits a bounded derivative .then , for any sequence of positive constants satisfying and , we have almost surely , for some , as where .[ t2 ] suppose that the copula function has bounded second - order partial derivatives on and that the transformation admits a bounded derivative .then , for any sequence of positive constants satisfying and we have almost surely , for some , as , where .the following proposition is an immediate consequence of theorem [ t1 ] and theorem [ t2 ] [ pp1 ] let for some and such that then , under the assumptions of theorems [ t1 ] and [ t2 ] , we have almost surely , as , ( * theorem [ t1 ] * ) we begin by some notation .recall that , and are the empirical cumulative distribution functions of , and , respectively .then the copula estimator based directly on sklar s theorem can be defined as with and the quantile functions corresponding to and .define the bivariate empirical copula process as ,\qquad ( u , v)\in [ 0,1]^2\ ] ] and introduce the following quantity . which represents the uniform bivariate empirical distribution function based on a sample of independent and identically distributed random variables with marginals uniformly distributed on ] . 
for , , set and then , one has \\ & = & \frac{1}{n}\sum_{i=1}^{n}\left [ k\left(\frac{\phi^{-1}(u)-\phi^{-1}(\hat{f}_n \circ f^{-1}(u_i))}{h}\right)k\left(\frac{\phi^{-1}(v)-\phi^{-1}(\hat{g}_n\circ g^{-1}(v_i))}{h } \right)-\mathbb{i}\{u_i\leq u , v_i\leq v\}\right]\\ & = : & \frac{1}{n}\sum_{i=1}^{n}g(u_i , v_i , h),\end{aligned}\ ] ] where belongs to the class of measurable functions defined as , 0<h<1 \,\text{and } \ , \zeta_{1,n } ; \zeta_{2,n}:[0,1]\mapsto [ 0,1 ] \,\text{nondecreasing . } \end{array } \right\}\ ] ] since , one can observe that now , we have to apply the main theorem of mason and swanepoel ( 2010 ) which gives the order of convergence of the deviation from their expectations of kernel - type function estimators . towards this end , the above class of functions must satisfy the following four conditions : * there exists a finite constant such that * there exists a constant such that for all ] , one can write for large enough , and hence , combining this and proposition [ p1 ] , we obtain thus the corollary [ crl1 ] follows from . + coming back to the proof of our theorem [ t1 ] , we have to show that the deviation , suitably normalized , is almost surely uniformly bounded , as . for this , it suffices to prove that ^ 2}\frac{\left|\sqrt{n}d_{n , h}(u , v)\right|}{\sqrt{2\log\log n}}\leq 3.\ ] ] we will make use of an approximation of the empirical copula process by a kiefer process ( see e.g. , zari , page 100 ) . let be a -parameters wiener process defined on ^ 2\times[0,\infty) ] .+ by theorem 3.2 in zari , for , there exists a sequence of gaussian processes , n>0\right\} ] , put and .then , we can write \\ & = & \int_{-1}^{1}\int_{-1}^{1}\mathbb{e}\mathbb{i}\{u_i\leq\zeta_{1,n}^{-1}[\phi(\phi^{-1}(u)-sh)],v_i\leq \zeta_{2,n}^{-1}[\phi(\phi^{-1}(v)-th ) ] \}k(s)k(t)dsdt\\ & = & \int_{-1}^{1}\int_{-1}^{1}c\left(\zeta_{1,n}^{-1}[\phi(\phi^{-1}(u)-sh ) ] , \zeta_{2,n}^{-1}[\phi(\phi^{-1}(v)-th)]\right)k(s)k(t)dsdt.\end{aligned}\ ] ] thus ,\zeta_{2,n}^{-1}[\phi(\phi^{-1}(v)-th ) ] \right ) - c(u , v)\right ] k(s)k(t)dsdt.\ ] ] making use of the chung ( 1949 ) s law of the iterated logarithm , we can infer that , whenever is continuous and admits a bounded density , for all ] .thus , for all large , one can write k(s)k(t)dsdt.\ ] ] by applying a 2-order taylor expansion for the copula function , we obtain (u , v ) + [ \phi(\phi^{-1}(v)-th)-v]c_v(u , v)+ [ \phi(\phi^{-1}(u)-sh)-u]^2\frac{c_{uu}(u , v)}{2 } & \\ & + [ \phi(\phi^{-1}(v)-th)-v]^2\frac{c_{vv}(u , v)}{2}+ [ \phi(\phi^{-1}(u)-sh)-u][\phi(\phi^{-1}(v)-th)-v ] c_{uv}(u , v)+o(h^2 ) , & \end{aligned}\ ] ] where applying again a 1-order taylor expansion for the function , we get and thus ^ 2\frac{c_{uu}(u , v)}{2 } + [ \phi'(\phi^{-1}(v))th]^2\frac{c_{vv}(u , v)}{2}+ [ \phi'(\phi^{-1}(u))][\phi'(\phi^{-1}(u))]st h^2c_{uv}(u , v)+o(h^2 ) . &\end{aligned}\ ] ] using the fact that is 2-order kernel ; i.e. , and we obtain , by fubini s theorem , that for all ^ 2 ] , the uniform almost sure consistency of is guarranted by proposition [ pp1 ] .here , we make some numerical experiments to show the performance of the transformation kernel estimator . before hand , we determine graphically the optimal bandwidth , by visualizing the curve of over ] is then equal to ] , the formulas + where is the transformation estimation calculated with the sample . for arbitrary values of and different values for the couple , we obtain the results in table [ tab1 ] . 
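Before turning to the tabulated results, the following sketch illustrates how the probit-transformation kernel estimator and a cross-validation bandwidth search of the kind described above might be computed in practice. It is only a sketch under our own assumptions: the integrated Epanechnikov kernel, the rank-based pseudo-observations, the simulated data and the exact form of the leave-one-out criterion are illustrative choices and need not coincide with those used in this paper.

```python
import numpy as np
from scipy.stats import norm

def kernel_cdf(x):
    """Integrated Epanechnikov kernel: a distribution-function kernel on [-1, 1]."""
    x = np.clip(x, -1.0, 1.0)
    return 0.5 + 0.75 * (x - x**3 / 3.0)

def pseudo_obs(x):
    """Rank-based pseudo-observations, ranks divided by n + 1."""
    ranks = np.argsort(np.argsort(x)) + 1.0
    return ranks / (len(x) + 1.0)

def copula_estimate(u, v, U, V, h):
    """Probit-transformation kernel estimator of C(u, v) from pseudo-observations."""
    a = kernel_cdf((norm.ppf(u) - norm.ppf(U)) / h)
    b = kernel_cdf((norm.ppf(v) - norm.ppf(V)) / h)
    return np.mean(a * b)

def loo_cv_score(U, V, h):
    """Leave-one-out criterion: squared gap between empirical indicator and estimate."""
    n = len(U)
    total = 0.0
    for i in range(n):
        keep = np.arange(n) != i
        c_hat = copula_estimate(U[i], V[i], U[keep], V[keep], h)
        emp = np.mean((U[keep] <= U[i]) & (V[keep] <= V[i]))
        total += (emp - c_hat) ** 2
    return total / n

# Illustration on a dependent sample (the data-generating model is arbitrary).
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.6 * x + 0.8 * rng.normal(size=200)
U, V = pseudo_obs(x), pseudo_obs(y)
grid = np.linspace(0.05, 0.5, 10)
h_opt = grid[int(np.argmin([loo_cv_score(U, V, h) for h in grid]))]
print("selected bandwidth:", round(float(h_opt), 3))
```

In practice one would scan a finer bandwidth grid, and the indicator-based criterion could be replaced by an integrated squared-error version of the cross-validation formula given above.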
in each colum of value , we report the first and the below for arbitrary chosen couples .the results are very conclusive , showing that the cross validation method may be applied to select the bandwidth for the transformation kernel estimator of copulas .( * proposition [ p1 ] * ) + to simplify the notations , we consider a general function which is the integral of a symmetric bounded kernel , supported on ^ 2 ] , and . for any function and , we can write where ^ 2}|k(s , t)| ] . thus ( g.i ) holds by taking + * checking for ( g.ii ) .* we have to show that , where is a positive constant .one can write ^ 2 & \\ & = \mathbb{e}\left[k^2\left(\frac{\phi^{-1}(u)-\phi^{-1}(\zeta_{1,n}(u))}{h},\frac{\phi^{-1}(v)-\phi^{-1}(\zeta_{2,n}(v))}{h}\right)\right ] & \\ & - 2\mathbb{e}\left[k\left(\frac{\phi^{-1}(u)-\phi^{-1}(\zeta_{1,n}(u))}{h},\frac{\phi^{-1}(v)-\phi^{-1}(\zeta_{2,n}(v))}{h}\right)\mathbb{i}\{u\leq u , v\leq v\}\right ] + c(u , v)&\\ & = : a -2b + c(u , v)&.\end{aligned}\ ] ] since the function is a kernel of a distribution function , we may assume without loss of generality that it takes its values in ] , as , that is is asymptotically equivalent to .as well , we have is asymptotically equivalent to .thus , for all large , we can write &\\ & -2\mathbb{e}\left [ \int_{-1}^{1}\int_{-1}^{1}\mathbb{i}\left\{u\leq u\wedge \phi(\phi^{-1}(u)-sh),v\leq v\wedge \phi(\phi^{-1}(v)-th)\right\}k(s , t)dsdt \right]&\\ & + \int_{-1}^{1}\int_{-1}^{1}c(u , v)k(s , t)dsdt.&\end{aligned}\ ] ] that is , now , we have to discuss condition ( g.ii ) in the four following cases : + * case 1 . *+ in this case the second member of inequality is reduced and we have (s , t)dsdt.\end{aligned}\ ] ] by a taylor expansion for the copula function , we have (u , v)\\ & + & [ v-\phi(\phi^{-1}(v)-th)]c_v(u , v)+ o(h ) . \end{aligned}\ ] ] applying again a taylor - young expansion for the function , we obtain and thus (s , t)dsdt \\ & \leq & 4h\left[\|c_u\|+\|c_v\|\right]\sup_{x\in\mathbb{r}}|\phi'(x)|\|k\|.\end{aligned}\ ] ] taking \left\|\phi'\right\|\left\|k\right\| ] . + * case 3 . * . + here, inequality is rewritten into (s , t)dsdt & \\ & -\int_{-1}^{1}\int_{-1}^{1}\left[c\left(\phi(\phi^{-1}(u)-sh),v\right)-c(u , v)\right]k(s , t)dsdt . & \end{aligned}\ ] ] by applying successively a taylor expansion for and for , we get (s , t)dsdt\\ & & -\int_{-1}^{1}\int_{-1}^{1}c_u\left(\theta_2,v\right)\left[\phi(\phi^{-1}(u)-sh)-\phi(\phi^{-1}(u))\right]k(s , t)dsdt\\ & \leq & \int_{-1}^{1}\int_{-1}^{1}c_v\left(\phi(\phi^{-1}(u)-sh),\theta_1\right)\phi'(\gamma_1).(-th)k(s , t)dsdt\\ & & -\int_{-1}^{1}\int_{-1}^{1}c_u\left(\theta_2,v\right)\phi'(\gamma_2).(-sh)k(s , t)dsdt,\end{aligned}\ ] ] where + this implies thus condition ( g.ii ) holds , with + * case 4 . * . +this case is analogous to case 3 , where the roles of and are interchanged .hence , condition ( g.ii ) is fulfilled , with the same constant + * checking for ( f.i)*. we have to check the uniform entropy condition for the class of functions , 0<h<1 \,\text{and } \ , \zeta_{1,n}\zeta_{2,n}:[0,1]\mapsto [ 0,1 ] \,\text{nondecreasing . }\end{array } \right\}\ ] ] to this end , we consider the following classes of functions , where is an increasing function : + + + + ^ 2\right\} ] and +\frac{1}{m^2}$ ] .+ let .then , we have and . hence and . 
by continuity and .define and then and , which are equivalent to and by right - continuity of the kernel , we obtain ^ 2 , g_m(x , y)\longrightarrow g(x , y ) , m\rightarrow \infty\ ] ] and conclude that is pointwise measurable class .+ b , d. , seck , c.t . andasymptotic confidence bands for copulas based on the local linear kernel estimator ._ applied mathematics _, 6 , 2077 - 2095 .http://dx.doi.org/10.4236/am.2015.612183 chung , k - l ( 1949 ) .an estimate concerning the kolmogoroff limit distribution .trans ammath soc 67:3650 .deheuvels , p. ( 1979 ) .la fonction de dpendence empirique et ses proprits .un test non paramtrique dindpendance ._ bulletin royal belge de lacadmie des sciences _ , ( 5 ) , 65 , 274 - 292 .fermanian , j. , radulovic , d. and wegkamp , m. ( 2004 ) .weak convergence of empirical copula processes ._ international statistical institute ( isi ) and bernoulli society for mathematical statistics and probability ._ , vol . 10 , 5:847 - 860 .geenens , g. , charpentier , a. , and paindaveine , d. ( 2014 ) .probit transformation for nonparametric kernel estimation of the copula density ._ ecares working paper 2014 - 23_. marron , j. s. and ruppert , d. ( 1994 ) .transformations to reduce boundary bias in kernel density estimation ._ journal of the royal statistical society .series b ( methodological ) _ , 56(4):653 - 671 .doi : 10.2307/2346189 .mason , d. m. and swanepoel , j.h.w ( 2010 ) . a general result on the uniform in bandwidth consistency of kernel - type function estimators ._ sociedadde estadistica e investigation operativa 2010 ._ , doi 10.1007/s11749 - 010 - 0188 - 0 sklar , a. ( 1959 ) .fonctions de rpartition dimensions et leurs marges .inst . statistic .paris _ , 8 , 229 - 231 .van der vaart , a. w. and wellner , j. a. : weak convergence and empirical processes , _ springer , new york _ , 1996 .
|
In this paper we establish the uniform-in-bandwidth consistency of the transformation kernel estimator of copulas introduced in . To this end, we prove a uniform-in-bandwidth law of the iterated logarithm for the maximal deviation of this estimator from its expectation. We then show that, as the sample size tends to infinity, the bias of the estimator converges to zero uniformly over a suitable range of bandwidths. A practical method for selecting the optimal bandwidth is presented. Finally, we report simulation experiments illustrating the finite-sample performance of the estimator.
|
estimation of covariance matrices and their inverses ( a.k.a .precision matrices ) is of fundamental importance in almost every aspect of statistics , ranging from the principal component analysis [ ] , graphical modeling [ ] , classification based on the linear or quadratic discriminant analysis [ ] , and real - world applications such as portfolio selection [ ] and wireless communication [ ] .suppose we have temporally observed -dimensional vectors , with having mean zero and covariance matrix whose dimension is .our goal is to estimate the covariance matrices and their inverses based on the data matrix . in the classical situation where is fixed , and are mean zero independent and identically distributed ( i.i.d . )random vectors , it is well known that the sample covariance matrix is a consistent and well behaved estimator of , and is a natural and good estimator of . see for a detailed account. however , when the dimensionality grows with , random matrix theory asserts that is no longer a consistent estimate of in the sense that its eigenvalues do not converge to those of ; see , for example , the marenko pastur law [ ] or the tracy widom law [ ] .moreover , it is clear that is not defined when is not invertible in the high - dimensional case with . during the last decade, various special cases of the above covariance matrix estimation problem have been studied . in most of the previous papersit is assumed that the vectors are i.i.d . and thus the covariance matrix is time - invariant .see , for example , ( ) , , ( ) , where consistency and rates of convergence are established for various regularized ( banded , tapered or thresholded ) estimates of covariance matrices and their inverses . as an alternative regularized estimate for sparse precision matrix, one can adopt the lasso - type entry - wise 1-norm penalized likelihood approach ; see .other estimates include the cholesky decomposition based method [ ] , neighborhood selection for sparse graphical models [ ] , regularized likelihood approach [ ] and the sparse matrix transform [ ] . considered covariance matrix estimation for univariate stationary processes .the assumption that are i.i.d .is quite restrictive for situations that involve temporally observed data . in and authors considered time - varying gaussian graphical models where the sampling distribution can change smoothly over time .however , they assume that the underlying random vectors are independent . using nonparametric smoothing techniques, they estimate the time - vary covariance matrices in terms of covariance matrix functions .their asymptotic theory critically depends on the _ independence _ assumption .the importance of estimating covariance matrices for dependent and nonstationary processes has been increasingly seen across a wide variety of research areas . in modeling spatial temporal data , proposed quadratic nonlinear dynamic models to accommodate the interactions between the processes which are useful for characterizing dynamic processes in geophysics [ ] . non - gaussian clutter and noise processes in space time adaptive processing , where the space time covariance matrix is important for detecting airborne moving targets in the nonstationary clutter environment [ ] . in finance , considered multivariate stochastic volatility models parametrized by time - varying covariance matrices with heavy tails and correlated errors . 
investigated the markowitz portfolio selection problem for optimal returns of a large number of stocks with hidden and heterogeneous gaussian graphical model structures .in essence , those real - world problems pose a number of challenges : ( i ) nonlinear dynamics of data generating systems , ( ii ) temporally dependent and nonstationary observations , ( iii ) high - dimensionality of the parameter space and ( iv ) non - gaussian distributions .therefore , the combination of more flexible nonlinear and nonstationary components in the models and regularized covariance matrix estimation are essential to perform related statistical inference .in contrast to the longstanding progresses and extensive research that have been made in terms of heuristics and methodology , theoretical work on estimation of covariance matrices based on high - dimensional time series data is largely untouched . in this paperwe shall substantially relax the i.i.d .assumption by establishing an asymptotic theory that can have a wide range of applicability .we shall deal with the estimation of covariance and precision matrices for high - dimensional stationary processes in sections [ sec : stationary ] and [ sec : precision_statproc ] , respectively .section [ sec : stationary ] provides a rate of convergence for the thresholded estimator , and section [ sec : precision_statproc ] concerns the graphical lasso estimator for precision matrices . for locally stationary processes , an important class of nonstationary processes , we shall study in section [ sec : covariancematrixestiamtion_nonstatproc ] the estimation of time - varying covariance and precision matrices .this generalization allows us to consider time - varying covariance and precision matrix estimation under temporal dependence ; hence our results significantly extend previous ones by and .furthermore , by assuming a mild moment condition on the underlying processes , we can relax the multivariate gaussian assumption that was imposed in and [ and also by ( ) in the i.i.d . setting ] .specifically , we shall show that , thresholding on the kernel smoothed sample covariance matrices , estimators based on the localized graphical lasso procedure are consistent estimators for time - varying covariance and precision matrices . to deal with temporal dependence , we shall use the functional dependence measure of . with the latter , we are able to obtain explicit rates of convergence for the thresholded covariance matrix estimates and illustrate how the dependence affects the rates . in particular ,we show that , based on the moment condition of the underlying process , there exists a threshold value .if the dependence of the process does not exceed that threshold , then the rates of convergence will be the same as those obtained under independence .on the other hand , if the dependence is stronger , then the rates of convergence will depend on the dependence .this phase transition phenomenon is of independent interest .we now introduce some notation .we shall use to denote positive constants whose values may differ from place to place .those constants are independent of the sample size and the dimension .for some quantities and , which may depend on and , we write if holds for some constant that is independent of and and if there exists a constant such that .we use and . for a vector , we write and for a matrix , , , and . for a random vector , write , , if ^{1/a } < \infty ] .if and , let solve , then . 
if , and , let be the solution to the equation over the interval ] is decreasing over .a plot of the function in ( [ eqn : fmax ] ) is given in figure [ fig : m15608](a ) .let be the minimizer of the right - hand side of ( [ eqn : fmax ] ). for ( i ) , assume for some . then satisfies , which implies , and hence ( i ) follows .note that ( ii ) follows in view of and .similarly we have ( iii ) since . the last case ( iv )is straightforward since for all . if , assume , and then ( [ eqn : fmax ] ) still holds with therein replaced by .a plot for this case is given in figure [ fig : m15608](b ) .note that if .then we can similarly have ( i)(iv ) . from the proof of corollary [ cor : m140825 ] ,if , in case ( iii ) , we can actually have the following dichotomy : let be the solution to the equation .then the minimizer ] if . for , ( [ eqn : fmax ] ) indicates that is not needed ; see also remark [ rem : a031046 ]. using the argument for theorem [ thmm : f08122 ] , we can similarly establish a spectral norm convergence rate . considered the special setting with i.i.d .our theorem [ thmm : spectral ] is a significant improvement by relaxing the independence assumption , by obtaining a sharper rate and by presenting a moment bound . as in theorem[ thmm : f08122 ] , we also have the phase transition at .note that only provides a probabilistic bound .[ thmm : spectral ] let the moment and the dependence conditions in theorem [ thmm : f08122 ] be satisfied .let and , for , and , respectively .define and .then there exists a constant , independent of , and , such that \\[-8pt ] \nonumber & & { } + p \min \biggl ( { 1 \over\sqrt{n } } , { u^{1-q/2 } \over n^{q/4 } } , \bigl(h(u ) + g(cu)\bigr)^{1/2 } \biggr),\end{aligned}\ ] ] where and are given in ( [ eqn : d_u ] ) and ( [ eqn : d_u1 ] ) , respectively .we shall only deal with the weaker dependent case with .the other cases similarly follow .recall the proof of theorem [ thmm : f08122 ] for , and .let matrices .similar to ( [ eqn : bias_part_general ] ) , let . then let and , where is a large constant . since , by ( [ eq : july14840 ] ) , \,d z \\ & \lesssim & m^2_\ast(u/2),\nonumber\end{aligned}\ ] ] where .similar to ( [ eq : july132 ] ) , since on , using the idea of ( [ eq : july647 ] ) , we have \\[-8pt ] \nonumber & \le&2 \sum_{j , k } \xi_{jk}^2 { \mathbb{i}}\bigl(|\xi_{jk}| > u/2\bigr ) + 2 b_\ast^2(u/2).\end{aligned}\ ] ] by ( [ eqn : delta_bound])([eq : f24112 ] ) and ( [ eqn : f06524])([eqn : spectral_a3 ] ) , we have ( [ eq : f06626 ] ) since .the bounds in theorems [ thmm : f08122 ] and [ thmm : spectral ] depend on the smallness measures , the moment order , the dependence parameter , the dimension and the sample size .the problem of selecting optimal thresholds is highly nontrivial .our numeric experiments show that the cross - validation based method has a reasonably good performance . 
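A minimal sketch of this thresholding-plus-cross-validation recipe is given below, under our own assumptions: the random two-fold splits, the Frobenius-norm risk, the grid of thresholds and the VAR(1)-type example data are illustrative choices rather than the exact scheme used in the numerical experiments referred to above; for dependent data a block-wise split in time would ordinarily be preferable.

```python
import numpy as np

def sample_cov(X):
    """Sample covariance of an (n, p) data matrix with mean-zero rows."""
    return X.T @ X / X.shape[0]

def hard_threshold(S, u):
    """Entrywise hard thresholding at level u, keeping the diagonal."""
    T = np.where(np.abs(S) >= u, S, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T

def cv_threshold(X, thresholds, n_splits=20, seed=None):
    """Choose a threshold by repeated two-fold splits and Frobenius risk."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    risk = np.zeros(len(thresholds))
    for _ in range(n_splits):
        idx = rng.permutation(n)
        S_a = sample_cov(X[idx[: n // 2]])
        S_b = sample_cov(X[idx[n // 2 :]])
        for j, u in enumerate(thresholds):
            risk[j] += np.linalg.norm(hard_threshold(S_a, u) - S_b, "fro") ** 2
    return thresholds[int(np.argmin(risk))]

# Illustration on a VAR(1)-type sample (all parameters are arbitrary).
rng = np.random.default_rng(1)
n, p, rho = 200, 30, 0.3
X = np.zeros((n, p))
for t in range(1, n):
    X[t] = rho * X[t - 1] + rng.normal(size=p)
u_hat = cv_threshold(X, np.linspace(0.0, 1.0, 21), seed=2)
Sigma_hat = hard_threshold(sample_cov(X), u_hat)
print("chosen threshold:", float(u_hat))
```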
however , we are unable to provide a theoretical justification of the latter method , and pose it as an open problem .[ exmp : nonlin_stat ] we consider the nonlinear process defined by the iterated random function where s are i.i.d .innovations , and is an -valued and jointly measurable function , which satisfies the following two conditions : ( i ) there exists some such that and ( ii ) then , it can be shown that defined in ( [ eq : iteratedrandomfun_stat ] ) has a stationary ergodic distribution and , in addition , has the _ geometric moment contraction _ ( gmc ) property ; see for details .therefore , we have and theorems [ thmm : f08122 ] and [ thmm : spectral ] with and can be applied .[ exmp : linstat ] an important special class of ( [ eq : casual ] ) is the vector linear process where , are matrices , and are i.i.d .mean zero random vectors with finite covariance matrix .then exists almost surely with covariance matrix if the latter converges .assume that the innovation vector , where are i.i.d .with mean zero , variance and , , and the coefficient matrices satisfy , . by rosenthal s inequality , the functional dependence measure , and hence by ( [ eq : srdtail ] ) . by theorem[ thmm : f08122 ] , the normalized frobenius norm of the thresholded estimator has a convergence rate established in ( [ eq : a807148 ] ) with , and .note that our moment condition relaxes the commonly assumed sub - gaussian condition in previous literature [ ] .for the vector ar(1 ) process , where is a real matrix with spectral norm , it is of form ( [ eq : linearprocess ] ) with , and the functional dependence measure .the rates of convergence established in ( [ eq : a807148 ] ) hold with and . the thresholded estimate may not be positive definite .here we shall propose a simple modification that is positive definite and has the same rate of convergence .let be its eigen - decomposition , where is an orthonormal matrix and is a diagonal matrix . for , consider where and is the rate of convergence in ( [ eq : a807148 ] ) .let be the diagonal elements of .then we have by theorem [ thmm : f08122 ] that , and consequently if , since , we have . then .note that the eigenvalues of are bounded below by , and thus it is positive definite . in practicewe suggest using .the same positive - definization procedure also applies to the spectral norm and its rate can be similarly preserved . in this sectionwe shall compute the smallness measure for certain class of covariance matrices , so that theorem [ thmm : f08122 ] is applicable .we consider some widely used spatial processes .let the vectors , , be observed at sites .assume that the covariance function between and satisfies where is a distance between sites and , and is a real - valued function with and .for example , we can choose as the euclidean distance between sites and .assume that , as , where the index characterizes the spatial dependence , or where is the characteristic length - scale , and condition ( [ eq : a811028 ] ) outlines the geometry of the sites , and can be roughly interpreted as the correlation dimension .it holds with if are points in a disk or a square , and if . the rational quadratic covariance function [ ] is an example of ( [ eq : a811032 ] ) , and it is widely used in spatial statistics , where is the smoothness parameter and is the length scale parameter .we now provide a bound for . 
by ( [ eq : a811032 ] ) and ( [ eq : a811028 ] ) , as , the covariance tail empirical process function for some constant independent of , and .if , then in the strong spatial dependence case with , we have to this end , it suffices to prove this relation with .. then class ( [ eq : a821010 ] ) allows the -exponential covariance function with , and some matrn covariance functions [ ] that are widely used in spatial statistics . with ( [ eq : a811028 ] ) , following the argument in ( [ eqn : a02849 ] ) , we can similarly have corollary [ cor : f_stationary ] of theorem [ thmm : f08122 ] concerns covariance matrices satisfying ( [ eqn : a02736 ] ) . slightly more generally , we introduce a decay condition on the tail empirical process of covariances .note that ( [ eqn : a02736 ] ) is a special case of ( [ eqn : sparsity_def ] ) with and . for ( [ eqn : rationalquad_covfuns ] ) with possibly large length scale parameter , we can let .similarly , corollary [ cor : fexp ] can be applied to satisfying ( [ eq : a821010 ] ) and the class defined in ( [ eqn : expsparsity ] ) , with and .[ def : csc ] for , let , , be the collection of covariance matrices such that and , for all , and , , be the collection of with and [ cor : f_stationary ] assume ( [ eqn : sparsity_def ] ) .let conditions in theorem [ thmm : f08122 ] be satisfied and .let . if , then for , .if and , let , then . if and then the equation has solution ^{1/2} ] square with three different scale length parameters : and .,title="fig : " ] + [ cols="^,^ " , ] @ for the uniform random sites model on the ^ 2 ] over , can be either or .we observe that when the spatial dependence decreases , that is , the covariance matrix has more small entries [ e.g. , figure [ fig : rational_quad_cov_mat](d ) ] , a larger threshold is needed to yield the optimal rate of convergence . when the temporal dependence increases ( i.e. , ) , a larger threshold is needed and the rate of convergence is slower than the one in the weaker dependence case ( i.e. , ) .@ ) and stronger ( ) temporal dependence cases.,title="fig : " ] + + ) and stronger ( ) temporal dependence cases.,title="fig : " ] + we now compare ( [ eqn : sparsity_def ] ) with the commonly used sparsity condition defined in terms of the _ strong -ball _ [ ] when , ( [ eqn : strongell_qball ] ) becomes , a sparsity condition in the rigid sense .we observe that condition ( [ eqn : sparsity_def ] ) defines a broader class of sparse covariance matrices in the sense that , which follows from hence corollary [ cor : f_stationary ] generalizes the consistency result of in to the non - gaussian time series .note that our convergence is in norm , while the error bounds in previous work [ see , e.g. , ( ) ] are of probabilistic nature ; namely in the form is bounded with large probability under the strong -ball conditions .the reverse inclusion may be false since the class specifies the uniform size of sums in matrix columns , whereas ( [ eqn : sparsity_def ] ) can be viewed as an overall smallness measure over all entries of the matrix . as an example , consider the covariance matrix where so that is positive - definite .then for any threshold level , and for any ] .details of the derivation of ( [ eqn : spectral - rate - precision - mat ] ) is given in the supplementary material [ ] .if satisfies [ ] , we have with .simple calculations show that , if and , then for ] and . as per convention, we assume that the bandwidth satisfies the natural condition : and . 
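For concreteness, one way the kernel-smoothing step could be carried out is sketched below before the estimator is stated formally; the Epanechnikov kernel, the bandwidth value and the toy data-generating process are illustrative assumptions only, and the thresholding defined next is then applied entrywise to the smoothed matrix.

```python
import numpy as np

def epanechnikov(x):
    """Epanechnikov kernel supported on [-1, 1]."""
    return np.where(np.abs(x) <= 1.0, 0.75 * (1.0 - x**2), 0.0)

def smoothed_cov(X, t, bandwidth):
    """Kernel-smoothed sample covariance at rescaled time t in (0, 1).

    X is an (n, p) matrix of mean-zero observations X_1, ..., X_n; the i-th
    observation receives the Nadaraya-Watson weight K(((i/n) - t) / b).
    """
    n = X.shape[0]
    w = epanechnikov((np.arange(1, n + 1) / n - t) / bandwidth)
    w = w / w.sum()
    return (X * w[:, None]).T @ X

# Toy locally stationary example: the marginal variance ramps up over time.
rng = np.random.default_rng(2)
n, p = 400, 10
X = rng.normal(size=(n, p)) * (1.0 + np.linspace(0.0, 1.0, n))[:, None]
Sigma_mid = smoothed_cov(X, t=0.5, bandwidth=0.15)
print(np.round(np.diag(Sigma_mid)[:5], 2))
```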
the thresholded covariance estimator for nonstationary processesis then defined as parallelizing theorem [ thmm : f08122 ] , we give a general result for the thresholded estimator for time - varying covariance matrices of the nonstationary , nonlinear high - dimensional time series . as in ( [ eq : functiondependencemeasure_stat ] ) and ( [ eq : srdtail ] ) , we similarly define the functional dependence measure where .we also assume that ( [ eq : srdtail ] ) holds . for presentational simplicity let .let , , theorem [ th : a08946 ] provides convergence rates for the thresholded covariance matrix function estimator .due to the nonstationarity , the bound is worse than the one in theorem [ thmm : f08122 ] since we only use data in the local window ] and . under the moment anddependence conditions of theorem [ thmm : f08122 ] , we have uniformly over ] .hence .it remains to deal with . with a careful check of the proof of theorem [ thmm : f08122 ] ,if we replace and therein by and , respectively , then we can have if the following nagaev inequality holds : the above inequality follows by applying the nonstationary nagaev inequality in section 4 in to the process , . note that the functional dependence measure of the latter process is bounded by ; see ( [ eq : julypdm ] ) and ( [ eq : a08141 ] ) .if in ( [ eq : kernelestimator_sigma ] ) we use the local linear weights [ ] , then it is easily seen based on the proof of theorem [ th : a08946 ] that ( [ eq : a808148 ] ) holds over the whole interval ] . the actual estimation procedure of based on the data is a variant of the graphical lasso estimator of , which minimizes the following objective function : where is the kernel smoothed sample covariance matrix given in ( [ eq : kernelestimator_sigma ] ) .the same minimization program is also used in . as in ( [ eq : a08206 ] ) and ( [ eqn : a81049 ] ) , let as in ( [ eqn : a805928 ] ) , choose . for the estimator ( [ eq : a08204 ] ) , we have the following theorem .we omit the proof since it is similar to the one in theorems [ thmm : inv ] and [ th : a08946 ] .[ thmm : a08209 ] assume } |\omega''_{jk}(t)| < \infty ] , where is independent of and .let be the solution to the equation .then .[ ex : a08152 ] let be a stationary -dimensional process with mean and identity covariance matrix .then the modulated process has covariance matrix . considered the special setting in which are i.i.d .standard gaussian vectors , and hence are independent .[ ex : a08153 ] consider the nonstationary linear process where continuous matrix functions. we can view ( [ eq : nonstationarylp ] ) as a time - varying version of ( [ eq : linearprocess ] ) , a framework also adopted in .as in example [ exmp : linstat ] , we assume a uniform version [ ex : a08154 ] we consider a nonstationary nonlinear example adapted from example [ exmp : nonlin_stat ] .let the process be defined by the iterated random function where is an -valued and jointly measurable function that may change over time .as in example [ exmp : nonlin_stat ] , we assume satisfy : ( i ) there exists some such that ; ( ii ) then have the gmc property with . therefore , theorem [ th : a08946 ] can be applied with and .we thank two anonymous referees , an associate editor and the editor for their helpful comments that have improved the paper .
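As a closing illustration of the localized graphical lasso described above, the kernel-smoothed covariance at a fixed rescaled time can simply be passed to an off-the-shelf graphical lasso solver. The sketch below assumes that scikit-learn's graphical_lasso routine is available; the kernel, the small ridge term and all tuning constants are our own illustrative choices and do not reproduce the procedure analysed in the theorems.

```python
import numpy as np
from sklearn.covariance import graphical_lasso  # assumes scikit-learn >= 0.20

def local_glasso(X, t, bandwidth, lam):
    """l1-penalized precision matrix at rescaled time t via the graphical lasso."""
    n, p = X.shape
    u = (np.arange(1, n + 1) / n - t) / bandwidth
    w = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)  # Epanechnikov weights
    w = w / w.sum()
    S = (X * w[:, None]).T @ X            # kernel-smoothed covariance
    S = S + 1e-6 * np.eye(p)              # tiny ridge to guard against singularity
    _, precision = graphical_lasso(S, alpha=lam)
    return precision

# Same kind of toy locally stationary data as in the previous sketch.
rng = np.random.default_rng(3)
n, p = 400, 10
X = rng.normal(size=(n, p)) * (1.0 + np.linspace(0.0, 1.0, n))[:, None]
Omega_mid = local_glasso(X, t=0.5, bandwidth=0.15, lam=0.1)
print("nonzero off-diagonal entries:", int((np.abs(Omega_mid) > 1e-8).sum()) - p)
```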
|
We consider estimation of covariance matrices and their inverses (a.k.a. precision matrices) for high-dimensional stationary and locally stationary time series. In the latter case the covariance matrices evolve smoothly in time, thus forming a covariance matrix function. Using the functional dependence measure of Wu [Proc. Natl. Acad. Sci. USA 102 (2005) 14150-14154 (electronic)], we obtain the rate of convergence for the thresholded estimate and illustrate how the dependence affects the rate of convergence. Asymptotic properties are also obtained for the precision matrix estimate, which is based on the graphical lasso principle. Our theory substantially generalizes earlier ones by allowing dependence, by allowing nonstationarity and by relaxing the associated moment conditions.
|
in 1983 islam , , showed that the trajectory of light in schwarzschild de sitter , henceforth sds , space is independent of the cosmological constant .as we shall see , this conclusion is , for the most part , correct but does not imply that physical measurements associated with trajectories of light do not depend on as well . making this concept clear will enable us to see a source of confusion in some of the recent literature on the topic .it seems that merely based on islam s work it was generally assumed that plays no role in gravitational lensing phenomena and has no place in the analysis ; and that the appearance of in some equations could be transformed away , in one way or another , and therefore is artificial , revealing the true independence on .this general belief turns out to be true only in situations where no measurements made by specific observers are considered . however , to study the phenomenon properly , it is important to consider measurements made by observers and the dependence of measurable quantities on the system parameters . in 2007rindler and ishak , , showed that if measurable intersection angles are considered , in a standard simple setup of gravitational deflection of light , then results of interest do depend on .rindler and ishak s conclusions immediately led to both enthusiasm and scepticism ; perhaps they were mistakenly seen to be in direct contradiction to the common belief that followed after islam s work .since their original paper , there was much activity surrounding this topic .some authors searched for other setups and methods of analysis in which results of interest depend on in support of rindler and ishak s conclusions , see for example , , , .others tried to find errors in rindler and ishak s work and explain the invalidity of their conclusions , and ultimately show that the traditional approach to the topic needs no modification , see for example , , , . all together , the papers that followed amount to a very interesting discussion of the subject , in which , unfortunately , there are no definitively agreed upon answers to many important questions . in what follows we attempt to make the theory abundantly clear and explain the exact role of in gravitational lensing phenomena .we discuss and clarify key issues and illuminate sources of disagreement in the recent literature . in turnwe hope to settle the ongoing debate on the influence of and present a clear description of light deflection phenomenon in sds space together with all the necessary tools for analyzing any setup .along the course of our investigation , we derive and introduce an invariant general formula , which allows the determination of a measurable intersection angle from fundamental parameters .this formula seems to be essential in the study of the present topic , but quite surprisingly is missing from the current literature .we also address the role of relativistic aberration of light in the analysis and demonstrate how our general formula encompasses this effect and allows for a simple way to account for it .in fact , the general formula can be used to derive an invariant aberration equation , applicable to any background geometry and orientation , and which reduces to the known aberration equation as a special case . 
the general angle formula and the general aberration equation we presentmay be considered as some of the most significant results of this paper ; their applicability may extend to multiple areas well beyond the current topic .our presentation is organized as follows . in section [ sec2 ]we discuss the influence of on the geometry and build an intuitive understanding of how this may lead to the appearance of in results of interest . in section [ sec3 ]we turn our attention to null geodesics and address the fundamental issue regarding the appearance of in the orbital equation of light and its solution .in section [ sec4 ] we continue the discussion of the above issue and present the necessary tools needed to pose and answer some important questions .in section [ sec5 ] we derive the general formula for measurable intersection angles and demonstrate its use in a few applications .finally , in section [ sec6 ] we discuss some of the recent papers on the topic and respond to their results and conclusions .consider the kottler metric , describing sds spacetime , where here we have an object of mass at the centre of the coordinates , in a universe with a cosmological constant .the range that we are interested in is ; for the case where both and are sufficiently small , this implies that , where and . in this range, is a time - like coordinate while , and are space - like coordinates . and are known as the schwarzschild and the de sitter horizons , respectively . sometimes also called the inner and outer horizons , respectively , in the context of sds space .it is easily verified that any orbit in this geometry can be confined to a single azimuthally symmetric spatial slice containing the origin .therefore , without loss of generality we can take , and consider motion in the sub spacetime , with the metric it is useful to take slices of constant in this spacetime and study orbits in the two dimensional subspace , parametrized by and .the metric on such a slice of space is it is immediately evident that this space is not flat , however since it is parametrized by polar coordinates , ( , ) , we can construct flat diagrams depicting the orbits taking place in the slice .we must however keep in mind the difference between our flat diagrams and the curved physical space in which measurements may take place . that is ,diagrams will be drawn on a flat ( , ) plane , real events will be taking place in the curved spacetime , a slice of which is represented by metric .making this distinction is particularly important when considering angles .intersection angles between curves appearing on the flat diagram may be considerably different when projected onto the curved physical space . for a visual demonstration of the issuelet us consider the portion of sds space in between the two horizons and isometrically embed the two dimensional slice with metric in flat three dimensional space . through an isometric embedding , which preserves distances, we can picture the structure of the underlying geometry in which the physical events take place . to this end , let us take a flat 3-space with cylindrical coordinates ( ,, ) and metric the complete description of the embedding is complicated for and , but when both parameters are small enough , specifically when the product is negligibly small , then a convenient approximation can be used to get the shape of the surface .we consider this , realistic , case of such small parameters and approximate the embedded surface for small and large in turn . 
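For reference, the metric function and the two-dimensional spatial slice under consideration take the standard Kottler form (the notation and sign conventions below are our own and may differ slightly from the equations of this section):
\[
ds^{2} = f(r)\,dt^{2} - \frac{dr^{2}}{f(r)} - r^{2}\left(d\theta^{2} + \sin^{2}\theta\, d\phi^{2}\right),
\qquad
f(r) = 1 - \frac{2m}{r} - \frac{\Lambda r^{2}}{3},
\]
so that on the equatorial slice $\theta = \pi/2$, $t = \mathrm{const}$,
\[
dl^{2} = \frac{dr^{2}}{f(r)} + r^{2}\, d\phi^{2},
\]
with $f(r) > 0$ between the two horizons, which for small $m^{2}\Lambda$ lie approximately at $r \approx 2m$ and $r \approx \sqrt{3/\Lambda}$.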
for small , , andthe metric of is approximately where embedding this 2-surface in the flat 3-space of metric yields the following relationships : the embedded surface is therefore the set in flat 3-space satisfying it is known as flamm s paraboloid . to ensure a one to one correspondence of points we only consider one half of the paraboloid , allowing only positive on the embedded surface .hence , at small the intrinsic geometry of the surface described by metric can be approximated by flamm s paraboloid , shown in figure [ fig1 ] . for large , , andthe metric of is approximately where embedding this 2-surface in the flat 3-space of metric yields the following relationships . embedded surface is therefore the set in flat 3-space satisfying it describes half of a spherical shell . to ensure a one to one correspondence of points we only consider positive values of on the embedded surfacehence , at large values of , the intrinsic geometry of the surface described by metric can be approximated by half of a spherical shell , shown in figure [ fig2 ] .the overall shape of the embedded surface of metric can be approximated by piecing together flamm s paraboloid for small and the half shell for large .this resulting surface , depicted in figure [ fig3 ] , is a qualitative representation of the shape of the slice ; its main use is in visualizing how the distances associated with the coordinates stretch due to intrinsic geometry .one may argue that to properly connect the surfaces of large and small , the lower half of the spherical shell at large must be used , that is , must be taken negative in the transformation when ensuring bijection , however , for our purposes this is not important .this visualization will be an aid in qualitatively understanding how the system parameters and affect measurable angles .let us consider a static observer in the sub spacetime with metric and constant coordinates ( ) .let the local frame of this observer be confined to this sub spacetime as well , that is and .since the direction of increasing proper time of the local frame of this observer coincides with the direction of increasing , the space portion of the observer s frame coincides with a local patch around ( ) in the ( ) surface with metric .that is , the space , and curvature , around the static observer can be described by metric , and can be visualized as a small patch on the isometrically embedded surface of figure [ fig3 ] .this fact makes the special case of a static observer particularly useful in building understanding .however , outcomes of measurements generally depend on the motion of observers , and therefore a more detailed treatment is required for a complete description and establishment of practical relationships . 
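The two approximate embeddings obtained above can be written out explicitly. With $z$ denoting the third cylindrical coordinate of the flat embedding space and its polar coordinates identified with $r$ and $\phi$, one finds (a sketch under the stated small-parameter assumption; our normalisations may differ from the embedding equations above)
\[
\text{small } r:\quad z^{2} = 8m\,(r - 2m)\quad\text{(Flamm's paraboloid)},
\qquad
\text{large } r:\quad z^{2} + r^{2} = \frac{3}{\Lambda}\quad\text{(half of a sphere of radius } \sqrt{3/\Lambda}\,\text{)}.
\]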
as we progress to derive some general results , for arbitrary observers , we shall treat the case of a static observer at every step where observable angles are of interest .it will serve as a simple example of the physical phenomena at hand and as a specific case for others to be compared with .consider now two arbitrary curves on the flat ( ) plane , intersecting at a point .these curves may describe actual trajectories taking place in the curved physical space with metric .the true ( spatial ) shape of these trajectories is fully determined only when projected from the flat plane onto the curved space , where the trajectories may physically exist .consider a static observer at who makes a measurement of the intersection angle between the two curves .the situation is illustrated in figure [ fig4 ] below . from the discussion in the previous paragraph, it is apparent that intersection angles measured by a static observer will be those on the embedded surface , which are sustained by the projected curves . clearly , the euclidean intersection angle , , appearing on the flat plane is different than the measurable intersection angle , , appearing on the embedded , curved , surface , see figure [ fig4 ] .this is precisely the point we aim to make , and a fact that must be kept in mind when plotting curves that represent physical trajectories , on the flat plane .is euclidean and belongs to the flat plane .the intersection angle takes place on the curved surface and measurable by a static observer.,width=321 ] the difference between and comes only from the fact that the physical space is curved due to and .it is already clear , qualitatively , that even if one of or were zero these angles would still not equal , and given the angle one would need both and to find , and vice versa .finally , we see that while the euclidean angle , , depends only on the shape of the curves , the measurable angle , , depends on both the shape of the curves and the shape of the space itself , in which the true trajectories exist and intersect .the dependence of the shape of the space on the system parameters is clear and comes directly from the given metric .the dependence of the shape of curves on the parameters is determined in accordance to the particular situation being analyzed .of course , the curves of central interest in the present work are the ones describing trajectories of light rays .let us restate the main conclusions of this section that are important to keep in mind in what follows .first , a clear distinction must be made between quantities that belong to the flat ( euclidean ) plane on which diagrams are drawn , and quantities that are physically measurable .and second , to properly account for the various ways of influence when considering the dependence of measurable intersection angles between curves on the system parameters in general , one must consider both effects of the parameters on the curves and on the geometry of the space , where curves may physically exist and measurements may take place .again we confine the motion to the plane without loss of generality , and use metric .the two trivial killing vectors ( and ) , along with the null condition , satisfied by trajectories of light , yield the following equations . here , is an affine parameter , parametrizing the trajectory , and and are constants of the motion .these equations can be combined to give the differential equation , satisfied by a curve in the ( ) plane , describing the path of a light ray . 
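A standard computation from the two conserved quantities and the null condition gives, in terms of $u \equiv 1/r$ (sketched here in our own notation, with $E$ and $L$ the constants of the motion; the labelling of the parameters below may differ from that used in the equations of this section),
\[
\left(\frac{du}{d\phi}\right)^{2} = \frac{E^{2}}{L^{2}} + \frac{\Lambda}{3} - u^{2} + 2m\,u^{3},
\]
so that $\Lambda$ enters only through an additive constant, and differentiating once with respect to $\phi$ yields
\[
\frac{d^{2}u}{d\phi^{2}} + u = 3m\,u^{2},
\]
which contains no $\Lambda$ at all; this is essentially Islam's observation. Writing $1/b^{2} = E^{2}/L^{2}$ and $1/R^{2} = 1/b^{2} + \Lambda/3$, the first integral takes a form in which $\Lambda$ appears explicitly when $b$ is used and is absorbed when $R$ (or the closest-approach radius) is used instead.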
, \label{tp2e1}\ ] ] where solutions for this equation divide into a few categories and exhibit a number of interesting features .although obtaining the exact solutions is not simple , they do exist in the literature , , and can be used at any time to describe a path exactly or to test the validity of an approximation to any degree of accuracy .fortunately , for realistic values of and , the combination is very small , and approximations in the low orders of prove to be very accurate .such approximations are most popular in the literature and textbooks on the subject , but it is comforting to know that exact solutions exist as well . the type of trajectories we shall mainly be interested in is the one for which there is an axis of symmetry along with other important features that we discuss in what follows .such symmetric trajectories have a point of closest approach , with a minimum value of , and extend to infinity ( in the analytical sense , on the ( ) plane ) on both sides of the axis of symmetry .it can be shown that the value of for these trajectories does not go below . in regions where the value of is much larger than , these trajectories exhibit asymptotic behaviour andcan be described by straight lines , referred to as the asymptotes of the trajectory .the features listed here are well known and easy to establish analytically. we shall not cover all the mathematical details here but rather give an account of the key physical features and parameters that are important for what follows .concentrating on the symmetric trajectories with a point of closest approach , let the coordinates of this point be ( ) . at this point ,the derivative is zero , and equation gives let us also define a third parameter , , as follows . this allows us to write equation in three ways using the three different parameters , , and .so in addition to we also have , for convenience , , \label{tp2e3}\ ] ] and .\label{tp2e4}\ ] ] notice that only when the parameter is used in the governing differential equation does make an appearance .all three parameters will be discussed in considerable detail in the next section . without any mathematical labour, we can assume that a required solution to equation ( as well as and ) exists and can be written as follows , using either of the three parameters . or here , is a constant of integration that is related to the orientation of the path . in each casethere are two independent constants of motion to find in order to determine a specific trajectory in the subspace of interest , which is a particular set of points ( ) through which the light ray passes . to this end, we must consider some boundary conditions . inwhat follows , four different sets of boundary conditions will be discussed in turn .we shall always assume that the value of is given in addition to any boundary conditions .let and be two points in space through which the light ray passes , with coordinates ( ) and ( ) , respectively .assume that the path of light connecting these points satisfies the conditions discussed above , i.e. point of closest approach , symmetry etc .using the boundary conditions in gives the following two equations with two unknowns . it can be shown that the values of and are in general not unique for such boundary conditions ; the possible values constitute a countable set , describing a family of curves connecting the two points . in this familyeach curve has a specific value of , and there exists a unique trajectory with the largest value of connecting the two points . 
in practice , it is this trajectory which is usually of primary concern , and the one that is often approximated to various orders in .either way , it can be shown in general that for given two points in space connected by the path of light , a given mass , and some additional restriction ( which may be set by a requirement on the time - like interval or space - like distance of travel ) , it is possible to find unique values of and , for which equation will describe the required unique trajectory .of course , an identical procedure can be followed by using equation and the parameters and instead , leading to identical conclusions .therefore , these considerations reveal that for such boundary conditions , the trajectory , which is a set of points on the ( ) plane satisfying the governing equation , depends only on the mass and the two points in space and through which it passes ; it is independent of in the simple sense that changing the value of will not alter the path satisfying these boundary conditions .in other words , with these boundary conditions the path of light in the subspace parametrized by and can be determined with or without knowledge of .let ( ) be the coordinates of the point of closest approach of the trajectory .since in this case is known from the start , we can use equation as our first integral , for which all the parameters are known . integrating this equationwill give a solution of the form , in which only the parameter remains to be determined from the boundary conditions .plugging and in gives an equation for , which for a given choice of branch establishes a unique value of c( ) .that is , for these boundary conditions there is a unique path .the values of and are determined uniquely ( up to a sign , which does not affect the shape of the path ) from the values of , and .again , we see that has no influence on the path in the same sense as for the previous set of boundary conditions .varying the value of does not alter the path .a little investigation reveals that the parameter , which depends only on and , determines the overall shape of the path , while the parameter only determines the orientation ( the direction of the axis of symmetry ) . due tothis fact and no loss of generality in setting orientation , it is often sufficient to use only the parameter to describe the path in many situations .these boundary conditions are particularly useful due to the uniqueness of the corresponding paths and the ability to find the parameter directly , without the need for integration or knowledge of , from equation .this set of boundary conditions can be considered as a generalization of the previous set .let ( ) be a known point on the path and be the given euclidean intersection angle on the flat diagram , sustained by the path of light under investigation and the radial path of light passing through ( ) .the situation is depicted in the following figure .plane , passing through a point with coordinates ( ) .the figure also shows the point of closest approach of this path , with , and a radial path of light , which also passes through the point ( ) . the intersection angle between the two paths on this flat diagramis .,width=321 ] in the flat , euclidean , space of this diagram , the angle is related to the differentials of the path at this point in the following way : this relationship can be easily formed by considering the local space around ( ) , and separating the radial and angular components of the tangent to the path . 
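The relation referred to here is the familiar polar-coordinate expression (written in our notation, with the sign ambiguity retained for the moment)
\[
\tan\theta = \left|\, r\,\frac{d\phi}{dr} \,\right| ,
\]
obtained by taking the ratio of the angular component $r\,d\phi$ to the radial component $dr$ of the tangent to the path on the flat $(r,\phi)$ plane.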
for simplicitylet us drop the absolute value , and from now on assume that when there is a sign ambiguity it is the positive that is taken .the above can then be immediately rearranged to obtain as a function of and .thus , boundary conditions which give a known point and a euclidean intersection angle with a radial line at that point are equivalent to giving a known point and a derivative at that point . with these boundary conditions equations and can be used to find the parameters and , either of which is sufficient to find the overall shape of the path , up to orientation . upon integration, the parameter can be found as well by plugging the point ( ) in the resulting relationship of the form or .thus , with these boundary conditions the path is determined uniquely ; the set of points ( ) through which the light ray passes depends only on , , and . as in both previous cases ,the trajectory does not depend on .we see that this set of boundary conditions is in a sense equivalent to set 2 , which may be considered as a special case .whether it is set 1 that is initially given ( with some condition to ensure uniqueness ) or set 3 , it may be convenient in each case to find the value of and classify the path according to this parameter , since its interpretation is intuitive and it is all that is needed for a complete description of the path , up to orientation . with this in mind, we shall always assume that a given trajectory of light , of the required type , may be uniquely described by a set of values , and , regardless of what euclidean , or coordinate related , boundary conditions that are in the plane we initially start with .let ( ) be a known point on the path and be the measurable intersection angle , at this point , between the trajectory of light under investigation and the radial trajectory of light , passing through ( ) , measured by an observer with 4-velocity .this set of boundary conditions is different from the previous three sets in a fundamental way .it includes a directly measurable quantity as a boundary condition .although the coordinates of the points , , ( ) , ( ) and the derivative ( or ) of sets 1 , 2 and 3 can , in principle , be determined through measurements , they are all euclidean quantities that belong to the flat diagram .they may or may not have a physical interpretation as well , but their mathematical origin in the analysis has nothing to do with actual measurements .in contrast , the current set of boundary conditions includes a measurable angle , which may have a complicated relationship with the euclidean quantities appearing on the plane that are needed to determine the path . considering the discussion of the previous section and referring to figure [ fig4 ], we see that for the special case of a static observer , there can be constructed an intuitive relationship between the measurable angle and the euclidean angle . 
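A short sketch of the conversion just outlined for the third set of boundary conditions, assuming the stripped relation is the usual flat-diagram one, tan ψ = r dφ/dr: the point and the Euclidean intersection angle fix the derivative and, through the first integral, the same Λ-independent shape constant as before.

```python
# Sketch (own notation): boundary data (r1, psi), with psi the Euclidean angle between
# the path and the radial line, give  dr/dphi = r1/tan(psi) , hence du/dphi = -u1/tan(psi),
# and the shape constant follows from the first integral evaluated at that point:
#     1/B**2 = (du/dphi)**2 + u1**2 - 2*m*u1**3 ,   u1 = 1/r1.
import numpy as np

m = 1.0
r1, psi = 40.0, np.deg2rad(35.0)      # illustrative boundary data (assumptions)
u1 = 1.0 / r1
du = -u1 / np.tan(psi)                # the sign only selects a branch, not the shape
B = (du**2 + u1**2 - 2.0 * m * u1**3) ** -0.5
print("B from (r1, psi) =", B)
```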
in this special case , which serves as a clear example , the parameters appearing in the relationship between and will include both and , since they both influence the geometry of the embedded space . in general , for an observer with arbitrary 4-velocity , , the relationship between the angles will contain , , , and the components of as parameters . thus , to determine the path in the ( ) plane with these boundary conditions one can find the euclidean intersection angle , , from , , , , and , and use it along with the point ( ) as in the case of set 3 . evidently , this set of boundary conditions is , in some sense , equivalent to set 3 ; both sets yield a unique path . with a given observer , for the current set , there is a one to one correspondence with the parameters of set 3 , which can be used to convert from one set of boundary conditions to another . it is clear that the value of must be known in order to convert of this set into of set 3 . in fact , without the knowledge of it is not possible to find the trajectory of light which satisfies the boundary conditions of the current set . hence , with these boundary conditions the path is determined by , , , , , and . we notice that does affect the path in this case , and , overall , it affects the path when certain ( directly ) measurable parameters are used as boundary conditions . it does not affect the path if all the boundary conditions are euclidean or coordinate related , which appear on the flat diagram . + with the above examples in mind we see that , in contrast to the influence of , the influence of on a path of light can come only from uncommon boundary conditions that are usually associated with measurements . since in most cases in the literature the boundary conditions are coordinate - like , or euclidean , then in light of the above examples it may be loosely concluded that has no direct effect on the resulting paths .
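To make the contrast with the fourth set of boundary conditions concrete, here is a small sketch of the conversion from a measured angle back to a Euclidean one for a static observer. The relation tan ψ_m = √f(r) tan ψ used below is our reading of the stripped equation derived later in the text; the only point is that f(r), and hence both m and Λ, must be supplied before the conversion can be made, even though the path found afterwards does not depend on Λ.

```python
# Sketch (own notation): turning boundary-condition set 4 into set 3 for a static
# observer.  The conversion needs f(r) = 1 - 2*m/r - Lambda*r**2/3, so Lambda enters
# here, even though the path determined afterwards does not depend on it.
import numpy as np

def euclidean_angle(psi_measured, r, m, Lam):
    f = 1.0 - 2.0 * m / r - Lam * r**2 / 3.0
    return np.arctan(np.tan(psi_measured) / np.sqrt(f))

psi_m, r, m = np.deg2rad(35.0), 40.0, 1.0          # illustrative measurement (assumptions)
for Lam in (0.0, 1e-5, 1e-4):
    print(Lam, np.rad2deg(euclidean_angle(psi_m, r, m, Lam)))
```

None of this changes the loose conclusion just stated about the paths themselves; it only shows where Λ hides once measured quantities enter the boundary data.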
however , this common conclusion may be somewhat misleading if the assumptions on the boundary conditions are not stated explicitly .indeed , it is important to keep in mind that no general conclusions should be made regarding the overall influence of , which is sensitive to the particular situation being analyzed .as an additional example to set 4 , which brings in through an observable quantity , consider a set of boundary conditions that contains two points on the path , one of them being the position of the source emitting the light ray ; in the cosmological context this source could be a distant galaxy .such a set is similar to set 1 , it can be used in an identical way to establish the path of the ray , though it may have one important difference in regards to .given some astrophysical model , or tabulated data , which provides the position of the source , it could be the case that the position is a function of both time and , and therefore , the appearance of once again will come from the boundary conditions but in a different way than it was for set 4 .thus , we stress that the influence of on a path of light and associated quantities of interest depends closely on the particular situation being analyzed , and in saying that a path is independent of one implicitly means that the path is subject to coordinate - like , or euclidean , boundary conditions which do not depend on themselves .overall , it should be clear now in what way may influence a path of light , and how its influence is hidden in measurements , or rather , more generally , in boundary conditions .when analyzing a common setup , it may be straightforward to foresee whether will have an influence on results of interest or not .let us consider a set of euclidean boundary conditions , such as one of the first three sets discussed , and investigate the qualitative dependence of the resulting path of light in the ( ) plane on the system parameters and .as explained , it is convenient to convert any given set of euclidean boundary conditions to the set of , , if it is not initially expressed as such .further , without loss of generality , for illustration purposes we can orient the coordinates so that .the following figures depict the dependence of the path on the parameters and , for a set value of .plane , passing through a point with coordinates ( ) .the value of is successively increasing , starting from the top , and its influence is illustrated through the three diagrams .the value of is kept constant and it is assumed that the outer horizon is too far to be shown on the graphs.,width=321 ] plane , passing through a point with coordinates ( ) .while the value of is kept constant , the value of is successively increasing , starting from the left , and its lack of influence on the path is illustrated through the three diagrams .the outer horizon is also shown on the three diagrams as the dashed circle . 
although the shape of the path does not change with varying , the geometry of the underlying space as well as the location of the outer horizon both change.,width=321 ] these figures make it clear that , in the region between the horizons , for typical euclidean boundary conditions , only when varying the path of light changes .varying only changes the location of the outer horizon on the diagram .but although the path itself may be independent of , we shall make it abundantly clear that there is an influence of on measurements of intersection angles of light rays , and as one may expect this influence near the outer horizon may be quite significant .+ the fact that , while both and appear in the metric , but only has an effect on paths of light in space deserves further attention. it is illuminating to study the paths in de sitter space , for the case in equations , , .the three equations are then ,\ ] ] ,\ ] ] and .\ ] ] we immediately recognize that for the paths are straight lines with a point of closest approach at .notice that in this case and , as before , either of these two parameters can be found from euclidean boundary conditions without the need for , and conveniently describe the entire path up to orientation .the parameter , on the other hand , has no independent interpretation in this case ; it is determined through its relation to , and can only be found given knowledge of .thus , paths of light in de sitter space are straight lines , and are independent of for given euclidean boundary conditions . in other words ,the set of points that lay on the path of a light ray in the ( ) plane that connects two given points is independent of the value of .intuitively , in defining a bending angle for paths of light , the value of such an angle should be zero for a path which is a straight line .this is an intuitive and important requirement to keep in mind when considering bending angles in sds space .it is also interesting to further investigate the non influence of on paths of light from the following mathematical perspective .evidently , the way in which appears in the first order differential equation , , makes it entangled , in some sense , with the parameter , allowing for the complete absorption of by transforming to a new parameter , for example or . forthe sake of curiosity , let us consider a more general coefficient of in the metric , changing to in , for some .proceeding as before to obtain the first order equation of the path , we find . \label{tp2e8}\ ] ] again , restricting to symmetric trajectories with a point of closest approach , setting at gives which can be used to rewrite in terms of the coordinate distance of closest approach , : &.\end{aligned}\ ] ] the value of can be set by boundary conditions in a given setup , making the effect of on the path clear for a given value of .interestingly , only when does the effect of on the path vanish .then completely disappears from the equation , leaving and the only parameters .it is this specific value of that happens to occur in the sds ( and de sitter ) metric , making it the only special case in which has no affect on paths of light in space .thus , the power of 2 appearing in the coefficient of reveals much about its geometric characteristics and its apparent influence on paths of light .going back to equations , and , we wish to make a clear distinction between the three parameters , and , and gain clear mathematical and physical interpretations for each . 
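Before turning to the three parameters, a quick symbolic check of the de Sitter limit discussed above (the labels B and φ0 are ours): with m = 0 the orbit equation reduces to u'' + u = 0, whose solutions are straight lines on the flat (r, φ) plane, with no trace of Λ.

```python
# Sketch (own notation): for m = 0 the orbit equation is u'' + u = 0, solved by
# u = cos(phi - phi0)/B, i.e.  r*cos(phi - phi0) = B , a straight line at perpendicular
# distance B from the origin, whatever the value of Lambda.
import sympy as sp

phi, B, phi0 = sp.symbols('phi B phi0', positive=True)
u = sp.cos(phi - phi0) / B
print(sp.simplify(sp.diff(u, phi, 2) + u))   # prints 0: the straight line solves u'' + u = 0
```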
as discussed , the parameter is particularly useful ; it gives the shape of a unique path up to orientation .given , all the important features of a trajectory can be found without knowledge of . given any other complete set of euclidean boundary conditions , can be found and used to describe the path on its own . an important question is whether is measurable . in principle , a static observer in a spherically symmetric , static spacetime can find its radial coordinate through measurements .for example , the measurable circumference of a stationary ring centred around the origin is . by slowly moving around the circumference or setting an array of observers ,the corresponding length can be found and can be determined . similarly , by dividing the ring into sections , angular separations can be set .see ( chapter 9 ) for remarkably clear and illuminating discussions related to such measurements .thus , in principle , the coordinates of a given static point ( ) in the space slice can be found through measurements by observers in that space . in particular , the coordinates of any point through which a given , fixed , light ray passes can be found by means of measurements , including ( ) .the method in this example may not be practical but it is meant to make a clear illustration of the fact that it is possible , in principle , to determine the value of through measurements without knowledge of , or even . clearly , it is possible to convert to and vice versa , for values of , without the knowledge of , see equation .therefore , as far as the mathematical description of the path is concerned , the two parameters are equivalent for . since can be found from , which can be found from measurements , we conclude that can be found , indirectly , from measurements as well , without the need for .we shall see that happens to be the impact parameter , to be defined more precisely in what follows .finally , given the parameter , the shape of the path can be described up to orientation , without the need for .the remaining parameter to discuss is , which is unfortunately the least useful and most popular of the three .it is immediately evident that given a fixed path , for which and can be determined , the value of the parameter can only be found with the knowledge of from equations .therefore , for a given value of , one needs the value of to determine the shape of the path , up to orientation . of course ,in a situation where is given a priori , one may conclude that influences the shape of the path .however , should not be treated as a boundary condition , but rather as a parameter to be determined from boundary conditions , in the same way as and of equations - .further , considering the relationship between and in equations leads to the following question . which of the two parameters is independent of , if any , and which is dependent ?at this point , the answer to this question is somewhat straightforward . 
for a path with typical euclidean boundary conditionsthe value of can be determined independently of .therefore can be viewed as a parameter of the trajectory that is independent of .in fact , can be used as a boundary condition since it is in one to one correspondence with , for a given and .this leaves the parameter as the parameter that depends on the values of and in the relationship given by equations .thus , for a given trajectory , should never be treated as a parameter that is independent of , especially when studying the effects of .technically , we could even throw avogadro s number , say , into the sum containing and , that is : , and the situation would not change , since the boundary conditions will determine the value of the whole sum in the brackets .it is the value of ( represented by this sum ) that sets the shape of the path , while the value of shifts to compensate for , or whatever else you throw at it , like avogadro s number or any other imaginable constant . in other words, the boundary conditions will set the value in the brackets above , which is a constant of the path that does not depend on , shifting the value of or adding anything new into the brackets will result in a shift of the value of so that the total value of the brackets remains the same .although the physical interpretation of the parameter is not yet clear , these considerations clarify the mathematical role of in a typical situation .an important question now is whether it is theoretically possible to measure directly or , rather , find it from measurable quantities without knowledge of .if possible , this could lead to a way of finding experimentally ( by determining and independently ) , and allow for situations in which the parameter can be known a priori , which would force us to reconsider it as a possible boundary condition .let us investigate the above question in detail . at a given point in the ( )plane through which a ray of light passes , the possible measurements that can be made by an observer on the ray are the energy of the photons and the angle the ray makes with a given reference direction . of course ,for light consisting of a bundle of rays there may be more possible measurements to make , for example the size of the visible solid angle associated with the bundle .such measurements we study in detail in , but these are of no major consequence in the current discussion ; more on this in the next section . in realistic situations ,the deflecting mass is a luminous object , making radial light rays a good reference .as previously discussed , the coordinate parameters and can be found , in principle , through measurements independent of .if we consider an extended frame around the observer , large enough to contain a sufficient amount of points through which the light passes to make accurate measurements , and if the coordinates ( ) of each point are found as well , then the change in can be compared to the change in of this ray , making the derivative an indirectly measurable quantity . also , if the proper time in the observer s frame is given by , the changes in and can be compared to the change in time , making the quantities and indirectly measurable as well . with this in mindwe proceed . for simplicitylet us first consider the extended frame of a static observer ( or , rather , multiple neighbouring static observers ) . 
to be able to determine the value of through measurements , for a given light ray in the ( ) plane , one must find a relationship between and directly measurable quantities . by definition , , where and , for an affine parameter .let be the measurable energy of the photons , and be the measurable angle between the light ray under consideration and a radial light ray passing through this point .let be the euclidean intersection angle , corresponding to , see figures [ fig4 ] and [ fig5 ] for an illustration of the situation .let be the 4-velocity of the observer and be the 4-momentum of the ray of light under investigation . with the proper time , and an appropriate choice of , and can be expressed as the subscripts and in the coordinates above are introduced for clarity , and will be dropped when there is no room for ambiguity ; clearly we are free to set . with the metric tensor , the measurable energy , ,can then be expressed in terms of the inner product for the case of a static observer then , where , it is trivial to find from the required condition .thus , we have here , is a measurable quantity , by definition ; the constant of motion can be determined , from the measurement of , only if both and are known .further , hence , the constant of motion can be expressed entirely in terms of measurable quantities ( in this case , measurable by a static observer ) , and can be determined without knowledge of or . with the above relationships the parameter can be expressed as follows : and again , we see that this equation can not be used to determine from measurable quantities without a prior knowledge of the value of , and in this case as well .since the derivative , as well as , can be found at the point of intersection , as discussed , it is possible to determine through and since for the stationary observer , as for any other , the ray moves at the speed of light , set to unity in our coordinates , we have using in the above gives which can be used in to re - express in terms of , then thus , even if the angle can be determined through measurements in an extended frame , one still needs the values of and to calculate .consider now the measurable angle at the point of intersection , measured by a static observer , which is obviously different from the euclidean angle , as discussed .the relationship between these angles , derived in the next section , turns out to be in contrast to , the angle can be determined through a direct measurement at a single point by a single observer . to determine an extended frame is needed , which for theoretical reasons is important to consider but may not be practical .equation can be used to replace by the measurable angle in the last expression of , , giving the above relationship is of simple form and allows finding from the measurable intersection angle .in fact , this equation can be used to recover the relationship between and , , by setting at , and may be of use in certain applications .however , once again we see that without the knowledge of ( and ) the value of can not be established . 
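Reading the last stripped relation as sin ψ_m = b √f(r)/r for a static observer (our labels, with b the Λ-dependent constant of motion), here is a small sketch of why b cannot be extracted from the measurement without supplying both m and Λ.

```python
# Sketch (own notation): recovering the Lambda-dependent constant of motion b from a
# single measured intersection angle psi_m at radius r via  b = r*sin(psi_m)/sqrt(f(r)).
# Different assumed values of Lambda give different b from the same measurement.
import numpy as np

def b_from_measurement(psi_m, r, m, Lam):
    f = 1.0 - 2.0 * m / r - Lam * r**2 / 3.0
    return r * np.sin(psi_m) / np.sqrt(f)

psi_m, r, m = np.deg2rad(30.0), 40.0, 1.0          # illustrative measurement (assumptions)
for Lam in (0.0, 1e-5, 1e-4):
    print(Lam, b_from_measurement(psi_m, r, m, Lam))
```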
in summary , we found that out of the three , related , constants of motion , and , it is only the value of that can be established without prior knowledge of from the possible measurements discussed here . in particular , the value of can not be found without prior knowledge of from such measurements . these conclusions remain true when considering measurements done by any observer . the special case of a static observer was considered here only for a simple illustration of the situation . thus , the answer to the previous question concerning the determination of is in the negative . the value of can not be established without knowledge of , can not be used as a realistic boundary condition , and finally , due to its dependence on , its use can be misleading when investigating the influence of on other quantities . we notice that , since it is theoretically possible to measure and determine from measurements , equation can be used to express in terms of measurable quantities . hence , this suggests one theoretically possible , although maybe not practical , method to find experimentally . this method of finding is somewhat equivalent to determining the parameter distance and the measurable distance between two points , and using a relationship between the two quantities , similar to equation , to establish the value of . these effects are a result of the curvature induced by , and can be viewed as the effect of on the embedded surface of figure [ fig3 ] . affects the relationships between measurable quantities and corresponding euclidean ( or coordinate ) quantities . a visual example of the influence of on such relationships can be seen in figure [ fig4 ] , which is a particularly good illustration when considering measurements made by static observers . we state again that the possible measurements discussed in this section are for theoretical purposes only ; whether or not they are practical is of no concern . the main goal of this section is to interpret and discuss the parameters , and , and determine which of these can be found through measurements without knowledge of . we have established the mathematical roles of all three , and found that for a given path of light only depends on ; its value can not be determined without it . while the geometrical interpretation of is clear from its definition , the geometrical interpretation of requires a little more analysis , to be done shortly , which will reveal that is the impact parameter of the trajectory . as for the parameter , there is no clear geometrical interpretation in the general case of and . in the special case where , is the impact parameter , since . but even when and , loses its geometrical meaning and gains dependence on . thus , in schwarzschild space , the usefulness of comes only from the fact that . in sds space , the parameter loses its worth . when discussing some of the recent papers on the topic , we shall have clear definitions of the important quantities in mind . much of the disagreement in the literature seems to emerge from conclusions being misunderstood , owing to a lack of clarity and to ambiguity . in many cases , parameters that are defined and often used in analyzing trajectories of light in schwarzschild space are imported to the analysis in sds space without mentioning their exact definitions or discussing whether they remain appropriate to use . furthermore , even in cases where these imported parameters do remain appropriate to use in sds space , their interpretations may change considerably , which should be noted to avoid confusion .
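A compact numerical restatement of the roles of the three parameters, in our own labels: B for the Λ-independent shape constant, b for the Λ-dependent constant of motion, with the first integral containing them only through the sum 1/b² + Λ/3 (the standard Kottler combination). Euclidean boundary conditions pin down that sum, so changing Λ merely re-shuffles b while the path is untouched.

```python
# Sketch (own notation): the bracket fixed by Euclidean boundary conditions is
#     1/b**2 + Lambda/3 = 1/B**2 .
# For a fixed path (fixed B), b simply shifts as Lambda is varied so that the sum
# stays the same; this is the "Avogadro's number" remark above in miniature.
B = 21.08                                   # shape constant of some fixed path (assumption)
for Lam in (0.0, 1e-5, 1e-4):
    b = (1.0 / B**2 - Lam / 3.0) ** -0.5
    print(Lam, b, 1.0 / b**2 + Lam / 3.0)   # the last column is constant
```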
for the sake of clarity, we present a few definitions in what follows . although the manner in which influences measurements while not having an influence on paths of light should be clear by now from the previous sections , the definitions presented in this section are meant to clarify some of the terminology in the current literature on the topic . the parameters discussed may or may not be of much practical or theoretical use , however, they encompass some of the popular quantities used in the literature and can aid in making it simple and systematic to understand the results and conclusions of some recent papers . in the general context, the impact parameter is defined for a trajectory in a radially dependent potential field , whose first derivative vanishes at large values of the radial coordinate , as the perpendicular distance between an asymptote of the trajectory and the origin .in such a potential field , trajectories that go to infinity can be approximated by straight lines at large radial coordinate , , and for our purposes we also assume that these trajectories have a point of closest approach to the origin with a minimum value of .see the following figure . )plane under the influence of a radially dependent potential field .the solid curve represents the trajectory of interest , the dashed line represents one of its asymptotes .the impact parameter is the distance appearing on the diagram.,width=321 ] in the context of general relativity , specifically for trajectories of light in schwarzschild space , a second definition , equivalent to the first , is used in many books . in this context, the impact parameter is defined as the perpendicular distance between the path and the radial line , that is parallel to an asymptote of the path , at large values of .more exactly , it is the limit that this distance approaches as goes to infinity .the next figure will make this definition clear . )plane with a point of closest approach .the solid curve represents the path of interest , the dashed line represents one of its asymptotes , and the solid line is a radial line parallel to the asymptote .the impact parameter equals the limit of the distance , appearing on the diagram , as .,width=321 ] we see that this second definition suggests an experimental method to find the impact parameter for a given , fixed , path of light . for example , in schwarzschild space , which is asymptotically flat , radial lines can , in principle , be identified , and the required distance corresponding to the impact parameter of figure [ ip2 ] can , theoretically , be measured directly by static observers .thus , in addition to the fact that the impact parameter can be calculated from some boundary conditions , in schwarzschild space it can also be found from direct measurements as well . 
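A numerical sketch of the second definition, under the same assumptions as the earlier sketches: integrating the orbit outward and evaluating the perpendicular distance to the radial line parallel to the asymptote at large r reproduces the shape constant B, since the first integral gives |du/dφ| → 1/B as u → 0.

```python
# Sketch (own notation): the impact parameter as the limiting perpendicular distance
# between the path and the radial line parallel to its asymptote.
import numpy as np
from scipy.integrate import solve_ivp

m, r0 = 1.0, 20.0
u0 = 1.0 / r0
B = (u0**2 - 2.0 * m * u0**3) ** -0.5

rhs = lambda phi, y: [y[1], 3.0 * m * y[0]**2 - y[0]]
nearly_infinite = lambda phi, y: y[0] - 1e-6       # stop just short of u = 0
nearly_infinite.terminal = True

sol = solve_ivp(rhs, [0.0, np.pi], [u0, 0.0], events=nearly_infinite,
                rtol=1e-12, atol=1e-14)
phi_e, (u_e, up_e) = sol.t_events[0][0], sol.y_events[0][0]
phi_inf = phi_e - u_e / up_e                       # extrapolate u to zero (up_e < 0)
print("shape constant B            =", B)
print("limit of r*sin(phi_inf-phi) =", np.sin(phi_inf - phi_e) / u_e)
```

Since the orbit equation is the same in sds space, the computation itself carries over unchanged; what is lost there is only the possibility of performing the corresponding distance measurement, as discussed next.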
when extending the concept of the impact parameter to trajectories of light in sds space , which are mathematically the same as in schwarzschild space , both of the common definitions remain valid . however , in this case , the second definition no longer suggests a method to measure the impact parameter directly , as it does for schwarzschild space , since the space is no longer asymptotically flat . thus , the impact parameter of trajectories of light in sds space can still be calculated from some boundary conditions , which determine the trajectory , but can no longer be measured directly . the impact parameter can be found analytically as follows . first , let us refer to figure [ ip2 ] and orient the angular coordinate , , so that the radial line will correspond to . far from the origin , the perpendicular coordinate distance between a point on the path under investigation and the radial line is , where the coordinates and are of a point on the path ( with ) . therefore , the impact parameter is the limit of as approaches infinity . this limit can be easily found with the aid of equations and . of equation is the euclidean intersection angle at a point on the path under investigation sustained by the path and the radial line through this point . of equation is assumed to be a fixed parameter for this particular path . and so we find that the impact parameter of a given fixed trajectory is the constant of motion . this gives us the geometrical significance of , but again , since sds space is not asymptotically flat , the value of can not be measured directly , though it could easily be found analytically from boundary conditions . the facts to keep in mind when bringing up the concept of the impact parameter in the context of trajectories of light in sds space are the following : in the special case of , the space is asymptotically flat and we have , so not only does become the impact parameter , but also the impact parameter becomes directly measurable at large distances . however , these two features do not remain true for . in general , the impact parameter , , always maintains its mathematical role and geometrical meaning for any value of , but can not always be interpreted as a physical distance . the parameter , on the other hand , loses its mathematical and geometrical meanings when a non - zero is introduced . overall , the impact parameter is a euclidean quantity that belongs to diagrams on the flat ( ) plane ; it only gains a physical ( measurable ) significance in a special situation . + the following two figures will be referred to in the subsequent definitions . they depict a typical path of the kind we are interested in , with a few important features . [ figure : a path of light on the ( ) plane , representing a typical symmetric path of light with a point of closest approach at ( ) ; the features and parameters appearing on this figure are defined and discussed below . ] [ figure : the same path , with one direction rescaled for clarity ; the features and parameters appearing on this figure are defined and discussed below . ] in these diagrams , the path of light under investigation is the curve represented by .
for the chosen orientation ,the shape of is entirely determined by , or equivalently , which both appear on the diagrams .the straight ( dashed ) lines and are the asymptotes of the path , which approximate the path well at sufficiently large values of .the ( dotted ) circle represents a region outside of which the effects of are negligible on , both , paths of light and curvature of space .it is outside of this region that is considered to be sufficiently large , where the path is straight and euclidean quantities are not distorted by .of course , the position of will ultimately depend on the sensitivity of instruments and the desired accuracy .however , it is usually assumed that the intersection of with the axis ( in the diagrams ) occurs well beyond this circle .the cartesian coordinates ( ) are related to the polar coordinates ( ) in the usual way , and .this makes the vectors and well defined at every point on the plane . in the orientation of these diagrams, is symmetric about the axis , and the point on is symmetric about the origin as well .let us refer to this point as the point of symmetry , which in this case is the point of intersection of with the axis .at this point , the euclidean intersection angle appearing on the diagrams between and the axis is .this angle ( when very small ) is approximately half the magnitude of the angle between and , the asymptotes of the path , which is given by .the curves and represent radial rays of light , which are straight lines , with constant angular coordinate and , respectively .the purpose of is for the illustration of the impact parameter , , while the purpose of is to serve as a reference direction at a point on the path .although the de sitter horizon is assumed to be outside the range of these diagrams and has no affect on the illustrated path , the possible influence of on measurements through the curvature of space should not be neglected .let be the measurable intersection angle by a static observer at corresponding to the euclidean angle .the bending angle is originally defined for paths of light in schwarzschild space and is also referred to as the total bending angle , the deflection angle , and the total deflection angle by some authors . in certain cases definitions differ by a factor of 2 , and the word total " is used to make the distinction for clarity . extending this concept to paths of light in sds space can give rise to some ambiguity and confusion , so we shall do it carefully . since the curve ( and its associated euclidean quantities ) in the above figures does not depend on , as should be presently clear , such curves may be used in modelling paths of light in either schwarzschild or sds space . 
in the context of schwarzschild space , the bending angle is usually defined , in most textbooks , in one of the following two equivalent ways .+ + * definition 1 : * the bending angle of a symmetric path of light in schwarzschild space is the ( small ) angle between the two asymptotes of the path .+ + in reference to figures [ split1 ] and [ split2 ] , the bending angle is the euclidean angle , between and .this definition is purely mathematical in the sense that there is no reference to any measurements .the definition suggests that the bending angle can be found by determining the path from some boundary conditions and finding the bending angle through its asymptotic behaviour .+ + * definition 2 : * the bending angle of a symmetric path of light in schwarzschild space is double the ( small ) measurable intersection angle by a static observer between the path and a radial ray at the point of symmetry , far from the origin . + + according to this definition , referring to figures [ split1 ] and [ split2 ] , the bending angle is double the measurable intersection angle , which corresponds to the euclidean angle . the assumption made in the figure that the point of symmetry , , is outside the circle , where the affects of are negligible , is what s meant by being far from the origin in the definition . hence , in the asymptotically flat schwarzschild space , the measurable angle by a static observer at the point of symmetry is the same as the euclidean angle appearing on the flat diagram .it is also clear that , since the path is already exhibiting its asymptotic behaviour at .therefore we see that , in the context of schwarzschild space , the two definitions are equivalent .the second definition suggests that the bending angle is a quantity that can be directly measured .similar to the impact parameter , in schwarzschild space , the bending angle can be found from some boundary conditions that determine the path as well as measured directly at a distant point .however , in contrast to the case of the impact parameter , when extending the concept of bending angle to sds space the two common definitions of the parameter given here are no longer equivalent .since will affect the geometry at , the measurable intersection angle , , will be different than the euclidean angle , . in extending the concept of the bending angleto sds we shall build on both of the above definitions and define two kinds of angular quantities , purely mathematical and measurable , concerned with symmetric paths of light . first , by restricting to definition 1 of the bending angle in schwarzschild space ,let us explicitly state what will be referred to as the bending angle of a symmetric path of light in sds space .+ + * definition : * the bending angle of a symmetric path of light in sds space is the ( small ) angle between the two asymptotes of the path .+ + although measurements by observers are important to consider , the bending angle is a measure of how much the entire path is bent , and should be independent of observers . for this reason we extend the concept of the bending angle to sds space in accordance with definition 1 ( of schwarzschild space ) and reserve definition 2 for a different quantity that is measurable . in reference to figures [ split1 ] and [ split2 ] , according to the above definition ,the bending angle is . with this definition for the bending angle in sdswe see again a similarity with the case of the impact parameter . 
the bending angle can be found from some boundary conditions that determine the path , and therefore can be determined from measurable quantities , but can no longer be measured directly . in particular , the bending angle can be found by taking the limit as goes to infinity in the solution for the orientation in figure [ split1 ] , and since the path does not depend on , the bending angle does not depend on either . it is clear from the symmetry that the bending angle should only depend on and , and since these parameters only appear as the combination in the analysis , the bending angle will be a function of . it is easily found that for a small bending angle , , to first order in , we have also , to this order in , equation gives therefore , equation can also be used to replace in the above equation and express in terms of , and . but , given the discussion of the parameter in this section , we see that this relationship will be of little use and , in a way , misleading . finally , it is important to keep in mind that , in the case of sds space , the bending angle should be interpreted only as a euclidean quantity , which belongs to the flat ( ) plane . since paths of light are independent of , extending the bending angle to sds space in such a way does not affect its mathematical interpretation . now , however , only in the special case of does the bending angle gain a physical significance as well , by becoming equivalent to a measurable quantity . in light of definition 2 of the bending angle in schwarzschild space , we define a similar angular quantity for a path of light in sds space , which refers to an actual measurement . in reference to figures [ split1 ] and [ split2 ] and the paragraph following them , let the _ measurable deflection angle at the point of symmetry by a static observer _ be defined as the angle , which corresponds to the euclidean angle . for concreteness , rather than taking to be the measurable intersection angle between the path of light and the axis , which leaves room for ambiguity , we can define it to be the measurable intersection angle between the path of light and the radial light ray going through . notice that for this definition , of a measurable angular quantity , we only consider the one sided intersection angle ( in contrast to the double of definition 2 above ) , since it is the measurement that is significant here rather than the overall shape of the path . to further distinguish this measurable , one sided , angle from the euclidean bending angle , we refer to it as a measurable _ deflection _ angle . the way in which the measurable angle , , is related to its corresponding euclidean angle , , is illustrated in figure [ fig4 ] ; is the projection of onto the embedded surface discussed in section [ sec2 ] . the angle is physically measurable by using the radial ray at as a reference , which in the euclidean sense is parallel to the direction of the path at , and for this reason it is a measure of the deflection of the path as it goes from ( ) to . if the mass at the centre of coordinates is luminous , as is usually the case in practice , then radial reference rays are available at all points to all observers . since the observer and the point of measurement are set in the definition , the measurable angle can be considered as a function of only , in addition to and , of course . clearly , for a fixed path , this measurable deflection angle will depend on , in the simple sense that changing while keeping the boundary conditions will alter the measurement .
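Returning to the bending angle defined above, here is a small numerical check of the first-order statement (our notation; 4m/B is the familiar Einstein value and is assumed to be the first-order expression intended by the stripped formula). Λ appears nowhere in the computation, in line with the definition adopted here.

```python
# Numerical sketch (own notation): the bending angle as the angle between the two
# asymptotes, alpha = 2*phi_inf - pi for the symmetric orientation used here, compared
# with the first-order value 4*m/B.
import numpy as np
from scipy.integrate import solve_ivp

def bending_angle(m, r0):
    rhs = lambda phi, y: [y[1], 3.0 * m * y[0]**2 - y[0]]
    ev = lambda phi, y: y[0]            # u = 0, i.e. the asymptote is reached
    ev.terminal = True
    sol = solve_ivp(rhs, [0.0, np.pi], [1.0 / r0, 0.0], events=ev,
                    rtol=1e-12, atol=1e-14)
    return 2.0 * sol.t_events[0][0] - np.pi

m = 1.0
for r0 in (50.0, 200.0, 1000.0):
    B = (1.0 / r0**2 - 2.0 * m / r0**3) ** -0.5
    print(r0, bending_angle(m, r0), 4.0 * m / B)   # the two agree as m/B -> 0
```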
by means of equation , which will be derived in the next section, we can explicitly write the relationship between the measurable angle and the corresponding euclidean angle . since , for convenience can also be expressed in terms of and , or and .and of course , can be determined from and regardless of orientation .+ with the last two definitions of the bending angle and the measurable deflection angle at the point of symmetry , we have sufficiently extended the usual concept of the bending angle for paths of light in schwarzschild space to sds space .it is now important to mention that in doing so , it is intuitive to expect , or rather require , the following conditions. first , for any defined angular quantity , which is a measure of the deflection of the path , should reduce to the usual bending angle of schwarzschild space , so that it can be interpreted as a proper generalization . and second, for , in which case the path is a straight line on the ( ) plane , the defined angular quantity should equal zero .it is easily verified that the definitions we made above meet these two conditions . for new definitions simply reduce to the original definitions 1 and 2 . for we must deal with a limit and some assumptions on to show that the condition is met for ( or rather use the generalization of this angle , below , for a more intuitive approach ) .next , we generalize the last definition of the measurable deflection angle , , to an arbitrary point on the path and an arbitrary observer in the following two definitions .the following figure is a magnification of the area around the point on figure [ split2 ] .it represents a small patch on the ( ) surface containing . ,this is the area around the point where and intersect .the vectors and angles appearing on the figure are defined below.,width=321 ] at any point on figure [ split2 ] , including , the vectors and are well defined , and shown on the above diagram . is the euclidean angle on this flat diagram sustained by the vector and the tangent vector of at this point . is the euclidean angle between and . is the euclidean intersection angle between and , so that .let be a measurable angle by a static observer at , which corresponds to the euclidean angle , in the sense of the projection onto the embedded surface of section [ sec2 ] .let and be the measurable angles by a static observer at corresponding to the euclidean angles and , respectively .we generalize the previous definition of the measurable deflection angle as follows . in reference to figure [ split2 ] and the paragraph following it ,measurable deflection angle by a static observer at a point _ be defined as the angle , which corresponds to the euclidean angle . is equal to the projection of onto the embedded surface discussed in section [ sec2 ] , and therefore it depends on both and . due to symmetry , for given values and , can be found analytically independent of orientation , assuming that at this satisfies the solution .the reference direction used to determine is the direction parallel to the axis in the setup of figures [ split1 ] and [ split2 ] , which in the euclidean sense is parallel to the direction of the path at , and for this reason is a measure of the deflection of the path as it goes from to . for the standard transformation between the polar and cartesian coordinates in the plane , the direction of increasing is well defined .a vector in this direction in cartesian coordinates is which can be transformed to polar coordinates at any point on the plane through . 
for concreteness , in reference to figures [ split1 ] and [ split2 ] ( and [ fig10a ] ), we can define the bending angle to be the measurable intersection angle between the path of light under investigation and the path of light whose tangent is parallel to the vector at the point of measurement on the ( ) plane .this angle is well defined , but unlike the special case of , for which a radial light ray could serve as the reference direction , in this general case the available radial light ray , , is not going in the required direction .analytically , this angle can be found by referring to figures [ split1 ] , [ split2 ] and [ fig10a ] and using the measurable angles and corresponding to and , respectively .then , , where both and refer to angles measured in reference to the radial light ray , and therefore both will satisfy a relationship of the form , which allows for expressing in terms of and .clearly , equals the value of at the point on the path , given the orientation of figures [ split1 ] and [ split2 ] .the angle is the euclidean intersection angle between the path of light under investigation , , and the radial light ray at the point on the path , and therefore can be expressed in terms of , and . as it is for the measurable deflection angle , this measurable deflection angle , , also depends on both and .the angle can be physically measured if the required reference light ray exists . although not practical , but of theoretical significance, it is worth mentioning that a reference light ray for the required measurement can be produced in an experiment , even without the knowledge of .notice that this definition reduces to the bending angle of schwarzschild space ( if doubled ) when is taken to be zero , assuming that is in the asymptotically flat region , outside as in figure [ split1 ] .in addition , for the case of , the paths are straight lines , and the deflection angle equals zero at any point on a path , as expected .+ we generalize the previous definition of the measurable deflection angle even further as follows .given the details of the previous definition of the bending angle , let and represent the 4-vectors of the intersecting null geodesics at corresponding to the path of light under investigation , , and the path of light whose tangent is parallel to at , respectively . for analytical purposes , can be found , up to an overall factor , from the derivative of the path given by the governing differential equation , , at the point of intersection and the null condition . can be expressed in kottler coordinates , up to an overall factor , by converting to polar coordinates at the point of measurement and using the null condition .measurable deflection angle by a given observer at a point _ be defined as the measurable angle between and by an observer with 4-velocity at .let us designate this measurable deflection angle by .for the three 4-vectors , and , the angle is well defined .it may not yet be clear , though will be in the next section , how this angle can be found analytically directly from these vectors . 
in principle , with reference to a static observer , we can find and use the aberration equation to relate to , thereby expressing in terms of the parameters of the setup , including the relative speed between the observers .since the vector is used as a reference direction in the definition of , we see that , as , is a measure of the deflection of as it goes from ( ) to .clearly , for a static observer reduces to , and satisfies all the expected limits of the definition . as before, this angle can be physically measured if a reference ray , with the required 4-vector , exists , and its value depends on in addition to .+ we conclude as follows . in order to properly extend the concept of a bending angle for a trajectory of light to sds space, we restricted the original definition of the bending angle in schwarzschild space to the geometrical definition ( definition 1 ) , for which no measurements are considered , and further defined three additional measurable angular quantities .the use of the bending angle , , in sds space is clear from its definition and geometrical interpretation .it gives a quantitative measure of an important geometrical characteristic of the path in the ( ) plane .any path of the required type can be classified by its bending angle , which is in one to one correspondence to .although it is a euclidean angle in nature , its value gives a visual and intuitive quality of the path , which makes it a useful parameter as well . on the other hand , the measurable deflection angles , , and , give a type of observable measure of the deflection of the path as it goes from ( ) to the observer , the practical use of which is not obvious .although measurements are important to consider , these measurable quantities are observer dependent and are not informative in describing the behaviour of the path in the ( ) plane , where the concept of the bending angle has originated .nevertheless , these four definitions encompass many of the recent attempts to extend the bending angle to sds space . with these definitions in mindit is straightforward to understand and compare the conclusions of many recent papers on the topic .much of the disagreement in the recent literature seems to originate from lack of proper definitions , resulting in a mix - up of distinct quantities and improper comparison of results .+ in closing this section it is worth noting again the following important conclusions about the commonly used parameters .of the parameters , and , given typical boundary conditions , and do not depend on and can be determined from measurable quantities without its knowledge .the parameter does depend on , it can not be known a priori in an experiment and can not be used as a boundary condition .it can only be found if the value of is known . 
out of the angles , , and , only the bending angle does not depend on .the other three angles are progressive generalizations , they are measurable and all depend on , as should be expected given the previous discussions .we have considered creating a table that summarizes the above mentioned parameters and categorizes them based on their dependence on and way of measurement , but decided against it for the following reasons .the three parameters , and are discussed to exhaustion , and the angular quantities are mentioned here for the purpose of their exact definition that is accompanied by an adequate discussion .simply put , the parameters , and have important physical and geometrical interpretations that should be abundantly clear by now and kept in mind throughout the rest of our presentation , and when we later refer to the angular parameters , , and it is our intention that their exact definitions and our discussion of them will be read and understood .finally , it is interesting to note that in the case where is the _ radius _ of a static star , and is determined from other theoretical considerations , its value still does not depend on , .the main goal of this section is to find an expression for a measurable intersection angle for a given observer associated with two null geodesics in terms of the three 4-vectors representing the 4-velocity of the observer and the two tangent 4-vectors of the null geodesics evaluated at the point of intersection . of course, since the motivation for the derivation of such an expression sprang out of the investigation of light rays in sds space , being the central theme of this work , we shall attempt to keep our attention on it throughout this section .however , some of the results derived here are far reaching in their applicability , and may themselves be of much greater significance than their application to sds space . before considering the case of a general observer in sds space , as a warm up , we derive the expression of a measurable intersection angle by a static observer in terms of the euclidean intersection angle appearing on the flat ( ) plane .this derivation is particularly informative and serves as an intuitive way to illustrate the influence of on measurable angles .although we should always assume that the background spacetime is sds , it is noteworthy that the following derivation only assumes a spherically symmetric , static metric that is locally minkowski . inspherical coordinates ( ) , we may assume that is the areal radius , and the metric can be written as in equation .further , the coordinate is restricted to a region where in is positive , and without loss of generality we restrict all motion and measurements to the slice .the 4-velocity , , of a static observer in these coordinates is defined by the requirement that . 
with the condition ,we have let the space - like coordinates in the local minkowski spacetime of the observer be and .since the 4-velocity vector of the static observer is parallel to , the local ( ) plane corresponds to a small neighbourhood in the ( ) plane around the location of the observer .we can orient the coordinates and without loss of generality such that and are parallel to and , respectively , at the location of the observer .the metric of the local space around the observer can be written in terms of the coordinates and , as given by equation , or in terms of and , as the flat metric , the minkowski coordinates and serve as real distance measurements of a static observer at the given point .let and be the 4-vectors of two intersecting null geodesics at the point of the observer . for simplicity , we first assume that the path associated with is radial .let be the point of intersection on the ( ) plane , with coordinates ( ) .let the point be a neighbouring point to lying on the path associated with , with coordinates ( ) .let be a neighbouring point on the radial path with coordinates ( ) , where the is the same as for .plane , passing through a point with coordinates ( ) .the figure also shows the point of closest approach of this path , with , and a radial path of light , which also passes through the point ( ) .the boxed diagram is of a small neighbourhood around the point ( ) , it can be thought of as a magnification of this point .the points and are points in this neighbourhood lying on the paths corresponding to and , respectively .the space - like vectors and at on the diagram are the projections of and onto the ( ) space , respectively . the intersection angle between the two paths on this flat diagram , which is the angle between and , is .,width=321 ] assuming that the points and are in the immediate vicinity of the observer , let the distance measured form to be and the distance from to be . the measurable intersection angle by the static observer , , corresponding to on the figure above ,can then be expressed as since and are parallel with and , respectively , we immediately see that using in , the angle in the figure can be expressed as using in , dropping the subscript , we find this is the first equation we were seeking : it relates the measurable intersection angle to the euclidean angle . in sds space, will depend on both and , which is how has an influence on such measurements .the source of this influence can be viewed as the stretching of space due to , quantitatively entering the analysis through the first of equations .we can also express the measurable angle , , in terms of and .using in gives if can be found from some boundary conditions , then the last equation above is particularly useful .although equation was derived under the assumption that one of the paths is radial , it is useful due to its simple form .it illustrated the role of in a most simple and intuitive way , and it can be used as a starting point to establish a more general relationship .consider now the situation where neither nor is associated with the radial trajectory .again , let and be the projections of the vectors onto the slice ( ) and be the euclidean angle , appearing on the flat plane , between them .the following figure is of a neighbourhood around the point of intersection on the ( ) plane , much like the boxed part of the last figure , however this time both and are in arbitrary directions . 
, this figure is of a small neighbourhood around an intersection point on the ( ) plane .the vectors and are the tangent vectors of the intersecting paths on the plane , the dashed line represents the radial direction at this point .the angles and are euclidean angles on the flat plane between the radial direction and the vectors and , respectively.,width=321 ] what we are after is the measurable intersection angle , corresponding to on the above figure .let and be the measurable angles corresponding to and , respectively .clearly , since the angles and are sustained with the radial direction , they can be expressed in terms of and according to equation .this outlines a method of finding the expression for the measurable angle in an arbitrary orientation .the influence of in this case is clear and comes from the reference to equation .let us consider a different approach to the problem .we have already established that the local space around the static observer can be represented by metric in the coordinates ( ) , as well as the flat metric in local minkowski coordinates . the intersection angle measured between two trajectories of light with 4-velocities and is the angle sustained by the projections of and onto the local space of the observer . for a static observer ,these projections are just projections onto the local ( ) space , since it corresponds to the local ( ) space . with and being the projections of and , without restricting to any particular orientation, we have in general intuitively , since the angle belongs to the minkowski space of the observer , we may consider the vectors in to exist in this minkowski space and to be written in the and coordinates , with the inner products taking place in the ( ) plane .however , since inner products are coordinate independent , there is no need to use any coordinates other than the given ( ) to establish a relationship from for a particular situation .equation can be used for any given 4-vectors and , the projected vectors and can be found , for a static observer , simply by eliminating the component in each of the 4-vectors .for the special case where is associated with a radial trajectory , we find using , , and in from the trigonometric identity we find since , we have where is the corresponding euclidean intersection angle on the flat ( ) plane .thus , we have derived equation as a special case of equation .a similar reasoning to the one used in establishing equation will be employed in the derivation of the general relation , applicable to any observer , which is the main goal of this section . in the meanwhile ,we notice that with an established relationship for a static observer one can construct a relationship for any other observer by using the aberration equation to relate the measurable angles .this fact is important and may be of practical use , however , it may be inconvenient in some cases to refer to a static observer that is not part of the setup .moreover , it is of mathematical curiosity to establish a relationship between a measurable angle and the associated three 4-vectors from first principles , with no reference to a proxy observer .the following is a derivation of the general formula for the measurable intersection angle by any observer .the final result is the main goal of this section and , perhaps , the result of most importance in this paper . for the sake of generality, we make no assumptions except that the metric of the space where the event occurs is locally minkowski . 
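Before the general derivation, a quick numerical check of the static-observer projection prescription just described, in the equatorial slice of the Kottler metric with signature (+,−,−) and our own labels: projecting the two null tangents onto the observer's rest space and taking the angle there reproduces the flat-diagram rule tan ψ_m = √f · r dφ/dr when one of the rays is radial.

```python
# Sketch (own coordinates (t, r, phi), signature (+,-,-), G = c = 1): the measured
# intersection angle for a static observer obtained from the projections of the null
# tangents, checked against  tan(psi_m) = sqrt(f)*r*dphi/dr  for a radial reference ray.
import numpy as np

m, Lam, r = 1.0, 1e-5, 40.0
f = 1.0 - 2.0 * m / r - Lam * r**2 / 3.0
g = np.diag([f, -1.0 / f, -r**2])              # metric at the intersection point
dot = lambda a, b: a @ g @ b

u = np.array([1.0 / np.sqrt(f), 0.0, 0.0])     # static observer, g(u, u) = 1

def null_ray(E, L, sign=+1.0):
    """Tangent vector of a null geodesic with conserved E and L, at radius r."""
    return np.array([E / f, sign * np.sqrt(E**2 - L**2 * f / r**2), L / r**2])

k1 = null_ray(1.0, 25.0)                       # the ray under investigation (illustrative)
k2 = null_ray(1.0, 0.0)                        # an outgoing radial reference ray

w1 = k1 - dot(u, k1) * u                       # projections onto the observer's rest space
w2 = k2 - dot(u, k2) * u
psi_projected = np.arccos(-dot(w1, w2) / np.sqrt(dot(w1, w1) * dot(w2, w2)))
psi_flat_rule = np.arctan(np.sqrt(f) * r * k1[2] / k1[1])   # sqrt(f) * r * dphi/dr
print(psi_projected, psi_flat_rule)            # identical to machine precision
```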
for simplicity , in the derivation we consider the spacetime to be four dimensional , and we stick with the convention of positive signature . the generalization of the derivation to any higher dimension is trivial and the final relationship is true for all dimensions . let be the 4-velocity of the observer at the event of intersection . let and be the 4-vectors of the intersecting trajectories . at this point we do not make any assumptions on and . the trajectories can be time-like , null or space-like . an example of a space-like trajectory is a simultaneous chain of events in some extended rigid frame . consider a rigid line in the frame of which clocks at different locations are synchronized and simultaneity is well defined . consider now a flash , or rather a brief change in colour , taking place simultaneously at each point on the line . in a different extended frame , in relative motion to the frame of the line , the chain of events will not be simultaneous . rather , in the second frame the flash , or change in colour , travels along the points of the line faster than the speed of light , appearing as a traced path . this is a space-like trajectory , projected onto the second frame . clearly , time-like and null trajectories represent paths of massive objects and rays of light , respectively . let ( ) be a given set of coordinates in a patch of the underlying spacetime , and let ( ) be the local minkowski coordinates of the observer at the point of intersection , with being the proper time . at the point of intersection , during a short interval of proper time around the event , the trajectories pass through the frame of the observer , tracing paths in the local space , ( ) , of the observer . this is illustrated in figures [ fig13 ] and [ fig13a ] . [figure [ fig13 ] : the local space of the observer , to which is normal , is shown as well . this local space together with the 4-velocity vector constitutes the local minkowski spacetime of the observer at the event of intersection .] [figure [ fig13a ] : the measurable intersection angle , , is the angle sustained by the traced paths .] the measurable intersection angle by the observer is the angle between the traced paths in the observer's space , . this angle is determined by the tangent vectors of the projected paths in space . these tangent vectors are the projections of the 4-vectors and onto the space of the observer . let and be the projections of and onto the space of the observer , respectively . let and be the components of and , respectively , parallel to . we have see figure [ fig14 ] . [figure [ fig14 ] : the 4-velocity and the perpendicular local space ( the laboratory space ) , the 4-vectors and and their projections and onto the local space . also shown are the components of the vectors parallel to u , and . the measurable intersection angle , , is the angle sustained by the two projected vectors and on this diagram .] the measurable intersection angle , in the observer's frame , can therefore be expressed as usual . the task now is to find and given the three 4-vectors , and , and take the inner product in accordance with the metric . we shall be abundantly clear in the following derivation .
in the local minkowski coordinates of the observer , the 4-vectors and be expressed as and , clearly , in the coordinates , we have in general , for any set of independent coordinates , and a vector , such that , the quantities are determined by , where is the differential 1-form corresponding to the coordinate , for a given value of the index .therefore , and the 4-vector and its dual covector , , with respect to the metric , can be expressed in the two sets of coordinates as follows . and since and , we have and using and in and . therefore , and we can now express and as follows . and therefore , and here is the usual kronecker delta .let so that using the metric tensor of the spacetime , , to lower the upper index of gives let be the metric of the local space of the observer , that is the metric of the subspace perpendicular to at the event of measurement .the natural requirement of to be consistent with is to simply be a restriction of onto the subspace under consideration and the tangent vectors within it .that is , for the vectors and , in the local space of the observer , now , , since with this , we can go back to to find since and are arbitrary , we have shown that the considerations above should make it intuitively evident that the observer dependent tensor is a projection tensor , which projects any 4-vector onto the local space of the observer , and the related covariant tensor is the metric tensor of that space .of course , if we express and in the minkowski coordinates ( ) , then the inner products in equation can be taken with respect to the flat metric of the observer s space , however , in the original coordinates of the spacetime , , at the point of measurement , the local metric of the observer s space is given by the tensor , and the vectors and are given by equations .with the considerations above we go back to equation to find the required expression for the measurable intersection angle . * theorem 1 : * + the measurable intersection angle by an observer with 4-velocity , sustained by two paths with tangent 4-vectors and at the point of intersection is given by + equation can be applied to any observer and any trajectories , whether time - like , null or space - like .however , it can be considerably simplified for the case of null trajectories , which is , fortunately , the case of interest . with and being null , + * corollary 1 : * + for the case of null trajectories, the above theorem reduces to + the above equation is the general formula we were seeking . in four dimensional spacetime, it gives the desired expression of the measurable angle , , in terms of the 4-velocity of the observer , , and the null 4-vectors of the intersecting trajectories , and .the formula is coordinate independent , which may be important in many applications . before ending this sectionwe consider a few applications of equation .we shall demonstrate its use in the frameworks concerned with angle measurements in sds space and the aberration of light phenomenon of special and general relativity .as always , for the sake of demonstration and simplicity , whenever there is a choice of positive or negative sign , if not stated otherwise , we shall take the positive . 
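before specializing to sds space , it is worth checking the formula in the simplest possible setting . the following is a minimal numerical sketch , entirely our own construction and not code from the paper : in flat minkowski space , written here in the mostly-plus convention ( which may differ by a sign from the convention used above ; with the opposite signature the sign of the second term below flips ) , the null-ray form of the formula , cos(theta) = 1 + g(k1,k2) / [ g(u,k1) g(u,k2) ] , reproduces the expected euclidean angle for a static observer and the special-relativistic aberration result for a boosted observer . all names and parameter values are ours , chosen for illustration .

# hedged sanity check of the intersection-angle formula in flat spacetime
import math

eta = [-1.0, 1.0, 1.0, 1.0]            # diagonal flat metric, signature (-,+,+,+)

def dot(a, b):
    # inner product with the flat metric
    return sum(eta[i] * a[i] * b[i] for i in range(4))

def measured_angle(u, k1, k2):
    # cos(theta) = 1 + (k1.k2)/((u.k1)(u.k2)) for null k1, k2 and unit time-like u
    return math.acos(1.0 + dot(k1, k2) / (dot(u, k1) * dot(u, k2)))

theta = 0.7                             # euclidean angle between the rays in the static frame
v = 0.6                                 # speed of the boosted observer along +x
gamma = 1.0 / math.sqrt(1.0 - v * v)

k1 = [1.0, 1.0, 0.0, 0.0]                           # null ray along +x
k2 = [1.0, math.cos(theta), math.sin(theta), 0.0]   # null ray at angle theta
u_static = [1.0, 0.0, 0.0, 0.0]
u_boosted = [gamma, gamma * v, 0.0, 0.0]

print(measured_angle(u_static, k1, k2))             # recovers theta
print(measured_angle(u_boosted, k1, k2))            # agrees with the aberration value below
print(math.acos((math.cos(theta) - v) / (1.0 - v * math.cos(theta))))

the third printed value is the textbook special-relativistic aberration angle ; its agreement with the second illustrates , in flat space , the same logic used below to obtain the general aberration relation .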
in reference to the first part of this section and the sds metric of section [ sec2 ] , we proceed as follows .let be the 4-velocity of a static observer , given by equation in kottler coordinates .again , without loss of generality we assume the trajectories are confined to the subspace .for the sake of comparison and simplicity let us take the 4-vector to be exclusively in the radial direction .the inner products that we need are using , and in , we find and with the help of the null conditions and , we get the above equation is identical to equation , demonstrating the consistency of the general formula . as it was already shown following equation , with being the euclidean intersection angle in the flat ( ) plane , it is straight forward to derive equations , , or from equation .as discussed before , one can use the aberration equation to relate the measurable angle of the static observer to the measurable angle of an observer in relative motion to it .however , since equation can be used to express the measurable angles of any two observers in any metric , one may suspect that it may be used to derive the aberration equation itself .the well known aberration equation of special relativity is the following : here , and are the two different angles measured by the different observers , and is their relative speed ( sometimes taken to be negative in the equation ) . it is commonly derived in textbooks from geometric considerations or special relativistic velocity transformations .see for example .the equation is valid under the assumption that in the frame of one of the observers the other observer is travelling in the same direction as one of the light rays .we shall derive a general aberration equation , applicable to any two observers and any two light rays in any orientation .we then demonstrate how equation can be obtained , for the particular orientation assumed in the usual derivation of the aberration equation .as it was for the derivation of equation , we shall assume nothing of the background metric of the spacetime , except that it is locally minkowski and of positive signature .for simplicity and concreteness let us take the dimension to be four .let and be the 4-velocities of two observers at the event of intersection , with ( ) and ( ) being the minkowski coordinates of their respective local frames .let and be the null 4-vectors , at the event of intersection , of any two trajectories of light .the derivation of the general aberration equation is immediate . 
with being the angle measured by the observer with 4-velocity , from equation and dividing bygives + * theorem 2 : * + the general relationship between the measurable angles and , related to observers with 4-velocity vectors and , respectively , is given by + the above equation can be regarded as the general aberration equation .it relates the measurable angles in terms of the associated 4-vectors , it is coordinate independent and holds for any metric of general relativity .a specific aberration relationship can be obtained from for any particular orientation ; for the orientation assumed in the usual derivation of the aberration equation , , that is , where the direction of motion of one observer coincides with a direction of a ray , it can be done as follows .let the direction of motion of the observer with 4-velocity in the frame of the observer with 4-velocity coincide with the direction of the light rays with 4-vector .let be the relative speed between the two observers .solving for in gives let us express the vectors and the inner products of equation in the local minkowski coordinates ( ) of the observer with 4-velocity . for convenience ,we align the axis in the space of this observer with the direction of motion of the other observer and one of the rays . with being the proper time of the observer with 4-velocity , in these coordinates, we have and the components and can be expressed in terms of the relative velocity , , as follows . by definition and since , we have two equations in two unknowns .solving for and gives the above are well known relationships of special relativity .further , the null condition gives the somewhat obvious expression for the angle in these coordinates is obtained from equation as follows . the inner products appearing in equation are and using , and in gives thus , we have derived the known aberration equation for the usually assumed orientation from the general equation .this demonstrates the usefulness and consistency of both equations and .overall , the proposed general , coordinate independent , aberration equation , , may be applied to any setup and can considerably simplify the analysis in many situations .lastly , for completion , let us state the first order approximation in angles of equation . for small angles and ,to lowest order we find this simple relationship may be of use in some situations , and of course , the well known first order approximation of the usual aberration equation , , can be easily derived from it .going back to paths of light in sds space , specifically in the subspace , let us employ equation to express the measurable angle by a given observer in terms of relevant parameters .we shall consider the measurable angle in reference to a ray going in the radial , increasing , direction , since these rays are usually available in a realistic situation .although the trajectories are assumed to be confined to , the metric of the spacetime is still given by equation .let be the 4-velocity of the observer making the measurement at the point of intersection .let and be the 4-vectors of the intersecting trajectories of light , such that corresponds to the radial trajectory . 
in kottler coordinates , and assuming that the path corresponding to the 4-vector has a point of minimum value of , , the components of in these coordinates are subject to equation .for this path therefore , where is an affine parameter , parametrizing the trajectory .the null conditions give the following relationships and for convenience , we have assumed that all the components of the null vectors are positive .let the measurable angle by the observer be ( we shall add the subscript m when ambiguity may arise ) , using equation , we find in the above equation , comes in through and .the measurable angle is conveniently expressed in kottler coordinates and the relationship is applicable to any observer .of course , due to the condition not all of the four components ( ) can be independent , and at least one must depend on .in different setups , any of the three space - like components , and , may or may not depend on , and therefore , the particular influence of depends closely on the situation being analyzed . also , notice that the relationship between the parameters and can be written as follows . this makes it slightly tempting to use the parameter to simplify equation .however , considering what we know of this parameter , we see that it will partially mask the appearance of , and may lead to misinterpretations when investigating the influence of on the measurable angle . out of the three parameters , and ,the parameter is the most appropriate and intuitive to use in the analysis at hand , and especially convenient in kottler coordinates . to simplify the general expression given by ,let then and a little algebra yields these expressions are particularly easy and convenient to use when is given as a boundary condition .then , it is not even necessary to find a solution for the deflected trajectory , and the measurable intersection angle can found immediately . with any other boundary conditions , such as two points on the path (coordinate locations of source and observer , for example ) , we can use an exact solution to express in terms of these two points to any desired degree of accuracy .further , although it was previously assumed that both and are relatively small for conceptual reasons , we have not yet made any mathematical approximations related to these parameters .thus , the relationships above are exact ; quantities may be calculated to any degree of accuracy and approximations can be made when convenient or necessary .let us apply the above results to a few specific observers .if we set the observer to be static , we get and the last equation is identical to , as expected . using equation , equation can be expressed as the above is a relationship between the intersection angle , measured by an observer with 4-velocity , and the intersection angle , measured by a static observer .it may be of practical use in situations where reference to a static observer is advantageous .notice how the relationship reminds one of the general aberration equation previously derived , from which this result could be obtained directly .consider now an observer on a circular trajectory , with constant coordinate .that is , , which gives in certain situations the component can be considered independent , since it can be determined experimentally , in others can be expressed in terms of and .for example , for measurements in the solar system , , can be determined from the period of rotation experimentally , or expressed in terms of the mass of the sun and . 
in the case where the deflected ray just grazes the surface of the sun , can be given by other existing theories or sources , which sets a convenient boundary condition and can be used directly in the above relationships , eliminating the need for a solution . further , if we also confine the motion of the observer to the plane of the rays , setting , the condition gives therefore , and the effects of , , and the velocity component , , on the measured angle can be studied from the above relationship , which can be considerably simplified with some standard approximations . no assumptions were made regarding the sign of . a positive sign will mean that the observer and the deflected ray move in the same angular direction ; a negative sign means the opposite . if we choose to refer to a static observer at the event of measurement , then equation gives the above relationship allows an investigation into how varying the value of increases or decreases the measurable angle relative to . we see how the terms and are of some fundamental importance in this kind of analysis . most of the relationships of interest can be expressed using combinations of these terms . notice that in places where these terms are being subtracted from one another we have a perfect cancellation of . this fact is important to keep in mind when interpreting results or making approximations involving . some approximations may prevent this sensitive cancellation , causing terms of to appear where they do not belong , and ultimately lead to misinterpretations . this observation applies to all the specific observers discussed here . next , consider a radially moving observer . for this observer , and the condition gives therefore , and after some algebra , the above relationships can be used to study the effects of , , and the velocity component , , on the measurable intersection angle . inspecting these equations suggests that an increasing positive causes the measurable angle to increase , as one would expect in this setup . this observation may lead to a method of minimizing the relative experimental uncertainty coming from the measurement of the , usually small , angle . minimizing such uncertainties is important when trying to establish a value of experimentally . equations and are exact relationships . together they demonstrate the additional effect of a radial velocity on the measurable intersection angle and the way in which this aberration phenomenon may be taken advantage of in an experimental attempt at measuring . lastly , let us consider a radially moving observer , located sufficiently far from the mass where its effects are completely negligible ( outside the circle on figure [ split1 ] ) , and whose motion corresponds to the hubble flow in de sitter space , induced by . such conditions can model a realistic astrophysical setup ; for example , where the source and the deflecting mass are distant galaxies , and together with the observer the three objects are separating due to the effects of a positive cosmological constant . the main assumption here is , such that , and the metric at the event of measurement is approximately that of de sitter space . where from equation in the appendix , the 4-velocity of an observer moving according to hubble flow , also referred to as a comoving observer , far away from the mass , in kottler coordinates is notice how in this case the velocity component , , itself depends on , as is to be expected , since the motion of the observer is caused by .
using the above in equation produces above equation is exact , given , and can be considerably simplified by making approximations related to the relative magnitudes of , , and . of course ,due to the chosen orientation , the above relationship , as well as , can also be obtained by means of the usual aberration equation , , and the expression for , .the required relative speed in the aberration equation can be obtained through the same method leading to equations .notice that the effects of in this case come from both the geometry and the velocity of the observer . whether a positive diminishes or increases the measurable angle for such an observer can be studied from the above equation , for this particular orientation of rays . to address this question in a more general setup, equation can be employed to produce similar relationships to for any orientation of interest .also notice that in the cosmological context , where the deflecting mass may be a distant galaxy , the values of the coordinate and the parameter are determined indirectly , and may or may not depend on themselves as well . in the simplest case , can be at the edge of the deflecting galaxy , and can be found from other existing methods or tabulated data on the particular galaxy .in other cases , must be determined from other boundary conditions , which depending on the model and coordinates used , may themselves depend on directly or necessitate the appearance of in their relation to .furthermore , in the cosmological context , in a realistic case where all measurements can only be done by an observer at one point ( such as on earth in our galaxy ) , the determination of and from such measurable quantities and the dependence of these measurements on are issues that , on their own , deserve a detailed investigation . in order to avoid deviating too far off course ,this investigation , which makes extensive use of our formula , was reserved for a separate report , . fornow , however , we can learn much from the results derived in this section on the influence of and investigate the ways in which its value can be determined experimentally from some measurements of angles .the relationships obtained in this section can be used to study the influence of different parameters on measurable angles and reveal many interesting results .various experiments concerned with the determination of from angle measurements can be analyzed , and even suggested , by means of these relationships .+ finally , it is clear that the results derived in this section are indispensable for a general analysis , which involves finding measurable intersection angles of light rays in sds space .equation is a general , mathematical , result , while equation specifically applies to the slice of sds space and a particular orientation of light rays . of course, by means of equation , we can generalize the expressions to two arbitrary light rays in the plane of motion , without constricting one of the rays to be radial .even further , we can generalize to arbitrary light rays confined to two different planes .however , due to the popularity of the usual conditions that lead to equation , let us summarize by restating equations and , which constitute the complete set of tools needed to analyze paths of light and associated measurable angles in sds space . 
and the fact that we chose to use the parameter in the above expressions makes them particularly useful in applications involving symmetric trajectories with a point of closest approach , which is by far the most popular case in the literature on the topic .however , the equations above are not limited to such situations .when there is no point of closest approach , can be replaced by the impact parameter ( or some other parameter ) in both equations .although we have argued that in sds spacetime it may be more appropriate to choose the parameter over in expressions , from a mathematical perspective the parameter is more general and its use may sometimes be necessary . to be clear , a general analysis of the kind discussed above can be carried out from basic principles by means of the euler - lagrange equations and equation .these two tools , together with some boundary conditions , are all that is needed for a complete analysis and can be used for any setup and any coordinates . for the specific case of the slice of sds space in kottler coordinates , the differential equation governing a trajectory of light , given by euler - lagrange equations , reduces to and the expression for a measurable angle , given by equation , with reference to a radial light ray , becomes . until recently, equation was generally regarded as the main tool in investigating the influence of , and measurable angles were mainly found through euclidean methods justified in certain approximations .rindler and ishak s work promoted attention to other sources through which can influence mathematical results . the present work , however , is the first to introduce equation to this topic , which now contains the necessary and sufficient tools needed to analyze the influence of on measurements correctly for any observer .especially when investigating the influence of on measurable angles , it is clear that equation on its own is not enough .equation , in some sense , brings the concept of measurement into the analysis , and as we ve seen , this is where makes an entrance .let us re - emphasise that although does not explicitly enter the analysis through the governing differential equation , , it still influences the geometry through the metric which in turn affects measurements .this influence on the geometry is accounted for in the derivation of equation , through which enters the analysis explicitly .furthermore , in situations where is determined from boundary conditions that may depend on , can enter the analysis through in both equations and . 
additionally , as we ve already seen , may also enter the analysis through the components of , which are not all independent due to the normality requirement andmay depend on themselves through other ways .the most important lesson here is that the influence of can come from various sources , making it hard to propose general conclusions on some important issues in this topic .the influence is sensitive to a particular situation that is being analyzed , and this allows for a various possibilities of how appears in results of interest .the applications of the general formula for the intersection angles , , extend well beyond light rays in sds space .this formula is fundamental , in a geometrical sense , and coordinate independent .it may play a central role in many types of analysis , and can simplify things considerably .it also allows the generalization and provides another perspective of special relativistic aberration of light , and can be viewed as its general relativistic counterpart . as an additional application of the general formula, we utilized it to find expressions of cosmological distances analytically and to modifying the conventional analysis of weak gravitational lensing to account for .we felt that the latter deserved to be the centre of a dedicated paper on the contribution of to the lens equation , . in the present paper , however , we tried to concentrate on studying the influence of on the fundamental level , which is crucial to properly understand the recent debate on s effects on bending and intersection angles , which encouraged our investigation .some important results that are derived in are included in the appendix in order to be directly referred to in the next section , where we respond to some of the recent papers on the topic .in this section we respond to some of the recent papers on the topic and compare results of significance to the ones derived in the present work .we give a brief summary of each paper we respond to , and put it in the context of the previous sections . for a detailed examination of our comparisons ,we encourage the reader to refer to the papers we discuss .in this part we summarize and respond to the paper published by w. rindler and m. ishak in 2007 , titled contribution of the cosmological constant to the relativistic bending of light revisited " , .since then , the authors have published follow - up papers on the topic , , to which the following discussion applies . in their paper ,the authors begun by noting the work of islam , , and other papers that followed , and clearly stated that they agree with the accepted conclusion that drops out of the governing differential equation for path of light .following this claim , they presented the key idea of their new approach : actual observations depend on the geometry ( metric ) in addition to the orbit equation of a light ray , and when such effects are taken into account does contribute to results of interest .they start their analysis by describing the influence of on the geometry and qualitatively describe how this influence will contribute to measurements associated with light rays .they proceed by writing an approximate solution to first order in of the orbit equation in the ( ) plane .( 9 ) of ) similar to the approach in chapter 11 of by rindler , the authors orient the path so that at , and chose the constant of motion as the parameter in the solution. 
the relationship between and is ( eq . ( 10 ) of ) their and correspond exactly to ours of the previous sections . they note that other authors used the parameter in such discussions , but argued that while is meaningful in schwarzschild space it is not the case in sds space , which is not asymptotically flat . next , the authors pointed out that while their solution equally applies to both schwarzschild space and sds space , only in the case of schwarzschild space can the bending angle be found by letting go to infinity in the solution ; in sds space this limit makes no sense . this way of finding the bending angle corresponds to our definition 1 of section [ sec4b ] , which we discussed in detail and compared to other definitions . the authors then explain the need for other angles in describing the deflection of a path of light . this is an issue to which we dedicated much attention ourselves , and it is the main reason for including the detailed definitions of section [ sec4b ] in the present work . the authors then proceed by observing that a measurable angle is found correctly through the invariant formula ( eq . ( 11 ) of , in original notation ) here the metric tensor components , , are those of the line element in our section [ sec2 ] , and are the tangents of the deflected ray and a radial ray , respectively , on the ( ) plane , and is the measured angle . notice that the above equation is identical to equation of our section [ sec5a ] . this is the key step in accounting for the contribution of the geometry to the measurable angle of interest , and this is precisely where plays its role . in fact , this step is what separates rindler and ishak's work from all the preceding attempts to investigate the influence of on measurements associated with light rays in sds space . with their solution to the deflected trajectory , , they find an expression for and designate it by . this allows them to write an expression for the measurable intersection angle , , as a function of and as follows , ( eq . ( 15 ) of ) and ( eq . ( 16 ) of ) where notice how equation is identical to our equations and of section [ sec5a ] . at this point , the authors had not yet made use of their approximate solution , which does not carry any terms of . thus , without any need for the approximations which the authors proceeded with , the main point of their argument is established by the expressions for the measurable angle , where explicitly appears through . the authors then made a definition of the _ one - sided bending angle _ as follows , here is the one-sided bending angle , is the measurable angle with the radial and is the angular position coordinate of the observer . the reasoning for this definition comes from their figure 2 , the important features of which can be seen in our figures [ split1 ] , [ split2 ] and [ fig10a ] . this definition is similar to our definitions of , the _ measurable deflection angle by a static observer _ , of section [ sec4b ] , and its euclidean counterpart . more on this in what follows .
finally , by using their approximate solution , , the authors obtained explicit results for the specific cases of and , under the assumption that is small , see equations ( 17 ) and ( 19 ) in .the results are expressed in terms of , and , which allows them to discuss the influence of and compare the newly defined bending angle to the case of schwarzschild space .rindler and ishak s approach to this topic is quite original and turns out to be very significant .they brought the concept of measurement into the picture and modified the current view regarding the influence of .however , let us summarize the drawbacks that we find in the following three points .first , as we already stated , the use of an approximate solution is not needed for the main argument .the influence of on an important measurable quantity is clear from equation .moreover , there is no need to define a new parameter and use it in final results ; this task is best fulfilled by the parameter , which has a clear and useful geometrical interpretation .second , the authors never address the question of which observer is making the measurement . in the context of the present work the answer is obvious, it is the static observer that is implicitly taken in all the expressions for measurable angles in .however , not mentioning it explicitly , in a way , hides the fact that measurable angles are observer dependent , and the influence of through the 4-velocity of the observer may be as important to study as the influence of through the metric itself .this lack of clarity , described by the latter point , may have been a cause for some arguments by other authors who responded to , see .lastly , upon closer examination of equation , we find that the definition of the one - sided bending angle , , is somewhat peculiar , in the following sense . on the right hand side of the equation ,the angle is directly measurable , while the angle is purely euclidean . in other words, belongs to the local frame of the particular observer , while is a euclidean angle that belongs to a diagram on the ( ) plane .this observation was not commented on in any of the preceding papers that respond to , whether in agreement or disagreement .although not of major consequence , this definition of a bending angle leads to some problems . let us discuss this issue in the context of our section [ sec4b ] and write equation in our notationto this end , we refer to figure [ fig10a ] of section [ sec4b ] and consider the angles in it . is the euclidean angle between the bending trajectory and the vector , is the euclidean angle between a radial trajectory and the vector , and is the euclidean angle between a radial trajectory and the bending trajectory .their measurable counterparts , by a static observer , are , respectively . 
the ambiguity that arises with the vector dealt with in the precise definition of in section [ sec4b ] .as explained , we have chosen the reference to the vector due to the fact that at the tangent of the trajectory is parallel to this vector , and in this sense the angle between the trajectory and the vector is a measure of the one - sided deflection .it seems that rindler and ishak followed similar reasoning in their definition .now , in terms of the angles mentioned above , a measure of the deflection we are interested in is provided by either or , of which only one is physically measurable , although both can be determined analytically .a straightforward way to find these angles is through and , which is where the importance of the angles , , and comes in and why defining these angles is necessary . in the spirit of analyzing the effect of on measurements, we used the angle rather than in defining the deflection angle at a given point , and emphasised that it is measurable . perhaps in the same spirit , rindler and ishak defined their one - sided bending angle , , with reference to the measurable angle . to compare our definitions ,let us relate the angles that are used in defining to the angles of figure [ fig10a ] . clearly , and .thus , using our notation we can define an identical deflection angle to the one defined by rindler and ishak as . to summarize , by using the angles , and their measurable counterparts and , we have defined three angular quantities that serve as a measure of the deflection of a light ray at a given point .these are : it is not clear as to why rindler and ishak chose this particular definition .mixing measurable and euclidean angles makes it hard to interpret results and discuss their significance .the angle itself is neither measurable nor does it appear on a diagram that depicts the situation being analyzed .hence , the geometrical significance of the euclidean angle and the physical significance of the measurable are absent in the hybrid angle .notice , however , that in the special case of , which leads to equation ( 17 ) of , our bending angle , , and rindler and ishak s bending angle are equal , since in this case .this is the case when the measurement is taken at the point of symmetry , in the language of section [ sec4b ] , for which we defined the angle .therefore , while equation ( 17 ) of makes perfect sense , equation ( 19 ) of , obtained for the case , must be interpreted with extra care and its usefulness is not immediately clear .another problem with the definition of is that paths which are straight lines on the ( ) plane may have a non - zero bending angle .a few examples can be thought of to demonstrate this fact , the simplest of which is perhaps a trajectory of light for the special case .overall , this non - zero bending angle occurrence can be seen from the fact that while for the case of a straight line , the angle becomes a difference between a euclidean angle and its measurable counterpart , which in general is non - zero when the space is curved .thus , in light of the discussion of section [ sec4b ] , concerning the requirements of quantities that represent deflection angles in schwarzschild , sds and de sitter spaces , we see that the quantity , originally , does not meet some of our expectations . 
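to make the distinction between the euclidean and measurable quantities discussed above concrete , the following is a small numerical sketch . it is our own construction , with illustrative parameter values and our own symbols , and is not code from any of the papers discussed . it integrates the standard form of the orbit equation , d2u/dphi2 + u = 3 m u**2 with u = 1/r ( the cosmological constant absent , as the text emphasizes ) , and then evaluates , at a chosen observation point , both the euclidean angle from the radial direction on the flat ( ) plane and its counterpart measured by a static observer , where the cosmological constant enters only through f(r) = 1 - 2m/r - (lambda/3) r**2 ; this static-observer relation is our own restatement of the construction of the previous section . the one-sided bending angles defined above are then combinations of these angles with the observer's angular coordinate .

# hedged sketch: light orbit plus the static-observer angle from the radial
# direction, in kottler coordinates with G = c = 1; values are illustrative only
import math

m, lam, R = 1.0, 1.0e-7, 1.0e3          # mass, cosmological constant, closest approach

def f(r):
    return 1.0 - 2.0 * m / r - lam * r * r / 3.0

def orbit(phi_end, steps=100000):
    # integrate d2u/dphi2 = 3*m*u**2 - u from the point of closest approach,
    # where u = 1/R and du/dphi = 0, out to phi = phi_end (midpoint method)
    u, up = 1.0 / R, 0.0
    h = phi_end / steps
    for _ in range(steps):
        um = u + 0.5 * h * up
        pm = up + 0.5 * h * (3.0 * m * u * u - u)
        u, up = u + h * pm, up + h * (3.0 * m * um * um - um)
    return u, up

phi_obs = 1.2                            # observer's angular position, measured from closest approach
u_obs, up_obs = orbit(phi_obs)
r_obs = 1.0 / u_obs
dr_dphi = -up_obs / (u_obs * u_obs)

psi_euclid = math.atan2(r_obs, abs(dr_dphi))                        # flat (r, phi) plane
psi_static = math.atan2(math.sqrt(f(r_obs)) * r_obs, abs(dr_dphi))  # static observer

print(r_obs, psi_euclid, psi_static)

the difference between psi_euclid and psi_static here is entirely due to the factor sqrt(f(r)) , which is where the cosmological constant makes its appearance for a static observer .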
before ending our discussion of rindler and ishak's work , for the sake of later argument , let us quote some important results that we obtained by means of rindler and ishak's methods presented in . these results are derived in detail in , where we investigate the contribution of to the lens equation . the first order solution given by is all that is needed to obtain the , well known , first order single source lens equation , eq . . referring to figure [ lensing ] of the appendix , the above relationship serves as a map between the distance on the lensed plane and the angular position at the point of observation , in terms of and the euclidean parameters , and . see the appendix for more details . the above relationship can be modified by utilization of equation ( arguably the equation of most significance in ) to replace the euclidean parameters on the right hand side with measurable parameters . the result is a map between and the measurable position angle , in terms of angular diameter distances , all measured by a static observer . see the appendix for more details . the above agrees with our equation . notice the presence of in the above equation , which came about due to the use of measurable parameters . next , equation can be further modified by employing the standard aberration equation to convert the quantities that are measurable by a static observer to quantities that are measurable by a comoving observer , which has a relative velocity of and is moving in the radial direction . the result is a map between and the measurable position angle , using angular diameter distances , all measured by a comoving observer . the above agrees with our equation . notice the appearance of in the above equation and how it differs from . due to the assumption of comoving motion , the above can be regarded as a cosmological gravitational lens equation for a single source , and it is noteworthy that it was derived by means of rindler and ishak's methods of combined with the standard aberration equation . much of the criticism of rindler and ishak's conclusions is based on the fact that with a positive the source , observer and lens should be in relative , comoving , motion , which is not accounted for in . we shall use this last result when responding to some of the comments made in in a later section . in this part we summarize and respond to some aspects of the paper published by m. sereno in 2008 , titled " influence of the cosmological constant on gravitational lensing in small systems " , . since then , the author has published the follow-up papers and on the topic , to which the following discussion also applies . although the author supports the conclusions of rindler and ishak , the analysis in provides an example of the misuse of the parameter , which leads to a questionable interpretation of results . in this paper , the author begins with a brief introduction in which he mentions rindler and ishak's work , . he begins his analysis with the kottler metric , our equation , and proceeds to write down the orbital equation for a light ray in ( ) space in terms of the parameter in integral form : ( eq . ( 3 ) of , in original notation )
here is the coordinate of the source , and the integral is to be taken from the coordinate of the source , , to the coordinate of the observer , ( in the original notation ) . also , the observer is assumed to be positioned at , without loss of generality . the parameter . the above equation is equivalent to our equation , which we have discussed extensively , and which can also be written in terms of the parameters and ( in our notation ) . although the author defined the parameters and , which are identical to our and , respectively , he never used either in the expression of his solution to the orbital equation . the advantages in using either or instead of the parameter are discussed in detail throughout our sections [ sec3 ] and [ sec4 ] . we have shown that the parameter cannot be considered independent of , and its use in results can be misleading when investigating the influence of . the author then proceeds to write an approximate solution to his equation ( 3 ) ( our above ) , expanded in orders of and , which are both represented by for simplicity . ( eq . ( 5 ) of , in original notation ) although it appears somewhat complicated , his solution is essentially a relationship between and in terms of , , and . this relationship is a function that represents a set of points which constitute the path of a light ray in ( ) space . in light of our investigation of section [ sec3 ] , and given the fact that the boundary conditions the author considers are purely coordinate-like , we know that the path of light connecting the source and observer is independent of . in other words , the set of points in ( ) space that constitute the path of a light ray does not depend on . the appearance of in the author's solution is entirely due to his choice of using the parameter , which itself depends on . the perfect cancellation of the terms in equation that one would expect when transforming to either or is completely hidden by the approximation taken . in fact , a solution to can be written without , even without the use of or , since either of these can be expressed in terms of the mass , and the boundary conditions ( ) and ( ) , without invoking . thus , if done correctly and with no approximations on , the solution to the orbital equation should not contain any terms of at all . this is in contradiction with the conclusion made by the author following his equation ( 5 ) . although sereno's conclusions seem to be in agreement with those of rindler and ishak , we see that rindler and ishak took a completely different approach to this topic . they acknowledged the work of islam and that should not contribute to the orbital equation or its solution , and they brought into the analysis through considerations of measurements . sereno , on the other hand , without considering measurements , brought into the orbital equation by using the parameter .
moreover , his approximation masked the fact that can be transformed away from the equation by using a more appropriate parameter , such as or . in fact , if we compare sereno's solution , , to rindler and ishak's solution , , we see that while appears in one it does not appear in the other , which is quite a major conceptual disagreement . rindler and ishak argued against the use of the parameter , which led them to define their parameter . the main point here is that when investigating the appearance of in relationships of interest , the choice of the parameters used in these relationships is crucial ; the advantage in using parameters that are independent of themselves is obvious . in this part we summarize and respond to some aspects of the paper published by a. bhadra , s. biswas and k. sarkar in 2010 , titled " gravitational deflection of light in the schwarzschild - de sitter space - time " , . the authors of this paper seem to support rindler and ishak's conclusions , but there are a number of issues we find with their analysis that we shall discuss . the authors begin by presenting the idea that does affect the orbit of a photon , as well as the resulting bending angle ; unfortunately , a common idea on that side of the argument , . the authors mention rindler and ishak's original work , , and briefly discuss the ongoing debate regarding their conclusions . the position they seem to take is that , in addition to what was found by rindler and ishak , there is more to the contribution of , which comes from the orbital equation . they begin their analysis by stating the kottler metric , our equation , and the orbital equation in ( ) space in terms of the parameter , our equation . in defining their , which is identical to our , they state that it behaves as the impact parameter at large distances , which is incorrect . the quantity ( ) is what actually behaves as the impact parameter at large distances ; see our section [ sec4b ] . for a solution to the orbital equation , the authors used the exact same approximation as in , and even used the same parameter , see equation . however , they claimed that , ultimately , the parameter must be replaced with and , since it is and that appear in the first order orbital equation and carry meaning . the relationship between , and can be easily obtained by plugging the solution into the differential equation , or simply by combining equations and . in either case , we find the above relationship is in disagreement with the one stated by the authors : ( eq . ( 6 ) of ) the derivation of this equation is not explicit , so the source of error is not clear . thus , in addition to proposing the use of and instead of , the authors propose an incorrect relationship to make the transformation . furthermore , the authors claim that by virtue of equations and , the parameter depends on as well . the authors proceed to investigate the bending of the orbit , and define an appropriate deflection angle for light rays in sds space . to this end , they utilized rindler and ishak's method and quoted the fundamental equation of their analysis in , our , for the measurable angle by a static observer , . they expressed this angle in terms of , and approximated it to first orders in and . ( eq . ( 11 ) of ) next , following similar reasoning to that in , the authors defined the angle , and expressed it by using and the approximate solution , given small angles and . ( eq . ( 12 ) of ) notice that they chose to use in this expression , rather than either or .
also , recall that the angle , defined in this way , is a mixture of measurable and coordinate - like quantities . up to this point , other than the different treatment and interpretations of the parameters and , the results of are in perfect agreement with those of .however , following their equation ( 12 ) , the authors of explain that rindler and ishak s decision to put the observer at in the procedure of is not justified , and ultimately conclude that the angle should be expressed in terms of the arbitrary , but far from the origin , locations of the observer and the source , ( ) and ( ) , respectively ( in original notation ) .their following result , which they call the _ total deflection angle _ , is ( eq .( 13 ) of ) the above is a sum of two angles defined by , of which one represents the deflection of the ray as it goes from the source to , while the other represents the deflection of the ray as it goes from to the observer . in a sense , it is a two sided angle of section [ sec6a ] , understanding the definition of which is key to the present discussion .notice that in the above expression can be found from , , and , without invoking , which makes it somewhat of an unnecessary parameter in this situation .replacing in terms of these boundary conditions will not change the appearance of in the expression .however , the authors set forth to replace with the parameter , by approximating the exact relationship given by to first orders in and .( 14 ) of ) by using the above relationship they rewrite in terms of , to which , again , they incorrectly refer as the impact parameter .( 15 ) of ) the above is their final expression for the _ total deflection angle _ ; it is expressed to first orders in and .hence , the expression in is obtained by using a two sided angle , and bringing the parameter ( and , consequently , its dependence on ) to the final result .this combines the problem we find with rindler and ishak s analysis in , and the problem we find with sereno s analysis in .similar to the case of section [ sec6a ] , and of no surprise , the deflection angle of equations and is non - zero for trajectories that are straight lines .finally , in light of our own investigation in the previous sections , it is worth saying that in the analysis of the key contribution of comes from equation and should not come from equation at all . as noted by the authors in a following paragraph, the difference between their results and the ones obtained in is primarily due to the fact that they included in the orbit equation as well , by making use of the parameter .in this part we summarize and respond to the paper published by h. arakida and m. 
kasai in 2012 , titled effect of the cosmological constant on the bending of light and the cosmological lens equation " , .the authors of this paper aim to clear up the confusion in the ongoing debate on the topic , which started following rindler and ishak s .the authors claim that does appear in the orbital equation of light and its solution , but does not contribute to the bending angle , due to its absorption into the impact parameter .these conclusions seem to be in direct contradiction with those of rindler and ishak , who claimed the exact opposite .let us discuss the analysis in to clarify the reasons that led the authors to their conclusions .the authors begin by solving the orbital equation for schwarzschild space , which is identical in form to the one in sds space , and which they later make use of in that case .further , turning attention to sds spacetime and working with the kottler metric , they defined the parameters and in the same notation as ours. they recognized that , with , is the impact parameter rather than , being the distance of closest approach with ( see our definitions in section [ sec4b ] ) .their equation ( 10 ) is their orbital equation of light in sds space , written in terms of and .it is equivalent to our equation . upon stating this equationthe authors emphasised that it obviously "includes , and stated that arguments against this fact would be overstated " .next , by using the results earlier obtained for the case of schwarzschild space , the authors state an approximate solution to in terms of .( 12 ) of , in original notation ) here , .this solution assumes the particular orientation at minimum , and it is correct to second order in .the authors note that contributes to the trajectory , , as well as the orbital equation by virtue of the relationship between and , .this argument is , unfortunately , used in a few papers on the topic , in particular , and we ve already discussed the problems it carries .for instance , even if was replaced in with and , one could solve for by plugging any known point on the path into the relationship .putting the resulting expression for back in will eliminate the appearance of in the equation completely .this is all due to the specific way in which and are connected , which was discussed in detail in section [ sec4 ] .also , the authors stated that some previous approximate solutions , such as rindler and ishak s , are incorrect , since they leave residual terms of second order in when put into the governing equation .this criticism can not be justified , since rindler and ishak s solution , , carries only first order terms in and it is an approximation that is correct only to this order , as clearly stated .the authors proceed by writing an expression for their deflection angle , , in terms of : ( eq .( 13 ) of , in original notation ) this expression is obtained by taking the limit in the solution , , with the assumption of small .this angle corresponds to the bending angle , , we defined for sds space in section [ sec4b ] , and it is correct to second order in . 
as discussed in that section , this quantity is purely mathematical and has nothing to do with actual measurements of angles ; it appears on the flat diagram , such as figure [ split1 ] , and serves as a measure of the bending of the path on the plane . based on the form of the above relationship , the authors concluded that does not contribute to the deflection angle , since it is absorbed in . this raises the question as to why the authors draw their conclusions by considering more fundamental than in the orbital equation and its solution , while they stick to in making conclusions regarding the bending angle . in other words , the authors point out the appearance of when they use , and the absence of when they use . their preference for which parameter to use on which occasion is unclear , and in just the same way , opposite conclusions can be made by switching the use of these parameters . the choice of over in the solution by rindler and ishak , for example , led to the conclusion that has no influence on the orbit , as was also concluded by islam and many others . ( to first order in , rindler and ishak's equals our , see equation . ) next , in order to compare to previously derived results , including equation of the previous section , the authors replaced with , and expanded the expression to lowest orders of and . ( eq . ( 14 ) of , in original notation ) here , . although they point out some agreement that they find in their comparison , it is important to make a clear distinction between the method used to derive the above and the method used to derive , for example . in deriving the above , no reference is made to any real measurements or to any possible observers ; the influence of , therefore , comes only from the use of the parameter . on the other hand , in deriving , a truly measurable angle was considered ( ) , which brought in the contribution of through its influence on the geometry , introducing factors that cannot be transformed away . the reason for any similarities between the two equations is the use of , and the appearance of that is carried with it , in both methods . thus , one must be careful when interpreting and comparing such relationships . overall , up to this point , the authors did not address real measurements at all , which is what sparked the whole debate on the influence of . two important points to take from this are that the choice of parameters affects the appearance of in results of interest ( once again ) , and that the choice of parameters must be stated explicitly in order to avoid confusion and ambiguity when making final conclusions . it is also important to note that the particular way in which the bending angle , , was defined in is exactly what rindler and ishak were trying to avoid in when extending the concept to light rays in sds space , due to the conceptual problem with the limit . while rindler and ishak resorted to measurable angles , through which the contribution of was found , the authors of showed that appears in results of interest only when using the parameter . the authors then proceed with their investigation and also find that , with regard to the cosmological lens equation , the effect of is completely absorbed in an angular diameter distance ; an issue to which the discussion of the next section applies , and which we address in full detail in . in this part we summarize and respond to the paper published by m.
park in 2008 , titled rigorous approach to gravitational lensing " , .the author of this paper takes a different approach to the topic at hand than the ones we ve seen in the papers discussed above . rather than concerning with the contribution of to quantities such as the bending angle ,the author directly derived a cosmological lens equation that accounts for and the relative comoving motion between the observer , source and the massive object .some of the results derived in are central to our response to , which is the main reason for having them included in the appendix .the author used an original method to analyze the standard setup of gravitational lensing by a single source .he ultimately derived the lens equation for a comoving observer in sds space from first principles .the lens equation applies to a comoving observer in the sense that the measurable parameters that appear in the equation are measurable by this observer .such an equation is useful in the cosmological context , where the objects involved are distant galaxies , for example .the author started his analysis from the mcvittie metric , , equation ( 1 ) in , and specialized it to sds spacetime by setting all the cosmological parameters except to zero , resulting in a scale factor , with .he then transformed to more convenient spatial coordinates , which later allow him to express angular diameter distances in an easy way . using these coordinates he approximated the components of the metric to first order in , and expressed it as follows : ( eq .( 7 ) of , in original notation ) his spatial coordinates , ( ) , are centred on a point away from the massive object . in these coordinates ,the origin is a point which can describe the location of comoving observer at any time .the massive object ( lens ) is positioned on the -axis , without loss of generality , and moves away from the origin in accordance to hubble flow .his parameter is just an arbitrary constant associated to his transformation .it can be set by knowing the relative locations of the observer ( at the origin ) and the lens at a given time .note that his time - like coordinate is different to our in the kottler metric , .far from the mass , the in coincides with the proper time of a comoving observer , which is the frw time coordinate in that limit . 
also note that his is twice that of our in all preceding discussion ; we will make it clear when using our notation or the notation of .working to first order in and confining the motion of the photon to the plane ( ) , the author formed a diagram describing the lensing setup , and found the trajectory of a light ray , satisfying the required boundary conditions .see figure 1 in , which is similar to our figure [ lensing ] in the appendix .he proceeded to write an expression for the intersection angle at the origin , between the light ray coming from the source and the light ray coming from the lens , equation ( 26 ) in .since this angle occurs at the origin on his diagram , by the construction of his spatial coordinates , it is equivalent to the measurable angle by an observer located at the origin , a comoving observer in the frw sense .this allowed the author to establish the cosmological lens equation .( 29 ) of , in original notation ) in this equation , the distance - like parameters and are angular diameter distances , measured by the observer at the origin .they precisely correspond to the coordinate distances used in the derivation , which explains the author s choice of transformation .hence , the author does account for measurements by virtue of choosing his coordinates such that some euclidean angles and coordinate distances that appear on the diagram are equivalent to some important measurable angles and distances that are needed to express final results .note that this method of incorporating measurable quantities into the analysis can only work for a comoving observer , in a region far from the mass where its effects are completely negligible .the angle in the above equation is the undeflected position angle that the observer would measure in the absence of the mass .let us put equation in the notation of the appendix by transforming the parameters accordingly . to first order in and , equation written in our notation is : this equation can be solved for and compared to the relationships stated in the appendix. again , to first order in and , we find the above is the cosmological gravitational lens equation , expressed entirely in terms of directly measurable parameters ; it assumes the measurements are taken by a comoving observer .this equation is in perfect agreement with our equation , which is an approximation of equation , obtained by series expansion in .this leads us to conclude that park s result is correct to the highest order of his approximation .it is worth noting that our approach in deriving in is significantly different than the method used by park to derive .it is reassuring to see completely diverse procedures lead to identical final result .however , following the establishment of equation , the author set to replace some appearances of the distances and in the equation with the distance ( in his notation ) . is the angular diameter distance from the source to the lens , it corresponds exactly to our in the appendix ; in principle it could be measured directly by an observer at the source or at the location of the lens .hence , is a measurable quantity , but the observer that can measure it must be located away from the assumed point of observation .thus , if all observations are assumed to be taken at a single point , as in the cosmological context , then the angular diameter distance must be determined indirectly , from other measurements . 
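for reference in the discussion that follows, the conventional single - lens equation, against which park's result is ultimately compared, can be written to lowest order in terms of the three angular diameter distances; the form below is the familiar textbook expression in our own schematic notation ( units g = c = 1 ), and it is not a reproduction of the lambda - corrected equations of the paper or of the appendix.

```latex
% conventional lowest-order lens equation for a point mass m,
% with image position \theta and unlensed source position \beta
\beta \;=\; \theta \;-\; \frac{D_{LS}}{D_{L}\,D_{S}}\,\frac{4m}{\theta}.
```

it is in this form that the source - lens angular diameter distance enters, which is why expressing a result with or without that distance changes where any dependence on the cosmological constant appears.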
in order to include in, the author used the relationship given by his equation ( 30 ) in , which is equivalent to our equation in the appendix .his final result is ( eq .( 31 ) of , in original notation ) which in our notation , to first order in and , is solving the above for , we find , to first order in and , again , the above equation is in perfect agreement with our results , which can be seen by using equation ( a.13 ) to include in our cosmological lens equation .in fact , since our results are exact in , we see that if park were to work with any higher order terms of , he would have found that all these terms would be zero in his approximation as well .notice how gets thoroughly absorbed into the angular diameter distance .thus , only when expressing the lens equation entirely in terms of the angular diameter distances and does make an appearance ; an appearance that can be completely transformed away by using the angular diameter distance . clearly , given the relationship between , and ( equation ( a.13 ) ) , using only two of the three parameters is enough to express any result of interest .this raises the following question : which parameters should be used in expressing the cosmological lens equation ? or more specifically : should the parameter be used at all ? of the three parameters , and , only and are directly measurable at the assumed point of observation .and although can be found indirectly from other measurements that can be made at the point of observation , the value of can be established only with knowledge of ( as in equation ( a.13 ) , for example ) . with this in mind , we can address the above question by considering two possible cases in which the lens equation may be used .first , in a case where all the parameters of interest , such as the three , and , are available from some tabulated data or another source , one can use the lens equation in either form , with or without . in this case using in the cosmological lens equation is preferable , since it simplifies the expression .this will allow the predictions of images and masses by means of the lens equation , but will not allow studying the effects of on measurable quantities directly , which are completely absorbed in . then , although will not appear in the lens equation , if it is to be accounted for , its value must still be used at some point to establish the tabulated data , specifically the value of .thus , we see that the lack of appearance of in a relationship does not necessary imply its lack of influence on the phenomenon being studied .second , in a case where no pre - recorded parameters are available , it is clearly advantageous to use parameters that are measurable directly in the cosmological lens equation . therefore , in this case , equation ( or ) is preferable , in which appears explicitly and its influence on measurable quantities can be studied directly . 
in short, we see that has an effect on the cosmological lens equation in any case, and needs to be accounted for directly or indirectly. this should be kept in mind when choosing the parameters in which to express the cosmological lens equation and when drawing any conclusions. using or not using the angular diameter distance in the final expression is then really a matter of preference in a given situation. further, following his equation ( 31 ) in, the author states that " [ his ] result is in contradiction to the recent claims by which assert that there should be a correction to the conventional lensing analysis ". this statement is somewhat unfair, since in rindler and ishak never concern themselves with the gravitational lens equation directly, and they consider a setup that is quite different, for which they produce results applicable only to a static observer. later in his discussion, the author explains that the disagreement between his results and those of rindler and ishak may be due to the following two problems: ( 1 ) the setup in is not realistic, since they consider a static observer and neglect the relative comoving motion between the observer, source and lens; ( 2 ) the relationships in are not expressed in terms of angular diameter distances, which is necessary for comparison with conventional results. he then explained that in their follow - up paper they failed to address these problems properly, and suggested that it is possible to modify their existing results for an appropriate comparison. he pointed out that using relativistic aberration to modify their results can help resolve the first problem, but that converting parameters to angular diameter distances could be tricky, which, as he explains, makes his approach favourable. in a paper published by ishak et al. in 2010, , the authors argued that the apparent disagreement between the conclusions of and those of park may be due to the fact that park dropped terms of order from his final result, equation, which carry terms of. however, to properly compare the results of and we have used the method in to derive a lens equation subject to the same conditions as in, and found perfect agreement. more on this below; recall the end of section [ sec6a ]. much of the analysis of involves finding relationships between measurable and coordinate - like distances. we found that the methods presented in allow for converting a coordinate distance to the angular diameter distance measured by a static observer. this finding allowed for the derivation outlined at the end of section [ sec6a ]. equation is a cosmological lens equation, which we derived through rindler and ishak's methods and the standard aberration equation. it accounts for the effects of on the geometry and for the relative comoving motion, induced by, between the observer, source and lens. the distance - like and angular quantities on the right side of equation, as well as on the right side of equation, are all measurable by a comoving observer. since equation agrees with our, which agrees with equation, we find perfect agreement between park's result and the one we have obtained through rindler and ishak's methods.
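since the " standard aberration equation " is invoked several times here, we record the textbook special - relativistic relation between the intersection angles measured by two observers in relative radial motion; this is given only for orientation, it is not the exact general relativistic aberration equation derived in the works under discussion, and the sign of v depends on the direction convention adopted for the setup.

```latex
% aberration between a static observer and an observer moving with speed v
% along the reference direction from which the angle is measured
\cos\psi' \;=\; \frac{\cos\psi - v}{1 - v\cos\psi},
\qquad\text{equivalently}\qquad
\tan\frac{\psi'}{2} \;=\; \sqrt{\frac{1+v}{1-v}}\;\tan\frac{\psi}{2}.
```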
although the two methods are quite different , when done correctly they produce identical results .finally , let us re - emphasize that it should not be concluded from park s results that the influence of on the cosmological lens equation is of or higher .in fact , what park found , as we did as well , is that there is a term of in the lens equation , when considering a comoving observer .this is an important fact when comparing it to the lens equation for a static observer , for which we found through our methods , as well as rindler and ishak s methods , that the lowest order term that appears is .hence , given the investigations of our previous sections we were able to make a clear comparison between the results and conclusions in and .in this part we summarize and respond to the paper published by i. b. khriplovich and a. a. pomeransky in 2008 , titled does the cosmological term influence gravitational lensing ? " , .the results of this paper are often referred to in arguments against the conclusions of .the authors of this paper used both the kottler metric and the frw metric , equation ( 8) in , to investigate the appearance of in a given expression of interest . far away from the mass ,the kottler metric is well approximated by the de sitter metric , which is equivalent to the frw metric with a scale factor ( ) . by arriving at specific relationships throughboth the use of de sitter coordinates and frw coordinates separately , the authors compared the contribution of in the two different cases , and made conclusions based on this comparison .the authors begun their analysis by considering the invariant , where and are tangents of two intersecting null geodesics .they designate the positive root of this invariant by . in a local frame of some observer , it is trivial to show that for a small intersection angle between the light rays the invariant can be expressed ( up to a factor of ) as ( eq .( 1 ) of , in original notation ) here , and are the energy of the photons and the intersection angle between them , respectively , that the observer measures .this equation can be easily obtained from the first order in angle approximation of our equation , keeping in mind equation , and its true for any observer as long as is small .it is assumed here that the two intersecting photons are of the same energy .note that the quantities appearing on the right hand side of the above equation are directly measurable , and their values are observer dependent , while the quantity on the left hand side of the equation is a constant for the particular intersecting trajectories . for different observers ,the measurements of and shift accordingly , so that their product always remains the same .the authors first considered the standard setup of gravitational lensing in kottler coordinates .see figure 1 in , which is similar to our figure [ lensing ] in the appendix .after approximating the solution to the orbital equation of light , far away from the mass , the authors express the measurable intersection angle , , between the bending trajectory and a purely radial trajectory , in these coordinates : ( eq .( 6 ) of , in original notation ) here is the euclidean intersection angle appearing on their diagram , their , and are equal to our , and of the previous sections , respectively . 
note that the subscripts of the metric component in the above equation as well as in figure 1 of are most likely a mistype , this component should be .we immediately recognize that the above relationship refers to a static observer in kottler ( or de sitter ) coordinates .this relationship is in perfect agreement with rindler and ishak s main result of , equation , and of course with our results of section [ sec5 ] .this equation is the most basic example of a relationship between a measurable angle and a euclidean angle that appears on a flat plane , on which a diagram of the setup is drawn .notice that the solution to the orbital equation is not necessary to form this particular relationship .it is also important to note that the main reason for this agreement between the results is due to the fact that the same static observer is involved in both approaches , which is unfortunately not specifically stated in neither nor . with the above expression for , the authors proceeded to express the invariant as follows : ( eq .( 7 ) of , in original notation ) here , the subscript of refers to the fact that the analysis is carried out with de sitter coordinates .it should be clear that given the fact that it is the static observer that is involved in the angle measurement , is the energy that is measured by a static observer as well .evidently , kottler ( or de sitter ) coordinates were employed in this paper merely in order to form relationships for a static observer in sds space ; to form the same relationships for a different observer the authors employed other coordinates , as we discuss below .it is not perfectly clear as to why the authors chose to use the euclidean angle in the expression for above , and what purpose this expression serves .since the energy has no obvious non - measurable counterpart , it is only that can be switched around with its euclidean counterpart , , in the expression for .as should be abundantly clear by now , a relationship between such measurable and euclidean angles should always involve when working in kottler ( or de sitter ) coordinates .thus , when a given expression involves one of the angles or , but does not involve , by replacing the angle involved with its counterpart is forced into the expression . the reason for choosing one angle over the other as a parameter in a given expression should always be clarified before drawing any conclusion from the expression .it is often advantageous to express some relationship with purely measurable parameters or , conversely , with purely euclidean ( or coordinate - like ) parameters .the expression for above mixes measurable and euclidean parameters with no satisfactory reason .next , the authors proceeded their investigation by employing frw coordinates to produce an expression for the invariant with reference to a comoving observer . far away from the mass ,the kottler metric is well approximated by the de sitter metric , which is equivalent to the frw metric with the scale factor ( in the notation of ) .see equations ( 8) and ( 9 ) in . in that region of space , a comoving observer is simply an observer with constant frw spatial coordinates , and it is in this way , as recognized by the authors of , it is easy to produce results for this observer by using the frw metric . 
through the use of this metricthe authors find : ( eq .( 16 ) of , in original notation ) here , the subscript of refers to the fact that the analysis is carried out with frw coordinates .the authors argue that these coordinates are the most appropriate for the description of observations , but given the tools of our section [ sec5 ] , we recognize that these coordinates are simply convenient to use when dealing with comoving observers .identical results can be obtained with any equivalent metric as long as the observer is the same , and its 4-velocity is transformed appropriately and accounted for in the derivation .the parameter in the above equation is not the same that was used in the previous sections .this is the constant frw coordinate distance between the comoving observer and the lens , while the distance of closest approach to the lens , with reference to areal radius coordinate , is represented by .( in the frw sense , is the distance , which is the coordinate separation multiplied by the scale factor , at the time of closest approach of the photon to the lens . )the proper interpretation of in this context deserves further attention , but we shall not digress into it here . since is the measurable energy by a comoving observer , the quantity in equals the intersection angle that is measurable by this observer as well . as before ,the choice of parameters in the above expression for as well as its purpose are not perfectly clear , and we see a mix between measurable and non - measurable quantities .note that to arrive at the above equation one simply needs to express the measurable intersection angle , , appearing in as , which can be easily done by drawing the diagram of the lensing setup with reference to frw coordinates .in fact , the solution to the orbital equation is not needed to find the required expression . andfinally , although does not explicitly appear in the above expression for , it does not tell us anything about its influence on measurements of angles or about its possible appearance in other relationships of interest .this absence of in the above expression , in contrast to its appearance in equation , seems to be wrongfully interpreted throughout the literature .it is clear that with our general formula for the measurable angle , equation , we can easily produce results by using any coordinates for any observer .it saves the trouble of transforming to a specific coordinate system merely to consider the measurement of a specific observer , as was done by the authors of and , for example .although the authors of did consider measurements by both static and comoving observers , neither the bending angle in sds space nor the lens equation were specifically addressed . and while they also touched up on the actual trajectory of light , see equations ( 3 - 5 ) and ( 18 ) in , which led them to define the parameter , they did not really need these relationships to establish their ultimate results , equations and .the quantities and in are directly measurable and local , and as long as the intersection angle at the point of observation is small , the rest of the trajectories does not matter .clearly , it also does not matter what metric one chooses to work with if the metrics are equivalent .the authors of decided to use the frw metric , far away from the mass , merely to consider measurements made by a comoving observer . in this sense , in the cosmological context , these coordinates are the ones that are more appropriate to describe measurements , as they claim. 
however, let us re - emphasize that from the results of it cannot be concluded that has no effect on gravitational lensing; more specifically, it is incorrect to reason that the results of imply the non - contribution of to the cosmological lens equation. in a universe with a cosmological constant, the space outside a spherically symmetric non - rotating mass is well described by the kottler metric. with the recent increasing interest in the cosmological constant, sds spacetime has become a popular background for investigating the various effects of gravity. a natural way to study the effects of is to revisit the classical tests of general relativity. one of the most popular predicted phenomena associated with such tests is the deflection of light by a massive object. the question of whether or not plays a role in this phenomenon was asked a long time ago, but unfortunately to this day the topic seems to suffer from misconceptions and disagreements. we see that the answer to the above question is not simply in the positive or negative, but is very sensitive to the particular situation that is being considered. in the course of the ongoing investigation it became clear that in order to properly address the above question one must consider both the geometry of the underlying space and the act of observation by a given observer, on top of the orbital equation for a light ray and its solution. it is mainly due to the work of islam, , that it was generally agreed upon that has no effect on the orbit of a light ray, as acknowledged by rindler and ishak in. and it is due to the findings of rindler and ishak in that it was realized by many that real measurements must be considered as well in investigating the contribution of. given that the previous investigations and conclusions by islam in and rindler and ishak in are correct, what we have done in the present work is address the following question: in what way do results and expressions of interest depend on which observer is making the measurement? in other words, since according to islam the path of light is not affected by, and according to rindler and ishak the measurement of an angle is affected by, the circumstances naturally lead to the question above. investigating this question in detail led us to the results of section [ sec5 ], the most important of which are not found in the literature and are fundamental to the topic at hand. we began our investigation from fundamental considerations, and revisited the original issue of whether or not affects the path of a light ray itself in section [ sec3 ]. it was found that the dependence of a path on is entirely contained in the boundary conditions that are used in a given situation. evidently, whether or not enters the orbital differential equation does not matter, due to the particular way in which its appearance can be entirely absorbed into a new parameter. specifically, even if the orbital equation is written in terms of a parameter ( such as ) that brings in a term of with it, this term of will vanish from the solution completely when certain boundary conditions are enforced. such boundary conditions are purely euclidean, or rather coordinate related, and they are the most popular in the literature and the most appropriate in common situations; for such boundary conditions, varying the value of would not affect the set of points through which the light passes.
for this reason, we recognized that it is acceptable to conclude , but with caution , that does not affect the path of light and best not be used in the orbital equation .however , it is also important to understand that when considering directly measurable quantities as boundary conditions , usually enters the equation describing the path .in addition , of course , in situations where the boundary conditions themselves depend on directly , will also appear in the equation describing the path .an important lesson here is that the contribution of to results of interest depends closely on the situation being analyzed , and any general conclusions should be drawn carefully .our investigations illuminate many possible sources of confusion and misinterpretation regarding this issue , which unfortunately seem to have had a great affect on recent literature .let us re - emphasise that perhaps the most important result of this work is equation .it opens up a way to a more general analysis and is essential to properly investigate the effects of on measurable angles .in addition , it allows for an elegant approach to many situations when analyzing gravitational lensing , and yields an invariant general relativistic aberration equation .it is interesting to note that in some recent papers , such as and , the authors used a transformation of coordinates in order to be able to find a measurable angle by a given observer .it seems that trying to express a measurable angle by an arbitrary observer in an analytic , and coordinate independent , way is generally avoided in the literature .often , the coordinate transformations that make it easy to express a given measurable angle abandon the use of spherical symmetry and complicate the overall analysis considerably , see .this undesirable consequence and other complications that a coordinate transformation may bring can be easily avoided by working with the general formula ; it can be put to use in any coordinate system and produce results related to any observer of interest .more on this in , where we demonstrate the latter point in the context of weak gravitational lensing , and compare results obtained by means of equation to results obtained by means of a coordinate transformation ( as was done in , for example ) .in addition to the papers discussed in section [ sec6 ] , there are other papers on the topic that are worth looking at , including , , , , , and some references therein .although our responses to some of these papers are not included in the present report , the material we presented here is useful in understanding and interpreting their results , and it is of fundamental importance for making proper comparison of the different conclusions the authors arrive to .it is also worth mentioning that when studying the effects of , approximations on should be avoided or made with care . due to the sensitive way in which vanishes from exact results , within a given approximation may end up appearing in relationships where it does not belong .and although such an approximation might be justified , due to the smallness of or some other parameter , and may be numerically accurate , this appearance of in resulting relationships may be theoretically misleading ; see and our section [ sec6b ] . 
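to make the remark about the smallness of the parameters concrete, the short calculation below compares rough orders of magnitude of the two competing small quantities, 2m / r and lambda r^2 / 3, for a galaxy - scale lens; the numbers are illustrative assumptions of ours and are not taken from the papers discussed.

```python
# rough orders of magnitude of the small parameters in SdS lensing expansions
# (illustrative values only, not taken from the papers under discussion)
G, c = 6.674e-11, 2.998e8          # SI units
M_sun, pc = 1.989e30, 3.086e16

Lam = 1.1e-52                      # cosmological constant, m^-2 (rough value)
M = 1.0e13 * M_sun                 # a massive lensing galaxy
m = G * M / c**2                   # geometrized mass, in metres
r = 1.0e5 * pc                     # ~100 kpc closest approach

print(f"2m/r         ~ {2*m/r:.1e}")        # ~1e-5 (usual bending term)
print(f"Lambda*r^2/3 ~ {Lam*r**2/3:.1e}")   # a few times 1e-10 at that radius
```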
finally , we hope that the material presented in this work will provide a proper perspective when addressing questions regarding the influence of , and that it will aide in gaining a clear understanding of , and ultimately settling , the recent debate on the topic .the relationships that are stated below are derived in , where we turn attention to the role of in cosmological distance measurements and the gravitational lens equation .the following figure is referred to in the definitions and the relationships below .the relationships listed below refer to the setup of figure [ lensing ] .all of the parameters used are defined above .+ 4-velocity vectors of a static , and a far from the origin comoving observer , in kottler coordinates : + gravitational lens equation in terms of euclidean parameters , to first order in and : gravitational lens equation in terms of measurable parameters by an observer with 4-velocity , to first order in and : where specific case of for a comoving observer , first order in and , exact in : approximation of the above , first order in and , lowest powers of : + useful relationship for angular diameter distances : specific case of for a static observer : specific case of for a comoving observer : j. n. islam , phys .a * 97 * , 6 , 239 - 241 , ( 1983 ) . w. rindler and m. ishak , phys . rev .d * 76 * , 043006 ( 2007 ) , [ arxiv:0709.2948 ] . m. sereno , phys .d * 77 * , 043004 ( 2008 ) , [ arxiv:0711.1802 ] .a. bhadra , s. biswas and k. sarkar , phys . rev .d * 82 * , 063003 ( 2010 ) , [ arxiv:1007.3715 ] .t. schucker , gen .rel . grav . *41 * , 7 , 1595 - 1610 , ( 2009 ) , [ arxiv:0807.0380 ] .r. kantowski , b. chen and x. dai , astrophys .j. * 718 * , 913 ( 2010 ) , [ arxiv:0909.3308 ] . m. park , phys .d * 78 * , 023014 ( 2008 ) , [ arxiv:0804.4331 ] . h. arakida and m. kasai , phys .d * 85 * , 023006 ( 2012 ) , [ arxiv:1110.6735 ] . i. b. khriplovich and a. a. pomeransky , int .j. of mod .d * 17 * , 12 , 2255 - 2259 , ( 2008 ) , [ arxiv:0801.1764 ] . f. simpson , j. a. peacock and a. f. heavens , mon . not . r. astro .soc . * 402 * , 3 , 2009 - 2016 , ( 2010 ) , [ arxiv:0809.1819 ] .f. kottler , ann .( leipzig ) * 361 * , 14 , 401462 , ( 1918 ) .k. lake , phys .d * 65 * , 087301 ( 2002 ) , [ arxiv : gr - qc/0103057 ] .w. rindler , _ relativity : special , general , and cosmological _ ( oxford university press , new york , 2006 ) .d. lebedev and k. lake , on the contribution of the cosmological constant to the single source gravitational lens equation " .m. ishak , w. rindler and j. dossett , mon . not .soc . * 403 * , 4 , 2152 - 2156 , ( 2010 ) , [ arxiv:0810.4956 ] . m. ishak and w. rindler , gen .grav . * 42 * , 9 , 2247 - 2268 , ( 2010 ) , [ arxiv:1006.0014 ] . m. ishak , w. rindler , j. dossett , j. moldenhauer and c. allison , mon . not .soc . * 388 * , 3 , 1279 - 1283 , ( 2008 ) , [ arxiv:0710.4726 ] . m. ishak , phys .d * 78 * , 103006 ( 2008 ) , [ arxiv:0801.3514 ] . m. sereno , phys .d * 78 * , 083003 ( 2008 ) , [ arxiv:0809.3900 ] .m. sereno , phys .letters * 102 * , 021301 ( 2009 ) , [ arxiv:0807.5123 ] .t. biressa and j. a. de freitas pacheco , gen .rel . grav . * 43 * , 10 , 2649 - 2659 , ( 2011 ) , [ arxiv:1105.3907 ] . g. c. mcvittie , mon . not .. soc . * 93 * , 325 - 339 , ( 1933 ) .h. miraghaei and m. nouri - zonoz , gen .. grav . * 42 * , 12 , 2947 - 2956 , ( 2010 ) , [ arxiv:0810.2006 ] . k. lake , ( 2007 ) , [ arxiv:0711.0673 ] .
|
in this paper we review and build on the common methods used to analyze null geodesics in schwarzschild de sitter space . we present a general technique which allows finding measurable intersection angles of null trajectories analytically , and as one of its applications we establish a general relativistic aberration relationship . the tools presented are used to analyze some standard setups of gravitational deflection of light and gain a clear understanding of the role that the cosmological constant , , plays in gravitational lensing phenomena . through reviewing some recent papers on the topic with the present results in mind , we attempt to explain the major sources of disagreement in the ongoing debate on the subject , which started with rindler and ishak s original paper , regarding the influence of on lensing phenomena . to avoid ambiguities and room for misunderstanding we present clear definitions of the quantities used in the present analysis as well as in other papers we discuss .
|
spherical images are common in nature , for example , in cosmology , astrophysics , planetary science , geophysics , and neuro - science , where images are naturally defined on the sphere .clearly , images defined on the sphere are different to euclidean images in 2d and 3d in terms of symmetries , coordinate systems and metrics constructed ( see for example ) .image segmentation aims to separate a given image into different components , where each part shares similar characteristics in terms of , e.g. , edges , intensities , colours , and textures .it generally serves as a preliminary step for object recognition and interpretation , and is a fundamental yet challenging task in image processing . in this paper , we present an effective segmentation method that uses spherical wavelets to segment spherical images . in the literature , many different approaches have been proposed for image segmentation for 2d , 3d and vector - valued images , e.g. , . in particular , in the well - known mumford - shah model was proposed , which formulates the image segmentation problem by minimising an energy function and finding optimal piecewise smooth approximations of the given image .more details about these kind of methods can be found in .these types of methods generally give good segmentation results .however , their applicabilities and performance heavily depend on the models used ; in some cases ( e.g. segmenting images containing complex textures ) the models are difficult or expensive to compute due to the non - convex nature of the problem . in , a graph - cut based method was proposed to segment point clouds into different groups . the more pixels in the image , the larger the size of the eigenvalue problem need to be solved , which makes the method inefficient in terms of speed and accuracy .methods based on deformable models segment via evolving geodesic active contours that are built from a partial differential equation , with the ability to detect twisted , convoluted and occluded structures , but are sensitive to noise and blur in images .recently , segmentation methods designed utilising techniques in image restoration ( e.g. ) , were proposed for gray - scale images . in , a segmentation model that combines the image segmentation model of and the data fidelity terms from image restoration models considered to deal with images contaminated by different types of noise ( e.g. gaussian , poisson or impulsive noise ) . in ,the methodology of two - stage methods , solving image restoration models first followed by a thresholding second stage , was proposed .one advantage of these methods is the fast speed of implementation .the t - rof method ( thresholding the rudin - osher - fatemi model ) in concluded that the thresholding approach for segmentation was equivalent to solving the chan - vese segmentation model . in additional to the methods above , approaches based on wavelets and tight frames have been proposed for segmentation . in , a tight - frame based segmentation method was designed for a vessel segmentation problem in medical imaging .the major advantage of this method is the ability to segment twisted , convoluted and occluded structures without user interactions .moreover , the ability of the method to follow the branching of different layers , from thinner to larger structures , makes the method a good candidate for a tubular - structured segmentation problem in medical imaging .however , all the tight - frame systems discussed and used in ( e.g. 
framelets , contourlets , curvelets , and dual - tree complex wavelet ) are designed for 2d or 3d data on a euclidean manifold .consequently , these approaches can not be applied to problems where data - sets live natively on the sphere .wavelets have become a powerful analysis tool for spherical images , due to their ability to simultaneously extract both spectral and spatial information .a variety of wavelet frameworks have been constructed on the sphere in recent years , e.g. , and have led to many insightful scientific studies in the fields mentioned above ( see ) .different types of wavelets on the sphere have been designed to probe different structure in spherical images , for example isotropic or directional and geometrical features , such as linear or curvilinear structures , to mention a few .axisymmetric wavelets are useful for probing spherical images with isotropic structure , directional wavelets for probing directional structure , ridgelets for analysing antipodal signals on the sphere , and curvelets for studying highly anisotropic image content such as curve - like features ( we refer to for the general definition of euclidean ridgelets and curvelets ) .fast algorithms have been developed to compute exact forward and inverse wavelet transforms on the sphere for very large spherical images containing millions of pixels ( leveraging novel sampling theorems on the sphere and the rotation group ) .localisation properties of wavelet constructions have also been studied in detail , showing important quasi - exponential localisation and asymptotic uncorrelation properties for certain wavelet constructions .an investigation into the use of axisymmetric and directional wavelets for sparse image reconstruction was performed recently in , showing excellent performance .the spherical wavelets adopted in the experimental section of this paper are reviewed briefly in section [ sec : review ] . in this paper, we devise an iterative framework for segmenting spherical images using wavelets defined on the sphere , extending the method proposed in .the first stage of the method , as a preprocessing step , suppresses noise in the given data by soft thresholding wavelet coefficients .then , potential boundary pixels are classified gradually by the iterative procedure .the framework is compatible with any arbitrary type of spherical wavelet , such as the axisymmetric wavelets , directional wavelets , or curvelets mentioned above .the iterative strategy in the proposed framework is effective , particularly for images containing anisotropic textures .there is also flexibility regarding the implementation of iterations .motivated by the two - stage methodology in , when segmenting images containing many ( or mostly ) isotropic structures , the iterative strategy in our method can be replaced by a simple thresholding to reduce the computation time significantly without sacrificing segmentation quality considerably .we test the proposed framework on a variety of types of spherical images , including an earth topographic map , a light probe ( spherical ) image , two sets of solar data , and two retina images projected on the sphere . 
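the iterative strategy just outlined can be summarized by the following minimal python sketch; the routines passed in as forward / backward stand for whatever exact spherical wavelet analysis / synthesis pair is chosen ( axisymmetric, directional, curvelet or hybrid ), and the thresholds, stopping rule and clipping are illustrative assumptions of ours rather than the precise algorithm developed later in the paper.

```python
import numpy as np

def soft_threshold(coeffs, lam):
    """Soft-threshold a list of wavelet-coefficient arrays."""
    return [np.sign(c) * np.maximum(np.abs(c) - lam, 0.0) for c in coeffs]

def segment_sphere(f, forward, backward, lam=0.05, eps=1e-3, max_iter=50):
    """
    Illustrative iterative wavelet segmentation of a spherical image f,
    with pixel values assumed scaled to [0, 1].

    `forward`/`backward` are assumed to be an exact analysis/synthesis pair of
    spherical wavelet transforms; they are passed in as callables because the
    framework is agnostic to the particular wavelet family.
    """
    g = f.copy()
    for _ in range(max_iter):
        # smooth the current iterate by thresholding its wavelet coefficients
        g = backward(soft_threshold(forward(g), lam))
        g = np.clip(g, 0.0, 1.0)

        # classify clearly-background / clearly-object pixels, keep the rest
        lo, hi = g.mean() - g.std(), g.mean() + g.std()   # illustrative thresholds
        uncertain = (g > lo) & (g < hi)
        g = np.where(g <= lo, 0.0, np.where(g >= hi, 1.0, g))

        if uncertain.mean() < eps:        # stop once almost every pixel is decided
            break
    return (g >= 0.5).astype(np.uint8)    # final binary segmentation mask
```

in practice the forward and backward callables would be supplied by an exact spherical wavelet implementation, so the sketch stays independent of the particular wavelet family, which is the property emphasized above.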
to the best of our knowledge , this is the first segmentation method that works directly on the _ whole _ sphere and is practical for any type of spherical images , benefiting from the compatibility of the method with any type of spherical wavelets .a method was proposed for segmenting spherical particles in volumetric data sets based on an extension of the generalised hough transform and an active contour approach . however , the data considered in were 3d data containing spherical - like particles , not data defined on the sphere directly .the main contributions in this paper are : ( 1 ) a segmentation framework for spherical images is devised , for the first time ; ( 2 ) the framework uses an iterative strategy with the flexibility to tailor the iterative procedure according to data types and features ; ( 3 ) spherical wavelets , including axisymmetric wavelets , directional wavelets and the newly - constructed hybrid directional - curvelet wavelets , are implemented and tested in the framework ; ( 4 ) a series of applications are presented , illustrating the performance of our proposed segmentation method .the remainder of this paper is organised as follows . in section [ sec : review ] , we review related work about spherical wavelets and segmentation methods , and present our new hybrid directional - curvelet wavelet construction . in section [ sec : alg ] , we introduce our spherical segmentation method . in section [ sec : results ] , the proposed method and methods for comparison are tested on a variety of spherical images such as an earth map , light probe images , and two solar maps . to further demonstrate the ability of our method on segmenting highly directional and elongated structures , in section [ sec : results ] we also apply it to retinal images , which contain a complex network of blood vessels .conclusions are given in section [ sec : con ] .let be the given image defined on the sphere . 
without loss of generality , we assume in [ 0 , 1 ] .let denote spherical coordinates with colatitude ] and .wavelet coefficients are computed by the wavelet forward transform ( analysis ) defined by where is a rotation operator related to a 3d rotation matrix by ( is the cartesian vector of ) , is the usual rotation invariant measure on the sphere ; the symbol , the operator , and denote the inner product of functions , directional convolution on the sphere and complex conjugation , respectively .low - frequency content of the signal not probed by wavelets are probed by the scaling function , which is generally axisymmetric .the scaling coefficients are given by where , and the operator denotes axisymmetric convolution on the sphere .the spherical image can be synthesised perfectly from its wavelet and scaling coefficients ( under the wavelet admissibility condition ) by the wavelet backward transform ( synthesis ) by where is the usual invariant measure on so(3 ) ._ construction of different types of wavelets ._ spherical wavelets , constructed to ensure the admissibility condition is satisfied , are defined in harmonic space in the factorised form by where kernel , a positive real function , is constructed to be a smooth function with compact support to control the angular localisation properties of wavelet , with harmonic coefficients ; see for the detailed definition .the directionality component , with harmonic coefficients , is designed to control the directional localisation properties of .the wavelets recovered are steerable when imposing an azimuthal band - limit on the directionality component such that for .while steerability is achieved , the directional localisation of the wavelet is controlled by imposing a specific form for the directional auto - correlation of the wavelet .the detailed construction of and for directional wavelets can be found in and those for curvelets can be found in .in particular , the spherical curvelets proposed in exhibits the parabolic scaling relation .such a geometric feature is unique to curvelets , making it highly anisotropic and directionally sensitive , and thus suitable for extracting local curvilinear structures effectively .moreover , scale - discretised wavelets support the exact analysis and synthesis of both scalar and spin signals , although only the former are considered herein .[ fig : ml_tiling ] and fig .[ fig : ml_show ] show the harmonic tilings of different types of scale - discretised wavelets and the corresponding wavelets plotted on the sphere , respectively .we refer the reader to , and for details about the construction of scale - discretised axisymmetric and directional wavelets , and curvelets , respectively .code to compute these wavelet transforms is public and available in the existing s2let package , which relies on the ssht and so3 packages . [cols="^,^,^ " , ]in this paper we proposed a wavelet - based segmentationmethod ( wssa ) for spherical images , which is , to the best of our knowledge , the first method performing segmentation directly on the sphere .the method is compatible with any invertible wavelet transform constructed on the sphere ( e.g. 
axisymmetric wavelets , directional wavelets , curvelets , or hybrid wavelets ) .consequently , wssa is very flexible and can be equipped with spherical wavelets appropriate for the texture property of the given spherical data of interest .wssa needs just a few iterations to converge , and the main computation within each iteration is the pair of forward and backward wavelet transforms .we applied our wssa method to several real - world problems , i.e. , the earth topographic map , a light probe image , two solar maps , and two projected spherical retina images .the comparisons with the k - means method and different types of wavelets demonstrate that the wssa method is an efficient and effective spherical segmentation method and is superior to k - means .one important future work will be focusing on purifying the uncertainty area at each step in each iteration of wssa to improve segmentation quality according to specific applications .this work is supported by the uk engineering and physical sciences research council ( epsrc ) by grant ep / m011852/1 and ep / m011089/1 .we thank professor raymond h. f. chan in cuhk for the very helpful discussion .we also thank dr david peres - suarez for his help in providing the first set of solar data .l. bar , t. f. chan , g. chung , m. jung , n. kiryati , n. sochen and l. a. vese . mumford and shah model and its applications to image segmentation and image restoration , 2nd edition , editor : o. scherzer , springer 2015 ( online ) .e. cands and d. donoho .ridgelets : akeytohigher - dimensional intermittency ?_ philosophical transactions of the royal society of london a : mathematical , physical and engineering sciences _ , 357(1760):24952509 , 1999 .p. debevec , rendering synthetic objects into real scenes : bridging traditional and image - based graphics with global illumination and high dynamic range photography ._ proceedings of siggraph 98 _ , 189198 , 1998 .e. franchini , s. morigi , and f. sgallari .segmentation of 3d tubular structures by a pde - based anisotropic diffusion model .m. dhlen et al .( eds . ) : mmcs 2008 , lncs5862 , pp . 224241 , 2010 , _springer - verlag berlin heidelberg _ , 2010 .h. jelinek , m. cree , j. leandro , j. soares , r. cesar , and a. luckie . automated segmentation of retinal blood vessels and identification of proliferative diabetic retinopathy ._ josa a _ , 24(5):14481456 , 2007 .d. marinucci , d. pietrobon , a. balbi , p. baldi , p. cabella , g. kerkyacharian , p. natoli , d. picard and n. vittorio .spherical needlets for cosmic microwave background data analysis ._ mon . not ._ , 383:539545 , 2008 .j. d. mcewen , m. p. hobson , a. n. lasenby and d. j. mortlock . a high - significance detection of non - gaussianity in the wmap 1-year data using directional spherical wavelets ._ mon . not ._ , 359(4):15831596 , 2005 .j. d. mcewen , m. buettner , b. leistedt , h. v. peiris , p. vandergheynst , and y. wiaux . on spin scale - discretised wavelets on the sphere for the analysis of cmb polarisation ._ in proceedings iau symposium _ , 306 , 2014 .j. d. mcewen , g. puy , j. thiran , p. vandergheynst , d. ville , and y. wiaux . sparse image reconstruction on the sphere : implications of a new sampling theorem ._ ieee transactions on image processing _ , 22(6):22752285 , 2013 .j. d. mcewen , p. vielva , m. p. hobson , e. martnez - gonzlez , and a. n. lasenby .detection of the isw effect and corresponding dark energy constraints made with directional spherical wavelets . _ mon . not ._ , 376(3):12111226 , 2007 .j. d. 
mcewen , p. vielva , y. wiaux ., r. b. barreiro , l. cayn , m. p. hobson , a. n. lasenby , e. martnez - gonzlez , and j. l. sanz .cosmological applications of a wavelet analysis on the sphere ._ j. fourier anal . and appl ._ , 13(4):495510 , 2007 .y. rathi , o. michailovich , k. setsompop , s. bouix , m. e. shenton , and c .- f .sparse multi - shell diffusion imaging ._ medical image computing and computer - assisted intervention : miccai , international conference on medical image computing and computer - assisted intervention _, 14(2):5865 , 2011 .j. schmitt , j. l. starck , j. m. casandjian , j. fadili , and i. grenier .multichannel poisson denoising and deconvolution on the sphere : application to the fermi gamma - ray space telescope ._ aap _ , 546:a114 , 2012 .f. simons , i. loris , g. nolet , i. c. daubechies , s. voronin , j. s. judd , p. a. vetter , j. charlty , and c. vonesch . solving or resolving global tomographic models with spherical wavelets , and the scale and sparsity of seismic heterogeneity ._ geophysical journal international _ , 187:969988 , 2011 .
|
segmentation is the process of identifying object outlines within images . there are a number of efficient algorithms for segmentation in euclidean space that depend on the variational approach and partial differential equation modelling . wavelets have been used successfully in various problems in image processing , including segmentation , inpainting , noise removal , super - resolution image restoration , and many others . wavelets on the sphere have been developed to solve such problems for data defined on the sphere , which arise in numerous fields such as cosmology and geophysics . in this work , we propose a wavelet - based method to segment images on the sphere , accounting for the underlying geometry of spherical data . our method is a direct extension of the tight - frame based segmentation method used to automatically identify tube - like structures such as blood vessels in medical imaging . it is compatible with any arbitrary type of wavelet frame defined on the sphere , such as axisymmetric wavelets , directional wavelets , curvelets , and hybrid wavelet constructions . such an approach allows the desirable properties of wavelets to be naturally inherited in the segmentation process . in particular , directional wavelets and curvelets , which were designed to efficiently capture directional signal content , provide additional advantages in segmenting images containing prominent directional and curvilinear features . we present several numerical experiments , applying our wavelet - based segmentation method , as well as the common k - means method , on real - world spherical images , including an earth topographic map , a light probe image , solar data - sets , and spherical retina images . these experiments demonstrate the superiority of our method and show that it is capable of segmenting different kinds of spherical images , including those with prominent directional features . moreover , our algorithm is efficient with convergence usually within a few iterations . image segmentation , wavelets , curvelets , tight frame , sphere .
|
[ sec : intro ] turbulent magnetized plasmas are encountered in a wide variety of astrophysical situations like the solar corona , accretion disks , but also in magnetic fusion devices such as tokamaks . in practice , the study of such plasmas requires solving the maxwell equations coupled to the computation of the plasma response .different ways are possible to compute this response : the fluid or the kinetic description .unfortunately the fluid approach seems to be insufficient when one wants to study the behavior of zonal flow , the interaction between waves and particles or the occurrence of turbulence in magnetized plasmas for example .most of the time these plasmas are weakly collisional , and then they require a kinetic description represented by the vlasov - maxwell system . the numerical simulation of the full vlasov equation involves the discretization of the six - dimensional phase space , which is still a challenging issue .in the context of strongly magnetized plasmas however , the motion of the particles is particular since it is confined around the magnetic field lines ; the frequency of this cyclotron motion is faster than the frequencies of interest .therefore , the physical system can be reduced to four or five dimensions by averaging over the gyroradius of charged particles ( see for a review ) .the development of accurate and stable numerical techniques for plasma turbulence ( 4d drift kinetic , 5d gyrokinetic and 6d kinetic models ) is one of our long term objectives .actually there are already a large variety of numerical methods based on direct numerical simulation techniques .the vlasov equation is discretized in phase space using either semi - lagrangian , finite element , finite difference or discontinuous galerkin schemes .most of these methods are based on a time splitting discretization which is particularly efficient for classical systems as vlasov - poisson or vlasov - maxwell systems . in that case , the characteristic curves corresponding to the split operator are straight lines and are solved exactly . therefore , the numerical error is only due to the splitting in time and the phase space discretization of the distribution function .furthermore for such time splitting schemes , the semi - lagrangian methods on cartesian grids coupled with lagrange , hermite or cubic spline interpolation techniques are conservative . hence , these methods are now currently used and have proved their efficiency for various applications . in this context semi - lagrangian methods are often observed to be less dissipative than classical finite volume or finite difference schemes . however , for more elaborated kinetic equations like the 4d drift kinetic or 5d gyrokinetic equations , or even the two dimensional guiding center model , time splitting techniques can not necessarily be applied . thus characteristic curves are more sophisticated and required a specific time discretization .for instance , in several numerical solvers have been developed using an eulerian formulation for gyro - kinetic models .however , spurious oscillations often appear in the non - linear phase when small structures occur and it is difficult to distinguish physical and numerical oscillations . moreover , for these models semi - lagrangian methods are no more conservative, hence the long time behavior of the numerical solution may become unsuitable . 
for this purpose ,we want to develop a class of numerical methods based on the hermite interpolation which is known to be less dissipative than lagrange interpolation together with a weighted essentially non - oscillatory ( weno ) reconstruction applied to semi - lagrangian and finite difference methods .actually , hermite interpolation with weno schemes were already studied in in the context of discontinuous galerkin methods with slope limiters .a system of equations for the unknown function and its first derivative is evolved in time and used in the reconstruction . moreover , a similar technique , called cip ( cubic interpolation propagation ) , has also been proposed for transport equations in plasma physics applications , but the computational cost is strongly increased since the unknown and all the derivatives are advected in phase space . in , a semi - lagrangian method with hermite interpolation has been proposed and shown to be efficient and less dissipative than lagrangian interpolation . in this latter case , the first derivatives are approximated by a fourth order centered finite difference formula . here, we also apply a similar pseudo - hermite reconstruction and meanwhile introduce an appropriate weno reconstruction to control spurious oscillation leading to nonlinear schemes .we develop third and fifth order methods and apply them to semi - lagrangian ( non - conservative schemes ) and conservative finite difference methods .our numerical results will be compared to the usual semi - lagrangian method with cubic spline reconstruction and the classical fifth order weno finite difference scheme .the paper is organized as follows : we first present the vlasov equation and related models which will be investigated numerically . then in section [ sec : hsl ] , the semi - lagrangian method is proposed with high order hermite interpolation with a weno reconstruction to control spurious oscillations . in section [ sec : hfd ] , conservative finite difference schemes with hermite weno reconstructions are detailed . the section [ sec : test ] the one - dimensional free transport equation with oscillatory initial data is investigated to compare our schemes with classical ones ( semi - lagrangian with cubic spline interpolation and conservative finite difference schemes with weno reconstruction ) .then we perform numerical simulations on the simplified paraxial vlasov - poisson model and on the guiding center model for highly magnetized plasma in two dimension .the evolution of the density of particles in the phase space , is given by the vlasov equation , where the force field is coupled with the distribution function giving a nonlinear system .we mention the well known vlasov - poisson ( vp ) and vlasov - maxwell ( vm ) models describing the evolution of particles under the effects of self - consistent electro - magnetic fields .we define the charge density and current density by where is the single charge .the force field is given for the vlasov - poisson model by where represents the mass of one particle .for the vlasov - maxwell system , we have and , are solution of the maxwell equations with the compatibility condition which is verified by the vlasov equation solution . 
in the sequel we will also consider the so - called guiding center model , which has been derived to describe the evolution of the charge density in a highly magnetized plasma in the transverse plane of a tokamak .this model is described as follows -\delta\phi=\rho , \end{array } \right .\label{eq : gc}\ ] ] where the velocity is divergence free .transport equations or ( [ eq : gc ] ) can be recast into an advective form where . hence, classical backward semi - lagrangian method can be applied to solve .furthermore , under the assumption , equations or can also be rewritten in a conservative form for which a finite difference method can be used .we introduce a high order hermite interpolation coupled with a weight essentially non - oscillatory ( hweno ) reconstruction for semi - lagrangian methods .actually , the semi - lagrangian method becomes a classical method for the numerical solution of the vlasov equation because of its high accuracy and its small dissipation .moreover , it does not constraint any restriction on the time step size . indeed , the key issue of the semi - lagrangian method compared to classical eulerian schemes is that it uses the characteristic curves corresponding to the transport equation to update the unknown from one time step to the next one .let us recall the main feature of the backward semi - lagrangian method .for a given , the differential system \mathbf{x}(s)= { \mathbf{x } } , \end{array } \right.\ ] ] is associated to the transport equation .we denote its solution by .the backward semi - lagrangian method is decomposed into two steps for computing the function at time from the function at time : 1 .for each mesh point of phase space , compute , the value of the characteristic at time who is equal to at time .2 . as the function of transport equationverifies we obtain the value of by computing by interpolation , since is not usually a mesh point . in practice ,a cubic spline interpolation is often used .it gives very good results , but it has the drawback of being non local which causes a higher communication overhead on parallel computers . moreover spurious oscillations may occur around discontinuities . on the other hand ,the cubic hermite interpolation is local , and has been shown in to be less dissipative than lagrange interpolation polynomial .however , it has still spurious oscillations for discontinuous solution . here ,we develop a third and fifth order hermite interpolation coupled with a weighted essentially non - oscillatory procedure , such that it is accurate for smooth solutions and it removes spurious oscillations around discontinuities or high frequencies which can not be solved on a fixed mesh . consider a uniform mesh of the computational domain and assume that the values of the distribution function and its derivative are known at the grid points .the standard cubic hermite polynomial on the interval $ ] can be expressed as follows : & & \displaystyle + \frac{\delta x(f'_i+f'_{i+1})-2(f_{i+1}-f_i)}{\delta x^3}(x - x_i)^2(x - x_{i+1 } ) , \end{array } \label{eq : poly_hermite}\ ] ] the polynomial verifies : h_3(x_{i+1})=f_{i+1},&h'_3(x_{i+1})=f'_{i+1}. 
\end{array } \right.\ ] ] moreover , we define two quadratic polynomials on by \displaystyle h_r(x)\,=\,f_i+\frac{f_{i+1}-f_i}{\delta x}(x - x_i)+\frac{\delta x {f}'_{i+1}-(f_{i+1}-f_i)}{\delta x^2}(x - x_i)(x - x_{i+1 } ) .\end{array}\right.\ ] ] the polynomial verifies while verifies the idea of weno reconstruction is now to apply the cubic polynomial when the function is smooth , otherwise , we use the less oscillatory second order polynomial between or .thus , let us write as follows where and are weno weights depending on .when the function is smooth , we expect that so that we recover the cubic hermite polynomial .otherwise , we expect that according to the region where is less smooth .to determine these weno weights , we follow the strategy given in and first define smoothness indicators by integration of the first and second derivatives of and on the interval : then we set and as where where to avoid the denominator to be zero .observe that when the function is smooth , the difference between and becomes small and the weights and .otherwise , when the smoothness indicator , blows - up , then the parameter and the weight goes to zero , which yields . finally , let us mention that here the value of the first derivative at the grid point is approximated by a fourth - order centered finite difference formula we can extend previous method to a fifth order hermite weno ( hweno5 ) interpolation . in the same way , we first construct a fifth degree polynomial on the interval and then three third degree polynomials , , verifying & h_c(x_j)=f_j , \quad j = i-1,i , i+1,i+2 , & \ , \\[3 mm ] & h_r(x_j)=f_j , \quad j = i , i+1,i+2 , & h_r'(x_{i+2})=f'_{i+2 } , \end{array}\right.\ ] ] where the first derivative is given by a sixth order centered approximation then the polynomial can be written as a convex combination where , , are weno weights depending on .similarly smoothness indicators are computed by integration of the first , second and third order derivatives of , , on the interval : finally , the weno weights are determined according to the smoothness indicators \displaystyle w_c(x ) = \frac{\alpha_c(x)}{\alpha_l(x ) + \alpha_c(x ) + \alpha_r(x ) } , & \displaystyle\alpha_c(x ) = \frac{c_c(x)}{(\varepsilon + \beta_c)^2 } , & \displaystyle c_c(x)=1-c_l(x)-c_r(x ) , \\[5 mm ] \displaystyle w_r(x ) = \frac{\alpha_r(x)}{\alpha_l(x ) + \alpha_c(x ) + \alpha_r(x ) } , & \displaystyle\alpha_r(x ) = \frac{c_r(x)}{(\varepsilon + \beta_r)^2 } , & \displaystyle c_r(x)=\frac{(x - x_{i-1})^2}{9\delta x^2}. \end{array}\right.\ ] ] this polynomial reconstruction allows to get fifth order accuracy for smooth stencil and the various stencils are expected to damp oscillations when filamentation of the distribution function occurs .finally , let us observe that this technique can be easily extended to high space dimension on cartesian grids .when the velocity is not constant ( [ eq : transport_vlasov ] ) , the semi - lagrangian method is not conservative even when , hence mass is no longer conserved and the long time behavior of the numerical solution can be wrong even for small time steps .therefore , high order conservative methods may be more appropriate even if they are restricted by a cfl condition .an alternative is to use the finite difference formulation in the conservative form and to use the semi - lagrangian method for the flux computation . 
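before moving on to the conservative formulation, the third-order hermite-weno interpolation described in this section can be summarised in a short python sketch. the grid is assumed uniform and periodic, the function and variable names are ours, and the smoothness indicators below were worked out from the stated integral definition applied to the two quadratic candidates, so the paper's exact constants may differ by a normalisation; the nodal derivatives are obtained with the standard fourth-order centred formula.

import numpy as np

def derivative4(f, dx):
    # standard fourth-order centred approximation of f' on a periodic grid
    return (np.roll(f, 2) - 8*np.roll(f, 1) + 8*np.roll(f, -1) - np.roll(f, -2)) / (12.0*dx)

def hweno3_interp(f, fp, i, x, xi, dx, eps=1e-6):
    # evaluate the third-order hermite-weno interpolant at a point x located
    # in the cell [x_i, x_i + dx]; f and fp hold nodal values and derivatives
    df = f[i+1] - f[i]
    # quadratic candidates: h_l matches f_i, f_{i+1} and f'_i,
    #                       h_r matches f_i, f_{i+1} and f'_{i+1}
    cl = (df - dx*fp[i]) / dx**2
    cr = (dx*fp[i+1] - df) / dx**2
    hl = f[i] + df/dx*(x - xi) + cl*(x - xi)*(x - xi - dx)
    hr = f[i] + df/dx*(x - xi) + cr*(x - xi)*(x - xi - dx)
    # smoothness indicators from the integral definition given above
    bl = df**2 + 13.0/3.0*(df - dx*fp[i])**2
    br = df**2 + 13.0/3.0*(dx*fp[i+1] - df)**2
    # linear weights chosen so that the convex combination reproduces the
    # cubic hermite polynomial when the data are smooth
    gl = (xi + dx - x) / dx
    gr = (x - xi) / dx
    al = gl / (eps + bl)**2
    ar = gr / (eps + br)**2
    return (al*hl + ar*hr) / (al + ar)

in a backward semi-lagrangian step one simply locates the cell containing the foot of the characteristic and calls this routine there.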
in this section ,we extend hermite weno reconstruction for computing numerical flux of finite difference method .suppose that is approximation of .we look for the flux such that it approximates the derivative to -th order accuracy : let us define a function such that then clearly .\ ] ] hence we only need let us denote by one primitive of then implies thus , given the point values , the primitive function is exactly known at .we thus can approximate by an interpolation method .therefore , now let us interpolate the primitive function .here we give the hermite weno scheme and outline the procedure of reconstruction only for the fifth order accuracy case .the aim is to construct an approximation of the flux by the hermite polynomial of degree five together with a weno reconstruction from point values : 1 . we construct the hermite polynomial such that 2 .we construct cubic reconstruction polynomials , , such that : h_c(x_{j+1/2 } ) = g_{j+1/2 } , j=-2,-1,0,1 , & \ , \\[4 mm ] h_r(x_{j+1/2 } ) = g_{j+1/2 } , j=-1,0,1 , & h'_r(x_{i+1/2 } ) = g'_{i+1/2},\\[4 mm ] \end{array}\right.\ ] ] where is the sixth order centered approximation of first derivative .let us denote by , , , the first derivatives of , , , respectively . by evaluating , , , at , we obtain and h_c(x_{i+1/2 } ) & = & \frac{-f_{i-1 } \,+\ , 5\,f_i + 2\,f_{i+1}}{6},\\[3 mm ] h_r(x_{i+1/2 } ) & = & \frac { f_{i } \,+\ , 5\,f_{i+1 } \,-\ , 2\,g'_{i+1/2}}{4}.\end{aligned}\ ] ] 3 .we evaluate the smoothness indicators , , , which measure the smoothness of , , on the cell . & = & \frac{1}{16}\left(835f_{i-1}^2 + 139f_i^2 + 300(h'_{i-1/2})^2 - 674f_{i-1}f_i - 996f_{i-1}h'_{i-1/2 } + 396f_i h'_{i-1/2}\right),\\[3 mm ] \beta_c & = & \int_{x_i}^{x_{i+1 } } \delta x ( h'_c(x))^2 + \delta x^3 ( h''_c(x))^2 dx\\[3 mm ] & = & \frac{1}{12}\left(13f_{i-1}^2 + 64f_i^2 + 25f_{i+1}^2 - 52f_{i-1}f_i + 26f_{i-1}f_{i+1 } - 76f_i f_{i+1}\right),\\[3 mm ] \beta_r & = & \int_{x_i}^{x_{i+1 } } \delta x+ \delta x^3 ( h''_r(x))^2 dx\\[3 mm ] & = & \frac{1}{16}\left(55f_{i}^2 + 367f_{i+1}^2 + 156(h'_{i+1/2})^2 - 266f_{i}f_{i+1 } + 156f_{i}h'_{i+1/2 } - 468f_{i+1 } h'_{i+1/2}\right).\end{aligned}\ ] ] 4 .we compute the non - linear weights based on the smoothness indicators \displaystyle w_c = \frac{\alpha_c}{\alpha_l + \alpha_c + \alpha_r } , & \displaystyle\alpha_c = \frac{c_c}{(\varepsilon + \beta_c)^2 } , \\[5 mm ] \displaystyle w_r = \frac{\alpha_r}{\alpha_l + \alpha_c + \alpha_r } , & \displaystyle\alpha_r = \frac{c_r}{(\varepsilon + \beta_r)^2 } , \end{array}\right.\ ] ] where the coefficients , , are chosen to get fifth order accuracy for smooth solutions and the parameter avoids the blow - up of , .the flux is then computed as the reconstruction to is mirror symmetric with respect to of the above procedure .we start with a very basic test on the one dimensional transport equation with constant velocity to check the order of accuracy and to compare the error amplitude of the various numerical schemes .then we perform numerical simulations on the simplified paraxial vlasov - poisson model and on the guiding center model for highly magnetized plasma in the transverse plane of a tokamak . 
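for the constant-velocity test just mentioned, the conservative method-of-lines update takes only a few lines of python. the sketch below is ours: reconstruct stands for any procedure that returns the numerical flux at the cell interfaces from the point values (for instance the hermite-weno reconstruction of this section, or a classical weno5 routine), the grid is periodic, and the classical fourth-order runge-kutta scheme is used in time.

import numpy as np

def rhs(f, a, dx, reconstruct):
    # semi-discrete conservative update df_i/dt = -(F_{i+1/2} - F_{i-1/2})/dx
    g = a * f                    # flux point values for constant velocity a > 0
    F = reconstruct(g)           # F[i] approximates the flux at x_{i+1/2}
    return -(F - np.roll(F, 1)) / dx

def rk4_step(f, dt, a, dx, reconstruct):
    # classical fourth-order runge-kutta step
    k1 = rhs(f, a, dx, reconstruct)
    k2 = rhs(f + 0.5*dt*k1, a, dx, reconstruct)
    k3 = rhs(f + 0.5*dt*k2, a, dx, reconstruct)
    k4 = rhs(f + dt*k3, a, dx, reconstruct)
    return f + dt/6.0*(k1 + 2.0*k2 + 2.0*k3 + k4)

for a negative velocity the mirror-symmetric reconstruction would be used, as discussed above.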
in this sectionwe will compare our hermite weno reconstruction with the usual semi - lagrangian method with cubic spline interpolation without splitting , and with the classical fifth order finite difference technique coupled with a fourth order runge - kutta scheme for the time discretization .we compare our hermite weno reconstruction with various classical methods for solving the free transport equation ,\quad t\geq0 , \label{eq : test1d}\ ] ] with periodic boundary conditions .let us first consider a smooth solution , where the initial condition is chosen as .\ ] ] we present in table [ tab:1d1 ] , the numerical error for different methods . on the one hand for semi - lagrangian methods ,the hermite weno interpolation is compared with the cubic spline interpolation .the semi - lagrangian method is unconditionally stable , we thus choose a cfl number larger than one , _e.g. _ cfl .we observe that the cubic spline and hermite weno reconstructions have both third order accuracy , and the numerical error has almost the same amplitude .the semi - lagrangian method with a fifth order hermite weno reconstruction has fifth order accuracy , thus it is much more accurate than the previous third order methods . on the other hand we focus on the finite difference method andcompare the hermite weno reconstruction with the classical fifth order weno reconstruction .we observe that these two methods have fifth order accuracy , but the hermite weno interpolation method is much more accurate than the usual weno method .furthermore , for the same order of accuracy the semi - lagrangian method is much more precise than the finite difference scheme , which is expected for linear problems since the error only comes from the polynomial interpolation ..[tab:1d1]1d transport equation : _ error in -norm and order of convergence for smooth solutions for semi - lagrangian and finite difference methods .the final time is . _ [ cols="^,^,^,^,^,^,^ " , ]in this paper , we have first developed a hermite weighted essentially non - oscillatory reconstruction for semi - lagrangian method and finite difference method respectively .we illustrate that such a reconstruction is less dissipative than usual weighted essentially non - oscillatory reconstruction .then we have compared our approach with the usual semi - lagrangian method with cubic spline and finite difference weno reconstruction .the semi - lagrangian method is efficient and accurate for linear phase even with a large time step , however , it becomes less accurate for nonlinear phase and may lead to the wrong solution in some cases , for instance , the beam test .the finite difference method is stable under the classical cfl condition , but it is much more stable in nonlinear phase and it conserves mass . we thus apply a mixed method using the semi - lagrangian method in linear phase and finite difference method during the nonlinear phase , called mixed hweno5 method .we finally apply the mixed hweno5 method to the simulation of the diocotron instability and observe that although the mixed hweno5 method is a little more dissipative than the semi - lagrangian with cubic spline method , but it is much more stable during the nonlinear phase .the next step is now to apply our mixed method to more realistic and high dimensional plasma turbulence simulations , for instance , 4d drift - kinetic simulation or 5d gyrokinetic simulation .the authors are partially supported by the european research council erc starting grant 2009 , project 239983-_nusikimo_.
|
we introduce a weno reconstruction based on hermite interpolation for both semi-lagrangian and finite difference methods. this weno reconstruction technique allows one to control spurious oscillations. we develop third and fifth order methods and apply them to non-conservative semi-lagrangian schemes and to conservative finite difference methods. our numerical results are compared with the usual semi-lagrangian method using cubic spline reconstruction and with the classical fifth order weno finite difference scheme. the hermite-based reconstructions are observed to be less dissipative than the usual weighted essentially non-oscillatory procedure. we apply these methods to transport equations in the context of plasma physics and to the numerical simulation of turbulence phenomena. keywords: finite difference method; semi-lagrangian scheme; hermite weno reconstruction; vlasov-poisson model; guiding-center model; plasma physics.
|
compressed sensing ( cs ) , is a powerful sub - nyquist sampling theory for the acquisition and recovery of sparse signals , that has received special attention in signal and image processing as well as other related fields such as statistics and computer science .the cs theory states that if the unknown signal is inherently sparse , then it is possible to acquire and reconstruct signal ( by solving a convex optimization problem ) with a much lower number of measurements that would be otherwise needed under the existing nyquist sampling scheme . in image processing, the cs theory is particularly relevant in several applications , such as magnetic resonant imaging ( mri ) or hyper - spectral imaging , where acquisition time and/or sensing hardware cost play a significant role .also , the sparsity assumption typically holds due to , for example , inherent wavelet structure in images . in recent years, the cs literature has seen seen significant advances in both theory and applications ( many of which are collected in the cs repository ) .there are also a variety of specialized solvers for the cs recovery problem , which are developed from different angles , such as pursuit algorithms , , , optimization algorithms , a complexity regularization algorithm , and bayesian methods . in this work ,we focus on a particular aspect of cs recovery , wherein the emphasis is on robustness .this is originally raised by pham and venkatesh .they recognize that existing cs recovery schemes can be statistically inefficient when the corruption of cs measurements is modeled as impulsive noise .such impulsive corruption can occur due to bit errors in transmission , malfunctioning pixels , faulty memory locations , and buffer overflow , and has been raised in many image processing works . to address this problem ,pham and venkatesh proposed a new formulation , known as robust cs , which combines traditional robust statistics and existing cs into a single framework to effectively suppress outliers in the recovery . whilst the focus of is on the theoretical justification of the new formulation, they also suggested a provably convergent algorithm to solve their robust cs formulation .this majorization minimization ( mm ) algorithm finds the robust cs solution by iteratively solving a number of cs problems , the solutions from which converge to the true solution .however , this is not computationally efficient because each iteration involves a full cs recovery , which is always iterative in nature . 
to overcome the computational limitation of the original robust cs algorithm proposed in , we propose two new algorithms that directly solve the robust cs formulation .they both have only one main loop and iteratively majorize the original robust cs objective function .one algorithm is adapted from the fast iterative shrinkage thresholding ( fista ) framework developed by beck and teboulle , which shares the same spirit as an unpublished work of nesterov .the other algorithm is based on a framework known as alternating direction method of multipliers ( admm ) .even though the original fista scheme was derived for the original cs problem , it can be used for robust cs .our contribution is a theoretical result that allows one to compute the lipchitz constant for the application of fista .additionally , we also derive a generalized admm algorithm for solving the robust cs formulation efficiently , which differs from the fista algorithm in that operator splitting and approximation updates are used .this results in a method that has same update complexity as fista , but is more flexible to extend .furthermore , we also extend robust cs in a number of directions , including additional affine constraints , -norm loss function , mixed - norm regularization , and multi - tasking .we show that the admm is a powerful optimization framework for the robust cs problem as it can be modified or generalized to cope with these extensions , where often other cs techniques , including fista , find impossible to do so .we show that the derived algorithms are simple to implement , provably convergent under the admm theory , and that they effectively solve complex robust cs formulations .the paper is organized follows .section ii gives some background on robust cs , whilst section iii describes the fista and admm algorithms for solving the robust cs formulation .section iv presents four extensions of the robust cs formulation and derive computationally efficient algorithms for solving them .section v contains numerical experiments to demonstrate the computational efficiency of the proposed algorithms .finally , section vi concludes the paper .all matlab code to implement our methods described in this paper and reproduce our results is readily available at the following website http://www.computing.edu.au/~dsp/code.php .in compressed sensing ( cs ) , one is interested in the recovery of a sparse signal though the compressed measurement here , is the cs matrix that represents the compressive sampling operation and is additive noise .the cs matrix is required to some stable embedding conditions for stable recovery . as in the cs setting ,the recovery of from is generally ill - posed .the cs theory has established that under an assumption that is sparse , it is possible to recover reliably from with an error upper bounded by the noise strength . 
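to fix ideas, the measurement model with impulsive noise can be simulated in a few lines of python. the sketch below is purely illustrative: the dimensions, sparsity level and the variance ratio of the contaminating component are our own choices and are not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n, m, s = 1024, 300, 30                       # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)   # s-sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(m)                  # gaussian cs matrix
# epsilon-contaminated gaussian noise: a small fraction of the samples comes
# from a component with much larger variance (the ratio 100 is illustrative)
eps, sigma, ratio = 0.1, 0.01, 100.0
noise = sigma * rng.standard_normal(m)
noise[rng.random(m) < eps] *= np.sqrt(ratio)
y = A @ x + noise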
among various approaches to solve the cs recovery problem ,the optimization formulation often provides the best achievability for a given cs matrix in the normal cs setting , the noise in ( [ equ_cs_model ] ) is often considered gaussian with bounded norm and thus the maximum error induced by a cs recovery is .however , pham and venkatesh have discovered that when the noise is indeed impulsive , such a result will still hold for normal cs recovery but is rather inefficient .thus , they propose a modification to the cs formulation , known as robust cs , to appropriately address the characteristics of the underlying additive noise .this is achieved by considering the robust loss function instead of the quadratic cost function in ( [ equ_cs_opt ] ) here , and is the huber s penalty function ( soft limiter ) given as follows and its derivative is given by the parameter of the huber s penalty function is determined by the fraction of the outliers whilst the scale parameter is often estimated from some statistic of the median , such as the median of the absolute deviation ( mad ) . for detail , see .as is quadratic or linear depending on the actual value of , solving ( [ equ_robust_opt ] ) directly is not trivial .pham and venkatesh suggested that instead of solving ( [ equ_robust_opt ] ) , a better alternative is to solve a series of the normal cs problems .the idea is to replace with an approximate quadratic function at every outer iteration with the general form where pham and venkatesh detailed two options for , which are commonly used in the robust statistics literature * modified residuals ( mr ) : * iteratively reweighted : , .when using as shown in ( [ equ_lx ] ) for in ( [ equ_robust_opt ] ) , the resultant problem is essentially a normal cs problem and thus considered solved .whilst the above strategy will work , it is inefficient because each outer iteration involves a full cs problem and it is known that the cs problem needs to be solved iteratively as well .therefore , the double loops are the main computational deficiency of the above strategy . to address this limitation , we consider bypassing the inner cs step and thus there will be only one loop for the overall algorithm .there are two powerful optimization frameworks that are suitable for this purpose , which we describe next .fast iterative shrinkage thresholding ( fista ) is an optimization approach that effectively decouples the variables from the smooth loss function in the compressed sensing objective .this approach was proposed by , which also shares the same philosophy as an unpublished work of . 
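before detailing this approach, we record for reference one common parameterisation of the huber penalty, its derivative (the soft limiter mentioned above) and the mad-based scale estimate, in python; the exact scaling conventions used in the paper may differ slightly.

import numpy as np

def huber(u, k):
    # huber penalty: quadratic for |u| <= k, linear with slope k beyond
    a = np.abs(u)
    return np.where(a <= k, 0.5*u**2, k*a - 0.5*k**2)

def psi(u, k):
    # derivative (influence function) of the huber penalty: a soft limiter
    return np.clip(u, -k, k)

def mad_scale(r):
    # robust scale estimate from the median absolute deviation of the residuals
    # (0.6745 is the usual consistency constant for gaussian data)
    return np.median(np.abs(r - np.median(r))) / 0.6745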
technically , fista is a variant of majorization minimization ( mm ) algorithms and has a special choice for the quadratic majorization as well updates that involve historical points .consider minimizing a convex optimization of the form where here , is a smooth loss function , but the variables in this loss function are coupled .the core idea of fista is to consider a quadratic majorization of , denotes as , such that it effectively decouples the variables .if such decoupling is possible , the approximate problem is then easier to solve even when the regularization term is possibly non - smooth ( such as ) , because it can be decomposed into a number of univariate optimization problems whose solution is analytical .the first trick of fista is to decouple the variables by considering the majorization at iteration and approximation point here , is used as the approximation point rather than as it involves historical updates of by a careful choice , which is subsequently show in ( [ equ_fista_z ] ) . also , is the lipchitz constant of the gradient of the loss function to ensure that is a proper majorization of .thus , at iteration , fista finds via where . for the quadratic loss function , it can be shown that , and .for the -norm regularization as in the case of cs , this results in this problem can be solved element - wise and its solution is where the soft - thresholding shrinkage operator is defined as the second trick of fista is to use a clever update of the approximation point to speed up convergence the original fista framework can be readily used for robust cs case if and the lipchitz constant can be computed for the robust loss function . in case of , it can be easily seen that it remains to compute the lipchitz constant .to do so , we rely on the following result : [ lemma_l_sum ] let be a smooth convex function on and suppose that the domain is divided into two regions and , such that if and if , and , and that for . denote as and the lipchitz constants of and respectively on the domains and .then the lipchitz constant of is bounded by the proof of this lemma is detailed in the appendix .the result implies that for mixed functions like the robust cs loss functions being considered , we just take the sum of lipchitz constants over each continuous and bounded domain .the lipchitz constant for the quadratic part is as before , i.e. , whilst for the linear part we can split into negative and positive domain . in both cases , the lipchitz constant is zero due to the fact that is a constant .thus , the lipchitz constant for the robust cs cost function is still .alternating direction method of multipliers ( admm ) is a simple but powerful framework in optimization , which is suited for today s large - scale problems arising in machine learning and signal processing .the method was in fact developed a long ago before advanced computing power was available , and re - discovered many times under different perspectives .recently , has unified the framework in a simple and concise explanation . in either the cs or robust cs problem , the main technical challenge is that the variables are coupled through in either the quadratic or robust loss function .this makes it rather difficult when the extra constraint with non - smooth norm is introduced . in principle , the problem is easier to tackle if the variables can be decoupled , so that the problem can be solved element - wise or group - wise . 
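before describing this splitting idea, the fista iteration derived above for the huber-loss problem can be summarised in a short python sketch; it reuses psi from the earlier snippet, the variable names are ours, and with that parameterisation the influence function is 1-lipschitz, so the squared spectral norm of the sensing matrix is a valid (possibly conservative) lipschitz constant.

import numpy as np

def soft(v, t):
    # scalar soft-thresholding shrinkage operator, applied element-wise
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_robust_cs(A, y, lam, k, n_iter=500):
    # fista applied to the l1-regularised huber-loss problem
    L = np.linalg.norm(A, 2) ** 2            # lipschitz constant of the gradient
    x = x_prev = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = -A.T @ psi(y - A @ z, k)      # gradient of the robust loss at z
        x = soft(z - grad / L, lam / L)
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x + (t - 1.0) / t_next * (x - x_prev)   # momentum update
        x_prev, t = x, t_next
    return x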
using a clever trick , known as operator splitting , the admm framework suggests to separate the regularization term from the smooth term by introducing an additional variable , which is tied to the original variable via an affine constraint : here , is the robust cs loss function . for this type of regularized objective function, admm considers the following augmented lagrangian here , is the parameter associated with the augmentation , and this is to improve the numerical stability of the algorithm .the strategy for minimizing this augmented lagrangian is iterative updating of the primal and dual variables . with a further normalization on the dual variable , it is shown that as far as the primal and dual variables and are concerned where the constant is independent of and ( actually ) .note of the semi - colon , which treats as a parameter rather than a variable when solving for other variables .thus , the optimality point of the lagrangian can be found by iteratively updating the variables as follows : we note that the update steps for and are straightforward . in particular , for it is known that it is a soft - thresholding shrinkage operation due to the nature of , there is no exact solution for ( [ equ_x_step ] ) , and finding it always necessitates iterative algorithms .this will increase computational burden to the overall algorithm in a similar way as the previous robust cs algorithms introduced in . to alleviate the computational problem, we propose to follow a novel framework , known as generalized admm and developed by eckstein and bertsekas . in generalized admm, the update steps can be solved _approximately _ as long as the differences between the exact and approximate solutions generate a summable sequence .when such a condition is satisfied , the generalized admm theory has proved that the algorithm will converge to the solution ( * ? ? ?* theorem 8) . to utilize the generalized admm theory ,once again we adapt an mm algorithm to solve ( [ equ_x_step ] ) , which is in the same spirit as the original robust cs . in essence , this replaces with a suitable quadratic majorization as discussed previously .the major difference is that we only perform the minimization of the majorization _ once _ , as opposed to iteratively as in .specifically , we propose to modify the update step for in ( [ equ_x_step ] ) by using the quadratic approximation of at iteration as ( shown in ( [ equ_lx ] ) ) it can be easily recognized that the solution of this problem is exact we note that the quadratic approximation of fista can also be used .however , the choice above leads to a better approximation and hence will converge to the true solution faster .it is also easily seen that for the mr choice of the quadratic approximation where , the matrix under inversion in ( [ equ_x_final ] ) is fixed hence , the inversion can be computed once and cached so that the update step in subsequent iterations can be fast .the generalized admm for the specific case being considered can be stated as follows : consider an admm algorithm that solves the convex problem ( [ equ_robust_opt ] ) via the updates ( [ equ_x_final2 ] ) , ( [ equ_z_step ] ) , ( [ equ_u_step ] ) .denote as the exact solution of ( [ equ_x_step ] ) , and as the approximate of ( [ equ_x_step ] ) via ( [ equ_x_final2 ] ) .if the sequence is summable , i.e. 
, , then the above updates will generate a sequence that converge to the true solution of ( [ equ_robust_opt ] ) .next , we discuss the convergence stopping condition of the proposed generalized admm algorithm .when the update steps are solved exactly , the existing admm theory states that the penalty parameter affects both the primal residual ( defined as ) , and the primal residual ( defined as ) in an opposite manner : a large tends to generate a small primal residual and a large dual residual and vice versa .thus , selecting the optimal penalty parameter is typically a trade - off between primal and residual residuals with an admm algorithm , and generally works for most cases .however , more emphasis should be made to the primal residual in the case of the proposed generalized admm algorithm because the update step of the primal variable is not solved exactly .this will ensure that the approximation error in the primal variable is promptly compensated by the dual update , at the small sacrifice in convergence rate due to the residual error being slightly larger .intensive numerical studies suggest that a value for of between 2 and 5 for works rather well in many cases .we shall examine this in more detail in the experimental section , where we use . for stopping condition, we terminate the algorithm when the primal and dual variables are sufficiently small . for standard settings of absolute and relative tolerancesplease see .the fista and admm algorithms for robust cs presented tackle the optimization from slightly different angles . whilst fista solves the problem by replacing the robust cost function with a simpler quadratic approximation that decouples the variables , the admm decouples the regularization norm via operator splitting . whilst fista has only one approximation ,admm involves operator splitting _ and _ quadratic approximation at the step that updates .thus it appears that fista may have a convergence advantage due to being simpler and having less tuning requirements .however , numerical experience indicates that for a given tolerance , the admm algorithm is actually faster than fista in terms of both number of iterations or computational time to reach a given tolerance .this will be illustrated further in the experimental section .the advantage of admm is better realized when one needs to extend robust cs in similar ways as many extensions on the basic cs have been made in the literature .this is difficult , if not impossible , with the fista scheme .next , we discuss several possible extensions that can be simply achieved with the proposed admm algorithm . in some cases, one would like to impose additional affine constraints on the optimization problem .this could be of prior knowledge on the power modeling ( i.e. 
, when is known a priori ) and this could potentially improve stabilization of the cs solution .thus , the lagrangian ( [ equ_lagrangian1 ] ) could be altered as follows here , and are the dual variables for the equality constraints .again , by scaling the dual variables and we obtain thus , the admm update step for is the solution of the problem once again , if this step is to be solved approximately using a quadratic majorization with as discussed previously then it can be shown that where .it can be shown that the updates step for remains the same as ( [ equ_z_final ] ) except that and are replaced with and respectively .finally , the updates of the dual variables are just like the basic admm algorithm , convergence is determined when both the primal and dual residuals are sufficiently small . whilst the dual residual is as before , i.e. , , there are effectively two residual vectors and . depending on the desired accuracy requirement of a particular application, the stopping criterion can be determined accordingly ( see ) . in certain situations , one may wish to impose regularization on the solution of the recovery .such a motivation may arise from the fact that the absolute sparse model may not be realistic , and thus it is more desirable to consider even in the sparse case , an additional quadratic regularization with a small could improve numerical stability against rank deficiency of .in the case of quadratic loss function , i.e. , , this is known as the elastic - net .thus , the proposed formulation could be interpreted as a robust version of the elastic - net .the robust cs formulation is treated a special case when . for the original elastic - net, it is easily recognized that a simple algebra can convert it to a lasso ( or cs ) form , and thus it can be solved with many efficient -regularization algorithms . for the proposed robust elastic - net, it is not possible because of the loss function being not quadratic .however , it is trivial to show that it is possible to modify the fista and generalized admm algorithms discussed in the previous section to cater for this additional regularization term . indeed, this regularization term only affects the update step of . in both fista and generalized admm, the majorization is a quadratic function and thus absorbing this extra quadratic term is straightforward .for example , in the case of the fista algorithm , we need to solve ( c.f .( [ equ_fista ] ) which is equivalent to which is of the same form and this induces the soft - thresholding shrinkage operation .likewise , in the case of the generalized admm algorithm , we need to to solve ( c.f ( [ equ_x_mm ] ) ) and thus this has only a slight modification compared with ( [ equ_x_final ] ) thus , extension to mixed - norm regularization is straightforward of the proposed admm algorithm . in the original robust cs paper ,the huber loss is selected .this is suitable for impulsive noise being modeled as a contaminated mixture .however , the robust cs framework is not necessarily restricted to the huber loss function and indeed many loss functions in the robust statistics can be used to cater for different noise types .one particular interest is the -norm loss function , which is optimal when the impulsive noise is modeled as a cauchy distribution . in this case , and thus it is desirable to solve we note that the fista algorithm is not easily derived , because the loss function is not differentiable . 
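before turning to that non-differentiable case, the generalized admm iteration for the basic huber-loss problem derived earlier can be sketched as follows. the modified-residual pseudo-data used in the approximate x-update is one common form of that majorisation and may differ in detail from the paper's; soft and psi are the helpers from the previous sketches, and the matrix to be inverted is fixed, so its cholesky factorisation is computed once and cached.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def admm_robust_cs(A, y, lam, k, rho=2.0, n_iter=300):
    # generalized admm for the l1-regularised huber-loss problem
    n = A.shape[1]
    fac = cho_factor(A.T @ A + rho * np.eye(n))    # cached factorisation
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(n_iter):
        y_tilde = A @ x + psi(y - A @ x, k)        # modified-residual pseudo-data
        x = cho_solve(fac, A.T @ y_tilde + rho * (z - u))   # approximate x-update
        z = soft(x + u, lam / rho)                 # shrinkage z-update
        u = u + x - z                              # scaled dual update
    return z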
to overcome the difficulty associated with two parts of the objective function that are both non - differentiable , we propose to apply the operator splitting mechanism of the admm framework twice .specifically , we introduce two additional variables and and rewrite the formulation as thus , the augmented lagrangian is with the scaled dual variables and , we can rewrite with this form , the updates for the variables are easily computed under the admm principle . for , the update solves the problem which yields the exact solution for both and , it is easily recognized that the update steps are simple soft - thresholding operations . for ,the update step solves where .likewise , for the update step solves they both have a similar form as ( [ equ_z_step ] ) , and thus from ( [ equ_z_final ] ) we deduce ( c.f . ( [ equ_shrinkage ] ) ) as the updates for and .finally , the dual updates are the stopping criterion is when the residual vectors are sufficiently small , including , , , and .the recent literature on cs also reveals that the basic sparsity recovery scheme can be improved if one exploits further domain knowledge .such an exploitation could be based on the constraint of the sparsity models .extensions , such as model - based cs and group sparsity , are key examples of the exploitation that can effectively reduce the cs requirements for a comparable recovery error when compared with conventional cs . here, we focus on a slight variation where there are multiple cs tasks to be performed : there are multiple cs measurements , each follows the model .haar wavelet coefficients - multi random bars ] in the image processing context , this could arise in , for example , compressed sensing of multiple video images . in these circumstances, there many be similarities between images .for example , moving images likely consist of relatively same large background and small moving objects .thus , the sparse representation of these original images may have similar sparse coefficients representing the common background part ( see fig .[ fig_mt_example ] for an illustration of a sequence of random bars images used later in the experiment ) .for that reason , it follows from the existing results on advanced cs that exploiting the shared structure between tasks is likely to improve cs recovery compared to the case where the tasks are performed independently .denote as ] the collection of cs measurements .extending the single - task robust cs , the multi - task robust cs can be formulated as follows here , and where s denote the columns of .clearly the loss term is the same , whilst for the regularization terms , we seek sparsity along the columns of but denseness along the rows of .this clearly reflects the prior assumption that sparse coefficients of the common parts are likely to be similar , hence the corresponding rows of should be dense , whilst it is sparse column - wise to respect the single - task cs s assumption .when is a quadratic loss function , this is a special matrix formulation of group lasso in the statistics literature .we now show that it is possible to extend both the fista and generalized admm algorithms to cater for this formulation . 
before doing so, we present a generalization of the soft - thresholding shrinkage operation as follows : the optimization problem has the solution this result can be proved by simple geometrical arguments .indeed , denote as the solution of ( [ equ_gs ] ) , then we consider all points such that .it turns out that these points are lying on the ball with center at and radius . among these points ,only the point that satisfies , i.e. , intersection of the ball and the vector , will have minimum norm , which minimizes the second term in ( [ equ_gs ] ) .substituting this into ( [ equ_gs ] ) yields the form of the soft - thresholding shrinkage problem , for which the result is obtained after simple manipulations . generalizing ( [ equ_fista ] ) for the multi - task settings , denote as ] , .the stopping criterion is when all primal and dual residual matrices are small , they include like the single - task case , one should set sufficiently large to obtain a smooth decrease of the objective function . _ further extensions ._ we have presented some fundamental extensions of the cs formulation . under the admm frameworks, it appears that it is possible to consider extensions based on the combination of the basics extensions presented .for example , the loss could be used with affine constraint or in multi - task setting , etc .such extensions will be worthwhile investigation for future work ._ regularization path ._ in practice , the optimal value of the regularization is not known in advance , and thus one needs to select a proper value to do robust cs recovery .such a problem is known in statistics as model selection .typically , one needs to compute the recovery along the regularization path , and select the one which meets the norm constraint .this is discussed in detail in .essentially , some estimates of the noise statistics must be obtained in order to construct the bound on the residual .it is well - known that there exist a above which the solution is zero . for decreasing values of ,the residual will become smaller whilst the recovery becomes denser .the optimal is the maximum value of such that the bound constraint on the residual vector is met . in cs recovery, this happens when , whilst in robust cs recovery , pham and venkatesh have suggested , which is a generalization of the cs selection criteria for the robust case . in our implementation , we combine a coarse grid search and a fine bi - section search to find this optimal ( see fig . [ fig_reg_path ] for an illustration ) .regularization path of robust cs for random bars example ] _ cholesky factorization ._ as can be seen , most update step of in different admm variants involves the computation of the form where is an positive definite matrix .the matrix under inversion has a size of and it is large in image processing application .thus , it is inefficient to compute the inversion directly to obtain the update . a much more efficient approach is to use cholesky decomposition to achieve the goal .it is known from linear algebra that if is a positive definite matrix then it admits the factorization and thus can be efficiently computed by solving first , then , which can be written as . 
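returning briefly to the generalised shrinkage operation defined at the beginning of this discussion, in code it reduces to a block (vector) soft-thresholding; a minimal version, with our own naming, is given below before we continue with the factorisation.

import numpy as np

def block_soft(v, t):
    # vector soft-thresholding: shrink the euclidean norm of v by t,
    # returning the zero vector when the norm does not exceed t
    nv = np.linalg.norm(v)
    return np.maximum(1.0 - t / nv, 0.0) * v if nv > 0 else v

in the multi-task admm updates this operation is applied row-wise to the matrix of coefficients.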
for compressed sensing applications where is a fat matrix ,further exploitation can be made by reducing the dimension of the matrix for cholesky factorization .indeed , according to the matrix inversion lemma where .suppose that the cholesky factorization of is then we can avoid the direct inversion of by exploiting the fact that if then the matrix inversion lemma once again gives where .finally , we note that this cholesky factorization is independent of the regularization parameter and thus it can be cached for the whole regularization path to reduce computation .we examine the convergence property of the fista and admm algorithms and compare them with the previously proposed method in , which we refer to as nested robust cs algorithm due to the nature of the double loops inside that algorithm . asthe nested robust cs algorithm is dependent on the particular cs solver being used for the inner loop , we select the admm implementation as the cs solver because it provides the best computational accuracy and speed . note that pham and venkatesh used the ` l1_ls ` algorithm originally , which is known for high - accuracy but computationally expensive .however , numerical experience shows that the inner steps do not required to be solved with high accuracy .thus , the admm implementation as a cs solver for the nested robust cs algorithm is better overall . in this case, it can be seen that the computational complexity per iteration ( regardless of inner or outer ) in all compared algorithms are approximately the same : they all involve the computation of the majorization point and the soft - thresholding shrinkage operation . to compare the algorithms , we examine two aspects : the error versus the iterations and the computational time taken to achieve a particular tolerance . whilst the former indicates how fast an algorithm converges , the latter provides a much valuable insight for practical purpose . to do so , we let all algorithms run for sufficiently large number of iterations and measure the error ( with respect to the true value of the robust cs solution ) as iterations go on , and the computational time taken when the error reaches certain thresholds . for the admm - based cs solver used in the inner loop of the nested robust cs algorithm , we select the termination with relative tolerance of and absolute tolerance of ( see ) .this allows a reasonable convergence within the inner loops .we also choose the modified residual approach for nested robust cs as it is simpler without loosing convergence advantage .all algorithms are implemented in matlab , and roughly optimized .we revisit the random bars example in ( see also fig . [ fig_random_bars ] ) and the results of this study is shown in fig .[ fig_conv_random ] . in this example , the signal to noise ratio is 20db and the impulsive noise is modeled as a two - component gaussian mixture model where the there is 10% contamination whose variance is times that of the main component . 
here , the left subplot shows the reduction of the error versus the iterations , whilst the right plot shows the time taken to achieve the relative accuracy from initialized zeros ( as indicated by 1e0 ) to as small as of the initial error ( as indicated by 1e-10 ) .we note the error profile of the nested robust cs algorithm ranges considerably due to the fact that we measure with respect to the global solution of the outer loop and that within each cs inner loop the algorithm still converges normally .clearly the error profile plot indicates that the admm algorithm offers the best convergence speed per iteration , followed by the fista algorithm .for example , to achieve an accuracy of of the initial error , it only takes the admm algorithm less than 100 iterations , whilst the fista algorithm needs to spend more than 20 times , and the nested algorithm would need 200 times the number of iterations . in terms of the actual time taken to achieve a particular tolerance ,the right subplot further indicates the advantage of admm and fista algorithms over the nested one . in practice, one would be interested in the tolerance of between to , over which the admm and fista algorithms are observed to be 100 and 10 times faster than the nested algorithm respectively . in fig .[ fig_random_bars ] , we shows the actual image recovery of all compared methods , including the cs , the nested robust cs , the fista robust cs , and the admm robust cs algorithms on this random bars example .the original random bars image is shown on the top left subplot , whilst its haar wavelet coefficients are shown on the top right subplot . the results clearly show that all robust cs methods achieve an psnr of about 26db , which is 1.5db better that that of conventional cs recovery .we note that there is a very minor different between robust cs algorithms , due to different convergence termination conditions , which is unavoidable .next , we examine how much improvement can be made to robust cs if the power is known .the affine robust cs formulation is slightly different to the robust cs formulation in that additional constraint is imposed , and here we select and assume that is known .first , we examine the convergence behavior of the affine admm robust cs algorithm to solve this formulation by revisiting the random bars example . in this case, we select and let the algorithm run over sufficient number of iterations . the results are shown in fig .[ fig_conv_affine ] .again , the left subplot shows the absolute error against the iterations whilst the right subplots indicates computational time taken to reach a particular accuracy .compared with those of the admm robust cs algorithm , it can be clearly seen that the affine admm robust cs algorithm takes more time to reach .this is as expected because there are only minor changes to the update steps of the primal and dual variables .next , we examine the actual image recovery of affine robust cs formulation . once again , the random bars example is used and the recovered images are shown in fig .[ fig_affine ] . here , we compare with the robust cs formulation via the admm algorithm .the result indicates that there is a slight gain in the recovery , though it is rather little . 
as a result ,the recovered images look similar .next , we demonstrate the robust cs algorithm with loss function rather than the huber s loss function used in .this is useful in situations with very impulsive corruption , where the noise is best modeled by a cauchy distribution .to do so , we revisit the random bars example , but we use cauchy noise instead . for the admm robust cs algorithm ,the model selection criteria is the norm of the residual , rather than the huber s loss function to reflect the new formulation .other than that , all other experimental settings remain the same .first , we examine the convergence behavior of the admm robust cs algorithm with loss . fig .[ fig_conv_l1_admm ] shows the typical convergence behavior of the algorithm in terms of accuracy versus iterations ( left ) and computational time taken to reach certain accuracy ( right ) .it is observed that the convergence is slower with modest accuracy as compared with the formulation using huber s loss function .this is as expected from admm optimization theory due to an increasing number of variables to solve the loss formulation .nevertheless , modest accuracy might be sufficient for many practical situations .next , we examine image recovery quality in cauchy noise .[ fig_cauchy ] shows the image recovery for cs , robust cs using nested , admm , and -regularized admm algorithms respectively . due to cauchy noise, it is of interest to note that the cs completely fails with no meaningful pattern recovered .the other nested and admm algorithm still maintain reasonably recovery quality with an psnr of around 21db .the -regularized admm algorithm achieves the best result with an psnr of 25db , a significant improvement compared with the other two robust cs algorithms .it is also noted that the computational time of the -regularized admm algorithm is almost equal to that of the admm robust cs algorithm due to the fact that the update steps of the two algorithms have similar complexity . though the loss is primarily used for noise modeling as the cauchy distribution, it is still of interest to examine how it behaves if the noise is modeled as from the gaussian mixture as used previously .we again revisit the settings in the previous experiment and the result is shown in fig .[ fig_gmm ] .surprisingly , the -loss formulation provides a considerable psnr gain of 4db over the huber s loss robust cs formulation .thus , despite having less favorable convergence properties , the robust cs formulation with loss still appears a better performer for practical image recovery .finally , we demonstrate the usefulness of the multi - task robust cs formulation when a sequence of 10 compressed images corrupted by impulsive noise need to be recovered . whilst each image in the sequencecan be recovered separately , the multi - task robust cs formulation suggests that exploiting the shared structure between the tasks may provide better recovery . to do so, we consider a sequence of random bars frames shown in the top row of fig .[ fig_mt_recovery ] . 
here, there are common static random bars and a moving block across the frames .obviously , the wavelet coefficients for common static bars are shared between the cs tasks .only the coefficients corresponding to the moving block distinguish between tasks .this is clearly illustrated in fig .[ fig_mt_example ] which shows an image plot of haar wavelet coefficients of all 10 random bars image in a sequence : the horizontal lines correspond to common coefficients .the settings for the recovery are the same as previous experiments . for robust cs, we select the admm algorithm , and similarly for multi - task robust cs we also select the corresponding multi - task admm algorithm .the first 4 recovered images are shown in fig .[ fig_mt_recovery ] : the second row shows cs recovery , the third row shows robust cs recovery , and finally the last row shows multi - task robust cs recovery .the actual psnrs for every frame are shown on fig .[ fig_mt_psnr ] . here , we observe clearly that , on average , the multi - task robust cs formulation does provide a significant improvement over the robust cs formulation , both of which outperform cs recovery considerably .we have presented more computationally efficient and extendable approaches to the recently proposed robust cs algorithm . we have also extended robust cs formulation in a number of ways , including affine constraints , -loss function , and multi - task formulation . for improving computational efficiency of robust cs, we found that the ( generalized ) admm robust cs algorithm is the best , then followed by the fista robust cs algorithm .we also found that imposing affine constraint can provide improvement , though slightly .the striking result is that loss formulation for robust cs seems to offer considerable gain over the huber s loss formulation , despite the fact that its convergence seems slower . finally , in the case where one needs to robustly recover a sequence of compressed images , the multi - task formulation is proved to provide additional advantages in terms of both psnr output and computational speed .we start from the definition of the lipchitz constant as a term such as as there are two possible scenarios , , and and from the definition of and , we immediately have where is defined as the minimum constant such that let .for arbitrary and we construct such that it is a convex combination of and , so that and . then using triangle inequalities and definitions of and , we have the proof immediate follows from ( [ equ_sup_lf ] ) and ( [ equ_sup12 ] ) .s. boyd , n. parikh , e. chu , b. peleato , and j. eckstein , _ foundations and trends in machine learning_.1em plus 0.5em minus 0.4emnow publisher , 2011 , vol . 3 , no . 1 , ch . distributed optimization and statistical learning via the alternating direction method of multipliers , pp .e. candes , j. romberg , and t. tao , `` robust uncertainty principles : exact signal reconstruction from highly incomplete frequency information , '' _ ieee transactions on information theory _ , vol .52 , no . 2 ,489509 , 2006 .r. chan , c .- w . ho , and m. nikolova , `` salt - and - pepper noise removal by median - type noise detectors and detail - preserving regularization , '' _ ieee transactions on image processing _ , vol . 14 , no . 10 , pp . 14791485 , 2005 .d. donoho and g. 
reeves , `` the sensitivity of compressed sensing performance to relaxation of sparsity , '' in _ proceedings of the ieee international symposium on information theory ( isit)_.1em plus 0.5em minus 0.4emieee , 2012 , pp .22112215 .j. eckstein and d. bertsekas , `` on the douglas - rachford splitting method and the proximal point algorithm for maximal monotone operators , '' _ mathematical programming _ , vol .55 , no . 1 ,pp . 293318 , 1992 .z. guo , t. wittman , and s. osher , `` l1 unmixing and its application to hyperspectral image enhancement , '' in _ proceedings spie conference on algorithms and technologies for multispectral , hyperspectral , and ultraspectral imagery xv _ , vol . 7334 , 2009 , pp .73341m73341 m .t. hashimoto , `` bounds on a probability for the heavy tailed distribution and the probability of deficient decoding in seequential decoding , '' _ ieee transactions on information theory _51 , no . 3 , pp .9901002 , 2005 .kim , k. koh , m. lustig , s. boyd , and d. gorinevsky , `` a method for large - scale -regularized least squares , '' _ ieee journal on selected topics in signal processing _ , vol . 4 , no . 1 ,pp . 606617 , 2007 .n. rao , r. nowak , s. wright , and n. kingsbury , `` convex approaches to model wavelet sparsity patterns , '' in _ proceedings of the ieee international conference on image processing_.1em plus 0.5em minus 0.4emieee , 2011 , pp .19171920 .m. yuan and y. lin , `` model selection and estimation in regression with grouped variables , '' _ journal of the royal statistical society : series b ( statistical methodology ) _ , vol .68 , no . 1 ,4967 , 2006 .
|
compressed sensing (cs) is an important theory for sub-nyquist sampling and recovery of compressible data. recently, it has been extended by pham and venkatesh to cope with the case where corruption of the cs measurements is modeled as impulsive noise. the new formulation, termed robust cs, combines robust statistics and cs into a single framework to suppress outliers in the cs recovery. to solve the newly formulated robust cs problem, pham and venkatesh suggested a scheme that iteratively solves a number of cs problems, the solutions of which converge to the true robust compressed sensing solution. however, this scheme is rather inefficient as it has to use existing cs solvers as a proxy. to overcome this limitation of the original robust cs algorithm, we propose in this paper to solve the robust cs problem directly, and we derive more computationally efficient algorithms by following the latest advances in large-scale convex optimization for non-smooth regularization. furthermore, we extend the robust cs formulation to various settings, including additional affine constraints, an l1-norm loss function, mixed-norm regularization, and multi-tasking, so as to further improve robust cs. we also derive simple but effective algorithms to solve these extensions. we demonstrate that the new algorithms provide a significant computational advantage over the original robust cs algorithm, and that they effectively solve more sophisticated extensions where the original methods simply cannot. we demonstrate the usefulness of the extensions on several cs imaging tasks.
|
nested lattice coding for communication over gaussian networks has received considerable attention in recent times .it has been shown that nested lattice codes with closest lattice - point decoding can achieve the capacity of the power constrained additive white gaussian noise ( awgn ) channel .they are also known to achieve the capacity of the dirty - paper channel . inspired by these results ,they have been applied to design protocols for reliable communication over wireless gaussian networks .they have been used with much success for the interference channel , the gaussian bidirectional relay channel , and generalized to the problem of physical layer network coding for multiuser gaussian channels .nested lattice coding has also been used for security in wiretap channels and bidirectional relay networks . for a more comprehensive treatment of lattices and their applications in communication problems ,see . constructing lattices that have good structural properties is a problem that has been studied for a long time .poltyrev studied lattices in the context of coding for reliable transmission over the awgn channel without power constraints , and showed that there exist lattices which are `` good '' for awgn channel coding , i.e. , achieve a vanishingly small probability of error for all sufficiently small values of the noise variance .in addition to coding for the awgn channel , lattices were also studied in prior literature in the context of several other problems such as sphere packing , sphere covering , and mse quantization . in the sphere packing problem , we want to find an arrangement of non - intersecting spheres of a given radius that maximizes the average number of spheres packed per unit volume . on the other hand , the covering problem asks for an optimal covering of space by spheres of a given radius , that minimizes the average number of spheres per unit volume . in the mse quantization problem ,we want to find a minimal set of codewords which will ensure that the average mean squared error / distortion is less than a specified quantity .the use of lattices to generate good sphere packings , sphere coverings , and quantizers is a well - studied problem .finding lattices with good stuctural properties is of particular importance in designing lattice codes that use nested lattice shaping for power constrained gaussian channels .a poorly designed shaping region leads to loss in transmission rates .it was shown in that using nested lattice codes , where the fine lattices are good for awgn channel coding ( in the sense of poltyrev s definition ) and the coarse lattices are good for mse quantization , we can achieve the capacity of the power constrained awgn channel .furthermore , the rates guaranteed by for bidirectional relaying and the compute - and - forward protocol are achievable using nested lattices that satisfy the aforementioned properties .it was shown that if in addition to the above properties , the duals of the coarse lattices are also good for packing , then a rate of ( where snr denotes the signal - to - noise ratio ) can be achieved with perfect ( shannon ) secrecy over the bidirectional relay . instead of studying arbitrary lattices ,it is easier to study lattices that have a special structure , i.e. 
, lattices constructed by lifting a linear code over a prime field to .one such technique to obtain lattices from linear codes is construction a , where the lattice is obtained by tessellating the codewords of the linear code ( now viewed as points in ) across the euclidean space .it was shown in that if we pick a linear code uniformly at random , then the resulting construction - a lattice is asymptotically good for covering , packing , mse quantization , and awgn channel coding with high probability .the problem with general construction - a lattices is the complexity of closest lattice - point decoding .there is no known polynomial - time algorithm for decoding construction - a lattices obtained from arbitrary linear codes . a natural way of circumventing this is to restrict ourselves to ldpc codes to construct lattices .we can then use low - complexity belief propagation ( bp ) decoders instead of the closest lattice - point decoder , which has exponential complexity .such lattices , termed low - density construction - a ( lda ) lattices , were introduced in .simulation results in showed that these lattices perform well with bp decoding . while there is no formal proof that these lattices are good under bp decoding , it was proved in that lda lattices are good for awgn channel coding , and subsequently shown in that nested lda lattices achieve the capacity of the power constrained awgn channel with closest lattice - point decoding . in this paper , we show that lda lattices have several other goodness properties .we will prove that a randomly chosen lda lattice ( whose parameters satisfy certain conditions ) is good for packing and mse quantization with probability tending to as .in addition , we will show that the dual of a randomly chosen lda lattice is good for packing with probability tending to as .this means that the capacities of the power constrained awgn channel and the dirty paper channel , the rates guaranteed by the compute - and - forward framework , and the rates guaranteed by for perfectly secure bidirectional relaying can all be achieved using nested lda lattices ( with closest lattice - point decoding ) .however , showing that the aforementioned results can all be achieved using belief propagation decoding still remains an open problem .even though other awgn - good lattice constructions that permit low - complexity decoding algorithms have been proposed , this is the first instance where such a class of lattices has been shown to satisfy other goodness properties , and this is the main contribution of this work .the rest of the paper is organized as follows : we describe the notation and state some basic definitions in the next two subsections .section [ sec : lda_ensemble ] describes the ensemble of lattices , and the main result is stated in theorem [ theorem : lda_simultaneousgoodness ] .some preliminary lemmas are stated in section [ sec : prelim_lemmas ] .this is then followed by results on the various goodness properties of lattices in the lda ensemble . in section [ sec : channelcoding ] , the goodness of these lattices for channel coding is described .this is followed by section [ sec : ldapacking ] on the packing goodness of lda lattices .
in section [ sec : lda_msequantization ] , we discuss sufficient conditions for goodness of these lattices for mse quantization .we then prove the goodness of the duals for packing in section [ sec : lda_dualpacking ] , and conclude with some final remarks in section [ sec : remarks ] .some of the technical proofs are given in the appendices .the set of integers is denoted by , and the set of reals by . for a prime number , the symbol denotes the field of integers modulo .matrices are denoted by uppercase letters , such as , and column vectors by boldface lowercase letters , such as .the ( or euclidean ) norm of a vector is denoted by .the support of a vector is the set of all coordinates of which are not zero , and is denoted by .if is a finite set , then is the number of elements in .the same notation is used for the absolute value of a real number ( ) , but the meaning should be clear from the context .if and are two subsets of , and are real numbers , then is defined to be .similarly , for , we define .we define to be the ( closed ) unit ball in dimensions centered at .for , and , the dimensional closed ball in centered at and having radius is denoted by .we also define , the volume of a unit ball in dimensions . for , denotes the binary entropy of .if is a sequence indexed by , then we say that if as .we will state some basic definitions related to lattices .the interested reader is directed to for more details .let be a full - rank matrix with real - valued entries .then , the set of all integer - linear combinations of the columns of forms an additive group and is called an -dimensional lattice , i.e. , .the matrix is called a _ generator matrix _ for . the _ dual lattice _ of , denoted by , is defined as . if is a generator matrix for , then is a generator matrix for .the set of all points in for which the zero vector is the closest lattice point ( in terms of the norm ) , with ties decided according to a fixed rule , is called the _ fundamental voronoi region _ , and is denoted by .the set of all translates of by points in partitions into sets called _voronoi regions_. the _ packing radius _ of , , is the radius of the largest -dimensional open ball that is contained in the fundamental voronoi region .the _ covering radius _ of , , is the radius of the smallest closed ball that contains .let be the volume of the fundamental voronoi region .then , the _ effective radius _ of is defined to be the radius of the -dimensional ball having volume , and is denoted by .these parameters are illustrated for a lattice in two dimensions in fig .[ fig : latticeparam ] .if are -dimensional lattices satisfying , then is said to be _ nested _ within , or is called a _ sublattice _ of .the lattice is called the _ fine lattice _ , and called the _ coarse lattice_.the quotient group has elements , and the above quantity is called the _ nesting ratio_. 
this is equal to the number of points of within .we now formally define the `` goodness '' properties that we want lattices to satisfy .a sequence of lattices , ( indexed by the dimension , ) , is _ good for packing _ if . lattices have been well - studied in the context of vector quantization , where the aim is to obtain a codebook of minimum rate while ensuring that the average distortion ( which is the mean squared error in this case ) is below a threshold . the _ normalized second moment per dimension _ of an -dimensional lattice is defined as this is equal to the normalized second moment of a random variable ( the error vector in the context of quantization ) which is uniformly distributed over the fundamental voronoi region of , and we want this to be as small as possible .the normalized second moment of any lattice is bounded from below by that of an -dimensional sphere , which is equal to ( see e.g. , ) .a sequence of lattices is said to be _ good for mse quantization _ if as .we also want to use lattices to design good codebooks for reliable transmission over additive noise channels .classically , a lattice was defined to be good for awgn channel coding if , with high probability , the closest lattice - point decoder returned the actual lattice point that was transmitted over an awgn channel without power constraints .this notion was made slightly more general in , using the notion of semi norm - ergodic noise : a sequence of random vectors ( where is an -dimensional random vector ) having second moment per dimension ] , the probability that the lattice point closest to is not goes to zero as , i.e. , \to 0 \text { as } n\to\infty,\ ] ] as long as for all sufficiently large .an ldpc code can be defined by its parity check matrix , or by the corresponding edge - labeled tanner graph . a -regular bipartite graph is defined as an undirected bipartite graph with every left vertex ( i.e. , every vertex in ) having degree , and every right vertex ( i.e. , every vertex in ) having degree .the vertices in are also called the variable nodes , and those in are called parity check ( or simply , check ) nodes . if is a subset of ( resp . ) , then is the neighbourhood of , defined as ( resp . ) .throughout this paper , and are real numbers chosen so that , and . for , define . for each , let ( which is a sequence indexed by ) be the smallest prime number greater than or equal to , and denote the field of integers modulo .we study the constant - degree lda ensemble introduced in .specifically , let denote a -regular bipartite graph ( ) , with variable nodes , check nodes , and satisfying .the graph is required to satisfy certain expansion properties , which are stated in the definition below .let be positive real numbers satisfying , and .the graph is said to be -good if 1 . if , and , then . 2 . if , and , then . 3 . if , and , then , . 4 . if , and , then .[ defn : goodgraphs ] the following lemma by di pietro asserts that a randomly chosen graph satisfies the above properties with high probability .let be chosen uniformly at random from the standard ensemble ( definition 3.15 ) of -regular bipartite graphs with variable nodes .let and be positive constants . if satisfies then the probability that is not -good tends to zero as .[ lemma : goodgraphs ] let , and be two constants , and .let be the smallest prime number greater than ., and for convenience , but choosing to be the smallest prime number greater than , and will not change any of the results .
] let .let us pick a -regular bipartite graph with variable nodes . throughout the paper ,we assume that the parameters of satisfy the hypotheses of lemma [ lemma : goodgraphs ] , and that is -good .let denote the parity check matrix corresponding to the tanner graph .we describe the lda ensemble obtained using the tanner graph , which will henceforth be called the lda ensemble .we construct a new matrix , , by replacing the s in with independent random variables uniformly distributed over . for and , let be i.i.d .random variables , each uniformly distributed over , and let be the entry of .then , the entry of , denoted , is given by .therefore , is equal to if is , and zero otherwise .for example , if then note that the `` skeleton matrix '' is fixed beforehand , and the only randomness in is in the coefficients .the matrix is therefore the parity check matrix of an -length regular ldpc code over with high probability ( if ) .the lda lattice is obtained by applying construction a to the code , i.e. , .equivalently , if denotes the natural embedding of into , then . and .,width=264 ] for a given ,let us define to be the set of all variable nodes that participate in the check equations for which the entry of ( i.e. , ) is nonzero .formally , .equivalently , iff there exists such that and .this is illustrated in fig .[ fig : su_graph ] .the rest of the article will be dedicated to proving the following theorem : let , , suppose that satisfies ( [ eq : dta_v_condns ] ) , and the corresponding is -good .let if we pick at random from the lda ensemble , then the probability that is simultaneously good for packing , channel coding , and mse quantization tends to as .moreover , the probability that is also simultaneously good for packing , tends to as .[ theorem : lda_simultaneousgoodness ] we will prove each of the goodness properties in separate sections . the conditions on the parameters of the lattice to ensure goodness for channel coding are stated in theorem [ theorem : lda_awgn ] .goodness for packing is discussed in corollary [ theorem : lda_packing ] , and mse quantization in theorem [ theorem : lda_msequantization ] .sufficient conditions for the packing goodness of the duals of lda lattices are given in theorem [ theorem : lda_dualpacking ] .the above theorem can then be obtained by a simple application of the union bound .but before we proceed to the main results , we will discuss some useful lemmas that we will need later on in the proofs .in this section , we record some basic results that will be used in the proofs . recall that is the volume of a unit ball in dimensions .we have the following upper bound on the number of integer points within a ball of radius : let , , and denote the unit ball in dimensions . then , furthermore , if , then [ lemma : zn_cap_rb ] recall the randomized construction of the parity check matrix from the -good graph , described in the previous section . also recall that for , is the set of all variable nodes that participate in the check equations for which .we have the following result which describes the distribution of .let , and .then , =\begin{cases } \frac{1}{p^{|\su| } } & \text { if } \mathrm{supp}(\x)\subset \su \\ 0 & \text{else . } \end{cases}\ ] ] [ lemma : prhu_x ] let .the entry of is given by .consider any . from the definition of , it is easy to see that the variable node does not participate in any of the parity check equations indexed by . hence , whenever .therefore , . 
on the other hand , if , then there exists at least one such that .so , , being a nontrivial linear combination of independent and uniformly distributed random variables , is also uniformly distributed over .moreover , it is easy to see that the s are independent . therefore , =\begin{cases } 1/p & \text { if } j\in \su\\ 0 & \text { if } j\notin \su \text { and } a\neq 0\\ 1 & \text { if } j\notin \su \text { and } a=0 .\end{cases}\ ] ] this completes the proof .recall that defines a linear code over , where is the smallest prime greater than .the following lemma , proved in appendix a , gives a lower bound on the probability of a randomly chosen not having full rank .if for some , then \leq n^{-(2\lba+\dta)}(1+o(1)).\ ] ] [ lemma : lda_fullrank ] we now proceed to prove the various goodness properties of lda lattices .recall that a sequence of lattices is good for coding in presence of semi norm - ergodic noise if for any sequence of semi norm - ergodic noise vectors , with second moment per dimension equal to ] as as long as .note that if the noise is assumed to be i.i.d .gaussian , then the above definition is weaker than the definition of awgn ( or poltyrev ) goodness defined in , since the probability ] as . to prove that is good for coding , it is then enough to show the absence of nonzero lattice points within a ball of radius around , for all and all sufficiently large . in ,di pietro proved the following statement , thus establishing theorem [ theorem : lda_awgn ] , and hence showing that lda lattices are good for channel coding : for every , \to 0 \text { as } n\to\infty , \label{eq : lda_awgngood}\ ] ] where , and as .recall that is good for packing if the packing goodness of lda lattices follows as a corollary to theorem [ theorem : lda_awgn ] .let be a lattice chosen uniformly at random from a lda ensemble , where is -good , and ( [ eq : dta_v_condns ] ) is satisfied .furthermore , let then , the probability that is good for packing tends to 1 as .[ theorem : lda_packing ] let us choose , where is a quantity that goes to as .we want to prove that \to 0 \text { as } n\to\infty.\ ] ] it is enough to show that the probability of any nonzero integer point within belonging to goes to zero as , i.e. , \to 0 \text { as } n\to\infty\ ] ] this requirement is similar to ( [ eq : lda_awgngood ] ) , and the rest of the proof of packing goodness of lda lattices follows , _ mutatis mutandis _ , on similar lines as that for goodness for channel coding .in nested lattice coding for power constrained transmission over gaussian channels , the codebook is generally the set of all points of the fine lattice within the fundamental voronoi region of the coarse lattice .hence , the fine lattice determines the codeword points , while the coarse lattice defines the shaping region . in order to maximize the rate for a given power constraint , we want the shaping region to be approximately spherical .the loss in rate ( penalty for not using a spherical shaping region ) is captured by the normalized second moment , , of the coarse lattice , and in order to minimize this loss , we want to be as close to the asymptotic normalized second moment of a sphere as possible . as defined in section [ sec : basicdefs ] , is good for mse quantization if as . 
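to make the normalized second moment concrete , the following sketch ( our own illustration , not part of the paper ; the function names and the choice of test lattice are assumptions ) estimates it by monte carlo for any lattice that comes with an exact closest - point routine . it relies on the standard fact that the quantization error of a point drawn uniformly from a fundamental parallelepiped is uniform over the fundamental voronoi region .

```python
import numpy as np

def nsm_estimate(quantize, B, num_samples=20000, rng=np.random.default_rng(0)):
    """monte carlo estimate of the normalized second moment of the lattice
    generated by the columns of B, given an exact closest-point map `quantize`.
    points drawn uniformly from the fundamental parallelepiped have quantization
    errors that are uniform over the fundamental voronoi region."""
    n = B.shape[0]
    U = rng.random((num_samples, n)) @ B.T        # uniform over the parallelepiped
    E = U - quantize(U)                           # error vectors
    vol = abs(np.linalg.det(B))
    return float(np.mean(np.sum(E ** 2, axis=1)) / (n * vol ** (2.0 / n)))

# the integer lattice Z^4: the closest point is coordinate-wise rounding and the
# true value is 1/12 ~ 0.0833; a quantization-good lattice approaches
# 1/(2*pi*e) ~ 0.0585 in high dimensions
print(nsm_estimate(np.round, np.eye(4)))
```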
in this section, we will prove the following result : let and .fix suppose that satisfies the conditions of lemma [ lemma : goodgraphs ] , and is -good .furthermore , let let be randomly chosen from a lda ensemble .then , the probability that is good for mse quantization tends to as .[ theorem : lda_msequantization ] to prove the theorem , we will show that for every positive , and all sufficiently large , \leq \delta_2 .\label{eq : msequantization_main}\ ] ] since for all , the above statement guarantees the existence of a sequence of lattices , , for which as .our proof of the above inequality is based on the techniques used in and . for a lattice , and , we define to be the euclidean distance between and the closest point in to . for ease of notation ,let us define .our proof of inequality ( [ eq : msequantization_main ] ) , and hence theorem [ theorem : lda_msequantization ] , will make use of the following lemmas , which are proved in appendix b. suppose that the hypotheses of theorem [ theorem : lda_msequantization ] are satisfied .let be drawn uniformly at random from a lda ensemble , and be a random vector uniformly distributed over .then , \leq \be_{\l , x}\left[\frac{d^{2}(x,\l)}{n(\textnormal{vol}(\l))^{2/n}}\bigg| h\text { is full rank}\right ] + o(1 ) .\label{eq : msegood_lemma1}\ ] ] [ lemma : msegood_1 ] suppose that the hypotheses of theorem [ theorem : lda_msequantization ] are satisfied .let .there exists a so that for every , \leq \frac{1}{n^{2\lba r+\dta}}(1+o(1 ) ) .\label{eq : dxl_bound}\ ] ] [ lemma : dxl_bound ] let be a random vector uniformly distributed over , and be uniformly distributed over .then , =\mathbb{e}_{u}\mathbb{e}_{\l}[d^2(u,\l)|h\text { is full rank}].\ ] ] [ lemma : elx_eul ] recall that to prove the theorem , it is enough to prove inequality ( [ eq : msequantization_main ] ) . to this end, we will show that the first term in ( [ eq : msegood_lemma1 ] ) tends to as .we will use lemma [ lemma : dxl_bound ] to bound this term .recall that .since ( [ eq : dxl_bound ] ) holds for all , we can say that for any random vector ( having density function ) over , we have & = \int_{\r^n } \pr\big[d(\u,\l)>r(1+n^{-\omega})\big|h\text { is full rank}\big]f(\u)d\u & \notag \\ & \leq n^{-(2\lba r+\dta)}(1+o(1 ) ) .& \notag\end{aligned}\ ] ] let us define . for any , and any construction - a lattice , we have . then , for any distribution on , & \leq \rho^2 \pr\big[d(u,\l)\leq \rho\big|h\text { is full rank}\big ] & \notag \\ & \qquad\qquad + \frac{p^2n}{4}\pr\big[d(u,\l)>\rho\big|h\text { is full rank}\big ] & \notag \\ & \leq \rho^2\left ( 1 + \frac{p^2 n}{4\rho^2}\frac{1}{n^{2\lba r+\dta}}(1+o(1))\right ) .&\notag\end{aligned}\ ] ] substituting , & \leq \rho^2 \left ( 1+n^{2\lba+1 } \frac{2\pi e}{4n^{2\lba(1-r)+1}}\frac{1}{n^{2\lba r+\dta}}(1+o(1 ) ) \right ) & \notag\\ & = \rho^2 \left(1 + \frac{\pi e}{2n^{\dta}}(1+o(1))\right)&\notag \\ & = r^2 ( 1+o(1 ) ) . &\label{eq : eul_bound}\end{aligned}\ ] ] from ( [ eq : eul_bound ] ) and lemma [ lemma : elx_eul ] , we have \leq r^2 ( 1+o(1)).\ ] ] recall that denotes the volume of an -dimensional unit ball . using stirling s approximation , we get , therefore , and hence ,\leq \frac{1}{2\pi e}(1+o(1)).\ ] ] using this , and lemma [ lemma : msegood_1 ] , we can write \leq \frac{1}{2\pi e } ( 1+\delta(n ) ) , \label{eq : msegood_1}\ ] ] where is a quantity that goes to as .we also have for all . 
for any ,we can write &\geq \frac{1}{2\pi e}\pr\left[\frac{1}{2\pi e}<g(\l)\leq \frac{1}{2\pi e}+\gamma\right ] + \left(\frac{1}{2\pi e}+\gamma\right)\pr\left[g(\l ) > \frac{1}{2\pi e}+\gamma\right]&\notag \\ & = \frac{1}{2\pi e}\left ( 1-\pr\left[g(\l ) > \frac{1}{2\pi e}+\gamma\right]\right ) + \left(\frac{1}{2\pi e}+\gamma\right)\pr\left[g(\l ) > \frac{1}{2\pie}+\gamma\right]&\notag \\ & = \frac{1}{2\pi e}+ \gamma \pr\left[g(\l ) >\frac{1}{2\pi e}+\gamma\right ] , & \notag\end{aligned}\ ] ] and hence , \leq \frac{\mathbb{e}[g(\l)]-1/(2\pi e)}{\gamma}\ ] ] since the above inequality holds for every , we can choose , for e.g. , , and use ( [ eq : msegood_1 ] ) to obtain \leq \sqrt{\delta(n)}\to 0 \text { as } n\to\infty.\ ] ] therefore , we can conclude that the probability of choosing an lda lattice which is good for mse quantization tends to as .recall that denotes the packing radius of , and that a sequence of lattices is good for packing if our motivation for studying the properties of the dual of a lattice comes from , where a nested lattice coding scheme was presented for compute - and - forward in a bidirectional relay network with an untrusted relay . in this problem ,two users want to exchange messages with each other , with all communication taking place via an honest - but - curious bidirectional relay .the users operate under an average transmission power constraint of , and the links between the users and the relay are awgn channels with noise variance .the messages have to be reliably exchanged ( the probability of decoding error should go to zero asymptotically in the blocklength ) , but kept secret from the relay . to be more specific, the signals received by the relay have to be statistically independent of the individual messages .this requirement is also called perfect ( or shannon ) secrecy .it was shown in that if the fine lattices are good for awgn channel coding , the coarse lattices are good for mse quantization , and the duals of the coarse lattices are good for packing , then a rate of can be achieved with perfect secrecy .this motivates us to construct lattices whose duals are good for packing . in this section, we will prove the following result .let be an -good -regular bipartite graph whose parameters satisfy the hypotheses of lemma [ lemma : goodgraphs ] . if then the dual of a randomly chosen lattice from a lda ensemble is good for packing with probability tending to as .[ theorem : lda_dualpacking ] if is a lattice obtained by applying construction a to a linear code , and if is the dual of , then , is obtained by applying construction a to the dual code , ( see ( * ? ? ? * lemma 27 ) for a proof ) . 
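as a quick numerical sanity check of one direction of this duality ( our own sketch ; the field size , the parity check matrix , and the 1/p scaling are our reading of the statement above rather than quantities fixed by the stripped text ) , inner products between points of the construction - a lattice of a code and points of 1/p times the construction - a lattice of its dual code are always integers :

```python
import numpy as np

rng = np.random.default_rng(1)
p = 5
H = np.array([[1, 2, 3, 0], [0, 1, 4, 1]])       # parity checks of a code C over F_5
n = H.shape[1]

def lattice_point():
    """a point of C + p*Z^n: find a codeword by rejection, then add a p*Z^n shift."""
    while True:
        c = rng.integers(0, p, size=n)
        if np.all((H @ c) % p == 0):
            return c + p * rng.integers(-3, 4, size=n)

def dual_point():
    """a point of (1/p) * (C_perp + p*Z^n): C_perp is spanned by the rows of H."""
    d = (rng.integers(0, p, size=H.shape[0]) @ H) % p
    return (d + p * rng.integers(-3, 4, size=n)) / p

for _ in range(1000):
    ip = float(np.dot(lattice_point(), dual_point()))
    assert abs(ip - round(ip)) < 1e-9             # all inner products are integers
```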
to showthat the duals of lda lattices are good for packing , it is enough to show that the construction - a lattices generated by the duals of the nonbinary ldpc codes ( ) are good for packing .note that ( a parity check matrix for ) is a generator matrix for .let be the lattice obtained by applying construction a on .we will prove that is good for packing .the lattice contains as a sublattice , and the nesting ratio is if is full - rank .the volume of is equal to the ratio of the volume of to the nesting ratio , and hence , recall that is the volume of the unit ball in dimensions .the effective radius of can therefore be written as , let us define where is a term that goes to as , defined as follows : here , we want to prove that the probability \to 0 ] , and hence the probability that is not full rank , goes to zero as .* remark : * to prove that \to 0 ] could then be bounded from above by .however , we need ] from above by . using this and ( [ eq : glambda_constabound ] ) in ( [ eq : msegoodness1 ] ) , and the fact that \leq 1 ] goes to zero faster than .the proof is along the same lines as di pietro s proof of existence of lattices that achieve the capacity of the power constrained awgn channel in .the parameters chosen in were not sufficient to show that the lattices are good for mse quantization .we have adapted the proof to show that under stronger conditions ( on the parameters of the lattice ) , we can obtain lattices which are good for mse quantization . for , define let . recall that denotes an -dimensional ball centered at and having radius .we define which is simply the number of lattice points in .let us define . from , we have \geq \sqrt{\mathcal{e}(\rho)}. \label{eq : ex_lbound}\ ] ] in , it was shown that the variance of can be bounded from above as follows .is upper bounded by a sum of three terms , ( [ eq : term1 ] ) , ( [ eq : term2 ] ) , and ( [ eq : term3 ] ) , which was also studied in to show that nested lda lattices achieve the capacity of the power constrained awgn channel .we impose stronger constraints on and so as to ensure that ( [ eq : dxl_bound ] ) goes to zero sufficiently fast as . ] we show that ( [ eq : term1 ] ) , ( [ eq : term2 ] ) and ( [ eq : term3 ] ) are all bounded from above by .let the hypotheses of theorem [ theorem : lda_msequantization ] ensure that .we have provided that the exponent is negative .as long as , we have the first term bounded from above by .this condition is indeed satisfied , since by definition , . for all , we have , and hence . with this, we get this implies that where is a positive constant . from ( [ eq : delta_defn ] ) , we have , and hence .moreover , for sufficiently large .hence , for all sufficiently large .similarly , for all sufficiently large .hence , the second term is bounded from above by since and , we have for , if , then the above terms are at most .now , where ( [ eq : thirdterm_mid1 ] ) follows from lemma [ lemma : zn_cap_rb ] . but . using this , and simplifying , we get therefore , goes to zero faster than any polynomial . combining ( [ eq : thirdterm_1 ] ) , ( [ eq : thirdterm_2 ] ) , and ( [ eq : thirdterm_3 ] ) , we can conclude that ( [ eq : term3 ] ) is upper bounded by . as a consequence , the variance of is bounded from above by .we have already seen in ( [ eq : ex_lbound ] ) that \geq \sqrt{\mathcal{e}(\rho ) } ] for . 
proceeding along the same lines as in the previous subsection , we get using ( [ eq : vm_vn ] ) , and the inequalities and , since , we get which goes to zero as , since from definition [ defn : goodgraphs ] .let us prove the contrapositive of the above statement .suppose that .equivalently , .this implies that , otherwise we would be in violation of property ( l2 ) in definition [ defn : goodgraphs ] .but from ( l2 ) , we have , and this completes the proof . since has at least vertices , the set has less than vertices ( see fig . [fig : graph_part3 ] ) . if , then , has does not have any neighbours from .hence , .but must imply that , from lemma [ lemma : expander_left ] .therefore , , or .this means that =0 $ ] for .consider following the approach in the previous subsections , the above reduces to where the last step uses the inequality .since , we get if we have for some , then ( [ eq : phi_3_2 ] ) is upper bounded by , which goes to zero as .simplifying the above quantity gives us the condition which is satisfied in this regime , and hence , as .for any subset of parity check nodes , , we have .this is because the number of edges between and is , but the number of edges incident on each node in from is at most .therefore , we have since is a decreasing function of for , we have using the inequality and simplifying , we get for all sufficiently large , we have .therefore , since , we have which goes to zero as because of our choice of .this completes the proof of theorem [ theorem : lda_dualpacking ] .g. bresler , a. parekh , and d.n.c .tse , `` the approximate capacity of the many - to - one and one - to - many gaussian interference channels , '' _ ieee trans . inf .theory _ , vol .56 , no . 9 , pp . 45664592 , sep . 2010 .n. di pietro , g. zmor , and j.j .boutros , `` new results on construction a lattices based on very sparse parity - check matrices , '' _ proc . 2013 ieee int . symp .information theory _ ,istanbul , turkey , 2013 , pp . 16751679 .n. di pietro , g. zmor , and j.j .boutros , `` new results on construction a lattices based on very sparse parity - check matrices , '' _ submitted , ieee trans . inf .theory , _ 2016 .[ online ] .available : ` http://arxiv.org/pdf/1603.02863.pdf ` . c. ling , l. luzzi , j .- c .belfiore , and d. stehl , `` semantically secure lattice codes for the gaussian wiretap channel , '' _ ieee trans .theory _ , vol .60 , no .10 , pp . 63996416 , oct .h. minkowski , _ gesammelte abhandlungen , vol . 2 ,_ leipzig : b.g .teubner verlag , 1911 .o. ordentlich and u. erez , `` a simple proof for the existence of `` good '' pairs of nested lattices , '' _ proc .2012 ieee 27th conv .electrical and electronics engineers in israel , _ eilat , israel , pp . 112 .tunali , k.r .narayanan , and h.d .pfister , `` spatially - coupled low density lattices based on construction a with applications to compute - and - forward '' _ proc .2013 information theory workshop _ ,sevilla , spain , 2013 , pp .15 .wilson , k. narayanan , h.d .pfister , and a. sprintson , `` joint physical layer coding and network coding for bidirectional relaying , '' _ ieee trans .theory _ , vol .56 , no .11 , pp . 56415654 , nov .
|
we study some structural properties of construction - a lattices obtained from low density parity check ( ldpc ) codes over prime fields . such lattices are called low density construction - a ( lda ) lattices , and permit low - complexity belief propagation decoding for transmission over gaussian channels . it has been shown that lda lattices achieve the capacity of the power constrained additive white gaussian noise ( awgn ) channel with closest lattice - point decoding , and simulations suggested that they perform well under belief propagation decoding . we continue this line of work , and prove that these lattices are good for packing and mean squared error ( mse ) quantization , and that their duals are good for packing . with this , we can conclude that codes constructed using nested lda lattices can achieve the capacity of the power constrained awgn channel , the capacity of the dirty paper channel , the rates guaranteed by the compute - and - forward protocol , and the best known rates for bidirectional relaying with perfect secrecy .
|
the success of machine learning algorithms depends heavily upon the representation of the input data .a major appeal of deep learning , on which the current dominant approaches for machine vision tasks are based , is that they can automatically learn useful feature representations from the data .a criticism of most deep architectures is that they wastefully process every input component when performing a task ; for example , the input layer considers all pixels in every region of the input when learning an image classifier and making classification decisions .in contrast , the human visual system has only a small fovea of high resolution chromatic input allowing it to more judiciously budget computational resources . in order to receive additional information in the field of view, we make either covert or overt shifts of attention. overt shifts of attention or _ eye - movements _ allow us to bring the fovea over particular locations in the environment that are relevant to current behavior . to avoid the serial nature of processing as demanded from overt shifts of attention , our visual system can also engage in covert shifts of attention in which the eyes remain fixated on one location but attention is deployed to a different location .the human retina receives 10 million bits per second which exceeds the computational resources available to our visual system to assimilate at any given time .even though we perceive the environment around us in great detail , only a small fraction of the information registered by the visual system is processed .this paper asks a simple question : if high detail input were not available , would artificial neural networks still be able to capture aspects of the underlying distribution ? to further put this question in perspective ,our own fovea takes up only 4% of the entire retina and is solely responsible for sharp central full color vision with maximum acuity .the relative visual acuity diminishes rapidly with eccentricity from the fovea . as a result, visual performance is best at the fovea and progressively worse towards the periphery .indeed , our visual cortex is receiving distorted color - deprived visual input except for the central two degrees of the visual field as seen in figure [ fig : foveated ] . despite receiving such a distorted signal ,we perceive the world in color and high resolution and are mostly unaware of this distortion . even when confronted with actual blurry or distorted visual input , our visual system is good at extracting the scene contents and context .for instance , our system can recognize faces and emotions expressed by those faces in resolutions as low as 16 x 16 pixels .we can reliably extract contents of a scene from the gist of an image even at low resolutions .recently , ullman et al . 
has shown that our visual system is capable of recognizing contents of images from critical feature configurations ( called minimal recognizable images or mircs ) that current deep learning systems can not utilize for similar tasks .these mircs resemble foveations on an image and their results reveal that the human visual system employs features and processes that are not used by current deep networks .similarly , little attention has been given by the deep learning community to how these networks deal with distorted or noisy inputs .we draw inspiration from the abilities of the human visual system and ask whether an artificial neural network can learn to perceive an image from low fidelity input .if this is the case , we can design state of the art architectures in image super resolution , automatic image coloring , image compression and at the same time , reduce computational costs of processing entire images associated with deep networks . there has been a revival in applying the idea of attention to deep learning architectures .such work is exciting and has lead to improvements in tasks ranging from machine translation to image captioning .however , in many approaches especially those that employ a _ soft _ attention mechanism the computational cost is increased .for example , when generating a target sentence , a network must compute a softmax over every word of a source sentence or location of a source image . unlike these systems ,humans perceive by sequentially directing attention to relevant portions of the data and in turn enables our visual system to reduce computational costs . in this paper , we want to understand what kind of information can be gleaned from low - fidelity inputs . what can be gleaned from a single foveal glimpse ?what is the most predictive region of an image ?we present a framework for studying such questions based on a generative model known as an autoencoder .in contrast to traditional or de - noising autoencoders , which attempt to reconstruct the original image ( or respectively , a salt and pepper corrupted version ) , our autoencoders attempt to reconstruct original high - detail inputs from lower - detail foveated versions of those images ( that is , images that are entirely low detail except perhaps a small `` fovea '' of high detail ) .thus , we have taken to calling them defoveating autoencoders ( dfae ) .we find that even relatively simple dfae architectures are able to perceive color , shape and contrast information , but fail to recover high - frequency information ( e.g. , textures ) when confronted with extremely impoverished input .interestingly , as the amount of detail present in the input diminishes , the structure of the learnt features becomes increasingly global .autoencoders are a class of unsupervised algorithms which pairs a bottom - up recognition network ( encoder ) with a top - down generative network ( decoder ) .the encoder , denoted as the function , forms a compressed representation of the input . is the feature vector representation or code computed from . in the context of our work , we were interested in whether or not we can learn a rich representation of a low fidelity input image .the output , denoted as the function , maps the feature vector back into the input space producing a reconstruction through the minimization of a reconstruction error function .good generalization means reconstruction error of test examples should be close to the reconstruction error for training examples . 
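as a concrete reference point , a one - hidden - layer autoencoder of the kind just described can be written in a few lines ; this sketch is ours , and the sigmoid nonlinearities , layer sizes and names are illustrative assumptions rather than choices fixed by the text .

```python
import torch
import torch.nn as nn

class OneLayerAE(nn.Module):
    """encoder h = s(W1 x + b1) and decoder y = s(W2 h + b2); y reconstructs x."""
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

# a plain autoencoder pairs the input with itself as the reconstruction target
ae = OneLayerAE(n_in=784, n_hidden=256)
x = torch.rand(32, 784)                       # a batch of flattened images in [0, 1]
loss = nn.functional.mse_loss(ae(x), x)       # reconstruction error
```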
to capture the structure of the underlying data distribution and prevent the autoencoder from learning the identity function, we can either require the hidden layer to have lower dimensionality than the input or regularize the weights .the lower dimension constraint is what the classical autoencoder or pca does while the higher dimension is used by the sparse autoencoders .recently , denoising autoencoders have been shown to regularize the network by adding noise such as salt - and - pepper ( sp ) noise to the input , thus forcing the model to learn to predict the true image from its noisy representation . in summary ,the basic autoencoder training consists of optimizing parameter vector to minimize reconstruction error as measured by the loss , : where is a training example .the minimization is carried out by standard gradient descent algorithms like backpropagation .the commonly used forms for the encoder is an affine mapping followed by non linearity : where is the encoder activation function , , is the weight matrix and is the bias vector .similarly the decoder mapping is : with the appropriately sized parameters as mentioned above , it has been shown that the features learnt by the encoder without any non - linearity are a subspace of the principal components of the input space .however , when a non - linear activation such as a sigmoid is used in the encoder , an ae can learn more powerful feature detectors than a simple pca .the architecture of a simple one hidden layer ae is very similar to that of a multilayer perceptron ( mlp ) .the difference between aes and mlps lies in the output layer : the mlp predicts the class of the input whereas an ae reconstructs from .we will start by reviewing related work on using distorted inputs to train deep networks and then move on to describe the architecture of ae that was used to test feature extraction from downsampled images .using noisy or _ jittered _ inputs to understand feature learning in the framework of aes or mlps has been explored before .vincent et al . first proposed training autoencoders with corrupted image as input . therefore their _ denoising _ autoencoder ( dae ) learnt to reconstruct the clean input from a corrupted version .they have shown that introducing noise to the input lowers classification error on benchmark classification problems .the filters produced by denoising aes tend to capture more distinctive blob - like features and with higher level of corruption in the input image , they learn less localized filters .in fact , bishop has argued that in a linear system training with noise has a similar effect as training with a regularizer , such as an l2 weight decay . another proposal to make autoencodersnoise invariant is by rifal et al .they improved on daes by adding a penalty term , called the contraction ratio , to the learnt mapping which makes the features learnt more robust and invariant to change of raw input . in the spirit of denoising aes, we incorporate a form of noise in our input image .however , unlike the sp noise , our noise is generated from using a foveation function ( described below ) on the image .we investigated whether foveations acted as a strong regularizer for the ae like the sp noise , thus allowing us to use it in deep architectures .denoising images has been investigated using architectures other than autoencoders .xie et al . 
presented an approach to remove noise from corrupted inputs using sparse coding and deep networks pre - trained with daes .their end - to - end system could automatically remove complex patterns like text from an image in addition to simple patterns like pixels missing at random .the types of noise they investigated were white gaussian noise , sp noise ( flipping pixels randomly ) , and image background changes .along the same lines , post - deblurring denoising and the use of convolutional neural networks for natural image denoising of patterns such as specks , dirt and rain have been investigated . as mentioned above , low resolution images can be considered as a type of noisy input . in the domain of image super resolution , cui et al . used low resolution images interpolated to the size of the output image and aes in their pipeline to restore the resolution of these images .their cascade model is not trained end - to - end and requires optimization of each layer individually . a similar approach by dong et al . improves on cui et al . by using convolutional neural networks with an end - to - end system .behnke et al . demonstrated that difficult non - linear image reconstruction from low resolution inputs can be learnt by hierarchical recurrent networks . from a given 28 x 28 handwritten digit image as input , their system can iteratively increase its resolution to a 64 x 64 output .our work can be viewed as an image super resolution problem , in that our network learns a mapping between low resolution and high resolution images . in contrast to existing approaches , our network is end - to - end differentiable and thus learns features automatically via backpropagation .current approaches require manual engineering of features and image pre - processing on top of interpolations . finally , we emphasize here that our goal is _ not _ achieving state of the art results in image super resolution .rather , we want to study a deep architecture's ability to extract useful representations from low - detail images and to show the range in which a mapping between low resolution and high resolution images is possible .the usefulness of the representation is then measured using the mean squared error between the input and the reconstructed output .
similarly , the ae s weights forge visual memories of the training set and are thus analogous to long - term memorywhen these weights are properly trained , the activations of the hidden units reflect how the network is perceiving a novel input .however , since these units are not directly interpretable , we indirectly measure how well the network perceives by evaluating the similarity between the original and generated ( high - detail ) images : the more similar the images are , the better the network is able to perceive . more formally , let be the original input image and be a lower - detail _ foveated _ version of that image .that is , a version of the image which is mostly low - detail ( e.g. , downsampled , black - and - white , or both ) except for possibly a small portion which is high - detail ( mimicking our own fovea ) .for example , if we encode images as vectors of floats between 0 and 1 ( reflecting pixel intensities in rgb or grayscale ) then we might define a class of foveation functions as ^n\rightarrow [ 0,1]^m ] .finally , we can then measure the similarity between and as : 1 .a surrogate for how well the network perceives from the foveated input and 2 .part of a loss function to train the network . in summary ,dfaes simply comprise : 1 . a foveation function that filters an original image by removing detail ( color , downsampling , blurring , etc ) .we will later make this the independent variables in our experiments so we can study the effect of different types of input distortion .an autoencoder network that inputs the low - detail foveated input , but is trained to output the high - detail original image .3 . a loss function for measuring the quality of the reconstruction against the original image and for training the network . given this framework we can now study how well different architectures are able to cope with different types of foveated inputs . note that much like denoising autoencoders , these autoencoders reconstruct an original image from a corrupted version .however , the form of corruption is a systematic foveation instead of random sp noise .thus , as an homage to denoising autoencoders , we have termed these models _ defoveating _ autoencoders or dfaes .we describe our exact model in the next section . in our experiments , we study dfaes with fully connected layers . that is , dfaes of the form : where is the logistic function .the sigmoid in the final layer conveniently allows us to compare the pixel intensities between the generated image and the original image directly , without having to post - process the output values .we experiment with the number of hidden units per - layer as well as the number of layers . 
for training , we could employ the traditional mean - squared ( mse ) error or cross - entropy loss , but we found that the domain - specific loss function of peak signal - to - noise ratio ( psnr ) yielded much better training behavior .the psnr between a generated image and its original input is defined as follows : network parameters were initialized at random in the range [ -0.1,0.1 ] and the loss was minimized by stochastic gradient descent with adagrad updates .adaptive gradient descent , or adagrad , is a form of stochastic gradient descent that determines the per - feature learning rate dynamically during training .adagrad calculates a different learning rate for each feature , allowing it to efficiently learn the weights even for features that rarely occur in the training data .the learning rate was initialized at 1.0 and was adjusted by adagrad during training .we performed 1000 epochs of training in all experiments . ( figure : ( a ) an input image is foveated and the autoencoder maps the foveated input to a hidden representation from which the final output image is generated ; reconstruction error is measured against the original . ( b ) inputs with no foveations . ) the above architecture is useful for studying single foveations , which is the primary focus of this work .however , we remark that it is straightforward to augment dfaes with recurrent connections to handle a sequence of foveations , similar to what has been done for solving classification tasks with attention .first , augment the foveation function to include a locus on which the fovea is centered .second , a saccade function predicts such a locus from the dfae's current hidden states , and finally we make the hidden state recurrent via a function .putting this all together yields the following architecture : now the dfae can handle a sequence of foveations for a given input image , further allowing us to train the model in a more realistic fashion .that is , the human visual system does not have access to all the high detail information at once and must instead forge visual memories from a sequence of foveations .thus , to mimic this , rather than trying to reconstruct the original image , we can instead try to reconstruct the foveation at time from the information available at time .this is similar to how a language model is trained . for now , we focus on studying the effects of single foveations using the non - recurrent form of the dfae . recall that we are interested in the question of whether an artificial neural network can _ perceive _ an image from foveated input images . in the context of autoencoders , the hidden layers are responsible for representing the foveated inputs .if the network learns a reasonable representation , then it should be able to produce a higher resolution output .we can then measure how similar the output of the network is to the original image to evaluate how well the network can _ perceive _ . in these experiments , we fix the architecture of our network to the family described in the previous section and vary the type of foveation , the number of hidden units and the number of layers , and study the learnt features and reconstruction accuracy .we address the following questions : 1 .[ q : perceive - details ] can the network perceive aspects of the image that are not present in the input ?
what can it perceive and under what conditions ? 2 .[ q : perceive - color ] can the network perceive color that is not present in the input ? does it need a small fovea of color to do so ? 3 .[ q : capacity ] how much capacity is required to perceive missing details ? 4 .[ q : features ] how does the network compensate for foveated inputs ?does the foveation affect the learnt features ? to answer these questions , we define several foveation functions as described in the following section . in our experiments , we study several different foveation functions ( described in more detail in the appropriate sections ) .in many cases , downsampling is employed as part of the foveation function , for which we employ the nearest neighbor interpolation algorithm .nearest neighbor interpolation is a simple sampling algorithm which selects the value of the nearest point and does not consider the values of the neighboring points at all . as an interpolation algorithm , it generates poor quality or blocky images as there is no smoothing function .we picked nearest neighbor as our downsampling algorithm to test the worst possible downsampled inputs on our system .foveation functions include : * * downsampled factor ( no fovea ) : * no fovea is present ; the entire image is uniformly downsampled by a factor of using the nearest neighbor interpolation method .for example , a factor of transforms a 28x28 image into a 7x7 image and approximately 94% of the pixels are removed .note , in the case of color images , each channel ( rgb ) is separately downsampled , resulting in color distortion .the downsampling factors tested for mnist were 2 , 4 and 7 , and for the cifar100 dataset were 2 , 4 and 8 .see [ fig : zerofoveatedimages ] for examples . * * scotoma ( sct - r ) : * entire regions ( 25% , 50% and 75% ) of the image are removed ( by setting the intensities to 0 ) to create a blind spot / region , but the rest of the image remains at the original resolution .we experiment with the location of the scotoma ( centered or not ) . * * fovea ( fov - r ) : * only a small fovea of high resolution ( 25% or 6% ) ; the rest of the image is downsampled by a factor of .note that the special case of is equivalent to downsampling the entire image uniformly . * * achromatic ( ach - r ) : * only a region of size has color ; color is removed from the periphery by averaging the rgb channels into a single grayscale channel . * * fovea - achromatic ( fova - r ) : * combines the fovea with the achromatic region : only the foveated region is in color , the rest of the image is in grayscale and downsampled by a factor of .we used two datasets in our experiments : mnist and cifar100 .the mnist database consists of 28 x 28 handwritten digits and has a training set of 60,000 examples and a test set of 10,000 examples .therefore each class has 6000 examples .the cifar100 dataset consists of 32 x 32 color images of 100 classes .some examples of classes are : flowers , large natural outdoor scenes , insects , people , vehicles etc .each class has 600 examples .the training set consists of 50,000 images and the test set consists of 10,000 images .we trained dfaes on the mnist and cifar100 datasets ( in grayscale and color ) .we normalized the datasets so that the pixel values are between 0 and 1 and additionally zero - centered them .this step corresponds to local brightness and contrast normalization . aside from this step , no other preprocessing such as patch extraction or whitening was applied .
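putting the pieces together ( reusing the OneLayerAE , foveate and dfae_loss sketches above ) : the displayed psnr formula earlier did not survive extraction , so the sketch below uses the standard definition , 10 log10 of the squared peak intensity over the mse ; the adagrad optimizer , the initial learning rate of 1.0 and the uniform [ -0.1 , 0.1 ] initialization follow the text , while the model size is an illustrative choice .

```python
import torch

def psnr(x_hat, x, peak=1.0, eps=1e-8):
    """standard peak signal-to-noise ratio for intensities in [0, 1]."""
    mse = torch.mean((x_hat - x) ** 2)
    return 10.0 * torch.log10(peak ** 2 / (mse + eps))

model = OneLayerAE(n_in=28 * 28, n_hidden=800)        # an mnist-sized example
for param in model.parameters():                      # uniform init in [-0.1, 0.1]
    torch.nn.init.uniform_(param, -0.1, 0.1)
opt = torch.optim.Adagrad(model.parameters(), lr=1.0)

def train_step(batch):                                # batch: (b, 28, 28) originals
    opt.zero_grad()
    loss = -dfae_loss(model, batch, loss_fn=psnr)     # maximize psnr
    loss.backward()
    opt.step()
    return float(loss)
```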
first , to establish baselines and context for our results , we compare a 1-layer dfae to common upsampling algorithms found in image editing software .we report the mean squared error ( mse ) between the reconstructed image and the original image to measure the quality of the images reconstructed by the interpolation algorithms and the dfae .an mse of zero means the algorithm or dfae is able to reconstruct the input with perfect accuracy .figure [ fig : baselineplots ] shows the mse of the interpolation algorithms and a dfae .not surprisingly , the nearest neighbor performed the worst reconstruction overall .the bilinear interpolation performed the best in comparison to the other upsampling algorithms tested .the interpolation algorithms performed poorly when they reconstructed an image that was downsampled beyond a factor of 2 .the error rates produced by these interpolation algorithms on the mnist dataset are higher than on the natural image dataset .figure [ fig : baselineplots ] shows that a single layer dfae outperforms these standard algorithms for the datasets tested . here we experiment with foveation functions in which the size of the fovea is 0 ; that is , these foveation functions uniformly downsample the original by various factors ( the factors are experimentally controlled ) .the purpose of this experiment is to study how well the network can reconstruct the image when no high - detail input is available . the variables to consider are the number of hidden units per layer and the number of layers .pilot experiments showed that when the number of hidden units was less than the downsampled input size , dfaes performed very poorly .this is not surprising because autoencoders can not learn features in an _ under complete _ state and the downsampled input contains impoverished features .figure [ fig : downsampledreconstructions ] shows examples of the images reconstructed by a single layer dfae .the images produced by the dfae are compared to upsampled reconstructions by the bilinear algorithm .when compared to the bilinear algorithm , dfaes can correctly extract the contents of a downsampled input even when 94% of pixels are removed .a compelling example is that even when faced with a blank input as seen in figure [ fig : mnist_output ] the dfae can correctly predict the digit .however , the performance of dfaes suffered when the input was downsampled beyond factor .even though the dfae made predictions based on the input , most of the reconstructions were incorrect .the reconstructed natural images as seen in figure [ fig : cifar100_color_output ] show that the dfae learnt a smoothing and centering function even though it was unable to reconstruct the high frequencies in the images .the dfaes could predict the shape of objects in the natural images but not the high frequency details . next , we looked at the filters or features learnt by the single layer dfaes as shown in figure [ fig : 1layer_features ] .feature detectors that correspond to the final hidden layer of the network were visualized .each hidden neuron has an associated vector of weights that it uses to compute a dot product with an input example .these _ weight vectors or filters _ have the same dimensionality as the input , allowing us to visualize them as images , highlighting the aspects of the input to which a hidden unit is sensitive .the goal of visualizing feature detectors was to examine qualitatively the kind of feature detectors learnt from the downsampled images and compare them to those learnt from full - resolution input . for mnist
images , a single layer dfae learns neuron - like features when given the original input .when the input was downsampled , it was forced to learn stroke - like features .a curiously similar result was observed by vincent et al . , where their denoising autoencoder learnt global structures when it was trained on corrupted inputs .our dfae also learnt increasingly global features when the input was downsampled correspondingly .however , the ability to learn useful features deteriorated when the input was downsampled beyond a factor .for instance , when given an input downsampled by a factor , a majority of the features learnt were superimpositions of two digits and this was reflected in the reconstructed images shown in figure [ fig : mnist_output ] .on the other hand , the filters learnt on cifar100 images do not look meaningful . in some cases the network learnt specific color gradients or locally circular blobs , which probably enabled it to reconstruct low frequency shape information and landscapes particularly well .since we did not whiten the images , nor use image patches during training , the noise modeled by the dfae for natural images was not surprising . to understand how the number of hidden units and layers affect the performance of dfaes , we increased the breadth and/or depth of the dfae .figure [ fig : dfae_capacity ] shows that the performance of the dfae does not improve drastically if the network is given additional capacity both in breadth ( number of hidden units ) and in depth ( number of layers ) .the dfae error rates stabilized when the number of hidden units was increased beyond 100 .note that the number of hidden units was varied according to the original input size .therefore , for 28 x 28 mnist images the range of hidden units varied upwards from 800 hidden units ( rounded from the 784 input size ) .similarly , for cifar100 images increasing the number of hidden units of the dfae did not improve performance on either cifar100 color or grayscale images .pilot tests with networks of up to 4 layers showed that performance on mnist or cifar100 images did not improve significantly with an increasing number of layers .until now , we evaluated dfaes on uniformly downsampled images , but this kind of input is unrealistic compared to the input received by the retina . in this section , we evaluate dfaes on foveated inputs , * sct - r * and * fov - r * , as described in section .as discussed in the introduction , the human visual system makes effective use of these kinds of foveated inputs . from a machine learning perspective , it is desirable to recognize or classify images from degraded configurations , which in turn will reduce the need for carefully pruned and preprocessed datasets during training . the rationale for having scotoma - like regions in the input was to test whether the available input contained enough information to reconstruct the rest of the image .
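for completeness , toy versions of the sct - r and ach - r corruptions defined earlier are sketched below ( again our own code ; the quadrant layout , window coordinates and the three - channel grayscale representation are illustrative assumptions ) .

```python
import numpy as np

def scotoma(img, quadrant=0):
    """sct-r (25% case): zero out one quadrant, keep the rest at full resolution.
    quadrant indexes the four image quadrants 0..3."""
    out = img.copy()
    h2, w2 = img.shape[0] // 2, img.shape[1] // 2
    r0, c0 = (quadrant // 2) * h2, (quadrant % 2) * w2
    out[r0:r0 + h2, c0:c0 + w2] = 0.0
    return out

def achromatic(img, r0, c0, size):
    """ach-r: keep colour only in a size x size window; elsewhere average the
    rgb channels into grayscale (replicated over three channels so the image
    shape stays fixed)."""
    gray = img.mean(axis=2, keepdims=True).repeat(3, axis=2)
    gray[r0:r0 + size, c0:c0 + size] = img[r0:r0 + size, c0:c0 + size]
    return gray
```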
the dataset used was grayscale cifar100 images .variable sized areas of region ( 25% , 50% , 75% , 75% centered ) were removed from the original input .the location of removal was chosen randomly from the four quadrants of the input image , except for the condition where 75% of the image around the center was removed .since a majority of the input images have a subject of interest , we tested if the central region contained enough information to reconstruct the rest of the image .the reconstructions in figure [ fig : removedregions ] show the dfae does not perform well when 25% .when 50% of the input is removed , the dfae can reconstruct landscapes and reconstruct shape information and symmetry , demonstrating it s ability to extract low frequency information .when 75% and 75% centered , the reconstruction process breaks down and dfae can not predict the input beyond the given region of information .the filters learnt under these conditions look similar to the grayscale version of figure [ fig : cifar100_color_features ] with bigger smooth blobs over blacked out regions of the input . in* fov - r * inputs , is the same as * sct - r * inputs and we chose to use downsampling factor for regions outside the fovea since previous experiments revealed that dfaes can not reconstruct inputs downsampled beyond this factor .figure [ fig : foveatedregions ] shows the reconstructed images from * fov - r * inputs and figure [ fig : mse_foveatedinputs ] show the error rate of reconstruction .the cluster of red lines with lower error rates show that the dfae performed considerably well with * fov - r * than * sct - r * inputs the performance was better ( 1% error for 75% centered ) than an dfae trained with uniformly downsampled inputs ( 1.5% error ) .this result is not surprising , given that * fov - r * contains additional information from regions outside the fovea .these results suggests that a small number of foveations containing rich details might be all these neural networks need to extract contents of the input in higher detail .it is well known that the human visual system loses chromatic sensitivity towards the periphery of the retina .recently , there has been interest in how deep networks , specifically convolutional neural networks ( cnns ) , can learn to color grayscale images and learn artistic style . specifically in dahl s reconstructions from grayscale images , numerous cases of the colorized images produced were muted or sepia colored .the problem of colorization which is inherently ill - posed was treated as a classification task in these studies .can dfaes perceive color if it is absent in the input ?we investigated this question using * ach - r * and * fova - r * inputs described in section .the regions of color tested were 0% or no color , 6% , 25% and 100% or full color .figure [ fig : ach - r ] and [ fig : fova - r ] show examples of color reconstructions of the these input types .when the dfae is trained with full color * ach - r * inputs , it can make mistakes in reconstructing the right color as seen in figure [ fig : ach - r ] .for example : it colors the yellow flower as pink and the purplish - red landscape as blue . when the input is grayscale ( no color , 0% ) , the colorizations are gray , muted , sepia toned or simply incorrect in the case of landscapes .but if there is a `` fovea '' of color , the single layer dfae can reconstruct the colorizations correctly .ofcourse , if the `` fovea '' of color is reduced , i.e. 
6% , the color reconstruction accuracy falls off but not too drastically .for example , it predicts a yellowish tone for the sunflower among a bed of brown leaves .the critical result is that the performance difference between 100% or full colored inputs and `` foveated '' color inputs is small as seen in figure [ fig : colorfoveatedregions ] and [ fig : colorbwinputs ] .these results suggest that color reconstructions can be just as accurate if these networks can figure out the color of one region of the image accurately as opposed to every region in the image .similar to the human visual system , these networks are capable of determining accurate colors in the periphery if color information is available at foveation .the key finding in this paper is that current deep architectures are capable of learning useful features from low fidelity inputs . as discussed in the introduction ,the human visual system uses sequential foveations to gather information about their surroundings . in each foveation, only a fraction of the input is in high resolution .we studied the capability of deep networks to learn in the face of minimal information , specifically foveated inputs .our results indicate a single layer dfae can reconstruct low fidelity inputs better than existing upsampling algorithms and remarkably , color reconstructions with foveated inputs are just as good with full colored inputs .in general , our model achieves these results using a shallow network with only a small number of hidden units .we investigated how the capacity of the dfae , in terms of layers and number of hidden units , interacts with foveated inputs .we found that small shallow networks were capable of learning good representation , especially low frequencies in the input .as noted , the performance of the the dfae was qualitatively better on the mnist digit images than the natural images .firstly , the mnist dataset contains 6000 training examples for each class compared to the cifar100 dataset which contains 500 training examples for each class . secondly , the shape of the digits ( a low frequency feature ) is prominent in the mnist dataset but not in the natural images which contains texture , multiple objects , contrast variation adding to the high frequency noise .the noise to signal ratio is lower in mnist dataset in general which helped dfae learn better representations .color information is obviously important to the human visual system but our results show that the performance of the dfae does not improve significantly with color images as seen in figures [ fig : cifarc_capacity ] and [ fig : cifarg_capacity ] .but color information was important in improving accuracy when the dfae colorized images from foveated inputs .does an image ( of scene or object ) consist of a single or multiple image regions that are predictive of the contents of the image ? in this paper , we focused on a single foveated region to test how that specific region was predictive of the rest of the image .future studies can investigate which regions of an image are most predictive .how many of such regions exist within an image ?do these regions generalize across a class of images ?how can we combine these regions to reconstruct the image ? in general , foveated inputs enabled the dfae to learn the best representations overall in terms of image contents and color .this gives us hope that we can learn useful feature representations when full resolution input is not available and with a small computational budget . 
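for reference, the complete pipeline discussed above — a degraded input mapped back to the full-resolution target by a single hidden layer — fits in a few lines. the sketch below uses pytorch with sigmoid activations; the framework, the activations and the learning rate are illustrative assumptions of ours, and the 800 hidden units simply echo the mnist setting mentioned earlier.

```python
import torch
import torch.nn as nn

class DFAE(nn.Module):
    # single-hidden-layer autoencoder: degraded (downsampled or foveated)
    # image in, full-resolution reconstruction out
    def __init__(self, n_pixels=784, n_hidden=800):
        super().__init__()
        self.encode = nn.Linear(n_pixels, n_hidden)
        self.decode = nn.Linear(n_hidden, n_pixels)

    def forward(self, degraded):
        hidden = torch.sigmoid(self.encode(degraded))
        return torch.sigmoid(self.decode(hidden))

def train_step(model, optimizer, degraded, original):
    # the loss is the mse against the clean image, i.e. the same quantity
    # reported in the reconstruction-error figures above
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(degraded), original)
    loss.backward()
    optimizer.step()
    return loss.item()

model = DFAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```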
in future work, we want to study models that can make shifts of attention to improve the representation on demand, as required by the associated task.
|
humans perceive their surroundings in great detail even though most of our visual field is reduced to low - fidelity color - deprived ( e.g. dichromatic ) input by the retina . in contrast , most deep learning architectures are computationally wasteful in that they consider every part of the input when performing an image processing task . yet , the human visual system is able to perform visual reasoning despite having only a small fovea of high visual acuity . with this in mind , we wish to understand the extent to which connectionist architectures are able to learn from and reason with low acuity , distorted inputs . specifically , we train autoencoders to generate full - detail images from low - detail `` foveations '' of those images and then measure their ability to reconstruct the full - detail images from the foveated versions . by varying the type of foveation , we can study how well the architectures can cope with various types of distortion . we find that the autoencoder compensates for lower detail by learning increasingly global feature functions . in many cases , the learnt features are suitable for reconstructing the original full - detail image . for example , we find that the networks accurately perceive color in the periphery , even when 75% of the input is achromatic .
|
the quest for a complete understanding of phases of matter has been a driving force in condensed matter physics . from the landau - ginzburg - wilson paradigm to topological insulators and superconductors to topological orders to symmetry protected topological ( spt ) phases to symmetry enriched topological phases , we have witnessed an infusion of ideas from topology into this century - old field .spt phases are a relatively simple class of non - symmetry - breaking , gapped quantum phases and have been a subject of intense investigation in recent years . as an interacting generalization of topological insulators and superconductors and intimate partner of topological orders , they exhibit such exotic properties as the existence of gapless edge modes , and harbor broad applications .they have also been increasingly integrated into other novel concepts such as many - body localization and floquet phases . despite tremendous progress ,a complete classification of spt phases remains elusive .this is especially true when fermions , high ( e.g. ) spatial dimensions , or continuous symmetry groups are involved .a number of proposals have been made for the general classification of spt phases : the borel group cohomology proposal , the oriented cobordism proposal , freed s proposal , and kitaev s proposal in the bosonic case ; and the group supercohomology proposal , the spin cobordism proposal , freed s proposal , and kitaev s proposal in the fermionic case .these proposals give differing predictions in certain dimensions for certain symmetry groups , and while more careful analysis has uncovered previously overlooked phases and brought us closer than ever to our destination , we believe that we can do much more . in this paper , we will take a novel , minimalist approach to the classification problem of spt phases , by appealing to the following principle of mark twain s : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ distance lends enchantment to the view . __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ in this spirit , we will not commit ourselves to any particular construction of spt phases , specialize to specific dimensions or symmetry groups , or investigate the completeness of any of the proposals above . 
instead, we will put various proposals under one umbrella and present results that are independent of which proposal is correct .this will begin with the formulation of a hypothesis , we dub the generalized cohomology hypothesis , that encapsulates essential attributes of spt classification .these attributes will be shown to be possessed by various existing proposals and argued , on physical grounds , to be possessed by the unknown complete classification should it differ from existing ones .the results we present will be rigorously derived from this hypothesis alone .because we are taking a meta " approach , we will not be able to produce the exact classification in a given dimension protected by a given symmetry group .we will be able , however , to _ relate _ classifications in different dimensions and/or protected by different symmetry groups .such relations will be interpreted physically this may require additional physical input , which we will keep to a minimum and state explicitly .a major advantage of this formalism is the universality of our results , which , as we said , are not specific to any particular construction .what will enable us to relate different dimensions and symmetry groups is ultimately the fact that the hypothesis is a statement about all dimensions and all symmetry groups simultaneously .furthermore , due to a certain symmetry " the hypothesis carries , the relations we derive will hold in arbitrarily high dimensions .finally , the hypothesis is supposed to apply to fermionic phases as well as bosonic phases .thus our formalism is not only independent of construction , but also independent of physical dimension and particle content , that is , bosons vs.fermions .more specifically , the hypothesis will be based on a prototype offered by kitaev .we will add a couple of new ingredients ( additivity and functoriality ; see below ) and formulate the ideas in a language amenable to rigorous treatment . while the hypothesis is informed by refs. , our philosophy is fundamentally different .the goal of refs. was to classify spt phases in dimensions by incorporating into the hypothesis current understanding of the classification of invertible topological orders .the goal of this paper is to make rigorous , maximally general statements about the classification of spt phases by refraining from incorporating such additional data .the approach of refs. was concrete , whereas ours is minimalist .here is a preview of some of the fruits of this minimalist undertaking .a. we will be able to relate the original definition of spt phases to the one currently being developed by refs. , which is in terms of invertibility of phases and uniqueness of ground state on arbitrary spatial slices . according to the latter definition ,the classification of spt phases can be nontrivial even without symmetry .( for instance , the integer quantum hall state represents an spt phase in that sense . )we will show that spt phases in the old sense are not only a subset , but in fact a direct summand for groups , but the direct sum notation is more common for abelian groups in the mathematical literature .] 
, of spt phases in the new sense .more precisely , where invertible topological orders are synonymous with spt phases ( in the new sense ) without symmetry , and and are arbitrary .we will also see the two definitions are nicely captured by two natural variants of a mathematical structure that we will introduce .these claims depend only on the hypothesis , and are expounded upon in sec.[subsec : unification_old_new_definitions_spt_phases ] .b. we will be able to relate the classification of translationally invariant spt phases to the classification of usual spt phases .( from now on , spt phases will mean spt phases in the new sense . )the former are protected by a discrete spatial translational symmetry as well as an internal symmetry , whereas the latter are protected by alone .it is conceivable that translational symmetry will refine the classification , but it is not clear whether every usual spt phase will have a translationally invariant representative , whether every usual spt phase will split into multiple phases , or whether all usual spt phases will split into the same number of phases . to all three questions , we will give affirmative answers .more precisely , we will prove that there is a decomposition such that forgetting the translational symmetry corresponds to projecting from the left - hand side onto the second direct summand in the right - hand side .these claims depend only on the hypothesis and the belief that it applies to translational symmetries as well as internal symmetries for a suitable definition of translationally invariant spt phases .these are the subject of sec.[subsec : strong_weak_topological_indices_interacting_world ] .c. we will go on to argue , through a field - theoretic construction in app.[app : field_theoretic_argument_weak_index_interpretation ] , that the inclusion of the first summand in the right - hand side into the left - hand side corresponds to a layering construction , where one produces a -dimensional translationally invariant phase by stacking identical copies of a usual -dimensional phase .d. we will generalize the relation above to -dimensional spt phases protected by discrete translation in directions .we will see a hierarchy of lower - dimensional classifications enter the decomposition , with direct summands in dimension .( the relation above corresponds to . )this is discussed in sec.[hierarchy_strong_weak_topological_indices ] .e. we will reinterpret the above as discrete temporal translational symmetry . accordingly , there will be a decomposition we will give physical meaning to the projection maps onto the two direct summands in the right - hand side , in terms of pumping and floquet eigenstates , respectively .what the relation tells us is that a -dimensional floquet spt phase can pump any -dimensional stationary spt phase we want , that it can represent any -dimensional ( stationary ) spt phase we want , and that it is completely determined by these two pieces of information . except for the pumping interpretation ,these claims depend only on the hypothesis and the belief that it applies to discrete temporal translational symmetry as well as internal symmetries for a suitable definition of floquet spt phases .these are discussed in sec.[subsec : pumping_floquet_eigenstates_classification_floquet_spt_phases ] .f. 
we will show that a similar decomposition exists for semidirect products , and more generally , whose applications to space group - protected spt phases will be discussed in sec.[subsec : applications_space_group_protected_spt_phases ] .g. an enlargement of symmetry group can not only refine a classification but also eliminate certain phases , for a priori there may be obstructions to lifting an action of a smaller symmetry group over to a larger symmetry group . in sec.[subsec :obstruction_free_enlargement_symmetry_group ] , we will give a sufficient condition for the absence of such obstructions .more specifically , given , if one can find another subgroup such that , including the special case of direct product , then every -protected spt phase will be representable by some -protected spt phase .this claim follows immediately from the hypothesis .h. there are other results derived from the hypothesis that we would rather defer to a subsequent paper due to our incomplete understanding .they are summarized in sec.[sec : summary_outlook ] .this paper is organized as follows . in sec.[sec: generalities ] , we establish conventions , define spt phases , and comment on two elementary properties of spt phases , additivity and functoriality , that will play a role in the hypothesis . in sec.[sec : generalized_cohomology_hypothesis ] , we introduce necessary mathematical concepts and formulate the generalized cohomology hypothesis . in sec.[sec : justification_hypothesis ] , we justify the hypothesis on physical grounds . in sec.[sec : consequences_hypothesis_mathematical_results ] , we present mathematical forms of the results we derived from the hypothesis . in sec.[sec : consequences_hypothesis_physical_implications ] , we explore physical implications of these results . in sec.[sec : summary_outlook ] , we summarize the paper , advertise further preliminary results , and suggest future directions . a variety of topics are covered in the appendices . in app.[app : existing_proposals_generalized_cohomology_theories ] , we explain in more detail how existing proposals for the classification of spt phases satisfy the hypothesis . in app.[app : field_theoretic_argument_weak_index_interpretation ] , we propose a field - theoretic construction to corroborate the weak - index interpretation in sec.[subsec : strong_weak_topological_indices_interacting_world ] . in app.[app : categorical_viewpoint ] , we present an equivalent but more succinct version of the hypothesis using the terminology of category theory . in app.[app : additivity_functoriality_group_cohomology_construction ] , we explicitly show that the group cohomology construction is additive and functorial . 
in app.[app : proofs ] , we supply proofs to various lemmas and propositions in the paper .app.[app : mathematical_background ] is a review of notions in algebraic topology , category theory , and generalized cohomology theories .i am grateful to my advisor , ashvin vishwanath , for his guidance and support .i also want to thank ammar husain , ryan thorngren , benjamin gammage , and richard bamler for introducing me to the subject of generalized cohomology theories ; hoi - chun po , alexei kitaev , christian schmid , yen - ta huang , yingfei gu , dominic else , shengjie huang , shenghan jiang , drew potter , and chong wang for numerous inspiring discussions ; and judith hller , alex takeda , and byungmin kang for their invaluable comments on an early draft of the paper .this work was supported in part by the 2016 boulder summer school for condensed matter and materials physics through nsf grant dmr-13001648 .locality is defined differently for fermionic systems than for bosonic ( i.e. spin ) systems .for this reason , classifications of bosonic phases and fermionic phases are traditionally done separately . while we will follow that tradition , our formalism works identically in the two cases .therefore , we can omit the qualifiers fermionic " and bosonic " and simply speak of spt phases . " by the dimension of a physical system , we always mean the spatial dimension . when it comes to mathematical construction, it is convenient to allow dimensions to be negative .if a purely mathematical result in this paper appears to contain a free variable , then it should be understood that this result is valid for all .if a physical result appears to contain a free variable , then it should be understood that this result is valid for all for which all dimensions involved are non - negative . for simplicity ,we assume all symmetry actions to be linear unitary .a generalization to antilinear antiunitary actions is possible ( see sec.[sec : summary_outlook ] ) but beyond the purview of this paper .we allow all topological groups satisfying the basic technical conditions in app.[subapp : technical_conventions ] to be symmetry groups .thus , a symmetry group can be finite or infinite , and discrete or non - discrete ( also called continuous " ) . in the non - discrete case , one must define what it means for a symmetry group to act on a hilbert space , that is whether we want a representation to be continuous , measurable - cochains as postulated in ref. reduces to the measurability of when .] , or something else , where denotes the space of unitary operators on . conceivably , the hypothesis can hold for one definition but fail for another , so some care is needed .it is possible that the validity of the hypothesis requires further restrictions on symmetry groups and symmetry actions , such as compactness and on - siteness , but there is a growing body of evidence against the necessity of such restrictions .it appears that discrete temporal translation , discrete spatial translation , and other space group actions may well fit into the same framework as on - site symmetry actions . in particular , refs. maintained that the classification of -dimensional -protected topological phases is the same whether is spatial or internal , provided that orientation - reversing symmetry operations ( e.g.parity ) are treated antiunitarily . 
in any case , on - site actions by finite groups are in the safe zone .we emphasize that the derivation of the mathematical results in sec.[sec : consequences_hypothesis_mathematical_results ] from the hypothesis is independent of these considerations .[ [ mathematical - notation - and - conventionssubsecmathematical_notation_conventions ] ] mathematical notation and conventions[subsec : mathematical_notation_conventions ] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ we denote bijections and homeomorphisms by , isomorphisms of algebraic structures by , homotopy or pointed homotopy by , and homotopy equivalences or pointed homotopy equivalences by .we denote the one - point set , the unit interval ( i.e. $ ] ) , the boundary of the unit interval ( i.e. ) , the -sphere , the -disk , and the boundary of the -disk by , , , , , and , respectively . unless stated otherwise , map " always means continuous map , group " always means topological group , and homomorphism " between groups always means continuous homomorphism . for experts , the technical conventions in app.[subapp : technical_conventions ] are observed throughout , except in apps.[subapp : notions_algebraic_topology]-[subapp : technical_conventions ] .traditionally , the definition of spt phases goes as follows .first , one defines a trivial system to be a local , gapped system whose unique ground state is a product state .then , one defines a short - range entangled ( sre ) system to be a local , gapped system that can be deformed to a trivial one via local , gapped systems .finally , one defines a -protected spt phase to be an equivalence class of -symmetric , non - symmetry - breaking - symmetric " is an adjective qualifying hamiltonians while non - symmetry - breaking " is an adjective qualifying ground states . ]sre systems with respect to the following equivalence relation : two such systems are equivalent if they can be deformed into each other via -symmetric , non - symmetry - breaking sre systems .explicit as the definition above is , we shall adopt a different definition that will turn out to be extremely convenient for our formalism , at the expense of including more phases .the set of spt phases in the old sense will be shown to sit elegantly inside the set of spt phases in the new sense , undisturbed , and they can be readily recovered .the definition spelled out below is based on the ideas in refs. .to begin , let us assume that the terms system , " local , " gapped , " -symmetric , " non - symmetry - breaking , " and deformation " have been defined .given two arbitrary systems and of the same dimension , we write ( no commutativity implied ; this is just a notation ) for the composite system formed by stacking on top of .however the aforementioned terms may be defined , it seems reasonable to demand the following : a. is well - defined .b. if both and are local , gapped , -symmetric , or non - symmetry - breaking , then is also local , gapped , -symmetric , or non - symmetry - breaking , respectively . c. a deformation of either or also constitutes a legitimate deformation of .we will speak of deformation class , which , as usual , is an equivalence class of systems with respect to the equivalence relation defined by deformation ( possibly subject to constraints , as discussed in the next paragraph ) .-dimensional , -symmetric , non - symmetry - breaking , local , gapped systems .each deformation class , shown as a patch here , is called a -protected topological phase . 
each invertible ( respectively non - invertible ) class ,shown as a gray or black ( respectively pink ) patch , is called an spt ( respectively set ) phase .the identity class , shown as a black patch , is called the trivial spt phase .dashed circles are meant to indicate , by forgetting the symmetry , that more systems will be allowed and that distinct phases can become one.,height=278 ] now , let be a symmetry group and be a non - negative integer . consider the set of deformation classes of -dimensional , local , gapped , -symmetric , non - symmetry - breaking systems .we have seen that there is a binary operation on the set of such systems , given by stacking , which descends to a binary operation on , owing to property ( iii ) .we define the _ trivial -dimensional -protected spt phase _ to be the identity of with respect to the said binary operation .we define a _ -dimensional -protected spt phase _ to be an invertible element of .we define a _ -dimensional -protected symmetry enriched topological ( set ) phase _ to be a non - invertible element of . in general , we call an element of a _ -dimensional -protected topological phase_. an illustration of these concepts appears in fig.[fig : spt_set_old_new ] . in mathematical jargon , spt phases are thus the group of invertible elements of the monoid of -dimensional -protected topological phases. we will see later that is commutative .this means that the -dimensional -protected spt phases form not just a group , but an abelian group .this is elaborated upon in sec.[subsubsec : additivity ] .note that we have made no mention of sre systems so far . instead, spt and set phases naturally fall out of the binary operation given by stacking . the uniqueness of identity and inverses and the abelian group structure of spt phases come about for free .this is in line with the minimalism we are after and is we think the beauty of the definition .let us introduce special names for the special case of trivial symmetry group .the trivial spt phase in this case can be called the _trivial topological order _ ; an spt phase , an _ invertible topological order _ ; an set phase , an _ intrinsic topological order _ ; and any element of , a _topological order_. we may call a system _ short - range entangled ( sre ) _ if it represents an invertible topological order , and _ long - range entangled ( lre ) _ otherwise .an illustration of these concepts appears in fig.[fig : sre_lre_old_new ] .-dimensional local , gapped systems .each deformation class , shown as a patch here , is called a topological order . each invertible ( respectively non - invertible ) class ,shown as a gray or black ( respectively pink ) patch , is called an invertible ( respectively intrinsic ) topological order .the identity class , shown as a black patch , is called the trivial topological order , which is in particular invertible .a system is called sre ( respectively lre ) if it belongs to an invertible ( respectively intrinsic ) topological order.,height=278 ] [ [ comparison - between - old - and - new - definitions - of - spt - phasessubsubseccomparison_definition_spt_phases ] ] comparison between old and new definitions of spt phases[subsubsec : comparison_definition_spt_phases ] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ to make contact with the old definition of spt phases , we note that all trivial systems in the old sense represent the identity element of , where denotes the trivial group . 
hence , sre systems in the old sense are precisely those sre systems in our sense that happen to lie in this identity class .similarly , spt phases in the old sense are precisely those spt phases in our sense that , by forgetting the symmetry , represent the said identity class .this shows that the spt phases in the old sense are a subset of the spt phases in our sense .one of our results in this paper is that the former form a subgroup , in fact a direct summand , of the latter .these are illustrated in figs.[fig : spt_set_old_new ] and [ fig : sre_lre_old_new ] .what is also clear is that the classification of spt phases ( according to our definition ; same below ) can be nontrivial even for the trivial symmetry group .this amounts to saying that there can exist nontrivial invertible topological orders , or that the set of sre systems are partitioned into more than one deformation classes in the absence of symmetry .examples of systems that represent nontrivial invertible topological orders are given in table [ table : spt_examples ] . while this may seem to contradict the original idea of symmetry protection , it is the new notion of short - range entanglement not the old one that is closely related and potentially equivalent to the condition of unique ground state on spatial slices of arbitrary topology , and in two dimensions , the condition of no nontrivial anyonic excitations , both of which are more readily verifiable , numerically and experimentally , than the deformability to product states ..examples of systems that represent nontrivial invertible topological orders .they are legitimate representatives of spt phases according to our definition but fall outside the realm of refs..[table : spt_examples ] [ cols="<,<,<",options="header " , ] a. each is a cw - complex and each is a pointed cw - complex .b. is a covariant functor . c. is homeomorphic to .d. homotopy equivalent to .e. can be given an abelian group structure if is abelian .[ exm : eilenberg_maclane_spectrum ] given any discrete abelian group , the eilenberg - mac lane spaces form an -spectrum , called the eilenberg - mac lane spectrum of .more precisely , the eilenberg - mac lane spectrum of consists of a generalized cohomology theory is a theory that satisfies the first six of the seven eilenberg - steenrod axioms plus milnor s additivity axiom .inclusion of the seventh , dimension axiom of eilenberg and steenrod s would force the theory to be an ordinary one . herewe define generalized cohomology theories in an equivalent but more compact way .a. homotopy : pointed homotopic maps in induce identical homomorphisms in ; b. exactness : given any pair , there is a long exact sequence where is the inclusion map and is the quotient map ; c. wedge : given any family of pointed spaces , , the inclusion maps induce an isomorphism a. homotopy : homotopic maps in induce identical homomorphisms in ; b. exactness : given any pair , there is a long exact sequence where and are the inclusion maps . c. excision : given a triple with ,the quotient map induces an isomorphism d. 
additivity : given any family of pairs , , the inclusion maps induce an isomorphism every reduced generalized cohomology theory canonically determines an unreduced generalized cohomology theory , and vice versa , as follows .given a reduced theory , we define an unreduced theory according to with the convention .given an unreduced theory , we define a reduced theory according to to make contact with definitions[dfn : unreduced_generalized_cohomology_theory ] and [ dfn : reduced_generalized_cohomology_theory ] , we need the pivotal brown representability theorem ( see e.g.ref. or theorems 4.58 and 4e.1 of ref. ) .every -spectrum defines a reduced generalized cohomology theory according to conversely , every reduced generalized cohomolog theory can be represented by an -spectrum this way.[thm : brown_representability_theorem ] definitions [ dfn : unreduced_generalized_cohomology_theory ] and [ dfn : reduced_generalized_cohomology_theory ] differ from definitions [ dfn : unreduced_generalized_cohomology_theory_2 ] and [ dfn : reduced_generalized_cohomology_theory_2 ] in two subtle ways , even when the brown representability theorem is assumed .first , definitions [ dfn : unreduced_generalized_cohomology_theory ] and [ dfn : reduced_generalized_cohomology_theory ] treated -spectrum as part of the data of a generalized cohomology theory , but in reality different -spectra can represent the same theory ( although , in the category of spectra , a representing spectrum is determined by the theory up to isomorphism , in view of the yoneda lemma ) .it was because of the physical interpretations of -spectrum that we decided to treat it as part of the data .second , in definition [ dfn : unreduced_generalized_cohomology_theory ] , an unreduced generalized cohomology theory was only evaluated on individual spaces not pairs .the connection is given by it is then easy to show that }\ ] ] for any -spectrum that represents the corresponding reduced theory , in accord with definition [ dfn : unreduced_generalized_cohomology_theory ] .
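to make the correspondence explicit, the relations alluded to in the last paragraph can be written as follows, with $h$ the unreduced theory, $\tilde h$ the reduced theory and $\{E_d\}$ a representing $\Omega$-spectrum; this is a reconstruction of the standard formulas from the surrounding definitions rather than a quotation of the original display equations.

```latex
% unreduced from reduced (X_+ denotes X with a disjoint basepoint added):
h^d(X, A) \cong \tilde h^d(X/A), \qquad
h^d(X) := h^d(X, \varnothing) \cong \tilde h^d(X_+),
% reduced from unreduced (x_0 the basepoint of X):
\tilde h^d(X) \cong h^d(X, x_0),
% Brown representability: an \Omega-spectrum \{E_d\} with E_d \simeq \Omega E_{d+1}
% represents the reduced theory by pointed homotopy classes of maps,
\tilde h^d(X) \cong [X, E_d], \qquad
h^d(X) \cong [X_+, E_d].
```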
|
a number of proposals with differing predictions ( e.g.borel group cohomology , oriented cobordism , group supercohomology , spin cobordism , etc . ) have been made for the classification of symmetry protected topological ( spt ) phases . here we treat various proposals on an equal footing and present rigorous , general results that are independent of which proposal is correct . we do so by formulating a minimalist generalized cohomology hypothesis , which is satisfied by existing proposals and captures essential aspects of spt classification . from this hypothesis alone , formulas relating classifications in different dimensions and/or protected by different symmetry groups are derived . our formalism is expected to work for fermionic as well as bosonic phases , floquet as well as stationary phases , and spatial as well as on - site symmetries . symmetry protected topological phases , generalized cohomology theories
|
analytical techniques developed in the statistical physics of spin models have been widely employed in the analysis of systems in a wide variety of fields , such as neural networks , econophysical models , and error - correcting codes . analogy is drawn between the interactions and dynamics commonly present in the spin models and other systems .consequently , notions such as cost functions and noises can find their corresponding physical quantities in spin models .this parallelism contributes to the success of statistical mechanics in various fields .recently , a statistical physics perspective was successfully applied to the problem of resource allocation on sparse random networks .resource allocation is a well known network problem in the areas of computer science and operations management .it is relevant to applications such as load balancing in computer networks , reducing internet traffic congestion , and streamlining network flow of commodities . in a typical setup, each node of the network has its own demand or supply of resources , and the task is to transport the resources through the links to satisfy the demands while a global transportation cost function is optimized .the work in can be extended to consider the effects of bandwidths of the transportation links . in communication networks ,connections usually have assigned bandwidths .bandwidths limit the currents flowing in the links or , in equivalent models , the interaction strengths defined on the links connecting the site variables .the significance of finite bandwidths was recently recognized in several similar problems .for example , transportation problems with global constraints on interaction strengths were studied , through a model of transportation network with limited total conductance , and another model of transportation network with limited total pipe volume or surface area which increase the resistance of pipes . among these studies , constraints on interaction strengthswere implemented as limits on conductance .the purpose of this paper is twofold .first , we demonstrate the close relation between statistical mechanics and distributed algorithms .such close relations have facilitated statistical mechanics to provide insights to a number of principled algorithms recently , as illustrated by the relation between bethe approximation and error - correcting codes , and probabilistic inference , as well as the replica symmetry - breaking ansatz and the survey propagation algorithm in -satisfiability problems .these contributions were made in problems with discrete variables . in this paper , we extend the bethe approximation to the resource allocation involving continuous variable with finite bandwidths . 
since bandwidths limit the currents along the links , both the message - passing and price iteration algorithms proposed in have to be modified , which enable us to find the optimal solutions without the need of a global optimizer .the second purpose is to study the behavior of the optimized networks when the connectivity and bandwidth changes , using insights generated from the recursion relations of the free energy .we observe a number of interesting physical phenomena in networks with finite bandwidths .for example , there is the emergence of _ bottlenecks _ when the bandwidth decreases , resulting in the shrinking of the fraction of unsaturated links .we also identify the correlations between the capacities of the nodes and their roles as sources , sinks and relays .we find that resources are more efficiently distributed with increasing connectivity .scaling relations are found in the distributions of currents and chemical potentials . in the high connectivity limit , we find a phase transition at a critical bandwidth , above which clusters of balanced nodes appear , characterised by a profile of homogenized resource distribution reminiscent of the maxwell s construction in thermodynamics . when adapted to scale - free networks , changes in this profile enable us to identify the enhanced homogeneity of resources brought by the presence of hubs to nodes with low connectivity .the paper is organized as follows . in section[ sec : model ] , we introduce the general model , and the analysis is presented in section [ sec : analysis ] .we demonstrate the conversion of the derived recursive equations into algorithms in section [ sec : mp ] . in section [ sec : sim ] , we compare the analytic solutions with numerical simulations in networks with low connectivity , and report on the bottleneck effect .we will examine the limit of high connectivity in section [ sec : highc ] .we conclude the paper and point out the potential applications of the model in section [ sec : conclusion ] .we address the problem of resource allocation on random sparse networks of nodes and links .each node of the network has its own supply or demand of resources , and the task is to transport the resources through the links to satisfy the demands while optimizing a cost function of transportation costs .the amount of resources transported through a link is limited by the bandwidth , which leads to extra constraints in the resource allocation problem .bandwidth limitations require us to generalize the problem in the following way . in the case of infinite bandwidths ,currents along the links can be set arbitrarily large to satisfy the demands of all nodes .hence hard constraints on the satisfiability of the node demands can be realized , provided that the networkwide supply of resource is greater than their resource demand . 
on the other hand , in the present model of finite bandwidth ,a node with resource demand can still experience shortage even though its neighbors have adequate supply of resources , since the provision of resources can be limited by the bandwidths of the links .hence , the hard constraints on node satisfaction of the infinite - bandwidth model is replaced by soft constraints , and the energy function is generalized to include the cost of unsatisfaction , or shortage of resources .specifically , we consider a network with nodes , labelled .each node is randomly connected to other nodes .the connectivity matrix is given by for connected and unconnected node pairs respectively .except for the discussion on scale - free networks in section [ sec : highc ] b4 , we focus on sparse networks , namely , those of intensive connectivity .each node has a capacity randomly drawn from a distribution .positive and negative values of correspond to supply and demand of resources respectively .the task of resource allocation involves transporting resources between nodes such that the demands of the nodes can be satisfied to the largest extent .hence we assign to be the _ current _ drawn from node to , aiming at reducing the _ shortage _ of node defined by the magnitudes of the currents are bounded by the _ bandwidth _ , i.e. , . for simplicity and clarity, we deal with the case of homogeneous connectivity and bandwidth in the paper , but the analyses and algorithms can be trivially generalized to handle real networks having heterogeneous connectivity and bandwidth .to minimize the shortage of resources after their allocation , we include in the total cost both the shortage cost and the transportation cost . hence , the general cost function of the system can be written as the summation corresponds to summation over all node pairs , and is a quenched variable defined on node . in the present model of resource allocation ,the first and second terms correspond to the transportation and shortage costs respectively .the parameter corresponds to the _ resistance _ on the currents , and is the capacity of node .the transportation cost can be a general even function of .( our model can be generalized to consider transportation costs that are odd functions of . in such cases ,one simply has to introduce extra quenched variables of the links representing the favored directions of the currents . ) in this paper , we consider and to be concave functions of their arguments , that is , and are non - decreasing functions .specifically , we have the quadratic transportation cost , and the quadratic shortage cost .as considered previously , other concave and continuous functions of , such as the anharmonic function in , result in similar network behavior .the model can also be extended to probabilistic inference on graphical models . in this context may represent the coupling between observables in nodes and , may correspond to the logarithm of the prior distribution of , and the logarithm of the likelihood of the observables .the analysis of the model is made convenient by the introduction of the variables .it can be written as the minimization of eq .( [ e_define ] ) in the space of and , subject to the constraints and the constraints on the bandwidths of the links we consider the dual of the original optimization problem . 
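before passing to the dual, it is worth writing the primal objective explicitly. the sketch below uses phi(y) = y^2/2 for the transportation cost and a shortage cost psi(xi) = xi^2 charged only when xi < 0; these quadratic forms are our reading of the costs quoted above, and the helper names are ours.

```python
import numpy as np

def shortage(Lambda, A, y):
    # xi_i = Lambda_i + sum_j A_ij y_ij, where y[i, j] is the current drawn
    # by node i from node j, so y is antisymmetric (y[j, i] = -y[i, j])
    return Lambda + (A * y).sum(axis=1)

def total_cost(Lambda, A, y, R=1.0):
    # transportation cost plus shortage cost; each link (ij) is counted once
    xi = shortage(Lambda, A, y)
    transport = 0.5 * R * np.triu(A * y**2, k=1).sum()   # phi(y) = y^2 / 2 (assumed)
    short = np.sum(np.where(xi < 0.0, xi**2, 0.0))        # psi(xi) = xi^2 for xi < 0 (assumed)
    return transport + short

def feasible(A, y, W):
    # bandwidth constraint |y_ij| <= W on every existing link, y antisymmetric
    return np.allclose(y, -y.T) and np.all(np.abs(y[A > 0]) <= W + 1e-12)
```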
introducing lagrange multipliers to the above inequality constraints ,the function to be minimized becomes \nonumber\\ + \sum_{(ij)}\ca_{ij}\biggl[r\phi(y_{ij})+\gamma^+_{ij}(w - y_{ij})+\gamma^-_{ij}(w+y_{ij})\biggr],\end{aligned}\ ] ] with the kuhn - tucker condition and the constraints , and .optimizing with respect to , one obtains ^{-1}\biggl(\frac{x}{r}\biggr)\biggr]\biggr\}.\end{aligned}\ ] ] the lagrange multiplier is referred to as the chemical potential of node , and is the derivative of with respect to its argument .the function relates the potential difference between nodes and to the current driven from node to . for the quadratic cost , it consists of a linear segment between reminiscent of ohm s law in electric circuits . beyond this range, is bounded above and below by respectively .thus , obtaining the optimized configuration of currents among the nodes is equivalent to finding the corresponding set of chemical potentials , from which the optimized s are then derived from .this implies that we can consider the original optimization problem in the space of chemical potentials .the network behavior depends on whether is zero or not ; can be considered as the friction necessary for the chemical potentials to overcome when the shortage changes from zero to nonzero . for the frictionless case of , to which the quadratic considered in this paperbelongs , two types of nodes can be identified .those nodes with excess resources are characterized by and .those nodes with resource shortage are characterized by and . for the friction case of ,there is a third type of nodes with fully utilized resources and characterized by and .three types of links can be identified .those links with are referred to as the _ saturated _ links . those with and are referred to as _ unsaturated _ and _ idle _ links respectively .we introduce the free energy at a temperature , where is the partition function .\nonumber\\\end{aligned}\ ] ] the statistical mechanical analysis of the free energy can be carried out using the replica method or the bethe approximation , both yielding the same results .the replica approach for networks with finite bandwidths is a direct generalization of the case with infinite bandwidth in . here, we describe the bethe approach , whose physical interpretation is more transparent . in large sparse networks, the probability of finding loops of finite lengths on the network is low , and the local environment of a node resembles a tree . in the bethe approximation, a node is connected to branches of the tree , and the correlations among the branches are neglected . in each branch , nodes are arranged in generations , a node is connected to an ancestor node of the previous generation , and another descendent nodes of the next generation .we consider the vertex of a tree .we let be the free energy of the tree when a current is drawn from the vertex by its ancestor node .one can express in terms of the free energies of its descendents , \biggr\},\end{aligned}\ ] ] where represents the tree terminated at the descendent of the vertex , and is the capacity of .we then consider the free energy as the sum of two parts , where is the number of nodes in the tree , and is the vertex free energy per node . is referred to as the _ vertex free energy_. note that when a vertex is added to a tree , there is a change in the free energy due to the added vertex . 
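before turning to the cavity recursion, it helps to have the current-potential relation of eq. ([solutiony]) in explicit form for the quadratic transportation cost, together with the resulting link classification. the sketch below encodes our reading of the text: an ohmic segment of slope 1/R for potential differences up to RW in magnitude, clipped at the bandwidth.

```python
import numpy as np

def Y(x, R=1.0, W=1.0):
    # optimal current on a link as a function of the potential difference
    # x = mu_j - mu_i, for phi(y) = y^2/2: linear ("ohmic") for |x| <= R*W,
    # saturated at +-W beyond that range
    return np.clip(x / R, -W, W)

def classify_link(y, W, tol=1e-9):
    if abs(abs(y) - W) < tol:
        return "saturated"
    if abs(y) < tol:
        return "idle"
    return "unsaturated"
```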
in the language of the cavity method , the vertex free energies are equivalent to the _ cavity fields_ , since they describe the state of the system when the ancestor node is absent . from eq .( [ eq : free ] ) , we obtain a recursion relation for the vertex free energy . in the zero temperature limit, this relation reduces to - f_{\rm av}.\end{aligned}\ ] ] note that has to be subtracted from the above relation , since the number of nodes increases by 1 when a new vertex is added to the trees .the average free energy is obtained by considering the average change in the free energy when a vertex is connected to its neighbors , resulting in \biggr\rangle_\lambda.\end{aligned}\ ] ] where is the capacity of the vertex fed by trees , and represents the average over the distribution .the shortage distribution and current distribution can be derived accordingly .due to the local nature of the recursion relation ( [ recur ] ) , the optimization problem can be solved by message - passing approaches , which have been successful in problems such as error - correcting cods and probabilistic inference . however , in contrast to other message - passing algorithms which pass conditional probability estimates of discrete variables to neighboring nodes , the messages in the present context are more complex , since they are free energy functions of the continuous variable .inspired by the success of replacing the function messages by their first and second derivatives in , we follow the same route and simplify the messages devised for this problem .note that the validity of this simplification is not obvious in this problem , owing to the non - quadratic nature of the energy function as a result of the finite bandwidths .these additional constraints increase the complexity of the problem and complicate the messages .we form two - parameter messages using the first and second derivatives of the vertex free energies .let be the messages passed from node to its ancestor node , based on the messages received from its descendents in the tree .the recursion relation of the message can be obtained by minimizing the vertex free energy in the space of the current adjustments drawn from the descendents .we minimize + \psi(\xi_j),\end{aligned}\ ] ] subject to the constraints together with the constraints on bandwidths , and represent the first and second derivatives of at respectively .we introduce lagrange multiplier for constraints ( [ mpconstraint ] ) . after optimizing the energy function of node ,the messages from node to are given by \biggr\}^{-1 } & for ,\\ \\\displaystyle\biggl\ { \psi''(\xi)^{-1 } + \sum_{k\ne i } \ca_{jk}(r\phi''_{jk } + b_{jk})^{-1 } \\\times\theta\biggl[w-\biggr|y_{jk } -\frac{r\phi'_{jk}+a_{jk}+\mu_{ij}}{r\phi''_{jk}+b_{jk}}\biggr|\biggr]\biggr\}^{-1 } & for ,\\ \\ } \end{aligned}\ ] ] where (x ) + x,\end{aligned}\ ] ] and is defined by \biggr\}\end{aligned}\ ] ] these variables can be interpreted as the cavity variables of node in the absence of , when a current is drawn from node . is the cavity chemical potential of node . is the cavity shortage of resource at node when takes the value . is then the corresponding dissatisfaction cost per unit resource of node .the condition that in eq .( [ mpmu ] ) is thus to require the chemical potential to be set at the minus of the dissatisfaction cost per unit resource shortage . 
for the frictionless case of , changes continuously when varies from zero to negative .hence when changes , changes continuously , resulting in a continuous change in the chemical potential as well as the first derivatives of the vertex free energy function .hence , for the quadratic load balancing task , defined by , the vertex free energies in the recursion relation eq .( [ recur ] ) will be piecewise quadratic with continuous slopes , with respect to continuous changes of currents , implying the consistency of the proposed optimization algorithm .this validifies the replacement of the message functions by the two - parameter messages .since the messages are simplified to be the first two derivatives of the vertex free energies , it is essential for the nodes to determine the _ working points _ at which the derivatives are taken .that is , each node needs to estimate the current drawn by its ancestor .this determination of the working point is achieved by passing additional information - provision messages among the nodes . here, we describe the method of backward information - provision messages .this is in contrast to conventional message - passing algorithms , in which messages are passed in the forward direction only .an alternative method of forward information - provision messages can also be formulated following .after the minimization of the vertex free energy at a node , forward messages are sent forward from node to ancestor node .optimal currents are computed and sent backward from node to the descendent nodes .these backward messages serve as a key in information provision to descendents , so that the derivatives in the subsequent messages are to be taken at the updated working points . minimizing the free energy ( [ mpl ] ) with respect to , the backward message is found to be \biggr\}\end{aligned}\ ] ] when implemented on real networks , the nodes are not divided into fixed generations , and there are no fixed ancestors or descendents among the neighbors of a particular node .individual nodes are randomly chosen and updated , by randomly setting one of its neighbors to be a temporary ancestor .this results in independent updates of the currents and in the opposite directions of the same link . to achieve more efficient convergence of the algorithm, we can set during the updates .for the cost function used in this work , we find that this procedure speeds up the balance of demands from the two nodes connected by the link .an alternative distributed algorithm can be obtained by iterating the chemical potentials of the nodes .the optimal currents are given by eq .( [ solution ] ) in terms of the chemical potentials which , from eqs .( [ xi_define ] ) and ( [ constraints ] ) , are related to their neighbors via where and are given by with function again given eq .( [ solutiony ] ) .this provides a simple local iteration method for the optimization problem in which the optimal currents can be evaluated from the potential differences of neighboring nodes .simulation results in section [ sec : sim ] show that the chemical potential iterations have an excellent agreement with the message - passing algorithm in eqs.([mpab])-([mph ] ) .we may interpret this algorithm as a price iteration scheme by noting that the lagrangian in eq .( [ lagr ] ) can be written as where therefore the problem can be decomposed into independent optimization problems , each for a current on a link . 
is the storage price per unit resource at node , and each problem involves balancing the transportation cost on the link , and the storage cost at node less that at node . from eqs .( [ cpmu ] ) and ( [ cphg ] ) , we see that when node is short of resources , .that is , the storage price at a node is equal to minus the dissatisfaction energy of that node .this means that the algorithm gives bonuses to the link controls instead of charging them , so as to encourage them to station their resources at the nodes with resource shortage .the amount of bonus per unit resource compensates exactly the dissatisfaction energy per unit resource .this corresponds to a pricing scheme for the individual links to optimize , which simultaneously optimizes the global performance .the concept is based on the same consideration as those used in distributed adaptive routing algorithms in communication networks .we examine the properties of the optimized networks by first solving the theoretical recursive equation ( [ recur ] ) numerically to obtain various quantities of interest , including the energy , the fraction of idle links and saturated links .the results are then compared with the simulation results obtained by the message - passing and price iteration algorithms .all experiments assume a quadratic shortage cost for shortage given in eq .( [ xicon ] ) , and a quadratic transportation cost , and the capacity distribution a gaussian with mean and variance 1 . to solve numerically the recursive equation ( [ recur ] ), we have discretized the vertex free energy functions into a vector , whose component is the value of the function corresponding to the current . at each generation of the calculation ,a node is randomly chosen and its capacity is drawn from the distribution .the node is randomly connected to nodes of the previous generation with vertex free energies . then the vertex free energy is found by minimizing the right hand side of eq .( [ recur ] ) using standard numerical methods for every discretized components of , subject to the constraints of .thus a new generation of vertex energy functions is generated and the process continue in a recursive manner .figure [ varyw_smallc ] shows the average energy as a function of the bandwidth , obtained by , respectively , the message - passing and the price iteration algorithm simulations , and the recursive equation ( [ recur ] ) .simulation results collapse almost perfectly with the recursive equation .we find that the average energy increases with decreasing bandwidths .physically , decreasing bandwidth corresponds to the increasing limitations in the allocation of resources .thus , it is natural to have higher energy . in the limit of zero bandwidth with small finite connectivity , the solution is equivalent to the initially unoptimized condition .as the bandwidth vanishes , no resources are transported among the network , and the average network energy is given by in the limit of zero bandwidth , a link can only be idle or saturated , with no intermediate unsaturated state .a link is idle only if the two connected nodes are satisfied already , with their own initial resources .the probability of finding an idle link is thus determined by the initial resources of the connected nodes , yielding the fraction of idle links as ^ 2 = \frac{1}{4}\left[1-{\rm erf}\left(\frac{\langle\lambda\rangle}{\sqrt{2 } } \right)\right]^2,\end{aligned}\ ] ] where the last equality is specific to the gaussian . 
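a schematic of the discretised recursion step described at the start of this section is given below for connectivity c = 3, so that each vertex receives two descendant free energies. the brute-force minimisation over the descendant currents on the grid and the subtraction of the value at zero current (to keep the vertex free energies finite) are implementation choices of ours; only the overall structure of eq. ([recur]) is taken from the text, and the quadratic cost functions are the same assumed forms as before.

```python
import numpy as np

R, W, n_grid = 1.0, 1.0, 41
y_grid = np.linspace(-W, W, n_grid)          # discretised current drawn by the ancestor

def phi(y):                                   # transportation cost (assumed y^2/2)
    return 0.5 * y**2

def psi(xi):                                  # shortage cost, charged only when xi < 0 (assumed)
    return np.where(xi < 0.0, xi**2, 0.0)

def recursion_step(F_desc, Lambda):
    # one zero-temperature update of the vertex free energy F_V(y);
    # F_desc holds the c - 1 = 2 descendant free-energy vectors on y_grid
    F1, F2 = F_desc
    # cost of drawing (y1, y2) from the descendants, for every pair on the grid
    cost12 = (F1 + R * phi(y_grid))[:, None] + (F2 + R * phi(y_grid))[None, :]
    F_new = np.empty_like(y_grid)
    for a, y in enumerate(y_grid):
        xi = Lambda + y_grid[:, None] + y_grid[None, :] - y
        F_new[a] = np.min(cost12 + psi(xi))
    return F_new - F_new[np.argmin(np.abs(y_grid))]   # keep the functions finite
```

iterating this step over a large pool of free-energy vectors, each time drawing a fresh capacity from the capacity distribution and two random members of the pool, reproduces the kind of recursive procedure used to generate the theoretical curves discussed above.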
the fraction of saturated links is then given by . the corresponding limit is shown in the inset of fig. [ varyw_smallc ], which is consistent with the simulation results. another interesting physical consequence of decreasing bandwidth is also shown in the inset of fig. [ varyw_smallc ]. we note that as the bandwidth decreases, the fraction of saturated links increases, which is a direct consequence of the attempt to minimize the shortages of nodes fed by the saturated links. a similar argument would lead us to anticipate a decreasing fraction of idle links as the bandwidth decreases, since more links should participate in the task of resource allocation. surprisingly, we notice an increasing fraction of idle links as the bandwidth decreases, in contrast with our anticipation. this is a consequence of the _ bottleneck effect _, as illustrated in fig. [ bottle ]. when the bandwidth decreases, resources transferred from the secondary neighbors may become redundant, since resources from nearest neighbors already saturate the link to the dissatisfied node, which can therefore be considered a bottleneck in transportation. this makes the resource distribution among the nodes less uniform, since resources are less free to be transported through the links. as a result, more nodes have either excess resources or a large shortage, leading to a higher network energy. on the other hand, the existence of these bottlenecks confirms the efficiency of our algorithms, since redundant flows to distant nodes are eliminated. the existence of bottlenecks is common in many real networks. among the most common examples are the bottlenecks occurring in traffic congestion. further studies can be carried out to minimize the bottleneck effect, such as using heterogeneous bandwidths for different links. the highway is an example of enlarging the bandwidth on a link with heavy traffic. similar considerations have been applied to the routing of traffic on telecommunication networks. to study the roles played by the different nodes in resource allocation, we consider the example of networks with and very negative, so that all resources in the networks have to be utilized for optimization. we classify the nodes into four classes, the nodes of each class having 0 to 3 saturated links connected to them. figure [ satdis ] shows the conditional capacity distribution of these four classes. for nodes having no saturated links, their participation in the optimization task is relatively inactive, and their capacity distribution is approximately gaussian with an average . for nodes having one saturated link, two peaks are found around and in the conditional capacity distribution, respectively corresponding to nodes acquiring resources from or providing resources to their neighbors, in order to lower the global shortage cost. for nodes having two saturated links, we obtain a trimodal distribution. the right and the left peaks at around and correspond to resource donors and receptors respectively, which export or import resources, saturating two of the connected links. the central peak at corresponds to nodes with initial resources close to the average value. these nodes have little tendency to donate resources to their neighbors or to receive from them. thus they are referred to as _ relays _, which participate in optimization by transferring resources from one neighbor to another.
for nodes with three saturated links ,we observe a _ four - modal _ distribution .similarly , the rightmost and the leftmost peaks correspond to _ sources _ and _ sinks _ of resources respectively .the middle - right and the middle - left peaks correspond to _ source - like relays _ and _ sink - like relays _ respectively , namely , they serve partially as relays and partially as sources and sinks .all these distributions show the strong dependence of the number of saturated links of a node on its own initial resources , which identify an inborn optimization role for each node during the resource allocation .the optimal network behavior can be studied analytically in the limit of high connectivity . in this limit, the bandwidth of the links plays an important role in the scaling laws of the physical quantities of interest .two cases will be considered in this section . in the first case ,the bandwidths of the individual links remains constant when the connectivity scales up .this increases the total available bandwidth connecting a node , and hence the freedom in resource allocation . in the second case ,we scale the bandwidth of the individual links as when the connectivity scales up .hence , the total available bandwidth connecting the individual nodes is conserved , but the burden of transporting resources is divided into smaller currents shared by a larger number of neighbors . to further simplify the analysis , we set , and in the subsequent derivations . in the high connectivity limit ,the magnitudes of the currents in individual links are reduced , owing to the division of the transportation burden among the links .since the bandwidths remain constant , the fraction of saturated links becomes negligible .hence , eq . ( [ mpab2 ] ) converges to the result at the steady state , all current adjustments vanish , reducing eq .( [ mpy ] ) to thus , eq . ( [ mpmu ] ) for in the message - passing algorithms can be simplified to , 0\biggr\ } , \ ] ] where we have approximated by .this expression of corresponds to the shortage on node after optimization for the quadratic cost .we then utilize the fact that the messages from the descendents are independent to each other .this allows us to express the collective effects of the descendents on a node in terms of the statistical properties of the descendents . by virtue of the law of large numbers , it is sufficient to consider the mean and the variance of the messages . in eq .( [ nvba ] ) , the term is negligible in the limit of high connectivity . after applying the law of large numbers to the term , we can write averaging over , which is drawn from a gaussian distribution of mean and variance 1 , we obtain a self - consistent expression for , \nonumber\\ & & -\biggl(\frac{cm_a}{r}-\langle\lambda\rangle\biggr ) h\biggl(\frac{c m_a}{r}-\langle\lambda\rangle\biggr),\end{aligned}\ ] ] where ] results in a total resource transport of transported to or from a node .it provides an extremely large freedom in resource allocation . 
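returning to the finite-connectivity simulations behind fig. [ satdis ], the classification of nodes by the number of saturated links attached to them can be reproduced qualitatively with the sketch below. it reuses the price_iteration function from the earlier sketch (assumed to be saved as price_iteration_sketch.py, a hypothetical module name) together with a strongly negative average capacity, so the donor, receptor and relay peaks described above should emerge in the per-class capacity statistics.

....
# classify nodes by how many of their links carry a saturated current and
# summarise the capacities in each class (cf. the conditional capacity
# distributions discussed above).  thresholds and parameters are illustrative.
import numpy as np
import networkx as nx
from price_iteration_sketch import price_iteration   # hypothetical module (earlier sketch)

c, n, W = 3, 2000, 1.0
G = nx.random_regular_graph(c, n, seed=2)
capacity = np.random.default_rng(2).normal(-2.0, 1.0, size=n)   # very negative mean capacity
mu, currents = price_iteration(G, capacity, W=W, sweeps=3000)

n_sat = np.zeros(n, dtype=int)
for (i, j), y in currents.items():
    if abs(y) >= W - 1e-6:                    # link saturated at the bandwidth
        n_sat[i] += 1
        n_sat[j] += 1

for k in range(c + 1):
    lam_k = capacity[n_sat == k]
    if lam_k.size:
        print("%d saturated links: %5d nodes, capacity mean %+.2f, std %.2f"
              % (k, lam_k.size, lam_k.mean(), lam_k.std()))
....

histogramming lam_k for each class, rather than only printing its mean and spread, is what reveals the uni-, bi-, tri- and four-modal shapes described in the text.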
for positive , the average shortage vanishes in the limit high connectivity limit , corresponding to a zero fraction of unsatisfied nodes .hence , increasing connectivity results in better resource allocation of the system .we obtain the current distribution by combining the expression for in eq .( [ nvba2 ] ) with the equation for current in eq .( [ nvby ] ) , such that .\end{aligned}\ ] ] hence the current distribution is given by for the gaussian distribution , we obtain \\ & & + \frac{c+r}{\sqrt{\pi}}h\biggl(\frac{(c+r)y-2\chi}{\sqrt{2}}\biggr ) \exp\left[-\frac{1}{4}((c+r)^2y^2)\right ] , \nonumber\end{aligned}\ ] ] where .this shows that the rescaled distribution /(c+r) ] is independent of and depends solely on .thus , the width of the distribution scales as .the variable , corresponding to the shortage of a node .this implies that the variance of the shortage distribution also scales as , indicating a more efficient allocation of resources .the absence of the delta function component at implies that none of the nodes are holding excess resources in the negative regime .as both the current and chemical potential distributions are gaussian , the average energy per node is simplified to note that in the high connectivity limit , the average energy per node approaches .this corresponds to the theoretical limit of efficient and uniform resource allocation , in which the total shortage is evenly shared among all nodes .simulation results are compared with the analytical prediction in the high connectivity limit .figure [ gr_nvbydis ] shows the rescaled current distribution /(c+r) ] compared with the high connectivity limit . for negative ,the rescaled distributions are gaussian - like and approaching the high connectivity limit .compared with the corresponding rescaled current distribution , there is a larger dependence on .the inset of fig .[ gr_nvbmudis ] shows the variances of chemical potentials which approach different asymptotic values values as decreases , for different fixed values of connectivity .remarkably , for sufficiently negative , the variances become independent .this further confirms the convergence of chemical potential distributions to universal gaussian distributions in the negative regime .figure [ gr_nvbma ] shows the average shortage , as measured by , obtained from simulations at different connectivities as compared to the theory . in the neighborhood of , as the connectivity increases , gradually approaches the high connectivity limit which corresponds to most uniform allocation of resources given by eq .( [ nvbma ] ) . away from , collapses well with the analytical results . 
remarkably , the distributions of currents and chemical potentials , and the average shortage , all approach their high connectivity limits , for relatively low values of already .we also analyze the finite size effects of the system in approaching the asymptotic behavior of the high connectivity limit .the inset of fig .[ gr_nvbma ] shows the difference between the variance of chemical potentials and its asymptotic value when the connectivity is low , the differences are larger as indicated by the power law fit .to summarize , our analysis shows that in the high connectivity limit , the optimized state of the system is one in which the resources are uniformly allocated , such that the final resources on every individual node are equal .simulations with increasing connectivities reveal the asymptotic approach to this uniform limit , and the deviation from the limit decreases as the connectivity and the system size increase .we now consider the case that the bandwidth of individual links scales as when the connectivity increases , where is a constant .thus the total bandwidth available to an individual node remains a constant .this is applicable to real networks in which the allocation of resources is limited by the processing power of individual nodes .we start by writing the chemical potentials using eq .( [ cpmu ] ) , .\end{aligned}\ ] ] in the high connectivity limit , the interaction of a node with all its connected neighbors become self - averaging , making it a function which is singly dependent on its own chemical potential , namely , physically , the function corresponds to the average interaction of a node with its neighbors when its chemical potential is .thus , we can write eq .( [ vbmu ] ) as ,\end{aligned}\ ] ] where is now a function of , and we have where we have written the chemical potential of the neighbors as , assuming that they are well - defined functions of their capacities . to explicitly derive , we take advantage of the fact that the rescaled bandwidth , vanishes in the high connectivity limit , so that the current function is effectively a sign function , which implies that the current on a link is always saturated .( this approximation is not fully valid if is large but finite , and will be further refined in subsequent discussions . )thus , we approximate .\end{aligned}\ ] ] assuming that is a monotonic function of , then we have ={\rm sgn}(\lambda-\lambda_i) ] and are no longer necessarily equal , and eq .( [ vbmu2 ] ) is no longer valid .nevertheless , eq .( [ vbmu ] ) permits another solution of constant in a range of .this is possible since for any and in this range , implies that the link between nodes and is unsaturated .if an extensive fraction of the links of such a node are connected to other nodes in the range , then the freedom of tuning the currents in the unsaturated links enables the nodes in the range to have the same level of resources after optimization .hence , we propose that the unstable region of should be replaced by a range of constant as shown in fig .[ gr_maxwell](b ) analogous to maxwell s construction in thermodynamics .let and be the end points of the maxwell s construction as shown in fig .[ gr_maxwell](b ) .the position of this construction can be determined by the conservation of resources . since the construction is not necessary for positive , we focus on the case of negative . 
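the breakdown of the single-valued relation between chemical potential and capacity, which signals the need for the maxwell-type construction discussed next, can be illustrated with a short numerical sketch. the closed form of eq. ( [ vbmu2 ] ) is not rendered in this excerpt; the expression below is an assumed reconstruction in which every link carries a saturated current of magnitude w/c and the shortage cost is quadratic, so it should be read as an illustration of the mechanism rather than as the paper's exact formula.

....
# assumed single-node relation in the vanishing-bandwidth limit (W = w/c):
# with saturated (sign-function) currents and a monotonically decreasing
# mu(lambda), the net inflow of a node with capacity lambda is
#   w * (1 - 2*F(lambda)),   F = capacity cdf,
# so its final resource is  lambda + w*(1 - 2*F(lambda))  and, for a quadratic
# shortage cost, mu(lambda) = max(0, -(final resource)).
import numpy as np
from scipy.stats import norm

w, mean_cap = 2.0, -0.5                         # total bandwidth per node, mean capacity
lam = np.linspace(mean_cap - 4.0, mean_cap + 4.0, 801)
final_resource = lam + w * (1.0 - 2.0 * norm.cdf(lam, loc=mean_cap, scale=1.0))
mu = np.maximum(0.0, -final_resource)

# wherever final_resource decreases with lambda the assumed monotonicity of
# mu(lambda) is inconsistent; that interval is the one to be replaced by the
# plateau (maxwell-type construction) described in the text.
falling = np.diff(final_resource) < 0.0
if falling.any():
    lo, hi = lam[:-1][falling].min(), lam[1:][falling].max()
    print("monotonicity breaks for capacities in (%.2f, %.2f)" % (lo, hi))
else:
    print("mu(lambda) is single valued for w = %.2f" % w)
....

for this assumed form the instability first appears once w exceeds 1/(2 rho_max) = sqrt(2 pi)/2, about 1.25 for the unit-variance gaussian; the critical bandwidth discussed in the text follows from the paper's own eq. ( [ vbmu2 ] ) and need not coincide with this number.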
in the high connectivity limit , resources are so efficiently allocated that the resources of the rich nodes are maximally allocated to the poor nodes as much as the total bandwidth allows .let be the capacity of a node beyond which nodes can have positive final resources ( i.e. zero chemical potential ) .nodes with send out their resources without drawing inward currents from their neighbors , and can be regarded as _donors_. the value of is given by the self - consistent equation for gaussian distribution of capacities , ,\end{aligned}\ ] ] by equating the networkwide shortage to minus the initially available resources , we find .\end{aligned}\ ] ] on the right hand side , the first term corresponds to the sharing of shortages among the non - donors , while the second term corresponds to the resources donated by the donors to reduce the total shortage . in the high connectivity limit, this equation becomes substituting eqs .( [ vbmu1a ] ) , ( [ vbmmuint3 ] ) and ( [ vblambdao ] ) , and making use of the symmetry relation we arrive at the condition which implies that the value of should be chosen such that the areas a and b in fig .[ gr_maxwell](b ) , weighted by the distribution , should be equal in direct correspondence with the maxwell s construction in thermodynamics . for capacity distributions symmetric with respect to , we have as a result , the function is given by & for otherwise,\\ } \end{aligned}\ ] ] where as and are respectively given by the lesser and greater roots of the equation nodes with chemical potentials represent clusters of nodes interconnected by an extensive fraction of unsaturated links , which provides the freedom to fine tune their currents so that the shortages among the nodes are uniform .they will be referred to as the _ balanced _ nodes . on the other hand ,nodes outside the balanced clusters are connected by saturated links only .the fraction of balanced nodes is given by the equation note that has the same dependence on for all negative .[ fbal ] shows that when the total bandwidth increases , the fraction of balanced nodes increases , reflecting the more efficient resource allocation brought by the convenience of increased bandwidths .when becomes very large , a uniform chemical potential of networkwide is recovered , converging to the case of non - vanishing bandwidths .the chemical potential distribution can be obtained from eq .( [ vbmuhori ] ) . unlike the corresponding distribution in the case of non - vanishing bandwidths described in the previous subsection, there is no explicit scaling on the connectivity .in addition , it is not purely gaussian. the delta function component corresponds to the balanced clusters .the weight of the delta function component is a measure of the efficiency of resource allocation .we compare the analytical result of in eq .( [ vbmuhori ] ) with simulations in fig .[ gr_vbhori ] . for ,data points of individual nodes from network simulations follow the analytical result of , giving an almost perfect overlap of data .the presence of the balanced nodes with effectively constant chemical potentials is obvious and essential to explain the behavior of the majority of data points from simulations . outside the region of balanced nodes ,the data points follow the tails of the function . for , the analytical shows no turning point as shown in the inset of fig .[ gr_vbhori ] . 
despite the scattering of data points, they generally follow the trend of the theoretical .this scattering effect may be explained by the use of a finite connectivity in simulations .we found that the extent of scattering can be much reduced by increasing the connectivity in the simulations .our analysis can be generalized to the case of large but finite connectivity , where the approximation in eq .( [ vbmmuint2 ] ) is not fully valid .this modifies the chemical potentials of the balanced nodes , for which eq .( [ vbmmuint2 ] ) has to be replaced by + \int_{\lambda_<}^{\lambda_>}d\lambda\rho(\lambda ) \biggl(\frac{\mu(\lambda)-\mu}{r}\biggr ) .\nonumber\\\end{aligned}\ ] ] we introduce an ansatz of a linear relationship between and for the balanced nodes , namely , after direct substitution of eq .( [ vbslantanastz ] ) into given by eq .( [ vbslantmmu ] ) , we get the self - consistent equations for and , thus , the maxwell s construction has a non - zero slope when the connectivity is finite .we remark that the approximation in eq .( [ vbslantmmu ] ) assumes that the potential differences of the balanced nodes lie in the range of , so that their connecting links remain unsaturated .note that the end points of the maxwell s construction have chemical potentials respectively , rendering the approximation in eq .( [ vbslantmmu ] ) _ exact _ at one special point , namely , the central point of the maxwell s construction .hence , this approximation works well in the central region of the maxwell s construction .however , when one approaches the end points of the maxwell s construction , the balanced nodes are also connected to nodes outside the balanced clusters with potential differences less than .hence , we expect to see deviations from the theoretical linear prediction near the end points of the maxwell s construction . the chemical potential distribution can be obtained .compared with corresponding distribution in the high connectivity limit , the delta function component of the balanced nodes is now smeared into a gaussian component lying in a strip of width .it implies a lower efficiency in resource allocation due to the finiteness of the connectivity . in the simulation data shown in fig .[ gr_vbslant ] , the data points of from different resistance follow the trend of the corresponding analytical results , both within and outside the linear region , with increasing scattering within the linear region as increases . as expected , there are derivations between the analytical and simulational results at the two ends of the linear region , with smoothened corners appearing in the simulation data , especially in the case of .we note that when increases , the gradient of the linear region increases , corresponding to a less uniform allocation of resources caused by higher transportation costs during optimization . as shown in the inset of fig .[ gr_vbslant ] , the chemical potential distributions follow the trend of the analytical results . 
remarkably, as evident from eq. ( [ vbslantm ] ), even with constant available bandwidth, increasing connectivity causes to decrease, and hence sharpens the chemical potential distribution. the narrower distributions correspond to higher efficiency in resource allocation. this leads us to realize the potential benefits of increasing connectivity in network optimization even for a given constant total bandwidth connecting a node, despite a decrease in bandwidth on individual links. recent studies show that complex communication networks have highly heterogeneous structures, and the connectivity distribution obeys a power law. these networks are commonly known as scale-free networks and are characterized by the presence of hubs. the presence of these nodes with very high connectivity can modify the network behavior significantly. figure [ scalefree1 ] shows the simulation results of nodes with and in a scale-free network, where is the connectivity of node and is drawn from the distribution when the scale-free network is constructed. the data points of follow the corresponding analytical results of eqs. ( [ vbslantm ] ) and ( [ vbslantb ] ), for both sets of nodes with and . this implies that the previous argument of increasing efficiency with increasing connectivity also holds for scale-free networks, as a smaller gradient is found for nodes with higher connectivity. as can be seen from fig. [ scalefree1 ], the data points are less scattered for nodes with large , implying a more efficient resource allocation for the hubs, in addition to the effect of smaller . more importantly, nodes with low connectivity benefit from the presence of hubs in the networks. to see these benefits, the simulation results for nodes in scale-free networks are compared with those for nodes in regular networks of the same connectivity. as shown in fig. [ scalefree1](b), the data points from regular networks are more scattered away from the maxwell's construction, when compared with those from scale-free networks. this shows that the presence of hubs increases the efficiency of the entire network, especially for nodes with low connectivity. this supports the view that scale-free networks are better candidates for resource allocation than regular networks. we have applied statistical mechanics to a system in which the magnitudes of the interactions (such as currents in resource allocation) between on-site variables (such as the shortages) are constrained. this allows us to study an optimization task of resource allocation on a sparse network, in which nodes with different capacities are connected by links of finite bandwidths. an analogy can be drawn between the interaction of spins in magnetic systems and that of the resources of nodes in networks. the bandwidths serve as constraints on the interaction magnitudes and limit the information exchange among the neighbors. despite these additional constraints, we found that the analyses of the unrestricted case in are applicable after appropriate adaptations, such as allowing for shortages with finite penalty and obtaining a non-linear relationship between currents and potential differences. by adopting suitable cost functions, such as quadratic transportation and shortage costs, the model can be applied to the study of realistic networks with constrained transport between neighbors. in this paper, we focus on cases in which the shortage dominates.
by employing the bethe approximation or equivalently the replica method ,recursive relations of the vertex free energies can be derived .analytically , the recursive relations enable us to make theoretical predictions of various quantities of interest , including the average energy , the current distribution , and the chemical potential distribution .the predictions are confirmed by simulation results . in particular , the study reveals interesting effects due to finite bandwidths . when the bandwidth decreases , resource allocation is less efficient , and links are more prone to saturation .a consequence is the creation of bottlenecks , which refer to the saturation of links feeding the poor nodes , rendering the secondary provision of resources from their next nearest neighbors redundant .this causes certain links previously assigned for secondary transport to become idle , lowering the participation of individual links in global optimization and making the resources less uniformly distributed . in the context of resource allocation, further studies can be carried out to suppress the bottleneck effect .an equally remarkable phenomenon is found in networks with fixed total bandwidths per node , where bandwidths per link vanish in the high connectivity limit and the relation between the chemical potential and capacity is well defined . for sufficiently large total bandwidths , we find a phase transition beyond which the chemical potential function has to be described by the maxwell s construction , implying the existence of clusters of balanced nodes having a uniform shortage among them . in the case of large but finite connectivity , the maxwell s construction becomes a linear region with nonzero slope , implying a less uniform shortage among the balanced nodes .this reflects the benefits of increasing the number of connections in resource allocation .when adapted to scaled - free networks , deviations from the maxwell s construction reveal that the presence of hubs is able to homogenize the resources among the nodes with low connectivities . for future extensions ,the theory and algorithms in this paper can be generalized to model real networks with inhomogeneous connectivity and bandwidths .scale - free networks with adjustable bandwidth distributions would be one of the most interesting systems to study .further studies can also be done on minimizing the bottleneck effect by using heterogeneous bandwidths on different links .it is also worthwhile to consider other non - linear shortage costs .we believe that the techniques presented in this paper are useful in many different network optimization problems and will lead to a large variety of potential applications .we thank david saad for very meaningful discussions . this work is supported by the research grant council of hong kong ( grant numbers hkust 603606 and hkust 603607 ) .10 m. mzard and r. zecchina , _ random k - satisfiability problem : from an analytic solution to an efficient algorithm _ , 2002 phys .e * 66 * 056126 ; m. mzard , g. parisi and r. zecchina , _ analytic and algorithmic solution of random satisfiability problems _ , 2002 science * 297 * 812
|
we apply statistical physics to study the task of resource allocation in random sparse networks with limited bandwidths for the transportation of resources along the links . recursive relations from the bethe approximation are converted into useful algorithms . _ bottlenecks _ emerge when the bandwidths are small , causing an increase in the fraction of idle links . for a given total bandwidth per node , the efficiency of allocation increases with the network connectivity . in the high connectivity limit , we find a phase transition at a critical bandwidth , above which clusters of balanced nodes appear , characterised by a profile of homogenized resource allocation similar to the maxwell s construction .
|
the problem of evaluation of statistical significance of observations when searching for gamma ray sources using air shower experiments remains one of highest importance . the emission from a source would appear as an excess number of events coming from the directions of the candidate over the background level .the difficulty arises because the signal to background ratio as registered by the detectors in this energy range is often quite unfavorable , requiring careful examination of data . in this paper, we expand on the usually adopted procedure of the significance calculation described by , in particular on the conditions of its applicability .the prescription relies on the knowledge of the expected background level , methods of estimation of which are reviewed in .however , the standard significance calculation method is not compatible with these methods of background estimation . in this paperwe introduce a self consistent scheme for a source detection and discuss some of its properties .the method is applicable to point and extended source searches as well as to searches for transient phenomena .we show how practical problems specific to an experiment can be incorporated into the method .the methods described in this paper were developed for , and applied in two gamma ray searches using the milagro water cherenkov air shower detector .a typical air shower detector registers particles from air showers that survive to the ground level .the recorded information is used to provide the direction of the incident primary particle and perhaps provide some information on its energy and type . among the particles entering the earth s atmosphere gamma rays present a very small fraction , often less that .most of the air showers are induced by charged cosmic rays that form a background to the search for gamma initiated showers from a source .special techniques and algorithms have been developed to suppress this background in order to increase the sensitivity to gamma primaries .these , however , are limited due to similarities of the cascades produced by primaries of both types .the application of these techniques helps but does not solve the problem of gamma ray source detection in the presence of a background .therefore , one of the problems in gamma ray astronomy using air shower technique is to be able to determine the level of background .this problem is rather difficult if one tries to calculate it from the first principles , because it would require exact knowledge of the details of the detector operation , its sensitivity which may depend on voltages , temperature , properties of atmosphere and direction reconstruction algorithms .the problem is solved by measuring the background level using the same instrument .thus , in a typical experiment , two measurements are performed one corresponding to the observation of the candidate ( so called _ on - source _ ) and the other is the measurement of the corresponding background level ( so called _ off - source _ observation ) .then , a decision is made as to the plausibility of the existence of the source . 
because the results of the on- and off - source observations represent random numbers drawn from their respective parent distributions , the question of the existence of the source is the question of whether the numbers are drawn from the same or different parent distributions .it is addressed by a hypothesis test .many statistical tests have been used to test the null hypothesis of the absence of a source given two independent counts from the on - source and from the off - source regions accumulated during time periods and respectively with all other conditions being equal .an improvement was proposed by and is based on the test statistic : because each event carries no information about another , each of the observed counts can be regarded as being drawn from a poisson distribution with some value of the parameter ( adjusted for the duration of observation ) .the motivation provided by the authors is that the numerator may be interpreted as the excess number of events from source over the expected background and denominator is the maximum likelihood estimate of the standard deviation of the numerator given that the null hypothesis is true .it has been argued by the authors that for the case of large , and if the null hypothesis is true , the distribution of becomes gaussian with zero mean and unit variance .this statement is based on the well known fact that poisson distribution approaches that of gaussian for large values of parameter : for a measured value of , the calculation of the -value ( which we denote by ) becomes simple : when looking for a source and when looking for a sink .the null hypothesis is rejected with significance if . the significance is set in advance , before the test is performed and its choice is based on the penalty for rejecting the null hypothesis when it is true .( scientific false discoveries should not happen very often , and thus the significance is usually selected as . ) because of the one - to - one correspondence between and , the significance of a measurement can be quoted in the units of .when the null hypothesis is true , the distribution of statistic ( equation [ equation : signif : lima ] ) approaches the normal distribution in the limit of large numbers .indeed , substituting the factorial in the poisson distribution using the stirling formula , one obtains : or expanding the right hand side into taylor series in in the vicinity of , denoting and keeping first two non - zero terms , we obtain : thus , it is seen that the poisson distribution approaches that of gauss in a narrow region around its mean : with . substituting with and with corresponding estimates of obtain the region around zero where the distribution of statistic is approximately normal .that is , for {36\alpha ( 1+\alpha)^{2}(n_{1}+n_{2 } ) } , \ ; \right .\label{equation : lima_bound } \\ & & \left .\sqrt[6]{36\alpha^{-3}(1+\alpha)^{2}(n_{1}+n_{2 } ) } \right ) \nonumber\end{aligned}\ ] ] the error on the -value due to this approximation does not exceed .figure [ fig : lima ] shows the results of a monte carlo simulations for distribution of the statistic .it can be seen that the distribution is approximately normal in the vicinity of zero . 
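the displayed statistics are not rendered in this excerpt; from the verbal descriptions they are the familiar forms of li & ma (1983), with the excess in the numerator and either the null-hypothesis or the alternative-hypothesis estimate of its standard deviation in the denominator (the second form is discussed in the next paragraph). the following sketch evaluates both and checks by monte carlo that, under the null hypothesis and for large counts, each is close to a standard normal.

....
# assumed forms (consistent with the verbal description; li & ma 1983):
#   s_null = (N_on - alpha*N_off) / sqrt(alpha*(N_on + N_off))      null-hyp. variance
#   s_alt  = (N_on - alpha*N_off) / sqrt(N_on + alpha**2 * N_off)   alt.-hyp. variance
import numpy as np

def s_null(n_on, n_off, alpha):
    return (n_on - alpha * n_off) / np.sqrt(alpha * (n_on + n_off))

def s_alt(n_on, n_off, alpha):
    return (n_on - alpha * n_off) / np.sqrt(n_on + alpha ** 2 * n_off)

rng = np.random.default_rng(0)
alpha, mu_off, trials = 0.2, 500.0, 200_000
n_off = rng.poisson(mu_off, trials)
n_on = rng.poisson(alpha * mu_off, trials)          # null hypothesis: background only

for name, s in (("null-variance statistic", s_null(n_on, n_off, alpha)),
                ("alt.-variance statistic", s_alt(n_on, n_off, alpha))):
    print("%-24s mean %+.3f  std %.3f  P(s > 3) = %.2e"
          % (name, s.mean(), s.std(), np.mean(s > 3.0)))
print("standard normal tail:    P(s > 3) = 1.35e-03")
....

pushing mu_off down moves the 3-sigma point toward the edge of the validity window of eq. ( [ equation : lima_bound ] ), and the agreement with the gaussian tail is expected to degrade accordingly.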
by the same argumentsit may be shown that within essentially the same region around zero , another statistic is also distributed normally .the motivation for statistic is similar to that of statistic ( equation [ equation : signif : lima ] ) that the numerator may be interpreted as the excess number of events from source over the expected background but the denominator is the maximum likelihood estimate of the standard deviation of the numerator given that the alternative hypothesis is true .( the alternative hypothesis in this case is that both observations and are from poisson distributions with unrelated means . )although this motivation appears to be incorrect and the statistic was abandoned by , the critical range of may be defined as for testing the null hypothesis against the presence of a source and for testing against the presence of a sink . because under the conditions of applicability of li ma statistic ( equation [ equation : lima_bound ] ) statistic is distributed normally , the -value calculation is identical to that of for statistic .the figure [ fig : notlima ] presents the results of monte carlo simulations of distribution of statistic .in general , a hypothesis test may be based on any statistic if its distribution under the null hypothesis in known .it is interesting to note that equation ( [ equation : lima_bound ] ) can be used to aid in the design of an experiment . indeed ,if the relative on- to off - source region exposure can be estimated before the experiment is performed , then equation ( [ equation : lima_bound ] ) allows estimating the observation time needed to collect enough events to reach the accuracy of statistics or compatible with the desired significance .for example , if and the significance in units of is set at 3.0 , then the experiment ( duration of observation ) has to be designed in such a way as to allow accumulation of at least events from the on - source region .if the significance is set at 5.0 , then number of on - source events should be at least .in order to be able to implement any of the above hypothesis tests , one must assure that the two measurements and are independent and that the ratio of observation times is available while other conditions are equal .indeed , examining a typical scenario of a gamma ray experiment it is seen that on- and off - source observations can be performed at the same time utilizing the wide field of view of the detector , or they can be performed at different times making measurement in the same local directions of the field of view .( due to the earth s rotation , the off - source bin may present itself in the directions of local coordinates previously pointed at the source bin . )both of these stipulations could contradict to the conditions of `` being equal '' : if observations are done at the same time , then non - uniformity in the acceptance of the array to air showers due to detector geometry must be compensated for ; if observations are done at different times , then any time variation in detector operation must be addressed . under these varying conditions , the meaning of the parameter be changed to the effective ratio of exposures of the bins .the mechanism of such an equalization and determination is called _background estimation_. 
the name is due to interpretation of the second term of the numerator of equation ( [ equation : signif : lima ] ) as the expected number of background events in the source region : .correspondingly , the number of events obtained from the direct source observation will be denoted .below we consider two methods of background estimation : direct integration and time swapping .a widely accepted method of background estimation recognizes that usually no major changes in the detector configuration are made on short time scales and takes advantage of the rotation of the detector with the earth which sweeps the sky across the detector s field of view .it also recognizes that most air showers detected are produced by charged cosmic rays . because of their charge and because of the presence of random magnetic fields in the interstellar medium , the cosmic ray particles lose all memory of their initial directions and sites of production , and can be regarded as forming isotropic radiation .detector configuration stability implies that the acceptance of the detector is time independent although variations in the overall rate of detected events are allowed .( an example of such rate variations could be an event rate decrease caused by a temporary data acquisition system overload . )therefore , the average number of detected events as a function of local coordinates and time on the short time scale can be written in the form : here is overall event rate , acceptance of the array such that .the local coordinates could be either hour angle and declination or zenith and azimuth .the average number of background events expected in the source bin , is then given by where is equal to zero if and are such that they translate into inside of the source bin , and is one otherwise .the isotropy and stability assumptions ( equation [ equation : sloshing : stability_assumption ] ) become part of the null hypothesis being tested .the direct integration method of source detection is based on isotropy and stability assumptions ( equation [ equation : sloshing : stability_assumption ] ) and is the method where the integration of equation ( [ equation : sloshing : background ] ) is performed numerically by discretizing both and on a fine grid and replacing integrals by sums .the significance test is based on either statistic ( [ equation : signif : lima ] ) or ( [ equation : signif : notlima ] ) .the acceptance and the event rate are estimated by histogramming local coordinates and event times of the events collected during integration time period from the entire sky .the fluctuations in are dominated by the ones in because the event rate is collected from the entire sky and may be deemed as known to high precision . 
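as a concrete illustration, the sketch below carries out the direct integration in a simplified one-dimensional geometry: the local coordinate is a single hour-angle-like variable h, t is the sidereal time, and an event maps to right ascension ra = t - h (modulo 24 hours). bin widths, the toy event model and the source window are illustrative only, and the naive version shown here still includes on-source events in the estimates of the acceptance and the rate, the problem addressed in the following paragraphs.

....
# direct integration in a toy 1-d geometry: acceptance A(h) and rate R(t) are
# histogrammed from all events, and the expected background in a source bin in
# right ascension is the sum of A(h)*R(t) over grid cells with ra = t - h
# falling inside the bin.
import numpy as np

NBINS = 240                                        # 0.1 h cells in h and in t
CENTRES = (np.arange(NBINS) + 0.5) * (24.0 / NBINS)

def expected_background(h_ev, t_ev, ra_lo, ra_hi):
    A = np.histogram(h_ev % 24.0, NBINS, (0.0, 24.0))[0].astype(float)
    R = np.histogram(t_ev % 24.0, NBINS, (0.0, 24.0))[0].astype(float)
    A /= A.sum()                                   # only the product A*R matters
    B = 0.0
    for i, h in enumerate(CENTRES):
        ra = (CENTRES - h) % 24.0                  # ra of every time cell at this h
        B += A[i] * R[(ra >= ra_lo) & (ra < ra_hi)].sum()
    return B

rng = np.random.default_rng(0)
h = rng.normal(0.0, 2.0, 200_000)                  # acceptance peaked near transit
t = rng.uniform(0.0, 24.0, 200_000)                # roughly constant event rate
ra = (t - h) % 24.0
n_on = np.sum((ra >= 5.0) & (ra < 5.4))
print("on-source count %d, estimated background %.1f"
      % (n_on, expected_background(h, t, 5.0, 5.4)))
....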
in this scheme , the source region defined by also gets discretized , therefore , source count must be obtained using the same discretized definition of the source region .extending the time integration window is equivalent to increasing exposure to the off - source bin , which leads to decreasing value of and improved sensitivity .the assumption ( [ equation : sloshing : stability_assumption ] ) , however , must hold during the entire integration period placing a constraint on the maximum size of the off - source bin .the time integration window is limited by 24 hours of sidereal day .the realization of the direct integration method just described includes on - source events in the calculation of expected background ( via and ) .this , however , is inconsistent with their independence required by li ma statistic ( equation [ equation : signif : lima ] ) and was already recognized in .an extreme example would be the case of sighting of the north celestial pole .there , the source does not present any apparent motion in local coordinates because it lies on the axis of rotation of the earth .the off - source bin does not exist , the off - source count and the ratio of exposures are not defined and the measurement can not be performed using isotropy and detector stability assumptions ( equation [ equation : sloshing : stability_assumption ] ) . in the framework of the direct integration method ,however , the background is guaranteed to be estimated exactly equal to and therefore .this is clearly unsatisfactory . in order to be able to use either of the statistics ( [ equation : signif : lima ] ) or ( [ equation : signif : notlima ] )the events from the source bin should be excluded from the background estimation .however , simply removing all of these events from the procedure will destroy its foundation that the lists of local coordinates and times represent samples from and respectively . a solution tothis problem follows .denote by a function similar to which defines the region of the sky events from which are to be excluded from the background estimation .the excluded region should contain the candidate source bin , but is not limited to it . also denote by the number of detected events originating from outside of the excluded region , and their total event rate, then it is readily seen that integrating this equation with respect to and a system of equations on unknown and is obtained ( and are available experimentally ) : the numerical solution of these integral equations provides and based on data and from the outside of the excluded region to be used in equation ( [ equation : sloshing : background ] ) .the situation is illustrated on figure [ fig : background : solution ] .the heavily shaded area is the outside of the excluded region bounded by in its discrete form , events from which may be used for the off - source observation .the region of interest , the on - source region , is defined by some other conditions which are irrelevant for the background equations ( [ equation : background : equations ] ) as long as it is contained in the excluded region. ) .,width=288 ] it can be noted that both and enter into equations ( [ equation : background : equations ] ) and ( [ equation : sloshing : background ] ) only as a product , therefore , normalization of either of them does not make any difference as long as the product is preserved . 
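the two integrated equations can be solved by a simple alternating iteration, sketched below in the same toy one-dimensional geometry as before: events inside the excluded region are dropped, the acceptance and the rate are updated in turn from the two equations, and the expected background in the source bin is then the sum of their product over the cells that map into it. grid sizes, the excluded window and the stopping rule are illustrative.

....
# iterative solution of the background equations with the source region
# excluded (toy 1-d geometry: local coordinate h, time t, ra = t - h mod 24).
import numpy as np

NBINS = 240
CENTRES = (np.arange(NBINS) + 0.5) * (24.0 / NBINS)

def background_excluded(h_ev, t_ev, excl, src, iters=50):
    ra_ev = (t_ev - h_ev) % 24.0
    keep = ~((ra_ev >= excl[0]) & (ra_ev < excl[1]))            # drop excluded events
    Nh = np.histogram(h_ev[keep] % 24.0, NBINS, (0.0, 24.0))[0].astype(float)
    Nt = np.histogram(t_ev[keep] % 24.0, NBINS, (0.0, 24.0))[0].astype(float)
    ra_grid = (CENTRES[None, :] - CENTRES[:, None]) % 24.0      # ra of cell (h_i, t_j)
    E = ~((ra_grid >= excl[0]) & (ra_grid < excl[1]))           # 1 outside excluded region
    R = Nt.copy()
    for _ in range(iters):                                      # alternate the two equations
        A = Nh / np.maximum((E * R[None, :]).sum(axis=1), 1e-12)
        R = Nt / np.maximum((E.T * A[None, :]).sum(axis=1), 1e-12)
    in_src = (ra_grid >= src[0]) & (ra_grid < src[1])
    return float((A[:, None] * R[None, :] * in_src).sum())

rng = np.random.default_rng(1)
h = rng.normal(0.0, 2.0, 200_000)
t = rng.uniform(0.0, 24.0, 200_000)
print("background with the source region excluded: %.1f"
      % background_excluded(h, t, excl=(4.8, 5.6), src=(5.0, 5.4)))
....

cells of the local sky that never fall outside the excluded region during the integration window show up here as near-zero denominators, which is the situation, discussed below, in which the corresponding on-source events must be discarded.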
also , if there are points in the local coordinates which are always inside the excluded region , which may happen if the detector was operational during a short time period and/or the excluded region was large , ( that is ) then and the first equation of ( [ equation : background : equations ] ) becomes : leading to and integral in equation ( [ equation : sloshing : background ] ) being undefined .on - source events with local coordinates from these regions must be discarded as having no corresponding background estimate .it is thus seen that the method entails that the second , off - source region is defined by the regions of the _ local _ sky which have the opportunity to present themselves into the directions of the source region due to the earth s rotation during the time period of integration .different parts of the source region have different corresponding off - source regions .this leads to the ratio of exposures of on- and off - source regions that is dependent on the local coordinates : the off - source region corresponding to a given on - source region is not a celestial bin , it is a set of local directions with . because the measurements made from different local directions are independent , all measurements can be combined to obtain the compound statistic : or the described method is the integration scheme which is based on the direct integration method and which properly estimates the ratio of exposures and accounts for the source events .the importance of the source region exclusion is illustrated on figure [ fig : galactic_latitide : mc ] where results of the computer simulations for a galactic plane observation is presented .detection of an extended source such as galactic plane presents a difficult example because the ratio of on- and off - source exposures varies dramatically over the area of the source .the figure shows the excess number of events extracted from a simulated galactic signal as a function of galactic latitude .the excess is recovered correctly by the modified method ( equations [ equation : sloshing : background ] and [ equation : background : equations ] ) proposed here ( figure [ fig : galactic_latitide : mc : after ] ) compared to the standard direct integration method ( equation [ equation : sloshing : background ] ) ( figure [ fig : galactic_latitide : mc : before ] ) .use of the standard method would lead to a 25% loss in both the excess number of events and in value of statistic which are recovered by the modification . in the time swapping method of source detection the integration of equation ( [ equation : sloshing : background ] ) is performed by means of monte carlo , which leads to : where _ generated events _ are distributed according to joint probability density with , being the total number of events detected during integration time window .a list of all coordinates of the detected events is regarded as a sample from the distribution , while a similar list of all times as the one from .therefore , sample from distribution can be generated from the data by randomly associating an event s local coordinate with another event s time among the pool of detected events .the so created coordinate - time pair is called a generated event .the accuracy of monte carlo integration increases with the number of generated events . in the time swapping method, the function defining the source region does not have to be discretized as it had to be in the direct integration method . 
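a stripped-down version of the time-swapping estimate is sketched below: each event outside the excluded region keeps its local coordinate but is paired with arrival times drawn from the pool of observed times, and the background is the number of generated events landing in the source bin divided by the number of swaps per event. the poisson adjustment of the swap multiplicity and the extra variance term in the statistic, described in the next paragraph, are omitted here, so the estimate is slightly biased low when the excluded region removes an appreciable number of events.

....
# simplified time-swapping background estimate in the toy 1-d geometry
# (local coordinate h, time t, ra = t - h mod 24); n_s swaps per event.
import numpy as np

def time_swap_background(h_ev, t_ev, excl, src, n_s=10, seed=0):
    rng = np.random.default_rng(seed)
    ra = (t_ev - h_ev) % 24.0
    keep = ~((ra >= excl[0]) & (ra < excl[1]))      # use only events outside the excluded region
    h, t_pool = h_ev[keep], t_ev[keep]
    hits = 0
    for _ in range(n_s):
        t_new = rng.choice(t_pool, size=h.size, replace=True)   # swap arrival times
        ra_new = (t_new - h) % 24.0
        hits += np.count_nonzero((ra_new >= src[0]) & (ra_new < src[1]))
    return hits / n_s

rng = np.random.default_rng(2)
h = rng.normal(0.0, 2.0, 200_000)
t = rng.uniform(0.0, 24.0, 200_000)
print("time-swapping background estimate: %.1f"
      % time_swap_background(h, t, excl=(4.8, 5.6), src=(5.0, 5.4)))
....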
here ,acceptance and event rate must be solutions of the equations ( [ equation : background : equations ] ) to account for on - source events . in practice ,monte carlo integration is performed by substituting each real event s arrival time by a new time from the list of registered times of collected events in a finite time window .this is why the method is referred to as _ time swapping method_. the swapping is repeated times per each real event , typically being around 10. the event rate is considered to be constant on the very short time scale and therefore is saved as a histogram . generated event times are drawn from it . the sample from is generated using events from the outside of the excluded region and should contain events with given local coordinates .however , the number of events available is .therefore , instead of swapping each event times , missing events are created by choosing actual number of swaps from a poisson distribution with parameter where the significance calculation has to reflect the fact that the time swapping method is a monte carlo integration and thus introduces additional fluctuations in the estimate of .the integration error reduces as the number of generated events increases or equivalently as increases and the fluctuations in approach that of the direct integration method .use of statistic ( equation [ equation : signif : notlima ] ) provides a transparent way of including these additional fluctuations .it can be shown that the statistic within framework of time swapping must be substituted by the fact that the source region defined by does not have to be discretized is the advantage of the time swapping method . otherwise , it is based on the same assumptions as the direct integration method : stability of the detector and isotropy of the background ( equation [ equation : sloshing : stability_assumption ] ) .it was assumed in the above discussion that no anisotropy on the sky is present .this , together with the stability assumption had lead to the equation ( [ equation : sloshing : stability_assumption ] ) .in fact , if there are known sources on the sky , then the number of registered events is given by : where describes the strength of the sources as function of local coordinates and time .for example , the crab nebula is known to emit gamma rays in the tev energy region . 
because the anisotropy function is not known , the region around the crab has to be excluded from the background estimation even if the nebula is not the subject of investigation .another , more dramatic example is given by two known cosmic - ray sinks on the sky : the sun and the moon .not only do they present a source of anisotropy , they also traverse the sky , blocking on their way potential candidates and perturbing on - source count as well as .this can be handled by vetoing certain size regions around the objects , that is treating them as part of the excluded region during integration ( equations [ equation : sloshing : background ] and [ equation : background : equations ] ) and disregarding events if they fall within the veto region when counting on - source events .in other words , if is the function describing the veto region where it is equal to zero and equal to one everywhere else , then excluded region and source region have to be redefined as : in general , existing small scale anisotropies can be excluded or vetoed as described , known large scale ones have to be incorporated into the stability assumption .these will become a part of the null hypothesis being tested .incorporation of the improved stability assumption ( [ equation : sloshing : anisotropy_assumption ] ) into the framework of direct integration method is straightforward : the anisotropy function must be discretized on the same grid as and are . in order to incorporate the improved stability assumption into the framework of time swapping method , the generated events must represent a sample from which can be achieved with the help of the rejection method . despite the fact that no reconfigurations to the detector on the short time scale are made , the acceptance of the array depends on transmission properties of the atmosphere which may vary during the integration time window ( equation [ equation : sloshing : background ] ) . in this casethe stability assumption ( [ equation : sloshing : stability_assumption ] ) is violated and must be replaced by assumption ( [ equation : sloshing : anisotropy_assumption ] ) , where describes the atmospheric variations .thus , atmosphere must be considered as an integral part of the detector and we refer to the phenomenon in general as detector instability . if the variations are known , they can be incorporated into background estimation as described above .the remainder of this section presents one method of determining such variations and shows how they are incorporated into s(x , t ) .the test of the stability assumption would be a comparison of two acceptances and , measured at different times and .on physical grounds , a detector usually possesses a certain degree of azimuthal symmetry , so does the atmosphere , therefore acceptance is considered as function of zenith and azimuth angles .the histograms and can be collected from the data for a certain duration of time ( for example 30 minutes ) around and . 
for the purpose of background estimationthe time scale during which the stability assumption ( [ equation : sloshing : stability_assumption ] ) holds must be ascertained .therefore , the test is aimed at studying the difference between the distributions as function of time separation .it has to be recognized that presence of sources or large scale anisotropies on the sky and instability of the detector mimic each other , therefore , zenith and azimuth angle distributions alone are compared instead of two dimensional s .the test can be implemented as a series of tests of and ( yielding ) and then obtaining the combined for time separation : the test statistic so obtained follows a distribution with degrees of freedom if observed differences are of random nature only .here are the number of degrees of freedom in the corresponding tests .the average of is equal to while its variance is equal to .thus , the per degree of freedom should fluctuate around 1.0 .examining the dependence of on time separation it is possible to test the detector stability assumption and to ascertain the proper integration time window .if detector instability is recognized , care must be taken to improve the stability assumption . with 30 minute bins ,vertical axis is corresponding .solid horizontal line is the expected value of one if the stability assumption holds ., width=288 ] figure [ fig : background : stability_test ] is an example of the results of the detector stability assumption test using milagro data with regard to zenith coordinate .( for description of milagro please see . )it is seen from the plots that the degree of violation of the assumption grows with time separation as might be expected , but then it drops before growing again .this can be interpreted as presence of a periodic component which insured that two acceptances and separated by 24 ut hours are `` closer '' to each other than , say , those separated by only 12 .thus , despite the fact that no human intervention on the short time scale is made , the acceptance of the detector changes . because the diurnal periodicity is noted , the investigation of the modulation can be performed by comparing a particular distribution with its daily average .it was observed that the shape of the modulation ( ) of the zenith distribution is approximately constant with amplitude varying from half hour to half hour .therefore , the improved stability assumption is chosen to be of the form : where is the amplitude of the correction at time , is the polynomial zenith angle correction function coefficients of which are obtained from the modulation shape study , is the average acceptance of the detector obtained from equations ( [ equation : background : equations ] ) .the example of the correction function is shown on figure [ fig : zenith_variation ] .the example of the average daily amplitude dependence is shown on figure [ fig : backgroun : zenith_amplitude ] .the value of the amplitude is typically within range .the plot can also be used to justify the choice of half hour intervals for the amplitude measurement .the assumption ( [ equation : sloshing : zenith_modulation ] ) becomes part of the null hypothesis . derived from milagro data ., width=288 ] derived from milagro data ., width=288 ]we have considered a typical air shower experiment conducted by means of two observations and have discussed two commonly used tests ( based on statistics and ) and conditions of their applicability. 
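the half-hour stability test described above can be prototyped as follows: the zenith-angle histogram of each interval is compared with the one a fixed separation later, each comparison is converted into a chi squared, and the combined chi squared per degree of freedom is tracked as a function of the separation. the particular two-histogram chi squared below (weighted for unequal totals) is one standard choice, and the toy data assume a perfectly stable acceptance, so the curve should fluctuate around one.

....
# prototype of the detector stability test: combined chi-square per degree of
# freedom between zenith-angle histograms separated by a given number of
# half-hour intervals.
import numpy as np

def hist_chi2(o1, o2):
    """two-histogram chi-square for unequal totals; dof = (used bins) - 1."""
    n1, n2 = o1.sum(), o2.sum()
    use = (o1 + o2) > 0
    a, b = o1[use].astype(float), o2[use].astype(float)
    chi2 = np.sum((a * np.sqrt(n2 / n1) - b * np.sqrt(n1 / n2)) ** 2 / (a + b))
    return chi2, use.sum() - 1

def stability_curve(hists, max_sep):
    curve = []
    for sep in range(1, max_sep + 1):
        chi2_tot = dof_tot = 0.0
        for i in range(len(hists) - sep):
            c2, dof = hist_chi2(hists[i], hists[i + sep])
            chi2_tot, dof_tot = chi2_tot + c2, dof_tot + dof
        curve.append(chi2_tot / dof_tot)
    return curve                                   # ~1 everywhere for a stable detector

# toy data: 48 half-hour intervals with the same zenith acceptance
rng = np.random.default_rng(0)
edges = np.linspace(0.0, 45.0, 16)
def one_interval(n=20000):
    cosz = rng.uniform(np.cos(np.radians(45.0)), 1.0, n)       # flat in cos(zenith)
    return np.histogram(np.degrees(np.arccos(cosz)), bins=edges)[0]

hists = [one_interval() for _ in range(48)]
print(np.round(stability_curve(hists, 8), 2))
....

replacing the toy intervals with real half-hour zenith histograms reproduces the kind of curve shown in fig. [ fig : background : stability_test ], including any diurnal structure.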
a careful look at the situation where an astrophysical object traverses the large field of view of a detector had led us into the subject of background estimation .we have developed a method of background estimation which is consistent with the use of either statistics or and have discussed two implementations of it : direct integration and time swapping .the background estimation method is based on widely adopted assumptions of short time scale stability of the detector operation and that of isotropy of cosmic ray background .we have discussed a way to relax the short time scale stability assumption and used milagro data to illustrate the situation where presence of zenith diurnal modulations can easily be incorporated into the background estimation method .more generally , this is also the way to incorporate known large scale anisotropies .small scale anisotropies do not have to be known , existing ones can be handled by excluding or vetoing the regions around them .any method based on the assumption of short time scale stability of the detector operation and that of isotropy of cosmic ray background can not be used for detection of stationary in the field of view objects .while the methods and ideas presented in this paper were developed for a gamma - ray air shower array , we believe that the methods can also find their applications outside of the field of gamma ray astronomy . the properties of the significance test can be useful for any counting type experiment in which number of events follows poisson distribution , the background estimation method can be used with any large field of view detector , where the object of investigation traverses the field of view , such as in solar neutrino monitors or is transient such as in supernovae neutrino observatories .we would like to thank the milagro collaboration for permitting to use milagro data for illustration of the zenith diurnal modulation , and for its help .this work is supported by the u. s. department of energy office of high energy physics , the national science foundation ( grant numbers phy-9722617 , -9901496 , -0070927 , -0070933 , -0070968 ) , the ldrd program at los alamos national laboratory , los alamos national laboratory , the university of california , the institute of nuclear and particle astrophysics and cosmology as well as the institute of geophysics and planetary physics , the research corporation , and the california space institute .
|
in this paper we discuss several methods of significance calculation and point out the limits of their applicability. we then introduce a self consistent scheme for source detection and discuss some of its properties. the method allows background anisotropies to be incorporated by vetoing the regions around existing small scale anisotropies on the sky and by compensating for known large scale anisotropies. using an example based on the milagro gamma ray observatory we demonstrate how the method can be employed to relax the detector stability assumption. two practical implementations of the method are discussed. the method is universal and can be used with any large field-of-view detector where the object of investigation, steady or transient, point or extended, traverses its field of view.
constructs for delaying calls have long been a popular extension to conventional prolog .such constructs allow sound implementation of negation , more efficient versions of `` generate and test '' algorithms , more flexible modes and data - flow , a mechanism for coordinating concurrent execution and forms of constraint programming .they also introduce a new class of errors into logic programming : rather than computing the desired result , a computation may _ flounder _ ( some calls are delayed and never resumed ) .tools for locating possible bugs , either statically or dynamically , are desirable .static analysis can also be used to improve efficiency and in the design of new languages where data and control flow are known more precisely at compile time .the core contribution of this paper is to show how a program with `` delays '' can be transformed into a program without delays whose ( ground ) success set contains much information about floundering and computed answers of the original program .some technical results are given which extend known results about floundering , and these are used to establish the properties of two new program transformations .the main motivation we discuss is program analysis , though we also mention declarative debugging .analysis of properties such as which goals flounder can be quite subtle , even for very simple programs .the term floundering was originally introduced in the context of negation , where negated calls delay until they are ground , and sometimes they never become ground . in this paperwe do nt directly deal with negation but our approach can equally be used for analysing this form of delaying of negated calls .subcomputations are also delayed in some other forms of resolution , for example , those which use tabling . for these computational models delayingis more determined by the overall structure of the computation ( for example , recursion ) rather than the instantiation state of variables in the call , and it is doubtful our methods could be adapted easily .this paper is structured as follows . in section [ sec : delprim ] delay declarations are described and the procedural semantics of prolog with delays is discussed informally . in section [ sec : examples ] we give some sample programs which use delays . in section [ sec : anal1 ]we discuss in more detail some properties of delaying code which , ideally , we would like to be able to analyse . in section [ sec : ground ] we briefly discuss an observation concerning computed answers which is important to our approach . in section [ sec : sldf ]we review a theoretical model of prolog with delays and extend some previous results concerning floundering . in section [ sec : elim ] we give a program transformation that converts floundering into success . in section [ sec : cfs ] a more precise characterisation of floundering is provided , along with a second transformation . in section [ sec : ddis ] we briefly discuss declarative debugging of floundering and a related model - theoretic semantics . in section [ sec : related ] we discuss some related work and we conclude in section [ sec : conc ] .dozens of different control annotations have been proposed for logic programming languages . 
in the programs in this paper we use `` delay '' declarations of the form ` : - delay a if c ` where ` a ` is an atom , the are distinct variables , is a predicate and ` c ` is a condition consisting of ` var/1 ` , ` nonground/1 ` ( with arguments the ) , `` ` , ` '' and `` ` ; ` '' .procedurally , a call delays if ` c ` holds ( with the conventional meaning of ` var ` and ` nonground ` ) .the procedural semantics of prolog with delays is typically difficult to describe precisely and , to our knowledge , is not done in any manuals for the various prolog systems which support delays .here we describe the procedural semantics of nu - prolog , and where the imprecision lies ; other systems we know of are very similar . by default, goals are executed left to right , as in standard prolog . if the leftmost sub - goal delays ( due to some delay annotation in the program ) , the next leftmost is tried .thus the leftmost non - delaying subgoal is selected .complexities arise when delayed goals become further instantiated and may be resumed .when a delayed goal becomes instantiated enough to be called ( due to unification of another call with the head of a clause ) , the precise timing of when is it resumed can be difficult to predict . with a single call to resume ,it is done immediately after the head unification is completed . with multiple calls to resume , they are normally resumed in the order in which they were first delayed .it is as if they are inserted at the start of the current goal in this order .however , this is not always the case .some calls may delay until multiple variables are instantiated to non - variable terms .this is implemented by initially delaying until one of those variables is instantiated .when that occurs , the call is effectively resumed but may immediately delay again if the other variables are not instantiated . similarly , when delaying until some term is ground , the delaying occurs on one variable at a time and the call can be resumed and immediately delayed again multiple times .the order in which multiple calls are resumed depends on when they were _ most recently _ delayed .this depends on the order in which the variables are considered , which is not specified . in nu - prolog ,the code generated to delay calls is combined with the code for clause indexing and it is difficult to predict the order in which different variables are considered without understanding a rather complex part of the compiler .the situation is even worse in parallel logic programming systems . in parallel nu - prolog default computation rule is exactly the same as for nu - prolog .however , if an idle processor is available a call which is instantiated enough may delay and be ( almost ) immediately resumed on another processor . even with total knowledge of the implementation , the precise execution of a program can not be determined .any program analysis based on procedural semantics must respect the fact that the computation rule is generally not known precisely but ( we hope ) not lose too much information ..... : - delay append(as , bs , cs ) if var(as ) , var(cs ) .append ( [ ] , as , as ) .append(a.as , bs , a.cs ) : - append(as , bs , cs ) .append3(as , bs , cs , abcs ) : - append(bs , cs , bcs ) , append(as , bcs , abcs ) . : - delay reverse(as , bs ) if var(as ) , var(bs ) . reverse ( [ ] , [ ] ) .reverse(a.as , bs ) : - append(cs , [ a ] , bs ) , reverse(as , cs ) . 
....we now present two small examples of code which uses delays .the first will be used later to explain our techniques .figure gives a version of ` append ` which delays until the first or third argument is instantiated .this delays ( most ) calls to ` append ` which have infinite derivations . delaying such callsallows ` append ` to be used more flexibly in other predicates .for example , ` append3 ` can be used to ` append ` three lists together or to split one list into three . without the delay declaration for ` append ` , the latter `` backwards '' mode would not terminate . with the delay declaration , the first call to ` append ` delays .the second call then does one resolution step , instantiating variable ` bcs ` .this allows the first call to resume , do one resolution step and delay again , et cetera . in a similar way ,this version of ` reverse ` works in both forwards and backwards modes if either argument is instantiated to a list it will compute the other argument .if the second argument is instantiated , no calls are delayed .however , if only the first argument is instantiated , all the calls to ` append ` initially delay and after the last recursive call to ` reverse ` succeeds , the multiple calls to ` append ` proceed in an interleaved fashion . for any given mode , the code for ` append3 ` and ` reverse ` can be statically reordered to produce a version which works without delaying .the mercury compiler does such reordering automatically , but without automatic reordering it requires some slightly tricky coding to produce such flexible versions of these predicates . ....submaxtree(tree , newtree ) : - submaxtree1(tree , max , max , newtree ) .submaxtree1(nil , _ , 0 , nil ) .submaxtree1(t(l , e , r ) , gmax , max , t(newl , newe , newr ) ) : - submaxtree1(l , gmax , maxl , newl ) , submaxtree1(r , gmax , maxr , newr ) , max3(e , maxl , maxr , max ) , plus(newe , gmax , e ) .% delays ; later mode o , i , i max3(a , b , c , d ) : - ... : - delay plus(a , b , c ) if var(a ) , var(b ) ; var(a ) , var(c ) ; var(b ) , var(c ) ..... figure is a variant of the ` maxtree ` program ( see , for example ) which takes a tree and constructs a new tree containing copies of a logic variable in each node , then binds the variable to a number ( the maximum number in the original tree ) .the ` submaxtree ` program fills each node in the new tree with the original value _ minus _ the maximum .this is done by delaying a call to ` plus ` for each node until the maximum is known , then resuming all these delayed calls .we assume a version of ` plus ` which delays until two of its three arguments are instantiated ; nu - prolog has such a predicate built in .all calls to ` plus ` become sufficiently instantiated at the same time ( when ` gmax ` becomes instantiated ) . in most systemsthey will be called in the order they were delayed .if ` plus ` only worked in the forward mode the calls would not be sufficiently instantiated and the computation would flounder .we also assume a predicate ` max3/4 ` which calculates the maximum of three numbers .it is not possible to statically reorder the clause bodies to eliminate the delays .even dynamic reordering clause bodies each time a clause instance is introduced ( also known as a _computation rule ) is insufficient . 
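To make the coroutining behaviour concrete, the following is a hypothetical top-level session using the ` append3/4 ` of the earlier figure. The exact order in which answers are returned depends on the system, but the set of answers follows from the definitions given.
....
% Forward mode: appending three lists.
?- append3([1], [2], [3], Xs).
% Xs = [1, 2, 3]

% Backward mode: splitting a list into three parts. The first call to
% append/3 in the body delays until the second call instantiates bcs,
% and the two then proceed in an interleaved fashion; all six splits
% of [1,2] are eventually returned on backtracking.
?- append3(As, Bs, Cs, [1, 2]).
% As = [], Bs = [], Cs = [1, 2] ;
% As = [], Bs = [1], Cs = [2] ;
% ... (six answers in total)
....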
Without coroutining, two passes over the tree are necessary, doubling the amount of "boilerplate" traversal code: the first to compute ` gmax ` and the second to build the new tree. Delays can be used to write concise and flexible code, the behaviour of which can be very subtle. For example, it has been shown that when bugs are introduced into a four-clause permutation program with delays, a wide variety of counter-intuitive behaviour results. Even with such a tiny program, the combination of interleaved execution and backtracking makes understanding why it misbehaves very challenging. Another tiny example is the (arguably correct) definition of ` reverse ` in figure [ fig_rev ]. Having first written and used equivalent code around twenty-five years ago, the author did not become fully aware of its floundering properties until the preparation of this paper. It was incorrectly thought that all calls to ` reverse ` which have an infinite number of solutions with different list lengths (such as ` reverse([a , b , c|xs],ys ) `) would flounder. Section [ sec_using_f ] gives a precise characterisation of the actual behaviour. Automated methods of analysis of code with delays are highly desirable because manual analysis is just too complex to be reliable. Analysis of code with delays can address many different issues. It may be that we expect code to succeed or finitely fail for certain classes of goals but some such goals may actually flounder, typically with a computed answer less instantiated than expected; this is the main focus of earlier work on declarative debugging, also discussed in section [ sec : ddis ]. In section [ sec : elim ] we give a transformation which allows analysis of computed answers (of successful or floundered derivations) which can detect such cases. Conversely, we may expect certain goals to flounder when actually they succeed. This is particularly important if the goal has an infinite number of solutions and is part of a larger computation: success can result in non-termination where floundering would not. In section [ sec : cfs ] we give a further transformation which captures floundering precisely. Both methods reduce the problem of analysing a program with delays to analysing the success set of a program without delays. A deeper understanding of floundering can also help us in other ways. For example, although that declarative debugger does not rely on either of these transformations directly, it is based on the insights of this paper. Similarly, these insights may help us optimise code, either by simplifying delay annotations or, more significantly, eliminating them entirely (possibly with some reordering of code). They may also help our understanding of termination properties of code with delays. Before proceeding to more technical material, we make an observation about computed answers which is fundamental to our work. The conventional approach to the semantics of logic programs is that the set of function symbols in the language is precisely the set occurring in the program; see, for example, the textbook treatment which combines and refines some of the original work of van Emden, Kowalski, Apt and others. This means that, unlike in Prolog, new function symbols cannot occur in the goal (or the semantics of the program differs depending on the goal).
an alternative is to define the ( typically infinite ) set of function symbols _ a priori _ and assume that both the program and goals use a subset of these function symbols .this approach has been examined in , where various results are given , and earlier in , where forms of equivalence of logic programs are explored .for example , figure gives three programs with different sets of computed answers for ` p(y ) ` .they are all equivalent using the `` lloyd '' declarative semantics but with extra function symbols programs and are equivalent but is not .one advantage of the latter semantics is that the universal closure of a goal is true if and only if it succeeds with a computed answer substitution which is empty ( or simply a renaming of variables)see the discussion in .another is that the model - theoretic and fixed - point semantics can capture information about ( non - ground ) computed answers and , as we will show later , floundering !since the lloyd semantics deals only with sets of ground atoms , this fact is somewhat surprising , and does not seem to have been exploited for program analysis until now .we partition the set of function symbols into _ program function symbols _ and _ extraneous function symbols_. programs and goals may only contain program function symbols . _ program atoms _ are atoms containing only program function symbols .substitutions in derivations ( including computed answer substitutions ) contain only program function symbols .[ obs_pfs ] this is a consequence of _ most general _ unifiers being used ( indeed , programming with delays makes little sense without this ) .non - ground computed answers can be identified in the success set by the presence of extraneous function symbols .for example , if is an extraneous function symbol then an atom such as ` p(\bowtie ) ` appears in the success set if and only if it is an instance of some non - ground computed answer .if we assume there are an infinite number of terms whose principal function symbol is an extraneous function symbol then computed answers can be captured more precisely we make this assumption later for our analysis of floundering .note this semantics can not determine whether a variable exists in _ all _ computed answers ( or derivations ) of a goal in both and of figure the success set contains ` p(\bowtie ) ` and ` p(a ) ` .however , it does precisely capture groundness in all computed answers ( or variables in _ some _ computed answer ) , a property which has attracted much more interest .for example , many consider it of interest that in all computed answers of ` append ` , if the third argument is ground the second argument is also ground . using the semanticswe suggest , this is equivalent to saying if occurs in the second argument it also occurs in the third argument . if we can find a superset of the success set ( for example , a model ) which has this property , the groundness dependency must hold .thus a small variation to the lloyd semantics leads to significant additional precision while retaining the simple model - theoretic and fixed - point semantics and the relationship between them .a model of prolog with delays , sldf resolution , is presented in . herewe review the model and main results , concerning ground atoms , and extend these result to non - ground atoms .we define the non - ground flounder set , which approximates the floundering behaviour of a program . 
however , we first give discuss two important closure properties which hold for sld resolution ( where there is no floundering ) .[ clprops ] if an atom has a successful sld derivation with computed answer ( an empty computed answer substitution ) then , using the same program clauses in the derivation * any atom with as an instance has a computed answer with as an instance , and * any atom has a computed answer .such properties allow computed answers to be captured precisely by the set of computed answers of maximally general atoms , and generally simplifies analysis .when delays are introduced ( sldf resolution ) , only closure property 2 holds for successful atoms a less instantiated version of a successful atom may flounder rather than succeed . for floundered atoms only closure property 1 holds ( see proposition [ prop_gth_ca])an instance of a floundered atom may succeed , loop , finitely fail or flounder with an even more instantiated floundered computed answer . the weaker closure properties ( compared to sld resolution ) means it is harder to precisely characterise the behaviour of sldf resolution using sets of atoms .we now review sldf resolution , define the set of atoms we use to approximate its behaviour and show the relationship between the two .sldf resolution is similar to sld resolution ( see ) , but the computation ( atom selection ) rule is restricted to be _ safe _ : an atom may only be selected if it is in the `` callable atom set '' .it is desirable that this set is closed under instantiation and the results below and those in this paper rely on this property .this property seems quite intuitive and holds for most logic programming systems with flexible computation rules .another restriction suggested in is that all ground atoms should be callable .while this is not required for our technical results , it is a pragmatic choice .sldf derivations can be failed , successful , infinite or _ floundered _ , in which the last resolvent consists only of atoms which are not callable ( we say it is _ immediately floundered _ ) .given the assumption above , for a program the following sets of ground atoms can be defined independently of the ( safe , and also fair in the case of finite failure ) computation rule : * the success set ( ground atoms with successful derivations ) . * the finite failure set ( ground atoms with finitely failed sld trees ) . *the flounder set ( ground atoms with floundered derivations ) .note that some atoms in may also be in and have infinite ( fair ) derivations .the fact that floundering is independent of the computation rule suggests it is a declarative property in some sense .however , it has not been fully exploited for analysis until now , perhaps due to the lack of non - procedural definitions of .note also that the results above only refer to ground atoms .an atom such as ` q(x ) ` may have floundered derivations but no instance may appear in because no ground instance flounders .however , can contain information about floundering of non - ground atoms and conjunctions .for example , if the program contains the definition ` p : - q(x ) ` , will contain ` p ` if and only if ` q(x ) ` flounders . 
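As a small illustration of these points, consider the delaying ` append/3 ` of the earlier figure; the following is a hypothetical query session.
....
?- append(Xs, [a], Zs).
% flounders immediately: both Xs and Zs are variables, so the call delays
% and the (floundered) computed answer substitution is empty
?- append([b], [a], Zs).
% Zs = [b, a]   (an instance of the floundered atom succeeds instead)

% No ground instance of append(Xs, [a], Zs) ever delays, so the ground
% flounder set records nothing about this goal. A wrapper definition such as
%   p :- append(Xs, [a], Zs).
% puts p into the ground flounder set exactly when the wrapped goal flounders.
....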
relying on the existence of such definitions is a problem unless we know a priori which goals we want to analyse , and gives us no information about substitutions in floundered derivations .substitutions in floundered ( sub)computations can influence termination and are very important for certain programming styles , particularly those associated with parallel programming . most prolog systems which support delays print variable bindings at the top level for both successful and floundered derivations .thus we use the term(s ) computed answer ( substitution ) for both successful and floundered derivations , explicitly adding the words `` successful '' or `` floundered '' where we feel it aids clarity .the analysis proposed in this paper can be seen as being based on the following generalisation of .the _ non - ground flounder set _, , of a program is the set of program atoms which have floundered derivations with empty floundered computed answer substitutions .successful derivations can be conservatively approximated by simply ignoring delays the lack of closure property 1 for successful atoms and the fact that an atom may have both successful and floundered derivations prevents our approach being more precise for analysis of success .the key results of this section , propositions [ prop_gth_nfs ] and [ prop_gth_ca ] , show how contains much but not all information about computed answers of floundered derivations . the results in , and some we prove in this paper , rely on the notion of two derivations of the same goal using the same clause selection .used in in the context of successful derivations , we formalise it here .we assign each clause in the program a unique positive integer and use zero for the top level goal .we annotate each atom used in a derivation with a superscript to indicate the sequence of clauses and atoms within those clauses used to introduce it .annotations are lists of pairs , where is a clause number and is the number of an atom within the clause .we use these annotations for both sld and sldf resolution .the _ annotation _ of an atom used in a derivation is as follows . if goal is the top level goal , .applying substitutions to atoms does not change their annotation .if is selected and resolved with a variant of clause number , , each atom is annotated with .two atoms in different derivations or goals are _ corresponding atoms _ if they have the same annotation .two derivations with the same top level goal , or with one top level goal an instance of the other , have _ the same clause selection _ if all pairs of corresponding selected atoms in the two derivations are matched with the same clause .although not explicitly stated , the proofs in can easily be adapted to show that ( successful and floundered ) computed answers of successful and floundered derivations are independent of the computation rule ( see lemma [ lem_mgi_gth ] ) . 
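For a concrete example of a non-empty floundered computed answer substitution, again using the delaying ` append/3 ` of the earlier figure (a hypothetical session):
....
?- append(Xs, [a], [b|Zs]).
% does not delay initially (the third argument is instantiated); after one
% resolution step the recursive call append(As, [a], Cs) has variables in
% its first and third arguments, so it delays and the goal flounders with
%   Xs = [b|As]
% while Zs is left unbound. The atom append(As, [a], Cs) reached at the end
% of this derivation flounders with an empty answer substitution, so it is
% in the non-ground flounder set.
....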
for anytwo successful or floundered derivations of the same goal with the same clause selection but a different ( safe ) computation rule , each selected atom in one has a corresponding atom selected in the other .derivation length , computed answer substitutions and the last resolvent are all the same , up to renaming .[ lem_mgi_gth ] suppose is the sld derivation , where is the composition of the most general unifiers used in the first steps , and is the sld derivation , where affects only variables in .suppose also that and use the same set of program clauses and corresponding set of selected atoms in the first steps .then , is a most general instance ( m.g.i . ) of and , and is a m.g.i of and .let , , be the program clause variant used in the first steps of , , , be the atom in a body of one of these clauses or in whose corresponding instance is selected and matched with , and , be the atoms as above corresponding to those in ( they are not selected in the first steps ) .let terms and be as follows , where the connectives and predicate symbols are mapped to function symbols . a m.g.i . of and can be obtained by left to right unification of the arguments of and a variant of which shares no variables with . the first argument unifications yield a renaming substitution , resulting in the same variants of program clause heads and bodies in the instances of and .the next are the same as the unifications in , modulo the clause variants used , and the other unifications yield empty unifiers . thus is a m.g.i . of and .other orders of the arguments and ( or unification orders ) correspond to different computation rules but result is the same most general instance , or a variant .so , by the same construction , is a m.g.i . of and , and must be an instance of . is an instance of so must be a m.g.i . of and .the instances of the initial goals and resolvents can be extracted from the arguments of , and so the result follows .we can now show something similar to the converse of closure property 1 , for floundering .this allows us to infer certain information about program behaviour from .[ prop_gth_nfs ] if is the floundered sldf derivation with floundered computed answer , then has a floundered sldf derivation with a renaming ( or empty ) floundered computed answer substitution ( ) .let be a derivation using the same selection and computation rule as . can not flounder before steps because the resolvent is an instance of and the callable atom set is closed under instantiation . can not fail before steps because the resolvent is no more instantiated than , and is a unifier of all pairs of calls and clause heads in the first steps .consider the resolvent , . by lemma [ lem_mgi_gth ], is a m.g.i . of and and since is an instance of , must be a variant of , so it is immediately floundered . similarly , is a variant of , so the floundered computed answer substitution is a renaming .[ lem_fi_inst ] if has a floundered sldf derivation with the last resolvent being and has a sldf derivation using the same clause selection rule then each is an instance of all its corresponding atoms in .if is immediately floundered the result is trivial .we use induction on the length of . for length 0 , since the callable atom set is closed under instantiation and is immediately floundered , must also be immediately floundered .assume it is true for length .suppose the first selected atom in is . 
is also callable , so we can construct a derivation using the same clause selection as but with as the first selected atom .the lengths of and are equal and their last resolvents are variants due to the result stated earlier .the first resolvent in ( after selecting ) is an instance of the first resolvent in and has a derivation of length so the result follows .we can now show that closure property 1 holds for floundering : [ prop_gth_ca ] if has a floundered sldf derivation with a renaming ( or empty ) floundered computed answer substitution ( ) , then has a floundered sldf derivation with a floundered computed answer with as an instance . from we can construct a derivation using the same clause selection as that used in and any safe computation rule .the callable atom set is closed under instantiation so by lemma [ lem_fi_inst ] , any atom selected in must have a corresponding atom selected in and thus can not be successful or longer that . uses the variants of the same clauses used in , which has a more ( or equally ) instantiated top level goal , , so can not be failed .it must therefore be floundered and have a computed answer with as an instance . from these propositions we know that an atom will flounder if and only if it has an instance in . also , the maximally general instances of in will be floundered computed answers .the imprecision of with respect to floundered computed answers is apparent when there are atoms in which are instances of other atoms in .if for example , we know will have the first atom as a floundered computed answer .the second atom may also be a floundered computed answer ( via a different floundered derivation ) or it may only be returned for more instantiated goal such as . in practice , there is usually a single maximally general instance of a goal in and this is the only answer computed , even when there are an infinite number of instances .for example , the non - ground flounder set for ` append ` has an infinite number of instances of the atom ` append(a , [ b ] , c ) ` , including ` append(xs , [ y ] , zs ) ` , ` append([x1|xs ] , [ y ] , [ x1|zs ] ) ` and ` append([x1 , x2|xs ] , [ 42 ] , [ x1 , x2|zs ] ) ` , but only the first is computed .we now present a program transformation which converts a program , with delays , into a program , without delays .the success set of is the union of the success set of and a set isomorphic to the non - ground flounder set of .thus analysis of some properties of programs with delays can be reduced to analysis of programs without delays .type , groundness and other dependencies are of interest in programs with and without delays as they give us important information concerning correctness . in the version of naive reverse without delays ,analysis can tell us that in all computed answers of ` reverse/2 ` both arguments are lists . in the delaying version ( figure [ fig_rev ] ) this is not the case , since there are floundered computed answers where both arguments are variables .this increases the flexibility of ` reverse/2 ` since it can delay rather than computing an infinite number of answers ( this is particularly important when ` reverse/2 ` is called as part of a larger computation ) . 
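The difference can be seen directly by posing the most general query to the delaying ` reverse/2 ` of figure [ fig_rev ] (a hypothetical session):
....
?- reverse(Xs, Ys).
% With the delay declarations the goal simply flounders, leaving both
% arguments unbound: a floundered computed answer with an empty substitution.
% Without the delay declarations, naive reverse would instead backtrack
% forever, returning an answer for lists of every length.
....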
in ( successful and floundered )computed answers for the delaying version of ` reverse/2 ` , the first argument of is a list if and only if the second argument is a list .this tells us that if either argument is a list in a call , the other argument will be instantiated to a list by the ` reverse/2 ` computation ( assuming it terminates ) .if the delay declaration for ` append/3 ` was changed so it delayed if just the first argument was a variable , ` reverse/2 ` would not work backwards .it would flounder rather than instantiate the first argument to a list and the `` if '' part of this dependency would not hold .this section shows how a program with delays can be very simply transformed into a program without delays which can be analysed to reveal information such as this .analysis of success in a program without delays can not give us information about ( non - ground ) delayed calls directly because success is closed under instantiation ( closure property 2 ) whereas floundering is not .however , extraneous function symbols allow us to _ encode _ non - ground atoms using ground atoms , re - establishing this proposition and allowing analysis .the encoding uses an isomorphism between the ( infinite ) set of variables and the set of terms with extraneous principal function symbols ( this set must also be infinite to avoid loss of precision in the encoding ; it is sufficient to have a single extraneous function symbol with arity greater than zero ) .the _ encoded flounder set ( ) _ of a program is the set of ground instances of atoms in such that distinct variables are replaced by distinct terms with extraneous principal function symbols . can be reconstructed from the atoms in by finding the set of most specific generalisations which contain only program function symbols .for example , the non - ground flounder set for append contains atoms such as ` append(xs , [ y ] , zs ) ` whereas the encoded flounder set contains atoms such as ` append(\bowtie , [ \otimes(1 ) ] , \otimes(2 ) ) ` , assuming and are extraneous function symbols .we introduce two new `` builtin '' predicates , ` evar/1 ` and ` enonground/1 ` , which are true if their argument is an _ encoded _ variable or non - ground term , respectively . for simplicity ,our treatment assumes they are defined using an ( infinite ) set of facts : ` evar(t ) ` for all terms where the principal function symbol is not a program function symbol and ` enonground(t ) ` for all terms which have at least one extraneous function symbol .this can cause an infinite branching factor in sld trees ( for example , a call such as ` evar(x ) ` ) .however , since in this paper we deal with single derivations but not sld trees ( or finite failure ) , it causes us no difficulties . ....evar('var ' ( _ ) ) .enonground(a ) : - evar(a ) . enonground([a|b ] ) : - enonground(a ) .enonground([a|b ] ) : - enonground(b ) ..... 
it is also possible to define ` evar/1 ` and ` enonground/1 ` in prolog .figure gives a definition which assumes ` var/1 ` is the only extraneous function symbol of the original program and ` ./2 ` is the only program function symbol with arity greater than zero for the original program ( if there are other such function symbols , more clauses are needed for ` enonground/1 ` ) .these definitions depart from our theoretical treatment in that they can involve deeper proof trees ( due to recursive calls ) and they can have non - ground computed answers .however , they can be useful for observing floundering behaviour , especially with a fair ( or depth - bounded ) search strategy see section [ sec_using_f ] .we now define the transformation : given a program ( not defining predicates ` evar/1 ` or ` enonground/1 ` ) containing delay declarations , is the program with all clauses of plus , for each delay declaration ` : - delay a if c ` in , the clause ` a : - c ` where ` c ` is ` c ` with ` var ` replaced by ` evar ` and ` nonground ` replaced by ` enonground ` .these additional clauses introduced for delay declarations and those in the definitions of ` evar/1 ` and ` enonground/1 ` are referred to as _ delay clauses_. .... append_sf(as , bs , cs ) : - evar(as ) , evar(cs ) .append_sf ( [ ] , as , as ) .append_sf(a.as , bs , a.cs ) : - append_sf(as , bs , cs ) .reverse_sf(as , bs ) : - evar(as ) , evar(bs ) .reverse_sf ( [ ] , [ ] ) .reverse_sf(a.as , bs ) : - append_sf(cs , [ a ] , bs ) , reverse_sf(as , cs ) . .... to avoid possible confusion , the code in this paper uses `` ` _ sf ` '' suffixes for the new predicate definitions ; our theoretical treatment assumes the original predicate names are used for the new predicate definitions .for example , figure shows the transformed version of ` reverse ` ( from figure [ fig_rev ] ) .figures [ fig_extra_d ] and [ fig_extra_du ] give further examples. immediately floundered atoms in have matching delay declarations with true right hand sides .corresponding ( encoded ) atoms in have matching ground delay clause instances with successful bodies .we have described how ` evar ` and ` enonground ` behave .some languages have delay conditions which can not be expressed using _ var _ and _nonground_. for example , in nu - prolog ` x ~= y ` delays whereas ` x ~= x ` does not . to analyse such constructs we need additional primitives similar to ` evar ` and ` enonground ` .the key to designing such constructs is that the delay clauses should implement the encoding as defined above .the following propositions show how successful derivations in correspond to successful or floundered derivations in : the success set of is the union of the success set of without delays and the encoded flounder set of ( proposition [ prop_ss_sf_p ] ) .note that when we talk of successful derivations and/or here , sld resolution rather than sldf resolution is used ( delays are ignored when dealing with success ) .the lack of closure property 2 is problematic when dealing with success if delays are considered and sldf resolution used .[ prop_no_del_iff ] a goal has a successful sld derivation with program ( ignoring delays ) if and only if it has a successful derivation with which uses no delay clauses . without delay clauses is the same as without delays .we now deal with floundering , which is more complex .[ lem_im_fl ] a goal is immediately floundered with program if andonly if it has a successful derivation with which uses only delay clauses . 
follows from the way in which delay clauses implement the encoding of the flounder set .[ lem_subs_efs ] a goal which is immediately floundered with program has a computed answer substitution in sf(p ) such that all variables bound by are bound to distinct terms with extraneous principal function symbols ( or are simply renamed ) . by lemma [ lem_im_fl ] , there is a derivation where all non - renaming substitutions are due to calls to ` evar/1 ` and ` enonground/1 ` .a call to ` evar/1 ` binds its argument to a term with an extraneous principal function symbol .multiple calls with distinct variables will have some of the infinite number of computed answers binding their arguments to distinct terms .similarly , some computed answers to ` enonground/1 ` will bind all distinct variables in its argument to distinct terms with extraneous principal function symbols .[ lem_nfs_iff_sf_del ] given a program , a goal has a floundered derivation with an empty floundered computed answer substitution if and only if it has a successful derivation with in which delay clauses are selected and the successful computed answer , , is such that all variables bound by are bound to distinct terms with extraneous principal function symbols ( or are simply renamed ) .( only if ) derivation can be reproduced with since it has all the clauses of and the computation rule is unrestricted . by lemma [ lem_subs_efs ] the last resolvent in must have a successful derivation such that the computed answer substitution has the desired property .( if ) by repeated application of the switching lemma to we can construct a successful derivation with such that callable atoms are selected in preference to atoms which would delay in .the derivation has a prefix where only callable atoms are selected , except for , which would be immediately floundered in ( a delay clause is used in so an immediately floundered goal must be reached at some stage ) .callable atoms are not matched with delay clauses ( by lemma [ lem_im_fl ] , if a callable atom is resolved with a delay clause the resolvent can not succeed ) .variables bound by the computed answer substitution of are bound to distinct terms with extraneous principal function symbols ( or simply renamed ) , and all of the non - renaming bindings must be due to delay clauses .thus is a floundered sldf derivation in with an empty ( or renaming ) answer substitution .[ lem_sf_all_subs ] given a program , a goal has a successful derivation with with computed answer , , such that all variables bound by are bound to distinct terms with extraneous principal function symbols ( or are simply renamed ) if and only if there are successful derivations with all such computed answers .all such substitutions are due to delay clauses and the sets of ` enonground/1 ` and ` evar/1 ` atoms which succeed are closed under the operation of replacing one extraneous function symbol with another .[ prop_fl_iff_sf_del ] goal has a floundered sldf derivation with program if and only if it has a successful derivation with in which delay clauses are selected .propositions [ prop_gth_nfs ] and [ prop_gth_ca ] imply flounders if and only if an instance flounders with an empty floundered computed answer substitution so by lemma [ lem_nfs_iff_sf_del ] it is sufficient to show that an instance of has a successful derivation with in which delay clauses are selected and all variables bound by the computed answer substitution are bound to distinct terms with extraneous principal function symbols ( or are simply renamed ) 
iff has a successful derivation with in which delay clauses are selected .( only if ) by closure property 1 .( if ) consider a derivation using the same clause selection as in but with a computation rule such that atoms resolved with delay clauses are selected at the end , from . by lemma [ lem_mgi_gth ] , has a derivation where the resolvent is a variant of and the substitution at that point is a renaming substitution for . by lemma [ lem_subs_efs ] ,a computed answer substitution for has the desired property .[ prop_ss_sf_p ] for any program , .the set of atoms in with derivations which do nt use delay clauses is by proposition [ prop_no_del_iff ] .the set of atoms in with derivations which use delay clauses is by lemmas [ lem_nfs_iff_sf_del ] and [ lem_sf_all_subs ] .note that although there is a bijection between successful sld derivations with and successful sld derivations with which do nt use delay clauses , there is not a bijection between floundered sldf derivations with and successful derivations with which use delay clauses , even if multiple solutions to ` evar/1 ` and ` enonground/1 ` are ignored . generally has additional derivations .this is unavoidable due to the imprecision of mentioned in section [ sec : sldf ] . ....p(x , y ) : - q(x ) , q(y ) .p_sf(x , y ) : - q_sf(x ) , q_sf(y ) .: - delay q(v ) when var(v ) .q_sf(v ) : - evar(v ) .q(a ) . q_sf(a ) . .... for example , consider the definition of ` p/2 ` in figure .the success set , ignoring delays , is \{`p(a , a ) ` } and is \{`p(x , y ) ` , ` p(a , v ) ` , ` p(v , a)`}. thus the computed answers of ` p_sf(x , y ) ` encode all these four atoms , since computes the union of the success set and the encoded flounder set . however , ` p(x ,y ) ` only has one floundered derivation , with the empty answer substitution .the other two atoms in are only computed for more instantiated goals and the derivations in correspond to these computations ( the same atoms are selected , ignoring ` evar/1 ` and ` enonground/1 ` ) ..... p : - q(x ) .p_sf : - q_sf(x ) : - delay q(v ) when var(v ) .q_sf(v ) : - evar(v ) .q(a ) .q(x ) : - q(x ) .q_sf(x ) : - q_sf(x ) . 
....it is also possible to have successful derivations in which do not correspond to any sld or sldf derivation in .for example , in figure , the goal ` p ` has a single floundered sldf derivation , where ` q(x ) ` immediately flounders , whereas ` p_sf ` has an infinite number of derivations which use delay clauses and the derivations of ` q(x ) ` have unbounded length .this is related to the fact that ` p ` has an infinite sld tree .type dependencies of can be analysed in the same ways as any other prolog program .the following set of atoms , where means is a list , is a model of the transformed reverse program , showing these type dependencies hold ( and thus they hold for computed answers in the original reverse program with delays ) : it is not necessary to consider the complex procedural semantics of prolog with delays , or even the procedural semantics of prolog without delays since bottom - up analysis can be used .similarly , the transformation makes it relatively easy to show that ` submaxtree/2 ` can indeed compute a tree of integers when given a tree of integers as the first argument .groundness in can also be analysed by analysing using specialised types .we can define the type to be the set of terms constructed from only program function symbols .the dependencies which hold for lists above also hold for type , indicating the corresponding groundness dependencies hold for computed answers of reverse with delays . similarly , can be defined as the set of terms with a program principal function symbol . by extending the type / mode checker described in have demonstrated it is possible to check non - trivial useful properties of by checking models of . for more complicated casesit is necessary to support sub - types , as and are both subtypes of .this approach to groundness analysis is not reliant on the transformation it can be applied to any logic program due to observation [ obs_pfs ] .the analysis can be identical to conventional groundness analysis using boolean functions because logic programs can be abstracted in an identical way .a unification can be abstracted by , assuming is a program function symbol .calls and can be abstracted as .we now present a second transformation , which allows us to capture the non - ground flounder set more precisely .the results here suggest a solution to the open problem posed in : how the flounder set can be defined inductively . such a definition may be a very useful basis for analysis of floundering as an alternative to a purely model theoretic approach .the semantics of captures both successful and floundered derivations of . by defining a variant of the immediate consequence operator can distinguish atoms with derivations which use delay clauses .an _ f - interpretation _ is a set of ground atoms , some of which may be flagged ( to indicate floundering ) .if is an f - interpretation , is the set of atoms in and is the set of atoms in which are flagged .the union of two f - interpretations and is the f - interpretation such that and . given a program , is a mapping from f - interpretations to f - interpretations , defined as follows . and an atom in this set is flagged if there is a ground instance of a clause in , , such that each is in and some is flagged in or if the predicate of is ` evar/1 ` or ` enonground/1 ` . 
and are defined in the same way as and .[ prop_flag_iff_sf_del ] a ground atom other than ` evar/1 ` or ` enonground/1 ` is flagged in if and only if it has a proof tree of height in which uses a delay clause , and is flagged in if and only if it has a successful derivation in which uses a delay clause .a standard result is that ( and ) contains exactly those ground atoms with proof trees in ( of height , respectively ) . since . from the definition of ,these atoms are flagged if and only if they are derived using ` evar/1 ` or ` enonground/1 ` , that is , if a delay clause is used in the derivation .a ground atom other than ` evar/1 ` or ` enonground/1 ` is flagged in if and only if it is in the encoded flounder set of . by proposition [ prop_flag_iff_sf_del ]it is sufficient to show that iff has a successful derivation in which uses a delay clause .if : the decoded version of ( distinct terms with extraneous principal function symbols are replaced by distinct variables ) , , has a successful derivation in using the same clause selection as that in , with a computed answer , which has as an instance .since has as an instance , any variables bound by must be bound to distinct terms with extraneous principal function symbols ( or simply renamed ) .thus satisfies the condition of lemma [ lem_nfs_iff_sf_del ] so is in . only if : , so , where . by lemmas [ lem_nfs_iff_sf_del ] and [ lem_sf_all_subs ] , has a derivation which uses delay clauses and has a computed answer with an instance . has a successful derivation using the same clause selection .thus we have an inductive / fixed - point characterisation of ( a set isomorphic to ) the non - ground flounder set .it may be practical to base floundering analysis on .it is monotonic with respect to the set of atoms ( if ) and for a given set of atoms it is monotonic with respect to the flagged atoms in the set ( if and ) .monotonicity is important for the structure of fixed - points , particularly the existence of a least fixed - point .alternatively , the definition of can be mirrored by a further transformation which produces a program whose success set is the encoded flounder set of .an advantage is that it can then be analysed using standard techniques .a disadvantage is that the transformation increases the program size , which will affect analysis time . given a horn clause program , is the program consisting of the predicate definitions in ( we assume each predicate has a subscript / postfix ) plus the following new definitions . for each clause in add a clause . for delay clauses , .for other clauses , , where is the disjunction of all calls in , with `` ` _ sf ` '' replaced by `` ` _ f ` '' .if is the empty conjunction ( ` true ` ) then is the empty disjunction ( ` fail ` ) . ....append_f(as , bs , cs ) : - evar(as ) , evar(cs ) .append_f ( [ ] , as , as ) : - fail .append_f(a.as , bs , a.cs ) : - append_sf(as , bs , cs ) , append_f(as , bs , cs ) .reverse_f(as , bs ) : - evar(as ) , evar(bs ) .reverse_f ( [ ] , [ ] ) : - fail .reverse_f(a.as , bs ) : - reverse_sf(as , cs ) , append_sf(cs , [ a ] , bs ) , ( reverse_f(as , cs ) ; append_f(cs , [ a ] , bs ) ) ..... figure gives the new clauses generated for ` reverse ` . 
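As a preview of the behaviour analysed below, here is a hypothetical query against ` append_f/3 ` from the figure above, using the executable ` evar/1 ` of figure [ fig_evar ] and a fair search strategy:
....
?- append_f(Xs, Ys, Zs).
% Xs = 'var'(_),        Zs = 'var'(_) ;
% Xs = [A|'var'(_)],    Zs = [A|'var'(_)] ;
% Xs = [A, B|'var'(_)], Zs = [A, B|'var'(_)] ;
% ...
% Ys is left unbound in every answer. Occurrences of 'var'(_) encode
% variables of the original program, so these answers describe the calls
% to the original append/3 which flounder, as discussed below.
....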
note that we assume the original program consists of only horn clauses but the transformed program contains disjunctions .these could be eliminated by further transformation .the transformation is designed so that ( extended to handle disjunctions ) is essentially the same as : flagged atoms correspond to the subscripted predicates and the set of all atoms corresponds to the subscripted predicates .the success set of the subscripted predicates in is the encoded non - ground flounder set of the corresponding predicates in .the transformation allows us to observe the floundering behaviour of the original program very clearly . if we define ` evar ` as in figure [ fig_evar ] and run the goal ` append_f(x , y , z ) ` using a fair search strategy , we get computed answers of the form ` x = [ a``,a``, ... ,a``|'var'(b ) ] ` , ` y = c ` , ` z = [ a``,a``, ... ,a``|'var'(d ) ] ` .occurrences of ` var/1 ` in answers correspond to variables in computed answers of floundered derivations of the original program and variables correspond to arbitrary terms .thus a call to ` append ` flounders if and only if it has an instance such that the first and third arguments are `` incomplete lists '' ( lists with a variable at the tail rather than nil ) of the same `` length '' with pair - wise identical elements .for example , ` append(x,[a],[a|z ] ) ` flounders ( it also has a successful derivation ) whereas ` append([a , v|x],y,[v , b|z ] ) ` does not .running ` reverse_f ` we discovered to our surprise ( as mentioned in section [ sec : examples ] ) that ` reverse ` flounders if and only if the first argument is an incomplete list and the second argument is a _( rather than incomplete list ) . a call such as ` reverse(x,[a|y ] )` returns an infinite number of answers rather than floundering ! with a suitably expressive domain the transformed program can be analysed with established techniques to obtain precise information about the original program with delays .powerful techniques have been developed to help construct domains .for example , we can start with a simple domain containing four types : lists , ( the complement of our type ) , incomplete lists ( this is a supertype of ) , and a `` top '' element ( the universal type ) . completing this domain using disjunction adds two additional elements : `` list or var '' and `` list or incomplete list '' .the heyting completion of this domain introduces implications or dependencies such as is a list if is a list .this domain can be used as a basis for interpretations of the program and to infer and express useful information about floundering . 
for example , figure gives the minimal model of the program for this domain , where represents the type , the type and the set of incomplete lists ( this was found using the system described in , with additional modifications and manual intervention ) .it expresses the fact that ` reverse ` flounders only if the first argument is an incomplete list and the second is a variable .the condition for ` append ` is somewhat more complex .it is possible to drop the last conjunct for and replace by for to obtain a simpler model .further simplification does not seem possible without weakening the condition for ` reverse ` .we note that careful design of the types in the domain is crucial for the precision .the incomplete list type is able to make the important distinction between ( encoded versions of ) ` [ x ] ` and ` [ [ ] |x ] ` .analysis without this distinction must conclude that calls to ` reverse/2 ` where both arguments are ( complete ) lists may flounder . to see this , consider the following instance of the recursive clause for ` reverse/2 ` . ....reverse([a , x ] , [ x ] ) : - append([x ] , [ a ] , [ x ] ) , reverse([x ] , [ x ] ) ..... if we replace the two occurrences of ` [ x ] ` in ` append ` by ` [ [ ] |x ] ` then the clause body flounders with an empty computed answer substitution . thus any safe approximation to the set of floundering atoms must include the head of this clause .inferring models is significantly more challenging than checking models .the domain is huge and the models can be quite complex , even for simple programs ( see the condition for in figure , for example ) .after some ad hoc attempts to find models for , particularly minimum models within our abstract domain , a more systematic approach was developed .we use the relationship between predicates in and their subscripted variants in minimum models .we first compute a model for ( the minimum model for in our abstract domain ) .we use this as a starting point to compute a ( larger ) model for the ` " _ sf " ` predicates .we then use as a starting point to compute a model for the ` " _ f " ` predicates .this strategy may also be useful for automatic inference of precise floundering information since although there are three separate fixed - point calculations , each one is relatively simple and should converge quickly .declarative debugging can be an attractive alternative to static analysis since more information is known at debug time than at static analysis time and hence bugs can potentially be located more easily and precisely .the transformation of section [ sec : cfs ] potentially provides a mechanism for declarative debugging of incorrectly floundered computations a floundered derivation of corresponds to a successful derivation of and debugging of incorrect successful derivations is well understood .the main novel requirement is that the user must be able to determine which ( encoded ) atoms should flounder ( that is , an intended interpretation for the ` _ f ` predicates ) .it is also important for the debugger to understand the relationship between the ` _ f ` and ` _ sf ` predicates because their intended interpretations are not independent . 
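The clause instance used above to illustrate the imprecision can also be explored by running its body calls directly against the original delaying program (hypothetical queries):
....
?- append([X], [a], [X]).
% fails: appending [X] and [a] would give a list of length two, but the
% third argument has length one
?- append([[]|X], [a], [[]|X]).
% flounders with an empty computed answer substitution: after one resolution
% step both the first and third arguments of the recursive call are the
% variable X, so the call delays
....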
in propose a more practical approach which does nt use the transformations and encoding explicitly , but does use them to guide the design .it uses the three - valued debugging scheme of , where atoms can be correct , incorrect or _inadmissible _ , meaning they should never occur .atoms which have insufficiently instantiated `` inputs '' ( and hence flounder ) are considered inadmissible .the user effectively supplies a three - valued interpretation in the style of for and the debugger finds a clause ( possibly a delay clause ) for which this interpretation is not a ( three - valued ) model . as well as model - theoretic semantics, provides a fixed - point semantics .this could also be applied to analysis of delays by using transformation and encoding , particularly if the user specifies intended modes in some way .the transformation - based method used to detect deadlocks in parallel logic programs bears superficial similarity to our work here .however , those transformations do not eliminate delays , and both the original code and transformed code have impure features such as pruning operators for committed choice non - determinism and nonvar checks .our approach to analysis of floundering here is unusual in that it supports a declarative , `` bottom - up '' or `` goal independent '' approach .analysis of logic programs with the conventional left to right computation rule has been done using both top - down and bottom - up methods .the top - down methods are based on the procedural semantics sld resolution maintaining information about variables and substitutions to obtain approximations to the sets of calls and answers to procedures .the bottom - up methods ( which are independent of the computation rule ) are based on the fixed - point semantics ( the immediate consequence operator , which is very closely related to the model theoretic semantics ) to obtain approximations to the set of answers to procedures .an advantage of the bottom - up approach is its simplicity . using the standard fixed point semantics ( see also ) the domain contains sets of ground atoms and a clause can be treated as equivalent to the set of its ground instances .the disadvantage is lack of precision : the naive bottom - up approach obtains no information about calls or non - ground computed answers , both of which seem important for modeling systems with flexible computation rules .two methods are used to re - gain this information .non - ground computed answers can be captured by using a more complicated immediate consequence operator , such as the s - semantics , making the domain more complex by re - introducing variables .calls can be captured by using the magic set ( or similar ) transformation , adding complexity to the program being analysed , but this assumes a left to right computation rule . 
since there has been no known bottom - up method for approximating the instantiation states of calls in logic programs with delays , it is natural that most other work on analysis of such programs has been based on the top - down procedural semantics .the more recent approach of uses bottom - up analysis , and argues strongly for the practicality of bottom - up methods .a relatively standard bottom - up least fixed - point analysis is used to compute groundness dependencies for successful computed answers of all predicates using the domain ( positive boolean functions ) .in addition , a novel greatest fixed - point computation is used to find sufficient conditions for predicates to be flounder - free , using the domain ( monotonic boolean functions ) . however, this analysis assumes a local computation rule is used .programs such as ` submaxtree/2 ` ( and examples given in ) have cyclic data - flow and do not work with a local computation rule , so the greatest fixed - point computation results in significant loss of precision .our transformations make no assumptions about the computation rule other than it is safe with respect to the delay declarations , so ( in this respect ) it can be more precise .we have shown an alternative way the `` lloyd '' semantics can be adapted to capture information about variables : simply change the set of function symbols rather than the immediate consequence operator .the extra function symbols allow us to encode and capture the behaviour of non - ground atoms . furthermore , by encoding the non - ground flounder set it becomes closed under instantiation , allowing safe approximation by the success set of a ( transformed ) program without delays .floundering information can then be obtained by a simple bottom - up analysis using sets of ground atoms .the complexity associated with variables does not magically disappear entirely . in practiceit can re - emerge in the abstract domain of types used in the analysis .however , careful integration of type and instantiation information seems unavoidable if analysis of floundering is to be precise , so combining both in the type domain is probably a good idea .using the procedural semantics has the advantage of being ( strictly ) more expressive than the declarative approach , so analysis of more properties is possible .analysis of ( for example ) whether a particular sub - goal will ever delay ( for a particular computation rule ) is beyond the scope of our approach and can only be done with procedural information .a disadvantage is the additional complexity .each ( non - ground ) atom has a set of computed answers and for each one there is a set of immediately floundered atoms .the analysis domain typically contains representations of sets of these triples .we believe that analysis of such things as computed answers and whether a computation flounders is likely to benefit from the declarative approach we have proposed , where the analysis domain can contain just sets of ground atoms .expressive languages for defining such sets have been developed for type - related analysis .with an intuitive restriction on delay primitives , floundering is independent of the computation rule .however , the development of a declarative rather than procedural understanding of floundering has been hindered because it is not closed under instantiation . 
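as a concrete illustration of the two boolean domains mentioned above (the examples are textbook ones, not output of the analyses described in the text), a pos groundness dependency for the computed answers of `append/3`, and a plausible mon condition for flounder-freedom, could be written as

$$\textsc{pos}:\;\; z \leftrightarrow (x \wedge y), \qquad\qquad \textsc{mon}:\;\; x \vee z,$$

where $x$, $y$ and $z$ stand for groundness of the three arguments. the first formula says that on success the third argument is ground exactly when the first two are; the second, which would depend on the delay declarations actually used, would say that a call is guaranteed not to flounder whenever its first or its third argument is ground.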
in this paperwe have shown how non - ground atoms can be encoded by ground atoms , using function symbols which do not occur in the program or goal .some may consider this to be a theoretical `` hack '' , but it has numerous advantages .this technique , along with two quite simple program transformations , allows floundering behaviour of a logic program with delays to be precisely captured by the success set of a logic program without delays . by simply executing the transformed program using a fair search strategy ,the delaying behaviour can be exposed .declarative debugging can be used to diagnose errors related to control as well as logic , and alternative semantic frameworks can be applied .finally , the wealth of techniques which have been developed for analysing downward closed properties such as groundness and type dependencies can be used to check or infer floundering behaviour . , codognet , p. , and corsini , m .- m .abstract interpretation for concurrent logic languages . in _ proceedings of the north american conference on logic programming _ ,s. debray and m. hermenegildo , eds . the mit press , austin , texas , 215232 . , le charlier , b. , and rossi , s. 2001 .reexecution - based analysis of logic programs with delay declarations . in _ proc . of the andrei ershov fourth international conference on perspectives of system informatics ( psi01)_. lncs 2244 .springer - verlag , 395405 ., sndergaard , h. , and dart , p. 1990 . a characterization of non - floundering logic programs . in _ proceedings of the north american conference on logic programming, s. debray and m. hermenegildo , eds . the mit press , austin , texas , 661680 . ,henderson , f. j. , and conway , t. 1995 .mercury : an efficient purely declarative logic programming language . in _ proceedings of the australian computer science conference_. glenelg , australia , 499512 .
|
we show how logic programs with `` delays '' can be transformed to programs without delays in a way which preserves information concerning floundering ( also known as deadlock ) . this allows a declarative ( model - theoretic ) , bottom - up or goal independent approach to be used for analysis and debugging of properties related to floundering . we rely on some previously introduced restrictions on delay primitives and a key observation which allows properties such as groundness to be analysed by approximating the ( ground ) success set . this paper is to appear in theory and practice of logic programming ( tplp ) . floundering , delays , coroutining , program analysis , abstract interpretation , program transformation , declarative debugging
|
steady increase in load demands and aging transmission networks have pushed many power systems closer to their stability limit .previous practical experience shows that the system may remain stable in the short - term time scale ( say 0 - 30s after the contingency ) while becomes unstable in the long - term time scale ( say 30s - several minutes after the contingency ) due to insufficient reactive power support or poor control schemes - .the focus of long - term stability is on slower and longer duration phenomena after generator dynamics damp out - . in this time scale , two - time scale decomposition of the power system dynamic model can be employed .particularly , the dynamics can be classified into short - term dynamics and long - term dynamics based on the time frames of function after contingencies . as the penetration of wind power continues to grow, the long - term stability of a system may be further affected by the volatile nature of wind power and the distinct dynamic response characteristics of wind turbines .long - term stability analysis deserves more attention to better ensure the secure operation of power grids .it is also crucial to study the impacts of wind power on long - term stability . in previous literature ,impacts of wind power on long - term stability have been discussed in , which focus on the effects of different control schemes of wind generators .stability impacts of wind power on transient stability and small signal stability have been discussed in - . in all above studies ,power systems are formulated in ordinary differential equations ( odes ) or differential algebraic equations ( daes ) , and wind speed is assumed to be constant without considering the statistic properties of wind . to study the effects of the variability of wind power , stochastic differential equations ( sdes )are employed in - for transient stability and small - signal stability . in those studies ,the variations induced by wind speed are simply modelled as white noise perturbations on power injections without characterizing the statistic properties of wind speed . in ,a set - theoretic method is proposed to assess the effect of variability of renewable energies on short - term dynamics under small perturbations . regarding long - term stability study of power systems with wind power ,the challenges include the stochastic characterization of wind power and the resulting high computational burden in stability assessments , which have not been well addressed .specifically , introduction of the randomness will clearly increase the computational burden , making the stability assessments of stochastic systems time consuming . in this paper ,the statistic properties of wind power are well characterized and incorporated in stability analysis ; a method is proposed with its theoretical foundation for long - term stability analysis , which is able to reduce the computational burden arising from the randomness .particularly , a sde - based model is utilized to characterize the weibull - distributed wind speed , which is further incorporated into the complete dynamic model described in sdes . under this sde formulation , a theory - based method , which is to approximate the stochastic model by a deterministic model ,is proposed to perform long - term stability analysis . the theoretical foundation for the method is also developed under which the accuracy of the deterministic model is guaranteed . 
compared to the deterministic models using constant wind speeds , the proposed deterministic model reflects the variable nature of wind power by providing correct stability assessments for the stochastic model . compared to the stochastic model characterizing the statistic properties of wind speed , the proposed deterministic model takes much less time in time domain simulation .the rest of the paper is organized as follows .section [ sectionpowermodels ] briefly reviews power system models .section [ mathprelim ] introduces the preliminaries about the wind speed model by sdes and singular perturbation method .section [ sectionmodels ] presents the formulation of the long - term stability model with wind power based on sdes .afterwards , an analytical method with its theoretical foundation is proposed in section [ sectiontheory ] for long - term stability analysis of power systems with wind power .several numerical examples are given in section [ sectionnumerical ] to illustrate the feasibility and efficiency of the proposed method .conclusions and perspectives are stated in section [ sectionconclusion ] .the deterministic power system long - term stability model , i.e. , complete dynamic model , for simulating system dynamic response relative to a disturbance can be described as : equation ( [ slow ode ] ) describes long - term dynamics including exponential recovery loads , turbine governors ( tgs ) and over excitation limiters ( oxls ) , and eqn ( [ fast ode ] ) describes the internal dynamics of devices such as generators , their automatic voltage regulators ( avrs ) , certain loads such as induction motors , and other dynamically modeled components .equation ( [ algebraic eqn ] ) describes the electrical transmission system and the internal static behaviors of passive devices , and eqn ( [ slow dde ] ) describes long - term discrete events like load tap changers ( ltcs ) and shunt switchings . , and are continuous functions , and vectors , and are the corresponding long - term / slow state variables , short - term / fast state variables , and algebraic variables . are termed as long - term discrete variables whose transitions from to depend on system trajectories and occur at distinct times where .besides , can be regarded as the maximum time constant among devices .load models play an important role in long - term stability analysis , and generally load dynamics appear in eqn ( [ algebraic eqn ] ) as : where and are internal state variables associated with generic load dynamics .they are described in eqn ( [ slow ode ] ) as : where and are the static and transient real power absorptions ; similar definition for and . and are pq load power from power flow solutions . the tap - changing logic of ltc at time instant shows up in eqn ( [ slow dde ] ) , and is given as follows : where is the controlled voltage of ltc , is the reference voltage , is half the ltc dead - band , and and are the upper and lower tap limits .in this section , the method to model the weibull - distributed wind speed by sde proposed in is briefly reviewed .singular perturbation theory for ode is also reviewed .two continuous wind speed models based on sde have been developed in .the developed models include but not limited to the autocorrelated weibull distributed wind speed models . in this paper ,model i in is applied to formulate wind speed . 
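to make the preceding description concrete, the complete dynamic model sketched above has, in the notation common to the quasi steady-state literature (the symbols in the paper may differ), the form

$$\dot{z}_c = \epsilon\, h_c(z_c, z_d, x, y), \qquad \dot{x} = f(z_c, z_d, x, y), \qquad 0 = g(z_c, z_d, x, y), \qquad z_d(k+1) = h_d\big(z_c, z_d(k), x, y\big),$$

where $z_c$ collects the long-term continuous states, $z_d$ the long-term discrete variables, $x$ the short-term states, $y$ the algebraic variables, and the small parameter $\epsilon$ reflects the separation of time scales (e.g. the reciprocal of the largest device time constant). similarly, a standard way of writing the ltc tap-changing logic described above is

$$\tau_{k+1} \;=\; \begin{cases} \tau_k + \Delta\tau, & v > v_0 + d \ \text{and}\ \tau_k < \tau^{\max},\\ \tau_k - \Delta\tau, & v < v_0 - d \ \text{and}\ \tau_k > \tau^{\min},\\ \tau_k, & \text{otherwise}, \end{cases}$$

with $v$ the controlled voltage, $v_0$ the reference voltage, $d$ half the dead-band and $\tau^{\max}$, $\tau^{\min}$ the tap limits.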
briefly speaking , memoryless transformation of ornstein - uhlenbeck processis used to obtain the weibull distributed process .first , consider the following stochastic differential equation with given initial condition : where is white noise process and represents a standard wiener process .the stochastic process is an ornstein - uhlenbeck / gauss - markov process which is stationary , gaussian and markovian with the statistic properties =0 ] , and autocorrelation =e^{-\alpha|t_j - t_i|} ] , =\lambda^2 \gamma(1+\frac{2}{k})-\mu_w^2 ] .note that ] and ] in which is determined by autocorrelation property of each wind source , is a small positive parameter , is a -dimensional wiener process ; ^t ] , and .note that does not affect statistic properties of , which has the same reason that does not affect the statistic properties of as shown in the last paragraph of section [ subsection modelling wind by sde ] .this property of the formulation will play an important role in the analysis of section [ sectiontheory ] . with ( [ complete_1])-([complete_2 ] ) and ( [ wind ] ), the stochastic long - term stability model can be represented as : where ^t\in \mathbb{r}^{n_x+n_w}=\mathbb{r}^{n_{\bar{x}}} ] , ] is linear time - dependant .proof : it can be directly deducted from _ theorem 3 _ stated in appendix [ appendix1 ] , hence is omitted .note that the detailed expression of is omitted here for brevity , which can be obtained directly from theorem 2.4 in .one key point is that the coefficient is independent of and , so the probability of the sample path to leave the layer decays exponentially as increases. one important implication of the theorem is that as soon as we take slightly larger than , say , the right hand side of eqn ( [ theorem2 ] ) becomes very small unless we wait for long time spans .that means the sample paths of the stochastic long - term stability model ( [ sde power ] ) are concentrated in .furthermore , if in eqn ( [ theorem2 ] ) , sample pathes of the stochastic model unlikely leave as long as slow dynamics permit . hence , we can explore the trajectory relations between the stochastic model ( [ sde power ] ) and the deterministic model ( [ det power ] ) without concerning probabilities . on the other hand , the condition that can be easily satisfied in this sde formulation of the power system model . as stated before, does nt affect the statistic properties of wind speed , thus we can choose as small as desired such that any satisfactory depth of the layer satisfies .next we study the trajectory relations between the stochastic long - term stability model ( [ sde power ] ) and deterministic long - term stability model ( [ det power ] ) under the condition that , i.e. , the depth of layer is much larger than the small positive parameter associated with wind speeds . according to singular perturbation theorems and the theoretical foundation for the quasi steady - state ( qss ) model proposed in , we have that if the slow manifold of the deterministic long - term stability model is a stable component of the constraint manifold , and the initial point on trajectory of the deterministic long - term stability model lies inside the stability region of the initial short - term stability model , then the trajectory of the long - term stability model will approach the slow manifold in a time of order , and moves along the invariant manifold . 
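to fix notation for the wind model just reviewed (the forms below are the standard ones for this construction and may differ from the cited model in scaling constants), the ornstein-uhlenbeck process can be taken as

$$d\eta_t = -\alpha\,\eta_t\,dt + \sqrt{2\alpha}\; dW_t,$$

a stationary gaussian process with zero mean, unit variance and autocorrelation $e^{-\alpha|t_j-t_i|}$, and the weibull-distributed wind speed follows by the memoryless transformation

$$w(t) \;=\; F_W^{-1}\big(\Phi(\eta(t))\big) \;=\; \lambda\Big(-\ln\big(1-\Phi(\eta(t))\big)\Big)^{1/k},$$

where $\Phi$ is the standard normal distribution function and $F_W$ the weibull$(\lambda,k)$ distribution function; the transformed process then has mean $\mu_w = \lambda\,\Gamma(1+1/k)$ and variance $\lambda^2\,\Gamma(1+2/k) - \mu_w^2$, matching the moments quoted above.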
refer to for more details .we denote the solution of stochastic long - term stability model ( [ sde power ] ) as , and denote the solution of deterministic long - term stability model ( [ det power ] ) as . then the following theorem describes the trajectory relations between the two models . _ theorem 2 ( trajectory relationship ) : _ assuming that , consider the systems ( [ sde power])-([det power ] ) , for some fixed , there exist , a time of order , and such that if the following conditions are satisfied : _ i. _ : : the slow manifold where is a stable component of the constraint manifold ; _ ii ._ : : the initial condition of stochastic system ( [ sde power ] ) where satisfies ; _ iii . _ : : the initial condition of the deterministic long - term stability model ( [ det power ] ) is inside the stability region of the initial short - term stability model , then for ] is linear time - dependant .a matching lower bound given in theorem 2.4 of shows that the above bound is sharp in the sense that it captures the correct behaviour of the probability ._ remark 1 _ : it is shown in theorem 5.1.17 in ( remark 2.7 and lemma 3.4 in ) that the stochastic singular perturbed system ( [ sto_singular perburbed ] ) can be approximated by the reduced deterministic system : and the deviation between the solution of ( [ det slow ] ) and that of ( [ sto_singular perburbed ] ) is of order up to lyapunov time of system ( [ det slow ] ) . that means provides approximation for , i.e. , .following the deduction of in section 2.3 of or chapter 5.1.1 of , we can readily obtain the following results .the cross section of is the solution of the following slow - fast system : \nonumber\end{aligned}\ ] ] where and is a sufficiently small positive parameter .note that the exact expression of is not of interest and thus does not need to be calculated , and we only need to know that is well defined in this formulation ._ proof : _ according to _ theorem 2 _ , if the conditions that and ( i)-(ii ) in _ theorem 2 _ are satisfied , then after a time of order , sample paths of the stochastic long - term stability model ( [ sde power ] ) are concentrated in which is a -neighborhood of the invariant manifold .particularly , the slow dynamics of the stochastic long - term stability model ( [ sde power ] ) are in the neighborhood of the invariant manifold as explained in _ remark 1_. hence , eqn ( [ corollary3_1 ] ) and eqn ( [ corollary3_2 ] ) hold only if trajectory of the deterministic long - term stability model moves along the invariant manifold . on the other hand , from singular perturbation theory , we have that if the slow manifold is a stable component of the constraint manifold , then the invariant manifold exists for sufficiently small . moreover , if one more condition that the initial point of the deterministic long - term stability model lies inside the stability region of the initial short - term stability model is also satisfied , then there exists such that trajectory of the deterministic long - term stability model ( [ det power ] ) moves along the invariant manifold for all , ] .this completes the proof . 1 h. d. chiang , _ direct methods for stability analysis of electric power systems - theoretical foundation , bcu methodologies , and applications_. new jersey : john wiley & sons , inc , 2011 . t. v. cutsem , _ voltage stability of electric power systems_. boston / london / dordrecht : kluwer academic publishers , 1998 .r. r. londero , c. m. affonso , j. p. a. vieira , and u. h. 
bezerra , _ impact of different dfig wind turbines control modes on long - term voltage stability_. 2012 3rd ieee pes innovative smart grid technologies europe ( isgt europe ) , berlin . d. gautam , v. vittal , and t. harbour , _ impact of increased penetration of dfig - based wind turbine generators on transient and small signal stability of power systems . _ ieee transactions on power systems , vol . 24 , no . 3 , pp . 1426 - 1434 , aug 2009 . x. z. wang , h. d. chiang , _ analytical studies of quasi steady - state model in power system long - term stability analysis_. ieee transactions on circuits and systems i : regular papers , pp . 943 - 956 , march 2014 .
|
in this paper , the variable wind power is incorporated into the dynamic model for long - term stability analysis . a theory - based method is proposed for power systems with wind power to conduct long - term stability analysis , which is able to provide accurate stability assessments with fast simulation speed . particularly , the theoretical foundation for the proposed approximation approach is presented . the accuracy and efficiency of the method are illustrated by several numerical examples . wind power , stochastic differential equations , sufficient conditions , power system dynamics , power system stability
|
mathematics is useful for all branches of scientific research .the mastery of mathematical skills is an essential enabler of success in almost all sciences .the mastery of discrete mathematics , which studies discrete and distinct mathematical objects , is particularly important for many branches of scientific research , including , for example , the efficient production of correct and efficient software .unfortunately , globally - speaking , the mathematical skills of undergraduate and even graduate students are significantly lacking .many prospective scientists lack the basics of how to think mathematically and how to write a correct mathematical proof , despite the importance of such skills for their future success as scientists . to help in addressing this situation , daniel velleman wrote a book in 1994 ( with a second edition in 2006 ) titled ` how to prove it : a structured approach ' , in which he likened constructing mathematical proofs to structured programming .velleman used in his book examples from arithmetic and high - school mathematics to present his ideas on how to construct mathematical proofs in a structured way .velleman encouraged the use of his book by referring to a pedagogic mathematics software that he developed called proof designer , by which readers of his book can apply the ideas they learn from the book .proof designer is freely - available online , as a java applet , to assist its users build mathematical proofs of elementary set theory theorems in a structured way .since 1994 , and more so since 2006 , many mathematics courses around the world have used velleman s book , and its accompanying software , as an essential references for teaching the skills of mathematical thinking and proof construction to graduate and undergraduate students . helping in the widespread use of the bookwas velleman s lucid writing style , his use of elementary mathematical examples in his book , and also the ease of use of proof designer when compared to that of other proof assistants .recently we started a project whose goal is to take proof designer to its next step ,so as to make it usable in wider contexts and to appeal to an even wider audience .nowadays , in the age of handheld devices ( such as tablets and smart - phones ) , proof designer is starting to show its age and limitations .for example , there is no portal of proof designer to any of the popular platforms for handheld devices .additionally , proof designer uses only english as the language of its proofs and the language of its graphical user interface .thus , compared to guis of modern educational software , despite its success and it fully serving its initial purpose , proof designer is now clearly lacking in many regards .the goal of our project is to eliminate most , if not all , of the limitations on proof designer that make it less - used today as a math education software than it was during the last ten years . in the following two sections we report , using software illustrations , on our effort so far .we first describe in the next section what we have done so far , then , in the following section , we describe what remains to be done .to describe what we have done so far , we first describe proof designer in its original form then describe changes we made to it . 
then we describe steps we made so far towards porting proof designer to the android platform .proof designer allows users to develop proofs using an intuitive interface .figures [ fig : an - incomplete - proof]-[fig : proof - designer - reexpress ] , on pp . - , show the main components of the proof designer user experience , which involve the presentation of structured complete and incomplete proofs , a theorem - entry dialog box , drop - down menus that the user uses to construct his or her proofs , and a dialog box for re - expressing mathematical formulas .proof designer , in its original form , is available for use as a java applet at http://www.cs.amherst.edu/~djv/pd/pd.html .instructions for how to setup and use proof designer can be found at http://www.cs.amherst.edu/~djv/pd/help/instructions.html . before setting on building proof maker as a portal of proof designer to handheld devices, we set on making some improvements to proof designer itself . after communicating with professor velleman and consulting with him , he kindly sent us the source code of proof designer .we made many changes to proof designer , some of which are visible to the user , and some are not .first , to enhance our understanding of the proof designer code base and to facilitate its further development , we made some improvements to the software source code .in particular , 1 .proof designer had little documentation for its source code .we thus added some unit tests , ` assert ` statements , and code comments .all classes of proof designer were in one java package ( the default package ) .based on uml class diagrams of the code base ( see figure ) , we distributed the code among seven java packages ( a.k.a ., `` modules '' ) , the most important being packages for formula classes ( class ` formula ` and its descendant classes ) and for proof component classes ( class ` pcomponent ` and its descendants ) .we also had packages for class ` menuaction ` and all its ` dox ` descendant classes , and for class ` pdialog ` and all its descendant proof designer dialog box classes ( e.g. , class ` entrydlg ` , which is used to enter theorem statements in proof designer ) .proof designer was written using java 1.3 . hence , its code made no use of java generics or java enumerations , for example , which were introduced in java 1.5/5.0 .we thus used generics wherever possible in the proof designer source code to improve the reliability and maintainability of the code , and we also made use of ` enum`s ( instead of ` int`s ) , e.g. , for defining proof designer s formula and operator kinds .we also made changes that are visible to the user , to improve his or her user experience . 1 .we restructured menus so that some user actions , more intuitively , are viewed as either inferences ( from givens ) or are goal - oriented actions .( see figure . ) 2 .we added the ability to save and load proof sessions ( as xml files ) .we added the ability to run proof designer , not only as a web browser applet but also as a standalone software ( a java jar file ) that can be downloaded and run without the need for a web browser .proof designer originally had the ability to a single undo / redo proof step .we added an unlimited undo / redo capability to proof designer .we also added a new unlimited undo / redo capability in proof designer s reexpress dialog .( see figure . 
)we added limited support for automating proofs in proof designer by adding an ` auto ' command for use on proof goals .the auto command automatically decides and performs the next step in the proof , if any , based on the logical form of the goal statement .7 . to ease the use of proof designer ( and to gear it more towards touch - based interaction ) , we added a toolbar that has the auto and undo / redo commands .( see figure . ) after implementing the above - mentioned improvements to proof designer , we set on exploring porting proof designer to handheld devices .we decided to call the new software proof maker .given the global widespread use of the android platform , we picked the platform as our first choice for porting proof designer to .we call the portal to the android platform android proof maker ( or , apm for short ) . given that typical android softwareis written using java , we initially assumed porting proof designer to android will be straightforward .in fact the ` formula ` package in proof designer ( after making the above - mentioned changes ) was ported without a single change to its code .however , we soon realized that there is no one - to - one correspondence between java swing ui ( user interface ) components ( used in proof designer ) and android ui components .the differences include , for example , * the android ` view ` class has a somewhat different semantics and a different behavior than the java swing ` jcomponent ` class . *the android s ` viewgroup ` class is different from its swing approximate counterpart class ` container ` . *although the android platform has dialog boxes , but the closest to a java swing dialog box is usually an android ` activity ` not an android dialog box .* similarly , ` jframe ` and ` jpanel ` in java swing have no exact counterparts in android ui components .the closest android classes to them seem to be ` activity ` and ` linearlayout ` , respectively . * in javaswing the ` toolkit ` and ` font ` classes provide font services that in android are provided , using a different api , in classes ` paint ` and ` typeface ` .we thus started experimenting with android ui components to see which could suit our purposes and best approximate the proof designer user interface , and that will incur the least changes to the source code of proof designer so as to maintain as much as possible of its `` spirit '' .( see figure and figure . 
)even though not as polished as their proof designer counterparts , our portal of some of the main proof designer ui components to the android platform is a good proof of concept that the portal is possible , even when it will not be straightforward .our effort so far has provided us thus with an assurance that proof designer does not need a total rewriting to be ported to the android platform or to platforms of other handheld devices .it is worthy to mention that due to proof designer not employing the popular model - view - controller ( mvc ) model in its software design , we do though expect the differences between the android and swing ui apis to affect the final versions of android proof maker , particularly affecting the presentation of proofs , which in proof designer are modeled using descendants of class ` pcomponent ` .( contrary to the requirements of mvc , class ` pcomponent ` and its descendants in proof designer doubly function as model classes , modeling abstract proof components , but also as view classes that inherit from the swing ` jcomponent ` ui class and as such are used as part of the gui of proof designer ) .as demonstrated by the figures for apm as we have it today , proof maker is still far from complete .much remains to be done before we get to a final usable version of proof maker .we mention the most important remaining steps below .we first intend to make further improvements to proof designer .these include the following . 1 . adding more unit tests , assertions , and code commentsconsider using the mvc software design model .mostly will affect proof components ( ` pcomponent ` and its subclasses ) . 1 . improving auto ( expanding its scope to givens ). 2 . supporting long variable names , and possibly expanding the role of variables along the lines of .3 . supporting named hypothesis , to allow easy reference .4 . allow proof comments .5 . adding syntax highlighting ( color coding of proofs ) .6 . adding more toolbar buttons .updating html help files to reflect software changes. then our remaining work on proof maker includes the following . 1 . finishing and polishing the apm user interface ( ui ) as a genuine , fully - functioning android portal of the proof designer ui that has the look - and - feel but also the behavior and user experience of native android applications .2 . adding more touch - aware interactions to proof maker ( e.g. , dragging - and - dropping of hypothesis , context menus ) .3 . internationalizing proof maker ,so as to allow languages such as arabic , chinese , etc . , in its proofs and its gui .4 . porting proof designer to other handheld device platforms such as windows 8 phone and ios .coq and isabelle are generic proof assistants .both build on a large tradition of scientific research in the area of proof automation , going back to lcf and even further .compared to proof designer , coq and isabelle are vastly much more powerful ( they can help construct proofs in almost any mathematical domain ) , but the two proof assistants are much less user - friendly than proof designer .users of coq and isabelle have to write code to construct their proofs . 
assuch , coq and isabelle users actually need to also be programmers , not only mathematicians or math students .since handheld devices typically lack a keyboard , writing capabilities are usually limited on them .this casts doubts on the likelihood of coq or isabelle getting ported to handheld devices .dc proof is a more user friendly software when compared to coq and isabelle , and it is a bit more powerful than proof designer . given its ascii - based mathematical notation , however , dc proof is less user - friendly than proof designer .the following table summarizes some of the differences between these proof construction software tools .[ cols="^,^,^,^,^",options="header " , ]like proof designer , the scope of proofs doable in proof maker will be limited to elementary set theory .once done with proof maker , future work that could be built on top of it can include adding a type system ( and possibly later a type inference system ) that enables proof maker overcome this fundamental limitation that it inherited from proof designer . adding a type system to proof makerwill enable it to assist in constructing proofs in mathematical domains other than set theory ( e.g. , number theory , group theory , order theory , domain theory , etc . ) , while maintaining the characteristic simplicity of the software and its user - friendliness .( this software , for example , may help in our formalization of an introductory domain theory textbook . )lurch is a word processor that can check your math . in particular , just as a word processor checks spelling and grammar in natural language documents , lurch aims to check any mathematical proofs included in a document ( e.g. , a school math homework , an exam , a research article , a book chapter , ... etc . ) with as little user guidance as possible ( in the form of document annotations ) . in its aims ,lurch was greatly influenced by proof designer . adding a _ customizable _ type system to lurch is another possible future work that can be done after adding a type system to proof maker . adding a customizable type system to lurch will add to lurch the ability to restrict its rule identifiers so that they can only be instantiated with an expression of a certain type ( like a statement , or a set , or a natural number , ... etc . ) .the type of an expression in lurch will also be customizable , and will be compatible with a customizable parser that the authors of lurch intend to soon add to lurch .moez a. abdelgawad .finitary - based domain theory in coq : an early report ( extended abstract ) .technical report , the 7th coq workshop , sophia antipolis , france also available at arxiv.org:1506.nnnn [ cs.pl ] , 2015 .
|
proof designer is a computer software program designed to help mathematics students learn to write mathematical proofs . under the guidance of the user , proof designer assists in writing outlines of proofs in elementary set theory . proof designer was designed by daniel velleman in association with his book how to prove it : a structured approach to help students apply the methods discussed in the book , making classes based on the book more interactive . this paper is an early report on the progress of our effort to `` bring set theory to the masses '' by developing proof maker , a new proof designer - inspired software that ports proof designer to hand - held devices such as smart - phones and tablets . proof maker , when completed , will allow students to use proof designer with the ease of a touch , literally , on their smart devices . our goal behind developing proof maker is to enable any one who is interested enough to develop elementary set theory proofs anywhere he or she might be ( think of doing proofs while waiting at a bus stop ! ) and at any time he or she wishes ( think of writing proofs before going to bed , or even in bed ! ) . in this paper we report on the improvements we made to proof designer so far , and on the ( many ) steps remaining for us to have a fully - functioning proof maker `` in our hands '' .
|
a variety of spatial patterns in growing bacterial colonies are found in nature and in the lab . when grown on semi - solid agar with succinate or other tca cycle intermediates , _ escherichia coli _ cells divide , move and selforganize into patterns ranging from outward - moving rings of high cell density to chevron patterns , depending on the initial concentration of the nutrient .when grown or just placed in static liquids , cells quickly reorganize into networks of high cell density comprised of bands and/or aggregates , after exposure to succinate and other compounds .chemotactic strains of _ salmonella typhimurium _, a closely - related species , can also form concentric rings and other complex patterns in similar conditions .it has been shown that pattern formation in _e. coli _ and _ s. typhimurium _ is caused by chemotactic interactions between the cells and a self - produced attractant . the gram - positive bacterium _bacillus subtilis _ forms patterns ranging from highly branched fractal - like patterns to compact forms , depending on the agar and nutrient concentrations . in all these systemsproliferation , metabolism and movement of individual cells , as well as direct and indirect interactions between cells , are involved in the patterning process , but how they influence each other and what balances between them lead to the different types of patterns can best be explored with a mathematical model .understanding these balances would advance our understanding of the formation of more complex biofilms and other multicellular assemblies . _ proteus mirabilis _ is an enteric gram - negative bacterium that causes urinary tract infections , kidney stones and other diseases . _p. mirabilis _ is also known for spectacular patterns of concentric rings or spirals that form in _ proteus _ colonies when grown on hard agar .pattern formation by _was described over 100 years ago , and the nature of these patterns has since been discussed in many publications . _p. mirabilis _ cells grown in liquid medium are vegetative swimmer cells which are 1 - 2 m long , have 1 - 10 flagella and move using a `` run - and - tumble strategy '' , similar to that used by _e. coli _ .swimmers respond chemotactically to several amino acids , and can adapt perfectly to external signals .when grown on hard agar swimmers differentiate into highly motile , hyperflagellated , multi - nucleated , non - chemotactic swarmer cells that may be as long as 50 - 100 m , and that move coordinately as `` rafts '' in the slime they produce . during pattern formation on hard surfacesswarmer cells are found mainly at the leading edge of the colony , while swimmers dominate in the interior of the colony .more and more effort is put into understanding the mechanism of swarming , but to date little is known about how cells swarm and how cells undergo transitions between swimmers and swarmers , but understanding these processes and how they affect colonization could lead to improved treatments of the diseases _ p. mirabilis _ can cause .traditionally , formation of periodic cell - density patterns in _ proteus _ colonies has been interpreted as a result of periodic changes in velocity of the colony s front , caused by the cyclic process of differentiation and dedifferentiation of swimmers into swarmers ( see ) .douglas and bisset ( 1976 ) described a regime for some strains of _ p. 
mirabilis _ in which swarmers form a continuously moving front , while concentric rings of high cell density form wel1 behind that front .this suggests that pattern formation can occur in the absence of cycles of differentiation and dedifferentiation . the similarity between this mode of pattern formation and that of _ salmonella_ led us to ask whether the underlying mechanism for pattern formation in _p. mirabilis _ might also be chemotactic aggregation of the actively moving swimmers behind the colony front .a number of mathematical models of colony front movement have been proposed , and in all of them swimmer cells are nonmotile and swarming motility is described as a degenerate diffusion , in that swarmers only diffuse when their density exceeds a critical value .the front propagation patterns as a function of various parameters in one model are given in .although these models can reproduce the colony front dynamics , it remains to justify modeling the swarming motility as a diffusion process , since it is likely that the cell - substrate interaction is important . to replicate a periodically propagating front, ayati showed that swarmers must de - differentiate if and only if they have a certain number of nuclei .it was shown that this may result from diffusion limitations of intracellular chemicals , but biological evidence supporting this assumption is lacking , and further investigation is needed to understand the mechanism of front propagation .here we report new experimental results for a continuously - expanding front and show that after some period of growth , swimmer cells in the central part of the colony begin streaming inward and form a number of complex multicellular structures , including radial and spiral streams as well as concentric rings .these observations suggest that swimmer cells are also motile and communication between them may play a crucial role in the formation of the spatial patterns .however , additional questions raised by the new findings include : ( 1 ) what induces the inward movement of swimmer cells , ( 2 ) why they move in streams , ( 3 ) why radial streams quickly evolve into spiral streams , and ( 4 ) quite surprisingly , why all the spirals wind counterclockwise when viewed from above .to address these questions we developed a hybrid cell - based model in which swimmer cells communicate by excreting a chemoattractant to which they also respond .the model has provided biologically - based answers to the questions above and guided new experiments .we have also developed a continuum chemotaxis model for patterning using moment closure methods and perturbation analysis , and we discuss how the classical models had to be modified for _ proteus _ patterning .previous experimental work focused on expansion of the colony and neglected the role of swimmers in the pattern formation process .the experimental results reported here represent a first step toward understanding their role .after a drop of _p. mirabilis _ culture is inoculated on a hard agar - like surface containing rich nutrient , the colony grows and expands . under the conditions used here ,the colony front expands continuously initially as a disc of uniform density ( figure [ fig_front ] ) .the swarmers exist at the periphery of the colony , and the mean length of the cells decreases towards the center , as observed by others . 
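for reference, the degenerate diffusion used in the front-propagation models mentioned above can be written, in a generic form whose details may differ from the cited models, as

$$\frac{\partial u}{\partial t} = \nabla\cdot\big(D(u)\,\nabla u\big) + \text{growth and exchange terms}, \qquad D(u)=0 \ \text{for}\ u \le u_c, \quad D(u)>0 \ \text{for}\ u > u_c,$$

where $u$ is the swarmer density and $u_c$ the critical density below which swarmers do not move; the observations described next concern the swimmers behind such a front rather than the front itself.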
after a period of growth , swimmer cells behind the leading edge start to stream inward , forming a number of complex patterns ( figure [ fig1 ] ) .the swimmer population first forms a radial spoke - like pattern in an annular zone on a time scale of minutes , and then cells follow these radial streams inward ( [ fig1]a ) .the radial streams soon evolve into spirals streams , with aggregates at the inner end of each arm ( [ fig1]b ) .a characteristic feature of this stage is that the spirals always wind counterclockwise when viewed from above .different aggregates may merge , forming more complex attracting structures such as rotating rings and traveling trains ( [ fig1]b , c ) .eventually the motion stops and these structures freeze and form the stationary elements of the pattern ( [ fig1]b , c ) . later , this dynamic process repeats at some distance from the first element of the pattern , and sometimes cells are recruited from that element . in this way , additional elements of the permanent pattern are laid down ( [ fig1]c ) . on a microscopic level ,transition to the aggregation phase can be recognized as transformation of a monolayer of cells into a complex multi - layered structure .not every pattern is observable in repeated experiments , ( for example , no observable rotating rings can be identified in ( [ fig1]d ) ) , probably due to sensitivity to noise in the system and other factors that require further investigation , variations in nutrient availability , etc . , but the radial and spiral streams seem to be quite reproducible . these new findings pose challenges to the existing theories of concentric ring formation in which swimmer cells are believed to be non - motile .additional questions arise regarding the mechanism(s ) underlying the formation of radial and spiral streams , rings and trains by swimmers , and what determines the chirality of the spiral streams .the macroscopic patterns are very different and more dynamic than the patterns formed in _escherichia coli _ or _ salmonella typhimurium _colonies , where cells interact indirectly via a secreted attractant , but the fact that swimmers move up the cell density gradient is quite similar .the non - equilibrium dynamics suggests intercellular communication between individual swimmer cells .we determined that swimmer cells extracted from these patterns are chemotactic towards several amino acids , including aspartate , methionine and serine ( see table 1 ) . 
in the following we provide an explanation of the radial and spiral streams using a hybrid cell - based model .the spatial patterns of interest here are formed in the center of the colony where cells are primarily swimmers , and the role of swarmers is mainly to advance the front and to affect the swimmer population by differentiation and de - differentiation .thus we first focus on modeling the dynamics in the patterning zone in the colony center ( figure [ fig2]a ) , and later we incorporate the colony front as a source of swimmers ( see figure [ fig6 ] ) .this enables us to avoid unnecessary assumptions on the poorly - understood biology of swarming and the transition between the two phenotypes .as noted earlier , swimmer cells are chemotactic to certain factors in the medium , and we assume that they communicate via a chemoattractant that they secrete and to which they respond .therefore the minimal mathematical model involves equations for the signal transduction and movement of individual cells , and for the spatio - temporal evolution of the extracellular attractant and the nutrient in the domain shown in figure [ fig2]b . we first focus on understanding the radial and spiral stream formation , which occurs rapidly , and during which the nutrient is not depleted and cells grow exponentially . during this periodthe nutrient equation is uncoupled from the cell equations and can be ignored . _mirabilis _ is genetically close to _ escherichia coli _ , and all the chemotaxis - related genes of _ escherichia coli _ have been identified in _proteus _ ._ p. mirabilis _ cells swim using a run - and - tumble strategy , which consists of more - or - less straight runs punctuated by random turns . in the absence of an attractant gradient the result is an unbiased random walk , with mean run time 1 s and mean tumble time 0.1 s. in the presence of an attractant gradient , runs in a favorable direction are prolonged , and by ignoring the tumbling time , which is much shorter than the run time , the movement of each cell can be treated as an independent velocity jump process with a random turning kernel and a turning rate determined by intracellular variables that evolve in response to extracellular signals .the signal transduction pathway for chemotaxis is complex and has been studied extensively in _escherichia coli _, both experimentally and mathematically .however the main processes are relatively simple , and consist of fast excitation in response to signal changes , followed by adaptation that subtracts out the background signal . given the genetic similarity between _p. mirabilis _ and _ escherichia coli _, we describe motility and signal transduction of each cell using the key ideas used successfully for _ escherichia coli _ . * each swimmer cell ( with index ) is treated as a point and characterized by its location , velocity , cell - cycle clock and intracellular variables .* signal transduction of each cell is described by the simple model used in , which captures the main dynamics of the signal transduction network , where , with are constants characterizing the excitation and adaptation time scales , is the local attractant concentration and models detection and transduction of the signal . 
herethe variable is the one that excites and adapts to the signal .it has a similar role as in the signal transduction network .the variable causes the adaptation , which models the methylation level of receptors .* the turning rate and turning kernel are which assumes no directional persistence .* since the slime layer is very thin , typically m , we restrict cell movement to two dimensions .* each cell divides every h and is replaced by two identical daughter cells of age .we assume that cells secrete attractant at a constant rate and that it is degraded by a first - order process .the resulting evolution equation for the attractant is for simplicity , we also restrict reaction and diffusion of the attractant to two space dimensions , which is justified as follows . since no attractant is added to the substrate initially , which is much thicker than the slime layer , we assume that the attractant level is always zero in the substrate .we further assume that the flux of the attractant at the interface of the two layers is linear in the difference of its concentration between the two layers .thus the loss of attractant due to diffusion to the agar can be modeled as a linear degradation , and the degradation constant in ( [ eqn_attr ] ) reflects the natural degradation rate and the flux to the substrate . in the numerical investigations described below , ( [ eqn_attr ] )is solved on a square domain using the adi method with no - flux boundary conditions , while cells move off - grid . for each time step ( mean run time ) , ( [ internals ] ) , ( [ internale ] ) are integrated for each cell and the velocity and position are updated by monte carlo simulation .transfer of variables to and from the grid is done using bilinear interpolating operators .a detailed description of the numerical scheme is given in appendix a of .radial streams appear after several hours of bacterial growth , and before their emergence , the cell density is uniform in the colony , except at the inoculation site . at this stagethe attractant concentration can be approximated by a cone - like profile centered at that site .here we show that starting from this initial condition , the mechanism introduced above can explain radial stream formation . in the numerical investigations belowwe assume that and for simplicity .therefore , we specify an initial attractant gradient of / cm in a disk of radius 1.5 cm , centered at the center of the domain , with zero attractant at the boundary of the disk .for compatibility with later computations on a growing disk , we initially distribute cells/ randomly within the disk .( if cells are initially distributed throughout the square domain cells near the four corners , outside the influence of the initial gradient , aggregate into spots , as is observed in _escherichia coli _ as well . )figure [ fig3 ] shows how this distribution evolves into radial streams that terminate in a high - density region at the center , as expected .one can understand the breakup into streams as follows . whether or not there is a macroscopic attractant gradient, cells bias their run lengths in response to the local concentration and the changes they measure via the perceived lagrangian derivative of attractant along their trajectory .in this situation , the small local variations in cell density produce local variations in attractant to which the cells respond . 
in the absence of a macroscopic gradient, an initially - uniform cell density evolves into a high cell density network , which in turn breaks into aggregates , and then nearby aggregates may merge ( , figure 4.4 ) , as is also observed experimentally in _if we describe the cell motion by a 1-d velocity jump process , a linear stability analysis of the corresponding continuum equations predicts that the uniform distribution is unstable , and breaks up into a well - defined spatial pattern ( ) , figure 4.2 , 4.3 ) .numerical solutions of the nonlinear equations confirm this , and experiments in which the grid size is varied show that the results are independent of the grid , given that it is fine enough .parameters used : m/s , /s , /s , /s , cm , , , and the secretion rate of the attractant is / s per cell . , width=384 ] in the presence of a macroscopic gradient a similar analysis , taken along a 1d circular cross - section of the 2d aggregation field , predicts the breakup of the uniform distribution , but in this situation the 2d pattern of local aggregations is aligned in the direction of the macroscopic gradient .this is demonstrated in a numerical experiment in which cells are placed on a cylindrical surface with constant attractant gradient . thus the experimentally - observed radial streams shown in figure [ fig1 ] and the theoretically - predicted ones shown in figure [ fig3 ] can be understood as the result of ( i ) a linear instability of the uniform cell density , and ( ii ) the nonlinear evolution of the growing mode , with growth oriented by the initial macroscopic gradient of attractant . in most experimentsthe radial streams that arise initially rapidly evolve into spiral streams , and importantly , these spirals always wind counter - clockwise when viewed from above .the invariance of the chirality of these spirals indicates that there are other forces that act either on individual cells or on the fluid in the slime layer , and that initial conditions play no significant role .one possible explanation , which we show later can account for the observed chirality , stems from observations of the swimming behavior of _ escherichia coli _ in bulk solution and near surfaces .when far from the boundary of a container , _ escherichia coli _ executes the standard run and tumble sequence , with more or less straight runs interrupted by a tumbling phase in which a new , essentially random direction is chosen .( there is a slight tendency to continue in the previous direction ) .however , observations of cell tracks near a surface show that cells exhibit a persistent tendency to swim clockwise when viewed from above .since the cells are small , the reynolds number based on the cell length is very small ( ) ) , and thus inertial effects are negligible , and the motion of a cell is both force- and torque - free . 
since the flagellar bundle rotates counter - clockwise during a run , when viewed from behind , the cell body must rotate clockwise .when a cell is swimming near a surface , the part of the cell body closer to the surface experiences a greater drag force due to the interaction of the boundary layer surrounding the cell with that at the immobile substrate surface .suppose that the cartesian frame has the x and y axes in the substrate plane and that z measures distance into the fluid .when a cell runs parallel to the surface in the y direction and the cell body rotates cw , the cell body experiences a net force in the x direction due to the asymmetry in the drag force .since the flagellar bundle rotates ccw , a net force with the opposite direction acts on the flagella , and these two forces form a couple that produces the swimming bias of the cell .( since the entire cell is also torque - free , there is a counteracting viscous couple that opposes the rotation , and there is no angular acceleration . )the closer the cell is to the surface , the smaller is the radius of curvature and the slower the cell speed . because of the bias ,cells that are once near the surface tend to remain near the surface , which increases the possibility of attachment .( in the case of _ proteus _ this may facilitate the swimmer - to - swarmer transition , but this is not established . )resistive force theory has been used to derive quantitative approximations for the radius of curvature as a function of the distance of the cell from the surface and other cell - level dimensions , treating the cell body as a sphere and the flagellar bundle as a single rigid helix .cell speed has been shown to first increase and then decrease with increasing viscosity of linear - polymer solutions when cells are far from a surface , but how viscosity changes the bias close to a surface is not known . .* the initial attractant gradient is / cm , centered as before , and all other parameters are as used for the results in figure [ fig3].,width=384 ] the question we investigate here is whether the microscopic swimming bias of single bacteria can produce the macroscopic spiral stream formation with the correct chirality .we can not apply the above theory rigorously , since that would involve solving the stokes problem for each cell , using variable heights from the surface .instead , we introduce a constant bias of each cell during the runs , _i.e. _ , where is the normal vector to the surface , and measures the magnitude of the bias in the direction of swimming. figure [ fig4 ] shows the evolution of the cell density using a bias of , which is chosen so that a cell traverses a complete circle in 50 secs , but the results are insensitive to this choice .the simulations show that the initially - uniform cell density evolves into spiral streams after a few minutes and by 12 minutes the majority of the cells have joined one of the spiral arms .the spiral streams persist for some time and eventually break into necklaces of aggregates which actively move towards the center of the domain .figure [ fig5]a illustrates the positions of 10 randomly chosen cells every 30 seconds , and figure [ fig5]b illustrates how to understand the macroscopic chirality based on the swimming bias of individual cells . at the blue cell detects a signal gradient ( red arrow ) roughly in the 1 oclock direction , and on average it swims up the gradient longer than down the gradient . 
because of the clockwise swimming bias , the average drift is in the direction of the blue arrow . at arrives at the place and ` realizes ' that the signal gradient is roughly in the 12 oclock direction , and a similar argument leads to the average net velocity at that spot . as a result of these competing influences , the cell gradually make its way to the source of attractant ( the red dot ) in a counterclockwise fashion .certainly the pitch of the spirals is related to the swimming bias , but we have not determined the precise relationship . the spiral movement has also been explained mathematically in , where the macroscopic chemotaxis equation is derived from the hybrid model in the presence of an external force , under the shallow - signal - gradient assumption . when the swimming bias is constant , the analysis shows that this bias leads to an additional taxis - like flux orthogonal to the signal gradient .according to the foregoing explanation , one expects spirals in the opposite direction when experiments are performed with the petri plate upside - down and patterns are viewed from the top , since in this case the relative position of the matrix and slime is inverted and cells are swimming under the surface .this prediction has been confirmed experimentally , and the conclusion is that the interaction between the cell and the liquid - gel surface is the crucial factor that determines the genesis and structure of the spirals . , , other parameters used are the same as in figure [ fig3].,width=384 ] from the foregoing simulations we conclude that when the swimming bias is incorporated , the hybrid model correctly predicts the emergence of streams and their evolution into spirals of the correct chirality for experimentally - reasonable initial cell densities and attractant concentration .next we take one more step toward a complete model by incorporating growth of the patterning domain . 
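the swimming bias used in the simulations described above can be sketched as a constant rotation of the heading during runs. the only number taken from the text is the bias magnitude (one full circle in roughly 50 s); the speed and tumbling frequency below are assumptions, and the chemotactic modulation of run times is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# |omega| = 2*pi/50 follows from "a complete circle in 50 secs"; the negative
# sign makes the turning clockwise when the substrate is viewed from above
# (+z toward the viewer).  speed and tumbling frequency are assumed values.
omega  = -2.0 * np.pi / 50.0   # angular bias during runs [rad/s]
s, lam = 20.0e-4, 1.0          # speed [cm/s] and tumbling frequency [1/s] (assumed)
dt, T  = 0.01, 120.0

n  = 500
xy = np.zeros((n, 2))                       # positions in the substrate plane
th = rng.uniform(0.0, 2.0 * np.pi, n)       # heading angles

for _ in range(int(T / dt)):
    tumble = rng.random(n) < lam * dt
    th[tumble] = rng.uniform(0.0, 2.0 * np.pi, int(tumble.sum()))  # new random heading
    th += omega * dt                        # constant clockwise drift of the heading
    xy[:, 0] += s * np.cos(th) * dt
    xy[:, 1] += s * np.sin(th) * dt

print("rms displacement after %.0f s: %.4f cm" % (T, np.sqrt((xy**2).sum(axis=1).mean())))
```

with omega < 0 each run segment curves clockwise, which is the individual-cell ingredient behind the counterclockwise macroscopic spirals discussed above.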
as we indicated earlier ,the biology of swimmer / swarmer differentiation and the biophysics of movement at the leading edge are poorly understood .consequently , we here regard the advancing front as a source of swimmer cells and prescribe a constant expansion rate as observed in experiments ( figure [ fig_front ] ) .the results of one computational experiment are shown in figure [ fig6 ] , in which the colony expands outward at a speed of / h ( as in figure [ fig_front ] after the initial lag phase ) , and the cells added in this process are swimmer cells .one sees that the early dynamics when the disk is small are similar to the results in figure [ fig4 ] on a fixed disk , but as the disk continues to grow the inner structure develops into numerous isolated islands , while the structure near the boundary exhibits the spirals .the juxtaposition in figure [ fig7 ] of the numerical simulation of the pattern at 5 hours and the experimental results shown in figure [ fig1 ] shows surprisingly good agreement , despite the simplicity of the model .this suggests that the essential mechanisms in the pattern formation have been identified , but others are certainly involved , since the experimental results show additional structure in the center of the disk that the current model does not replicate .new experimental results reported here show that swimmer cells in the center of the colony stream inward toward the inoculation site , and form a number of complex patterns , including radial and spiral streams in an early stage , and rings and traveling trains in later stages .these experiments suggest that intercellular communication is involved in the spatial pattern formation .the experiments raise many questions , including what induces the inward movement of swimmer cells , why they move in streams , why radial streams quickly evolve into spiral streams , and finally , why all the spirals wind counterclockwise . to address these we developed a hybrid cell - based model in which we describe the chemotactic movement of each cell individually by an independent velocity jump process .we couple this cell - based model of chemotactic movement with reaction - diffusion equations for the nutrient and attractant . to numerically solve the governing equations , a monte carlo method is used to simulate the velocity jump process of each cell , and an adi methodis used to solve the reaction - diffusion equations for the extracellular chemicals .the hybrid cell - based model has yielded biologically - based answers to the questions raised above . starting with an estimate of the attractant level before the onset of the radial streaming as the initial value, we predicted the formation of radial streams as a result of the modulation of the local attractant concentration by the cells .it is observed in _escherichia coli _ that runs of single cells curve to the right when cells swim near a surface , and we incorporated this swimming bias by adding a constant angular velocity during runs of each cell .this leads to spiral streams with the same chirality as is observed experimentally .finally , by incorporating growth of the patterning domain we were able to capture some of the salient features of the global patterns observed . 
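returning to the growing-domain computation described above, the treatment of the advancing front as a moving source of swimmer cells can be sketched as follows; the expansion speed and the areal density of newly added cells are placeholder assumptions, and the motility update of existing cells is left as a stub:

```python
import numpy as np

rng = np.random.default_rng(2)

# placeholder values: the expansion speed and the density of newly added
# swimmer cells are assumptions, not the numbers used in the paper
v_front, rho_new = 0.25, 1.0e4      # front speed [cm/h], cells per cm^2 of new area
R, dt_h = 0.2, 0.05                 # initial colony radius [cm] and time step [h]

cells = []                          # (x, y) positions of swimmer cells

for _ in range(int(5.0 / dt_h)):            # five hours of expansion
    R_new = R + v_front * dt_h              # the front advances at constant speed
    area  = np.pi * (R_new**2 - R**2)       # newly colonised annulus
    k     = rng.poisson(rho_new * area)     # swimmer cells seeded in that annulus
    r     = np.sqrt(rng.uniform(R**2, R_new**2, k))   # uniform areal density
    phi   = rng.uniform(0.0, 2.0 * np.pi, k)
    cells.extend(zip(r * np.cos(phi), r * np.sin(phi)))
    R = R_new
    # (the velocity-jump / chemotaxis update of all existing cells would go here)

print(f"final radius {R:.2f} cm, swimmer cells added at the front: {len(cells)}")
```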
the streams and spirals reported here share similarities with those formed in _ dictyostelium discoideum _ , where cells migrate towards a pacemaker , but there are significant differences . firstly , the mechanism leading to aggregation is similar , in that in both cases the cells react chemotactically and secrete the attractant . however , since bacteria are small , they decide how to move by detecting the signal while moving , constantly modulating their run time in response to changes in the signal . in contrast , _ d. discoideum _ is large enough that it can measure gradients across its length and orient and move accordingly . thus bacteria measure temporal gradients whereas amoeboid cells such as _ d. discoideum _ measure spatial gradients . in either case the cells respond locally by forming streams and migrate up the gradient of an attractant . however , spirals are less ubiquitous in _ d. discoideum _ , and when they form they can be of either handedness , whereas in _ p. mirabilis _ only spirals that wind counterclockwise when viewed from above have been observed , which emphasizes the importance of the cell - substrate interaction when cells swim close to the surface . experiments in which the patterning occurs in an inverted petri dish lead to spirals with the opposite handedness when viewed from above , which further supports our explanation . our results imply that the spatial patterns observed in _ p. mirabilis _ can be explained by the chemotactic behavior of swimmer cells , and suggest that differentiation and de - differentiation of the cells at the leading edge do not play a critical role in patterning , but rather serve to expand the colony under appropriate conditions . a future objective is to incorporate a better description of the dynamics at the leading edge when more biological information is available . the spatial patterns reported here are also different from those observed in other bacteria such as _ escherichia coli _ or _ bacillus subtilis_.
in the latter , fractal and spiral growth patterns have been observed , and these patterns form primarily at the leading edge of the growing colony .there cell motility plays a lesser role and the limited diffusion of nutrient plays an important role in the pattern formation .of course the experimental reality is more complicated than that which our model describes .for instance , the nutrient composition is very complex and nutrient depletion may occur at a later stages such as during train formation .further , cells may become non - motile for various reasons , and these factors may play a role in the stabilization of the ring patterns .another important issue is the hydrodynamic interaction of the swimmer cells with fluid in the slime layer .when cell density is low and cells are well separated we can approximate their movement by independent velocity jump processes plus a swimming bias , but when the cell density is high the cell movement is correlated through the hydrodynamic interactions and this must be taken into account .this hydrodynamic interaction may be an important factor in the formation of the trains observed in experiments .the individual cell behavior , including the swimming bias , has been embedded in a continuum chemotaxis equation derived by analyzing the diffusion limit of a transport equation based on the velocity jump process .the resulting equation is based on the assumption that the signal gradient is shallow and the predicted macroscopic velocity in this regime is linear in the signal gradient .a novel feature of the result is that the swimming bias at the individual cell level gives rise to an additional taxis term orthogonal to the signal gradient in this equation .however in the simulations of the patterns presented here we observe steep signal gradients near the core of the patterns and within the streams , and therefore in these regimes the assumptions underlying the continuum chemotaxis model are not valid .a statistical analysis of cell trajectories in the results from the cell - based model reveals saturation in the macroscopic velocity and a decreasing diffusion constant as the signal gradient grows , which suggests that in the limiting case of large gradients , the macroscopic equation for cell density will simply be a transport equation with velocity depending on the signal gradient .positive chemotaxis toward each of the common 20 amino acids was tested using the drop assay .each amino acid was tested at the following concentrations : .1 m , 10 mm , 1 mm , l0 m , and 1 m . 
.amino acid drop assay ._ proteus _ cells were collected from the inner area of a growing colony , approximately 1 hr before a projected onset of a streaming phase .microscopic examination revealed that 90% of cells were 1 to 2 cell length .cells were resuspended in a minimal growth medium to the od=.1 to .15 ( similar results were obtained with the cells grown in a liquid culture ) drop assay .500 minimal growth medium , 200 of cell culture ( od=.l to .15 ) , and 240 of 1% methyl cellulose were combined in a l0x35 mm culture dish and mixed until a homogenous state .4 of a respective amino acid solution was added to the center .cell density distribution in the dish was analyzed after 20 - 25 minutes .addition of h was used as a control .increase in the cell density in the center indicates that a respective amino acid is an attractant .[ cols="^,^,^,^,^,^",options="header " , ] chemotaxis of swimmer cells towards single amino acids was tested using 0.3% agar plates with different thickness of substrate layer(10 and 20 ml ) .each amino acid was used in concentrations varying from 0.25 mm to 7.5 mm in both thicknesses of agar .the plates were point inoculated and placed in a humid chamber at room temperature for at least 20 hrs .bacteria growing on 10 and 20 ml plates with 0.00lm of aspartate , methionine and serine formed dense moving outer ring which we interpret as a chemotactic ring .bacteria grown on all remaining amino acids produced colonies with the higher density at the point of inoculation and homogeneous cell distribution in the rest of the colony . in the implementation of the cell - based model , cell motion is simulated by a standard monte carlo method in the whole domain , while the equations for extracellular chemicals are solved by an alternating direction method on a set of rectangular grid points . in this appendix , we present the numerical algorithm in a two - dimensional domain with only one chemical the attractant involved. each cell is described by its position , internal variables , direction of movement and age ( the superscript is the index of the cell ) .concentration of the attractant is described by a discrete function defined on the grid for the finite difference method ( figure [ fig_interp ] , left ) .we denote the time step by , the space steps by and . , [ eqn_tcg ] ) . ,title="fig:",width=192 ] , [ eqn_tcg ] ) ., title="fig:",width=192 ] since two components of the model live in different spaces , two interpolating operators are needed in the algorithm . is used to evaluate the attractant concentration that a cell senses .for a cell at , inside the square with vertex indices , , and , is defined by the bi - linear function : where and are the area fractions ( figure [ fig_interp ] , right ) . on the other hand ,the attractant secreted by cells is interpolated as increments at the grid points by .suppose during one time step , a cell staying at secretes amount of attractant , we then interpolate the increment of the attractant concentration at the neighboring grid points as follows : we consider here a periodic boundary condition .the detailed computing procedure is summarized as follows .1 . initialization . 1. initialize the chemical fields .2 . initialize the list of swimmer cells .each cell is put in the domain with random position , moving direction and age . is set to be .2 . for time step ( initially ) , update the data of each cell . 
1 .determine the direction of movement by the turning kernel .+ i ) generate a random number $ ] ; + ii ) if , update with a new random direction .2 . .apply periodic boundary condition to make sure inside the domain , 3 .if hours , then divide the cell into two daughter cells .this step is only considered when cell growth is considered .update by the equations for the internal dynamics .+ i ) determine the attractant concentration before the cell moves and after the cell moves by using the interpolating operator .+ ii ) estimate the attractant level during the movement by and integrate equation for to get .+ iii ) .3 . compute the source term of the attractant due to the secretion by the cells using the interpolator where .4 . apply the alternating direction implicit method to the equation of the attractant : for the boundary grid points , use the periodic scheme .5 . .if , repeat * s2-s4 * ; otherwise , return .cx is supported by the mathematical biosciences institute under the us nsf award 0635561 .hgo is supported by nih grant gm 29123 , nsf grant dms 0817529 and the university of minnesota supercomputing institute .e. ben - jacob , i. cohen , a. czirok , t. vicsek , and d. l. gutnick .chemomodulation of cellular movement , collective formation of vortices by swarming bacteria , and colonial development . 238(1):181 , 1997 .a. jansen , c. lockatell , d. johnson , and h. mobley .visualization of proteus mirabilis morphotypes in the urinary tract : the elongated swarmer cell is rarely observed in ascending urinary tract infection ., 71(6):360713 , 2003 .melanie m pearson , mohammed sebaihia , carol churcher , michael a quail , aswin s seshasayee , nicholas m luscombe , zahra abdellah , claire arrosmith , becky atkin , tracey chillingworth , heidi hauser , kay jagels , sharon moule , karen mungall , halina norbertczak , ester rabbinowitsch , danielle walker , sally whithead , nicholas r thomson , philip n rather , julian parkhill , and harry l t mobley .complete genome sequence of uropathogenic proteus mirabilis , a master of both adherence and motility ., 190(11):40274037 , 2008 .julien tremblay , anne - pascale richardson , francois lepine , and eric deziel .self - produced extracellular stimuli modulate the pseudomonas aeruginosa swarming motility behaviour ., 9(10):26222630 , 2007 .
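the two interpolation operators defined in the appendix above can be sketched as follows on a periodic grid; the adi solve of the attractant equation and the velocity-jump update of the cells are not reproduced, and the grid size and cell position in the usage example are arbitrary:

```python
import numpy as np

def sense(c, x, y, hx, hy):
    """bilinear interpolation of the grid field c at a cell position (x, y),
    using the area fractions of the grid square containing the cell."""
    nx, ny = c.shape
    i, j = int(np.floor(x / hx)), int(np.floor(y / hy))
    fx, fy = x / hx - i, y / hy - j
    i0, j0, i1, j1 = i % nx, j % ny, (i + 1) % nx, (j + 1) % ny   # periodic boundaries
    return ((1 - fx) * (1 - fy) * c[i0, j0] + fx * (1 - fy) * c[i1, j0]
            + (1 - fx) * fy * c[i0, j1] + fx * fy * c[i1, j1])

def deposit(c, x, y, dc, hx, hy):
    """spread the concentration increment dc secreted by one cell during a time
    step over the four surrounding grid points, with the same area-fraction
    weights, so that the total increment is conserved."""
    nx, ny = c.shape
    i, j = int(np.floor(x / hx)), int(np.floor(y / hy))
    fx, fy = x / hx - i, y / hy - j
    i0, j0, i1, j1 = i % nx, j % ny, (i + 1) % nx, (j + 1) % ny
    c[i0, j0] += (1 - fx) * (1 - fy) * dc
    c[i1, j0] += fx * (1 - fy) * dc
    c[i0, j1] += (1 - fx) * fy * dc
    c[i1, j1] += fx * fy * dc

# example: one cell secretes into an empty periodic field and then senses it back
c, hx, hy = np.zeros((64, 64)), 1.0 / 64, 1.0 / 64
deposit(c, 0.31, 0.57, dc=1.0, hx=hx, hy=hy)
print("total increment conserved:", np.isclose(c.sum(), 1.0),
      " value sensed at the cell:", round(sense(c, 0.31, 0.57, hx, hy), 4))
```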
|
the enteric bacterium _ proteus mirabilis _ , which is a pathogen that forms biofilms _ in vivo _ , can swarm over hard surfaces and form concentric ring patterns in colonies . colony formation involves two distinct cell types : swarmer cells that dominate near the surface and the leading edge , and swimmer cells that prefer a less viscous medium , but the mechanisms underlying pattern formation are not understood . new experimental investigations reported here show that swimmer cells in the center of the colony stream inward toward the inoculation site and in the process form many complex patterns , including radial and spiral streams , in addition to concentric rings . these new observations suggest that swimmers are motile and that indirect interactions between them are essential in the pattern formation . to explain these observations we develop a hybrid cell - based model that incorporates a chemotactic response of swimmers to a chemical they produce . the model predicts that formation of radial streams can be explained as the modulation of the local attractant concentration by the cells , and that the chirality of the spiral streams can be predicted by incorporating a swimming bias of the cells near the surface of the substrate . the spatial patterns generated from the model are in qualitative agreement with the experimental observations .
|
pairwise - interacting components in implicit solvent are often used as models of self - assembling systems , such as crystal - forming and capsid - forming- proteins , and patchy nanoparticles . in this paperwe shall summarize ways of using monte carlo ( mc ) simulation to evolve such components in order to approximate the ( interacting ) brownian motion that their real counterparts execute .brownian motion is usually approximated in simulations by integration of overdamped equations of motion , called the brownian dynamics ( bd ) method , with monte carlo algorithms more often used as a means of sampling thermal distributions .however , recent work shows that in certain circumstances the mc method can also evolve components according to an approximately correct dynamics .dynamic mc simulations even offer some advantages over their bd counterparts : it is easier and computationally cheaper to evaluate potentials ( mc ) than forces ( bd ) ; mc can cope with pathological potentials ( e.g. hard particles , abrupt changes in potential ) ; and one does not face problems of numerical instability with mc as one does with bd ( even for smooth potentials ) , and so can make larger basic moves . in what follows we outline the reasons why single - particle metropolis mc can effect a dynamics close to brownian motion , and we summarize the work of others that makes use of this correspondence .we then describe extensions of the mc scheme that incorporate explicit moves of collections of particles , and argue that such schemes can be used to preserve the approximate realism of the mc method when collective modes of motion become important to a system s evolution .we take the view that although mc methods are not dynamically realistic in all details , they offer for some applications a convenient alternative to conventional integration of equations of motion .it is well - known that sequential moves of single components , proposed in an unbiased fashion and accepted according to the metropolis criterion , permits sampling of the boltzmann distribution given sufficiently long simulation times .however , is also true that if we restrict such moves to local translations and rotations then the dynamics executed by a single particle in an external forcefield is equivalent , in the limit of small trial moves , to a langevin dynamics , i.e. to brownian motion in that potential . to see this ,consider a particle in one dimension in a position - dependent potential .the master equation corresponding to a metropolis mc algorithm in which particle displacements are drawn uniformly from a range ] is the rate of moving a particle from position to position , and .can be expanded in powers of ( which we assume to be small ; we also assume that ) , giving to lowest order [ lang ] _t p(x;t ) -_x ( v p(x , t ) ) + _ x ( d _ x p(x , t ) ) .this is a fokker - planck equation with drift velocity and diffusion constant , and corresponds to a langevin dynamics satisfying an einstein relation .we can neglect terms higher order in provided that changes little in the course of a single move .if this condition holds then metropolis mc moves of single particles in an external potential occur in a dynamically realistic way : the drift velocity of the particle is proportional to the force acting upon it , and its diffusion constant is independent of position .this correspondence also holds in two and three dimensions . 
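the correspondence sketched above is easy to check numerically. the following sketch runs many independent single-particle metropolis walkers in a constant-force potential v(x) = -f x (an illustrative choice) and verifies that the measured drift velocity and diffusion constant satisfy an einstein relation; the value delta**2/6 quoted in the comments is the small-displacement result for uniform trial moves in one dimension:

```python
import numpy as np

rng = np.random.default_rng(3)

beta, force, delta = 1.0, 0.5, 0.05   # beta*force*delta << 1 so the expansion holds
n_steps, n_walkers = 20000, 400

x = np.zeros(n_walkers)
for _ in range(n_steps):
    dx = rng.uniform(-delta, delta, n_walkers)         # unbiased trial displacements
    dE = -force * dx                                   # energy change for v(x) = -f*x
    accept = (dE <= 0.0) | (rng.random(n_walkers) < np.exp(-beta * dE))
    x += np.where(accept, dx, 0.0)                     # metropolis acceptance

# for uniform proposals on [-delta, delta] the expansion gives D = delta**2/6 per
# attempted move and a drift velocity beta*D*force, so the ratio below should be ~1
D_emp = np.var(x) / (2.0 * n_steps)
v_emp = np.mean(x) / n_steps
print(f"drift / (beta*D*force) = {v_emp / (beta * D_emp * force):.2f}  (expect ~1)")
print(f"measured D = {D_emp:.2e},  delta^2/6 = {delta**2 / 6:.2e}")
```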
in most situations we are interested in interacting particles , not isolated particles in external potentialshere , too , single - particle metropolis mc evolution can in many cases approximate a realistic dynamics . as an illustration ,consider two interacting but otherwise isolated particles and in one dimension , with positions and .particles interact according to a pairwise potential . under the single - particle metropolis monte carlo algorithm described before , the master equation for the separation of these particles reads [ ma2 ] _t p(r;t)= _ -^ d p(r;t ) w(rr ) -_-^ dp(r;t ) w(rr ) , where and \right) ] .expansion of yields ( since no external forces act on the dimer ) and .hence in the limit of small displacements the collective diffusion constant is independent of the force exterted by on , which is what one would conclude by adding eqs . and .if , by contrast , trial moves of and lead to large energy changes , then the dimer diffusion constant ] .if exceeds then the move is aborted _ in situ _ , preventing clusters from moving with a frequency greater than is physical .illustration of static and dynamic linking procedures for cluster moves . in ( a ), nanoparticle is linked to its neighbors , and according to pairwise energies of interaction in the initial microstate , .all recruited neighbors ( e.g. ) propose links with their neighbors ( e.g. ) , and so on until no particles remain to be tested .in the example shown all particles interact strongly , and the entire cluster is chosen to move .the move shown results in proposed new microstate . in ( b ) , is recursively linked to its environment according to gradients of interaction energies calculated by making virtual moves of particles ( see text ) .the proposed move leads to new microstate . ]if the link is succesful then we add to the moving cluster ; if not , we do not , and we do not attempt to link to again .we continue iteratively to propose links between particles in the moving cluster and those with which they interact ( as long as we have not tested those links before , and provided that those particles are not already members of the moving cluster ) .we stop when we run out of particles to test .we then propose a move ( e.g. a translation ) of the cluster .this defines a new microstate . to preserve the equilibrium distribution it is sufficient to impose the requirement of superdetailed balance , [ superdetbal ] ( ) w_gen(|r ) w_acc(|r ) = ( ) w_gen(| ) w_acc(| ) . here is the equilibrium weight of the state ( is the energy of state and is the partition function ) , and is the rate of generating a move from state to state , given a realization of links and failed links .this rate contains the likelihood of selecting the cluster s displacement or rotation , one factor of for each link formed within the moving cluster , one factor of for each link attempted but not formed within the cluster , and one factor of for each link not formed between the cluster and its environment .all but the latter set of probabilities equal their counterparts for the reverse move .rearranging reveals that balance is satisfied by the acceptance rate [ accept1 ] w_acc()= d(c)(1 , e^(- ) ( e()-e ( ) ) ) , provided that no overlaps occur .if they do , the move is rejected .for infinite fictitious temperature , , the likelihood of forming links between the seed and any other particle is zero , and the algorithm executes single - particle moves . 
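the recursive linking step of the static scheme can be sketched as follows. the concrete link probability used here is an assumed form chosen only to reproduce the limits stated above (no links at infinite fictitious temperature, near-certain links for strongly attractive bonds); the rigid cluster move and the acceptance step quoted above are not reproduced:

```python
import numpy as np
from collections import deque

def build_static_cluster(seed, pair_energy, neighbours, beta_link, rng):
    """grow a cluster around `seed` by recursively proposing links to neighbours,
    using only pair energies of the *initial* microstate (the static scheme).
    the link probability p = max(0, 1 - exp(beta_link * u_ij)) is an assumed
    concrete form, not a formula taken from the text."""
    cluster, tested, queue = {seed}, set(), deque([seed])
    while queue:
        i = queue.popleft()
        for j in neighbours(i):
            bond = (min(i, j), max(i, j))
            if j in cluster or bond in tested:
                continue                      # each link is proposed at most once
            tested.add(bond)
            if rng.random() < max(0.0, 1.0 - np.exp(beta_link * pair_energy(i, j))):
                cluster.add(j)
                queue.append(j)
    # the cluster is then translated or rotated rigidly and accepted with the
    # size-dependent rate discussed in the text; that step is not shown here
    return cluster

# toy usage: particles on a line, attractive square well of depth 1 and range 0.6
rng = np.random.default_rng(4)
pos = np.sort(rng.uniform(0.0, 10.0, 20))
pair_energy = lambda i, j: -1.0 if abs(pos[i] - pos[j]) < 0.6 else 0.0
neighbours  = lambda i: [j for j in range(len(pos)) if j != i and abs(pos[i] - pos[j]) < 0.6]
print(sorted(build_static_cluster(0, pair_energy, neighbours, beta_link=2.0, rng=rng)))
```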
for finite values of , collective moves can be achieved .we have attached a factor of in order to modulate the diffusivity of clusters according to their size and shape .such schemes provide a convenient way to effect collective motion in self - assembling systems .they allow for precise control of collective motion : we know in advance of the move the nature of the cluster to be moved , and we can rotate and translate this cluster as desired . their chief shortcoming , however , is that clusters ( and single particles ) do not move solely according to the potential energy gradients acting on them . because links are conditioned upon energies in the initial microstate , particles interacting strongly are likely to be moved in concert , even if relative moves of these particles are favorable .the analog of for a static cluster algorithm yields an effective drift velocity for the inter - dimer separation that is not simply proportional to the negative of the potential gradient , but is instead proportional to .this drift velocity is not consistent with a physical dynamics .one suggested consequence of such a bias is sketched in fig [ fig2](a ) : even though rotation of the upper 6 particles might be desirable ( see panel ( b ) ) , if particles interact strongly then a static algorithm would have trouble forming a cluster of those 6 particles that does not include the whole of the structure shown .simulations show that proposing relative moves of strongly - attracting particles less frequently than moves of weakly - attracting particles can lead to dynamical trajectories substantially different than are generated by integrating equations of motion .by contrast , the idea behind a dynamic cluster - linking scheme is to make a trial move of a single particle and to deal iteratively with the consequences of that move .this scheme , and certain of its off - lattice generalizations , decouple the likelihood of proposing relative moves of particles from their interaction energies in the initial microstate , circumventing the chief deficiency of static cluster - linking schemes . in fig .[ fig2](b ) we illustrate one possible dynamic cluster - linking scheme , called a ` virtual - move ' monte carlo algorithm , for particles bearing general pairwise interactions .we link particles in a recursive manner similar to that described above , except that now our linking procedure involves trial virtual moves of particles . in detail, we pick a particle .we choose to form a pre - link of particle and some neighbor with a probability [ virtual_link ] p_ij()=(n_c -n_c ) _ ij ( ) max(0,1-e^u_ij()-u_ij ( ) ) that depends on a _ virtual move _ ( e.g. translation or rotation ) of relative to . here is the pairwise energy of the bond in microstate , and is the bond energy following the virtual move of ( is returned to its original position following its virtual move ) .the factors and are as before , and as before the move is aborted if exceeds . linking particles in this fashionensures that neighbors exert mutual forces proportional to the gradient of their pairwise energies , unlike in static linking schemes .particle motions are correlated at the level of a single move if is large .we then do as follows . *if a pre - link does not form , we label the link as unformed , do not add to the moving cluster , and do not consider the link again .we then consider another neighbor of .* if the pre - link forms , then * * we convert the pre - link into a full link with probability , and add to the moving cluster . 
is then assigned a virtual move so that it moves with as a rigid body . to compute make a reverse virtual move of ( starting from its original position ) , corresponding to the forward virtual move with the sense of rotation or translation reversed . for pre - linked particles ( indeed , for any two particles internal to the chosen cluster ) the factor is given by where now refers to following a reverse virtual move . * * we convert the pre - link into a frustrated link with probability . in this case is not added to the moving cluster , and the bond is not tested again .we stop when no more particles remain to be tested , and we move the chosen cluster according to the prescribed virtual move , defining a proposed new microstate . to preserve the equilibrium distribution we can balance the rates for forward and reverse moves involving a given realization of 1 ) internal cluster links , and 2 ) failed internal links that are either unformed or frustrated . by construction of the linking procedurethese two classes of probabilities cancel from .the remaining contribution to comes from unformed links external to the moving cluster , and rearrangement of that equation reveals that an appropriate acceptance rate for the collective move is [ accept2 ] w_acc(| ) = d ( ) ( 1 , _n e^- ( u_ij ( ) -u_ij ( ) ) ) , provided that no frustrated links lie external to the pseudocluster ; the acceptance rate is zero if they do .the label identifies particle pairs that start ( ) in a noninteracting configuration and end ( ) with positive energy of interaction ( overlapping ) , _ or _ which start ( ) in an overlapping configuration and end ( ) in a noninteracting one .there is no guarantee that such a procedure will result in motion that is dynamically realistic in all details .indeed , there are some features of the algorithm that make precise control of cluster motion impossible . for one ,we do not know in advance the nature of the moving cluster , making it hard to cleanly separate translational from rotational motion ( the same is not true of a static linking procedure ) .for another , collective motion requires that both forward and reverse virtual moves of one particle ` recruit ' another : if the basic scale of virtual displacements is much smaller than particle interaction ranges then such motion is unlikely .correspondingly , if the basic scale of virtual displacements is too large , intra - cluster relaxation becomes slow .some tinkering is needed in order to reach a reasonable compromise between the these two processes .even given these difficulties , intuition suggests that by moving clusters according to gradients of potential energy , and by choosing cluster diffusion constants in a reasonable way , we should reproduce some key features of overdamped motion .( it is also worth noting that we have the freedom to scale collective diffusion constants anisotropically , as is physically reasonable , which is not a feature that emerges from simple bd algorithms . )particles should move in a locally realistic fashion and retain some of the collective degrees of freedom that single - particle moves ignore . a qualitative comparison between a virtual - move algorithm ( the version of ref . 
, which is the predecessor of the algorithm described here ) and bd simulations of strongly - attractive discs shows this to be the case , even in circumstances where single - particle moves clearly lack dynamical accuracy .preservation of these important features of real dynamics may be sufficient to determine if the real counterpart of a model system will assemble well or become kinetically trapped .testing of a virtual - move algorithm against bd simulations of viral capsid self - assembly found that each generates similar values of capsid yields for given model parameters .since yields depend upon both thermodynamics and dynamics , such agreement is encouraging .other work has used virtual - move algorithms ( the original version or the one described here ) to generate dynamical trajectories for self - assembling systems or to thermodynamically sample them .the cluster schemes described above treat pairwise - interacting particles .however , it is possible to use them to effect collective motion of particles bearing multibody potentials , which are often encountered in model biomolecules .one way to do so is described in .let s say that the true energy of a system of particles in microstate is , which may contain contributions from multibody potentials .we can nonetheless use the virtual - move scheme by assuming that all particles interact via fictitious pairwise potentials ( perhaps derived from potentials of mean force obtained using particles true interactions ) .if we write in the exponentials in the equilibrium weights in , where is the system s total fictitious energy in microstate , we find the acceptance rate for the virtual - move procedure using the fictitious potentials to be [ accept3 ] w_acc(| ) = d ( ) ( 1 , e^-(e - u)_i j _n e^- ( u_ij ( ) -u_ij ( ) ) ) , subject to the same caveats as. here and .the factor accounts for the difference between real and fictitious potentials .a fictitious linking potential can also be used if the real potential contains long range interactions ( e.g. coulomb interactions ) that make direct application of a cluster algorithm inconvenient . in this case, the fictitious potential could be chosen to account only for the short range component of particles interactions .we have described the use of monte carlo algorithms to approximate the overdamped dynamics of interacting particle systems . while neither single - particle- nor collective - move algorithms are dynamically realistic in all details , recent work shows that they can approximate a natural dynamics within a range of model systems .given the advantages of numerical stability and ease of implementation offered by monte carlo algorithms over brownian dynamics schemes , we suggest that monte carlo algorithms can in some cases provide a useful and convenient alternative to conventional integration of equations of motion .i thank rob jack and jocelyn rodgers for comments on the manuscript .i am grateful to phill geissler for the collaboration that led to development of the virtual - move algorithm ( ref . ) , and i thank alex wilber , tom ouldridge and jon doye for identifying omissions in preprint- and published versions of that paper .this work was performed at the molecular foundry , lawrence berkeley national laboratory , and was supported by the director , office of science , office of basic energy sciences , of the u.s .department of energy under contract no .de - ac0205ch11231 .
|
we describe collective - move monte carlo algorithms designed to approximate the overdamped dynamics of self - assembling nanoscale components equipped with strong , short - ranged and anisotropic interactions . conventional monte carlo simulations comprise sequential moves of single particles , proposed and accepted so as to satisfy detailed balance . under certain circumstances such simulations provide an approximation of overdamped dynamics , but the accuracy of this approximation can be poor if e.g. particle - particle interactions vary strongly with distance or angle . the twin requirements of simulation efficiency ( trial moves of appreciable scale are needed to ensure reasonable sampling ) and dynamical fidelity ( true in the limit of vanishingly small trial moves ) then become irreconcilable . as a result , single - particle moves can underrepresent important collective modes of relaxation , such as self - diffusion of particle clusters . however , one way of using monte carlo simulation to mimic real collective modes of motion , retaining the ability to make trial moves of reasonable scale , is to make explicit moves of collections of particles . we will outline ways of doing so by iteratively linking particles to their environment . linking criteria can be static , conditioned upon properties of the current state of a system , or dynamic , conditioned upon energy changes resulting from trial virtual moves of particles . we argue that the latter protocol is better - suited to approximating real dynamics . # 1 # 1#1 # 1 # 1eq . ( [ # 1 ] ) # 1([#1 ] ) v
|
in reference we presented a prescription for calculating the efficiency of readout encoding methods for binary strip detector readout , which is arguably the simplest case of interest for particle physics instrumentation . herewe extend the analysis to pixel detectors and also to include charge information rather than binary readout .both the two - dimensional nature of pixel hit patterns and the addition of charge information introduce significant complexity .we preserve the same meaning of readout efficiency as the number of bits used in practice to extract all the required information from the detector , relative to the minimum possible number of bits needed given by the information content .calculating this minimum is the main content of this note .this does not mean the bits used for a single event or readout cycle , but the average bits per event for a large ensemble of events .in practice we define efficiency as the minimum possible number over the actual number of bits , so that it has a value between 0 and 1 .we consider only lossless data compression , as opposed to data reduction in which information is discarded and can not be recovered .we consider typical pixel detector occupancy below 0.5% , where occupancy always means average occupancy over a fairly long time ( eg . 1 second ) .the average occupancy is what determines the output data total readout bandwidth requirement , rather than the instantaneous occupancy of a single event .the actual readout bandwidth required for a given occupancy depends on how much the data can be compressed , and also on the required latency ( how long one is willing to wait for the information ) .we only consider the compression aspect here ; for a case study that includes latency considerations see ref .an important practical constraint is that the detector readout will be implemented in units ( such as modules or chips ) .the information content of the detector is the sum of the information in all the readout units .we are therefore calculating the entropy for one single readout unit .if the data from the entire detector could somehow be combined prior to readout , then the entropy may be lower than the plain sum of all readout units .but we restrict our analysis to assume independent readout units as the case of practical interest .following ref . we decompose the detector output data into the parts shown in fig . [ fig : parts ] , where we have added cluster shape and charge elements that were not present for binary strip detector readout . a cluster is any combination of hit pixels which are _ touching _ , including corner - to - corner touching .any two hit pixels that touch belong to the same cluster .we will only study the bold face parts in this note , since the treatment of context and format elements given in ref . remains valid .we have chosen this decomposition for convenience , but other decompositions are possible .the desired end result is the minimum number of bits necessary to encode all the information , or information entropy , which is a physical property and should not depend on the representation chosen to calculate it .however , correlations in real data that are not included in our assumptions will introduce a bias and result in a representation - dependent entropy .this is discussed further in section [ sec : assumptions ] . 
the decomposition of fig .[ fig : parts ] has a first order physical interpretation as follows .the _ addresses _ correspond to the positions of particles crossing a silicon sensor- each particle forming one cluster that is identified by one address ( the cluster address can be defined in several ways , for example as the bottom left corner pixel of the smallest cluster bounding box .how it is defined is not important to our discussion ) .if the distribution of particle tracks crossing a sensor is uniform , the cluster address entropy is straightforward to calculate . each cluster has a _ shape _ and _ size _ , which are given by cluster distribution functions which depend on position , orientation , and other features that may or may not vary across a particular detector .there are nevertheless a finite number of possible shapes and sizes .finally , there is a _ charge _distribution in each cluster . in the classical limit charge is an analog quantity and therefore a potential problem for an entropy calculation , which requires a discrete distribution .however , the practical measurement of charge is limited by noise , and we must therefore consider the entropy of encoding useful charge information , not meaningless noise fluctuations . for this we will further decompose charge into total charge , , and single pixel charge fractions , . to higher order , multiple particles can merge into one cluster , secondary interactions from a single particle can result in multiple clusters , and dead pixels or charge deposits that fluctuate below threshold can lead to split clusters .so the association of clusters to randomly incident single particles is not perfect , but as we will see at low occupancy such high order corrections have a negligible impact on the entropy .sections [ sec : address ] , [ sec : shape ] , and [ sec : charge ] evaluate and discuss the address , shape , and charge entropy , respectively .the total information entropy due to the hits is the sum of these parts , which excludes the context and format contributions as already explained .this paper calculates an information entropy by making general assumptions about the data from particle physics pixel detectors . for the result to be applicable to a particular pixel detector, one must first check that the assumptions are valid for the detector in question .this section discusses how the assumptions may fail and what would be the consequence on the result .this section refers to material in the sections that follow , but we placed this discussion first , so the reader is aware of the issues and can refer back to this section as needed .the address entropy calculation ( sec . [ sec : address ] ) assumes clusters are mainly due to particles that illuminate a readout unit uniformly and randomly .this may not be the case , for example in collimated particle jets , where the clusters themselves may be `` clustered '' ( spatially correlated ) on the scale of a readout unit .correlations between cluster positions would reduce the address entropy . for the particular case of a high luminosity proton collider with many interactions per beam bunchcrossing ( pile - up ) , the vast majority of clusters will come from pile - up interactions , with high energy collimated jets being rare . 
therefore , we expect that the uniform , random assumption will hold .note that what matters for entropy is the vast majority , not the rare exceptions .[ sec : address ] also assumes that most clusters are produced by one single particle , with merging of energy deposits from two or more particles being rare .this of course depends on the track density and the detector design . a low granularity detector in high track density , or used for collimated beams or jets ( eg . a calorimeter ) may have most clusters encompass multiple particles .thus our assumption is appropriate for tracking detectors , which , in order to be useful , must be designed with high enough granularity for the assumption to hold .the separate calculations of sections [ sec : address ] , [ sec : shape ] , and [ sec : charge ] assume that the entropy can be decomposed exactly into the parts shown in fig .[ fig : parts ] .that is , we have not considered any correlation between cluster positions , cluster charge , pixel charge fraction , and cluster shape .but such correlations do exist at some level and will result in a reduction of entropy . consider a flat ( not curved ) detector element illuminated by a point source of particles .each cluster position in the detector will be associated with a different particle incidence angle , and therefore a different cluster shape and charge .we have ignored such correlations within a readout unit , assuming that a readout unit is small , so that even for a point source there would be little variation .furthermore , if the beam spot is not a point source , but in fact physically larger than the readout unit , any correlation between position and incidence angle will be washed out ( recall that our calculation is for one readout unit , so only correlations within a readout unit matter . ) in sec .[ sec : charge ] we have ignored correlation between cluster charge and pixel charge fractions . when the charge of an n - pixel cluster takes on it smallest ( biggest ) possible value , then the pixel charge fractions are given : they must all be equal and given by the pixel threshold ( maximum ) . as cluster charge increases ( decreases ) , more and more pixel fraction combinations become possible , and correlation disappears .this type of correlation lowers the entropy .the above discussion concerns assumptions that omit correlations thought to be small for tracking detectors in high intensity colliders .as discussed , the effect of these correlations will be to lower the entropy .therefore , to the extent that the assumptions are violated , the minimum number of bits calculated in this paper can be seen as an upper limit to the true minimum number of bits .other assumptions may instead increase the entropy if violated .[ sec : shape ] and [ sec : charge ] assume a detector is exposed to a single dominant source of particles ( originating form a single interaction region and containing a mix of particles with a peaked ionization distribution , such as for minimum ionizing particles ) . 
if a detector is instead exposed to multiple , comparable intensity sources at the same time , that would complicate the estimate and result in higher entropy ( weak secondary sources will not matter , as the entropy is determined by the bulk of the data , not by rare events ) .in the limiting case of individual pixels rather than clusters , the address entropy is given simply by the logarithm of the number of ways to pick out of pixels , where is the occupancy and is the number of pixels in the chip .this is given by the expression .when clusters are considered , two clusters can not be touching , or they would count as a single cluster .one must therefore exclude a 1-pixel - wide empty boundary around each cluster .let be the total number of pixels `` used up '' by a cluster _ including _ this empty boundary .for example , for a 1-pixel cluster , because the boundary around 1 isolated pixel consists of 8 pixels . for a 3 by 3 pixel cluster , , and so on .the address entropy then has a lower bound \ ] ] where is the number of clusters .this of course reduces to for ( 1-pixel clusters with no empty boundary ) . is a lower bound because at the edges of a chip , or when two clusters are very close , a dedicated empty boundary for each cluster is not needed .[ fig : haratio ] shows the ratio for ( 1-pixel clusters plus an empty boundary ) and ( 3 by 3-pixel clusters plus an empty boundary ) .it can be seen that , for the occupancy range of interest , and are very close , and therefore is a good approximation to the true entropy .we will use as the address entropy in our calculations ..,scaledwidth=70.0% ] it is instructive to examine the address entropy per address as a function of occupancy . for a single address ,the entropy is simply the number of bits needed to count the full address space , so for a readout chip with pixels the address entropy of a single address ( occupancy of ) is 16 bits .but as the occupancy rises , fewer and fewer bits are needed per address . within the occupancy range of interest , the number of bits per address quickly converges to a common value independent of chip size .[ fig : hapera ] shows the address entropy per address as a function of occupancy for chips sizes , , and pixels ( note that the occupancy range in this figure goes only up to 0.1% so that the difference between the curves can be appreciated ) .this is an important result counter to conventional wisdom .it shows that making a chip larger and therefore growing the address space ( keeping pixel size constant ) does _ not _ imply a greater data volume due to the need for more address bits .if addresses are compressed ( rather than using the same number of bits per address regardless of occupancy ) , then approximately the same number of bits is needed to transmit the cluster address information regardless of chip size .making pixels smaller , on the other hand , does increase the entropy per address as expected ( i.e. 
cutting the pixel size in half adds one bit ) .this is more difficult to appreciate from fig .[ fig : hapera ] , where reducing pixel size has the effect of reducing occupancy ( so moving to the left on the curves ) as well as increasing the number of pixels for constant size chip ., the total number of cluster shapes , and , the number of most frequent shapes making up the total fraction shown on the axis.,scaledwidth=70.0% ] we consider shape to also include the size ( number of pixels ) of a cluster .a single pixel has zero shape entropy , but two adjacent pixels can have 4 possible shapes ( up - down , left - right , and two diagonals ) .three adjacent pixels can have 20 possible shapes , four pixels 110 shapes , and so on ( discuses the counting of cluster shapes for manhattan geometry ) . from thisit seems that shape entropy can be rather large , and difficult to estimate in general .but in real - life particle detectors this will not be the case , and we can estimate a shape entropy without knowing a detailed cluster zoology .the reason is that each readout element of a pixel detector ( a single chip or a multi - chip module ) will be traversed by particles from a preferential direction , and these will produce similar - looking clusters .thus , we can consider that some large fraction , , of the clusters in a given readout unit consist of a small number of shapes , , while the remaining can have a large variety of shapes , , due to scattered particles and radiation .this is shown graphically in fig .[ fig : shapes ] .the shape entropy is thus , is plotted for different and in fig .[ fig : shape - entropy ] . not surprisingly , the shape entropy is dominated by the number of cluster shapes that occur very frequently , and the presence of even thousands of additional , but improbable shapes has little impact . for estimation purposes we will take 4 bits per cluster as a reasonable indicative value for shape entropy in present day pixel detectors where the mean cluster size is smallhowever , the cluster shapes occurring frequently in one particular module of a pixel detector will be different from those in another module with different position relative to the collisions .therefore , in order to realize in practice low entropy encoding of cluster shapes , each module would need to use a different encoding ( a different huffman table , for example ) , which would have to be programmed or learned by the module from its own data .we first consider cluster charge .the ideal cluster charge obeys a landau distribution for a minimum ionizing particle passing through a silicon detector .one must consider a spectrum of different particle momenta and incidence angles , which leads to a sum of landaus , as well a gaussian broadening due to noise . as was the case for cluster shapes , the range of landau means to be summed will vary for modules in different detector locations , but each readout unit will have a known , landau - like cluster charge distribution . to estimate the entropy we consider a cluster charge probability distribution function equal to the sum of 5 landau distributions with means at , 5 , 6 , 7 , and 8 arbitrary units , where the landau function is /2}$ ] . using a single landau instead , or changing the mean , made very little difference .the mean of is just the average of the component means , or 6.5 . 
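before moving on, the address-entropy estimate discussed above can be reproduced with a short script. the chip size below is an assumption chosen so that a single isolated hit costs 16 bits, as quoted in the text, and the clustered lower bound is written in the form that reduces to the simple binomial expression for a = 1:

```python
from math import lgamma, log

def log2_binom(N, n):
    # log2 of the binomial coefficient C(N, n), via log-gamma to avoid overflow
    return (lgamma(N + 1) - lgamma(n + 1) - lgamma(N - n + 1)) / log(2.0)

N = 65536          # pixels per readout chip; chosen so that one isolated hit costs
                   # log2(N) = 16 bits as quoted in the text (an assumption)

for occ in (1e-4, 1e-3, 5e-3):
    n = max(1, round(occ * N))                       # number of clusters in the chip
    h_single  = log2_binom(N, n) / n                 # bits/address, 1-pixel clusters
    h_bounded = log2_binom(N - n * (9 - 1), n) / n   # lower bound with a = 9 (1 hit + boundary)
    print(f"occupancy {occ:.0e}:  {h_single:5.2f} bits/address "
          f"(>= {h_bounded:5.2f} with the empty boundary, a = 9)")
```

the output illustrates the point made above: the bits needed per address fall quickly with occupancy and, once compressed, depend only weakly on the total number of pixels in the chip.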
to compute an entropywe first `` digitize '' the charge pdf by histogramming it in equal width bins in the range 0 to 20 ( 40 ) arbitrary units .the last ( overflow ) bin is increased so that the total histogram probability is unity .note that 20 ( 40 ) corresponds to just over 3 ( 6 ) times the mean of this particular pdf , meant to be representative of present pixel detectors with a dynamic range of a few times the minimum ionizing particle signal . for each value of compute the entropy as , where the sum is over all histogram bins and is the probability in each bin .we also compute the _ digitization error _ by performing a toy experiment in which we measure cluster charge in 4 hypothetical layers and take the average .this is representative of measuring the specific ionization of particle tracks in a silicon detector . the true value for each layer is randomly drawn from , while the measurement is taken to be the central value of the histogram bin that the true value falls into .the digitization error is the standard deviation of the difference between the average of the 4 measurements and the true average .the digitization error vs. cluster charge entropy , , is shown in fig .[ fig : clusterq ] .as previously mentioned , we want to calculate the entropy for a meaningful charge measurement , and the most precise possible meaningful measurement is limited by noise . a horizontal line in fig . [ fig : clusterq ] is included to represent the noise level .obviously the noise level will vary from system to system , so this value is an example .we drew the line at 0.0125 which means s / n=80 for the average of 4 measurements ( 4 layers with cluster s / n=40 per layer ) .this shows that for this particular noise level the cluster charge entropy is bits per cluster .[ fig : clusterq ] also includes two additional curves to show the bits per cluster for an uncompressed linear adc scale for the two dynamic ranges considered : 20 units and 40 units ( the dynamic range has no effect on the calculated entropy ) .we also explicitly compressed the adc values using huffman codes , and the average number of bits after compression differed from the calculated entropy by at most 5% for both dynamic ranges ( would not able to see as a separate line if plotted on the figure ) .in addition to cluster charge , we must also consider the charge of individual pixels in a cluster .the single pixel charge distribution does not have a universal landau form . as we already have analyzed the cluster charge ,the remaining information is not the absolute charge , but the fraction of the total cluster charge in each pixel .the analysis is particularly simple for a 2-pixel cluster . 
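before turning to the pixel charge fractions, the cluster-charge digitization described above can be sketched numerically. the landau components are approximated here by the moyal form, their locations are placeholder values, the dynamic range of 20 units and the 4-layer averaging follow the text, and the digitization error is quoted relative to the mean cluster charge:

```python
import numpy as np

rng = np.random.default_rng(5)

# moyal approximation to the landau shape; the component locations below are
# placeholder values (the text quotes a mixture mean of 6.5 arbitrary units)
moyal = lambda x: np.exp(-(x + np.exp(-x)) / 2.0)
locs  = np.array([4.0, 5.0, 6.0, 7.0, 8.0])

q    = np.linspace(0.0, 60.0, 60001)           # fine grid for the charge pdf
pdf  = moyal(q[:, None] - locs).mean(axis=1)
pdf /= pdf.sum()                               # discrete fine-grid probabilities
qbar = (q * pdf).sum()                         # mean cluster charge

qmax = 20.0                                    # dynamic range, roughly 3x the mean
for n_bins in (8, 16, 32, 64, 128):
    edges = np.linspace(0.0, qmax, n_bins + 1)
    idx   = np.clip(np.digitize(q, edges) - 1, 0, n_bins - 1)   # overflow -> last bin
    p     = np.bincount(idx, weights=pdf, minlength=n_bins)
    H     = -(p[p > 0] * np.log2(p[p > 0])).sum()               # entropy in bits/cluster

    # digitization error: average four independent layers, measuring each charge
    # as the centre of its bin, and compare with the true four-layer average
    true = rng.choice(q, size=(20000, 4), p=pdf)
    cent = (edges[:-1] + edges[1:]) / 2.0
    meas = cent[np.clip(np.digitize(true, edges) - 1, 0, n_bins - 1)]
    err  = np.std(meas.mean(axis=1) - true.mean(axis=1)) / qbar  # relative to the mean
    print(f"{n_bins:4d} bins: H = {H:5.2f} bits/cluster,  digitization error = {err:.4f}")
```

plotting the error against the entropy and intersecting with the noise level of the detector in question reproduces the construction of the figure discussed above.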
as both pixelsmust have the same charge fraction distribution , it follows that must be symmetric , where the fraction ranges between 0 and 1 .furthermore , in the ideal case that the charge splitting is due to the ionizing particle that created the cluster crossing the boundary between the pixels ( as opposed to a systematic effect like electronic crosstalk ) , then there is no favored splitting and the ratio distribution is simply .thus , f is digitized in equal size bins with equal probability in each bin and the entropy is simply /2 per pixel( is divided by 2 because only the charge fraction in one pixel need be specified ) .the smallest useful bin size should be simply given by the single pixel noise over the single pixel average signal .thus , for 2-pixel cluster s / n=40 , the signal to noise on the single pixel fraction will be approximately 20 .hence , = per pixel .we will take this as a general estimate on average .clearly for individual clusters will vary : for 1-pixel clusters , and for more than 2-pixel clusters .we can now combine the above estimates to obtain a value for of eq .[ hhits ] , which we write as an entropy per cluster . from fig .[ fig : hapera ] we see that . from fig .[ fig : shape - entropy ] we can estimate that . from fig .[ fig : clusterq ] .also .all these estimates are independent of fine details about the detector in question other than signal / noise .one parameter needed is the average number of hit pixels per cluster , due to . taking a typical value of yields , as a concrete example we can compare this to the number of bits per cluster used by the fe - i4 chip of the atlas experiment , excluding format and context information , for an average cluster size of 2 . for a ( ) pixel chip, the number of bits for a 1-pixel cluster used by the fe - i4 encoding is 24 ( 26 ) . for a 2-pixel clusterit can be 24 ( 26 ) or 48 ( 52 ) , depending on the shape , and for a 3-pixel cluster 48 ( 52 ) or 72 ( 78 ) .the fe - i4 encoding includes 4 bits of uncompressed adc value per pixel , which is not sufficient for a s / n=40 cluster charge measurement even with limited dynamic range ( fig .[ fig : clusterq ] ) .the average number of bits depends on the cluster distributions . using atlas experiment inner layer distributions yields an average of 35 ( 37 ) bits per cluster .we have shown that it is possible to estimate the entropy of pixel detector hit data in the occupancy range of interest to particle physics , without knowing all the details about a specific detector .approximate knowledge about the occupancy , signal to noise , and cluster size distributions is sufficient .this is useful to understand how much room for improvement there is when developing a new detector readout . in particular , new pixel detectors for the high luminosity lhc will have very high data volume , requiring efficient encoding of the information to be transmitted .a comparison to the readout encoding used by the atlas fe - i4 shows that , even with reduced signal to noise capability , the fe - i4 encoding has of order 40% total readout bandwidth to be gained from data compression . with smaller pixels and therefore broader cluster size distributions , the gains from applying on - chip data compression will likely be even greater in future detectors .this work was supported in part by the office of high energy physics of the u.s .department of energy under contract de - ac02 - 05ch11231 .
|
the average minimum number of bits needed for lossless readout of a pixel detector is calculated , in the regime of interest for particle physics where only a small fraction of pixels have a non - zero value per frame . this permits a systematic comparison of the readout efficiency of different encoding implementations . the calculation is compared to the number of bits used by the fe - i4 pixel readout chip of the atlas experiment . particle tracking detectors ( solid - state detectors ) , data acquisition concepts , electronic detector readout concepts ( solid - state ) , data reduction methods , information theory
|
imagine a large chessboard , such as occasionally found in a park .it is fall , and all the master players have fled the cold a long time ago .you are taking a walk and enjoy the beautiful sunny afternoon , all the colors of the indian summer in the trees and in the falling leaves around you .looking at the chessboard , you see that some squares are already full of leaves , while others are still empty .the pattern of the squares which are covered by leaves seems rather random . as you try to cross the chessboard, you see that there is a way to get from one side of the board to the opposite side by walking on leaf - covered squares only .this is percolation nearly . before continuing and explaining in detailwhat percolation is about , let me outline the content of this paper . in sect .[ sec : percolation ] , i will review some of the most prominent and interesting results on classical percolation .percolation theory is at the heart of many phenomena in statistical physics that are also topics in this book . beyond the exact solutions of percolation in and dimensions , further exact solutions in only rarely exist .thus computational methods , using high - performance computers and algorithms are needed for further progress and in sects .[ sec : color ] [ sec : gradient ] , i explain in detail some of these algorithms. section [ sec : renormalization ] is devoted to the real - space renormalization group ( rg ) approach to percolation .this provides an independent and very suggestive method of analytically computing results for the percolation problem as well as a further numerical algorithm . while many applications of percolation theory are mainly concerned with problems of classical statistical physics , i will show in sect.[sec : qhe ] that the percolation approach can give useful information also at the quantum scale .in particular , i will show that aspects of the quantum hall ( qh ) effect can be understood by a suitably generalized renormalization procedure of bond percolation in .this application allows the computation of critical exponents and conductance distributions at the qh transition and also opens the way for studies of scale - invariant , experimentally relevant macroscopic inhomogeneities .i summarize in sect .[ sec : concl ] .from the chessboard example given above , we realize that the percolation problem deals with the spatial _ connectivity _ of occupied squares instead of simply counting whether the number of such squares has the majority of all squares .then the obvious question to ask is : how many leaves are usually needed in order to allow passage across the board ? since leaves normally do not interact with each other , and friction - related forces can be assumed small compared to wind forces , we can model the situation by assuming that the leaves are _ randomly _ distributed on the board. then we can define an occupation probability as being the probability that a site is occupied ( by at least one leaf ) .thus our question can be rephrased in modern physics terminology as : is there a threshold value at which there is a spanning cluster of occupied sites across an infinite lattice ?the first time this question was asked and the term _ percolation _ used was in the year 1957 in publications of broadbent and hammersley . 
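the chessboard question can be put directly on a computer : a small flood - fill sketch estimating the probability that randomly occupied squares span a finite square lattice from left to right ( the lattice size and sample counts are arbitrary choices ) :

```python
import numpy as np

rng = np.random.default_rng(1)

def spans(occupied):
    """True if occupied sites connect the left and right edges (4-neighbour
    connectivity), found with a simple flood fill from the left column."""
    L = occupied.shape[0]
    seen = np.zeros_like(occupied, dtype=bool)
    stack = [(i, 0) for i in range(L) if occupied[i, 0]]
    for s in stack:
        seen[s] = True
    while stack:
        i, j = stack.pop()
        if j == L - 1:                      # reached the opposite edge
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < L and 0 <= b < L and occupied[a, b] and not seen[a, b]:
                seen[a, b] = True
                stack.append((a, b))
    return False

L, trials = 64, 200
for p in (0.55, 0.57, 0.59, 0.61, 0.63):
    hits = sum(spans(rng.random((L, L)) < p) for _ in range(trials))
    print(f"p = {p:.2f}: spanning probability ~ {hits / trials:.2f}")
# the crossing probability rises steeply near p_c ~ 0.5927 for site
# percolation on the square lattice
```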
since then a multitude of research articles , reviews and books have appeared on this subject . certainly among the most readable such publications is the 1995 book by stauffer and aharony , where also most of the relevant research articles have been cited . let me here briefly summarize some of the highlights that have been discovered in the nearly 50 years of research on percolation . the percolation problem in one dimension can be solved exactly . since the average number of empty sites in a chain of length l is ( 1 - p ) l , there is always a finite probability for finding such an empty site in the infinite cluster for any p < 1 and thus the percolation threshold is p_c = 1 . defining a correlation function g ( r ) which measures the probability that a site at distance r from an occupied site belongs to the same cluster , we easily find g ( r ) = p^r and thus a correlation length xi = -1 / ln p . thus close to the percolation threshold the correlation length diverges as ( p_c - p )^{-nu} with an exponent nu = 1 . in two dimensions , the percolation problem provides perhaps the simplest example of a second - order phase transition . the order parameter of this transition is the probability p_inf that an arbitrary site in the infinite lattice is part of an infinite cluster , i.e. , p_inf ~ ( p - p_c )^beta for p > p_c , and beta is a critical exponent similar to the exponent nu of the correlation length . the distribution of the sites in an infinite cluster at the percolation threshold can be described as a fractal , i.e. , the average number of cluster sites in boxes of linear size l increases as l^{d_f} , where d_f is the fractal dimension of the cluster . as in any second - order phase transition , much insight can be gained by a finite - size scaling analysis . in particular , the exponents introduced above are related according to the scaling relation d_f = d - beta / nu . furthermore , it has been shown to an astonishing degree of accuracy that the values of the exponents and the relations between them are independent of the type of lattice considered , i.e. , square , triangular , honeycomb , etc . , and also whether the percolation problem is defined for sites or bonds ( see fig . [ fig : perco - site - bond ] ) . this independence is called _ universality_. in the following , we will see that the universality does not apply for the percolation threshold itself . thus it is of importance to note that for site percolation on the triangular lattice and bond percolation on the square lattice the threshold is known exactly : p_c = 1/2 . especially the bond percolation problem has received much attention also by mathematicians . [ figure [ fig : perco - site - bond ] : site and bond percolation clusters in each panel ; the solid outline indicates the percolating cluster . ] for higher dimensions , much of this picture remains unchanged , although the values of p_c _ and _ the critical exponents change . the upper critical dimension is d = 6 , such that mean field theory is valid for d >= 6 with exponents as given in table [ tab : critexp ] . [ table [ tab : critexp ] : critical exponents nu and beta and fractal dimension d_f for different spatial dimensions ; for a more complete list see the literature . ] as explained for the bond percolation problem we now apply the rg method to the cc model . the rg structure which builds the new super - saddle points is displayed in fig . [ fig : rg - struct ] . it consists of saddle points drawn as bonds . the links ( and phase factors ) connecting the saddle points are indicated by arrows pointing in the direction of the electron motion due to the magnetic field . each saddle point acts as a scatterer connecting the incoming with the outgoing channels , with reflection coefficients r_i and transmission coefficients t_i , which are assumed to be real numbers . the complex phase factors enter later via the links between the saddle points .
by this definition , including the minus sign , the unitarity constraint is fulfilled a priori . the probability of transmission of the incoming electron to another equipotential line and the probability of reflection , and thus of staying on the same equipotential line , add up to unity , t^2 + r^2 = 1 : electrons do not get lost . in order to obtain the scattering equation of the super - saddle point we now need to connect the scattering equations according to fig . [ fig : rg - struct ] . for each link the amplitude of the incoming channels is defined by the amplitude of the outgoing channel of the previous saddle point multiplied by the corresponding complex phase factor . this results in a system of matrix equations which has to be solved . one obtains an rg equation for the transmission coefficient t ' of the super - saddle point analogously to eq . ( [ eq : bond - rg ] ) , depending on the transmission and reflection coefficients t_i , r_i of the saddle points i and on the random phases accumulated along equipotentials in the original lattice . for further algebraic simplification one can apply a useful transformation of the amplitudes to heights relative to the heights of the saddle points . the conductance g is connected to the transmission coefficient by g = t^2 ( in units of e^2 / h ) . for the numerical determination of the conductance distribution , we first choose an initial probability distribution p ( t ) of transmission coefficients . the distribution is discretized in a large number of bins of small width on the interval [ 0 , 1 ] . by this method a large number of super - transmission coefficients t ' are calculated and their distribution is stored . next , the distribution is averaged using a savitzky - golay smoothing filter in order to decrease statistical fluctuations . this process is then repeated using the smoothed distribution as the new initial distribution . [ figure [ fig : rg - conductance ] : fixed - point conductance distribution , fitted by three gaussians ; the inset shows moments of the fp distribution , dashed lines indicating predictions based on extrapolations of low - order results and the dotted line the moments of a constant distribution . ] the iteration process is stopped when the distribution is no longer distinguishable from its predecessor and we have reached the desired fixed - point ( fp ) distribution . however , due to numerical instabilities , small deviations from symmetry add up such that after many iterations the distributions become unstable and converge towards the classical fps of no transmission or complete transmission , similar to the classical percolation case . figure [ fig : rg - conductance ] shows this behavior for one of the rg iterations . the fp distribution shows a flat minimum around g = 0.5 and sharp peaks at g = 0 and g = 1 . it is symmetric , with mean g = 0.5 . this is in agreement with previous theoretical and experimental results , whereas our results contain much less statistical fluctuation . furthermore we determine moments of the fp distribution . as shown in fig . [ fig : rg - conductance ] , for the low - order moments our results agree with the work of wang et al . , who computed such moments previously . but more interesting is the fact that the obtained moments of the fp distribution can hardly be distinguished from the moments of a simple constant distribution , thus indicating the influence of the broad flat minimum of the fp distribution around g = 0.5 . for the determination of the critical exponent nu , we next perturb the fp distribution slightly , i.e. , we construct a distribution with a shifted average . then we perform an rg iteration and compute the new average .
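the perturb - and - iterate logic is easiest to see in the classical bond - percolation rg mentioned above , where the recursion for the standard 5 - bond cell on the square lattice ( rescaling factor b = 2 ) is known in closed form ; the following sketch is that percolation analogue , not the cc - model calculation itself :

```python
import numpy as np

def R(p):
    """RG recursion for the standard 5-bond ("Wheatstone bridge") cell of 2d
    bond percolation with b = 2: probability that the renormalized bond connects."""
    q = 1.0 - p
    return p**5 + 5*p**4*q + 8*p**3*q**2 + 2*p**2*q**3

# locate the nontrivial fixed point R(p*) = p* by bisection
lo, hi = 0.01, 0.99
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if R(mid) < mid else (lo, mid)
p_star = 0.5 * (lo + hi)

# critical exponent from the slope at the fixed point: nu = ln b / ln R'(p*)
eps = 1e-6
slope = (R(p_star + eps) - R(p_star - eps)) / (2 * eps)
nu = np.log(2.0) / np.log(slope)
print(f"fixed point p* = {p_star:.4f} (exact bond threshold: 0.5)")
print(f"slope R'(p*) = {slope:.4f}, nu = {nu:.3f} (exact 2d value: 4/3)")

# the perturb-and-track version used for the quantum Hall case: shift p slightly
# away from p*, iterate, and read off the growth of the shift per RG step
for delta0 in (1e-3, 2e-3):
    p = p_star + delta0
    for step in range(1, 4):
        p = R(p)
        print(f"  delta0 = {delta0:.0e}, step {step}: shift grows by "
              f"{(p - p_star) / delta0:7.2f} ~ slope**step = {slope**step:7.2f}")
```

for the cc model the single number p is replaced by the whole distribution p ( t ) and the shift of its average plays the role of the perturbation , but the exponent is extracted from a slope in exactly the same way .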
tracing the shift of the perturbed average for several initial shifts , we expect to find a linear dependence of the output shift on the input shift for each iteration step . the critical exponent nu is then related to the slope of this linear dependence . figure [ fig : rg - nu - l ] shows the resulting estimate of nu in dependence on the iteration step and thus on the system size . the curve converges close to the value obtained by lee et al . note that the `` system size '' is more properly called a system magnification , since we start the rg iteration with an fp distribution valid for an infinite system and then magnify the system in the course of the iteration by a constant factor per step . [ figure [ fig : rg - nu - l ] : estimate of nu as a function of the magnification factor for each rg step ; the dashed line shows the expected result . inset : the shift of the perturbed average is linear in the initial shift , the dashed lines indicating linear fits to the data . ] the percolation model represents perhaps the simplest example of a system exhibiting complex behavior although its constituents , the sites and bonds , are chosen completely at random and uncorrelated . of course , the complexity enters through the connectivity requirement for percolating clusters . i have reviewed several numerical algorithms for quantitatively measuring various aspects of the percolation problem . the specific choice reflects purely my personal preferences and i am happy to note that other algorithms such as breadth - and depth - first algorithms have been introduced by p. grassberger in his contribution . the real - space rg provides an instructive use of the underlying self - similarity of the percolation model at the transition . furthermore , it can be used to study very large effective system sizes . this is needed in many applications . as an example , i briefly reviewed and studied the qh transition and computed conductance distributions , moments and the critical exponent . these results can be compared to experimental measurements and shown to be in quite good agreement . the author thanks phillip cain , ralf hambach , mikhail e. raikh , and andreas rösler for many helpful discussions . this work was supported by the nsf - daad collaborative research grant int-9815194 , the dfg within sfb 393 and the dfg - schwerpunktprogramm `` quanten - hall - systeme '' . t. vojta , ( this volume ) . b. kramer , ( this volume ) . u. grimm , ( this volume ) . k. schenk , b. drossel , f. schwabl , ( this volume ) . j. voit : _ the statistical mechanics of capital markets _ ( springer , heidelberg 2001 ) . alas , this intuitively convincing argument is not strictly true : the percolation frontier is a fractal and as such scales nontrivially with the system size . on the other hand , it is not the random number generation for sites in the hoshen - kopelman algorithm but rather the numerical determination of the percolating clusters which is numerically challenging .
|
in this article , i give a pedagogical introduction and overview of percolation theory . special emphasis will be put on the review of some of the most prominent of the algorithms that have been devised to study percolation numerically . at the central stage shall be the real - space renormalization group treatment of the percolation problem . as a rather novel application of this approach to percolation , i will review recent results using similar real - space renormalization ideas that have been applied to the quantum hall transition .
|
federal law affects millions of people .however , only 4% of introduced bills become law . determining the probability of enactment across the thousands of bills under consideration , some over 1,000 pages long , would allow ordinary citizens and other affected parties to focus on legislation that is likely to matter .this research is the most comprehensive analysis of law - making forecasting to date .we built a model with consistently high predictive performance on 68,863 bills over 14 years and analyzed it to determine which factors increase or decrease the probability of success .our approach to prediction and analysis can be applied to any process where text and context affect a categorical outcome .the u.s . legislative branch creates laws that impact the lives of millions of citizens . for example , the patient protection and affordable care act ( aca ) significantly affected the health care industry and individuals health insurance coverage .bills often consist of hundreds of pages of dense legal language .in fact , the aca is more than 900 pages long .there are thousands of bills under consideration at any given time and only about 4% will become law . furthermore , the number of bills introduced is trending upward ( see si ) , exacerbating the problem of determining what text is relevant .given the complexity , length , and vast quantity of bills , a machine learning approach that leverages bill text is well - suited to forecast bill success and identify the important predictive factors . despite rapid advancement of machine learning methods , it s difficult to outperform naive forecasts of rare events because of inherent variability in complex social processes ( 1 ) and because relationships learned from historical data can change without warning and invalidate models applied to future circumstances .due to the complexity of law - making and the aleatory uncertainty in the underlying social systems , we model enactment probabilistically .it s important to make _ probabilistic _predictions for high consequence events because even small changes in probabilities for events with extreme implications , e.g. the passage of the 2009 stimulus bill which cost $ 831 billion , can have large expected values .probabilities provide much more information than a simple `` enact '' or `` not enact '' prediction .model performance metrics that do nt use probabilities , such as accuracy , are not suitable measures of rare event predictive ability . for instance , a blunt `` never enact '' model has a seemingly impressive 96% accuracy rate on this data but incorrectly classifies all the enacted bills with incalculable effects on society .forecasting model performance should be estimated using multiple metrics on large amounts of test data measured _ after _ the data that was used to train the model .we trained models on congresses prior to the congress predicted , which simulated real - time deployment across 14 years and 68,863 bills .starting with the 107th congress ( 20012003 ) , models were sequentially trained on data from _ previous _ congresses and tested on all bills in the _ current _ congress .this was repeated until the most recently completed congress the 113th ( 20132015 ) served as the test . 
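a schematic of this rolling evaluation , including the chamber - specific null baseline and the probability - based metrics described below ; the column names , the single gradient - boosting learner and the congress range are placeholder assumptions rather than the paper 's full pipeline :

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import log_loss, brier_score_loss, roc_auc_score

# bills: one row per bill with (hypothetical) columns 'congress',
# 'house' (1 = House bill), 'enacted' (0/1), plus numeric feature columns
def rolling_evaluation(bills: pd.DataFrame, feature_cols, first_test=107, last_test=113):
    rows = []
    for test_congress in range(first_test, last_test + 1):
        train = bills[bills.congress < test_congress]
        test = bills[bills.congress == test_congress]

        # null baseline: chamber-specific enactment rate over all previous congresses
        rate = train.groupby("house")["enacted"].mean()
        p_null = test["house"].map(rate).to_numpy()

        # stand-in for the full ensemble: a single context-only learner
        model = GradientBoostingClassifier().fit(train[feature_cols], train["enacted"])
        p_model = model.predict_proba(test[feature_cols])[:, 1]

        y = test["enacted"].to_numpy()
        for name, p in (("null", p_null), ("model", p_model)):
            rows.append({"congress": test_congress, "model": name,
                         "log_loss": log_loss(y, p, labels=[0, 1]),
                         "brier": brier_score_loss(y, p),
                         "auc": roc_auc_score(y, p)})
    return pd.DataFrame(rows)
```

because the model for each test congress only ever sees earlier congresses , the resulting scores simulate real - time deployment rather than in - sample fit .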
to estimate performance , we compared a baseline model to our models across three performance measures that leverage predicted probabilities .although previous research found that bill text was useful for predicting whether bills will survive committee ( 2 ) and for predicting roll call votes ( 3 , 4 ) , these authors tested their models on much less data than we do and predicted more frequently observed events : getting out of committee is more common than being enacted and bills up for vote are a small subset of all bills introduced .it s not clear whether utilizing text models trained on previous congresses will improve predictions of enactment of bills introduced in future congresses beyond the predictive power of sponsorship , committee and other non - textual data .text is noisy and completely different topics can be found within the same bill ( 5 ) .however , we hypothesized that there are unique semantic and syntactic signatures that differentiate successful bills .our second hypothesis was concerned with the changes to bills over their lives .some bills , e.g. the aca , are only a few pages when introduced but are hundreds of pages when enacted . however , 87% of bill texts do nt change after being introduced because they do nt progress further in the law - making process . we hypothesized that using the most recently available version of bill text and metadata would lead to stronger predictive performance for text and context models . to test these hypotheses , we designed an experiment across two primary dimensions : _ data type _ ( text - only , text and context , or context - only ) and _ time _ ( using oldest or newest bill data ) . analyzing a model that makes successful ex ante predictionscan be more informative than ex - post interpretations of socio - political events ( outside experiment - like settings ) due to the over - fitting that plagues most modeling of observational data ( 6 ) .however , because highly predictive models are often designed with only predictive power in mind , they rarely provide clear insights into relationships between predictor variables and the predicted outcome .when estimates of these relationships are provided for non - linear models , they are almost always measures of only magnitudes of the effects of predictor variables and not also the directions of the effects .our work is not limited to raw predictive power .we estimate the _ direction and magnitude _ of the effect of each predictor variable in the model on the predicted probability of enactment .furthermore , the text model reveals which words are more associated with enactment success .five models are compared across the two time conditions ._ w2v _ is the scoring of full bill text with an inversion of word2vec - learned language representations ( 79 ) .we take this approach to textual prediction because it provides the capacity to conduct a semantic similarity text analysis across enacted and failed categories and can predict which sentences of a bill contribute most to enactment ._ w2vtitle _ is title - only scoring with the same method ._ glm _ is a regularized non - negative generalized linear model ( glm ) meta - learner over an ensemble of a regularized glm , a gradient boosted machine and a random forest , which each use only the contextual variables ( see data section ) ._ w2vglm _ is the same as _ glm _ except the _ w2v _ and _ w2vtitle _ predictions are added as two more predictor variables for the three base learners .these are compared to a baseline , _ null _ , that uses 
the proportion of bills enacted in the same chamber as the predicted bill across all previous congresses as the predicted probability .for instance , the proportion of bills enacted in the senate from the 103rd to the 110th congress was 0.04 and so this is the _null _ predicted probability of enactment of a senate bill in the 111th congress .it s important to use chamber - specific rates to improve _ null _ performance because bills originating in the house have a higher enactment rate .using only text outperforms using only context on two of three performance measures ( auc and brier ) for the newest data , while using only context outperforms only text on three measures for the oldest data ( fig .1 ) . using text and context together , _ w2vglm _ , outperforms all competitors on all measures for newest and oldest data ( table 1 ) . when predicting enactment with the newest bill text and the updated number of cosponsors , text length and session , both models improved but _w2vglm _ and _ w2v _ improved dramatically ._ w2vglm _ has the highest auc , _w2v _ has the second highest for predictions with new data and _ glm _ has the second highest for predictions with old data ( see si for roc curve plots for senate and house subsets of the data for reasons discussed in si , models perform better with house bills ) .[ cols="^,^,^,^",options="header " , ] * table 2 . * predicted probabilities of enactment for key bills .probabilities increased between old and new forecasts for the two enacted bills , and the mean of the probabilities for the failed bills decreased . now that we have a model validated on thousands of predictions , we analyze it to understand law - making . with our language models ,we create `` synthetic summaries '' of hypothetical bills by providing a set of words that capture any topic of interest . comparing these synthetic summaries across chamber andacross enacted and failed categories uncovers textual patterns of how bill content is associated with enactment .the title summaries are derived from investigating similarities within _ w2vtitle _ and the body summaries are derived from similarities within _ w2v_. distributed representations of the words in the bills capture their meaning in a way that allows semantically similar words to be discovered .although bills may not have been devoted to the topic of interest within any of the four training data sub - corpora , these synthetic summaries can still yield useful results because the queried words have been embedded within the semantically structured vector space along with all vocabulary in the training bills .this is important for a topic , such as climate change , with little or no relevant enacted legislation . to demonstrate the power of our approach , we investigated the words that best summarize `` climate change emissions '' , `` health insurance poverty '' , and `` technology patent '' topics for enacted and failed bills in both the house and senate ( fig .`` impacts , '' `` impact , '' and `` effects '' are in house enacted while `` warming , '' `` global , '' and `` temperature '' are in house failed , suggesting that , for the house climate change topic , highlighting potential future impacts is associated with enactment while emphasizing increasing global temperatures is associated with failure . in both chambers , `` efficiencies '' is in enacted and `` variability '' is in failed . in the senate , `` anthropogenic '' ( human - induced ) and `` sequestration '' ( removing greenhouse gases ) are in failed . 
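a sketch of how such synthetic summaries can be generated with gensim , assuming four word2vec models trained on the house / senate by enacted / failed sub - corpora ; the hyper - parameters , topic words and model handles are illustrative placeholders , not the paper 's tuned values :

```python
from gensim.models import Word2Vec

def train_lm(token_sentences):
    """Train one language model on a sub-corpus (a list of token lists).
    Hyper-parameters here are illustrative, not the paper's tuned values."""
    return Word2Vec(token_sentences, vector_size=100, window=5, min_count=5,
                    sg=0, hs=1, negative=0, workers=4, epochs=5)

def synthetic_summary(model, topic_words, topn=10, restrict=20_000):
    """Vocabulary words most similar to the mean of the topic-word vectors,
    restricted to the most frequent words and excluding the query words."""
    present = [w for w in topic_words if w in model.wv.key_to_index]
    hits = model.wv.most_similar(positive=present, topn=topn + len(present),
                                 restrict_vocab=restrict)
    return [w for w, _ in hits if w not in set(present)][:topn]

# usage sketch: models[("house", "enacted")] etc. trained on the four sub-corpora
# topic = ["climate", "change", "emissions"]
# enacted_words = set(synthetic_summary(models[("house", "enacted")], topic))
# failed_words  = set(synthetic_summary(models[("house", "failed")], topic))
# words unique to each outcome, as reported in the figure:
# print(enacted_words - failed_words, failed_words - enacted_words)
```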
for the health insurance poverty topic , `` medicaid '' and `` reinsurance '' are in both house and senate failed .the senate has words related to more specific health topics , e.g. `` immunization '' for failed and `` psychiatric '' for enacted . for the patent topic ,both chambers have a word related to water ( `` fish '' and `` marine '' ) in the failed titles and `` geospatial '' in the failed bodies .given recent legal developments regarding patenting software , it s notable that `` software '' and `` computational '' are in failed for the house and senate , respectively .* synthetic summary bills for three topics across enacted and failed and house and senate categories .our language model provides sentence - level predictions for an overall bill and thus predicts what sections of a bill may be the most important for increasing or decreasing the probability of enactment .4 compares patterns of predicted sentence probabilities as they evolve from the beginning to the end of bills across four categories : enacted and failed and newest and oldest texts . in the newest texts of enacted bills ,there is much more variation in predicted probabilities within bills .we also plot the aca with 5,000 evenly spaced sentence probabilities connected by a line , where a pattern similar to the summary of all newest data enacted bills is evident .* sentence probabilities across bills for oldest data ( * a. * ) , newest data ( * b. * ) and the aca . for each bill, we convert the variable length vectors of predicted sentence probabilities to _ n_-length vectors by sampling _n _ evenly - spaced points from each bill . we set _n=10 _ because almost every bill is at least 10 sentences long. then we loess - smooth the resulting points across all bills to summarize the difference between enacted and failed and newest and oldest texts .far right plot is a line connecting _n=5,000 _ aca probabilities .we conducted a partial rank correlation coefficient sensitivity analysis to estimate the effect of each predictor variable on the predicted probability of enactment .these are not bivariate correlations between variables and the predicted probabilities ( bivariate relationships are plotted in si ) , rather , they are estimates of correlation _ after controlling for _ the effect of all other predictor variables , e.g. the effect of a bill being introduced in the house is negative after controlling for the other effects in the model ( fig . 5b . ) but bills introduced in the house are enacted at a 0.043 rate while senate bills are enacted at a 0.025 rate . if we stopped with the simple descriptive statistic we could have incorrectly concluded that introducing a bill in the house will increase its odds , all else equal .the two subjects with the largest negative effects are foreign trade and international finance , and taxation ( fig . 5a . ) .some bills fail because their content is integrated into other bills and this is especially true for tax - related bills ( 2 ) . with the oldest data model , increasing bill length decreases enactment probability but with the newest data the opposite relationship holds ( fig . 5b . 
) .we repeated the sensitivity analysis on the model where no text predictions are included ( _ glm _ , see si ) , and found that , under both time conditions , when we do nt control for the probability of the text ( _ glm _ ) by including our language model predictions , longer texts are more negative than when we control for the text ( _ w2vglm _ ) , and that this difference is much larger for the newest data .this suggests that the better we capture the probability of the text and control for its effects , the better we isolate estimates of non - textual effects . if the bill sponsor s partyis the majority party of their chamber , the probability of the bill is much higher , especially with the oldest data where the model relies on this as a key signal of success . increasingthe number of terms the sponsor has served in congress also has a positive effect .the predictive model learned interactions as well : the number of co - sponsors has a stronger positive effect in the senate for the newest data and in the house for the oldest data . if the bill text scored by the language model is in the second session of the congress , for the newest data model, this can serve as a signal that a bill is being updated and thus it has a higher chance of enactment . for the oldest data , this means the bill was introduced in the second session , which is not particularly indicative of success _ or _ failure : the proportion of _ failed _ bills in the second session for the oldest data is 0.34 and the proportion of _ enacted _ is 0.31 . on the other hand , for the newest data , the proportion of failed bills in the second session is 0.37 while the proportion of enacted is 0.68 .* partial rank correlation coefficient estimates between predictor variables and predicted probabilities .all variables used in _w2vglm _ are included in the analysis .see data section for variable descriptions .bars represent 95% confidence intervals .* a. * plots effects of top subjects . social sciences andhistory is used as the reference subject so no effect is estimated for that factor level . *b. * plots effects of all other variables other than subject .january and north central are the reference levels for the month and region factors .see si for same analysis of _this is the most comprehensive analysis of law - making forecasting to date .we compared five models across three performance measures and two data conditions on 68,863 bills over 14 years .we created a model with consistently high predictive performance that effectively integrates heterogeneous data .a model using only bill text outperforms a model using only bill context for newest data , while context - only outperforms text - only for oldest data . 
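returning briefly to the sentence - level profiles of fig . 4 : the n = 10 evenly spaced sampling of sentence probabilities reduces to a few lines of numpy ; in this sketch the loess smoothing is replaced by a plain average across bills and the beta - distributed toy data is purely illustrative :

```python
import numpy as np

def profile(sentence_probs, n=10):
    """Sample n evenly spaced sentence probabilities from one bill."""
    probs = np.asarray(sentence_probs, dtype=float)
    idx = np.linspace(0, len(probs) - 1, n).round().astype(int)
    return probs[idx]

def mean_profile(bills, n=10):
    """Average profile over a collection of bills (each a list of sentence probabilities)."""
    return np.mean([profile(b, n) for b in bills if len(b) >= n], axis=0)

# usage sketch with made-up data: enacted bills tend to show more internal variation
# rng = np.random.default_rng(0)
# enacted = [rng.beta(2, 5, size=rng.integers(10, 300)) for _ in range(1000)]
# failed  = [rng.beta(1, 9, size=rng.integers(10, 300)) for _ in range(1000)]
# print(mean_profile(enacted))
# print(mean_profile(failed))
```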
in all conditions textconsistently adds predictive power after controlling for non - textual variables .in addition to accurate predictions , we are able to improve our understanding of bill content by using a text model designed to explore differences across chamber and enactment status for important topics .our textual analysis serves as an exploratory tool for investigating subtle distinctions across categories that were previously impossible to investigate at this scale .the same analysis can be applied to any words in the large legislative vocabulary .the global sensitivity analysis of the full model provides insights into the factors affecting predicted probabilities of enactment .for instance , when predicting bills as they are first introduced , the text of the bill and the proportion of the chamber in the bill sponsor s party have similarly strong positive effects .the full text of the bill is by far the most important predictor when using the most up - to - date data .the oldest data model relies more on title predictions than the newest data model , which makes sense given that titles rarely change after bill introduction . comparing effects across time conditions and across models not including text suggests that controlling for accurate estimates of the text probability is important for estimating the effects of non - textual variables .although the effect estimates are not causal and estimates on predictors correlated with each other may be biased , they represent our best estimates of predictive relationships within a model with the strongest predictive performance and are thus useful for understanding law - making .this methodology can be applied to analyze any predictive model by treating it as a `` black - box '' data - generating process , therefore predictive power of a model can be optimized and subsequent analysis can uncover interpretable relationships between predictors and output .our work provides guidance on effectively combining text and context for prediction _ and analysis _ of complex systems with highly imbalanced outcomes that are related to textual data .our system for determining the probability of enactment across the thousands of bills currently under consideration ( http://predictgov.com/projects/congress/[predictgov.com/projects/congress ] ) focuses effort on legislation that is likely to matter , allowing the public to identify policy signal amid political and procedural noise .the textual predictions are powered by learning distributed representations of words for separate corpora of enacted and failed bills .new text is then evaluated by these separate language representations to predict whether text has higher probability under the model for successful or failed bills .the non - textual information is leveraged for prediction with a stacking procedure .continuous - space vector representations of words can capture subtle semantics across the dimensions of the vector . to learn these representations ,a neural network model predicts a target word with the mean of the representations of the surrounding words ( e.g. 
vectors for the two words on either side of the target word in fig .the prediction errors are then back - propagated through the network to update the representations in the direction of higher probability of observing the target word ( 9 , 11 ) .after randomly initializing representations and iterating this process over many word pairings , words with similar meanings are eventually located in similar locations in vector space as a by - product of the prediction task , which is called word2vec ( 9 ) . *fig . 6 . ** a. * the neural network - based training algorithm used to obtain word vectors .we used the gensim implementation of word2vec ( 8) .parameters are updated with stochastic gradient descent and we use a binary huffman tree to implement efficient softmax prediction of words . see si for description of hyper - parameters , how they were tuned , and lists of all selected values . *b. * model training and testing process .this process is completed and then we advance one congress .the only pre - processing we applied to text was removal of html , carriage returns , and whitespace , and conversion to lower - case .then inversion of distributed language models was used for classification as described in ( 7 ) .distributed language models mappings from words to obtained by leveraging word co - occurrences were separately fit to the sub - corpora of successful and failed bills by applying word2vec .each sentence of a testing bill was scored with each trained language model and bayes rule was applied to these scores and prior probabilities for bill enactment to obtain posterior probabilities .the proportions of bills enacted in the same chamber as the predicted bill in all _ previous _ congresses were used as the priors .the probabilities of enactment were then averaged across all sentences in a bill to assign an overall probability .trees are decision rules that divide predictor variable space into regions by choosing variables and their threshold values on which to make binary splits ( 12 ). a tree model can learn interactions between predictors , unlike linear models where interactions must be manually specified , and is generally robust to the inclusion of variables unrelated to the outcome .a _ gradient boosted machine _ ( gbm ) improves an ensemble of weaker base models , often trees , by sequentially adjusting the training data based on the residuals of the preceding models ( 13 ) .a _ random forest _ randomly samples observations from training data and grows a tree on each sample , forcing each tree to consider randomly selected sets of predictor variables at each split to reduce correlation between trees ( 14 ) .gbms and random forests can both learn non - linear functions but have different strengths : in general , random forests are more robust to outliers while gbms can more effectively learn complex functions .random forests and gbms are trained , sampling from their hyper - parameter distributions and recording the cross - validated log loss estimated with each hyper - parameter configuration ( see si ) .configurations with the lowest log loss are used to estimate models for ensemble stacking . a regularized logistic regression ( elastic - net ) with default hyper - parameters ( = 0.5 and = 1e-05 ) is also estimated to gain a complementary linear perspective ( 15 ) . 
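stepping back to the inversion step described at the start of this section , a sketch of the per - sentence scoring and bayes rule ; it assumes gensim word2vec models trained with hierarchical softmax and no negative sampling ( gensim 's score ( ) requires hs=1 , negative=0 ) , and the prior and names are illustrative :

```python
import numpy as np

def bill_probability(bill_sentences, lm_enacted, lm_failed, prior_enacted):
    """bill_sentences: list of token lists for one bill.
    lm_enacted / lm_failed: gensim Word2Vec models (hs=1, negative=0) trained
    on the enacted / failed sub-corpora of previous congresses."""
    n = len(bill_sentences)
    # log p(sentence | class) under each class-conditional language model
    ll_e = np.array(lm_enacted.score(bill_sentences, total_sentences=n))
    ll_f = np.array(lm_failed.score(bill_sentences, total_sentences=n))
    # Bayes rule per sentence, with the historical enactment rate as the prior
    log_post_e = ll_e + np.log(prior_enacted)
    log_post_f = ll_f + np.log(1.0 - prior_enacted)
    m = np.maximum(log_post_e, log_post_f)          # for numerical stability
    num = np.exp(log_post_e - m)
    p_sentence = num / (num + np.exp(log_post_f - m))
    # the bill-level prediction is the average over its sentences
    return p_sentence, float(p_sentence.mean())
```

the per - sentence posteriors are what fig . 4 summarizes , and their mean is the w2v prediction fed to the ensemble .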
using the text predictions as features allows the training process to learn interactions between contextual variables and textual probabilities .additionally , the sensitivity analysis can then estimate the impact of text predictions on enactment probabilities along with the contextual predictors , controlling for the effect of the probability of the bill text when estimating non - textual effects .random forests and gbms combine weak learners to create a strong learner .stacking combines strong learners to create a stronger learner .a cross - validation stacking process on the training data is used to learn a combination of the three base models to form a meta - predictor ( 16 , 17 ) .after hyper - parameter tuning , out - of - fold cross - validation predictions are made on the training data with the three base learners .these predictions and the outcome vector are used to train the meta - learner , a regularized logistic regression with non - negative weights .weights are forced to be non - negative because we assume all predictors should positively contribute .this entire learning process is conducted on data from prior congresses .the model is applied to test data by making predictions with base learners and feeding those into the meta - learner ( fig . 6b . ) .we use the two most frequently applied binary classification probability scoring functions : the log score and the brier score ( see si ) . for both , if a model assigns high probability to a failed bill it s penalized more than if it was less confident and if a model assigns high probability to an enacted bill it is rewarded more than if it was nt confident .a receiver operating characteristic curve ( roc ) is built from points that correspond to the true positive rate at varying false positive rate thresholds with the model s predictions sorted by the probability of the positive class ( enacted bill ) ( 18 ) .starting at the origin of the space of true positive rate against false positive rate , the prediction s impact on the rates results in a curve tracing vertically for a correct prediction and horizontally for an incorrect prediction .a perfect area under the roc curve ( auc ) is 1.0 and the worst is 0.5 .auc captures the trade - off between a model s false positive and true positive rate , and thus rewards models for being discriminative throughout the range of probabilities and is more appropriate than accuracy for imbalanced datasets .we train language models with word2vec for enacted house bills , failed house bills , enacted senate bills , and failed senate bills and then investigate the most similar words within each of these four models to word vector combinations representing topics of interest .that is , for each of the four models , return a list of most similar words : , where is one of word vectors of interest , are the most frequent words in the vocabulary of words ( rare words are retained to train the model , but is set to less than to exclude rare words during model analysis ) excluding words corresponding to the query vectors , is _ 1 _ or _-1 _ for whether we are positively or negatively weighting , and . 
for ease of comparison across enacted and failed categories we also remove words the two have in common . we conduct a simulation experiment on our model of the legislative system by varying inputs to the model and measuring the effect on the output . if input values are varied one at a time , while keeping the others at `` default values , '' sensitivities are then conditional on the chosen default values ( 19 ) . there are no sensible default values for the predictor variables . we could instead take a global sampling approach , but when each input variable is independently sampled from its empirical distribution , there will inevitably be input vectors far from the data . therefore , we combine 55,695 empirical observations , and feed them into _ w2vglm _ estimated with congresses 104 - 112 to obtain a vector of predicted probabilities . the empirical data creates a sufficiently large yet realistic set of observations for a global sensitivity analysis . next , we expand the factor variables out so each level is represented in the design matrix as a binary indicator variable . this allows us to estimate the effect of each level of a factor , e.g. the 39 subject categories . we add interaction terms between the chamber and bill characteristics , e.g. whether the bill originated in the senate and the number of characters , to estimate these interaction effects potentially automatically learned by the tree models . finally , we estimate the relationship between the resulting matrix of input values and the vector of predicted probability outputs with a partial rank correlation coefficient ( prcc ) analysis , which estimates the correlation between an input variable and the predicted probability of bill enactment , discounting the effects of the other inputs and allowing for potentially non - linear relationships by rank - transforming the data before model estimation ( 20 , 21 ) . partial correlation controls for the other predictor variables by computing the correlation between the _ residuals _ of regressing the predictor of interest on those other variables and the _ residuals _ of regressing the outcome ( predicted probability of enactment ) on them . the prcc analysis is bootstrapped 1,000 times to obtain 95% confidence intervals . we include all house and senate bills and exclude simple , joint , and concurrent resolutions because simple and concurrent resolutions do not have the force of law and joint resolutions are very rare . we downloaded all data other than committee membership from govtrack.us/developers/data , which is created by scraping thomas.gov . we downloaded committee membership data from web.mit.edu/17.251/www/data_page.html ( 22 , 23 ) . we release our full combined data : all raw and processed data and model predictions . there is often more than one version of the full text for each bill . in order to create a forecasting problem that predicts enactment as soon as possible , the earliest dated full text is used , which is , for more than 99% of the bills in the testing data , the text as it was introduced .
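a minimal numpy / scipy version of the prcc just described : rank - transform the design matrix and the predicted probabilities , regress out the remaining columns , and correlate the residuals , with a bootstrap for confidence intervals ; this is a sketch , not the exact implementation used in the paper :

```python
import numpy as np
from scipy import stats

def prcc(X, y, j):
    """Partial rank correlation between column j of X and y, controlling for
    the other columns. X: (n, k) design matrix, y: (n,) model output."""
    Xr = np.apply_along_axis(stats.rankdata, 0, X)
    yr = stats.rankdata(y)
    others = np.delete(Xr, j, axis=1)
    A = np.column_stack([np.ones(len(yr)), others])
    # residuals of the predictor of interest and of the output, given the rest
    res_x = Xr[:, j] - A @ np.linalg.lstsq(A, Xr[:, j], rcond=None)[0]
    res_y = yr - A @ np.linalg.lstsq(A, yr, rcond=None)[0]
    return stats.pearsonr(res_x, res_y)[0]

def prcc_with_ci(X, y, j, n_boot=1000, seed=0):
    """Bootstrap a 95% confidence interval for the PRCC of column j."""
    rng = np.random.default_rng(seed)
    point = prcc(X, y, j)
    n = len(y)
    boots = [prcc(X[idx], y[idx], j)
             for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return point, lo, hi
```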
to understand how much predictive power newer versions add, we collect the most recent version of each bill , which is , for 87% of the bills in the testing data , the version as introduced .the full text of all introduced bills is only available starting with the 103rd congress ( 19931995 ) and therefore this is the first congress used to train language models .the 104th congress is the first used to train the base models of the ensemble because they require the language model predictions and the language models need the 103rd for training .the 107th congress ( 20012003 ) is the first to serve as a testing congress because the full model needs multiple congresses worth of data for training .we used the list of predictor variables from ( 2 ) as a starting point for designing our feature set .the following variables capture characteristics of a bill s sponsor and committee(s ) : * _ region _ : region corresponding to state the sponsor represents ( 5 levels ) . *_ sponsorpartyprop _ : proportion of chamber in sponsor s party ( min : 0 , median : 0.51 , max : 0.59 ) . *_ sponsorterms _ : number of terms sponsor has served in congress ( only up to congress being predicted to ensure model is only using data that would have been available at that time , min : 1 , median : 6 , max : 30 ) . * _ committeeseniority _ : mean length of time sponsor has been on the committees the bill is assigned to ( min : 0 , median : 0 , max : 51 ) . if not on committee , assigned 0 . *_ committeeposition _ : out of any leadership position of sponsor on any committee bill is assigned to , lowest number on the `` leadership codes '' list in si ( 11 levels , e.g. chairman ) . * _ notmajoncom _ : binary for whether sponsor is ( _ i _ ) _ not _ in majority party and ( _ ii _ ) on first listed committee bill is assigned to . *_ majoncom _ : binary for whether sponsor is ( _ i _ ) in majority party and ( _ ii _ ) on first listed committee bill is assigned to . *_ numcosponsors _ : number of co - sponsors ( for oldest - min : 0 , median : 2 , max : 378 ; for newest - min : 1 , median : 6 , max : 432 ) .the following variables capture political and temporal context of bills : * _ session _ : session ( first or second ) of congress that corresponds to full text date , almost always the date bill was introduced for oldest data ( for oldest - proportion in first session : 0.64 ; for newest - proportion in first session : 0.6 ) . * _ house _ : binary for whether it s a house bill . * _ month _ : month bill is introduced .the following variables capture aspects of bill content and characteristics : * _ subjectstopterm _ : official top subject term ( 36 levels ) . 
*_ textlength _ : number of characters in full text ( for oldest - min : 119 , median : 5,340 , max : 2,668,424 ; for newest - min : 113 , median : 5,454 , max : 3,375,468 ) .martin t , hofman jm , sharma a , anderson a , watts dj ( 2016 ) exploring limits to prediction in complex social systems ._ proceedings of the 25th international conference on world wide web _ , www 16 .pp 683694 .yano t , smith na , wilkerson jd ( 2012 ) textual predictors of bill survival in congressional committees ._ proceedings of the 2012 conference of the north american chapter of the association for computational linguistics : human language technologies _ , naacl hlt 12 .pp 793802 .katz dm , bommarito mj , blackman j ( 2014 ) _ predicting the behavior of the supreme court of the united states : a general approach _available at : http://papers.ssrn.com/abstract=2463244 [ accessed june 22 , 2016 ] .mikolov t , sutskever i , chen k , corrado gs , dean j ( 2013 ) distributed representations of words and phrases and their compositionality . _advances in neural information processing systems 26 _ , pp 31113119 .office cb ( 2012 ) letter to the honorable john boehner providing an estimate for h.r .6079 , the repeal of obamacare act .available at : https://www.cbo.gov/publication/43471 [ accessed april 21 , 2016 ] .
|
out of nearly 70,000 bills introduced in the u.s . congress from 2001 to 2015 , only 2,513 were enacted . we developed a machine learning approach to forecasting the probability that any bill will become law . starting in 2001 with the 107th congress , we trained models on data from _ previous _ congresses , predicted all bills in the _ current _ congress , and repeated until the 113th congress served as the test . for prediction we scored each sentence of a bill with a language model that embeds legislative vocabulary into a semantic - laden vector space . this language representation enables our investigation into which words increase the probability of enactment for any topic . to test the relative importance of text and context , we compared the text model to a context - only model that uses variables such as whether the bill s sponsor is in the majority party . to test the effect of changes to bills after their introduction on our ability to predict their final outcome , we compared using the bill text and meta - data available at the time of introduction with using the most recent data . at the time of introduction context - only predictions outperform text - only , and with the newest data text - only outperforms context - only . combining text and context always performs best . we conducted a global sensitivity analysis on the combined model to determine important factors predicting enactment . * keywords * : forecasting , ensemble modeling , natural language processing , congress , law .
|
[ [ some - history ] ] some history + + + + + + + + + + + + i should say some words about the history of discrete gravity . classical gravity deals with a smooth ( not necessarily four - dimensional ) manifold m , a pseudo - metric tensor g on it and the classical einstein - hilbert action s ( g ) = \int_m r \sqrt{|g|} dx plus a matter term , the various stationary points of which are studied . here r is the intrinsic curvature at the point and the matter term is some functional of matter fields ; in the pure ( no matter ) gravity case this functional is absent , and we consider only this case here . quantum gravity takes into account not only stationary points but also all other configurations with some weights , that is with a formal density exp ( i s ) ( which becomes positive , exp ( - s ) , for euclidean metrics ) on some configuration space of matter fields and metric tensors . all earlier attempts to do this brought the conclusion that the configuration space should also include smooth structures on m and even m itself , that is the space , its topology , should be random . now the only reasonable way to pursue this program is to discretize everything from the beginning and then to perform some scaling limits . that is , the space becomes a finite complex , smooth structure becomes a piecewise linear structure , metrics and curvature are encrypted in one - dimensional and two - dimensional skeletons of the complex , matter fields are spins which live on the cells of the complex . it appears that such quantization ( discretization ) is equally applicable to other physical systems : relativistic particles , strings etc . , but with different interpretations . for example , the quantized ( in such a way ) string consists of a two - dimensional complex ( representing a coordinate system and metrics on the string itself ) and spins , vectors which provide a mapping of the vertices of the complex into d - dimensional euclidean space , thus approximating the classical string . the discretization of the classical gravity was first considered by regge , where he gave definitions of some exact mathematical objects related to classical general relativity : finite discrete space time , its curvature and einstein - hilbert action . it was afterwards included in the fundamental monograph , but in the seventies it was still considered outside of the mainstream of physics and only rare papers were devoted to it . among them however there was a well known paper by s. hawking where the applications to quantum gravity were discussed . in the eighties there are already more than 100 papers concerning discrete quantum gravity . in the nineties the number of papers is more than 1000 and still grows at the moment . mainly this is due to the emergence of algebraic formal techniques to deal with such problems . these formal techniques follow physical insights on relations of quantum gravity with string theory , random matrix models etc . moreover , recent papers in theoretical physics often contain statements such as the following : two - dimensional random geometry is now placed at the heart of many models of modern physics , from string theory and two - dimensional quantum gravity , attempting to describe fundamental interactions , to membranes and interface fluctuations in various problems of condensed matter physics , see the references . for a probabilist the quantum gravity is a source of inspiration and also new mathematics and new philosophy of probability . the paper can serve as an introduction to quantum gravity for a probabilist : it is a mathematical text on the quantum gravity for the planar pure gravity case .
[[ dynamics - contre - equilibrium ] ] dynamics contre equilibrium + + + + + + + + + + + + + + + + + + + + + + + + + + + mostly we consider combinatorial techniques , instead of more popular in physics random matrix models , the central point is the famous exponent .another goal of the paper is to consider stochastic dynamics which leaves quantum gravity equilibrium distribution invariant .we start theoretical study of this dynamics ( earlier it was only used for monte - carlo simulation ) .the study of dynamics constitutes ( but mainly it is self - contained ) a third part of the series of papers ( see ) where more general class of processes was studied .these processes have also some universal character in probability : they cover most concrete processes .also they have many examples in computer science and biology . here the probability is the classical probability .the quantum gravity constitutes a bunch ( a lot ! ) of papers overfilling last 10 years well - known physical journals .discrete quantum gravity is now considered as a promising direction towards unifying largest and smallest scales in the nowadays picture of nature .i consider one part of this field which evidently uses probabilistic intuition but it is difficult to find even formulations ( i do not mention proofs ! ) which could be satisfactory for a mathematician : even when the probabilities are hopefully positive they are not normalized . and this is not because of negligence of the authors but because some deep reasons seem to be behind the curtains . in the existing physical literaturea permanently developing algebraic and geometric techniques overwhelms the subject .thus it can be useful to step away from algebra and geometry , discussing some simple probabilistic aspects of quantum gravity : even such simple project appeared to rise many natural but still not answered questions .there are now two variants in the discrete approaches to quantum gravity : quantum regge calculus ( where links ( edges ) have lengths as random variables ) and dynamical triangulations ( where lengths of edges are constant ) . the word _ dynamical _ in the second approach is a little bit misleading because there is no dynamics at all in this approach : main techniques uses gibbs equilibrium distributions on large matrices .that is why i will call here these approaches equilibrium .the dynamics appeared earlier in monte carlo simulations of quantum gravity .here i try to give a probabilistic ( not numerical ) study of relevant markov processes .what is new here ( i do not know earlier rigorous results ) is that we want to advocate not numerical but analytic and probabilistic studies of such processes .why such processes can be useful not only in computer monte - carlo experiments but also as giving theoretical information ?there are many reasons - we give here a short list . *well known difficulty in averaging over all topologies is that , in 4 dimensions , it includes some questions which are known to be algorithmically unsolvable .dynamics substitutes this problem with a new one : instead of averaging we are looking for a process ( with arbitrary initial state ) which will generate all topologies it can generate .this process should have some symmetries but also it should be a legitimate ( for example non - exploding ) stochastic process . 
*what one would like to have ( as in the stochastic quantization in quantum field theory and glauber dynamics in statistical mechanics ) is a markov process leaving gibbs measure invariant .this is quite natural in quantum field theory where there are whiteman axioms and in statistical mechanics where there is a deterministic dynamics more fundamental than the gibbs measure itself . in quantum gravityboth these factors are absent and an alternative viewpoint could be advertised : that the process itself can be taken to be more fundamental than the gibbs measure itself . *dynamics allows to consider the region below the critical point where equilibrium distribution has no sense . on the contrarythis region is even more natural for the dynamics - like a growing universe ( in the computer time , the term which i know from a paper by a. migdal ) .moreover dynamics gives also some sense to distributions in the critical point without performing scaling limits .i do not know the physical counterpart of all this but its naturalness from probability point of view is evident .* i have absolutely no physical arguments for the choice and even relevance of the dynamical models , but that is also true for all modern approaches due to the lack of experimental confirmation .the leading thread can only be probabilistic intuition and beauty .relevant question are : what is universality and generic situation ?it was argued recently , see , that computer science could play some role in future physical theories .probabilistic aspects which we discuss here make this relation quite evident by a preliminary model of the universe growing via some grammar ( more exactly a graph grammar ) similarly to the random evolution of a language . *mathematical thermodynamic theory existing for statistical mechanics and quantum theory brought many new ideas .it is some surprise that an attempt to construct similar theory for growing complexes brought quite unexpected phenomena ( see , ) ( hopefully having some physical significance ) .one of the effects is that one can not fix an origin in an infinite universe without zermelo axiom , any constructive introducing of a local observer changes drastically the space time in his neighborhood .* it can not be easy to find critical exponents by monte - carlo simulation because the asymptotic is dominated by the exponential term which depends strongly on the details of the model .what is usually simulated is the uniform distribution on the set of triangulations with fixed number of cells .if we consider a growing complex then we could not find a markov process giving the necessary exponents ( the famous ) but only some random transformation of measures , that is called usually a nonlinear markov processes , giving these exponents .[ [ contents - of - the - paper ] ] contents of the paper + + + + + + + + + + + + + + + + + + + + + one - dimensional case ( section 2 ) is useful in particular as emphasizing links between classical probability and two - dimensional quantum gravity . in section 3the minimum of necessary definitions are given concerning complexes and curvature in two dimensional case .section 4 contains introductory definitions , problems and some known results . in section 3.2we give a short exposition of rmt approach to pure planar gravity , the only goal of this exposition is to emphasize some points , related to the combinatorial approach . 
in section 5we study a dynamical model , where the cells are appended in random to the boundary of the disk .this model is solvable ( via random walks ) and we calculate some main quantities .the exponent for this model is and thus it belongs to a different universality class than models accepted in physics . butcontinuum limit in this model is well defined , gives space with a constant curvature as in the physical model .section 6 is the central in the paper .we construct and study nonlinear markov processes ( where also changes are possible only on the boundary ) which render the equilibrium distribution invariant .we use the tutte functional equation method to prove that one gets exponent .we develop new combinatorial techniques to study local correlation functions . in section 7we consider dynamics where changes can be done elsewhere in the complex .we study large time behavior of such markov processes .[ [ acknowledgments ] ] acknowledgments + + + + + + + + + + + + + + + i thank l. pastur for elucidating to me some points of the random matrix theory and s. shlosman for reading the paper and very helpful comments .for the physical interpretation of the one - dimensional gravity and many beautiful calculations we refer to chapter 2 of ambjorn s lectures .our goal here is to give a probabilistic viewpoint and discuss new approaches .there is no topology in one dimension : the underlying structure ( cell complex ) is one - dimensional - a linear graph .a chain of symbols from some alphabet can be considered as a function on vertices of such graphs ( see figure [ 1f5 ] ) .we give now the basic definition in more abstract terms than in , without prior embedding in euclidean space : this corresponds more to polyakov string quantization .we shall consider distributions on the set of finite linear spin graphs ( sometimes we use the terminology from our previous papers but mostly it can be skipped ) . that is the distributions on the set of strings ( here string comes from computer science terminology ) , where and , where is an alphabet ( spin space ) .for example can be the unit sphere in or in .case corresponds to the empty string with no -value prescribed .define the nonnegative measure on by for some function .it is convenient to assume that some element is fixed .simplest example is when for all except a finite set .this measure can be normalized if intuitively , a sequence of arrays can be considered as a random walk .but it is quite different from the classical random walks .we shall see below its relationship with some computer science problems .we shall also see how this formal object can be tied to the euclidean space : in physical papers one can also see similar steps - abstract object ( random triangulation , internal metrics etc . )mapped finally to the physical space - time .[ [ simplest - examples ] ] simplest examples in the first example if and otherwise , assume . otherwise speaking we have the non - normalized distribution on all possible finite paths in , starting from where is the length ( number of steps ) of .it does not always exists .there exists such that the series for the partition function converges for and diverges for . in our case . for the second example for some bounded function of the angle between the two vectors . 
for the third example .these examples , highly simple and having nothing special from the probabilistic viewpoint , correspond to one - dimensional analogs of rather famous actions : free relativistic point particle in -dimensional space - time ( - length parameter in the euclidean space , is a path from to ) hilbert - einstein action ( is the curvature of the curve embedded in an euclidean space ) and bosonic string action ( is some metrics on the parameter interval ) one can consider the introduced distribution as a quantization of the corresponding classical action . each link of the discrete path is assumed to have unit length and thus the length of a path is the number of links .we shall discuss only the first example , other two are similar , see .for we can define the probability distribution on the set of all finite paths starting from green function are defined as a measure on where is the number of paths from to of length , - -step transitions probabilities from to for the classical simple random walk in .the number of such paths ( by the local limit theorem ) is if .green functions have their origin in physics and they also look like green functions for a markov process where are all paths from to .but there is no markov process here .the following observations are important : * neither nor the exponent do not depend on the choice of .* is not universal , it depends on the dimension and on the lattice .also we will get different values of if we take piecewise linear paths in with sides of fixed length ( see ) ; * however the exponent does not depend on the lattice , which follows immediately from the local limit theorem ; * for the series converges iff .now we want to study the scaling limit .denote . in the scaling limit onestudies the exponents : mass ( inverse correlation length ) exponent , susceptibility exponent , anomalous dimension , hausdorf dimension .they are defined via the leading term behavior for small we give now more detailed explanations .we have immediately that does not depend on the dimension of and holds also for more general spins and interactions ( however not always ) . thus . if then , the green functions for the simple random walk .that is why .let be the transition probabilities for the simple random walk on and - their generating function .then which is the classical propagator for quantum relativistic free particle with mass .it can be also proved that in the second example different exponents can be obtained for .one can construct a reversible dynamics with respect to which the measure introduced above is invariant .it is a continuous time markov chain .the state is interpreted as a queue , where is the length of the queue , is the set of customer types and is a generalized length of the queue ( taking into account customer types and signs of jobs ) .this is a lifo type queue ( last in first out ) and transitions consist of appending and deleting links on the right hand side ( like arriving and service of customers in queueing theory ) .more exactly , for all , with rate and with rate where all values of are equiprobable .transitions from the empty queue are with rate and equiprobable . if then the process is ergodic and the distribution ( [ ap ] ) is invariant with respect to this dynamics . 
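This dynamics is easy to probe numerically. The sketch below is only an illustration, not code from the paper, and the alphabet and rate values are assumptions: a uniformly chosen symbol is appended on the right with rate lambda and the rightmost symbol is deleted with rate mu, and since the restriction to the string length is a birth-death chain with constant rates, for lambda < mu the time-averaged length distribution can be compared with the geometric law with ratio lambda/mu.

```python
# a minimal illustrative sketch (not the paper's code): simulate the LIFO queue
# dynamics on strings, appending a uniformly chosen symbol on the right with rate
# lam and deleting the rightmost symbol with rate mu.  the restriction to the
# string length is a birth-death chain with constant rates, so for lam < mu its
# stationary law should be geometric with ratio lam/mu; we check this by
# time-averaging.  the alphabet and parameter values are illustrative assumptions.
import random

def simulate_lifo_queue(lam=0.6, mu=1.0, alphabet=("+x", "-x", "+y", "-y"),
                        n_jumps=200_000, seed=0):
    rng = random.Random(seed)
    s = []                       # current string; its right end is the last list element
    time_in_length = {}          # expected holding time spent at each length
    for _ in range(n_jumps):
        total_rate = lam + (mu if s else 0.0)   # only appending is possible from the empty string
        time_in_length[len(s)] = time_in_length.get(len(s), 0.0) + 1.0 / total_rate
        if rng.random() < lam / total_rate:
            s.append(rng.choice(alphabet))      # "arrival": append an equiprobable symbol
        else:
            s.pop()                             # "service": delete the rightmost symbol
    total = sum(time_in_length.values())
    return {n: t / total for n, t in sorted(time_in_length.items())}

if __name__ == "__main__":
    empirical = simulate_lifo_queue()
    rho = 0.6 / 1.0
    for n in range(6):
        geometric = (1.0 - rho) * rho ** n      # conjectured stationary law of the length
        print(n, round(empirical.get(n, 0.0), 4), round(geometric, 4))
```

With the default values the empirical and geometric columns should agree up to sampling error, which is consistent with ergodicity for lambda < mu.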
to prove this , note that the restriction of the process is also a markov chain - a birth - death process , its stationary probabilities are for models 2 and 3 similar dynamics leaves the distributions invariant .note that a system of two queues would correspond to two interacting particles etc .[ [ supercritical - case ] ] supercritical case now we see that they cases have no sense in the equilibrium approach but in the dynamical picture they are no worse than the case . one can not write down equilibrium distribution for , but at any time moment there exists some distribution and its limiting properties as could have interesting properties .denote the string at time . for have a.s .moreover there exist limiting local correlation functions ( not too close to the ends of the string ) which define a translation invariant gibbs field .for example in fact for the first example this gibbs field is a bernoulli sequence in all three regions : , see .[ [ critical - case - and - scaling - limit ] ] critical case and scaling limit there are two possibilities to consider the critical case .the first one is to consider the dynamics for critical parameter values .the properties of this dynamics define the critical exponents .there are results for sufficiently general transitions : given two positive functions , define the transition rates as thus depending on the right symbols .assume that the functions are such that the markov chain is null - recurrent , see the conditions in .let be finite with values .let is the number of symbols in the string .then the central limit theorem holds for the random vector , that is the following limit as exists in distribution where has the standard gaussian distribution and is a constant vector .this gives the same canonical exponents .note that for the reversible case the proof reduces to the reflected random walks .the proof for non - reversible dynamics is more involved : for finite see it in . for compact should be similar . for non - compact it would be interesting to find examples with non - gaussian limiting distribution .the second approach corresponds to the scaling limit in equilibrium case . in dynamicsthe parameters are scaled together with time , the parameters tend to the critical line and is scaled as . in such dynamics the scaling limit corresponds to the diffusion approximation in queueing theory .one gets the brownian motion with drift for the dynamics of under the following scaling the drift defines the mass gap in the spectrum of the infinitesimal generator of the corresponding diffusion process .the proofs here can be obtained by the application of the techniques known for the critical case .[ [ random - grammars ] ] random grammars we considered the dynamics , that is called right linear grammar ( not necessarily context free ) in the computer science terminology .now we shall speak about more general dynamics when transitions can occur at any place of the string , not only in its right end .for the first example one can construct the following reversible markov chain , leaving invariant the distribution , that appears to be a context - free random grammar ( see ) .each symbol of the string is deleted with rate and for each we insert a new symbol between symbols and ( where for we put it before , and for - after ) of the string with rate .appended symbol with probability will have one of coordinate vectors . 
to prove it note that this this dynamics restricted to , the set of path lengths , is also markov .it is in fact a birth and death process on with jump rates .then its stationary probabilities are ( if ) for two other examples the dynamics ( not context free ) can also be constructed , we shall do it in another paper in more general cases . .here we present the minimum of basic definitions concerning cell structures .a complex is obtained by gluing together its elementary constituents - cells , like the matter consists of molecules .one should be very careful in defining the rules of gluing and the arising probability distributions .on the other hand it seems doubtful that some type of cellular structure has some a priori advantages in front of others .there are no definite physical reasons to prefer one cell structure or gluing rule etc ., over another . thus various possibilities should be studied to see what universal laws they share . in this paper paperwe shall encounter two universal classes , one of them is popular in physics now .moreover , having some flexibility in choosing a cell structure one can gain more simplicity in the probabilistic description and even get solvable models .a ( labelled ) complex is a set of elements called cells , there is a function on , the dimension of the cell , taking values .the dimension of is .let be the set of cells of dimension .for each cell , is defined a subset , the boundary of .subcomplex of is a subset of such that if then .isomorphism of two complexes is one - to - one mapping respecting dimension and boundaries .equivalence classes of complexes with respect to these isomorphisms are called unlabelled complexes .the star of the cell is the subcomplex containing and all cells such that either or or .note that complexes can be considered as particular cases of spin graphs , see .the correspondence can be constructed in different ways .for example , let the vertices of correspond to cells of , the function is the dimension of the corresponding simplex .links are defined by the incidence matrix : two vertices and of are connected by a link iff .labelled spin complex is a pair where is a complex and is a function on the set of cells of with values in some spin space .isomorphism of two spin complexes is an isomorphism of the complexes respecting spins .the equivalence classes are called ( unlabelled ) spin complexes .unless otherwise stated we consider only functions defined on the cells of maximal dimension ; by dualisation it is often equivalent to functions restricted to vertices . theremany topological incarnations of abstract complexes . in each of them a cell is represented by an open disk .a cw - complex is a topological space which is defined by the inductive construction of its -dimensional skeletons .let be a disconnected set of points ( vertices ) - cells of dimension .in general , is obtained from as follows .each cell of dimension is identified with an open -dimensional disk and some continuous ( attaching ) map is fixed .then is the factor space of the union of and via identifications of with .for example , is a graph with vertices , zero - dimensional cells , and links ( edges ) , one - dimensional cells .link is a loop if the boundary of is mapped to one vertex .often some restrictions on the attaching maps are imposed . herewe restrict ourselves to the case and for all , the boundary is the union of some cells ( in some books , see for example , cw - complexes are defined as already satifying this restriction ) . 
with such cw - complexone can associate an abstract complex with .we get the class of simplicial complexes ( where the cells are called simplices ) if for each 2-cell its boundary has 3 one - dimensional cells and the set of vertices uniquely defines . any graph without multiple edges , no loops is a simplicial complex . in the paperwe consider different classes of ( two - dimensional ) complexes . the class can be defined either by imposing further restrictions on the class of complexes defined above or by some constructive procedures to get all complexes in this class .anyway such classes are a particular case of a language defined by some substitutions in a graph grammar , see .the following restrictions hold for all complexes in this paper : complex is a ( closed compact ) surface .pseudosurface ( closed compact ) is a topological space isomorphic to a finite 2-dimensional simplicial complex with the following property : each link is contained in the boundary of exactly two faces ( two - dimensional cells ) .a surface has an additional property that the neighbourhood of each vertex is homeomorphic to a disk .this is the list of all compact closed ( without holes ) 2-dimensional surfaces .orientable surfaces are just - sphere with handles .nonorientable surfaces are ( projective plane ) , ( klein bottle ) , ... , - sphere in which holes are cut and to each of them a moebius band ( crosscup ) is attached along its boundary . in this case is a graph homeomorphically imbedded to the surface .such complexes are studied in the topological graph theory ( see ) and in combinatorics , where topological complexes are called maps .surface with holes is obtained from a closed surface by cutting out finite number of disks with non - intersecting boundaries .if the surface has a boundary then the boundary belongs to .isomorphism of maps is an isomorphism of abstract complexes . in other words ,two maps are called isomorphic if there is a homeomorphism of such that vertices map onto vertices , edges on edges , cells on cells .a map is a subdivision of the map if the graph of is a subgraph of the graph of . by hauptvermutung if two topological complexes are homeomorphic as topological spaces there exist their subdivisions isomorphic as abstract complexes .if the surface is closed the euler characteristics of the complex is defined as where is the number of faces , - number of vertices , - number of links .it does not depend on the complex but only on the surface itself : for orientable surfaces the euler characteristics where is the genus ( number of handles ) , for nonorientable surfaces where is the number of crosscups .we shall use in fact only the following 4 classes .[ [ arbitrary - maps ] ] arbitrary maps + + + + + + + + + + + + + + this is the class we have just defined . no further restrictions are imposed .simplest examples are a vertex inside the sphere ( vertex map ) , an edge with two vertices inside the sphere - edge map .[ [ smooth - cell - surfaces ] ] smooth cell surfaces smooth cell surface ( see ) is a compact connected smooth two - dimensional manifold with finite number of closed subsets ( cells ) such that : 1 . ; 2 . for each there exists a one - to - one smooth mapping of onto a polygon with faces ; 3 . 
for either or an edge or a vertex of the corresponding polygon .[ [ triangulations ] ] triangulations + + + + + + + + + + + + + + this is a smooth cell surface with all .the set of vertices is called a cut if there are two subgraphs such that .disk - triangulation is a smooth cell surface , homeomorphic to the sphere , where there is one distinguished ( that will be the outer face ) face and for all other faces , and there is no cuts with one vertex .then it can be considered as the triangulation of the disk ( sphere with a hole ) . for triangulationsthe absence of cuts is equivalent to the absence of loops .[ [ simplicial - complexes ] ] simplicial complexes + + + + + + + + + + + + + + + + + + + + these are triangulations without multiple edges , where moreover every three edges define not more than one cell .note that a triangle ( cycle of length 3 ) having inside and outside at least one vertex , is not considered as a cell .[ [ convex - polyhedra ] ] convex polyhedra + + + + + + + + + + + + + + + + quantizing smooth via piecewise linear structures is possible because the convex polyhedra have combinatorial counterparts .for example , convex polyhedra can be considered as maps with .there is a pure combinatorial characterization of maps corresponding to convex polyhedra . if a triangulation has no loops and no multiple edges , then , if , it corresponds ( by steinitz - rademacher theorem ) , to a convex polyhedron .labels in complexes are not necessarily given explicitly but the complex is considered to be labelled if the set is claimed to be fixed .labels are useful for fixing coordinate system in the space but are superfluous for the geometry and topology .there is a very convenient way to avoid the superfluous labelling but at the same time giving some algorithmic way to get a complete coordinatization .root ( local observer ) in a ( labelled ) complex is an array where is a two - dimensional cell , = its edge , - vertex of .isomorphism of two complexes with roots is an isomorphism of complexes respecting the roots .rooted map ( rooted complex , complex with a local observer ) of class is an equivalence class of isomorphisms of complexes with a root in the class of complexes .assume that the rooted edge is directed from the rooted vertex . for disk triangulationswe agree that one ( the outer ) face is rooted , it is possible that .the automorphism group of any rooted map is trivial .this is easily proved by induction on the number of cells by subsequent extending the automorphism from the rooted face to its neighbors .graph grammars corresponding to transformations ( substitutions here are called moves ) of complexes were studied very little . in the next sectionwe shall consider tutte moves , see fig .[ 1f4 ] , which consist in appending an edge between two vertices of a cell or joining together two disjoint graphs by identifying two of their vertices . 
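As a concrete illustration of the euler characteristic and genus introduced above, one can compute chi = V - E + F directly from a list of triangular faces and recover the genus from chi = 2 - 2g in the orientable closed case. This is a standalone computation, not taken from the paper; the 7-vertex torus triangulation used below is a standard textbook example.

```python
# illustrative check of the euler characteristic chi = V - E + F of a map given by
# its triangular faces, with the genus recovered from chi = 2 - 2g (orientable
# closed surfaces).  examples: the boundary of the tetrahedron (a triangulation of
# the sphere) and the standard 7-vertex triangulation of the torus.
from itertools import combinations

def euler_characteristic(triangles):
    vertices = {v for t in triangles for v in t}
    edges = {frozenset(e) for t in triangles for e in combinations(t, 2)}
    return len(vertices) - len(edges) + len(triangles)

# sphere: all 3-element subsets of a 4-point set
tetrahedron = list(combinations(range(4), 3))

# torus: triangles {i, i+1, i+3} and {i, i+2, i+3} modulo 7
torus = [(i, (i + 1) % 7, (i + 3) % 7) for i in range(7)] + \
        [(i, (i + 2) % 7, (i + 3) % 7) for i in range(7)]

for name, tri in [("sphere (tetrahedron)", tetrahedron), ("torus (7 vertices)", torus)]:
    chi = euler_characteristic(tri)
    print(name, "chi =", chi, "genus =", (2 - chi) // 2)
```

The output is chi = 2, genus 0 for the sphere and chi = 0, genus 1 for the torus, in agreement with the formula for orientable surfaces.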
in topology subdivisions played always a big role .there are two papers ( see ) where some moves are studied in detail .let be the commutative associative algebra over ( simplicial chains over ) generated by the symbols of some ( countable ) alphabet with commutation relations .thus it is a linear span generated by the strings ( simplices ) .define the boundary operator as a linear operator such that where the sum runs over all subsets of with the number of elements .we shall consider here only two dimensional complexes .there are other linear operators in this algebra ( called alexander moves ) .they are defined as follows .let , then next example : gross - varsted moves .it is proved in that each alexandre move can be obtained via gross - varsted moves and vice versa .we say that a set of moves is irreducible in the class of complexes if for each pair of complexes from there is a sequence of moves giving from ( in the physical literature the term ergodic is used in this cased , but we want to use the standard probabilistic terminology ) . in the class of simplicial complexesthe set of alexander moves and the set of gross - varsted moves as well are irreducible .proof see in .let be any of the five classes of complexes introduced above .[ theauto ] for most complexes with two - dimensional cells from the automorphism group is trivial , that is if , where ( ) is the set of all complexes with two - dimensional cells from ( the same but with nontrivial automorphism group ) .earlier tutte remarked that it is very intuitive that almost all triangulations have no nontrivial automorphism .many rigorous results appeared afterwards , see .proof for the case of disk - triangulations see in .the metric structure is defined once it is defined for each closed cell so that on the edges the lengths are compatible .there are two basic approaches for defining the metric structure : dynamical triangulations - when all edges have length one and quantum regge calculus - when they are random .we shall use the first one .then all cells with the equal number of edges are identical and on faces the metrics is standard .one can do it differently .let first the graph be embedded in the plane , the edges being smooth arcs .define the metric structure on the graph so that the edge lengths are all equal to a constant .inside a cell with edges we define the metric structure via some smooth one - to - one mapping of an equilateral polygon with edges onto this cell , so that the smoothness hold also in vicinity of each point on the edge .then inside cells the curvature is zero . on edges also : this is shown on the figure in piecewise linear case . we shall define curvature at vertex . as always the curvature is measured by parallel transport ( levi - civita connection ) of a vector ( lying in the plane in piecewise linear situation ) along a closed path : along the internal part of a triangle as on the euclidean plane , through an edge - by unfolding the two half planes separated by this edge to a plane .one sees immediately that only paths around vertices may give nonzero difference . 
around the vertex the angle between the initial and the transported vector is , where is the angle of the simplex at vertex .note that using the euler formula one can get from this the gauss - bonnet formula for triangulations where .gauss - bonnet formula for smooth surfaces is .its relationship with the discrete case for a partition with -gons with areas of the unit sphere ( the area of the triangle is ) is given by the formula = \sum_{ij}\alpha _ { ij}-\pi \sum n_{i}+2\pi f=2\pi v-2\pi e+2\pi f\ ] ] classical examples are : positive curvature - elliptic geometry ( sphere , projective plane ) ; zero curvature , euclidean geometry ( plane , torus , klein bottle ) ; negative curvature - hyperbolic geometry ( all others ) .now we shall show that the curvature at vertex is defined by the number of edges incident with .einstein - hilbert action on the smooth manifold is where is the gaussian curvature , - metrics .it is known that . thus the discrete action should be ( up to a constant ) , where is the genus and is the number of triangles .we want to write down a discrete analog of this action with a discrete curvature summing over vertices instead of summing over triangles .assume all triangles to be equilateral and scale their area to 1 .thus each vertex gets area from each incident triangle , thus in total .then and the formula holds if only we put the curvature at the vertex equal .there are two kind of techniques used in the two - dimensional gravity .historically the first one is the combinatorial approach , that was initialized by tutte and continued ( without any mention of physics ) by many researchers , the papers are published in journals on combinatorics .the second one is random matrix theory ( rmt ) approach , that was originated in physics itself .calculations in the second approach are very persuasive but the arguments are not completely rigorous .as far as i know , no explicit connections between these approaches were established .we use the first approach and give a short review of the latter approach .let be some class of complexes ( for example defined in the previous section ) , homeomorphic to the sphere , - number of cells of dimension in , .the main example is the class of all triangulations of a sphere .the grand canonical ensemble is defined by in particular , the conditional distribution of with fixed is uniform .easy and general methods to estimate are useful sometimes , but can provide only bounds .( exponential a priori bounds ) proof .lower bound : this is quite trivial and can be proved in many ways .for example , take two following complexes homeomorphic to the ring with the same number of boundary edges from both sides .first one - alternating up and down triangles ( that is standing on an edge and on the vertex correspondingly ) , second - two triangles up and two triangles down etc .these two kind of triangles can be glued sequentially one - after - one in all possible ways .the following method of proof of upper bounds works even in some more general situations .one can give an algorithm to construct all possible complexes with cells of dimension two .start with one cell .we enumerate its edges as . 
on each stepwe add not more than one cell to the boundary and enumerate new edges immediately after already used numbers .now we describe the inductive construction .we take the edge with number one and make one of the 4 decisions : 1 ) not to add anymore triangles to this edge , 2 ) add to it a triangle having exactly two new edges , 3 ) add triangle to this edge and to the next edge on the boundary ( in clockwise direction ) , 4 ) the same for counterclockwise direction . for each of decision sequences let be the number of edges after steps , .moreover , if there are triangles there can not be more than type 1 decisions .one needs however exact asymptotics .all known examples exhibit the following asymptotic behavior from ( [ asy ] ) it follows there exists such that for the series ( [ ser ] ) converges .it diverges if .if then iff .thus , for the parameters the distribution does not exist .however , the dynamics introduced later allows to consider such and for them local correlation functions make sense .no general results are known however .none of the constants is universal , but for all known examples is .universality of is not at all simple intuitive fact .for example , predictions based on physical non - rigorous arguments ( see , for example , ) failed to predict famous in the planar case .the asymptotics ( [ asy ] ) holds for all four classes , defined in the previous section .moreover in all cases . proof .we shall prove it only for triangulations ; other cases see in references cited in enumeration of two - dimensional maps . in the similar way we shall define the distribution on the class of rooted complexes where index zero means that we consider rooted complexes of class . for triangulations it follows from triviality of automorphism groups for most complexes ( see theorem [ theauto ] ). then we can take as a root any of cells of dimension 2 , choose one of its edges and orient it in 2 ways .denote the number of disk - triangulations where the outer face has edges , - where the outer face is moreover rooted .the following result is similar but can be proved easier . for large and fixed .enumerate the edges of the boundary in a cyclic order : .an automorphism is uniquely defined , if is given .we shall show that almost all complexes do not have an automorphism such that . to prove thiswe shall show that for each complex having a nontrivial automorphism we can subdivide the complex on two parts where each cell belongs to only one part , such that .this can be done by induction as follows .take some boundary edge , take a triangle with this edge and refer it to , then put .each step of induction consists of taking one more triangle having common edge with already constructed part of .now we can modify inside in a number of ways , bounded from below by some function as , uniformly in .this can be done by choosing triangles in , not too close from each other , and modifying independently some neighborhood of each keeping the boundary of the neighborhood and the number of cells in this neighborhood fixed .this is possible as , where is the number of edges on the boundary .thus for given the proportion of complexes with is small .we have proved that only small number of complexes have an automorphism such that . 
as is fixed then multiplying this number on gives again a small number .to prove the theorem we should prove that .the universal nature of ( [ asy ] ) is strongly supported by the fact that , for all such examples , the first positive singularity of the generating function is an algebraic singularity , that gives the asymptotics ( [ asy ] ) .assume an algebraic function is analytic at , has minimal positive singularity at point .we say that its leading exponent is if there exist such and functions analytic at such that . then we have the following expansion in our case ( for ) .one could also apply tauberian theorems in such situation .we give some examples where all constants in the asymptotics are known , see the same references .first example is the class of triangulations defined above . herefor convex polyhedra we have . for simplicial triangulations .many other examples can be given ; it is interesting however to understand the general underlying mechanism .tutte has begun to study the asymptotics for and developed a beautiful and efficient quadratic method .afterwards many authors contributed by developing the method itself and obtaining asymptotics for various classes ( see review and more recent papers ) .the main idea of tutte are the following recurrent equations for these equations are easily derived as follows from the following picture where the orientation of the rooted edge is marked by arrow , rooted vertex is the first vertex of the arrow , rooted face is to the right of the arrow ( containing the north pole of the sphere ) , see figure [ 1f4 ] take any rooted map with and do tutte move 1 , take any ordered pair of rooted maps and perform tutte move 2 .any rooted map can be uniquely obtained in this way . corresponds to the so called edge map with one edge only which is counted twice . if we introduce the generating function the following functional equation holds .we shall deduce from this equation that is algebraic and compute its first singularity , below in this paper , in a bit more general setting .[ [ green - functions ] ] green functions + + + + + + + + + + + + + + + consider a class of complexes . let be a class of complexes , defined with the same restrictions as , homeomorphic to the sphere with holes with edges on the boundaries of these holes .we assume also that these boundaries do not intersect each other .the green functions are defined as follows corresponds to the case .rooted green functions are defined similarly where the index everywhere means that we consider complexes with a distinguished edge on the first boundary with edges , the local observer in the terminology of .one would like to have an expression for the green functions in terms of the basic probabilities ( as for markov chains ) .green functions are associated with the derivatives , that is the factorial moments of . the partition function and its two first derivatives are finite for and for we have as proof .we shall see later that is an algebraic function of and has the principal singularity at the point . in the vicinity of we have .this is in a good agreement with the following simple intuitive counting argument. for fixed there exist constants such that proof .take first and prove the upper bound .take some complex with faces and glue up the hole with some complex with faces where depends only on .we shall get some complex with faces . 
for given and with we shall get not more than complexes where depends only on .in fact , for any the number of subcomplexes with faces from having the same root is bounded by .that is why .the lower bound can be proved similarly .for the proof is similar but one should first choose faces along which paths with edges will pass . this will give the factor .this can be done by induction in .two questions arise : what is the asymptotics of if both tend to infinity and what is the asymptotics of other global variables , such as the number of vertices etc . ?we shall see that these two questions are related . are well - defined random variables in the grand canonical ensemble and one could would like to have their joint distribution .in general only two of them are independent due to the euler formula . for triangulations , where each face has 3 incident edges , we have only one independent variable as . for the class of all rooted maps , where two variables are independent, we have the following lemma .let be the conditional mean number of vertices if the number of faces is .then for some .as it follows from the formula on p. 157 of the number of rooted maps with faces and vertices is thus is defined by the maximum in of by large deviation asymptotics .consider now one - particle green functions .the following series converges above some nondecreasing function , see figure [ 1f3 ] thus the series diverges . proof .it is quite obvious because the series has all coefficients positive .thus we have a family of distributions .it is of interest to study the asymptotics and exponents when where is fixed .the explicit formula ( see ) for the number of triangulations with a distinguished edge on the boundary ( rooted triangulations ) is where is the number of inner cells . if is odd. then as in particular for thus for all the exponent is . for fixed the exponent does not depend on and is andmoreover we have also as .random matrix model is the following probability distribution on the set of selfadjoint -matrices with the density where is a polynomial of bounded from below , is the lebesgue measure on real -dimensional space of vectors .it can be written also as where is the gaussian measure .it is easy to see that has covariances .note that for mere existence of the probability measure one needs that the senior coefficient of were positive and were even . in this casethere exists a well - developed probability theory of such models , which we shall not review here , see .the fundamental connection ( originated from thooft ) between rm model and two - dimensional complexes is provided by the formal series where is the sum of all connected diagrams with vertices .take for example .then each diagram has labelled vertices , each vertex has labelled thick legs , corresponding to the product .each thick leg can be seen as a narrow strip with two sides , each side is marked with a matrix index .dividing by we eliminate the numbering of the four legs leaving them however cyclically ordered . after coupling legs and their sides ( note that coupled sides have the same index and , as each vertex have two sides with the same index , we get index loops ) and summing over indices we get a factor where is the number of index loops . after thiswe are left with for each graph choose the minimal cell embedding of in a compact orientable surface of genus ( topological graph theory ) . 
assume clockwise order of legs .it has vertices , edges and faces .putting and using euler formula we have with .the calculations in rmm can be done only for , thus to get finite one should scale as . in the limit we have and only survives giving thus only plane imbeddings .the limit is called the simple scaling limit .it was proved ( see ) that in this case showing again the stability of this exponent .there are important points in this approach which should be mentioned : * in case the order of all vertices equals , this is some restriction on the class of maps ; * automorphism group of our labelled diagram factores in two factors .the first one related to the permutation of vertices , and second one related to the permutation of legs in each vertex .almost all diagrams have the first factor trivial , but for some of them .we can sum over nonlabelled diagrams then but each unlabelled diagram will have a factor this means that the counting does not coincide with the natural counting used in the combinatorial approach ; * we should fix also somehow : normally one chooses embedding to the minimal possible .but anyway not all possible triangulations are taken into account because a given graph can be embedded to surfaces with different .this gives one more reason that the counting rule does not coincide with natural counting where all maps from some fixed class are counted exactly once .but this should not be taken seriously : anyway this counting is no worse and no better than others .* there appears a contradiction if one wants to get probability distributions simultaneously for the matrix model itself and for graph embeddings .we have probability distribution for the matrix model if , but the probability distribution on the diagrams is achieved only if . thus one should always perform analytic continuation from to . the free energy for the scaling mentioned above can be rigorously calculated but the complete argument leading to the graph counting is still lacking .* there are other pure gravity models treated with this approach : more general pure gravity model counts the number of vertices with : where are the parameters , see .the probability distribution on some set of complexes is invariant with respect to the following simple markov process .let at time the triangulation be .the process is defined by the following infinitesimal transition rates . with rate at time we destroy , add one more cell and glue anew all cells randomly together , that is if then we choose uniformly among complexes of the class with cells . with rate do random choice of a complex with cells. what dependence on can be ? if for some positive function then the probability distribution is an invariant distribution with respect to this process .proof consists of the remark that the induced process on is a reversible markov chain : a birth and death process on with jump rates .the simplest way of monte - carlo simulation is to take sufficiently large and simulate uniform distribution , but it is impossible to find the exponent in this way .one should compare different and this can be done via such a process .apart from this such dynamics is of no interest , it is not constructive , especially in higher dimensions . 
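The reversibility argument used here, and in several invariance proofs earlier in the paper, rests on the elementary detailed-balance computation for birth-death chains. A minimal sketch of that device follows; the rate functions in the example are illustrative placeholders, not the rates of the process just described.

```python
# a minimal sketch of the detailed-balance computation behind the reversibility
# arguments: for a birth-death chain with birth rates b(n) and death rates d(n),
# the (unnormalised) stationary weights satisfy pi(n+1)/pi(n) = b(n)/d(n+1).
# the example rates below are illustrative placeholders, not taken from the paper.

def birth_death_stationary(b, d, n_max):
    """Return the normalised stationary distribution on {0, ..., n_max}."""
    weights = [1.0]
    for n in range(n_max):
        weights.append(weights[-1] * b(n) / d(n + 1))
    total = sum(weights)
    return [w / total for w in weights]

if __name__ == "__main__":
    lam, mu = 0.7, 1.0
    pi = birth_death_stationary(b=lambda n: lam, d=lambda n: mu, n_max=30)
    # with constant rates this is (a truncation of) the geometric law with ratio lam/mu
    print([round(p, 4) for p in pi[:6]])
```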
in the rest of this paper we shall study local dynamics .we start with a simplest local dynamics of two - dimensional planar complexes .the distribution appears not to be invariant with respect to the first model dynamics .thus , there could be two possibilities : either it will nevertheless give the same exponents for the invariant measure or its invariant measure belongs to another universality class ( being however irreducible and ergodic ) .we shall show that the second one holds .we consider smooth cell surfaces and assume the cells be triangles .one starts with one triangle and each step consists in attaching a new triangle on the boundary .there are two kinds of attachment ( see figure [ 1f2 ] ) : to one or to two edges with the same vertex : for any edge on the boundary we attach to it a triangle with rate . for any pair of neighboring edges on the boundary we attach to them a triangle with rate any time the complex is homeomorphic to a closed two dimensional disk and its boundary - to a circle .we assume that the initial state is the only triangle and that if the number of edges on the boundary is equal to 3 then only -transitions are possible .we can consider the states with as giving a triangulation of the sphere itself ( all other states as disk - triangulations ) , the outside of the triangle being the cell containing the north pole on the sphere .one can interpret it as the closing up of the hole in the sphere ( the external part of the complex ) .it is important to note that one could consider two other variants of this dynamics .first one is when we consider equivalence classes of cell surfaces .then transition rates would be , instead of , where is divided by the number of automorphisms of the disk triangulation .second , we shall use its analog later in more complicated situations , is that there is a distinguished ( rooted ) edge on the boundary and transitions can occur only if they touch this edge .there are 3 cases with quite different behavior of this markov process : sub - critical or ergodic , critical or null recurrent , supercritical or non - recurrent .for all these cases we shall study the behavior of local correlation functions and of the following global variables at time : where is the total number of edges and is the number of edges on the boundary .[ [ subcritical - case ] ] subcritical case by definition it is the case when .if then let be the random number of jumps until first return to the state and put for any triangulation of the sphere with a distinguished face ( outer face ) . then proof . note first that the length of the boundary is itself a markov process : the evolution of the boundary can be seen as the simplest ( context free ) random grammar with the alphabet consisting of one symbol ( representing one edge ) and with the substitutions this process is obviously reduced to the branching process with one particle type where is the birth rate , is the death rate .denote this process - it is a continuous time markov process states of which are the points of the lattice interval .it has jumps and the corresponding rates and from the point ] .then iterating the equation we get that the expansion of at has all coefficients positive . is an algebraic function , analytic for .in fact , could have a pole for only if but it would imply which is impossible . to visualizethe expansion of denote now and substitute into ( [ main ] ) .we get as is a double root of the main equation , and choosing minus sign we have then that gives a legitimate expansion . 
for given the convergence radius of as the function of is defined by zeros of or .as increases on the interval ] we have ah urn problem on an arbitrary planar tree under the conditions where is the number of balls in the urn ( vertex ) of the tree , is the number of vertices of the tree .let be the number of such arrays on the tree with balls .if for example from the vertex only two edges go upwards to the vertices , then then the argument is similar to the previous one .note that .we want to compare and , for this we iterate the latter recurrent equation for to the very end , that is we get the sum of terms , in each of them all factors equal for some .the iteration process for there corresponds the similar process for , that is why to each term there corresponds the term in the expansion for in that term one of the factors is instead of the factor in the term .thus as before . from this bounds uniform in bare trees follow .the influence of the boundary is exponentially small .similarly one can estimate other correlation functions , for example , the decay of correlations .let and take two vertices with . then considered above only a growth of the boundary , that was quite natural : many modern technologies follow this principle .but also another dynamics is possible where all cells ( even inside the building ) can evolve .we shall consider here some questions related to such dynamics .note that gross - varsted moves can be used not only for simplicial complexes but for other classes as well , as it is seen from the picture .consider gv - moves 1 and 2 and the inverse one to 2 , consider the markov chain with rates for these moves correspondingly .if then are invariants .let be an irreducible component of the set of ( nonequivalent ) complexes with given and and .we make an assumption that a move can only be done if it gives non - equivalent complex .we formulate the following lemma without proof .the following example shows that large time and large limits are not interchangeable , that is for local quantities .this the simulation is slow and dangerous in this case . consider the sequence of such chains having the embedded state spaces take a vertex at time and consider random variables - number of edges at at time .we have and it could be natural to think that .but the following argument shows more complicated situation . proof . for fixed the process is markov with state space with rates .in fact , each edge incident to can be changed to a transversal and , for each triangle containing , its edge not containing can be erased by gv - move , this will give one more incident edge .the limiting random walk is null recurrent and thus big fluctuations in it occur until it reaches equilibrium for fixed .now consider markov chains where the only transitions are a - moves . to get ergodic chains we change the generator which produces jumps. now the jumps are produced by any vertex with rates or . for fixed with rate randomly ( that is with probability ) one of the edges on the boundary of and do the a - move corresponding to this edge .let be the rate of the inverse a - move at vertex , also for each possible vertex of degree on with equal probability we take one pair of triangles ( on the right hand side of the a - move ) and do the inverse a - move .once the vertex appeared it can disappear afterwards .let the time when vertex appeared . proof .let for each vertex be number of vertices on with , let .fix vertex . if then is the next vertex on in the clockwise direction . 
1 .two edges ( marked 1 on the figure ) appear on some link ( on the figure ) .thus here the transition is with rate . here and further factor because the same move can be produced also by the opposite vertex .inverse move with rate ; 2 .this move is produced by vertex ( dotted edges 2 on the figure ) , the new vertex appears on the edge .it produces a change in the vector only if .thus here with rate .inverse move gives the jump with rate 2 .next move is also produced by vertex ( edges 3 on the figure ) . can be transformed .in fact we do not need rates for 2 and 3 : note only that these jumps conserve .assume first .then the embedded process , where are the jump moments , satisfies the following inequality for some fixed . by the submartingale techniques ( see , for example , ) we have the proof . in the opposite case we have and again the techniques of works .it seems plausible that if then for all sequences the process tends to some proper distribution if is fixed . if it can be proved . on the contrary for the critical case random variables as for the brownian motion .compared with the results in the previous section this gives argument that we do not get the physical invariant measure here .randomised approximation schemes for tutte - grothendieck invariants . in discrete probability and algorithms , ed .d. aldous , p .diaconis , j .spencer , j. m. steele , springer verlag , pp .133 - 148 .spectral and probabilistic aspects of random matrix models . in algebraic and geometric methods in mathematical physics , a. boutet de monvel and v.a .marchenko ( eds . ) , kluwer , 1996 , 205 - 242 .
|
in this paper we study a stochastic dynamics which leaves the quantum gravity equilibrium distribution invariant . we begin a theoretical study of this dynamics ( earlier it was used only for monte - carlo simulation ) . the main new results concern the existence and properties of local correlation functions in the thermodynamic limit . the study of dynamics constitutes a third , self - contained part of a series of papers where a more general class of processes was studied ; these processes have a universal significance in probability , since they cover most concrete processes , and they have many examples in computer science and biology . at the same time the paper can serve as an introduction to quantum gravity for a probabilist : we give a rigorous exposition of quantum gravity in the planar pure gravity case .

|
the sustainability of modern human societies relies on cooperation among unrelated individuals .situations that require cooperative behaviour for socially beneficial outcomes abound and range from taxpaying and voting to neighbourhood watch , recycling , and climate change mitigation .the crux of the problem lies in the fact that , while cooperation leads to group - beneficial outcomes , it is jeopardized by selfish incentives to free - ride on the contributions of others .excessive short - term benefits to individuals who act as selfish maximizers create systemic risks that may nullify the long - term benefits of cooperation and lead to the tragedy of the commons .fortunately , we have strong predispositions to behave morally even when this in conflict with our material interests . the innate human drive to act prosociallyis a product of our evolution as a species , as well as our unique capacity to internalise norms of social behaviour .yet , it is also important to note that impaired recognition and absent cognitive skills are likewise potential triggers of antisocial rewarding , in particular since under such circumstances the donor of the reward is likely to be unable to distinguish between cheaters and cooperators . as such , the concepts of mutualism and second - order free - riding are by no means limited to human societies , but apply just as well to certain eusocial insects as well as to bacterial societies . despite favourable predispositions , however , cooperation is often subject to both positive and negative incentives .positive incentives typically entail rewards for behaving prosocially , while negative incentives typically entail punishing free - riding .however , just like public cooperation incurs a cost for the wellbeing of the common good , so does the provisioning of rewards or sanctions incur a cost for the benefit or harm of the recipients .individuals that abstain from dispensing such incentives therefore become second - order freeriders , and they are widely believed to be amongst the biggest impediments to the evolutionary stability of rewarding and punishing .in addition to being costly , the success of positive and negative incentives is challenged by the fact that they can be applied to promote antisocial behaviour . antisocial punishment ,that is , the sanctioning of group members who behave prosocially , is widespread across human societies .moreover , antisocial rewarding is present in various inter - specific social systems , where the host often rewards the parasitic species of a symbiont .this phenomenon is due to the inability of the donor to distinguish defectors and cooperators .recent theoretical work also indicates that antisocial punishment can prevent the coevolution of punishment and cooperation , just like antisocial rewarding can lead to the breakdown of cooperation if the latter is contingent on pool rewarding . 
in theory , the resolution of such social traps involves rather complex set - ups , entailing the ability of second - order sanctioning , elevated levels of effectiveness of prosocial incentives in comparison to antisocial incentives , or the decreased ability to dispense antisocial incentives due to the limited production of public goods in environments with low levels of cooperation .here we study what happens if both competing strategies are able to invest into a rewarding pool to support akin players .how does such a strategy - neutral intervention influence the evolutionary outcome of a public goods game ?we consider a four - strategy game , where beside traditional cooperators and defectors also rewarding cooperators and rewarding defectors are present .rewarding cooperators reward other rewarding cooperators , while rewarding defectors reward other rewarding defectors , thus representing prosocial and antisocial pool rewarding , respectively .noteworthy , our setup differs slightly from a recently studied model where rewarding players could be utilized directly by non - rewarding competitors . in our case , however , we focus on the impact of the strategy - neutral intervention in the form of pool rewarding .in addition to the well - mixed game , we mainly study the game in a structured population , where everybody does not interact with everybody else , and the interactions that do exist are not random .the importance of structured populations for the outcome of evolutionary social dilemmas was reported first by nowak and may , and today the positive effects of spatial structure on the evolution of cooperation are well - known as network reciprocity .several recent reviews are devoted to evolutionary games in structured populations . the consideration of prosocial and antisocial pool rewarding in structured populations is thus an important step that promises to elevate our understanding of the impact of strategies that aim to promote antisocial behaviour in evolutionary games .as we will show , antisocial rewarding does not hinder the evolution of cooperation from a random state in structured populations , and in conjunction with prosocial rewarding , it still has positive consequences in that it promotes the spatial selection for cooperation in evolutionary social dilemmas .this counterintuitive outcome can be understood through pattern formation that facilitates the aggregation of players who adopt the same strategies , which in turn helps to reveal the long - term benefits of cooperation in structured populations .the public goods game is a stylized model of situations that require cooperation to achieve socially beneficial outcomes despite obvious incentives to free - ride on the efforts of others .we suppose that players form groups of size , where they either contribute or nothing to the common pool . after the sum of all contributionsis multiplied by the synergy factor , the resulting public goods are distributed equally amongst all the group members irrespective of their contribution to the common pool . 
in parallel to this traditional version of the public goods game entailing cooperators ( ) and defectors ( ) , two additional strategies run an independent pool rewarding scheme .these are rewarding cooperators ( ) and rewarding defectors ( ) , who essentially establish a union - like support to aid akin players .accordingly , rewarding cooperators contribute to the prosocial rewarding pool .the sum of all contributions in this pool is subsequently multiplied by the synergy factor , and the resulting amount is distributed equally amongst all players in the group . likewise , at each instance of the public goods game all rewarding defectors contribute to the antisocial rewarding pool .the sum of all contributions in this pool is subsequently multiplied by the same synergy factor that applies to the prosocial rewarding pool , and the resulting amount is distributed equally amongst all players in the group .we are thus focusing on the consequences of union - like support to akin players , without considering second - order free - riding .it is therefore important that we consider strategy - neutral pool rewarding in that individual contributions to the prosocial and the antisocial rewarding pool are the same ( ) , as is the multiplication factor that is subsequently applied .otherwise , if an obvious disadvantage would be given to either the prosocial or the antisocial rewarding pool , the outcome of the game would become predictable .we also emphasize that , in order to consider the synergistic consequence of mutual efforts and to avoid self - rewarding of a lonely player , we always apply if only a single individual contributed to the rewarding pool .in addition to the well - mixed version of the game , we primarily consider the spatial game .we emphasize that the importance of a structured population is not restricted to human societies , but applies just as well to bacterial societies , where the interaction range is typically limited , especially in biofilms and in vitro experiments .biological mechanisms that are responsible for the population being structured rather then well - mixed typically include limited mobility , time and energy constrains , as well as cognitive preferences in humans and higher mammals . in the corresponding model , the public goods game is staged on a square lattice with periodic boundary conditions where players are arranged into overlapping groups of size , such that everyone is connected to its nearest neighbours .accordingly , each individual belongs to different groups .the square lattice is the simplest of networks that allows us to take into account the fact that the interactions among humans are inherently structured rather than well - mixed or random . despite of its simplicity ,however , there exist ample evidence in support of the fact that the square lattice suffices to reveal all the feasible evolutionary outcomes for games that are governed by group interactions , and also that these outcomes are qualitatively independent of the details of the interaction structure . 
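To make the payoff structure explicit, a hedged sketch of the payoffs within a single group follows. Since some details are not fully spelled out in the text above, the sketch assumes that (i) each rewarding pool is shared only among its own contributors, in line with the union-like support of akin players, (ii) the synergy factor of a rewarding pool is replaced by 1 when it has a single contributor, and (iii) the contribution to the public goods pool and to a rewarding pool are both equal to c.

```python
# a hedged sketch of the group payoffs in the four-strategy game described above.
# labelled assumptions: (i) each rewarding pool is multiplied by the same synergy
# factor r2 and shared only among that pool's own contributors ("akin players");
# (ii) the synergy factor is replaced by 1 when a pool has a single contributor,
# so that a lonely player cannot reward itself; (iii) the contribution to the
# public goods pool and to a rewarding pool both equal c.
def group_payoffs(n_c, n_d, n_rc, n_rd, r=4.0, r2=1.5, c=1.0):
    g = n_c + n_d + n_rc + n_rd                     # group size G
    pgg_share = r * c * (n_c + n_rc) / g            # public good returned to every member

    def pool_gain(n_pool):
        if n_pool == 0:
            return 0.0
        synergy = r2 if n_pool > 1 else 1.0         # no synergy for a lone contributor
        return (synergy * c * n_pool) / n_pool - c  # pooled, multiplied, shared; minus own contribution

    # entries are only meaningful for strategies actually present in the group
    return {
        "C":  pgg_share - c,
        "D":  pgg_share,
        "RC": pgg_share - c + pool_gain(n_rc),
        "RD": pgg_share + pool_gain(n_rd),
    }

if __name__ == "__main__":
    # example group with one player of each strategy plus an extra defector (G = 5)
    print(group_payoffs(n_c=1, n_d=2, n_rc=1, n_rd=1))
```

With these assumptions a rewarding player gains (r2 - 1)c from its pool whenever at least one other akin player is present in the group, and nothing otherwise.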
as an alternative , and to explore the robustness of our findings , we nevertheless also consider regular small - world networks , where a fraction of all links is randomly rewired once before the start of the game . the considered evolutionary game in a structured population is studied by means of monte carlo simulations , which are carried out as follows . initially each player on site is designated either as a cooperator , defector , rewarding cooperator , or a rewarding defector with equal probability . next , the following elementary steps are iterated repeatedly until a stationary solution is obtained . a randomly selected player plays the public goods game with its partners as a member of all the groups , whereby its overall payoff is thus the sum of all the payoffs acquired in each individual group as described in the preceding subsection . next , player chooses one of its nearest neighbours at random , and the chosen co - player also acquires its payoff in the same way . finally , player enforces its strategy onto player with a probability given by the fermi function , where quantifies the uncertainty of strategy adoptions , implying that better performing players are readily adopted , although it is not impossible to adopt the strategy of a player performing worse . such errors in decision making can be attributed to mistakes and external influences that adversely affect the evaluation of the opponent . each full monte carlo step ( mcs ) gives a chance to every player to enforce its strategy onto one of the neighbours once on average . the average fractions of cooperators ( ) , defectors ( ) , rewarding cooperators ( ) , and rewarding defectors ( ) on the square lattice were determined in the stationary state after a sufficiently long relaxation time . depending on the proximity to phase transition points and the typical size of emerging spatial patterns , the linear system size was varied from to , and the relaxation time was varied from to mcs to ensure that the statistical error is comparable with the line thickness in the figures . from the pairwise comparison of strategies it follows that pool rewarding is dominant . accordingly , the original 4-strategy game can be reduced to a 2-strategy game , where the and strategies compete . designating by the number of rewarding cooperators and by the number of rewarding defectors among the other players in a group , the payoffs of the two competing strategies follow . by designating the fraction of players as , the corresponding replicator equation ( eq . [ replica ] ) is obtained , where and are always fulfilled . starting from a random initial state , where both competing strategies are equally common ( ) , the solution of eq . [ replica ] indicates that the population will always terminate into the full state if , and this independently of the value of . in other words , the introduction of strategy - neutral rewards cannot help cooperators if they are not already predominant in the initial population . accordingly , the introduced rewards will not avert the tragedy of the commons when the competing strategies start the evolutionary game equally strong . however , if players are somehow able to aggregate , then a significantly new situation emerges . this condition can be reached by assuming , when rewarding cooperators form the majority in the initial population .
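the elementary steps listed above can be summarized in a short , self - contained sketch of the spatial simulation . it is a simplified stand - in rather than the code behind the reported results : the lattice size , the values of the contribution , the synergy factors and the selection uncertainty are assumptions chosen only for illustration , and the payoff routine mirrors the group - payoff sketch given earlier .

```python
import random
import math

# a minimal, self-contained sketch (not the original code) of the spatial
# public goods game with prosocial and antisocial pool rewarding on an
# L x L square lattice with periodic boundaries. parameter values below are
# illustrative assumptions, not those used for the figures of the paper.

L = 20                      # linear lattice size (small, for speed)
C_COST = 1.0                # contribution to the main pool
R_MAIN = 3.8                # synergy factor of the main public goods game
R_REWARD = 1.5              # synergy factor of both rewarding pools
K = 0.5                     # uncertainty of strategy adoptions (Fermi rule)
STRATS = ('D', 'C', 'RD', 'RC')

lattice = [[random.choice(STRATS) for _ in range(L)] for _ in range(L)]

def neighbours(x, y):
    return [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]

def group_payoff_of(x, y, members):
    """payoff of player (x, y) from the single group given by `members`."""
    strategies = [lattice[i][j] for (i, j) in members]
    g = len(strategies)
    coop = sum(s in ('C', 'RC') for s in strategies)
    n_rc = strategies.count('RC')
    n_rd = strategies.count('RD')
    s = lattice[x][y]
    pay = R_MAIN * C_COST * coop / g
    if s in ('C', 'RC'):
        pay -= C_COST
    if s == 'RC':
        factor = R_REWARD if n_rc > 1 else 1.0   # no synergy for a lone contributor
        pay += factor * C_COST - C_COST
    if s == 'RD':
        factor = R_REWARD if n_rd > 1 else 1.0
        pay += factor * C_COST - C_COST
    return pay

def total_payoff(x, y):
    """sum of payoffs over the five overlapping groups player (x, y) belongs to."""
    total = 0.0
    for cx, cy in [(x, y)] + neighbours(x, y):
        members = [(cx, cy)] + neighbours(cx, cy)
        total += group_payoff_of(x, y, members)
    return total

def monte_carlo_step():
    """one full MCS: on average every player gets a chance to spread its strategy."""
    for _ in range(L * L):
        x, y = random.randrange(L), random.randrange(L)
        nx, ny = random.choice(neighbours(x, y))
        px, pn = total_payoff(x, y), total_payoff(nx, ny)
        # player (x, y) enforces its strategy onto (nx, ny) with the Fermi probability
        if random.random() < 1.0 / (1.0 + math.exp((pn - px) / K)):
            lattice[nx][ny] = lattice[x][y]

for step in range(50):          # a short relaxation, far shorter than in production runs
    monte_carlo_step()

for s in STRATS:
    frac = sum(row.count(s) for row in lattice) / (L * L)
    print(s, round(frac, 3))
```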
in this case , the full and the full state becomes an attractor point , but the border of their basins depends sensitively on the values of .this effect is illustrated in fig .[ border ] , where we have plotted the border of the two stable solutions on the parameter plane .the lesson learned from the preceding subsection is that rewarding cooperators should initially constitute the majority of the population to survive .otherwise , if their strength in numbers is absent , rewarding defectors inevitably take over . in a structured population , however , this special initial condition can spontaneously emerge locally , during the course of evolution , without there being an obvious advantage given to rewarding cooperators at the outset .the fundamental question then is whether such a positive local solution is viable and able to spread across the whole population , or rather if it is unstable and folds back to the defector - dominated state . to clarify this ,we perform systematic monte carlo simulations to obtain the phase diagram for the whole parameter plane , as shown in fig .[ phase ] . before addressing the details , we emphasize that the reported stationary states are highly stable and fully independent of the initial conditions , which is a fundamental difference from the well - mixed solutions we have reported above . starting with the line , which implies the absence of pool rewarding , we note that cooperators survive only if the critical value of is .the fact that this value is still lower than the group size , which would be the threshold in a well - mixed population , is due to network reciprocity .the latter enables cooperators to form compact clusters and so protect themselves against being wiped out by defectors . taking this as a reference value, we can appreciate at a glance that , even in the presence of antisocial rewarding , prosocial rewarding still promotes the evolution of cooperation .however , neither defectors ( ) nor cooperators ( ) who abstain from pool rewarding can survive if .indeed , as in the well - mixed case , only rewarding defectors ( ) and rewarding cooperators ( ) remain in the stationary state , depending on the value of and .this outcome can be understood since players that do engage in pool rewarding collect payoffs that exceed their initial contributions to the rewarding pool . in terms of the relation between and players ,it is interesting to note that the introduction of strategy - neutral pool rewarding unambiguously supports the cooperative strategy . in particular , as we increase the value of and thus increase also the efficiency of rewarding , the critical value of where players are able to survive decreases steadily .likewise decreasing is the threshold for complete dominance of the strategy . 
at specific values of , for example at , it is even possible to go from the pure phase to the pure phase solely by increasing the value of . thus indeed , even if the prosocial pool rewarding scheme is accompanied by an equally effective antisocial pool rewarding scheme , in structured populations the evolution of cooperation from a neutral or even from an adverse initial state is still promoted well past the boundaries imposed by network reciprocity alone . these results are different from those obtained with random initial conditions in well - mixed populations , and they are likely to appear contradictory because there is no obvious advantage given to cooperators over defectors as the value of increases . in fact , defectors benefit just as much given that they run an identical pool rewarding scheme as cooperators . so why is the evolution of cooperation promoted ? the answer is rooted in the possible aggregation of cooperators , which can easily emerge spontaneously in a structured population . it is therefore instructive to monitor the evolution of the spatial distribution of strategies over time , as obtained for different values of . results are presented in fig . [ snapshots ] , where for clarity we have used a prepared initial state with only a stripe of rewarding cooperators ( blue ) and rewarding defectors ( pale red ) initially present , as illustrated in panel ( f ) . in all cases the synergy factor for the main public goods game was set to . the top row of fig . [ snapshots ] shows the evolution obtained at , which corresponds to the traditional , reward - free public goods game . it can be observed that the initially straight interface separating the two competing strategies disintegrates practically immediately . there is a very noticeable mixing of the two strategies , which ultimately helps defectors to occupy the larger part of the available space . here cooperators are able to survive solely due to network reciprocity , but at such a relatively small value of only small cooperative clusters are sustainable . nevertheless , we note that in a well - mixed population defectors would wipe out all cooperators at such a small value of the synergy factor . snapshots depicted in the middle row of fig . [ snapshots ] were obtained at , where thus both antisocial and prosocial pool rewarding mechanisms are at work . here the final state is still a mixed phase ( see also fig . [ phase ] ) , but the fraction of cooperators is already significantly larger than in the absence of rewarding . larger cooperative clusters are sustainable in the stationary state , which is due to an augmented interfacial stability between competing domains . in addition to traditional network reciprocity , the formation of more compact cooperative clusters is clearly further promoted by the introduction of pool rewarding , and this despite the fact that both antisocial and prosocial rewarding mechanisms are equally strong . if an even higher value of is applied , the interface that separates and players becomes impenetrable for defectors . the two strategies do not mix at all , which maintains the phalanx of cooperators . accordingly , the latter players simply spread into the region of defectors until they dominate completely . this scenario is demonstrated in the bottom row of fig . [ snapshots ] , where the final stationary state is indeed a pure phase .
as demonstrated in the middle and the bottom row of fig . [ snapshots ] , the introduction of pool rewarding supports the aggregation of akin players and results in more stable interfaces between competing domains . this fact further enhances the positive impact of network reciprocity and provides an even more beneficial condition for cooperation . this favourable consequence of rewarding can be studied directly by monitoring how the width of the mixed zone , i.e. the stripe where both strategies are present , evolves over time when the evolution starts from the prepared initial state that is depicted in panel ( f ) of fig . [ snapshots ] . according to the definition of the width of the mixed zone , in panel ( j ) , while it becomes in panels ( e ) and ( j ) . the inset of fig . [ reward ] shows how the width increases in time for different values of , increasing from the top to the bottom curve . clearly , as the effectiveness of rewarding increases , the width of the mixed zone grows more and more slowly . while for low values of the width of the mixed zone increases until eventually it covers the whole population ( see panel ( e ) of fig . [ snapshots ] for a demonstration ) , for sufficiently large values of the width remains finite , saturating and never exceeding a certain threshold . this result provides quantitative evidence that the interface between the two competing strategies remains intact , and that in fact the compact phalanx of cooperators cannot be broken by defectors . this in turn directly supports the evolution of cooperation to the point where defectors are wiped out completely , and this despite the fact that they are able to support each other by means of antisocial rewarding . based on the results presented thus far , it is possible to provide a clear rationale for why a strategy - neutral intervention , such as , in this case , the introduction of pool rewarding that at least in principle ought to benefit cooperators and defectors equally , is able to have such a biased impact on the final evolutionary outcome . in particular , pool rewarding yields an additional payoff to the players only if they aggregate and form at least partly uniform groups . this is beneficial for cooperators because it also helps them to obtain a competitive payoff from the original public goods game .
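the width of the mixed zone can be extracted from a strategy configuration in a straightforward way . the sketch below assumes a two - strategy lattice stored as a list of rows with an initially vertical interface , and simply counts the columns in which both competing strategies are still present ; this counting rule is an illustrative choice and not necessarily the exact definition used for fig . [ reward ] .

```python
def mixed_zone_width(lattice, strategies=('RC', 'RD')):
    """count columns of the lattice in which both competing strategies coexist.

    assumes the prepared initial state separated the two strategies by vertical
    interfaces, so that mixing spreads column by column. illustrative only.
    """
    n_rows = len(lattice)
    n_cols = len(lattice[0])
    width = 0
    for col in range(n_cols):
        column = {lattice[row][col] for row in range(n_rows)}
        if strategies[0] in column and strategies[1] in column:
            width += 1
    return width

# example: a 4 x 6 configuration in which mixing has reached two columns
config = [
    ['RC', 'RC', 'RC', 'RD', 'RD', 'RD'],
    ['RC', 'RC', 'RD', 'RC', 'RD', 'RD'],
    ['RC', 'RC', 'RC', 'RD', 'RD', 'RD'],
    ['RC', 'RC', 'RD', 'RD', 'RD', 'RD'],
]
print(mixed_zone_width(config))   # -> 2
```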
in other words , the long - term benefits of cooperation come into full effect . the fate of defectors , on the other hand , is under this assumption entirely different . they can benefit from the antisocial rewarding scheme if they aggregate into uniform groups , but then they are unable to exploit the efforts of cooperators in the main public goods game . if they do not aggregate , then the benefits from antisocial rewarding become void . either way , unlike cooperators , defectors are unable to enjoy the rewards as well as maintain a sustainable level of public goods . ultimately , this favours the evolution of cooperation even though the intervention on the game is strategy - neutral in that it does not favour one or the other strategy directly by granting it a higher payoff . this argument also explains why the same positive outcome is not attainable from a random initial state in well - mixed populations , where it was concluded that the possibility of antisocial rewarding utterly shatters any evolutionary benefits to cooperators that might be stemming from prosocial rewards . if the interactions among players are well - mixed , then of course neither cooperators nor defectors can aggregate locally , which is a fundamental condition to reveal the long - term benefits of cooperation in a collective enterprise , even if the population contains strategies that seek to actively promote antisocial behaviour . to corroborate our main arguments further , it is instructive to consider the studied spatial public goods game on alternative interaction networks , in particular on networks where random mixing can be controlled and adjusted deliberately . to that effect , we randomly rewire a certain fraction of links that constitute the originally considered square lattice , so that for small values of we obtain a regular small - world network , while in the limit we obtain a regular random network , as described in . essentially , we thereby allow players to expand the range of their interactions to players that are well outside their local neighbourhood . in agreement with the above - outlined arguments , this randomness in the interaction structure ought to prevent defectors from suffering the negative consequences of aggregation with their like , thus allowing them to further exploit the cooperative efforts of others whilst still enjoying the benefits of antisocial pool rewarding . we note that at high values of it is very likely that the direct neighbours of any given player are not strongly connected . the aggregation of players with the same strategies therefore loses its effect . defectors who are members of one group can also be members of completely different groups , where perhaps the exploitation of cooperators is still possible . we test this argument quantitatively in fig . [ mix ] , where we show how the critical synergy factor of the main public goods game for which the population arrives at the pure phase increases as increases . indeed , as we increase the fraction of random links , more and more defectors are able to enjoy the benefits of antisocial rewarding as well as the benefits of free - riding on the cooperative efforts of others .
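a degree - preserving way to interpolate between the square lattice and a regular random network is to swap the endpoints of randomly chosen pairs of links , which keeps every node at degree four . the sketch below builds the lattice edge list and applies such swaps to roughly a fraction of the links ; rejecting self - loops and duplicate links is a standard precaution , while the concrete parameter values are assumptions for the example .

```python
import random

def square_lattice_edges(L):
    """edge list of an L x L square lattice with periodic boundary conditions."""
    edges = []
    for x in range(L):
        for y in range(L):
            edges.append(((x, y), ((x + 1) % L, y)))   # link to the right neighbour
            edges.append(((x, y), (x, (y + 1) % L)))   # link to the upper neighbour
    return edges

def rewire(edges, q, seed=0):
    """randomly rewire roughly a fraction q of all links by pairwise end swaps.

    swapping the endpoints of two links keeps every node at degree four, so
    small q gives a regular small-world network and q -> 1 approaches a
    regular random graph. self-loops and duplicate links are rejected.
    """
    rng = random.Random(seed)
    edges = list(edges)
    edge_set = set(frozenset(e) for e in edges)
    swaps = int(q * len(edges) / 2)          # each swap touches two links
    done = 0
    while done < swaps:
        i, j = rng.randrange(len(edges)), rng.randrange(len(edges))
        (a, b), (c, d) = edges[i], edges[j]
        new1, new2 = frozenset((a, d)), frozenset((c, b))
        if len(new1) < 2 or len(new2) < 2:            # would create a self-loop
            continue
        if new1 in edge_set or new2 in edge_set:      # would duplicate a link
            continue
        edge_set.discard(frozenset((a, b)))
        edge_set.discard(frozenset((c, d)))
        edge_set.update((new1, new2))
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges

lattice_edges = square_lattice_edges(10)
sw_edges = rewire(lattice_edges, q=0.1)
print(len(lattice_edges), len(sw_edges))   # the number of links is unchanged
```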
as a countermeasure, a higher synergy factor is needed to prevent defectors from taking over .nevertheless , even at the required value of is still below the survival threshold of cooperators in a well - mixed population , and up to , when half of all the links are randomly rewired , there are still benefits to strategy - neutral pool rewarding that go beyond those offered solely by network reciprocity .we thus conclude that antisocial rewarding does not deter public cooperation in structured populations , even if the randomness of the interaction network is high .detrimental effects of strategies that seek to promote antisocial behaviour appear to be significantly lessened if the assumption of a well - mixed population is replaced by a structured population .we have studied the joint impact of antisocial and prosocial pool rewarding in a public goods game , in particular focusing on potential detrimental effects on the evolution of public cooperation that may stem from strategies that seek to actively promote antisocial behaviour .we have been motivated by the fact that strategies that promote antisocial behaviour are surprisingly common in human societies and in various inter - specific social systems , as well as by the fact that recent research on a similar variant of the public goods game in a well - mixed population has shown that antisocial rewarding can lead to the breakdown of cooperation if the latter is contingent on pool rewarding . by considering akin - like pool rewarding rather than peer rewarding, we also depart from the mainstream efforts to study the effects of rewards in structured populations , and join the recent ( and not so recent ) trend in recognizing the importance of institutions for the delivery of positive and negative incentives to cooperate in collective enterprises .our research reveals that , in structured populations , the detrimental effects of antisocial rewarding are significantly more benign than in well - mixed populations . even if the interaction network lacks local structure and has many long - range links , and in this sense approaches conditionsthat one might hope to adequately describe by a well - mixed population , antisocial rewarding still fails to upset the effectiveness of prosocial rewarding in promoting public cooperation .we have shown that the rationale behind this rather surprising result is rooted in spatial pattern formation , and in particular in the necessity of alike strategies to aggregate if they want to enjoy the benefits of rewarding . while this condition is actually beneficial for cooperators because it helps them to obtain a competitive payoff from the original public goods game , defectors suffer significantly because they are no longer able to free - ride on the cooperative efforts of others .the situation for defectors is thus a lot like sophie s choice , in that they can either enjoy the benefits of antisocial rewarding or the benefits of free - riding on the public goods , but they can not do both simultaneously . and just one of the two options is not sufficient to grant them evolutionary superiority over cooperators. 
therefore , even in the presence of antisocial rewarding , prosocial rewarding still offers benefits to cooperators that go well beyond network reciprocity alone .an interesting alternative interpretation of the studied public goods game is to consider the introduction of antisocial and prosocial pool rewarding as a strategy - neutral interference on the original rules of the social dilemma .we emphasize that neither defectors nor cooperators gain an obvious evolutionary advantage from the introduction of pool rewarding in fact , both strategies benefit exactly the same .it is therefore puzzling why , in the long run , cooperators turn out as the favoured strategy .this is in fact different from what was reported before for punishment , where available results indicate that antisocial punishment prevents the coevolution of punishment and cooperation , unless individuals have a reputation to lose , or if individuals have the freedom to leave their group and become loners .nevertheless , the results presented in our study add to the favourable aspects that positive incentives to promote cooperation have over negative incentives .the likely unwanted consequences of punishment are well know and include failure to lead to higher total earning , damage to reputation , and invitation to retaliation . summarizing, we have shown that antisocial rewarding does not necessarily deter public cooperation in structured populations , even if the randomness of the interaction network is high .this is because the delivery of rewards is contingent on the aggregation of alike strategies , which effectively prevents defectors from free - riding on the public goods . at the same time, the aggregation enhances the spatial selection for cooperation in evolutionary social dilemmas and thus helps to expose the long - term benefits of cooperative behaviour .this research was supported by the hungarian national research fund ( grant k-101490 ) , the slovenian research agency ( grant p5 - 0027 ) , and by the deanship of scientific research , king abdulaziz university ( grant 76 - 130 - 35-hici ) .he , j .- z . , wang r .- w , and li . y .-evolutionary stability in the asymmetric volunteer s dilemma , e103931 ( 2014 ) .andreoni , j. , harbaugh , w. , and vesterlund , l. the carrot or the stick : rewards , punishments , and cooperation ., 893902 ( 2003 ) . he , j .- z . ,wang , r .- w . , jensen , c. x. j. , and li , y .- t .asymmetric interaction paired with a super - rational strategy might resolve the tragedy of the commons without requiring recognition or negotiation ., 7715 ( 2015 ) .henrich , j. , mcelreath , r. , barr , a. , ensminger , j. , barrett , c. , bolyanatz , a. , cardenas , j. , gurven , m. , gwako , e. , henrich , n. , lesorogol , c. , marlowe , f. , tracer , d. , and ziker , j. costly punishment across human societies ., 17671770 ( 2006 ) .nowak , m. a. and sigmund , k. games on grids . in the geometry and ecological interactions: simplifying spatial complexity , dieckmann , u. , law , r. , and metz , j. a. j. , editors , 135150 .cambridge university press ( 2000 ) .
|
rewarding cooperation is in many ways expected behaviour from social players . however , strategies that promote antisocial behaviour are also surprisingly common , not just in human societies , but also among eusocial insects and bacteria . examples include sanctioning of individuals who behave prosocially , or rewarding of freeriders who do not contribute to collective enterprises . we therefore study the public goods game with antisocial and prosocial pool rewarding in order to determine the potential negative consequences on the effectiveness of positive incentives to promote cooperation . contrary to a naive expectation , we show that the ability of defectors to distribute rewards to their like does not deter public cooperation as long as cooperators are able to do the same . even in the presence of antisocial rewarding the spatial selection for cooperation in evolutionary social dilemmas is enhanced . since the administration of rewards to either strategy requires a considerable degree of aggregation , cooperators can enjoy the benefits of their prosocial contributions as well as the corresponding rewards . defectors when aggregated , on the other hand , can enjoy antisocial rewards , but due to their lack of contributions to the public good they ultimately succumb to their inherent inability to secure a sustainable future . strategies that facilitate the aggregation of akin players , even if they seek to promote antisocial behaviour , thus always enhance the long - term benefits of cooperation .
|
william of ockham , a great lover of simple explanations , wrote that `` a plurality is never to be posited except where necessary.'' the aim of this paper is to provide a geometric insight into this principle of economy of thought in the context of inference of parametric distributions .the task of inferring parametric models is often divided into two parts .first of all , a parametric family must be chosen and then parameters must be estimated from the available data . once a model family is specified , the problem of parameter estimation , although hard , is well understood - the typical difficulties involve the presence of misleading local minima in the error surfaces associated with different inference procedures . however , less is known about the task of picking a model family , and practitioners generally employ a judicious combination of folklore , intuition , and prior knowledge to arrive at suitable models .the most important principled techniques that are used for model selection are bayesian inference and the minimum description length principle . in this paperi will provide a geometric insight into both of these methods and i will show how they are related to each other . in section [ sec : qual ]i give a qualitative discussion of the meaning of `` simplicity '' in the context of model inference and discuss why schemes that favour simple models are desirable . in section [ sec : deriv ] i will analyze the typical behaviour of bayes rule to construct a quantity that will turn out to be a _ razor _ or an index of the simplicity and accuracy of a parametric distribution as a model of a given true distribution . in effect , the razor will be shown to be to be an ideal measure of distance " between a model family and a true distribution in the context of parsimonious model selection . in order to define this indexit is necessary to have a notion of measure and of metric on a parameter manifold viewed as a subspace of the space of probability distributions .section [ sec : geom ] is devoted to a derivation of a canonical metric and measure on a parameter manifold .i show that the natural distance on a parameter manifold in the context of model inference is the fisher information .the resulting integration measure on the parameters is equivalent to a choice of jeffreys prior in a bayesian interpretation of model selection .the derivation of jeffreys prior in this paper makes no reference to the minimum description length principle or to coding arguments and arises entirely from geometric considerations . in a certain novel sense jeffreys prior is seen to be the prior on a parameter manifold that is induced by a uniform prior on the space of distributions . 
some relationships with the work of amari et.al .in information geometry are described.( , ) in section [ sec : largen ] the behaviour of the razor is analyzed to show that empirical approximations to this quantity will enable parsimonious inference schemes .i show in section [ sec : meaning ] that bayesian inference and the minimum description length principle are empirical approximations of the razor .the analysis of this section also reveals corrections to mdl that become relevant when comparing models given a small amount of data .these corrections have the pleasing interpretation of being measures of the robustness of the model .examination of the behaviour of the razor also points the way towards certain geometric refinements to the information asymptotics of bayes rule derived by clarke and barron.( ) close connections with the index of resolvability introduced by barron and cover are also discussed.( )since the goal of this paper is to derive a geometric notion of simplicity of a model family it is useful to begin by asking why we would wish to bias our inference procedures towards simple models .we should also ask what the qualitative meaning of `` simplicity '' should be in the context of inference of parametric distributions so that we can see whether the precise results arrived at later are in accord with our intuitions . for concretenesslet us suppose that we are given a set of outcomes generated i.i.d . from a true distribution t . in some suitable sense , the empirical distribution of these events will fall with high probability within some ball around t in the space of distributions .( see figure [ fig1 ] . )now let us suppose that we are trying to model t with one of two parametric families _ 1 or _ 2 . now _ 1 and _ 2 define manifolds embedded in the space of distributions ( see figure [ fig1 ] ) and the inference task is to pick the distribution on _ 1 or _2 that best describes the true distribution . if we had an infinite number of outcomes and an arbitrary amount of time with which to perform the inference , the question of simplicity would not arise .indeed , we would simply use a consistent parameter estimation procedure to pick the model distribution on _ 1 or _2 that gives the best description of the empirical data and that would be guaranteed to give the best model of the true distribution .however , since we only have finite computational resources and since the empirical distribution for finite only approximates the true , our inference procedure has to be more careful .indeed , we are naturally led to prefer models with fewer degrees of freedom .first of all , smaller models will require less computational time to manipulate .they will also be easier to optimize since they will generically have fewer misleading local minima in the error surfaces associated with the estimation .finally , a model with fewer degrees of freedom generically will be less able to fit statistical artifacts in small data sets and will therefore be less prone to so - called `` generalization error '' .another , more subtle , preference regarding models inferred from finite data sets has to do with the `` naturalness '' of the model .suppose we are using a family m to describe a set of outcomes drawn from t .if the accuracy of the description depends very sensitively on the precise choice of parameters then it is likely that the true distribution will be poorly modelled by .(see figure [ fig2 ] . 
)this is for two reasons - 1 ) the optimal choice of parameters will be hard to find if the model is too sensitive to the choice , and 2 ) even if we succeed in getting a good description of one set of sample outcomes , the parameter sensitivity suggests that another sample will be poorly described . in geometric terms, we would prefer model families which describe a set of distributions all of which are close to the true .( see figure [ fig2 ] . ) in a sense this property would make a family a more `` natural '' model of the true distribution t than another which approaches t very closely at an isolated point .the discussion above suggests that for practical reasons inference schemes operating with a finite number of sample outcomes should prefer models that give good descriptions of the empirical data , have fewer degrees of freedom and are `` natural '' in the sense discussed above .i will refer to the first property ( good description ) as _ accuracy _ and the latter two ( fewer degrees of freedom and naturalness ) as _ simplicity_. we will see that both accuracy and simplicity of parametric models can be understood in terms of the geometry of the model manifold in the space of distributions .this geometric understanding provides an interesting complement to the minimum description length approach , which gives an implicit definition of simplicity in terms of shortest description length of the data and model .the previous section has discussed the qualitative meaning of simplicity and its practical importance for inference of distributions from a finite amount of data . in this sectionwe will construct a quantity that is an index of the accuracy and the simplicity of a model family as a description of a given true distribution .we will show in later sections that empirical approximations of this quantity which we call the _ razor _ of a model will enable consistent and parsimonious inference of parametric probability distributions .we will now motivate the definition of the razor via a construction from the bayesian approach to model inference .( in later sections we will conduct a more precise analysis of the relationship between the razor and bayes rule . )suppose we are given a collection of outcomes drawn independently from a true density t , defined with respect to lebesgue measure on .suppose also that we are given two parametric families of distributions a and b and we wish to pick one of them as the model family that we will use .the bayesian approach to this problem consists of computing the posterior conditional probabilities and and picking the family with the higher probability .the conditional probabilities depend , of course , on the specific outcomes , and so in order to understand the most likely result of an application of bayes rule we should analyze the statistics of the posterior probabilities . let a be parametrized by a set of parameters .then bayes rule tells us that : in this expression is the prior probability of the model family , is a prior density with respect to lebesgue measure on the parameter space and is a prior density on the outcome sample space .the lebesgue measure induced by the parametrization of the dimensional parameter manifold is denoted .since we are interested in comparing with , the prior is a common factor that we may omit and for lack of any better choice we take the prior probabilities of a and b to be equal and omit them . 
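the comparison just described can be made tangible with a small numerical experiment . the sketch below evaluates the posterior weight of two simple gaussian families by integrating the likelihood against a flat prior over a bounded parameter range ; the data , the prior ranges and the families themselves are assumptions made only for this illustration , not quantities taken from the paper .

```python
import numpy as np

# a small numerical illustration (with hypothetical priors and data) of the
# bayesian comparison described above: the posterior probability of a family
# is proportional to the integral of the likelihood over its parameter
# manifold, weighted by a prior density on the parameters.

rng = np.random.default_rng(0)
y = rng.normal(loc=0.0, scale=1.0, size=20)     # sample outcomes from the "true" distribution

def log_likelihood(y, mu, sigma):
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - (y - mu)**2 / (2 * sigma**2))

# family A: gaussian with unknown mean, unit variance; flat prior on mu in [-5, 5]
mus = np.linspace(-5, 5, 401)
like_a = np.array([np.exp(log_likelihood(y, mu, 1.0)) for mu in mus])
evidence_a = np.trapz(like_a / 10.0, mus)        # prior density 1/10 on the interval

# family B: gaussian with unknown mean and standard deviation;
# flat prior on mu in [-5, 5] and on sigma in [0.05, 5]
sigmas = np.linspace(0.05, 5, 200)
like_b = np.array([[np.exp(log_likelihood(y, mu, s)) for s in sigmas] for mu in mus])
evidence_b = np.trapz(np.trapz(like_b / (10.0 * 4.95), sigmas, axis=1), mus)

print("log evidence, family A:", np.log(evidence_a))
print("log evidence, family B:", np.log(evidence_b))
# with data that the simpler family already describes well, the extra volume of
# parameter space integrated over in family B typically lowers its evidence.
```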
in order to analyze the typical behaviour of equation [ eq : bayes1 ] , observe that . this is to be contrasted with the behaviour for any _ fixed _ , for which ^{(1/n ) } = \exp{- d(\theta_p\|\theta_q ) } < 1 . we next show that approaches zero in probability , where the expectation is taken in , the true distribution . to this end we write equation [ eq : bounder ] . the first term on its right hand side is the absolute value of the difference between the sample average of an iid random variable and its mean value . this approaches zero almost surely by the strong law of large numbers , and so for sufficiently large the first term is less than with probability greater than for any and . in order to show that the second term on the right hand side converges to zero in probability , note that since is equicontinuous at , given any there is a neighbourhood of within which for any and . therefore , for any set of outcomes and , . by consistency of the maximum likelihood estimator , with probability greater than for sufficiently large . consequently , for sufficiently large . putting the bounds on the two terms on the right hand side of equation [ eq : bounder ] together , and using the union of events bound , we see that for sufficiently large the probability that the deviation exceeds \epsilon is less than \delta . to complete the proof we can observe that by assumption and its derivatives with respect to are equicontinuous at , and that and the various are therefore examples of the functions of . furthermore , the limit equals d(t\|\theta^ * ) + h(t ) under the assumption that derivatives with respect to commute with expectations with respect to . on applying equation [ eq : result ] to these observations , the theorem is proved . note that lemma [ lemma : conv1 ] shows that the two leading terms in the asymptotic expansions of and approach each other with high probability . we will now obtain control over the subleading terms in these expansions . define to be the coefficient of in the asymptotic expansion of so that we can write . let be the corresponding coefficients of in the expansion of . the are identical to the with each replaced by . we can show that the approach the with high probability . [ lemma : conv2 ] let the assumptions made in lemma [ lemma : conv1 ] hold and let and . then for every integer , there is an such that . * proof : * the coefficient has been shown to approach in probability as an immediate consequence of lemma [ lemma : conv1 ] . next we consider for . every term in every such can be shown to be a finite sum over finite products of constants and random variables of the form and . we have already seen that in probability . the are the entries of the inverse of the empirical fisher information . since the inverse is a continuous function , and since in probability , in probability also . as noted before , is identical to with each replaced by . since is a finite sum of finite products of random variables that converge individually in probability to the , we can conclude that in probability . finally , we consider . we have shown that and in probability . since the determinant and the logarithm are continuous functions , we conclude that in probability . we have just shown that each term in the asymptotic expansion of approaches the corresponding term in with high probability for sufficiently large .
as an easy corollary of this lemma we obtain the following theorem : [ theorem : conv ] let the conditions necessary for lemmas [ lemma : conv1 ] and [ lemma : conv2 ] hold and take to be integers . then let consist of the terms in the asymptotic expansion of that are of orders to . for example , , using the coefficients defined above . let be the corresponding terms in the asymptotic expansion of . then for any and , and for any and , for sufficiently large . * proof : * by definition of and , . by lemma [ lemma : conv2 ] in probability . therefore , is a positive number that is upper bounded by a finite sum of random variables that individually converge to zero in probability . since the sum is finite we can conclude that also converges to zero in probability , thereby proving the theorem . note that the multiplication by ensures that the convergence is not simply due to the fact that every partial sum is individually decreasing to zero as the number of outcomes increases . any finite series of terms in the asymptotic expansion of the logarithm of the bayesian posterior probability converges in probability to the corresponding series of terms in the expansion of the razor . theorem [ theorem : conv ] precisely characterizes the sense in which the razor of a model reflects the typical asymptotic behaviour of the bayesian posterior probability of a model given the sample outcomes . we can also compare the razor to the expected behaviour of in the true distribution . clarke and barron have analyzed the expected asymptotics of the logarithm of where is the true distribution , under the assumption that belongs to the parametric family .( ) with certain small modifications of their hypotheses , their results can be extended to the situation studied in this paper where the true density need not be a member of the family under consideration . the first modification is that the expectation values evaluated in condition 1 of should be taken in the true distribution , which need not be a member of the parametric family . secondly , the differentiability requirements in conditions 1 and 2 should be applied at , which minimizes . ( clarke and barron apply these requirements at the true parameter value since they assume that is in the family . ) finally , condition 3 is changed to require that the posterior distribution of given concentrates on a neighbourhood of except for in a set of probability . under these slightly modified hypotheses it is easy to rework the analysis of to demonstrate the following asymptotics for the expected value of : we see that as , is equal to the razor up to a constant term . more careful analysis shows that this term arises from the statistical fluctuations of the maximum likelihood estimator of around . it is worth noting that while terms of and larger in depend at most on the measure ( prior distribution ) assigned to the parameter manifold , the terms of depend on the geometry via the connection coefficients in the covariant derivatives . for that reason , the terms are the leading probes of the effects that the geometry of the space of distributions has on statistical inference in a bayesian setting , and so it would be very interesting to analyze them .
normally we do not include these terms because we are interested in asymptotics , but when the amount of data is small , these correction terms are potentially important in implementing parsimonious density estimation . unfortunately it turns out to be difficult to obtain sufficiently fine control over the probabilities of events to extend the expected asymptotics beyond the terms , and so further analysis will be left to future publications . in the previous section we have seen that the bayesian conditional probability of a model given the data is an estimator of the razor . in this section we will consider the relationship of the razor to the minimum description length principle and the stochastic complexity inference criterion advocated by rissanen . the mdl approach to parametric inference was pioneered by akaike , who suggested choosing the model maximizing with the dimension of the model and the maximum likelihood estimator .( ) subsequently , schwarz studied the maximization of the bayesian posterior likelihood for densities in the koopman - darmois family and found that the bayesian decision procedure amounted to choosing the density that maximized .( ) rissanen placed this criterion on a solid footing by showing that the model attaining gives the most efficient coding rate possible of the observed sequence amongst all universal codes .( ) in this paper we have shown that the razor of a model , which reflects the typical asymptotics of the logarithm of the bayesian posterior , has a geometric interpretation as an index of the simplicity and accuracy of a given model as a description of some true distribution . in the previous section we have shown that the logarithm of the bayesian posterior can be expanded as in equation [ eq : stoch ] , up to a remainder of o(1/n ) , with the maximum likelihood parameter and . the term of that we have not explicitly written is the same as the corresponding term of the logarithm of the razor ( equation [ eq : lograzor ] ) with every replaced by . we recognize the first two terms in this expansion to be exactly the stochastic complexity advocated by rissanen as a measure of the complexity of a string relative to a particular model family . we have given a geometric meaning to the term in terms of a measurement of the rate of shrinkage of the volume in parameter space in which the likelihood of the data is significant . given our results concerning the razor and the typical asymptotics of , this strongly suggests that the definition of stochastic complexity should be extended to include the subleading terms in equation [ eq : stoch ] . indeed , rissanen has considered such an extension based on the work of clarke and barron and finds that the terms of in the expected value of equation [ eq : stoch ] remove the redundancy in the class of codes that meet the bound on the expected coding rate represented by the earlier definition of stochastic complexity .( ) essentially , in coding short sequences we are less interested in the coding rate and more interested in the actual code _ length _ . this suggests that for small the terms can be important in determining the ideal expected codelength , but it remains difficult to obtain sufficient control over the probabilities of rare events to extend rissanen 's result to this order .
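the extended criterion discussed above can be evaluated explicitly for simple families . the following sketch computes the first three terms , the maximized log - likelihood , the ( d/2 ) log ( n/2π ) term and the logarithm of the parameter - space volume measured with the square root of the determinant of the fisher information , for two gaussian families ; the data and the parameter ranges over which the jeffreys volume is computed are assumptions made for the example .

```python
import numpy as np

# an illustrative computation (not the paper's code) of the leading terms of
# the stochastic complexity discussed above,
#   - log p(y | theta_hat) + (d / 2) * log(n / (2 * pi)) + log V_J ,
# where V_J is the volume of an assumed parameter region measured with
# sqrt(det J), J being the per-sample fisher information.

rng = np.random.default_rng(1)
y = rng.normal(0.0, 1.0, size=50)
n = len(y)

def neg_log_lik(mu, sigma):
    return -np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - (y - mu)**2 / (2 * sigma**2))

# family A: unknown mean, unit variance (d = 1); the per-sample fisher info is 1,
# so the jeffreys volume of mu in [-5, 5] is 10.
mu_hat = y.mean()
sc_a = neg_log_lik(mu_hat, 1.0) + 0.5 * np.log(n / (2 * np.pi)) + np.log(10.0)

# family B: unknown mean and standard deviation (d = 2); the per-sample fisher
# matrix is diag(1 / s**2, 2 / s**2), so sqrt(det J) = sqrt(2) / s**2 and the
# jeffreys volume over mu in [-5, 5], s in [0.05, 5] is 10 * sqrt(2) * (20 - 0.2).
mu_hat_b, s_hat_b = y.mean(), y.std()
vol_b = 10.0 * np.sqrt(2.0) * (1.0 / 0.05 - 1.0 / 5.0)
sc_b = neg_log_lik(mu_hat_b, s_hat_b) + 1.0 * np.log(n / (2 * np.pi)) + np.log(vol_b)

print("stochastic complexity, family A:", round(sc_a, 2))
print("stochastic complexity, family B:", round(sc_b, 2))
# the family with the smaller value is preferred; the d/2 log n term and the
# jeffreys volume term penalize the larger family unless the fit improves enough.
```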
as mentioned earlier , the metric on the parameter manifold affects the terms of , and therefore these corrections would be geometric in nature . another approach to stochastic complexity and learning that is related to the razor and its estimators has been taken recently by yamanishi .( ) let be a hypothesis class indexed by d - dimensional real vectors . then , in a general decision theoretic setting , yamanishi defines the extended stochastic complexity of a model relative to the data , the class , and a loss function as in equation [ eq : esc ] , where and is a prior . following the work described in this paper , he defines the razor index of relative to , and a given true distribution as in equation [ eq : escrazor ] . for the case of a loss function , equations [ eq : escrazor ] and [ eq : esc ] reduce to the quantities and , which are the logarithm of the razor and its estimator . yamanishi shows that if the class of functions has finite vapnik - chervonenkis dimension , then with high probability for sufficiently large . for the case of a logarithmic loss function this result applies to the razor and its estimator as defined in this paper . there is an interesting `` physical '' interpretation of the results regarding the razor and the asymptotics of bayes rule which identifies the terms in the razor with energies , temperatures and entropies in the physical sense . many techniques for model estimation involve picking a model that minimizes a loss function , where is the data , are the parameters and is some empirical loss calculated from it . the typical behaviour of the loss function is that it grows as the amount of data grows . in the case of maximum likelihood model estimation we take , where we expect to attain a finite positive limit as under suitable conditions on the process generating the data . in this case we can make an analogy with physical systems : is like the inverse temperature and the limit of is like the energy of the system . maximum likelihood estimation corresponds to minimization of the energy and in physical terms will be adequate to find the equilibrium of the system at zero temperature ( infinite ) . on the other hand we know that at finite temperature ( finite ) the physical state of the system is determined by minimizing the free energy , where is the temperature and is the entropy . the entropy counts the volume of configurations that have energy and accounts for the fluctuations inherent in a finite temperature system . we have seen in the earlier sections that terms in the razor and in the asymptotics of bayes rule that account for the simplicity of a model arise exactly from such factors of volume . indeed , the subleading terms in the extended stochastic complexity advocated above can be identified with a `` physical '' entropy associated with the statistical fluctuations that prevent us from knowing the `` true '' parameters in estimation problems . the evaluation of the razor and the relationship to the asymptotics of bayes rule suggest how to pick the `` natural '' parametrization of a model .
in geometric terms , the `` natural '' coordinates describing a surface in the neighbourhood of a given point make the metric locally flat . the corresponding statement for the manifolds in question here is that the natural parametrization of a model in the vicinity of reduces the fisher information at to the identity matrix . this choice can also be justified from the point of view of statistics by noting that for a wide class of parametric families the maximum likelihood estimator of is asymptotically distributed as a normal density with covariance matrix . if is the identity in some parametrization , then the various components of the maximum likelihood estimator are independent , identically distributed random variables . therefore , the geometric intuitions for `` naturalness '' are in accord with the statistical intuitions . in our context , where the true density need not be a member of the family in question , there is another natural choice in the vicinity of that minimizes . we could also pick coordinates in which is reduced to the identity matrix . we have carried out an expansion of the bayesian posterior probability in terms of which maximizes . we expect that is asymptotically distributed as a normal density with covariance . the second choice of coordinates will therefore make the components of independent and identically distributed . there are numerous close relationships between the work described in this paper and previous results on minimum complexity density estimation . the seminal work of barron and cover introduced the notion of an `` index of resolvability '' which was shown to bound convergence rates of a very general class of minimum complexity density estimators . this class of estimators was constructed by considering densities which achieve the minimization in equation [ eq : mincomp ] , where the are drawn iid from some distribution , belongs to some countable list of densities , and the set of satisfy kraft 's inequality .( ) equation [ eq : mincomp ] can be interpreted as minimizing a two - stage code for the density and the data . the `` index of resolvability '' of is constructed from the expectation value in of equation [ eq : mincomp ] divided by , the number of samples , as in equation [ eq : index ] , where the are description lengths of the densities and is the relative entropy . this quantity was shown to bound the rates of convergence of the minimum complexity estimators .
in a sensethe density achieving the minimization in equation [ eq : index ] is a theoretical analog of the sample - based minimum complexity estimator arising from equation [ eq : mincomp ] .the work of barron and cover starts from the assumption that description length is the correct measure of complexity in the context of density estimation and that minimizing this complexity is a good idea .they have demonstrated several very general and beautiful results concerning the consistency of the minimum description length principle in the context of density estimation .we also know that minimum description length principles lead to asymptotically optimal data compression schemes .the goal of this paper has been to develop some alternative intuitions for the practical meaning of simplicity and complexity in terms of geometry in the space of distributions .the _ razor _ defined in this paper , like the index of resolvability , is an idealized theoretical quantity which sample - based inference schemes will try to approximate .the razor reflects the typical order - by - order behaviour of bayes rule just as the index of resolvability reflects the _ expected _ behaviour of the minimum complexity criterion of barron and cover . in order to compare the two quantities and their consequences we have to note that barron and cover do not work with families of distributions , but rather with a collection of densities .consequently , in order to carry out inference with a parametric family they must begin by discretizing the parameter manifold .the goal of this paper has been to develop a measure of the simplicity of a family as a whole and hence we do not carry out such a truncation of a parameter manifold . under the assumption that the true density is approximated by the parametric family barron and coverfind that an optimal discretization of the parameter manifold ( see and ) yields a bound on the resolvability of a parametric model of : where is the true parameter value , is the fisher information at , is a prior density on the parameter manifold and arises from sphere - packing problems and is close to for large .the asymptotic minimax value of the bound is attained by choosing the prior to be jeffreys prior .we see that aside from the factor of , the leading terms reproduce the logarithm of the razor for the case when the true density is infinitesimally distant from the parameter manifold in relative entropy sense so that .barron and cover use this bound to evaluate convergence rates of minimum complexity estimators .in contrast , this paper has presented the leading terms in an asymptotically exact expansion of the razor as an abstract measure of the complexity of a parametric model relative to a true distribution .i begin by studying what the meaning of `` simplicity '' should be in the context of bayes rule and the geometry of the space of distributions , and arrive at results that are closely related to the minimum complexity scheme .saying the models with larger razors are preferred is asymptotically equivalent to saying the models with a lower resolvability ( given an optimal discretization ) are preferred .however , we see from a comparison of the logarithm of the razor and equation [ eq : resolvbound ] , that the resolvability bound is a truncation of the series expansion of the log razor which therefore gives a finer classification of model families .the geometric formulation of this paper leads to interpretations of the various terms in the razor that give an alternative understanding of 
the terms in the index of resolvability that govern the rate of convergence of minimum complexity estimators .we have also given a systematic scheme for evaluating the razor to all orders in .this suggests that the results on optimal discretizations of parameter manifolds used in the index of resolvability should be extended to include such sub - leading terms.(, )in this paper we have set out to develop a measure of complexity of a parametric distribution as a description of a particular true distribution .we avoided appealing to the minimum description length principle or to results in coding theory in order to arrive at a more geometric understanding in terms of the embedding of the parametric model in the space of probability distributions .we constructed an index of complexity called the razor of a model whose asymptotic expansion was shown to reflect the accuracy and the simplicity of the model as a description of a given true distribution .the terms in the asymptotic expansion were given geometrical interpretations in terms of distances and volumes in the space of distributions .these distances and volumes were computed in a metric and measure given by the fisher information on the model manifold and the square root of its determinant .this metric and measure were justified from a statistical and geometrical point of view by demonstrating that in a certain sense a uniform prior in the space of distributions would induce a fisher information ( or jeffreys ) prior on a parameter manifold .more exactly , we assumed that indistinguishable distributions should not be counted separately in an integral over the model manifold and that there is a `` translation invariance '' in the space of distributions .we then showed that a jeffreys prior can be rigorously constructed as the continuum limit of a sequence of discrete priors consistent with these assumptions .a technique of integration common in statistical physics was introduced to facilitate the asymptotic analysis of the razor and it was also used to analyze the asymptotics of the logarithm of the bayesian posterior .we have found that the razor defined in this paper reflects the typical order - by - order asymptotics of the bayesian posterior probability just as the index of resolvability of barron and cover reflects the expected asymptotics of the minimum complexity criterion studied by those authors . 
in particular , any finite series of terms in the asymptotic expansion of the logarithm of the bayesian posterior converges in probability to the corresponding series of terms in the asymptotic expansion of the razor .examination of the logarithm of the bayesian posterior and its relationship to the razor also suggested certain subleading geometrical corrections to the expected asymptotics of bayes rule and corresponding corrections to stochastic complexity defined by rissanen .i would like to thank kenji yamanishi for several fruitful conversations .i have also had useful discussions and correspondence with steve omohundro , erik ordentlich , don kimber , phil chou and erhan cinlar .finally , i am grateful to curt callan for his support for this investigation and to phil anderson for helping with travel funds to the 1995 workshop on maximum entropy and bayesian methods .this work was supported in part by doe grant de - fg02 - 91er40671 .s.i.amari , o.e.barndorff-nielsen , r.e.kass , s.l.lauritzen , and c.r.rao , _ differential geometry in statistical inference _ , institute of mathematical statistics lecture note - monograph series , vol.10 , 1987 .
|
i define a natural measure of the complexity of a parametric distribution relative to a given true distribution called the _ razor _ of a model family . the minimum description length principle ( mdl ) and bayesian inference are shown to give empirical approximations of the razor via an analysis that significantly extends existing results on the asymptotics of bayesian model selection . i treat parametric families as manifolds embedded in the space of distributions and derive a canonical metric and a measure on the parameter manifold by appealing to the classical theory of hypothesis testing . i find that the fisher information is the natural measure of distance , and give a novel justification for a choice of jeffreys prior for bayesian inference . the results of this paper suggest corrections to mdl that can be important for model selection with a small amount of data . these corrections are interpreted as natural measures of the simplicity of a model family . i show that in a certain sense the logarithm of the bayesian posterior converges to the logarithm of the _ razor _ of a model family as defined here . close connections with known results on density estimation and `` information geometry '' are discussed as they arise .
|
a delay / disruption - tolerant network ( dtn ) routing in a store - carry - forward manner is useful especially for disasters , battlefields , and poor communication environments on mobile devices or ad hoc wireless connections . even if some connections are removed by failures or attacks , it is expected that a packet is delivered before long in dtn routing by re - establishing or re - generating an end - to - end route . there are many dtn routing protocols . we focus on a message - ferries scheme instead of the most trivial approach of flooding with many redundant messages . in the message - ferries scheme , ferries of communication agents move proactively to send and receive messages , and support the delivery of messages in a distributed manner . in the dtn as a collective adaptive system ( cas ) , it is very important to design the interactions among message - ferries , i.e. what types of moves or actions they perform . we have previously proposed searching and routing methods by message - ferries whose actions obey random walks on a fractal - like network . the methods are better than the conventional optimal search by biophysically inspired lévy flights on a square lattice for homogeneously distributed targets at unknown positions , because the fractal - like network structure is adaptive to the uneven distribution of communication flows between send and receive requests according to spatially inhomogeneous population densities . however , random walks are dominated by chance . for more reliable dtn routing , we consider a design principle of configuration and autonomous actions with recoverable procedures in cas . it is based on cooperative cyclic movements of message - ferries on the multi - scale quartered ( msq ) network , taking into account several advantages of its special fractal - like structure . moreover , we give an estimation procedure for the optimal service rates of message - ferries based on queueing theory . thus , we focus on consistent and simple ways to realize a collective ( routing ) function , theoretical and algorithmic validity , and adaptiveness to change , which depend on the configuration and actions in cas . we consider a geographical network construction in which the spatial distribution of nodes is naturally determined according to population in a self - organized manner . the following msq network model is generated by a self - similar tiling of faces for load balancing of communication requests in the territories of nodes . the territory is assigned by the nearest access from each user 's position to a node on a geographical map . note that communication requests are usually more often generated and received at a node whose assigned population is large in the territory , and that the territories of nodes are determined by the nearest access . thus , how to locate nodes is important for balancing the communication load as evenly as possible . [ generation procedure of a msq network ] step0 : : : set an initial triangulation of any polygonal region which consists of equilateral triangles . step1 : : : at each time , a triangle face is chosen with a probability proportional to the population in the space . step2 : : : as shown in fig . [ fig_vis_msq ] , four smaller equilateral triangles are created from the subdivision by adding three nodes , one at the midpoint of each edge of the chosen triangle . step3 : : : return to step 1 , while the network size ( the total number of nodes ) does not exceed a given size . since a step - by - step selection of a triangle face is unnecessary in the above algorithm , the subdivision process can be initiated in an asynchronously distributed manner , e.g. according to the increase of communication requests in an individual triangle area . we will discuss why the configuration of equilateral triangles is better in the next section . based on a combination of complex network science and computer science approaches , this model has several advantages : robustness of connectivity without vulnerable high - degree nodes , a bounded short - distance path between any two nodes , and efficient decentralized routing on a planar graph , which tends to avoid interference among wireless beams . complex network science , which emerged at the beginning of the 21st century , provides a new paradigm of network self - organization , e.g. based on recursive geometric growing rules for the division of a face or for attachment aimed at a chosen edge in a random or hierarchical selection . these self - organized networks , including the msq network , have the potential to be superior to the vulnerable scale - free network structure found in many real network systems , such as the internet , www , power - grids , airline networks , etc .
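the generation procedure can be sketched in a few lines of code . the version below is a simplified illustration rather than the authors ' implementation : the initial region is a single equilateral triangle instead of a general polygon , and the population of a face is emulated by a user - supplied weight function ( here simply the face area , i.e. a uniform density ) .

```python
import random

# a simplified sketch of the msq network generation procedure described above.
# the initial region and the population model are illustrative assumptions.

def midpoint(p, q):
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def generate_msq(max_nodes, pop_weight, seed=0):
    """grow a multi-scale quartered network until it has about max_nodes nodes.

    pop_weight(triangle) should return a non-negative weight proportional to
    the population inside the triangle (given by its three corner points).
    returns the set of nodes and the set of undirected edges.
    """
    rng = random.Random(seed)
    a, b, c = (0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)   # step 0: one equilateral triangle
    triangles = [(a, b, c)]
    nodes = {a, b, c}
    edges = {frozenset(e) for e in [(a, b), (b, c), (c, a)]}

    while len(nodes) < max_nodes:
        weights = [pop_weight(t) for t in triangles]         # step 1: population-biased choice
        t = rng.choices(triangles, weights=weights, k=1)[0]
        p, q, r = t
        pq, qr, rp = midpoint(p, q), midpoint(q, r), midpoint(r, p)   # step 2: add three midpoints
        nodes.update([pq, qr, rp])
        # replace each original edge by its two halves and add the inner triangle
        for (u, v), m in [((p, q), pq), ((q, r), qr), ((r, p), rp)]:
            edges.discard(frozenset((u, v)))
            edges.update([frozenset((u, m)), frozenset((m, v))])
        edges.update([frozenset((pq, qr)), frozenset((qr, rp)), frozenset((rp, pq))])
        # the chosen face is replaced by its four quarters      # step 3: repeat
        triangles.remove(t)
        triangles.extend([(p, pq, rp), (pq, q, qr), (rp, qr, r), (pq, qr, rp)])
    return nodes, edges

# example: uniform population density, i.e. weight proportional to the face area
def area_weight(t):
    (x1, y1), (x2, y2), (x3, y3) = t
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

nodes, edges = generate_msq(60, area_weight)
print(len(nodes), "nodes,", len(edges), "links")
```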
moreover , we give an estimation procedure for the optimal service rates of message - ferries from a queueing theory .thus , we focus on consistent simple ways for a collective ( routing ) function , theoretical and algorithmic validity , and adaptiveness for a change , which depend on the configuration and actions in cas .we consider a geographical network construction , in which the spatial distribution of nodes is naturally determined according to population in a self - organized manner .the following msq network model is generated by a self - similar tiling of faces for load balancing of communication requests in the territories of nodes .the territory is assigned by the nearest access from each user s position to a node on a geographical map .note that communication requests are usually more often generated and received at a node whose assigned population is large in the territory , and that the territories of nodes are determined by the nearest access .thus , how to locate nodes is important for balancing the communication load as even as possible .[ generation procedure of a msq network ] step0 : : : set an initial triangulation of any polygonal region which consists of equilateral triangles .step1 : : : at each time , a triangle face is chosen with a probability proportional to the population in the space .step2 : : : as shown in fig .[ fig_vis_msq ] , four smaller equilateral triangles are created from the subdivision by adding three nodes , at the intermediate point of each edge of the chosen triangle , respectively .step3 : : : return to step 1 , while the network size ( the total number of nodes ) does not exceed a given size .since a step - by - step selection of triangle face is unnecessary in the above algorithm , the subdivision process can be initiated in a asynchronously distributed manner , e.g. according to the increase of communication requests in an individual triangle area .we will discuss why the configuration of equilateral triangles is better in the next section . on a combination of complex network science and computer science approaches ,this model has several advantages : the robustness of connectivity without vulnerable high degree nodes , the bounded short distance path between any two nodes , and the efficient decentralized routing on a planar graph which tends to be avoided from interference among wireless beams .complex network science that emerges at the beginning of the 21st century provides a new paradigm of network self - organization , e.g. based on recursive growing geometric rule for the division of a face or for the attachment which aims at a chosen edge in a random or hierarchical selection .these self - organized networks including the msq network have a potential to be superior to the vulnerable scale - free network structure found in many real network systems , such as internet , www , power - grids , airline networks , etc .we preliminarily explain some basic elements and mechanisms to realize a message - ferries routing on a planar network .figure [ fig_msq_cycle ] shows the forward direction of each edge defined by the clockwise cycles on upper triangles and the counterclockwise cycles on lower triangles . in order to reduce the number of assignment of a direction to each edge , we omit the assignment for the center faces of triangles at any layer. 
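the subdivision procedure of steps 0 - 3 above can be prototyped in a few lines before describing the cycle structure further . the sketch below is a minimal illustration , not the authors ' implementation : the initial triangulation , the population weight function , the function names and the target size are hypothetical placeholders , and faces are stored as plain vertex tuples .

```python
# Minimal sketch of MSQ growth by population-weighted face subdivision.
# The population() callable and the initial triangle are illustrative stand-ins.
import random

def midpoint(a, b):
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

def grow_msq(initial_faces, population, max_nodes):
    """Grow the network until it has roughly max_nodes nodes.

    initial_faces : list of triangles, each a tuple of three (x, y) vertices.
    population    : callable mapping a face to a nonnegative selection weight.
    """
    faces = list(initial_faces)
    nodes = {v for f in faces for v in f}
    while len(nodes) < max_nodes:
        weights = [population(f) for f in faces]
        face = random.choices(faces, weights=weights, k=1)[0]    # step 1
        a, b, c = face
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        nodes.update([ab, bc, ca])                               # step 2: add three nodes
        faces.remove(face)                                       # replace by four children
        faces.extend([(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)])
    return nodes, faces

# example: a single initial equilateral triangle with uniform population density
tri = [((0.0, 0.0), (1.0, 0.0), (0.5, 0.75 ** 0.5))]
nodes, faces = grow_msq(tri, population=lambda f: 1.0, max_nodes=50)
```

feeding a population function that concentrates weight on part of the region reproduces the uneven , self - similar subdivision discussed above , with more nodes where more communication requests are expected .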
the layer is defined by a depth of quartered triangle faces ( or by the number of the recursive divisions ) from an initial outermost triangle .the backward direction of edge is defined in the same way by changing each of cycle to the opposite direction .we remark that a cycle is corresponded to three edges on the triangle face .moreover , these cycles represent the delivery routes of message - ferries .we consider another idea as shown in fig .[ fig_another_idea ] : each triangle face is directed by only clockwise cycles . then , both forward and backward directions of an edge are assigned without counterclockwise cycles , except the edges of the outermost triangle .the better case of either fig .[ fig_msq_cycle ] or [ fig_another_idea ] depends on situations of the utilization in what type of communication resource and environment is given .we consider the autonomous distributed delivery processes after a communication request is occurred at a source node . in a store - carry - forward manner , a ferry that visits the source node picks up a data from , and carries it to the next mediator node on the routing path , which is found by a routing algorithm as mentioned later .the mediator node stores the data into its queue . here , the pickup and store times are ignored .then , any other ferry that visits the 1st mediator node picks up the data , and carries it to the 2nd mediator node .such relaying by ferries are repeated until a ferry reaches at the terminal node and the data is delivered .we assume that each ferry autonomously moves on a triangle cycle at a turnaround rate .several ferries exist on a cycle at random interval each other , and may have various speeds .therefore , in the random process by heterogeneous message - ferries , similar pickup and carry services are available at any time .this property of the random process rationalizes an exponential distribution of service times in the next subsection . 
in this routing , direct interactions between ferries at a time are not necessary , because a node acts as a helper to temporarily store and forward data asynchronously with the ferries ' encounters . note that our approach is categorized as a multi - route and node relaying type in the ferry route design algorithms . although the message ferries act just like information frames in a token ring at the data - link layer , our scheme is quite different in that it exploits the special topology of the msq network generated by subdivision of equilateral triangles , so that a relay of data transfers on cycles by mobile agents can be performed as a routing . since the msq networks that consist of squares and equilateral triangles belong to planar graphs with the 2-spanner property , we can apply an efficient adaptive face routing algorithm in order to find a shortest distance path between any two nodes . the spanner property means that the length of a path , measured by the sum of link lengths on the path as euclidean distances , is bounded by at most twice the straight line between the two nodes . efficiently , the routing algorithm uses only local information about the edges of faces that intersect the straight line between source and terminal nodes , and the nodes of the faces are restricted to the ellipsoid whose chord length is defined by twice of the source - terminal line as shown in fig . [ fig_adaptive_face ] . we remark that the routing path can be represented by a concatenation of triangle cycles for the faces intersected by the source - terminal line . if there are several shortest distance paths with a same path length between two nodes , e.g. due to symmetry , each of the paths is equally selected at random with the rate 1/(the number of the paths ) ( the amount of communication flows between the nodes ) . figure [ fig_detour ] shows that the routing algorithm is able to find an alternative path when some edges are disconnected , although a greedy forwarding policy based on the distance from the neighbor of a current node to the terminal node supports the path finding in the parts of removed edges and the corresponding triangle faces . the dashed piecewise linear line denotes the shortest distance path in this case , and the dotted piecewise linear line denotes the original path . thus , even with several damages to the network , the routing can be performed in a connected part before the full destruction . in addition , the path finding is able to be performed reactively on - demand after a communication request occurs , and is therefore adaptive to changes of connections . ( caption of fig . [ fig_adaptive_face ] : the ellipsoid used for the adaptive face routing ; its chord length is defined by twice of the source - terminal line because of the 2-spanner property . ) we naturally assume that the occurrence of a communication request is an independent random event , but the amount is proportional to a product of populations around source and terminal nodes , and that the service to transfer the request is stochastic in a multihop manner by passing through mediator nodes on a network . thus , we apply a queueing model to the communication flows as follows . for the m / m/1 queue system in kendall notation , with arrival rate \lambda according to a poisson process and service rate \mu in an exponential distribution , the average number of remaining requests is \lambda/(\mu - \lambda) and the average end - to - end delay is 1/(\mu - \lambda) . when we consider such stochastic processes on a network , the tandem queue model in fig . [ fig_queueing_model ] is applied . each path between source and terminal nodes ( e.g.
in fig .[ fig_comm_flows ] ) is decomposed into chained edges corresponded to m / m/1 queues . in the tandem queue model , from burke s theorem , these queues are independent of each other in a poisson process .then , the average end - to - end delay in a direction is given by the sum of ether in eq.([eq_ave_e - t - e_delay ] ) , where denotes the arrival rate for the sum of flows passing through an edge corresponded to the -th cycle of message - ferry , the superscript and denote the forward and the backward directions of edge , the index number is due to the one - to - three corresponding between a cycle and three edges of triangle . if two cycles get involved in a same direction on an edge with respect to both adjacent sides of faces , the arrival rate per cycle is given by ( the sum of flows)/ .we assume by the symmetry of flows to simplify the discussion .the amount of flows on an edge is corresponded to the link load called as routing betweenness centrality .the service rate corresponds to the turnaround rate of the -th cycle of message - ferry whose direction of clockwise or counterclockwise is determined by the coincidence with the forward direction of edge on the triangle cycle ( see fig .[ fig_msq_cycle ] ) . on each cycle, the condition is necessary for the stable situation without involving -length queue . ., height=56 ] - on two paths . here , and denote source nodes , and denote terminal nodes .the edge - receives the directional flows superimposed by the amounts passing through the edge for paths , , and so on . , height=207 ] we consider the delivery cost that consists of the end - to - end delay and the service load . where denotes the total number of cycles , is an exclusive addition operator : one of , , , , , , is chosen for the -th cycle on with the corresponding or , and denotes a set of cycles with respect to the edges of forward or backward direction on the routing path between nodes and .there are trade - off relations between the 1st & 2nd rows of the end - to - end delay and the 3rd row of the service load in eq .( [ eq_cost ] ) for minimizing the delivery cost .( [ eq_cost ] ) is rewritten to the sum of cycle - based components where , , , , , , are the weights obtained from the deformation of eq .( [ eq_cost ] ) .we can solve the optimal problem for minimizing the object function of eq.([eq_cost ] ) by using the newton - raphson method , since the finding at ( or ) results in a one - dimensional search for each variable or .we remark that the solution of is given by .thus , for minimizing the cost of eq.([eq_cost ] ) , it is useful that the initial value is set as considering the condition ( [ eq_condition ] ) . denotes .since a triangle has the minimum number of edges to form a polygonal cycle , only three variables , , and affect to the optimal solution of .if we consider the proposed scheme of message - ferries routing to any other shaped cycles with more than three edges , such as on squares , on chair or sphinx type polygons , the tuning of service rates will be more constrained than the triplets in the maximum of eq .( [ eq_initial_value ] ) for minimizing the cost of eq.([eq_cost ] ) .thus , the triangle cycle is the best choice . 
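as a rough numerical companion to the optimization just described , the sketch below minimizes a simplified stand - in for eq . ( [ eq_cost ] ) : each path is treated as a chain of independent m / m/1 edges , the cost is the sum of end - to - end delays plus a linear service - load term with weight w , and each cycle rate is tuned by its own newton - raphson step . the simplified cost , the weight w , the arrival rates and all names are assumptions made only for illustration , not the authors ' exact objective or data .

```python
# Toy delivery-cost minimization over per-cycle service rates mu[i].
# Assumption: a path's delay is the sum of 1/(mu_i - lam_i) over the cycles it
# traverses, and the service load is w * sum(mu).  This is a simplified stand-in
# for eq. (eq_cost), not the original formulation.

def cost(mu, lam, paths, w=0.05):
    delay = 0.0
    for path in paths:                       # each path = list of cycle indices
        delay += sum(1.0 / (mu[i] - lam[i]) for i in path)
    return delay + w * sum(mu)

def optimize_rates(lam, paths, w=0.05, iters=200, eps=1e-3):
    mu = [l + 1.0 for l in lam]              # start above the stability bound mu_i > lam_i
    for _ in range(iters):
        for i in range(len(mu)):
            k = sum(path.count(i) for path in paths)   # path terms involving cycle i
            if k == 0:
                continue
            # d(cost)/d(mu_i) and d^2(cost)/d(mu_i)^2 for this one-dimensional search
            g = -k / (mu[i] - lam[i]) ** 2 + w
            h = 2.0 * k / (mu[i] - lam[i]) ** 3
            mu[i] = max(mu[i] - g / h, lam[i] + eps)    # Newton step, keep queue stable
    return mu

lam = [0.4, 0.7, 0.3]                        # aggregate arrival rate per cycle (placeholder)
paths = [[0, 1], [1, 2]]                     # two source-terminal paths as cycle chains
mu = optimize_rates(lam, paths)
print(mu, cost(mu, lam, paths))
```

in this toy version the optimal rate of a cycle exceeds its arrival rate by the square root of ( its path count divided by w ) , which can be used to check that the one - dimensional newton iterations converge .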
in the above discussion , the case of the backward direction is similarly treated by replacing the superscript from to . we have two strategies of ferry initiation and node initiation with monitoring and controlling of the system state for a recovery from local accidents . these situations are illustrated in figs . [ fig_recovery1 ] and [ fig_recovery2 ] . in the following , although we explain the procedures for the clockwise cycles in the case of fig . [ fig_msq_cycle ] , the same ways can be applied for the counterclockwise cycles and for the case of fig . [ fig_another_idea ] . when a white square node falls into malfunction and becomes non - interactive with a ferry , each of the three related ferries detects it within some turnarounds , and moves to the next neighbor black circle node beyond the removed white square node , as shown in fig . [ fig_recovery1 ] . then , these ferries begin to move on the larger cycle . when a ferry encounters an accident and stops moving , the black circle nodes that detect it from the non - visiting of the ferry within a time - interval notify the next neighbor black square nodes to initiate the recovery process , as shown in fig . [ fig_recovery2 ] . each of the notified black square nodes negotiates with the moving ferries on the cycles of small triangles , then a new cycle of the larger triangle is created by changing the cyclic routes for the ferries . in deeper layers , the recovery process is similarly performed as shown in fig . [ fig_recovery3 ] . the gray square node in fig . [ fig_recovery3 ] detects the non - visiting of a ferry after changing the cyclic route , and becomes inactive . thus , if an accident occurs at a shallow layer , the procedures are hierarchically propagated to deeper layers . on the other hand , we consider the following procedures when new nodes are added by subdivision of a triangle face , so that the growing msq network and the message - ferries routing on it remain a persistent living system . after the ferries detect the addition of nodes , they negotiate with the new nodes to move on one of the smaller cycles , as shown in fig . [ fig_update_growth ] . such procedures can be performed at any layer of triangle faces and even simultaneously in some distant parts . the update process is also used for the hierarchical recovery from a ferry 's accident , when the nodes work correctly . the white inactive nodes on the 2nd layer at the right of fig . [ fig_recovery3 ] act the same as newly added nodes at the left of fig . [ fig_update_growth ] ; the related message - ferries then negotiate with these nodes to move on smaller cycles , as at the right of fig . [ fig_update_growth ] . after the change of routes , the inactive nodes on the 3rd layer begin to return to their normal state in the right - down triangles at the right of fig . [ fig_recovery3 ] . these procedures propagate to deeper layers hierarchically through the unification and division of the ferries ' routes , transiently resetting and re - creating the cycles on the layers down to the layer damaged by the accident , as shown in fig . [ fig_hierarchical_recovery ] . the resetting and re - creating may seem wasteful ; however , such consistent , simple ways are better for maintaining the minimum performance , especially in unpredictable situations , because extremely complex procedures and controls can be avoided . we have proposed a design principle in cas for a dtn routing based on autonomously moving message - ferries on a special structure of msq network self - organized by iterative subdivision of faces .
in the subdivision ,nodes are located as balancing the amount of communication requests as possible which requests are generated or received at a node according to population in its territory defined by the nearest access . by considering the correspondence between each cycle on a triangle and a moving of message - ferry ,we realize a reliable and efficient message - ferries routing . in the routing , a relay of cyclic message - ferries ,the bounded path length by the -spanner property , and the adaptive face routing algorithm are key factors . as stochastic processes, we have considered the arrival of communication request and the service of transfer in the relay of cyclic message - ferries . the tandem queue model in a queueing theoryis applied for this problem setting .we have derived a calculation method of the optimal solution for minimizing the delivery cost defined by a sum of the end - to - end delay and the service load in a trade - off relation .moreover , we have considered recovery procedures for local accidents , and update procedures of ferry s cyclic routes in an incremental growth of msq network . these collective adaptive configuration and procedures will be evaluated in numerical simulations for more detailed design .we mention some further studies .our proposed method can be applied to a transport logistic system ( e.g. including by vehicles or small flying devices ) in both normal and emergent situations on a wide area . however, how to define a processing unit of communication or transportation requests is an issue .in addition , temporal disconnecting cases at simultaneous and many parts may be discussed by extending our approach in considering disaster situations .these challenges will be useful for developing a resilient network as cas in theoretical and practical points of views .this research is supported in part by grant - in - aide for scientific research in japan , no.21500072 .w. zhao , m. h. ammar , and e. zegura , `` controlling the mobility of multiple data transport ferries in a delay - tolerant network , '' _ proc . of the 24th annual joint conference of the ieee computer and communications societies ( infocom ) _ , pp.14071418 , 2005 .y. hayashi , and t. komaki , `` adaptive fractal - like network structure for efficient search of targets at unknown positions and for cooperative routing , '' _ international journal on advances in networks and services _ , vol.6 ,no.1&2 , pp.3750 , 2013 .f. kuhn , r. wattenhofer , and a. zollinger , `` asymptotically optimal geometric mobile ad - hoc routing , '' _ proc .6th acm workshop discrete algorithms and methods for communication ( dial - m02 ) _ , 2002 .
|
an interrelation between a topological design of network and efficient algorithm on it is important for its applications to communication or transportation systems . in this paper , we propose a design principle for a reliable routing in a store - carry - forward manner based on autonomously moving message - ferries on a special structure of fractal - like network , which consists of a self - similar tiling of equilateral triangles . as a collective adaptive mechanism , the routing is realized by a relay of cyclic message - ferries corresponded to a concatenation of the triangle cycles and using some good properties of the network structure . it is recoverable for local accidents in the hierarchical network structure . moreover , the design principle is theoretically supported with a calculation method for the optimal service rates of message - ferries derived from a tandem queue model for stochastic processes on a chain of edges in the network . these results obtained from a combination of complex network science and computer science will be useful for developing a resilient network system .
|
the nested error regression ( ner ) model with normality assumption for both the random effects or model error terms and the unit - level error terms has played a key role in analyzing unit - level data in small area estimation .many popular small area estimation methods have been developed under this model . in the frequentist approach , battese et al .( 1988 ) , prasad and rao ( 1990 ) , datta and lahiri ( 2000 ) , for example , derived empirical best linear unbiased predictors ( eblups ) of small area means .these authors used various estimation methods for the variance components and derived approximately accurate estimators of mean squared error ( mses ) of the eblups . on the other hand ,datta and ghosh ( 1991 ) followed the hierarchical bayesian ( hb ) approach to derive posterior means as hb predictors and variances of the small area means .while the underlying normality assumptions for all the random quantities are appropriate for regular data , they fail to adequately accommodate outliers .consequently , these frequentist / bayesian methods , are highly influenced by major outliers in the data , or break down if the outliers grossly violate distributional assumptions .sinha and rao ( 2009 ) investigated robustness , or lack thereof , of the eblups from the usual normal ner model in presence of `` representative outliers '' . according to chambers ( 1986 ) ,a representative outlier is a `` sample element with a value that has been correctly recorded and can not be regarded as unique . in particular , there is no reason to assume that there are no more similar outliers in the nonsampled part of the population . ''sinha and rao ( 2009 ) showed via simulations for the ner model that while the eblups are efficient under normality , they are very sensitive to outliers that deviate from the assumed model . to address the non - robustness issue of eblups , sinha and rao ( 2009 ) used -function , huber s proposal 2 influence function in m - estimation , to downweight contribution of outliers in the blups and the estimators of the model parameters , both regression coefficients and variance components . using m - estimation for robust maximum likelihood estimators of model parameters and robust predictors of random effects , sinha and rao ( 2009 ) for mixed linear modelsproposed a robust eblup ( reblup ) of mixed effects , which they used to estimate small area means for the ner model . by using a parametric bootstrap procedurethey have also developed estimators of the mses of the reblups .we refer to sinha and rao ( 2009 ) for details of this method .their simulations show that when the normality assumptions hold , the proposed reblups perform similar to the eblups in terms of empirical bias and empirical mse .but , in presence of outliers in the unit - level errors , while both eblups and reblups remain approximately unbiased , the empirical mses of the eblups are significantly larger than those of the reblups .datta and ghosh ( 1991 ) proposed a noninformative hb model to predict finite population small area means . in this articlewe follow the approach to finite population sampling which was also followed by datta and ghosh ( 1991 ) .our suggested model includes the treatment of ner model by datta and ghosh ( 1991 ) as a special case .our model facilitates accommodating outliers in the population and in the sample values .we replace the normality of the unit - level error terms by a two - component mixture of normal distributions , each component centered at zero . 
as in datta and ghosh ( 1991 ) , we assume normality of the small area effects. simulation results of sinha and rao ( 2009 ) indicated that there was not enough improvement in performance of the reblup procedures over the eblups when they considered outliers in both the unit - level error and the model error terms . to keep both analytical and computational challenges for our noninformative hb analysis manageable, we use a realistic framework and we restrict ourselves to the normality assumption for the random effects .moreover , the assumption of zero means for the unit - level error terms is similar to the assumption made by sinha and rao ( 2009 ) . while allowing the component of the unit - level error terms with the bigger variance to also have non - zero means to accommodate outliers might appear attractive , we note it later that it is not possible to conduct a noninformative bayesian analysis with an improper prior on the new parameter .we focus only unit - level model robust small area estimation in this article .there is a substantial literature on small area estimation based on area - level data using fay - herriot model ( see fay and herriot , 1979 ; prasad and rao , 1990 ) .the paper by sinha and rao ( 2009 ) also discussed robust small area estimation for area - level model . in another paper , lahiri and rao ( 1995 ) discussed eblup and estimation of mse under non - normality assumption for the random effects .an early robust bayesian approach for area - level model is due to datta and lahiri ( 1995 ) , where they used scale mixture of normal distributions for the random effects .it is worth mentioning that the -distributions are special cases of scale mixture of normal distributions . while datta and lahiri ( 1995 ) assumed long - tailed distributions for the random effects , bell and huang ( 2006 ) used hb method based on distribution , either only for the unit - level errors or only for the model errors .bell and huang ( 2006 ) assumed that outliers can arise either in model errors or in unit - level errors .the scale mixture of normal distributions requires specification of the mixing distribution , or in the specific case for distributions , it requires the degrees of freedom . in an attempt to avoid this specification , in a recent article chakraborty et al .( 2016 ) proposed a simple alternative via a two - component mixture of normal distributions in terms of the variance components for the model errors .the model - based approach to finite population sampling is very useful to model unit - level data in small area estimation .the ner model of battese et al .( 1988 ) is a popular model for unit - level data .suppose a finite population is partitioned into small areas , with area having units .the ner model relates , the value of a response variable for the unit in the small area , with , the value of a -component covariate vector associated with that unit , through a mixed linear model given by where all the random variables s and s are assumed independent .distributions of these variables are specified by assuming that random effects and unit - level errors . here is the regression coefficient vector .we want to predict the small area finite population mean , .battese et al .( 1988 ) , prasad and rao ( 1990 ) , among others , considered noninformative sampling , where a simple random sample of size is selected from the small area . for notational simplicitywe denote the sample by . 
to develop predictors of small areameans , these authors first derived , for known model parameters , the conditional distribution of the _ unsampled _ values , , given the sampled values . under squared error loss , the best predictor of is its mean with respect to this conditional distribution , also known as predictive distribution . in the frequentist approach , battese et al .( 1988 ) , prasad and rao ( 1990 ) obtained the eblup of by replacing in the conditional mean the unknown model parameters by their estimators using . in the bayesian approach ,on the other hand , datta and ghosh ( 1991 ) developed hb predictors of by integrating out these parameters in the conditional mean of with respect to their posterior density , which is derived based on a prior distribution on the parameters and the distribution of the sample , derived under the model ( [ ner - bhf ] ) .while the frequentist approach for the ner model under distributional assumptions in ( [ ner - bhf ] ) continues with accurate approximation and estimation of the mses of the eblups , the bayesian approach typically proceeds under some noninformative priors , and computes numerically , usually by mcmc method , the exact posterior means and variances of s . among various noninformative priors for , a popular choice is ( see , for example , datta and ghosh , 1991 ) .the standard ner model in ( [ ner - bhf ] ) is unable to explain outlier behavior of unit - level error terms . to avoid breakdown of eblups and their mses in presence of outliers , sinha and rao ( 2009 ) modified all estimating equations for the model parameters and random effects terms by robustifying various `` standardaized residuals '' that appear in estimating equations by using huber s -function , which truncates large absolute values to a certain threshold .they did not replace the working ner model in ( [ ner - bhf ] ) to accommodate outliers , but they accounted for their potential impacts on the eblups and estimated mses by downweighting large standardized residuals that appear in various estimating equations through huber s -function .their approach , in the terminology of chambers et al .( 2014 ) , may be termed _ robust projective _ , where they estimated the working model in a robust fashion and used that to project sample non - outlier behavior to the unsampled part of the model . to investigate the effectiveness of their proposal ,sinha and rao ( 2009 ) conducted simulations based on various long - tailed distributions for the random effects and/or the unit - level error terms . in one of their simulation scenarios which is reasonably simple but useful , they used a two - component mixture of normal distributions for the unit - level error terms , both components centered at zero but with unequal variances , and the component with the larger variance appears with a small probability .this reflects the regular setup of the ner model with the possibility of outliers arising as a small fraction of contamination caused by the error corresponding to the larger variance component . in this article, we incorporate this mixture distribution to modify the model in ( [ ner - bhf ] ) to develop new bayesian methods that would be robust to outliers .our proposed population level hb model is given by nm hb model : * conditional on and , * the indicator variables s are iid with and are independent of and .* conditional on and , random small area effects for . 
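to make the mixture mechanism of the nm hb model concrete before the prior specification below , the short simulation here draws unit - level data with normal random effects and a two - component normal mixture for the unit - level errors , in the spirit of the model just stated . all numeric settings ( contamination probability , variances , regression coefficients , sample sizes ) are arbitrary illustrative choices , and the snippet is not the authors ' mcmc implementation .

```python
# Illustrative simulation from a two-component normal-mixture nested error model.
# All numeric settings below are placeholders chosen only for the example.
import numpy as np

rng = np.random.default_rng(0)

m, n_i = 10, 20                 # small areas, units sampled per area
beta = np.array([2.0, 1.5])     # regression coefficients (intercept, slope)
sigma_v = 0.5                   # sd of the area random effects
sigma_e1, sigma_e2 = 0.6, 3.0   # sds of the two error components (sigma_e2 > sigma_e1)
p_out = 0.1                     # contamination (outlier) probability

x = np.column_stack([np.ones(m * n_i), rng.uniform(0, 5, m * n_i)])
area = np.repeat(np.arange(m), n_i)
v = rng.normal(0.0, sigma_v, m)                       # area random effects
z = rng.random(m * n_i) < p_out                       # outlier indicators
e = np.where(z, rng.normal(0, sigma_e2, m * n_i),
                rng.normal(0, sigma_e1, m * n_i))     # mixture unit-level errors
y = x @ beta + v[area] + e

# crude check: within-area sample means against the model-based area means
sample_means = np.array([y[area == i].mean() for i in range(m)])
model_means = np.array([(x[area == i] @ beta).mean() + v[i] for i in range(m)])
print(np.round(sample_means - model_means, 2))
```

under this generating process roughly a fraction p_out of the units come from the wide component , which is exactly the kind of representative outlier the robust hb analysis is meant to absorb .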
for simplicity, we assume the contamination probability to remain the same for all units in all small areas .gershunskaya ( 2010 ) proposed this mixture model for empirical bayes point estimation of small area means .we assume independent simple random samples of size from the small areas .for simplicity of notation , we denote the responses for the sampled units from the small area by , .the srs results in a noninformative sample and that the joint distribution of responses of the sampled units can be obtained from the nm hb model above by replacing by . this marginal distribution in combination with the prior distribution provided below will yield the posterior distribution of s , and all the parameters in the model .for the informative sampling development in small area estimation we refer to pfeffermann and sverchkov ( 2007 ) and verret et al .( 2015 ) .two components of the normal mixture distribution differ only by their variances. we will assume the variance component is larger than and is intended to explain any outliers in a data set .however , if a data set does not include any outliers , the two component variances may only minimally differ .in such situation , the likelihood based on the sample will include limited information to distinguish between these variance parameters , and consequently , the likelihood will also have little information about the mixing proportion .we notice this behavior in our application to a subset of the corn data in section [ sec : data ] . in this article , we carry out an objective bayesian analysis by assigning a noninformative prior to the model parameters .in particular , we propose a noninformative prior where we have assigned an improper prior on and a proper uniform prior on the mixing proportion .however , subjective priors could also be assigned when such subjective information is available .notably , it is possible to use some other proper prior on that may elicit the extent of contamination to the basic model to reflect prevalence of outliers .while many such subjective prior can be reasonably modeled by a beta distribution , we use a _ uniform distribution _ from this class to reflect noninformativeness or little information about this parameter .we also use traditional uniform prior on and .the improper prior distribution on the two variances for the mixture distribution has been carefully chosen so that the prior will yield conditionally proper distribution for each parameter given the other .this conditional propriety is _ necessary _ for parameters appearing in the mixture distribution in order to ensure under suitable conditions the propriety of the posterior density resulting from the hb model .the specific prior distribution that we propose above is such that the resulting marginal densities for and respectively , are and .these two densities are of the same form as that of in the regular model in ( [ pop - prior - bhf ] ) introduced earlier . indeed by setting or in our analysis , we can reproduce the hb analysis of the regular model given by ( [ ner - bhf ] ) and ( [ pop - prior - bhf ] ) .we use the nm hb model under noninformative sampling and the noninformative priors given by ( [ new - prior ] ) to derive the posterior predictive distribution of .the nm hb model and noninformative sampling that we propose here facilitate building model for _ representative outliers _ ( chambers , 1986 ) . 
according to chambers ,a representative outlier is a value of a sampled unit which is not regarded as unique in the population , and one can expect existence of similar values in the non - sampled part of the population which will influence the value of the finite population means s or the other parameters involved in the superpopulation model . following the practice of battese et al .( 1988 ) and prasad and rao ( 1990 ) , we approximated the predictand by to draw inference on the finite population small area means . here is assumed known .this approximation works well for small sampling fractions and large s .it has been noted by these authors , and by sinha and rao ( 2009 ) , that even for the case of outliers in the sample the difference between the inference results for and is negligible .our own simulations for our model also confirm that .once mcmc samples from the posterior distribution of s and have been generated , using the nm hb model the mcmc samples of from their posterior predictive distributions can be easily generated . finally ,using the relation ] = _ _ rank__- , ( b ) _ _ rank__ = _ _ rank__ , where = . proof of lemma [ thm : lm3 ] is relegated to the supplementary materials . from details in the supplementary materials we have rank = rank+ rank = .hence , we have rank = rank ( by ( a ) of lemma [ thm : lm2 ] ) .thus is positive - semidefinite and with probability 1 .let .since is symmetric and idempotent , rank = rank \left(\sum_{j=1}^{t_{1 } } w^{2}_{j } > 2d^2 + \epsilon \right) ] = rank rank ( say ) .let denote a minimizer of wrt .let be an orthogonal matrix such that , = diag , where , are the positive eigenvalues of .we use the transformation = in ( [ eqn : thm9 ] ) . we integrate with respect to and using inverse gamma density integration result . forthat we need the shape parameters and to be positive , i.e. , we need and .since we already assumed and , the shape parameters will be positive . by carrying out integration with respect to and we have , notethat , \nonumber \\ & \rightarrow \sum^{t_{1}}_{j=1 } ( w_{j}-\hat{w_{j}})^{2 } \ge \frac{1}{2}\sum^{t_{1}}_{j=1}w^{2}_{j}- \sum^{t_{1}}_{j=1}\hat{w_{j}}^2 .\nonumber \end{aligned}\ ] ] let us denote by , then for any , ^{\frac{n^{*}_{1}-p_{1}}{2 } } } \times \dfrac{\mbox{i}}{\left(\sum^{t_{1}}_{j=1}w^{2}_{j}\right)^{\frac{t_{1}-2}{2}}}\end{aligned}\ ] ] and are positive constants . + , - , and .the jacobian of transformation is given by .+ now , the right side of ( [ s.eqn:thm14 ] ) is : ^{\frac{n^{*}_{1}-p_{1}}{2}}(\alpha^2)^{\frac{t_{1}-2}{2}}}(\text{cos}\ , \theta_{1})^{t_{1}-2}\dots ( \text{cos } \,\theta_{t_{1}-2 } ) \ ,\text{i}(\alpha^{2 } > 2d^2+\epsilon ) \nonumber\end{aligned}\ ] ] ^{\frac{(n^{*}_{1}-p_{1})}{2}}}\,\mathrm{d}\alpha \nonumber \\ & \hspace{-0.2 in } = c_{1}\dfrac{1}{(y_{1}^{t}r_{2}y_{1})^{\frac{n_{1}^{*}-p_{1}}{2}}}\left ( \dfrac{2d^2 + \epsilon}{2}\right ) + c_{2}\frac{1}{(\lambda_{t_{1}})^{\frac{(n^{*}_{1}-p_{1})}{2}}}\times \frac{1}{\epsilon^{\frac{n^{*}_{1}-p_{1}-2}{2 } } } < \infty\end{aligned}\ ] ]so far we have proved that any arbitrary typical term in ( [ s.eqn:thm1 ] ) satisfying conditions ( a ) , ( b ) and ( c ) is integrable .hence , we can conclude , ( in ( [ s.eqn:thm1 ] ) ) is integrable with respect to if condition ( a ) , ( b ) and ( c ) are satisfied . at first we note that at least one of these two conditions and holds . in order to establish that ,let us assume , and , i.e. , , which contradicts to our assumption that . 
note that , = .if possible , let , that is , small areas have more than observations . since we previously assumed that ( in theorem [ thm : c11 ] ) , for all .hence the remaining small areas have at least observations overall .therefore , , which is a contradiction to the previous assumption that . with similar arguments we can establish .case - iii : , or . in this case ,hence , .again , let us assume , .now , , which contradicts to our earlier assumption that .therefore in this case , i. e. .hence , and , i.e. , condition ( a ) holds .\(b ) = , is idempotent .+ therefore , rank = rank rank = rank rank = .
|
national statistical institutes in many countries are now mandated to produce reliable statistics for important variables such as population , income , unemployment , health outcomes , etc . for many sub - populations , called small areas , defined by geography and/or demography . due to small sample sizes from these areas , direct sample - based estimates are often unreliable . model - based small area estimation methods have now been extensively used to generate reliable small area statistics by borrowing strength " from other areas and related variables through suitable models . the nested error regression model is a popular model which facilitates use of unit - level data for accurate estimation of small area means via stein s shrinkage estimation of multiple parameters . standard model - based small area estimates perform poorly in presence of outliers . to deal with outliers , a robust frequentist approach had recently been proposed by sinha and rao ( 2009 ) . they developed robust empirical best linear unbiased predictors ( reblups ) of small area means . in this article , we present a robust hierarchical bayes ( hb ) method to handle outliers in unit - level data by extending the nested error regression model . we consider a two - component scale mixture of normal distributions for the unit - level error to model outliers and present a computational approach to produce hb predictors of small area means under a noninformative prior for various model parameters . our solution is a modification of the hb prediction derived by datta and ghosh ( 1991 ) under the normality assumption . application of our method to a data set for prediction of county means for corn area , which is suspected to contain an outlier , confirms this suspicion and correctly identifies the suspected outlier , and produces robust predictors and posterior standard deviations of the small area means . this example and extensive simulations convincingly show robustness of our hb predictors to outliers . evaluation of these three procedures and chambers et al . ( 2014 ) m - quantile small area estimators via simulations shows that our proposed procedure is as good as the others in terms of bias , variability , and coverage probability of nominal credible or confidence intervals , when there are no outliers . in presence of outliers , with respect to these measures our method and sinha - rao method perform similarly , and they are better than the others . this superior frequentist performance of our hb procedure shows its dual ( bayes and frequentist ) dominance , and will be attractive to all practitioners , both bayesians and frequentists , of small area estimation . * adrijo chakraborty , gauri sankar datta and abhyuday mandal * .1 in * key words : * normal mixture ; outliers ; prediction intervals and uncertainty ; robust empirical best linear unbiased prediction ; unit - level models .
|
attracted by several analogies with the dynamics of natural systems , physicists , especially during the last decade , have attempted to understand the mechanism behind stock - market dynamics by applying techniques and ideas developed in their respective fields . in this context, possible connections between self - organized criticality ( soc ) and the stock market , or economics in general , have been investigated theoretically .the theory of soc , originally proposed in the late eighty s by bak , tang and wiesenfeld ( btw ) to explain the ubiquity of power laws in nature , is claimed to be relevant in several different areas of physics as well as biological and social sciences .the key concept of soc is that complex systems _ i.e. _ systems constituted by many non - linear interacting elements although obeying different microscopic physics , may _ naturally _ evolve toward a _critical _ state where , in analogy with physical systems near the phase transition , they can be characterized by power laws .the critical state is an ensemble of metastable configurations and the system evolves from one to another via an avalanche - like dynamics .the classical example of a system exhibiting soc behaviour is the 2d sandpile model . herethe cells of a grid are randomly filled , by an external driver , with `` sand '' .when the gradient between two adjacent cells exceeds a certain threshold a redistribution of the sand occurs , leading to more instabilities and further redistributions .the benchmark of this system , indeed of all systems exhibiting soc , is that the distribution of the avalanche sizes , their duration and the energy released , obey power laws . as such , they are _ scale - free_. in the present work we search for imprints of soc in the stock market by studying the statistics of the coherent periods ( that is , periods of high volatility ) , or _ avalanches _ , which characterize its evolution .we analyze the tick - by - tick behaviour of the nasdaq e - mini futures ( nq ) index , , from 21/6/1999 to 19/6/2002 for a total of data .in particular , we study the logarithmic returns of this index , which are defined as $ ] .possible differences between daily and high frequency data have also been taken into consideration through the analysis of the dow jones daily closures ( dj ) from 2/2/1939 to 13/4/2004 , for a total of data .this work extends our earlier work on this subject by introducing new criteria to optimize the filtering of the time series essential to separating quiescent and avalanche dynamics .the properties of the time series reconstructed from the filtered returns are also examined .the issue regarding the presence of soc in the stock market is of not only of theoretical importance , since it would lead to improvements in financial modeling , but could also enhance the predictive power of econophysics . in the next section we present the analysis methodology while in sec .[ data_analysis ] the results of the analysis are presented .discussions and conclusions are contained in the last section .the logarithmic returns of stock indices rarely display intervals of genuinely quiescent periods , yet such periods are vital to the quantitative identification of avalanche dynamics . 
as such, noise must be filtered from the time series .ideally , only gaussian noise , associated with the _ efficient _ phases of the market where the movements can be well approximated by a random walk , is to be filtered from the time - series returns .such dynamics have no memory and contrast the avalanche dynamics , _i.e. _ anomalous periods characterized by large fluctuations , that we aim to analyze .naively , one might simply set a threshold for the logarithmic returns , below which the index is deemed to be laminar .however , a simple threshold method is not appropriate , as it would include in the filtering some non - gaussian returns at small scales that are relevant in our analysis .this difficulty is illustrated in fig .[ filt_pl ] ( top ) where the probability distribution function ( pdf ) for the returns of the nq index , filtered using a fixed threshold of standard deviations is shown by the open squares . in this casebroad wings , related to events that do not follow gaussian statistics , are clearly evident .however , an important _ stylized fact _ of financial returns the _ intermittency _ of financial returns can be used to identify an appropriate filtering scheme .already , physicists have drawn analogies with the well known phenomenon of intermittency in the spatial velocity fluctuations of hydrodynamic flows .both systems display broad tails in the probability distribution function and a non - linear multifractal spectrum as a result of this feature .the empirical analogies between turbulence and the stock market suggest the existence of a temporal information cascade for the latter .this is equivalent to say that various traders require different information according to their specific strategies . in this way ,different time scales become involved in the trading process . in the present work we use a wavelet method in order to study multi - scale market dynamics .the wavelet transform is a relatively new tool for the study of intermittent and multifractal signals .this approach enables one to decompose the signal in terms of scale and time units and so to separate its coherent parts _ i.e. _ the bursty periods related to the tails of the pdf from the noise - like background .this enables an independent study of the avalanches and the quiescent intervals .the wavelet transform ( wt ) is defined as the scalar product of the analyzed signal , , at scale and time , with a real or complex `` mother wavelet '' , . in the discrete wavelet transform ( dwt )case , used herein , this reads : where the mother wavelet is scaled using a dyadic set .one chooses , for , where is the scale of the wavelet and is the number of scales involved , and the temporal coefficients are separated by multiples of for each dyadic scale , , with being the index of the coefficient at the scale .the wavelet coefficients are a measure of the correlation between the original signal , , and the mother wavelet , , at scale and time . in the analysis presented in the next section , we use the daubechies4 wavelet as the orthonormal basis .however , tests performed with different sets do not show any qualitative difference in the results .the utility of the wavelet transform in the study of turbulent signals lies in the fact that the large amplitude wavelet coefficients are related to the extreme events corresponding to the tails of the pdf , while the laminar or quiescent periods are related to the coefficients with smaller amplitude . 
in this way , it is possible to define a criterion whereby one can filter the time series of the coefficients depending on the specific needs . in our case ,we adopt the method used in ref . and originally proposed by katul et al . . in this methodwavelet coefficients that exceed a fixed threshold are set to zero , according to here denotes the average over the time parameter at a certain scale and is the threshold coefficient . in this way only the dynamics associated with the efficient phases of the market where the movements can be well approximated by a random walk are preserved .once we have filtered the wavelet coefficients an inverse wavelet transform is performed , obtaining what should approximate gaussian noise .the pdf of this filtered time series is shown , along with the original pdf in fig .[ filt_pl ] ( top ) .it is evident how the distribution of the filtered signal matches perfectly a gaussian distribution . in the same figure ( bottom ), we also show the logarithmic returns , , of the original time series after the filtered time series has been subtracted .truly quiescent periods are now evident , contrasting the the bursty periods , or avalanches , which we aim to study .the time series of logarithmic prices is reconstructed from the residuals in fig .[ ts_pl ] and is contrasted with the one reconstructed from the filtered gaussianly distributed returns .note how , in the latter case , the time series is completely independent of the actual market price . to this point , the filtering parameter , , has been constrained to 1 , thus preserving coefficients that are less than the average coefficient at a particular scale .however , one might wonder if it is possible to tune this parameter to maximally remove the uninteresting gaussian noise from the original signal .[ kur_pl ] illustrates the extent to which the filtered signal is gaussian as a function of the filtering parameter .here we report the value of the excess of kurtosis , , where is the average of the filtered time series over the period considered .for pure gaussian noise this value should be 0 . with this testwe are able to identify as optimal for both the nq and dj indices investigated here .pure noise signals are completely filtered with this simple consideration : an examination of the standard autocorrelation function of the filtered time series shows a complete temporal independence , further confirming that we have successfully filtered gaussian noise .once we have isolated and removed noise from the time series we are able to perform a reliable statistical analysis on the avalanches of the residual returns , fig .[ filt_pl ] ( bottom ) .in particular , we _ define _ an _ avalanche _ as the periods of the residual returns in which the volatility , , is above a small threshold , typically two orders of magnitude smaller than the characteristic return .a parallel between avalanches in the classical sandpile models ( btw models ) exhibiting soc and the previously defined coherent events in the stock market is straightforward . in order to test the relation between the two , we make use of some properties of the btw models . 
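the filtering step just described can be prototyped with a standard wavelet package . the sketch below uses pywavelets with a daubechies - 4 basis and the thresholding rule quoted above ( a coefficient is zeroed when its squared amplitude exceeds c times the mean squared amplitude at its scale ) , and then tunes c by minimizing the excess kurtosis of the reconstructed background . the use of pywt , the exact threshold form and the toy input are assumptions made for illustration and may differ in detail from the original analysis .

```python
# Sketch of the wavelet-based separation of the Gaussian-like background from bursts.
# Thresholding rule assumed: zero a detail coefficient when its squared amplitude
# exceeds C times the mean squared amplitude at that scale (Katul-style filter).
import numpy as np
import pywt

def filter_returns(returns, c=1.0, wavelet="db4"):
    coeffs = pywt.wavedec(returns, wavelet)            # one array per scale
    filtered = [coeffs[0]]                             # keep the approximation
    for w in coeffs[1:]:
        thr = c * np.mean(w ** 2)
        filtered.append(np.where(w ** 2 > thr, 0.0, w))
    background = pywt.waverec(filtered, wavelet)[: len(returns)]
    residual = returns - background                    # bursty part (avalanches)
    return background, residual

def excess_kurtosis(x):
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0

# toy input: Gaussian noise with a short injected burst
rng = np.random.default_rng(1)
r = rng.normal(0, 1e-3, 4096)
r[1000:1020] += rng.normal(0, 1e-2, 20)

# pick the filtering parameter c that makes the background closest to Gaussian
cs = np.linspace(0.5, 3.0, 26)
best_c = min(cs, key=lambda c: abs(excess_kurtosis(filter_returns(r, c)[0])))
background, residual = filter_returns(r, best_c)
```

with the gaussian - like background removed , the residual series carries the coherent events whose statistics are compared with the btw sandpile properties next .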
in particular, we use the fact that the avalanche size distribution and the avalanche duration are distributed according to power laws , while the laminar , or waiting times between avalanches are exponentially distributed , reflecting the lack of any temporal correlation between them .this is equivalent to stating that the triggering process has no memory .similar to the dissipated energy in a turbulent flow , we define an avalanche size , , in the market context as the integrated value of the squared volatility over each coherent event of the residual returns .the duration , , is defined as the interval of time between the beginning and the end of a coherent event , while the laminar time , , is the time elapsing between the end of an event and the beginning of the next .the results for the statistical analysis of the optimally - filtered nq and dj indices are shown in figs .[ ene_pl ] , [ dur_pl ] and [ lam_pl ] for the avalanche size , duration and laminar times , respectively .a power law relation is clearly evident for all three quantities investigated .the data analyzed herein display a distribution of laminar times different from the btw model of the classical sandpile . as explained previously , the btw model shows an exponential distribution for , derived from a poisson process with no memory .the power law distribution found here implies the existence of temporal correlations between coherent events .however this correlation may have its origin in the driver of the market , contrasting the random driver of the classical sandpile .we have investigated the possible relations between the theory of self - organized criticality and the stock market .the existence of a soc state for the latter would be of great theoretical importance , as this would impose constraints on the dynamics , as implied by the presence of a bounded attractor in the state space .moreover , it would be possible to build new predictive schemes based on this framework .after a multiscale wavelet filtering , an avalanche - like dynamics has been revealed in two samples of market data .the avalanches are characterized by a scale - free behaviour in the size , duration and laminar times .the power laws in the avalanche size and duration are a characteristic feature of a critical underlying dynamics in the system . however , the power law behavior in the laminar time distribution implies a memory process in the triggering driver that is absent in the classical btw models , where an exponential behavior is expected .remarkably , the same features have been also observed in other physical contexts .the problem of temporal correlation in the avalanches of real systems , has raised debates in the physics community , questioning the practical applicability of the soc framework .motived by this issue , several numerical studies have been devoted to including temporal correlations in soc models .a power - law distribution in the laminar times has been achieved , for example , by substituting the random driver with a chaotic one .alternatively , it has been shown that non - conservative systems , as for the case of the stock market , could be in a _ near - soc _ state where dissipation induces temporal correlations in the avalanches while the power law dynamics persist for the size and duration . 
in conclusion , a definitive relation between soc theory and the stock market has not been found .rather , we have shown that a memory process is related with periods of high activity .the memory could result from some kind of dissipation of information , similar to turbulence , or have its origin in a chaotic driver applied to the self - organized critical system . while a combination of the two processes can also be possible , it is the latter property that prevents one from ruling out the possibility that the stock market is indeed in a soc state .similar power - law behaviour has been found in the asx index for the australian market and different single stock time series .if this power - law behaviour is confirmed by further studies , this should be considered as a stylized fact of stock market dynamics .this work was supported by the australian research council .00 r. n. mantegna and h. e. stanley , _ an introduction to econophysics : correlation and complexity in finance _ , ( cambridge university press , cambridge , 1999 ) .et al . _ , ric . econ . * 47 * , 3 ( 1993 ) .et al . _ , physica a * 246 * , 430 ( 1997 ) .d. l. turcotte , rep .. phys . * 62 * , 1377 ( 1999 ) .j. feigenbaum , rep .. phys . * 66 * , 1611 ( 2003 ) .m. bartolozzi , d.b .leinweber and a.w .thomas , physica a * 365 * , 449 ( 2006 ) .et al . _ ,lett . * 59 * , 381 ( 1987 ) ; p. bak _et al . _ , phys . rev .a * 38 * , 364 ( 1988 ) . h. j. jensen , _ self - organized criticality : emergent complex behavior in physical and biological systems _ , ( cambridge university press , cambridge , 1998 ) .m. bartolozzi , d. b. leinweber and a. w. thomas , physica a * 350 * , 451 ( 2005 ) .e. caglioti and v. loreto , phys .e , * 53 * , 2953 ( 1996 ) .u. frisch , _ turbulence _ , ( cambridge university press , cambridge , 1995 ) .s. ghashghaie _et al . _ ,nature * 381 * , 767 ( 1996 ) .r. n. mantegna and h. e. stanley , physica a * 239 * , 225 ( 1997 ). m. farge , annu .fluid mech .* 24 * , 395 ( 1992 ) .et al . _ ,fluids * 11 * , 2187 ( 1999 ) .i. daubechies , comm .pure appl . math .* 41 * ( 7 ) , 909 ( 1988 ) .p. kov _ et al .space sci .* 49 * , 1219 ( 2001 ) ._ , _ wavelets in geophysics _ , pp .81 - 105 , ( academic , san diego , calif . 1994 ) g. boffetta _et al . _ ,lett . * 83 * , 4662 ( 1999 ) ._ , astrophys.j.*509 * , 448 ( 1998 ) .lett . * 86 * , 3032 ( 2001 ) .v. antoni _ et al .87 * , 045001 ( 2001 ) .a. corral , phys .92 * , 108501 ( 2004 ) . v. carbone_ et al . _ ,europhys . lett.,*58 * ( 3 ) , 349 ( 2002 ) .et al . _ ,j. , * 557 * , 891 ( 2001 ) . e. lippiniello l. de arcangelis and c. godano , europhys . lett.,*72 * , 678 ( 2005 ) .m. baiesi and c. maes , preprint : cond - mat/0505274 .de los rios _ et al .e * 56 * , 4876 ( 1997 ) .r. sanchez _ et al .* 88 * , 068302 - 1 ( 2002 ) .et al . _ ,e * 62 * , 8794 ( 2000 ) .carvalho and c.p.c .prado , phys ., * 84 * , 4006 ( 2000 ) .
|
self - organized criticality has been claimed to play an important role in many natural and social systems . in the present work we empirically investigate the relevance of this theory to stock - market dynamics . avalanches in stock - market indices are identified using a multi - scale wavelet - filtering analysis designed to remove gaussian noise from the index . here new methods are developed to identify the optimal filtering parameters which maximize the noise removal . the filtered time series is reconstructed and compared with the original time series . a statistical analysis of both high - frequency nasdaq e - mini futures and daily dow jones data is performed . the results of this new analysis confirm earlier results revealing a robust power law behaviour in the probability distribution function of the sizes , duration and laminar times between avalanches . this power law behavior holds the potential to be established as a stylized fact of stock market indices in general . while the memory process , implied by the power law distribution of the laminar times , is not consistent with classical models for self - organized criticality , we note that a power - law distribution of the laminar times can not be used to rule out self - organized critical behaviour . complex systems , econophysics , self - organized criticality , wavelets 05.65.+b , 05.45.tp , 02.70.hm , 45.70.ht , 02.70.rr
|
from the brain over the internet to social groups , complex networks are a prominent framework to describe collective behaviors in many areas .many of real - world networks exhibit topological features that can be captured neither by regular connectivity models as lattices , nor by random configurations . under this framework ,recent studies of complex brain networks have attempted to characterize the connectivity patterns observed under functional brain states .electroencephalography ( eeg ) , magnetoencephalography ( meg ) , or functional magnetic resonance imaging ( fmri ) studies have consistently shown that human brain functional networks during different pathological and cognitive neurodynamical states display small world ( sw ) attributes .sw networks are characterized by a small average distance between any two nodes while keeping a relatively highly clustered structure .thus , sw architecture is an attractive model for brain connectivity because it leads distributed neural assemblies to be integrated into a coherent process with an optimized wiring cost .another property observed in many networks is the existence of a modular organization in the wiring structure .examples range from rna structures to biological organisms and social groups .a module is currently defined as a subset of units within a network such that connections between them are denser than connections with the rest of the network .it is generally acknowledged that modularity increases robustness , flexibility and stability of biological systems .the widespread character of modular architecture in real - world networks suggests that a network s function is strongly ruled by the organization of their structural subgroups .empirical studies have lead to the hypothesis that specialized neural populations are largely distributed and linked to form a web - like structure .the emergence of any unified brain process relies on the coordination of a scattered mosaic of modules , representing functional units , separable from -but related to- other modules .characterizing the modular structure of the brain may be crucial to understand its organization during different pathological or cognitive states .previous studies over the mammalian and human brain networks have successfully used different methods to identify clusters of brain activities .some classical approaches , such as those based on principal components analysis ( pca ) and independent components analysis ( ica ) , make very strong statistical assumptions ( orthogonality and statistical independence of the retrieved components , respectively ) with no physiological justification .although a number of studies investigating the organization of anatomic and functional brain networks have shown very interesting properties of the macro - scale brain architecture , little is known about the network structure at a finer scale ( at a voxel level ) .current approaches are based on the use of a priori coarse parcellations of the cortex ; or on partial networks defined by a seed voxel .nevertheless , seed - based descriptions may fail to describe the global behavior of the brain , as they only consider the connectivity of the reference voxel . 
on the other hand ,parcellation schemes reduce the analysis to a macro - scale fixed by an _ a priori _ definition of the brain areas .further , a recent study shows that the topological organization of brain networks is affected by the different parcellation strategies applied .here we focus on a completely data - driven framework to study the connectivity of brain networks extracted directly from functional magnetic resonance imaging ( fmri ) signals at voxel resolution .a random walk - based algorithm is used to assess the modular organization of functional networks from healthy subjects in a resting - state condition .results reveal that functional brain webs present a large - scale modular organization significatively different from that arising from random configurations .further , the spatial distribution of some modules fits well with previously defined anatomo - functional brain areas , assessing a functional significance to the retrieved modules . based on the patterns of inter- and intra - modular connectivities, we also study the roles played by different brain sites .results provide a characterization of the functional scaffold that underly the coordination of specialized brain systems during spontaneous brain behavior .bold fmri data were acquired using a t2 * -weighted imaging sequence during a period of 10 minutes from 7 healthy right - handed subjects .the study was performed with written consent of the subjects and with the approval of local ethics committees . during the scan ,all subjects were instructed to rest quietly , but alert , and keep their eyes closed .500 volumes of gradient echoplanar imaging ( epi ) data depicting bold contrast were acquired . in the acquisition ,we used the following parameters : number of slices , ( interleaved ) ; slice thickness , mm ; inter - slice gap , mm ; matrix size , ; flip angle , ; repetition time ( tr ) , ms ; echo time , ms ; in - plane resolution , mm .subsequently , a high resolution structural volume was acquired via a t1weighted sequence ( axial ; matrix ; fov mm ; slice thickness ; mm ; in plane voxel size , mm ; flip angle 15 ; tr , ms , ti , ms ; te , 3.87 ms ) to provide the anatomical reference for the functional scan .all acquired brain volumes were corrected for motion and differences in slice acquisition times using the spm5 ( http://www.fil.ion.ucl.ac.uk ) software package . after correction ,fmri datasets were coregistered to the anatomical dataset and normalized to the standard template mni , enabling comparisons between subjects . due to computational limitations , normalized andcorrected functional scans were subsampled to a 4x4x4 mm resolution , yielding a total of 20898 voxels ( nodes in the network ) . to eliminate low frequency noise ( e.g. slow scanner drifts ) and higher frequency artifacts from cardiac and respiratory oscillations ,time - series were digitally filtered with a finite impulse response ( fir ) filter with zero - phase distortion ( bandwidth hz ) .a functional link between two time series and ( normalized to zero mean and unit variance ) was defined by means of the linear cross - correlation coefficient computed as , where denotes the temporal average . for the sake of simplicity, we only considered here correlations at lag zero . 
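as an illustration of the functional links just defined , a minimal sketch of the zero - lag cross - correlation computation ; the array shapes , the synthetic input and the variable names are placeholders ( the actual data set has 20898 voxels and 500 time points , which only changes the memory footprint ) :

```python
import numpy as np

def correlation_matrix(ts):
    """ts: array of shape (n_voxels, n_timepoints).
    Returns the matrix of zero-lag linear cross-correlation coefficients."""
    z = (ts - ts.mean(axis=1, keepdims=True)) / ts.std(axis=1, keepdims=True)
    # r_ij is the temporal average of the product of the normalized series
    return (z @ z.T) / ts.shape[1]

# toy example with 100 voxels and 500 time points (the scan acquired 500 volumes)
rng = np.random.default_rng(1)
ts = rng.standard_normal((100, 500))
r = correlation_matrix(ts)
print(r.shape, r[0, 0])   # diagonal elements are 1 by construction
```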
to determine the probability that correlation values are significantly higher than what is expected from independent time series , values ( denoted ) were firstly transformed by the fisher s z transform under the hypothesis of independence, has a normal distribution with expected value 0 and variance , where is the effective number of degrees of freedom .if time series are formed of independent measurements , simply equals the sample size , .nevertheless , autocorrelated time series do not meet the assumption of independence required by the standard significance test , yielding a greater type i error .in presence of auto - correlated time series must be corrected by the following approximation : where is the autocorrelation of signal at lag .other estimators of , and statistical significance tests for auto - correlated time series can be found in . to correct for multiple testing ,the false discovery rate ( fdr ) method was applied to each matrix of values . with this approach ,the threshold of significance was set such that the expected fraction of false positives is restricted to . in the construction of the networks ,a functional connection between two brain sites was assumed as an undirected and unweighted edge ( if ; and zero otherwise ) .although topological features can also be straightforwardly generalized to weighted networks , we obtained qualitative similar results ( not reported here ) for weighted networks with a functional connectivity strength between nodes given by . to characterize the topological properties of a network ,a number of parameters have been described .here we use three key parameters : mean degree , clustering index and global efficiency .briefly , the degree of node denotes the number of functional links incident with the node and the mean degree is obtained by averaging across all nodes of the network .the clustering index quantifies the local density of connections in a node s neighborhood . for a node , the clustering coefficient is calculated as the number of links between the node s neighbors divided by all of their possible connections and is defined as the average of taken over all nodes of the network .the global efficiency provides a measure of the network s capability for information transfer between nodes and is defined as the inverse of the harmonic mean of the shortest path length between each pair of nodes .figure [ netdegreecdf ] shows superimposed the degree distributions for the seven studied subjects .for each network , goodness - of - fit was compared here using maximum likelihood methods and the kolmogorov - smirnov statistic ( ks ) for four possible forms of degree distribution : a power law ; an exponential ; a truncated pareto ; and an exponentially truncated power law .the bestfitting were obtained for the truncated power law ( compared with , and for the exponential law , the truncated pareto and the power law distribution , respectively ) .estimated parameters for the truncated power law are , ..parameters for real and randomized networks : , mean degree ; , clustering index ; , global efficiency ; denotes the average of parameter obtained from equivalent randomized networks .single asterisks indicate that this parameter has a significance level of . [cols="<,^,^,^,^,^,^,^",options="header " , ] to partition the functional networks in modules , we used a random walk - based algorithm , because of its ability to manage very large networks , and its good performances in benchmark tests . 
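before turning to the modularity algorithm , a sketch of the significance test described earlier in this section ( fisher z transform with an effective number of degrees of freedom , followed by fdr thresholding ) . the particular bartlett - type autocorrelation correction and the parameter values below are assumptions ( one standard choice , not necessarily the exact estimator used here ) :

```python
import numpy as np
from scipy.stats import norm

def effective_dof(x, y, max_lag=50):
    """One standard Bartlett-type correction for autocorrelated series;
    the exact formula is an illustrative assumption."""
    n = len(x)
    def acf(s, k):
        s = (s - s.mean()) / s.std()
        return float(np.mean(s[:-k] * s[k:])) if k > 0 else 1.0
    corr_sum = sum(acf(x, k) * acf(y, k) for k in range(1, max_lag))
    return n / max(1.0, 1.0 + 2.0 * corr_sum)

def corr_pvalue(x, y, max_lag=50):
    """Two-sided p-value of the zero-lag correlation via Fisher's z transform,
    using the effective number of degrees of freedom."""
    r = np.corrcoef(x, y)[0, 1]
    z = np.arctanh(r)                       # Fisher z transform
    n_eff = effective_dof(x, y, max_lag)
    se = 1.0 / np.sqrt(n_eff - 3.0)         # common approximation for the std. error of z
    return r, 2.0 * norm.sf(abs(z) / se)

def fdr_threshold(pvals, q=0.05):
    """Benjamini-Hochberg threshold: largest sorted p(i) with p(i) <= q*i/m."""
    p = np.sort(np.asarray(pvals))
    m = len(p)
    below = p <= q * np.arange(1, m + 1) / m
    return p[below].max() if below.any() else 0.0

# toy usage on two independent series
rng = np.random.default_rng(2)
x, y = rng.standard_normal(500), rng.standard_normal(500)
print(corr_pvalue(x, y))
```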
in a nutshell ,a random walker on a connected graph tends to remain into densely connected subsets corresponding to modules .let to be the transition probability from node to node , where denotes the adjacency matrix and is the degree of the i node .this defines the transition matrix for a random walk process of length ( denoted here for simplicity ) .the metric used to quantify the structural similarity between vertices is given by using matrix identities , the distance can be written as ; where and are the eigenvalues and right eigenvectors of the matrix , respectively .this relates the random walk algorithm to current methods using spectral properties of the graphs .the current approach , however , needs not to explicitly compute the eigenvectors of the matrix ; a computation that rapidly becomes intractable when the size of the graphs exceeds some thousands of vertices . to find the modular structure ,the algorithm starts with a partition in which each node in the network is the sole member of a module .modules are then merged by an agglomerative approach based on a hierarchical clustering method .following ref . , if two modules and are merged into a new one , the transition matrix is updated as follows : , where denotes the number of elements in module .the algorithm stops when all the nodes are grouped into a single component . at each stepthe algorithm evaluates the quality of partition .the partition that maximizes is considered as the partition that better captures the modular structure of the network . in the calculation of ,the algorithm excludes small isolated groups of connected vertices without any links to the main network .however , these isolated modules are considered here as part of the network for the calculation of the topological parameters . as reported in table [ tablefornetsmodularity ] , a modular structureis confirmed by the high values of obtained for the optimal partition of the networks ( a value of is in practice a good indicator of modularity in a network ) .further , values of modularity for all the subjects were statistically significant when compared with randomized wirings ( ) . to assess the stability of the partition structure across subjects we used the rand index , which is a traditional criterion for comparison of different results provided by classifiers and clustering algorithms , including partitions with different numbers of classes or clusters . for two partitions and the rand index is defined as ; where is number of pairs of data objects belonging to the same class in and to the same class in , is number of pairs of data objects belonging to the same class in and to different classes in , is the number of pairs of data objects belonging to different classes in and to the same class in , and is number of pairs of data objects belonging to different classes in and to different classes in .thus index yields a normalized value between ( if the two partitions are randomly drawn ) and ( for identical partition structures ) . 
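a minimal sketch of the rand index as defined above ( module labels are arbitrary integers ; the pair counts a , b , c and d are accumulated explicitly , which is quadratic in the number of nodes and therefore only meant to be illustrative ) :

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Rand index between two partitions of the same node set:
    (a + d) / (a + b + c + d), where a counts pairs grouped together in both
    partitions and d counts pairs separated in both."""
    a = b = c = d = 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        if same_a and same_b:
            a += 1
        elif same_a:
            b += 1
        elif same_b:
            c += 1
        else:
            d += 1
    return (a + d) / (a + b + c + d)

# toy check: identical partitions (up to relabelling) give 1
p1 = [0, 0, 1, 1, 2, 2]
p2 = [5, 5, 7, 7, 9, 9]
print(rand_index(p1, p2))   # 1.0
```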
for our data ,the values of indicate a moderate stability of the partition structure across all subjects ( ) .to assess a functionality to the different groups of the modular brain webs , we compared the spatial distribution of the recovered modules with a previously reported anatomical parcellation of the human brain .for the sake of simplicity , we only consider here communities whose size was larger than 40 voxels ( of the size of the whole network ) , which yields modules .[ commsdistribandnet ] illustrates the spatial distribution of the modules retrieved from the averaged connectivity matrix computed over all subjects .results show that the spatial distribution of recovered modules fits well some brain systems .module 22 for instance , includes of the primary visual areas v1 , while module 5 overlaps half of the ventral visual stream ( brain areas v2 and v4 ) , and visual areas of the v3 region ( cuneus and precuneus ) are included ( ) in the module 4 .module 20 includes most of the subcortical structures caudate and thalamus nuclei ( covered at and , respectively ) .the auditory system is included by module 12 that overlaps primary and secondary areas plus associative auditory cortex ( ) .modules 11 , 16 and 21 cover most ( ) of the somatosensory and motor cortices ; and language related areas are mainly included ( ) in module 10 . importantly , some modules include distant brain locations that are functionally related , e.g. the language related areas ( modules 10 ) , the auditory system ( module 12 ) , or brain regions involved in high level visual processing tasks ( module 5 ) .this spatially distributed organization of modules rules out the possibility that modularity _ simply _ emerges as a consequence of vascular processes or local physiological activities independent of neuronal functions .modules assignment provides the basis for the classification of nodes according to their patterns of intra- and inter - modules connections , which conveys significant information about the importance of each node within the network . the within - module degree -score measures how well connected the node is to other nodes in the module , and is defined as : where is the number of links of node to other nodes in its module , is the average of over all the nodes in , and is the standard deviation of in .thus node will display a large value of if it has a large number of intra - modular connections relative to other nodes in the same module , i.e. it measures how well connected a node is to other nodes in the module ) .the extent a node connects to different modules is measured by the participation coefficient defined as : where is the number of links of node to nodes in module , and is the degree of node .the participation coefficient takes values of zero if a node has most of its connections exclusively with other nodes of its module .in contrast , if their links are distributed among different modules in the network . the role ( r ) of a node in the network can be assessed by its within - module degree and its participation coefficient , which define how the node is positioned in its own module and with respect to other modules .figure [ rolesbrainnets ] shows the distribution of the roles obtained from all the analyzed networks over the parameter space .most of the nodes in the functional brain networks ( ) can be classified as non - hubs ( indicated by the gray area in fig . 
[ rolesbrainnets]-(b ) ) , while only a minority of them are module hubs ( ) . non - hub nodes were classified as ultra - peripheral ( r1 , ) having all their links within their own modules ; peripherals ( r2 , ) with most links within their modules ; or non - hub connectors ( r3 , ) with half of their links to other modules . this distribution of roles strongly contrasts with that obtained from random configurations ( results not shown ) where most nodes have their links homogeneously distributed among all modules ( r4 and r7 ) . the anatomical distribution of the parameters and is depicted in figure [ rolesbraindistrib ] . interestingly , this representation shows that the wiring structure of the brain has a non - homogeneous organization in terms of the parameters distribution . examples of the different behaviours that can be observed are : _ i ) _ subcortical structures ( indicated by the orange arrow ) display relatively high values for both -score and parameters , indicating a dense inter- and intra - modular connectivity ; _ ii ) _ nodes belonging to brain areas associated with the primary visual system ( pointed by the red arrow ) have a scattered connectivity , yielding low values for both and parameters ; _ iii ) _ precuneus and cingulate gyrus areas ( indicated by the yellow arrow ) have a dense intra - modular connectivity ( high values of ) but few links to other modules ( low values of ) ; _ iv ) _ frontal areas and some visual regions related to associative functions ( cyan arrow ) present more connections to other modules , which is reflected in their low values of and relatively high values of . in conclusion , here we address a fundamental problem in brain networks research : whether spontaneous brain behavior relies on the coordination ( integration ) of a complex mosaic of functional brain modules ( segregation ) . by using a random walk - based method we have identified a non - random modular structure of functional brain networks . in contrast to current approaches , our procedure requires neither signal averaging in predefined brain areas , nor the definition of seed regions , nor subjective thresholds to assess the connectivities . to our knowledge , this work provides the first evidence of a modular architecture in functional human brain networks at a voxel level . the modularity analysis of large - scale brain networks unveiled a modular structure in the functional connectivity . although a one - to - one assignment of anatomical brain regions to each detected module is difficult to define , results reveal a strong correlation between the spatial distribution of the modules and some well - known functional systems of the brain , including some of the frequently reported circuits underlying the functional activity at rest . it is worth noticing that , although the functional brain connectivity is strongly shaped by the underlying anatomical wiring ( e.g.
by the white matter pathways ) , future studies are needed to clearly examine the interplay between the structural substrate and the modular connectivity inferred from brain dynamics . our findings are in full agreement with previous studies about the structure of human brain networks . first , we have confirmed that the degree distribution presents a power - law behavior over a wide range of scales , implying that there are a small number of regions with a large number of connections . we also found that brain connectivity shows a degree of clustering that is one order of magnitude higher than that of the equivalent random networks while keeping similar efficiency values , suggesting that spontaneous brain behavior involves an optimized ( in a sw sense ) functional integration of distant brain regions . further , the intrinsic non - random modular structure suggested by the high values of the clustering index of brain networks was confirmed by a high degree of modularity obtained for the ensemble of subjects . although the mechanisms by which modularity emerges in complex networks are not well understood , it is widely believed that the modular structure of complex networks plays a critical role in their functionality . functional brain modules can be related to local -segregated- information processing , while inter - modular connections allow the integration of distant anatomo / functional brain regions . on the other hand , the sw and scale - free characteristics of brain webs provide an optimal organization for the stability , robustness , and transfer of information in the brain . the modular structure therefore constitutes an attractive model for the brain organization , as it supports the coexistence of a functional segregation of distant specialized areas and their integration during spontaneous brain activity . although the study of anatomical brain networks is a current subject of research , we suggest that a modular description might provide new insights into the understanding of human brain connectivity during pathological or cognitive states .
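as a concrete supplement to the node - role analysis above , a minimal sketch of the within - module degree z - score and the participation coefficient , following the standard definitions matching the formulas given earlier ; the adjacency - matrix input format and the toy example are assumptions :

```python
import numpy as np

def node_roles(adj, modules):
    """Within-module degree z-score and participation coefficient for an
    undirected, unweighted network.
    adj: (n, n) 0/1 symmetric adjacency matrix; modules: length-n label array."""
    adj = np.asarray(adj)
    modules = np.asarray(modules)
    degree = adj.sum(axis=1)
    z = np.zeros(len(modules))
    p = np.ones(len(modules))
    for m in np.unique(modules):
        members = modules == m
        kappa = adj[:, members].sum(axis=1)        # links into module m, for every node
        kappa_in = kappa[members]                  # intra-module degree of m's members
        mu, sigma = kappa_in.mean(), kappa_in.std()
        z[members] = (kappa_in - mu) / sigma if sigma > 0 else 0.0
        p -= (kappa / np.maximum(degree, 1)) ** 2  # participation: 1 - sum_s (k_is/k_i)^2
    return z, p

# toy usage: two dense blocks of four nodes joined by a single link
blocks = np.kron(np.eye(2, dtype=int), np.ones((4, 4), dtype=int))
np.fill_diagonal(blocks, 0)
blocks[0, 4] = blocks[4, 0] = 1
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
z, p = node_roles(blocks, labels)
print(np.round(z, 2), np.round(p, 2))
```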
|
modular structure is ubiquitous among real - world networks from related proteins to social groups . here we analyze the modular organization of brain networks at a large - scale ( voxel level ) extracted from functional magnetic resonance imaging ( fmri ) signals . by using a random walk - based method , we unveil the modularity of brain - webs , and show modules with a spatial distribution that matches anatomical structures with functional significance . the functional role of each node in the network is studied by analyzing its patterns of inter- and intra - modular connections . results suggest that the modular architecture constitutes the structural basis for the coexistence of functional integration of distant and specialized brain areas during normal brain activities at rest . * there is a growing interest in studying the connectivity patterns extracted from brain signals during different mental states . current studies suggest that brain architecture leads neural assemblies to be coordinated with an optimized wiring cost . brain webs coordinate a mosaic of brain modules , carrying out specific functional tasks and integrated into a coherent process . we analyze the modular structure of brain networks extracted from fmri signals in humans at rest . using a random walk - based method we identify a non - random modular architecture of brain connectivity . this approach is fully data driven and relies on no a priori choice of a seed brain region or signal averaging in predefined brain areas . the analysis of intra- and inter - modules connections leads us to relate a node s connectivity to a local information processing , or to the integration of distant anatomo / functional brain regions . we also find that the spatial distribution of the retrieved modules matches with brain areas associated with specific functions , assessing a functional significance to the modules . in our conclusions , we argue that a modular characterization of the functional brain webs constitutes an interesting model for the study of brain connectivity during different pathological or cognitive states . *
|
consider a graph with vertex set , and edge set . in the following, we shall denote by the set of neighbors of , and assume ( i.e. is locally finite ) . to each vertex we assign an initial spin .the vector of all initial spins is denoted by .configuration at subsequent times are determined according to the following majority update rule .if is the set of neighbors of node , we let when .if , then we let in order to construct this process , we associate to each vertex , a sequence of i.i.d . bernoulli variables , whereby is used to break the ( eventual ) tie at time .a realization of the process is then determined by the triple , with . in this workwe will study the asymptotic dynamic of this process when is an infinite regular tree of degree .let be the law of the majority process where , in the initial configuration , the spins are i.i.d . with .we define the _ consensus threshold _ as the smallest bias in the initial condition such that the dynamics converges to the all configuration here convergence to the all- configuration is understood to be pointwise .we shall call the _ consensus threshold _ of the -regular tree .two simple observations will be useful in the following : * monotonicity . *denote by the natural partial ordering between configurations ( i.e. if and only if for all ) .then the majority dynamics preserves this partial ordering .more precisely , given two copies of the process with initial conditions , there exists a coupling between them such that for all . * symmetry .* let denote the configuration obtained by inverting all the spin values in . thentwo copies of the process with initial conditions can be coupled in such a way that for all .it immediately follows from these properties that it is not too difficult to show that for all .a simple quantitative estimate is provided by the next result .[ lemma : lessthanone ] for all , denote by the threshold density for the appearance of an infinite cluster of occupied vertices in bootstrap percolation with threshold . then a numerical evaluation of this upper bound yields , , .it is possible to show that .we will prove a much tighter bound in theorem [ thm : upperbound ] .the next lemma simplifies the task of proving upper bounds on for large .[ lemma : local ] assume to be the regular tree of degree .there exists such that for , if , then .the proofs of the lemmas [ lemma : lessthanone ] and [ lemma : local ] can be found in section [ sec : basiclemmas ] .notice that the consensus threshold is well defined for a general infinite graph . if is finite , then trivially : indeed for any there is a positive probability that is the all configurations .however , given a sequence of graphs with increasing number of vertices , one can define a threshold function such that with probability for .it is an open question to determine which graph sequences exhibit a sharp threshold ( in the sense that has a limit independent of as ) .we carried out numerical simulations with random regular graphs of degree . in this case, there appears to be a sharp threshold bias that converges , as to a limit . above this threshold ,the dynamics converges with high probability to all . 
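a minimal sketch of the kind of simulation just described : synchronous majority updates with uniform tie - breaking on a random regular graph . the graph generator ( a plain random regular graph rather than the modified configuration model used below ) , the sizes , biases and the consensus criterion are illustrative choices :

```python
import numpy as np
import networkx as nx

def run_majority(n=1000, k=5, bias=0.05, steps=200, seed=0):
    """Synchronous majority dynamics on a random k-regular graph.
    Spins start i.i.d. with P(+1) = 1/2 + bias; ties are broken by a fair coin."""
    rng = np.random.default_rng(seed)
    g = nx.random_regular_graph(k, n, seed=seed)
    neighbors = [list(g.neighbors(v)) for v in range(n)]
    spins = np.where(rng.random(n) < 0.5 + bias, 1, -1)
    for _ in range(steps):
        field = np.array([spins[nb].sum() for nb in neighbors])
        new = np.sign(field)
        ties = new == 0                                   # only possible for even k
        new[ties] = rng.choice([-1, 1], size=ties.sum())
        if np.array_equal(new, spins):                    # reached a fixed point
            break
        spins = new
    return spins.mean()                                   # 1.0 means all-plus consensus

# rough scan: fraction of runs reaching the all-plus configuration vs initial bias
for bias in (0.01, 0.03, 0.05, 0.08):
    hits = sum(run_majority(bias=bias, seed=s) == 1.0 for s in range(10))
    print(bias, hits / 10)
```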
below this threshold ,the dynamics converges instead to either a stationary point or to a length - two cycle .threshold biases found for small values of were : , generated according to a modified _ configuration model _ ( with eventual self - edges and double edges rewired randomly ) .the initial bias was implemented by drawing a uniformly random configuration with spins . ] [ cols="<,<",options="header " , ] as observed in the introduction by symmetry and monotonicity .therefore the lower bounds are non - trivial only if .it turns out that for any fixed , becomes negative at large .we present in the same table the asymptotic behaviors .nevertheless , for , our lower bounds provide good estimates of the actual threshold .the values of are much lower for even values of . for example , for , , , , , respectively .this is as expected , since our requirement of an alternating -core is more stringent for even .on the other hand , numerical simulations suggest that for small even values of .this work was partially supported by a terman fellowship , the nsf career award ccf-0743978 and the nsf grant dms-0806211 .y. k. is supported by a 3com corporation stanford graduate fellowship .the proof repeats the arguments of , while keeping track explicitly of error terms .we will therefore focus on the differences with respect to .we will indeed prove a result that is slightly stronger than theorem [ thm : sum_approx ] .apart from a trivial rescaling , the statement below differs from theorem [ thm : sum_approx ] in that we allow for larger deviations from the mean .since is bounded away from for bounded , the error estimate in the last statement is equivalent to the one in theorem [ thm : sum_approx ] . for our claimis implied by the multi - dimensional berry - esseen theorem , and we will therefore focus on .recall that the bernoulli decomposition of allows to write , for and where is a lattice random variable , for , and is a collection of i.i.d .bernoulli random variables independent from and . finally , it is easy to check that . as in , we let , for , be the probability mass function of the vector .it then follows immediately that for some numerical constant .this is is a slight generalization of lemma 2.2 of , and follows again immediately from the same estimates on the combinatorial coefficients used in .( theorem [ thm : sum_approx2 ] ) for as in the statement and , let then , by lemma [ lemma : lipclt ] , there exists a constant such that on the other hand , by the berry - esseen theorem finally , it is easy to see that is lipschitz continuous in with lipschitz constant bounded uniformly in , whence the proof is completed by putting together eqs .( [ eq : cltfin1 ] ) , ( [ eq : cltfin2 ] ) and ( [ eq : cltfin3 ] ) , using , , and setting .d. aldous and j. m. steele , `` the objective method : probabilistic combinatorial optimization and local weak convergence , '' in _ probability on discrete structures _, h. kesten ed ., pp . 1 - 72 , springer verlag ( 2003 ) j. balogh , y. peres and g. pete , `` bootstrap percolation on infinite trees and non - amenable groups , '' combinatorics , probability and computing , 15 ( 2006 ) 715 - 730 a. bandyopadhyay , d. gamarnik , `` counting without sampling. new algorithms for enumeration problems using statistical physics '' , proceedings of the 17th acm - siam symposium on discrete algorithm , miami , flo . , 2007 .j. p. bouchaud , l. f. cugliandolo , j. kurchan , and m. 
mezard , `` out of equilibrium dynamics in spin - glasses and other glassy systems , '' in a. p. young , ed ., _ spin glass dynamics and random fields _ , world scientific ( 1997 ) f. krzakala , a. montanari , f. ricci - tersenghi , g. semerjian , and l. zdeborova , `` gibbs states and the set of solutions of random constraint satisfaction problems , '' proc . natl ., 104 , 10318 - 10323 ( 2007 ) f. krzakala , a. rosso , g. semerjian , and f. zamponi , `` on the path integral representation for quantum spin models and its application to the quantum cavity method and to monte carlo simulations , '' phys . rev .b 78 , 134428 ( 2008 )
|
an elector sits on each vertex of an infinite tree of degree , and has to decide between two alternatives . at each time step , each elector switches to the opinion of the majority of her neighbors . we analyze this majority process when opinions are initialized to independent and identically distributed random variables . in particular , we bound the threshold value of the initial bias such that the process converges to consensus . in order to prove an upper bound , we characterize the process of a single node in the large -limit . this approach is inspired by the theory of mean field spin - glass and can potentially be generalized to a wider class of models . we also derive a lower bound that is non - trivial for small , odd values of .
|
in the past two decades , network science has successfully contributed to many diverse scientific fields . indeed , many complex systems can be represented as networks , ranging from biochemical systems , through the internet and the world wide web , to various social systems .economics also made use of the concepts of network science , gaining additional insight to the more traditional approach .although a large volume of financial data is available for research , information about the everyday transactions of individuals is usually considered sensitive and is kept private . in this paper , we analyze bitcoin , a novel currency system , where the complete list of transactions is accessible . we believe that this is the first opportunity to investigate the movement of money in such detail . bitcoin is a decentralized digital cash system , there is no single overseeing authority .the system operates as an online peer - to - peer network , anyone can join by installing a client application and connecting it to the network .the unit of the currency is one bitcoin ( abbreviated as btc ) , and the smallest transferable amount is . instead of having a bank account maintained by a central authority ,each user has a bitcoin address , that consists of a pair of public and private keys .existing bitcoins are associated to the public key of their owner , and outgoing payments have to be signed by the owner using his private key . to maintain privacy , a single user may use multiple addresses .each participating node stores the complete list of previous transactions .every new payment is announced on the network , and the payment is validated by checking consistency with the entire transaction history . to avoid fraud ,it is necessary that the participants agree on a single valid transaction history .this process is designed to be computationally difficult , so an attacker can only hijack the system if he possesses the majority of the computational power of participating parties . therefore the system is more secure if more resources are devoted to the validation process . to provide incentive ,new bitcoins are created periodically and distributed among the nodes participating in these computations .another way to obtain bitcoins is to purchase them from someone who already has bitcoins using traditional currency ; the price of bitcoins is completely determined by the market . the bitcoin system was proposed in 2008 by satoshi nakamoto , and the system went online in january 2009 . for over a year , it was only used by a few enthusiasts , and bitcoins did not have any real - world value .a trading website called mtgox was started in 2010 , making the exchange of bitcoins and conventional money significantly easier .more people and services joined the system , resulting a steadily growing exchange rate .starting from 2011 , appearances in the mainstream media drew wider public attention , which led to skyrocketing prices accompanied by large fluctuations ( see fig .[ addr1 ] ) . since the inception of bitcoin over 17 million transactions took place , andcurrently the market value of all bitcoins in circulation exceeds 1 billion dollars .see the methods section for more details of the system and the data used in our analysis .we download the complete list of transactions , and reconstruct the transaction network : each node represents a bitcoin address , and we draw a directed link between two nodes if there was at least one transaction between the corresponding addresses . 
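a minimal sketch of this network construction ; the flat ( timestamp , sender , receiver , amount ) record format is an assumption for illustration ( real transactions have multiple inputs and outputs , which must be unrolled into address pairs first ) :

```python
import networkx as nx

def build_transaction_network(transactions):
    """transactions: iterable of (timestamp, sender_addr, receiver_addr, amount)
    already flattened to address pairs.  Nodes are addresses; a directed edge is
    drawn the first time a pair of addresses transacts, as described in the text."""
    g = nx.DiGraph()
    for t, src, dst, amount in transactions:
        if g.has_edge(src, dst):
            g[src][dst]["n_tx"] += 1
            g[src][dst]["total_btc"] += amount
        else:
            g.add_edge(src, dst, first_time=t, n_tx=1, total_btc=amount)
    return g

# toy usage
tx = [(1, "A", "B", 2.0), (2, "B", "C", 0.5), (3, "A", "B", 1.0)]
g = build_transaction_network(tx)
print(g.number_of_nodes(), g.number_of_edges(), g["A"]["B"])
```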
in addition to the topology, we also obtain the time and amount of every payment .therefore , we are able to analyze both the evolution of the network and the dynamical process taking place on it , i.e. the flow and accumulation of bitcoins .to characterize the underlying network , we investigate the evolution of basic network characteristics over time , such as the degree distribution , degree correlations and clustering . concerning the dynamics, we measure the wealth statistics and the temporal patterns of transactions .to explain the observed degree and wealth distribution , we measure the microscopic growth statistics of the system .we provide evidence that preferential attachment is an important factor shaping these distributions .preferential attachment is often referred to as the `` rich get richer '' scheme , meaning that hubs grow faster than low - degree nodes . in the case of bitcoin ,this is more than an analogy : we find that the wealth of already rich nodes increases faster than the wealth of nodes with low balance ; furthermore , we find positive correlation between the wealth and the degree of a node .bitcoin is an evolving network : new nodes are added by creating new bitcoin addresses , and links are created if there is a transaction between two previously unconnected addresses .the number of nodes steadily grows over time with some fluctuations ; especially noticeable is the large peak which coincides with the first boom in the exchange rate in 2011 ( fig .[ addr1 ] ) .after five years bitcoin now has nodes and links . to study the evolution of the network we measure the change of network characteristics in function of time .we identify two distinct phases of growth : ( i ) the _ initial phase _ lasted until the fall of 2010 , in this period the system had low activity and was mostly used as an experiment .the network measures are characterized by large fluctuations .( ii ) after the initial phase the bitcoin started to function as a real currency , bitcoins gained real value .the network measures converged to their typical value by mid-2011 and they did not change significantly afterwards .we call this period the _ trading phase_. and days . ] .the data is log - binned for ease of visual inspection , the power - law is fitted on the original data . ] .the data is log - binned for ease of visual inspection , the power - law is fitted on the original data . ]we first measure the degree distribution of the network .we find that both the in- and the outdegree distributions are highly heterogeneous , and they can be modeled with power - laws .figures [ indegdist1 ] and [ outdegdist1 ] show the distribution of indegrees and outdegrees at different points of time during the evolution of the bitcoin network . in the initial phasethe number of nodes is low , and thus fitting the data is prone to large error . in the trading phase , the exponents of the distributions do not change significantly , and they are approximated by power - laws and . to further characterize the evolution of the degree distributions we calculate the corresponding gini coefficients in function of time ( fig .[ ginitime ] ) . the gini coefficient is mainly used in economics to characterize the inequality present in the distribution of wealth , but it can be used to measure the heterogeneity of any empirical distribution . in general , the gini coefficient is defined as where is a sample of size , and are monotonically ordered , i.e. . indicates perfect equality , i.e. 
every node has the same wealth ; and corresponds to complete inequality , i.e. the complete wealth in the system is owned by a single individual .for example , in the case of pure power - law distribution with exponent , the gini coefficient is .this shows the fact that smaller exponents yield more heterogeneous wealth distributions .in the bitcoin network we find that in the initial phase the gini coefficient of the indegree distribution is close to 1 and for the outdegree distribution it is much lower .we speculate that in this phase a few users collected bitcoins , and without the possibility to trade , they stored them on a single address . in the second phasethe coefficients quickly converge to and , indicating that normal trade is characterized by both highly heterogeneous in- and outdegree distributions . * . in networks without degree correlations , the degree of connected nodes do not depend on each other , therefore for such networks we expect that is constant . in the case of the bitcoin network , we observe a clear disassortative behavior : is a decreasing function , indicating that nodes with high outdegree tend to connect to nodes with low indegree . ] to characterize the degree correlations we measure the pearson correlation coefficient of the out- and indegrees of connected node pairs : here is the outdegree of the node at the _ beginning _ of link , and is the indegree of the node at the _ end _ of link .the summation runs over all links , and .we calculate and similarly .we find that the correlation coefficient is negative , except for only a brief period in the initial phase .after mid-2010 , the degree correlation coefficient stays between and , reaching a value of by 2013 , suggesting that the network is disassortative ( fig .[ degcorrcltime ] ) .however , small values of are hard to interpret : it was shown that for large purely scale - free networks vanishes as the network size increases .therefore we compute the average nearest neighbor degree function for the final network ; measures the average indegree of the neighbors of nodes with outdegree .we find clear disassortative behavior ( fig .[ anndoutin ] ) .we also measure the average clustering coefficient which measures the density of triangles in the network . herethe sum runs over all nodes , and is the number of triangles containing node . to calculate we ignored the directionality of the links ; is the degree of node in the undirected network .pa6a10y0a ( 12,27 ) values ( see eq .[ erank ] ) for exponents and . the inset shows the kolmogorov - smirnoff error for these exponents.,title="fig : " ] in the initial phase is high , fluctuating around ( see fig .[ degcorrcltime ] ) , possibly a result of transactions taking place between addresses belonging to a few enthusiasts trying out the bitcoin system by moving money between their own addresses . in the trading phase ,the clustering coefficient reaches a stationary value around , which is still higher than the clustering coefficient for random networks with the same degree sequence ( ) .to explain the observed broad degree distribution , we turn to the microscopic statistics of link formation .most real complex networks exhibit distributions that can be approximated by power - laws .preferential attachment was introduced as a possible mechanism to explain the prevalence of this property .indeed , direct measurements confirmed that preferential attachment governs the evolution of many real systems , e.g. 
scientific citation networks , collaboration networks , social networks or language use . in its original form, preferential attachment describes the process when the probability of forming a new link is proportional to the degree of the target node . in the past decade ,several generalizations and modifications of the original model were proposed , aiming to reproduce further structural characteristics of real systems . here , we investigate the nonlinear preferential attachment model , where the probability that a new link connects to node is where is the indegree of node , and . the probability that the new link connects to _ any _ node with degree , where is the number of nodes with degree at the time of the link formation .we can not test directly our assumption , because changes over time .to proceed we transform to a uniform distribution by calculating the rank function for each new link given and : if eq .[ eq : nonlinkernel ] holds , is uniformly distributed in the interval $ ] , independently of .therefore , if we plot the cumulative distribution function , we get a straight line for the correct exponent . to determine the best exponent, we compare the empirical distribution of the values to the uniform distribution for different exponents by computing the kolmogorov - smirnoff distance between the two distributions . evaluating our method for indegree distribution of the bitcoin network ,we find good correspondence between the empirical data and the presumed conditional probability function ; the exponent giving the best fit is ( fig . [ pa6a10y0n ] ) .this shows that the overall growth statistics agree well with the preferential attachment process .of course , preferential attachment itself can not explain the disassortative degree correlations and the high clustering observed in the network .we argue that preferential attachment is a key factor shaping the degree distribution , however , more detailed investigation of the growth process is necessary to explain the higher order correlations . , the exponential cutoff corresponds to the finite lifetime of the bitcoin system . ] , however , the rest of the fit is unsatisfactory . therefore , we fit the distribution with a stretched exponential distribution of form .we find a better approximation of the whole distributions ; the parameters are and . ] in the this section , we analyze the detailed dynamics of money flow on the transaction network .the increasing availability of digital traces of human behavior revealed that various human activities , e.g. mobility patterns , phone calls or email communication , are often characterized by heterogeneity .here we show that the handling of money is not an exception : we find heterogeneity in both balance distribution and temporal patterns .we also investigate the microscopic statistics of transactions .the state of node at time is given by the balance of the corresponding address , i.e. the number of bitcoins associated to node .the transactions are directly available , and we can infer the balance of each node based on the transaction list . 
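a minimal sketch of how such balances can be replayed from the transaction list ; the flat record format matches the construction sketch above , and the treatment of newly mined coins ( sender set to none ) is an assumption of this toy representation :

```python
from collections import defaultdict

def balances_from_transactions(transactions):
    """Infer the balance of every address by replaying the transaction list.
    transactions: iterable of (timestamp, sender_addr, receiver_addr, amount);
    newly mined coins are represented here with sender None (an assumption --
    in the real block chain they appear as coinbase transactions)."""
    balance = defaultdict(float)
    for _, src, dst, amount in sorted(transactions, key=lambda rec: rec[0]):
        if src is not None:
            balance[src] -= amount
        balance[dst] += amount
    return dict(balance)

# toy usage: a mining reward followed by two payments
tx = [(1, None, "A", 50.0), (2, "A", "B", 20.0), (3, "B", "C", 5.0)]
print(balances_from_transactions(tx))   # {'A': 30.0, 'B': 15.0, 'C': 5.0}
```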
note that the overall quantity of bitcoins increases over time : bitcoin rewards users devoting computational power to sustain the system .we first investigate the temporal patterns of the system by measuring the distribution of inactivity times .the inactivity time is defined as the time elapsed between two consecutive outgoing transactions from a node .we find a broad distribution that can be approximated by the power - law ( fig .[ dtin2 ] ) , in agreement with the behavior widely observed in various complex systems .it is well known that the wealth distribution of society is heterogeneous ; the often cited and quantitatively not precise 80 - 20 rule of pareto states that the top 20% of the population controls 80% of the total wealth . in line with this , we find that the wealth distribution in the bitcoin system is also highly heterogeneous .the proper pareto - like statement for the bitcoin system would be that the 6.28% of the addresses posesses the 93.72% of the total wealth .we measure the distribution of balances at different points of time , and we find a stable distribution .the tail of wealth distribution is generally modeled with a power - law , following this practice we find a power - law tail for balances ( see fig .[ balanceslb ] ) .however , visual inspection of the fit is not convincing : the scaling regime spans only the last few orders of magnitude , and fails to reproduce the majority of the distribution .instead we find that the overall behavior is much better approximated by the stretched exponential distribution , where and .to further investigate the evolution of the wealth distribution we measure the gini coefficient over time .we find that the distribution is characterized by high values throughout the whole lifetime of the network , reaching a stationary value around in the trading phase ( see fig .[ ginitime ] ) .to understand the origin of this heterogeneity , we turn to the microscopic statistics of acquiring bitcoins . similarly to the case of degree distributions , the observed heterogeneous wealth distributions are often explained by preferential attachment . moreover ,preferential attachment was proposed significantly earlier in the context of wealth distributions than complex networks . in economicspreferential attachment is traditionally called the matthew effect or the `` rich get richer phenomenon '' .it states that the growth of the wealth of each individual is proportional to the wealth of that individual . in line with this principle, several statistical models were proposed to account for the heterogeneous wealth distribution ..,title="fig : " ] + .,title="fig : " ] to find evidence supporting this hypothesis , we first investigate the change of balances in fixed time windows .we calculate the difference between the balance of each address at the end and at the start of each month .we plot the differences in function of the starting balances ( fig . [ pamoney ] ) .when the balance increases , we observe a positive correlation : the average growth increases in function of the starting balance , and it is approximated by the power - law .this indicates the `` rich get richer '' phenomenon is indeed present in the system . for decreasing balances, we find that a significant number of addresses lose all their wealth in the time frame of one month .this phenomenon is specific to bitcoin : due to the privacy concerns of users , it is generally considered a good practice to move unspent bitcoins to a new address when carrying out a transaction . 
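a minimal sketch of the gini coefficient applied to the balance distribution , using the standard sorted - sample form of the definition given earlier :

```python
import numpy as np

def gini(values):
    """Gini coefficient of a non-negative sample, using the sorted-sample form
    G = sum_i (2i - n - 1) x_(i) / (n * sum_i x_(i)), with x_(1) <= ... <= x_(n)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    index = np.arange(1, n + 1)
    return np.sum((2 * index - n - 1) * x) / (n * np.sum(x))

# sanity checks: perfect equality gives 0, a single owner gives a value close to 1
print(gini(np.ones(1000)))                 # 0.0
print(gini(np.r_[np.zeros(999), 1.0]))     # ~0.999
```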
to better quantify the preferential attachment, we carry out a similar analysis to the previous section .however , there is a technical difference : in the case of the evolution of the transaction network , for each event the degree of a node increases by exactly one . in the case of the wealth distributionthere is no such constraint . to overcome this difficulty we consider the increment of a node s balance by one unit as an event ,e.g. if after a transaction increased by , we consider it as separate and simultaneous events .we only consider events when the balance associated to an address increases , i.e. the address receives a payment .we now calculate the rank function defined in eq .[ erank ] , and plot the cumulative distribution function of the values observed throughout the whole time evolution of the bitcoin network ( fig . [ addrbalct ] ) .visual inspection shows that no single exponent provides a satisfying result , meaning that can not be modeled by a simple power - law relationship like in eq .[ eq : nonlinkernel ] .however , we do find that the `` average '' behavior is best approximated by exponents around , suggesting that is a sublinear function . in the context of network evolution, previous theoretical work found that sublinear preferential attachment leads to a stationary stretched exponential distribution , in line with our observations .we have investigated the evolution of both the transaction network and the wealth distribution separately .however , it is clear that the two processes are not independent . to study the connection between the two , we measure the correlation between the indegree and balance associated to the individual nodes .we plot the average balance of addresses as a function of their degrees on fig .[ degbal ] . for degrees in the range of ( over of all nodes with nonzero balance ) , the average balance is a monotonously increasing function of the degree , and it is approximated by the power - law , indicating that the accumulated wealth and the number of distinct transaction partners an individual has are inherently related .similar scaling was reported by tseng et al . , who conducted an online experiment where volunteers traded on a virtual market .addrbalct2 ( 46,14 ) values ( see eq .[ erank ] ) for exponents , , , , , , and . the inset shows the maximum kolmogorov - smirnoff error for these exponents . here , the results are not as obvious as in the case of link creation ( fig .[ pa6a10y0n ] ; a simple power - law form like in eq . [ eq : nonlinkernel ] is not sufficient to accurately model the statistics of money flow . 
on the other hand ,the `` average '' behavior shows a correlation between the balance and the increase of the balance : the uncorrelated assumption ( ) clearly gives a much worse approximate than the exponents that presume preferential attachment ( ).,title="fig : " ] degbal4 ( 9,23 ) , only 75 nodes ( ) have higher indegree , the averages calculated for such small sample result in high fluctuations ( see inset ) .we also measure both the pearson and spearman correlation coefficient : the pearson correlation coefficient of the full dataset is , while the spearman rank correlation coefficient is .( note that the pearson correlation coefficient measures the linear dependence between two variables , while the spearman coefficient evaluates monotonicity ) .we test the statistical significance of the correlation by randomizing the dataset 1000 times and calculating the spearman coefficient for each randomization .we find that the average spearman coefficient is with a standard deviation of , indicating that the correlation is indeed significant.,title="fig : " ]bitcoin is based on a peer - to - peer network of users connected through the internet , where each node stores the list of previous transactions and validates new transactions based on a proof - of - work system .users announce new transactions on this network , these transactions are formed into _ blocks _ at an approximately constant rate of one block per 10 minutes ; blocks contain a varying number of transactions .these blocks form the block - chain , where each block references the previous block . changing a previous transaction ( e.g. double spending )would require the recomputation of all blocks since then , which becomes practically infeasible after a few blocks .to send or receive bitcoins , each user needs at least one address , which is a pair of private and public keys .the public key can be used for receiving bitcoins ( users can send money to each other referencing the recipient s public key ) , while sending bitcoins is achieved by signing the transaction with the private key .each transaction consists of one or more _ inputs _ and _ outputs_. in fig .[ trasaction1 ] we show a schematic view of a typical bitcoin transaction .readers interested in the technical details of the system can consult the original paper by satoshi nakamoto or the various resources available on the internet .an important aspect of bitcoin is how new bitcoins are created , and how new users can acquire bitcoins .new bitcoins are generated when a new block is formed as a reward to the users participating in block generation .the generation of a valid new block involves solving a reverse hash problem , whose difficulty can be set in a wide range .participating in block generation is referred to as _ mining _ bitcoins .the nodes in the network regulate the block generation process by adjusting the difficulty to match the processing power currently available .as interest in the bitcoin system grew , the effort required to generate new blocks , and thus receive the newly available bitcoins , has increased over 10 million fold ; most miners today use specialized hardware , requiring significant investments .consequently , an average bitcoin user typically acquires bitcoins by either buying them at an exchange site or receiving them as compensation for goods or services . due to the nature of the system ,the record of all previous transactions since its beginning are publicly available to anyone participating in the bitcoin network . 
from these records, one can recover the sending and receiving addresses , the sum involved and the approximate time of the transaction .such detailed information is rarely available in financial systems , making the bitcoin network a valuable source of empirical data involving monetary transactions .of course , there are shortcomings : only the addresses involved in the transactions are revealed , not the users themselves . while providing complete anonymity is not among the stated goals of the bitcoin project , identifying addresses belonging to the same user can be difficult , especially on a large scale .each user can have an unlimited number of bitcoin addresses , which appear as separate nodes in the transaction records .when constructing the network of users , these addresses would need to be joined to a single entity .another issue arises not only for bitcoin , but for most online social datasets : it is hard to determine which observed phenomena are specific to the system , and which results are general .we do not know to what extent the group of people using the system can be considered as a representative sample of the society . in the case of bitcoin for example , due to the perceived anonymity of the system , it is widely used for commerce of illegal items and substances ; these types of transactions are probably overrepresented among bitcoin transactions .ultimately , the validity of our results will be tested if data becomes available from other sources , and comparison becomes possible .we installed the open - source ` bitcoind ` client and downloaded the blockchain from the peer - to - peer network on may 7th , 2013 .we modified the client to extract the list of all transactions in a human - readable format .we downloaded more precise timestamps of transactions from the ` blockchain.info ` website s archive .the data and the source code of the modified client program is available at the project swebsite or through the casjobs web database interface .the data includes 235,000 blocks , which contain a total of 17,354,797 transactions .this dataset includes 13,086,528 addresses ( i.e. addresses appearing in at least one transaction ) ; of these , 1,616,317 addresses were active in the last month .the bitcoin network itself does not store balances associated with addresses , these can be calculated from the sum of received and sent bitcoins for each address ; preventing overspending is done by requiring that the input of a transaction corresponds to the output of a previous transaction . using this method , we found that approximately one million addresses had nonzero balance at the time of our analysis .we have preformed detailed analysis of bitcoin , a novel digital currency system .a key difference from traditional currencies handled by banks is the open nature of the bitcoin : each transactions is publicly announced , providing unprecedented opportunity to study monetary transactions of individuals .we have downloaded and compiled the complete list of transactions , and we have extracted the time and amount of each payment .we have studied the structure and evolution of the transaction network , and we have investigated the dynamics taking place on the network , i.e. the flow of bitcoins . 
measuring basic network characteristics in function of time , we have identified two distinct phases in the lifetime of the system : ( i ) when the system was new , no businesses accepted bitcoins as a form of payment , therefore bitcoin was more of an experiment than a real currency .this initial phase is characterized by large fluctuations in network characteristics , heterogeneous indegree- and homogeneous outdegree distribution .( ii ) later bitcoin received wider public attention , the increasing number of users attracted services , and the system started to function as a real currency .this trading phase is characterized by stable network measures , dissasortative degree correlations and power - law in- and outdegree distributions .we have measured the microscopic link formation statistics , finding that linear preferential attachment drives the growth of the network . to study the accumulation of bitcoins we have measured the wealth distribution at different points in time .we have found that this distribution is highly heterogeneous through out the lifetime of the system , and it converges to a stable stretched exponential distribution in the trading phase .we have found that sublinear preferential attachment drives the accumulation of wealth .investigating the correlation between the wealth distribution and network topology , we have identified a scaling relation between the degree and wealth associated to individual nodes , implying that the ability to attract new connections and to gain wealth is fundamentally related .we believe that the data presented in this paper has great potential to be used for evaluating and refining econophysics models , as not only the bulk properties , but also the microscopic statistics can be readily tested . to this end , we make all the data used in this paper available online to the scientific community in easily accessible formats .the authors thank andrs bodor and philipp hvel for many useful discussions and suggestions . this work has been supported by the european union under grant agreement no .fp7-ict-255987-foc - ii project .the authors thank the partial support of the european union and the european social fund through project futurict.hu ( grant no . : tamop-4.2.2.c-11/1/konv-2012 - 0013 ) , the otka 7779 and the nap 2005/kckha005 grants .eitkic_12 - 1 - 2012 - 0001 project was partially supported by the hungarian government , managed by the national development agency , and financed by the research and technology innovation fund and the makog foundation .catanzaro , m. , and buchanan , m. ( 2013 ) .network opportunity ._ nature physics _ , * 9*(3 ) , 121123 .caldarelli , g. , chessa , a. , pammolli , f. , gabrielli , a. , and puliga , m. ( 2013 ) . reconstructing a credit network . _ nature physics _ , * 9*(3 ) , 125126 .bargigli , l. , and gallegati , m. ( 2013 ) . finding communities in credit networks ._ economics _ , * 7*. caldarelli , g. ( 2007 ) .scale - free networks .oxford university press palla , g. , farkas , i. , dernyi , i. , barabsi , a .-l . , and vicsek , t. ( 2004 ) .reverse engineering of linking preferences from network restructuring ., * 70*(4 ) , 046115 .nakamoto , s. ( 2008 ) .bitcoin : a peer - to - peer electronic cash system . ` http://bitcoin.org/bitcoin.pdf ` ron , d. and shamir , a. ( 2012 ) . quantitative analysis of the full bitcoin transaction graph ._ iacr cryptology eprint archive _ , 2012:584 .available : ` http://eprint.iacr.org/2012/584 ` . 
in : financial cryptography and data security , springer , 2013 .barabsi , a .-l . , jeong , h. , nda , z. , ravasz , e. , schbert , a. , and vicsek , t. ( 2002 ) .evolution of the social network of scientific collaborations ._ physica a _ * 311 * _ 590614 _ newman , m. e. j. ( 2001 ) . clustering and preferential attachment in growing networks .e _ * 64 * _ 025102 _ perc , m. ( 2013 ) .self - organization of progress across the century of physics . _ scientific reports _ , * 3 * , 1720 .jeong , h. , nda , z. , and barabsi , a .-( 2003 ) . measuring preferential attachment in evolving networks ._ europhysics letters _ , * 567*. kunegis , j. , blattner , m. , and moser , c. ( 2013 ) .preferential attachment in online networks : measurement and explanations . ` arxiv:1303.6271 ` .alan mislove .online social networks : measurement , analysis , and applications to distributed information systems .phd thesis , rice university , 2009 .vzquez , a. ( 2003 ) . growing network with local rules : preferential attachment , clustering hierarchy , and degree correlations .e _ , * 67 * ( 5 ) , 056104 .vzquez , a. , oliveira , j. g. , dezs , z. , goh , k. , kondor , i. , and barabsi , a .-modeling bursts and heavy tails in human dynamics .e _ , * 73 * , 036127 .jo , h. , karsai , m. , kertsz , j. , and kaski , k. ( 2012 ) .circadian pattern and burstiness in mobile phone communication ._ new j. phys ._ , * 14 * , 013055 .song , c. , qu , z. , blumm , n. , and barabsi , a .-limits of predictability in human mobility . _ science _ , * 327 * , 101821 .lambiotte , r. , blondel , v. d. , kerchove , c. , huens , e. , prieur , c. , smoreda , z. , and van dooren , p. ( 2008 ) .geographical dispersal of mobile communication networks ._ physica a _ , * 387 * ( 21 ) , 53175325 .barabsi , a .-the origin of bursts and heavy tails in human dynamics ._ nature _ , * 435 * ( 7039 ) , 207 - 211 .ning , d. , and you - gui , w. ( 2007 ) .power - law tail in the chinese wealth distribution ._ chinese physics letters _ , * 24 * ( 8) , 2434 .klass , o. s. , biham , o. , levy , m. , malcai , o. , and solomon , s. ( 2006 ) .the forbes 400 , the pareto power - law and efficient markets .j. b _ , * 55 * ( 2 ) , 143147 .simon , h. a. ( 1955 ) . on a class of skew distribution functions ._ biometrika _ , * 42 * ( 3/4 ) 425440 .`` for whosoever hath , to him shall be given , and he shall have more abundance : but whosoever hath not , from him shall be taken away even that he hath . ''the bible , matthew 13:12 , king james translation ispolatov , s. and krapivsky , p.l . and redner , s. ( 1998 ) .wealth distributions in asset exchange models .j. b _ , * 2 * ( 2 ) , 267276 .garlaschelli , d. , battiston , s. , castri , m. , servedio , v. d. p. , and caldarelli , g. ( 2005 ) . the scale - free topology of market investments ._ physica a _ , * 350 * , 491499 .tseng , j. , li , s. , and wang , s. ( 2010 ) .experimental evidence for the interplay between individual wealth and transaction network ., * 73 * 6974 .`` most bitcoin software and websites will help with this by generating a brand new address each time you perform a transaction . '' ` https://en.bitcoin.it/wiki/address ` mtray , p. , csabai , i. , hga , p. , stger , j. , dobos , l. and vattay , g. ( 2007 ) .building a prototype for network measurement virtual observatory .acm sigmetrics 2007 minenet workshop ._ san diego , ca , usa . doi : 10.1145/1269880.1269887
|
the possibility of analyzing everyday monetary transactions is limited by the scarcity of available data , as this kind of information is usually considered highly sensitive . present econophysics models are usually employed on presumed random networks of interacting agents , and only some macroscopic properties ( e.g. the resulting wealth distribution ) are compared to real - world data . in this paper , we analyze bitcoin , a novel digital currency system for which the complete list of transactions is publicly available . using this dataset , we reconstruct the network of transactions and extract the time and amount of each payment . we analyze the structure of the transaction network by measuring network characteristics over time , such as the degree distribution , degree correlations and clustering . we find that linear preferential attachment drives the growth of the network . we also study the dynamics taking place on the transaction network , i.e. the flow of money . we measure temporal patterns and wealth accumulation . investigating the microscopic statistics of money movement , we find that sublinear preferential attachment governs the evolution of the wealth distribution . we report a scaling law between the degree and wealth associated with individual nodes .
|
the distributed optimization of a sum of convex functions is an important class of decision and data processing problems over network systems , and has been intensively studied in recent years ( see and references therein ) .in addition to the discrete - time distributed optimization algorithms ( e.g. , ) , continuous - time multi - agent solvers have recently been applied to distributed optimization problems as a promising and useful technique , thanks to the well - developed continuous - time stability theory . constrained distributed optimization , in which the feasible solutions are limited to a certain region or range , is significant in a number of network decision applications , including multi - robot motion planning , resource allocation in communication networks , and economic dispatch in power grids .however , due to the consideration of constraints , the design of such algorithms , to minimize the global cost functions within the feasible set while allowing the agents operate with only local cost functions and local constraints , is a difficult task .conventionally , the projection method has been widely adopted in the design of algorithms for constrained optimization and related problems . proposed a continuous - time distributed projected dynamic for constrained optimization , where the agents shared the same constraint set , while constructed a primal - dual type continuous - time projected algorithm to solve a distributed optimization problem , where each agent had its own private constraint function . presented a primal - dual continuous - time projected algorithm for distributed nonsmooth optimization , where each agent had its own local bounded constraint set , though its auxiliary variables may be asymptotically unbounded .the purpose of this technical note is to propose a novel continuous - time projected algorithm for distributed nonsmooth convex optimization problems where each agent has its own general private constraint set .the main contributions of the note are three folds .firstly , we propose a distributed continuous - time algorithm for each agent to find the same optimal solution based only on local cost function and local constraint set , by combining primal - dual method for saddle point seeking and projection method for set constraints .the proposed algorithm is consistent with those in when there are no constraints in the distributed optimization .secondly , the proposed algorithm is proved to solve the optimization problem and have bounded states while seeking the optimal solutions , and therefore , further improves the recent interesting result in , whose algorithm may have asymptotically unbounded states . finally , nonsmooth lyapunov functions are employed along with the stability theory of discontinuous systems to conduct a complete and original convergence analysis .our nonsmooth analysis techniques also guarantee the algorithm convergence even when the problem has a continuum of optimal solutions . therefore , the convergence analysis provides additional insights and understandings for continuous - time distributed optimization algorithms compared with , which considered the objective functions with a finite number of critical points .the remainder of this note is organized as follows . in section [ sec : def ] , notation , mathematical definitions , and some results are presented and reviewed . 
in section [ distributed_optimization ] , a constrained convex ( nonsmooth ) optimization problemis formulated and a distributed continuous - time projected algorithm is proposed . in section [ convergence_simulation ] , a complete proof is presented to show that the algorithm state is bounded and the agents estimates are convergent to the same optimal solution , and simulation studies are carried out for illustration .finally , in section [ conclusion ] , concluding remarks are given .in this section , we introduce necessary notations , definitions and preliminaries about graph theory , nonsmooth analysis , and projection operators . [sec : def ] denotes the set of real numbers , denotes the set of -dimensional real column vectors , denotes the set of -by- real matrices , denotes the identity matrix , and denotes transpose , respectively .we write for the rank of the matrix , for the range of the matrix , for the kernel of the matrix , for the largest eigenvalue of the matrix , for the ones vector , for the zeros vector , and for the kronecker product of matrices and .furthermore , denotes the euclidean norm , ( ) denotes that matrix is positive definite ( positive semi - definite ) , denotes the closure of the subset , denotes the interior of the subset , denotes the dimension of the vector space , denotes the open ball _ centered _ at with _ radius _ , denotes the distance from a point to the set , that is , , as denotes that approaches the set , that is , for each there exists such that for all . a weighted undirected graph is denoted by , where is a set of nodes , is a set of edges , \in\mathbb r^{n\times n} ] , (x(t)),\quad x(0 ) = x_0,\ ] ] where (x)\triangleq\bigcap_{\delta>0}\bigcap_{\mu ( \mathcal s)=0}\overline{\rm{co}}\{f(\mathcal b_{\delta}(x)\backslash\mathcal s)\} ] is an upper semi - continuous map with nonempty , compact and convex values .dynamical systems of the form given by ( [ fs ] ) are called _ differential inclusions _ in the literature and for each state , they specify a _ set _ of possible evolutions rather than a single one . recall that the solution to ( [ eq_ode ] ) is a _right maximal solution _ if it can not be extended forward in time .we assume that all right maximal filippov solutions to ( [ eq_ode ] ) exist on .a set is said to be _ weakly invariant _ ( resp . ,_ strongly invariant _ ) with respect to ( [ eq_ode ] ) if for every , contains a maximal solution ( resp . , all maximal solutions ) of ( [ eq_ode ] ) .a point is a _ positive limit point _ of a solution to ( [ eq_ode ] ) with , if there exists a sequence with and as .the set of all such positive limit points is the _ positive limit set _ for the trajectory with .an equilibrium point of ( [ eq_ode ] ) is a point such that (x_e) ] if is continuously differentiable at , then (x)\} ] and define with , where is defined by the cartesian product of .then , we arrive at the following lemma by directly analyzing the optimality condition .[ optimal_eqv ] assume assumption [ assumption ] holds . is an optimal solution to problem ( [ optimization_problem ] ) if and only if there exist and such that [ optimal_condition ] it follows from theorem 3.33 in that is an optimal solution to problem if and only if where is the normal cone of at an element . note that is convex and followed by assumption [ assumption ] .it follows from theorem 2.85 and lemma 2.40 in that and . 
to prove this lemma ,one only needs to show holds if and only if ( [ optimal_condition ] ) is satisfied .suppose ( [ optimal_condition ] ) holds .since graph is connected , it follows from ( [ optimal_condition_2 ] ) that there exists such that .note that if and only if .let be the entry of the adjacency matrix of and ^{\rm t}\in\mathbb r^{nq} ] . for every , and , hence , and there exists such that .thus , there exists ^{\rm t}\in\mathbb r^{nq} ] and ^{\rm t}\in\mathbb r^{nq} ] , , is the laplacian matrix of graph , and .the following lemma provides some result when .[ l_n_com ] let be the laplacian matrix of the connected and undirected graph , and let , then , , and .note that is symmetric since is undirected . can be decomposed as via eigenvalue decompositions , where is an orthogonal matrix and is a diagonal matrix whose diagonal entries are the eigenvalues of .thus , where is clearly a diagonal matrix . since and , it follows that where is the diagonal element of .hence , in addition , and since is connected .the diagonal matrix has one zero diagonal entry and positive diagonal entries .furthermore , it follows from ( [ alpha_l ] ) that the diagonal matrix has one zero diagonal entry and positive diagonal entries . hence , and .since is an orthogonal matrix and is invertible , and .because and , it follows from rank - nullity theorem of linear algebra that and .let and be the vectors such that ( [ optimal_condition ] ) is satisfied .define where and such that .the set - valued lie derivative of with respect to ( [ feedback_comp ] ) is defined by . by using lemmas [ lemma_ineq ] and [ l_n_com ] , we have the following result , which provides a nonsmooth lyapunov function and analyzes its set - valued lie derivative . [ differential_eqn ]consider the distributed algorithm ( [ distributed feedback_v2_x ] ) , or equivalently , algorithm ( [ feedback_comp ] ) .assume assumption [ assumption ] holds .let be as defined in .* is positive definite and if and only if ; * if , then there exists such that -\mathbf{x}\|^2- \mathbf { x}^{\rm t}(\alpha\mathbf{l}-\alpha^2\mathbf{l}^2)\mathbf { x}\leq 0 such that } \\ & & a=[p^{\rm t},\ , ( \nabla_{\lambda } v_2^*(\mathbf{x},\lambda))^{\rm t } ] v \textit { for all } \big{\ } } , \end{aligned}\ ] ] where (\mathbf{x},\lambda ) ] for all , where -\mathbf{x}\\ \alpha \mathbf{l}\mathbf{x}\end{bmatrix} ] must be true for .hence , there exists such that -\mathbf{x}\big{)}\\ & & + \big{(}\alpha\mathbf{lx}+\lambda-\lambda^*\big{)}^{\rm t}\alpha \mathbf{l}\mathbf{x}\\ & = & \big { ( } g ( \mathbf{x})+ \alpha\mathbf{lx}+\alpha\mathbf{l}\lambda+\mathbf{x}-\mathbf{x}^*\big{)}^{\rm t } \big{(}p_{\omega}[\mathbf{x}- \alpha\mathbf{l}\mathbf { x } - \alpha \mathbf{l}\lambda - g(\mathbf{x})]-\mathbf{x}\big{)}\\ & & -\big{(}g(\mathbf{x}^*)+\alpha\mathbf{l}\lambda^ * \big { ) } ^{\rm t}\big{(}p_{\omega}[\mathbf{x}- \alpha\mathbf{l}\mathbf { x } - \alpha \mathbf{l}\lambda - g(\mathbf{x})]-\mathbf{x}\big{)}+\big{(}\alpha\mathbf{lx}+\lambda-\lambda^*\big{)}^{\rm t}\alpha \mathbf{l}\mathbf{x}\\ & = & j_1(\mathbf{x},\lambda)+j_2(\mathbf{x},\lambda)+j_3(\mathbf{x},\lambda),\end{aligned}\ ] ] where -\mathbf{x}\big{)} ] , and . 
by setting , , and in ( [ revised_inequality ] ) , we have -\mathbf{x}\|^2 -(\mathbf { x}-\mathbf { x}^*)^{\rm t}(g ( \mathbf{x})+ \alpha\mathbf{lx}+\alpha\mathbf{l}\lambda ) \nonumber\\ & = & -\|p_{\omega}[\mathbf{x}- \alpha\mathbf{l}\mathbf { x } - \alpha \mathbf{l}\lambda - g(\mathbf{x})]-\mathbf{x}\|^2 -(\mathbf { x}-\mathbf { x}^*)^{\rm t}g ( \mathbf{x})- \alpha\mathbf { x}^{\rm t}\mathbf{l}\mathbf { x } -\alpha\mathbf { x}^{\rm t}\mathbf{l}\lambda .\end{aligned}\ ] ] next , -\mathbf{x}^*\big{)}\nonumber\\ & & -\big{(}g(\mathbf{x}^*)+\alpha\mathbf{l}\lambda^ * \big { ) }^{\rm t}(\mathbf{x}^*-\mathbf{x})\nonumber\\ & = & -\big{(}g(\mathbf{x}^*)+\alpha\mathbf{l}\lambda^ * \big { ) } ^{\rm t}\big{(}p_{\omega}[\mathbf{x}- \alpha\mathbf{l}\mathbf { x } - \alpha \mathbf{l}\lambda - g(\mathbf{x})]-\mathbf{x}^*\big{)}\nonumber\\ & & -g(\mathbf{x}^*)^{\rm t}(\mathbf{x}^*-\mathbf{x } ) + \alpha(\mathbf{l}\lambda^ * ) ^{\rm t}\mathbf{x}.\end{aligned}\ ] ] note that is chosen such that .furthermore , implies that , where is the normal cone of at an element . since \in\omega ] .it follows from ( [ inequality_j_2 ] ) that in view of ( [ j_1_ineq ] ) and ( [ j_2_ineq ] ) , -\mathbf{x}\|^2 - \alpha\mathbf { x}^{\rm t}\mathbf{l}\mathbf { x}\nonumber\\ & & -(\mathbf { x}-\mathbf { x}^*)^{\rm t}\big{(}g ( \mathbf{x})-g ( \mathbf{x}^*)\big{)}-\alpha\mathbf { x}^{\rm t}\mathbf{l}(\lambda-\lambda^*).\end{aligned}\ ] ] the convexity of implies that . hence , -\mathbf{x}\|^2 - \alpha \mathbf { x}^{\rm t}\mathbf{l}\mathbf { x } -\alpha\mathbf { x}^{\rm t}\mathbf{l}(\lambda-\lambda^*).\end{aligned}\ ] ] and hence , -\mathbf{x}\|^2 - \alpha \mathbf { x}^{\rm t}\mathbf{l}\mathbf { x } -\alpha\mathbf { x}^{\rm t}\mathbf{l}(\lambda-\lambda^*)+\big{(}\alpha\mathbf{lx}+\lambda-\lambda^*\big{)}^{\rmt}\alpha \mathbf{l}\mathbf{x}\nonumber\\ & = & -\|p_{\omega}[\mathbf{x}- \alpha\mathbf{l}\mathbf { x } - \alpha \mathbf{l}\lambda - g(\mathbf{x})]-\mathbf{x}\|^2- \mathbf { x}^{\rm t}(\alpha\mathbf{l}-\alpha^2\mathbf{l}^2)\mathbf { x}.\end{aligned}\ ] ] by the properties of kronecker product , it follows from lemma [ l_n_com ] that , , and . therefore , is positive semi - definite and .hence , for all and -\mathbf{x}\|^2- \mathbf { x}^{\rm t}(\alpha\mathbf{l}-\alpha^2\mathbf{l}^2)\mathbf { x}\leq 0. ] .hence , -\mathbf{x}\|^2\big{\}}- \mathbf { x}^{\rm t}(\alpha\mathbf{l}-\alpha^2\mathbf{l}^2)\mathbf { x}\nonumber \\ & = & -\inf_{g(\mathbf { x})\in\partial \mathbf{f}(\mathbf { x } ) } \big{\{}\|p_{\omega}[\mathbf{x}- \alpha\mathbf{l}\mathbf { x } - \alpha \mathbf{l}\lambda - g(\mathbf{x})]-\mathbf{x}\|^2\big{\}}- \mathbf { x}^{\rm t}(\alpha\mathbf{l}-\alpha^2\mathbf{l}^2)\mathbf { x}\nonumber\\ & \leq & 0.\label{non - positive}\end{aligned}\ ] ] furthermore , if , then and .it follows from ( [ non - positive ] ) that , where , is positive invariant . since is positive definite , is bounded and every solution is also bounded .part ( ) is thus proved .( ) let , and let be the largest weakly invariant subset of .it follows from lemma [ nonsmooth_invariance ] that as .in addition , for every , and is an optimal solution to problem followed by lemma [ optimal_eqv ] .part ( ) is thus proved .( ) let . 
by replacing in with , we define a function , where such that .it follows from ( ) of lemma [ differential_eqn ] that is positive definite and if and only if .moreover , it follows from a similar line of attack as the proof of part ( ) that along the trajectories of ( [ distributed feedback_v2_x ] ) satisfies .hence , is a lyapunov stable equilibrium point to the system ( [ distributed feedback_v2_x ] ) .note that as , and every point in is a lyapunov stable equilibrium .according to lemma [ converge ] , converges to a point as .in addition , implies that there is such that and is an optimal solution to problem followed by lemma [ optimal_eqv ] .part ( ) is thus proved .theorem [ theorem_convergence ] shows the convergence property of the proposed algorithm .part ( ) of theorem [ theorem_convergence ] shows that the state trajectories of the algorithm are bounded , while part ( ) of theorem [ theorem_convergence ] shows that every state trajectory converges to a set in which every point corresponds to an optimal solution of ( [ optimization_problem ] ) , and part ( ) of theorem [ theorem_convergence ] further proves that every state trajectory converges to one point in that set . convergence analysis in this note is based on nonsmooth lyapunov functions , which can be regarded as an extension of the analysis basing on smooth lypunov functions in .moreover , the novel technique proves that algorithm ( [ distributed feedback_v2_x ] ) is able to solve optimization problems having a continuum of optimal solutions , and therefore , improves the previous ones in , which only handle the objective functions with a finite number of critical points . following is an example modified from .consider the optimization problem ( [ optimization_problem ] ) with ^{\rm t}\in\mathbb r^3 $ ] , where and nonsmooth objective functions .the information sharing graph in algorithm ( [ distributed feedback_v2_x ] ) is given by fig .[ fig_topology ] .the trajectories of estimates for versus time are shown in fig .it can be seen that all the agents converge to the same optimal solution which satisfies all the local constraints and minimizes the sum of local objective functions , without knowing other agents constraints or feasible sets .[ lambda ] shows the trajectories of the auxiliary variable s and verifies the boundedness of the algorithm trajectories . versus time , width=8 ] s versus time , width=8 ]in this note , a novel distributed projected continuous - time algorithm has been proposed for a distributed nonsmooth optimization under local set constraints . by virtue of inequalities of projection operators and nonsmooth analysis ,the proposed algorithm has been proved to be convergent while keeping the states bounded .furthermore , based on the stability theory and the invariance principal for nonsmooth lyapunov functions , the algorithm has been shown to solve the optimization problem with a continuum of optimal solutions .finally , the algorithm performance has also been illustrated via simulation .g. shi , k. johansson , and y. hong , `` reaching an optimal consensus : dynamical systems that compute intersections of convex sets , '' _ ieee transactions on automatic control _ , vol .55 , no . 3 , pp .610622 , 2013 .k. j. arrow , l. hurwicz , and h. uzawa , _ studies in linear and non - linear programming : stanford mathematical studies in the social sciences , no .2_.1em plus 0.5em minus 0.4emstanford : stanford university press , 1972 .a. bacciotti and f. 
ceragioli , `` stability and stabilization of discontinuous systems and nonsmooth lyapunov functions , '' _ esaim : control , optimisation and calculus of variations _ ,vol . 4 , pp . 361376 , 1999 .q. hui , w. m. haddad , and s. p. bhat , `` semistability , finite - time stability , differential inclusions , and discontinuous dynamical systems having a continuum of equilibria , '' _ ieee trans .54 , no .24652470 , 2009 .
|
this technical note studies the distributed optimization of a sum of nonsmooth convex cost functions with local constraints . first , we propose a novel distributed continuous - time projected algorithm for the constrained problem , in which each agent knows only its own local cost function and local constraint set . we then prove that all agents converge to the same optimal solution while keeping their states bounded along the way . the convergence analysis is carried out with nonsmooth lyapunov functions and the stability theory of discontinuous systems . finally , a numerical example is provided for illustration . * key words : * constrained distributed optimization , continuous - time algorithms , multi - agent systems , nonsmooth analysis , projected dynamical systems .
|
since its introduction in 1992 , the density matrix renormalisation group ( dmrg ) algorithm has been extremely successful at the solution of one - dimensional quantum mechanical problems. following the connection between the original dmrg algorithm and the variational class of matrix product states ( mps ) , a series of second - generation dmrg algorithms has been developed which explicitly build on the underlying tensor structure . in these second - generation algorithms , both the current variational state as well as the hamiltonian operatorare represented as tensor networks , namely mps and matrix product operators ( mpo ) .as such , the correct construction of the mpo representation of the hamiltonian at hand is the starting point of any dmrg calculation .this construction can be done fairly easily by hand for short - range hamiltonians , if necessary with the help of a finite - state machine which generates the required terms in the mpo .however , these finite - state machines can very quickly become extremely complicated ( see e.g. ref .fig . 7 ,9 and 10 for automata to generate interactions on a two - dimensional cylinder ) .other analytical approaches to construct the mpo representation of in particular quantum chemistry hamiltonians require individual treatment of each system and type of interaction by hand . in this paper , we will present a generic method to construct arbitrary mpos based solely on a ) the definition of appropriate single - site operators ( such as or ) and b ) the implementation of a model - independent mpo arithmetic .we will show that using these two ingredients , it is possible to efficiently construct the optimal representations of small powers of one - dimensional hamiltonians and of medium - range hamiltonians on two - dimensional cylinders .we further provide a proof - of - principle that the constructive approach is also able to generate the optimal representation for the four - body quantum chemistry hamiltonian with long - range interactions .the outline of the paper is as follows : in section [ sec : mpodef ] we define mpos as widely used in the literature .sections [ sec : sso ] and [ sec : arithmetic ] summarise and supplement the existing works on the construction of fundamental single - site operators such as in mpo form as well as the addition and multiplication of arbitrary mpos .after such an addition or multiplication , compression using one of the three compression methods specifically adapted to mpos as laid out in section [ sec : trunc ] brings the operator representation back into its most efficient form .we give examples of the resulting mpos in section [ sec : example ] for a spin - chain with nearest - neighbor interactions , the fermi - hubbard model on a cylinder in hybrid real- and momentum space and the full quantum chemistry hamiltonian .section [ sec : variance ] details an algorithm to reduce numerical errors while calculating the variance of a mpo of particular interest here is the hamiltonian represented as a mpo .finally , we conclude in section [ sec : conclusions ] .for a detailed introduction to the density matrix renormalization group ( dmrg ) and in particular the second - generation algorithms based on matrix product states ( mps ) and matrix product operators ( mpos ) , we refer to an existing review as well as a dmrg - centered overview of the implementation. here , we will only define the basic structure of matrix product operators . 
( wi ) [ draw ] at ( 0,0) ; ( wi ) node[near end , left](-0.5,0 ) ; ( wi ) node[near end , right](0.5,0 ) ; ( wi ) node[near end , above](0,0.5 ) ; ( wi ) node[near end , below](0,-0.5 ) ; ( w1 ) [ draw ] at ( 2.5,0) ; ( w2 ) [ draw ] at ( 3.5,0) ; ( w3 ) [ draw ] at ( 4.5,0) ; ( w4 ) [ draw ] at ( 5.5,0) ; ( w1 ) node[near end , above] + ( 0,0.5 ) ; ( w1 ) node[near end , below] + ( 0,-0.5 ) ; ( w2 ) node[near end , above] + ( 0,0.5 ) ; ( w2 ) node[near end , below] + ( 0,-0.5 ) ; ( w3 ) node[near end , above] + ( 0,0.5 ) ; ( w3 ) node[near end , below] + ( 0,-0.5 ) ; ( w4 ) node[near end , above] + ( 0,0.5 ) ; ( w4 ) node[near end , below] + ( 0,-0.5 ) ; ( w1 ) node[near end , left] + ( -0.5,0 ) ; ( w1 ) ( w2 ) ; ( w2 ) ( w3 ) ; ( w3 ) ( w4 ) ; ( w4 ) node[near end , right] + ( 0.5,0 ) ; given a set of local hilbert spaces ] .an operator applied to has to be commuted past all operators with . for each , it picks up a minus sign .each of these signs can be implemented as the application of the local parity operator .the mpo is then constructed as a chain of parity tensors , the active site tensor and then a chain of right mpo identity components , graphically represented in fig .[ fig : sso - ferm : par ] .constructed in such a way , fermionic mpos can be treated exactly the same as bosonic mpos in all applications that follow . at ( -1.2,0 ) : ; ( i1 ) [ draw ] at ( 0,0) ; ( i1 ) node[near end , left](-0.5,0 ) ; ( i1 ) node[near end , right](i1r)(0.5,0 ) ; ( i1 ) node[near end , above](0,0.5 ) ; ( i1 ) node[near end , below](0,-0.5 ) ; ( p2 ) [ draw ] at ( 2,0) ; ( p2 ) node[near end , left](p2l)(1.5,0 ) ; ( p2 ) node[near end , right](p2r)(2.5,0 ) ; ( p2 ) node[near end , above](2,0.5 ) ; ( p2 ) node[near end , below](2,-0.5 ) ; ( i1r ) ( p2l ) ; ( i3 ) [ draw ] at ( 4,0) ; ( i3 ) node[near end , left](i3l)(3.5,0 ) ; ( i3 ) node[near end , right](i3r)(4.5,0 ) ; ( i3 ) node[near end , above](4,0.5 ) ; ( i3 ) node[near end , below](4,-0.5 ) ; ( p2r ) ( i3l ) ; ( c4 ) [ draw ] at ( 6,0) ; ( c4 ) node[near end , left](c4l)(5.5,0 ) ; ( c4 ) node[near end , right](c4r)(6.5,0 ) ; ( c4 ) node[near end , above](6,0.5 ) ; ( c4 ) node[near end , below](6,-0.5 ) ; ( i3r ) ( c4l ) ; at ( -1.2,-2 ) : ; ( i1 ) [ draw ] at ( 0,-2) ; ( i1 ) node[near end , left] + ( -0.5,0 ) ; ( i1 ) node[near end , right](i1r) + ( 0.5,0 ) ; ( i1 ) node[near end , above] + ( 0,0.5 ) ; ( i1 ) node[near end , below] + ( 0,-0.5 ) ; ( i2 ) [ draw ] at ( 2,-2) ; ( i2 ) node[near end , left](i2l) + ( -0.5,0 ) ; ( i2 ) node[near end , right](i2r) + ( 0.5,0 ) ; ( i2 ) node[near end , above] + ( 0,0.5 ) ; ( i2 ) node[near end , below] + ( 0,-0.5 ) ; ( i1r ) ( i2l ) ; ( c3 ) [ draw ] at ( 4,-2) ; ( c3 ) node[near end , left](c3l) + ( -0.5,0 ) ; ( c3 ) node[near end , right](c3r) + ( 0.5,0 ) ; ( c3 ) node[near end , above] + ( 0,0.5 ) ; ( c3 ) node[near end , below] + ( 0,-0.5 ) ; ( i2r ) ( c3l ) ; ( i4 ) [ draw ] at ( 6,-2) ; ( i4 ) node[near end , left](i4l) + ( -0.5,0 ) ; ( i4 ) node[near end , right] + ( 0.5,0 ) ; ( i4 ) node[near end , above] + ( 0,0.5 ) ; ( i4 ) node[near end , below] + ( 0,-0.5 ) ; ( c3r ) ( i4l ) ; it is possible to simulate non - homogeneous systems using mps and mpo .such a non - homogeneity could be different spin sizes in a spin chain or the presence of both fermionic and bosonic sites in the system ( the case of non - homogeneous hopping between otherwise identical sites will be handled later in section [ sec : arithmetic ] ) .the former case of non - homogeneity can be used to represent some 
experimental systems with alternating and spins as well as reduce finite - size effects in spin chains by placing spins at the two edges .the latter case might be helpful in simulating physical systems with bosonic and fermionic species , as they commonly occur in experiments with ultracold atoms .suppose we have two types of sites in our system .even sites may contain zero , one or two fermions , while odd sites may contain up to a certain number of bosons .if we then wish to construct the fermionic creation operator , we have to ensure that the identities used to its left and right match the corresponding physical basis on those sites .further , if we use quantum numbers for fermion and boson number conservation , the identities to the left need mpo bond indices transforming as .in contrast , if we apply a bosonic creation operator , the bond indices of those identities have to transform as .equal to zero and call the bosonic creators . for spin systems , it is however entirely reasonable to have both on sites ( at the edge ) and sites ( in the bulk ) of the system .[ fig : sso : nonh : ops ] gives examples of those creation operators .this has two implications .first , for every type of sites in the system , we need to define an appropriate active tensor representing ( say ) acting on a site of this type .second , for every active site type on which the operator acts , we also need to store an appropriate left and right identity tensor for all types of sites . thus , if we have different types of sites in our system , we need to store up to rank-4 tensors per single - site operator .however , since these tensors are still only of size , and the number of different types is typically also small , this is not a concern in practice .consider the example of a spin chain with spins in the bulk and two spins at the boundaries . to construct on the fly ,we need to store ten rank-4 tensors : first , we need to store two tensors representing acting on sites with and .second , for each of these two , we need to store two left - identities which we place on sites with and respectively to the left of site .similarly , we need a total of four right - identities to be placed on sites to the right of site with and respectively for a total of ten tensors of size ; in this specific case requiring the storage of 55 scalar values in total . for sites , requiring 4 numbers each .the other five are of size for sites with 9 scalar entries .we hence need to store scalar values to represent on any site . ]the implementation of arithmetic operations with mpos is well - known already and is entirely independent of the specific form of the operands .in particular , the implementation can handle single - site operators as constructed in the previous section and mpos resulting from earlier arithmetic operations on equal footing .\(r ) [ draw ] at ( -1.7,0) ; ( r ) + + ( 0,0.5 ) ; ( r ) + + ( 0,-0.5 ) ; ( r ) + + ( 0.6,0 ) ; ( r ) + + ( -0.6,0 ) ; at ( -0.8,0 ) ; \(a ) [ draw ] at ( 1,0.5) ; ( b ) [ draw ] at ( 1,-0.5) ; ( a ) + + ( 0,0.5 ) ; ( b ) + + ( 0,-0.5 ) ; ( 0,0 ) circle ( 3pt ) node ( l ) ; ( 2,0 ) circle ( 3pt ) node ( rm ) ; \(a ) ( b ) ; \(b ) -| ( l ) ; ( a ) -| ( l ) ; ( l ) ( -0.4,0 ) ; ( rm ) |- ( b ) ; ( rm ) |- ( a ) ; ( 2.4,0 ) ( rm ) ; given two operators , and their mpo representation tensors and , the product ( read from right - to - left , is applied first ) can be built on each site individually .it is graphically represented in fig .[ fig : arithmetic : prod ] . 
the lower physical index of each is contracted with the upper physical index of the corresponding .the left and right mpo indices of the tensors are merged into one fat index .this procedure results in a mpo with bond dimensions .specifically , the product of two single - site operators ( mpo bond dimension 1 ) is again a mpo with bond dimension 1 .the scalar products of operators occuring during the implementation of non - abelian symmetries in tensor networks can similarly be implemented independently of the operator at hand .the sum of two operators , represented by mpo components and can also be constructed .considering only the mpo bond indices , i.e. treating as _ matrices _ of operators , the components of the resulting mpo are built as follows : for the example of a mpo , it is easy to verify that this results in the desired form representing .the sum of two mpos has a bond dimension .when constructing the mpo representation of a single - site operator as described in section [ sec : sso ] , the resulting operator will have bond dimension 1 and will be in its most efficient representation. products of such single - site operators ( such as ) will keep the bond dimension at 1 .however , the bond dimension will grow linearly in the number of such terms that are added together .naively , a four - term interaction mpo representing will have a maximal bond dimensions .the leading term in the computational cost of dmrg typically scales linearly in the maximal and linearly in , but there are sub - leading terms of quadratic order in .hence , some way to avoid this quintic or even decic scaling is absolutely necessary . _ compressing _ a mpo will in general reduce its bond dimension to the bare minimum . for example , the sum of two identical mpos will have a doubled bond dimension which is obviously not necessary a prefactor of multiplied into the first tensor would correspond to the same operator .similarly , two addends with long strings of identities to the left and right of the active sites , such as can easily `` share '' these strings such that the most efficient mpo has bond dimension 1 everywhere but on bond , where 2 is the minimum required .the compression methods presented here for mpos are based on the same idea as those for mps : given a mpo which has components and on sites and , it is possible to rewrite without changing the mpo itself . for some mpo components ,it is possible to find matrices with .the new tensors then have a smaller bond dimension while representing the same original mpo , as only the matrix product of and or and is relevant for the operator .this is entirely analogous to mps , which also offer this gauge freedom and where it is also possible to use it in order to compress the size of the mps .it must be stressed that the compression methods presented here work iteratively on a bond - by - bond basis and can not find a globally different ( but better ) mpo representation .however , for mpos investigated here , we are still able to recover the optimal representation in most cases and a near - optimal representation even for extremely difficult problems . for the latter , it would be possible to combine the compression methods here with others , such as an iterative fitting method. 
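to make the index bookkeeping of the mpo product and sum described earlier in this section explicit , the following numpy sketch operates on a mpo stored as a list of rank-4 tensors ; the index ordering ( left bond , right bond , outgoing physical , incoming physical ) is a convention chosen only for this sketch , quantum - number labels and sparsity are ignored , and equal lengths and local dimensions of the two operands are assumed .

```python
import numpy as np

# convention for this sketch: W[left bond, right bond, s_out, s_in]

def mpo_product(A, B):
    """Site-wise product of two MPOs, C = A * B (B acts first)."""
    C = []
    for WA, WB in zip(A, B):
        # contract A's lower (incoming) physical leg with B's upper (outgoing) leg
        W = np.einsum('abst,cdtu->acbdsu', WA, WB)
        la, lb, ra, rb, d1, d2 = W.shape
        C.append(W.reshape(la * lb, ra * rb, d1, d2))   # merge the bond legs
    return C

def mpo_sum(A, B):
    """Site-wise sum of two MPOs via the block construction."""
    C, L = [], len(A)
    for i, (WA, WB) in enumerate(zip(A, B)):
        la, ra, d, _ = WA.shape
        lb, rb, _, _ = WB.shape
        if i == 0:            # first site: row of blocks (A  B)
            W = np.concatenate([WA, WB], axis=1)
        elif i == L - 1:      # last site: column of blocks (A ; B)
            W = np.concatenate([WA, WB], axis=0)
        else:                 # bulk: block-diagonal embedding
            W = np.zeros((la + lb, ra + rb, d, d), dtype=np.result_type(WA, WB))
            W[:la, :ra] = WA
            W[la:, ra:] = WB
        C.append(W)
    return C
```

as described above , the product multiplies and the sum adds the bond dimensions on every bond , which is why the subsequent compression step is essential .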
the singular value decomposition of mpos has been proposed before and in infinite - precision arithmetic it would work exactly the same as for mps : given a tensor , the indices and are combined into a larger index , yielding the matrix .this matrix is decomposed via svd as .columns of and rows of which correspond to negligible singular values in are removed . is reshaped into the compressed tensor .the product acts as a transfer matrix and is multiplied into the next tensor on the right , compressing the dimension of the mpo on bond .sweeping left - to - right and right - to - left through the mpo compresses all bonds . unfortunately , a straightforward svd yields extremely large singular values . the issue can be observed in fig . [fig : examples : svd ] ( labels `` standard '' ) . given an uncompressed fermi - hubbard hamiltonian with some finite range interaction on systems of length equal to 20 , 40 and 80 ,we compress the left and right halves and then calculate the singular value spectrum in the centre of the system . with increasing system size , we observe singular values growing as large as . at the same time, the numerical noise , singular values normally discarded , grows as large as !the magnitude of the singular values is linked to the fact that while we can always rescale a mps to have frobenius norm 1 , an operator will in principle have a system - size dependent frobenius norm .normalising all tensors but one , as is common for svd , implies that only this one tensor will carry the full norm of the operator .this leads to two problems : first , there is difficulty in decididing which singular values should be kept , as even the singular values strictly associated to numerical noise become extremely large .second , compared to the normalised tensors to its left and right , the entries of the singular value tensor will have a grossly different order of magnitude , resulting in great precision loss during subsequent operations .to avoid such large singular values , we can _ rescale _ the singular value tensor by a scalar value . while this destroys the ortho__normality _ _ of the resulting mpo bond basis , it preserves the ortho__gonality__. further , since all basis vectors are still of the same length , compression can proceed as usual , either based on a sharp cut - off or on a dynamic detection of the drop - off in magnitude of singular values ( cf .[ fig : examples : svd ] ) .lastly , properly chosen , such a rescaling can most often ensure that the norm of the operator is evenly distributed throughout its length , rather than concentrated in a single place . in practice, we found it helpful to calculate the arithmetic average of the singular values in the tensor and rescale such that this average is of order one .the tensor is multiplied with the inverse of the scaling factor to preserve the overall norm . to minimize numerical instabilities , it is advisable to choose the power of two closest to as the scaling factor , since such multiplications are exact with ieee-754 floating point numbers . with this rescaling after each svd during the compression sweeps , we observe singular values of magnitudes between 1 and 100 independent of the system length and numerical noise clearly recognisable as such of magnitude or smaller . however , there are still caveats and counterindications against using svd in specific cases , primarily concerning mpo representations of projectors or sums of operators involving projectors . 
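before turning to these caveats , a minimal sketch of one rescaled truncation step ( left - to - right ) may help fix the procedure described above ; the index convention follows the arithmetic sketch earlier , the grouping of legs , the cut - off criterion and the exact way the scale factor is distributed between the kept tensor and the transfer matrix are illustrative choices consistent with the text , and quantum - number block structure is again ignored .

```python
import numpy as np

def svd_compress_bond(W, W_next, tol=1e-12):
    """One rescaled-SVD step on the bond between W and W_next (left-to-right sweep).

    W, W_next: rank-4 tensors with index order (left bond, right bond, s_out, s_in).
    """
    Dl, Dr, d1, d2 = W.shape
    # group (left bond, physical) legs as rows, the right bond as columns
    M = W.transpose(0, 2, 3, 1).reshape(Dl * d1 * d2, Dr)
    U, S, Vh = np.linalg.svd(M, full_matrices=False)

    keep = S > tol * S[0]              # discard numerically negligible singular values
    U, S, Vh = U[:, keep], S[keep], Vh[keep]

    # rescale so the average kept singular value is of order one; a power of two
    # close to the mean keeps the rescaling exact in IEEE-754 arithmetic
    # (assuming a nonzero spectrum)
    scale = 2.0 ** np.round(np.log2(S.mean()))
    W_new = (scale * U).reshape(Dl, d1, d2, -1).transpose(0, 3, 1, 2)
    transfer = np.diag(S / scale) @ Vh

    # absorb the transfer matrix into the next site tensor (its left bond)
    W_next_new = np.einsum('ab,bcst->acst', transfer, W_next)
    return W_new, W_next_new
```

the kept tensor thus carries orthogonal but no longer normalised bond vectors , while the moderately sized singular values travel to the right , which is the even norm distribution aimed for in the text .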
first , when attempting to compress a suboptimal representation of a projector , svd even with rescaling often struggles to properly distribute the norm throughout the system .for example , given the projector on the heisenberg chain of lenght , a svd compression will lead to exponentially large terms in the first and last tensor , with the ( otherwise properly compressed ) terms in the bulk all carrying a prefactor .second , when attempting to evaluate sums of operators with greatly varying frobenius norms , svd will often entirely discard the smaller operator .this is not a concern for most hamiltonians , as they are built from few - body interaction terms all with roughly the same order of magnitude .however , when evaluating in the above system , the result from svd is simply .this can be understood since while . in a similar fashion ,if svd were tasked with the compression of the sum of two mps , one of norm and the other of norm 1 , the result would also simply be the larger of the two states , as soon as the difference in the two is lost in the numerical noise of order .both problems can be detected reliably : for the first , it is sufficient to compare e.g. the norm of each mpo component : if one or two ( i.e. at the edges of the system ) is much greater than in the bulk , svd failed to properly distribute the norm . for the second , it is sufficient to compare the frobenius norms of addends before operator addition , if in doubt . as a rule of thumb ,the frobenius norm is exponential in the number of identity mpo components .it is hence possible to sum few - body interaction terms together ( as they typically occur in hamiltonians or correlators ) or _ alternatively _ sum `` few - identity '' terms ( such as projectors ) together .however , this rule only applies to sums of mpos , not products of mpos .special care must be taken in those cases and it must be checked carefully whether the errors introduced by svd are acceptable relative to the problem at hand .the svd method has the disadvantage that it destroys the extreme sparsity of the usual mpo tensors and relies on a robust and small window of singular values encountered in the mpo .employing quantum number labels reduces this destruction of sparsity to the scope of individual blocks , which will often be implemented as dense tensors in any case . however , in particular for simple homogenous operators , it is desirable to keep the sparse , natural structures of mpo .it is furthermore sometimes also necessary to compress mpos with greatly varying singular values .the much simpler deparallelisation method avoids both issues entirely : sparsity is largely conserved and the compression does not rely on singular values .it furthermore does not rescale most elements of the tensor , keeping the norm distributed in the same way as before .it was first presented in ref . and can be considered a slight generalisation from the _ fork - merge _ method presented in ref . 
, from `` forking '' and `` merging '' only identity operators to arbitrary strings of operators .the algorithm is presented in detail in appendix [ app : depara ] .the basic idea is again to re - shape each site tensor into a matrix .then , columns of which are entirely parallel to any previous column are removed , with the respective proportionality factor stored in the transfer matrix to be multiplied into the next site tensor .this procedure results in a mpo that is often optimal for spatially homogeneous operators and retains the advantageous structure of analytically - constructed mpo tensors . for more difficult hamiltonians, it often results in suboptimal representations .the delinearisation method aims to combine the advantages of the svd and the deparallelisation .it is suitable to compress any mpo , including the previously - mentioned sums of projectors and hamiltonians as well as complicated hamiltonians . in most cases , it results in an optimal mpo dimension . for extremely large mpos ,the resulting bond dimensions tend to be slightly larger than with svd compression . however , the original sparsity of the mpo is largely preserved , even in the dense sub - blocks , is the most relevant scaling dimension .] labelled by quantum numbers . wherever possible , it attempts to ensure that no spurious small terms can occur in the hamiltonian .the algorithm is presented in full detail in appendix [ app : delin ] .similar to the deparallelisation , we attempt to remove columns from the matrix , but now allow for linear combinations of previously - kept columns to replace the column in question , whenever possible under the constraint that no cancellation to exactly - zero can occur ( this avoids the spurious small terms ) .we will present three examples of mpo generation using the above construction method .first , we show that it works well for the simple example of nearest - neighbour interactions on spin chains and even for small powers of the hamiltonian .second , we explain that it is very easy to generate the hamiltonian for the fermi - hubbard model on a cylinder in hybrid real- and momentum space .third , we present data that the construction method also correctly sums up partial terms in a toy model for the full quantum chemistry hamiltonian ., , and to illustrate the generic case .the leftmost and rightmost bonds have dimension four , whereas the bulk bond dimension is five as in the analytical solution . ]we consider the hamiltonian with nearest - neighbour interactions on a spin chain we can construct analytically an optimal representation of this hamiltonian, which has mpo bond dimension 5 . in comparison , we can plot the dimension of each bond of the numerically constructed representation for various system sizes ( cf . fig .[ fig : example : spin : w ] ) . as is clearly visible , the bond dimension quickly saturates at five and stays constant independent of the system length .the algorithm even finds an improvement over the usual analytic solution , as only one term is necessary at the boundary . in the bulk, it completely reproduces the analytic solution , here at the example of : further , we can construct powers of the hamiltonian , here specifically with coefficients , , , .the procedure is to first generate using only deparallelisation , which leads to the near - analytic solution at bond dimension 5 .we then multiply the mpo with itself to generate and compress the operator using svd or delinearisation . 
multiplying with repeatedly, we construct up to the seventh power of and compare the bond dimensions with those resulting from an iterative fitting procedure ( cf .table [ tab : example : spinchain ] ) . for small powers ,the resulting bond dimensions from the three compression methods coincide . for higher powers , the svd method results in somewhat lower bond dimensions. this could be both due to numerical inaccuracies in either method ( e.g. erroneously discarding small but relevant singular values ) or the fitting approach getting stuck in a local minimum .to numerical accuracy , the error resulting from the svd compression is zero , however . in comparison, the delinearisation method encounters cyclic linear dependencies it can not break when attempting to compress the higher power mpo representations .this results in a larger bond dimension .however , the original sparsity of the mpo , calculated as the relative number of exactly - zero entries in the dense sub - blocks of the mpo , is largely preserved at over 80% zero entries while no such entries where found after svd compression ..[tab : example : spinchain]bond dimensions in the center of a chain of powers of the nearest - neighbour spinchain hamiltonian with svd and delinearisation compression. relative sparsity of the resulting mpo is included for the delinearisation method ( svd does not preserve sparsity at all ) .we compare with the results of frwis et . for the xxz hamiltonian constructed with an iterative fitting procedure , which could also be combined with our construction method for mpos . [ cols="<,^,^,^,^,^,^,^",options="header " , ] c. hubig acknowledges funding through the exqm graduate school and the nanosystems initiative munich .i. mcculloch acknowledges support from the australian research council ( arc ) centre of excellence for engineered quantum systems , grant ce110001013 . and the arc future fellowships scheme , ft140100625 .mpo compression of an arbitrary operator should occur in three stages : 1 . performing one full sweep using the deparallelisation method 2 . performing sweeps using the strict delinearisation method until bond dimensions stay constant 3 . performing sweeps using the relaxed delinearisation method until bond dimensions stay constantthe motivation for this sequence is to firstly reduce the bond dimension as much as possible with the fairly cheap deparallelisation , then move on to the more costly delinearisation and finally , if a cyclic dependency occurs which can not be broken without allowing cancellation to zero , use the relaxed delinearisation .note that if the mpo is already optimal , the last step will not introduce such small terms .independent of the compression method , each full sweep iterates twice over the full system , once from left to right and then from right to left . on each site , the local tensor is re - shaped into a matrix during left - to - right ( right - to - left ) sweeps .the matrix is then decomposed as . is re - shaped into the new site tensor with the transfer matrix being multiplied into the next site tensor ( ) during left - to - right ( right - to - left ) sweeps .the decomposition is described in the following sections for the deparallelisation and delinearisation methods ._ input _ : matrix _ output _ : matrices , s.t. and has at most as many columns as and no two columns which are parallel to each other ._ procedure _ :1 . let be the set of kept columns , empty initially 2 .let be the dynamically - resized transfer matrix 3 . 
for every column index ] : 1 .if the -th column is parallel to column : 1 . set to the prefactor between the two columns 2 .otherwise : 1 . add to , set .4 . construct by horizontally concatenating the columns stored in .return and the check for parallelicity is ideally done on an element - wise basis by finding the first non - zero element of either column , calculating the factor between it and the corresponding element of the other column and then ensuring that all other elements agree on that prefactor .zero columns should be removed with a corresponding zero column stored in ._ input _ : matrix , threshold matrix _ output _ : matrices , s.t. and has at most as many columns as and all columns in are linearly independent . _remark _ : initially , the threshold matrix is constructed from as , i.e. each element is the 1-norm of the original operator to which it belongs multiplied by a small threshold ._ procedure _ : 1 .[ alg : dln : relaxed ] if relaxed delinearisation : set all elements to .[ alg : dln : inner : start ] deparallelise the rows of : where the elements of are chosen as the smallest elements in that column from non - zero rows which were parallel to the kept row .[ alg : dln : permutation ] sort the columns of according to the following criteria , resulting in , and a permutation matrix .sorting criteria are : 1 . the number of exactly - zero values in the column 2 . if tied , the number of exactly - zero thresholds in the same column of 3 . if tied , the number of exactly - zero values from the bottom of the column 4 . if tied , the number of exactly - zero thresholds from the bottom of the same column of 4 . for every column and associated threshold column in and 1 .attempt to solve where is the matrix from eligible previously - kept columns .a column is eligible for inclusion in if it has no non - zero entry in a row where is exactly zero .+ the coefficients are found via qr decomposition with column scaling ( by their respective norms ) .rows of and are scaled s.t .the right - hand side is either 1 or 0 prior to solution by backwards substitution .if any coefficients in have absolute value less than , remove the associated column from the eligible set to build and repeat .3 . if any coefficients in are close to , replace them by .if each element of the residual is smaller than : 1 . store the coefficients 5 . else , 1 . add the column to the set of kept columns and store a coefficient of 1 in the appropriate place .5 . collect all kept columns into , associated columns from into and construct the transfer matrix from the stored coefficients times the permutation matrix .[ alg : dln : inner : end ] multiply the row - deparallelisation transfer matrix back into and , yielding and .[ alg : dln : reset ] if the number of columns in is equal to the number of columns in , replace , , . 8 .[ alg : dln : rows ] repeat steps [ alg : dln : inner : start ] through [ alg : dln : inner : end ] for and ( i.e. delinearise the _ rows _ of ) : 9 . 
[alg : dln : reset2 ] if neither nor have fewer columns than 1 .return and .[ alg : dln : rows2 ] else - if has fewer columns than , 1 .return , , 11 .else , 1 .return , _ remark _ : during matrix - matrix products , it is helpful and often necessary to set elements of for which is true to zero .this ensures that where we allow cancellation to zero , we do not introduce additional terms whenever possible .step [ alg : dln : relaxed ] removes the requirement that we can not allow cancellation to zero .step [ alg : dln : inner : start ] usually halves the number of rows of , as there are often many zero rows or rows parallel to previous ones , making the subsequent qr decompositions both faster and more accurate .step [ alg : dln : permutation ] sorts columns such that those with few non - zero entries are considered first while attempting to keep an upper - triangular form .the former helps to find optimal non - cancelling linear superpositions , while the latter attempts to restore the usually - preferred triangular form whenever possible . steps [ alg : dln : reset ] and [ alg : dln : reset2 ] reduce numerical errors by reverting to the input matrix if no improvements have been found .finally , steps [ alg : dln : rows ] and [ alg : dln : rows2 ] often help to break cyclic dependencies and achieve optimal compression .the relevant three threshold values are , with the machine precision : * : during delinearisation , a new column has to be equal to the original one to within this value , relative to operator norms . in practice , we found to be a suitable value , as columns are usually either completely dependent ( with very small error ) or differ substantially ( with very large error ) .too small a threshold will lead to failure to optimise in some cases , as numerical noise may become relatively large during a long calculation .* : the delinearisation method is able to work with operators of very different orders of magnitude in the same mpo . in turn , this means that small terms are not automatically discarded as with svd .this implies that during the various matrix - matrix products encountered during mpo compression , special care has to be taken to avoid introducing artifical small terms . in practice, we found to work . * : this threshold serves to avoid small coefficients in the transfer matrix , which would lead to valid small coefficients in the next tensor . 
for most sensible operators , coefficients should be of order one and if this is not possible , it may well be desired to keep the components separate rather than conflating them into a single column . our implementation uses a value of here .
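the column deparallelisation used at the start of the procedure above can be sketched in a few lines . the following is a minimal numpy illustration under our own naming ( deparallelise_columns , tol ) and with a plain absolute tolerance instead of the operator - norm - scaled thresholds described above ; it is a sketch of the idea , not of the reference implementation . it keeps one representative of every family of mutually parallel columns , records the prefactors in a transfer matrix and maps zero columns to zero columns of that matrix .

```python
import numpy as np

def deparallelise_columns(A, tol=1e-14):
    """Return (M, T) with A ~= M @ T, where M keeps one representative of
    every family of mutually parallel (non-zero) columns of A and T stores
    the prefactors; zero columns of A get a zero column in T."""
    m, n = A.shape
    kept_cols = []   # retained columns of A
    coeffs = []      # per column of A: (index into kept_cols, prefactor) or None
    for j in range(n):
        col = A[:, j]
        if np.max(np.abs(col)) <= tol:
            coeffs.append(None)              # zero column
            continue
        match = None
        for k, ref in enumerate(kept_cols):
            i0 = int(np.argmax(np.abs(ref) > tol))   # first non-zero element of ref
            factor = col[i0] / ref[i0]
            if np.max(np.abs(col - factor * ref)) <= tol * max(1.0, np.max(np.abs(col))):
                match = (k, factor)
                break
        if match is None:
            kept_cols.append(col.copy())
            match = (len(kept_cols) - 1, 1.0)
        coeffs.append(match)
    M = np.column_stack(kept_cols) if kept_cols else np.zeros((m, 0))
    T = np.zeros((M.shape[1], n))
    for j, match in enumerate(coeffs):
        if match is not None:
            T[match[0], j] = match[1]
    return M, T

# small self-check
A = np.array([[1., 2., 0., 3.],
              [0., 0., 0., 0.],
              [2., 4., 0., 1.]])
M, T = deparallelise_columns(A)
assert np.allclose(M @ T, A)
```

applying the same routine to the transpose gives the row deparallelisation used in the procedure above , and the resulting transfer matrix is then multiplied back into the neighbouring tensors .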
|
matrix product operators ( mpos ) are at the heart of the second - generation density matrix renormalisation group ( dmrg ) algorithm formulated in matrix product state language . we first summarise the widely known facts on mpo arithmetic and representations of single - site operators . second , we introduce three compression methods ( rescaled svd , deparallelisation and delinearisation ) for mpos and show that it is possible to construct efficient representations of arbitrary operators using mpo arithmetic and compression . as examples , we construct powers of a short - ranged spin - chain hamiltonian , a complicated hamiltonian of a two - dimensional system and , as proof of principle , the long - range four - body hamiltonian from quantum chemistry .
|
we consider the following setting in this introductory section .let , , let be a probability space with a normal filtration } ] for all .moreover , let be a smooth globally one - sided lipschitz continuous function with at most polynomially growing derivatives and let be a smooth globally lipschitz continuous function with at most polynomially growing derivatives .in particular , we assume that there exists a real number such that and for all .these assumptions ensure the existence of an up to indistinguishability unique adapted stochastic process \times \omega \rightarrow \mathbb{r}^d ] ( see , e.g. , alyushina , theorem 1 in krylov or theorem 2.4.1 in mao ) .the drift coefficient is the infinitesimal mean of the process and the diffusion coefficient is the infinitesimal standard deviation of the process .our goal in this introductory section is then to efficiently compute the deterministic real number \ ] ] where is a smooth function with at most polynomially growing derivatives .note that this question is not treated in the standard literature in computational stochastics ( see , for instance , kloeden and platen and milstein ) which concentrates on sdes with globally lipschitz continuous coefficients rather than the sde .the computation of statistical quantities of the form for sdes with non - globally lipschitz continuous coefficients is a major issue in financial engineering , in particular , in option pricing . for detailsthe reader is refereed to the monographs lewis , glassermann , higham and szpruch . in order to simulate the quantity on a computer, one has to discretize both the solution process \times \omega \rightarrow \mathbb{r}^d ] and for all , .clearly , this implies the divergence of absolute moments of the euler approximation , i.e. , = \infty ] .the main observation of this article is that the approximation error of the multilevel monte carlo euler method for the sde diverges to infinity .more formally , theorem [ thm : conj ] below implies - \frac{1}{n } \sum _ { k = 1 } ^ { n } \big ( y_1^ { 1 , 0 , k } \big)^ { \ !2 } - \sum _ { l = 1 } ^ { \log_2 ( n ) } \frac { 2^l } { n } \left ( \sum _ { k = 1 } ^ { \frac { n } { 2^l } } \big ( y _ { 2^l } ^ { 2^l , l , k } \big)^ { \ !2 } - \big ( y _ { 2^ { ( l-1 ) } } ^ { 2^ { ( l-1 ) } , l , k } \big)^ { \ ! 2 } \right ) \ ! \right|= \infty\ ] ] -almost surely . with . ,width=495 ] note that the multilevel monte carlo euler method diverges on an event that is not rare but has _probability one_. thus in contrast to classical monte carlo simulations the multilevel monte carlo euler method is very sensitive to the rare events on which euler s method diverges in the sense of theorem [ thm : ee_divergence ] below . to visualize the divergence , figure [ f : five.sigma1 ] depicts four random sample paths of the approximation error of the multilevel monte carlo euler method for the sde with and shows explosion even for small values of .we emphasize that we are only able to establish the divergence for the simple sde . 
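the structure of the multilevel monte carlo euler estimator in the display above is easy to state in code . the sketch below is ours : the drift mu(x) = -x**3 is only an assumed stand - in for the sde of figure [ f : five.sigma1 ] , while the quadratic test function and the level weights 2^l / n follow the display above ; within each level the coarse and fine paths share the same brownian increments ( the coupling indicated by the common superscripts in the display ) .

```python
import numpy as np

def euler_path(x0, mu, sigma, T, n_steps, dW):
    """Euler-Maruyama approximation of X_T driven by the given Brownian increments."""
    y, h = x0, T / n_steps
    for k in range(n_steps):
        y = y + mu(y) * h + sigma * dW[k]
    return y

def mlmc_euler(f, mu, sigma, x0, T, N, rng):
    """Multilevel Monte Carlo Euler estimator of E[f(X_T)] as in the display
    above: N single-step samples on level 0 plus, on each level l = 1..log2(N),
    N / 2**l coupled corrections in which the fine (2**l steps) and coarse
    (2**(l-1) steps) paths share the same Brownian increments."""
    L = int(np.log2(N))
    est = np.mean([f(euler_path(x0, mu, sigma, T, 1, rng.normal(0.0, np.sqrt(T), 1)))
                   for _ in range(N)])
    for l in range(1, L + 1):
        n_fine, n_samples = 2 ** l, N // 2 ** l
        h_fine, corr = T / n_fine, 0.0
        for _ in range(n_samples):
            dW_fine = rng.normal(0.0, np.sqrt(h_fine), n_fine)
            dW_coarse = dW_fine.reshape(-1, 2).sum(axis=1)   # same Brownian path on the coarser grid
            y_f = euler_path(x0, mu, sigma, T, n_fine, dW_fine)
            y_c = euler_path(x0, mu, sigma, T, n_fine // 2, dW_coarse)
            corr += f(y_f) - f(y_c)
        est += corr / n_samples
    return est

# assumed illustrative drift with superlinear growth; f(x) = x**2 as in the display
rng = np.random.default_rng(0)
print(mlmc_euler(lambda x: x ** 2, lambda x: -x ** 3, sigma=1.0,
                 x0=1.0, T=1.0, N=2 ** 7, rng=rng))
```

for a cubic - type drift as assumed here , the point of the article is precisely that this estimator fails to converge as n grows ; a single run for moderate n is printed only to show the estimator 's structure .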
even in this simple case, the proof of the divergence is rather involved and requires precise estimates on the speed of divergence of euler s method for the random ordinary differential equation on an appropriate event of instability ; see below for an outline .comparing the convergence result for the monte carlo euler method and the divergence result for the multilevel monte carlo euler method reveals a remarkable difference between the classical monte carlo euler method and the new multilevel monte carlo euler method .the classical monte carlo euler method applies both to sdes with globally lipschitz continuous coefficients and to sdes with possibly superlinearly growing coefficients such as our sde .the multilevel monte carlo euler method , however , produces often completely wrong values in the case of sdes with superlinearly growing nonlinearities .this is particularly unfortunate as sdes with superlinearly growing nonlinearities are very important in applications ( see , e.g. , for applications in financial engineering ) .we recommend not to use the multilevel monte carlo euler method for applications with such nonlinear sdes .nonetheless , the multilevel monte carlo method can be used for sdes with non - globally lipschitz continuous coefficients when being combined with a strongly convergent numerical approximation method .hutzenthaler , jentzen and kloeden , for example , proposed the following slight modification of the euler method .let , , , be defined recursively through and for all and all .following we refer to this numerical approximation as a tamed euler method .additionally , let , , , for and be independent copies of the tamed euler approximations . in theorem [ thm : tamed_convergence ] belowwe then prove convergence of the multilevel monte carlo tamed euler method for all locally lipschitz continuous test functions on the path space whose local lipschitz constants grow at most polynomially .in particular , theorem [ thm : tamed_convergence ] below implies the existence of finite random variables , , such that - \frac{1}{n } \sum _ { k = 1 } ^ { n } f\big ( z_1^ { 1 , 0 , k } \big ) - \sum _ { l = 1 } ^ { \log_2 ( n ) } \frac { 2^l } { n } \left ( \sum _ { k = 1 } ^ { \frac { n } { 2^l } } f\big ( z _ { 2^l } ^ { 2^l , l , k } \big ) - f\big ( z _ { 2^ { ( l-1 ) } } ^ { 2^ { ( l-1 ) } , l , k } \big ) \right ) \right| \leq \frac { c _ { \varepsilon } } { n^ { \left ( \frac{1}{2 } - \varepsilon \right ) } } \ ] ] for all and all -almost surely . to sum it up , the classical monte carlo euler method converges ( see ) , the new multilevel monte carlo euler method , in general , fails to converge ( see ) and the new multilevel monte carlo tamed euler method converges and preserves its striking higher convergence order from the lipschitz case ( see ) .thus , concerning applications , the message of this article is that the multilevel monte carlo euler method appropriately when being applied to sdes with superlinearly growing nonlinearities . this is a crucial difference to the classical monte euler method which has been shown to converge for such sdes and which does not need to be modified .however , when modified appropriately ( see , e.g. 
, ) , the multilevel monte carlo method preserves its strikingly higher convergence order from the global lipschitz case and is significantly more efficient than the classical monte carlo euler method even for such nonlinear sdes .thereby , this article motivates future research in the construction and the analysis of `` appropriately modified '' numerical approximation methods . for the interested reader, we now outline the central ideas in the proof of . for thiswe use the random variables , , , defined by for all , , . then we note for every , and every that is strictly increasing in if and only if .it turns out that increases in double exponentially fast for all , and all ( see lemma [ l : upper.esti ] and corollary [ cor : doublegrowth ] below for details ) .a central observation in our proof of the divergence is then that the behavior of the multilevel monte carlo euler method is dominated by the highest level that produces such double exponentially fast increasing trajectories .more precisely , a key step in our proof of is to introduce the random variables , , by for all . using the random variables , , we now rewrite the multilevel monte carlo euler method in as for all . due to the definition of , , it turns out that the asymptotic behavior of the multilevel monte carlo euler method is essentially determined by the three summands in ( see inequality , estimate and inequalities , in the proof of theorem [ thm : conj ] for details ) . in order to investigate these three summands , we - roughly speaking - quantify the value of the largest summand in each of the three sums in . for thiswe introduce the random variables and for by and for all . using the random variables and for then distinguish between three different cases ( see inequality , inequality and inequalities , below ) .first , on the events , , the middle summand in will be positive with large absolute value and will essentially determine the behavior of the multilevel monte carlo euler approximations ( see estimate for details ) .second , on the events , , the left summand in will be positive with large absolute value and will essentially determine the behavior of the multilevel monte carlo euler approximations ( see inequality for details ) .finally , on the events , , the right summand in will be negative with large absolute value and will essentially determine the behavior of the multilevel monte carlo euler approximations ( see inequalities and for details ) .this very rough outline of the case - by - case analysis in our proof of also illustrates that the multilevel monte carlo euler approximations assume both positive ( first and second case ) as well as negative values ( third case ) with large absolute values .we add that this case - by - case analysis argument in our proof of requires that the probability that the random variables and are close to each other in some sense must decay rapidly to zero as goes to infinity ( see inequality below ) .we verify the above decaying of the probabilities in lemma [ l : an4 ] below which is a crucial step in our proof of .additionally , we add that the level is approximately of order as goes to infinity ( see lemma [ l : an1 ] for the precise assertion ) . 
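the tamed euler scheme referred to above admits a one - line update . the sketch below states the taming in its commonly used scalar form ( the drift increment h*mu(y) is divided by 1 + h*|mu(y)| ) ; the coefficients in the example are assumptions chosen only to have a superlinearly growing drift , and the code is our illustration rather than the authors ' reference implementation .

```python
import numpy as np

def tamed_euler_path(x0, mu, sigma, T, n_steps, rng):
    """One path of the tamed Euler scheme: the drift increment h*mu(Y_n) is
    divided by (1 + h*|mu(Y_n)|), so it stays bounded while the scheme
    remains fully explicit."""
    h, y = T / n_steps, x0
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h))
        y = y + (h * mu(y)) / (1.0 + h * abs(mu(y))) + sigma(y) * dW
    return y

# assumed illustrative coefficients with a superlinearly growing drift
rng = np.random.default_rng(1)
print(tamed_euler_path(1.0, mu=lambda x: x - x ** 3, sigma=lambda x: 1.0,
                       T=1.0, n_steps=1000, rng=rng))
```

the tamed drift increment is bounded by one in absolute value for every step size , which is what rules out the double - exponential growth exploited in the divergence argument above .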
in view of the above case - by - case analysis of the multilevel monte carlo euler method , we find it quite remarkable to observe that the essential behaviour of the multilevel monte carlo euler method in is determined by the levels around the order as goes to infinity .the remainder of this article is organized as follows .theorem [ thm : ee_divergence ] in section [ sec : divergence_ee ] slightly generalizes the result on strong and weak divergence of the euler method of hutzenthaler , jentzen and kloeden .convergence of the monte carlo euler method is reviewed in section [ sec : divergence_mce ] .the main result of this article , i.e. , divergence of the multilevel monte carlo euler method for the sde , is presented and proved in section [ sec : multilevel.pathwise.divergence ] .we believe that the multilevel monte carlo euler method diverges more generally and formulate this as conjecture [ conj ] in section [ sec : divergence_mlmce ] .section [ sec : tamed_convergence ] contains our proof of almost sure and strong convergence of the multilevel monte carlo tamed euler method for all locally lipschitz continuous test functions on the path space whose local lipschitz constants grow at most polynomially .throughout this section assume that the following setting is fulfilled .let , let be a probability space with a filtration } ] be a one - dimensional standard } ] or that there exists a real number such that \geq \beta^ { ( -x^\beta ) } ] and for all and all .in particular , the euler approximations satisfy = \infty ] .then there exists a real number such that \geq \beta^ { \left ( - ( n x ) ^\beta \right ) } ] for all and all .then there exists a real number and a sequence of nonempty events , , such that \geq \theta^ { \left ( - n^\theta \right ) } ] for all .define real numbers , , by for all .we also use the function defined by for all and by for all .furthermore , we define events , , by for all . in particular , the definition of implies for all , and all . in the next steplet and be arbitrary .we then claim for all .we now show by induction on .the base case follows from definition of . for the induction step assume that holds for one .in particular , this implies moreover , definition , the triangle inequality and equation yield and the estimate for all with , inequality and definition therefore show the induction hypothesis hence yields inequality thus holds for all , and all .in particular , we obtain for all and all . additionally , lemma 4.1 in yields \\ & = \mathbb{p}\!\left [ \left ( w _ { \frac { ( n + 1 ) t } { n } } - w _ { \frac { n t } { n } } \right ) \geq \frac { t } { n } \right ] = \mathbb{p}\!\left [ w _ { \frac { t } { n } } \geq \frac { t } { n } \right ] = \mathbb{p}\!\left [ t^ { - \frac{1}{2 } } w_t \geq \sqrt { \frac { t } { n } } \right ] \geq \frac { e^ { -\frac { t } { n } } \sqrt { t } } { 8 \sqrt { n } } \nonumber\end{aligned}\ ] ] -almost surely for all and all . therefore , we obtain & = \mathbb{p}\bigg [ \big| y _ { n_0 } ^n \big| \geq \left ( r_n \right)^ { \left ( \left ( \frac { \alpha + 1 } { 2 } \right)^ { n_0 } \right ) } \ ! \bigg ] \cdot \bigg ( \mathbb{p}\!\left [ t^ { - \frac{1}{2 } } w_t \geq \sqrt { \tfrac { t } { n } } \right ] \bigg)^ { \!\ ! 
( n - n_0 ) } \\ &\geq \beta^ { \big ( - \left ( n r_n \right)^ { \left ( \left ( \frac { \alpha + 1 } { 2 } \right)^ { n_0 } \beta \right ) } \big ) } \cdot \bigg ( \mathbb{p}\!\left [ t^ { - \frac{1}{2 } } w_t \geq \sqrt { \tfrac { t } { n } } \right ] \bigg)^ { \!\ !n } \geq \beta^ { \big ( - \left ( n r_n \right)^ { \left ( \left ( \frac { \alpha + 1 } { 2 } \right)^ { n_0 } \beta \right ) } \big ) } \cdot \left ( \frac { e^ { -\frac { t } { n } } \sqrt { t } } { 8 \sqrt { n } } \right)^ { \!\ ! n } \\ & \geq e^ { - t } \cdot \beta^ { \big ( - \left ( n r_n \right)^ { \left ( \left ( \frac { \alpha + 1 } { 2 } \right)^ { n_0 } \beta \right ) } \big ) } \cdot \left ( \frac { \sqrt { t } } { 8 \sqrt { n } } \right)^ { \!\ ! n } \end{aligned}\ ] ] for all .this shows the existence of a real number such that \geq \theta^ { \left ( - n^ { \theta } \right ) } \ ] ] for all .combining and finally gives \geq \lim _ { n \rightarrow \infty } \mathbb{e}\big [ 1 _ { \omega_n } \ ! \left|y_n^n \right|^p \big ] \geq \lim _ { n \rightarrow \infty } \left ( \mathbb{p}\big [ \omega_n \big ] \cdot c^ { \left ( p \ , \cdot \left ( \frac { \alpha + 1 } { 2 } \right)^ { n } \right ) } \right ) \geq \lim _ { n \rightarrow \infty } \left ( \theta^ { \left ( - n^ { \theta } \right ) } \cdot c^ { \big ( p \ , \cdot \left ( \frac { \alpha + 1 } { 2 } \right)^ { n } \big ) } \right ) = \infty\ ] ] for all . this , and then complete the proof of theorem [ thm : ee_divergence ] .the monte carlo euler method has been shown to converge with probability one for one - dimensional sdes with globally one - sided lipschitz continuous drift and globally lipschitz continuous diffusion coefficients ( see ) .the monte carlo euler method is thus _ strongly consistent _ ( see , e.g. , nikulin , cramr or appendix a.1 in glassermann ) . after having review this convergence result of the monte carlo euler method , we complement in this section this convergence result with the behavior of moments of the monte carlo euler approximations for such sdes .more precisely , an immediate consequence of theorem [ thm : ee_divergence ] is corollary [ thm : mce_divergence ] below which shows for such sdes that the monte carlo euler approximations diverge in the strong -sense for every .we emphasize that this strong divergence result does not reflect the behavior of the monte carlo euler method in a simulation and it is presented for completeness only . indeed , the events on which the euler approximations diverge ( see theorem [ thm : ee_divergence ] ) are _ rare events _ and their probabilities decay to zero very rapidly ( see , e.g. , lemma 4.5 in for details ) .this is the reason why the monte carlo euler method is strongly consistent and thus does converge according to ( see also theorem [ thm : mce_convergence ] below ) . throughout this sectionassume that the following setting is fulfilled .let , let be a probability space with a normal filtration } ] , , be a family of independent one - dimensional standard } ] for all .moreover , let be two -measurable mappings such that there exists a predictable stochastic process \times \omega \rightarrow \mathbb{r } ] .the drift coefficient is the infinitesimal mean of the process and the diffusion coefficient is the infinitesimal standard deviation of the process .we then define a family , , , of euler approximations by and for all and all . 
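for comparison with the multilevel estimator , the classical monte carlo euler estimator recalled in the convergence theorem below simply averages the test function over independent euler paths ; the theorem balances n^2 samples against n time steps per path . the sketch below uses placeholder coefficients of our own choosing .

```python
import numpy as np

def monte_carlo_euler(f, mu, sigma, x0, T, n_steps, n_samples, rng):
    """Classical Monte Carlo Euler estimator of E[f(X_T)]: average of f over
    independent Euler-Maruyama paths with n_steps time steps each."""
    h, total = T / n_steps, 0.0
    for _ in range(n_samples):
        y = x0
        for _ in range(n_steps):
            y = y + mu(y) * h + sigma(y) * rng.normal(0.0, np.sqrt(h))
        total += f(y)
    return total / n_samples

# placeholder coefficients; the theorem recalled below uses n_steps**2 samples
rng = np.random.default_rng(2)
N = 32
print(monte_carlo_euler(lambda x: x ** 2, lambda x: -x ** 3, lambda x: 1.0,
                        x0=1.0, T=1.0, n_steps=N, n_samples=N ** 2, rng=rng))
```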
for clarity of expositionwe recall the following convergence theorem from .its proof can be found in .[ thm : mce_convergence ] assume that the above setting is fulfilled , let be four times continuously differentiable and let be a real number such that , and for all .then there exist finite /-measurable mappings , , such that - \frac { 1 } { n^2 } \ !\left ( \sum _ { k = 1 } ^ { n^2 } f ( y_n^ { n , k } ) \right ) \ ! \right| \leq \frac { c _ { \varepsilon } } { n^ { \left ( 1 - \varepsilon \right ) } } \ ] ] for all and all -almost surely .in contrast to pathwise convergence of the monte carlo euler method for sdes with globally one - sided lipschitz continuous drift and globally lipschitz continuous diffusion coefficients ( see theorem [ thm : mce_convergence ] above for details ) , strong convergence of the monte carlo euler method , in general , fails to hold for such sdes which is established in the following corollary of theorem [ thm : ee_divergence ] , i.e. , in corollary [ thm : mce_divergence ] .as mentioned above we emphasize that corollary [ thm : mce_divergence ] does not reflect the behavior of the monte carlo euler method in a practical simulation because the events on which the euler approximations diverge ( see theorem [ thm : ee_divergence ] ) are rare events and their probabilities decay to zero very rapidly ( see lemma 4.5 in for details ) .[ thm : mce_divergence ] assume that the above setting is fulfilled and let be real numbers such that for all with .moreover , assume that > 0 ] for all .moreover , let be -measurable with latexmath:[ ] . in the case = \infty ] and this implies in the case = \infty ] be the unique stochastic process with continuous sample paths which fulfills the sde for ] .the real number ] .the sample paths clearly diverge even for small .for some other sdes , however , pathwise divergence does not emerge for small .for example , let us choose a standard deviation as small as in where . herethe exact value is \approx 0.009971 ] but diverge for larger values of ( see figure [ f : five.sigma033 ] for four sample paths ) ., , and . , width=495 ] first of all , we introduce more notation in order to prove theorem [ thm : conj ] .let , , , , be defined recursively through and for all , and all and let be fixed for the rest of this section .this notation enables us to rewrite the multilevel monte carlo euler approximation in as for all .additionally , let be defined as for every .furthermore , define and by and for every .moreover , we define the mappings by and by for all . additionally , we fix a real number for the rest of this section . in the next stepthe following events are used in our analysis of the multilevel monte carlo euler method .let , , , , , be defined by for all .additionally , define and by and by for all .next we prove a few lemmas that we use in our proof of theorem [ thm : conj ] .[ l : stability ] assume that the above setting is fulfilled .then we have for all and all . fix and .we prove by induction on .the base case is trivial . for the induction step , note that the induction hypothesis implies for all completes the proof of lemma [ l : stability ] .[ l : instability ] assume that the above setting is fulfilled .then we have for all , and all .in particular , we have for all , and all . fix and .we prove by induction on .the base case is trivial . 
for the induction step , note that the induction hypothesis implies for all completes the induction .the assertion then immediately follows by taking absolute values in .[ l : upper.esti ] assume that the above setting is fulfilled .then we have for all , and all .fix and .we prove by induction on .the base case is trivial . for the induction step , note that lemma [ l : instability ] and the induction hypothesis imply for all completes the proof of lemma [ l : upper.esti ] .[ l : monotonicity.y ] assume that the above setting is fulfilled .then we have for all , all satisfying , and all . fix and with , .we prove by induction on .the base case is trivial . for the induction step , note that lemma [ l : instability ] and the induction hypothesis imply for all completes the proof of lemma [ l : monotonicity.y ] .[ l : initial.multiple ] assume that the above setting is fulfilled .then we have for all , , and all .fix .we prove by induction on .the base case is trivial . for the induction step , note that lemma [ l : instability ] and the induction hypothesis imply for all , and all completes the proof of lemma [ l : initial.multiple ] .[ cor : doublegrowth ] assume that the above setting is fulfilled .then we have for all , , and all .lemma [ l : monotonicity.y ] , lemma [ l : initial.multiple ] and imply for all , , and all .this completes the proof of corollary [ cor : doublegrowth ] .[ l : lower.esti ] assume that the above setting is fulfilled .then we have for all , and all .we apply the inequality for all ] for all ( e.g. , lemma 22.2 in ) imply = { \ensuremath{\mathbb{p}}}\!\left [ \exists \ , l \in \ { 0 , 1 , 2 , \ldots , { \ensuremath{\operatorname{ld}}}(n ) \ } \ , \exists \ , k \in \ { 1 , 2 , \dots , \tfrac{n}{2^l } \ } \colon | \xi^{l , k } | \geq 2^ { \frac { ( l-1 ) } { 4 } } t^ { - \frac{1}{4 } } n \right ] \leq \sum_{l=0}^{{\ensuremath{\operatorname{ld}}}(n ) } \sum_{k=1}^{\frac{n}{2^l } } { \ensuremath{\mathbb{p}}}\!\left [ | \xi^{l , k } | \geq 2^ { \frac { ( l-1 ) } { 4 } } t^ { - \frac{1}{4 } } n \right ] \\ & = \sum_{l=0}^{{\ensuremath{\operatorname{ld}}}(n ) } \frac{n}{2^l } \cdot { \ensuremath{\mathbb{p}}}\!\left [ | \xi^{0,1 } | \geq 2^ { \frac { ( l-1 ) } { 4 } } t^ { - \frac{1}{4 } } n \right ] = \sum_{l=0}^{{\ensuremath{\operatorname{ld}}}(n ) } \frac{n}{2^l } \cdot { \ensuremath{\mathbb{p}}}\!\left [ { \ensuremath{\bar{\sigma}}}^{-1}| \xi^{0,1 } | \geq \frac { 2^ { \frac { ( l-1 ) } { 4 } }n } { { \ensuremath{\bar{\sigma}}}t^ { \frac{1}{4 } } } \right ] \leq \sum_{l=0}^{{\ensuremath{\operatorname{ld}}}(n ) } \frac { n } { 2^l } \cdot \frac { { \ensuremath{\bar{\sigma}}}t^ { \frac{1}{4 } } } { 2^ { \frac { ( l-1 ) } { 4 } } n } \exp\left ( - \frac { 2^ { \frac { ( l-1 ) } { 2 } } n^2 } { 2{\ensuremath{\bar{\sigma}}}^2 t^ { \frac{1}{2 } } } \right ) \\ & \leq \sum_{l=0}^{{\ensuremath{\operatorname{ld}}}(n ) } \frac { { \ensuremath{\bar{\sigma}}}t^ { \frac{1}{4 } } } { 2^ { \frac { -1 } { 4 } } } \exp\left ( - \frac { 2^ { \frac { -1 } { 2 } } n^2 } { 2{\ensuremath{\bar{\sigma}}}^2 t^ { \frac{1}{2 } } }\right ) = \left({\ensuremath{\operatorname{ld}}}(n)+1\right ) { \ensuremath{\bar{\sigma}}}2^{\frac{1}{4}}t^ { \frac{1}{4 } } \exp\left ( - \frac { n^2 } { 2^{\frac{3}{2}}{\ensuremath{\bar{\sigma}}}^2 t^ { \frac{1}{2 } } } \right ) \\ \end{split}\ ] ] for all .summing over results in \leq \sum _ { n \in\{2 ^ 1,2 ^ 2,2 ^ 3,\ldots\ } } \left({\ensuremath{\operatorname{ld}}}(n)+1\right ) { \ensuremath{\bar{\sigma}}}2^{\frac{1}{4}}t^ { \frac{1}{4 } } \exp\left ( - \frac 
{ n^2 } { 2^{\frac{3}{2}}{\ensuremath{\bar{\sigma}}}^2 t^ { \frac{1}{2 } } } \right ) < \infty \end{split}\ ] ] and this completes the proof of lemma [ l : an2 ] .[ l : normal.conditional.estimate ] let be a standard normally distributed -measurable mapping .then \leq 5xy\ ] ] for all and all .monotonicity of the exponential function yields = 2\cdot { \ensuremath{\mathbb{p}}}\left[x\leq z < x+y\right ] = 2\int_x^{x+y}\frac{1}{\sqrt{2\pi}}e^{-\frac{z^2}{2}}\,dz \leq\frac{2}{\sqrt{2\pi}}y e^{-\frac{x^2}{2 } } \end{split}\ ] ] for all .apply the standard estimate \geq \frac{x}{1+x^2}\frac{2}{\sqrt{2\pi } } \exp\big(-\tfrac{x^2}{2}\big ) ] for all and all yields \leq \frac{n}{2^{(l_n-1 ) } } \cdot 4\cdot 4^{\left(-2^{(l_n-1)}\right ) } 2^ { \frac { ( l_n-1 ) } { 4 } } t^ { - \frac{1}{4 } } n\cdot{\ensuremath{\bar{\sigma}}}^{-1 } \leq \frac{4n^2}{{\ensuremath{\bar{\sigma}}}t^{\frac{1}{4 } } } 2^{\left(-2^{l_n}\right ) } \end{split}\ ] ] -almost surely for all .now we apply inequality to obtain \leq \mathbb { p } \!\left [ \left\ { \left|\eta_n-\theta_n\right| \leq 4^{\left(-2^{(l_n-1)}\right)}\eta_n \right\ } \cap \left\ { \eta_n < 2^ { \frac { ( l_n-1 ) } { 4 } } t^ { - \frac{1}{4 } } n \right\ } \cap \left ( a _{ n } ^ { ( 1 ) } \right)^{c } \ , \right ] \\ & \leq \mathbb { p } \!\left [ \left\ { \left|\eta_n-\theta_n\right| \leq 4^{\left(-2^{(l_n-1)}\right ) } 2^ { \frac { ( l_n-1 ) } { 4 } } t^ { - \frac{1}{4 } } n \right\ } \cap \left ( a _{ n } ^ { ( 1 ) } \right)^{c } \ , \right ] \\ & = \mathbb { e } \!\left [ { \ensuremath{\mathbbm{1 } } } _ { \left ( a _{ n } ^ { ( 1 ) } \right)^{c } } \cdot \mathbb { p } \!\left [ \left|\eta_n-\theta_n\right| \leq 4^{\left(-2^{(l_n-1)}\right ) } 2^ { \frac { ( l_n-1 ) } { 4 } } t^ { - \frac{1}{4 } } n \,\big|\ , \tilde{\mathcal{f}}_{\tilde{l}_n}^n \right ] \right ] \leq \mathbb { e } \!\left [ { \ensuremath{\mathbbm{1 } } } _ { \left ( a _{ n } ^ { ( 1 ) } \right)^{c } } \cdot \frac{4n^2}{{\ensuremath{\bar{\sigma}}}t^{\frac{1}{4 } } } 2^{\left(-2^{l_n}\right ) } \right ] \end{split}\ ] ] for all .next we observe on for all .inserting into results in \leq \mathbb { e } \!\left [ { \ensuremath{\mathbbm{1 } } } _ { \left ( a _{ n } ^ { ( 1 ) } \right)^{c } } \frac{4n^2}{{\ensuremath{\bar{\sigma}}}t^{\frac{1}{4 } } } 2^{\left(- \frac12{\ensuremath{\bar{\sigma}}}^4 t \left(\ln(n)\right)^2 \right ) } \right ] \\ & \leq \frac{4n^2}{{\ensuremath{\bar{\sigma}}}t^{\frac{1}{4 } } } \exp\left(-\ln(2 ) \frac12{\ensuremath{\bar{\sigma}}}^4 t \left(\ln(n)\right)^2 \right ) = 4 \ , \bar{\sigma}^{-1 } \ , t^ { -\frac{1}{4 } } \ , n^{\left(2 - \ln(2){\ensuremath{\bar{\sigma}}}^4 t \ln(n)/2 \right ) } \end{split}\ ] ] for all .combining , lemma [ l : an1 ] and lemma [ l : an2 ] then shows & = \sum _ { n = 1 } ^ { \infty } \mathbb { p } \!\left [ a _ { 2^n } ^ { ( 4 ) } \cap \left ( a _ { 2^n } ^ { ( 2 ) } \right)^{c } \cap \left ( a _ { 2^n } ^ { ( 1 ) } \right)^{c } \ , \right ] + \sum _ { n = 1 } ^ { \infty } \mathbb { p } \!\left [ a _ { 2^n } ^ { ( 4 ) } \cap \left ( \left ( a _ { 2^n } ^ { ( 2 ) } \right)^{c } \cap \left ( a _ { 2^n } ^ { ( 1 ) } \right)^{c } \ , \right)^{c } \ , \right ] \\ & \leq \sum _ { n = 1 } ^ { \infty } \mathbb { p } \!\left [ a _ { 2^n } ^ { ( 4 ) } \cap \left ( a _ { 2^n } ^ { ( 2 ) } \right)^{c } \cap \left ( a _ { 2^n } ^ { ( 1 ) } \right)^{c } \ , \right ] + \sum _ { n = 1 } ^ { \infty } \mathbb { p } \!\left [ a _ { 2^n } ^ { ( 2 ) } \cup a _ { 2^n } ^ { ( 1 ) } \right ] \\ & \leq \sum _ { n = 1 } ^ { \infty } 4 \ , \bar{\sigma}^{-1 
} \ , t^ { -\frac{1}{4 } } \ , n^{\left(2 - \ln(2){\ensuremath{\bar{\sigma}}}^4 t \ln(n)/2 \right ) } + \sum _ { n = 1 } ^ { \infty } \mathbb { p } \!\left [ a _ { 2^n } ^ { ( 2 ) } \right ] + \sum _ { n = 1 } ^ { \infty } \mathbb { p } \!\left [ a _ { 2^n } ^ { ( 1 ) } \right ] < \infty . \end{split}\ ] ] this completes the proof of lemma [ l : an4 ] .[ lem : n1 ] assume that the above setting is fulfilled. then we have = 1 .\ ] ] combining the subadditivity of the probability measure and lemmas [ l : an1 ] , [ l : an2 ] , [ l : an3 ] and [ l : an4 ] shows \leq \sum _ { n = 1 } ^ { \infty } \mathbb{p}\!\left [ a _ { 2^n } ^ { ( 1 ) } \right ] + \sum _ { n = 1 } ^ { \infty } \mathbb{p}\!\left [ a _ { 2^n } ^ { ( 2 ) } \right ] + \sum _ { n = 1 } ^ { \infty } \mathbb{p}\!\left [ a _ { 2^n } ^ { ( 3 ) } \right ] + \sum _ { n = 1 } ^ { \infty } \mathbb{p}\!\left [ a _ { 2^n } ^ { ( 4 ) } \right ] < \infty . \end{split}\ ] ] the lemma of borel - cantelli ( e.g. , theorem 2.7 in ) therefore implies = 0 .\ ] ] hence , we obtain = \mathbb{p}\bigg [ \big\ { \omega \in \omega \colon \exists \ , n \in \ { n_0 , 2 ^ 1 n_0 , 2 ^ 2 n_0 , \ldots \ } \colon \forall \ , m \in \ { n , 2 ^ 1n , 2 ^ 2n , \dots \ } \colon \omega \notin \cup _ { i = 1 } ^4 a^ { ( i ) } _ { m } \big\ } \bigg ] \\ & = \mathbb{p}\bigg [ \left\ { \omega \in \omega \colon \exists \ , n \in \mathbb{n } \colon \forall \ , m \in \ { n , n+1 , \dots \ } \colon \omega \notin a^ { ( 1 ) } _ { 2^m } \cup a^ { ( 2 ) } _ { 2^m } \cup a^ { ( 3 ) } _ { 2^m } \cup a^ { ( 4 ) } _ { 2^m } \right\ } \bigg ] \\ & = \mathbb{p}\bigg [ \liminf _ { n \rightarrow \infty } \left ( a^ { ( 1 ) } _ { 2^n } \cup a^ { ( 2 ) } _ { 2^n } \cup a^ { ( 3 ) } _ { 2^n } \cup a^ { ( 4 ) } _ { 2^n } \right)^ { \ ! c } \bigg ] = 1 .\end{split}\ ] ] this completes the proof of lemma [ lem : n1 ] .fix throughout this proof .our proof of theorem [ thm : conj ] is then divided into four parts . in the first partwe analyze the behavior of the multilevel monte carlo euler approximations on the events for ( see inequality ) . in the second part of this proof we concentrate on the events for ( see inequality ) . in the third part of this proofwe investigate the events for ( see inequality ) and in the fourth part we analyze the behavior of the multilevel monte carlo euler approximations on the events for ( see inequality ) . combining all four parts ( inequalities , , and ) and lemma [ lem : n1 ]will then complete the proof of theorem [ thm : conj ] as we will show below . in these four partswe will frequently use for all .we begin with the first part and consider the events for .note that lemma [ l : monotonicity.y ] , the inequalities on ( see ) and on for all , ( see ) and the definition of imply on and lemma [ l : lower.esti ] , lemma [ l : upper.esti ] and lemma [ l : stability ] hence yield on for all .therefore , we obtain on and the estimate on ( see ) hence shows \cdot t^ { - \frac{p}{4 } } \geq r(n ) \cdot t^ { - \frac{p}{4 } } \end{split}\ ] ] on for all where is a function defined by \ ] ] for all . in the next stepwe analyze the behavior of the multilevel monte carlo euler approximations on the events for . to this end notethat lemma [ l : monotonicity.y ] , the inequalities on ( see ) and on for all , ( see ) and the definition of imply on for all .lemma [ l : monotonicity.y ] , lemma [ l : upper.esti ] and lemma [ l : stability ] therefore show on for all . 
by definition of and of have on ( see ) for all .consequently we get the inequality on ( see ) for all .lemma [ l : initial.multiple ] and lemma [ l : monotonicity.y ] hence yield on for all .lemma [ l : lower.esti ] and therefore imply on for all .the inequalities and for all hence give on for all .this shows \cdot t^ { - \frac{p}{4 } } \end{split}\ ] ] on for all and , using the estimate on ( see ) , \cdot t^ { - \frac{p}{4 } } \nonumber\end{aligned}\ ] ] on for all .it follows from that there exists an such that for all . using this, we deduce from on for all .next , we analyze the behavior of the multilevel monte carlo euler approximations on the events for .note that lemma [ l : monotonicity.y ] and the inequality on for all , ( see ) imply on for all . therefore lemma [ l : monotonicity.y ] , the inequality on ( see ) and lemma [ l : stability ] result in on for all . lemma [ l : initial.multiple ] , lemma [ l : monotonicity.y ] and the estimate on ( see and ) and lemma [ l : upper.esti ] hence yield on for all . therefore lemma [ l : lower.esti ] implies on for all .the inequality for all hence shows on for all .consequently \cdot t^ { - \frac{p}{4 } } \end{split}\ ] ] on for all .the estimate on ( see ) therefore implies \cdot t^ { - \frac{p}{4 } } \end{split}\ ] ] on for all .finally , we obtain on for all . finally , we analyze the behavior of the multilevel monte carlo euler approximations on the events for .note that lemma [ l : monotonicity.y ] and the inequality on for all , ( see ) imply on for all and , applying lemma [ l : lower.esti ] , lemma [ l : monotonicity.y ] , lemma [ l : upper.esti ] and lemma [ l : stability ] , on for all .therefore , we obtain on and hence , using on , \cdot t^ { - \frac{p}{4 } } = r(n ) \cdot t^ { - \frac{p}{4 } } \end{split}\ ] ] on for all . combining , , and then shows on for all .equation and inequality imply for all and all .the fact therefore shows for all .hence , lemma [ lem : n1 ] finally yields -almost surely .this completes the proof of theorem [ thm : conj ] .motivated by figure [ f : ginzburg ] below and by the divergence result of the multilevel monte carlo euler method in section [ sec : multilevel.pathwise.divergence ] , we conjecture in this section that the multilevel monte carlo euler method diverges with probability one whenever one of the coefficients of the sde grows superlinearly ( see conjecture [ conj ] ) . whereas divergence with probability one seems to be quite difficult to establish ,strong divergence is a rather immediate consequence of the divergence of the euler method in theorem [ thm : ee_divergence ] above .we derive this strong divergence in corollary [ thm : lim_mean ] below . for practical simulationsthe much more important question is , however , consistency and inconsistency respectively ; see , e.g. , nikulin , cramr , appendix a.1 in glassermann and also theorem [ thm : conj ] above and conjecture [ conj ] below . 
throughout this sectionassume that the following setting is fulfilled .let , let be a probability space with a normal filtration } ] , , , be a family of independent one - dimensional standard } ] for all .moreover , let be two continuous mappings such that there exists a predictable stochastic process \times \omega \rightarrow \mathbb{r } ] .the drift coefficient is the infinitesimal mean of the process and the diffusion coefficient is the infinitesimal standard deviation of the process .we then define a family of euler approximations , , , , , by and for all , , and all .[ conj ] assume that the above setting is fulfilled and let be real numbers such that for all with .moreover , assume that > 0 ] for all .moreover , let be -measurable with for all .then we conjecture - \frac { 1 } { n } \sum _ { k = 1 } ^ { n } f ( y_1^ { 1 , 0 , k } ) - \sum _ { l = 1 } ^ { { \ensuremath{\operatorname{ld } } } ( n ) } \frac { 2^l } { n } \ ! \left ( \sum _ { k = 1 } ^ { \frac { n } { 2^l } } f ( y _ { 2^l } ^ { 2^l , l , k } ) - f ( y _ { 2^ { ( l - 1 ) } } ^ { 2^ { ( l - 1 ) } , l , k } ) \right ) \ !\right| = \infty\ ] ] -almost surely . to support this conjecture , we ran simulations for the stochastic ginzburg - landau equation given by the solution } ] .its solution is known explicitly ( e.g. section 4.4 in ) and is given by for ] .figure [ f : ginzburg ] shows four sample paths of the approximation error of the multilevel monte carlo euler method for the ginzburg - landau equation .only finite values of the sample paths are plotted .the next corollary is an immediate consequence of theorem [ thm : ee_divergence ] above .[ thm : lim_mean ] assume that the above setting is fulfilled and let be real numbers such that for all with .moreover , assume that > 0 ] for all .additionally , let be -measurable with for all .then we obtain - \frac { 1 } { n } \sum _ { k = 1 } ^ { n } f ( y_1^ { 1 , 0 , k } ) - \sum _ { l = 1 } ^ { { \ensuremath{\operatorname{ld } } } ( n ) } \frac { 2^l } { n } \ ! \left ( \sum _ { k = 1 } ^ { \frac { n } { 2^l } } f ( y _ { 2^l } ^ { 2^l , l , k } ) - f ( y _ { 2^ { ( l - 1 ) } } ^ { 2^ { ( l - 1 ) } , l , k } ) \right ) \ !\right ] = \infty\ ] ] for all .first of all , note that the assumption < \infty ] for all .therefore , we obtain = \mathbb{e}\left [ f ( y_n^ { n , 0 , 1 } ) \right]\end{aligned}\ ] ] for all .the estimate for all and theorem [ thm : ee_divergence ] hence give \geq \frac { 1 } { c } \left ( \lim _ { n \rightarrow \infty } \mathbb{e}\!\left [ \left| y_n^ { n , 0 , 1 } \right|^ { \frac { 1 } { c } } \right ] \right ) - c = \infty .\end{aligned}\ ] ] in the case < \infty ] . in the case = \infty ] and this implies in the case = \infty ] , let , let \times \omega \rightarrow \mathbb{r}^m ] -brownian motions and let , , , be a family of independent identically distributed /-measurable mappings with < \infty ] . under the assumptions above, the sde is known to have a unique solution .more formally , there exists an up to indistinguishability unique adapted stochastic process \times \omega \rightarrow \mathbb{r}^d ] ( see , e.g. , theorem 2.4.1 in mao ) .the drift coefficient is the infinitesimal mean of the process and the diffusion coefficient is the infinitesimal standard deviation of the process . in the next stepwe define a family of tamed euler approximations , , , , , by and for all , , and all . 
in order to formulate our convergence theorem for the multilevel monte carlo tamed euler approximations, we now introduce piecewise continuous time interpolations of the time discrete numerical approximations .more formally , let \times \omega \rightarrow \mathbb{r}^d ] , , , and all .the following corollary is a direct consequence of hutzenthaler , jentzen and kloeden and mller - gronbach ( see also ritter ) .it asserts that the piecewise linear approximations , , converge in the strong sense to the exact solution .the convergence order is except for a logarithmic term .[ thm : tamed_convergence_2 ] assume that the above setting is fulfilled .then there exists a family , , of real numbers such that } \left\| x_t - \bar{y}_t^ { n , 0 , 1 } \right\|^p _ { \mathbb{r}^d } \right ] \right)^ { \!\ ! \frac { 1 } { p } } \leq r_p \cdot \frac { \sqrt { 1 + { \ensuremath{\operatorname{ld } } } ( n ) } } { \sqrt { n } } \ ] ] for all and all .the convergence rate for obtained in is sharp according to mller - gronbach s lower bound established in theorem 3 in in the case of globally lipschitz continuous coefficients ( see also hofmann , mller - gronbach and ritter ) .let \times \omega \rightarrow \mathbb{r}^d ] , and all .theorem 1.1 in then shows the existence of a family , , of real numbers such that } \| x_t - \tilde{y}_t^n \| _ { \mathbb{r}^d } \big\| _ { l^p ( \omega ; \mathbb{r } ) } \leq \frac { \tilde{r}_p } { \sqrt{n } } ] , and all . combining , and hlder s inequality then gives } \left\| x_t - \bar{y}_t^ { n , 0 , 1 } \right\| _ { \mathbb{r}^d } \right\| _ { l^p ( \omega ; \mathbb{r } ) } \leq \frac { \tilde{r}_p } { \sqrt{n } } + \left\| \max _ { n \in \ { 0 , 1 , \ldots , n - 1 \ } } \left\| \sigma ( y_n^ { n , 0 , 1 } ) \right\| _ { l ( \mathbb{r}^m , \mathbb{r}^d ) } \right\| _ { l^ { 2p } ( \omega ; \mathbb{r } ) } \\&\quad\cdot \left\| \max _ { n \in \ { 0 , 1 , \ldots , n - 1 \ } } \sup _ { t \in \left [ \frac { n t } { n } , \frac { ( n + 1 ) t } { n } \right ] } \left\| w_t^{0,1 } - w _ { \frac { n t } { n } } ^{0,1 } - \left ( \frac { t n } { t } - n\right ) \left ( w_ { \frac { ( n + 1 ) t } { n } } ^{0,1 } - w _ { \frac { nt } { n } } ^{0,1 } \right ) \right\| _ { \mathbb{r}^m } \right\| _ { l^ { 2p } ( \omega ; \mathbb{r } ) } \\ & \leq \nonumber \frac { \tilde{r}_p } { \sqrt{n } } + \sqrt { \frac { t } { n } } \left ( c \cdot \sup _ { m \in { \ensuremath{\mathbb{n } } } } \left\| \max _ { n \in \ { 0 , 1 , \ldots , m \ } } \left\| y_n^ { m , 0 , 1 } \right\| _ { \mathbb{r}^d } \right\| _ { l^ { 2p } ( \omega ; \mathbb{r } ) } \!\!\ !+ \left\| \sigma ( 0 ) \right\| _ { l ( \mathbb{r}^m , \mathbb{r}^d ) } \right ) \left\| \max _ { n \in \ { 1 , 2 , \ldots , n \ } } \sup _ { t \in [ 0 , 1 ] } \left| \beta_t^n - t \cdot \beta_1^n \right| \right\| _ { l^ { 2p } ( \omega ; \mathbb{r } ) } \end{aligned}\ ] ] for all and all where \times \omega \rightarrow \mathbb{r } ] be a function from the space of continuous functions , \mathbb{r}^d ) ] .then there exists a family , , of real numbers such that - \frac { 1 } { n } \sum _ { k = 1 } ^ { n } f ( \bar{y}^ { 1 , 0 , k } ) - \sum _ { l = 1 } ^ { { \ensuremath{\operatorname{ld } } } ( n ) } \frac { 2^l } { n } \! \left ( \sum _ { k = 1 } ^ { \frac { n } { 2^l } } f ( \bar{y}^ { 2^l , l , k } ) - f ( \bar{y}^ { 2^ { ( l - 1 ) } , l , k } ) \right ) \ ! \right|^p\right ] \right)^ { \!\!\ ! 
\frac { 1 } { p } } \leq c_p \cdot \frac { \left ( 1 + { \ensuremath{\operatorname{ld } } } ( n ) \right)^ { \frac { 3 } { 2 } } } { \sqrt { n } } \end{gathered}\ ] ] for all and all .in particular , there are finite /-measurable mappings , , such that - \frac { 1 } { n } \sum _ { k = 1 } ^ { n } f ( \bar{y}^ { 1 , 0 , k } ) - \sum _ { l = 1 } ^ { { \ensuremath{\operatorname{ld } } } ( n ) } \frac { 2^l } { n } \ ! \left ( \sum _ { k = 1 } ^ { \frac { n } { 2^l } } f ( \bar{y}^ { 2^l , l , k } ) - f ( \bar{y}^ { 2^ { ( l - 1 ) } , l , k } ) \right ) \ !\right| \leq \frac { \tilde{c } _ { \varepsilon } } { n^ { \left ( \frac{1}{2 } - \varepsilon \right ) } } \ ] ] for all and all -almost surely .the convergence rate for obtained in is the same as in remark 8 in creutzig , dereich , mller - gronbach and ritter . for numerical approximation results for sdes with globally lipschitz continuous coefficients but under less restrictive smoothness assumption on the payoff function ,the reader is referred to giles , higham and mao and drsek and teichmann .moreover , numerical approximation results for sdes with non - globally lipschitz continuous and at most linearly growing coefficients can be found in yan , for instance .the triangle inequality gives - \frac { 1 } { n } \sum _ { k = 1 } ^ { n } f ( \bar{y}^ { 1 , 0 , k } ) - \sum _ { l = 1 } ^ { { \ensuremath{\operatorname{ld } } } ( n ) } \frac { 2^l } { n } \ ! \left ( \sum _ { k = 1 } ^ { \frac { n } { 2^l } } f ( \bar{y}^ { 2^l , l , k } ) - f ( \bar{y}^ { 2^ { ( l - 1 ) } , l , k } ) \right ) \ !\right\| _ { l^p ( \omega ; \mathbb{r } ) } \\&\leq \left| \mathbb{e}\big [ f ( x ) \big ] - \mathbb{e}\big [ f ( \bar{y}^ { n , 0 , 1 } ) \big ] \right| + \frac { 1 } { n } \left\| \sum _ { k = 1 } ^ { n } \left ( \mathbb{e}\big [ f ( \bar{y}^ { 1 , 0 , 1 } ) \big ] - f ( \bar{y}^ { 1 , 0 , k } ) \right ) \right\| _ { l^p ( \omega ; \mathbb{r } ) } \\&\quad+ \sum _ { l = 1 } ^ { { \ensuremath{\operatorname{ld } } } ( n ) } \frac { 2^l } { n } \left\| \sum _ { k = 1 } ^ { \frac { n } { 2^l } } \left ( \mathbb{e}\big [ f ( \bar{y}^ { 2^l , 0 , 1 } ) \big ] - \mathbb{e}\big [ f ( \bar{y}^ { 2^ { ( l - 1 ) } , 0 , 1 } ) \big ] - f ( \bar{y}^ { 2^l , l , k } )+ f ( \bar{y}^ { 2^ { ( l - 1 ) } , l , k } ) \right ) \right\| _ { l^p ( \omega ; \mathbb{r } ) } \end{aligned}\ ] ] for all and all and the burkholder - davis - gundy inequality in theorem 6.3.10 in stroock shows the existence of real numbers , , such that - \frac { 1 } { n } \sum _ { k = 1 } ^ { n } f ( \bar{y}^ { 1 , 0 , k } ) - \sum _ { l = 1 } ^ { { \ensuremath{\operatorname{ld } } } ( n ) } \frac { 2^l } { n } \ ! \left ( \sum _ { k = 1 } ^ { \frac { n } { 2^l } } f ( \bar{y}^ { 2^l , l , k } ) - f ( \bar{y}^ { 2^ { ( l - 1 ) } , l , k } ) \right ) \ ! \right\| _ { l^p ( \omega ; \mathbb{r } ) } \\&\leq \mathbb{e}\big [ \big| f ( x ) - f ( \bar{y}^ { n , 0 , 1 } ) \big| \big ] + \frac { k_p } { \sqrt { n } } \left\| \mathbb{e}\big [ f ( \bar{y}^ { 1 , 0 , 1 } ) \big ] - f ( \bar{y}^ { 1 , 0 , 1 } ) \right\| _ { l^p ( \omega ; \mathbb{r } ) } \\&\quad+ \sum _ { l = 1 } ^ { { \ensuremath{\operatorname{ld } } } ( n ) } \frac { 2^ { \frac { l } { 2 } } k_p } { \sqrt { n } } \left\| \mathbb{e}\big [ f ( \bar{y}^ { 2^l , 0 , 1 } ) \big ] - \mathbb{e}\big [ f ( \bar{y}^ { 2^ { ( l - 1 ) } , 0 , 1 } ) \big ] - f ( \bar{y}^ { 2^l , 0 , 1 } ) + f ( \bar{y}^ { 2^ { ( l - 1 ) } , 0 , 1 } ) \right\| _ { l^p ( \omega ; \mathbb{r } ) } \end{aligned}\ ] ] for all and all . 
in the next step estimate , hlder s inequality and the triangle inequality show - \frac { 1 } { n } \sum _ { k = 1 } ^ { n } f ( \bar{y}^ { 1 , 0 , k } ) - \sum _ { l = 1 } ^ { { \ensuremath{\operatorname{ld } } } ( n ) } \frac { 2^l } { n } \ ! \left ( \sum _ { k = 1 } ^ { \frac { n } { 2^l } } f ( \bar{y}^ { 2^l , l , k } ) - f ( \bar{y}^ { 2^ { ( l - 1 ) } , l , k } ) \right ) \ !\right\| _ { l^p ( \omega ; \mathbb{r } ) } \\&\leq c\left ( 1 + \left\| x \right\|^c _ { l^ { 2 c } ( \omega ; c ( [ 0,t ] , \mathbb{r}^d ) ) } + \left\| \bar{y}^ { n , 0 , 1 } \right\|^c _ { l^ { 2 c } ( \omega ; c ( [ 0,t ] , \mathbb{r}^d ) ) } \right ) \left\| x - \bar{y}^ { n , 0 , 1 } \right\| _ { l^2 ( \omega ; c ( [ 0,t ] , \mathbb{r}^d ) ) } \\&\quad+ \frac { 2 k_p } { \sqrt { n } } \left\| f ( \bar{y}^ { 1 , 0 , 1 } ) \right\| _ { l^p ( \omega ; \mathbb{r } ) } + \sum _ { l = 1 } ^ { { \ensuremath{\operatorname{ld } } } ( n ) } \frac { 2^ { ( \frac { l } { 2 } + 1 ) } k_p } { \sqrt { n } } \left\| f ( \bar{y}^ { 2^l , 0 , 1 } ) - f ( \bar{y}^ { 2^ { ( l - 1 ) } , 0 , 1 } ) \right\| _ { l^p ( \omega ; \mathbb{r } ) } \end{aligned}\ ] ] and corollary [ thm : tamed_convergence_2 ] and again estimate hence give - \frac { 1 } { n } \sum _ { k = 1 } ^ { n } f ( \bar{y}^ { 1 , 0 , k } ) - \sum _ { l = 1 } ^ { { \ensuremath{\operatorname{ld } } } ( n ) } \frac { 2^l } { n } \ ! \left ( \sum _ { k = 1 } ^ { \frac { n } { 2^l } } f ( \bar{y}^ { 2^l , l , k } ) - f ( \bar{y}^ { 2^ { ( l - 1 ) } , l , k } ) \right ) \ !\right\| _ { l^p ( \omega ; \mathbb{r } ) } \\&\leq 2 c r_2 \left ( 1 + \sup _ { m \in { \ensuremath{\mathbb{n } } } } \left\| \bar{y}^ { m , 0 , 1 } \right\|^c _ { l^ { 2 c } ( \omega ; c ( [ 0,t ] , \mathbb{r}^d ) ) } \right ) \frac { \sqrt { 1 + { \ensuremath{\operatorname{ld}}}(n ) } } { \sqrt { n } } + \frac { 2 k_p } { \sqrt { n } } \left\| f ( \bar{y}^ { 1 , 0 , 1 } ) \right\| _ { l^p ( \omega ; \mathbb{r } ) } \\&\quad+ \sum _ { l = 1 } ^ { { \ensuremath{\operatorname{ld } } } ( n ) } \frac { 2^ { ( \frac { l } { 2 } + 2 ) } c k_p } { \sqrt { n } } \left ( 1 + \sup _ { m \in { \ensuremath{\mathbb{n } } } } \left\| \bar{y}^ { m , 0 , 1 } \right\|^c _ { l^ { 2 p c } ( \omega ; c ( [ 0,t ] , \mathbb{r}^d ) ) } \right ) \left\| \bar{y}^ { 2^l , 0 , 1 } - \bar{y}^ { 2^ { ( l - 1 ) } , 0 , 1 } \right\| _ { l^ { 2 p } ( \omega ; c ( [ 0,t ] , \mathbb{r}^d ) ) } \end{aligned}\ ] ] for all and all . the triangle inequality , again corollary [ thm : tamed_convergence_2 ] and the estimate , \mathbb{r}^d ) } \leq ( 2 c + \| f ( 0 ) \| _ { c ( [ 0,t ] , \mathbb{r}^d ) } ) ( 1 + \| v \|^ { ( c + 1 ) } _ { c ( [ 0,t ] , \mathbb{r}^d ) } ) ] then yield - \frac { 1 } { n } \sum _ { k = 1 } ^ { n } f ( \bar{y}^ { 1 , 0 , k } ) - \sum _ { l = 1 } ^ { { \ensuremath{\operatorname{ld } } } ( n ) } \frac { 2^l } { n } \ ! \left ( \sum _ { k = 1 } ^ { \frac { n } { 2^l } } f ( \bar{y}^ { 2^l , l , k } ) - f ( \bar{y}^ { 2^ { ( l - 1 ) } , l , k } ) \right ) \ ! 
\right\| _ { l^p ( \omega ; \mathbb{r } ) } \\&\leq 2 c r_2 \left ( 1 + \sup _ { m \in { \ensuremath{\mathbb{n } } } } \left\| \bar{y}^ { m , 0 , 1 } \right\|^c _ { l^ { 2 c } ( \omega ; c ( [ 0,t ] , \mathbb{r}^d ) ) } \right ) \frac { \sqrt { 1 + { \ensuremath{\operatorname{ld}}}(n ) } } { \sqrt { n } } \\&\quad+ 2 k_p \left ( 2 c + \left\| f ( 0 ) \right\| _ { c ( [ 0,t ] , \mathbb{r}^d ) } \right ) \left ( 1 + \left\| \bar{y}^ { 1 , 0 , 1 } \right\|^ { ( c + 1 ) } _ { l^ { p ( c + 1 ) } ( \omega ; c ( [ 0,t ] , \mathbb{r}^d ) ) } \right ) \frac { 1 } { \sqrt { n } } \\&\quad+ c k_p r _ { 2 p } \left ( 1 + \sup _ { m \in { \ensuremath{\mathbb{n } } } } \left\| \bar{y}^ { m , 0 , 1 } \right\|^c _ { l^ { 2 p c } ( \omega ; c ( [ 0,t ] , \mathbb{r}^d ) ) } \right ) \sum _ { l = 1 } ^ { { \ensuremath{\operatorname{ld } } } ( n ) } \frac { 2^ { ( \frac { l } { 2 } + 3 ) } \sqrt { 1 + { \ensuremath{\operatorname{ld } } } ( 2^ { l } ) } } { 2^ { \frac { ( l - 1 ) } { 2 } } \sqrt {n } } \end{aligned}\ ] ] and finally - \frac { 1 } { n } \sum _ { k = 1 } ^ { n } f ( \bar{y}^ { 1 , 0 , k } ) - \sum _ { l = 1 } ^ { { \ensuremath{\operatorname{ld } } } ( n ) } \frac { 2^l } { n } \! \left ( \sum _ { k = 1 } ^ { \frac { n } { 2^l } } f ( \bar{y}^ { 2^l , l , k } ) - f ( \bar{y}^ { 2^ { ( l - 1 ) } , l , k } ) \right ) \ !\right\| _ { l^p ( \omega ; \mathbb{r } ) } \\&\leq 2 c r_2 \left ( 1 + \sup _ { m \in { \ensuremath{\mathbb{n } } } } \left\| \bar{y}^ { m , 0 , 1 } \right\|^c _ { l^ { 2 c } ( \omega ; c ( [ 0,t ] , \mathbb{r}^d ) ) } \right ) \frac { \left ( 1 + { \ensuremath{\operatorname{ld}}}(n ) \right)^ { \frac { 3 } { 2 } } } { \sqrt { n } } \\&\quad+ 2 k_p \left ( 2 c + \left\| f ( 0 ) \right\| _ { c ( [ 0,t ] , \mathbb{r}^d ) } \right ) \left ( 1 + \left\| \bar{y}^ { 1 , 0 , 1 } \right\|^ { ( c + 1 ) } _ { l^ { p ( c + 1 ) } ( \omega ; c ( [ 0,t ] , \mathbb{r}^d ) ) } \right ) \frac { \left ( 1 + { \ensuremath{\operatorname{ld}}}(n ) \right)^ { \frac { 3 } { 2 } } } { \sqrt { n } } \\&\quad+ 12 c k_p r _ { 2 p } \left ( 1 + \sup _ { m \in { \ensuremath{\mathbb{n } } } } \left\| \bar{y}^ { m , 0 , 1 } \right\|^c _ { l^ { 2 p c } ( \omega ; c ( [ 0,t ] , \mathbb{r}^d ) ) } \right ) \frac { \left ( 1 + { \ensuremath{\operatorname{ld}}}(n ) \right)^ { \frac { 3 } { 2 } } } { \sqrt { n } } \end{aligned}\ ] ] for all and all .this shows .inequality then immediately follows from lemma 2.1 in kloeden und neuenkirch .this completes the proof of proposition [ thm : tamed_convergence ] .it is well - known that the multilevel monte carlo method combined with the ( fully ) implicit euler method converges too .the following simulation indicates that this multilevel monte carlo implicit euler method is considerably slower than the multilevel monte carlo tamed euler method .we choose a multi - dimensional langevin equation as example .more precisely , we consider the motion of a brownian particle of unit mass in the -dimensional potential , , with .the corresponding force on the particle is then for .more formally , let , , , for all and let be the identity matrix for all .thus the sde reduces to the langevin equation for ] of the exact solution of as function of the runtime both for the multilevel monte carlo implicit euler method and for the multilevel monte carlo tamed euler method ., width=340 ] figure [ f : compareml ] displays the root mean square approximation error of the multilevel monte carlo implicit euler method for the uniform second moment }\left\|x_t\right\|^2\right] ] of the exact 
solution of as function of the runtime when .we see that both numerical approximations of the sde apparantly converge with rate close to .moreover the multilevel monte carlo implicit euler method was considerably slower than the multilevel monte carlo tamed euler method .this is presumably due to the additional computational effort which is required to determine the zero of a nonlinear equation in each time step of the implicit euler method .more results on implicit numerical methods for sdes can be found in , for instance .this work has been partially supported by the research project `` numerical solutions of stochastic differential equations with non - globally lipschitz continuous coefficients '' and by the collaborative research centre `` spectral structures and topological methods in mathematics '' both funded by the german research foundation .we would like to express our sincere gratitude to weinan e , klaus ritter , andrew m. stuart , jan van neerven and konstantinos zygalakis for their very helpful advice .semi - implicit euler - maruyama scheme for stiff stochastic equations . in _stochastic analysis and related topics , v ( silivri , 1994 ) _ , vol .38 of _ progr ._ birkhuser boston , boston , ma , 1996 , pp .183202 .strong convergence rates for backward euler - maruyama method for nonlinear dissipative - type stochastic differential equations with super - linear diffusion coefficients .( 2010 ) , 23 pages . to appear in stochastics .stochastic hamiltonian systems : exponential convergence to the invariant measure , and discretization by the implicit euler scheme . , 2 ( 2002 ) , 163198 .inhomogeneous random systems ( cergy - pontoise , 2001 ) .
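the runtime comparison above attributes the slower multilevel monte carlo implicit euler method to the nonlinear equation that has to be solved in every time step . the sketch below contrasts one drift - implicit ( backward ) euler step , solved here with a few newton iterations , with one tamed euler step ; the cubic force is an assumed stand - in for the force of the quartic potential discussed above , and all names are ours .

```python
import numpy as np

force = lambda x: -x ** 3   # assumed stand-in for the force of the quartic potential

def implicit_euler_step(y, h, dW, newton_iters=20):
    """Drift-implicit (backward) Euler step: solve x = y + h*force(x) + dW by
    Newton iteration -- this root-finding is the extra work per step behind
    the slower runtime of the implicit scheme in the comparison above."""
    x = y
    for _ in range(newton_iters):
        g = x + h * x ** 3 - y - dW          # residual of x - y - h*force(x) - dW
        dg = 1.0 + 3.0 * h * x ** 2
        x = x - g / dg
    return x

def tamed_euler_step(y, h, dW):
    """Explicit tamed Euler step: drift increment divided by 1 + h*||force(y)||."""
    f = force(y)
    return y + (h * f) / (1.0 + h * np.linalg.norm(f)) + dW

rng = np.random.default_rng(4)
h, y_imp, y_tam = 1e-2, np.ones(3), np.ones(3)
for _ in range(1000):
    dW = rng.normal(0.0, np.sqrt(h), 3)
    y_imp, y_tam = implicit_euler_step(y_imp, h, dW), tamed_euler_step(y_tam, h, dW)
print(y_imp, y_tam)
```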
|
the euler maruyama scheme is known to diverge strongly and numerically weakly when applied to nonlinear stochastic differential equations ( sdes ) with superlinearly growing and globally one - sided lipschitz continuous drift coefficients . classical monte carlo simulations do , however , not suffer from this divergence behavior of euler s method because this divergence behavior happens on _ rare events_. indeed , for such nonlinear sdes the classical monte carlo euler method has been shown to converge by exploiting that the euler approximations diverge only on events whose probabilities decay to zero very rapidly . significantly more efficient than the classical monte carlo euler method is the recently introduced multilevel monte carlo euler method . the main observation of this article is that this multilevel monte carlo euler method does in contrast to classical monte carlo methods not converge in general . more precisely , we establish divergence of the multilevel monte carlo euler method for a family of sdes with superlinearly growing and globally one - sided lipschitz continuous drift coefficients . in particular , the multilevel monte carlo euler method diverges for these nonlinear sdes on an event that is not at all rare but has _ probability one_. as a consequence for applications , we recommend not to use the multilevel monte carlo euler method for sdes with superlinearly growing nonlinearities . instead we propose to combine the multilevel monte carlo method with a slightly modified euler method . more precisely , we show that the multilevel monte carlo method combined with a tamed euler method converges for nonlinear sdes with globally one - sided lipschitz continuous drift coefficients and preserves its strikingly higher order convergence rate from the lipschitz case .
|
traders willing to trade in electronic markets can either place market orders , which are immediately executed at the current best listed price , or they can place limit orders .limit orders are stored in the exchange s book and executed using time priority at a given price and price priority across prices .a transaction occurs when a market order hits the quote on the opposite side of the market . in the last few years several order driven microscopic modelshave been introduced to explain the statistical properties of asset prices , we cite in particular bottazzi et al .( 2005 ) , consiglio et al .( 2003 ) , chiarella and iori ( 2002 ) , daniels et al .( 2002 ) , li calzi and pellizzari ( 2003 ) , luckock ( 2003 ) , raberto et al .( 2001 ) and gil - bazo et al .see also slanina ( 2007 ) for a recent overview .the aim of chiarella and iori ( 2002 ) was to introduce a simple auction market model in order to gain some insights into how the placement of limit orders contributes to the price formation mechanism .the impact on the market of three different trading strategies : noise trading , fundamentalism and chartism were analyzed .it was shown that the presence of chartism plays a key role in generating realistic looking dynamics , such as volatility clustering , persistent trading volume , positive cross - correlation between volatility and trading volume , and volatility and bid - ask spread .while several microstructure models have been able to generate price trajectories that share common statistical properties with real asset prices , few studies have so far investigated the properties of generated order flows and of the order book itself .recently , the empirical analysis of limit order data has revealed a number of intriguing features in the dynamics of placement and execution of limit orders . in particular , zovko and farmer ( 2002 ) found a fat - tailed distribution of limit order placement from the current bid / ask ( with an exponent around ) .bouchaud et al .( 2002 ) and potters and bouchaud ( 2003 ) found a fat - tailed distribution of limit order arrival ( with an exponent roughly equal to , smaller than the value observed by zovko and farmer ) and a fat - tailed distribution of the number of orders stored in the order book ( with exponent of about ) . in order to build a model that can incorporate these recent empirical findings of limit order data , we here extend the original model of chiarella and iori ( 2002 ) in two main respects .first , agents apply different time horizons to the different components of their strategies ; longer time horizons for the fundamentalist component and shorter horizons for chartist and noise trader component .second , agents are not constrained to merely submit orders of size one , but rather submit orders given by an asset demand function determined in the traditional economic framework of expected utility maximisation . 
in this waythe ongoing evolution of the market feeds back into the asset demands of the different agents .we simulate our model and compare its qualitative predictions for different strategies of our population of traders .we show in particular with the introduction of chartist strategies into a population of utility optimizing traders that our model is not only capable of generating realistic asset prices , fat tails and volatility clustering but , is also capable of reproducing the empirically observed regularities of order flows .the analysis of order book data has also added to the debate on what causes fat tailed fluctuations in asset prices .gabaix et al .( 2003 ) put forward the proposition that large price movements are caused by large order volumes .a variety of studies have made clear that the mean market impact is an increasing function of the order size , in contrast farmer et al .( 2004 ) have shown that large price changes in response to large orders are very rare .furthermore these authors have also shown that an order submission typically results in a large price change when a large gap is present between the best price and the price at the next best quote ( see also weber and rosenow ( 2005 ) and gillemot et al .( 2005 ) ) .we show in section 3 that in the model proposed in this paper large returns are also mostly associated with large gaps in the book .the paper is structured as follows ; in section [ themodel ] we model the fundamentalist , chartist and noise trader components of expectations and the way in which agents form their demands for the risky asset . in section[ sim anal ] we undertake a number of simulations of the model under our choice of the asset demand function to determine how well the empirical facts referred to earlier are reproduced .section [ conclude ] concludes and suggests some avenues for future research .the appendices contain a number of technical derivations .we assume that all agents know the fundamental value of the asset , which we take to follow a geometric brownian motion .agents also know the past history of prices . at any time the price is given by the price at which a transaction , if any , occurs .if no new transaction occurs , a proxy for the price is given by the average of the quoted ask ( the lowest ask listed in the book ) and the quoted bid ( the highest bid listed in the book ) : so that , a value that we call the mid - point . if no bids or asks are listed in the book a proxy for the price is given by the previous traded or quoted price .bids , asks and prices need to be positive and investors can submit limit orders at any price on a prespecified grid , defined by the tick size .the demands of each trader for the risky asset are assumed to consist of three components , a fundamentalist component , a chartist component and a noise induced component .the weights applied to these various components will vary in ways to be described below . 
at any time a trader is chosen to enter the market .the chosen agent , , forms an expectation about the spot return , , that will prevail in the interval , where is the agent s time horizon .agents use a combination of fundamental value and chartist rules to form expectations on stock returns , so that , \label{expect}\ ] ] where the quantities and represent the weights given to the fundamentalist and chartist component respectively .in addition , we add a noise induced component , with zero mean and variance to agent s expectations with weight .the initial term in eq .( [ expect ] ) normalises the impact of the three trading strategies .the quantity is the time scale over which the fundamentalist component for the mean reversion of the price to the fundamental is calculated .finally , the average gives the future expected trend of the chartist component based on the observations of the spot returns over last time steps .that is , where and are , respectively , the spot return and the spot price at time . observe that a pure fundamentalist trading strategy has , a chartist trading strategy has , whilst for a noise trading strategy .we assume that the degree of fundamentalism , chartism and noise trading will be spread across agents , so we model the trading weights as random variables independently chosen . for agent weights are chosen according to the realizations of the set of laplace distributions , and for and .the average and variance of these densities depend on the values of , and .so that : =\sigma_{1} ] with variance and =\sigma_{n} ] refers to the value obtained by rounding up to the next highest integer . ] , \label{tau}\ ] ] where is some reference time horizon .once the agent has formed its own expectation of the future price , it has to decide whether to place a buy or a sell order and choose the size of the order .we assume that agents are risk averse and maximize the exponential utility of wealth function where the coefficient measures the relative risk aversion of agent .we assume that those agents giving greater weight to the fundamentalist component are more risk averse than those giving more weight to the noise trader and chartist components .this effect is captured by setting where is some reference level of risk aversion .we next define the portfolio wealth of each agent as where and are respectively the stock and cash position of agent at time .the optimal composition of the agent s portfolio is determined in the usual way by trading - off expected return against expected risk .however the agents are not allowed to engage in short - selling .the number of stocks an agent is willing to hold in its portfolio at a given price level depends on the choice of the utility function . 
for the cara utility function assumed here the optimal composition of the portfolio , that is the number of stocks the agent wishes to hold is given by where is the variance of returns expected by agent , the agent s relative risk aversion is given by eq .( [ alpha ] ) , and the agent s investment horizon is defined in eq .( [ tau ] ) .we note that eq .( [ equation8 ] ) is independent of wealth , which obviates the need to keep track of the wealth dynamics of each individual agent .if the amount is larger ( smaller ) than the number of stocks already in the portfolio of agent then the agent decides to buy ( sell ) .( [ equation8 ] ) can be derived on the basis of mean - variance one - period portfolio optimization .appendix [ demand ] shows how to obtain this equation from the utility function ( [ utility ] ) following a similar approach to that of bottazzi et al .we estimate as the variance of past returns , estimated by each agent as ^ 2 , \label{sigma}\ ] ] where the average spot return is given by eq .( [ ave ] ) . in order to determine the buy/ sell price range of a typical agent , we first estimate numerically the price level at which agents are satisfied with the composition of their current portfolio , which is determined by eq .( [ p * ] ) admits a unique solution with since given that short selling is not allowed .agents are willing to buy at any price since in this price range their demand is greater than their holding . while agents are willing to sell at any price since then their demand is less than their holding .note that agents may thus wish to sell even if they expect a future price increase .if we select agents decide to do nothing . as we want to impose budget constraints we need to restrict ourselves to values of to ensure and so rule out short selling .furthermore to ensure that an agent has sufficient cash to purchase the desired stocks , the smallest value of we can allow for agent is determined by its cash position ( see eq . ( [ w ] ) ) , and so is given by the condition again one can easily show that this equation also admits a unique solution with since . indeed , comparing eqs .( [ p * ] ) and ( [ pm ] ) it can be easily proven that .we represent the typical agent s buy / sell price range graphically in fig .[ figure ] . 
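the construction of the two price levels just described can be made concrete with a short numerical sketch . this is not the authors code : the demand function below , pi ( p ) = ln ( p_hat / p ) / ( alpha v p ) , is only an assumed stand - in for eq . ( [ equation8 ] ) with the properties used in the argument ( it vanishes at p = p_hat and diverges as p tends to zero ) , and all parameter values are illustrative .

# hypothetical sketch of the buy / sell price levels discussed above .
import math

def solve_bisect(func, lo, hi, tol=1e-10):
    """Find a root of func on [lo, hi], assuming a sign change on the interval."""
    flo = func(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if func(mid) * flo > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def price_levels(p_hat, alpha, V, s_t, c_t, p_min=1e-9):
    """Return (p_star, p_m): satisfaction price and cash-constrained minimum price."""
    pi = lambda p: math.log(p_hat / p) / (alpha * V * p)       # assumed demand form
    # p_star : price at which the desired holding equals the current holding s_t
    p_star = solve_bisect(lambda p: pi(p) - s_t, p_min, p_hat)
    # p_m : lowest admissible price, where buying pi(p) - s_t shares exhausts cash c_t
    p_m = solve_bisect(lambda p: (pi(p) - s_t) * p - c_t, p_min, p_star)
    return p_star, p_m

p_star, p_m = price_levels(p_hat=101.0, alpha=0.1, V=0.02, s_t=50.0, c_t=500.0)
print(p_m, p_star)   # p_m < p_star < p_hat, as argued in the text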
figure [ figure ] . caption : the curve labelled graphs eq . ( [ equation8 ] ) , with the dashed section representing negative ( short position ) demand . the upper limit of the price range is where the positive ( long position ) demand comes to zero . the minimum limit is determined by the agent s current wealth holding . the price that separates the buy / sell regions is determined by the agent s current holding of stock .
table [ mechanism ] . order placement as a function of the price level ( the corresponding price ranges and order volumes are given in the text ) :
position | type of order
buy | limit order
buy | market order
( none ) | no order placement
sell | market order
sell | limit order
having determined that the possible values at which an agent can satisfactorily trade are in the interval , and if they submit a limit order to buy an amount , while if they submit a limit order to sell an amount . however if and the buy order can be executed immediately at the ask . an agent in this case would submit a market order to buy an amount . similarly if and the agent would submit a market order to sell an amount . if the depth at the bid ( ask ) is not enough to fully satisfy the order , the remaining volume is executed against limit orders in the book . the agent thus takes the next best buy ( sell ) order and repeats this operation as many times as necessary until the order is fully executed . this mechanism applies under the condition that quotes of these orders are above ( below ) price . otherwise , the remaining volume is converted into a limit order at price . if the limit order is still unmatched at time it is removed from the book . the essential details of the trading mechanism are summarized in table [ mechanism ] , showing how it depends on the price level , the `` satisfaction level '' , the best ask and the best bid . in this section we analyze , via numerical simulations , various properties of prices , order flows and the book implied by the model of section 2 . in the simulations we have considered ( in succession ) three kinds of trading rules : noise trading only ( black ) , fundamentalism and noise trading only ( red ) , and fundamentalism , chartism and noise trading ( green ) . we set the number of agents at . agents are initially assigned a random amount of stock uniformly distributed on the interval . in the following simulations we choose and . we also fix , , , , and = 0.1 . we choose the fundamental value to be a random walk with initial value , zero drift and volatility . we run the simulations with a large set of values and ranging from 0 to 30 in order to study the impact of different fundamentalist and chartist components on price , order flows and the book . the results reported here are the outcome of simulations of 200,000 steps , each of which we repeat 100 times with different random seeds . to test the robustness of the results we have repeated the simulations varying the parameter set within a small neighborhood of those used here . we have found that the qualitative features reported below are fairly robust to such variations . our main aim in this section is to gain some insights into the details of the price formation in the model . figure [ figure2 ] displays sample paths of the price under very different assumptions concerning the trading strategies used .
in the first path ( left hand side )the trading strategy only has a noise trading component and , as we might expect , the fundamental price has nothing to do with the current price generated by the double auction market .however , once we include the fundamentalist component , the auction market is no longer operating independently of the fundamental and we observe in the centre plot that the market arrives at a price that follows closely the fundamental price .finally , the right hand plot shows how the addition of the chartist component to the trading strategy affects the price evolution .this component accentuates the fluctuations of the price as evidenced by the numerous extreme movements that are indeed characteristic of real markets .these simulations would suggest that big jumps in the price are mainly caused by the chartist component . , ) , noise trading and fundamentalism ( center , , ) and noise trading , fundamentalism and chartism ( right , , ).,title="fig:",width=188 ] , ) , noise trading and fundamentalism ( center , , ) and noise trading , fundamentalism and chartism ( right , , ).,title="fig:",width=188 ] , ) , noise trading and fundamentalism ( center , , ) and noise trading , fundamentalism and chartism ( right , , ).,title="fig:",width=188 ] next we analyse the return time series .figure [ figure3 ] shows on the left a rather typical path for the returns generated by the model when noise trader , fundamentalist and chartist components are all present . on the right side of figure [ figure3 ]we represent the decumulative distribution functions ( ddf ) defined as the probability of having a change in price larger than a certain return threshold . in other words, we are plotting one minus the cumulative distribution function . in this waywe observe that fat tail behaviour appears once chartism is introduced into the trading strategy and increases with .the four curves correspond to and ( black ) , ( red ) , ( green ) , ( blue ) ., ) components all present .( right ) the decumulative distribution function of the absolute returns with and ( black ) , ( red ) , ( green ) , ( blue).,title="fig:",width=226 ] , ) components all present .( right ) the decumulative distribution function of the absolute returns with and ( black ) , ( red ) , ( green ) , ( blue).,title="fig:",width=226 ] to confirm and quantify the robustness of all the above statements we have also computed the hill tail index , the details of which are explained in appendix [ hill ] .essentially this index measures the tail exponent ( hill ( 1975 ) ) , the lower it is the fatter the tail of the ddf .the results are plotted in figure [ figure4 ] .the left hand plot shows how the inclusion of the fundamentalist component brings about a fat tail distribution although we do not see big changes when we increase the fundamentalist weight in the market .the index decreases very slowly and takes values between 4.5 and 3.5 , while in real markets the tail index is normally around or even below 3 ( lux ( 2001 ) , gabaix et al .( 2003 ) ) .we also detect that the right tail ( due to positive changes in returns ) of the distribution is fatter ( even though the differences are not statistically significant enough ) for all values of .this would be in contradiction with the situation in real markets where the skewness of the returns is found to be negative indicating that the tail due to negative returns is fatter than that due to positive returns ( lux ( 2001 ) ) . 
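a minimal sketch of how such a decumulative distribution function can be tabulated from a sample of absolute returns is given below ; the synthetic student - t input is only an illustration and is not the model output analysed in the figures .

# minimal sketch (not the authors' code) of the decumulative distribution function:
# the fraction of absolute returns exceeding each threshold, i.e. one minus the
# empirical CDF, typically inspected on log-log axes.
import numpy as np

def ddf(abs_returns):
    """Return sorted thresholds and the empirical probability of exceeding them."""
    x = np.sort(np.asarray(abs_returns, dtype=float))
    n = x.size
    exceed_prob = 1.0 - np.arange(1, n + 1) / n      # P(|r| > x_(i))
    return x, exceed_prob

# illustrative use on synthetic heavy-tailed data (student-t returns):
rng = np.random.default_rng(1)
thresholds, prob = ddf(np.abs(rng.standard_t(df=3, size=100_000)))
# on a log-log plot, a straight tail of slope -mu indicates P(|r| > x) ~ x**-mu.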
with ( left ) and as a function of with ( right ) .vertical spreads depicts the error bars for the hill exponent which are evaluated across 100 repetitions of the simulations with different random seeds.,title="fig:",width=226 ] with ( left ) and as a function of with ( right ) .vertical spreads depicts the error bars for the hill exponent which are evaluated across 100 repetitions of the simulations with different random seeds.,title="fig:",width=226 ] we consider the impact of the chartist component on the right hand side plot of figure [ figure4 ] .we observe a much clearer pattern when the weight of this component of trading behavior increases .the tail index diminishes drastically to a value below 3 as we increase the weight of the chartist component .the second important point to emerge from this plot is that it provides the consistent negative skewness for the pdf observed in real markets .the left tail of the distribution ( negative returns ) is fatter than the right one .however , increasing the value of beyond about the value of 2 leads to a reversal of the skewness property in the pdf , even though the tails become fatter .clearly , the parameters of the model need to have a significant chartist component in order that it exhibit behaviour consistent with empirical observations , but not so big as to lose the negative skewness between the positive and negative tails of the return pdf .we now focus on the long memory property of the volatility .we first observe this property by means of the volatility autocorrelation where the are the absolute returns over time steps to .the variable is the absolute return sample mean over a time window of time steps the autocorrelation thus calculates the ratio between the autocovariance and variance estimators . in figure [ figure5 ] ( left )we plot the volatility autocorrelation for noise trading ( black ) , noise trading plus fundamentalism ( red ) and noise trading plus fundamentalism and chartism ( green ) in terms of and averaging over the whole data set generated from the simulations .the volatility autocorrelation is non - negligible when we include the fundamentalist and chartist components in the market for a broader domain .real markets show a very long range volatility autocorrelation ( lo ( 1991 ) ) .nonetheless , our model only shows very long term memory when the chartist component is activated .hence , we not only observe a direct relationship between the chartist component and large returns but also detect that long memory in the volatility appears to be directly related to the inclusion of the chartism component . 
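for reference , the volatility autocorrelation of eq . ( [ auto ] ) amounts to the following short computation ; this is a minimal illustrative sketch rather than the simulation code used for the figures , and the lag range and test series are arbitrary .

# sample autocorrelation of absolute returns: autocovariance at lag ell divided
# by the sample variance, as in the estimator described above.
import numpy as np

def volatility_acf(returns, max_lag=200):
    """Autocorrelation of |returns| for lags 1..max_lag."""
    x = np.abs(np.asarray(returns, dtype=float))
    x = x - x.mean()
    var = np.mean(x * x)
    acf = np.empty(max_lag)
    for ell in range(1, max_lag + 1):
        acf[ell - 1] = np.mean(x[:-ell] * x[ell:]) / var
    return acf

# example: long-memory volatility decays slowly with the lag, while an i.i.d.
# series fluctuates around zero at all lags.
rng = np.random.default_rng(2)
print(volatility_acf(rng.normal(size=10_000), max_lag=5))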
to better quantify this effectwe have studied the long range memory of the volatility by implementing the modified r / s , or rescaled range , analysis ( see figure [ figure5 ] , right ) .basically , the r / s statistic , introduced by mandelbrot ( 1972 ) is the range of partial sums of deviations of a time series from its mean , rescaled by its standard deviation .lo ( 1991 ) has modified the statistic so that its behaviour is invariant for short memory processes but deviates for long memory process .the modified r / s statistic is defined as , \label{rs1}\ ] ] where , are a sample of absolute returns and with the term appearing in eqs .( [ rs1 ] ) and given by eq .( [ mean ] ) is the sample mean over the time window .also note that and are the usual sample of variance and autocovariance estimators of x provided in eq .( [ auto ] ) but taking only elements of the whole sample of size .the original r / s estimator is recovered by setting . with a value of modified lo estimator allows us to remove the effects of short range correlations up to the level and thus focuses on long memory effects in the data .the relevant quantity to study is the ratio where is given by eq .( [ rs1 ] ) .the value indicates that there is still memory up to the time scale .if the series has a persistent behaviour ( at least up to the time scale ) , thus being positively correlated . otherwise ,if , the series has an anti persistent behaviour , thus being negatively correlated .the graphs of in the right hand panel of fig .[ figure5 ] , with for window lengths up to 800 timesteps , again confirm that the chartist component is an important factor in the long memory property of return series ..5 cm ) , noise trading and fundamentalism ( red , , ) noise trading fundamentalism and chartism ( green , , ) .( right ) the modified r / s exponent for absolute returns on the left side in terms of the time window .we here represent the exponent for several trading weights and filter the short term memory for a number of time steps ( cf .( [ rs1])).,title="fig:",width=226 ] ) , noise trading and fundamentalism ( red , , ) noise trading fundamentalism and chartism ( green , , ) .( right ) the modified r / s exponent for absolute returns on the left side in terms of the time window .we here represent the exponent for several trading weights and filter the short term memory for a number of time steps ( cf .( [ rs1])).,title="fig:",width=226 ] in this section we investigate the impact of fundamentalist and chartist trading profiles on the order submission strategies and the resulting market book shape ..,title="fig:",width=226].,title="fig:",width=226].,title="fig:",width=226 ] figure [ figure6b ] reveals a higher number , and larger size , of market orders to sell with respect to market orders to buy , particularly in the case of noise trader strategy only .this indicates that agents are more likely to demand immediacy in execution , via the use of market orders , on the sell side of the market .figure [ figure6 ] displays the decumulative distribution function ( ddf ) of the limit order placement distance from the midpoint , for buy ( black ) and sell ( red ) limit orders , with noise trading ( left ) , noise trading and fundamentalism ( centre ) , and noise trading , fundamentalism and chartism ( right ) .we observe that with only noise trading , the ddf appears to be normally distributed for both buy and sell orders .when the fundamentalist component dominates both buy and sell limit orders are placed very close to the 
midpoint and the ddf appears to have an exponential decay ( this also explains why the price follows very closely the fundamental price in this case ) .when the chartist component is activated more and more limit orders are placed at larger distances from the midpoint generating a hyperbolic decay for the ddf ..,title="fig:",width=226].,title="fig:",width=226].,title="fig:",width=226 ] figure [ figure9 ] shows the value of the hill exponent for the distribution of limit order placement from the best bid - ask ( right ) .the estimation of the hill exponent gives values comparable with those estimated empirically by zovko and farmer ( 2002 ) and potters and bouchaud ( 2003 ) .we then measured the distribution of gap sizes , where the gaps are defined as the difference between the best price ( bid / ask ) and the price at the next best quote ( on each side respectively ) . on the left side of figure [ figure9 ]we plot the hill exponent for the distribution of the first gap size and show that it decreases with ( this observation , as discussed in the next section , is important in understanding the origin of large price fluctuations ) . and different values of .( right ) hill exponent of the order placement distribution for for and different values of ., title="fig:",width=226 ] and different values of .( right ) hill exponent of the order placement distribution for for and different values of ., title="fig:",width=226 ] the origin of these large gaps can be explained as follows .as seen in fig .[ figure6 ] adding the chartist component induces traders to place orders far from the best bid and the best ask .the region where orders are placed is in fact four times larger than in the fundamentalist case . with the same number of orders spread over a broader region of the book, larger gaps between orders may arise both on the buy and sell side of the book .the chartist strategies also have a shorter time horizon that makes the book even more dynamic thus having stronger fluctuations in its shape and eventually creating even more gaps .moreover , order submission differs for optimistic and pessimistic traders .an optimistic trader ( ) can choose sell / buy orders at price levels distributed symmetrically around the the current price . a pessimistic trader ( )instead typically chooses orders at price levels that are systematically below .if the pessimistic trader chooses an order to sell , the order is very likely to be immediately executed as a market order ( due to the fact that there is a high probability of finding a matching limit order to buy at a low price ) . on the other hand , if the trader chooses an order to buy , it is likely to be a limit order placed at a large distance from the bid .figure [ figure12 ] shows this mechanism .it is this asymmetry between buy / sell orders that is responsible for generating larger gaps on the buy side of the book and thus the negative skewness shown in fig . [ figure4 ] . 
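the gap mechanism described here is easy to extract from simulated book snapshots . the helper below is a hypothetical illustration ( it is not part of the model specification above ) : prices are taken in integer ticks , and the first gap on each side is the distance between the best quote and the next best quote .

# first gap on the ask and bid sides of one book snapshot, prices in integer ticks.
def first_gaps(ask_prices, bid_prices):
    asks = sorted(set(ask_prices))                 # ascending: best ask first
    bids = sorted(set(bid_prices), reverse=True)   # descending: best bid first
    ask_gap = asks[1] - asks[0] if len(asks) > 1 else None
    bid_gap = bids[0] - bids[1] if len(bids) > 1 else None
    return ask_gap, bid_gap

# example snapshot: the sparse buy side produces the larger first gap,
# mirroring the asymmetry described in the text.
print(first_gaps(ask_prices=[1002, 1003, 1005], bid_prices=[999, 991, 987]))   # -> (1, 8)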
figure [ figure12 ] . caption : the role of the cara demand function ( [ utility ] ) in the order placement decision for a given snapshot of the book , for a pessimistic trader ( left ) and an optimistic trader ( right ) . the figures at the bottom show a typical pattern of the limit order book at a given time in a simulation . buy orders placed in the book are represented as positive volume , while sell orders placed in the book are represented as negative volume . the top figures are particular instances of figure [ figure ] . the intersections determine the three different price levels for the case of optimistic trader ( right ) and pessimistic trader ( left ) .
figure [ figure8 ] . caption : the values of the parameters are the same as in figure [ figure5 ] .
figure [ figure8 ] displays the book shape with noise trading ( black ) , noise trading and fundamentalism ( red ) , and noise trading , fundamentalism and chartism ( green ) . the market depth is averaged over the entire simulation period . order placements are given as prices relative to the midpoint between the best bid and best ask . we observe that the book has an increasingly more realistic shape , with the depth higher toward the best price , as the chartist component is activated . chartist strategies also generate longer , power law , tails in the distributions of orders in the book , in qualitative agreement with empirical findings ( bouchaud et al . ( 2002 ) ) . in summary , the risk aversion in the cara utility function ( [ utility ] ) has the effect of making noise traders more impatient when they sell than when they buy . noise traders prefer to sell immediately via market orders , and to buy by submitting limit orders at prices lower than the current quote . the contribution of fundamentalist rules is that of reducing the imbalance between buy and sell orders of both types , and of concentrating the distribution of orders around the midpoint , thus creating a more realistic shape of the book . the effect of chartist rules is that of widening the distribution of limit order submissions farther away from the midpoint , thus generating fatter tails in the book and larger gaps between orders , particularly on the buy side of the book . next we address the fundamental question of what generates large price returns , an issue that has been discussed by several authors including gabaix et al . ( 2003 ) , plerou et al . ( 2004 ) , farmer et al . ( 2004 ) , farmer and lillo ( 2004 ) , and lillo and farmer ( 2005 ) . we compute the distribution of returns conditional on the size of incoming market orders .
to this end ,we split orders into three groups with approximately the same amount of orders in each .the first group includes orders for buying or selling less than 15 stocks , the second group has orders of size between 15 and 30 stocks , and the last group orders of size larger than 30 stocks . figure [ figure10 ] ( left ) displays the conditional distributions of returns , given orders of different sizes , and shows the same power law decay , indicating that large price fluctuations seem to be rather insensitive to order size . in fig .[ figure10 ] ( right ) we find instead an almost linear relationship between the hill exponent of returns distributions and the hill exponent of the gap distribution , indicating , as suggested by farmer and lillo ( 2004 ) , that the presence of large gaps at the best price is what drives large price changes .( black ) , size ( red ) , size ( green ) , for and .( right ) the tail index of the first gaps versus the tail index of returns for and different values of .,title="fig:",width=226 ] ( black ) , size ( red ) , size ( green ) , for and .( right ) the tail index of the first gaps versus the tail index of returns for and different values of .,title="fig:",width=226 ]in this paper we have set up and analysed a model of a double auction market .each agent determines its demand through maximisation of its expected utility of wealth , and bases its expectation of future return on a fundamentalist , a chartist and a noise trader component .agents differ in risk aversion , investment time horizon and the weight given to the three components of expected return .we have thus extended a number of earlier models in this literature which allowed agents to only place orders of unit size , so that the model now incorporates feed back from the ongoing evolution of the market .we have used a number of statistical tools such as decumulative distribution functions , the hill tail index , and the rescaled range statistic to analyse time series simulated by the model .we find that the chartist component needs to be activated in order that the generated returns exhibit many empirically observed features such as volatility clustering , fat tails and long memory .we also find that this approach describes fairly well a number of recently observed stylized facts of double auction markets , such as the fat - tailed distribution of limit orders placement from current bid / ask and the fat - tailed distribution of orders stored in the order book .our paper also contributes to the debate on what generates large price changes in stock markets .our simulations seem to confirm the picture proposed by farmer and lillo ( 2004 ) , namely that large returns are driven by large gaps occurring in price levels adjacent to the best bid and best ask .future research could aim to enrich the economic framework of the model .for instance in place of allowing the weights on the fundamentalist , chartist and noise trader components in eq .( [ expect ] ) to the randomly selected they could be chosen on the basis of some fitness measure as in brock and hommes ( 1998 ) .a further question of interest would be to see how the order book and order flow are affected if we assume agents have crra ( constant relative risk aversion ) utility functions , in which case their asset demands depend on their level of wealth .we know from chiarella , dieci and gardini ( 2006 ) that in this type of modelling framework crra and cara utility functions lead to different types of dynamics .jp acknowledges kind 
hospitality of city university during his stay with grant 2004-be-00314 from agncia de gesti dajuts universitaris i de recerca and also financial support by direccin general de investigacin under contract no .fis2006 - 05204 .gi acknowledges generous hospitality and financial support from uts during her stay in sydney in august 2003 .carl chiarella acknowledges financial support from australian research council grant dp0450526 .we would like to thank ( tony ) xuezhong he for helpful comments .bottazzi , g. , dosi , g. , rebesco , i. , 2005 . institutional achitectures and behavioural ecologies in the dynamics of financial markets : a preliminary investigation .journal of mathematical economics 41 , 197 - 228 .bouchaud , j .-p . , m , m. , potters , m. , 2002 .statistical properties of stock order books : empirical results and models .quantitative finance 2 , 251 - 256 .bouchaud , j .-p . , kockelkoren , j. , potters , m. , 2006 .random walks , liquidity molasses and critical response in financial markets .quantitative finance 6 , 115 - 123 .bouchaud , j .-p . , gefen , y. , potters , m. , wyart , m. , 2004 .fluctuation and response in financial markets : the subtle nature of random price changes .quantitative finance 4 , 176 - 190 .brock , w. , hommes , c. , 1998 .heterogeneous beliefs and routes to chaos in a simple asset pricing model .journal of economics dynamics and control 22 , 1235 - 1274 .challet , d. , stinchcombe , r. , 2001 . analyzing and modelling 1 + 1d markets .physica a 300 , 285 - 299 .chiarella , c. , dieci , r. , gardini , l. , 2006 .asset price and wealth dynamics in a financial market with heterogeneous agents .journal of economic dynamics and control 30 , 1755 - 1786 .chiarella , c. , iori , g. , 2002 . a simple microstructure model of double auction markets .quantitative finance 2 , 346 - 353 .consiglio , a. , lacagnina , v. , russino , a. , 2005 .a simulation analysis of the microstructure of an order driven financial market with multiple securities and portfolio choices .quantitative finance 5 , 71 - 88 .farmer , j.d . ,gillemot , l. , lillo , f. , mike , s. , sen , a. , 2004 .what really causes large price changes ? .quantitative finance 4 , 383 - 397 .farmer , j.d . ,lillo , f. , 2004 . on the origin of power laws in financial markets .quantitative finance 4 , c7-c10 .daniels , m.g . , farmer , j.d . ,gillemot , l. , iori , g. , smith , e. , 2003 . quantitative model of price diffusion and market friction based on trading as a mechanistic random process .physical review letters 90 , 108102 .gabaix , x. , gopikrishnan , p. , plerou , v. , stanley , h. e. , 2003 .a theory of power - low distributions in financial market fluctuations .nature 423 , 267 - 270 .gillemot , l. , farmer , j.d . ,lillo , f. , 2006 . there s more to volatility than volume , quantitative finance 6 , 371 - 384 .gil - bazo , j. , moreno , d. , tapia , m. , 2007 .price dynamics , informational efficiency and wealth distribution in continuous double auction markets .computational intelligence 23 , 176 - 196 .hill , b.m . , 1975 .a simple general approach to inference about the tail of a distribution .annals of statistics 3 , 1163 - 1173 .hommes , c.h . , 2001 .financial markets as nonlinear adaptative evolutionary systems .quantitative finance 1 , 149 - 167 .hurst , h. , 1951 .long term storage capacity of reservoirs .transactions of the american society of civil engineers 116 , 770 - 799 .kirman , a. , teyssiere , g. 
, 2002 .microeconomic models for long memory in the volatility of financial time series .studies in nonlinear dynamics and econometrics 5 , 281 - 302 .li calzi , m. , pellizzari , p. , 2003 .fundamentalists clashing over the book : a study of order - driven stock markets .quantitative finance 3 , 470 - 480 .lillo , f. , farmer , j.d . , mantegna , r.n .single curve collapse of the price impact function for the new york stock exchange .nature 421 , 129 - 130 .lillo , f. , farmer , j.d . , 2004 . the long memory of the efficient market .studies in nonlinear dynamics & econometrics 8 , 1 - 33 .lillo , f. , farmer , j.d . , 2005 . the key role of liquidity fluctuations in determining large price fluctuations .fluctuations and noise letters 5 , l209-l216 .lillo , f. , mike , s. , farmer , j.d . ,theory for long memory in supply and demand .physical review e 7106 287 - 297 .lo , a. , 1991 .long - term memory in stock market prices .econometrica 59 , 1279 - 1313 .luckock , h. , 2003 .a steady - state model of the continuous double auction .quantitative finance 3 , 385 - 404 . lux , t. , 2001 .the limiting extremal behaviour of speculative returns : an analysis of intra - daily data from the frankfurt stock exchange . applied financial economics 11 , 299 - 315 .mandelbrot , b. , 1972 . statistical methodology for non - periodic cycles : from the covariance to r / s analysis .annals of economic and social mesurements 1 , 259 - 290 .plerou , v. , gopikrishnan , p. , gabaix , x. , stanley , h.e . , 2004 . on the origin of power - law fluctuations in stock pricesquantitative finance 4 , c11-c15 .potters , m. , bouchaud , j .-more statistical properties of stock order books and price impact .physica a 324 , 133 - 140 .potters , m. , bouchaud , j .-p . , 2006 .trend followers lose more often than they gain .wilmott magazine jan 2006 .raberto , m. , cincotti , s. , focardi , s.m . , marchesi , m. , 2001 .agent - based simulation of a financial market .physica a 299 , 320 - 328 .slanina , f. , 2007 .critical comparison of several order - book models for stock - market fluctuations .the european physical journal b , forthcoming .weber , p. , rosenow , b. , 2005 .order book approach to price impact .quantitative finance 5 , 357 - 364 .zovko , i.i . , farmer , j.d . , 2002 .the power of patience : a behavioral regularity in limit order placement .quantitative finance 2 , 387 - 392 .at the beginning of each trading timestep , the agent constructs its individual demand function and determines the amount of wealth it would like to invest in the risky asset for any possible level of the notional transaction price level .the residual wealth is invested in a riskless asset with zero interest rate .for simplicity , in this appendix we drop the agent superscript from all variables .the following procedure for the determination of individual demand functions is understood to apply to every agent .we first assume that the agent s decision is taken in terms of a cara ( constant absolute risk aversion ) class as given by eq .( [ utility ] ) where is the risk aversion of the agent .the wealth at a given time step is given by a cash amount and the quantity of the risky asset whose value is .that is : . at time the agent places an order of size with price level .the agent takes these two decisions about size and price level based on its own expected price of the stock at a time horizon ( cf .( [ pexpect ] ) and ( [ tau ] ) ) and the maximization of its own expected utility function at horizon . 
assuming the cara class utility function ( [ utility2 ] ) , the invertor s expected utility at time is given by , \label{w1}\ ] ] where ] stands for the conditional variance .since =w_t+s_tp \mathbb{e}_{t}\left[\rho_{t+\tau}\right] ] , we get +\alpha^2 s_t^2p^2v_{t}\left[\rho_{t+\tau}\right]/2\right)}. \label{w3}\ ] ] at time the agent expects that the price will be ( cf .( [ pexpect ] ) ) and the expected return thus reads =\ln(\hat{p}_{t+\tau}/p) ] as the historical one calculated over a certain time window as shown in section [ themodel ] in eq .( [ sigma ] ) . differentiating the expected utility function ( [ w3 ] ) with respect to setting the expression above to zero we determine the optimal amount of stocks , the agent wishes to hold in its portfolio at time for a given price level , based on expectations at time horizon , namely which coincides with eq .( [ equation8 ] ) . before going back to the main text, it is worth mentioning that bottazzi et .al ( 2005 ) give a similar analysis but with the hypothesis that the expected utility function reads -\frac12 \alpha v_{t}\left[w_{t+\tau}\right ] .\label{w6}\ ] ] this is simply a function depending linearly on the expected return and its variance ( see also brock and hommes ( 1998 ) , hommes ( 2001 ) and kirman and tessyiere ( 2002 ) . by neglecting higher momentsone is in practice assuming gaussianity in the agent s expectations , as we do .in order to quantify the fat tails of our distribution of returns we estimate the hill tail index ( hill ( 1975 ) ) . the hill estimator is a maximum likelihood estimator of the parameter of the pareto law for large , where denotes the return pdf . because of its simplicitythe hill estimator has become the standard tool in most studies of tail behaviour of economic data . to compute the hill tail index ,the sample elements are put in descending order : where is the length of our data sample and is precisely the number of observations located in the tail of our distribution .hence , the hill estimator obtains the inverse of as .\label{hill}\ ] ] the main difficulty with this procedure is to choose the optimal threshold .if we take too small we may have too few statistics , while if we include too large a data set inside the tail then the estimator increases because of contamination with entries from more central parts of the distribution .there are many techniques aimed at finding this optimal cut - off ( see for instance lux ( 2001 ) ) .all of them agree that the optimal is defined as the number of order statistics minimizing the mean squared error of defined in eq .( [ hill ] ) .however , the optimal sample fraction is not easy to infer from this assertion and one of the most common ways to perform the estimation in the literature is by a bootstrapping technique ( again see lux ( 2001 ) ) . in our case , we have evaluated the hill tail index for a fixed threshold taking 5% of the data . while this choice does not necessarily provide the optimal cut - off ,this is the value commonly taken in the literature ( see e.g. lux ( 2001 ) ) .we have carried out the hill estimator procedure to estimate the right and the left tail exponents of both the return and also the absolute return time series . the estimated tail index given by eq .( [ hill ] ) is obtained for every simulation we ran .it can be shown that is asymptotically normal with zero mean and variance where ( see lux ( 2001 ) for more details ). 
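as an illustration , the estimator just described can be written in a few lines ; the 5% tail fraction matches the choice stated above , while the synthetic pareto test sample is an assumption used only to check the implementation .

# Hill tail-index estimator on the largest order statistics of a sample.
import numpy as np

def hill_index(sample, tail_fraction=0.05):
    """Hill estimate of the tail exponent from the top tail_fraction of the sample."""
    x = np.sort(np.asarray(sample, dtype=float))[::-1]   # descending order
    k = max(2, int(tail_fraction * x.size))              # number of tail points
    log_excess = np.log(x[:k - 1]) - np.log(x[k - 1])    # relative to the k-th largest
    gamma_hat = log_excess.mean()                        # estimate of 1 / alpha
    return 1.0 / gamma_hat

# check on synthetic pareto data with known exponent alpha = 3:
rng = np.random.default_rng(3)
pareto = (1.0 - rng.random(200_000)) ** (-1.0 / 3.0)     # P(X > x) = x**-3 for x >= 1
print(hill_index(pareto))                                # should be close to 3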
the confidence level of being inside the interval of one unit of mean squared error is 67% , being inside the interval of two units of mean squared error is 95% , and being inside the interval of three units of mean squared error is 99.7% .all these properties allow us to give the error of our measurement and its degree of confidence .the output of our simulations gives the average of the hill tail index over all the path simulations and its standard deviation .the estimation should converge to the `` true '' hill tail index in accordance to the central limit theorem .hence , as the number of simulations increases and where is the total number of simulations and is the index for each simulation .so that we obtain as the confidence band for , and if one wants to focus on the tail index this reads
|
in this paper we develop a model of an order - driven market where traders set bids and asks and post market or limit orders according to exogenously fixed rules . agents are assumed to have three components to the expectation of future asset returns , namely a fundamentalist , a chartist and a noise trader component . furthermore agents differ in the characteristics describing these components , such as time horizon , risk aversion and the weights given to the various components . the model developed here extends a great deal of earlier literature in that the order submissions of agents are determined by utility maximisation , rather than the mechanical unit order size that is commonly assumed . in this way the order flow is better related to the ongoing evolution of the market . for the given market structure we analyze the impact of the three components of the trading strategies on the statistical properties of prices and order flows and observe that it is the chartist strategy that is mainly responsible for the fat tails and clustering in the artificial price data generated by the model . the paper provides further evidence that large price changes are likely to be generated by the presence of large gaps in the book .
|
internet traffic has exploded in the last fifteen years as an area of intense theoretical and experimental research . as the largest engineered infrastructure and information system in human history , the internet s staggering size and complexityare reinforced by its decentralized and self - organizing structure . using packets ofencapsulated data and a commonly agreed protocol suite , the internet has far outgrown its origins as arpanet whose traffic has demanded new models and ways of thinking to understand and predict . amongst the earliest discoverieswere the researches of leland and wilson who identified the non - poisson nature of internet traffic .this was followed by the seminal paper of leland , taqqu , willinger , and wilson which proved that internet packet interarrival times are both self - similar and portray long - range dependence . though self - similarity is present at all time scales , it is most well - defined when traffic is stationary , an assumption that can only last a few hours at the most .the lack of stationarity on long time scales is due to one of the most widely known periodicities ( or oscillations ) in internet traffic , the diurnal cycle with 12 and 24 hour peaks .internet periodicities are not new and have been well - studied since the earliest days of large - scale measurements of packet traffic , however , they rarely receive primary attention in discussions of traffic and are often mentioned only as an aside or a footnote .gradually , however , they are gaining more attention .this new area of research has been dubbed _network spectroscopy _ or _internet spectroscopy_. in this paper , they will take front and center as the most important periodicities , as well as the techniques to measure them , are described .identifying periodicities in internet traffic is , in general , not markedly different from standard spectral analysis of any time series .the same cautions apply with sampling rates and the nyquist theorem to determine the highest identifiable frequency as well as to be aware of possible aliasing . in addition , the sampling period is important due to the large ranges of magnitudes the periods of internet periodicities occupy .the standard method is covered in .a continuous time series is collected and binned with a sampling rate where the number of packets arriving every interval seconds are counted .next , to remove the dc component of the signal , every time step has the mean of the entire time series subtracted from it .next you calculate the autocovariance ( acvf ) of the adjusted time series . 
where for a time series of sampling periods ( total sampling time ) the acvf , at lag , is defined as with a typical lag range chosen of .finally , a fourier transform is taken of the acvf with maximum lag and the periodogram created from the absolute value ( amplitude ) of the fourier series a resulting periodogram ( see figure [ bgp ] ) has several typical features .first , low frequency noise can be present , again testifying to the self - similar nature of the traffic .this can sometimes obscure low - frequency periodicities in the data .second , are any periodicities , their harmonics , and occasionally even small peaks perhaps representing nonlinear mixing of a sort between two periodicities , often with periods of different orders of magnitude .given the nonstationary nature of internet traffic and the frequent presence of transients , methods based on the fourier transform can only given an incomplete view of the periodic dynamics of internet traffic .in particular , especially for rapidly changing periodicities such as those caused by rtt of flows , periodicities may only be temporary before shifting , disappearing , or being displaced .wavelet methods have been developed in great theoretical and practical detail in the last several decades to allow for the analysis of a signal s periodic nature on multiple times scales .wavelet techniques will not be covered here in detail though there are many good references .the continuous wavelet function on the signal , here an internet traffic trace , is given for a mother wavelet , with representing a stretching coefficient ( scale ) and represents a translation coefficient ( time ) in figure [ bgp ] alongside the fft of the signal is a contour plot generated by plotting using the morlet mother wavelet over 12 octaves .one of the key advantages of wavelets is seeing the periodic variation over time .the y - axis represents the period of the signal represented and the x - axis is the time of the traffic trace in seconds .a first feature is the continuous strong periodicity at 30 seconds as a result of the update packets .a second and more intriguing feature are the inverse triangular ` bursts ' of high frequency traffic with an average period close to one hour .these are update packets generated by route flapping , which are damped for a maximum period of one hour according to the most common presets for route flapping damping .the packets with the most pernicious flapping routers announcing withdrawals were removed in the third figure where the hourly oscillation largely disappears .there are a plethora of traffic periodicities that represent oscillations in traffic over periods of many orders of magnitude from milliseconds to weeks .broido , et . 
believe there are thousands of periodic processes in the internet .the sheer range of the periods of the periodicities means that many times , only certain periodicities appear in packet arrival time series due either to the sampling rate or sampling duration .this is one of the reasons why a comprehensive description of all internet periodicities has rarely been done .internet periodicities have origins which broadly correspond to two general causes : first , there are protocol or data transmission driven periodicities .these range on the time scale from microseconds to seconds , or in rare cases , hours .these periodicties can again be broken down into two smaller groups , periodicities driven by packet data transmission on the link layer and periodicities driven by protocol behavior on the transport layer .second are application driven periodicities .their periods range on the time scale of minutes to hours to weeks , and quite possibly longer .these are all generated from activities at the application layer , either by automated applications such as bgp or dns or user driven applications via http or other user application protocols .the major known periodicities are summarized in figure [ periodicities ] and will be described in detail in the next two subsections .a key link level periodicity due to the throughput of packet transmission of a link and can be deduced from the equation : where is the average throughput of the link and is the average packet size at the link level .the base frequency is the rate of packet emission across the link at the optimum throughput and packet size .the base frequency for data transmission is given by where is the bandwidth of the link and the packet size is the mtu packet size. therefore for 1 gigabit , 100 mbps , and 10 mbps ethernet links with mtu sizes of 1500 bytes , the theoretical optimal base frequencies are 83.3 khz , 8.3 khz , and 833 hz respectively .other technologies have their own specific periods such as sonet frames identified with periods of 125 .these are among the most difficult traffic to identify due to the need for high sampling rates of packet traffic . at a minimum ,a microsecond sampling rate is usually necessary to make sure you can identify all link - layer periodicities .it is rare that both link layer and other periodicities are displayed together since the massive memory overhead of recording the timestamp of almost every packet is necessary .the link layer periodicities are receiving much of the attention in the research , however , due to their possible use in inferring bottlenecks and malicious traffic .the main practical applications being researched are inferring network path characteristics such as bandwidth , digital fingerprinting of link transmissions , and detecting malicious attack traffic by changes in the frequency domain of the transmission signal . use analysis of the distribution of packet interarrival times to infer congestion and bottlenecks on network paths upstream . 
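as a quick numerical check of the link - layer base frequencies quoted above , the following few lines ( ours , not from the referenced measurements ) evaluate f = bandwidth / ( 8 * mtu ) for the three ethernet rates mentioned , ignoring framing overhead .

# base packet-emission frequencies for back-to-back MTU-sized frames.
MTU_BYTES = 1500
for name, bandwidth_bps in [("1 Gb/s", 1e9), ("100 Mb/s", 1e8), ("10 Mb/s", 1e7)]:
    base_freq_hz = bandwidth_bps / (8 * MTU_BYTES)
    print(f"{name}: {base_freq_hz / 1e3:.1f} kHz")
# -> 83.3 kHz, 8.3 kHz and 0.8 kHz (i.e. ~833 Hz), matching the values in the text.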
in measures of packet arrival distributions , particularly in the frequency domain , are being tested to recognize and analyze distributed denial of service or other malicious attacks against computer networks .inspecting the frequency domain of a signal can also reveal the fingerprints of the various link level technologies used along the route of the signal as is done in .the transport layer also produces its own periodicities .in particular , both tcp and icmp often times operate bidirectional flows with the interarrival of ack packets corresponding to the rtt between the source and destination , often in the range of 10 ms to 1 s. instead of just frequency peaks there usually are wide bands corresponding to the dominant rtt in the tcp or icmp traffic measured . according to most equations of tcp throughput such as that by semke et . the throughput of tcp depends inversely on the rtt so that the tcp rtt periodicities often can give a relative estimate of throughput of the flows producing them and the distribution of rtt for flows in the traffic trace .exact estimates are difficult though since packet loss and maximum segment size are usually unknown .icmp , though a connectionless protocol also has echo replies which can also appear as periodicities if they are persistent through time .once you rise to periods above one second , application layer periodicities dominate the spectrum .these come from a variety of sources including software settings and human activity . at the low endare the 30s and sometimes 60s periodicities in bgp traffic .the 30s oscillation , shown in figure [ bgp ] , is the most common set time for routers to advertise their presence and continuing function to neighboring routers using keepalive bgp updates .these are the strongest periodicities present in bgp traffic .large - scale topological perturbations such as bgp storms can also produce transient periodicities in traffic such as large - scale route flapping which is shown in figure [ bgp ] .udp traffic periodicities are rarely consistent and large - scale and are generally generated by dns , the largest application using udp .claffy et .al . 
identified periodicities of dns updates transmitted with periods of 75 minutes , 1 hour , and 24 hours due to default settings in windows 2000 and xp dns software .they warn that such software settings could possibly cause problems in internet traffic if they lead to harmful periods of traffic oscillations and congestion .large numbers of usually source and software specific udp periodicities were also identified by brondman .user traffic driven periodicities were the first known and most easily recognized .the first discovered and most well - known periodicity is the 24 hour diurnal cycle and its companion cycle of 12 hours .these cycles have been known for decades and reported as early as 1980 and again in 1991 as well as in many subsequent studies .this obviously refers to the 24 hour work - day and its 12-hour second harmonic as well as activity from around the globe .the other major periodicity from human behavior is the week with a period of 7 days and a second harmonic at 3.5 days and barely perceptible third harmonic at 2.3 days .there are reports as well of seasonal variations in traffic over months , but mostly these have not been firmly characterized .long period oscillations have been linked to possible causes of congestion and other network behavior related to network monitoring .one note is that user traffic driven periodicities tend to appear in protocols that are directly used by most end users .the periodicities appear tcp / ip not udp / ip and are mainly attributable to activity with the http and smtp protocols .they also often do not appear in networks with low traffic or research aims such as the now defunct 6bone ipv6 test network .these periodicities range in roughly 12 orders of magnitude . however , they share one particular characteristic .namely , the longer the period of the periodicity , the less likely it is to betray variations in period or phase over time .for example , the diurnal and weekly periodicities have their roots in human activity and are based on the earth s rotation and the seven week social convention .these do not vary appreciably over long - time periods and since they help drive human behavior which drives traffic , these could be considered the most permanent of all periodicities and this is partially why these were the earliest known .the bgp keepalive updates and dns updates are based on commonly agreed software settings .these also do not vary appreciably and only change by user preference .however , the transport and link layer periodicities are much more variable .the rtt of tcp or icmp varies depending on the topological distance and congestion between two points .hardly , stable variables . 
assuming the bandwidth of the link layers is steady ,the average packet size , which depends on both the maximum transmission unit ( mtu ) software settings can cause large variability to be seen in actual network traffic .understanding the range of these periodicities is more important than memorizing a distance frequency value since it is always different depending on the time and place of measurement .internet periodicities will likely play a large role in full characterization and simulation of internet traffic .hopefully further work will put them in their rightful place as fundamental phenomena of data traffic .we leland , & dv wilson , high time - resolution measurement and analysis of lan traffic : implications for lan interconnection , proceedings ieee lnfocom 91 , ( 1991 ) 1360 - 1366 .we leland , ms taqqu , w willinger , & dv wilson , on the self - similar nature of ethernet traffic ( extended version ) , ieee / acm transactions on networking , 2 1 , ( 1994 ) -151 .a broido , e nemeth , & kc claffy , spectroscopy of dns update traffic , acm sigmetrics 2003 , 31 ( 2003 ) 320 - 321 .db percival & at walden , wavelet methods for time series analysis , cambridge university press , new york , 2000 .y nievergelt , wavelets made easy springer , berlin , 1999 .g kaiser , a friendly guide to wavelets springer , berlin , 1994 .p addison , the illustrated wavelet transform handbook , crc press , boca raton , 2002 .m roughan , a greenberg , c kalmanek , m rumsewicz , j yates , & y zhang , experience in measuring backbone traffic variability : models , metrics , measurements and meaning , proceedings of the 2nd acm sigcomm workshop on internet measurement , ( 2002 ) 91 - 92 .p owezarski & n larrieu , internet traffic characterization - an analysis of traffic oscillations , in high speed networks and multimedia communications edited by mm freire , p lorenz , & m lee , springer , berlin , 2004 , 96
|
internet traffic displays many persistent periodicities ( oscillations ) on a large range of time scales . this paper describes the measurement methodology to detect internet traffic periodicities and also describes the main periodicities in internet traffic . keywords : internet traffic , packets , fft , wavelets , periodicities
|
almost every summer , there is a heat wave somewhere in the us that garners popular media attention . during such hot spells , daily record high temperatures for various citiesare routinely reported in local news reports .a natural question arises : is global warming the cause of such heat waves or are they merely statistical fluctuations ? intuitively , record - breaking temperature events should become less frequent with time if the average temperature is stationary .thus it is natural to be concerned that global warming is playing a role when there is a proliferation of record - breaking temperature events . in this work ,we investigate how systematic climatic changes , such as global warming , affect the magnitude and frequency of record - breaking temperatures .we then assess the potential role of global warming by comparing our predictions both to record temperature data and to monte carlo simulation results .it bears emphasizing that record - breaking temperatures are distinct from threshold events , defined as observations that fall outside a specified threshold of the climatological temperature distribution .thus , for example , if a city s record temperature for a particular day is , then an increase in the frequency of daily temperatures above ( _ i.e. _ , above the percentile ) is a threshold event , but not a record - breaking event .trends in threshold temperature events are also impacted by climate change , and is thus an active research area .studying threshold events is also one of the ways to assess agricultural , ecological , and human health effects due to climate change .here we examine the complementary issue of record - breaking temperatures , in part because they are popularized by the media during heat waves and they influence public perception of climate change , and in part because of the fundamental issues associated with record statistics .we focus on daily temperature extremes in the city of philadelphia , for which data are readily available on the internet for the period 18741999 .in particular , we study how temperature records evolve in time for each _ fixed _ day of the year .that is , if a record temperature occurs on january 1 , 1875 , how long until the next record on january 1 occurs ? using the fact that the daily temperature distribution is well approximated by a gaussian ( sec .[ t - data ] ) , we will apply basic ideas from extreme value statistics in sec .[ etr ] to predict the magnitude of the temperature jump when a new record is set , as well as the time between successive records on a given day .these predictions are derived for an arbitrary daily temperature distribution , and then we work out specific results for the idealized case of an exponential daily temperature distribution and for the more realistic gaussian distribution .although individual record temperature events are fluctuating quantities , the average size of the temperature jumps between successive records and the frequency of these records are systematic functions of time ( see , _ e.g. _ , for a general discussion ) .this systematic behavior permits us to make meaningful comparisons between our theoretical predictions , numerical simulations ( sec .[ simulations ] ) , and the data for record temperature events in philadelphia ( sec .[ trd ] ) .clearly , it would be desirable to study long - term temperature data from many locations to discriminate between the expected number of record events for a stationary climate and for global warming . 
for u.s .cities , however , daily temperature records extend back only 100140 years , and there are both gaps in the data and questions about systematic effects caused by `` heat islands '' for observation points in urban areas . in spite of these practical limitations , the philadelphia data provide a useful testing ground for our theoretical predictions . in sec .[ sct ] , we investigate the effect of a slow linear global warming trend on the statistics of record - high and record - low temperature events .we argue that the presently - available 126 years of data in philadelphia , coupled with the current global warming rate , are insufficient to meaningfully alter the frequency of record temperature events compared to predictions based on a stationary temperature .this conclusion is our main result .finally , we study the role of correlations in the daily temperatures on the statistics of record temperature events in sec .although there are substantial correlations between temperatures on nearby days and record temperature events tend to occur in streaks , these correlations do not affect the frequency of record temperature events for a given day .we summarize and offer some perspectives in sec .the temperature data for philadelphia were obtained from a website of the earth and mineral sciences department at pennsylvania state university .the data contain both the low and high temperatures in philadelphia for each day between 1874 and 1999 .the data are reported as an integer in degrees fahrenheit , so we anticipate an error of .no information is provided about the accuracy of the measurement or the precise location where the temperature is measured .thus there is no provision for correcting for the heat island effect if the weather station is in an increasingly urbanized location during the observation period . for each day, we also document the middle temperature , defined as the average of the daily high and daily low . to get a feeling for the nature of the data , we first present basic observations about the average annual temperature and the variation of the temperature during a typical year .figure [ av - temps ] shows the average annual high , middle , and low temperature for each year between 1874 and 1999 . to help discern systematic trends, we also plot 10-year averages for each data set .the average high temperature for each year is increasing from 1874 until approximately 1950 and again after 1965 , but is decreasing from 1950 to 1965 . over the 126 years of data, a linear fit to the time dependence of the annual high temperature for philadelphia gives an increase of , compared to the well - documented global warming rate of over the past century . on the other hand, there does not appear to be a systematic trend in the dependence of the annual low temperature on the year .a linear fit to these data give a _ decrease _ of .this disparity between high and low temperatures is a puzzling and as yet unexplained feature of the data ., scaledwidth=40.0% ] a basic feature about the daily temperature is its approximately sinusoidal annual variation ( fig .[ daily - avs+recs ] ) .the coldest time of the year is early february while the warmest is late july .an amusing curiosity is the discernible small peak during the period january 2025 .this anomaly is the traditional `` january thaw '' in the northeastern us where sometimes snowpack can melt and a spring - like aura occurs before winter returns ( see for a detailed discussion of this phenomenon ) . 
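as a small illustration of the linear-trend estimates quoted above, the sketch below fits a least-squares line to annual-mean daily highs. the synthetic baseline, slope and noise level are assumptions standing in for the actual 1874-1999 philadelphia record, which is not reproduced here.

```python
import numpy as np

# illustrative sketch: least-squares linear trend of annual-mean daily highs,
# as in the fits quoted above. the synthetic values (baseline, slope, noise)
# are assumptions, not the actual philadelphia data.
years = np.arange(1874, 2000)
rng = np.random.default_rng(1)
annual_high = 60.0 + 0.02 * (years - 1874) + rng.normal(0.0, 1.0, years.size)

slope, intercept = np.polyfit(years, annual_high, 1)     # deg F per year
print(f"warming trend: {100.0 * slope:.2f} F per 100 years")
```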
,scaledwidth=40.0% ] also shown , in fig .[ daily - avs+recs ] , are the temperature extremes for each day .the highest recorded temperature in philadelphia of ( ) occurred on august 7 , 1918 , while the lowest temperature of ( ) occurred on february 9 , 1934 .record temperatures also fluctuate more strongly than the mean temperature because there are only 126 years of temperature data . as a result of this short time span, some days of the year have experienced very few records and the resulting current extreme temperature can be far from the value that is expected on statistical grounds ( see sec . [ trd ] ) . to understand the magnitude and frequency of daily record temperatures , we need the underlying temperature distribution for each day of the year . because temperatures have been recorded for only 126 years , the temperature distribution for each individual day is not smooth . to mitigate this problem , we aggregate the temperatures over a 9-day range and then use these aggregated data to define the temperature distribution for the middle day in this range . thus , for example , for the temperature distribution on january 5 , we aggregate all 126 years of temperatures from january 19 ( 1134 data points ) .we also use the middle temperature for each day to define the temperature distribution .range10 , 9 , 8 , and 6 points respectively , for january 5 , april 5 , october 5 , and july 5 .the distributions are all shifted horizontally by the mean temperature for the day and then vertically to render all curves distinct .the dashed curves are visually - determined gaussian fits ., scaledwidth=45.0% ] figure [ temp - dist-9day ] shows these aggregated temperature distributions for four representative days the of january , april , july , and october .each distribution is shifted vertically to make them all non - overlapping .we also subtracted the mean temperature from each of the distributions , so that they are all centered about zero .visually , we obtain good fits to these distributions with the gaussian , where is the deviation of the temperature from its mean value ( in ) , and with , 4.32 , 4.12 , and 3.14 for january 5 , april 5 , october 5 , and july 5 , respectively .we therefore use a gaussian daily temperature distribution as the input to our investigation of the frequency of record temperatures in the next section .an important caveat needs to be made about the daily temperature distribution .physically , this distribution can not be gaussian _ad infinitum_. instead , the distribution must cut off more sharply at finite temperature values that reflect basic physical limitations ( such as the boiling points of water and nitrogen ) .we will show in the next section that such a cutoff strongly influences the average waiting time between successive temperature records on a given day .notice that the width of the daily temperature distribution is largest in the winter and smallest in the summer .another intriguing aspect of the daily distributions is the tail behavior . 
for january 5 , there are deviations from a gaussian at both at the high- and low - temperature extremes , while for april 5 and october 5 , there is an enhancement only on the high - temperature side .this enhancement is especially pronounced on april 5 , which corresponds to the season where record high temperatures are most likely to occur ( see sec .[ disc ] and fig .[ daily - var ] ) .what is not possible to determine with 126 years of data is whether the true temperature distribution is gaussian up to the cutoff points and the enhancement results from relatively few data , or whether the true temperature distribution on april 5 actually has a slower than gaussian high - temperature decay .we now determine theoretically the frequency and magnitude of record temperature events .the schematic evolution of these two characteristics is sketched in fig .[ record - evolution ] for the case of record high temperatures .each time a record high for a _ fixed _ day of the year is set , we document the year when the record occurred and the corresponding record high temperature . under the (unrealistic ) assumptions that the temperatures for each day are independent and identical , we now calculate the average values of and and their underlying probability distributions ( for a general discussion of record statistics for excursions past a fixed threshold , see _ , while related work on the evolution of records is given in ref . ) . .this event occurs in year .successive record temperatures , , occur in years , , .[ record - evolution],scaledwidth=48.0% ] suppose that the daily temperature distribution is .two subsidiary distributions needed for record statistics are : ( i ) the probability that a randomly - drawn temperature _ exceeds _ , , and ( ii ) the probability that that this randomly - selected temperature _ is less than _ , .these distributions are : we now determine the record temperature recursively .we use the terminology of record high temperatures , but the same formalism applies for record lows . clearly coincides with the mean of the daily temperature distribution , .the next record temperature is the mean value of that portion of the temperature distribution that lies beyond : that is , this formula actually contains a sleight of hand .more properly , we should average the above expression over the probability distribution for to obtain the true average value of , rather than merely using the typical or the average value of in the lower limit of the integral .equation ( [ t1 ] ) therefore does not give the true average value of , but rather gives what we term the _ typical _ value of .we will show how to compute the average value shortly .proceeding recursively , the relation between successive typical record temperatures is given by where the above caveat about using the typical value of in the lower limit , rather than the average over the ( as yet ) unknown distribution of , still applies .we now compute , the probability that the record temperature equals ; this distribution is subject to the initial condition .for the record temperature , the following conditions must be satisfied ( refer to fig . [ record - evolution ] ) : ( i ) the previous record temperature must be less than , ( ii ) the next temperatures , with arbitrary , must all be less than , and ( iii ) the last temperature must equal . 
writing the appropriate probabilities for each of these events , we obtain ^n \ , dt'\right ) p(t ) \nonumber \\ & = & \left(\int_0^{t } \frac{\mathcal{p}_{k-1}(t')}{p_>(t ' ) } \ , dt'\right ) p(t)\,.\end{aligned}\ ] ] the above formula recursively gives the probability distribution for each record temperature in terms of the distribution for the previous record .complementary to the magnitude of record temperatures , we determine the time between successive records .suppose that the current record temperature equals and let be the probability that a new record high the set years later .for this new record , the first highs after the current record must all be less than , while the high temperature must exceed .thus the number of years between the record high and the record is therefore we emphasize that this waiting time gives the time between the record and the record when the record temperature equals the specified value . if the typical value of is used in eq .( [ nav ] ) , we thus obtain a quantity that we term the typical value of . to obtain the true average waiting time, we first define as the probability that the record is broken after additional temperature observations , averaged over the distribution for . using the definition of , we obtain the formal expression different approaches to determine the are given in refs . .there are a number of fundamental results available about record statistics that are _ universal _ and do not depend on the form of the initial daily temperature distribution , as long as the daily temperatures are independent and identically - distributed ( iid ) continuous variables . in a string of observations ( starting at time ) ,there are permutations of the temperatures out of total possibilities in which the largest temperature is the last of the string .thus the probability that a new record occurs in the year of observation , , is simply in a similar vein , the probability that the initial ( ) record is broken at the observation , , requires that the last temperature is the largest while the temperature is the second largest out of independent variables .the probability for this event is therefore again independent of the form of the daily temperature distribution .thus the average waiting time between the zeroth and first records , is infinite !more generally , the distribution of times between successive records can be obtained by simple reasoning .consider a string of iid random variables that are labeled by the time index , with .define the indicator function by definition , the probability for a record to occur in the year is .therefore the average number of records that have occurred up to time is moreover , because the order of all non - record events is immaterial in the probability for a record event , there are no correlations between the times of two successive record events .that is , .thus the probability distribution of records is described by a poisson process in which the mean number of records up to time is .consequently , the probability that records have occurred up to time is given by to appreciate the implications of these formulae for record statistics , we first consider the warm - up exercise of an exponential daily temperature distribution . 
for this case, all calculations can be performed explicitly and the results provide intuition into the nature of record temperature statistics .we then turn to the more realistic case of the gaussian temperature distribution .suppose that the temperature distribution for each day of the year is .equation ( [ pg ] ) then gives we now determine the typical value of each .the zeroth record temperature is .performing the integrals in eq .( [ tk - gen ] ) successively for each gives the basic result namely , a constant jump between typical values of successive record temperatures .for the probability distribution for each record temperature , we compute one at a time for using eq .( [ prob - t ] ) .this gives the gamma distribution this distribution reproduces the typical values of successive temperature records given by eq .( [ tk ] ) ; thus the typical and true average values for each record temperature happen to be identical for an exponential temperature distribution .the standard deviation of is given by , so that successive record temperatures become less sharply localized as increases .for the typical time between the and records , eq .( [ nav ] ) gives substituting into eq .( [ nav - exp ] ) , the typical time is . thus records become less likely as the years elapse .notice that the time between records does not depend on because of a cancellation between the size of the temperature `` barrier '' ( the current record ) and the size of the jump to surmount the record . for the distribution of waiting times between records ,we first consider the time between and in detail to illustrate our approach .substituting eqs .( [ pg - exp ] ) and ( [ prob - t - exp ] ) into eq .( [ qnk ] ) , this distribution is performing this integral by parts gives the result of eq .( [ qn0-exact ] ) , ] .we conclude that the discussion in secs .[ etr][sct ] , which assumed uncorrelated day - to - day temperatures , can be applied to real atmospheric observations , where daily temperatures are correlated .it is worth mentioning , however , that interday correlations do strongly affect the statistics of successive extremes in temperatures .record high temperature in degrees celsius , , where daily temperatures are uncorrelated ( solid line ) and power - law correlated with exponent .( dashed line ) .[ corr_temp],scaledwidth=45.0% ] record high temperature occurs at time ( in years ) or later , using uncorrelated ( solid line ) and power - law correlated daily temperatures ( dashed line ) .[ corr_time],scaledwidth=45.0% ] while temperature correlations do not affect record statistics for a given day , these correlations should cause records to occur as part of a heat wave or a cold snap , rather than being singular one - day events . as a matter of curiosity, we studied the distribution of times ( in days ) between successive record events , as well as the distribution of streaks ( consecutive days ) of record temperatures from the time history of all record temperature events . because the number of record temperatures decreases from year to year , these time and streak distributions are not stationary .we compensate for this non - stationarity by rescaling so that data for all years can be treated on the same footing . 
for example , for the distribution of times between successive records, we rescale each interevent time by the average time between records for that year .thus , for example , if two successive records occurred 78 days apart in a year where 5 record temperature events occurred ( average separation of 73 days ) , the scaled separation between these two events is . for the length of record streaks, we similarly rescaled each streak by the average streak length in that year , assuming record temperature events were uncorrelated . between successive record temperature events ( record highs , record lows ) .the times are scaled by the average time between record events for each year .[ record - time - dist],scaledwidth=40.0% ] the distribution of times between successive record temperature days decays slower than exponentially ( fig .[ record - time - dist ] ) ; the latter form would occur if record temperature events were uncorrelated . in a similar vein, we observe an enhanced probability for records to occur in streaks .since record streaks are rare , we can only make the qualitative statement that the streak distribution is different than that from uncorrelated data .our basic conclusion is that interday temperature correlations do affect statistical features of successive record temperature events but do not affect the statistics of record temperatures on a given day , where events are more than one year apart .two basic aspects of record temperature events are the size of the temperature jump when a new record occurs and the separation in years between successive records on a given day .we computed the distribution functions for these two properties by extreme statistics reasoning . for the gaussian daily temperature distribution, we found that ( i ) the record high temperature asymptotically grows as , where is the dispersion in the daily temperature , and ( ii ) record events become progressively less likely , with the typical time between the and record growing as .this latter result is independent of so that systematic changes in temperature variability should not affect the time between temperature records . from these predictions ,the distribution of waiting times between two successive records on a given day has an inverse - square power - law tail , with a divergent average waiting time .furthermore , the number of record events in the year of observations decays as .these theoretical predictions agree with numerical simulations and with data from 126 years of observations in philadelphia .another important feature is that the annual frequency of record temperature events is not measurably influenced by interday power - law temperature however , these correlations do play a significant role at shorter time scales .our primary result is that we can not _ yet _ distinguish between the effects of random fluctuations and long - term systematic trends on the frequency of record - breaking temperatures with 126 years of data .for example , in the year of observation , there should be record - high temperature events in a stationary climate , while our simulations give such events in a climate that is warming at a rate of per 100 years .however , the variation from year to year in the frequency of record events after 100 years is larger than the difference of , which should be expected because of global warming ( fig .[ max - warming ] ) .after 200 years , this random variation in the frequency of record events is still larger than the effect of global warming . 
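a monte carlo sketch of the comparison summarized above — the expected number of record-high days per year for a stationary climate versus a slow linear warming of the mean — is given below. the dispersion, warming rate and number of trials are assumed values chosen only to illustrate how small the difference is after roughly a century.

```python
import numpy as np

# monte carlo sketch of the comparison above: expected number of record-high
# days per year, stationary climate vs. a slow linear warming of the mean.
# dispersion, warming rate and trial count are assumed illustrative values.
rng = np.random.default_rng(2)
n_years, n_days, n_trials = 126, 365, 200
sigma = 4.0                # deg F daily dispersion (assumed)
warming = 1.0 / 100.0      # deg F per year, i.e. ~1 F per century (assumed)

def records_per_year(drift):
    counts = np.zeros(n_years)
    for _ in range(n_trials):
        temps = (sigma * rng.standard_normal((n_years, n_days))
                 + drift * np.arange(n_years)[:, None])
        running_max = np.maximum.accumulate(temps, axis=0)
        # a record-high occurs on a given day in year y if that day's
        # temperature exceeds all earlier years' values for the same day
        is_record = temps[1:] > running_max[:-1]
        counts[1:] += is_record.sum(axis=1)
    return counts / n_trials

stationary = records_per_year(0.0)
warmed = records_per_year(warming)
# for iid daily temperatures the expected count in year n is ~365/n
print(stationary[99], 365 / 100, warmed[99])
```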
on the other hand , global warming already does affect the frequency of extreme temperature events that are defined by exceeding a fixed threshold .while the agreement between our theory and the data for record temperature statistics is satisfying , there are various facts that we have either glossed over or ignored .these include : ( i ) a significant difference between the number of record high and record low events1705 record high events and only 1346 record low events have occurred the 126 years of data .( ii ) a propensity for record high temperatures in the early spring .this seasonality is illustrated both by the number of records for each day of the year and by the daily temperature variance , where and are the mean and mean - square temperatures for the day ( fig .[ daily - var ] ) .( iii ) the potential role of a systematically increasing variability on the frequency of records .for the last point , krug has shown that for an exponential daily temperature distribution whose width is increasing linearly with time , the number of record events after years grows as , intermediate to the growth of a stationary distribution and linear growth when the average temperature systematically increases .( iv ) day / night or high / low asymmetry . that is , as a function of time there are more days whose highs exceeds a given threshold and fewer days whose high is less than a threshold .paradoxically , however , there are fewer days whose lows exceed a given temperature and more days whose lows are less than a given temperature . since highs generally occur in daytime and lows in nighttime , these results can be restated as follows : the number of hot days is increasing _ and _ the number of cold nights is increasing .we do nt know how this latter statement fits with the phenomenon of global warming .another caveat is that our theory applies in the asymptotic limit , where each day has experienced a large number of record temperatures over the observational history .the fact that there are no more than 10 record events on any single day means that we are far from the regime where the asymptotic limit truly applies .finally , and very importantly , it would be useful to obtain long - term temperature data from many stations to provide a more definitive test of our predictions . for a comprehensive discussion of the global warming of in the past century , see _e.g. _ , ipcc , 2001 : climate change 2001 : the scientific basis .contribution of working group i to the third assessment report of the intergovernmental panel on climate change eds .j. t. houghton , y. ding , d. j. griggs , m. noguer , p. j. van der linden , x. dai , k. maskell , and c. a. johnson , ( cambridge university press , cambridge , 2001 ) .for a general reference on the use of statistical methods in climatology , see _e.g. _ , h. von storch and f. w. zwiers , _ statistical analysis in climate research _ , ( cambridge university press , cambridge , uk , 1999 ) . see _ e.g. _ , e. j. gumbel , _ statistics of extremes _ ( dover publications , mineola , n.y . , 2004 ) ; j. galambos , _ the asymptotic theory of extreme order statistics _( krieger publishing co. , malabar , fl , 1987 ) for a general introduction to extreme statistics .e. koscielny - bunde , a. bunde , s. havlin , h. e. roman , y. goldreich , and h .- j .schellnhuber phys .lett . * 81 * , 729 ( 1998 ) ; j. f. eichner , e. koscielny - bunde , a. bunde , s. havlin , and h .- j .schellnhuber , phys .e * 68 * , 046133 ( 2003 ) ; a. bunde , j. f. eichner , j. w. 
kantelhardt , and s. havlin , phys .lett . * 94 * , 048701 ( 2005 ) .
|
we theoretically study the statistics of record - breaking daily temperatures and validate these predictions using both monte carlo simulations and 126 years of available data from the city of philadelphia . using extreme statistics , we derive the number and the magnitude of record temperature events , based on the observed gaussian daily temperature distribution in philadelphia , as a function of the number of years of observation . we then consider the case of global warming , where the mean temperature systematically increases with time . over the 126-year time range of observations , we argue that the current warming rate is insufficient to measurably influence the frequency of record temperature events , a conclusion that is supported by numerical simulations and by the philadelphia data . we also study the role of correlations between temperatures on successive days and find that they do not affect the frequency or magnitude of record temperature events .
|
the low - lying eigenspace of operators has many important applications , including those in quantum chemistry , numerical pdes , and statistics .given a symmetric matrix , and denote its eigenvectors as .the low - lying eigenspace is given by the span of the first ( usually ) eigenvectors .in many scenario , the real interest is the subspace itself , but not a particular set of basis functions . in particular , we are interested in a sparse representation of the eigenspace .the eigenvectors form a natural basis set , but for oftentimes they are not sparse or localized ( consider for example the eigenfunctions of the free laplacian operator on a periodic box ) .this suggests asking for an alternative sparse representation of the eigenspace . in quantum chemistry ,the low - lying eigenspace for a hamiltonian operator corresponds to the physically occupied space of electrons . in this context , a localized class of basis functions of the low - lying eigenspaces is called wannier functions .these functions provide transparent interpretation and understandings of covalent bonds , polarizations , _ etc ._ of the electronic structure .these localized representations are also the starting point and the essence for many efficient algorithms for electronic structure calculations ( see e.g. the review article ) . in this work ,we propose a convex minimization principle for finding a sparse representation of the low - lying eigenspace . where is the entrywise matrix norm , denotes that is a positive semi - definite matrix , and is a penalty parameter for entrywise sparsity . here is an symmetric matrix , which is the ( discrete ) hamiltonian in the electronic structure context .the variational principle gives as a sparse representation of the projection operator onto the low - lying eigenspace .the key observation here is to use the matrix instead of the wave functions .this leads to a convex variational principle .physically , this corresponds to looking for a sparse representation of the density matrix .we also noted that in cases where we expect degeneracy or near - degeneracy of eigenvalues of the matrix , the formulation in terms of the density matrix is more natural , as it allows fractional occupation of states .this is a further advantage besides the convexity .moreover , we design an efficient minimization algorithm based on split bregman iteration to solve the above variational problem . starting from any initial condition, the algorithm always converges to a minimizer .there is an enormous literature on numerical algorithms for wannier functions and more generally sparse representation of low - lying eigenspace .the influential work proposed a minimization strategy within the occupied space to find spatially localized wannier functions ( coined as `` maximally localized wannier functions '' ) . in , the second author with his collaborators developed a localized subspace iteration ( lsi ) algorithm to find wannier functions .the idea behind the lsi algorithm is to combine the localization step with the subspace iteration method as an iterative algorithm to find wannier functions of an operator .the method has been applied to electronic structure calculation in .as shows , due to the truncation step involved , the lsi algorithm does not in general guarantee convergence . 
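before the detailed formulation below, a small numerical illustration of the density-matrix viewpoint introduced above may help. for a random symmetric test matrix (an assumption made only for this sketch), the projector onto the span of the lowest eigenvectors is the exact density matrix, its trace against the matrix recovers the sum of the lowest eigenvalues, and the entrywise norm penalized by the variational principle can be read off directly.

```python
import numpy as np

# small illustration of the density-matrix viewpoint: for a random symmetric
# test matrix (purely illustrative), the projector onto the span of the N
# lowest eigenvectors is the exact density matrix; tr(H P) then equals the
# sum of the N lowest eigenvalues.
rng = np.random.default_rng(3)
n, N = 50, 5
A = rng.standard_normal((n, n))
H = (A + A.T) / 2

w, V = np.linalg.eigh(H)              # eigenvalues in ascending order
Psi = V[:, :N]                        # N lowest eigenvectors
P = Psi @ Psi.T                       # projector: P^2 = P, tr P = N

print(np.trace(H @ P), w[:N].sum())   # equal up to round-off
print(np.abs(P).sum())                # entrywise l1 norm of the density matrix
```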
as a more recent work in , regularization is proposed to be used in the variational formulation of the schrdinger equation of quantum mechanics for creating compressed modes , a set of spatially localized functions in with compact support . where is the hamilton operator corresponding to potential , and the norm is defined as .this regularized variational approach describes a general formalism for obtaining localized ( in fact , compactly supported ) solutions to a class of mathematical physics pdes , which can be recast as variational optimization problems .although an efficient algorithm based on a method of splitting orthogonality constraints ( soc ) is designed to solve the above non - convex problem , it is still a big challenge to theoretically analyze the convergence of the proposed the algorithm .the key idea in the proposed convex formulation of the variational principle is the use of the density matrix .the density matrix is widely used in electronic structure calculations , for example the density matrix minimization algorithm . in this type of algorithm , sparsity of density matrixis specified explicitly by restricting the matrix to be a banded matrix .the resulting minimization problem is then non - convex and found to suffer from many local minimizers .other electronic structure algorithms that use density matrix include density matrix purification , fermi operator expansion algorithm , just to name a few . from a mathematical point of view, the use of density matrix can be viewed as similar to the idea of lifting , which has been recently used in recovery problems . while a nuclear norm is used in phaselift method to enhance sparsity in terms of matrix rank ; we will use an entrywise norm to favor sparsity in matrix entries .the rest of the paper is organized as follows .we formulate and explain the convex variational principle for finding localized representations of the low - lying eigenspace in section [ sec : formulation ] .an efficient algorithm is proposed in section [ sec : algorithm ] to solve the variational principle , with numerical examples presented in section [ sec : numerics ] .the convergence proof of the algorithm is given in section [ sec : proof ] .let us denote by a symmetric matrix coming from , for example , the discretization of an effective hamiltonian operator in electronic structure theory .we are interested in a sparse representation of the eigenspace corresponding to its low - lying eigenvalues . in physical applications, this corresponds to the occupied space of a hamiltonian ; in data analysis , this corresponds to the principal components ( for which we take the negative of the matrix so that the largest eigenvalue becomes the smallest ) .we are mainly interested in physics application here , and henceforth , we will mainly interpret the formulation and algorithms from a physical view point . the wannier functions , originally defined for periodic schrdinger operators , are spatially localized basis functions of the occupied space . in , it was proposed to find the spatially localized functions by minimizing the variational problem where denotes the entrywise norm of . here is the number of wannier functions and is the number of spatial degree of freedom ( e.g. 
number of spatial grid points or basis functions ) .the idea of the above minimization can be easily understood by looking at each term in the energy functional .the is the sum of the ritz value in the space spanned by the columns of .hence , without the penalty term , the minimization gives the eigenspace corresponds to the first eigenvalues ( here and below , we assume the non - degeneracy that the -th and -th eigenvalues of are different ) . while the penalty prefers to be a set of sparse vectors .the competition of the two terms gives a sparse representation of a subspace that is close to the eigenspace . due to the orthonormality constraint , the minimization problem is not convex , which may result in troubles in finding the minimizer of the above minimization problem and also makes the proof of convergence difficult .here we take an alternative viewpoint , which gives a convex optimization problem .the key idea is instead of , we consider .since the columns of form an orthonormal set of vectors , is the projection operator onto the space spanned by . in physical terms , if are the eigenfunctions of , is then the density matrix which corresponds to the hamiltonian operator . for insulating systems, it is known that the off - diagonal terms in the density matrix decay exponentially fast .we propose to look for a sparse approximation of the exact density matrix by solving the minimization problem proposed in .the variational problem is a convex relaxation of the non - convex variational problem where the constraint is replaced by the idempotency constraint of : .the variational principle can be understood as a reformulation of using the density matrix as variable .the idempotency condition is indeed the analog of the orthogonality constraint .note that requires that the eigenvalues of ( the occupation number in physical terms ) are between and , while requires the eigenvalues are either or .hence , the set is the convex hull of the set therefore is indeed a convex relaxation of . without the regularization , the variational problems andbecome and these two minimizations actually lead to the same result in the non - degenerate case .[ prop : equiv ] let be a symmetric matrix . assume that the -th and -th eigenvalues of are distinct , the minimizers of and are the same .this is perhaps a folklore result in linear algebra , nevertheless we include the short proof here for completeness .it is clear that the unique minimizer of is given by the projection matrix on the first eigenvectors of , given by where are the eigenvectors of , ordered according to their associated eigenvalues .let us prove that is minimized by the same solution .assume is a minimizer of , we calculate where . on the other hand, we have and since . 
therefore , if we view as a variational problem with respect to , it is clear that the unique minimum is achieved when we conclude the proof by noticing that the above holds if and only if .this result states that we can convexify the set of admissible matrices .we remark that , somewhat surprisingly , this result also holds for the hartree - fock theory which can be vaguely understood as a nonlinear eigenvalue problem .however the resulting variational problem is still non - convex for the hartree - fock theory .proposition [ prop : equiv ] implies that the variational principle can be understood as an regularized version of the variational problem .the equivalence no longer holds for and with the regularization .the advantage of over is that the former is a convex problem while the latter is not .coming back to the properties of the variational problem .we note that while the objective function of is convex , it is not strictly convex as the -norm is not strictly convex and the trace term is linear .therefore , in general , the minimizer of is not unique .let , and the non - uniqueness comes from the degeneracy of the hamiltonian eigenvalues .any diagonal matrix with trace and non - negative diagonal entries is a minimizer .let , and the non - uniqueness comes from the competition between the trace term and the regularization .the eigenvalues of are and .straightforward calculation shows that which corresponds to the eigenvector associated with eigenvalue and which corresponds to the eigenvector associated with eigenvalue are both minimizers of the objective function .actually , due to convexity , any convex combination of and is a minimizer too .it is an open problem under what assumptions that the uniqueness is guaranteed .to solve the proposed minimization problem , we design a fast algorithm based on split bregman iteration , which comes from the ideas of variables splitting and bregman iteration .bregman iteration has attained intensive attention due to its efficiency in many related constrained optimization problems . with the help of auxiliary variables ,split bregman iteration iteratively approaches the original optimization problem by computation of several easy - to - solve subproblems .this algorithm popularizes the idea of using operator / variable splitting to solve optimization problems arising from information science .the equivalence of the split bregman iteration to the alternating direction method of multipliers ( admm ) , douglas - rachford splitting and augmented lagrangian method can be found in . by introducing auxiliary variables and , the optimization problem is equivalent to which can be iteratively solved by : where variables are essentially lagrangian multipliers and parameters control the penalty terms . solving in alternatively , we have the following algorithm . [ alg : cm_p ] initialize note that the minimization problem in the steps of algorithm [ alg : cm_p ] can be solved explicitly , as follows : = \operatorname{eig}(p^k + d^{k-1}).\end{aligned}\ ] ] starting form any initial guess , the following theorem guarantees that the algorithm converges to one of the minimizers of the variational problem .[ thm : conv ] the sequence generated by algorithm [ alg : cm_p ] from any starting point converges to a minimum of the variational problem . we will prove a slightly more general version of the above ( theorem [ thm : three ] ) .the idea of the proof follows from the general framework of analyzing split bregman iteration , _i.e. 
_ alternating direction method of multipliers ( addm ) , see for example .the standard proof needs to be generalized to cover the current case of `` two level splitting '' and the non - strictly convexity of the functionals .we defer the detailed proof to section [ sec : proof ] .in this section , numerical experiments are presented to demonstrate the proposed model for density matrix computation using algorithm [ alg : cm_p ] .we illustrate our numerical results in three representative cases , free electron model , hamiltonian with energy band gap and a non - uniqueness example of the proposed optimization problem .all numerical experiments are implemented by in a pc with a 16 g ram and a 2.7 ghz cpu . in the first example , we consider the proposed model for the free electron case , in other words , we consider the potential free schrdinger operator defined on 1d domain ] with equally spaced points , and we take .figure [ fig : densitymatrix_lap](a ) illustrates the true density matrix obtained by the first eigenfunctions of .as the free laplacian does not have a spectral gap , the density matrix decays slowly in the off - diagonal direction .figure [ fig : densitymatrix_lap](b ) and ( c ) plot the density matrices obtained from the proposed model with parameter and .note that they are much localized than the original density matrix . as gets larger, the variational problem imposes a smaller penalty on the sparsity , and hence the solution for has a wider spread than that for . .( b ) , ( c ) : solutions of the density matrices with respectively .[ fig : densitymatrix_lap],title="fig : " ] + .( b ) , ( c ) : solutions of the density matrices with respectively .[ fig : densitymatrix_lap],title="fig : " ] + .( b ) , ( c ) : solutions of the density matrices with respectively .[ fig : densitymatrix_lap ] ] after we obtain the sparse representation of the density matrix , we can find localized wannier functions as its action on the delta functions , as plotted in figure [ fig : projection_lap ] upper and lower pictures for and respectively . using density matrices with ( upper ) and ( lower )respectively.,title="fig : " ] + using density matrices with ( upper ) and ( lower ) respectively.,title="fig : " ] to indicate the approximation behavior of the proposed model , we consider the energy function approximation of to with different values of .in addition , we define as a measurement for the space approximation of the density matrix to the lower eigen - space . figure [ fig : densityfunapprox_lap ] reports the energy approximation and the space approximation with different values of .both numerical results suggest that the proposed model will converge to the energy states of the schrdinger operator .we also remark that even though the exact density matrix is not sparse , a sparse approximation gives fairly good results in terms of energy and space approximations . .lower : space approximation as a function of .,title="fig : " ] + .lower : space approximation as a function of .,title="fig : " ] + we then consider a modified kronig penney ( kp ) model for a one - dimensional insulator .the original kp model describes the states of independent electrons in a one - dimensional crystal , where the potential function consists of a periodic array of rectangular potential wells .we replace the rectangular wells with inverted gaussians so that the potential is given by ,\ ] ] where gives the number of potential wells . 
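to make this setup concrete before the numerical experiments that follow, the sketch below builds such a hamiltonian on a periodic grid with a centered-difference discretization (as used in the experiments) and computes the reference density matrix. the grid size, number of wells, well depth and width are placeholder values, not the parameters used in the paper.

```python
import numpy as np

# sketch of the setup above: 1d hamiltonian H = -d^2/dx^2 + V(x) with a
# periodic array of inverted-gaussian wells, discretized by centered
# differences on a periodic grid. grid size, well depth and width are
# placeholder values, not the parameters used in the paper.
n_grid, n_wells, L = 512, 10, 10.0
depth, width = 40.0, 0.2
x = np.linspace(0.0, L, n_grid, endpoint=False)
centers = (np.arange(n_wells) + 0.5) * L / n_wells
V = -depth * sum(np.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers)

h = L / n_grid
lap = (np.diag(-2.0 * np.ones(n_grid))
       + np.diag(np.ones(n_grid - 1), 1)
       + np.diag(np.ones(n_grid - 1), -1))
lap[0, -1] = lap[-1, 0] = 1.0          # periodic boundary condition
H = -lap / h ** 2 + np.diag(V)

# reference density matrix: projector onto the N lowest eigenstates
N = 10
w, U = np.linalg.eigh(H)
P_exact = U[:, :N] @ U[:, :N].T
print(w[:25])                          # inspect the low-lying spectrum
```

this exact projector plays the role of the reference density matrix against which the sparse representations in the experiments are compared.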
in our numerical experiments ,we choose and for , and the domain is $ ] with periodic boundary condition .the potential is plotted in figure [ fig : v_kp](a ) . for this given potential , the hamiltonian operator exhibits two low - energy bands separated by finite gaps from the rest of the eigenvalue spectrum ( see figure [ fig : v_kp](b ) ) . herea centered difference is used to discretize the hamiltonian operator .+ + + ( a ) ( b ) we consider three choices of for this model : , and .they correspond to three interesting physical situations of the model , as explained below . for , the first band of the hamiltonian is occupied , and hence the system has a spectral gap between the occupied and unoccupied states . as a result , the associated density matrix is exponentially localized , as shown in figure [ fig : densityfunapprox_kp](a ) .the resulting sparse representation from the convex optimization is shown in figure [ fig : densityfunapprox_kp](b ) and ( c ) for and respectively .we see that the sparse representation agrees well with the exact density matrix , as the latter is very localized .the wannier functions obtained by projection of delta functions are shown in figure [ fig : projection_kp ] . as the system is an insulator, we see that the localized representation converges quickly to the exact answer when increases .this is further confirmed in figure [ fig : densityfunapprox_kp_energy ] where the energy corresponding to the approximated density matrix and space approximation measurement are plotted as functions of . .( b ) , ( c ) : solutions of the density matrices with respectively.,title="fig : " ] + .( b ) , ( c ) : solutions of the density matrices with respectively.,title="fig : " ] + .( b ) , ( c ) : solutions of the density matrices with respectively.,title="fig : " ] + using density matrices with ( upper ) and ( lower ) respectively.,title="fig : " ] + using density matrices with ( upper ) and ( lower ) respectively.,title="fig : " ] + .( b ) : space approximation as a function of .,title="fig : " ] + ( a ) + .( b ) : space approximation as a function of .,title="fig : " ] + ( b ) next we consider the case .the first band of eigenstates of is occupied and the second band of is `` half - filled '' .that is we have only electrons occupying the eigenstates of comparable eigenvalue of .hence , the system does not have a gap , which is indicated by the slow decay of the density matrix shown in figure [ fig : kp_n15](a ) .nevertheless , the algorithm with gives a sparse representation of the density matrix , which captures the feature of the density matrix near the diagonal , as shown in figure [ fig : kp_n15](b ) . to understand better the resulting sparse representation , we diagonal the matrix : eigenvalues , known as the occupation number in the physics literature , are sorted in the decreasing order .the first occupation numbers are shown in figure [ fig : kp_n15](c ) .we have , and we see that exhibits two groups .the first occupation numbers are equal to , corresponding to the fact that the lowest eigenstates of the hamiltonian operator is occupied . 
indeed ,if we compare the eigenvalues of the operator with the eigenvalues of , as in figure [ fig : kp_n15](d ) , we see that the first low - lying states are well represented in .this is further confirmed by the filtered density matrix given by the first eigenstates of as plotted in figure [ fig : kp_n15](e ) .it is clear that it is very close to the exact density matrix corresponding to the first eigenfunctions of , as plotted in figure [ fig : densityfunapprox_kp](a ) .the next group of occupation numbers in figure [ fig : kp_n15](c ) gets value close to .this indicates that those states are `` half - occupied '' , matches very well with the physical intuition .this is also confirmed by the state energy shown in figure [ fig : kp_n15](d ) .note that due to the fact these states are half filled , the perturbation in the eigenvalue by the localization is much stronger .the corresponding filtered density matrix is shown in figure [ fig : kp_n15](f ) .for this example , we compare with the results obtained using the variational principle as in shown in figure [ fig : cms_kp_n15 ] . as the variational principle is formulated with orbital functions , it does not allow fractional occupations , in contrast with the one in terms of the density matrix .hence , the occupation number is either or , which is equivalent to the idempotency condition , as shown in figure [ fig : cms_kp_n15](b ) . as a result , even though the states in the second band have very similar energy , the resulting are forced to choose five states over the ten , as can be seen from the ritz value plotted in figure [ fig : cms_kp_n15](c ) .the solution is quite degenerate in this case .physically , what happens is that the five electrons choose wells out of the ten to sit in ( on top of the state corresponding to the first band already in the well ) , as shown from the corresponding density matrix in figure [ fig : cms_kp_n15](a ) , or more clearly by the filtered density matrix in figure [ fig : cms_kp_n15](d ) for the five higher energy states .eigenfunctions of .( b ) : the sparse representation of the density matrix for .( c ) : the occupation number ( eigenvalues ) of .( d ) the first eigenvalues of compared with the eigenvalues of . ( e ) : the filtered density matrix corresponds to the first eigenstates of .( f ) the filtered density matrix corresponds to the next eigenstates of .[fig : kp_n15],title="fig : " ] + eigenfunctions of .( b ) : the sparse representation of the density matrix for .( c ) : the occupation number ( eigenvalues ) of .( d ) the first eigenvalues of compared with the eigenvalues of . (e ) : the filtered density matrix corresponds to the first eigenstates of .( f ) the filtered density matrix corresponds to the next eigenstates of .[fig : kp_n15],title="fig : " ] + + eigenfunctions of .( b ) : the sparse representation of the density matrix for .( c ) : the occupation number ( eigenvalues ) of .( d ) the first eigenvalues of compared with the eigenvalues of . ( e ) : the filtered density matrix corresponds to the first eigenstates of .( f ) the filtered density matrix corresponds to the next eigenstates of .[fig : kp_n15],title="fig : " ] + eigenfunctions of .( b ) : the sparse representation of the density matrix for .( c ) : the occupation number ( eigenvalues ) of .( d ) the first eigenvalues of compared with the eigenvalues of . 
( e ) : the filtered density matrix corresponds to the first eigenstates of .( f ) the filtered density matrix corresponds to the next eigenstates of .[fig : kp_n15],title="fig : " ] + + eigenfunctions of .( b ) : the sparse representation of the density matrix for .( c ) : the occupation number ( eigenvalues ) of .( d ) the first eigenvalues of compared with the eigenvalues of . (e ) : the filtered density matrix corresponds to the first eigenstates of .( f ) the filtered density matrix corresponds to the next eigenstates of .[fig : kp_n15],title="fig : " ] + eigenfunctions of .( b ) : the sparse representation of the density matrix for .( c ) : the occupation number ( eigenvalues ) of .( d ) the first eigenvalues of compared with the eigenvalues of . ( e ) : the filtered density matrix corresponds to the first eigenstates of .( f ) the filtered density matrix corresponds to the next eigenstates of .[fig : kp_n15],title="fig : " ] + for .( a ) : the density representation given by .( b ) : the occupation number ( eigenvalues ) of .( d ) the first eigenvalues of compared with the eigenvalues of .( d ) the filtered density matrix corresponds to the states in the second band . [fig : cms_kp_n15],title="fig : " ] + for . ( a ) : the density representation given by .( b ) : the occupation number ( eigenvalues ) of .( d ) the first eigenvalues of compared with the eigenvalues of .( d ) the filtered density matrix corresponds to the states in the second band .[ fig : cms_kp_n15],title="fig : " ] + + for .( a ) : the density representation given by .( b ) : the occupation number ( eigenvalues ) of .( d ) the first eigenvalues of compared with the eigenvalues of .( d ) the filtered density matrix corresponds to the states in the second band .[ fig : cms_kp_n15],title="fig : " ] + for .( a ) : the density representation given by .( b ) : the occupation number ( eigenvalues ) of .( d ) the first eigenvalues of compared with the eigenvalues of .( d ) the filtered density matrix corresponds to the states in the second band .[ fig : cms_kp_n15],title="fig : " ] + finally , the case corresponds to the physical situation that the first two bands are all occupied . note that as the band gap between the second band from the rest of the spectrum is smaller than the gap between the first two bands , the density matrix , while still exponentially localized , has a slower off diagonal decay rate .the exact density matrix corresponds to the first eigenfunctions of is shown in figure [ fig : kp_n20](a ) , and the localized representation with is given in figure [ fig : kp_n20](b ) .the occupation number is plotted in figure [ fig : kp_n20](c ) , indicates that the first states are fully occupied , while the rest of the states are empty .this is further confirmed by comparison of the eigenvalues given by and , shown in figure [ fig : kp_n20](d ) . in this case , we see that physically , each well contains two states . hence , if we look at the electron density , which is diagonal of the density matrix , we see a double peak in each well . using the projection of delta functions , we see that the sparse representation of the density matrix automatically locate the two localized orbitals centered at the two peaks , as shown in figure [ fig : kp_n20](e ) .eigenfunctions of .( b ) : the sparse representation of the density matrix for .( c ) : the occupation number ( eigenvalues ) of .( d ) the first eigenvalues of compared with the eigenvalues of . 
( e ) projection of delta function .,title="fig : " ] + eigenfunctions of . ( b ) : the sparse representation of the density matrix for . ( c ) :the occupation number ( eigenvalues ) of .( d ) the first eigenvalues of compared with the eigenvalues of . ( e )projection of delta function .,title="fig : " ] + + eigenfunctions of .( b ) : the sparse representation of the density matrix for .( c ) : the occupation number ( eigenvalues ) of .( d ) the first eigenvalues of compared with the eigenvalues of . ( e ) projection of delta function .,title="fig : " ] + eigenfunctions of . ( b ) : the sparse representation of the density matrix for .( c ) : the occupation number ( eigenvalues ) of .( d ) the first eigenvalues of compared with the eigenvalues of . ( e ) projection of delta function .,title="fig : " ] + + eigenfunctions of . ( b ) : the sparse representation of the density matrix for . ( c ) :the occupation number ( eigenvalues ) of .( d ) the first eigenvalues of compared with the eigenvalues of . ( e ) projection of delta function .,title="fig : " ] +let us revisit the example in section [ sec : formulation ] for which the minimizers to the variational problem is non - unique .theorem [ thm : conv ] guarantees that the algorithm will converge to some minimizer starting from any initial condition .it is easy to check that in this case is a fixed point of the algorithm . in figure[ fig : decaydist ] , we plot the sequence for a randomly chosen initial data .we see that the distance does not converge to as the algorithm converges to another minimizer of the variational problem . nonetheless , as will be shown in the proof of theorem [ thm : conv ] in section [ sec : proof ] ,the sequence is monotonically non - increasing . as a function of for algorithm [ alg : cm_p ] . ]for ease of notation , we will prove the convergence of the algorithm for the following slightly generalized variational problem . where , , and are proper convex functionals , but not necessarily strictly convex .in particular , we will get if we set given a saddle point satisfying , it is clear that the first inequality in implies .substitute in the second inequality of , we can immediately have is a minimizer . on the other hand ,suppose is a solution of . the first inequality in holds since .moreover , there exist such that which suggests , for any the summation of the above three inequalities yield the second inequality in .we remind the readers that the minimizers of the variational principle might not be unique . in the non - unique case , the above theorem states that any initial condition will converge to some minimizer , while different initial condition might give different minimizers .now , we calculate .it is clear that note that .thus , for any , we have in particular , let , we have on the other hand , set in , we get therefore , the sequences have limit points .let us denote as a limit point , that is , a subsequence converges we now prove that is a minimum of the variational problem , _i.e. _ on the other hand , taking , , and in , we get from , we have are all bounded sequences , and furthermore , taking the limit , we then get hence , the limit point is a minimizer of the variational principle .
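as a closing illustration, the following is a self-contained sketch in the spirit of the splitting scheme analyzed above. the specific update formulas, penalty parameters and the bisection used for the eigenvalue projection are a reconstruction under stated assumptions and are not copied verbatim from the paper's algorithm [alg:cm_p].

```python
import numpy as np

# sketch in the spirit of the splitting scheme analyzed above, for
#     min_P  tr(H P) + (1/mu) |P|_1   s.t.  tr P = N,  0 <= P <= I.
# the update formulas and parameters are a reconstruction (assumed), not the
# paper's exact algorithm.

def project_spectrum(S, N):
    """project a symmetric matrix onto {tr = N, eigenvalues in [0, 1]}."""
    w, U = np.linalg.eigh(S)
    lo, hi = -1.0 - w.max(), 1.0 - w.min()
    for _ in range(60):                    # bisection on a uniform shift c
        c = 0.5 * (lo + hi)
        if np.clip(w + c, 0.0, 1.0).sum() > N:
            hi = c
        else:
            lo = c
    s = np.clip(w + 0.5 * (lo + hi), 0.0, 1.0)
    return (U * s) @ U.T

def shrink(A, t):                          # entrywise soft thresholding
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def sparse_density_matrix(H, N, mu=10.0, lam=50.0, r=50.0, iters=500):
    n = H.shape[0]
    P = np.eye(n) * (N / n)
    Q, R = P.copy(), P.copy()
    B, D = np.zeros((n, n)), np.zeros((n, n))
    for _ in range(iters):
        # P-step: quadratic problem with a closed-form solution
        P = (lam * (Q - B) + r * (R - D) - H) / (lam + r)
        P = (P + P.T) / 2
        Q = shrink(P + B, 1.0 / (mu * lam))      # l1 proximal step
        R = project_spectrum(P + D, N)           # constraint projection
        B = B + P - Q                            # bregman / dual updates
        D = D + P - R
    return R

rng = np.random.default_rng(4)
A = rng.standard_normal((20, 20))
H = (A + A.T) / 2
P_sparse = sparse_density_matrix(H, N=2)
print(np.trace(H @ P_sparse), np.linalg.eigvalsh(H)[:2].sum())
```

returning the iterate R rather than P keeps the output inside the constraint set even if the iteration is stopped early.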
|
we propose a convex variational principle to find a sparse representation of the low - lying eigenspace of symmetric matrices . in the context of electronic structure calculation , this corresponds to a sparse density matrix minimization algorithm with regularization . the minimization problem can be efficiently solved by a split bregman iteration type algorithm . we further prove that , from any initial condition , the algorithm converges to a minimizer of the variational principle .
|
dynamic / opportunistic spectrum access ( dsa / osa ) aims at increasing radio spectrum utilization . in order to do so , the secondary ( unlicensed ) users ( sus ) of dsa networksare allowed to transmit on licensed channels , when they are not occupied by primary ( licensed ) users ( pus ) .understanding the pus channel occupancy distributions becomes important from a theoretical point of view , but most importantly it allows to improve seamless dsa operation ( * ? ? ?iv - b ) , ( * ? ? ?2 ) . for example, if sus have sufficient knowledge about the pus traffic distributions , they can minimize the channel switching latency , predict the pus behavior to minimize interference or find an optimal pu channel sensing order .therefore , the sus should accurately estimate the pus traffic distribution , i.e. , classify the pu traffic correctly from a set of possible distributions , e.g. , exponential , gamma , log - normal , and weibull distributions as tested in .looking at the recent dsa / osa applications , traffic classification can be used in licensed shared access ( lsa ) systems , where traffic classification would help in identifying the behavior of individual lsa licensees and adapting licensing rules accordingly .traffic classification is an important research area in many telecommunication domains , e.g. in ip networks . in parallel , analytical modeling of ip traffichas also been concerned , refer to a discussion in e.g. ( * ? ? ?iii - d ) . in the dsa area , the topic has started to receive attention as well .considering relevant works that aim at pu traffic classification , was the first to deal with traffic pattern classification in dsa networks .therein , the classification of the traffic pattern was done by using the autocorrelation function of the received pu signal .work of improved the classification algorithm of by filtering away the errors that were caused by noise and incorrect spectrum sensing . inspired by machine learning , the authors in proposed two behavior classifiers , namely a naive bayesian classifier and an averaged one - dependence estimation classifier to classify the channel selection strategy for sus .however , the authors of considered the pu traffic pattern to be either stochastic or deterministic , without assigning the pu traffic to a specific distribution .furthermore , the classifier of did not take the distributions of pu traffic but only the mean busy / idle time into consideration .we thus conclude , to the best of our knowledge , the performance of pu traffic classification is still relatively unexplored from the theoretical point of view .this motivated us to perform detailed theoretical studies of pu traffic classification .considering the classification of gamma - distributed pu busy / idle time collected through an error - free spectrum sensing process , the contribution of our work is fourfold : 1 .we analytically derive the performance for the pu traffic classifier based on maximum likelihood using gaussian approximation ; 2 .we re - evaluate a sequential algorithm based on a multi - hypothesis sequential probability ratio test , to deal with the classification problem for multiple pu traffic classes , when parameters of pu traffic classes are known in advance ; 3 . 
considering pu channel sampling , for a special casewhen probability of pu period change in - between two samples ( busy - to - idle - to - busy or idle - to - busy - to - idle ) is negligible , we evaluate ( i ) a minimum variance pu state length estimator , and ( ii ) propose a modified maximum likelihood classifier , quantifying its performance analytically and providing design guidelines based on traffic parameters ; 4 . finally , we propose ( i ) a pu traffic estimate - then - classify scheme which requires no complete knowledge of the pu traffic parameters , and ( ii ) an average likelihood function method which requires knowledge on the statistics of the pu traffic parameters when they fluctuate in time domain .in addition , we list the important limitations of our work : 1 .we assume that the set size of distributions considered for classification is finite and does not change over time ; 2 .the effect of spectrum sensing errors at the physical layer on the classification accuracy is not considered ; 3 .the calculations of classification accuracy obtained in this paper depend on the exact knowledge of a subset of traffic parameters and their stationarity .the rest of the paper is organized as follows .the system model is given in section [ sec : system_model ] .the proposed pu traffic classifiers with perfect knowledge of pu traffic parameters are presented in section [ sec : classification_perfect ] , and traffic classification using traffic period estimation schemes is presented in section [ sec : classification_blind_period ] . the proposed pu traffic classifiers with imperfect knowledge of pu traffic parameters are presented in section [ sec : classification_imperfect ] .numerical results are given in section [ sec : numerical_results ] .finally , section [ sec : conclusions ] concludes the paper .we consider a single channel randomly accessed by a pu . to ease the analysis we disregard ( i ) the effect of incidental su operation within a pu band , i.e. , the injection of su traffic into pu traffic which obfuscates the correct classification of the latter , and ( ii ) the effect of spectrum sensing errors . the assumption ( ii )is taken consciously , as the problem of traffic classification is strictly coupled with the spectrum sensing problem and requires a separate analytical study due to its complexity .for example , in ( * ? ? ?4.2 ) it has been concluded that `` different energy detection thresholds ( ) result in significantly different [ pu traffic ] distributions . ''recent work of provides a more formal discussion on the effect of sensing errors on pu traffic analysis .nevertheless , assumptions ( i ) and ( ii ) allow us to use the results obtained in this paper also for the non - dsa scenarios and provide a classification benchmark for interference - prone and sensing error - prone cases .further , we assume we can obtain traffic busy / idle periods ( denoted as on / off , respectively ) perfectly through time - domain fine - grained spectrum sensing , as in e.g. ( * ? ? ?this assumption , in practical terms , results in a sampling time much smaller than the shortest duration of pu traffic periods .the on / off periods are denoted as a random variable with its independent and identically distributed realizations .those are assumed to belong to one of possible gamma distributions .the gamma distribution is chosen for its flexibility to represent : ( i ) exponential distribution , due to its analytical popularity ( * ? ? ?v - b ) and existence in real networks , e.g. 
as measured in ( * ? ? ?iv - a ) for call arrival times in cdma - based system ; and ( ii ) positively skewed data , which is also confirmed through the traffic measurement , e.g. in ( * ? ? ?10 ) for call holding time in public safety systems .our objective is to minimize the required number of measurement periods in in order to classify to the correct distribution. we can formulate such a classification problem as a multi - hypothesis problem , i.e. , where is the hypothesized probability density function ( pdf ) of , is the gamma pdf of under hypothesis given the shape parameter and the rate parameter , where and is the gamma function , where again .we assume that each hypothesis has a prior probability , and we define , . without loss of generality , in this paperwe assume that the elements in denote either pu channel occupancy periods ( on times ) or idle periods only ( off times ) .we start with assuming a perfect knowledge of all pu traffic parameters , .firstly , we introduce a maximum likelihood classifier ( mlc ) that requires a constant number of pu traffic periods , which is an optimal classifier in terms of probability of correct classification when the pdfs are known ( * ? ? ?i ) and derive its classification performance for the considered model in section [ sec : system_model ] .such an analysis , to the best of our knowledge , has not been performed before . secondly , as a comparison to mlc ,we re - introduce the multi - hypothesis sequential probability ratio test classifier ( msprtc ) using which adopts a sequential sample test instead of using a fixed number of pu traffic periods for classification . for the considered gamma distribution the likelihood function given for can be written as then , the mlc final decision , , is to analyze the mlc classification performance for the system model considered in section [ sec : system_model ] , we start with calculating the log - likelihood function which can be represented as .\label{eq : gj}\end{aligned}\ ] ] then we can calculate the probability of correct classification under using ( [ eq : gj ] ) as embedding ( [ eq : gj ] ) into ( [ eq : cond1 ] ) we can simplify ( [ eq : cond1 ] ) as where and , .we also define the mean and variance for the variable as and , respectively , which are derived in appendix [ sec : pdf ] .we can now define and calculate its pdf as , where denotes the -fold pdf convolution .then , by calculating the cumulative distribution function ( cdf ) of we can obtain an exact analytical expression for ( [ eq : cond2 ] ) .however , due to mathematical intractability of such operations we use a simple approximation instead , which has a closed - form expression , to derive the probability of correct classification . therefore , let us transform ( [ eq : cond2 ] ) as where and . according to the central limit theorem , as is large enough , will approach a standard normal distribution , .hence we can approximate ( [ eq : cond3 ] ) as where is the tail probability function of the standard normal distribution .finally , the average probability of correct classification for all hypotheses is derived using ( [ eq : cond4 ] ) as align p_c=_j=1^m_j\{=_j|_j}. [ eq : pc ] to compare the performance with mlc , we introduce a new classification method based on msprtc of . unlike mlc which uses a constant number of pu traffic on ( or off ) periods ,msprtc sequentially classifies multiple hypotheses requiring only as many pu traffic periods as needed for correct classification .we adopt msprtc since the authors in ( * ? 
? ?iii ) show that it provides a good approximation to the optimal solution on the condition of a perfect a priori knowledge for all distributions , i.e. their parameters , in the sequential multi - hypothesis classification problem .msprtc decision is then , where the posteriori probability is given as ( * ? ? ?ii ) ^{-1}}. \label{eq : pos}\ ] ] we define as the first such that for at least one , where is the design threshold . recalling ( * ? ? ?vii ) , where , is the total probability of incorrect decision , is the constant defined in ( * ? ? ?vi ) and is the measure of probabilistic distance . in (vii ) , is the kullback - leibler ( kl ) divergence which for two gamma distributions is defined in ( * ? ? ?* eq . ( 6 ) ) and after simplifications where is the digamma function the authors of suggest to use kl for as a descriptor of probabilistic distance for two distributions .for the squared hellinger ( sh ) distance , defined as ( * ? ? ? * ch .14.5 , pp .211 ) ( note that the 0.5 constant is omitted for convenience as remarked in ( * ? ? ?61 ) ) , it can be shown to be the lower bound of kl divergence ( * ? ? ?* proposition 1 ) , i.e. , we thus propose to replace used in calculating the threshold for msprtc , , with where and the sh distance between two gamma distributions ( considered in the system model in section [ sec : system_model ] ) is derived in appendix [ sec : sh_2_gamma ] . the procedure to calculate explained in ( * ?vii ) is convolved ( bayes classification risk minimizer ) to obtain a desired classification . ] . therefore , in numerical evaluation in section [ sec : numerical_results ] we will replace with a single value for all the hypotheses . to find , before performing classification we sweep through to determine the desired classification probability .for example , we can set and obtain the first classification performance .if it does not satisfy the classification system requirement , we increase by a pre - defined step size until we reach our desired classification performance .so far , we have assumed the continuous observation of the pu channel state . in this sectionwe consider a more general traffic classification problem , where the elements of also need to be estimated .therefore we relax the assumption on the continuous observation of pu state and assume a pu channel observation at instants every seconds to find the elements in .first , we introduce the model for the pu period length estimation in section [ sec : noise_modeling ] .then , in section [ sec : estimation_sampling ] , we propose a minimum variance period length estimator to minimize estimation errors .subsequently , we propose a modified mlc considering estimation error and analytically derive the approximation of its classification performance in section [ sec : mlc_error ] .we then propose a modified msprtc considering estimation error in section [ sec : msprt_error ] . finally , in section[ sec : guideline ] we propose a design guideline for mlc with energy or time constraints on the spectrum sensing budget .we follow the system model shown in ( * ? ? ?* section ii , fig .1(a ) ) , where a pu traffic period , i.e. , on / off duration / , is estimated through sampling performed at regular intervals of seconds . 
without loss of generality , we will focus on estimating only , while can be estimated using the same technique .in addition , to ease the analysis , we assume that the probability of pu state change between two samplings is negligible .denote represents the channel being busy , while represents the channel being idle . assuming as previously that we ignore spectrum sensing errors we would like to estimate the length of based on the set of samples obtained at intervals . for the actual we denote four time instants , i.e. , , , , and : ( i ) is the starting point with , ( ii ) and ( iii ) are the transition points from to and to , respectively , and ( iv ) is the end point with . after sampling the traffic, we define the nearest sampling point to as in region and in region .similarly , we define the nearest sampling point to as in region and in region . in other words , are the actual discrete channel measurement points. then we can think of this pu channel sampling as a quantization process , i.e. , there are four sources of quantization noise which are , , , and .we can now model quantization error as a uniformly distributed random variable , which implies that , where denotes the uniform distribution and , are the minimum and maximum value for the random variable , respectively .we first propose a minimum variance pu period length estimator that reduces the sampling noise effect .then we derive the average number of pu traffic samples needed for length estimation using the proposed estimator .first we consider , i.e. , the interval between two nearest points , where .then , we consider , i.e. , the interval between two nearest points , where . we propose a weighted average of and , i.e. , as our estimator , where ] for , ( ii ) ] otherwise . by applying the derivative of the lambert w function , i.e. , , and lemma [ lamme_inverse ] to ( [ eq : ypdf ] ), we can derive the pdf for as where ( defined for presentation compactness ) , and if and otherwise .we can finally derive the mean and variance using ( [ eq : yfpdf ] ) as respectively , through numerical integration .the sh distance for two probability distributions is defined as ( * ? ? ?14.5 , pp .211 ) again , note that the 0.5 constant is omitted for convenience as remarked in ( * ? ? ?3.3 , pp . 61 ) ) . before calculating the closed - form expression of sh distance for two gamma distributionswe introduce the following integral where is the incomplete gamma function .integral ( [ eq : lemma_integral ] ) can be derived through calculating the incomplete gamma function by the change of variable technique . from the definition of ( [ eq : shd ] ) the sh distance for two distributions and be derived as where . applying ( [ eq : lemma_integral ] ) with and to ( [ eq : shds ] ) the sh distance in ( [ eq : shds ] ) can be simplified to note that the average sh distance with alf , which is used to represent the average distance among hypotheses in table [ table2 ] , can be calculated by using ( [ eq : mshd ] ) to replace in ( [ eq : shd ] ) .also note that the average sh distance with alf has no closed - form expression and it can only be computed through numerical methods .the expected average number of pu traffic samplings for one period under hypothesis can be calculated as where is the floor function . to calculate ( [ eq : enhj ] ) we first need to derive the following conditional probability , i.e. 
, where is the cdf function for gamma distribution with parameters and .applying ( [ eq : cdfg ] ) to ( [ eq : enhj ] ) we have \notag\\ & = \lim_{l\rightarrow\infty}(l+1)g((l+1)t_s|\mathbf{\theta}_j)-\sum\limits_{k=1}^{l+1}g(k t_s|\mathbf{\theta}_j)\\ & = \lim_{l\rightarrow\infty}-l\frac{\gamma(\alpha_j,(l+1)\beta_j t_s)}{\gamma(\alpha_j)}+\sum\limits_{k=1}^{l}\frac{\gamma(\alpha_j , k\beta_j t_s)}{\gamma(\alpha_j ) } \label{eq : entemp}\\ & = \sum\limits_{k=1}^{\infty}\frac{\gamma(\alpha_j , k\beta_j t_s)}{\gamma(\alpha_j ) } , \label{eq : enhjs}\end{aligned}\ ] ] by applying , and the left hand part in ( [ eq : entemp ] ) can be shown to be zero by lhopital s rule .then we introduce the following lemma as a step to prove ( [ eq : enhjs ] ) converges . we can easily prove it by applying the change of variable technique . since is a decreasing function with respect to by definition and , from the integral test, we know ( [ eq : enhjs ] ) converges .therefore , using ( [ eq : enhjs ] ) we can derive the average expected number of pu traffic samples by taking the average for all possible hypotheses which results in ( [ eq : n ] ) .by directly convolving the pdf of gamma distributed random variable , i.e. , where with the pdf of triangular distributed random variable , i.e. , , we have the pdf for as we now introduce the following lemma . [ lemma_conv ] expression ( [ eq : lem4a ] ) and ( [ eq : lem4b ] ) can be calculated directly from the definition of incomplete gamma function and through the integration by parts technique .finally , applying lemma [ lemma_conv ] to ( [ eq : pdfys1 ] ) we obtain ( [ eq : mpdf ] ) .we ignore the index for notation convenience and denote .we would like to find the variance of , where , , and given in theorem [ theorem : y ] .since can be negative , may be a complex number .therefore we define , where and , if , and and , otherwise .note the pdf of can be represented as , where and are the pdfs with respect to the real part and imaginary part of . likewise , the variance for , i.e. , , is the sum of the variance of its real part and imaginary part .first we calculate the variance of the imaginary part . 
noting that the first and the second moment for , which are and , respectively , we can derive .the variance for the real part can be obtained through .using lemma [ lamme_inverse ] and observing that there may be at most three solutions to , we can derive the pdf for the real part of as \right\}\\ & \qquad+i(\alpha_{j , k})i(-\beta_{j , k})\left\{\left|\frac{w\left(0,e^{b^{(j , k)}}\right)}{\beta_{j , k}\left(1+w\left(0,e^{b^{(j , k)}}\right)\right)}\right| f_j\left(-\frac{\alpha_{j , k}}{\beta_{j , k}}w\left(0,e^{b^{(j , k)}}\right)|\mathbf{\theta}_j\right)\right.\nonumber\\ & \qquad\left.+ i(\eta-\tilde y^{(j , k)})\left[\left|\frac{w\left(0,e^{c^{(j , k)}}\right)}{\beta_{j , k}\left(1+w\left(0,e^{c^{(j , k)}}\right)\right)}\right| f_j\left(-\frac{\alpha_{j , k}}{\beta_{j , k}}w\left(0,e^{c^{(j , k)}}\right)|\mathbf{\theta}_j\right)\right.\right.\nonumber\\ & \qquad\left.\left .+ \left|\frac{w\left(1,e^{c^{(j , k)}}\right)}{\beta_{j , k}\left(1+w\left(1,e^{c^{(j , k)}}\right)\right)}\right| f_j\left(-\frac{\alpha_{j , k}}{\beta_{j , k}}w\left(1,e^{c^{(j , k)}}\right)|\mathbf{\theta}_j\right ) \right]\right\}\\ & \qquad+i(-\alpha_{j , k})i(\beta_{j , k})\left\{\left|\frac{w\left(0,e^{b^{(j , k)}}\right)}{\beta_{j , k}\left(1+w\left(0,e^{b^{(j , k)}}\right)\right)}\right| f_j\left(-\frac{\alpha_{j , k}}{\beta_{j , k}}w\left(0,e^{b^{(j , k)}}\right)|\mathbf{\theta}_j\right)\right.\nonumber\\ & \qquad\left . + i(\tilde y^{(j , k)}-\eta)\left [ \left|\frac{w\left(0,e^{c^{(j , k)}}\right)}{\beta_{j , k}\left(1+w\left(0,e^{c^{(j , k)}}\right)\right)}\right| f_j\left(-\frac{\alpha_{j , k}}{\beta_{j , k}}w\left(0,e^{c^{(j , k)}}\right)|\mathbf{\theta}_j\right)\right.\right.\nonumber \\ & \qquad\left.\left .+ \left|\frac{w\left(1,e^{c^{(j , k)}}\right)}{\beta_{j , k}\left(1+w\left(1,e^{c^{(j , k)}}\right)\right)}\right| f_j\left(-\frac{\alpha_{j , k}}{\beta_{j , k}}w\left(1,e^{c^{(j , k)}}\right)|\mathbf{\theta}_j\right ) \right]\right\}\\ & \qquad+i(-\alpha_{j , k})i(-\beta_{j , k})\left\{\left|\frac{w\left(0,e^{c^{(j , k)}}\right)}{\beta_{j , k}\left(1+w\left(0,e^{c^{(j , k)}}\right)\right)}\right| f_j\left(-\frac{\alpha_{j , k}}{\beta_{j , k}}w\left(0,e^{c^{(j , k)}}\right)|\mathbf{\theta}_j\right)\right.\nonumber\\ & \qquad\left.+ i(\tilde y^{(j , k)}-\eta)\left [ \left|\frac{w\left(0,e^{b^{(j , k)}}\right)}{\beta_{j , k}\left(1+w\left(0,e^{b^{(j , k)}}\right)\right)}\right| f_j\left(-\frac{\alpha_{j , k}}{\beta_{j , k}}w\left(0,e^{b^{(j , k)}}\right)|\mathbf{\theta}_j\right)\right.\right.\nonumber\\ & \qquad\left.\left .+ \left|\frac{w\left(-1,e^{b^{(j , k)}}\right)}{\beta_{j , k}\left(1+w\left(-1,e^{b^{(j , k)}}\right)\right)}\right| f_j\left(-\frac{\alpha_{j , k}}{\beta_{j , k}}w\left(-1,e^{b^{(j , k)}}\right)|\mathbf{\theta}_j\right ) \right]\right\ } \label{eq : ynfpdf}\end{aligned}\ ] ] where is defined as in appendix [ sec : pdf ] replacing with , is defined in appendix [ sec : pdf ] , , .therefore we can obtain by ( [ eq : ynfpdf ] ) .the authors would like to thank prof .venugopalv . veeravalli and prof.alexanderg .tartakovsky for insightful discussions related to the msprt classifier .v. kone , l. yang , x. yang , b. y. zhao , and h. zheng , `` the effectiveness of opportunistic spectrum access : a measurement study , '' _ ieee / acm trans . networking _ , vol .20 , no . 6 , pp .20052016 , dec .2012 .liu , j. a. tran , p. paweczak , and d. cabric , `` traffic - aware channel sensing order in dynamic spectrum access networks , '' _ ieee j. select .areas commun ._ , vol . 31 , no . 
11 , pp23122323 , nov .m. lopez - benitez and f. casadevall , `` time - dimension models of spectrum usage for the analysis , design and simulation of cognitive radio networks , '' _ ieee trans ._ , vol .62 , no . 5 , pp .20912104 , jun .m. palola , m. matinmikko , j. prokkola , m. mustonen , m. heikkil , t. kippola , s. yrjl , v. hartikainen , l. tudose , a. kivinen , j. paavola , and k. heiska , `` live field trial of licensed shared access ( lsa ) concept using lte network in 2.3ghz band , '' in _ proc ieee dyspan _ ,mclean , va , usa , apr . 14 , 2014 .t. t. t. nguyen and g. armitage , `` a survey of techniques for internet traffic classification using machine learning , '' _ ieee communications surveys & tutorials _ , vol .10 , no . 4 , pp . 5676 , fourth quarter 2008 .m. wellens and p. mhnen , `` lessons learned from an extensive spectrum occupancy measurement campaign and a stochastic duty cycle model , '' _ mobile networks and applications _ , vol .15 , no . 3 , pp .461474 , jun . 2010 .d. s. sharp , n. cackov , n. laskovi , q. shao , and l. trajkovi , `` analysis of public safety traffic on trunked land mobile radio systems , '' _ ieee j. select .areas commun ._ , vol . 22 , no . 7 , pp . 11971205 , sep. 2004 .w. d. penny , `` kullback - leibler divergences of normal , gamma , dirichlet and wishart densities , '' university of college london , wellcome department of cognitive neurology , 2001 .[ online ] .available : www.fil.ion.ucl.ac.uk/~wpenny/publications/densities.ps r. castro , `` lectures 12 and 13complexity penalized maximum likelihood estimation , '' oct. 1417 , 2013 , applied statistics course , tu eindhoven .[ online ] .available : http://www.win.tue.nl/~rmcastro/appstat2013/files/mple.pdf r. m. corless and d. j. jeffrey , `` on the wright function , '' university of western ontario , department of applied mathematics , tr-00 - 12 , 2000 .[ online ] .available : http://www.orcca.on.ca/techreports/techreports/2000/tr-00-12.pdf
|
this paper focuses on analytical studies of the primary user ( pu ) traffic classification problem . observing that the gamma distribution can represent positively skewed data and exponential distribution ( popular in communication networks performance analysis literature ) it is considered here as the pu traffic descriptor . we investigate two pu traffic classifiers utilizing perfectly measured pu activity ( busy ) and inactivity ( idle ) periods : ( i ) maximum likelihood classifier ( mlc ) and ( ii ) multi - hypothesis sequential probability ratio test classifier ( msprtc ) . then , relaxing the assumption on perfect period measurement , we consider a pu traffic observation through channel sampling . for a special case of negligible probability of pu state change in between two samplings , we propose a minimum variance pu busy / idle period length estimator . later , relaxing the assumption of the complete knowledge of the parameters of the pu period length distribution , we propose two pu traffic classification schemes : ( i ) estimate - then - classify ( etc ) , and ( ii ) average likelihood function ( alf ) classifiers considering time domain fluctuation of the pu traffic parameters . numerical results show that both mlc and msprtc are sensitive to the periods measurement errors when the distance among distribution hypotheses is small , and to the distribution parameter estimation errors when the distance among hypotheses is large . for pu traffic parameters with a partial prior knowledge of the distribution , the etc outperforms alf when the distance among hypotheses is small , while the opposite holds when the distance is large . dynamic spectrum access , traffic classification , traffic sampling , traffic estimation , performance analysis .
|
one of the central features of quantum mechanics is that it does not allow one to simultaneously obtain complete information about an individual quantum system without errors . the _ holevo bound _ on the accessible information and the _ no - cloning theorem _ are the prominent manifestations of the restrictions on acquiring information from quantum systems , and these restrictions culminate in quantum cryptography . however , there are no obstacles to estimating all aspects of quantum states in a series of distinct measurements on identically prepared particles by _ quantum state tomography _ . the pioneering experimental demonstration of this method was accomplished by smithey _ et al . _ , who determined a wigner function for the vacuum and pulsed squeezed - vacuum states of a spatial - temporal mode using _ homodyne tomography _ . schiller _ et al . _ sbpmm1996 applied this method to the estimation of a density matrix ( in the number state representation ) for a squeezed vacuum state of two spectral components . in this experiment , spectacular even - odd oscillations in the photon - number distribution were observed . recently , lvovsky _ et al . _ and bertet _ et al . _ have respectively succeeded in reconstructing a wigner function for a single - photon fock state of a travelling spatial - temporal mode and that of an intra - cavity mode . both estimated wigner functions showed a dip reaching classically - impossible negative values around the origin of the phase space . for the polarization degree of freedom of the electromagnetic field , white _ et al . _ used quantum state tomography , for the first time , to characterize non - maximally entangled states produced from a spontaneous - down - conversion photon source . _ utilized this method for the verification of the decoherence - free characteristic of a particular entangled state , and for the demonstration of _ hidden _ non - locality of entangled mixed states . in spite of these splendid experimental achievements with quantum state tomography , statistical errors in estimating quantum states have received only minor attention so far . statistical analyses of errors in quantum - state estimation should not be undervalued . since any outcome of a measurement is represented as a random variable in quantum mechanics , statistical analyses of such errors may reveal profound rules for acquiring information from quantum systems . moreover , such analyses may also lead to the development of quantum information technology , which requires us to faithfully prepare several kinds of quantum states , and to the improvement of the sensitivity of various kinds of precision measurements , which is limited by quantum noise . in this article , we report our theoretical and experimental analyses of errors in quantum state estimation , putting a special emphasis on their asymptotic behavior . in particular , we focus on the estimation of the state of two qubits ( two 2-level quantum systems ) . the two - qubit system in 4-dimensional hilbert space is the simplest one where the peculiar characteristic of quantum mechanics , _ entanglement _ , is activated .
since entanglement plays the critical role in the mysterious phenomena in the quantum world , it is interesting to ask whether entanglement affects accuracy of the estimation .various kinds of two qubits ( including entangled states ) are practically realizable as polarization states of bi - photon produced via spatially - nondegenerate , type - i spontaneous parametric down - conversion ( spdc ) wjek1999,kbaw2000,kbsg2001,kwwae1999,jkmw2001,wjmk2001,nutmn2002 . the procedure to estimate the state of two qubits has been well established by james , kwiat , munro and white .thus , in our experiments , we followed the above methods for producing the ensembles of the bi - photon polarization states , for measuring them , and for estimating their density matrices .the main purpose of this article is to quantitatively show the limit on accuracy of quantum - state estimation .we demonstrate that the accuracy depends on state to be estimated and also measurement strategy . in order to do that, we introduce a new strategy of quantum - state estimation utilizing akaike s information criterion ( aic ) for eliminating numerical problem in the estimation procedures especially in estimating ( nearly ) pure quantum states .while number of parameters used for characterizing density matrices of quantum states is fixed in the conventional estimation strategies , the number is varied in the new strategy for eliminating redundant parameters .consequently , we can quantitatively compare experimentally - evaluated errors in the estimation with their asymptotic lower bound derived from the cramr - rao inequality without bothering about the delicate numerical problem accompanying the redundant parameters .it is shown that the errors of the experimental results nearly achieve their lower bounds for all quantum states we examined .moreover , owing to the reduction of the parameters , the aic based new estimation strategy makes the lower bounds slightly decreased .our results reveal that when measurements are performed locally ( i.e. , separately ) on each qubit , existence of entanglement may degrade the accuracy of estimation .thus , while the measurements in our experiments are local ones , we numerically examine the performance of an alternative measurement strategy , which includes inseparable measurements on two qubits .the remainder of the article is organized as follows . in sec .sec : experiment , we show our experimental analyses of errors in estimating density matrices as a function of the ensemble size , i.e. , as varying data acquisition time . in sec .[ sec : bures ] , we present a prescription for calculating the asymptotic lower bounds on the errors in terms of fidelity and show that in the asymptotic region , the errors should be decreasing as inversely proportional to the ensemble size. then we compare the lower bounds with the experimental results . in sec .[ sec : aic ] , a new strategy of quantum state estimation utilizing akaike s information criterion is introduced , and the accuracy of the state estimated by this new strategy is presented . in sec .[ sec : collective ] , the alternative measurement strategy for two qubits , which employs inseparable measurements , is numerically explored .section [ sec : conclusion ] summarizes this article . 
in the appendix, we briefly review tomographic measurements and maximum likelihood estimation for estimating two qubits , and derive the cramr - rao lower bound on the errors in the estimation .for experimentally producing various quantum states of two qubits , we use the method to create the various polarization states of bi - photon via spatially - nondegenerate , type - i spontaneous parametric down - conversion ( spdc ) .the method was invented by kwiat __ and applied to the various experiments wjek1999,jkmw2001,kbaw2000,kbsg2001,wjmk2001,nutmn2002 .a rough sketch of the experimental setup is shown in fig .[ fig : setup ] . two thin ( 0.13 mm ) beta - barium borate ( - , bbo ) crystals , which are cut for satisfying the type - i phase matching , are adjacent so that their optical axes lie in the planes perpendicular to each other . inside the crystals , the third harmonic beam ( wave length : 266 nm , average power : 190mw ) of the mode - locked ti : sapphire laser ( pulse duration : 80fs , repetition rate : 82mhz )-we will call it pump beam- is slightly converted into the frequency - degenerate , but spatially - nondegenerate ( opening angle : ) bi - photon ( wave length : 532 nm ) via spdc .this configuration of the setup makes it possible to produce various polarization states of bi - photon ( including entangled states ) by adjusting the pump beam polarization with a half - wave plate ( hwp ) , by modifying the relative time delay between the horizontal and vertical components of the pump beam with a _ pre - compensator _ ( which consists of quartz plates and a variable wave plate ( wp ) ) , and by inserting _ decoherers _ ( two de - polarizers ) into one of the paths of the down - converted photons kbaw2000,kbsg2001,wjmk2001 , as shown in fig .[ fig : setup ] .we produced three particular quantum states , the _ very noisy mixed state ( vnms ) _ , the _ almost pure and separable state ( apss ) _ , and the _ highly entangled state ( he s ) _ for inspecting influence of the various characteristics ( e.g. , entropy and entanglement ) of the states on the accuracy of the estimation .the produced polarization states of bi - photon were estimated by tomographic measurements and maximum likelihood estimation ( mle ) .these procedures are reviewed in appendices [ app : tomography ] and [ app : mle ] . in tomographic measurements ,the coincidental detection events ( within 6ns ) on both single - photon detectors ( hamamatsu h7421 - 40 ) were counted by using the time interval analyzer ( yokogawa ta-520 ) during the data acquisition time at each polarizer s setting ( i.e. , projector ) ( which was determined and varied by the half - wave plate ( hwp ) , the quarter - wave plate ( qwp ) , and the polarizer ( pol ) on each path of the produced photons ) . for investigating ensemble size - dependence of the accuracy , we varied the data acquisition time of each measurement as 0.2s , 0.5s , 1.0s , 2.0s , and 5.0s .the typical single counting rate was about 30000c / s with the dark counting rate of about 300c / s .the typical coincidence counting rate was roughly 500c / s with the accidental coincidence counts being below 1% of the genuine coincidence counts . for eliminating the ambient photons, we used the interference filters ( fwhm : 8 nm ) ( see ref . , for more detailed information ) . in order to assess the accuracy of the estimation, we repeated the measurements and estimation procedures 9 times for each state and each ensemble size . 
here , as noted in appendix [ app : mle ] , the density matrix of the two - qubit(2 two - level quantum state ) can be written as } , \nonumber\ ] ] which satisfies the positivity condition and the trace condition for density matrices ; see appendix [ app : mle ] . as a result of the 9 identical trials , we had 9 slightly different density matrices .the differences of these states might stem not only from the statistical errors but also from the experimental systematic ones . for reducing the systematic errors , we restricted our data acquisition time , , at each polarizer setting up to , so as to keep the experimental condition unchanged ( especially , to keep the pump power constant during whole data acquisition time measurements ) .then we evaluated the accuracy of the estimation in terms of the average _ fidelity _ between the _ true _ state and each estimated state , i.e. , where the fidelity is equal to ] is a matrix given by the following manner .first , we define a hermitian operator called _ symmetric logarithmic derivative ( sld ) _ , by the sld can be obtained by solving the equation above and considered as a quantum analogue of the _ score _ ( classically , the score is defined by ] can be obtained by the _ cramr - rao inequality _ & \equiv & v^{ij}(\theta_{0 } ) \nonumber \\ & \ge & j_{ij}^{\,-1}(\theta_{0 } ) , \label{eq : crelement}\end{aligned}\ ] ] where ] is the log - likelihood function for the quantum state which parametrized by parameters .when there are several hypothetical models ( with different number of parameters ) for estimating a certain state , the model which attains the smallest aic can be regarded as the most appropriate model because of the following justification . in appendix[ app : mle ] , for explaining the mle , we used the fact that the approximation ( [ eq : mloglikelihood ] ) is valid in the asymptotic region .what akaike found is that there is a difference between the mean of the maximum log - likelihood function ( right - hand side of ( [ eq : mloglikelihood ] ) ) and the maximum log - likelihood function derived by the obtained data ( left - hand side of ( eq : mloglikelihood ) ) , and the difference can be approximately given by -% \mathop{\mathbf{e}}\nolimits_{\theta _ { 0 } } [ \ln [ p^{(k)}(n|\hat{\theta}% ) ] |_{t=1 } ] \nonumber \\ & \approx & \frac{k}{t}. \label{eq : difference}\end{aligned}\ ] ] taking this correction into account , the kullback - leibler distance between the _ true _ probability mass function and its parametric model , i.e. 
, eq .( [ eq : relativee ] ) , can be minimized by reducing the value , |_{t=1 } ] & = & -\frac{1}{t}\ln [ p^{(k)}(n|\hat{\theta})]+\frac{k}{t } \nonumber \\ & = & \frac{1}{2\,t}aic^{(k)}(\hat{\theta } ) , \label{eq : aic2}\end{aligned}\ ] ] with respect to the estimators .therefore , if we choose the model which minimizes the aic ( [ eq : aic ] ) among several alternative parametric models , it is ensured that this model is the closest to the _ true _ one from the viewpoint of the kullback - leibler distance .the resultant estimate is called _ minimum aic estimate ( maice ) _ akaike1974 .when a maximum likelihood estimates of a certain model is almost identical to that of another model , the maice becomes the one defined with the smaller number of the parameters .the definition of the maice gives the mathematical formulation of the principle of parsimony in model selection .the importance of this new strategy might be more noticeable in estimating quantum states in the infinite - dimensional hilbert space , e.g. , in estimating wigner function or density matrix in the number state representation .in this situation , somehow vague fourier - frequency cutoff or truncation of an infinite - dimensional density matrix to finite - dimensional density matrix is introduced in executing the _ inverse radon transformation _ or _ quantum - state sampling _ , respectively .we note that gill and gu , recently , made a first attempt addressing this issue .specifically , for estimating the two qubits , we use } , \label{eq : dm4}\ ] ] where , \nonumber \\t_{\theta}^{(15 ) } & = & \left [ \begin{array}{cccc } \theta^1 & 0 & 0 & 0 \\ \theta^2+i\theta^3 & \theta^8 & 0 & 0 \\ \theta^4+i\theta^5 & \theta^9+i\theta^{10 } & \theta^{13 } & 0 \\ \theta^6+i\theta^7 & \theta^{11}+i\theta^{12 } & \theta^{14}+i\theta^{15 } & 0% \end{array } \right ] , \nonumber \\t_{\theta}^{(12 ) } & = & \left [ \begin{array}{cccc } \theta^1 & 0 & 0 & 0 \\ \theta^2+i\theta^3 & \theta^8 & 0 & 0 \\ \theta^4+i\theta^5 & \theta^9+i\theta^{10 } & 0 & 0 \\ \theta^6+i\theta^7 & \theta^{11}+i\theta^{12 } & 0 & 0% \end{array } \right ] , \nonumber \\t_{\theta}^{(7 ) } & = & \left [ \begin{array}{cccc } \theta^1 & 0 & 0 & 0 \\ \theta^2+i\theta^3 & 0 & 0 & 0 \\ \theta^4+i\theta^5 & 0 & 0 & 0 \\ \theta^6+i\theta^7 & 0 & 0 & 0% \end{array } \right],\end{aligned}\ ] ] thus , , , , and are representing the rank-4 , rank-3 , rank-2 , and rank-1 density matrices , respectively .then the aics are respectively given by + 2 \times 16 , \nonumber \\aic^{(15)}(\hat{\theta } ) & = & -2\,\ln[p^{(15)}(n|\hat{\theta } ) ] + 2 \times 15 , \nonumber \\ aic^{(12)}(\hat{\theta } ) & = & -2\,\ln[p^{(12)}(n|\hat{\theta } ) ] + 2 \times 12 , \nonumber \\aic^{(7)}(\hat{\theta } ) & = & -2\,\ln[p^{(7)}(n|\hat{\theta } ) ] + 2 \times 7 , \label{eq : aic3}\end{aligned}\ ] ] where is the same form of eq .( [ eq : pdf ] ) but replacing with .\label{eq : meanpro_k}\ ] ] among these models , we can choose the one which minimizes the aic . as an example, for one of the typical experimental data of coincidence counts for the vnms ( data acquisition time : 5s ) , =\{615 , 553 , 613 , 605 , 550 , 576 , 596 , 609 , 575 , 622 , 577 , 601 , 574 , 569 , 591 , 569 } , we have the following aics ; , , and .therefore we choose the rank-4 model . 
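a minimal sketch of this model - inquiry step is given below ; the fitting of each rank - k parametrization by the mle is not shown , and the maximized log - likelihood values used in the example are assumed numbers rather than the ones obtained from the measured coincidence counts .

```python
def aic(max_loglik, k):
    # akaike's information criterion: -2 ln L_max + 2 k
    return -2.0 * max_loglik + 2.0 * k

def maice(fits):
    """fits maps a model name to (maximized log-likelihood, number of parameters);
    the minimum-aic model is returned together with all aic scores."""
    scores = {name: aic(ll, k) for name, (ll, k) in fits.items()}
    return min(scores, key=scores.get), scores

# usage with assumed maximized log-likelihoods for the four parametrizations
fits = {"rank-4": (-52.3, 16), "rank-3": (-52.4, 15),
        "rank-2": (-53.0, 12), "rank-1": (-260.0, 7)}
best, scores = maice(fits)
print(best, scores)
```

when two parametrizations reach nearly the same maximum log - likelihood , the penalty term makes the one with fewer parameters win , which is precisely the principle of parsimony invoked above .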
on the other hand , for one of the typical data for the apss ( data acquisition time : 5s ) , =\{42 , 45 , 25 , 2504 , 60 , 56 , 31 , 33 , 1309 , 1431 , 1148 , 1125 , 514 , 487 , 576 , 599 }, we have ; , , and .thus the rank-2 model is chosen .it is possible to think about the other hypothetical models , e.g. , separable model ( which has 7 parameters ) , or separable and also rank-1 model ( which has 5 parameters ) , but for simplicity , our analyses were confined to the above 4 models . .the filled plots represent experimental results ( the vertical error bars correspond to one standard deviation ) and the blank plots represent numerical simulations .the inset shows their asymptotic lower bounds . ]figure [ fig : aic ] shows the average bures distances between the _ true _ states and the estimated states obtained by employing the new estimation strategy .their asymptotic lower bounds are also exhibited in the inset of fig .[ fig : aic ] .these asymptotic values were calculated according to eq .( [ eq : limittm ] ) .note that since the _ true _ state here was also determined by the maice instead of mle , _ true _state for the apss and that for the he s resulted in rank-2 density matrices . as a result of the reduction of the parameters , the maices substantially reduces the discrepancies between the asymptotic lower bounds ( the inset ) and the experimental results ( filled plots : experiments , blank plots : monte carlo simulations ) comparing to the previous results ( fig .[ fig : mle ] ) . moreover , in the region where the data acquisition time greater than 2s , i.e. , , the lower bounds of eq .( eq : limittm2 ) are almost achieved .this is the case even for estimating degenerate states such as the apss and the he s .note also that the intercepts of the asymptotic values on the axis of ordinates shown in the inset of fig . [ fig : aic ] are slightly lowered comparing to the previous ones ( the inset of fig .[ fig : mle ] ) . herewe remark that while the numerical simulations continue up to , the maximum data acquisition time of each experiment is 5s , which corresponds to .this is because the systematic errors mentioned above might be getting significant around .the decreasing rate of the bures distance for the apss deviates slightly from the ideal value , -1 .this might be due to the residue of the redundant parameters even after making model - inquiry among the above 4 models , because for apss , the another model ( e.g. , separable model or separable and rank-1 model ) might be more suitable .thus the further reduction of the parameters might be possible .the discrepancies in small ensemble region ( left - hand side of fig .( [ fig : aic ] ) ) may be explained by higher order effect of the errors as is mentioned in the previous section .what kind of factor does dominantly affect the accuracy of the estimation ?this question has not been perfectly answered so far . in the generalsetting of the quantum parameter estimation problem helstrom1976,holevo1982,yl1973 , any kinds of measurements represented as the _ positive operator - valued measures ( povms ) _ are allowed to be utilized .then not only inseparable projective measurements on the two qubits but even _ collective _ measurements on whole ensembles are allowed . 
in thissetting , it has been known that the _ non - commutativity _ of quantum mechanics has significant influence on the attainable lower bounds on errors in estimating quantum states with multiple - parameter .although significant progress has been made , finding the asymptotically optimal measurement strategy and obtaining the achievable lower bounds on the errors in estimating quantum states with multiple - parameter are still important open problems .on the other hand , in our setting , the measurement strategy we employed is not such a optimal _ collective_-measurement strategy , but _ local _ tomographic measurements represented by ( [ eq : projectors ] ) .nonetheless , our results reveal another aspects of the quantum state estimation , that is , the nature of local measurements .figure [ fig : aic ] shows that the errors in estimating the entangled state , i.e. , the he s ( which has small entropy but large entanglement ; see fig .[ fig : devmatrix ] ) is the largest among the three states in the asymptotic region .thus , the existence of entanglement seems to degrade the accuracy of the estimation if the measurements are performed locally .in this section , we discuss the alternative measurement strategy for two qubits . since it may be extremely difficult to experimentally realize optimal _ collective _ measurements , the following discussion is restricted to projective measurements on just one sample in the ensemble , i.e. , on two qubits .note that there is another favorable measurement strategy , that is , _self - learning measurements _ fkf2000,hrbntw2002 . however , to the best of our knowledge ,the lower bound on errors in estimating with this type of adaptive - measurement strategy is still missing .it is reasonable to expect that if we employ measurements on the _ inseparable _ projectors on two qubits , the errors in estimating the entangled states might be reduced . for inspectingwhether this expectation is true or not , the following specific projective measurements : are employed as an alternative to the _ local _ tomographic measurements ( [ eq : projectors ] ) . here , as mentioned in appendix app : tomography , , , and , as and being the horizontal polarization state and vertical one , respectivelythis set of 16 projective measurements includes 10 inseparable projectors , and satisfies the condition of tomographic measurements , which is presented in appendix [ app : tomography ] .these projective measurements can be realized by slightly modifying the interferometric bell - state analyzer .figure [ fig : bsa ] shows the proposed experimental setup for realizing the projective measurements ( [ eq : projectors2 ] ) . for 10 inseparable - projective measurements in ( [ eq : projectors2 ] ) , the down - converted photonsare coupled into single mode optical fibers ( smfs ) and mixed at 50/50 coupler. then if the optical path length of two photons are appropriately adjusted and the effects of birefringence in the smfs are compensated by fiber polarization controller , only two photon whose state of polarization belongs to anti - symmetric subspace , ( singlet subspace ) contribute the coincidence counts of photon detector ( pd ) a and b. this coincidence measurement is equivalent to one of the inseparable projective measurements , .the other 9 inseparable measurements can be straightforwardly realized by the local unitary transformation of the state of polarization using half - wave plates ( hwp ) and quarter - wave plates ( qwp ) before coupling photons into the smfs . 
for remained 6 local - projective measurements in ( [ eq : projectors2 ] ) ,the state of one - photon polarization is projected on the particular state , e.g. , , by inserting a mirror into the path b ( path a ) and using hwp , qwp , polarizer , and pd b2 ( pd a2 ( not shown ) ) as shown in the dotted box in fig .[ fig : bsa ] .another photon is propagated to either pd a or b. the coincidence measurements of pd b2 ( pd a2 ) and either pd a or pd b are served as the local - projective measurements , e.g. , ( ) . for utilizing this strategy as an alternative to the local one ( eq : projectors ) , it is vital to minimize the systematic errors due to imperfect intensity interference in the inseparable measurements . in order to achieve required high visibility of interference , the distinguishability in any degree of freedom of two photons other than the polarization should be reduced . by using smfs for enhancing spacial - mode overlap of two photons , we expect that such systematic errors due to the spacial degree of freedom might be reduced to some extent . in the recent experiments , the visibilities of this interference exceeding 98% and even reaching 99.4% were reported . ) .see text for details . ]the comparison between the asymptotic lower bounds eq .( [ eq : loglimittm ] ) for the above _ inseparable _ measurements ( [ eq : projectors2 ] ) and the conventional _ local _ ones ( [ eq : projectors ] ) is presented in fig .[ fig : ubasis ] .as expected , the improvement of the accuracy in estimating the he s can be found in fig .[ fig : ubasis ] ( c ) , although the accuracy is decreased in estimating the apss as can be seen in fig .fig : ubasis ( b ) . as indicated in fig .[ fig : ubasis ] ( a ) , even for the vnms , which has no entanglement at all ( see fig .[ fig : devmatrix ] ) , the inseparable measurements ( [ eq : projectors2 ] ) are working better than the separable ones ( [ eq : projectors ] ) .this rather surprising result might be viewed as the _ non - locality without entanglement _ in quantum state estimation .we conjecture that for the mixed states like the vnms , no _local _ tomographic measurements can attain the same accuracy achieved by the inseparable measurements presented in ( [ eq : projectors2 ] ) .this phenomenon may stem from the fact that the mixed states can be represented as the classical mixture of the entangled states as well as that of the product states .we presented quantitative analysis concerning the accuracy of the quantum state estimation , and demonstrated that they depend both on the states to be estimated and on the measurement strategies .for this purpose , the spdc process was employed for experimentally preparing various ensembles of the bi - photon polarization states and the aic based new estimation strategy , i.e. , the maice was introduced for eliminating the numerical problems in the estimation procedures .our results showed errors of the estimated density matrices decreased as inversely proportional to the ensemble size for all of the three states we examined ( the vnms , the apss , and the he s ) in the asymptotic region . 
besides , it was revealed that the existence of entanglement degrade the accuracy of the estimation when the measurements were performed locally on two qubits .the performance of the alternative measurement strategy , which included the projective measurements on inseparable bases , was numerically examined , and we found that the inseparable measurements improved the accuracy in estimating the vnms as well as the he s .we are grateful to tohya hiroshima , satoshi ishizaka , bao - sen shi , akihisa tomita , masahito hayashi , masahide sasaki , and prof .osamu hirota for valuable discussions and encouragements , to shunsuke kono and kenji kazui for their technical support , and to kwangseuk kyhm for reviewing the manuscript .we also would like to thank jaroslav rehcek for useful correspondence . in apps .[ app : tomography ] and [ app : mle ] , we give a brief review of tomographic measurements and maximum likelihood estimation ( mle ) , respectively , in accordance with ref .readers who are familiar with two issues can skip these two apps .we mention the cramr - rao inequality and the fisher information matrix for providing the optimality of the mle in apps .[ app : cr bound ] . with the standard pauli matrices supplemented with the identity matrix , an arbitrary density matrix of two qubits can be represented in hilbert - schmidt space as a parametric statistical model : where and . here are assumed to be real . from the trace condition of a density matrix, is equal to one .note that the above parametric model in hilbert - schmidt space does not ensure positivity condition of a density matrix , the problem of positivity is revisited in [ app : mle ] .when we try to estimate quantum states as the parametric statistical model of eq .( [ eq : dm ] ) , we should perform some kinds of measurements .suppose that the measurements are represented by projectors . imagining coincidence counting measurements on bi - photon polarization states as a concrete example , the projectors correspond to a certain polarization states .after carrying out the measurements for data acquisition time , the results are given by , \label{eq : coincidence}\ ] ] where is the coincidence counts without polarizers for the data acquisition time .then , is the coincidence counting rate .although our attention is focused on the 15 parameters , is also _ a priori _ unknown .therefore , is appended to the list of the parameters for estimating the states .the parameter is thus called the _ nuisance parameter_. using eq .( [ eq : dm ] ) , eq .( [ eq : coincidence ] ) becomes eq .( [ eq : coincidence2 ] ) provides a linear relationship between the 16 parameters and , and the measurement results .subsequently , we can derive a necessary and sufficient condition of the measurement for determining these parameters , that is , the matrix has an inverse ( thus the measurement should consist of at least 16 projectors ) .measurements that satisfy the above condition are called _ tomographic measurements _ . a specific instance of tomographic measurements are : where , , , and , as and being the horizontal polarization state and vertical one , respectively . here means . from the measurement results , we can solve linear equation , eq .( [ eq : coincidence2 ] ) , with respect to the parameters , and . as a result , the quantum state of the form eq .( [ eq : dm ] ) can be uniquely reconstructed .the solutions are explicitly expressed as ^{-1}n_{\nu}. 
\label{eq : linear tomography}\ ] ] this estimation strategy is called the _ linear tomography _ jkmw2001 . the flaw of the linear tomography in sec .[ app : tomography ] is two - fold .one is that there are no considerations about its optimality , another is that the parametric model for linear tomography , eq .( [ eq : dm ] ) , does not ensure the positivity condition of the density matrix as mentioned before .the solution for these flaws is to use _ maximum likelihood estimation ( mle ) _ .density matrix , which satisfy the positive condition and also the trace condition , can be written as } , \label{eq : dm2}\ ] ] where is assumed to be a normal matrix .then , following ref . , we adopt the complex lower triangular matrix parametrized by 16 real parameters , , ( cholesky decomposition ) as the normal matrix .it is explicitly written as .\label{eq : cholesky}\ ] ] we should keep in mind that while the number of the parameters for the complex lower triangular matrix ( [ eq : cholesky ] ) is 16 , that of the density matrix ( [ eq : dm2 ] ) is effectively 15 , because of the denominator ] coincides with the nuisance parameter of eq .( [ eq : nuisance ] ) . thus the probability mass function ( poisson density function ) of the measurement results for given values of the parameters is written as although the parametric model , eq .( [ eq : dm2 ] ) , guarantees the positivity and trace condition , the simple linear relationship between the results of measurements and the parameters like eq .( eq : coincidence2 ) has disappeared .nonetheless , the mle can be applied for inferring the parameters from the observed results .we can regard eq .( [ eq : pdf ] ) as a function on the 16-dimensional parameter space where each point corresponds to a certain quantum state .it is called _likelihood function_. then , it is reasonable to consider that the point ( state ) which maximizes the likelihood function ( [ eq : pdf ] ) is likely to be the nearest to the _ true _ point ( state ) , .the strategy to choose the values which maximizes eq .( [ eq : pdf ] ) as the estimates is called maximum likelihood estimation ( mle ) .the mle is elucidated based on the _ kullback - leibler distance ( relative entropy ) _it is often convenient to consider the natural logarithm of the likelihood function , which is called _ log - likelihood function _ : =\sum_{\nu = 1}^{16}\ln [ p(n_{\nu}|\theta ) ] .\label{eq : loglikelihood}\ ] ] here it is apparent that this change does not influence the location of the maximum . as the data acquisition time is increased infinitely , the log - likelihood function divided by tends , with probability 1 , to the the mean log - likelihood function for unit time , , i.e. , & \approx & \!\!\sum_{n_{1}=0}^{\infty}\sum_{n_{2}=0}^{\infty}\cdots \!\!\sum_{n_{16}=0}^{\infty}\!\!p_{0}(n)\ln[p(n|\theta ) ] |_{t=1 } \nonumber \\ & \ & \equiv \mathop{\mathbf{e}}\nolimits_{\theta_{0 } } [ \ln[p(n|\theta ) ] % is the _ true _ probability mass function of .the difference between the _ true _ probability mass function and the parametric model can be measured by the kullback - leibler distance nctext2000,cttext1991,antext2000 , -\ln [ p(n|\theta ) ] ] .\label{eq : relativee}\ ] ] this takes a positive value , unless in all ( in this case ) .then it becomes clear that what we try to do by the mle ( i.e. 
, to increase the log - likelihood function , eq .( [ eq : mloglikelihood ] ) , with respect to ) is to minimize the kullback - leibler distance between the _ true _ probability mass function and its parametric model .the mle is supposed to be the optimal estimation strategy in the following sense .the errors of the estimates can be represented by the covariance matrix ] , is well known as the _ fisher information matrix_. by the _ schwarz inequality _ for expectation, we have )^{2 } \nonumber \\ & { \le } & [ \sum_{i=1}^{16}\sum_{j=1}^{16}z_{i}z_{j}j_{ij}(\theta_{0})][\sum_{i=1}^{16}% \sum_{j=1}^{16}y_{i}y_{j}v^{ij}(\theta_{0 } ) ] .\label{eq : cs}\end{aligned}\ ] ] where , we introduce two sets of 16 auxiliary real variables and . from eq .( [ eq : mean_score ] ) , we have \nonumber \\ & = & \mathop{\mathbf{e}}\nolimits_{\theta_{0}}[s_{i}(n|\theta_{0})\,\hat{% \theta ^{j}}(n ) ] \nonumber \\ & = & \!\!\sum_{n_{1}=0}^{\infty}\sum_{n_{2}=0}^{\infty}\cdots\sum_{n_{16}=0}^{% \infty}\!\ ! p(n|\theta_{0})\frac{\frac{\partial}{\partial\theta^{i } } p(n|\theta)|_{\theta=\theta_{0}}}{p(n|\theta_{0 } ) } \hat{\theta^{j}}(n ) \nonumber \\ & = & \frac{\partial}{\partial \theta^{i } } \sum_{n_{1}=0}^{\infty}% \sum_{n_{2}=0}^{\infty}\cdots \sum_{n_{16}=0}^{\infty}\!\!p(n|\theta ) \hat{% \theta^{j}}(n)|_{\theta=\theta_{0 } } \nonumber \\ & = & \delta^{j}_{i } , \label{eq : cs2}\end{aligned}\ ] ] where is the kronecker s delta .consequently , the left - hand side of the inequality ( [ eq : cs ] ) becomes by substituting eq .( [ eq : lefthand ] ) and putting in the schwarz inequality ( [ eq : cs ] ) , we obtain that is , which is the cramr - rao inequality for unbiased estimates . note that most estimators used in practice are not unbiased .however , the cramr - rao bound on the variance of an unbiased estimator is _ asymptotically _ also a bound on the mean square error , .\label{eq : mse}\ ] ] of any well - behaved estimator , as shown by gill and massar in ref . gm2000 .thus , the cramr - rao inequality provides us with an _ asymptotic _ lower bound on the covariance matrix for wide variety of estimates in terms of the fisher information matrix . here, we mention the significant fact that the maximum likelihood estimates are asymptotically efficient , in other words , by the mle , the covariance matrix asymptotically achieves the cramr - rao lower bound antext2000,braunstein1992 . in this sense , the mle is the optimal strategy .m. xiao , l. -a .wu , and h. j. kimble , phys .lett . * 59 * , 278 ( 1987 ) ; y. -q .li , d. guzun , and m. xiao , phys .82 * , 5225 ( 1999 ) ; m. a. armen , j. k. au , j. k. stockton , a. c. doherty , and h. mabuchi , phys .lett . * 89 * , 133602 ( 2002 ) p. grangier , r. e. slusher , b. yurke , and a. laporta , phys .lett . * 59 * , 2153 ( 1987 ) ; a. kuzmich and l. mandel , quantum semiclass .* 10 * , 493 ( 1998 ) ; g. santarelli , ph .laurent , p. lemonde , a. clairon , a. g. mann , s. chang , a. n. luiten , and c. salomon , phys .* 82 * , 4619 ( 1999 ) . s. j. freedman and j. f. clauser , phys .* 28 * , 938 ( 1972 ) ; a. aspect , j. dalibard , and g. roger , phys . rev . lett . *49 * , 1804 ( 1982 ) ; g. weihs , t. jennewein , c. simon , h. weinfurter , and a. zeilinger , phys . rev .lett . * 81 * , 5039 ( 1998 ) .t. jennewein , c. simon , g. weihs , h. weinfurter , and a. zeilinger , phys .lett . * 84 * , 4729 ( 2000 ) ; d. s. naik , c. g. peterson , a. g. white , a. j. berglund , and p. g. kwiat , phys .lett . * 84 * , 4733 ( 2000 ) ; w. tittel , j. brendel , h. 
zbinden , and n. gisin , phys .lett . * 84 * , 4737 ( 2000 ) . a necessary and sufficient condition of tomographic measurements for the linear tomography is not straightforwardly applied to that for the mle ; nonetheless , we use this condition , because of its simplicity and the fact that it is at least valid as a sufficient condition for inferring the parameters by the mle .
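as a rough illustration of the completeness condition quoted above , the following sketch builds a set of sixteen two - qubit polarization projectors from products of the horizontal , vertical , diagonal and circular single - photon states and checks that the associated 16 x 16 matrix ( the analogue of the matrix inverted in eq . ( [ eq : linear tomography ] ) ) is nonsingular . the exact table of projectors in the original text did not survive extraction , so the particular set below ( a james - kwiat - munro - white style choice ) and the definitions of the diagonal and circular states are assumptions made only for illustration .

```python
# a small sketch, assuming the projector set built from h, v, d = (h + v)/sqrt(2)
# and l = (h + i v)/sqrt(2); the paper's own table of 16 projectors was lost in
# extraction, so this set is illustrative only.  tomographic completeness is checked
# by verifying that the 16 x 16 matrix b_{nu,i} = <psi_nu| gamma_i |psi_nu>
# (gamma_i = pauli products) is nonsingular, the condition referred to above.
import itertools
import numpy as np

h = np.array([1, 0], dtype=complex)
v = np.array([0, 1], dtype=complex)
d = (h + v) / np.sqrt(2)
l = (h + 1j * v) / np.sqrt(2)

single = [h, v, d, l]
projectors = [np.kron(a, b) for a, b in itertools.product(single, single)]  # 16 states

i2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
gammas = [np.kron(p, q) for p, q in itertools.product([i2, sx, sy, sz], repeat=2)]

b = np.array([[np.vdot(psi, g @ psi) for g in gammas] for psi in projectors])
print("condition number of b :", np.linalg.cond(b))
print("tomographically complete :", np.linalg.matrix_rank(b) == 16)
```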
|
we report our theoretical and experimental investigations into errors in quantum state estimation , putting a special emphasis on their asymptotic behavior . tomographic measurements and maximum likelihood estimation are used for estimating several kinds of identically prepared quantum states ( bi - photon polarization states ) produced via spontaneous parametric down - conversion . excess errors in the estimation procedures are eliminated by introducing a new estimation strategy utilizing akaike's information criterion . we make a quantitative comparison between the errors of the experimentally estimated states and their asymptotic lower bounds , which are derived from the cramér - rao inequality . our results reveal the influence of entanglement on the errors in the estimation . an alternative measurement strategy employing inseparable measurements is also discussed , and its performance is numerically explored .
|
the concept of high - gain adaptive feedback arose from a desire to stabilize certain classes of linear continuous systems without the need to explicitly identify the unknown system parameters .this type of adaptive controller does not identify system parameters at all , but rather adapts the feedback gain itself in order to regulate the system .a number of papers examine the details of various kinds of high - gain adaptive controllers , among others .more recently , several papers have discussed one particularly practical angle on the high - gain adaptive controller , namely how to cope with input / output sampling . in particular, owens showed that it is not generally possible to stabilize a linear system with adaptive high - gain feedback under uniform sampling .thus , owens , et .develop a mechanism to adapt the sampling rate as well as the gain , a notion subsequently improved upon by ilchmann and townley , and logemann . in this paper, we employ results from the burgeoning new field of mathematics called _ dynamic equations on time scales _ to accomplish three principal objectives .first , we use time scales to unify the continuous and discrete versions of the high - gain controller , which have previously been treated separately . next we give an upper bound on the system graininess to guarantee stabilizability for a much wider class of time scales than previously known , including mixed continuous / discrete time scales .third , the paper represents the first application of several very recent advances in stability theory and lyapunov theory for systems on time scales , and two new lemmas are presented in that vein .we also give a simulation of a high - gain controller on a mixed time scale .we first state two assumptions that are required in the subsequent text .( a1 ) : : the system model and feedback law are given by the linear , time - invariant , minimum phase system .system parameters , , and are unknown .the feedback gain is piecewise continuous , and nondecreasing as . by _minimum phase _, we mean that the polynomial with is hurwitz ( zeros in open left hand plane ) .( a2 ) : : furthermore , .e . is positive definite .( in it is pointed out that a nonsingular input / output transformation always exists such that and give . ) under these conditions it has been known for some time ( e.g. ) that there are a wide class of gain adaptation laws , , that can asymptotically stabilize system ( [ system_continuous ] ) in the sense that subsequently , various authors assumed that the output is obtained via sample - and - hold , i.e. with and .thus it becomes necessary also to adapt the sample period so the closed - loop control objectives are though several variations on these results exist , these remain the basic control results for continuous and discrete high - gain adaptive controllers .the continuous and discrete cases have previously been treated quite differently , but we now construct a common framework for both using time scale theory .the system of ( a1 ) can be replaced by where is any time scale unbounded above with . with a series expansion similar to , we see that expc is the matrix power series function is the time scale graininess . implementing control law then gives note that and may all be time - varying , but we will henceforth drop the explicit reference to for these variables . 
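to make the sampled high - gain feedback loop concrete , the following sketch simulates a scalar plant under the control law described above , with the gain driven up by the squared output and the step size ( graininess ) shrunk as the gain grows . the plant parameters , the specific gain law and the graininess bound used here are illustrative assumptions only , since the corresponding formulas in the text above were lost in extraction ; the sketch is not the system simulated in figure [ fig_adapt ] .

```python
# minimal sketch of sampled high-gain adaptive feedback, assuming a scalar plant
# dx/dt = a*x + b*u, y = c*x with c*b > 0, the gain law k^Delta = y^2, and a
# graininess bound mu <= 1/(1 + k); these choices are illustrative stand-ins for
# the paper's formulas, which did not survive extraction.
import numpy as np

a, b, c = 1.0, 1.0, 1.0          # unstable open-loop plant with c*b > 0
x, k, t = 2.0, 0.1, 0.0          # initial state, initial gain, initial time
t_end, log = 20.0, []

while t < t_end:
    y = c * x                    # sample the output at the current time-scale point
    u = -k * y                   # proportional feedback, held constant over the step
    mu = 1.0 / (1.0 + k)         # step size (graininess) shrinks as the gain grows
    # exact zero-order-hold update of the plant over one step of length mu
    x = np.exp(a * mu) * x + (np.exp(a * mu) - 1.0) / a * b * u
    k = k + mu * y ** 2          # delta-derivative gain law k^Delta = y^2
    t += mu
    log.append((t, x, k, mu))

t_f, x_f, k_f, mu_f = log[-1]
print("final |y| = %.3e, final gain = %.2f, final step = %.3f" % (abs(c * x_f), k_f, mu_f))
```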
for future reference , we also note that if , then and are also bounded ( c.f .appendix , lemma 7.4 ) .the design objectives are to find graininess and feedback gain as functions of the output , with and nondecreasing , such that it is important to keep in mind the generality of the expressions above .a great deal of mathematical machinery supports the existence of delta derivatives on arbitrary times scales , as well as the existence and characteristics of solutions to ( [ system_general ] ) .see , for example , e.g. .we begin this section with a definition and theorem from the work of ptzsche , siegmund , and wirth : the _ set of exponential stability _ for the time - varying scalar equation , with and , is given by where with and arbitrary .solutions of the scalar equation are exponentially stable on an arbitrary if and only if .we note here that ptzche , siegmund , and wirth did not explicitly consider scenarios where is time - varying , but their stability analysis remains unchanged for .the set contains nonregressive eigenvalues , and a loose interpretation of suggests that it is necessary for a regressive eigenvalue to reside in the area of the complex plane where most " of the time .the contour is termed the _hilger circle_. since the solution of is , theorem 4.2 states that , if then some exists such that where is a generalized time scale exponential .the hilger circle will be important in the upcoming lyapunov analysis , as will the following lemmas .let be a function which is known to satisfy the inequality , with and .if for all , then .defining gives rise to the initial value problem where .first , suppose is negatively regressive for , i.e. with . then ( [ y_t ] ) yields .on the other hand , suppose is nonregressive for some . if over , then invoke the preceding argument .if over , then solve ( [ y_t ] ) to get since for , we see that .however , for , ( [ z_regressive ] ) becomes thus , for all for both negatively regressive and nonregressive , a contradiction of the lemma s presupposition .this leaves only . at this pointwe pause briefly to discuss lyapunov theory on time scales .dacunha produced two pivotal works on solutions of the _ generalized time scale lyapunov equation _, , and are known and . though it will not be necessary to solve ( [ generalized_lyap ] ) in this work to the time scale lyapunov equation with positive definite exists if and only if the eigenvalues of are in the hilger circle for all .furthermore , is unique . as with the well known result from continuous system theory ( c.f . ) , the solution is constructive , with where denotes the transition matrix for the linear system , .the correct interpretation of this integral is crucial : for each the time scale over which the integration is performed is , which has constant graininess for each fixed . ], we will see that the form of ( [ generalized_lyap ] ) leads to an upper bound on the graininess that is generally applicable to mimo systems , an advancement beyond previous works which gave an explicit bound only for siso systems . before the next lemma, we define the next lemma follows directly . given assumptions and and ,there exists a nonzero graininess and a time such that , for all and , the matrix satisfies a time scale lyapunov equation with , for small , and from .we construct as with sufficiently small so that on .this holds if multiplying ( [ cb_lyap2 ] ) by gives ( we now drop the explicit time - dependence for readability . 
) set so that ] time scale , which is continuous for an interval , then has a gap for an interval , then repeats .figure [ fig_adapt ] shows the regulation of a system implementing an adaptive gain controller in a blocking situation : .\]]the gain begins at and the sampling period at .the bounding function for the graininess is ( so that ) .in summary , the paper illustrates a new unified continuous / discrete approach to the high - gain adaptive controller . using recent developments in the new field of time scale theory, the unified results reveal that this type of feedback control works well on a much wider variety of time scales than explored in previous literature , including those that switch between continuous ( or nearly continuous ) and discrete domains or those without monotonically decreasing graininess .furthermore , several results relating to lyapunov analysis on time scales appear here for the first time , including lemma 4.3 ( and its use in the proof of theorem 5.1 ) and 5.2 . a simulation of an adaptive controller on a mixed continuous / discrete time scale is also given .it is our hope that time scale theory may find wider application in the broad fields of signals and systems as it seems that many of the tools needed in those fields are beginning to appear in their generalized forms .we thank our colleague , robert j. marks ii , for his very helpful suggestions throughout this project .we comment on the properties of the expc " function referenced in the main body of the paper .1 . 2 . when exists .3 . for real , scalar arguments , , where sinc denotes the sine cardinal function .this is the motivation for the expc notation .4 . .5 . .parts 1 - 3 follow immediately from the definition . to verify 4 ,note part 5 follows from a similar argument .note that , by property 5 , the decomposition gives as , with uniform convergence .gravagne , j.m .davis , j.j .dacunha , r.j .marks ii , bandwidth reduction for controller area networks using adaptive sampling , proc .ieee int .robotics and automation , new orleans , la , april 2004 , pp 5250 - 5255 c. ptzsche , s. siegmund , f. wirth , a spectral characterization of exponential stability for linear time - invariant systems on time scales , discrete and continuous dynamical systems 9 , 2003 , pp 1223 - 1241 j.c .willems , c.i .byrnes , global adaptive stabilization in the absence of information on the sign of the high frequency gain , lecture notes in control and information sciences 62 , 1984 , springer , berlin , pp 49 - 57
|
it has been known for some time that proportional output feedback will stabilize mimo , minimum - phase , linear time - invariant systems if the feedback gain is sufficiently large . high - gain adaptive controllers achieve stability by automatically driving up the feedback gain monotonically . more recently , it was demonstrated that sample - and - hold implementations of the high - gain adaptive controller also require adaptation of the sampling rate . in this paper , we use recent advances in the mathematical field of dynamic equations on time scales to unify and generalize the discrete and continuous versions of the high - gain adaptive controller . we prove the stability of high - gain adaptive controllers on a wide class of time scales . _ keywords _ : time scales , hybrid system , adaptive control
|
we aim to study the oscillations of a thin flexible plate interacting with an inviscid potential flow in which it is immersed . in the literature , many models have been suggested to accommodate various configurations and physical parameters . in this treatmentwe are concerned with analyzing the effect ( from an infinite dimensional point of view ) of the so called _ kutta - joukowsky _( k - j ) flow conditions in a flow - plate model of great recent interest . the k - j condition is stated in as taking a zero pressure jump off the wing and at the trailing edge " ; in line with the analyses in , we take this to correspond to taking the _ acceleration potential _ of the flow to be zero outside the plate , in the plane of the plate . in this analysiswe take _ clamped plate boundary conditions _ in order to focus on the abstract problems associated to the pde analysis of the k - j condition .in fact , preliminary investigations indicate that free plate boundary conditions may better accommodate the k - j flow conditions this is in line with certain engineering applications ( e.g. flag type models ) .however there are many technical challenges associated to pde models of flow - plate interactions when the plate boundary conditions are not of homogeneous type .more specifically , we address a flow - structure pde model which describes the interactive dynamics between a plate and a surrounding potential flow ( see , e.g. , and the references therein ) .the novel feature of our analysis is the implementation of the k - j flow condition in the model considered in . in the aforementioned analyses , a more straightforward ( neumann type )flow boundary condition is taken in the plane of the plate , in line with a standard _panel _ configuration . to extend the analysis to more general ( physical ) configurations , well - posedness of the modelmust be established with the k - j flow condition of recent interest ( see section [ physicals ] for more discussion ) .our goals in the treatment are therefore ( 1 ) to make precise the k - j boundary condition in the three dimensional model found in and provide a well - posedness result , and , ( 2 ) to relate two existing mathematical analyses of flow - plate models found in and . for simplicity in exposition ,we first consider a linear plate in our arguments .nonlinearity is then included , as it required for accuracy in modeling ; however , it is nonessential to demonstrate the principal mechanisms at play with respect to the k - j condition .the considerations address the nonlinear aspects of the model in great detail .we address the nonlinear nature of the plate , and provide a discussion of the critical properties of the nonlinear model in section [ physicals ] .lastly , we note that these results were first reported in , without a complete proof . for the remainder of the text we write for or , as dictated by context .norms are taken to be for the domain dictated by context .inner products in are written , while inner products in are written .also , will denote the sobolev space of order , defined on a domain , and denotes the closure of in the norm which we denote by or .we make use of the standard notation for the trace of functions defined on , i.e. for , =\phi \big|_{z=0} ] , which is a priori undefined ( ) .the key insight into the present analysis occurs at the level of the energy relation . 
for flow models which call for the k - j boundary condition, we may again utilize the state variable ( rather than ) ; doing so , we arrive at the same energy relation in the case of supersonic flows ( as discussed in the preceding paragraph ) .this indicates that the abstract approach ( i.e. the decomposition of the dynamics ) taken in the supersonic case with standard boundary conditions will in fact accommodate the k - j boundary conditions , if a suitable trace theory can be developed for the flow .we pause here to mention another approach to the study of a similar flow - plate model ; the author of considers a linear _ wing _ immersed in a subsonic flow ; the wing is taken to have a high aspect ratio thereby allowing for the suppression of the span variable , and reducing the analysis to individual chords normal to the span . by reducing the problem to a one dimensional analysis , many technical hangups are avoided , and fourier - laplace analysis is greatly simplified .ultimately , the problem of well - posedness and regularity of solutions can be realized in the context of the classical _ possio integral problem _ , involving the inverse hilbert transform and analysis of mikhlin multipliers . in our approach, we attempt to characterize our solution by similar means and point out how the two dimensional analysis greatly complicates matters and gives rise to singular integrals in higher dimensions .we also mention the confluence of our approach and the papers mentioned above in remark [ bal1 ] .the main result of this paper provides existence , uniqueness and continuous dependence on the data of finite energy solution .this result is obtained under a technical trace regularity condition imposed on aeroelastic potential , and stated in condition [ le : ftr0 ] . in the casewhen the flow domain is two dimensional , the validity of this condition is proved herein ( see also ) .we do not discuss this condition in detail here to avoid clouding the exposition .additionally , as discussed above , we focus on the linear theory in the arguments below .however , our analysis ( as in ) applies to the case of nonlinear plates .we have provided a description of the pertinent nonlinearities and critical properties in section [ physicals ] .the final result reads as follows : with reference to the model ( [ flowplate ] ) , with : 1 . assuming is _ locally lipschitz _ , 2 . assuming the trace regularity condition [ le : ftr0 ] holds for the aeroelastic potential , then there exists a unique finite energy solution that is local in time .this is to say , there exists such that for all initial data .this solution depend continuously on the initial data .+ if in addition , we take to be the von karman nonlinearity , the berger nonlinearity , or kirchoff type nonlinearity ( section 6.1 ( 1 ) , ( 2 ) , ( 3 ) , resp . ) , then the solution above is global in time . 
in other words ,the nonlinear dynamical system generates a continuous semigroup on the space .moreover , when we restrict to the lower dimensional case ( and consider nonlinearities which are analogous to those listed in section 6.1 in two dimensions ) we have : [ c:1 ] in the case when the dimension of is one , condition [ le : ftr0 ] is satisfied .hence , in that case , any semiflow defined by ( [ flowplate ] ) with nonlinear function subject to the hypotheses of proposition [ abstractnonlin ] generates a continuous semigroup .the generation of semigroups for an arbitrary three dimensional flow is subjected to the validity of the trace condition [ le : ftr0].this , in turn , depends on invertibility properties of finite riesz - type transforms in two dimensional domains . while it is believed that this property should be generically true , at the present stage this appears to be an open question in the analysis of singular integrals and depends critically on the geometry of in two dimensions .we briefly outline our approach : 1 .as motivated by the supersonic analysis in , we decompose the linear dynamics into a dissipative piece ( unboxed below ) and a perturbation piece ( boxed below ) : &\text { in } { { \omega}}\times ( 0,t),\\ u={\partial_{\nu}}u = 0 & \text { on } { { \partial}}{{\omega}}\times ( 0,t).\\ \end{cases}\ ] ] we then proceed to show that ( corresponding to the unboxed dynamics above ) is -dissipative on the state space . dissipativity is natural and built in within the structure of the problem , while maximality requires analysis of the zaremba problem ( mixed flow boundary conditions ) .2 . to handle the perturbation " of the dynamics , ( boxed ) we cast the problem into an abstract boundary control framework following the analysis in .in order to achieve this , the critical ingredient in the proof is demonstrating hidden " boundary regularity for the acceleration potential of the flow . it will be shown that this component is an element of a negative sobolev space , based on an assumption about special integral transform related to the finite hilbert transform .the above regularity allows us to show that the term > ] ) , solving for is standard .in addition , having solved for , we may then specify that , with appropriate boundary conditions .we must verify that this is valid by recovering .note that ( from the regularity of the flow equations and mixed boundary conditions ) we _ will not obtain _ , demonstrating that the resolvent operator , in this case , is not compact . 
to see that , we proceed as follows : let and consider the equation for , where is a solution ( as obtained above for the case ) : + \lambda v = g_2 \in l_2({{\omega}})\end{aligned}\ ] ] applying to both sides of first equation , multiplying by , and integrating givesthe relation multiplying the second equation by , and integrating by parts ( with the boundary conditions ) gives adding the two equations and bounding yields the following a priori estimate on : .\ ] ] in addition , we have the standard bound for the plate components of the system : .\ ] ] we note that from the equations we recover .moreover , for all , and so finally , taking sufficiently large , we have the final a priori estimate on the solution : in addition , we note from the proof of the m - dissipativity of above , that is also m - dissipative ; indeed , .the proof of maximality ( the corresponding estimates ) do not depend on the sign of , owing to inherent cancellations and the structure of the static flow problem .thus , with both m - dissipative , we have ( * ? ? ?2.4.11 ) : the operator is skew - adjoint on and generates a c group of isometries . in order to encode the flow boundary conditions abstractly into our operator representation of the evolution, we introduce the _ flow - neumann map _ defined for the flow operator with by the argument utilized in the proof of maximality above , we notice that the membership in implies that , , and in addition , as before ( neglecting the plate components ) , we have that the operators are -dissipative on .this indicates that is skew - adjoint ; we demonstrate the symmetric action below : >_{{\mathbb{r}^2}}\ ] ] we first utilize the boundary conditions : >_{{\mathbb{r}^2 } } = < { \partial_{\nu}}\phi , \gamma [ \hat{\psi}]>_{\omega } + < { \partial_{\nu}}\phi,\gamma[\hat { \psi } ] > _ { { \mathbb{r}^2}\backslash \omega } = 0.\ ] ] then , integrating by parts in , using green s formula , and once more utilizing the boundary conditions : hence with a help of the flow map , we define neumann - flow map as follows : given by we then consider the associated regularity of the map .note that the neumann map is associated with the matrix operator rather than the usual harmonic extensions associated with a scalar elliptic operator .this difference is due to the fact that k - j conditions affect both the flow and the aeroelastic potential . in order to describe the regularity of the mapwe shall use the following anisotropic function spaces : these spaces are subspaces of with the additional information on regularity in -direction . 
where indicates that the regularity of the neuman - flow problem is related to the zaremba elliptic problem ( as discussed above ) .first , we take , where and with the mixed boundary conditions : thus satisfies the problem this is zaremba mixed problem , which then yields ( with ) the solution consequently finally .we again emphasize the dependency of the above result on the strong ellipticity of the operator .the operator enjoys this property only when considering subsonic flows .our next result identifies ] .let .this means : ( \phi , \psi ) , g > _ { \omega } = & ~ ( [ { \mathbb{a}}^*_0 + i ] ( \phi , \psi ) , ng ) _ { y_f } \\ = & ~ \big(\nabla(u \phi_x - \psi + \phi ) , \nabla \hat{\phi}\big ) _ { { \mathbb{r}^3}_+ } + ( u\psi_x - \delta \phi + \psi , \hat{\psi } ) _ { { \mathbb{r}^3}_+ } \\ = & ~ ( u\nabla \phi_x,\nabla \hat \phi)_{{\mathbb{r}^3}_+}-(\nabla \psi , \nabla \hat \phi)_{{\mathbb{r}^3}_+}+ ( \nabla \phi , \nabla \hat \phi)\\ & + ( u\psi_x , \hat{\psi } ) _ { { \mathbb{r}^3}_+ } - ( \delta \phi , \hat{\psi } ) _ { { \mathbb{r}^3}_+ } + ( \psi , \hat{\psi } ) _ { { \mathbb{r}^3}_+ } \end{aligned}\ ] ] we utilize green s theorem in the second and fifth terms : ( \phi , \psi ) , g > _ { \omega}}=&~ ( u\nabla \phi_x,\nabla \hat \phi)_{{\mathbb{r}^3}_+}+ ( \psi , \delta \hat \phi)_{{\mathbb{r}^3}_+}+ ( \nabla \phi , \nabla \hat \phi)\\ & + ( u\psi_x , \hat{\psi } ) _ { { \mathbb{r}^3}_+ } + ( \nabla \phi , \nabla \hat{\psi } ) _ { { \mathbb{r}^3}_+ } + ( \psi , \hat{\psi } ) _ { { \mathbb{r}^3}_+}\\ & -<\gamma[\psi],{\partial_{\nu}}\hat \phi>_{{\mathbb{r}^2}}+<{\partial_{\nu}}\phi , \gamma[\hat \psi]>_{{\mathbb{r}^2}}\end{aligned}\ ] ] we may simplify the boundary terms using the boundary conditions for and the fact that . ( \phi , \psi ) , g > _ { \omega}}=&~ ( u\nabla \phi_x,\nabla \hat \phi)_{{\mathbb{r}^3}_+}+ ( \psi , \delta \hat \phi)_{{\mathbb{r}^3}_+}+ ( \nabla \phi , \nabla \hat \phi)\\ & + ( u\psi_x , \hat{\psi } ) _ { { \mathbb{r}^3}_+ } + ( \nabla \phi , \nabla \hat{\psi } ) _ { { \mathbb{r}^3}_+ } + ( \psi , \hat{\psi } ) _ { { \mathbb{r}^3}_+}\\ & + < \gamma[\psi],g>_{\omega}\end{aligned}\ ] ] at this point we utilize the relations from the map in the second and fifth terms : ( \phi , \psi ) , g > _ { \omega}}=&~ ( u\nabla \phi_x,\nabla \hat \phi)_{{\mathbb{r}^3}_+}+ ( \psi , ( u\hat{\psi_x}-\hat \psi))_{{\mathbb{r}^3}_+}+ ( \nabla \phi , \nabla \hat \phi)\\ & + ( u\psi_x , \hat{\psi } ) _ { { \mathbb{r}^3}_+ } + ( \nabla \phi , \nabla ( u\hat\phi_x-\hat \phi ) ) _ { { \mathbb{r}^3}_+ } + ( \psi , \hat \psi ) _ { { \mathbb{r}^3}_+}\\ & + < \gamma[\psi],g>_{\omega}\\ = & < \gamma[\psi],g>_{\omega},\end{aligned}\ ] ] where in the last line we have utilized integrated by parts multiple times and used the fact that and compactly supported on . 
with the introduced notation we can express the flow - structure operator as - nv \\ v\\ -{{\mathscr{a}}}u + n^ * ( { \mathbb{a}}_0^ * + i ) \begin{pmatrix}\phi\\\psi \end{pmatrix } \end{pmatrix}.\ ] ] this new representation of encodes the boundary conditions , and further reveals the antisymmetric structure of the problem .having established that is -dissipative , the cauchy problem is well - posed on .the dynamics of the _ original _ fluid - structure interaction in can be re - written ( taking into account the action and domain of ) as where is what remains of the dynamics in which is not captured by .this allows us to treat the problem of well - posedness within the framework of unbounded trace perturbations " , where the perturbation in question becomes here is defined via duality ( using its adjoint expression ) via lemma [ duality ] .we now verify that ( computed formally ) fully encodes the dynamics of .-n(v+uu_x ) \\ v \\ -{{\mathscr{a}}}+n^*({\mathbb{a}}^*_0+i)\begin{pmatrix}\phi\\\psi \end{pmatrix } \end{pmatrix}\end{aligned}\ ] ] that the plate components are correct is standard .we focus on the flow component : let .this implies that hence -n(v+uu_x ) = & ~{\mathbb{a}}_0\big [ \begin{pmatrix } \phi \\ \psi \end{pmatrix}-\begin{pmatrix } \hat \phi \\\hat \psi\end{pmatrix}\big ] -\begin{pmatrix } \hat \phi \\\hat \psi\end{pmatrix}\\ = & ~ \begin{pmatrix } -u(\phi+\hat \phi)_x-(\psi+\hat \psi ) \\ -u(\psi+\hat \psi)_x-\delta(\phi+\hat \phi)\end{pmatrix}-\begin{pmatrix } \hat \phi \\\hat \psi \end{pmatrix } \\= & ~\begin{pmatrix}-u\phi_x+\psi \\ -u\psi_x+\delta \phi \end{pmatrix } -\begin{pmatrix } -u\hat \phi_x+\hat \psi \\ -u\hat \psi_x+\delta \hat \phi \end{pmatrix}-\begin{pmatrix } \hat \phi \\\hat \psi \end{pmatrix}\\ = & ~\begin{pmatrix}-u\phi_x+\psi \\ -u\psi_x+\delta \phi \end{pmatrix } \end{aligned}\ ] ] where we have used in the last line to make the cancellation .we would like to recast the full dynamics of the problem in as a cauchy problem in terms of the operator . to do this, we define an operator as follows : \equiv\begin{pmatrix}0\\-u{\mathbb{a}}_0n\partial_x u\\0\\0\end{pmatrix}\ ] ] specifically , the problem in ( [ flowplate ] ) has the abstract cauchy formulation : where will produce semigroup ( mild ) solutions to the corresponding integral equation , and will produce classical solutions . to find solutions to this problem , we will consider a fixed point argument , which necessitates interpreting and solving the following inhomogeneous problem , and then producing the corresponding estimate on the solution : for a given . to do so , we must understand how acts on ( and thus on ) . to motivate the following discussion ,consider for and the formal calculus ( with as the pivot space ) ,z)_y = -u(({\mathbb{a}}_0+i)n\partial_x u,\overline \psi ) = -u<\partial_x u , \gamma[\overline \psi]>.\end{aligned}\ ] ] hence , interpreting the operator ( via duality ) is contingent upon the ability to make sense of ] ( since ) . in what follows ,we show a trace estimate on ( for semigroup solutions ) of ( [ inhomcauchy ] ) allows us to justify the program outlined above .we now state the trace regularity which is required for us to continue the abstract analysis of the dynamics .we relegate a discussion ( and the corresponding proof in one dimension ) to section [ tracereg ] . 
in what follows we implement microlocal analysis and reduce this theorem to a statement about integral integral transforms in analogous to the finite hilbert transform .this theorem is critical for the arguments to follow concerning the perturbation and general abstract approach .we note , the above result holds for _ any _ flow solver we will be applying this result in the case where coming from a semigroup solution generated by . at this stagewe follow the approach taken in by interpreting the variation of parameters formula for ds.\ ] ] by writing ( with some , ) : ds.\ ] ] we initially take this solution in '=[{\mathscr{d}}({\mathbb{a}})]' ] ( duality with respect to the pivot space ) , or equivalently , for all . for fixed and define the convolution operator corresponding to the mild solution of the abstract inhomogeneous equation ',~x(0)=x_0,\ ] ] with the input function .[ dualityequiv ] let and be reflexive banach spaces and the conditions in and be in force. then 1 .the semigroup can be extended to the space ' ] for every ] continuously .\le c(u ) || u ||_{h^{2}(\omega)},\ ] ] which implies .the relation in follows from and the boundedness of the operator '\mapsto y ] and . then given by belongs to ;y) ] , i.e. in addition we have that ')\ ] ] and holds in ' ] .however , we have shown that the additional `` hidden '' regularity of the trace of for solutions to ( [ flow ] ) with the boundary conditions in allows us to bootstrap to be continuous from to ( with corresponding estimate ) via theorem [ dualityequiv ] .this result justifies _ formal _ energy methods on the equation ( [ inhomcauchy ] ) in order to produce a fixed point argument .the hidden " trace regularity of the term coming from the flow equation is _ critical _ to the arguments above . in this sectionwe analyze this problem in the dual ( fourier - laplace ) domain and relate it to a certain class of integral transforms reminiscent of the finite hilbert transform . in the case of two dimensions ,we reduce the trace regularity to an hypothesis about the invertibility of hilbert - like transforms on bounded domains . additionally , we demonstrate the necessary trace regularity by performing microlocal analysis on a pseduodifferential operator corresponding to the flow problem in one dimension .we are interested in the trace regularity of the following flow problem in : =\gamma[\psi]=0 , ~&{{\bf{x}}}\in { \mathbb{r}^2}\backslash \omega , \end{cases}\ ] ] with , is the downwash " generated on the structure , and ] follows from a priori interior regularity of and the fact that direction is tangential to the boundary .thus , the only real requirement is the trace regularity of the time derivative of .we recall that related regularity of the trace to was derived in which deals with the neumann boundary data .in fact , it was shown there that in the case of neumann boundary conditions with both subsonic and supersonic velocities , one obtains the regularity as in ( [ trace - reg - est - m ] ) with . though these are related regularity results , the techniques of obtaining them are very different . in the neumann casewe perform microlocal analysis by microlocalizing hyperbolic and elliptic sectors . in the present case ,the regularity phenomenon has to do with spectral analysis of finite hilbert transforms . 
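before carrying out that analysis , it may help to see the finite hilbert transform itself in action . the short sketch below evaluates the transform numerically on ( -1 , 1 ) and checks it against the classical airfoil pair , for which the transform of the semicircle profile is linear . the normalization used here is one common convention ; the paper 's own normalization and the symbol computations surrounding it did not survive extraction , so the constants and signs below are assumptions made only for illustration .

```python
# numerical sketch of the finite hilbert transform on (-1, 1), assuming the common
# normalization (t f)(x) = (1/pi) p.v. int_{-1}^{1} f(s)/(s - x) ds; the paper's own
# normalization was lost in extraction, so signs and constants here are illustrative.
# the principal value is handled by subtracting f(x) and integrating the smooth
# remainder, using p.v. int_{-1}^{1} ds/(s - x) = log((1 - x)/(1 + x)).
import numpy as np

def finite_hilbert(f, x, n=4001):
    s = np.linspace(-1.0, 1.0, n)
    fs, fx = f(s), f(x)
    diff = s - x
    with np.errstate(divide="ignore", invalid="ignore"):
        g = np.where(np.abs(diff) > 1e-12, (fs - fx) / diff, 0.0)  # drop the removable point
    smooth = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(s))           # trapezoid rule
    return (smooth + fx * np.log((1.0 - x) / (1.0 + x))) / np.pi

# check against the classical pair: the transform of sqrt(1 - s^2) equals -x on (-1, 1)
f = lambda s: np.sqrt(np.clip(1.0 - s ** 2, 0.0, None))
for x in (-0.5, 0.0, 0.3, 0.7):
    print("x = %+.1f   numeric = %+.3f   exact = %+.3f" % (x, finite_hilbert(f, x), -x))
```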
in line with the similar analyses in , we take zero initial data ( the principle of superposition may then be applied ) .we consider the fourier - laplace transform of the original linear equation ( formally , sending and ) ) , resulting in : let us denote then , we must solve ( in ) doing so , we obtain our next step is to relate the boundary conditions to the the normal derivative and the acceleration potential : hence , on we have the relation this relation is supplemented with the information the boundary conditions above in are the _ key _ feature which distinguish this analysis of trace regularity from that in . in what follows , we accommodate the mixed nature of the boundary conditions below by decomposing the symbol in into the product of two symbols which can be separately analyzed .let denote the pseudodifferential operator corresponding to the multiplier , and let denote projection on .then for any with support contained in , the integral equation in can be recast as noting that acts on a truncated " function , so that the pdo operator can be viewed as a pdo operator on . we can formally construct as follows : for any with outside , we construct laplace ( time ) fourier ( space ) transform of which we denote . then the operator is the pdo operator such that taking the inverse of the fourier - laplace transform , and applying projection on gives the appropriate statement possio type equation . equation ( [ p ] ) is an abstract version of a possio integral equation in two dimensions .we note here that by that we have already solved for given ; indeed , in analyzing the operator above , we have deduced solvability of the system in for .we are therefore interested in characterizing the solution : _ given on with some regularity , find the corresponding regularity of aeroelastic potential ] .is bounded from above for all . we analyze the symbol via cases : first , in case a , we note that the quantity , and hence , directly from ( discarding terms ) we have ^{1/4},\ ] ] for all and . in case c ,we utilize the characterizing bound in the principal term to arrive at ^{1/4},\ ] ] for all and . in caseb the principal portion of the denominator can degenerate , indicating the need for the ^{1/4} ] . since the singular support of is empty , by the pseudolocal properties of pseudodifferential operators , the operator is smooth " and compact ( see for detailed calculations with a similar decomposition ) .the operator is therefore a compact perturbation of finite hilbert transform , injective , hence invertible on with ( by the invertibility properties of the finite hilbert transform see the appendix ) .with , inverting , we have with , which yields via the sobolev embedding that for every by taking a suitable .thus , utilizing corollary [ sinv ] , , as desired , which concludes the proof of theorem [ 1dtracereg ] . 
noting that the sobolev embedding we used is not critically affected by the dimension of , we have thus provided the main motivation for condition [ le : ftr0 ] .what is missing however , is the analog of theory of finite hilbert transforms carried at the two dimensional level .[ bal1 ] we note that the same result ( essentially ) follows from the analysis in , where the author proves that aeroelastic potential for .since for there exists such that and one then obtains that with .the loss of derivative in the characteristic region was already observed and used in the analysis of regularity of the aeroelastic potential for the neumann problem with supersonic velocities .however , in the case of k - j conditions there is an additional loss , due to the necessity of inverting finite hilbert transform .we now bring our attention back to the case when is a two dimensional domain .the analysis above can be performed analogously for when ; however , as we see from , we must have invertibility estimates on the operator associated to the symbol in a vectorial setting for and for a two dimensional domain .this corresponds to a finite riesz transform _ in the direction . _the trace regularity analysis ( as done above ) will depend on solutions to singular integral equations which to the authors knowledge are not readily available .such results , should they exist , will depend highly on the geometry of domains .as such , the corresponding trace regularity result for a two dimensional domain _ as in _ will also depend on the geometry ; ultimately , this assumption will be verified if properties of the finite hilbert transform carry over to the higher dimensional transforms mentioned previously .we now state this as a lemma : assume that the operator is continuously invertible for . then the trace regularity condition condition [ le : ftr0 ] holds . as discussed above and in the appendix ,the hypothesis of this lemma is a generalization of the invertibility properties of the finite hilbert transform which were critical in the proof of theorem [ 1dtracereg ] above .we briefly mention the three principal nonlinearities associated to the flow - plate interaction model presented above , appearing in the plate equation , i.e. in the standard analyses we consider a general situation that covers nonlinear ( cubic - type ) force terms resulting from aeroelasticity modeling . these include : 1 . _ kirchhoff model _ : is the nemytski operator with a function which fulfills the condition where is the first eigenvalue of the biharmonic operator with homogeneous dirichlet boundary conditions .2 . _ von karman model : _ ] is given by = \partial ^{2}_{x } u\cdot \partial ^{2}_y v + \partial ^{2}_y u\cdot \partial ^{2}_{x } v - 2\cdot \partial ^{2}_{xy } u\cdot \partial ^{2}_{xy } v,\ ] ] and the airy stress function solves the following elliptic problem = 0 ~~{\rm in}~~ \omega,\quad { \partial_{\nu}}v(u ) = v(u ) = 0 ~~{\rm on}~~ { { \partial}}{{\omega}}.\ ] ] von karman equations are well known in nonlinear elasticity and constitute a basic model describing nonlinear oscillations of a plate accounting for large displacements , see and also and references therein ._ berger model : _ in this case the feedback force has the form \delta u,\ ] ] where and are parameters , for some details and references see and also ( * ? ? ?* chap.4 ) .the following proposition allows us to incorporate the above nonlinearities into our theory of well - posedness as in . 
for this reason we do not elaborate further on the nonlinear analysis , as the key issues arising in the study of this model which stand apart from the aforementioned analyses occur in the _ linear theory_. [ abstractnonlin ] for each of the nonlinearities above, we have that is locally lipschitz from into and there exists -functional on such that is a frchet derivative of , .moreover , is locally bounded on , for all sufficiently small there exists a such that the configuration considered in this treatment represents an attempt to understand pde aspects of the dynamic , mixed k - j conditions .to do so , we have taken clamped plate boundary conditions . however , in recent discussions with e. dowell ( duke ) , the authors have come to understand that the _ free - clamped _configuration represents a model of great recent interest . in addition , it is perhaps the most mathematically interesting ( and difficult ) case corresponding to this class of flow - plate models .these configurations are extremely important in the modeling of airfoils and in the modeling of panels in which some component of the boundary is left free .in addition to k - j condition , one must contend with the difficulties associated with the free plate boundary condition .the applicability of k - j boundary condition is highly dependent upon the geometry of the plate in question .the configuration below represents an attempt to model oscillations of a plate which is _ mostly free_. the dynamic nature of the flow conditions correspond to the fact that the interaction of the plate and flow is no longer static along the free edge(s ) , and in this case the implementation of the k - j condition is called for .this yields the following boundary conditions for the flow - plate system : where and we have partitioned the boundary . the boundary operators and are given by : \,=\partial_{\tau}\partial_{\nu}\partial_{\tau}u,\end{array}\ ] ] where is the outer normal to , is the unit tangent vector along .the parameter is nonnegative ; the constant has the meaning of the poisson modulus . the abstract boundary damping is encapsulated in the term .the regions are described by the picture below .the configuration above arises in the study of airfoils , but another related configuration referred to as _ axial flow _ takes the flow to occur in the direction in the picture above . in our analysis ,the geometry of the plate ( and hence the orientation of the flow ) do not play a central role . in practice, the orientation can have a dramatic effect on the occurrence and magnitude of the oscillations associated with the flow - structure coupling . in the case of axial flow ,the above configuration is often discussed in the context of _ flag flutter _ or flapping .see for more details .the physical nature of the models given by the boundary conditions in makes their analysis desirable ; however , such models involve a high degree of mathematical complexity due to the dynamic and mixed nature of the boundary coupling near the flow - plate interface . from the point of view of the existing analysis ,much of the well - posedness and long - time behavior analysis is contingent upon taking clamped boundary conditions assumed for the plate ; these allow for smooth extensions to of the neumann flow boundary conditions satisfied by the flow . 
in the absence of these, one needs to approximate the original dynamics in order to construct sufficiently smooth functions amenable to pde calculations .this is a technical challenge and was carried out in a similar fashion in , although the need for this analysis was not due to plate boundary conditions .the authors are grateful to prof .balakrishnan for a longtime sharing of his pioneering results in the field of continuum aeroleasticity. additionally the authors are grateful to prof .e. dowell for very constructive and informative ongoing discussion regarding flow - structure interaction models and configurations of recent interest .the research conducted by irena lasiecka was supported by the grants nsf - dms-0606682 and afosr - fa99550 - 9 - 1 - 0459 .in this appendix we present results on the finite hilbert transform which are critical to the analysis in and to our analysis when is an interval .our references are .the results provide the motivation for assumptions concerning the additional trace regularity found in condition [ le : ftr0 ] , and hence are apropos to the invertibility of the higher dimensional hilbert - like singular integral transform .we consider the case where . for , define the finite hilbert transform to be : which is , .we are concerned with the inversion formula , as given in tricomi : noting that the null space of is captured by the last term above . with regard to , we have a straightforward argument ( from the inversion formula ) which yields the _lowest possible integrability _ of in order for the inversion formula to be valid .this yields a certain type of _ optimality _ for the inversion : for , we get 1 . for any map is fredholm of index and the psedoinverse is bounded on .2 . for any map is injective and fredholm of index .thus , the inverse is bounded on its range , where 3 . for , is dense and proper . 99 a. v. balakrishnan , aeroelasticity continuum theory , _springer - verlag _ , 2012 .a. v. balakrishnan , nonlinear aeroelastic theory : continuum models ._ control methods in pde dynamical systems _ , contemp ., 426 ( 2007 ) , _ amer ._ , providence , ri , 79101 .berger , a new approach to the analysis of large deflections of plates , _ j. appl ._ , 22 ( 1955 ) , 465472 .r. bisplinghoff , h. ashley , principles of aeroelasticity ._ wiley _ , 1962 ; also _ dover _ , new york , 1975 .bolotin , nonconservative problems of elastic stability , _ pergamon press _ , oxford , 1963 .a. boutet de monvel and i. chueshov , the problem of interaction of von karman plate with subsonic flow gas , _ math .methods in appl . sc ._ , 22 ( 1999 ) , 801810 .l. boutet de monvel and i. chueshov , non - linear oscillations of a plate in a flow of gas , c.r .paris , ser.i , 322 ( 1996 ) , 10011006 .l. boutet de monvel and i. chueshov , oscillation of von karman s plate in a potential flow of gas , _izvestiya ran : ser ._ 63 ( 1999 ) , 219244 .i. chueshov and i. lasiecka , generation of a semigroup and hidden regularity in nonlinear subsonic flow - structure interactions with absorbing boundary conditions ._ 3 ( 2012 ) , 127 .a. favini , m. horn , i. lasiecka and d. tataru , global existence , uniqueness and regularity of solutions to a von karman system with nonlinear boundary dissipation .eqs _ 9 , 1996 , pp .267294 . ; addendum , _ diff ._ 10 , 1997 , pp . 197220 .i. lasiecka and j.t .webster , generation of bounded semigroups in nonlinear subsonic flow - structure interactions with boundary dissipation , _ math .methods in app . 
sc ._ , doi : 10.1002/mma.1518 , published online 2011 . r. sakamoto , mixed problems for hyperbolic equations , _ kyoto univ _ 2 , 1970 , pp . g. savare , regularity and perturbation results for mixed second order elliptic problems , _ communications on pdes _ , vol 22 , issue 5 - 6 , 1997 , pp 869 - 900 . m. shubov , asymptotical form of possio integral equation in theoretical aeroelasticity , _ asymptot ._ 64 , 2009 , pp .
|
we analyze the well - posedness of a flow - plate interaction considered in . specifically , we consider the _ kutta - joukowsky _ boundary conditions for the flow , which ultimately give rise to a hyperbolic equation in the half - space ( for the flow ) with _ mixed boundary conditions _ . this boundary condition has been considered previously for lower - dimensional interactions , and it dramatically changes the properties of the flow - plate interaction and the requisite analytical techniques . we present results on well - posedness of the fluid - structure interaction with the kutta - joukowsky flow conditions in force . the semigroup approach to the proof utilizes an abstract setup related to that in but requires ( 1 ) the use of a neumann - flow map to address a zaremba type elliptic problem and ( 2 ) a trace regularity assumption on the acceleration potential of the flow . this assumption is linked to the invertibility of singular integral operators which are analogous to the finite hilbert transform in two dimensions . ( we show the validity of this assumption when the model is reduced to a two dimensional flow interacting with a one dimensional structure ; this requires microlocal techniques . ) our results link the analysis in to that in . key terms : flow - structure interaction , nonlinear plate , nonlinear semigroups , well - posedness , mixed boundary conditions , possio integral equation , finite hilbert transform .
|
distributed control , using many sensors , computers and actuators , can improve the performance of systems at many scales .examples include controlling traffic flow in cities , regulating office environments , active strengthening of structural materials , structural vibration control , reducing fluid turbulence and adjusting optical responses .the continuing development of micrometer - scale machines and proposals for even smaller devices constructed with atomically precise manipulations offer further possibilities for designing materials whose properties can be modified under program control , giving rise to so - called `` smart matter '' .smart matter is a material that locally adjusts its response to external inputs through programmed control .such control is enabled by embedding sensing , computation and actuation ability within the material .specifically , control programs are designed to use measurements of the system response to compute appropriate control inputs to the system , such as forces or electric fields , which are then imposed on the system by the actuators .this operation is known as feedback control because measurements of the system response are fed back to the controller for use in determining the control inputs . to date, proposals for smart matter focus on controlling classical behaviors of materials . however , small , precisely constructed devices can also exploit quantum behaviors .thus , an interesting open question is the extent to which the demonstrated abilities to modify quantum behaviors , together with distributed computation , can provide a much finer level of control of the properties of a material .this leads to _ quantum smart matter _ , which consists of actuators , sensors and computers integrated to operate on quantum behaviors .in this context , classical control methods and computers have a limited role due to their use of a measurement process which necessarily disrupts the quantum behavior . instead ,control of the quantum behavior of materials is a possible application for quantum computers .although there has been some work on distributed , parallel quantum computers , the use of quantum computers for controlling materials contrasts with most studies of such computers , which focus on purely computational questions such as whether they can compute classically intractable functions .quantum computers are distinguished by their ability to operate simultaneously on superpositions of many classical states ( `` quantum parallelism '' ) , and their restriction to unitary linear operations on such superpositions which can be used to produce interference among different computational paths .in particular , this restricts the programs to be reversible , and hence requires development of reversible devices . in this paper , we discuss coupling the programability of quantum computers to properties of materials , to create quantum smart matter . both classical and quantum smart matter share the basic idea of using a large number of integrated sensors , computers and actuators .they differ in that using quantum computers avoids the need to perform measurements on the quantum system . 
in the remainder of this paper , we first describe some of the control options for smart matter , then present an idealized example , and discuss a number of possible applications .[ sect.controls ] a large majority of the control applications implemented today use global controls .these controllers employ a single centralized controller that receives measurements of the system s state and delivers control inputs .their popularity stems from their conceptual simplicity : the control program deals directly with the desired overall properties of the system and need not coordinate its activities with other controllers .more formally , existing theoretical tools provide a basis for establishing provable performance bounds and the optimal use of control resources .global controllers have serious drawbacks in the context of smart matter .first , manufacturing defects and variations in the environment make it difficult to accurately model the exact dynamic behavior of the system .second , coordinating the activities of all the actuators in real time becomes an intractable designing and programming task as the number of active elements ( sensors and actuators ) increases .there can also be communication bottlenecks from the need to provide all the system measurements to the central controller in a timely manner .finally , the failure of the single central controller completely eliminates all control of the system .these difficulties motivate the use of distributed , or decentralized , control mechanisms .these control methods consist of a combination of many controllers , each designed and operated with limited knowledge of the complete system .this approach can allow control to be applied to more complex systems , including distributed computation . while global performance can not necessarily be guaranteed as with global controllers , decentralized controllers can , in practice , be remarkably robust to the failure of individual active elements , and are found in a variety of systems such as biological ecosystems , market economies and the scientific community .some applications of distributed control include regulating office environments , traffic flow , and , in the context of smart matter , structural vibrations .materials with desirable properties can be created in a number of ways . for most materials in use today , the properties are built in through a suitable choice of component materials and fabrication method ( e.g. , plastics and metal alloys ) .this technique is very robust when suitable materials can be found , but limited by properties of natural materials the available fabrication technologies . in effect, this procedure designs the system so additional control is not needed , i.e. 
, the uncontrolled behavior of the system has the desired properties already .unfortunately , once fabricated it is difficult to change the material properties .this is especially true of changes that should take place only at specific locations and occur rapidly in response to some environmental change .one way to change the properties of materials in a controlled manner is through the use of external fields applied to the whole system .provided the relevant physical properties change in response to this field , changing the field provides a global control of the material .examples include piezoelectric crystals where electric fields modify mechanical properties and the use of lasers to modify chemical reactions of large groups of molecules .if the system can be accurately modelled these external fields can be designed a priori . however , designing effective global control for large , dynamic , heterogeneous systems is intractable due to the scale and difficulty of modelling their quantum behavior . alternatively ,if many repeated experiments are feasible , the controls can be adapted to the system by incremental changes that improve performance based on measurements of the system response .another approach is to apply the required fields locally through embedded actuators , but still without any sensors .this alternative can handle spatial variations in the material that are known in advance , or provide a match to a fixed system through overall adaptation after many trials .this alternative still does not dynamically adjust to variations resulting from imperfections in the system .the above control methods work without any feedback , either by having good knowledge of the system behavior so the control force can be suitably designed , or through an adaptive process where different controls can be applied to many copies of the system to determine which method is best .when these conditions do not hold , these control methods are not effective .smart matter , where sensors and actuators are integrated in large numbers throughout the material , leads to an an alternate control method : the ability to sense and act independently on a local scale is employed to create desirable global behavior .this approach allows the control force to respond dynamically to unanticipated changes in the system or compensate for an inaccurate model of the dynamics at a very local scale . in effect, this allows the adaptation to take place while the system operates and in response to local variations , in contrast to a global adaptation of controls without local sensors where adjustments are based on the average behavior of many trials or copies of the system .a good example of the need for dynamic control is the behavior of vortices near a surface moving through a turbulent fluid . 
herelocal sensors can allow response to individual vortices , whose location and occurrence are not readily predicted .the most extreme case of smart matter is when the computation needed for the control is fast compared to any relevant changes in the physical configuration of the material .this allows for a decoupling of the slow physical degrees of freedom , which we denote by , from the rapid computational degrees of freedom , denoted by , in the same way that molecular or solid - state dynamics can often be approximated by considering separately the behavior of the electrons and the atomic nuclei .for example , this could be achieved by using light particles for the computation while heavy ones determine the relevant physical response .viewed another way , within a given implementation , this requirement also limits the number of computational steps that the control program can perform to determine its result , thus , defining the maximum acceptable latency of the control system . finally , the distinction between a variety of individually fabricated materials and smart matter , where properties can be changed under program control , is somewhat analogous to the distinction between customized electronic circuits for specific tasks and the use of general microprocessors . in the former cases , the customized material or circuithave a fixed set of properties , and can be well - matched to specific applications whose requirements do not change rapidly . for the latter , the programmability of smart matter or microprocessors , allows for a wider range of applications and flexible response to changes .controlling quantum behaviors elegantly extends the capabilities of smart materials , since the active elements can operate with the full quantum state of the material . realizing this possibility requires translating classically based control methods to quantum systems .these methods include controlled behavior based on either feedback or modelling .the difficulty of applying these control techniques will depend on the way the system is constructed .at one extreme , precise construction simplifies the control problem by allowing accurate modelling of the environment of each device and individually tailored programs , but imposes severe difficulties for fabrication . 
on the other hand, more readily manufactured devices will exist in a statistically variable environment , making the control design more difficult .it is this latter case that we mainly focus on here , as it raises a number of engineering control issues where sophisticated controls can compensate for current inability to precisely fabricate materials .one consequence of employing localized control is that creating macroscopic effects with microscopic controllers will require a large number of controllers .an immediate consequence is the requirement that these controllers be relatively homogeneous in design and function to simplify their design and construction .furthermore , each controller will be required to act either autonomously , or in concert with only a few others , since the design of complex interactions among so many controllers using a global model and algorithm is intractable .thus , quantum smart matter will be based on controllers , designed with local knowledge and behavior , which are homogeneous and act autonomously to achieve a desired macroscopic effect .a second consequence is that global simulation and performance predictions will be statistical in nature due to the inability to precisely specify each controller s detailed environment , or perform simulation and optimization for so many degrees of freedom .in fact , the precise location of the devices would be described according to a probability distribution rather than known a priori , as assumed by standard control methods .this fact will require different types of systems analysis than are typically employed with classical systems . in a control context, the forces acting on the physical degrees of freedom of a material must also depend on the computational ones .this observation is the analog of actuators in classically defined smart matter where results of a computation can change the forces acting on the physical system .thus , the potential acting on must be a function . within the range of variation of this function ,the control program can adjust , based on the value of , to produce an effective physical potential defined by because a quantum computer can perform this operation on quantum superpositions , it produces a system whose relevant physical behavior is governed by , a controlled potential for the quantum system .this discussion illustrates the difference between quantum smart matter , controlled by quantum computers , and smart matter whose control is determined by a classical computer . 
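the mechanism behind eq . ( [ eq.potential ] ) can be made concrete with a small sketch . the code below ( python ; every numerical choice is assumed purely for illustration ) composes a fixed physical potential , which depends on the position and on a single computational bit , with a control program that computes the bit from the sensed coordinate ; the composition is the effective potential seen by the slow physical degree of freedom . the two harmonic branches and the sign - based rule are not taken from the proposal itself , they only show how changing the program , with no change of hardware , changes the controlled potential .

```python
import numpy as np

# physical potential: depends on the position x and on one computational bit c.
# the two branches (a soft and a stiff harmonic well) are assumed for illustration.
def v(x, c):
    k = 0.5 if c == 0 else 4.0
    return 0.5 * k * x**2

# control program: a simple reversible rule mapping the sensed coordinate to c
def program(x):
    return 0 if x < 0.0 else 1

# effective potential experienced by the slow physical degree of freedom
def v_eff(x):
    return v(x, program(x))

for x in np.linspace(-2.0, 2.0, 9):
    print(f"x = {x:+.2f}   c = {program(x)}   v_eff = {v_eff(x):.3f}")
```

swapping `program` for a different rule changes `v_eff` everywhere the rule differs , which is the sense in which the material properties are set in software rather than in fabrication .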
since quantum computers operate on superpositions , they are essentially applying all possible control actions weighted by the wave function values . hence , quantum smart matter does not need to measure quantum states for feedback . by contrast , classical computers can not give feedback control since they would require a measurement of the state , hence collapsing the wave function . eq . ( [ eq.potential ] ) also illustrates a similarity between quantum and classical control of smart matter : both types of control modify the potential governing the system s dynamics . quantum smart matter does this for quantum systems , without collapsing the wave function , while classical feedback modifies classical dynamic system properties . another important control issue is that of stability : whether the control achieves the desired behavior while maintaining the state of the system near some equilibrium configuration . quantitative criteria for stability are given by results from control theory . for the case where the control force does not depend explicitly on time , stability amounts to a bound on the total energy of the system . more specifically , for reversible , non - dissipative quantum systems stability implies that the total energy of the system is constant in the absence of external inputs . these results can apply much more generally , e.g. , when the control force has an explicit time dependence , through the use of lyapunov functions . a lyapunov function is a function of the system state , including the time dependence introduced by adjustments made by the control program . it may also have an explicit time dependence . if is a convex function of within some region including the initial state of the system , has a non - positive total derivative with respect to time , and at some point within that region , then the entire system will be stable with control . in essence this criterion determines when control feedback could positively amplify small perturbations in the system state , resulting in unstable behavior . this result suggests a direct link between the potentials , , and classical control theory , in which there are several synthesis methods that guarantee stability and performance of the controlled system . such synthesis techniques would be employed to design the control program computing values for such that it is reversible and satisfies the stability conditions for . thus , controlled stability could be guaranteed for individual systems . however , stability of each individual system does _ not _ guarantee stability of the entire macroscopic system , unless each system is entirely autonomous .
as an example , instability could arise when a small change in one part of the system gives rise to larger changes in other parts .furthermore , the theory for control stability developed for classical systems will need to be extended to account for quantum systems where the long time behavior can be very different from the classical counterpart .this discussion indicates that while design and synthesis of individual controllers may be put into a form that guarantees some measure of local stability and performance , global stability is not as straightforward a problem .such global stability measures would likely differ from those of standard control theory in being a statistical expectation of stability , rather than a specific proof .stability in this sense is particularly desirable under the assumption that the undesirable interaction of enough elements could reduce the expectation of stability for the entire system , and introduce undesirable behavior .hence , the interactions present at the microscopic scale to which control is being applied would have to be determined , estimated , or implicitly considered because the application of localized controllers to achieve global results is inherently affected by the strength of ( potentially ) unmodelled interactions .= 1.5 in consider a one - dimensional system with a single extra binary degree of freedom . in this casethe physical degree of freedom is the position , i.e. , , and the computational degree of freedom is 0 or 1 .a simple potential would be to have two distinct functional forms and , as shown in fig .[ fig.potentials ] .suppose the system is initially prepared in the state and then acted on by a quantum computer whose program sets to 0 or 1 depending on whether or , respectively .this gives as this state evolves , amplitudes from positive and negative values of become mixed and the control computation acts to readjust .provided this computation is fast compared to the physical evolution , this gives in effect the behavior governed by the programmed potential .other effective potentials could be constructed with different programs to compute from . in this example, the result is an asymmetric potential as has been studied in the context of second harmonic generation in optics .this illustrates the trade - off between smart materials and direct construction of materials with the desired effective potential . on the one hand ,the properties of smart materials can be readily modified simply by changing the program in the control computers . on the other hand , direct implementation in materialsallows for a faster response but becomes more difficult as more complex or time - variable potentials are considered . as a final comment on this example , note that it made use of quantum parallelism but no use of interference .the latter capability of quantum computers is crucial for their possible improvement on classically intractable problems , and provides additional possibilities for designing behavior of smart matter .for instance , the use of destructive interference could be used to cancel the amplitudes of certain undesirable behaviors , a feature that is not possible with classical computations , even if they are probabilistic .so far , we have described the behavior of a single quantum controller . 
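before turning to many coupled controllers , a minimal numerical sketch of this single - controller example may be useful . it propagates a gaussian packet with a standard split - operator method under the programmed potential , using the `v_eff` construction sketched earlier , with the control bit evaluated at every grid point on every step , i.e. , assuming the control computation is effectively instantaneous compared to the physical evolution . the grid , time step , initial packet and the two potential branches are assumptions made only for illustration ( hbar = m = 1 ) .

```python
import numpy as np

# grid, packet and time step (all values assumed for illustration; hbar = m = 1)
n, length = 512, 40.0
x = np.linspace(-length / 2, length / 2, n, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
dt, steps = 0.01, 2000

# the two potential branches of fig. [fig.potentials]; functional forms assumed
v0 = 0.5 * 1.0 * x**2        # softer branch, where the program sets c = 0
v1 = 0.5 * 4.0 * x**2        # stiffer branch, where the program sets c = 1

def programmed_potential(grid):
    """control program of the text's example: c = 0 for x < 0, c = 1 for x >= 0."""
    return np.where(grid < 0.0, v0, v1)

# initial gaussian packet centred left of the origin (assumed)
psi = np.exp(-(x + 3.0) ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

kinetic_phase = np.exp(-0.5j * k**2 * dt)
for _ in range(steps):
    # in general the program would be re-run here; for this simple rule it is static
    v_eff = programmed_potential(x)
    psi *= np.exp(-0.5j * v_eff * dt)                     # half potential step
    psi = np.fft.ifft(kinetic_phase * np.fft.fft(psi))    # kinetic step
    psi *= np.exp(-0.5j * v_eff * dt)                     # half potential step

prob = np.abs(psi) ** 2
print(f"<x> after evolution: {np.sum(x * prob) * dx:+.3f}")
```

only the function `programmed_potential` encodes the control ; the underlying branches stay fixed , and with the sign rule above the packet effectively sees an asymmetric well , softer on the left than on the right .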
for use in smart matter, we would have a system consisting of a large number of such devices to allow distributed control over specific quantum behaviors of the material .results could include programs that provide different behaviors at different spatial locations , as well as changing with time .furthermore , the control computation could make use of some of the computational states of its neighbors , providing a way to build correlations among different regions of the material .as an example of how simple programmed couplings can lead to more complex potentials suppose we start with two independent one - dimensional systems whose individual potentials take the form shown in fig .[ fig.potentials ] .the overall system potential is then we can couple the behaviors together with a control program that , for instance , sets to 0 or 1 depending on whether the _ other _ system state satisfies or , respectively , and vice versa for . in summary ,quantum smart matter relies on potentials that can be adjusted by their dependence on degrees of freedom that can be rapidly changed under program control . to maintain superpositions ,the control computations must rely on quantum computers .finally , if the promise of quantum computing is realized , this capability could be used to create complex , varying potentials governing the behavior of matter .if implemented , quantum smart matter provides the _ capability _ for using local control to produce desired behaviors .however , beyond the difficulty of fabricating such systems , there remains the challenge of _ designing _ suitable control algorithms . to illustrate possible control methods ,consider the behavior of a one - dimensional harmonic oscillator subjected to an additional control force so the effective potential is for simplicity we restrict the control potential to be of the form , with the corresponding control force given by .the first term in this control potential is a time - independent force proportional to , which just changes the basic frequency of the system to be .the second term gives a time - dependent control force that acts equally on the whole system , i.e. , has no dependence . with these choices ,the behavior of the wave function is readily determined , and is particularly simple for gaussian wave packets , i.e. , wave functions of the form where denotes the position of the center of the packet and characterizes its spread . as the system evolves from this initial state , the wave function continues to be described as a gaussian packet whose position , width and phase vary with time . in particular , the position of the center of the packet at time is given by in this context , designing a control amounts to finding values of and to achieve desired behaviors .the portion of the control force that does not depend on , i.e. , , can be delivered either from an external global source or through the computations of the local controller .the forcing term can be determined this way because it does not involve any knowledge of the system state and hence does not require any sensor values . however , providing a force that does depend on , in this case a modification of the oscillation frequency through the value of , requires the applied force to depend on the system state . 
for a classical systemthis result would require the controller to measure the state for use in its control computations .employing quantum computers for control of quantum systems , however , means that the controller acts on all possible states of the system , through quantum parallelism. often many choices of the control force will achieve the same objective . in these cases ,additional criteria can be added to the design .a common additional criterion is to pick from among the feasible controls , i.e. , those that produce the desired behavior , the one that minimizes some measure of the applied control force , e.g. , this constraint acts to reduce the control gain required , in the control design process , and therefore the actuation authority required . from a practical standpointsmall gain controllers are desirable since any system noise encountered undergoes minimum amplification in the feedback control process .for example , suppose we want the system s position , or more precisely the expected value of the position , to be at a desired value at a given time , i.e. , we want at a particular time .this task is accomplished without sensors assuming accurate information about the system parameters and the dynamics can be integrated , as given in this case by eq .( [ eq.position ] ) . under these conditions ,sensors are not needed to determine how the system will behave and we can get the optimal behavior . with the explicit result of eq .( [ eq.position ] ) and knowledge of the system parameters , in this case the frequency and the initial position , we can obtain the desired control by choosing and such that .the choice that minimizes eq .( [ eq.criterion ] ) can be determined with standard variational techniques to be and an example of the resulting behavior for is shown in fig .[ fig.position ] .= 1.5 in this example shows how a control without feedback can correctly produce desired behaviors provided the system is accurately modeled , and it can be solved to determine the dynamical behavior .more realistically , we may have information on the nominal characteristics of the material , but various imperfections in the fabrication process or environment will cause the actual system to vary from the ideal case .in addition , anharmonicities in the potential will make it very difficult to integrate the dynamics even if the exact system parameters were known .thus , as described in [ sect.controls ] , this control method will not work as well when applied to more realistic systems .feedback control using sensors can address these problems to some extent .for instance , suppose we are attempting to control to a specific path , such as the one shown in fig .[ fig.position ] , to reached a desired value , by using a force given in eq .( [ eq.force ] ) .if the system behavior were perfectly known , the actual system would follow this path , according to eq .( [ eq.position ] ) .imperfections in the model or its evaluation will cause the system to deviate from this path .one way to address this is to add a feedback control force of the form . for the symmetric wave packets treated here, this additional force would have zero expected value if the system matched the modeled behavior .the overall control potential becomes even if the system dynamics is only approximately known , a large value for will keep the system fairly close to the ideal path . 
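the open - loop design and the feedback term just described can be illustrated with a short sketch . for gaussian packets in a quadratic potential the expected position obeys the classical equation of motion , so it suffices to integrate that equation for the packet centre . the forcing shape , the size of the model mismatch and the feedback gain below are assumptions chosen only to make the point ; the open - loop amplitude is obtained by scaling a unit response , which is legitimate because the response of the linear system is linear in the forcing .

```python
import numpy as np

# nominal model and control task (units with m = 1; all numbers assumed)
omega_nominal = 1.0
x_target, t_final, dt = 1.0, 6.0, 1e-3
times = np.arange(0.0, t_final, dt)

def center_trajectory(omega, forcing, gain=0.0, reference=None):
    """integrate x'' = -omega^2 x + forcing(t) + feedback for the packet centre."""
    x, p = 0.0, 0.0
    traj = np.empty_like(times)
    for i, t in enumerate(times):
        fb = -gain * (x - reference[i]) if reference is not None else 0.0
        a = -omega**2 * x + forcing(t) + fb
        p += a * dt                       # symplectic euler step
        x += p * dt
        traj[i] = x
    return traj

def shape(t):                             # fixed forcing shape b(t)/b0 (assumed)
    return np.sin(np.pi * t / t_final)

# open-loop design: scale the forcing so the *nominal* model reaches x_target
b0 = x_target / center_trajectory(omega_nominal, shape)[-1]
def open_loop(t):
    return b0 * shape(t)

nominal = center_trajectory(omega_nominal, open_loop)

omega_actual = 1.15                       # assumed fabrication error in the material
no_feedback = center_trajectory(omega_actual, open_loop)
with_feedback = center_trajectory(omega_actual, open_loop, gain=5.0, reference=nominal)

print(f"target {x_target:.2f} | nominal {nominal[-1]:.3f} | "
      f"mismatch, open loop {no_feedback[-1]:.3f} | mismatch, feedback {with_feedback[-1]:.3f}")
```

with an accurate model the open - loop force alone reaches the target ; with the mismatched frequency the packet misses it , and the feedback term pulls the trajectory back towards the designed path .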
on the other hand , this also means the controllers are using stronger forces , hence increasing the value of eq .( [ eq.criterion ] ) .this illustrates a general trade - off that feedback control provides : better performance when the system behavior is not known precisely , but at the expense of larger control forces .an example is shown in fig .[ fig.feedback ] .= 1.5 in in addition to providing more robust compensation in the presence of imperfect system models , feedback control ( i.e. , forces that depend on the value of ) can be used to modify the shape of the potential which governs dynamic behavior .one example of using this ability is to manipulate the width of the wave packet , something not possible for a constant applied force .furthermore , by changing the effective spring constant of the system , the control can change the energy level spacing , and thus the frequencies with which the system will resonate . near resonance ,a small change in can produce a large change in the system response .this result represents a case where small control forces can have a relatively large effect .specifically , near resonance , the size of the system response to an external force of frequency is proportional to .thus , small changes in the value of in the controller design , and the consequent small changes in , can lead to large changes in the system response .this change could in turn alter the damping of the external force , e.g. , low frequency sound waves in the system. provided these mechanical frequencies were small compared to the rate of the control computations , the control force would be able to track and respond to the external force at the desired frequency .another control task involves coupling the behavior of distinct parts of the smart matter .for example , suppose we have two oscillators whose behavior we want to have correlated . one way to do this is add a control force to the first oscillator , and to the second .this gives an overall effective potential of with . by rotating the coordinate system to use and this becomes thus this coupling gives an additional restoring force acting on the difference in position of the two oscillators , which will tend to keep their positions correlated .larger values of give stronger correlations , but also require more control force . in summary, these examples illustrate a variety of control methods that can be applied .although for simplicity we have considered harmonic oscillations , the control computations could also allow to vary with , e.g. 
, so the control is smaller when the system is near the desired position . this amounts to adding anharmonicity to the system . even more generally , could also be time - dependent : instead of the time - dependent force acting uniformly on the system as treated above , this would allow more complex control forces . although more difficult to analyze , this flexibility greatly extends the options for control strategies . therefore , one way to design quantum control methods is to apply standard classical control algorithms , with some modification , directly to quantum systems . one major difference arises from the fact that employing quantum computers requires the use of reversible control programs , and as a result there is no possibility of creating dissipative ( non - conservative ) control laws . one example , described above , is trying to control the location of the center of the packet to a desired value by applying a force to all values . although conceptually simple , it is by no means obvious that a control designed under the assumption of zero packet width ( i.e. , a classical algorithm ) will continue to work for quantum systems . moreover , some behaviors , such as the width of the packet , have no classical analogs , so controlling those aspects of the system will require uniquely quantum mechanical control algorithms . furthermore , control algorithms could also make use of interference . an example would be combining several different classical control algorithms so as to cancel out an undesired behavior , even though each control by itself produces that behavior . this gives additional options for control algorithms , beyond attempting to use a single classical method . as another example , the control could also manipulate the phase of the wave packet . a superposition of such controls , e.g. , based on different models of the system , may be possible where the phase varies slowly near the correct model , thus giving a strong contribution to the final result from the control choice in the superposition that actually matches the system parameters . an important practical issue involves the construction of the devices required for quantum smart matter . while some progress has been reported on the basic components of quantum computers , quantum smart matter also requires sensing and actuation abilities that can be coupled to such computations . thus progress in the development of quantum - based sensing and actuation is needed before applications , such as those presented in this paper , become possible . in addition , it is also important to understand the relative time scales possible for quantum computing and the various physical behaviors that might be controlled . this knowledge is required to develop effective sensing and actuation mechanisms for any particular application . only when the computing is relatively fast can we hope to construct smart matter for the behavior in question . to illustrate the concept of quantum smart matter , we present several potential applications . these examples are based on applying microscopic , decentralized or local , control to create a desired macroscopic result . these applications are active camouflage ; custom manipulated material properties ; control of certain chemical reaction rates ; and nonlinear , active springs for quantum machines . the key idea is to operate on the full quantum state through the control program to produce a desired ( global ) behavior . to some extent , classical controllers in smart matter could also be used .
however , as described above quantum computers allow for manipulating a wider range of behaviors , and in particular to apply feedback without disrupting the wave function .active camouflage takes advantage of the ability to create asymmetric , optical potentials within materials .manipulating these potentials allows the controller to manipulate the optical response to light striking that material and , thus , for example , change the color with which it appears to an external observer .this would allow quantum smart matter to determine the color of its exterior surroundings , and modify the material potentials to reflect a matching color .the end result is a material capable of actively blending in with its surroundings like a chameleon .there are several different time scales involved in this example .first is the fast interaction of light with the material .this is likely to be much faster than the time scale governing computational speed .however , for this application , the relevant physical time scale is the rate at which environment conditions change , which is much slower .hence substantial time could be available for computation , perhaps using feedback to gradually adjust the potentials to achieve a desired result .active materials employ quantum smart matter to manipulate their lattice structure to customize their mechanical properties .this could also be used to locally adjust the propagation of phonons or the specific heat of the material , as well as to actively adjust the material s mechanical behavior .these abilities would enable the development of active thermal and acoustic isolators .in addition , such behaviors could be employed to manipulate the displacement of a structure under specific disturbance inputs , or to direct the thermal stresses in a material to a desired location , similar to the way in which a photocopier directs paper along a specific path .several applications could be accomplished using classical computers for control .for example , a signal might be sent to a portion of the material to change the state of its control on the lattice structure , modifying the stiffness , ductility , and strength of the material as a result .such control inputs create a material that ( at different times ) is both stiff and flexible , depending on what properties were required , and where they were required. one important application of such a material is the forming of high strength alloys . in this instance , a high strength alloy could actively be made more ductile for forming , and then actively re - strengthened once in the proper shape .such an approach provides an elegant solution for the typical difficulty encountered in industrially forming high strength materials such as titanium .similar to the previous example , the relative time scales of physical interaction and computational action are important . 
in this case , the physical interaction occurs only as fast as the actuator bandwidth , while the computational bandwidth is a function of the quantum computers .the relevant time scale in this example is the desired speed for adjusting the material properties .however , these applications involve mechanical changes , which typically operate slowly compared to compuational speeds .chemical reaction rates can be affected by the local environment , such as electric fields due to nearby ions .smart matter capable of adjusting fields could be used to modify reaction rates at a surface .alternatively , the material could be dispersed throughout solutions containing the reactants .coordinated programs running on the individual pieces could then adjust reaction rates in a bulk medium .this example illustrates that smart matter need not consist of a single connected material since any necessary communication among the controllers could use light or acoustic waves rather than wires. a more subtle control would use local electric fields generated by the active devices in a distributed version of controlling reactions through external fields , a method that can also exploit quantum interference .chemical reactions can also be controlled through mechanical forces , thus providing another path for smart matter to influence chemical behaviors .one possible application would be the control of certain enzyme reactions . more generally , this would amount to a programmable catalyst .since reaction rates are generally quite rapid , this application would not involve active feedback response during reactions .rather the control would take place by modifying the conditions for the reactions at a slower time scale , leading to changes in the overall reaction rate or the mix of products .this could be done both through the reduction of critical reaction elements and by using external field changes previously determined to be useful .feedback at this slower time scale would still be useful for controlling complex reactions for which accurate models are difficult to evaluate .controlled nonlinear springs could be created by using quantum smart matter to control the bond strength between two ( or several ) molecules .this control could also be used as a finer scale version of current methods that modify molecular motion .specifically , electric fields could be generated to control the bond strength between two molecules , creating an active spring which might be utilized in quantum machines .the relevant time scale in this case is dependent on the application of the active spring created .specifically , any application of such a spring will have a maximum bandwidth that is necessary .as in the active material example , this will involve mechanical time scales , leaving plenty of time for computation .we have described an application of quantum computers to control the behavior of materials . even fairly small computers , involving only a few bits ,may be able to produce useful new behaviors that would otherwise be difficult to fabricate directly .the use of such programmable materials could allow for experiments on the behavior of many possible structures before deciding which few to actually attempt to fabricate . in this way, quantum smart matter could be used as a simulator for different quantum structures .it could also serve as an experimental platform for examining a range of macroscopic quantum effects by introducing programmable correlations in the overall quantum state of the material . 
however , quantum computers face serious implementation difficulties of decoherence and error control , especially for programs that require many steps .while substantial difficulities remain before such devices can be constructed , there is encouraging progress in the development of the basic components needed for quantum computation and methods for error control .the simple individual programs useful for smart matter may be less susceptible than others proposed for difficult computational problems , but on the other hand any requirement for communication over large distances will increase the difficulty of avoiding undesired coupling to the environment .this provides another reason to favor local , distributed control methods over the use of global controls .some local communications among the devices may provide an application for proposals to transmit quantum states .we have suggested some possible applications of such capabilities , but it remains to be seen whether the capacity to operate on superpositions provides enough of an improvement over classical computers to justify the difficulty in maintaining coherence . finally , if the promise of quantum computing is realized to improve combinatorial search , e.g. , for factoring or more general cases , this capability could also be used to give very complex potentials for the behavior of matter .we thank b. huberman and r. merkle for helpful comments on this work .charles h. bennett , gilles brassard , claude crepeau , richard jozsa , asher peres , and william k. wootters .teleporting an unknown quantum state via dual classical and einstein - podolsky - rosen channels . , 70(13):18951899 , 1993 .a. a. berlin , h. abelson , n. cohen , l. fogel , c. m. ho , m. horowitz , j. how , t. f. knight , r. newton , and k. pister .distributed information systems for mems .technical report , information systems and technology ( isat ) study , 1995 .bernardo huberman and scott h. clearwater .a multi - agent system for controlling building environments . in v. lesser , editor , _ proc .of the 1st international conference on multiagent systems ( icmas95 ) _ , pages 171176 , menlo park , ca , 1995 .aaai press .peter w. shor .algorithms for quantum computation : discrete logarithms and factoring . in s.goldwasser , editor , _ proc .of the 35th symposium on foundations of computer science _ , pages 124134 .ieee press , november 1994 .langchi zhu , valeria kleiman , xiaonong li , shao ping lu , karen trentelman , and robert j. gordon .coherent laser control of the product distribution obtained in the photoexcitation of hi . , 270:7780 , 1995 .
|
the development of small - scale sensors and actuators enables the construction of `` smart matter '' in which physical properties of materials are controlled in a distributed manner . in this paper , we describe how quantum computers could provide an additional capability , programmable control over some quantum behaviors of such materials . this emphasizes the need for spatial coherence , in contrast to the more commonly discussed issue of temporal coherence for quantum computing . we also discuss some possible applications and engineering issues involved in exploiting this possibility . _ a condensed version of this paper will appear in the physcomp96 conference proceedings . _
|
physicists investigate biological and sociological systems , applying their tools , borrowed mainly from statistical physics . in the last years, the evolution of languages and competition among them gained much interest of non - linguistics , especially researchers working in evolutionary systems similar to biological ones .there have been several approaches to reveal the similarities between the evolution of biological systems and languages at the end of the nineteenth century .numerical models simulating the competition of many languages have given new insights into the behavior of agents , for instance on a lattice , as well as into the size distribution of languages .a review of them can be found in ref . .our model is based on the well understood penna ageing model on a lattice which provides us a possibility to model a sexually reproducing stable population .we simplify the model by defining languages as an integer number , that is they are not composed of different words .thus we are able to avoid changes in the same language .the parents pass their language entirely to their offspring . in order to stabilize their distribution ,languages can be forgotten during lifetime .the study of an interface between originally different regions under different parameter sets reveals certain characteristics of the geographical distribution of the languages on the lattice as well as of the age structure of the agents for different languages .this articles is organized as follows : the next section explains the main features of the model and tries to justify the parameters we use .we present the results and the conclusions in the two following sections .this section is separated into two subsections in order to provide the reader a small review of the penna model on a lattice as well as to present our modifications adapting the model to a system where languages compete with each other .each individual or agent comprises two bit - strings ( diploid ) of 32 bits that are read in parallel . after birth , at every time step a new position of both bit - strings is read .a bit equal to 1 corresponds to a harmful allele .all individuals have five predefined dominant positions where one harmful allele suffices to represent an inherited disease starting to diminish health from that age on which corresponds to the bit position . at the other positions two set bits are needed to switch on the effect of a disease .as soon as the agent reaches an age at which the current number of deleterious mutations exceeds the threshold value , the agent dies . in order to have a stable population , every time step an individual dies with the additional probability : where is the actual population size and the so called carrying capacity representing a maximum population size . 
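a minimal sketch of the ageing and death step just described is given below ( python ) . the model requires five dominant bit positions , but their exact locations , the disease threshold , and the representation of the genome as two 32 - bit integers are assumptions of this sketch , not values taken from the simulations reported later ; reproduction , crossover and mutation are omitted here and follow in the next subsection .

```python
import random

GENOME_BITS = 32
DOMINANT = {2, 5, 9, 16, 23}   # five dominant positions; the exact locations are assumed
THRESHOLD = 3                  # assumed maximum number of active diseases an agent survives

class Agent:
    def __init__(self, genome_a=0, genome_b=0):
        self.genome = (genome_a, genome_b)   # two bit-strings, read in parallel
        self.age = 0
        self.language = 1                    # 1, 2 or 3 (= both); set at birth, used later

    def active_diseases(self):
        """harmful alleles switched on up to the current age."""
        count = 0
        for pos in range(min(self.age, GENOME_BITS)):
            a = (self.genome[0] >> pos) & 1
            b = (self.genome[1] >> pos) & 1
            if (pos in DOMINANT and (a or b)) or (a and b):
                count += 1
        return count

def ageing_step(population, carrying_capacity):
    """one time step of ageing: genetic death plus the verhulst death probability n / n_max."""
    n = len(population)
    survivors = []
    for agent in population:
        agent.age += 1
        if agent.active_diseases() > THRESHOLD:
            continue                          # dies of accumulated inherited diseases
        if random.random() < n / carrying_capacity:
            continue                          # dies of crowding
        survivors.append(agent)
    return survivors
```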
after reaching the minimum reproduction age ,a female agent searches every time step for a male , of age equal or greater than the minimum reproduction age , among the central site and its four nearest neighbors to generate two offspring .it selects a male on the central site with a probability of 25% , and if it fails , it searches among the nearest neighbor sites , at each one with a probability of 25% .the two bit - strings of the offspring are built by a random crossing and recombination of the parents bit - strings ( see ref .each new bit - string suffers a deleterious mutation ( from zero to one ) at a random position .if the selected bit is already 1 , it remains 1 in the offspring bit - string .the offspring is placed on a nearest neighbor site of the mother even if the site is already occupied ( which is different from the usual versions of the penna model on a lattice ) .every time step an agent moves to a randomly selected nearest neighbor site with probability , if this site is less or equally populated .the bit - strings are initialized randomly with zeros and ones at the first time step . for a more detailed description of the penna model and its implementation on a square latticewe refer to refs . . for simplicitywe define a language by an integer number and not by a bit - string which would describe , for instance , different words or an alphabet as in ref . .every agent speaks a language , an individual variable which can have three values : and mean that it speaks language 1 or 2 , respectively .the third possibility , , describes the case where an agent speaks both languages .our model contains two parameters dealing directly with these language values . at birthan offspring learns the language of its parents if they speak the same one(s ) . in the case of parents with different values of offspring speaks both languages ( ) with probability , otherwise it speaks only one language , each with the same probability .the other parameter is the probability , for which an agent may forget an already learned language .every time step an agent , which speaks _ both _ languages , counts the number of surrounding agents speaking language in its neighborhood . this neighborhood is defined by a square of a distance of sites from the central site , for instance the 8 nearest neighbors with or 24 with . the central site is not counted . if and only if there is a majority of people in the neighborhood speaking language 1 or 2 the agent forgets language 2 or 1 with probability , respectively .thus it speaks only the language which dominates in its surrounding .the lattice has free boundary conditions .in our simulations we restrict ourselves to the following initial conditions : half of the lattice of size is filled up with agents speaking language and the other half with language .we study stability and shape of the interface between these two regions .simulations with randomly distributed languages do not present a stable interface : the whole population speaks only one and the same language after a few time steps .the initial population consists of males and females randomly distributed over the lattice with the values of as described above .the carrying capacity is on a square lattice .the simulations show that the interface is neither stable for values smaller than one nor for low occupation ( less than 100 agents per site ) , which is controlled by the carrying capacity and the lattice size . 
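before turning to the interface results , the part of the model that is specific to languages , the forgetting rule , can be sketched as follows . the neighbourhood radius , the decision to let bilingual neighbours count towards both languages , and the representation of the lattice as a grid of agent lists are assumptions of this sketch ; the probability argument corresponds to the forgetting parameter used in the simulations , and the rule fires only when one language has a strict majority in the neighbourhood , as described above .

```python
import random

def forget_step(occupants, p_forget, radius=1):
    """one sweep of the forgetting rule.  occupants[i][j] is the list of agents on
    lattice site (i, j); agent.language is 1, 2 or 3 (= speaks both)."""
    size = len(occupants)
    for i in range(size):
        for j in range(size):
            for agent in occupants[i][j]:
                if agent.language != 3:
                    continue                      # only bilinguals can forget
                count = {1: 0, 2: 0}
                for di in range(-radius, radius + 1):
                    for dj in range(-radius, radius + 1):
                        if di == 0 and dj == 0:
                            continue              # the central site is not counted
                        ni, nj = i + di, j + dj
                        if 0 <= ni < size and 0 <= nj < size:   # free boundaries
                            for other in occupants[ni][nj]:
                                if other.language in (1, 3):
                                    count[1] += 1
                                if other.language in (2, 3):
                                    count[2] += 1
                if count[1] != count[2] and random.random() < p_forget:
                    # keep only the locally dominant language
                    agent.language = 1 if count[1] > count[2] else 2
```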
after a short timethe number of agents speaking begins to fluctuate strongly and finally converges into the stable state where only one language is spoken .in general the stability of the interface depends crucially on the initial state as well as on the random seed .for instance , figure [ fig : inst ] shows a simulation with , , and presenting such an instability .remain.,scaledwidth=80.0% ] , [ fig : inst ] we concentrate now on the results of the simulations for different values of the parameter , leading an agent to forget one of its languages .we fix the other parameters to , and . as a function of age ,figure [ fig : a_struct ] shows the population of bilinguals ( agents speaking both languages ) divided by the monolinguals ( or ) for different values of .the function decreases exponentially due to the effect that every time step a fraction of the bilinguals becomes monolingual . with increasing larger fraction of older bilingual agents forgets one of their two languages since in their environment they do not need both .the higher is the probability to forget a language , the smaller is the number of older agents speaking both languages and thus less offspring with are created .figure [ fig : freqpf ] depicts the mean value of monolinguals and bilinguals during one simulation for different values of .the number of bilinguals decreases with increasing as expected .it seems that the fraction of bilinguals decays roughly as power law with exponent for higher values of .divided by the number of agents with , as a function of ages .the number of agents speaking both languages decreases drastically with age for large values of .,scaledwidth=80.0% ] figure [ fig : distr_pf ] shows the number of agents with certain value of versus their position in direction perpendicular to the interface . the number is averaged over the direction parallel to the interface and is measured after time steps .we observe a quite stable interface between the two regions , each one with one of the two languages in majority .the number of agents speaking and decays exponentially at the interface as also reported in ref .interestingly , the number of bilinguals is constant over the whole lattice .the shape is not altered by changing .the dashed line corresponds to and the solid line to . for all values of find an exponential behavior at the interface.,scaledwidth=80.0% ] we increased the number of agents by setting and the initial populations to females and males for : now the exponential decay at the interface is observed clearly , as seen in figure [ fig : distrb ] . in our simulationswe have also changed the parameter , defining the number of neighbors an agent with examines , in order to know which language it can forget .the results for large are the same as for . on the interface .bilinguals are homogeneously distributed .the number of monolinguals decays exponentially at the interface.,scaledwidth=80.0% ] the distribution of speakers on the lattice for different movement rates is shown in figure [ fig : distrm ] .we set and .higher movement rates lead to a smoother interface .thus the exponential decay is weaker for large . 
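for completeness , the quantities plotted in these figures can be extracted from the lattice in a few lines ; the sketch below ( same illustrative data structures as above ) computes the bilingual - to - monolingual ratio per age and the speaker profile per column , averaged along the direction parallel to the interface .

```python
from collections import defaultdict

def bilingual_ratio_by_age(agents):
    """ratio of bilinguals (language == 3) to monolinguals, per age (cf. fig. [fig:a_struct])."""
    both, mono = defaultdict(int), defaultdict(int)
    for a in agents:
        (both if a.language == 3 else mono)[a.age] += 1
    return {age: both[age] / mono[age] for age in both if mono[age] > 0}

def interface_profile(occupants):
    """speakers of each language value per column, averaged over the direction
    parallel to the interface (cf. figs. [fig:distr_pf] and [fig:distrm])."""
    size = len(occupants)
    profile = [{1: 0, 2: 0, 3: 0} for _ in range(size)]
    for i in range(size):                 # i: distance perpendicular to the interface
        for j in range(size):
            for agent in occupants[i][j]:
                profile[i][agent.language] += 1
    for row in profile:
        for lang in row:
            row[lang] /= size             # average over the parallel direction
    return profile
```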
at very high movement ratesthe interface becomes unstable .with and .the larger is , the smoother becomes the steepness of the exponential decay.,scaledwidth=80.0% ]we present simulations where the population of speakers of two different languages are of similar size for at least time steps .this meta - stable state is obtained only for a large number of agents per site and initialization of the lattice by distributing the speakers of different languages on different halfs of the lattice .we can interpret that an interface of speakers in high populated areas , for instance at the canal street of new orleans where on one side french and on the other side english is spoken , is more stable than in low populated areas .different languages can not survive for long times if their speakers are not geographically separated .another result of the model is that the fraction of bilinguals relative to monolinguals decreases exponentially with age .older people , living for long time at the same place , do not need a second language .the mean value of the number of bilinguals versus the parameter to forget a second language shows a power law with exponent .the results of ref . are well reproduced in our model : at the interface the number of monolinguals decreases exponentially .surprisingly , the number of bilinguals distributes rather homogeneously over the whole grid .the exponential decay of the number of monolinguals becomes steeper for smaller movement rates but is left unaltered by the lattice size .higher movement rates lead to a more homogeneous distribution and can break the meta - stability of the interface . nowadays, globalization gives us the possibility to travel frequently over long distances and to stay larger periods at different places on earth , one of the reasons why languages are becoming extinct .we presented here the first model for language competition including ageing and sexual reproduction and reproduced well the results of other models although they are quite different .numerical agent - based models on the computer yield interesting results despite their simplicity , and we think that there will be much more to be done in future .i am funded by the daad ( deutscher akademischer austauschdienst ) and thank d. stauffer and s. moss de oliveira for very useful comments on my manuscript .
|
recently , individual - based models originally used for biological purposes revealed interesting insights into processes of the competition of languages . within this new field of population dynamics a model considering sexual populations with ageing is presented . the agents are situated on a lattice and each one speaks one of two languages or both . the stability and quantitative structure of an interface between two regions , initially speaking different languages , is studied . we find that individuals speaking both languages do not prefer any of these regions and have a different age structure than individuals speaking only one language .
|
centrality metrics , such as closeness or betweenness , quantify how central a node is in a network .they have been successfully used to carry analysis for various purposes such as structural analysis of knowledge networks , power grid contingency analysis , quantifying importance in social networks , analysis of covert networks , decision / action networks , and even for finding the best store locations in cities .several works which have been conducted to rapidly compute these metrics exist in the literature .the algorithm with the best asymptotic complexity to compute centrality metrics is believed to be asymptotically optimal .research have focused on either approximation algorithms for computing centrality metrics or on high performance computing techniques .today , it is common to find large networks , and we are always in a quest for better techniques which help us while performing centrality - based analysis on them . when the network topology is modified , ensuring the correctness of the centralities is a challenging task .this problem has been studied for dynamic and streaming networks . even for some applications involving a static network such as the contingency analysis of power grids and robustness evaluation of networks , to be prepared and take proactive measures , we need to know how the centrality values change when the network topology is modified by an adversary and outer effects such as natural disasters .a similar problem arises in network management for which not only knowing but also setting the centrality values in a controlled manner via topology modifications is of concern to speed - up or contain the entity dissemination .the problem is hard : there are candidate edges to delete and candidate edges to insert where and are the number of nodes and edges in the network , respectively . here , the main motivation can be calibrating the importance / load of some or all of the vertices as desired , matching their loads to their capacities , boosting the content spread , or making the network immune to adversarial attacks .similar problems , such as finding the most cost - effective way which reduces the entity dissemination ability of a network or finding a small set of edges whose deletion maximizes the shortest - path length , have been investigated in the literature .the problem recently regained a lot of attention : a generic study which uses edge insertions and deletions is done by tong et al .they use the changes on the leading eigenvalue to control / speed - up the dissemination process .other recent works investigate edge insertions to minimize the average shortest path distance or to boost the content spread . from the centrality point of view , there exist studies which focus on maximizing the centrality of a node set or a single node by edge insertions . in generic centrality - based network management problem ,the desired centralities of all the nodes need to be obtained or approximated with a small set of topology modifications . as figure[ fig : gseq ] shows , the effect of a local topology modification is usually global .furthermore , existing algorithms for incremental centrality computation are not efficient enough to be used in practice .thus , novel incremental algorithms are essential to quickly evaluate the effects of topology modifications on centrality values . 
, , and , respectively ) insertions / deletions , and values of closeness centrality.,title="fig:",scaledwidth=38.0% ] , , and , respectively ) insertions / deletions , and values of closeness centrality.,title="fig:",scaledwidth=47.0% ]our contributions can be summarized as follows : 1 . to attack the variants of the centrality - based network management problem , we propose incremental algorithms which efficiently update the closeness centralities upon edge insertions and deletions .2 . the proposed algorithms can serve as a fundamental building block for other shortest - path - based network analyses such as the temporal analysis on the past network data , maintaining centrality on streaming networks , or minimizing / maximizing the average shortest - path distance via edge insertions and deletions .3 . compared with the existing algorithms ,our algorithms have a low - memory footprint making them practical and applicable to very large graphs . for random edge insertions / deletions to the wikipedia users communication graph, we reduced the centrality ( re)computation time from 2 days to 16 minutes . andfor the real - life temporal dblp coauthorship network , we reduced the time from 1.3 days to 4.2 minutes .the proposed techniques can easily be adapted to algorithms for approximating centralities . as a result, one can employ a more accurate and faster sampling and obtain better approximations .the rest of the paper is organized as follows : section [ sec : bac ] introduces the notation and formally defines the closeness centrality metric .section [ sec : manage ] defines network management problems we are interested .our algorithms explained in detail in section [ sec : main ] .existing approaches are described in section [ sec : rel ] and the experimental analysis is given in section [ sec : exp ] .section [ sec : con ] concludes the paper .let be a network modeled as a simple graph with vertices and edges where each node is represented by a vertex in , and a node - node interaction is represented by an edge in .let be the set of vertices which are connected to in .a graph is a _ subgraph _ of if and .a _ path _ is a sequence of vertices such that there exists an edge between consecutive vertices .a path between two vertices and is denoted by ( we sometimes use to denote a specific path with endpoints and ) .two vertices are _ connected _ if there is a path from to .if all vertex pairs are connected we say that is _connected_. if is not connected , then it is _ disconnected _ and each maximal connected subgraph of is a _ connected component _ , or a component , of .we use to denote the length of the shortest path between two vertices in a graph . if then . and if and are disconnected , then . given a graph , a vertex called an _ articulation vertex _ if the graph ( obtained by removing ) has more connected components than .similarly , an edge is called a _ bridge _ if ( obtained by removing from ) has more connected components than . is _ biconnected _ if it is connected and it does not contain an articulation vertex .a maximal biconnected subgraph of is a _biconnected component_. given a graph , the _ farness _ of a vertex is defined as = \sum_{\stackrel{v \in v}{{{d}}_g(u , v ) \neq\infty } } { { d}}_g(u , v).\ ] ] and the closeness centrality of is defined as = \frac{1}{{{\tt far}\xspace}[u]}. \label{eq : first}\ ] ] if can not reach any vertex in the graph = 0 ] , the sum of the distances which are different than . and , as the last step, it computes ] . 
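the full recomputation that the incremental algorithms avoid can be sketched in a few lines of python for unweighted graphs ( adjacency stored as a dict of neighbour sets ; an illustrative sketch , not the implementation evaluated in section [ sec : exp ] ) : one bfs per vertex , unreachable pairs ignored , and a vertex that reaches nobody gets centrality 0 . the following sections are about avoiding this work for the many vertices whose closeness is unaffected by a single edge insertion or deletion .

```python
from collections import deque

def sssp_lengths(adj, source):
    """bfs distances from source in an unweighted graph given as {v: set(neighbours)}."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def closeness_all(adj):
    """cc[u] = 1 / far[u]; unreachable pairs are ignored, isolated vertices get 0."""
    cc = {}
    for u in adj:
        far = sum(d for v, d in sssp_lengths(adj, u).items() if v != u)
        cc[u] = 1.0 / far if far > 0 else 0.0
    return cc
```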
the following theorem is used to detect such vertices and filter their sssps .let be a graph and and be two vertices in s.t . . let . then = { { \tt cc}\xspace}'[s] ] .if is only connected to one of and in the difference is , and the closeness centrality score of needs to be updated by using the new , larger connected component containing . when is connected to both and in , we investigate the edge insertion in three cases as shown in figure [ fig : leveldiffs ] : case 1 . : assume that the path is a shortest path in containing . since there exist another path in with one less edge .hence , can not be in a shortest path : . : let and assume that is a shortest path in containing . since , there exist another path in with the same number of edges .hence , . : let .the path in is shorter than the shortest path in since .hence , an update on ] if and only if . with this corollary, the work filter can be implemented for both edge insertions and deletions .the pseudocode of the update algorithm in case of an edge insertion is given in algorithm [ alg : filtered ] .when an edge is inserted / deleted , to employ the filter , we first compute the distances from and to all other vertices . and , it filters the vertices satisfying the statement of theorem [ thm : add ] . \gets ] sssp( , ) in theory , filtering by levels can reduce the update time significantly .however , in practice , its effectiveness depends on the underlying structure of .many real - life networks have been repeatedly shown to possess unique characteristics such as a small diameter and a power - law degree distribution . and the spread of information is extremely fast .the proposed filter exploits one of these characteristics for efficient closeness centrality updates : the distribution of shortest - path lengths .its efficiency is based on the phenomenon shown in figure [ fig : levels ] for a set of graphs used in our experiments : the probability distribution function for a shortest - path length being equal to is unimodular and spike - shaped for many social networks and also some others .this is the outcome of the short diameter and power - law degree distribution .on the other hand , for some spatial networks such as road networks , there are no sharp peaks and the shortest - path distances are distributed in a more uniform way .the work filter we propose here prefer the former . for four social and web networks . ]our work filter can be enhanced by employing and maintaining a biconnected component decomposition ( bcd ) of .a bcd is a partitioning of the edge set where indicates the component of each edge .a toy graph and its bcds before and after edge insertions are given in figure [ fig : bcd ] .when is inserted to and is obtained , we check if is empty or not . if the intersection is not empty , there will be only one element in it , , which is the i d of the biconnected component of containing ( otherwise is not a valid bcd ) . in this case , is set to for all and is set to . if there is no biconnected component containing both and ( see figure [ fig : pig2 ] ) , i.e. , if the intersection above is empty , we construct from scratch and set . 
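a sketch of the filtered update for an edge insertion is given below , reusing `sssp_lengths` from the sketch above . it follows the corollary : a new sssp is needed only for vertices whose distances to the two endpoints in the old graph differ by at least two , or which were connected to exactly one of the endpoints ; vertices in a different component are untouched . the bcd bookkeeping described just above is omitted here for brevity .

```python
def insert_edge_and_update(adj, cc, u, v):
    """update closeness after inserting (u, v); sssp_lengths as defined in the earlier sketch."""
    du = sssp_lengths(adj, u)            # distances in the old graph
    dv = sssp_lengths(adj, v)

    adj[u].add(v)                        # apply the insertion
    adj[v].add(u)

    for s in adj:
        in_u, in_v = s in du, s in dv
        if not in_u and not in_v:
            continue                     # s lies in another component: unaffected
        if in_u and in_v and abs(du[s] - dv[s]) <= 1:
            continue                     # filtered: the closeness of s can not change
        far = sum(d for w, d in sssp_lengths(adj, s).items() if w != s)
        cc[s] = 1.0 / far if far > 0 else 0.0
    return cc
```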
can be computed in linear , time .hence , the cost of bcd maintenance is negligible compared to the cost of updating closeness centrality .let be the biconnected component of containing where let be the set of articulation vertices in .given , it is easy to detect the articulation vertices since is an articulation vertex if and only if it is part of at least two components in the bcd : .we will execute sssps only for the vertices in and use the new values to fix the centralities for the rest of the graph .the contributions of the vertices in are integrated to the sssps by using a representative function which maps each vertex either to a representative in or to ( if and the vertices in are in different connected components of ) . for each vertex , we set = u ] .otherwise , ] , the following is a corollary of the theorem .[ cor : path ] for each vertex with \neq { \tt null} ] which is different than .furthermore , for a vertex which is also represented in but not in the connected component of containing , is equal to ) + { { d}}_{g'}({{rep}}[v],{{rep}}[w ] ) + { { d}}_{g'}({{rep}}[w],w).\ ] ] if the last term on the right is , since = w ] is the number of vertices in which are represented by ( including ) .and ] , \gets \left|\{v\in v , { { rep}}[v ] = u\}\right| ] , \gets ] sssp( , ) [ lem : bcd1 ] for each vertex , algorithm [ alg : combined ] computes the correct ] is correct for all .let be the vertex whose closeness centrality update is started at line [ line : source ] . at line [ line : far ] of algorithm [ alg : combined ] , the update on ] which can be rewritten as = w}}{{d}}_{g'}(v , w ) + { { d}}_{g'}(w , u),\ ] ] by using and . according to corollary [ cor : path ] , this is equal to = w}}{{d}}_{g'}(v , u).\ ] ] due to the definition of , only the vertices which are connected to will have an effect on ] in as desired .[ lem : bcd2 ] for each vertex , algorithm [ alg : combined ] computes the correct ] is correct for all after the fix phase .let ] .if and are in the same connected component of then and . hence , the change on ] due to are both . on the other hand , if is in a different connected component of according to corollary [ cor : path ] , ) + { { d}}_{g'}({{rep}}[w],w),\ ] ] where the sum of the second and the third terms is equal to .since the first term does not change by the insertion of , the change on is equal to the change on .that is when aggregated , the change on ] .lemma [ lem : bcd1 ] implies that ] , computed at line [ line : farup ] , must also be correct .for each vertex , algorithm [ alg : combined ] computes the correct ] .we then sort the vertices with respect to their hash values and construct the type - i vertex - classes by eliminating false positives due to collisions on the hash function .a similar process is applied to detect type - ii vertex classes .the complexity of this initial construction is assuming the number of collisions is small and hence , false - positive detection cost is negligible . maintaining the equivalance classes in case of edge insertions and deletionsis easy : for example , when is added to , we first subtract and from their classes and insert them to new ones ( or leave them as singleton if none of the vertices are now identical with them ) .the cost of this maintenance is . while updating closeness centralities of the vertices in , we execute an sssp at line [ line : source ] of algorithm [ alg : combined ] for at most one vertex from each class . 
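a small sketch ( ours ) of how the type - i classes can be formed ; exact neighbor - set keys replace the hashing - plus - false - positive - elimination of the actual implementation , and type - ii classes ( identical closed neighborhoods ) can be built the same way by adding the vertex itself to its key .

def type_i_classes(adj):
    # vertices with identical (open) neighborhoods are equidistant from every other
    # vertex, hence share one closeness value; a single sssp per class suffices
    buckets = {}
    for u, nbrs in adj.items():
        buckets.setdefault(frozenset(nbrs), []).append(u)
    return list(buckets.values())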
for the rest of the vertices , we use the same closeness centrality value .the improvement is straightforward and the modifications are minor . for brevity , we do not give the pseudocode . the spike - shaped distribution given in figure [ fig : levels ]can also be exploited for sssp hybridization .consider the execution of algorithm [ alg : cc ] : while executing an sssp with source , for each vertex pair , is processed before if and only if .that is , algorithm [ alg : cc ] consecutively uses the vertices with distance to find the vertices with distance .hence , it visits the vertices in a _ top - down _ manner .sssp can also be performed in a a _ bottom - up _ manner .that is to say , after all distance ( level ) vertices are found , the vertices whose levels are unknown can be processed to see if they have a neighbor at level .figure [ fig : bfstimes ] gives the execution times of bottom - up and top - down sssp variants for processing each level .the trend for top - down resembles the shortest distance distribution in small - world networks .this is expected since in each level , the vertices that are step far away from are processed . on the other hand , for the bottom - up variant , the execution time is decreasing since the number of unprocessed nodes is decreasing .following the idea of beamer et al . , we hybridize the sssps throughout the centrality update phase in algorithm [ alg : combined ] .we simply compare the number of edges need to be processed for each variant and choose the cheaper one . for the case presented in figure[ fig : bfstimes ] , the hybrid algorithm is times faster than the top - down variant . and times faster than the top - down and bottom - up versions respectively . , scaledwidth=40.0% ]to the best of our knowledge , there are only two works that deal with maintaining centrality in dynamic networks . yet , both are interested in betweenness centrality .lee et al . proposed the * qube * framework which updates betweenness centrality in case of edge insertion and deletion within the network .* qube * relies on the biconnected component decomposition of the graphs . upon an edge insertion or deletion , assuming that the decomposition does not change , only the centrality values within the updated biconnected component are recomputed from scratch .if the edge insertion / deletion affects the decomposition the modified graph is decomposed into its biconnected components and the centrality values in the affected part are recomputed .the distribution of the vertices to the biconnected components is an important criteria for the performance of * qube*. if a large component exists , which is the case for many real - life networks , one should not expect a significant reduction on update time .unfortunately , the performance of * qube * is only reported on small graphs ( less than 100k edges ) with very low edge density .in other words , it only performs significantly well on small graphs with a tree - like structure having many small biconnected components . green et al .proposed a technique to update centrality scores rather than recomputing them from scratch upon edge insertions ( can be extended to edge deletions ) .the idea is storing the whole data structure used by the previous betweenness centrality update kernel .this storage is indeed useful for two main reasons : it avoids a significant amount of recomputation since some of the centrality values will stay the same . 
and second, it enables a partial traversal of the graph even when an update is necessary .however , as the authors state , values must be kept on the disk . for the wikipedia user communication and dblp coauthorship networks , which contain thousands of vertices and millions of edges , the technique by green et al .requires terabytes of memory .the largest graph used in has approximately vertices and edges ; the quadratic storage cost prevents their storage - based techniques to scale any higher .on the other hand , the memory footprint of our algorithms are linear and hence they are much more practical .we implemented our algorithms in c. the code is compiled with gcc v4.6.2 and optimization flags -o2 -dndebug .the graphs are kept in memory in the compressed row storage ( crs ) format .the experiments are run on a computer with two intel xeon e cpu clocked at and equipped with gb of main memory .all the experiments are run sequentially . for the experiments , we used networks from the ufl sparse matrix collection and we also extracted the coauthor network from current set of dblp papers .properties of the graphs are summarized in table [ tab : graph_prop ] .we symmetrized the directed graphs .the graphs are listed by increasing number of edges and a distinction is made between small graphs ( with less than 500k edges ) and the large graphs ( with more than 500k ) edges . to assess the effectiveness of our algorithms, we need to know that when each edge is inserted to / deleted from the graph .our datasets from ufl sparse matrix collection do not have this information . to conduct our experiments on these datasets, we delete 1,000 edges from a graph chosen randomly in the following way : a vertex is selected randomly ( uniformly ) , and a vertex is selected randomly ( uniformly ) .since we do not want to change the connectivity in the graph ( having disconnected components can make our algorithms much faster and it will not be fair to cc ) , we discard if it is a bridge .if this is not the case we delete it from and continue .we construct the initial graph by deleting these 1,000 edges .each edge is then inserted one by one , and our algorithms are used to recompute the closeness centrality after each insertion . beside these random insertion experiments, we also evaluated our algorithms on a real temporal dataset of the dblp coauthor graph . in this graph, there is an edge between two authors if they published a paper .publication dates are used as timestamps of edges .we first constructed the graph for the papers published before january 1 , 2013 .then , we inserted the coauthorship edges of the papers since then .although our experiments perform edge insertion , edge deletion is a very similar process which should give comparable results .in addition to cc , we configure our algorithms in four different ways : cc - bonly uses biconnected component decomposition ( bcd ) , cc - bluses bcd and filtering with levels , cc - bliuses all three work filtering techniques including identical vertices . and cc - blihuses all the techniques described in this paper including sssp hybridization. 
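as a point of reference for the sssp hybridization used by cc - blih , the per - level choice between the two bfs variants can be sketched as follows ( a simplified cost model of our own , not the tuned heuristic of beamer et al . ) :

def expand_level(adj, frontier, visited):
    # one bfs level of a direction-optimizing traversal; visited must already contain
    # every vertex at the current level or closer, including the frontier itself
    unvisited = [v for v in adj if v not in visited]
    top_down = sum(len(adj[u]) for u in frontier)     # edges scanned from the frontier
    bottom_up = sum(len(adj[v]) for v in unvisited)   # edges scanned from unvisited vertices
    nxt, f = set(), set(frontier)
    if top_down <= bottom_up:
        for u in frontier:                            # classic top-down expansion
            for w in adj[u]:
                if w not in visited:
                    nxt.add(w)
    else:
        for v in unvisited:                           # bottom-up: look for a parent in the frontier
            if any(w in f for w in adj[v]):
                nxt.add(v)
    return nxt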
table [ tab : results ] presents the results of the experiments.the second column , cc , shows the time to run the full brandes algorithm for computing closeness centrality on the original version of the graph .columns of the table present absolute runtimes ( in seconds ) of the centrality computation algorithms .the next four columns , , give the speedups achieved by each configuration .for instance , on the average , updating the closeness values by using cc - bon _ pgpgiantcompo _ is times faster than running cc .finally the last column gives the overhead of our algorithms per edge insertion , i.e. , the time necessary to detect the vertices to be updated , and maintain bcd and identical - vertex classes .geometric means of these times and speedups are also given to provide comparison across instances .the times to compute closeness centrality using ccon the small graphs range between to seconds . on large graphs ,the times range from minutes to hours .clearly , ccis not suitable for real - time network analysis and management based on shortest paths and closeness centrality .when all the techniques are used ( cc - blih ) , the time necessary to update the closeness centrality values of the small graphs drops below seconds per edge insertion .the improvements range from a factor of ( _ cond - mat-2005 _ ) to ( _ pgpgiantcompo _ ) , with an average improvement of across small instances . on large graphs ,the update time per insertion drops below minutes for all graphs .the improvements range from a factor of ( _ loc - gowalla _ ) to ( _ dblp - coauthor _ ) , with an average of . for all graphs ,the time spent filtering the work is below one second which indicates that the majority of the time is spent for sssps .note that this part is pleasingly parallel since each sssp is independent from each other .the overall improvement obtained by the proposed algorithms is very significant .the speedup obtained by using bcds ( cc - b ) are and on the average for small and large graphs , respectively .the graphs _ pgpgiantcompo _ , and _ wiki - talk _ benefits the most from bcds ( with speedups and , respectively ) . clearly using the biconnected component decomposition improves the update performance .however , filtering by level differences is the most efficient technique : cc - blbrings major improvements over cc - b . for all social networks ,cc - blincreased the performance when compared with cc - b , the speedups range from ( _ web - notredame _ ) to ( _ dblp - coauthor _ ) .overall , cc - blbrings a improvement on small graphs and a improvement on large graphs over cc .for each added edge , let be the random variable equal to . by using 1,000 edges , we computed the probabilities of the three cases we investigated before and give them in fig .[ fig : leveldiffs01 ] . for each graph in the figure , the sum of first two columns gives the ratio of the vertices not updated by cc - bl .for the networks in the figure , not even of the vertices require an update ( ) .this explains the speedup achieved by filtering using level differences .therefore , level filtering is more useful for the graphs having characteristics similar to small - world networks . 
into three cases we investigated whenan edge is added ., scaledwidth=44.0% ] filtering with identical vertices is not as useful as the other two techniques in the work filter .overall , there is a times improvement with cc - blion both small and large graphs compared to cc - bl .for some graphs , such as _ web - notredame _ and _ web - google _ , improvements are much higher ( and , respectively ) .finally , the hybrid implementation of sssp also proved to be useful .cc - blihis faster than cc - bliby a factor of on small graphs and by a factor of on large graphs .although it seems to improve the performance for all graphs , in some few cases , the performance is not improved significantly .this can be attributed to incorrect decisions on sssp variant to be used .indeed , we did not benchmark the architecture to discover the proper parameter . cc - blihperforms the best on social network graphs with an improvement ratio of ( _ soc - sign - epinions _ ) , ( _ loc - gowalla _ ) , and ( _ wiki - talk _ ) .all the previous results present the average update time for 1,000 successively added edges .hence , they do not say anything about the variance . figure [ fig : updatedist ] shows the runtimes of cc - band cc - blihper edge insertion for _ web - notredame _ in a sorted order .the runtime distribution of cc - bclearly has multiple modes . either the runtime is lower than milliseconds or it is around seconds .we see here the benefit of bcd . according to the runtime distribution , about of _ web - notredame_ s vertices are inside small biconnected components .hence , the time per edge insertion drops from 2,845 seconds to 700 .indeed , the largest component only contains of the vertices and of the edges of the original graph .the decrease in the size of the components accounts for the gain of performance. added edges of _ web - notredame_.,scaledwidth=44.0% ] the impact of level filtering can also be seen on figure [ fig : updatedist ] . of the edges in the main biconnected component do not change the closeness values of many vertices and the updates that are induced by their addition take less than second .the remaining edges trigger more expensive updates upon insertion . within these expensive edge insertions , identical vertices and sssp hybridization provide a significant improvement ( not shown in the figure ) . [[ better - speedups - on - real - temporal - data ] ] better speedups on real temporal data + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the best speedups are obtained on the dblp coauthor network , which uses real temporal data .using cc - b , we reach speedup w.r.t .cc , which is bigger than the average speedup on all networks .main reason for this behavior is that of the inserted edges are actually the new vertices joining to the network , i.e. , authors with their first publication , and cc - bhandles these edges quite fast .applying cc - blgives a speedup over cc - b , which is drastically higher than on all other graphs .indeed , only of the vertices require to run a sssp algorithm when an edge is inserted on the dblp network .for the synthetic cases , this number is .cc - bliprovides similar speedups with random insertions and cc - blihdoes not provide speedups because of the structure of the graph .overall , speedups obtained with real temporal data reaches , i.e. 
, times greater than the average speedup on all graphs .our algorithms appears to perform much better on real applications than on synthetic ones .all the techniques presented in this paper allow to update closeness centrality faster than the non - incremental algorithm presented in by a factor of on small graphs and on large ones .small - world networks such as social networks benefit very well from the proposed techniques .they tend to have a biconnected component structure that allow to gain some improvement using cc - b. however , they usually have a large biconnected component and still , most of the gain is derived from exploiting their spike - shaped distance distribution which brings at least a factor of .identical vertices typically brings a small amount of improvement but helps to increase the performance during expensive updates . using all the techniques, we achieved to reduce the closeness centrality update time from days to minutes for the graph with the most vertices in our dataset ( _ wiki - talk _ ) . andfor the temporal dblp coauthorship graph , which has the most edges , we reduced the centrality update time from 1.3 days to 4.2 minutes .in this paper we propose the first algorithms to achieve fast updates of exact centrality values on incremental network modification at such a large scale .our techniques exploit the biconnected component decomposition of these networks , their spike - shaped shortest - distance distributions , and the existence of nodes with identical neighborhood . in large networks with more than edges , our techniques proved to bring a times speedup in average . with a speedup of 458, the proposed techniques may even allow dblp to reflect the impact on centrality of the papers published in quasi real - time .our algorithms will serve as a fundamental building block for the centrality - based network management problem , closeness centrality computations on dynamic / streaming networks , and their temporal analysis .the techniques presented in this paper can directly be extended in two ways .first , using a statistical sampling to compute an approximation of closeness centrality only requires a minor adaptation on the sssp kernel to compute the contribution of the source vertex to other vertices instead of its own centrality .second , the techniques presented here also apply to betweenness centrality with minor adaptations . as a future work, we plan to investigate local search techniques for the centrality - based network management problem using our incremental centrality computation algorithms .this work was supported in parts by the doe grant de - fc02 - 06er2775 and by the nsf grants cns-0643969 , oci-0904809 , and oci-0904802 .k. madduri , d. ediger , k. jiang , d. a. bader , and d. g. chavarra - miranda . a faster parallel algorithm and efficient multithreaded implementations for evaluating betweenness centrality on massive datasets . in _ proc .of ipdps _ , 2009 .
|
analyzing networks requires complex algorithms to extract meaningful information. centrality metrics have been shown to be correlated with the importance and loads of the nodes in network traffic. here, we are interested in the problem of centrality-based network management. the problem has many applications, such as verifying the robustness of networks and controlling or improving entity dissemination. it can be defined as finding a small set of topological network modifications which yield a desired closeness centrality configuration. as a fundamental building block to tackle that problem, we propose incremental algorithms which efficiently update the closeness centrality values upon changes in network topology, i.e., edge insertions and deletions. our algorithms are shown to be efficient on many real-life networks, especially on small-world networks, which have a small diameter and a spike-shaped shortest-distance distribution. in addition to closeness centrality, they can also serve as building blocks for shortest-path-based management and analysis of networks. we experimentally validate the efficiency of our algorithms on large networks and show that they update the closeness centrality values of the temporal dblp coauthorship network of 1.2 million users 460 times faster than it would take to compute them from scratch. to the best of our knowledge, this is the first work which can yield practical large-scale network management based on closeness centrality values.
|
_ active learning_ refers to a family of powerful supervised learning protocols capable of producing more accurate classifiers while using a smaller number of labeled data points than traditional ( passive ) learning methods .here we study a variant known as _ pool - based _ active learning , in which a learning algorithm is given access to a large pool of unlabeled data ( i.e. , only the covariates are visible ) , and is allowed to sequentially request the label ( response variable ) of any particular data points from that pool .the objective is to learn a function that accurately predicts the labels of new points , while minimizing the number of label requests .thus , this is a type of sequential design scenario for a function estimation problem .this contrasts with passive learning , where the labeled data are sampled at random . in comparison , by more carefully selecting which points should be labeled , active learning can often significantly decrease the total amount of effort required for data annotation. this can be particularly interesting for tasks where unlabeled data are available in abundance , but label information comes only through significant effort or cost . recently , there have been a series of exciting advances on the topic of active learning with arbitrary classification noise ( the so - called _ agnostic _ pac model ) , resulting in several new algorithms capable of achieving improved convergence rates compared to passive learning under certain conditions .the first , proposed by balcan , beygelzimer and langford was the ( agnostic active ) algorithm , which provably never has significantly worse rates of convergence than passive learning by empirical risk minimization .this algorithm was later analyzed in detail in , where it was found that a complexity measure called the _ disagreement coefficient _ characterizes the worst - case convergence rates achieved by for any given hypothesis class , data distribution and best achievable error rate in the class .the next major advance was by dasgupta , hsu and monteleoni , who proposed a new algorithm , and proved that it improves the dependence of the convergence rates on the disagreement coefficient compared to .both algorithms are defined below in section [ sec : algorithms ] .while all of these advances are encouraging , they are limited in two ways .first , the convergence rates that have been proven for these algorithms typically only improve the dependence on the magnitude of the noise ( more precisely , the noise rate of the hypothesis class ) , compared to passive learning .thus , in an asymptotic sense , for nonzero noise rates these results represent at best a constant factor improvement over passive learning .second , these results are limited to learning with a fixed hypothesis class of limited expressiveness , so that convergence to the bayes error rate is not always a possibility . 
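to fix intuition before discussing these limitations further , the following minimal python sketch ( ours , for the noise - free special case of threshold classifiers only , and therefore far simpler than the agnostic algorithms studied in this paper ) shows the basic disagreement - based protocol : labels are requested only for points on which the surviving hypotheses still disagree .

def cal_thresholds(pool, label_of):
    # version space: thresholds in (lo, hi] consistent with all labels seen so far,
    # where a threshold t predicts +1 exactly on points x >= t
    lo, hi, queries = 0.0, 1.0, 0
    for x in pool:                      # unlabeled pool of points in the unit interval
        if lo < x < hi:                 # x lies in the region of disagreement: query it
            queries += 1
            if label_of(x) > 0:         # +1 means the true threshold is at most x
                hi = x
            else:
                lo = x
        # otherwise every surviving threshold assigns x the same label, so no query
    return 0.5 * (lo + hi), queries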
on the first of these limitations , recent work by castro and nowak on learning threshold classifiers discovered that if certain parameters of the noise distribution are _ known _ ( namely , parameters related to tsybakov s margin conditions ) , then we can achieve strict improvements in the asymptotic convergence rate via a specific active learning algorithm designed to take advantage of that knowledge for thresholds .subsequently , balcan , broder and zhang proved a similar result for linear separators in higher dimensions , and castro and nowak showed related improvements for the space of boundary fragment classes ( under a somewhat stronger assumption than tsybakov s ) .however , these works left open the question of whether such improvements could be achieved by an algorithm that does not explicitly depend on the noise conditions ( i.e. , in the _ agnostic _ setting ) , and whether this type of improvement is achievable for more general families of hypothesis classes , under the usual complexity restrictions ( e.g. , vc class , entropy conditions , etc . ) . in a personal communication ,john langford and rui castro claimed achieves these improvements for the special case of threshold classifiers ( a special case of this also appeared in ) .however , there remained an open question of whether such rate improvements could be generalized to hold for arbitrary hypothesis classes . in section [ sec : rates ] , we provide this generalization .we analyze the rates achieved by under tsybakov s noise conditions ; in particular , we find that these rates are strictly superior to the known rates for passive learning , when the disagreement coefficient is finite. we also study a novel modification of the algorithm of dasgupta , hsu and monteleoni , proving that it improves upon the rates of in its dependence on the disagreement coefficient .additionally , in section [ sec : aggregation ] , we address the second limitation by proposing a general model selection procedure for active learning with an arbitrary structure of nested hypothesis classes .if the classes have restricted expressiveness ( e.g. , vc classes ) , the error rate for this algorithm converges to the best achievable error by any classifier in the structure , at a rate that adapts to the noise conditions and complexity of the optimal classifier . in general ,if the structure is constructed to include arbitrarily good approximations to any classifier , the error converges to the bayes error rate in the limit .in particular , if the bayes optimal classifier is in some class within the structure , the algorithm performs nearly as well as running an agnostic active learning algorithm on that single hypothesis class , thus preserving the convergence rate improvements achievable for that class .in the active learning setting , there is an _ instance space _ , a _ label space _ and some fixed distribution over , with marginal over .the restriction to binary classification ( ) is intended to simplify the discussion ; however , everything below generalizes quite naturally to multiclass classification ( where ) .there are two sequences of random variables : and where each pair is independent of the others , and has joint distribution . 
however , the learning algorithm is only permitted direct access to the values ( unlabeled data points ) , and must request the values one at a time , sequentially .that is , the algorithm picks some index to observe the value , then after observing it , picks another index to observe the label value , etc .we are interested in studying the rate of convergence of the error rate of the classifier output by the learning algorithm , in terms of the number of label requests it has made . to simplify the discussion, we will think of the data sequence as being essentially inexhaustible , and will study -confidence bounds on the error rate of the classifier produced by an algorithm permitted to make at most label requests , for a fixed value .the actual number of ( unlabeled ) data points the algorithm uses will be made clear in the proofs ( typically close to the number of points needed by passive learning to achieve the stated error guarantee ) . a _ hypothesis class_ is any set of measurable classifiers .we will denote by the vc dimension of ( see , e.g. , ) . for any measurable and distribution over ,define the _ error rate _ of as ; when , we abbreviate this as .this simply represents the risk under the loss .we also define the _ conditional error rate _ , given a set , as .let , called the _ noise rate _ of . for any ,let , let - 1 ] denote the _ empirical error rate _ on , [ and define by convention ]. it will often be convenient to make use of sets of ( index , label ) pairs , where the index is used to uniquely refer to an element of the sequence ( while conveniently also keeping track of relative ordering information ) ; in such contexts , we will overload notation as follows . for a classifier , and a finite set of ( index , label ) pairs ,let ] and measurable , let [ def : disagreement - coefficient ] the _ disagreement coefficient _ of with respect to under is defined as where ( though see appendix [ subsec : r0 ] for alternative possibilities for ) .[ def : global - disagreement - coefficient ] we further define the disagreement coefficient for the hypothesis class with respect to the target distribution as }} ] is any sequence in with }) ] if the minimum is achieved ] . in definition [ def : disagreement - coefficient ] , it is conceivable that may sometimes not be measurable . in such cases , we can define as the _ outer _ measure , so that it remains well defined .we continue this practice below , letting and ( and indeed any reference to `` probability '' ) refer to the outer expectation and measure in any context for which this is necessary.=1 because of its simple intuitive interpretation , measuring the amount of disagreement in a local neighborhood of some classifier , the disagreement coefficient has the wonderful property of being relatively simple to calculate for a wide range of learning problems , especially when those problems have a natural geometric representation . to illustrate this, we will go through a few simple examples from .consider the hypothesis class of thresholds on the interval ] . in this case, it is clear that the disagreement coefficient is , since for sufficiently small , the region of disagreement of is , which has probability mass . in other words , since the disagreement region grows with in two disjoint directions , each at rate , we have . 
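the threshold example can also be checked numerically ; the small monte - carlo sketch below ( our own , with illustrative sample sizes and radii ) estimates the ratio of the disagreement - region mass to the radius under the uniform marginal on the unit interval , and the estimate approaches 2 , matching the two - directions - at - rate - r argument above .

import numpy as np

def estimate_theta_thresholds(z=0.5, n=200_000, radii=(0.2, 0.1, 0.05, 0.01)):
    # b(h_z, r) consists of the thresholds t with |t - z| <= r, so its region of
    # disagreement is the interval (z - r, z + r); we estimate its mass empirically
    rng = np.random.default_rng(0)
    xs = rng.uniform(0.0, 1.0, size=n)
    ratios = []
    for r in radii:
        mass = np.mean((xs > z - r) & (xs < z + r))
        ratios.append(mass / r)
    return max(ratios)   # estimate of the supremum over the sampled radii; close to 2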
as a second example , consider the disagreement coefficient for _ intervals _ on ] and be uniform , but this time is the set of intervals } ] , }(x ) = + 1 ] ( for ) .in contrast to thresholds , the disagreement coefficients }} ] .specifically , we have } } = \max\{\frac{1}{b - a } , 4\} ] has its lower and upper boundaries within of and , respectively ; thus , },r ) ) ) \leq4 r ] , so that },r ) ) ) = 1 ] s.t . for any measurable set , .let be a measurable classifier , and suppose and are the disagreement coefficients for with respect to under and , respectively .then [ lem : mixtures ] suppose ] ( from definition [ def : global - disagreement - coefficient ] ) , let }(x)\bigr ) > \frac{{\mathbb p}(r)}{2\theta}\biggr\}.\ ] ] note that after step 7 , if , then }(x)\bigr ) \leq { \mathbb p}(r)/(2\theta)\bigr\}\bigr)\bigr ) \\ &= & \lim_{k^\prime\rightarrow\infty } { \mathbb p}\biggl(\operatorname{dis } \biggl(\bigcap_{k > k^\prime } b\bigl(h^{[k]},{\mathbb p}(r)/(2\theta ) \bigr)\biggr)\biggr)\\ & \leq&\lim_{k^\prime\rightarrow\infty } { \mathbb p}\biggl(\bigcap _ { k > k^\prime } \operatorname{dis}\bigl(b\bigl(h^{[k ] } , { \mathbb p}(r)/(2\theta ) \bigr)\bigr)\biggr)\\ & \leq&\liminf_{k \rightarrow\infty } { \mathbb p}\bigl(\operatorname{dis } \bigl(b\bigl(h^{[k ] } , { \mathbb p}(r)/(2\theta)\bigr)\bigr)\bigr ) \\ & \leq & \liminf _ { k\rightarrow\infty } \theta_{h^{[k ] } } \frac{{\mathbb p}(r)}{2\theta } = \frac{{\mathbb p}(r)}{2},\end{aligned}\ ] ] so that we will satisfy the condition in step 2 on the next round .here we have used the definition of in the final inequality and equality . on the other hand ,if after step 7 , we have , then }(x)\bigr ) > \frac{{\mathbb p}(r)}{2\theta}\biggr\ } \\ & = & \biggl\{h \in v \dvtx\biggl(\frac{\limsup_{k \rightarrow \infty } { \mathbb p}(h(x ) \neq h^{[k]}(x))}{\mu}\biggr)^{\kappa } > \biggl(\frac{{\mathbb p}(r)}{2\mu\theta}\biggr)^{\kappa}\biggr\}\\ & \subseteq & \biggl\{h \in v \dvtx\biggl(\frac{{\operatorname{diam}}({\mathit{er}}(h)-\nu;{\mathbb c})}{\mu } \biggr)^{\kappa } > \biggl(\frac{{\mathbb p}(r)}{2\mu\theta } \biggr)^{\kappa}\biggr\}\\ & \subseteq & \biggl\{h \in v \dvtx { \mathit{er}}(h)-\nu >\biggl(\frac{{\mathbb p}(r)}{2\mu \theta}\biggr)^{\kappa}\biggr\}\\ & = & \bigl\{h \in v \dvtx { \mathit{er}}(h|r)-\inf_{h^\prime\in v } { \mathit{er}}(h^\prime| r ) > { \mathbb p}(r)^{\kappa-1}(2\mu\theta)^{-\kappa}\bigr\}\\ & \subseteq & \bigl\{h \in v \dvtx { \mathit{ub}}(h , q,\delta / n)-\min_{h^\prime\in v } { \mathit{lb}}(h^\prime , q,\delta / n ) > { \mathbb p}(r)^{\kappa-1}(2\mu\theta)^{-\kappa } \bigr\}\\ & \subseteq & \bigl\{h \in v \dvtx { \mathit{lb}}(h , q,\delta / n)-\min_{h^\prime\in v } { \mathit{ub}}(h^\prime , q,\delta / n ) \\ & & \hspace*{40.2pt } > { \mathbb p}(r)^{\kappa-1}(2\mu\theta)^{-\kappa } - 4g(|q|,\delta / n)\bigr\}.\end{aligned}\ ] ] here , the third line follows from the fact that } ) \leq { \mathit{er}}(h) ] , define [ con : entropy ] there exist finite constants and s.t . 
and ] , .[ thm : structure - dependent - entropy ] suppose is the classifier returned by algorithm 3 , when allowed label requests and confidence parameter .suppose further that satisfies conditions [ con : tsybakov - dependent ] and [ con : entropy - aggregation ] .then there exist finite ( , , and -dependent ) constants such that , with probability , , in addition to these theorems for this structure - dependent version of tsybakov s noise conditions , we also have the following result for a structure - independent noise condition , in the sense that the noise condition does not depend on the particular choice of sets , but only on the distribution ( and in some sense , the full class ) ; it may be particularly useful when the class is universal , in the sense that it can approximate any classifier .[ thm : structure - independent ] suppose the sequence is constructed so that , and is the classifier returned by algorithm 3 , when allowed label requests and confidence parameter .suppose that there exists a constant s.t . for all measurable , .then there exists a finite ( -dependent ) constant such that , with probability , , the condition is quite easy to satisfy : for example , could be axis - aligned decision trees of depth , or thresholded polynomials of degree , or multi - layer neural networks with internal units , etc . as for the noise condition in theorem [ thm : structure - independent ], this would be satisfied whenever for some constant ] , and finite sets , define and where , for our purposes , we can take and , though there seems to be room for improvement in these constants . for completeness , we also define by convention. we will also define a related quantity , representing a distribution - dependent version of , also explored by koltchinskii . specifically , for , define for , let and where , for our purposes , we can take and . for completeness, we also define . in definition [ def : disagreement - coefficient ], we took .if , then this choice is usually relatively harmless .however , in some cases , setting results in a suboptimal , or even infinite , value of , which is undesirable . in these cases , we would like to set as large as possible while maintaining the validity of the bounds .if we do this carefully enough , we should be able to establish bounds that , even in the worst case when , are never worse than the bounds for some analogous passive learning method ; however , to do this requires to depend on the parameters of the learning problem : namely , , , and .the effect of a larger can sometimes be dramatic , as there are scenarios where ; we certainly wish to distinguish between such scenarios , and those where .generally , depending on the bound we wish to prove , different values of may be appropriate . 
for the tightest bound in terms of proven in the appendices ( namely , lemma 7 of appendix b in the supplementary material ), the definition of in ( [ eqn : r0 ] ) below gives a good bound .for the looser bounds ( namely , theorems [ thm : tight - agnostic ] and [ thm : tight - agnostic - entropy ] ) , a larger value of may provide better bounds ; however , this same general technique can be employed to define a good value for in these looser bounds as well , simply using upper bounds on ( [ eqn : r0 ] ) analogous to how the theorems themselves are derived from lemma 7 in appendix b .likewise , one can state analogous refinements of for theorems [ thm : cal - upper][thm : bbl - adaptive ] , though for brevity these are left for the reader s independent consideration .[ def : r0 ] define and we use this definition of in all of the main proofs .in particular , with this definition , lemma 7 of appendix b is never significantly worse than the analogous known result for passive learning ( though it can be significantly better when ) .i extend my sincere thanks to larry wasserman for numerous helpful discussions and also to john langford for initially pointing out to me the possibility of adapting to tsybakov s noise conditions for threshold classifiers .i would also like to thank the anonymous referees for extremely helpful suggestions on earlier drafts .balcan , m .- f . ,broder , a. and zhang , t. ( 2007 ) .margin based active learning . in _ proceedings of the 20th conference on learning theory_. _ lecture notes in computer science _ * 4539 * 3550 .springer , berlin .
|
we study the rates of convergence in generalization error achievable by active learning under various types of label noise. additionally, we study the general problem of model selection for active learning with a nested hierarchy of hypothesis classes and propose an algorithm whose error rate provably converges to the best achievable error among classifiers in the hierarchy at a rate adaptive to both the complexity of the optimal classifier and the noise conditions. in particular, we state sufficient conditions for these rates to be dramatically faster than those achievable by passive learning.
|
many problems in mathematics and computer science can be reduced to proving satisfiability of conjunctions of ( ground ) literals modulo a background theory .this theory can be a standard theory , the extension of a base theory with additional functions , or a combination of theories .it is therefore very important to find efficient methods for reasoning in standard as well as complex theories .however , it is often equally important to find local causes for inconsistency . in distributed databases , for instance ,finding local causes of inconsistency can help in locating errors .similarly , in abstraction - based verification , finding the cause of inconsistency in a counterexample at the concrete level helps to rule out certain spurious counterexamples in the abstraction .the problem we address in this paper can be described as follows : let be a theory and and be sets of ground clauses in the signature of , possibly with additional constants .assume that is inconsistent with respect to .can we find a ground formula , containing only constants and function symbols common to and , such that is a consequence of with respect to , and is inconsistent modulo ?if so , is a _ ( craig ) interpolant _ of and , and can be regarded as a `` local '' explanation for the inconsistency of . in this paperwe study possibilities of obtaining ground interpolants in theory extensions .we identify situations in which it is possible to do this in a hierarchical manner , by using a prover and a procedure for generating interpolants in the base theory as `` black - boxes '' .we consider a special type of extensions of a base theory namely local theory extensions which we studied in .we showed that in this case hierarchical reasoning is possible .i.e. proof tasks in the extension can be reduced to proof tasks in the base theory . herewe study possibilities of hierarchical interpolant generation in local theory extensions .the main contributions of the paper are summarized below : 1 .first , we identify new examples of local theory extensions .2 . second , we present a method for generating interpolants in local extensions of a base theory .the method is general , in the sense that it can be applied to an extension of a theory provided that : a. is convex ; b. is -interpolating for a specified set of predicates ( cf . the definition in section [ examples - sep ] ) ; c. in every inconsistent conjunction of ground clauses allows a ground interpolant ; d. the extension is defined by clauses of a special form ( type ( [ general - form ] ) in section [ examples - sep ] ) .+ the method is _ hierarchical _ : the problem of finding interpolants in is reduced to that of finding interpolants in the base theory .we can use the properties of to control the form of interpolants in the extension .third , we identify examples of theory extensions with properties ( i)(iv ) .4 . fourth , we discuss application domains such as : modular reasoning in combinations of local theories ( characterization of the type of information which needs to be exchanged ) , reasoning in distributed databases , and verification . the existence of ground interpolants has been studied in several recent papers , mainly motivated by abstraction - refinement based verification . in mcmillanpresents a method for generating ground interpolants from proofs in an extension of linear rational arithmetic with uninterpreted function symbols .the use of free function symbols is sometimes too coarse ( cf .the example in section [ motivation - verif ] ) . 
here , we show that similar results also hold for other types of extensions of a base theory , provided that the base theory has some of the properties of linear rational arithmetic .another method for generating interpolants for combinations of theories over disjoint signatures from nelson - oppen - style unsatisfiability proofs was proposed by yorsh and musuvathi in .although we impose similar conditions on , our method is orthogonal to theirs , as it can also handle combinations of theories over non - disjoint signatures . in a different interpolation property stronger than the property under consideration in this paper is studied , namely the existence of ground interpolants for _ arbitrary formulae _ which is proved to be equivalent to the theory having quantifier elimination .this limits the applicability of the results in to situations in which the involved theories allow quantifier elimination .if the theory considered has quantifier elimination then we can use this for obtaining ground interpolants for arbitrary formulae .the goal of our paper is to identify theories possibly without quantifier elimination in which , nevertheless , ground interpolants for ground formulae exist ._ structure of the paper : _ we start by providing motivation for the study in section [ motivation ] . in section [ prelim ] the basic notions needed in the paperare introduced .section [ local ] contains results on local theory extensions . in section [ hierarchic ] local extensions allowing hierarchical interpolationare identified , and based on this , in section [ procedure ] a procedure for computing interpolants hierarchically is given . in section [ appl ]applications to modular reasoning in combinations of theories , reasoning in complex databases , and verification are presented . in section [ conclusions ]we draw conclusions , discuss the relationship with existing work , and sketch some plans for future work . for the sake of clarity in presentation ,all the proofs that are not directly related to the main thread of the paper can be found in the appendix .( these results concern illustrations of the fact that certain theory extensions are local , or satisfy assumptions that guarantee that interpolants can be computed hierarchically . )in this section we present two fields of applications in which it is important to efficiently compute interpolants : knowledge representation and verification .consider a simple ( and faulty ) terminological database for chemistry , consisting of two extensions of a common kernel chem ( basic chemistry ) : achem ( inorganic ( anorganic ) chemistry ) and biochem ( biochemistry ) .assume that chem contains a set of concepts and a set of constraints : let achem be an extension of chem with concepts - , a rle , terminology and constraints : let biochem be an extension of chem with the concept , the rles , terminology and constraints : the combination of chem , achem and biochem is inconsistent ( we wrongly added to the constraint instead of ) .this can be proved as follows : by results in ( p.156 and p.166 ) the combination of chem , achem and biochem is inconsistent if and only if where is the extension of the theory of semilattices with smallest element 0 and monotone function symbols corresponding to for each rle .using , for instance , the hierarchical calculus presented in ( see also section [ local ] ) , the contradiction can be found in polynomial time . 
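to make the reduction concrete , the fragment below ( a schematic sketch of ours ; the clause syntax and names are purely illustrative ) generates the only ingredients the hierarchical calculus needs in such a case : the finitely many ground instances of the monotonicity axiom of an extension function over the ground terms occurring in the goal , and the purification step that names the extension terms by fresh base constants . the resulting purified problem is then passed to a decision procedure for the base theory ( here , semilattices with smallest element 0 ) , which is not shown .

def monotonicity_instances(f, ground_terms):
    # only instances of  x <= y -> f(x) <= f(y)  over ground terms occurring below f
    # in the goal are needed (the instance set used in the locality condition later on)
    return [f"{a} <= {b} -> {f}({a}) <= {f}({b})"
            for a in ground_terms for b in ground_terms if a != b]

def purify(f, ground_terms):
    # introduce a fresh base constant for each extension term f(a) and keep the
    # definitions aside; the remaining problem lies entirely in the base signature
    defs, names = [], {}
    for i, a in enumerate(ground_terms, start=1):
        c = f"{f}_{i}"
        names[f"{f}({a})"] = c
        defs.append(f"{c} = {f}({a})")
    return names, defs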
in order to find the mistake we look for an explanation for the inconsistency in the common language of and .( common to and are the concepts and the rle . )this can be found by computing an interpolant for the conjunction in ( [ incons - s1 ] ) in the theory of semilattices with monotone operators . in this paperwe show how such interpolants can be found in an efficient way .the method is illustrated on the example above in section [ appl - knowledge ] . in ,mcmillan proposed a method for abstraction - based verification in which interpolation ( e.g. for linear arithmetic + free functions ) is used for abstraction refinement .the idea is the following : starting from a concrete , precise description of a ( possibly infinite - state ) system one can obtain a finite abstraction , by merging the states into equivalence classes .a transition exists between two abstract states if there exists a transition in the concrete systems between representatives in the corresponding equivalence classes .literals describing the relationships between the state variables at the concrete level are represented at the abstract level by predicates on the abstract states ( equivalence classes of concrete states ) .classical methods ( e.g. bdd - based methods ) can be used for checking whether there is a path in the abstract model from an initial state to an unsafe state .we distinguish the following cases : 1 .no unsafe state is reachable from an initial state in the abstract model .then , due to the way transitions are defined in the abstraction , this is the case also at the concrete level .hence , the concrete system is guaranteed to be safe .there exists a path in the abstract model from an initial state to an unsafe state .this path may or may not have a correspondent at the concrete level . in order to check this ,we analyse the counterpart of the counterexample in the concrete model .this can be reduced to testing the satisfiability of a set of constraints : 1 .if the set of constraints is satisfiable then an unsafe state is reached from the initial state also in the concrete system .thus , the concrete system is not safe . 2 .if the set of constraints is unsatisfiable , then the counterexample obtained due to the abstraction was spurious .this means that the abstraction was too coarse . in order to refine it we need to take into account new predicates or relationships between the existing predicates .interpolants provide information about which new predicates need to be used for refining the abstraction .we illustrate these ideas below .consider a water level controller modeled as follows : changes in the water level by inflow / outflow are represented as functions , depending on time and water level .alarm and overflow levels , as well as upper / lower bounds for mode durations are parameters of the systems .# 1#20.4#1 ll & l + if then a valve is opened until time , + time changes to and the water level to + .+ if then the valve is closed ; time changes to + , and the water level to . + we impose restrictions on and on and : + + + + + we want to show that if initially then the + water level always remains below .+ we start with an abstraction in which the predicates are : and no other relations between these predicates are specified .we can , for instance , use finite model checking for the finite abstraction obtained this way .note for instance that is satisfiable , i.e. 
in the abstract model there exists a path ( of length 2 ) from the initial state to an unsafe state .we analyze the corresponding path in the concrete model to see if this counterexample to safety is spurious , i.e. we check whether there exist such that the conjunction : is true . if are regarded as free function symbols this conjunction is satisfiable , so the spuriousness of the counterexample can not be detected can however be proved to be unsatisfiable if we take into account the additional conditions on the functions and .interpolants can be used for determining the cause of inconsistency , and can therefore help in refining the abstraction .the hierarchical interpolation method we present here allows us to efficiently generate ground interpolants for extensions with functions satisfying axioms of the type considered here and also for a whole class of more general axioms .an illustration of this method on the formulae in the example presented here is given in section [ appl - verification ] .besides the application to verification by abstraction - refinement , computation of craig interpolants has other potential applications ( e.g. to goal - directed overapproximation for achieving faster termination , or to automatic invariant generation ) .in this section we introduce the main notions and definitions concerning theories , models and interpolants needed in the paper .theories can be regarded as sets of formulae or as sets of models . in this paper , whenever we speak about a theory if not otherwise specified we implicitly refer to the set of all models of .let be a theory in a given signature , where is a set of function symbols and a set of predicate symbols .let and be formulae over the signature with variables in a set .the notion of truth of formulae and of entailment is the usual one in logic .we say that : 1 . is true with respect to ( denoted ) if is true in each model of . is satisfiable with respect to if there exists at least one model of and an assignment such that .otherwise we say that is unsatisfiable .we say that entails with respect to ( denoted ) if for every model of and every valuation , if then .note that is unsatisfiable with respect to if and only if ( stands for false ) .a theory has interpolation if , for all formulae and in the signature of , if then there exists a formula containing only symbols which occur in both and such that and .first order logic has interpolation but for an arbitrary theory even if and are e.g. conjunctions of ground literals , may still be an arbitrary formula , containing alternations of quantifiers ( cf . for an example of ground formulae and in the language of the theory of arrays whose conjunction is unsatisfiable , but there is no ground interpolant over the common variables of and ) .it is often important to identify situations in which ground clauses have ground interpolants .we say that a theory has the _ ground interpolation property _ ( or , shorter , that has _ ground interpolation _ ) if for all ground clauses and , if then there exists a ground formula , containing only the constants occurring both in and , such that there exist results which relate ground interpolation to amalgamation or the injection transfer property and thus allow us to recognize many theories with ground interpolation .we present these results in appendix [ app : amalg - interp ] .the following theories allow ground interpolation in appendix [ app : amalg - interp ] ) .similar results were also established for ( 2 ) in . ] : 1 . 
the theory of pure equality ( without function symbols ) .2 . linear rational and real arithmetic .3 . the theory of posets .4 . the theories of ( a ) boolean algebras , ( b ) semilattices , ( c ) distributive lattices . [ ground - interp - eq - th ] the proof is given in appendix [ app : amalg - interp ] .other examples of theories which allow ground interpolation are the equational classes of ( abelian ) groups and lattices . in many applicationsone needs to consider extensions or combinations of theories , and proving amalgamation properties can be complicated . on the other hand , just knowing that ground interpolants exist is usually not sufficient : we would like to construct the interpolants fast . in the examples considered in theorem [ ground - interp - eq - th ] , methods for constructing interpolants exist . for the theories of pure equality and of posets interpolantscan be constructed for instance from proofs . for linearrational or real arithmetic they can either be constructed from proofs or by constructing linear programming problems and solving these problems using an off - the - shelf sound solver . for the theories of boolean algebras , distributive lattices and semilatticesthey can be reconstructed from resolution proofs associated with the translation of the satisfiability problems to propositional logic ; the construction is similar to the one described in the proof of theorem [ example - assumptions-1 - 3 ] in appendix [ app : assumptions - examples ] .we would like to use the advantages of modular or hierarchical reasoning for constructing interpolants in theory extensions in an efficient way .this is why in this paper we aim at giving methods for _ constructing _ interpolants in a hierarchical way .since in we identified a class of theory extensions namely , local theory extensions in which hierarchical reasoning was possible , in what follows we will study interpolation in local theory extensions .let be a theory with signature .we consider extensions of with signature , where ( i.e. the signature is extended by new _function symbols _ ) and is obtained from by adding a set of ( universally quantified ) clauses .thus , consists of all -structures which are models of and whose reduct to is a model of .a _ partial -structure _ is a structure , where and for every with arity , is a partial function from to .any variable assignment extends in a natural way to terms , such that .thus , the notion of evaluating a term with respect to a variable assignment for its variables in a partial structure is the same as for total algebras , except that this evaluation is undefined if and at least one of is undefined , or else is not in the domain of . let be a partial -structure , a clause and . then if and only if either a. for some term in , is undefined , or else b. is defined for all terms of , and there exists a literal in such that is true in . _ weakly satisfies _ ( notation : ) if for all assignments .we say that _ weakly satisfies a set of clauses _ or _ is a weak partial model of _ ( notation : ) if for all .let be a theory with signature and let be a set of ( universally quantified ) clauses in the signature , where . in what follows , when referring to sets of ground clauses we assume they are in the signature where is a set of new constants .for the sake of simplicity , we will use the same notation for a structure and for its universe .a ( total ) model of is a -structure s.t. 
and is a model of .let be the class of all weak partial models of , in which the -functions are partial and such that is a total model of .an extension is _ local _ if , in order to prove unsatisfiability of a set of clauses with respect to , it is sufficient to use only those instances ] then no matter which terms are chosen for separating mixed clauses in \wedge { \mathcal k}_0 ] then there exists a set of -terms containing only constants common to and , and common new constants in a set such that the terms in can be used to separate \cup { \mathcal k}_0 ] , at some step in the procedure and there exists such that and . in this case is logically equivalent to , and is logically equivalent to , where , are the following conjunctions of literals : thus , if for instance in interpolants for conjunctions of ground literals are always again conjunctions of ground literals , the same is also true in the extension .[ remark - interpolants ] the following theory extensions have ground interpolation : a. extensions of any theory in theorem [ example - assumptions-1 - 3](1)(4 ) with free function symbols .b. extensions of the theories in theorem [ example - assumptions-1 - 3](2),(4 ) with monotone functions . c. extensions of the theories in theorem [ example - assumptions-1 - 3](2),(4 ) with .d. extensions of the theories in theorem [ example - assumptions-1 - 3](2),(4 ) with .e. extensions of any theory in theorem [ example - assumptions-1 - 3](1)(4 ) with or ( where is a term and a set of literals in the base theory ) . f. extensions of the theories in theorem [ example - assumptions-1 - 3](2),(4 ) with , if is monotone in its variables .g. , the extension of the theory of reals with a unary function which is -lipschitz in a point , where is .( a)(d ) are direct consequences of corollary [ thm : interp ] , since all sets of extension clauses are of type ( [ general - form ] ) . for extensions of linear arithmetic note that due to the totality of we can always assume that and are positive ,so convexity with respect to is sufficient ( cf .proof of proposition [ prop : separation ] ) .also , in we show that being -interpolating with respect to is sufficient in this case .( e)(g ) follow from corollary [ thm : interp ] and the fact that if each clause in contains only one occurrence of an extension function , no mixed instances can be generated when computing ] , 3 . , where , for + and is the formula obtained from ] in which all extension terms already occur in . after flattening and purifying \wedge g$ ] , we separate the problem into a definition part ( extension ) and a base part . by theorem[ thm : hierarchic ] , the problem can be reduced to testing the satisfiability in the base theory of the conjunction . as this conjunction is unsatisfiable with respect to , is unsatisfiable . _ interpolation ._ let and be given by : the set of constants which occur in is . in occur .the shared constants are and . 
to generate an interpolant for , we partition the clauses in , where : the clause in is mixed .since already the conjunction of the formulae in is unsatisfiable , is not needed to prove unsatisfiability .the conjunction of the formulae in is equivalent to , where the interpolant for is , which is also an interpolant for .the abstraction defined in section [ motivation - verif ] can then be refined by introducing another predicate .we presented a method for obtaining simple interpolants in theory extensions .we identified situations in which it is possible to do this in a hierarchical manner , by using a prover and a procedure for generating interpolants in the base theory as `` black - boxes '' .this allows us to use the properties of ( e.g. the form of interpolants ) to control the form of interpolants in the extension .we discussed applications of interpolation in verification and knowledge representation .the method we presented can be applied to a class of theories which is more general than that considered in mcmillan ( extension of linear rational arithmetic with uninterpreted function symbols ) .our method is orthogonal to the method for generating interpolants for combinations of theories over disjoint signatures from nelson - oppen - style unsatisfiability proofs proposed by yorsh and musuvathi in , as it allows us to consider combinations of theories over non - disjoint signatures .the hierarchical interpolation method presented here was in particular used for efficiently computing interpolants in the special case of the extension of linear arithmetic with free function symbols in ; the algorithm we used in that paper ( on which an implementation is based ) differs a bit from the one presented here in being tuned to the constrained based approach used in .the implementation was integrated into the predicate discovery procedure of the software verification tools blast and armc .first tests suggest that the performance of our method is of the same order of magnitude as the methods which construct interpolants from proofs , and considerably faster on many examples .in addition , our method can handle systems which pose problems to other interpolation - based provers : we can handle problems containing both strict and nonstrict inequalities , and it allows us to verify examples that require predicates over up to four variables .details about the implementation and benchmarks for the special case of linear arithmetic + free function symbols are described in .although the method we presented here is based on a hierarchical reduction of proof tasks in a local extension of a given theory to proof tasks in , the results presented in section [ hierarchic ] ( in particular the separation technique described in proposition [ prop : separation ] ) and in section [ procedure ] also hold for non - purified formulae ( i.e. they also hold if we do not perform the step of introducing new constant names for the ground terms which occur in the problem or during the separation process ) .depending on the properties of , techniques for reasoning and interpolant generation in the extension of with free function symbols e.g. within state of the art smt solvers can then be used .we can , therefore , use the results in sections [ hierarchic ] and [ procedure ] to extend in a natural way existing methods for interpolant computation which take advantage of state of the art smt technology ( cf .e.g. 
) to the more complex types of theory extensions with sets of axioms of type ( [ general - form ] ) we considered here .an immediate application of our method is to verification by abstraction - refinement ; there are other potential applications ( e.g. goal - directed overapproximation for achieving faster termination , or automatic invariant generation ) which we would like to study .we would also like to analyze in more detail the applications to reasoning in complex knowledge bases .* acknowledgements . *i thank andrey rybalchenko for interesting discussions .i thank the referees for helpful comments .this work was partly supported by the german research council ( dfg ) as part of the transregional collaborative research center `` automatic verification and analysis of complex systems '' ( sfb / tr 14 avacs ) .see ` www.avacs.org ` for more information .10 f. baader and s. ghilardi .connecting many - sorted theories . in r.nieuwenhuis , editor , _20th international conference on automated deduction ( cade-20 ) , lnai 3632 _ , pages 278294 .springer , 2005 .amalgamation properties and interpolation theorem for equational theories . , 5:4555 , 1975 .a. cimatti , a. griggio , and r. sebastiani .efficient interpolant generation in satisfiability modulo theories . in _tacas2008 : tools and algorithms for the construction and analysis of systems , lncs 4963 _ , pages 397412 , springer , 2008 .t. a. henzinger , r. jhala , r. majumdar , and k. l. mcmillan .abstractions from proofs . in _popl2004 : principles of programming languages _ , pages 232244 .acm press , 2004 .b jnsson. extensions of relational structures . in j.w .addison , l. henkin , and a. tarski , editors , _ the theory of models , proc . of the 1963 symposium at berkeley _, pages 146157 , amsterdam , 1965 .north - holland .d. kapur , r. majumdar , c. zarba .interpolation for data structures . in _ proc .14th acm sigsoft international symposium on foundations of software engineering _ , pages 105116 , acm 2006 .interpolation and sat - based model checking . in _cav2003 : computer aided verification , lncs 2725 _ , pages 113 .springer , 2003 .an interpolating theorem prover . in _tacas2004 : tools and algorithms for the construction and analysis of systems , lncs 2988 _ , pages 1630 .springer , 2004 .applications of craig interpolants in model checking . in _tacas2005 : tools and algorithms for the construction and analysis of systems , lncs 3440 _ , pages 112 .springer , 2005 .a. podelski and a. rybalchenko . : the logical choice for software model checking with abstraction refinement . in _padl2007 : practical aspects of declarative languages , lncs 4354 _ , pages 245259 , springer , 2007 .a. rybalchenko and v. sofronie - stokkermans .constraints for interpolation .constraint solving for interpolation . in b.cook and a. podelski , editors , _ proceedings of the 8th international conference on verification , model checking and abstract interpretation ( vmcai 2007 ) , lncs 4349 _ , pages 346362 , springer verlag , 2007 .v. sofronie - stokkermans . on the universal theory of varieties of distributive lattices with operators : some decidability and complexity results . in h.ganzinger , editor , _ proceedings of cade-16 , lnai 1632 _ , pages 157171 , springer verlag , 1999 . v. sofronie - stokkermans .resolution - based decision procedures for the universal theory of some classes of distributive lattices with operators ., 36(6):891924 , 2003 .v. sofronie - stokkermans . 
automated theorem proving by resolution in non - classical logics . in _4th int .. journees de linformatique messine : knowledge discovery and discrete mathematics ( jim-03 ) _ , pages 151167 , 2003 .v. sofronie - stokkermans .hierarchic reasoning in local theory extensions . in r.nieuwenhuis , editor , _ 20th international conference on automated deduction ( cade-20 ) , lnai 3632 _ , pages 219234 .springer , 2005 .v. sofronie - stokkermans .hierarchical and modular reasoning in complex theories : the case of local theory extensions . in b.konev and f. wolter , editors , _ frontiers of combining systems , 6th international symposium , ( frocos 2007 ) , lncs 4720 _ , pages 4771 , springer , 2007 .v. sofronie - stokkermans and c. ihlemann .automated reasoning in some local extensions of ordered structures ., ieee computer society , 2007 .a. wroski . on a form of equational interpolation property . in _foundations of logic and linguistics ( salzburg , 1983 ) _ , pages 2329 , new york , 1985 . plenum .g. yorsh and m. musuvathi .a combination method for generating interpolants . in r.nieuwenhuis , editor , _20th international conference on automated deduction ( cade-20 ) , lnai 3632 _ , pages 353368 .springer , 2005 .there exist results which relate ground interpolation to amalgamation or the injection transfer property and thus allow us to recognize many theories with ground interpolation .if is a signature and are -structures , we say that : 1 .a map is a _ homomorphism _ if it preserves the truth of positive literals , i.e. has the property that if then , and if is true then is true .a map is an _ embedding _ if it preserves the truth of both positive and negative literals , i.e. is true ( in ) if and only if is true ( in ) for any predicate symbol , including equality .thus , an embedding is an injective homomorphism which also preserves the truth of negative literals .let be a signature , and let be a class of -structures . 1 .we say that has the _ amalgamation property _ ( ap ) if for any and any embeddings and there exists a structure and embeddings and such that . has the _ injection transfer property _ ( itp ) if for any , any embedding and any homomorphism there exists a structure , a homomorphism and an embedding such that . an equational theory ( in signature where ) has the _ equational interpolation property _ if whenever where , and are ground atoms , there exists a conjunction of ground atoms containing only the constants occurring both in and , such that [ equational - interpolation ] let be a universal theory . then : 1 . has ground interpolation if and only if has ( ap ) .in addition , we can guarantee that if is positive then the interpolant of is positive if and only if has the injection transfer property .2 . if is an equational theory , then has the equational interpolation property if and only if has the injection transfer property .[ amalgamation - interpolation ] theorem [ amalgamation - interpolation ] can be used to prove that many equational theories have ground interpolation : the following theories allow ground interpolation : 1 . the theory of pure equality ( without function symbols ) .2 . linear rational and real arithmetic .3 . the theory of posets .4 . the theories of ( a ) boolean algebras , ( b ) semilattices , ( c ) distributive lattices .[ app : ground - interp - eq - th ] ( 1 ) , ( 2 ) , ( 3 ) are well - known ( for ( 2 ) we refer for instance to or ) . 
for proving ( 4 ) we use the fact that if a universal theory has a positive algebraic completion then it has the injection transfer property .all theories in ( 4 ) are equational theories ; by results in , for equational theories the injection transfer property is equivalent to the equational interpolation property . with these remarks ,( 4)(a ) follows from the fact that any gaussian theory is its own positive algebraic completion , and ( 4)(b),(c ) from the fact that the theory of semilattices and that of distributive lattices have a positive algebraic completion .similarly it can be proved that the equational classes of ( abelian ) groups and lattices have ground interpolation .* theorem [ examples - local ] * _ we consider the following base theories : _ 1 . ( posets ) , 2 . ( totally - ordered sets ) , 3 . ( semilattices ) , 4 . ( distributive lattices ) , 5 . ( boolean algebras ) . 6 .the theory of reals resp . ( linear arithmetic over ) , or the theory of rationals resp . ( linear arithmetic over ) , or ( a subtheory of ) the theory of integers ( e.g. presburger arithmetic ) .the following theory extensions are local : 1 .extensions of any theory for which is reflexive with functions satisfying boundedness or guarded boundedness conditions + + + where is a term in the base signature and a conjunction of literals in the signature , whose variables are in .extensions of any theory in ( 1)(6 ) with , if is a term in the base signature in the variables such that for every model of the associated function is monotone in the variables .extensions of any theory in ( 1)(6 ) with functions satisfying .+ 4 .extensions of any totally - ordered theory above ( i.e. ( 2 ) and ( 6 ) ) with functions satisfying .+ 5 .extensions of any theory in ( 1)(3 ) with functions satisfying .all the extensions above satisfy condition . in what followswe will denote by the signature of the base theory , and with the extension functions , namely for cases ( a ) and ( b ) , for case ( c ) , for case ( d ) and for case ( e ) .\(a ) let be a partial -structure which weakly satisfies , such that and is partial .let be a total -structure with the same support as , where : then satisfies .let be the identity .obviously , is a -isomorphism ; and if is defined then .similar arguments also apply to .\(b ) let be a partial -structure which weakly satisfies , such that and is partial . in cases ( 1)(3 )let , where is the family of all order ideals of , and is clearly monotone .let .then for some with defined . as , .therefore .the map defined by is a weak embedding .since and are locally finite , results in show that in ( 4 ) and ( 5 ) it is sufficient to assume that is finite .let , where is clearly monotone .we prove that it also satisfies the boundedness condition , i.e. that for all , . by definition , as and is monotone , we know that for all with defined . therefore , that the identity is a weak embedding can be proved as before .\(c ) the proof is very similar to the proof of ( b ) .we first discuss the case ( 1)(3 ) .let be a weak partial model of .let , where is defined as in ( b ) .we define by assume that , and let .then for some with defined . if , then so , as , we know that .it therefore follows that in this case .otherwise , , hence . for the cases ( 4 ) and ( 5 )we again use the criterion in and theorem [ rel - loc - embedding ] .let be a weak partial model of .let be such that whenever is defined .we define as follows : is obviously monotone . 
in order to prove that the second condition holds , we analyze two cases .assume first that is undefined .then for all with defined , thus , .if is defined , then for all with we also have , so .again , it follows that .\(d ) let be the theory of totally ordered sets .assume that is a totally ordered weak partial model of .let , where and are extensions of defined as in the proof of ( b ) . are obviously monotone .we prove that the condition holds in .assume that for , and let .then there exist such that is defined and . as , there exist such that is defined and .let .then .hence .therefore , so . let defined by . to show that it is a weak embedding we only have to show that if is defined then .this is true by the definition of .( e ) assume that is the theory of semilattices . the construction in ( d )can be applied to this case without problems .the proof is similar to that of ( d ) with the difference that if we only have one element so we do not need to compute a maximum ( which for may not exist if the order is not total ) .the proof of the fact that the remaining theories satisfy is based on the criterion of finite locality given in theorem [ rel - loc - embedding ] .the constructions and the proofs are similar to those in the proof of ( b ) resp .( c ) for the cases ( 4 ) and ( 5 ) . due to the fact that we assumed that the definition domain of the extension functions is finite is a finite join , and thus exists ( if is nowhere defined it is sufficient to define it as being everywhere equal to in case ( b ) or to in case ( c ) ) .the fact that the definition domains are finite also ensures that in the proof of ( c ) an element ( chosen in the definition of ) with the desired properties always exists .* theorem [ example - assumptions-1 - 3 ] . * 1 . the theory of of pure equality without function symbols ( for ) .2 . the theory of posets ( for ) .3 . linear rational arithmetic and linear real arithmetic ( convex with respect to , strongly -interpolating for ) .4 . the theories of boolean algebras , of semilattices and of distributive lattices ( strongly -interpolating for ) . note first that if a partially - ordered theory is interpolating for it is also for .assume that .then and , hence there exist terms , containing only common constants of and such that and .it follows that , .\(1 ) and ( 2 ) : convexity is obvious ; the property of being -interpolating can be proved by induction on the structure of proofs .( 3 ) is known ( cf .e.g. ) . a method for computing interpolating terms for and presented in .\(4 ) this is a constructive proof based on ideas from .the results presented there show , as an easy particular case , that one can reduce the problem of checking the satisfiability of a conjunction of unit clauses with respect to one of the theories above to checking the satisfiability of a conjunction obtained by introducing a propositional variable for each subterm occurring in , a set of renaming rules of the form and translations of the positive resp .negative part of : \(a ) the convexity of the theory of boolean algebras with respect to follows from the fact that this is an equational class ; convexity with respect to follows from the fact that if and only if .we prove that the theory of boolean algebras is -interpolating , i.e. 
that if and are two conjunctions of literals and , where is a constant occurring in and not in and a constant occurring in and not in , then there exists a term containing only common constants in and such that and .we can assume without loss of generality that and consist only of atoms ( otherwise one moves the negative literals to the right and uses convexity ) . if and only if the following conjunction of literals in propositional logic is unsatisfiable : we obtain an unsatisfiable set of clauses .propositional logic allows interpolation , so there exists an interpolant , which is a boolean combination ( say in cnf ) of the common propositional variables occurring in and such that but then and .( 4)(b ) the proof is similar to that of ( 4)(a ) with the difference that in the renaming rules in the structure - preserving translation to clause form only the conjunction rules apply , hence and are sets of non - negative horn clauses .we can saturate under resolution with selection on the negative literals in linear time .the saturated set of clauses contains all unit clauses where is subterm of with .only unit positive clauses where occurs in both and can enter into resolution inferences with clauses in and lead to a contradiction .thus we proved that this is equivalent to , where obviously , .( 4)(c ) the case of distributive lattices can be treated similarly . due to the fact that in this case the renaming rules for and are taken into account , the sets and are not horn .we adopt the same negative selection strategy . when saturating a finite set of positive clausesis generated , namely of the form where .we consider a total ordering on the propositional variables where is larger than if occurs in and not in and occurs in both and in .then the only inferences which can lead to a contradiction with are those between the clauses in which only contain common propositional variables .thus we proved that this is equivalent to , where .obviously , .
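to make the hierarchical reduction used in the body of the paper concrete, the following python sketch (an illustration only, with ad hoc syntax; it is not the implementation mentioned in the conclusions) flattens and purifies a set of ground literals containing a unary extension function f, collects the definition part, and instantiates the monotonicity axiom x <= y -> f(x) <= f(y) only at the extension terms occurring in the problem, which is exactly what locality licenses. the separation of mixed instances needed for interpolant extraction is not shown.

# a term is a constant (a string) or a tuple (symbol, arg1, ...); a literal is a
# triple (relation, lhs, rhs); negation is encoded in the relation name, e.g. 'not<='
def purify(literals, ext='f'):
    # replace every occurrence of f(t) by a fresh constant c_i and collect the
    # definitions c_i = f(t_i); returns the base part, the definition part and
    # the map from (renamed) extension terms to their new names
    names, defs = {}, []
    def rename(t):
        if isinstance(t, str):
            return t
        t = (t[0],) + tuple(rename(a) for a in t[1:])
        if t[0] == ext:
            if t not in names:
                names[t] = 'c%d' % (len(names) + 1)
                defs.append(('=', names[t], t))
            return names[t]
        return t
    base = [(rel, rename(l), rename(r)) for (rel, l, r) in literals]
    return base, defs, names

def monotonicity_instances(names):
    # instantiate x <= y -> f(x) <= f(y) only at the arguments of the extension
    # terms occurring in the problem; each instance is a ground clause over the
    # base signature and the freshly introduced constants
    clauses = []
    for (t1, c1) in names.items():
        for (t2, c2) in names.items():
            if t1 != t2:
                clauses.append([('not<=', t1[1], t2[1]), ('<=', c1, c2)])
    return clauses

g = [('<=', 'a', 'b'), ('<=', ('f', 'a'), 's'),
     ('<=', 's', ('f', 'b')), ('not<=', ('f', 'a'), ('f', 'b'))]
base, defs, names = purify(g)
print(base)                          # purified literals over the base signature
print(defs)                          # c1 = f(a), c2 = f(b)
print(monotonicity_instances(names))

feeding the purified literals together with the instantiated clauses (after eliminating the definitions) to any decision or interpolation procedure for the base theory, used as a black box, then realizes the reduction described above.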
|
in this paper we study interpolation in local extensions of a base theory . we identify situations in which it is possible to obtain interpolants in a hierarchical manner , by using a prover and a procedure for generating interpolants in the base theory as black - boxes . we present several examples of theory extensions in which interpolants can be computed this way , and discuss applications in verification , knowledge representation , and modular reasoning in combinations of local theories .
|
the dynamics of open quantum systems plays a central role in a wide class of physical systems .usually , the dynamics of an open system is described in terms of the reduced density matrix which is defined by the trace over the environment degrees of freedom . on the ground of the weak - coupling assumption and the rotating wave approximation the dynamics may be formulated in terms of a quantum dynamical semigroups which yields a markovian master equation . however , the dynamical equation , thus obtained , is very often untractable .this fact has encouraged some physicists to look for alternative ways to describe open systems . instead of representing the dynamics of an open system by a quantum master equation for its density matrix , it is formulated in terms of a stochastic process for the open system s wave function .the stochastic representation of quantum markov processes already appeared in a fundamental paper by davies and was applied to derive the _photocounting formula_. while the theory was originally formulated in terms of a stochastic process for the reduced density matrix , in the last decade it has been proposed as a stochastic evolution of the state vector in the reduced hilbert space ( for a review see ) . at the same timecarmichael has developed the idea of an _ unravelling _ of the master equation in terms of an ensemble of _ quantum trajectories_. his theory is applicable only to a particular class of quantum systems ( the photoemissive sources ) and it induces to think that this treatment is equivalent to the master equation approach . in the last few years f.petruccione and h.p.breuer generalize and provide a mathematical formulation of carmichael s idea of _ quantum trajectories _ and of the _ monte carlo wave function method _ . in this contest their main result is to demonstrate that the dynamics given by the most general master equation in lindblad form can be represented as a _piecewise deterministic process _( pdp ) in the hilbert space of the open system . the physical basis to achieve their aimis provided by continuous measurement theory .the link between the first and second way of describing open quantum systems is , however , only one way in the sense i am going to explain .the cited paper contains a flow diagram that is a clear and straightforward picture to review _ concept and methods in the theory of open quantum systems _ , as the same authors title their work .the diagram shows us that pdps , describing a _ selective _ level of measurement , implies the _ non - selective one _ , appearing in the form of a lindblad markovian master equation , while the opposite has not been demonstrated .my aim is to put an arrow in the opposite direction in order to demonstrate the equivalence between the two approaches to open systems dynamics , at least under born - markov approximation . to achieve my goal i start from microscopic models and , exploiting the same approximation leading to the most general master equation ( not in lindblad form ), i solve this last exactly ( at a bath temperature ) obtaining an operatorial expression for .moreover , in the contest of optical quantum system i derive an expression for that is in accordance with the carmichael s one , but , differently from that , the mine is applicable also when the master equation is not in the lindblad form in which cases carmichael s solution is very often merely formal , as the same author underline in recent papers . 
my mathematical tool , here called _ nud theorem _, is applicable to a wide class of systems , provided that they satisfy the hypothesis necessary to make it working . optical physical systemswell satisfy the conditions of nud theorem s validity and in this contest i have studied , as example of monopartite system , a single mode cavity , and as example of multipartite systems , two two - level _ dipole - dipole _ interacting atoms and two - level not - directly - interacting atoms placed in fixed _ arbitrary _ point inside a loss cavity . in the investigation of the monopartite systemsthe novelty is constituted by the operatorial way to approach to the master equations of the systems , being already known in literature their dynamical properties .i reproduce for example the _ photocounting formula _ in order to appreciate the easyness of application of my method .moreover i choose to analyze monopartite systems at a temperature to highlight that , differently to multipartite ones , spontaneous emission provokes decoherence phenomena that , inevitably , guides the first kind of systems to their ground states . on the contrary, multipartite systems can exhibit collective properties induced by the common reservoir .this general feature already appeared in some fundamental papers in which the interaction between atomic dipoles , induced by electromagnetic field , could cause the decay of the multiatom system with two significantly different spontaneous emission rates , one enhanced and the other reduced . in two recent papers , as an example of multipartite system ,i have investigated the dynamics of a couple of spontaneously emitting two - level atoms , taking into account from the very beginning their dipole - dipole interaction and and two - level not - directly - interacting atoms placed in fixed _ arbitrary _ point inside a loss cavity .the result , not trivially expected , is that in such a condition the matter subsystem , because of the cooperation induced by energy loss mechanism , may be conditionally guided toward a stationary robust entangled state .the renewed interest toward entanglement concept reflects the consolidated belief that unfactorizable states of multipartite system provide an unreplaceable applicative resource , for example , in the quantum computing research area .however , the realization of quantum computation protocols suffers of the difficulty of isolating a quantum mechanical system from its environment . in this sensethe cited work are also aimed at proposing theoretical scheme to bypass decoherence manifestations , so taking its place among intense theoretical and experimental research of the last few years .the paper is structured as follow : in section ii i report the principal step and approximation leading to the microscopic derivation of the markovian master equation , putting in evidence some peculiar properties of it useful in order to demonstrate the _ nud theorem n_. in section iii i solve the markovian master equation when . in sectioniv i review the applications to old exemplary problem and to new , previous unresolved , problem . 
in sectionv i try to justify the obtained dynamical behaviour in terms of continuous measurement theory .it is well know that under the rotating wave and the born - markov approximations the master equation describing the reduced dynamical behavior of a generic quantum system linearly coupled to an environment can be put in the form +d(\rho_s ( t)),\ ] ] where is the hamiltonian describing the free evolution of the isolated system , and being the one - sided fourier transforms of the reservoir correlation functions .finally we recall that the operators and , we are going to define and whose properties we are going to explore , act only in the hilbert space of the system . eq . ( [ me ] ) has been derived under the hypothesis that the interaction hamiltonian between the system and the reservoir , in the schrdinger picture , is given by that is the most general form of the interaction . in the above expression and operators acting respectively on the hilbert space of the system and of the reservoir .( [ hi ] ) can be written in a slightly different form if one decomposes the interaction hamiltonian into eigenoperators of the system and reservoir free hamiltonian .definition supposing the spectrums of and to be discrete ( generalization to the continuous case is trivial ) let us denote the eigenvalue of ( ) by ( ) and the projection operator onto the eigenspace belonging to the eigenvalue ( ) by ( ) .then we can define the operators : from the above definition we immediately deduce the following relations =-\omega a_{\alpha } ( \omega),\;\;\ ; [ h_b , b_{\alpha } ( \omega)]=-\omega b_{\alpha } ( \omega),\ ] ] =+\omega a^{\dag}_{\alpha } ( \omega)\;\;\ ; and \;\;\ ; [ h_b , b^{\dag}_{\alpha } ( \omega)]=+\omega b^{\dag}_{\alpha } ( \omega).\ ] ] an immediate consequence is that the operators e raise and lower the energy of the system by the amount respectively and that the corresponding interaction picture operators take the form finally we note that summing eq .( [ aconalfadiomega ] ) over all anergy differences and employing the completeness relation we get the above positions enable us to cast the interaction hamiltonian into the following form the reason for introducing the eigenoperator decomposition , by virtue of which the interaction hamiltonian in the interaction picture can now be written as is that exploiting the rotating wave approximation , whose microscopic effect is to drop the terms for which , is equivalent to the schrodinger picture interaction hamiltonian : lemma [ th1 ] the rotating wave approximation imply the conservation of the free energy of the global system , that is =0\ ] ] the necessary condition involved in the previous proposition is equivalent to the equation =0 $ ] we are going to demonstrate .&=&[h_s+h_b , h_i]=[h_s , h_i]+[h_b , h_i]\\\nonumber & = & \sum_{\alpha , \omega } [ h_s , a_{\alpha } ( \omega ) ] \otimes b_{\alpha}^{\dag}(\omega ) + \sum_{\alpha , \omega } a_{\alpha } ( \omega ) \otimes [ h_b , b_{\alpha}^{\dag}(\omega)]\\\nonumber & = & -\sum_{\alpha , \omega}\omega a_{\alpha } ( \omega ) \otimes b_{\alpha}(-\omega)+\sum_{\alpha , \omega}\omega a_{\alpha } ( \omega ) \otimes b_{\alpha}(-\omega)=0.\end{aligned}\ ] ] where we have made use of eq .( [ com1],[com2 ] ) ' '' '' lemma [ th2 ] the detailed balance condition in the thermodynamic limit imply where ' '' '' corollary [ th3 ] let us suppose the temperature of the thermal reservoir to be the absolute zero , on the ground of lemma 2 immediately we see that let us now cast eq .( [ me ] ) in a 
slightly different form splitting the sum over the frequency , appearing in eq.([dissme ] ) , in a sum over the positive frequencies and a sum over the negative ones so to obtain where we again make use of eq .( [ aconalfadiomega ] ) . in the above expression we can recognize the first term as responsible of spontaneous and stimulated emission processes , while the second one takes into account stimulated absorption , as imposed by the lowering and raising properties of .therefore if the reservoir is a thermal bath at the corollary 4 tell us that the correct dissipator of the master equation can be obtained by suppressing the stimulated absorption processes in eq .( [ diss ] ) .we are now able to solve the markovian master equation when the reservoir is in a thermal equilibrium state characterized by .we will solve a cauchy problem assuming the factorized initial condition to be an eigenoperator of the free energy .this hypothesis does nt condition the generality of the found solution being able to extend itself to an arbitrary initial condition because of the linearity of the markovian master equation .nud theorem [ th4 ] if eq .( [ me ] ) is the markovian master equation describing the dynamical evolution of a open quantum system s , coupled to an environment b , assumed to be in the detailed - balance thermal equilibrium state characterized by a temperature t=0 , and if the global system is initially prepared in a state so that , where is the free energy of the global system then is in the form of a piecewise deterministic process , that is a process obtained combining a deterministic time - evolution with a jump process .the weak - coupling assumption is equivalent to .the above equation can be used to derive the reduced density matrix tracing over the environment degree of freedom .let us choose a factorized base in the tensor product hilbert space made of eigenvectors of and where and define respectively the spectra of and and and their relative degenerations .let us remember that we have made the semplificative hypotheses of discreteness of and .in addition we assume , also for easyness , that is bounded from below and made of isolated points . on the ground of these chiosesthe total density matrix can be written as imposes a strong selection rule on the indices of the summation , that is : by virtue of which the trace over the degrees of freedom of the environment , that can be written as latexmath:[\ ] ] the environment - induced multipartite cooperation not - vanished by the loss of memory of the environment . the found solution ( nud theorem ) tell us that the state of the system is a statistical mixture of the free energy system eigenoperators .this fact depends and it is consistent with the existence of the m - c measurement device because the act of measurement introduces a stochastic variable respect to which we can only predict the probability to have one or another of the possible alternative measures .these probabilities can be regarded as the weight of the possible alternative generalized trajectories and , analytically , they are given by the partial traces of the . with this approachthe dynamics has to be depicted as a statistical mixture of this alternative generalized trajectories .moreover the found trajectories evolve in time in a deterministic way : for example the trajectory relative to the initially excited system state is a shifted free evolution characterized by complex frequencies that means an exponential decaying free evolution. 
this statement may give the sensation that every system has to decay in its ground state because of the observed dynamics .it is in general not true .actually , if the system is multipartite , it is possible that it admits excited and entangled equilibrium decoherence free subspace ( dfs ) ( so as it happens in a lot of known models ) , constituted by states on which the action of is identically zero and then , if the system , during evolution , passes through one of these states , the successive dynamics will be decoupled from the environment evolution .an equilibrium condition is reached in which entanglement is embedded in the system . what could ensure , for example , that an entangled decoherence - free state , if existing , has been generated ?( for example ) the number of click we hear in a period of time long enough respect spontaneous emission rate . actually , if the numbers of the clicks is less than the numbers of initial excitations then we can say that it has been generated a decoherence - free state .if , on the contrary , the system is monopartite it is possible to demonstrate that the only possible dfs is generated by the ground state of the system so that a monopartite system will loose its internal coherence and the population of excited states because of the measurement and the consequent interaction with a memory - cleaned environment , unable to induce cooperation among the parts : there exists only one part .
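as a simple numerical check of these statements, one can unravel the corresponding master equation into monte carlo wave function trajectories. the sketch below is only illustrative and is not the model analyzed above: it assumes two resonant two-level atoms undergoing purely collective decay (a single jump operator j = sigma_1^- + sigma_2^-), sets the system hamiltonian to zero in the interaction picture, and uses a first-order jump scheme. starting from the state with one atom excited, trajectories that never register a click are conditionally driven into the entangled dark (singlet) state, while the other trajectories emit one photon and end in the ground state, so counting the clicks indeed reveals whether a decoherence-free state has been generated.

import numpy as np

rng = np.random.default_rng(0)
gamma, dt, t_max, n_traj = 1.0, 0.002, 10.0, 500      # illustrative parameters

# basis ordering: |ee>, |eg>, |ge>, |gg>
J = np.array([[0, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 1, 1, 0]], dtype=complex)           # collective lowering operator
JdJ = J.conj().T @ J
dark = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)   # decoherence-free singlet

def trajectory(psi0):
    psi, n_jumps = psi0.copy(), 0
    for _ in range(int(t_max / dt)):
        p_jump = gamma * dt * np.real(psi.conj() @ (JdJ @ psi))
        if rng.random() < p_jump:
            psi = J @ psi                              # a photon is detected (a click)
            n_jumps += 1
        else:
            psi = psi - 0.5 * gamma * dt * (JdJ @ psi) # no-click, non-hermitian drift
        psi = psi / np.linalg.norm(psi)
    return psi, n_jumps

psi0 = np.array([0, 1, 0, 0], dtype=complex)           # atom 1 excited, atom 2 in |g>
results = [trajectory(psi0) for _ in range(n_traj)]
silent = [psi for psi, n in results if n == 0]
print('fraction of trajectories with no click:', len(silent) / n_traj)        # about 1/2
print('overlap of the no-click states with the dark state:',
      np.mean([abs(dark.conj() @ psi) ** 2 for psi in silent]))               # close to 1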
|
_ quantum mechanical systems must in general be regarded as open systems . on one hand , this is due to the fact that , like in classical physics , any realistic system is subjected to a coupling to an uncontrollable environment which influences it in a non - negligible way . the theory of open quantum systems thus plays a major role in many applications of quantum physics , since perfect isolation of quantum systems is not possible and since a complete microscopic description or control of the environment degrees of freedom is not feasible or only partially so _ . practical considerations therefore force one to seek a simpler , effectively probabilistic description in terms of an open system . there is a close physical and mathematical connection between the evolution of an open system , the state changes induced by quantum measurements , and the classical notion of a stochastic process . the paper _ provides _ a bibliographic review of these interrelations , it _ shows _ the mathematical equivalence between the _ markovian master equation _ and generalized _ piecewise deterministic processes _ , and it _ introduces _ the _ open system in an open observed environment _ model .
|
the kardar - parisi - zhang ( kpz ) universality class is a prominent nonequilibrium class , ruling diverse kinds of nonlinear fluctuations in growing interfaces , driven particle systems , fluctuating hydrodynamics , and so on . particularly noteworthyare recent analytical developments on the -dimensional kpz class , which have exactly determined a number of its statistical properties on the solid mathematical basis .specifically , for -dimensional kpz - class interfaces , the interface height , measured along the growth direction at lateral position and time , grows as with parameters and , a rescaled random variable , and being the characteristic growth exponent of the -dimensional kpz class .then the recent analytical studies have consistently shown that exhibits one of the few universal distribution functions , selected by the choice of the initial condition , or equivalently the global shape of the interfaces . for example, circular interfaces grown from a point nucleus show the largest - eigenvalue distribution of random matrices in gaussian unitary ensemble , called the gue tracy - widom distribution , while flat interfaces on a linear substrate show the equivalent for gaussian orthogonal ensemble .this implies that the kpz class splits into a few universality subclasses .they are also characterized by different spatial correlation functions , whose exact forms are also known analytically .these results for the circular and flat subclasses were corroborated by direct experimental verifications using growing interfaces of turbulent liquid crystal ( lc ) .in contrast to these clear characterizations of the distribution and the spatial correlation , analytical results on the temporal correlation remain limited , hence challenging .the lc experiment and numerical simulations showed that the temporal correlation is also different between the circular and flat subclasses .firstly , the two - time correlation function denotes the ensemble average defined over infinitely many realizations .it is independent of because of the translational symmetry .therefore , for evaluation , we can take averages over positions too to achieve better statistical accuracy , without changing its mathematical definition . ] was shown to decay , in its rescaled form , as with for the flat case and for the circular case .the latter implies , i.e. , correlation remains strictly positive , forever , in the circular case .secondly , the persistence probability was also measured , which is defined here by the probability that the fluctuation at a fixed position never changes its sign ( from the one denoted by the subscript ) in the time interval ] , which is then regarded as a time series .such dichotomization has recently been used to characterize time - correlation properties of interactions on lipid membranes and of turbulence , successfully .the constructed process is compared with a theoretically defined dichotomous process , arguably the simplest and best - studied one , namely the renewal process ( rp ) .rp consists of a single two - state variable , which switches from one to the other state after random , uncorrelated waiting times generated by a power - law distribution : = \ !{ \left(}\frac{\tau}{\tau_0}{\right)}^{-\theta}\hspace{-12pt } ,\hspace{9pt}(\tau \geq \tau_0 ) .\label{eq : waitingtimerp}\ ] ] this model shows web and aging for .concerning kpz - class interfaces , we use the experimental data of the circular and flat interfaces obtained in refs . 
( lc turbulence ) : for the circular ( or flat ) case , total observation time was , time resolution was , and realizations were used , respectively .we also analyze newly obtained numerical data for circular interfaces of the off - lattice eden model ( ) and flat interfaces of the discrete polynuclear growth ( dpng ) model ( ) .further descriptions of the systems and parameters are given in [ app : a ] .against for the circular interfaces , obtained at different in the lc experiment ( main panels ) and the eden model ( insets ) .the dashed lines are guides to the eyes indicating exponent .the same set of colors / symbols and is used in both panels .the ordinates are arbitrarily shifted . ]first of all , we stress that rp is far too simple to fully describe kpz , because rp is a two - state model without even spatial degrees of freedom and has uncorrelated waiting times .we nonetheless measure the waiting times between two sign changes of , first for the circular interfaces , for which we anticipate relation to web as discussed above .more specifically , we define the waiting - time distribution by the probability that the sign renewed at time ( changed to the subscripted one ) lasts over time length or longer , hence is the complementary cumulative distribution function ( ccdf ). figure [ fig1a ] shows the results for both the lc experiment ( main panels ) and the eden model ( insets ) .remarkably , in both cases we find a clear power law as described in eq .( [ eq : waitingtimerp ] ) with exponent , while the cutoff is out of the range of our resolution . and , respectively , for the circular interfaces at different in the lc experiment ( main panels ) and the eden model ( insets ) . and are rescaled by .the black lines indicate rp s exact results , eqs .( [ eq : forwrecurrp ] ) and ( [ eq : backrecurrp ] ) , with .the data are normalized so that they have the same statistical weight as rp s exact results in the range covered by their abscissa .the same set of colors / symbols and is used in all panels .note that data at are not shown for because the remaining time is then too short to measure the distribution of . ]this similarity to rp leads us to compare further statistical properties between the two systems .first we focus on the forward recurrence time , defined as the interval between time and the next sign change , as well as the backward recurrence time , which is the backward interval from to the previous sign change . for rp , dynkin derived exact forms of the probability density function ( pdf ) of and as follows , for : with and denoting the pdf of the beta distribution .although their derivation essentially relies on the independence of waiting times in rp , a feature not shared with kpz , we find , as shown in fig .[ fig1b ] , that both experimental and numerical results for the circular interfaces precisely follow rp s exact results indicated by the solid lines ( except finite - time corrections ) .note that the persistence probability considered in eq .( [ eq : persprob ] ) actually amounts to the ccdf of , i.e. , .this indicates that the functional form of the persistence probability , which is usually intractable for such spatially - extended nonlinear systems , seems to be given by rp s exact result ( [ eq : forwrecurrp ] ) in the case of the circular kpz subclass .we also remark that the explicit dependence of the pdfs on indicates the aging of the system . 
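the same limit laws are easy to check by simulating rp directly; the short python sketch below (illustrative parameter values only) draws power-law waiting times as in eq. ([eq:waitingtimerp]), records the forward and backward recurrence times at an observation time t0, and compares the rescaled histograms with dynkin's results, written here in their standard form.

import numpy as np

rng = np.random.default_rng(0)
theta, tau0, t0 = 0.8, 1.0, 1000.0         # illustrative values, 0 < theta < 1
n_samples = 20000

fwd, bwd = [], []
for _ in range(n_samples):
    t, prev = 0.0, 0.0
    while t <= t0:
        prev = t
        t += tau0 * rng.random() ** (-1.0 / theta)   # waiting time with ccdf (tau/tau0)**(-theta)
    fwd.append(t - t0)        # forward recurrence time
    bwd.append(t0 - prev)     # backward recurrence time
fwd, bwd = np.array(fwd) / t0, np.array(bwd) / t0    # rescale by t0

# dynkin's limit laws for the rescaled recurrence times (standard forms, expected
# to coincide with eqs. ([eq:forwrecurrp]) and ([eq:backrecurrp]) of the text)
c = np.sin(np.pi * theta) / np.pi
pdf_forward  = lambda y: c * y ** (-theta) / (1.0 + y)                   # y = E/t0 > 0
pdf_backward = lambda x: c * x ** (-theta) * (1.0 - x) ** (theta - 1)    # 0 < x = B/t0 < 1

hist, edges = np.histogram(bwd, bins=50, range=(0.0, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.c_[centers[:5], hist[:5], pdf_backward(centers[:5])])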
for the lc experiment ( for circles , squares , and diamonds , respectively ) and the eden model ( ) , compared to the lamperti distribution ( [ eq : lamperti ] ) with ( dashed line ) .the gray vertical line indicates the ensemble - averaged value .the existence of the broad asymptotic distribution is a direct evidence of web in the circular kpz subclass .( b ) correlation function of sign , , measured at different for the lc experiment ( main panel ) and the eden model ( inset ) .the dashed and dotted lines indicate exponents and , respectively . ] in contrast to this agreement , we also find statistical properties that are clearly different between the two systems .the occupation time , i.e. , the length of time spent by the positive sign , is a quantity well - studied in two - state stochastic processes and in more general scale - invariant phenomena ( see , e.g. , ) .it is simply related to the time - averaged sign by .for rp with , lamperti showed that it does not converge to the ensemble average , but remains stochastic even for , with its pdf derived exactly as follows : this distributional behavior of the time - averaged sign is a clear evidence of web in rp .the corresponding pdfs obtained at different for the circular kpz interfaces [ fig . [ fig2a](a ) symbols ] indeed indicate an asymptotic broad distribution , demonstrating that remains stochastic and does not converge to the ensemble average ( determined by the gue tracy - widom distribution ) shown by the gray vertial line in fig .[ fig2a](a ) .this demonstrates that kpz indeed exhibits web , at least for the circular case . on the other hand ,the found distribution is clearly different from the lamperti s one for rp with ( black dashed line ) .we find instead a nontrivial distribution universal within the circular kpz subclass , as supported by good agreement between experiments and simulations ( symbols and turquoise solid line ) . of the circular interfaces , measured for the positive ( a , c ) and negative ( b , d ) fluctuations ( sign at is used ) in the lc experiment ( a , b ) and the eden model ( c , d ) .the dashed and dotted lines indicate exponents and , respectively .the same set of colors / symbols and is used in all panels . ]another quantity of interest is the correlation function of sign , .this can be expanded by the generalized persistence probability , i.e. , the probability that the sign changes times between and ( hence ) : where denotes the probability that fluctuations at take the sign indicated by the subscript . for rp with , one can explicitly calculate the infinite sum of eq .( [ eq : signcorrpers ] ) and obtain .in contrast , for the circular kpz interfaces , we find that decays as with [ fig .[ fig2a](b ) ] ( see also footnote [ ft1 ] ) , in the same way as the rescaled correlation function does [ eq .( [ eq : corrfunc ] ) ] . since the relation ( [ eq : signcorrpers ] ) holds generally and is alike , the difference from rp should stem from , which encodes correlation between waiting times . for rp, one can show with ( see [ app : c ] ) .now , for the circular kpz interfaces , the results in fig .[ fig2b ] show that the long - time behavior seems to be consistent with that of rp for some quantities the asymptotic decay is only reached by the numerical data , obtained with longer time . ] , but the short - time behavior for odd shows faster decay than that of rp ( after the initial growth , which occurs at for rp ; see [ app : c ] ) . 
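the broad distribution predicted by lamperti can likewise be reproduced with a few lines of code; the sketch below (illustrative parameters) simulates the dichotomous rp signal, accumulates the occupation time of the positive sign, and compares the distribution of the occupation fraction p with lamperti's law, from which the distribution of the time-averaged sign follows through the change of variable to 2p - 1.

import numpy as np

rng = np.random.default_rng(1)
theta, tau0, t_obs = 0.8, 1.0, 1000.0      # illustrative values
n_real = 10000

def occupation_fraction():
    t, sign, t_plus = 0.0, rng.choice((1, -1)), 0.0
    while t < t_obs:
        tau = tau0 * rng.random() ** (-1.0 / theta)
        if sign > 0:
            t_plus += min(tau, t_obs - t)
        t += tau
        sign = -sign
    return t_plus / t_obs

p = np.array([occupation_fraction() for _ in range(n_real)])
s_bar = 2.0 * p - 1.0                       # time-averaged sign

def lamperti_pdf(x):
    # lamperti's law for the occupation fraction of a symmetric two-state rp, 0 < theta < 1
    a, b = x ** theta, (1.0 - x) ** theta
    return (np.sin(np.pi * theta) / np.pi) * x ** (theta - 1) * (1.0 - x) ** (theta - 1) \
        / (a * a + b * b + 2.0 * a * b * np.cos(np.pi * theta))

hist, edges = np.histogram(p, bins=40, range=(0.0, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.c_[centers[:5], hist[:5], lamperti_pdf(centers[:5])])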
in other words , has heavier weight in the short - time regime for odd .this difference from rp gives nontrivial contribution to the sum in eq .( [ eq : signcorrpers ] ) , which is absent for rp .we consider that this is how the different behavior of the correlation function arises , which captures , for kpz , the characteristic time correlation of the ( non - binarized ) kpz - class fluctuations . against for the flat interfaces , obtained at different in the lc experiment ( main panels ) and the dpng model ( insets ) .the dashed and dotted lines are guides to the eyes , indicating exponents labeled alongside , though for the dotted lines we expect larger asymptotic exponents ( see text ) .the same set of colors / symbols and is used in both panels .the ordinates are arbitrarily shifted . ]now we turn our attention to the flat interfaces .figure [ fig3a ] shows the waiting - time distribution ( ccdf ) for the flat lc experiment ( main panels ) and the dpng model ( insets ) . at short waiting times , we identify power - law decay with exponent .this exponent seems to be different from previously observed for a related quantity in the kpz stationary state with , whereas we measure here the waiting - time distribution , or the persistence probability of the sign of with the condition .although both probabilities concern the first return to zero , the different definitions may lead to different exponent values .] , but is identical to the one we found for the circular interfaces ( fig .[ fig1a ] ) . for the flat interfaces ( fig .[ fig3a ] ) , however , this power law is followed by another one with larger ( in magnitude ) exponent for longer waiting times , which now takes different values between positive and negative fluctuations .the measured exponents do not seem to reach their asymptotic values within our observation time , increasing gradually with , but they are clearly asymmetric with respect to the sign , in sharp contrast with the exponent for the shorter waiting times or for the circular interfaces .moreover , taken at different overlaps when it is plotted against ( fig .[ fig3a ] ) .this indicates that the value of separating the two power - law regimes is not constant , but grows with , showing the aging property of the waiting - time distribution . and , respectively , for the flat interfaces at different in the lc experiment ( main panels ) and the dpng model ( insets ) . and are rescaled by .the black lines indicate numerical results for the 2-step rp at .the data are normalized so that they have the same statistical weight in the range covered by their abscissa .the same colors / symbols correspond to the same in all panels . ]these results on the flat - kpz waiting - time distribution lead us to introduce a variant of rp with two power - law regimes , called hereafter the 2-step rp model : ) ] depends on , it is _ not _ in the scope of the models considered in the renewal theory . ] we then solved it numerically with ( as observed experimentally ) in the following way : first , the initial sign was chosen to be either or with the equal probability . the first waiting time was generated according to , because eq . 
( [ eq : waitingtimerp2 ] ) is invalid for .subsequent waiting times were generated by eq .( [ eq : waitingtimerp2 ] ) , until the time ( cumulative sum of waiting times ) exceeds the recording time .we sampled independent realizations to investigate statistical properties of this 2-step rp model .now we compare this 2-step rp model and the flat kpz interfaces .figure [ fig3b ] shows the forward and backward recurrence - time distributions .we find that these quantities for the flat kpz interfaces are reproduced by the 2-step rp model reasonably well , similarly to those for the circular interfaces found in agreement with the standard rp .aging of the recurrence - time distributions is also clear in both cases .note however that , while is known to hold for the standard rp [ eq .( [ eq : waitingtimerp ] ) ] with , our 2-step rp rather indicates [ dotted lines in fig .[ fig3b](a , b ) ] . since is given by the derivative of the persistence probability , this implies , hence asymptotically and are expected for the flat kpz subclass .for the lc experiment ( for circles , squares , and diamonds , respectively ) and the dpng model ( ; turquoise line ) , compared to numerical data for the 2-step rp ( ; dashed line ) .the gray vertical line indicates the ensemble - averaged value .the existence of the broad asymptotic distribution is a direct evidence of web in the flat kpz subclass , but in the form different from that of the circular case .( b - d ) correlation function of sign , , at different for the 2-step rp ( ) ( b ) , the lc flat interfaces ( c ) , and the dpng model ( d ) .the dashed lines in the panels ( c , d ) indicate the exponent , while the dotted line in the panel ( b ) shows . ] in contrast to this agreement in the recurrence - time distributions , the distribution of the time - averaged sign turns out to be different between the flat kpz subclass and the 2-step rp [ fig .[ fig4](a ) ] , analogously to the results for the circular interfaces .more specifically , both the flat kpz subclass and the 2-step rp are found to show asymptotic broad distributions [ fig .[ fig4](a ) ] , hence both of them exhibit web , but the distributions are again clearly different between the two systems . note here that the time - averaged sign distribution for the standard rp [ eq . ([ eq : waitingtimerp ] ) ] with becomes infinitely narrow in the limit ; this is however not the case here , despite .the existence of the broad distribution results from the aging of the waiting - time distribution , i.e. , from the fact that the crossover time in the waiting - time distribution grows with [ see fig .[ fig3a ] and eq .( [ eq : waitingtimerp2 ] ) ] . for the 2-step rp ( ) ( a , b ) and for the flat kpz - class interfaces [ lc experiment ( c , d ) and dpng model ( e , f ) ] , measured for the positive ( a , c , e ) and negative ( b , d , f ) signs .the dashed lines in all panels indicate the exponent found in the short - time regime ( ) of the 2-step rp . the dotted and dot - dashed lines are guides for the eyes indicating exponents and , respectively , which characterize the long - time regime for the 2-step rp [ see eq .( [ eq : genpersrp2 ] ) ] . the same set of colors / symbols and is used in all panels . ] the difference between the flat kpz subclass and the 2-step rp is also detected in the correlation function of sign , : while our simulations of the 2-step rp show [ fig . 
[ fig4](b ) ] , for the flat interfaces it decays as with [ fig .[ fig4](c , d ) ] , the characteristic exponent for the decorrelation of the flat kpz subclass [ see eq .( [ eq : corrfunc ] ) ] . similarly to the circular case , this difference results from correlation of waiting times , which can be characterized by the generalized persistence probability . for the 2-step rp ,i.e. , in the absence of correlation , we numerically find [ fig .[ fig5](a , b ) ] where in the latter case the two double signs are set to be the same sign for even and the opposite ones for odd .this long - time behavior can also be seen in the flat kpz subclass [ fig .[ fig5](c , d ) for the lc experiment and ( e , f ) for the dpng model ] .in contrast , short - time behavior of is found to be different between the 2-step rp and the flat kpz subclass [ compare data and the dashed lines in fig .[ fig5](c - f ) ] , the latter carrying heavier weight in the short - time regime .analogously to the circular case , such pronounced short - time behavior of seems to generate , via eq .( [ eq : signcorrpers ] ) , the characteristic decay of the correlation function slower than that of the 2-step rp [ fig .[ fig4](b ) ] .we have shown an unexpected similarity between sign renewals of the kpz - class fluctuations and rp , studied in the context of aging phenomena and web . despite the fundamental difference between the two systems, we found , for the circular interfaces , that the kpz waiting times obey simple power - law distributions identical to those defining rp , while those for the flat interfaces correspond to its straightforward extension with two power - law regimes [ eq . ( [ eq : waitingtimerp2 ] ) , the 2-step rp model ] .further quantitative agreement has been found in the recurrence - time distributions ( figs .[ fig1b ] and [ fig3b ] ) , from which the agreement in the persistence probability follows .these quantities have remained theoretically intractable for kpz , but now , following the agreement we found , their precise forms are revealed for the circular interfaces , thanks to the exact solutions for the original rp .this also implies that recurrence - time statistics may be determined independently of the intercorrelation of waiting times , contrary to the usual beliefs .the correlated waiting times of kpz otherwise generate characteristic aging properties of the kpz - class fluctuations ( figs .[ fig2a ] and [ fig4 ] ) , especially their broad asymptotic distributions of the time - averaged sign .this indicates web of the kpz - class fluctuations , which turned out to be different from that of rp , and in fact also from other types of web , known from the studies of single - particle observations .we therefore consider that the web found in this study is of a new kind , characteristic of many - body problems governed by the kpz universality class .this also implies that rp can not be a proxy for the full kpz dymamics ; instead rp reproduces only some of the time - correlation properties of kpz , surprisingly well , though .in fact , such a partial similarity to rp was also argued in the past for the fractional brownian motion ( fbm ) , in the context of linear growth processes .krug et al . showed , for the stationary state of linear growth processes , that the stochastic process is equivalent to fbm .its first - return time ( corresponding to the waiting time of its sign ) is then characterized by a power - law distribution with exponent with being the growth exponent , or the hurst exponent of fbm . 
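the 2-step rp data discussed above can be regenerated along the following lines. the sketch below is schematic: the two-regime waiting-time law used here, a ccdf with exponent theta1 up to a crossover scale growing linearly with the elapsed time and a steeper exponent theta2 beyond it, is only an illustrative stand-in for eq. ([eq:waitingtimerp2]), and the exponents and the crossover amplitude are placeholder values, not those used in the analysis; the sampling protocol (random initial sign, first waiting time drawn from the plain power law, renewals accumulated up to the recording time) follows the description given above.

import numpy as np

rng = np.random.default_rng(2)
tau0, t_rec, n_real = 1.0, 10000.0, 10000
theta1, theta2 = 0.8, 1.6                  # placeholder exponents for the two regimes
a_cross = 0.5                              # placeholder: crossover scale tau_x(t) = a_cross * t

def waiting_time(t):
    # two-regime power law: ccdf = (tau/tau0)**(-theta1) for tau0 <= tau < tau_x(t),
    # continued with exponent theta2 beyond tau_x(t); inverse-transform sampling
    tau_x = max(a_cross * t, tau0)
    u, u_x = rng.random(), (tau_x / tau0) ** (-theta1)
    if u > u_x:
        return tau0 * u ** (-1.0 / theta1)
    return tau_x * (u / u_x) ** (-1.0 / theta2)

def trajectory():
    sign0 = rng.choice((1, -1))
    epochs = [tau0 * rng.random() ** (-1.0 / theta1)]   # first waiting time: plain power law
    while epochs[-1] < t_rec:
        epochs.append(epochs[-1] + waiting_time(epochs[-1]))
    return sign0, epochs

def time_averaged_sign(sign0, epochs):
    t_prev, sign, acc = 0.0, sign0, 0.0
    for t in epochs:
        acc += sign * (min(t, t_rec) - t_prev)
        if t >= t_rec:
            break
        t_prev, sign = t, -sign
    return acc / t_rec

s_bar = np.array([time_averaged_sign(*trajectory()) for _ in range(n_real)])
print(np.histogram(s_bar, bins=20, range=(-1.0, 1.0), density=True)[0])

from the same renewal epochs one can also read off the waiting-time ccdf at a given age, the recurrence times, and the sign correlation function, exactly as in the previous sketches.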
then suggested that the sign of fbm would form rp , showing numerical observations of its persistence probability as a partial support , but it turned out later that the two models behave differently in other statistical quantities , because of the intercorrelation of waiting times . in our contribution , we studied the growth regime of nonlinear growth processes in the kpz class and compared the sign of the stochastic process with rp .as already summarized , we showed thereby precise agreement in the waiting - time distribution and the persistence probability , but not in the other statistical properties we studied .understanding the mechanism of this partial agreement is an important issue left for future studies , all the more because no theoretical understanding has been made so far on persistence properties of the kpz growth regime .such developments will also help to understand the deviations from rp , which we believe carry characteristic information of underlying growth processes ( recall our results on the correlation function ) .we hope this direction of analysis may afford a clue to elucidate hitherto unexplained time - correlation properties of the kpz class .we also believe that our approach may be useful to characterize other scale - invariant processes such as critical phenomena .we acknowledge fruitful discussions with e. barkai , i. dornic , c. godrche , and s. n. majumdar . this work is supported in part by kakenhi from jsps ( no .jp25707033 and no .jp25103004 ) , the jsps core - to - core program `` non - equilibrium dynamics of soft matter and information '' , and the national science foundation under grant no .nsf phy11 - 25915 .in this appendix we briefly describe the three systems studied in this paper , namely the lc experiment , the off - lattice eden model , and the dpng model , all known to be in the kpz class .the experimental results are obtained from the raw data acquired in refs .the readers are referred to these publications for the complete description of the experimental system .the experiment concerns fluctuating interfaces between two turbulent regimes of electrically driven nematic liquid crystal , called the dynamic scattering modes 1 and 2 ( dsm1 and dsm2 , respectively ) .the dsm1/dsm2 configuration can be argued to lie in pure two dimensions , so the interfaces in between are one - dimensional . under sufficiently high applied voltage , here ,dsm2 is more stable than dsm1 , and the interfaces grow until the whole system is occupied by dsm2 .the initial nucleus of the dsm2 state can be introduced by shooting laser pulses .this allows us to study both circular and flat growing interfaces : circular interfaces grow from a point nucleus generated by focused laser pulses , while flat interfaces originate from a linear region of dsm2 , created by linearly expanded laser pulses . in refs . , takeuchi and sano measured 955 circular interfaces over time length and 1128 flat interfaces over , and found the characteristic statistical properties of the circular and flat kpz subclasses , respectively . in the present study , we employ the same data sets which are guaranteed to belong to these subclasses , and analyze the sign of the height fluctuations as explained in the main text .the sign renewals are detected at every and for the circular and flat interfaces , respectively .numerical data for circular interfaces are obtained with the off - lattice eden model , the version introduced in ref . which is sometimes called the off - lattice eden d model . 
while detailed descriptions can be found in ref . , in this model , one starts with a round particle of unit diameter placed at the origin of two - dimensional continuous space . at each time step ,one randomly chooses one of the existing particles , and attempts to put an identical particle next to it in a direction randomly chosen from the range .if the new particle does not overlap any existing particles , it is added as attempted , otherwise the particle is discarded .time is then increased by , whether the attempt is adopted or not .particles without enough adjacent space , to which no particle can be added any more , are labelled inactive and excluded from the particle counter ( but can still block new particles ) .since we are interested in the interface , or specifically the outermost closed loop of adjacent particles , particles surrounded by the interface are also marked inactive and treated likewise .this model was previously shown to belong to the circular kpz subclass .the data presented in the present paper are newly obtained from independent simulations of time length , and the sign renewals are detected at every time unit .numerical simulations of flat interfaces are performed with the dpng model .this is a discretized version of the png model , which is one of the exactly solvable models in the -dimensional kpz class .the evolution of the height variable of the dpng model , with non - negative integers , is given by the following equation : where is an independent and identically distributed random variable generated from the geometric distribution , with .the original png model with nucleation rate and nucleus expansion rate is retrieved by the continuum limit and . in our study, we set and the periodic boundary condition with ( or lattice units ) .we start from the flat initial condition and evolve the system until by independent simulations .the sign renewals are detected at every time step , i.e. , time unit .note that the dpng model with shows the same universal statistical properties as the original png model , provided that the height variable is appropriately rescaled using non - universal scaling coefficients [ and in eq .( [ eq : height ] ) ] .the values of the scaling coefficients depend on and : for example , they are estimated at and for the dpng model studied here ( the evaluation method described in ref . is used ) , while the values for the original png model ( corresponding to ) are and .here we derive two asymptotic behaviors of the generalized persistence probability for the renewal process with a power - law waiting - time distribution .we assume eq .( [ eq : waitingtimerp ] ) with for the waiting time distribution .thus , the laplace transform of the probability density function ( pdf ) of waiting times , , is given by with .the generalized persistent probability can be represented by - \text{prob}[\delta t_{n+1 } < \delta t;t_0],\ ] ] where $ ] is the probability that holds , with waiting times and the forward recurrence time ( time elapsed from to the first renewal event since then ) . for ,the double laplace transform of with respect to and can be calculated as follows : where is the double laplace transform of , given by therefore , now we consider the following two asymptotic limits . for ( ) , we obtain where we used approximation . in other words , is so small that , i.e. , . 
then the inverse laplace transform yields for and . we note that this asymptotic behavior does not depend on nor . in contrast , for ( ) , where we used the same approximation as in the previous case . the inverse laplace transform then yields for . this asymptotic behavior is independent of but does depend on . brokmann , x. , hermier , j.p . , messin , g. , desbiolles , p. , bouchaud , j.p . , dahan , m. : statistical aging and nonergodicity in the fluorescence of single nanocrystals . phys . rev . lett . * 90 * , 120601 ( 2003 ) dynkin , e.b . : some limit theorems for sums of independent random variables with infinite mathematical expectations . izv . akad . nauk sssr ser . mat . * 19 * , 247 - 266 ( 1955 ) . selected translations math . stat . prob . * 1 * , 171 - 189 ( 1961 ) . jeon , j.h . , tejedor , v. , burov , s. , barkai , e. , selhuber - unkel , c. , berg - srensen , k. , oddershede , l. , metzler , r. : _ in vivo _ anomalous diffusion and weak ergodicity breaking of lipid granules . phys . rev . lett . * 106 * , 048103 ( 2011 ) manzo , c. , torreno - pina , j.a . , massignan , p. , lapeyre , g.j . , lewenstein , m. , garcia parajo , m.f . : weak ergodicity breaking of receptor motion in living cells stemming from random diffusivity . phys . rev . x * 5 * , 011021 ( 2015 ) metzler , r. , jeon , j.h . , cherstvy , a.g . , barkai , e. : anomalous diffusion models and their properties : non - stationarity , non - ergodicity , and ageing at the centenary of single particle tracking . phys . chem . chem . phys . * 16 * , 24128 - 24164 ( 2014 ) tabei , s.m.a . , burov , s. , kim , h.y . , kuznetsov , a. , huynh , t. , jureller , j. , philipson , l.h . , dinner , a.r . , scherer , n.f . : intracellular transport of insulin granules is a subordinated random walk . proc . natl . acad . sci . usa * 110*(13 ) , 4911 - 4916 ( 2013 ) weigel , a.v . , simon , b. , tamkun , m.m . , krapf , d. : ergodic and nonergodic processes coexist in the plasma membrane as observed by single - molecule tracking . proc . natl . acad . sci . usa * 108 * , 6438 - 6443 ( 2011 ) wong , i.y . , gardel , m.l . , reichman , d.r . , weeks , e.r . , valentine , m.t . , bausch , a.r . , weitz , d.a . : anomalous diffusion probes microstructure dynamics of entangled f - actin networks . phys . rev . lett . * 92 * , 178101 ( 2004 )
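as a complement to the asymptotic analysis in this appendix , the renewal process itself is straightforward to simulate , so the persistence behavior can be checked numerically . the python sketch below is only an illustration of this idea : the exponent , observation times and sample sizes are arbitrary placeholders rather than values used in the paper , and the sign decomposition ( the double signs in the main text ) is ignored , so the estimate is simply the probability of observing no renewal in the interval between the two observation times .

```python
import numpy as np

rng = np.random.default_rng(0)

def waiting_time(theta):
    # power-law waiting time, p(tau) ~ tau^(-1-theta) for tau >= 1 (pareto form)
    return (1.0 - rng.random()) ** (-1.0 / theta)

def persistence(theta, t0, dt, n_samples=20000):
    """fraction of realizations with no renewal event in (t0, t0 + dt]."""
    survive = 0
    for _ in range(n_samples):
        t, last = 0.0, 0.0
        while t <= t0 + dt:
            last = t
            t += waiting_time(theta)
        survive += last <= t0   # last renewal before t0+dt already happened before t0
    return survive / n_samples

if __name__ == "__main__":
    theta = 0.8                 # illustrative exponent, not a value from the paper
    for t0 in (10.0, 100.0):
        print([persistence(theta, t0, dt) for dt in (1.0, 10.0, 100.0, 1000.0)])
```

for exponents smaller than one , runs of this kind should display the dependence on the ratio of the two observation times that characterizes aging renewal processes , in line with the asymptotics derived above .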
|
tracking the _ sign _ of fluctuations governed by the -dimensional kardar - parisi - zhang ( kpz ) universality class , we show , both experimentally and numerically , that its evolution has an unexpected link to a simple stochastic model called the renewal process , studied in the context of aging and ergodicity breaking . although kpz and the renewal process are fundamentally different in many aspects , we find remarkable agreement in some of the time correlation properties , such as the recurrence time distributions and the persistence probability , while the two systems can be different in other properties . moreover , we find inequivalence between long - time and ensemble averages in the fraction of time occupied by a specific sign of the kpz - class fluctuations . the distribution of its long - time average converges to nontrivial broad functions , which are found to differ significantly from that of the renewal process , but instead be characteristic of kpz . thus , we obtain a new type of ergodicity breaking for such systems with many - body interactions . our analysis also detects qualitative differences in time - correlation properties of circular and flat kpz - class interfaces , which were suggested from previous experiments and simulations but still remain theoretically unexplained . example.eps gsave newpath 20 20 moveto 20 220 lineto 220 220 lineto 220 20 lineto closepath 2 setlinewidth gsave .4 setgray fill grestore stroke grestore
|
with the advent of the science of complexity , numerous complexity measures have been proposed . these measures can be grouped mainly into two categories : the first group follows the route of constructing the shortest computer program corresponding to a given string . the well - known kolmogorov - chaitin and logical depth measures fall into this category . the second group follows the information - theoretic approaches , whose main examples are the effective complexity , thermodynamic depth , shiner - davison - landsberg and lopez - ruiz - mancini - calbet measures . the information - theoretic approaches are founded on the main idea of multiplying a measure of order by that of disorder . in this sense , these approaches rely heavily on the definitions of entropy as a measure of order / disorder . for example , the shiner - davison - landsberg measure uses the boltzmann - gibbs - shannon ( bgs ) entropy of the physical system in its definition , while thermodynamic depth as a complexity measure considers the entropy of the ensemble focusing on the entire history of the system under investigation . however , to the best of our knowledge , none of these complexity measures has been constructed particularly to deal with the non - equilibrium stationary states resulting from the external influence of a field . such a complexity measure has been recently introduced by saparin et al . and applied to the logistic map , to heart rate variability and to the analysis of electroencephalograms of epilepsy patients . this new complexity measure is an information - theoretic one and is called renormalized entropy , historically originating from klimontovich s s - theorem ( the letter s here stands for self - organization ) . the renormalized entropy is theoretically equivalent to the negative relative entropy between a reference distribution and any other distribution obtained either analytically or numerically through a time - series analysis . however , it is not only associated with the relative entropy , since an additional procedure of renormalization is also introduced . the process of renormalization is used in order to compare two states by equating their mean energies so that non - equilibrium stationary states attain the same mean energy , thereby mimicking the ordinary closed - system formalism . combining renormalization and the relative entropy , the renormalized entropy decreases as the control parameter increases , indicating the relative degree of order in the system , as first suggested by haken in the context of self - organization . on the other hand , another related issue in the context of self - organization is the existence of bifurcations often observed in nonlinear dynamical systems : systems possessing a stable fixed point become unstable as they recede away from this stable fixed point as a result of increasing nonlinear effects . these systems eventually pave their way to the new stable branches through bifurcations .
for the system away from the stable fixed point , this process continues until the system settles into a new stationary state , thereby increasing its order as a signature of self - organization due to the non - linearity , dissipation and the non - equilibrium exhibited by the system ( see the detailed discussion by nicolis and prigogine in ref . ) . physical processes such as the rayleigh - benard and taylor instability experiments , and bacterial and _ dictyostelium discoideum _ colonies , fall into the aforementioned category . these systems , being open due to the exchange of energy and/or matter with their surroundings , cannot be analyzed in terms of the usual h - theorem , since it is valid only for isolated systems . therefore , prigogine proposed a more general form of the second law , i.e. , where is the entropy produced inside the system and is the transfer of entropy across the boundaries of the system . in this more general setting , whereas can be positive or negative depending on the flow of energy across the boundaries of the system . the overall sign of is determined by the interplay between and . all the aforementioned models of self - organization require until the system settles into the stationary state corresponding to the most ordered pattern . the paper is organized as follows . in section 2 , the calculation method of the renormalized entropy is given . in section 3 , this complexity measure is applied to discrete maps possessing different universality classes , i.e. , to the logistic map , which has a periodic route to chaos , and its self - similar windows such as period 3 and period 5 , to show its robustness , and to the sine - circle map , which has periodic and quasi - periodic routes to chaos . finally , we discuss our results and compare the two different routes to chaos in terms of the renormalized entropy . let us consider a generic dynamical map where and denote the number of iterations and the corresponding control parameter of the dynamical map , respectively . a small change in the value of the control parameter , , yields two normalized distributions , i.e. , and . the corresponding shannon entropies read . let us now assume that the system , i.e. , the dynamical map under scrutiny , evolves in such a manner that the state with index is evolved to through increasing order , namely , the system becomes self - organized as the control parameter increases . then , setting the equilibrium temperature , the normalized boltzmann - gibbs distribution reads , with the following effective energy . the s - theorem by klimontovich equates the effective energies of the concomitant states , i.e. , renormalizes the states in order to apply the h - theorem of boltzmann to open systems , implying in the second law formulation of prigogine , and to compensate for the mean energy difference , i.e. , turning an open system into a closed one . denoting the renormalized state by $\tilde{f}$ , one can write $\tilde{f}(x ) = c\,[f(x)]^{\beta_{eff } } = c\,\exp \left [ \frac{-h_{eff}(x)}{t_{eff } } \right]$ , where $c$ is the normalization constant .
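to make the renormalization step concrete , the following sketch ( python , using scipy for the root finding ) determines the effective inverse temperature by matching mean effective energies and then evaluates the entropy difference . it is only an illustration : the distributions are generic discrete ones rather than the spectral densities introduced below , and here the first ( reference ) distribution is renormalized ; as explained in the next paragraph , the roles are swapped if it turns out not to be the more disordered one .

```python
import numpy as np
from scipy.optimize import brentq

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def renormalized_entropy(f_ref, f_cmp):
    """entropy difference between the compared state and the renormalized reference.

    f_ref, f_cmp : normalized 1-d arrays on the same support.  the effective
    energy is h_eff = -ln f_ref (equilibrium temperature set to one), and
    beta_eff is fixed by equating the mean effective energies of the two states.
    """
    f_ref = np.clip(f_ref, 1e-300, None)
    h_eff = -np.log(f_ref)                       # effective energy of the reference state
    target = np.sum(f_cmp * h_eff)               # mean effective energy of the compared state

    def renorm(beta):
        w = f_ref ** beta
        return w / w.sum()

    gap = lambda beta: np.sum(renorm(beta) * h_eff) - target
    beta_eff = brentq(gap, 1e-3, 1e3)            # assumes the root lies inside this bracket
    f_tilde = renorm(beta_eff)
    return shannon(f_cmp) - shannon(f_tilde), beta_eff
```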
to check whether any heat intake occurs during self - organization to form spatially more ordered patterns through dissipation as a result of interaction with the environment , the effective temperature is calculated from the equality of mean energies . if heat intake is needed for the process of self - organization , such as in rayleigh - benard convection resulting in spatially ordered hexagonal patterns at the stationary state , then one expects , comparing the equilibrium and non - equilibrium states , that , since . otherwise , we deduce that our assumption regarding the greater disorder of the initial distribution is not correct , indicating the second distribution to be more disordered . therefore , when this is the case , one renormalizes the second distribution . the measure of the relative degree of order for these compared states can then be given as the difference of entropies , which is called the renormalized entropy . we use the spectral intensities and , averaged over periodograms based on the multiplication of the fourier and the inverse fourier transformation of the time series of in the frequency domain , instead of the density distributions and in eqs . [ [ shannon]-[relatifentropy ] ] , respectively , so that , where the sequence is a sufficiently long series of values that can be obtained by iterating a mapping by sampling equidistant points and is the length of the samples in every periodogram , satisfying and . such fourier spectra eliminate the zeros in the distributions and detect different regimes of a deterministic dynamical system . to start with , let us consider a -dimensional mapping of the form on some -dimensional phase space . we can numerically generate data from such a mapping equation that describes the dynamics of a specific dynamical system . we particularly focus on two examples . one is the logistic map , given as where , which exhibits a periodic route to chaos . for , the system is ( semi)conjugated to a bernoulli shift and strongly mixing . the other is the sine - circle map , given as where is a point on a circle and the parameter ( with ) is a measure of the strength of the nonlinearity , which exhibits a periodic route to chaos . it describes dynamical systems possessing a natural frequency which are driven by an external force of frequency ( is the bare winding number or frequency - ratio parameter ) and belongs to the same universality class as the forced rayleigh - benard convection . the winding number for this map is defined to be the limit of the ratio , where is the angular distance travelled after iterations of the map function . to increase the degree of irrationality of the system , one could use a frequency ratio parameter corresponding to a winding number that approaches the golden mean gradually by following the sequence of ratios of the fibonacci numbers ( ) .
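as an illustration of this spectral estimation step , the short python sketch below iterates the logistic map with a weak additive gaussian `` basic noise '' and builds an averaged , normalized periodogram that can serve as the distribution entering the entropies above ; the noise intensity and control - parameter values are placeholders , and the window and transient sizes simply anticipate the numerical procedure detailed in the next paragraph .

```python
import numpy as np

def logistic_series(a, n, noise=1e-10, transients=65536, seed=1):
    # x_{n+1} = a * x_n * (1 - x_n), plus a weak gaussian "basic noise" at every step
    rng = np.random.default_rng(seed)
    x = rng.random()
    for _ in range(transients):
        x = a * x * (1.0 - x)
    out = np.empty(n)
    for i in range(n):
        x = a * x * (1.0 - x) + noise * rng.standard_normal()
        x = min(max(x, 0.0), 1.0)        # keep the orbit inside [0, 1]
        out[i] = x
    return out

def spectral_distribution(series, window=4096):
    # average the periodogram over consecutive windows and normalize it to unit sum
    n_win = len(series) // window
    spec = np.zeros(window // 2)
    for k in range(n_win):
        seg = series[k * window:(k + 1) * window]
        spec += np.abs(np.fft.rfft(seg - seg.mean())[1:window // 2 + 1]) ** 2
    return spec / spec.sum()

# two nearby control parameters in the period-doubling regime (illustrative values)
f_1 = spectral_distribution(logistic_series(3.05, 18 * 4096))
f_2 = spectral_distribution(logistic_series(3.10, 18 * 4096))
```

the pair ( f_1 , f_2 ) obtained in this way can then be passed to the renormalization sketch given earlier .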
the map is monotonic and invertible ( non - monotonic and non - invertible ) for ( ) and develops a cubic inflexion point at for . considering technical details , it is important to emphasize that we added gaussian white noise to the systems in eqs . ( [ logisticmap ] ) and ( [ circlemap ] ) with a small intensity , which is called basic noise , at every state . the low intensity of the noise is chosen so as not to influence the dynamics of these systems . this procedure enables a continuous spectral distribution . after 65536 transients , we generated the discrete sequences with the length of points separately obtained from eqs . ( 9 ) and ( 10 ) . finally , the spectrum of 18 shifted windows of 4096 samples is estimated and averaged . the logistic map can be cited as one of the dynamical systems possessing bifurcation properties : it moves from a unique stable fixed point to the critical accumulation point possessing infinite periods through the period - doubling route . having reached the critical accumulation point , the system enters the chaotic regime and from this point on it moves under the influence of chaotic band merging ( i.e. , inverse period - doubling ) . it should also be remarked that periodic windows with different periods , albeit with self - similar structures , are also found in this chaotic regime . we now present the results concerning the behavior of the renormalized entropy and the bifurcation properties in the regions of period , and . before proceeding further , we note that the renormalized entropy has already been applied to the logistic map for the period-2 window by saparin et al . our aim in this section is to investigate all other self - similar windows to check whether the renormalized entropy behaves consistently as a complexity measure , i.e. , to check its robustness . fig . 1a in particular shows the behavior of the renormalized entropy in the period 2 region , where the control parameter lies between and . in this region , the relative degree of order increases until the period accumulation point , i.e. , as the system evolves from the equilibrium state to a new stationary state in accordance with the self - organization process . as a signature of self - organization in this region , the relative entropy then monotonically decreases , as expected . from the period accumulation point onward until the most chaotic state with , the relative degree of order decreases , since band - merging ( as opposed to bifurcation in the previous region ) is exhibited by the system in this region . therefore , the renormalized entropy increases in the aforementioned region in a non - monotonic manner . to sum up , for an open non - equilibrium dynamical system which approaches the stationary state through period - doubling and recedes away from the stationary state by means of band - merging , the behavior of the renormalized entropy conforms to the dictum of prigogine , i.e. , order out of chaos . in fig . 1b , we zoom into the period window , i.e.
the largest self - similar window in the period region . the control parameter values for this window are confined to the interval between and . similarly to the period window , the relative degree of order monotonically increases up to the period accumulation point , and thereafter decreases non - monotonically . accordingly , the relative entropy decreases up to the accumulation point , and begins to increase after the period accumulation point . it is worth noting the sudden , unexpected changes in the values of the renormalized entropy , which signal the existence of the self - similar windows in the chaotic region . fig . 1c shows the behavior of the renormalized entropy in the period window , which is one of the self - similar windows in the logistic map . due to the self - similarity , a behavior similar to the one in period is exhibited by the renormalized entropy : it decreases almost up to , and then begins to increase in accordance with the decrease in the relative degree of order . finally , it is interesting to observe the turns in the relative degree of order for each of the three period accumulation points representing the stationary state of a non - equilibrium dynamical system possessing an inherent fractal structure . [ figure 1 : renormalized entropy and bifurcations for the period ( a ) , period ( b ) and period ( c ) windows of the logistic map . ] the sine - circle map can exhibit periodic , quasi - periodic or chaotic behaviors depending on the frequency ratio and the nonlinearity parameters , i.e. , and , respectively . for , the system dynamics is either periodic ( frequency - locked ) or quasi - periodic depending on the value of the frequency ratio parameter being rational or irrational . as the nonlinearity parameter approaches zero , the system exhibits quasi - periodic behavior for all values of the frequency ratio parameter . as the nonlinearity parameter approaches one , frequency - locked steps extend and occupy the whole axis where is equal to one . in this case , there is a special value of , called the most irrational , corresponding to the `` golden mean '' winding number if the frequency ratio parameter is locked to its critical value . shortly beyond this critical value on the plane lies the edge of the quasi - periodic route to chaos , since chaotic behavior can occur . all these characteristic shapes on the plane are called `` arnold tongues '' in the literature . for the region where the nonlinearity parameter is dominant in the system dynamics , there can be periodic regions with different periods , chaotic regions , and thus edges of the periodic route to chaos . also , for this region , there can be periodic windows possessing the same universality class as the logistic map . fig . 2 shows the behavior of the renormalized entropy and the bifurcations of the sine - circle map for , and , obtained from eqs . ( [ circlemap]-[winding ] ) , respectively , where the nonlinearity parameter lies between zero and . the reference state for the renormalized entropy is chosen to be the one with and , where the system evolves towards a unique stable point . note that the degree of irrationality of the system increases as one moves from fig . 2a towards fig . 2c .
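for completeness , a minimal python sketch of the sine - circle map iteration ( written here in its standard form , with the angle measured in units of the full circle ) and of the winding - number estimate obtained from fibonacci - ratio frequency parameters ; the nonlinearity value and iteration count are illustrative only .

```python
import numpy as np

def winding_number(omega, k, n=100000, theta0=0.0):
    """iterate theta_{n+1} = theta_n + omega - (k / 2 pi) sin(2 pi theta_n) on the lift
    (no mod 1) and return the mean angular advance per iteration."""
    theta = theta0
    for _ in range(n):
        theta = theta + omega - (k / (2.0 * np.pi)) * np.sin(2.0 * np.pi * theta)
    return (theta - theta0) / n

# approach the golden-mean winding number through successive fibonacci ratios
fib = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
for f_small, f_big in zip(fib[:-1], fib[1:]):
    omega = f_small / f_big
    print(omega, winding_number(omega, k=0.8))   # k = 0.8 is an arbitrary illustrative value
```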
in each of the aforementioned figures , the oscillatory behavior of the renormalized entropy is observed when the system is in the quasi - periodic regime . this oscillatory behavior is exhibited when lies in the intervals , and , corresponding to = , and , respectively . the renormalized entropy always attains values close to zero in these intervals for the chaotic regions , while it decreases with the increasing number of periods in the periodic regions until it reaches the edge of chaos . this can be considered as the signature of the relative degree of order within the system . [ figure 2 : renormalized entropy and bifurcations for ( a ) , ( b ) and ( c ) of the sine - circle map . ] it is well - known that the sine - circle map is in the same universality class as the logistic map for when . fig . 3 shows the bifurcation and the renormalized entropy for this particular window . the renormalized entropy behaves exactly as it does in fig . 1a for the logistic map , thereby indicating that different dynamical maps exhibit the same behavior in the regions falling into the same universality class . [ figure 3 : bifurcation and renormalized entropy of the sine - circle map . ] despite the presence of many different complexity measures , the ones enabling a local comparison of the distributions are quite few ( for a recent example , see ref . ) . one such measure of relative nature is the renormalized entropy introduced by klimontovich , kurths and coworkers . in this work , the renormalized entropy is used to analyze the logistic and sine - circle maps . in the former example of the logistic map , the renormalized entropy decreases ( increases ) up to the accumulation point ( after the accumulation point until the most chaotic state ) as a sign of increasing ( decreasing ) relative degree of order in all the self - similar periodic windows , thereby proving the robustness of this complexity measure . by robustness , we emphasize the similarity of the behavior of the renormalized entropy in all the self - similar windows , therefore removing the doubt concerning a possible accidental feature of the renormalized entropy as a complexity measure . on the other hand , the aforementioned observed changes in the renormalized entropy are reasonable , since the bifurcations occur before the accumulation point , after which the band - merging , in opposition to the bifurcations , is exhibited . on top of the precise detection of the accumulation points in all these windows , we see that the renormalized entropy can detect the self - similar windows in the chaotic regime by exhibiting sudden changes in its values .
for the sine - circle map ,on the other hand , the renormalized entropy detects also the quasi - periodic regimes by signaling oscillatory behavior particularly in these regimes .moreover , the oscillatory regime of the renormalized entropy corresponds to a larger interval of the nonlinearity parameter of the sine - circle map as the value of the frequency ratio parameter reaches the critical value , at which the winding ratio attains the golden mean .lastly , we remark that the renormalized entropy is superior to the lyapunov exponent as a complexity measure , since the renormalized entropy can detect the quasi - periodic regimes as well as the periodic regimes at the bifurcation points in a distinct manner whereas the lyapunov exponent is zero for both of these regions , hence detecting no difference at all .this work has been supported by tubitak ( turkish agency ) under the research project number 112t083 .u.t . is a member of the science academy , istanbul , turkey .
|
we apply renormalized entropy as a complexity measure to the logistic and sine - circle maps . in the case of logistic map , renormalized entropy decreases ( increases ) until the accumulation point ( after the accumulation point up to the most chaotic state ) as a sign of increasing ( decreasing ) degree of order in all the investigated periodic windows , namely , period- , , and , thereby proving the robustness of this complexity measure . this observed change in the renormalized entropy is adequate , since the bifurcations are exhibited before the accumulation point , after which the band - merging , in opposition to the bifurcations , is exhibited . in addition to the precise detection of the accumulation points in all these windows , it is shown that the renormalized entropy can detect the self - similar windows in the chaotic regime by exhibiting abrupt changes in its values . regarding the sine - circle map , we observe that the renormalized entropy detects also the quasi - periodic regimes by showing oscillatory behavior particularly in these regimes . moreover , the oscillatory regime of the renormalized entropy corresponds to a larger interval of the nonlinearity parameter of the sine - circle map as the value of the frequency ratio parameter reaches the critical value , at which the winding ratio attains the golden mean .
|
the video ( entry # 83832 , aps 65 annual dfd meeting 2012 ) demonstrates the patterns exhibited by gravity - driven , particle - laden thin films flowing down a solid substrate .the results from three experiments are shown in the video .a finite volume of fluid mixed with particles is allowed to flow down the solid plane ; visualization is achieved from the front view and the videos have been recorded with a digital slr camera ( canon eos rebel t2i ) .the suspension consists of silicone oil , glass and ceramic beads .the ceramic beads are denser than the glass beads while both sets of particles are denser than the fluid .it is noted that both species of beads are of the same size .the substrate angle of inclination is fixed at while the total particle concentration , is fixed at .the series of experiments conducted aim in understanding the effect of adding a second , heavier species of beads to a slurry composed of oil and glass beads . in order to visualize the separation , if any , between the two species , the ceramic and glass beads are dyed blue and red , respectively .we introduce a dimensionless parameter , , defined as the ratio of the concentration of glass beads to the total concentration of particles. the rightmost video shows a reference case wherein the suspension consists only of glass beads ( i.e. ) ; the relative ratio is decreased from 1 ( right ) to 0.75 ( center ) to 0.25 ( left ) with the addition of ceramic beads .the runtime associated with the individual videos has been fast - forwarded 6 times , in order to demonstrate the onset of flow instabilities as well as the development of fingering patterns .the choice of parameters allows the presentation of three , distinct regimes , also exhibited by an increase in particle concentration in monodisperse slurry flows . in this video , we observe the three regimes by keeping the total particle concentration constant while we add a second species of negatively buoyant beads . for large concentrations of _ ceramic _ beads ( left video , mostly blue particles ) ,the particles settle rapidly allowing the clear fluid to flow over them which results in fingering . for small concentrations of _ ceramic _ beads ( middle video , red and blue particles ) , we observe a well - mixed regime characterized by finger formation ; this regime is considered to give an unstable , transient pattern . finally , for monodisperse suspensions of _ glass _ beads i.e. no _ ceramic _ beads ( right video , red particles ) , the beads aggregate at the contact line , forming a particle - rich ridge , evident by a darker red color at the front of the flow . + + * references * + 1 .t. ward , c. wey , r. glidden , a. e. hosoi , and a. l. bertozzi .experimental study of gravitation effects in the flow of a particle - laden thin film on an inclined plane , _ phys .fluids _ , * 21 * , 083305 ( 2009 ) .b. cook , o. alexandrov and a. l. bertozzi .linear stability of particle - laden thin films , _ the european physical journal - special topics _ , * 166 * , 1 , 77 - 81 ( 2009 ) .n. murisic , j. hob , v. huc , p. latterman , t. koche , k. linf , m. mata , a.l .particle - laden viscous thin - film flows on an incline : experiments compared with a theory based on shear - induced migration and particle settling , _ physica d : nonlinear phenomena _ * 240 * , 20 , 1661 - 1673 ( 2011 ) .murisic , b. pausader , d. peschka , a.l .dynamics of particle settling and resuspension in viscous liquids , _ under review for j. fluid mech_. 
+ + * acknowledgements * + the authors would like to thank miss kaiwen huang for her help in conducting the experiments presented in this video .
|
this arxiv article describes the fluid dynamics video on ` bi - disperse particle - laden flows in the stokes regime ' , presented at the 65th annual meeting of the aps division of fluid dynamics in san diego , ca in november 2012 . the video shows three different experiments which aim to investigate the dynamics of a thin film of silicone oil , laden with glass beads and the effects of adding a second species of particles to the slurry . the mixture of oil and particles is allowed to flow down an incline under the action of gravity . the videos were recorded at the ucla applied math laboratory .
|
the process of freehand sketching has long been employed by humans to communicate ideas and intent in a minimalist yet almost universally understandable manner . in spite of the challenges posed in recognizing them , sketches have formed the basis of applications in areas of forensic analysis , electronic classroom systems , sketch - based retrieval etc .sketching is an inherently sequential process . the proliferation of pen and tablet based devices today enables us to capture and analyze the entire process of sketching , thus providing additional information compared to passive parsing of static sketched content . yet , most sketch recognition approaches either ignore the sequential aspect or lack the ability to exploit it .the few approaches which attempt to exploit the sequential sketch stroke data do so either in an unnatural manner or impose restrictive constraints ( e.g. markov assumption ) . in our work , we propose a recurrent neural network architecture for sketch object recognition which exploits the long - term sequential and structural regularities in stroke data in a scalable manner .we make the following contributions : * we propose the first deep recurrent neural network architecture which can recognize freehand sketches across a large number ( ) of object categories .specifically , we introduce a gated recurrent unit ( gru)-based framework ( section [ sec : overview ] ) which leverages deep sketch features and weighted per - stroke loss to achieve state - of - the - art results . *we show that the choice of deep sketch features and recurrent network architecture _ both _ play a crucial role in obtaining good recognition performance ( section [ sec : results ] ) . * via our experiments on sketches with partial temporal stroke content , we show that our framework recognizes the largest percentage of sketches ( section [ sec : results ] ) . given the on - line nature of our recognition framework ,it is especially suited for on - the - fly interpretation of sketches as they are drawn .thus , our framework can enable interesting applications such as camera - equipped robots playing the popular party game pictionary with human players , generating sparsified yet recognizable sketches of objects , interpreting hand - drawn digital content in electronic classrooms etc .to retain focus , we review approaches exclusively related to recognition of hand - drawn object sketches . early datasets tended to contain either a small number of sketches and/or object categories . in , eitz et al . released a dataset containing hand - drawn sketches across categories of everyday objects .the dataset , currently the largest sketch object dataset available , provided the first opportunity to attempt the sketch object recognition problem at a relatively large - scale . since its release, a number of approaches have been proposed to recognize freehand sketches of objects .the initial performance of handcrafted feature - based approaches has been recently surpassed by deep feature - based approaches , culminating in an custom - designed convolutional neural network dubbed sketchcnn which achieved state - of - the - art results .the approaches mentioned above are primarily designed for static , full - sketch object recognition .in contrast , another set of approaches attempt to exploit the sequential stroke - by - stroke nature of hand - drawn sketch creation . 
for example , arandjelovic and sezgin propose a hidden markov model ( hmm)-based approach for recognizing military and crisis management symbol objects . although mentioned above in the context of static object recognition , a variant of the sketchcnn can also handle sequential stroke data . in fact , the authors demonstrate that exploiting the sequential nature of the sketching process improves the overall recognition rate . however , given that cnns are not inherently designed to preserve sequential `` state '' , better results can be expected from a framework which handles sequential data in a more natural fashion . the approach we present in our paper aims to do precisely this . our framework is based on gated recurrent unit ( gru ) networks recently proposed by cho et al . gru architectures share a number of similarities with the more popular long short - term memory networks , including the latter s ability to perform better than traditional models ( e.g. hmm ) for problems involving long and complicated sequential structures . to the best of our knowledge , recurrent neural networks have not been utilized for online sketch recognition . sketch creation involves accumulation of hand - drawn strokes over time . thus , we require our recognition framework to optimally exploit object category evidence being accumulated on a per - stroke basis as well as temporally . moreover , the variety in sketch - based depiction and the intrinsic representational complexity of objects results in a large range of stroke - sequence lengths . therefore , we require our recognition framework to address this variation in sequence lengths appropriately . to meet these requirements , we employ gated recurrent unit ( gru ) networks . our choice of gru architecture is motivated by the observation that it involves learning a smaller number of parameters and performs better compared to lstm in certain instances , including , as shall be seen ( section [ sec : experiments ] ) , our problem of sketch recognition as well . a gru network learns to map an input sequence to an output sequence . this mapping is performed by the following transformations which are applied at each time step : here , and represent the -th input and -th output respectively , represents the `` hidden '' sequence state of the gru whose contents are regulated by parameterized gating units and represents the elementwise dot - product . the subscripted , and represent the trainable parameters of the gru . please refer to chung et al . for details . [ figure : sketches at all levels of sketch completion . best viewed in color . ] for each sketch , information is available at the temporal stroke level . we use this to construct an image sequence of sketch strokes cumulatively accumulated over time . thus , represents the full , final object sketch and represents the number of sketch strokes or , equivalently , time - steps ( see figure [ fig : overview ] ) . to represent the stroke content for each , we utilize deep features obtained when is provided as input to alexnet . the resulting deep feature sequence forms the input sequence to the gru ( see figure [ fig : overview ] ) . the gru unit contains hidden units and its output is densely connected to a final softmax layer for classification . for better generalization , we include a dropout layer before the final classification layer , which tends to benefit recurrent networks having a large number of hidden units . we used a dropout of in our experiments . our architecture produces an output prediction for every time - step in the sequence .
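to make the data flow concrete , a minimal sketch of this architecture is given below . the original implementation used lasagne / theano ( see the experiments section ) ; the pytorch - style code here is purely illustrative , and the feature dimension , number of hidden units , number of categories and dropout rate are placeholders , since their exact values are not stated in the surrounding text .

```python
import torch
import torch.nn as nn

class SketchGRU(nn.Module):
    """gru over per-stroke deep features, with a class prediction at every time step."""

    def __init__(self, feat_dim=4096, hidden=512, n_classes=100, p_drop=0.5):
        # all four values are illustrative placeholders, not the paper's settings
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.drop = nn.Dropout(p_drop)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, T, feat_dim) -- alexnet features of the cumulative stroke images
        h, _ = self.gru(x)              # hidden state at every time step: (batch, T, hidden)
        return self.out(self.drop(h))   # per-timestep class scores, fed to softmax / loss
```

during training , a softmax cross - entropy loss is attached to every time step of the returned tensor and weighted as described in the next paragraph , while at test time the per - timestep softmax outputs are pooled into a single prediction .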
by comparing the predictions with the ground - truth , we can determine the corresponding loss for a fixed loss function ( shown as a yellow box in figure [ fig : overview ] ) . this loss is weighted by a corresponding weight and backpropagated for the corresponding time step . for the weighing function , we use an exponentially increasing form . thus , losses corresponding to final stages of the sequence are weighted more , to encourage correct prediction of the full sketch . also , since is non - zero , our design incorporates losses from all steps of the sequence . this has the net effect of encouraging correct predictions even in the early stages of the sequence . overall , this feature enables our recognition framework to be accurate and responsive right from the beginning of the sketching process ( section [ sec : experiments ] ) , in contrast with frameworks which need to wait for the sketching to finish before analysis can begin . we additionally studied variations of the weighing function given in equation , using the final sequence member loss ( i.e. ) and linearly weighted losses ( i.e. ) . we found that the exponentially weighted loss gave superior results for our experiments . to address the high variation of sequence length across sketches , we create batches of sketches having equal sequence length ( i.e. ( sec . [ sec : overview ] ) ) . these batches of varying size are randomly shuffled and delivered to the recurrent network during training . for each batch , a categorical cross - entropy loss is generated for each sequence by comparing the predictions with the ground - truth . the resulting losses are weighted ( equation ) on a per - timestep basis as described previously and back - propagated through the corresponding sequence during training . we used stochastic gradient descent with a learning rate of for training . suppose for a given input sequence , the corresponding outputs at the softmax layer are . note that in our case , where is the dimension of the deep feature and where is the number of object categories ( ) . to determine the final category label , we perform a weighted sum - pooling of the softmax outputs as , where is as given in equation and . we explored various other softmax output pooling schemes : last sequence member - based prediction ( ) , max - pooling ( ) and mean - pooling ( ) . from our validation experiments , we found weighted sum - pooling to be the best choice overall . [ table 1 : average recognition accuracy ( rightmost column ) for various architectures . # hidden refers to the number of hidden units used in the recurrent network . we obtain state - of - the - art results for sketch object recognition . ] in addition to obtaining the best results among approaches using handcrafted features , the work of rosalia et al . was especially instrumental in identifying a -category subset of the tu berlin dataset which could be unambiguously recognized by humans . consequently , our experiments are based on this curated -category subset of sketches . following rosalia et al . , we use sketches from each of the categories . to ensure principled evaluation , we split the sketches of each category randomly into sets containing , and of the sketches to be used for training , validation and testing respectively ( i.e. , , and sketches from each category in the training , validation and test sets , respectively ) .
additionally , we utilized the validation set exclusively for making choices related to architecture and parameter settings and performed a one - shot comparative evaluation of ours and competing approaches on the test set . we compared our performance with the following architectures : * alexnet - ft : * as a baseline experiment , we fine - tuned alexnet using our -category training data . to ensure sufficient data , we augmented the training data along the lines of sarvadevabhatla et al . we also used the final fully - connected -dimensional layer features as input to our recurrent architectures . we shall refer to such usage as alexnet - fc . * sketchcnn : * this is essentially the deep architecture of yu et al . , but retrained for the categories and splits mentioned in section [ sec : data ] . since cnns do not inherently store `` state '' , the authors construct six different sub - sequence stroke accumulation images which comprise the channels of the input representation to the cnns . it comprises five different cnns , each trained on one of five different scaled versions of the sketches . the last fully - connected layer s -dimensional features from all the five cnns are processed using a bayesian fusion technique to obtain the final classification . for our experiments , we also concatenated the -dimensional features from each scale of sketchcnn as the input feature to the recurrent neural network architectures that were evaluated . however , only the full sketch was considered as the input to the cnn ( i.e. single - channel ) . for the rest of the paper , we refer to the resulting -dimensional feature as sketchcnn - sch - fc . * recurrent architectures : * we experimented with the number of hidden units , the number of recurrent layers , the type of recurrent layers ( i.e. lstm or gru ) , the training loss function ( section [ sec : training ] ) and various pooling methods for obtaining the final prediction in terms of individual sequence member predictions ( section [ sec : prediction ] ) . we built the software framework for our proposed architecture using the lasagne and theano libraries . we also used the matconvnet and caffe libraries for experiments related to other competing architectures . * overall performance * : table 1 summarizes the overall performance in terms of average recognition accuracy for various architectures . as can be seen , our gru - based architecture ( first row ) outperforms sketchcnn by a significant margin even though it is trained on only of the total data . we believe our good performance stems from ( a ) being able to exploit the sequential information in a scalable and efficient manner via recurrent neural networks and ( b ) the superiority of the deep sketch features provided by alexnet compared to the sketchcnn - fc features . the latter can be clearly seen when we compare the first two rows of table 1 with the last two rows . in our case , the performance of gru was better than that of lstm when alexnet features were used . overall , it is clear that the choice of ( sketch ) features and the recurrent network _ both _ play a crucial role in obtaining state - of - the - art performance for the sketch recognition task . * on - line recognition : * we also compared the various architectures for their ability to recognize sketches as they are being drawn ( i.e. on - line recognition performance ) .
for each classifier , we determined the fraction of test sketches correctly recognized when only the first of the temporal sketch strokes are available .we varied between to in steps of and plotted as a function of .the results can be seen in figure [ fig : onlinerec ] .intuitively , the higher a curve on the plot , the better its online recognition ability .as can be seen , our framework consistently recognizes a larger fraction of sketches at all levels of sketch completion ( except for very small ) relative to other architectures . * semantic information : * to determine the extent to which our architecture captures semantic information, we examined the performance of the classifier on misclassified sketches .as can be seen in figure [ fig : semantic ] , most of the misclassifications are reasonable errors ( e.g. ` guitar ` is mistaken for ` violin ` ) and demonstrate that our framework learns the overall semantics of the object recognition problem .in this paper , we have presented our deep recurrent neural network architecture for freehand sketch recognition .our architecture has two prominent traits . _firstly _ , its design accounts for the inherently sequential and cumulative nature of human sketching process in a natural manner ._ secondly _ , it exploits long - term sequential and structural regularities in stroke data represented as deep features .these two traits enable our system to achieve state - of - the - art recognition results on a large database of freehand object sketches .we have also shown that our recognition framework is highly suitable for on - the - fly interpretation of sketches as they are being drawn .our framework source - code and associated data ( pre - trained models ) can be accessed at https://github.com/val-iisc/sketch-obj-rec .we thank nvidia for their grant of tesla k40 gpu .f. bastien , p. lamblin , r. pascanu , j. bergstra , i. j. goodfellow , a. bergeron , n. bouchard , and y. bengio .theano : new features and speed improvements .deep learning and unsupervised feature learning nips 2012 workshop , 2012 .b. meyer , k. marriott , a. bickerstaffe , and l. knipping .intelligent diagramming in the electronic online classroom . in _ human system interactions , 2009 .2nd conference on _ , pages 177183 .ieee , 2009 .r. k. sarvadevabhatla and v. b. radhakrishnan .eye of the dragon : exploring discriminatively minimalist sketch - based abstractions for object categories . in _ proceedings of the 23rdacm international conference on multimedia _ , mm 15 , pages 271280 , new york , ny , usa , 2015 .o. seddati , s. dupont , and s. mahmoudi .deepsketch : deep convolutional neural networks for sketch recognition and similarity search . in _ content - based multimedia indexing( cbmi ) , 2015 13th international workshop on _ , pages 16 .ieee , 2015 .z. sun , c. wang , l. zhang , and l. zhang .query - adaptive shape topic mining for hand - drawn sketch recognition . in _ proceedings of the 20th acm international conference on multimedia _ , pages 519528 .acm , 2012 .
|
freehand sketching is an inherently sequential process . yet , most approaches for hand - drawn sketch recognition either ignore this sequential aspect or exploit it in an ad - hoc manner . in our work , we propose a recurrent neural network architecture for sketch object recognition which exploits the long - term sequential and structural regularities in stroke data in a scalable manner . specifically , we introduce a gated recurrent unit based framework which leverages deep sketch features and weighted per - timestep loss to achieve state - of - the - art results on a large database of freehand object sketches across a large number of object categories . the inherently online nature of our framework is especially suited for on - the - fly recognition of objects as they are being drawn . thus , our framework can enable interesting applications such as camera - equipped robots playing the popular party game pictionary with human players and generating sparsified yet recognizable sketches of objects .
|
trust is ubiquitous in social and economic activity , irrespective of underlying cultural differences . drawing from the literature on social capital , trust is defined as the commitment of resources to an activity where the outcome depends upon the cooperative behavior of others " .moreover , it seems to obey a type of self - reinforcing dynamic individuals continue to trust beyond the point where evidence points to the contrary . eventually , however , the accumulated weight of evidence turns them towards distrust , which is equally reinforcing .credit markets exemplify the way in which trust lies at the very foundation of modern financial systems . indeed ,the very word credit " derives from the latin , _ credere _ , which means to trust .if credit is readily available , it enables us to derive goods and services from those whom we do not know or have any reason to trust .schumpeter highlights the critical role of credit in real economic activity , noting that capitalism is that form of private property economy in which innovations are carried out by means of borrowed money [ credit ] . "viewed in this light , a proximate cause of the global financial crises of 2007/08 has been the generalized breakdown of trust between banks and investors in banks .the triggers were revelations of losses on united states sub - prime mortgages and other toxic financial assets by banks .an immediate consequence was a freeze in interbank money markets , as banks ceased lending to each other .mounting funding pressures , in turn , lead to questions about banks future profitability and , in some cases , viability .[ fig - libor ] illustrates how the arrival of news of losses at troubled hedge funds , downgrades of structured financial products , and concerns about asset quality increased funding pressures on banks . before the crisis , banks required some 10 basis points of compensation for making one month loans to each other . by september 2007 ,that compensation premium had risen to around 100 basis points .the ensuing collapse of the investment banks bear sterns and lehman brothers in 2008 led the premium to rise more than thirty - fold from pre - crisis levels . and , despite public sector bailouts of the banking system in the major economies, trust has been slow to return . while specific details and rigorous economic modelling are necessary to properly understand the financial crisis , there is nonetheless scope to clarify the mechanisms by which trust evaporates in networks in general , and financial networks in particular . when credit markets break down , _ strategic uncertainty _ i.e. , uncertainty about the actions of other participants can be more important than _ structural uncertainty _ i.e. , uncertainty concerning the soundness of fundamentals . in the case of a bank run ,for example , depositors may consider withdrawing their funds if they believe that other depositors are also going to withdraw their savings .such behavior can be triggered by doubts over the banks balance sheet position ( structural uncertainty ) , or by fears that others may withdraw their funds ( strategic uncertainty ) . 
with imperfect common knowledge of fundamentals ,the arrival of bad news about fundamentals leads small seeds of doubt to reverberate across all lenders , leading potentially to a wholesale withdrawal of lending and the bankruptcy of the counterparty .morris and shin formalize such situations as a coordination game between lenders involved with a single , risky , counter - party .their analysis does not , however , address the issue of wide - spread contagion at the system level . as the recent crisis makes clear , small seeds of doubt about one counter - party reverberated across the entire global financial system , enveloping credit markets from new york to sydney .agents are likely to be involved in as many coordination games as the number of agents they are lending to . in addition , as borrowers , they are also party to coordination games being played by the agents lending to them . in short , in realistic financial market settings , many coordination games take place simultaneously , with players having less than full information about the balance sheet positions of their counterparties . in this paper , we extend the insight offered by coordination games to the system level using a model of network growth .specifically , we show how the arrival of signals about counterparties sows the seeds of distrust and triggers foreclosures across a large population .the model , thus , helps shed light on the recent freeze " in global interbank lending , in which banks ceased lending to each other , including to well known and long - standing counterparties , once negative signals began to accumulate . our results may be summarized as follows : the financial system can converge to a good " equilibrium in which a dense network of credit relations exists and the risk of a run , and subsequent default , is negligible . buta bad " equilibrium is also possible here the credit network is sparse because investors are more skittish and prone to prematurely foreclosing their credit relationships .the transition between the two equilibria is sharp and both states exhibit a degree of resilience ; once a crisis tips the system into the sparse state , the restoration of trust requires considerable effort , with model parameters needing to shift well beyond the turning point . andwhen the system reverts to a good state , it is robust even to deteriorating conditions .a crucial feature of our model is the rate at which bad news about the creditworthiness of an agent arrives .this , together with the maturity structure of debt contracts , determines the ( endogenous ) rate of link decay in the network .intuitively , when bad news arrives an agent may be forced into default by the ensuing foreclosures .this leads to a rearrangement of balance sheets across the financial system agents who have lent to it loose assets , while agents who borrowed from the defaulter loose liabilities . as a result, there is a possibility that some counterparties may be placed under stress , precipitating further rounds of foreclosures .we discuss the properties of the stationary state of these processes .our paper complements recent work on interbank contagion . on the empirical front, researchers have relied on counterfactual simulations based on available interbank exposure data to estimate the probability and spread of contagion ( see for a review ) .the key finding is that network contagion is unlikely but , if it were to take place , can lead to the breakdown of a substantial fraction of the banking system . 
unlike our paper , these analyses treat the underlying topology of interactions and the balance sheets of the agents as static .theoretical literature in the area builds upon allen and gale and focuses on optimal behavior in small networks .for example , caballero and simsek perturb a financial network in order to model the spread of contagion .they appeal to the rising costs of understanding the structure of the network as the basis for complexity .if information about the network structure is costless , there is no foreclosure .but if , following a shock , these information costs rise sharply , banks inability to understand the structure of the ( small ) network to which they belong leads them to withdraw from their loan commitments . in a related contribution ,acharya et . also highlight the role played by information arrival relative to rollover frequency . while their model is also about runs in credit markets, they do not adopt a network approach .moreover , their focus is on the debt capacity of assets used as collateral for short - term borrowing , whereas our emphasis is on the self - reinforcing dynamics of trust in the face of information disclosure .the remainder of this paper in organized as follows . in sec .[ sec : model ] we begin by presenting a solution to the single instance coordination game using the global games framework .this is followed by our network growth model for credit markets . in sec .[ sec : results ] we provide simulation and numerical results for our model , followed by a simple analytical characterization for the stationary states .we conclude in sec .[ sec : discussion ] by describing how our model provides insight to the financial crises of 2007/08 and add to the evolving regulatory policy discussion .we relegate some of the more technical details to the appendices .consider a population of agents engaged in bilateral credit relationships with each other .a financial system of this kind can be viewed as a directed network , with nodes representing the agents and outgoing links reflecting loans from one agent to another . to keep matters simple ,suppose that all loans take the same nominal value .the financial position of agent is summarized by the assets and liabilities on its balance sheet .assets include holdings of cash , , as well as loans made to other agents , . liabilities , namely the monies owed by agent to its counterparties , are denoted by and reflect ( the number of ) incoming links . 
since every liability is someone else s asset , every outgoing link for one node is an incoming link for another node .so the total amount of assets in the system matches the total liabilities or , equivalently , the average in - degree equals the average out - degree , where the angled bracket refers to the average over all agents .that said , individual agents may be in surplus or deficit in their individual financial positions .the average connectivity , of the network offers a summary measure of the extent of global financial market integration in what follows .the credit network is dynamic , with debt contracts ( or links ) continuously being established and terminated as they reach maturity .the dynamic evolution of the network is punctuated by episodes where the lenders of agent engage in a game to decide whether to prematurely foreclose their loans to .we first describe this foreclosure game , before clarifying the dynamics of the network .imagine that , at a particular time , agent has liabilities , assets and amount of cash .this information is disclosed to all lenders , who are then given the choice of withdrawing their funds ( foreclosing ) or rolling them over to maturity .we follow the analysis of morris and shin in describing this situation . accordingly , their paper and appx .[ apx : foreclose ] provide a detailed game - theoretic account , while we limit the discussion below to the key elements . for each creditor , foreclosure yields a payoff of zero , whereas rolling over yields a payoff of , provided that the number of lenders who opt out does not exceed , on the asset side of agent s balance sheet .if , however , more than agents opt out , this depletes the financial resources of agent , who is forced into default .this results in lender , who decided to roll over , to incur a loss of . following , we refer to as the _ cost of miscoordination _ to .intuitively , when the cost of miscoordination is high , coordination is more difficult to achieve since the opportunity cost of remaining in the investment is greater .the payoff matrix for agent , in terms of the number of lenders who foreclose , is therefore if no agent were to foreclose , it would be convenient to roll over for all of them , as long as .but if an agent nurture doubts on others foreclosing , this may prompt the agent to consequently foreclose , making those doubts self - fulfilling .agents know their cost and that the costs of other agents are drawn from some distribution .in this manner , one breaks the assumption of common knowledge .the ambiguity faced by an agent , in not knowing the costs faced by others , plants the seed of doubt for strategic uncertainty. the unique nash equilibrium for this game amounts to choosing a switching strategy " for each counterparty , : we now provide the basic intuition for this solution .assume that all counter - parties are subject to switching strategies , i.e. , will rollover its loan , if , or foreclose , otherwise .we wish to define in terms of s balance sheet . to this end, we introduce the probability that no more than agents have cost greater than and hence foreclose their loans to . in order to compute , consider the position of an agent with .this implies that must be indifferent between foreclosing and rolling over . 
on equating the expected payoffs corresponding to the two options, we get . furthermore, is given by observing that the probability that there are exactly agents that have their cost greater than is . indeed, _a priori_, cannot depend on . therefore, which, combined with the previous result, yields eq. ([cj]). the independence of this result from the distribution of costs of miscoordination implies that strategic uncertainty is relevant _even_ in the absence of uncertainty on the costs of other players. in what follows, given our emphasis on the collective behavior of the network, we assume for all creditors, irrespective of the counterparty. to simplify matters further, we also treat the liquid asset holdings of agents to be constant across the network, so that for all . we now present a stylized model that captures the growth and evolution of credit networks by a series of poisson processes. at rate , each agent takes out a loan from agent , selected at random from the pool of other investors. this implies and . these loans are unsecured and made without knowledge of the counterparties' current positions. this may be seen as a reflection of _a priori_ trust between agents. the loans mature and are amicably settled by counterparties with rate . this results in the removal of the link between the agents and an update of their balance sheets. these two processes would, by themselves, produce a stochastic credit network that belongs to the erdős–rényi random graph ensemble, with average degree . therefore, if is small, i.e., debt is long lived, these two processes produce a dense credit network. however, at random poisson times, which occur with rate , information on the current position of agent is disclosed to all of its lenders. on the basis of this information, each lender must decide either to roll over the loan until maturity or to foreclose. the analysis of the foreclosure game provides the lenders with a simple rule to follow, i.e., foreclose their loans if the condition of eq. ([instabcond]) is satisfied. as a consequence, agent is said to default and is replaced by a new agent with no links, i.e. . agents , , who previously borrowed from it will each lose one liability, i.e., . finally, the lenders , , will lose one asset each, i.e., . if, instead, eq. ([instabcond]) is not satisfied, then all of its counterparties will roll over their loans. notice that default occurs solely due to the breakdown of trust. we do not assume any exogenous shock that can lead to an agent's failure. this reflects our focus on understanding the role played by trust in credit networks. in summary, an agent's financial state is specified in terms of its position in the balance sheet plane. the three processes, (i) link addition at rate per agent, (ii) link decay at rate and (iii) information disclosure at rate per agent, induce a stochastic process in the plane of balance sheets, as depicted in fig. [fig-scheme]. we now turn to the properties of the stationary state of these processes, as a function of the parameters and . for simplicity and without loss of generality, we set in what follows, by an appropriate scaling of time. one may probe the collective properties of the stationary state either via direct numerical simulation of the processes or by numerically solving the associated master equation (see appx. [apx:master] and ).
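a direct simulation of these three processes can be sketched in a few lines of python. the rates `gamma`, `lam` and `nu`, the liquid-asset level `theta` and, in particular, the run condition `c * ell[j] > theta + 1.0` used in the news-arrival step are illustrative placeholders for the rates and for eq. ([instabcond]) defined above, not the exact expressions of the model.

```python
import numpy as np

def balance_sheets(n_agents, loans):
    """recompute (liabilities, assets) of every agent from the list of links."""
    ell = np.zeros(n_agents, dtype=int)   # incoming links (liabilities)
    b = np.zeros(n_agents, dtype=int)     # outgoing links (assets)
    for lender, borrower in loans:
        b[lender] += 1
        ell[borrower] += 1
    return ell, b

def simulate(n_agents=200, gamma=1.0, lam=0.1, nu=0.2, c=0.5, theta=2.0,
             n_steps=100_000, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    loans = []                                  # list of (lender, borrower) pairs
    for _ in range(n_steps):
        # assets b are not needed by the placeholder run condition, kept for clarity
        ell, b = balance_sheets(n_agents, loans)
        # (i) link addition: each agent borrows from a random partner at rate gamma
        for borrower in np.nonzero(rng.random(n_agents) < gamma * dt)[0]:
            lender = rng.integers(n_agents)
            if lender != borrower:
                loans.append((lender, borrower))
        # (ii) link decay: each existing loan matures and is settled at rate lam
        loans = [l for l in loans if rng.random() > lam * dt]
        # (iii) news arrival at rate nu: if the (placeholder) run condition holds,
        # all creditors foreclose, the agent defaults and loses all of its links
        audited = np.nonzero(rng.random(n_agents) < nu * dt)[0]
        defaulted = {j for j in audited if c * ell[j] > theta + 1.0}
        if defaulted:
            loans = [(i, j) for (i, j) in loans
                     if i not in defaulted and j not in defaulted]
    return len(loans) / n_agents                # average connectivity <k>

print(simulate(c=0.1), simulate(c=1.0))         # high versus low trust regimes
```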
in fig .[ fig - sim ] we plot simulation results for the average connectivity , once the system has reached a stationary state , as a function of the cost of miscoordination , , for different values of debt maturity .we note the following features ; ( i ) for small , there is a dense network and , indicative of a high level of trust .the news that is released and permeates through the network is encouraging to lenders , who thereby continue to rollover their loans .( ii ) however , for large , lenders perceive that the cost of miscoordination from rolling over their loans is high .thus , doubts concerning the actions of other lenders leads to the collective foreclose of loans , resulting in a sparse financial network .( iii ) for small values of and in an intermediate range of , we note the coexistence of both dense and sparse network solutions . finally for larger , one morphs continuously from a dense network to a sparse one , as is increased. this hysteresis may be appreciated as a consequence of the self - reinforcing dynamics of trust .far from the tipping point , a small incremental change in does not impact the stationary state and we continue to observe the dense network . once conditions deteriorate with increasing beyond the tipping point , a sparse network solution emerges .however , by the same incremental change argument , as one decreases improve conditions the sparse network solution is stable and one needs to decrease to well before the tipping point to regain the dense network solution .this hysteresis is also observed as a function of the liquid assets .similar stylized results have been found in other models of network growth , which also articulate the underlying mathematical structure .nevertheless , a qualitative understanding of our results is readily available via a simple approximation of the processes .the key variable is the endogenous rate of link decay , , caused by the default of one counterparty .thus , will crucially depend on the rate at which news is released and the maturity of debt contracts . in a dense network , is negligible , whereas in the sparse network it is expected to be sizable . in order to derive an expression for , we need to focus on the twin stochastic processes for the liability and asset positions for a representative agent evolving in time .this process starts from the origin of fig .[ fig - scheme ] and drifts toward the top right - hand corner . fromany given point on the grid , jumps to the right and up occur at rate , whereas jumps to the left or below take place at rate . in the absence of the absorption process , both processes converge to a stationary state , where and are poisson variables with mean .however , when is `` turned on '' , the process is absorbed whenever it is disclosed to be in the shaded region , i.e. , , and restarts at the origin .if is not too large , we may assume that attains the stationary state and hence where is the complimentary error function , arising from approximating by a gaussian random variable , and a graphical solution to eq .( [ appth ] ) is provided in the inset of fig .[ fig - phase ] , where we have either one or three fixed points .[ fig - phase ] plots boundaries for regions in the vs. plane where these different situations arise .in the dense ( d ) and sparse ( s ) phases , one obtains the stable fixed points and , respectively . 
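the fixed-point structure of the self-consistency condition can also be explored numerically. in the sketch below the right-hand side `F` encodes a gaussian (erfc) approximation for the probability of a run, with the mean number of liabilities set to its stationary value 1/(lam + mu) and an assumed run threshold theta / c; both choices are stand-ins for the exact argument of eq. ([appth]) and are only meant to reproduce the one-versus-three fixed-point behaviour discussed here.

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import brentq

def F(mu, nu, lam, c, theta):
    """assumed right-hand side of the self-consistency relation mu = nu * P(run):
    liabilities are approximated as gaussian with mean (and variance) 1/(lam + mu),
    and a run is triggered above an assumed threshold ell* = theta / c."""
    m = 1.0 / (lam + mu)
    return nu * 0.5 * erfc((theta / c - m) / np.sqrt(2.0 * m))

def fixed_points(nu, lam, c, theta, n_grid=20_000):
    """locate all solutions of mu = F(mu) by bracketing sign changes of F(mu) - mu."""
    mus = np.linspace(1e-6, nu, n_grid)
    g = F(mus, nu, lam, c, theta) - mus
    roots = []
    for k in range(n_grid - 1):
        if g[k] * g[k + 1] < 0.0:
            roots.append(brentq(lambda x: F(x, nu, lam, c, theta) - x,
                                mus[k], mus[k + 1]))
    return roots

# scanning the cost of miscoordination maps out the dense, sparse and
# coexistence regions (one or three fixed points, respectively)
for c in (0.05, 0.2, 0.8):
    print(c, fixed_points(nu=0.5, lam=0.02, c=c, theta=3.0))
```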
in the co - existence ( co ) phase , however , the two stable solutions are separated by a third , unstable fixed point .if we impose initial conditions that placed the system to the left of the unstable solution , we would obtain the stable stationary solution .similarly , starting just to the right of the unstable point would yield the solution .a signature for the transition from and is that the slope at the fixed point is exactly one , i.e. , it is tangential to the 45 degree line .inspection of the argument of the function provides further insight . for small values of both and , only one solution with small possible , as is of order . for small and , instead , the term is negligible with respect to the term .the argument of the function is and eq .( [ appth ] ) , again , admits one unique solution . in the intermediate range ,both solutions are possible , together with a third unstable one . while precise numerical values of the transition points can not be accurately obtained , the qualitative features are , however , clear .for example , by increasing , or equivalently , decreasing , the curve in the inset of fig .[ fig - phase ] moves to the right , thus favoring the dense network phase ( low ) .likewise , decreasing flattens the function , suggesting that the coexistence of solutions is possible only for large values of .this is indeed confirmed by numerical simulations .finally , notice that the dependence on only enters in the combination . hence lowering debt maturity ( increasing ) is equivalent to shifting the whole curve to the left which again results in the disappearance of the coexistence region , as shown in fig .[ fig - phase ] and in the simulations .our model and results highlight elements that were central to the interbank credit freeze that has characterized the recent global financial crisis .first , our model shows how the arrival of bad news about a counterparty and the subsequent foreclosure decisions by its creditors can quickly spread across the entire system . as fig .[ fig - libor ] shows , beginning in mid-2007 , the financial world was bombarded with jittery news of larger - than - expected and projected losses .exogenous disclosure requirements , as modelled by , in effect forced banks to release information about their positions , providing investors with signals that helped precipitate the crisis . in a globalized world , where investors require frequent and better quality information on their investments , it had the effect of making the crisis especially severe and far - reaching .second , our model highlights maturity mismatches on balance sheets as a key factor underlying the current crisis .banks financed long - term , illiquid , assets ( such as special investment vehicles ) by short - term borrowing on the interbank market .this situation corresponds to a large ratio in our model , where debts have a long maturity compared to the timescale , , over which banks refinance their debts by convincing creditors to roll over their loans .it is precisely , and only , in the limit of large that a sharp transition such as the one observed in fig .[ fig - sim ] can occur .third , our model sheds light on the nature of public sector intervention during ( and since ) the crisis .the resumption of normality in the interbank markets required a restoration of trust in the balance sheets of key financial institutions . 
to facilitate this, central banks cut interest rates to historically low levels , effectively decreasing the cost of miscoordination , .that these interest rates have been so low and for so long emphasizes the hysteresis entailed in the restoration of trust a key feature of our model .central banks have also been active in providing emergency lending to troubled institutions as well as sovereign guarantees for borrowing activity , both of which are akin to an increase in .it is worth noting that while the method of analysis suggested by affords a unique solution to individual coordination games , this uniqueness is lost at the system level ; multiple and simultaneous coordination games played on a credit network are characterized by co - existence of equilibria and hysteresis when debt is long - lived the current policy debate on promoting systemic financial stability has highlighted the importance of liquidity cushions in averting future crises .our stylized model lends credence to such considerations . at the system - wide level ,our results and numerical simulations indicate that increasing , i.e. , liquid assets , for all agents ( for a given debt maturity ) results in dense credit networks with for larger values of . at the individual level , increasing clearly motivates creditor to roll over loans to agent .relatedly , setting liquidity requirements to be proportional to short - term debt , in the manner of the greenspan - guidotti rule for short - term debt - to - reserve ratios in emerging - market countries , can also improve system stability . in our model, this amounts to setting , where is some pre - defined ratio . from eq .( [ cj ] ) this is equivalent to reducing the cost of miscoordination to , and replacing by .clearly , however , the benefits of _ ex post _ regulation of this kind need to be set against the _ ex ante _ costs to banks of such regulation . requiring banks to hold liquidity cushions may lower the extent of lending _ ex ante _ and the overall implications of such a policy for economic welfare is likely to be unclear .an alternative to blanket leverage ratios and liquidity requirements is to target such policies on those financial institutions in the network that are most important .there are interesting parallels here with the literature on attacks on internet - router networks .extensions of our model along such lines might allow for differential link formation , , or preferential linkage where agents in one sub - group prefer to interact with others in the same sub - group .agent heterogeneity of this kind holds out the possibility of promising new insights into the design of financial stability policy .the model presented here is simple . in particular, it does not allow for any macroeconomic variability or the exogenous default by a group ( or sub - set ) of agents .moreover , it seems likely that the parameters of the model will evolve according to economic conditions and will need to be determined endogenously .for example , in a crisis , agents will strategically disclose information or form links in ways that improve their chances of a public sector bailout . incorporating a richer set of economic interactions into a network setting such asours is an important step for future research .we now provided a more detailed description and analysis of the foreclosure game , drawing on lines of reasoning provided in .there are three distinct time periods , _ initial _ , _ interim _ and _ final _ , which we label , and , respectively . 
at the beginning of period ,each agent is endowed with one unit of money , which may be used to readily purchase consumption goods during either periods or .we assume that agents are indifferent between consuming during the interim and final periods .this is formalized by requiring the utility function for each agent to have the simple additive form where is the consumption during period .this simplification allows us concentrate on measuring coordination in terms of expected returns . at the start of period ,a pool of investors enter into a _ short - term loan _agreement with agent , where each loan is for the nominal amount .if , for example , creditor lends to , then during the interim date is given the choice to either terminate the line of credit ( foreclose ) , or rollover the loan to the final date .if chooses to rollover its loan and s investment is successful , then will receive one unit of money at the end of period .thus , at the end of the initial period , is characterized by three quantities : ( i ) number of liabilities , , i.e. , counter - parties that have lent to , ( ii ) number of illiquid assets , , which counts the number of loans given to other agents by and ( iii ) level of liquid assets , , which may be thought of as cash reserves .one may interpret the loans made out by as a result of other agents tendering requests for loans at date as well . during the interim period ,each of s lenders receive information on the fundamental soundness , of s investment .if the fundamentals are weak , would seek an additional injection of capital , which would demand that a large number lenders rollover their loans for the investment to succeed .in particular , if is the number of early foreclosures , then the investment is successful if and only if the number of rollovers satisfies .a high degree of coordination is required by the lenders to see the investment through . in such a case of weak fundamentals ,we say is large .we take is uniformly distributed _ ex ante _ along the positive real axis .the information for agent is realized as the _ cost of miscoordination _ , , from rolling over the loan . in particular , where ] .note that , despite our prior on was improper , i.e. , it had infinite mass , the posterior , which is a conditional distribution is well defined . finally , by a suitable change of variables , we reduce the integral in eq .( [ eq : post int ] ) to a beta function , which we readily evaluate . combining this result with eq .( [ eq : expected payoff ] ) we get which is independent of . having worked out the switching threshold in terms of the cost for a hypothetical agent , agent adopts this behavior .moreover , as there was nothing special in our choice of , _ all _ lenders to will follow a similar exercise and adopt the switching strategy same result .for a given cost of miscoordination , , the master equation for the joint distribution of liabilities and assets of a representative agent is given by p(\ell , b)\,,\end{aligned}\ ] ] where is the partial derivative with respect to time and refers to the heaviside function . 
to keep our notation concise, we have left out the time label in the distribution. the parameters , and are exogenously given rates of link creation, dissipation and news arrival, respectively. the rates and are endogenous default rates, which are self-consistently determined against the stationary distribution of as here, and are the mean assets and liabilities, respectively. the angled brackets refer to the average over , which in fact yields . we can understand the master equation via simple geometric considerations. if we consider any arbitrary point in the interior of our lattice, the probability that an agent has this balance sheet position is given by the probability that there will be an incremental _hop_ to it from one of the neighboring sites. the rate at which there will be a hop from the left or bottom site is the rate at which either a new liability or a new asset, respectively, is added. the rate at which a hop occurs from either above or the right is simply the rate at which an asset or liability is lost. in the latter case, this rate may be decomposed into two aspects: (i) the natural dissipation of a link, i.e., , and (ii) the probability that our representative agent had borrowed from another agent who defaulted and lost all links after revealing, with rate , its balance sheet to its creditors. the rate at which such incidents occur is . a similar argument may be used to construct the rate at which assets are lost as well. whenever an agent defaults, it is stripped of all its assets and replaced by a new agent, who starts at . the first term in equation ([eq:mastereq]) reflects this action. a full analytical solution for equations ([eq:mastereq])-([eq:barnu_b]) is complicated due to the presence of various non-linearities. we can nevertheless numerically solve the system for the stationary state.

squam lake working group on financial regulation (2009), reforming capital requirements for financial institutions. council on foreign relations, center for geoeconomic studies, squam lake working group paper no. 2. available at www.cfr.org/publication/19001/reforming_capital_requirements_for_financial_institutions.html

[figure [fig-scheme] caption: the plane during our network dynamics. the shaded area corresponds to where eq. ([instabcond]) is satisfied and foreclosures take place. with rate , a credit relationship is established: agent gains an asset ( ), while increments the number of liabilities it holds ( ). with rate , however, this link matures and expires, causing a rearrangement of balance sheets. finally, with rate , debtor reveals its balance sheet position to the creditors. if is found to be in the shaded region, foreclosures take place and defaults, thereby transporting it back to the origin, i.e., . agent , who had borrowed from , loses one liability ( ), while agent , who lent to , loses an asset ( ).]

[figure [fig-sim] caption: average connectivity in the network as a function of the cost for different values of . the symbols are produced from direct simulations, while the lines are from solving the corresponding master equation numerically. in producing the curves we took and .]

[figure [fig-phase] caption: boundaries in the vs. plane distinguishing the sets of parameters that result in either a dense (d) or sparse (s) network. we also note that for small , there is a third phase of co-existence (co) between the dense and sparse states. in the inset we plot as a function of , for , where is given by eq. ([z]). the different curves correspond to different values. we note the existence of either one or three fixed points.]
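a deliberately simplified relaxation scheme for the master equation of appx. [apx:master] might look as follows: a single self-consistent default rate `mu` is used for both assets and liabilities, time is rescaled so that the link-creation rate is one, and the run region `c * ell > theta + 1` is again only a placeholder for eq. ([instabcond]).

```python
import numpy as np

def stationary_state(lam, nu, c, theta, L=60, dt=0.005, n_iter=200_000, tol=1e-12):
    """forward-euler relaxation of a simplified master equation for P(ell, b)."""
    ell = np.arange(L)[:, None]                  # liabilities index
    b = np.arange(L)[None, :]                    # assets index
    run = (c * ell > theta + 1.0)                # assumed run region, broadcasts over b
    P = np.zeros((L, L)); P[0, 0] = 1.0
    mu = 0.0
    for _ in range(n_iter):
        loss_l = (lam + mu) * ell                # per-state liability-loss rate
        loss_b = (lam + mu) * b                  # per-state asset-loss rate
        dP = np.zeros_like(P)
        # gain of a liability / an asset at unit rate (gamma = 1 after rescaling)
        dP[1:, :] += P[:-1, :]; dP[:, 1:] += P[:, :-1]; dP -= 2.0 * P
        # loss of an existing liability / asset (maturity plus counterparty default)
        dP[:-1, :] += (loss_l * P)[1:, :]; dP[:, :-1] += (loss_b * P)[:, 1:]
        dP -= (loss_l + loss_b) * P
        # default of the representative agent itself: reset to the origin
        dP -= nu * run * P
        dP[0, 0] += nu * np.sum(run * P)
        P_new = np.clip(P + dt * dP, 0.0, None)
        P_new /= P_new.sum()
        mu = nu * np.sum(run * P_new)            # self-consistent default rate
        if np.abs(P_new - P).max() < tol:
            P = P_new; break
        P = P_new
    k_mean = np.sum(0.5 * (ell + b) * P)         # average connectivity
    return P, mu, k_mean

_, mu, k = stationary_state(lam=0.1, nu=0.5, c=0.3, theta=3.0)
print(mu, k)
```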
|
trust lies at the crux of most economic transactions , with credit markets being a notable example . drawing on insights from the literature on coordination games and network growth , we develop a simple model to clarify how trust breaks down in financial systems . we show how the arrival of bad news about a financial agent can lead others to lose confidence in it and how this , in turn , can spread across the entire system . our results emphasize the role of hysteresis it takes considerable effort to regain trust once it has been broken . although simple , the model provides a plausible account of the credit freeze that followed the global financial crisis of 2007/8 , both in terms of the sequence of events and the measures taken ( and being proposed ) by the authorities .
|
the distribution of large scale structures (lss) is one of the most important tools to study our universe. with the recently concluded and upcoming lss surveys such as boss, desi, euclid, lsst etc., the vast amount of data will provide enough statistical power that errors in the analysis will be dominated by poor theoretical understanding of lss. these surveys generally observe tracers such as galaxies, which, unlike in weak lensing studies, do not perfectly trace the underlying matter density. making this connection is a two-step process: first, one must realize that galaxies reside in collapsed dark matter halos following some statistical distribution; and second, that these halos are themselves biased tracers of the matter distribution. in this paper, we focus on the latter of these and study novel techniques to measure halo bias. for a recent review on galaxy bias see . in full generality the relation between the halo density field and the dark matter contains information about all processes relevant for the formation of halos, as well as stochastic contributions arising due to the fact that halos come in a finite number. however, on large enough scales, we can hope to treat the problem perturbatively, and characterize the bias relation with a relatively small number of free parameters we can marginalize over. the simplest way to get physical intuition for halo bias comes from the so-called peak-background split argument (pbs). in the original formulation of pbs, halos are regions where the value of the dark matter density field exceeds some critical threshold. this threshold can be crossed more easily in the presence of a positive large-scale dark matter fluctuation, and conversely less easily for a negative one. this means that, on average, overdense regions host more halos than the mean. linear bias, , can therefore be defined as the linear response of the halo overdensity field to the presence of long wavelength perturbations, $\delta_h = b_1\,\delta_m$. the above relation can be generalized to any order in the density field, leading to the well known result by , [eq:frygatz] $1+\delta_h(\mathbf{x}) = \sum_{n=0}^{\infty}\frac{b_n}{n!}\,\delta_m^n(\mathbf{x})$, where however one must also enforce the integral constraint by imposing that $\langle\delta_h\rangle = 0$. typically the bias expansion is written in two ways: either by relating the halo field to the dark matter field at the time the observations are made, at low redshift, or by identifying in the initial gaussian field the regions which are more likely to collapse into halos at a later time. the former is called a eulerian bias approach, the latter a lagrangian bias approach. while the two approaches can be shown to be mathematically equivalent, each method has its own pros and cons. it has been recently shown that the statistics of the halos do not depend only on the dark matter density field, as in eq. ([eq:frygatz]), but also on spatial derivatives of the density field, and on the tidal fields, each with its own new bias coefficient. these new terms, with abuse of notation, have been called non local bias coefficients, since they contain gradients of the density field or of the gravitational potential. they have been shown to be important in the eulerian approach, as they can be generated by non linear gravitational evolution, and can actually be predicted in perturbation theory (pt).
as a matter of facta complete basis would include all possible fields one can construct at any given order in perturbation theory compatible with the symmetry of the problem .the relevance of non local terms in the lagrangian approach is still under debate . at the linear level, a scale dependent bias term is certainly present , as shown for instance in , using n - body simulations , and its measured value agrees fairly well with analytical models of structure formation . at second orderthe leading non local term is proportional to the traceless shear , also called tidal field , defined as [ eq : s2 ] s^2 ( ) _ ij s^2_ij ( ) s^2_ij ( ) = ( - _ ij^k ) ( ) , this new term has been measured in eulerian space by , and it is nowadays included in all analysis of galaxy redshift surveys . as we have already mentioned , a non - zero tidal bias is expected from non - linear gravitational evolution .it is therefore important to compare the values measured in n - body simulations to the assumptions of zero lagrangian tidal bias , since an incorrect assumption on the latter could affect the determination of cosmological parameters .tidal bias in the lagrangian picture is non - zero if the shear is sampled around proto - halos differently than around random positions .we actually do expect the shear to play an important role in the process of halo collapse , that is very likely to happen in an ellipsoidal fashion .the work in have not shown clear evidence for tidal bias in lagrangian space using n - body simulations .it is therefore interesting , and it is one of our main goals , to check whether this bias coefficient has non - zero value in a lagrangian scheme and what would be the implications .in addition to finding evidence of new bias coefficients , there is also another good reason to study lagrangian halo bias . in lagrangian spaceit is more natural to write down relation among the different parameters , since , contrary to the eulerian picture , non - linear evolution and the biasing scheme are decoupled from each other . for similar reasons ,the evolution of bias with redshift , an important issue for galaxy surveys that spans a wide redshift range , is better understood in lagrangian space .finally , analytic approaches to predict the value of bias coefficients are built in the lagrangian formalism , and can be used to put priors on some of the bias parameters .as we are ultimately interested in cosmological analysis of lls data , a combination of priors and relations between the bias coefficients would be of great help to narrow the error bars on cosmological parameters .recently have shown that at 1-loop in lagrangian effective perturbation theory , the measurements in redshift space of the halo multipoles are very well described if one adopts the following bias expansion , + b_{s^2}[s^2(\bx ) - \langle s^2(\bx ) \rangle]\;,\ ] ] stopping at second order and assuming that third order terms are only generated by gravitational evolution .our goal is to estimate the three bias coefficients in the above equation in n - body simulations using different techniques , and to compare them to analytic predictions .we are primarily interested in scale dependent bias , linear and non linear , and in the first clear evidence of tidal shear in the bias expansions .we will make use of three different estimators : cross correlations in fourier space , cross correlations in real space , and direct implementation of the pbs argument to simulations .we organize the paper as follows . 
in section [sec:bf] we describe our approach to estimate bias using cross power spectra in fourier space, showing results for the bias coefficients up to second order. in section [sec:breal], estimators in real space that make use of the pdf of the density field are discussed, in particular focusing on their agreement with the fourier space measurements. in section [sec:bpbs] we present a direct application of the pbs to simulations that allows us to recover the value of the scale independent part of the bias coefficients. after establishing the agreement between the different estimators, in section [sec:rel] we present various relations amongst the different parameters. section [sec:stoch] deals with the halo-bias stochasticity, and how much it is reduced by introducing a more complicated bias model. finally, section [sec:conclude] summarizes the main results, drawing conclusions and future prospects. since we work in lagrangian space, throughout the text we will refer to a dark matter halo as the initial collection of particles in the n-body simulations out of which a halo is made at the final redshift, i.e., the proto-halo. the simulations we used for the analysis have been produced using the fastpm code. fastpm is a tool to generate non linear dark matter and halo fields in a quick way, employing a number of approximations to reproduce the results of a full n-body simulation. despite its approximate nature, it was shown that the code performs extremely well on various benchmarks such as the dark matter power spectrum, the halo mass function and the halo power spectrum. we refer the reader to for more details of the tests performed. we evolved particles in periodic cubic boxes of size 690, 1380 and 3000 . the mass of the particle in the three cases is , and respectively. the linear power spectrum used was generated with camb with cosmology , , , , and the initial particle displacements and velocities were computed at 2nd order in lagrangian perturbation theory. the halos were identified using the fof halo finder in nbodykit, with the optimization presented in , using a linking length of 0.2. the smallest halo identified in every simulation consists of 100 particles. we ran 5 independent realizations for every box and all the results presented are averages over these runs. unless stated otherwise, we show results at a single redshift, . to calculate power spectra, we interpolate the halos onto a grid using cloud-in-cell (cic) interpolation and deconvolve the cic window. to keep the comparison consistent, we used a grid for the dark matter density field as well and sampled the corresponding 512 modes from the 2048 modes available from the simulations.

[figure [fig:bias-kgrid] caption: we show , and as measured from the fourier space estimator. the dashed, solid and dot-dashed lines are from the different boxes of size , , respectively. for clarity, we have refrained from showing all the mass bins. the dependence on the wavenumber is shared by all the parameters: a constant piece on large scales, followed by a $k^2$-like piece on intermediate scales, followed by a cutoff around the halo scale.]

the easiest thing to measure in fourier space is linear bias, $b_1(k) = P_{hm}(k)/P_{mm}(k)$, where in our notation $P_{XY}(k)$ denotes the power spectrum between the $X$ and $Y$ fields.
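a minimal, self-contained estimator of this ratio might look as follows (in practice one would use the nbodykit machinery described above; the plain-numpy version here is only a sketch, and the overall normalization cancels in the ratio anyway). `delta_h` and `delta_m` are assumed to be the cic-interpolated halo and linear matter overdensity grids.

```python
import numpy as np

def cross_power(delta_a, delta_b, box_size, n_bins=40):
    """spherically averaged (cross) power spectrum of two overdensity grids."""
    n = delta_a.shape[0]
    vol = box_size ** 3
    fa = np.fft.rfftn(delta_a) * vol / n ** 3      # approximate continuum FT
    fb = np.fft.rfftn(delta_b) * vol / n ** 3
    kf = 2.0 * np.pi / box_size                    # fundamental mode
    kx = np.fft.fftfreq(n, d=1.0 / n) * kf
    kz = np.fft.rfftfreq(n, d=1.0 / n) * kf
    kmag = np.sqrt(kx[:, None, None] ** 2 + kx[None, :, None] ** 2
                   + kz[None, None, :] ** 2)
    pk3d = (fa * np.conj(fb)).real / vol
    edges = np.linspace(kf, 0.5 * n * kf, n_bins + 1)
    idx = np.digitize(kmag.ravel(), edges)
    pk = np.array([pk3d.ravel()[idx == i].mean() for i in range(1, n_bins + 1)])
    return 0.5 * (edges[1:] + edges[:-1]), pk

# b1(k) as the ratio of the halo-matter cross spectrum to the matter auto spectrum
# k, p_hm = cross_power(delta_h, delta_m, box_size=690.0)
# _, p_mm = cross_power(delta_m, delta_m, box_size=690.0)
# b1_k = p_hm / p_mm
```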
in figure[ fig : bias - kgrid ] , top panel , we show linear bias as measured from our fastpm runs for several halo mass bins .the common features shared by all the halos are : a constant piece on large scales , and then a -like piece on intermediate scales followed by a cut off , approximately at the scale of the halo .similar plots for linear bias can be found for instance in .a convenient parametrization of linear bias is therefore [ eq : b1k ] b_1(k ) = ( b_10 + 2 b_11)w(k r ) where is the variance of the linear field smoothed on the scale of the halo , .for instance , for a gaussian windows , , the scale dependent term is exactly proportional to . in this formthe two bias coefficients and are both dimensionless .since neither the halo scale , , nor the average halo shape , , are known , fitting the above equation to the simulations results requires some care . to be conservative , and for comparison with previous work , we assume proto - halos of mass are spherical patches of size , such that the window function is top - hat of radius in real space . as a further safeguard ,we fit for only upto the first peak near the halo scale , after which the halo window function starts to dominate over the scale dependence of bias .. crosses zero at large excludes the gaussian window , , employed by .the choice of the filter is almost irrelevant for the best - fit value of .] at this time we can fit for the scale independent piece of linear bias , , as well as for the scale dependent one , , in eq .( [ eq : b1k ] ) . a comparison of with theoretical models and with other estimators for linear bias on large scales will be done in the next section , whereas here we only show , in figure [ fig - b11_mass ] , results for as a function of mass and how it compares to the analytic prediction shown as the continuous line .the model we use is an extension of peaks theory ( bbks)( ) that includes the excursion set constraints ( esp ) and a simple treatment of ellipsoidal collapse ( ) ( ) .the model reproduces reasonably well the halo mass function and linear and quadratic bias halo bias on large scale as measured from n - body simulations , see for more details .it has one free parameter , , entering the definition of the critical threshold , , required for collapse , which measures the relative importance of shear fluctuations to density fluctuations at the halo scale [ eq : esptau ] b = _ c + where is the standard spherical collapse value .we fit by eye to the measurements , finding that provides a good fit to the data . performs remarkably , if not surprisingly , well in comparison to the scale dependent piece over a orders of magnitude in mass . since the value of scale dependent bias is sensitive to the choice of the filter ( see eq .( [ eq : b1k ] ) ) , the agreement between the theory and the data suggests that defining proto - halos with a top - hat filter is a very good approximation .this finding is in contradiction with , and it deserves future investigation in a future work . 
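as an illustration, the fit of eq. ([eq:b1k]) described above can be set up as follows; the $k^2$ form of the scale-dependent term, the prefactor conventions and the conversion from mass to lagrangian radius are assumptions that should be matched to the definitions in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def tophat(kr):
    """fourier transform of a real-space top-hat of radius R (argument kR)."""
    kr = np.atleast_1d(np.asarray(kr, dtype=float))
    w = np.ones_like(kr)
    nz = kr > 1e-8
    w[nz] = 3.0 * (np.sin(kr[nz]) - kr[nz] * np.cos(kr[nz])) / kr[nz] ** 3
    return w

def lagrangian_radius(mass, omega_m=0.309):
    """R (Mpc/h) such that M = (4 pi / 3) rho_m R^3, for M in Msun/h."""
    rho_m = 2.775e11 * omega_m                 # mean comoving density, h^2 Msun/Mpc^3
    return (3.0 * mass / (4.0 * np.pi * rho_m)) ** (1.0 / 3.0)

def b1_model(k, b10, b11, R):
    # assumed parametrization: (b10 + b11 k^2) W(kR)
    return (b10 + b11 * k ** 2) * tophat(k * R)

# fit only up to (roughly) the first peak of b1(k), i.e. the halo scale
# R = lagrangian_radius(1e13)
# cut = k < 2.0 / R
# (b10, b11), cov = curve_fit(lambda kk, a, b: b1_model(kk, a, b, R),
#                             k[cut], b1_k[cut], p0=[1.0, 1.0])
```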
as we will see later in section [ sec : rel ] the performance of on scale dependent bias is important , as the value of and are related by mass conservation .we would like to extend the results we have shown for to higher order bias coefficients , in particular we seek for a simple estimator of .first , notice that , since the bias relation in eq .( [ eq : real - bias ] ) is written in real space , any estimators of second order bias coefficients in fourier space will make use of convolutions , for instance the squared density field is trivially ^2(x ) = e^ - i ^3q ( q)(k - q ) and an analogous expression holds for .we also remind that since the dark matter field is linear in lagrangian space , cross correlations between the halos and a generic quadratic fields are very easy to write down , as any expectation value involving three fields vanishes . within the model in eq .( [ eq : real - bias ] ) we arrive to where for instance p_^2 ^2(k ) = k^3 _ r = 0^ _ x=-1 ^ 1 p(kr)p(k ) and similar expression involving legendre polynomials can be written for and .+ the linear system of equations in eq .( [ eq : b2k - bs2k ] ) can be solved to obtain a value of and at each mode and for different halo populations .( to assist in solving , one can also re - write these equations in terms of to decouple from , though it is not necessary . )the solution is shown in the two bottom panels of figure [ fig : bias - kgrid ] , which makes clear that the behavior as a function of we have seen for linear bias also applies to higher order bias coefficients .scale dependence of non linear bias would have , for instance , to be taken into account for lagrangian perturbation theory calculations beyond -loop order .the bottom panel clearly shows that tidal bias is non zero for a variety of halo populations .this has important consequences for cosmological analyses using lagrangian perturbation theory plus lagrangian bias ( ) .as already known , is negative for low mass halos and then becomes large and positive for very massive halos .the opposite trend is observed for , which is negative at the high mass end .this is expected , since shear is acting against the formation of the halos , whereas density enhances it .had we started with a local bias expansion we would have gotten different results for , in contradiction with the results of the next sections , and as further discussion at the end of section [ sec : breal ] . while we do find evidence for non - zero tidal bias , getting the precise values for various mass bins is non - trivial .this is due to the combined effect of cosmic variance as well as degeneracy between and on large scales , making things noisy on very large scales where the scale independent piece of bias should dominate , and it becomes more severe for our smallest simulation box .this also comes into play when we try to extend this method to ( see appendix [ sec : appendix - a ] ) .furthermore , unlike , the functional form of the scale dependence of is highly non - trivial , and hence much more sensitive to the halo window , which is not exactly known , as well as , up to which any such a fit is done . nevertheless , for the purpose of this work , we strive to fit for a constant on large scales as function of increasing and choose the value which gives the best reduced of the fit . as a function of mass : * the values are estimated from the best fit values for and using eq .( [ eq : b1k ] ) , assuming a tophat window at the halo scale . 
we will also follow the scheme of the colors blue , green and red ( b , g , r ) representing different box sizes throughout this paper unless explicitly mentioned otherwise , or when the box size is not important . ]cross correlations between the halos and other fields can also be written in real space .the advantage of working in lagrangian space is that we know the pdf of the density field , and therefore estimators for the cross correlation and for the bias coefficients can be written down without expensive pair counting .following , if is the dark matter density in a sphere of radius around halos of given mass , for the -th order bias coefficient we can write [ eq : bn - real ] _n = _ i=1^n h_n ( ) where the sum runs over all the halos falling into the mass bin , and is a probabilist s hermite polynomial .the rationale behind this estimator is that hermite polynomials are orthogonal polynomials of gaussian random fields , therefore they will pick up only the -th order in the density field contribution to the bias expansion . in the above equation the variance of the field on large scale , and is the cross variance between the halo scale and the large scale . in real space ,the convenient separation of scales we have seen in fourier space is lost .the scale independent and the scale dependent term will now be mixed in the estimator in eq .( [ eq : bn - real ] ) . by fourier transforming eq .( [ eq : b1k ] ) , it is easy to see that measurements in real space can be fitted by [ eq : b1real ] b_1 = b_10 + _ b_11 where .the scale independent piece can be extracted by using a very large , since goes to zero at large scales , or by combining measurements at different scales . , and as function of mass * : agreement between real space , fourier space estimator and theory . for real space, we mention in the subscript the large scale used to calculate the bias parameters . for , we use eq .( [ eq : b1real ] ) and the two smoothing scales mentioned to extract scale independent part .real space points have been shifted along x - direction for clarity . ]our first goal is to compare estimates of and from real and fourier space .we find that both approaches give very similar results . 
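a sketch of the real-space estimator of eq. ([eq:bn-real]) is given below: it returns the mean of the probabilists' hermite polynomial of the smoothed density at the proto-halo positions; the remaining normalization by the large-scale variance and by the halo-scale / large-scale cross variance is left to the caller, and the gaussian smoothing used here for convenience should be replaced by whatever large-scale window is adopted.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from scipy.ndimage import gaussian_filter, map_coordinates

def hermite_moment(delta_lin, halo_pos, box_size, R_large, n):
    """mean of He_n(delta_R / sigma_R) over proto-halo positions."""
    ngrid = delta_lin.shape[0]
    cell = box_size / ngrid
    delta_R = gaussian_filter(delta_lin, sigma=R_large / cell, mode='wrap')
    sigma_R = delta_R.std()
    coords = (np.asarray(halo_pos).T / cell) % ngrid      # shape (3, N_halos)
    delta_at_halos = map_coordinates(delta_R, coords, order=1, mode='wrap')
    coeff = np.zeros(n + 1); coeff[n] = 1.0
    return np.mean(hermeval(delta_at_halos / sigma_R, coeff)), sigma_R

# example: in linear theory the n = 1 moment equals b1 * sigma_x^2 / sigma_R,
# with sigma_x^2 the halo-scale / large-scale cross variance, so
# m1, sig_R = hermite_moment(delta_lin, halo_pos, 690.0, R_large=30.0, n=1)
# b1_hat = m1 * sig_R / sigma_x2
```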
in figure [ fig - b1 m ]we compare the values of linear and quadratic bias estimated from real space ( squares and triangle ) and fourier space ( circles ) .the agreement between the two is very good over the whole mass range we probe and it is very well described by .it is important to point out what would have happened if we had not include shear in our bias estimator in fourier space .this is shown in figure [ fig - b2nos2 ] , in which the estimates of in fourier space without taking shear into account are clearly incompatible with the real space results ( and with the theoretical model ) .we conclude the discussion of real space bias measurements by generalizing the estimator in eq .( [ eq : bn - real ] ) to non local bias parameters .it is easy to show that the shear field , , is -distributed with five degrees of freedom , which implies that the orthogonal polynomials to use in the estimator are laguerre polynomials , where and is the large scale ( ) shear .this estimator assumes that the bias expansion is written in terms of laguerre polynomials , 1 + _h(m ) _ 2 l_1 ^ 3/2 ( ) which are properly normalized and ensure by construction that .the bias parameter is related to the previously defined tidal bias by the following relation , b_s^2 = c_2 and from now on , to reduce the dynamic range in the figures , we will plot only ( we will make use of the words , and interchangeably ) . figure [ fig - b1 m ] , bottom panel , shows the large scale values of the estimated from real and fourier space .the two agree fairly well , however the real space measurements have still significant errorbars that do not allow us to assert the significance of the excursions of above zero we see in fourier space .we plan to come back to this issue , with better measurements , in a future work .the analytic prediction for the tidal bias is shown as solid line in figure [ fig - b1 m ] .the agreement with the measurement in the n - body is poorer that for and it indicates that , although , the model in is capable of capturing the gross features of shear into the bias expansions more work is needed to properly understand the origin of the non local bias parameters . without * : fourier space estimator for with ( empty boxes ) and without ( solid dots ) including the shear in eq .( [ eq : b2k - bs2k ] ) . for comparison the real space estimates ( triangles ) , by construction independent of the shear ,the two estimators are in agreement only allowing a non - vanishing tidal bias in fourier space .though not shown here , the fit for without shear is also much worse for pbs estimator , especially for high mass halos .overall consistency with real space estimates is presented in figure [ fig - b1 m ] . ]the very first definition of halo bias , the pbs , says that bias parameters are the -th order response of the halo population to the presence of large scale fluctuations . for the case of isotropic response , one can exploit the well known equivalence between a infinite wavelength spherical perturbation in a flat friedmann - roberston - walker ( frw ) background and a closed frw universe .this technique , known as separate universe , allows to measure isotropic bias parameters in n - body simulations with basically no cosmic variance .the drawback of this method is that it does not apply to tidal bias , where one would need n - body simulations with anisotropic expansion along the different axis , since the introduction of shear breaks isotropy of space . 
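for completeness, the fourier-space solution of eq. ([eq:b2k-bs2k]) entering this comparison can be sketched as follows: build the quadratic fields $\delta^2$ and $s^2$ on the grid, cross-correlate them with the halo field, and solve the resulting 2x2 system in each k bin (reusing the `cross_power` helper above). the exact factor conventions, e.g. a possible 1/2 in front of the quadratic density term, are assumptions to be matched to the bias expansion used in the text.

```python
import numpy as np

def tidal_s2(delta, box_size):
    """s^2(x) = sum_ij s_ij(x)^2 for the traceless tidal tensor of eq. ([eq:s2])."""
    n = delta.shape[0]
    dk = np.fft.rfftn(delta)
    kf = 2.0 * np.pi / box_size
    kx = np.fft.fftfreq(n, d=1.0 / n) * kf
    kz = np.fft.rfftfreq(n, d=1.0 / n) * kf
    kvec = (kx[:, None, None], kx[None, :, None], kz[None, None, :])
    k2 = kvec[0] ** 2 + kvec[1] ** 2 + kvec[2] ** 2
    k2[0, 0, 0] = 1.0                             # avoid 0/0 at the zero mode
    s2 = np.zeros_like(delta)
    for i in range(3):
        for j in range(i, 3):
            sij = np.fft.irfftn((kvec[i] * kvec[j] / k2 - (i == j) / 3.0) * dk,
                                s=delta.shape)
            s2 += (1.0 if i == j else 2.0) * sij ** 2
    return s2

def quadratic_bias(delta_h, delta_m, box_size):
    """solve the 2x2 system of eq. ([eq:b2k-bs2k]) in every k bin."""
    d2 = delta_m ** 2 - np.mean(delta_m ** 2)
    s2 = tidal_s2(delta_m, box_size); s2 -= s2.mean()
    k, p_h_d2 = cross_power(delta_h, d2, box_size)   # helper sketched earlier
    _, p_h_s2 = cross_power(delta_h, s2, box_size)
    _, p_d2d2 = cross_power(d2, d2, box_size)
    _, p_d2s2 = cross_power(d2, s2, box_size)
    _, p_s2s2 = cross_power(s2, s2, box_size)
    b2, bs2 = np.empty_like(k), np.empty_like(k)
    for i in range(len(k)):
        A = np.array([[p_d2d2[i], p_d2s2[i]], [p_d2s2[i], p_s2s2[i]]])
        b2[i], bs2[i] = np.linalg.solve(A, [p_h_d2[i], p_h_s2[i]])
    return k, b2, bs2
```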
in lagrangian space , where the density field is linear , one can take advantage of the pbs to measure _ any _ bias parameters in a much simpler way .consider a relatively large box ( of size ) of a n - body simulation .we chop this box into several smaller boxes of linear size , we call them boxlets . each boxlets will have its own value of density field , drawn from a gaussian distribution smoothed at the scale of the boxlet , ( this scale is computed by matching the volume of a sphere of radius and a cube of the boxlet side ) . in order to suppress scale dependent terms one must ensure that , where is the typical size of halos . differently from the standard separate universe approach , each boxlets will also have its own value of the shear field , drawn from a -distribution . to estimate the density associated with every boxlet, we smooth the box with a tophat window of the scale of the boxlet , .the value of this smoothed density field at the center of every boxlet is the associated super sample density mode with boxlet ( ) .for this smoothed density field then , we estimate the shear field over the whole box using eq .( [ eq : s2 ] ) .the value of this shear field at the center of every boxlet is the associated super sample shear value ( ) .since the smoothing scale is large and hence the variance is small , the value of these fields at the center of the boxlet is equivalent to computing the mean from all the grid cells belonging to each boxlet .to evaluate the halo overdensity in every boxlet , we measure the mean number of halos in the box ( ) and in every boxlet ( ) , and then compute the relative excess in halo number density as bias parameters can then easily be obtained by fitting the above quantity to the measured density and shear in each boxlet [ eq : pbs ] _ h_i = b_10 _ i + b_20_ i^2 + b_s^2s_i^2 + ... while this method has several advantages over standard techniques it has its own drawbacks . since the boxlets are large , , the value of the density and shear are small and the scatter among different boxlets largethis makes difficult to fit for bias parameters and it yields slightly larger error bars than the standard separate universe technique .also , high mass halos are poorly sampled on the small volume of the boxlets , and are mostly affected by non - poissonian shot noise , which is hard to estimate . figure [ fig - sep_univ_boxlets ] shows bias coefficients estimated using eq .( [ eq : pbs ] ) for different chopping of the box , . for various size of the boxletsthe measured bias parameters agree among themselves , and we also report clear evidence of tidal bias with this novel estimator . as discussed , the size of error bars decreases with decreasing the size of boxlets ( increasing number ) due to better constraining power . however , to safe - guard against possible systematic errors , we will henceforth quote values for in the remainder of this work .the concordance between the three different estimators used in this paper , fourier space , real space and pbs , is shown in figure [ fig - sep_univ_b1 m ] and it is one of our main results . 
we stress that , similarly to what happened in fourier space , in fitting eq .( [ eq : pbs ] ) , the presence non local bias was crucial to obtain the agreement for local linear and quadratic bias between different estimators , as well as the theory .while we do find convincing evidence of tidal bias in all three measurements and there is general agreement in the values , a % level calibration of non local bias would require more simulations and better control of any systematics , and it goes beyond the scope of this paper .we plan to return to it in a future work . , and as function of mass * : comparison with esptau theory , fourier space and real space measurements .measurements are shown only for box and pbs measurements for 20 boxlets . the pbs and real space pointshave been shifted along x - direction for clarity . ]relations among the bias coefficients are of great value for cosmological analyses as they make possible to reduce the number of free parameters . in this sectionwe study three kinds of such relations : between the the scale dependent and the scale independent terms of linear bias , the general universality of these parameters with redshift and empirical relations between or and that universality allows one to fit for . and : * eq .( [ eq : consrel ] ) relates the linear bias to the mean dark matter overdensity at the position of the halos . for spherical collapse ,this mean = 1.686 , but its a function of mass in ellipsoidal collapse .the square markers ( shifted along x - direction for clarity ) are the direct measurements from the simulations while solid dots are prediction from bias parameters estimated from fourier space correlations in previous sections from those catalogs . ] in the lagrangian picture , properties of the ( proto-)halos profiles are closely related to the bias parameters . from the very definition of halo mass and halo bias in eq .( [ eq : b1real ] ) , it is easy to see that at the scale of the halo , and therefore [ eq : consrel ] b_10+b_11 = where is the mean dark matter density inside halos of a given mass and is the variance on the scale of halos . if halos were perfect spheres , then . measurements of this quantity have been presented in .the above equation says something very non - trivial about bias coefficients : one can measure the value of bias parameters at large scales and learn something about the density of halos at a much smaller scale , the halo scale .it is also important to note that eq .( [ eq : consrel ] ) remains valid even in the presence of assembly bias , as it is a direct consequence of the definition of mass .we have seen in figure [ fig - b11_mass ] and figure [ fig - b1 m ] that different estimates of linear bias agree with each other and with a theoretical model .we thus expect the consistency relation to hold in the data and to be well described by the model .this is shown in figure [ fig - consistency ] , where one can see how the eq .( [ eq : consrel ] ) is verified at the level of the data , blue and red points with error bars , and how the theory actually does a good job in predicting it . 
in real spaceit is much harder to check the consistency relation for linear bias , see , while a similar analysis in fourier space can be found in ( in preparation ) .since the term in linear bias , , is highly degenerate with the leading order counter term , in the effective field theory ( eft ) of lagrangian perturbation theory , one could argue that it should be possible to just fit for in a eft approach , and then use the consistency relation to put a prior on the value of scale dependent bias .similar relation should hold between higher order bias parameters and higher moments of the density field at the halo scale . in the study of halo abundances and clustering, universality means that the halo mass function and halo bias are unambiguously determined by a single function of cosmology and redshift , the variance of the linear density field , .the existence of universal relations is important , since it means we have a proxy for how the bias of a population of halos evolves , for instance , with redshift .violations of the universality of the mass function in n - body simulations with redshift have been reported in , while the eulerian linear bias remains a universal function the purpose of this section is to test universality of linear and non linear local bias , as well as the novel , with redshift .the best way to test universality is to plot quantities estimated at different redshift as function of the peak height . andgreen points are .fourier space points are for all three boxes while pbs measurements are only for box .the analytic prediction is assumed to be redshift independent . ]we carried out the analysis of the previous sections at , and a test of universality of bias is reported in figure [ fig - universal ] .the values of and , in blue at and in green at in the top panel , align very nicely as a function of over all the mass range probed by our simulations . for tidal bias, the pbs measurements do show that the bias is approximately universal , whereas the fourier space measurements show some disagreement for large mass halos . while this can possibly be due to poor fits , as discussed in section [ sec : bf ] , on the other hand , in the model , the amplitude of the tidal bias is mainly set by the parameter appearing in eq .( [ eq : esptau ] ) , which a priori could have arbitrary redshift dependence .the continuous line is the same theory , for which we assume that does not depend on redshift . a detailed study of the importance of the shear for halo formation as a function of redshift goes beyond the scope of this work , and we intend to come back to this issue in the future .another way to restrict the dimensionality of the bias parameters space is to find relations between linear bias and higher order bias parameters .while such relations can be determined at each redshift , if the bias parameters are universal , then one could just fit for and . 
in an eulerian frameworkthis has been discussed a lot in the literature , see recent work by .however it is important to realize that if such relations exist , they do because they hold in lagrangian space .we must also stress another word of caution : the nature of these relations is empirical , not fundamental , and it is , for instance , violated by assembly bias effects .figure [ fig - relation ] shows as a function of , upper panel , and as a function of , lower panel .since the pbs estimates of tidal bias are universal , we show the relations only for these data .the measured values smoothly align , indicating that the relation could be fitted by some simple functional form .as expected from results in the previous sections , the theory is able to predict very well the relation between non linear and linear bias , but it is not as accurate for the tidal bias .we want now to go back to the relation between lagrangian and eulerian bias parameters as discussed in the introduction .if was zero , gravitational evolution makes a unique prediction for the eulerian tidal bias , , b_s^2^e = -b_1 = -(b_1^e-1 ) [ eq - shear_coev ] where we have used the well known relation .as we have measured a non - zero shear bias in lagrangian space , the above equation needs to be modified , figure [ fig - bs2eul ] shows the relation between linear and tidal bias in eulerian space , using the measurement from the pbs method .the assumption of no lagrangian shear bias , black line , is far from the measurements at both low and high value of .thus , in the presence of this lagrangian shear bias , using the relation of eq .( [ eq - shear_coev ] ) such as done in analysis of can lead to possible systematic errors in the analysis . since the analytic model does not yield an accurate fit to the data , for practical purposes we provide a numerical fit to the pbs measurements : ( upper panel ) and ( lower panel ) shown only for pbs estimator since tidal bias is most universal for these .the solid lines are using bias values predicted by theory .these relations , co - evolved to eulerian space help to reduce dimensionality of bias parameter space . ] ) ; markers ) and without ( eq .( [ eq - shear_coev ] ) ; solid black line ) the presence of lagrangian shear bias as measured by pbs estimator . the numerical best fit ( eq . ( [ eq : bs2_fit ] ) ) to the measurement points is in dashed blue . ]following , for any two fields and , where the field is supposed to model field , we measure the error in modeling in terms of the stochasticity , , defined as for our purpose here , we are interested in examining how well we reconstruct the halo field with the estimated bias parameters .thus is the halo power spectrum ( ) , while is the auto - power spectra for the biased field ( ) .the explicit expression in terms of bias parameters is if the bias - field reconstructs the halo field perfectly , all the three power spectra in the definition of stochasticity should be the same resulting in .however since the number of halos , , is finite , the halo auto spectrum contains poisson shot noise , ( where is the box size ) , which is not captured by the continuous bias field . in figure [ fig - stochasticity ] , we show measurement of stochasticity in eq .( [ eq : stoch ] ) for three different mass bins , one from each of the three simulation boxes . for each mass, we show how the stochasticity changes as we include higher order bias parameters over simple linear bias in eq .( [ eq : power_stoch ] ) . 
to make this figure , we again averaged over the stochasticity of 5 simulations for a given size and for each simulation , and we used the same mean value , across the realisations , of the bias coefficients , which in this case was the constant ( large scale ) fourier space bias .overall , on large scales , the stochasticity is close to its poisson shot noise value for intermediate and low mass bin , while its somewhat lower for heavier halos ( explains this as exclusion effects ) .however , especially for the low mass halos , simply using the linear bias for halo field does leave significant scale dependence in the stochasticity which is improved upon by including higher order bias parameters ( upper panel ) .this is useful since this residual can still be modeled with a scalar , if not necessarily poisson , shot noise . in addition , including and especially does assist in reducing stochasticity further over linear bias models ( lower panel ) . ) .the solid lines , dashed and dotted lines respectively correspond to measurements on successively including , and in eq .( [ eq : power_stoch ] ) . for comparison , the black lines are shot noise for that mass bin .the lower panel shows the ratio of dashed and dotted lines of upper panel with the solid lines to emphasize that more complex bias models reduce stochasticity . ]understanding the relation between the galaxy or halo field and the underlying dark matter distribution , is crucial in order to extract cosmological information from the lss . in this paperwe have focused our attention to halo bias defined in lagrangian space .we employed three different estimators of bias parameters- fourier space correlations , real space correlations and pbs estimator .the three approaches are quite different and thus sensitive to different systematic effects , nevertheless their agreement in terms of best - fit bias parameters as a function of halos mass is quite remarkable ( figure [ fig - sep_univ_b1 m ] ) .we have shown a convincing evidence for shear bias in lagrangian space with all three methods ( figures [ fig : bias - kgrid ] , [ fig - b1 m ] and [ fig - sep_univ_b1 m ] ) . indeed , including the tidal bias was crucial to obtain the consensus of quadratic local bias of all the three estimators ( figure [ fig - b2nos2 ] ) .we have shown that the model presented in is able to predict well and , but it misses a good description of tidal bias .the fourier space estimator have also enabled us to present the scale dependence of quadratic and shear bias ( figure [ fig : bias - kgrid ] ) , and to show that it is similar to the one of linear bias . for linear bias ,the fourier space method allowed us to check that the scale dependent piece ( figure [ fig - b11_mass ] ) is very well predicted by theory .this has implications for the problem of what is the right window to use when defining halos in lagrangian space .we also successfully demonstrated , in n - body simulations , the validity of a consistency relation for linear bias coefficients , [ eq : consrel ] and figure [ fig - consistency ] , that are fundamental to the bias parameters .this opens the door to reduce the number of independent bias parameters .we have then used the pbs argument to generalize previous results on bias with respect to density , to the case of bias with respect to the shear field .we were therefore able to measure non - local bias as the response of the halo number density to the presence of long wavelength tidal field ( figure [ fig - sep_univ_boxlets ] ) . 
in the appendix[ sec : appendix - a ] , we also show that this method can be extended to third order bias parameter ( figure [ fig - sep_univ_b3 ] ) .however care needs to be taken while choosing the size of boxlets due to the dichotomy between better constraining power for small boxlets but at the possible cost of residual scale dependence and non - poissonian shot noise .relations among linear bias and quadratic bias parameters have also been discussed , with the general finding that those relations exist and can be easily fitted for ( figure [ fig - relation ] ) . alongthe way we also showed that all these bias parameters are to a very good approximation a universal function of redshift ( figure [ fig - universal ] ) .we note that while the evidence of non - zero tidal bias seems convincing and there is general agreement between different estimators , fourier space shear estimates make larger excursions above zero than real space or pbs estimates ( figure [ fig - sep_univ_b1 m ] ) , and also seem to exhibit some violations of universality for high mass halos .while there is no a priori reason to expect density and shear bias coefficients to exhibit the same level of universality , we discussed in section [ sec : bf ] whereas the possible reason could be poor fits due to lack of correct halo window as well as complicated scale dependence of tidal bias , coupled with degeneracy of and on very large scales .this is an important point which we wish to investigate in future work , with better measurements .as we have shown , the analytic model in is not capable to reproduce our measurements of the tidal bias , figure [ fig - bs2eul ] , therefore we provide a numerical fit to the relation between the eulerian linear bias , , and the eulerian shear bias , , in eq .( [ eq : bs2_fit ] ) .finally , we discuss the stochasticity of the halo field and find that including more complex bias models over simple linear bias does seem to reduce stochasticity , especially its scale dependence ( figure [ fig - stochasticity ] ) .this makes these models worth studying since a scale independent stochasticity can be parameterized with a constant shot noise which can be marginalized over in the analysis .ec would like to thank ravi sheth , aseem paranjape and martin white for useful discussions .this work is supported by nasa grant nnx15al17 g .in principle , we should be able to extend the above estimates to higher order bias parameters . in this appendix ,we provide details on our attempts to do so . for the real space estimates ,this involves calculating the mean of third hermite polynomial at the position of the halos , which is straightforward .our attempts to extend the fourier space estimate return very noisy estimates for on large scales .this is due to as well as being degenerate on large scales .+ in the pbs case however , we can simply fit for following equation : the results for various estimates are show in figure [ fig - sep_univ_b3 ] , where we are showing the real space results only from , to compare them with their separate universe counterparts .while the numbers generally agree , the error bars are quite big in the pbs case .
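A minimal sketch of the separate-universe fit described in this appendix is given below. It assumes the common convention in which the halo overdensity responds to a long-wavelength mode as delta_h = b1*delta + (b2/2)*delta^2 + (b3/6)*delta^3 + ..., so that a cubic least-squares fit of the measured response returns b1, b2 and b3; the overdensities and responses below are synthetic placeholders, not the boxlet measurements of the paper.

```python
import numpy as np

# Synthetic separate-universe data: background linear overdensity of each
# "boxlet" and the fractional response of the halo number density.
delta_b = np.linspace(-0.4, 0.4, 9)
b1_true, b2_true, b3_true = 1.5, -0.8, 2.0
rng = np.random.default_rng(1)
response = (b1_true * delta_b + 0.5 * b2_true * delta_b**2
            + b3_true * delta_b**3 / 6.0
            + 0.005 * rng.standard_normal(delta_b.size))

# Least-squares fit of delta_h = b1*d + (b2/2)*d^2 + (b3/6)*d^3.
design = np.vstack([delta_b, delta_b**2 / 2.0, delta_b**3 / 6.0]).T
(b1_fit, b2_fit, b3_fit), *_ = np.linalg.lstsq(design, response, rcond=None)
print(f"b1 = {b1_fit:.3f}, b2 = {b2_fit:.3f}, b3 = {b3_fit:.3f}")
```

The fit degrades quickly when the available overdensities are too small for the cubic term to be constrained, which mirrors the large error bars noted above.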
|
We present several methods to accurately estimate Lagrangian bias parameters, in particular the quadratic terms, both the local and the non-local ones, and show the first clear evidence for the latter. Using Fourier-space correlations, we also show for the first time the scale dependence of the quadratic and non-local bias coefficients. We fit for the scale dependence of linear bias and demonstrate the validity of a consistency relation between linear bias parameters. Furthermore we employ real-space estimators, using both cross-correlations and the peak-background split argument. This is the first time the latter is used to measure anisotropic bias coefficients. We find good agreement among the methods, and also good agreement for local bias with ESP theory predictions. Possible relations among the different bias parameters are exploited. Finally, we also show how including higher-order bias reduces the magnitude and scale dependence of stochasticity of the halo field. [firstpage] cosmology: theory, large-scale structure of Universe -- methods: analytical, numerical
|
a central problem in quantum information theory is that of interconversion between resources , for example , how to use communication channels to produce entanglement between distant parties , or how to use such entanglement to carry out nonlocal operations . in particular , the use of prior entanglement assisted by classical communication to carry out nonlocal unitaries has been the subject of various studies ; for a more extensive list see ref . . in this paperwe add _ time _ as a resource to be considered along with entanglement cost when constructing protocols for bipartite nonlocal unitaries ( nonlocal gates ) . the ability to implement nonlocal unitaries rapidly may be particularly relevant in the context of distributed quantum computation , where less time consumption means less decoherence ; or in position - based quantum cryptography , where it may allow certain position verification schemes to be broken .the usual protocols for bipartite unitaries , such as those in ref . , have the following general structure : alice carries out local operations and measurements , and sends the measurement results through a classical communication channel to bob , who then carries out corresponding operations and measurements , and sends the measurement results back to alice using classical communication . finally , alice performs additional local operations that may depend on the previous measurement results of both parties .when the distance between the two parties is large the total time required for the protocol will be dominated by the two rounds of communication , thus double the minimum time for a signal to pass from one to the other .however , there exist nonlocal unitaries which can be implemented by a protocol in which alice and bob carry out local operations and measurements at the same time , and then simultaneously send the results to the other party , and finally perform local operations depending on the received messages to complete the protocol .this reduces the total communication time by a factor of two .we are interested in identifying which bipartite unitaries can be carried out using such a _ fast _ protocol , and also in finding the associated entanglement cost .the crucial distinction between a fast and slow protocol of the form considered here is that for the latter , bob needs to wait for a message from alice before choosing the basis in which to carry out his measurement ( `` choosing the measurement basis '' is equivalent to choosing what local gates to do before his measurement ) , whereas in the former this basis can be fixed in advance .we have identified two classes of nonlocal unitary that lend themselves to a fast protocol : _ controlled _ unitaries of the form shown in below , and _ group _ unitaries of the form shown in .the slow versions of both were considered in our previous work , where we showed that controlled unitary protocols , while useful for understanding how such protocols work , can always be replaced by group unitary protocols that make use of the same resources .our fast protocols represent special cases ( i.e. , special groups and parameter choices ) of the slow protocols discussed previously , and once again the controlled kind can be replaced by the group kind . by increasing the amount of entanglement expended , additional unitaries can be carried out using these fast protocols . 
in some casesthis allows an arbitrarily close approximation to a unitary which can not be carried out exactly by these methods .a still more general class of slow protocols , corresponding to eq . ( 18 ) in , also has a fast version , but we have yet to find examples of unitaries it can carry out that can not be implemented by our other fast protocols .the protocols we consider are deterministic they succeed with probability one and use a definite amount of entanglement determined in advance .such deterministic fast protocols have previously been studied by groisman and reznik for a controlled - not ( cnot ) gate on two qubits , and by dang and fan its counterpart on two qudits .in addition , buhrman __ and beigi and knig have published approximate schemes for what they call `` instantaneous quantum computation , '' equivalent to a fast bipartite unitary in our language .these protocols , unlike ours , can be used , to approximately carry out any bipartite unitary .the one in , which is based on the nonlocal measurement protocol in , has a probability of success less than 1 , so it is not deterministic , but this probability can be made arbitrarily close to 1 by using sufficient entanglement .the protocol in uses a fixed amount of entanglement to implement with probability 1 a bipartite quantum operation ( completely positive trace preserving map ) which is close to the desired unitary , and it can be made arbitrarily close by using sufficient entanglement .the term `` instantaneous '' is not unrelated to the idea of an `` instantaneous measurement '' as discussed in , where the terminology seems somewhat misleading in that completing their protocols actually requires a finite communication time , e.g. , the parties must send the results to headquarters ( or to each other ) in order to complete the identification of the measured state .in the same way , `` instantaneous quantum computation '' actually requires a finite communication time , the same as in our fast protocol .the paper is organized as follows . in sec .[ sct2 ] we consider controlled unitaries of the form eq . , where the unitaries being controlled form an abelian group .( appendix [ sct_extend ] contains an argument , which may be of more general interest , that allows projectors in this formula to be replaced by projectors of rank 1 . 
) in addition we show how subsets of the collection of unitaries representing an abelian group can be employed to generate fast unitaries otherwise not accessible by our protocol .section [ sct3 ] is devoted to group unitaries of the form , including a significant number of examples .we also present an argument showing that the controlled - abelian - group unitaries of sec .[ sct2 ] can be transformed to group unitaries of the form .the concluding sec .[ sct4 ] contains a brief summary along with an indication of some open problems .in this section we construct a fast protocol for any controlled - abelian - group unitary of the form where the are orthogonal projectors , possibly of rank greater than 1 , on a hilbert space of dimension .the are unitary operators on a hilbert space of dimension , that form a representation of an abelian group of order .as shown in appendix [ sbct_control_rank ] , it suffices to consider projectors of rank 1 .that is , a scheme for implementing where denotes a ket belonging to a standard ( or computational ) orthonormal basis , is easily extended to one that carries out the more general .in addition we shall consider cases , sec .[ sbct2.3 ] , in which the form a _ subset _ of an abelian group , with the sum in restricted to a subset of the integers from to .the simplest abelian group is a cyclic group , so we start with the case where the in are a representation of such a group .( it suffices to consider ordinary representations , since a projective representation of a cyclic group is equivalent to an ordinary representation ; see sec .12.2.4 of . )it will be convenient to let be the identity and .the slow protocol for this case , which works for any collection of unitaries on , is shown in fig .[ fgr1 ] , where is a fully entangled state on the ancillary systems and associated with and , respectively , and the gates , , and ( the fourier matrix ) are defined by here denotes , so . the symbols resembling `` d '' in fig .[ fgr1 ] represent measurements in the standard basis .the slow protocol proceeds by alice carrying out the operations indicated on the left side of fig .[ fgr1 ] , and then sending the outcome of the measurement on to bob over a classical channel .he uses it to carry out a gate , followed by the other operations in the center of the figure .his final measurement outcome is sent to alice over another classical channel , who uses it to perform an additional gate that completes the protocol .a faster protocol can be constructed if the two rounds of classical communication can be carried out simultaneously instead of consecutively .this is possible if bob can carry out various operations , including a measurement , in advance of receiving the value of from alice , as in fig .the classical signals can then be sent simultaneously , and the protocol is completed when both alice and bob make final corrections that depend on the signals they receive . in order to change the slow protocol into a fast protocol , one must , in effect , push the gate in fig .[ fgr1 ] through the two gates that follow it in order to arrive at the situation in fig .the two steps are as follows : 1 .commute with the controlled- gate : the itself passes through the control node unchanged , but leaves the gate controlled by state instead of , so instead of acts on .this can be compensated at the end of the protocol by a local unitary correction of , with ( see the discussion below ) .2 . 
commute with : we have that , and since the are diagonal unitaries , they do not affect the measurement result in the standard basis , and thus is absent from the fast protocol in fig .( due to the removal of there is an unimportant global phase , dependent on and , that is introduced in the implementation of . )the final correction is possible because the form a cyclic group : the net operation on is , where follows from fig . [ fgr1 ] and the definition of the gate .this is an extra restriction over the slow protocol , where the can be arbitrary unitaries .* example 1 .* case ( i ) . is the -dimensional cnot gate , with , and form a cyclic group .this is the class of unitaries implementable by dang and fan s protocol shown in fig . 2 of .case ( ii ) . is a controlled unitary of the form with , , , .( the are not shift operators , so this is more general than the protocol in . )the fast protocol for controlled - cyclic - group unitaries is easily generalized to the case where the in form an ordinary representation of an abelian group of order .again it suffices ( appendix [ sbct_control_rank ] ) to consider the case where the are rank 1 projectors .any finite abelian group is the direct sum ( direct product ) of cycles , and it is convenient to adopt a label for elements of that reflects this structure , by thinking of it as an -tuple of integers , with , where is the length of the -th cycle . in this way group multiplication , with the identity , is the same as vector addition , modulo for the -th component .similarly , the labels on the systems and in fig. [ fgr2 ] , and the measurement outcomes and , can also written as -tuples : , etc . in the followingwe will make use of the inner product of two -tuples such as .the , , and gates are now appropriate tensor products of the cyclic group gates in , for example , understood as and , using the obvious -tuple definition of .the gate in fig .[ fgr2 ] is the tensor product of the gates for the different cycles , here is why the protocol works .assume an initial product state on .then the operator implemented on is . the gate on before the measurement gives rise to a phase , which is partially compensated by the phase in the gate on , and since , we are left with an overall phase of .since this phase is independent of , a superposition of initial product states of this form for different will also be transformed by , up to an overall phase that is of no concern .note that the themselves may , but need not , be tensor products , as illustrated in the following example .* example 2 .* case ( i ) : .the defined by are tensor products , and form a group . if one regards as well as as a tensor product of two qubits , the defined by is itself a tensor product , with each factor a controlled - cyclic group unitary with one qubit on the and the other on the side .thus can be implemented by an overall protocol which is just two smaller protocols running in parallel with each other , one for and the other for .case ( ii ) : . modify the in by keeping only the first three rows and columns , so they are no longer tensor products , though they still form a group .consequently , the protocol that carries out can no longer be viewed as two smaller protocols running in parallel .assume that the in form an ordinary representation of an abelian group of order , but the sum over is restricted to some subset of the set of -tuples defined in the last subsection .it will suffice once again to consider the case of rank - one projectors , i.e. 
, .the , , and in fig .[ fgr2 ] run over the same range as before , but is restricted to the set . therefore the dimension of is less than the schmidt rank of , which is the order of the group .it is convenient to use the elements of to label the kets that form the basis of in [ corresponding to the projectors in ] .the operator is now given by , but with restricted to .the reason that the fast protocol in fig .[ fgr2 ] will work in this case is the same as given above in sec .[ sbct2.2 ] ; the fact that is restricted to a subset makes no difference . the significance of this extension of the result in sec .[ sbct2.2 ] is that it enlarges the class of fast unitaries that can be carried out using a protocol of this sort , though perhaps with a significant increase in the entanglement cost .this is illustrated by the following example , which shows that in certain cases one can approximate a continuous family of unitaries using sufficient entanglement ( a large enough ) .* example 3 .* consider a unitary on two qubits and of the form where for some integer . by relabeling on as , we see that is of the form with in the sum restricted to the two values in .thus can be carried out in a fast way at an entanglement cost of ebits . in general , any two - qubit controlled unitary is of the form ( up to local unitaries on and , before and after ) with for some real number .since can be approximated by multiplied by a rational number with large enough , any two - qubit controlled unitary can be approximately implemented ( up to local unitaries ) using this fast protocol by setting equal to for suitable and ; the entanglement cost is again ebits .a further generalization to arbitrary and is possible for any unitary which is diagonal in a product of bases , a basis on and another on .when such a diagonal is written in the form , the unitaries are diagonal .each diagonal element of is approximately an integer root of unity , hence each is approximately an integer root of the identity operator , and the whole set can be approximated by a subset of an ordinary representation of an abelian group of sufficient size .thus any bipartite unitary diagonal in a product of bases can be approximately implemented by a fast protocol .in this section we consider a fast protocol for `` double - group '' unitaries of the form where is a group of order , and are unitaries on hilbert spaces and of dimension and , respectively , and the operators form a projective representation of , in the sense that the collection of complex numbers of unit magnitude is known as the factor system . similarly , and each form a projective unitary representation of with individual factor systems which may differ from one another , whose product for a given and is . 
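Projective representations and their factor systems, as used here, can be illustrated with the single-qubit Pauli operators: I, X, Y and Z close under multiplication only up to phases, and so furnish a projective (not ordinary) representation of the group Z2 x Z2. The sketch below extracts the factor system numerically; it illustrates the definition only, and is not a representation appearing in any particular protocol of this paper.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

# Label the elements of Z2 x Z2 by pairs (a, b); U[(a, b)] is the matching Pauli.
U = {(0, 0): I2, (1, 0): X, (0, 1): Z, (1, 1): Y}

def add(g, h):
    """Group multiplication in Z2 x Z2: component-wise addition mod 2."""
    return ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

for g in U:
    for h in U:
        prod = U[g] @ U[h]
        target = U[add(g, h)]
        mu = np.trace(target.conj().T @ prod) / 2   # factor: prod = mu * target
        assert np.allclose(prod, mu * target)
        assert np.isclose(abs(mu), 1.0)             # factor system has unit modulus
        print(f"{g} * {h}: mu = {np.round(mu, 6)}")
```

In this example mu(e, g) = mu(g, e) = 1 and every mu is a fourth root of unity, consistent with the standardization and normalization conventions discussed next.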
from sec .12.2.1 of , for our purposes we can assume the factor system is _ standard _ , that is , where is the identity element in .the slow protocol in sec .iv d of for implementing unitaries of the type is shown in fig .the two parties share a maximally entangled state on the ancillary systems , , each of dimension , the order of .alice and bob perform controlled- and controlled- gates on and , respectively .alice follows this with a gate on , where in the standard basis is a complex hadamard matrix divided by , that is , a unitary with all elements of the same magnitude .then she does a measurement on in the standard basis and sends the result to bob .bob carries out a gate on , where each is a diagonal unitary matrix whose diagonal elements are the complex conjugates of those in the -th row of .thus and generalize the fourier gate and the gate in our previous paper .this more general choice does not extend the set of unitaries the slow protocol can implement , since the phases in and cancel each other , but it allows the fast protocol to implement a larger set of unitaries than would otherwise be possible .next , bob applies a unitary gate to , where is defined in and the coefficients are those in .( the coefficients are not uniquely defined by if the are linearly dependent , but there is always at least one choice for which is unitary , theorem 7 of . ) then he measures in the standard basis and sends the result to alice . to completethe protocol alice and bob apply unitary corrections and to their respective systems .the slow protocol in fig . [ fgr3 ] can be replaced with the fast protocol in fig .[ fgr4 ] provided the gate in the former , which in effect determines the basis for bob s measurement , can be eliminated at the cost of re - interpreting the outcome of his measurement .a sufficient condition for this is that for every there exists a _complex permutation _ matrix ( exactly one nonzero element of magnitude 1 in each row and in each column ) such that the effect of is simply to permute the measurement outcomes , which can then be re - interpreted once is known .the phases in would only introduce a global phase for the implemented ( dependent on and ) and are therefore of no concern . a useful procedure for generating and matrices for which holds employs the _ character table _ of an abelian group of order .this is an matrix , all elements of which are of magnitude 1 , with columns labeled by elements of and rows by its distinct irreducible representations , all of which are one - dimensional . because the representations are one - dimensional , each row is itself a representation ( i.e. 
, the character is the representation `` matrix '' ) .the element - wise product of two columns is another column , since each row is a representation of the group ; likewise the element wise product of two rows is another row , since the ( tensor ) product of two representations is a representation .thus the transpose of a character table is again a character table .the complex conjugate of any column ( or row ) is another column ( row ) of corresponding to the inverse of the group element .the actual order of the columns or the rows is arbitrary , though it is often convenient to assume that the first row and the first column contain only s .since is a unitary matrix , a character table is a special case of a complex hadamard matrix : one whose elements are all of magnitude 1 , and whose rows ( and columns ) are mutually orthogonal .if is a cyclic group , then up to permutations of rows or columns , with the fourier matrix ; similarly if is the direct product ( sum ) of cycles , is the tensor product of the corresponding fourier matrices .[ thm1 ] ( a ) let be the character table of an abelian group of order , and define where and are complex permutation matrices , and a diagonal matrix with diagonal elements of magnitude 1 , thus a diagonal unitary .let be the diagonal matrix with diagonal elements equal to the complex conjugates of those forming row of . then there existcomplex permutation matrices such that is satisfied for every .\(b ) if the rows of an matrix are linearly independent and form a ( necessarily abelian ) group up to phases under element - wise multiplication , then is of the form given in .the proof is in appendix [ appthm1 ] .note that the group need not be isomorphic to the group represented by although they are of the same order see example 8 in sec .[ sbct3.3 ] with .the matrix defined in has the property that the rows under element - wise products form the abelian group up to a possible phase factor determined by .one consequence of is where is a complex permutation matrix and is a diagonal unitary .when the matrix is of the form , the fact that all its elements are of the same magnitude , as implied by , means that the same is true of the . a partial answer to the question of whether implies that and have the form given in is provided by the following theorem , whose proof is also in appendix [ appthm1 ] .[ thm2 ] for each let be the diagonal matrix whose diagonal elements are the complex conjugates of those forming row of a complex hadamard matrix . if there exists a unitary matrix without a zero element in its first row , and complex permutation matrices such that eq. holds for every , then the matrices form an abelian group up to phases , and and hold .it is worth noting that with and of the form there is a symmetrical version of the fast protocol in fig .[ fgr4 ] in which the gate on the side is replaced with .this requires that the entangled resource be changed to the reason this works is that the changes produced by in in the measurement outcome can always be compensated by altering the function that determines the final corrections .the following is a simple example. * example 4 . * the two - qubit unitary , where and are pauli gates on and , is equivalent under local unitaries to a cnot gate .it is of the form with the cyclic group of order 2 , and can be implemented by the fast protocol in fig .[ fgr4 ] using the matrices where the rows of multiplied by form a group . 
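The character-table facts used here are easy to check numerically. The snippet below builds the table of Z2 x Z3 (order 6) as a Kronecker product of Fourier-type tables and verifies that it is a complex Hadamard matrix, with all entries of unit magnitude and mutually orthogonal rows, whose rows close under element-wise multiplication. The choice of group is arbitrary and serves only as an illustration.

```python
import numpy as np

def character_table_cyclic(n):
    """Character table of Z_n: entry (i, j) = exp(2*pi*1j*i*j/n),
    i.e. sqrt(n) times the Fourier matrix."""
    omega = np.exp(2j * np.pi / n)
    return omega ** np.outer(np.arange(n), np.arange(n))

def character_table(orders):
    """Character table of Z_{r1} x Z_{r2} x ... as a Kronecker product."""
    T = np.ones((1, 1), dtype=complex)
    for r in orders:
        T = np.kron(T, character_table_cyclic(r))
    return T

T = character_table([2, 3])            # an abelian group of order 6
n = T.shape[0]

# Complex Hadamard: entries of magnitude 1, rows mutually orthogonal.
assert np.allclose(np.abs(T), 1.0)
assert np.allclose(T @ T.conj().T, n * np.eye(n))

# Rows close under element-wise multiplication (here without any extra phases).
for a in T:
    for b in T:
        assert min(np.max(np.abs(a * b - r)) for r in T) < 1e-9
print("character table of order", n, "passes the checks")
```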
because and change the measurement basis , alice and bob effectively perform measurements of and , respectively ; this is the same thing ( with the parties interchanged ) as the fast protocol in .an equivalent symmetrized protocol in which is replaced by employs a resource state , and both parties perform a measurement . given a particular bipartite unitary , can it be implemented using the fast double - group protocol ?any such can always be written in the form using a sufficiently large group ( see sec .v a of ) , and typically there are many different ways of constructing such an expansion . however , for our fast protocol to work , assuming that and are of the form given in , one must find a _particular _ expansion , a particular group and unitaries and along with expansion coefficients , that satisfy appropriate conditions . in particular , ( i ) the must all be of the same magnitude , as noted following . but two additional conditions must be checked : ( ii ) the matrix defined in must be unitary , and ( iii ) must be related to the character table of some group as in . condition ( iii ) can be checked in the following way .multiply each row and then each column of by a suitable phase such that the resulting matrix , where and are diagonal unitaries , has s in the first row and first column . then check whether its rows ( alternatively , its columns ) form a group under component - wise multiplication . if this is so , then is a character table of . equating it to in , letting and , and choosing any complex permutation matrix , we arrive at a which along with this satisfies the conditions of theorem [ thm1 ] .thus , provided ( i ) , ( ii ) , and ( iii ) are satisfied there is in fact a fast unitary .the scheme just described provides a useful approach for constructing examples .start with a group and ( projective ) unitary representations and on and , and look for a set of coefficients of equal magnitude such that given by is unitary , and satisfies condition ( iii ) in the preceding paragraph .the search is aided by noting that any factor system , see sec .12.2.2 of , is equivalent to a _normalized _ factor system in which each is an -th root of 1 .a consequence , proved in appendix [ appthm1 ] , is the following : [ thm3 ] let in be a projective representation of a group of order with a normalized ( see above ) factor system , . assume the matrix defined in is of the form given in .then the coefficients in can be written in the form , \label{eqn20}\ ] ] where is an integer that depends upon , and is a phase factor independent of .the theorem justifies the following exhaustive , albeit tedious , search procedure for possible sets of coefficients , assuming they all have the same magnitude , once a group , a projective representation of , and a normalized factor system have been chosen .consider all possible sets of coefficients of the form , setting and for the identity of , as the global phase of is unimportant .for each set , check that the matrix given by is unitary .then see if the rows of the corresponding , constructed as described above , form a group under component - wise multiplication. using this procedure we have been able to show that if , so the group is , the only two - qubit unitaries that can be implemented by our fast protocol are either trivial products of unitaries or else equivalent under local unitaries to a cnot gate .( note that there are additional fast two - qubit unitaries that can be carried out using a bigger group , thus larger and more entanglement . 
) both conditions ( ii ) and ( iii ) are nontrivial requirements .not every case in which the are of equal magnitude will lead to a unitary matrix .for example , if for every and is an ordinary representation of , so , then obviously does not define a unitary matrix . and even if is unitary , condition ( iii ) may not hold .for example , the unitary in eq .( 58 ) of with , assuming is not an integer multiple of , results in a matrix whose rows do not form a group . for every ordinary representation of an abelian group is a corresponding fast protocol , as the group is a direct product ( sum ) of cycles , and one can apply the construction in example 6 below . for general projective representations or non - abelian groupsthe matter remains open .the examples which follow represent just a few of the unitaries that can be carried out by our double - unitary fast protocol .examples 5 and 6 make relatively efficient use of entanglement resources , in that the order of the group , which is the rank of the fully - entangled resource state , is equal to the schmidt rank ( or schmidt number ) of minimum number of summands required to represent it as a sum of products of operators on and .examples 7 and 8 , the latter involving a non - abelian group , illustrate how the class of fast unitaries can be significantly expanded by using more entanglement .note that any two - qubit unitary is equivalent under local unitaries to one of the form ( see ) , , \label{eqn21}\ ] ] where , , and are real numbers that can be calculated from the matrix of ( see , e.g. , the appendix of for the method of calculation ) . for the two - qubit examples below we give the values of , , and .* example 5 . * in the two - qubit unitary with the identity , and the pauli operators and , and the group , the method of search indicated in sec .[ sbct3.2 ] yields the following possibilities for .\(a ) the case is equivalent to the swap gate defined in , in which the two qubits are interchanged ; in .an alternative fast protocol for this gate consists of teleportation done simultaneously in both directions .\(b ) the case implements the gate as defined in , equivalent under local unitaries to the double - cnot ( dcnot ) gate defined in ; .\(c ) the case , where ; . in each casethe entanglement resource of two ebits required to carry out the protocol is the minimum possible amount , since the unitary is capable of creating two ebits of entanglement .* example 6 .* when the , with an integer between 0 and , form an ordinary representation of the cyclic group of order , the coefficients will provide a fast implementation of . in particular with this becomes the method of proof of theorem [ thm4 ] can be used to show the equivalence of with , which in turn is locally equivalent to the -dimensional cnot gate of example 1 .* example 7 .* the unitary of schmidt rank 4 on two qubits , where and the operators are the same as in example 5 , employs an unfaithful ( each operator , e.g. , appears twice in the sum ) representation of the abelian group , with the eight coefficients being the corresponding .it can be verified that this set satisfies the requirements for the fast protocol . as this groupis of order 8 the protocol requires a resource of 3 ebits , and we have not found any fast protocol which can implement this unitary using less entanglement .it corresponds to in ( the b gate of ) .* example 8 . 
* for any given integer , let ( ) be the matrices they form an irreducible ordinary representation of the dihedral group of order , where the first kind in correspond to rotations and the second kind to reflections .let , & \text { even,}\\ ( \epsilon(f)/\sqrt{2n})\exp[\,i\pi m f(f+1)/n\ , ] , & \text { odd , } \end{cases}\ ] ] where is any positive integer coprime with , and is 1 for and for .it can be verified that these sets satisfy the requirements for the fast protocol .the two - qubit unitary constructed in this way is locally equivalent to with , , and , which necessarily lies in the interval ] .the following theorem , proved in appendix [ appcnot ] , shows that the family of unitaries for which fast protocols were constructed in sec .[ sct2 ] can also be realized using our fast protocol for double - group unitaries .the converse is not true , since , for instance , the 2-qubit swap gate in example 5 can not be realized as a controlled - abelian - group unitary , as it is of schmidt rank 4 , while a controlled unitary on 2 qubits can not have schmidt rank greater than 2 .[ thm4 ] let be a controlled - abelian - group unitary of the form , where the are a subset of an ordinary representation of an abelian group of order .then is equivalent under local unitaries to where the are complex coefficients , the are linear combinations of s , and , , are all ordinary representations of the group .in addition the can be chosen to satisfy the requirements for the fast protocol as given in sec .[ sbct3.2 ] .hence all controlled - abelian - group unitaries of the form discussed in sec .[ sct2 ] can be implemented by our fast double - group unitary protocol , without using more entanglement .any nonlocal unitary can be carried out deterministically by means of local operations and classical communication provided an appropriate entangled resource is available .however , teleportation and various more efficient schemes typically require two rounds of classical communication , and hence the minimum total amount of time required to complete the protocol is twice the time required for one - way communication . in certain cases there are fast protocols in which the minimum total time is only half as long , and in this paper we have discussed two protocols for fast bipartite unitaries . the first is shown in fig. [ fgr2 ] : it carries out a controlled - abelian - group unitary of the form , including cases in which only a subset of the collection that forms an abelian group appear in the sum . the second , shown in fig .[ fgr4 ] , will carry out a double - group unitary of the form , provided the coefficients satisfy appropriate conditions .we have shown , sec .[ sbct3.4 ] , that unitaries which can be carried out by the first protocol can also be carried out by the second , though the converse is not true ( e.g. , example 5 ) .we have constructed some examples for both protocols .note , however , that we have not been able to answer the fundamental question as to precisely _ which _ unitaries can be carried out _ exactly _ using a fast protocol and a fixed entanglement resource specified in advance .we do not know the answer even for fast protocols of the two types considered in this paper .finding examples for our double - group protocol is not at all trivial ; see the discussion in sec .[ sbct3.2 ] . 
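Returning briefly to example 8: its group-theoretic ingredient is a two-dimensional irreducible representation of a dihedral group. The sketch below builds the standard rotation/reflection form of such a representation and checks closure and irreducibility numerically; this parametrization is only assumed to correspond to the one used in the example, so the snippet illustrates the ingredient rather than verifying the example itself.

```python
import numpy as np

def dihedral_representation(n, m=1):
    """2x2 real representation of the dihedral group of order 2n: rotations
    R(2*pi*m*f/n) and reflections R(2*pi*m*f/n) @ diag(1, -1), f = 0..n-1."""
    mats = []
    for f in range(n):
        t = 2 * np.pi * m * f / n
        R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
        mats.append(R)                          # rotation
        mats.append(R @ np.diag([1.0, -1.0]))   # reflection
    return mats

n = 5
G = dihedral_representation(n)

def index_of(M, mats, tol=1e-9):
    for i, A in enumerate(mats):
        if np.max(np.abs(A - M)) < tol:
            return i
    return None

# closure: the product of any two elements is again (numerically) in the set
assert all(index_of(A @ B, G) is not None for A in G for B in G)

# irreducibility: the sum over the group of |character|^2 equals the group order
chi2 = sum(np.trace(A) ** 2 for A in G)
assert np.isclose(chi2, len(G))
print("closure and irreducibility hold for the dihedral group of order", len(G))
```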
in sec .[ sbct2.3 ] we discussed cases in which subsets of a group can be used to carry out a fast controlled unitary protocol at the cost of greater entanglement .see in particular example 3 , where we showed that any unitary in a particular continuous family can be approximated arbitrarily closely by a unitary implementable by a deterministic fast protocol , provided one is willing to use up enough entanglement .this is similar in spirit to the results in and .their protocols may need less or more entanglement than our protocols , depending on the form of the unitary . in certain situations ( e.g. , example 5 ) , our protocol uses the minimum possible entanglement. it would be nice if these issues could be clarified in terms of some basic principle(s ) of quantum information theory .another question we have not been able to answer is whether unitaries of the more general form , where the form an ordinary or projective representation of a group , but need not do so , can be carried out by means of a fast protocol .a slow protocol was found in our earlier work , and we have constructed a fast version for that protocol , but it seems to only work for those unitaries implementable by our fast double - group protocol of sec . [ sct3 ] .again , this may reflect some fundamental principle of quantum information , but if so we have not been able to identify it .we thank serge fehr , hoi kwan lau , and shiang yong looi for helpful discussions , including those on the connections of this work with the topics of instantaneous measurement and position - based quantum cryptography .patrick coles read some of the appendices and made helpful suggestions .this work has been supported in part by the national science foundation through grants no .phy-0456951 and no .s.m.c . has also been supported by a grant from the research corporation .in this section , we consider implementation of on using a different and use these ideas below to show that consideration of controlled unitaries can be restricted to those with rank- projectors .a scheme is shown in fig .[ figextend ] , which is valid for both the fast and slow protocols .if the protocol for is fast , then the whole protocol is fast .the circuit in fig .[ figextend ] can be used for the following two situations , to be discussed in detail in the two subsections below .the first situation , called `` extension '' , is that we extend the space of to , and the unitary is an extension of , where is any unitary on .the second situation , called `` compression with extension , '' is only for general controlled unitaries of the form .the protocol replaces the higher - rank projectors on in with rank - one projectors on in , while adding more projectors if needed .applications are found in sec .[ sct2 ] . note that while fig .[ figextend ] shows an extension ( and compression in the case of controlled unitaries ) on the side , one can just as well do this on the side , or both sides .the input state for the whole protocol is any state on together with some fixed state on the ancilla denoted by .the map is unitary , and is its inverse .the unitary obviously has its input dimension equal to its output dimension : , where is determined by ( see below ) , may be unequal to , and may be unequal to . 
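The first situation ("extension"), treated in detail in the next subsection, boils down in its simplest single-system form to the following linear-algebra fact: if the space is enlarged and the gate is extended to the direct sum of the original unitary with an arbitrary unitary on the added levels, the embedded subspace sees exactly the original gate. The sketch below checks this with random matrices; it captures only the direct-sum idea, not the full circuit of fig. [figextend] with its ancilla and controlled structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(d):
    """A random unitary from the QR decomposition of a complex Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

d, d_extra = 2, 3
U = random_unitary(d)            # the gate of interest on the original space
W = random_unitary(d_extra)      # an arbitrary unitary on the added levels
U_big = np.block([[U, np.zeros((d, d_extra))],
                  [np.zeros((d_extra, d)), W]])    # the extension U (+) W

# isometry embedding the d-dimensional input space into the first d levels
V = np.zeros((d + d_extra, d), dtype=complex)
V[:d, :d] = np.eye(d)

assert np.allclose(V.conj().T @ U_big @ V, U)      # exactly U; W never matters
print("direct-sum extension followed by compression reproduces the original gate")
```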
herewe consider the first type of extension , where is the direct sum of ( the same as but on a different space ) and another unitary .dimension is fixed by and is greater than .one can always choose to be equal to , but in general may choose to be less than .the action of the unitary on the actual input space is determined by the following equation : where is an orthonormal basis of .the requirements for in this equation can be extended to a full definition of a unitary .the effect of is to transfer alice s input state into .define to be the span of .then , where is a space orthogonal to .then the correct form of should be , where is the same as the original unitary except it is on a different space , and is an arbitrary unitary .now we prove that the circuit defined in fig .[ figextend ] applied to yields .suppose the input state on is .then the argument can be extended by linearity to superpositions of the states . as a side remark , the derivations above should still work if we replace the unitary by an isometry , replace by , and remove system from the circuit in fig .[ figextend ] .it can be verified that the overall operation of the circuit is .we chose to present the argument using the unitary rather than the isometry in order to show that the scheme has no trouble finding an experimental implementation .the same remark also applies to the next subsection .the current extension technique was useful in finding the protocols in secs .[ sbct2.3 ] and example 8 in [ sbct3.3 ] , but it turns out that those protocols ( for the particular types of unitaries ) can be simplified such that no extension is needed , which is why we have not explicitly mentioned this idea of extension in those sections . for general controlled unitaries of the form ( not limited to those implementable by the fast protocols in this paper ) ,we now consider a procedure that compresses the higher - rank projectors on into rank - one projectors on , while adding more projectors if needed .the form of is where .apparently .the steps of the protocol are similar to those in appendix [ sbct_extend1 ] , but with the following change to the requirements on ( and accordingly ) : where is the label for the states in a specific basis of , with ( ) labeling which projector , and labeling which basis state in the support of .note the range of depends on , and because of this , should be at least the maximum rank among the s ( ) , while satisfying .the requirements for in can be extended to a full definition of a unitary .the effect of is to transfer the information about `` which '' into , and that information is used in the controlled unitary , and then transferred back to by .the final state of the protocol is , and the proof for the correctness of the protocol is similar to that in appendix [ sbct_extend1 ] .suppose the input state on is .then the argument can be extended by linearity to superpositions of the states . as noted above ,for the current subsection it is also plausible to replace the unitary by an isometry , replace by , and remove system from the circuit in fig .[ figextend ] . using the definition of in, it can be verified that the overall operation of the circuit is , where is of the form .* proof of theorem [ thm1 ] . *\(a ) we need to show [ see and note that both and are diagonal matrices ] that is a complex permutation matrix .the diagonal elements of are complex conjugates of the elements in a row of and thus [ eq . 
]equal to a common phase factor times those in a particular row , say row , of the character table ; recall that the complex conjugate of a row in is always another row of .now the matrix is the matrix with each column multiplied by the corresponding diagonal element of .thus the -th row of is the element - wise product of row of with row of , up to an overall phase that depends on .but since the rows of form a group under element - wise products , this means that for a suitable complex permutation matrix . since ,tells us that , and because , , and are all complex permutation matrices , so is .\(b ) the rows of are the elements of a group in the following sense .the element - wise product of any two rows is , up to a phase , a third row , and the fact that the rows are linearly independent means that this third row is uniquely determined .that is , there is a well - defined associative group multiplication , which is commutative , so the group is abelian .there is necessarily one row consisting of identical elements ; this is the identity element of . given any row, there is another row which is , element by element , its complex conjugate , up to a single phase for the whole row ; these two rows are inverses of each other .hence the group is well defined .obviously , each column of consists of elements ( viewed as matrices ) that form an irreducible representation of under the multiplication of complex numbers , and all the elements of are of magnitude 1 .divide each row by its first element to form the matrix , whose rows again form the group , but now without additional phases since the first element of each row is 1 . since the rows of are linearly independent , so are the rows of , and hence also its columns. thus each column of forms a distinct irreducible representation of the group .all the distinct irreducible representations of are included in the columns of , thus is the transpose of a character table of the abelian group , and such a transpose is itself a character table of ( see the discussion preceding theorem [ thm1 ] ) .hence we can identify as the in , and the phases used to change to can be included in the matrix in .* proof of theorem [ thm2 ] .* let and rewrite in the equivalent form the matrix on the left is obtained from by multiplying each column by the corresponding diagonal element of , while the one on the right is obtained by some permutation of the rows of , with an additional overall phase for each row .consider the special case in which the first row of consists of 1 s .then the first row of is , the row vector whose elements are the diagonal elements of , and according to , it is equal to the first row of ( i.e. , a phase times some other row of ). as the are linearly independent ( they are complex conjugates of the rows of the hadamard matrix ) , it follows by equating , for each , the first row on the left side of with that on the right side , that the element - wise product of the first row of and the for all possible values of generates all the rows of up to a phase ( i.e. , each row of is a phase times one of the ) . then according to, the element - wise product of any row of with any ( i.e. the element - wise product of any two rows of up to a phase ) , is always a third row of up to a phase .similarly the product of any two of the matrices in the set is a third , up to a phase ; this is an associative group product .the first row of is equal to one of the up to a phase , and this corresponds to the group identity . 
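The permutation property established in part (a) above can be confirmed numerically for a small group. The check below uses the character table of Z2 x Z2 and takes the permutations P1, P2 and the diagonal D of the theorem to be trivial, which is the simplest case; the precise placement of the daggers and the 1/n normalization are conventions chosen for this illustration.

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]], dtype=complex)   # character table of Z_2
T = np.kron(H2, H2)                               # character table of Z_2 x Z_2
n = T.shape[0]

def is_complex_permutation(M, tol=1e-9):
    """Exactly one entry of unit magnitude in every row and every column."""
    big = np.abs(M) > tol
    return (big.sum(axis=0) == 1).all() and (big.sum(axis=1) == 1).all() \
        and np.allclose(np.abs(M[big]), 1.0)

for k in range(n):
    Dk = np.diag(T[k].conj())          # diagonal built from the conjugate of row k
    Q = T @ Dk @ T.conj().T / n        # conjugation by the unitary T / sqrt(n)
    assert is_complex_permutation(Q)
print("each row of the character table induces a permutation under conjugation")
```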
because the first row of consists of 1 s , there is a row of , say row , which consists of equal elements .then the -th row of [ see ] , identifies the group inverse of .thus the under matrix products form a group ( denoted by ) up to phases . for any , its complex conjugate is the matrix inverse of , and hence some up to a phase. consequently all the different rows of , the complex conjugates of , are equal to the different up to phases and a permutation of the ordering , hence these rows form the group up to phases under element - wise multiplication .then according to theorem [ thm1](b ) , must be abelian , and is of the form given in . since the rows of are a permutation of the different up to phases , they are also a permutation of the rows of up to phases .thus is of the form with .hence for the special case under consideration , is satisfied , and then follows from . next consider the more general case in which the elements of the first row of are all nonzero .form from by dividing every column of by the corresponding element on the first row .this means that , where is a diagonal matrix . since commutes with every , we can replace on both sides of with . as the first row of consists of 1 s ,the argument given above shows that the form an abelian group up to phases , and the rows of are , up to phases , some permutation of the rows of , and is of the form given in , again according to theorem [ thm1](b ) .thus the different columns of all have the same normalization , and since the same is true of , as is assumed to be a unitary matrix , it follows that the diagonal elements of all have magnitude 1 , and one can set in . then follows from .* proof of theorem [ thm3 ] .* the coefficients must all be of the same magnitude , ( see the remark following theorem [ thm1 ] ) , and without loss of generality can be chosen such that in , where is the group identity .first consider the case that is an ordinary representation of , so that .choose any group element and assume it has order , which means that divides and .the first row of , corresponding to in , contains in some order , interspersed with other coefficients .now consider the row of corresponding to in .it is related to the first row as indicated here , where only the relevant columns are shown , rearranged in a convenient order .let us define the matrix , as in the first paragraph of sec .[ sbct3.2 ] , to be the one obtained from by multiplying each row and each column by some phase , so that all the elements in the first row and first column are equal to 1 .this is a character table , so every element is an -th root of 1 .equivalently , is obtained from by dividing each column of by the corresponding element in the first row , and then in the resulting matrix dividing each row by its first element .consequently , applying this process to the rows and columns shown in , we conclude that where each is an -th root of 1 .the product of these equations , ^p=\phi_1 \phi_2 \cdots \phi_{p-1}(\sqrt{n})^{p-1},\ ] ] implies , since , that must be a -th root of a number which is itself an -th root of 1 , and because divides , is of the form .this completes the argument for an ordinary representation .when the form a projective representation of with a standard factor system , the first row in is the same , but the second row will be multiplied by appropriate factors . 
since we are assuming a normalized factor system , all these additional factors are themselves -th roots of 1 , so still holds for which are -th roots of 1 , and the rest of the argument is the same as before .in this appendix , we prove theorem [ thm4 ] , which says that the controlled unitary given by , where are orthogonal projectors , and form a subset of an ordinary representation of an abelian group , is equivalent to under local unitaries , where are complex coefficients [ will be defined in ] , and are linear combinations of , and is an ordinary representation of . in addition , the can be chosen to satisfy the requirements for the fast protocol , hence all controlled - abelian - group unitaries can be implemented by the fast double - group unitary protocol .+ we first prove the case that form a whole representation , not a subset , and at the end we will remark that the proof also works in the `` subset '' case .the proof is by explicitly constructing a and showing that it is equivalent to under local unitaries .any abelian group is a direct sum of cyclic groups , so , where , and is the order of the cyclic group . then .the group element is relabeled by a vector , where .we use the convention that is the sequential numbering ( starting from 0 ) for the vectors in lexicographical order , so that corresponds to , and corresponds to , etc .suppose has been diagonalized under a suitable unitary similarity transform , then is the direct sum of some irreducible representations ( possibly with redundancy ) .all possible irreducible representations of are one - dimensional , and have the form where is the label for irreducible representations ( some may be missing from , but we still include them in this labeling scheme for convenience ) .denote the computational basis of by , , then the -th diagonal elements in determine an irreducible representation labeled by , .the can be written in the vector form [ see and the sentence after that ] , and the components in the vector will be denoted by . as discussed above , for every we can represent using a set of integers : , with group multiplication corresponding to vector addition ( modulo for the -th element of the vector ) .define , where is defined by ( basically the same as in example 6 ) , \notag\\ \,\,\,f_s=0,1,\cdots , r_s-1.\end{aligned}\ ] ] we choose the to be p^a_k,\ ] ] where is the vector labeling for .define as \dyad{b}{b},\ ] ] where are the components in the vector labeling for .it is not hard to verify that is an ordinary representation of , and the coefficients form a unitary matrix of the type , hence is unitary . where , , and is any eigenstate of .denote the phase factor in front of in the above equation by , then ^ 2+[k_s+q_{b , s}+(r_s\mbox { mod } 2)/2]^2\}/r_s \big ) \notag\\ & = \frac{1}{\sqrt{n}}\prod_{s=1}^{\eta}\left[\left(\sum_{j=0}^{r_s-1}\exp\{-\pi i [ j+k_s+q_{b , s}+(r_s\mbox { mod } 2)/2]^2/r_s\}\right ) \exp\{\pi i [ k_s+q_{b , s}+(r_s\mbox { mod } 2)/2]^2/r_s\}\right]\notag\\ & = \frac{\alpha}{\sqrt{n}}\prod_{s=1}^{\eta}\exp\{\pi i[k_s+q_{b , s}+(r_s\mbox { mod } 2)/2]^2/r_s\}\end{aligned}\ ] ] where is a constant independent of and . in deriving the last line above ,we have used for even , and for odd , which make the substitution possible , and obtained , where ^ 2/r_s\}$ ] . define the local operators and on and , respectively , as follows : from the unitarity of , is always a phase factor with magnitude 1 , hence and are unitary operators . 
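The coefficients in this proof are built from quadratic phases, and the essential point is that the resulting matrix of coefficients is unitary with entries of equal magnitude. Assuming the stripped expressions are of the usual quadratic-phase form (the residue above suggests exponents proportional to [j + shift]^2 / r), the underlying identity can be checked numerically: the r x r matrix with entries exp(i*pi*(j - k + shift)^2 / r)/sqrt(r) is unitary for every r and any real shift, because the cross terms collapse to a geometric sum. This is an illustration of the kind of identity used, not a reconstruction of the exact formula in the text.

```python
import numpy as np

def quadratic_phase_matrix(r, shift=0.0):
    """M[j, k] = exp(1j * pi * (j - k + shift)**2 / r) / sqrt(r)."""
    j = np.arange(r)[:, None]
    k = np.arange(r)[None, :]
    return np.exp(1j * np.pi * (j - k + shift) ** 2 / r) / np.sqrt(r)

for r in range(2, 9):
    M = quadratic_phase_matrix(r, shift=0.5 * (r % 2))   # half-integer shift for odd r
    assert np.allclose(M @ M.conj().T, np.eye(r))        # unitary for every r
    assert np.allclose(np.abs(M), 1.0 / np.sqrt(r))      # entries of equal magnitude
print("quadratic-phase matrices are unitary with flat entries")
```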
then for chosen arbitrarily from the eigenstates of , we have \ket{k}\ket{b } \notag\\ & = \sum_{k=0}^{n-1}p^a_k\otimes v^b_k \ket{k}\ket{b } \notag\\ & = \uc \ket{k}\ket{b}\end{aligned}\ ] ] where we have used to derive the third line , and used to derive the fourth line .since are of finite rank , there exists a finite collection of states of the form to make a complete basis of .the actions of and are the same on all states in a complete basis , hence they must be identical operators . therefore is equivalent to under local unitaries . using the algorithm in sec .[ sbct3.2 ] , it can be verified that the choice of coefficients given above ( which can be viewed as the choice in example 6 generalized to the non - cyclic abelian groups ) satisfies the requirements for the fast protocol .hence the double - group protocol for is fast .the proof above can basically be applied to the case that form a subset of a representation . in generalsome do not occur in the expressions for and ; those can be safely removed because and are block diagonal , where the blocks are determined from the support of the s .the coefficients are still the same as above , so the protocol is still fast .hence the proof still works .a. yimsiriwattana and s. j. lomonaco jr ., in _ coding theory and quantum computing _ , edited by d. evans _ et al ._ , volume 381 of contemporary mathematics , ( american mathematical society , 2005 ) , p. 131 .e - print arxiv : quant - ph/0402148v3 .h. buhrman , n. chandran , s. fehr , r. gelles , v. goyal , r. ostrovsky , and c. schaffner , in _ advances in cryptology crypto 2011 _ , edited by p. rogaway , ( springer , new york , 2011 ) , p. 423 .e - print arxiv:1009.2490v4 [ quant - ph ] .
|
in certain cases the communication time required to deterministically implement a nonlocal bipartite unitary using prior entanglement and locc ( local operations and classical communication ) can be reduced by a factor of two . we introduce two such `` fast '' protocols and illustrate them with various examples . for some simple unitaries , the entanglement resource is used quite efficiently . the problem of exactly which unitaries can be implemented by these two protocols remains unsolved , though there is some evidence that the set of implementable unitaries may expand at the cost of using more entanglement .
|
a chaos game , defined in section [ secattr ] , is a markov chain monte carlo algorithm applied to describe stationary probability distributions supported on an attractor of an iterated function system ( ifs ) .chaos games can yield efficient approximations to attractors , such as cantor sets and sierpinski triangles , of well - known types of ifs , such as finite sets of contractive similitudes on .they have applications in computer graphics and digital imaging .rigorous convergence results have been established for chaos games on ifss whose maps are contractive on the average , see and references therein .recently , it has become clear ( see , barnsleylesniakrypka ) that attractors of ifss are of a topological nature .applications of the chaos game on a general ifs of continuous maps on a metric space , without any contractivity conditions on the maps , have been investigated , see ) , and various somewhat complicated convergence results have established . in this paper , we simplify the situation by restricting attention to topological ifss , defined in section [ secattr ] .our main result is theorem [ tmain ] : under very general conditions , chaos games yield attractors of topological ifss .our approach is based on understanding the subtle structure of attractors and their basins of attraction .we proceed as follows . in sectionsecattr we give basic definitions and instructive examples of concerning attractors . in section [ basin ] , we discuss the basin of attraction of an ifs in a topological space and the relationship to a new object , the _ pointwise basin _ ; this complements observations made in barnsleylesniakrypka .theorem [ tmain ] is proved in section [ chg ] and illustrated by example [ ex : proj ] . in section [ conclsec ]we relate this work to the notion of a semi - attractor as defined by lasota , myjak and szarek .we use the notation and results from . a * * ( topological ) iterated function system * * _ _ ( ifs ) __ is a normal hausdorff topological space together with a finite collection of continuous maps , .the associated * hutchinson operator * is defined on the family of nonempty compact sets by .the composition of is denoted by .similarly we define for any set .the hutchinson operator on and its restriction to , , are continuous , when is endowed with the vietoris topology ( cf .* proposition 1.5.3 ( iv ) ) ) . without ambiguitywe may also write . note that has the order theoretic property for .[ chaosdef ] a * chaos game * on the ifs comprises a sequence of points with and for where is a sequence of random variables with values in .( in some cases , but not in this paper , it is required that these random variables are independent and identically distributed . ) in this paper , in line with realistic applications , we require only that there is a probability so that for each , for all independently of the outcomes for all other values of .an ifs may or may not possess some kind of attractor , see below .but , if it does possess an attractor , and if a chaos game is such that , where `` '' denotes topological closure in , then we say `` the chaos game yields the attractor '' or some equivalent statement .we call a nonempty compact set a ( * strict ) attractor * of , when admits an open neighbourhood such that as , for all nonempty compact . the convergence is understood in the vietoris sense .the maximal open set with the above property is called the * basin of ( the attractor ) * , and is denoted by . 
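as a concrete illustration of the chaos game and of a strict attractor (a sketch that is not part of the paper): the classical ifs of three contractive similitudes on the plane whose attractor is the sierpinski triangle, with each map drawn with probability 1/3, so the minorization by p > 0 required in definition [ chaosdef ] holds.

```python
import random

# three contractive similitudes of the plane whose attractor is the
# sierpinski triangle with the vertices below (each map halves the
# distance to one vertex)
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.8660254)]

def f(i, p):
    vx, vy = VERTICES[i]
    x, y = p
    return ((x + vx) / 2.0, (y + vy) / 2.0)

def chaos_game(n_points, x0=(0.3, 0.3), seed=0):
    """Random orbit x_k = f_{sigma_k}(x_{k-1}); each map is drawn with
    probability 1/3, so the minorization of the paper holds with p = 1/3."""
    rng = random.Random(seed)
    x = x0
    orbit = []
    for _ in range(n_points):
        x = f(rng.randrange(3), x)
        orbit.append(x)
    return orbit

points = chaos_game(100_000)
# discarding a short transient, the remaining points lie (numerically) on the
# sierpinski triangle, i.e. the closure of the tail approximates the attractor
print(points[:3])
```

in this easy contractive case the closure of the random orbit visibly fills out the attractor, which is what theorem [ tmain ] below guarantees in much greater generality.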
if is metrizable with metric , then it is well - known that the associated hausdorff metric , , induces the vietoris topology on , .the question of when the existence of an attractor implies the existence of a metric with respect to which the ifs is contractive , on a neighborhood of the attractor , is an active research area , see for example kameyama , barnsleyvince - projective , vince .[ ex : transitive ] let be a compactum and be a homeomorphism such that is a minimal invariant set , i.e. , if and , then . by virtue of the birkhoff minimal invariant set theorem ( see ) we know that the forward orbit of any point in under is dense in , that is canonical situation of this kind arises for an irrational rotation of the circle .( interestingly , a circle is the attractor of a contractive ifs on the plane , cf .the ifs , where is the identity map on , has as a strict attractor with .however , is noncontractive and can not be remetrized into a contractive system .moreover , this is an example of an ifs where the attractor is not point - fibred in the sense of kieninger ( cf . ) and it is not topologically self - similar in the sense of kameyama ( cf .kameyama ) .thus symbolic techniques , like those in , are not directly applicable . to see that is the unique strict attractor of we observe the following .first , for all , second , for a general we have .this is because for arbitrary .nonmetrizable compact spaces are important for fundamental questions in measure theory and functional analysis , cf . . however , proposition [ th : separability ] below shows that it may be cumbersome to identify a concrete example of a nonmetrizable attractor .classical examples , like tychonoff s product of uncountably many compact factors or alexandroff s double circle can not be used for this purpose , because they are not separable .[ th : separability ] if is a strict attractor of a topological ifs , then is separable .choose any .the set is countable and dense in .[ ex : nonmetric ] let \times \{0\}\cup \lbrack 0,1)\times \{1\}\subset \mathbb{r}^{2} ] , and double intervals are tailored to .the space is known to be compact separable first - countable and not metrizable .it turns out that is a strict attractor of an ifs .consider the ifs where .one easily observes that and that taking a counter - image via does not change the shape " of a double interval .for instance \times \{0\}\cup ( x_{0}-r , x_{0})\times \{1\ } ) \\ & = & ( 1-x_{0},1-x_{0}+r)\times \{0\}\cup \lbrack 1-x_{0},1-x_{0}+r)\times \{1\}.\end{aligned}\]]therefore constitutes an ifs of continuous maps . to show that is a strict attractor of , it is enough to verify this on singletons by inspecting the behavior of cascades parallel to those appearing in the ifs ;f_{1},f_{2}\} ] , .namely , if is dense in w.r.t .the euclidean topology , then is dense in .the basin of attraction plays a key role in chaos games on a general ifs , because it is usually required that in definition [ chaosdef ] belongs to , to permit the chaos game to yield the attractor .in fact , it is only necessary that belongs to the pointwise basin of , defined next . in this sectionwe examine properties of basins and pointwise basins .the * pointwise basin * of a set ( w.r.t . the ifs ) is defined to be that may be empty . if , then we say that is a * pointwise ( strict ) attractor * of .[ basprop ] let be a strict attractor of with basin .then is a pointwise strict attractor of and .we shall check only that , because the reverse inclusion is obvious . fix . for every , . 
since is an open neighbourhood, there exists such that . being each continuous, we can find open neighbourhoods so that . next , we take finite subcovering of . by normality of can divide in such a way that ; cf . . for each separate onehas , so there exists with the property for . hence ( [ eq : jtersuba ] ) holds for .since is continuous , there exists an open neighbourhood of which is also mapped with into .being arbitrary and open , we can conclude that .the following example illustrates a pointwise strict attractor that is not an attractor .[ exdk ] consider the set of points on the circle which may be projected to integers and infinity on real line .let be such that and ( see figure [ countex ] ) .observe that the map is continuous with respect to the euclidean metric on the circle .it is obvious that each point of is attracted to the north pole by the induced mapping on so we have .however , is not an attractor of the ifs .a positive criterion for the existence of a strict attractor according to the nonemptiness of reads as follows .[ th : pointwiselem ] let be a metric space and , nonexpansive mappings , i.e. , let . if , then is a strict attractor of with the basin .fix and a nonempty compact .choose a finite net , .from the additivity of it follows that , because is a finite subset of .then for large enough . from nonexpansiveness ,hence we get for large .therefore [ basinv ] if , then the pointwise basin is open and positively invariant : 1 . , 2 . . in particular, the basin of an attractor is positively invariant , .denote .fix .there exists such that . by continuity of can find an open neighbourhood of satisfying .we shall check that . since for each set is finite , we get . therefore belongs to together with its neighbour points .ad ( ii ) .let , .we will show that . obviously , given an open , we have that for large . in particular , for some .now , fix an open .pick anyhow . since , we have for large . altogether , . therefore .invariance of can be inferred from the above by applying proposition [ basprop ] . in general ;just take a projection onto a point .finally , we show how important is to iterate compact sets , not merely ( closed ) bounded sets like it was the case in the original paper hutchinson and its successors .let be the hilbert space of square summable sequences with an orthonormal basis .we define via for .one easily checks that is nonexpansive .we shall prove that is a strict attractor of the ifs with the full basin of attraction , but it does not attract iterates of nonempty closed bounded noncompact sets , no matter how small these sets are .first , we examine the convergence obviously we need to verify this we employ the lebesgue dominated convergence theorem adapted to series ( understood as integrals with respect to the counting measure ) . indeed both , the pointwise convergence and the majorization hold true . 
thus .second , we bootstrap ( [ eq : edelstein ] ) to ( [ eq : attractor ] ) via lemma [ th : pointwiselem ] .thus is a strict attractor .third , we note that where , which is the element of the sphere at with radius .hence does not attract closed bounded sets ; , for arbitrarily small radii .we are ready to show that the chaos game works on topological spaces in a rather general framework .the initial point of the orbit must be picked from the pointwise basin of attraction .[ tmain ] let be an ifs .let be a nonempty compact set such that .if is a random orbit under starting at , then with probability one , where the limit is taken with respect to the vietoris topology and each mapping is chosen at least with probability .fix .we have first , for every open set there exists such that for all .this is due to ( [ eq : lima ] ) .second , we will show that , for any open and all , with probability one .let us denote by ( [ eq : lima ] ) the set is compact .moreover , , thanks to proposition [ basinv ] .fix an open . to each can assign in such a way that .this is possible , because ( [ eq : lima ] ) holds for replaced with .now we can use the continuity of . for each there exists an open such that for all .the open covering of admits a finite subcovering .we put .therefore for every there exists such that . hence there exists a finite word ( of the length not exceeding ) which satisfies .each map , , is drawn with the probability not less than .having drawn the point , , the probability that is not less than .denote by the event such that for some .the complementary event shall be written as .taking into account the observations made so far , basic conditional probability calculation shows that thus .moreover . on calling the second borelcantelli lemma ( ( * ? ? ?* p.18 ) ) , we get that , with probability , happens for infinitely many .overall we are almost sure that all tails intersect those open sets which are intersecting an attractor .a class of noncontractive ifss with an attractor may be found in projective spaces .[ ex : proj ] ( ( * ? ? ?* example 3 ) ) consider the ifs , where neither of the functions has an attractor .however , the ifs consisting of both functions has an attractor plotted in figure [ ci ] .it is obtained by means of the chaos game .one can still weaken the uniform positive minorization of drawing probabilities and allow for some decay in time ( say logarithmic and alike ) .we refer to for these more involved nonstationary conditions .[ th : semiattractor ] let be a hausdorff topological space .let be a compact semiattractor of the ifs comprising continuous maps .if is a random orbit under starting at , then with probability one , provided each mapping is chosen at least with probability . 1 .the proof of theorem [ tmain ] relies on compactness of and continuity of .it works actually in hausdorff nonnormal spaces too , although preparatory material from section [ basin ] involved normality in few places .lasota - myjak semiattractors are defined for multivalued iterated function systems so some discontinuity of maps is allowed .moreover , semiattractor can be noncompact .we restrict the chaos game to a concrete class of stochastic processes which randomly draw the maps from a system .thanks to this we do not have to validate whether the markov operator associated with the ifs is asymptotically stable ; in particular , the ifs at our disposal need not fulfill the standard average contraction condition ( cf .lasotamyjakszarek ) .4 . 
as already pointed out in , the nature of noncontractive ifss possessing attractors is still not well understood.

barnsley, d. c. wilson, k. leśniak, _some recent progress concerning topology of fractals_, in: k. p. hart, p. simon, j. van mill (eds.), _recent progress in general topology iii_, atlantis press, amsterdam 2014.
|
we explore the chaos game for continuous ifss on topological spaces. we prove that the existence of an attractor allows the chaos game to be used for visualization of the attractor. the essential role of the basin of attraction is also discussed. keywords: iterated function system, strict attractor, basin of attractor, pointwise basin, chaos game. msc: 28a80
|
visualization has always been an important ingredient for communicating mathematics .figures and models have helped to express ideas even before formal mathematical language was able to describe the structures .numbers have been recorded as marks on bones , represented with pebbles , then painted onto stone , inscribed into clay , woven into talking knots , written onto papyrus or paper , then printed on paper or displayed on computer screens .while figures extend language and pictures allow to visualize concepts , realizing objects in space has kept its value . already in ancient greece ,wooden models of apollonian cones were used to teach conic sections .early research in mathematics was often visual : figures on babylonian clay tablets illustrate pythagorean triples , the moscow mathematical papyrus features a picture which helps to derive the volume formula for a frustum .al - khwarizmi drew figures to solve the quadratic equation .visualization is not only illustrative , educational or heuristic , it has practical value : pythagorean triangles realized by ropes helped measuring and dividing up of land in babylonia .ruler and compass , introduced to construct mathematics on paper , can be used to build plans for machines .greek mathematicians like apollonius , aristarchus , euclid or archimedes mastered the art of representing mathematics with figures .visual imagination plays an important role in extending geometrical knowledge .while pictures do not replace proofs - gives a convincing visual proof that all triangles are equilateral - they help to transmit intuition about results and ideas .the visual impact on culture is well documented .visualization is especially crucial for education and can lead to new insight .many examples of mechanical nature are in the textbook . as a pedagogical tool, it assists teachers on any level of mathematics , from elementary and high school over higher education to modern research .a thesis of has explored the feasibility of the technology in the classroom .we looked at work of archimedes using this technology .visualizations helps also to showcase the beauty of mathematics and to promote the field to a larger public .figures can inspire new ideas , generate new theorems or assist in computations ; examples are feynman or dynkin diagrams or young tableaux .most mathematicians draw creative ideas and intuition from pictures , even so these figures often do not make it into papers or textbooks .artists , architects , film makers , engineers and designers draw inspiration from visual mathematics . well illustrated books like advertise mathematics with figures and illustrations .such publications help to counterbalance the impression that mathematics is difficult to communicate to non - mathematicians .mathematical exhibits like at the science museum in boston or the museum of math in new york play an important role in making mathematics accessible .they all feature visual or even hands - on realizations of mathematics . while various technologies have emerged which allow to display spacial and dynamic content on the web , like javascript , java , flash , wrml , svg or webgl , the possibility to * manipulate an object with our bare hands * is still unmatched .3d printers allow us to do that with relatively little effort .the industry of rapid prototyping and 3d printing in particular emerged about 30 years ago and is by some considered part of an * industrial revolution * in which manufacturing becomes digital , personal , and affordable . 
first commercialized in 1994 with printed wax material , the technology has moved to other materials like acrylate photopolymers or metals and is now entering the range of consumer technology .printing services can print in color , with various materials and in high quality .the development of 3d printing is the latest piece in a chain of visualization techniques .we live in an exciting time , because we experience not only one revolution , but two revolutions at the same time : an information revolution and an industrial revolution .these changes also affect mathematics education .3d printing is now used in the medical field , the airplane industry , to prototype robots , to create art and jewelry , to build nano structures , bicycles , ships , circuits , to produce art , robots , weapons , houses and even used to decorate cakes .its use in education was investigated in .since physical models are important for hands - on active learning , 3d printing technology in education has been used since a while and considered for sustainable development , for k-12 education in stem projects as well as elementary mathematics education .there is no doubt that it will have a huge impact in education .printed models allow to illustrate concepts in various mathematical fields like calculus , geometry or topology .it already has led to new prospects in mathematics education .the literature about 3d printing explodes , similar as in computer literature expanded , when pcs entered the consumer market .examples of books are . as for any emerging technology , these publicationsmight be outdated quickly , but will remain a valuable testimony of the exciting time we live in .the way we think about mathematics influences our teaching .images and objects can influence the way we think about mathematics . to illustrate visualizations using 3d printers ,our focus is on mathematical models generated with the help of * computer algebra systems*. unlike * 3d modelers * , mathematical software has the advantage that the source code is short and that programs used to illustrate mathematics for research or the classroom can be reused .many of the examples given here have been developed for classes or projects and redrawn so that it can be printed .in contrast to modelers " , software which generate a large list of triangles , computer algebra systems describe and display three dimensional objects mathematically . while we experimented also with other software like 123d design " from autodesk , sketchup " from trimble , the modeler free cad " , blender " , or rhinoceros `` by mcneel accociates , we worked mostly with computer algebra systems and in particular with mathematica . to explain this with a concrete example ,lets look at a * theorem of newton on sphere packing * which tells that * the kissing number of spheres in three dimensional space is 12 . *the theorem tells that the maximal number of spheres we can be placed around a given sphere is twelve , if all spheres have the same radius , touch the central sphere and do not overlap . while newton s contemporary gregory thought that one can place a thirteenth sphere ,newton believed the kissing number to be 12 .the theorem was only proved in 1953 . to show that the kissing number is at least 12 , take an icosahedron with side length and place unit spheres at each of the 12 vertices then they kiss the unit sphere centered at the origin . 
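a short numerical sketch of this construction (plain python instead of the mathematica code discussed below; standard golden-ratio coordinates of the icosahedron are assumed): the twelve vertices are rescaled to distance 2 from the origin, so unit spheres centred there touch the central unit sphere, and their pairwise distances stay above 2, so the spheres do not overlap.

```python
import itertools
import math

phi = (1 + math.sqrt(5)) / 2          # golden ratio

# the 12 vertices of a regular icosahedron: cyclic permutations of (0, +-1, +-phi)
raw = []
for a, b in itertools.product((1, -1), repeat=2):
    raw += [(0, a, b * phi), (a, b * phi, 0), (b * phi, 0, a)]

# rescale so every vertex lies at distance 2 from the origin; a unit sphere
# centred at such a vertex then touches ("kisses") the central unit sphere
scale = 2 / math.sqrt(1 + phi * phi)
centers = [tuple(scale * x for x in v) for v in raw]

assert len(centers) == 12
for c in centers:
    assert abs(math.dist(c, (0.0, 0.0, 0.0)) - 2.0) < 1e-12

# the pairwise distances between sphere centres all equal or exceed the edge
# length 2/sin(2*pi/5) ~ 2.103 > 2, so the 12 unit spheres do not overlap
dmin = min(math.dist(p, q) for p, q in itertools.combinations(centers, 2))
print(round(dmin, 3))                  # ~ 2.103
assert dmin > 2.0
```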
the proof that it is impossible to place 13 spheres uses an elementary estimate for the area of a spherical triangle , the euler polyhedron formula , the discrete gauss - bonnet theorem assuring that the sum of the curvatures is and some combinatorics to check through all cases of polyhedra , which are allowed by these constraints . in order to visualize the use of mathematica, we plotted 12 spheres which kiss a central sphere . while the object consists of 13 spheres only , the entire solidis made of 8640 triangles .the mathematica code is very short because we only need to compute the vertex coordinates of the icosahedron , generate the object and then export the stl file . by displaying the source code, we have * illustrated the visualization * , similar than communicating proof . if fed into the computer , the code generates a printable ' ' stl `` or ' ' x3d " file .physical models are important for hands - on active learning. repositories of 3d printable models for education have emerged .3d printing technology has been used for k-12 education in stem projects , and elementary mathematics education .there is optimism that it will have a large impact in education .the new technology allows everybody to build models for the classroom - in principle . to make it more accessible, many hurdles still have to be taken .there are some good news : the stl files can be generated easily because the format is simple and open .stl files can also be exported to other formats .mathematica for example allows to import it and convert it to other forms .programs like meshlab " allow to manipulate it .terminal conversions like admesh " allow to deal with stl files from the command line .other stand - alone programs like stl2pov " allow to convert it into a form which can be rendered in a ray tracer like povray .one major point is that good software to generate the objects is not cheap .the use of a commercial computer algebra system like mathematica can be costly , especially if a site licence is required .there is no free computer algebra software available now , which is able to export stl or 3ds or wrl files with built in routines .the computer algebra system sage , which is the most sophisticated open source system , has only export in experimental stage .it seems that a lot of work needs to be done there .many resources are available however .the following illustrations consist of mathematica graphics which could be printed .this often needs adaptation because a printer can not print objects of zero thickness .the first table summarizes information and industrial revolutions . + + of course , these snapshots are massive simplifications .these tables hope to illustrate the fast - paced time we live in .+ * remarks and acknowledgements*. 
the april 7, 2013, version of this document has appeared in . we would like to thank *enrique canessa*, *carlo fonda* and *marco zennaro* for organizing the workshop at ictp and inviting us, *marius kintel* for valuable information on openscad, *daniel pietrosemoli* at medialab prado for introducing us to using "processing" for 3d scanning, *gaia fior* for printing out demonstration models and showing how easy 3d printing can appear, *ivan bortolin* for 3d printing an apollonian cone for us, *stefan rossegger* (cern) for information on file format conversions, and *gregor lütolf* (http://www.3drucken.ch) for information on relief conversion and for being a true pioneer for 3d printing in the classroom. thanks to *thomas tucker* for references and pictures of the genus 2 group. thanks to *anna zevelyov*, director of business development at the artec group company, for providing us with an educational evaluation licence of artec studio 9.1, which allowed us to experiment with 3d scanning.

o. knill and e. slavkovsky. visualizing mathematics using 3d printers. in c. fonda, e. canessa and m. zennaro, editors, _low-cost 3d printing for science, education and sustainable development_. ictp, 2013. isbn 92-95003-48-9.

y. last. exotic spectra: a review of barry simon's central contributions. in _spectral theory and mathematical physics: a festschrift in honor of barry simon's 60th birthday_, volume 76 of _proc. sympos. pure math._, pages 697-712. amer. math. soc., providence, ri, 2007.

r. q. berry, g. bull, c. browning, d. d. thomas, k. starkweather, and j. h. . preliminary considerations regarding use of digital fabrication to incorporate engineering design principles in elementary mathematics education. , 10(2):167-172, 2010.
|
3d printing technology can help to visualize proofs in mathematics. in this document we aim to illustrate how 3d printing can help to visualize concepts and mathematical proofs. as already known to educators in ancient greece, models help to bring mathematics closer to the public. the new 3d printing technology makes the realization of such tools more accessible than ever. this is an updated version of a paper included in .
|
for the design of pedestrian facilities concerning safety and level of service , pedestrian streams are usually characterized by basic quantities like density or flow borrowed from fluid dynamics . up to nowthe experimental data base is insufficient and contradictory and thus asks for additional experimental studies as well as improved measurement methods .most experimental studies of pedestrian dynamics use the classical definition of the density in an area by , where gives the number of pedestrians in the area of size ] , and passing to the limit of areas of size zero is obviously not a well defined procedure .also the choice of the geometry of is important . for large convex areasit can be expected that finite size and boundary effects as well as influences of the shape of the area can be neglected , though it is almost always possible to cut out fairly large areas ( of complicated shape ) containing no person .further , the design of pedestrian facilities is usually restricted to an order of magnitude of to and rectangular geometries .in addition the number of pedestrians is small and thus the scatter of local measurements has the same order of magnitude as the quantity itself , see e.g. the time development of the density in front of a bottleneck in . often , due to cost restrictionsthe density was measured at a certain point in time , while the measurement of speed was averaged over a certain time interval .but the process of measurement and averaging influences the resulting data even for systems with few degrees of freedom as the movement along a line , see .the large progress in video techniques during recent years has made feasible the gathering of much more detailed data on pedestrian behavior , both in experiments and in real life situations , than was possible only a decade ago .the higher detail asks for a reevaluation of the methods of defining and measuring basic quantities like density , flow and speed , as the methods to get time and space averages encompassing a hundred persons over minutes may not be suitable for a resolution of a second and a square meter .basic quantities of pedestrian dynamics are the density ] and speed of a person or a group of persons , and the flow through a door or across a specific line ] & ] + & 4.06 & 0.88 & 47.0 + & 4.33 & 0.40 & 11.7 + & 4.07 & 0.62 & 13.7 + & 3.90 & 0.23 & 4.7 + & 3.83 & 0.29 & 10.6 + _ for the different densities from fig .3 we have the data in table [ tab1 ] : the difference between the density averages for and are within the limits of the fluctuations , see table [ tab1 ] , but it is obvious that the density distribution is not homogeneous over the entire camera area .the voronoi cells carry information from outside the rectangle , where the density may be different. this may be the reason for part of the differences .+ using voronoi cells , a density distribution is attributed to every point in space . however , this distribution oscillates with stepping , so for best results only time averages have to be taken over the time of at least a step , or some smoothing of the oscillations is needed ( see below ). given trajectory of a person , one standard definitions of the velocity with fixed , but arbitrary is alternatively , with given entrance and exit time the velocity is \ ] ] with . 
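before turning to speed, here is a sketch of the voronoi-based density measurement discussed above (a simplified variant that averages the inverse voronoi-cell areas of the pedestrians inside the measurement rectangle instead of integrating the density distribution over the rectangle; scipy is assumed, and cells that are unbounded at the border are skipped):

```python
import numpy as np
from scipy.spatial import ConvexHull, Voronoi

def voronoi_densities(positions):
    """1 / (Voronoi cell area) for every pedestrian with a bounded cell;
    positions is an (n, 2) array of head positions at one instant."""
    vor = Voronoi(positions)
    dens = np.full(len(positions), np.nan)
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if -1 in region or len(region) < 3:        # unbounded cell at the border
            continue
        cell = vor.vertices[region]
        dens[i] = 1.0 / ConvexHull(cell).volume    # in 2d, .volume is the area
    return dens

def mean_density(positions, rect):
    """Average 1/|cell| over the pedestrians inside rect = (xmin, xmax, ymin, ymax)."""
    xmin, xmax, ymin, ymax = rect
    d = voronoi_densities(positions)
    inside = ((positions[:, 0] >= xmin) & (positions[:, 0] <= xmax) &
              (positions[:, 1] >= ymin) & (positions[:, 1] <= ymax))
    return np.nanmean(d[inside])

# toy example: 40 pedestrians in a 10 m x 4 m corridor, 4 m x 4 m measurement area
rng = np.random.default_rng(1)
pos = np.column_stack([rng.uniform(0, 10, 40), rng.uniform(0, 4, 40)])
print(mean_density(pos, (3.0, 7.0, 0.0, 4.0)))     # persons per square metre
```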
the average standard speed in an observation area a is then where the sum is taken over all persons that are in a for the entire time interval $ ] .these definitions seem simple , but there are two sources of uncertainty .the velocity of an extended object is generally ( and reasonably ) defined as the velocity of its center of mass , and for pedestrians that is hard to detect .moreover , pedestrians can and do change shape while walking .so the simple approach works only for distances long enough to make errors from shape changes and in placing the supposed center of mass unimportant .a second problem comes from the fact that velocity is a vector , and the movement of people is not straight .thus the average of the local speeds will be bigger than the value of distance per time for longer distances .notably head tracking gives tracks that can be decomposed into a fairly uniform principal movement and a local swaying superimposed .the swaying shows the steps , it varies between individuals and is larger at low speeds . of course , the head sways more than the center of mass , and it can do some independent movements , but these can dominate only at very low speeds . for most models of pedestrian movement , only the principal movement is of interest .the swaying movement of shoulders may be important in estimating the necessary width for staircases and corridors to allow overtaking .the separation of principal movement from swaying movement could be done by fourier analysis , but that requires a trajectory many steps long .a way to do it locally is to detect positions of identical phase of the movement and interpolate them .as long as there is appreciable forward motion ( ) , the mode of movement is the swinging of the legs in the direction of movement with approximately their free pendulum frequency ( 1.5hz-2hz ) . in this modethere is a regular sequence of points of maximum ( positive ) curvature , minimum ( negative ) curvature , and zero curvature , which correspond to the times of setting down the right foot , the left foot , and having one foot on the ground while the other just passes the standing leg in forward motion , which are the points we use .these points are easy to detect . below that speed ,the mode of stepping changes to the whole body swinging right and left with a frequency smaller than 1hz and only a small forward component , and there may be multiple points with zero curvature within one step .however , typically the positions of setting down a foot give a dominant extremum of curvature , while in the part in between the curvature will be close to zero with more than one zero per step . in this case , we take the middle between the maximum and the minimum curvature point as interpolation point .this has been possible down to speeds of 5 cm / s .below that , steps can only be guessed , they can not be detected reliably .the speed of the principle movement is calculated by interpolating these points now eliminates most of the swaying movement and gives a good approximation to the movement of the center of mass .the requirement of identical phase asks for taking only every other zero curvature position , but for persons with symmetric gait , taking every zero curvature position gives a better resolution with only marginally more swaying . 
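a sketch of this step detection and speed computation on a sampled head trajectory (finite differences and a simple sign-change test stand in for the curvature analysis described above; all names are illustrative):

```python
import numpy as np

def zero_curvature_times(t, x, y):
    """Times at which the signed curvature of the (x, y) head trajectory changes
    sign; for a regular gait this happens once per step."""
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
    curv = dx * ddy - dy * ddx                    # numerator of the signed curvature
    i = np.where(np.diff(np.sign(curv)) != 0)[0]  # samples bracketing a zero
    # linear interpolation of the crossing between samples i and i+1
    return t[i] - curv[i] * (t[i + 1] - t[i]) / (curv[i + 1] - curv[i])

def phase_speeds(t, x, y):
    """Speed of the principal movement: displacement between consecutive
    zero-curvature points divided by the elapsed time."""
    tz = zero_curvature_times(t, x, y)
    xz, yz = np.interp(tz, t, x), np.interp(tz, t, y)
    ds = np.hypot(np.diff(xz), np.diff(yz))
    return (tz[:-1] + tz[1:]) / 2.0, ds / np.diff(tz)

# synthetic swaying walker: 1.4 m/s forward motion plus a 2 hz lateral sway
t = np.linspace(0.0, 10.0, 1000)
x, y = 1.4 * t, 0.05 * np.sin(2 * np.pi * 2.0 * t)
times, speeds = phase_speeds(t, x, y)
print(np.round(speeds[:5], 3))                    # close to the underlying 1.4 m/s
```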
for analysis, we will take this as curve of principal movement .similarly , the velocity vector will be obtained by computing the difference quotient of position and time between zero curvature positions , and attached to the intermediate time .standard measurement of flow through a door or across a line is done similar to density measurements , by counting heads passing within a time interval .this suffers from the same problems as the standard density measurement , namely large scatter and low time resolution . using the voronoi cells to obtain fractional counts( half a person has passed if half of the voronoi cell has passed ) gives a much smoother voronoi flow .this still does not allow a useful passing to the limit of , but the moving average over about half the average time difference between persons passing gives a sufficiently smooth result .[ fl_co ] compares the product of averaged standard speed and density , , with the direct measurement of the flow as voronoi density passing the middle line .the density was determined from a rectangle symmetric to the line for the flow of width and the averaged speed of the persons inside this rectangle from a symmetric time difference of .the two product evaluations agree reasonably well with the voronoi flow , but in spite of the fact that the measurements average over time and space they show much faster variations in time , and the total flow they calculate is somewhat less than the number of persons passing . for , one person of 180 is missing in the integration , for it is even 7 persons missing . depending on the purpose of the measurements, this may be a serious problem .the high resolution measurement of speed and density allows to follow individuals on their way through some obstacles and look what combinations of speed and density they have . in fig .[ sp_vel ] the persons start in front of the bottleneck with low speed , no .72 and 80 with large space while no .58 is already in the jamming area .they pass the congestion area and pick up speed inside the bottleneck .it also allows correlating momentary speed and personal space , as well as comparing personal space and speed in general for whole groups of persons .[ sp_vel ] and [ sp_v_gen ] show a substantial difference in this relation before and inside the bottleneck , indicating that the individual speed depends more on the expectation of the future ( walking into or out of high density regions ) than on the present situation .the high resolution makes it also possible to analyze how the accelerations observed are related to changes in the space available , and to the space of people in front . on the resolution presented hereit becomes clear that the correlation of speed and available space in an instationary situation differs considerably from that in the stationary situation described by the fundamental diagram .the most important relation for any model ( and for much of the analysis of pedestrian movement ) is the so - called fundamental diagram , which can be given either as relation speed versus density or flow versus density .using the improved methods of measurement , the quality of the resulting diagram is greatly enhanced . as an example fig.([fd ] ) shows the analysis of a series of measurements of the fundamental diagram for single file movement performed with different numbers of persons ( 14 to 70 ) in the walking area , which covers all densities of interest from low ( almost free walking ) to jamming density ( near standstill ) . 
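for the single-file runs underlying fig. ([fd]), instantaneous density-speed pairs can be assembled from the trajectories as sketched below (the one-dimensional voronoi density used here is made precise in the next paragraph; the numbers are purely illustrative):

```python
import numpy as np

def single_file_fd_points(x_prev, x_now, x_next, dt):
    """One fundamental-diagram sample (rho_i, v_i) per interior pedestrian i.
    The 1d Voronoi cell of pedestrian i reaches to the midpoints towards its
    neighbours, so its length is (x[i+1] - x[i-1]) / 2 and rho_i is its inverse;
    v_i is a central difference of the position over 2*dt."""
    x_prev, x_now, x_next = map(np.asarray, (x_prev, x_now, x_next))
    cell = 0.5 * (x_now[2:] - x_now[:-2])          # interior pedestrians only
    rho = 1.0 / cell
    v = np.abs(x_next[1:-1] - x_prev[1:-1]) / (2.0 * dt)
    return rho, v

# toy example: a platoon with 0.8 m headway moving at 1 m/s
dt = 0.5
x0 = np.cumsum(np.full(10, 0.8))
rho, v = single_file_fd_points(x0 - 1.0 * dt, x0, x0 + 1.0 * dt, dt)
print(np.round(rho, 2), np.round(v, 2))            # 1.25 persons/m at 1.0 m/s
```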
with the high density , stop - and - go waves developed , see fig . 5 in .for the one dimensional case with position of pedestrian the calculation of the voronoi density distribution reduces to \\ 0 & \mbox{otherwise } \end{array } \right.\ ] ] the definition of density is according to eq .[ dvoroni ] .( [ fd ] ) ( left ) shows that use of greatly enhances the quality of the diagram , due to the fine density resolution .use of the standard density contracts the band of s versus d onto a few vertical lines which are much longer than the width of this band and thus reduces the precision . in fig.([fd] ) , ( right ) one can see that with the standard speed the lowest speed is about 0.02 m / s , while gives values of zero .actually , some people were standing in one position for about 30s - the duration of a stop phase . for higher speeds, there is little difference between and .the combination of and gives the best diagram for the full scale of densities .the combination of modern video equipment with new methods for extracting relevant data allows an unprecedented depth of analysis of pedestrian behavior .the method for determining density is based on the concept of a voronoi cell as personal space of a pedestrian and allows a resolution down to individual level .the concept of determining velocities from difference quotients of positions with identical phase of the stepwise movement gets the resolution down to a single step .this level of resolution allows mathematical combination of data that are not valid for large scale averages .moreover they are able to resolve stop and go waves and allow a analysis of instationary processes on a microscopic level .the experiments are supported by the dfg under grant kl 1873/1 - 1 and se 1789/1 - 1 .we thank m. boltes for his support in preparation of videos for analysis .v. m. predtechenskii and a. i. milinskii . .amerind publishing , new dehli , 1978 .translation of : proekttirovanie zhdanii s uchetom organizatsii dvizheniya lyuddskikh potokov , stroiizdat publishers , moscow , 1969 .a. seyfried , b. steffen , t. winkens , a. rupprecht , m. boltes , and w. klingsch . .in c. appert - rolland , f. chevoir , p. gondret , s. lassarre , j. lebacque and m. schreckenberg ( eds . ) _ traffic and granular flow 2007 _p. 189199 .springer berlin heidelberg , 2009 .a. seyfried and a. schadschneider . .in h. umeo , s. morishita , k. nishinari , t. komatsuzaki , and s. bandini ( eds . ) , _ cellular automata _ , _ lecture notes in computer science_ v. 5191/2008 , pp.563566 .springer berlin heidelberg , 2008 .a. seyfried , m. boltes , j. khler , w. klingsch , a. portz , t. rupprecht , a. schadschneider , b. steffen and a. winkens . .in _ pedestrian and evacuation dynamics 2008 _ springer , ( to appear ) .preprint : arxiv:0810.1945v1
|
the progress of image processing during recent years allows the measurement of pedestrian characteristics on a "microscopic" scale at low cost. however, density and flow are concepts of fluid mechanics defined for the limit of infinitely many particles. standard methods of measuring these quantities locally (e.g. counting heads within a rectangle) suffer from large data scatter. the remedy of averaging over large spaces or long times reduces the possible resolution and inhibits the gain obtained by the new technologies. in this contribution we introduce a concept for measuring microscopic characteristics on the basis of pedestrian trajectories. assigning a personal space to every pedestrian via a voronoi diagram reduces the density scatter. similarly, calculating direction and speed from position differences between times with identical phases of movement gives low-scatter sequences for speed and direction. in closing, we discuss methods to obtain reliable values for derived quantities and new possibilities for in-depth analysis of experiments. the resolution obtained indicates the limits of stationary-state theory. keywords: video tracking, voronoi diagram, pedestrian modeling, velocity measurement, pedestrian density
|
the accurate acquisition of physical objects has numerous applications , especially in the entertainment industry .while there exist commercial systems for digitizing rigid objects , the acquisition of deforming objects remains a challenge due to the complex changes in geometry over time .a rigid object can be scanned sequentially from multiple viewpoints to accurately capture the complete surface , whereas scanning the entire surface of a deforming object would require a complex and expensive physical setup involving multiple synchronized sensors , which may still be subject to occlusions .recently , several techniques were proposed that solve this problem by using a template shape as a geometric and topological prior for the reconstruction and by deforming the template to fit to the observed data . in some of these methods ,the observed data comes from a set of single - view scans .the single viewpoint assumption greatly simplifies the acquisition process .template - based tracking approaches are shown to lead to visually pleasing results for numerous examples .however , the deformation of the unobserved side of the object is generally only guided by a smoothness function .we combine a tracking - based approach with fitting a volumetric elastic model to improve the estimation of the unobserved side of the object .we employ a linear finite element method ( fem ) to solve for physical deformations when a given force is applied .our method proceeds in two steps : first , we use a tracking approach to deform the template model .second , we use the displacements of the observed vertices of the template mesh found using the tracking step in a fem to predict the displacements of the unobserved vertices . hence , rather than smoothly deforming the unobserved side of the model , we deform the unobserved side through the volumetric mesh of the fem model .we repeatedly linearize the deformation in the fem at its current deformation state .note that our method allows for tracking data acquired using single , multiple , or moving viewpoints . while deformable models have been introduced to computer vision and computer graphics 30 years ago ,here we combine modern non - rigid template - based tracking with a volumetric elastic model for completion of the deformation at the unobserved side only .our major contributions are therefore : * the combination of a non - rigid template - based tracking approach with a linear finite element model to robustly track the complete geometry of an object whose deformation is captured from a single viewpoint only . * the use of a fem - based model to deform the unseen side leading to more physically plausible results than by using a smoothness cost in the template - based tracking . * tracking linear and non - linear deformations by repeatedly linearizing the fem model at its current deformation state .this paper presents the following three major improvements over the preliminary version of this work .first , we present a computationally more efficient energy formulation to track the deformable object using a non - rigid iterative closest point framework .second , we propose an iterative method for the fem estimation that considers forces at supporting surfaces of the model. 
our method does not require the measurement of forces but estimates the forces up to a scale factor from the fem model and the deformation .third , we evaluate the performance of our method extensively using numerous synthetic and scanned data sequences and compare our results to our preliminary findings .special attention is paid to the influence of both synthetic and real scanner noise , as well as to the influence of the fem step on the result .this section reviews work related to tracking surfaces and predicting shape deformations using finite element models .computing the correspondence between deformed shapes has received considerable attention in recent years and the surveys of van kaick et al . and tam et al . give a comprehensive overview of existing methods . the review in this paperfocuses on techniques that do not employ a priori skeletal models or manually placed marker positions , as we aim to minimize assumptions about the structure of the surface .none of the following approaches combine physics - based models with tracking approaches .the following techniques solve the tracking problem using a template as shape prior .de aguiar et al . tracked multi - view stereo data of a human subject that is acquired using a set of cameras .the algorithm uses a template of the human subject that was acquired in a similar posture as the first frame . the approach used for tracking first useslaplace deformations of a volumetric mesh to find the rough shape deformation and refines the shape using a surface deformation .the deformation makes use of automatically computed features that are found based on color information .vlasic et al . developed a similar system to track multi - view stereo data of human subjects .tung and marsuyama extended de aguiar et al.s approach by using 3d shape rather than color information to find the features .li et al . proposed a generic data - driven technique for mesh tracking .a template is used to model the rough geometry of the deformed object , and the algorithm deforms this template to each observed frame .a deformation graph is used to derive a coarse - to - fine strategy that decouples the complexity of the original mesh geometry from the representation of the deformation .cagniart et al . proposed an alternative where the template is decomposed into a set of patches to which vertices are attached .the template is then deformed to each observed frame using a data term that encourages inter - patch rigidity .cagniart et al . extended this technique to allow for outliers by using a probabilistic framework . in this work , we combine a template fitting method with a finite element step .the following techniques solve the tracking problem without using a shape prior .however , the methods assume prior information on the deformation of the object .mitra et al . modeled the surface tracking problem as a problem of finding a smooth space - time surface in four - dimensional space . to achieve this , they exploited the temporal coherence in the densely sampled data with respect to both time and space .sharf et al . used a similar concept to find a volumetric space - time solid .their approach assumes that each cell of the discretized four - dimensional space contains the amount of material that flowed into it .wand et al . 
used a probabilistic model based on bayesian statistics to track a deformable model .the surface is modeled as a graph of oriented particles that move in time .the position and orientation of the particles are controlled by statistical potentials that trade off data fitting and surface smoothness .tevs et al . extended this approach by first tracking a few stable landmarks and by subsequently computing a dense matching .furukawa and ponce proposed a technique to track data from a multi - camera setup .instead of using a template of the shape as prior information , their technique computes the polyhedral mesh that captures the first frame and deforms this mesh to the data in subsequent frames .liao et al . took images acquired using a single depth camera from different viewpoints while the object deforms and assembled them into a complete deformable model over time .popa et al . used a similar approach that is tailored to allow for topological consistency across the motion .li et al . avoid the use of a template model by initializing the tracking procedure with the visual hull of the object .zheng et al . track a deformable model using a skeleton - based approach , where the skeleton is computed using the data .several authors suggested learning the parameters of linear finite element models from a set of observations .we use such a method in combination with a tracking method to find an accurate tracking result of the observed and the unobserved side of the model .for a summary of linear finite element methods for elastic deformations , refer to bro - nielsen .lang and pai used a surface displacement and contact force at one point along with range - flow data to estimate elastic constants of homogeneous isotropic materials using numerical optimization .becker and teschner presented an approach to estimate the elasticity parameters for isotropic materials using a linear finite element method .the approach takes as input a set of displacement and force measurements and uses them to compute the young s modulus and poisson s ratio ( see bro - nielsen ) with the help of an efficient quadratic programming technique .eskandari et al . presented a similar approach to estimate both the elasticity and viscosity parameters for viscoelastic materials using a linear finite element method .this approach reduces the problem to solving a linear system of equations and has been shown to run in real - time .syllebranque and boivin estimated the parameters of a quasi - static finite element simulation of a deformable solid object from a video sequence .the problems of optimizing the young s modulus and the poisson ratio were solved sequentially .schnabel et al . used finite element models to validate the non - rigid image registration of magnetic resonance images .nguyen and boyce presented an approach to estimate the anisotropic material properties of the cornea .bickel et al . proposed a physics - based approach to predict shape deformations .first , a set of deformations is measured by recording both the force applied to an object and the resulting shape of the object .this information is used to learn a relationship between the applied force and the shape deformation , which allows to predict the shape deformation for new force inputs .the technique assumes the object material to have linear elasticity properties .bickel et al . 
extended this approach to allow the modeling and fabrication of materials with a desired deformation behavior using stacked layers of homogeneous materials .recently , choi and szymczak used fem to predict a consistent set of deformations from a sequence of coarse watertight meshes but their method does not apply to single - view tracking , which is the focus of our method . finite element models are used in medical applications to estimate the material parameters of different tissues because the stiffness of a particular tissue can be used to detect anomalies . for instance , elasticity information can help to segment medical data or to detect cancerous tissues or malignant lesions . in this context ,lee et al . proposed a method to estimate material parameters and forces of tissue deformations based on two given frames of observations .this method , which is similar in spirit to our approach , assumes that for both frames , the segmented boundaries of the organ ( which are 3d surfaces ) are given .the approach proceeds by repeatedly simulating a deformation from the start frame for a set of material parameters and input forces , and by measuring the distance of the simulated deformation to the given target surface .the distance to the target surface is then used to improve the estimated material parameters and input forces using a gradient descent technique . unlike our method, this approach operates exclusively on a tetrahedral volumetric mesh and therefore has limited resolution .a more serious limitation of the approach is the need for good initial material parameters and force directions .while good initial estimates are known for many organic tissues , we do not have access to good initial estimates in our application .hence , our approach takes a different strategy to optimize the material parameters that does not require initial estimates .the input to our method consists of a closed template , a set of contact points on along with force directions that lead to the deformation of the model , and a set of observed 3d video frames ( point clouds ) capturing the deformation .we assume that the template has roughly the shape of the object before the deformation , and is approximately aligned to the first frame .the main idea of our approach is to combine tracking the observed point cloud data using a template deformation approach and predicting the deformation on the unobserved side of the model using a linear fem . to track the data, we use an energy optimization approach that aims to find deformation parameters that deform to be close to the observed data and that maintain the geometric features of .let denote the template that was deformed to fit to .afterwards , we displace the vertices of that are not observed in the data with a linear fem using the given contact point and force direction . to compute the fem deformation, we use a down sampled version of that is tetrahedralized , which is denoted by in the following .let denote the deformed tetrahedral mesh . finally , we readjust the shape of the unobserved side of to take this displacement into account . when multiple frames are recorded , we start the tracking and fem deformation for frame from and , respectively . fig .[ fig_overview ] gives a visual overview of the proposed method .we use the following representation for the deformation of .let denote the vertices of , denote their position vectors , and denote their homogeneous coordinates .furthermore , let denote the unit outer normal at . 
we deform by applying a transformation matrix to each of .the transformation matrix depends on six parameters : three parameters describing a translation , two parameters describing a unit rotation axis , and one parameter describing a rotation angle .that is , describes a rigid transformation as where is a coordinate transformation that expresses a point in a coordinate frame centered at and is a coordinate transformation that expresses a point in a coordinate frame rotated by angle around axis . expressingthe transformation in a coordinate system centered at has the advantage that differences between the transformations of neighboring vertices can be measured directly .this section presents our energy optimization approach to find the transformation parameters , and that lead to a mesh that is close to the point cloud data .we aim to deform the template to the observed data . when fitting to the first frame , we start by deforming using a global rigid transformation to fit the observed data as much as possible .we consider all vertices of and compute the nearest neighbor of the deformed point in the point cloud data .let denote the unit outer normal at in the point cloud . to rigidly align to , we find by minimizing with respect to seven degrees of freedom ( one for scaling , three for rotation , three for translation ) , where is a weight term and denotes the scalar product .note that the term measures the distance of the transformed point to the supporting plane of its nearest data point , which leads to a faster convergence rate of the algorithm than using the distance from to .note that we fix the associated point of for this step to obtain a differentiable energy function .the scaling term accommodates slight errors in the calibration of the 3d scanner used to acquire the template shape and/or the calibration of the camera system used to acquire the frames .the weight is used to distinguish valid data observations from invalid ones , and should be one for vertices that have a valid data observation and zero for all other vertices . to exclude data points that are inconsistent with , is set to zero if the distance between and is above or if the angle between and is above , where is the average edge length of the undeformed template and and are parameters . setting all of the remaining to one has the problem that many vertices of that are close to the acquisition boundary of may pick the same nearest neighbor , which leads to poor assignments . to remedy this problem , we wish to set to zero if is located on the acquisition boundary of .it is not straight forward to define a `` boundary '' on a noisy point cloud .we use a heuristic that considers points of to be part of the boundary if many vertices of choose as nearest neighbor . to find the boundary points in a way that is independent to global resolution changes of both and , we count for each point of the number of vertices of that chose it as nearest neighbor and average this count over all points of that were chosen by at least one point of . a point of then considered a boundary point if its count exceeds twice the average count . in all remaining cases , set to one .to fit to any frame , we deform in a non - rigid fashion by changing , and to minimize the energy where and are weights for the individual energy terms , is the set of indices corresponding to points that have geodesic distance at most of , and denotes the cardinality of a set . 
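a sketch of the acquisition-boundary heuristic used for the weights above (illustrative names; a k-d tree stands in for whatever nearest-neighbour search is actually used, and the distance and normal-angle tests described above are omitted):

```python
import numpy as np
from scipy.spatial import cKDTree

def boundary_weights(template_vertices, cloud_points):
    """0/1 weight per template vertex: 0 if its nearest data point lies on the
    acquisition boundary, i.e. if that data point is chosen as nearest neighbour
    by more than twice the average count (averaged over data points chosen at
    least once), 1 otherwise."""
    tree = cKDTree(cloud_points)
    _, nn = tree.query(template_vertices)              # nearest data point per vertex
    counts = np.bincount(nn, minlength=len(cloud_points))
    avg = counts[counts > 0].mean()
    is_boundary = counts > 2.0 * avg                   # heavily over-chosen points
    return np.where(is_boundary[nn], 0.0, 1.0), nn

# toy call with random points (the distance and normal-angle tests are omitted)
rng = np.random.default_rng(0)
V = rng.uniform(size=(500, 3))                         # template vertices
P = rng.uniform(size=(200, 3))                         # observed point cloud
w, nn = boundary_weights(V, P)
print(int(w.sum()), "of", len(V), "vertices keep a valid data term")
```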
as before , is the average edge length of and is a parameter .as above , is set to zero if the angle between and is above , if the distance between and is above , or if is a boundary point , and to one otherwise .the transformation matrices are computed according to equation [ deformation_matrix ] .the data term drives the template mesh to the observed data .however , using only this term results in an ill - posed problem .we therefore add the smoothness term to act as a regularization term that encourages smooth transformations .unlike previously used regularization terms that measure the difference between the transformations , measures the differences in translations and rotation angles in a local coordinate system centered at . to obtain results that are invariant under rigid transformations , it is important to initialize the transformation parameters in a way that is invariant to rigid transformations of the scene .initializing and to zero yields the initial identity transformation regardless of how is initialized .a natural rotation - invariant choice to initialize is .if were to depend on the difference in rotation axes with this initialization , would not be zero in the rest state .hence , the difference in does not increase .we found that in practice , helps to avoid self - intersections during the deformation , which is important as self - intersections might cause problems in the fem step . to minimize , we start by encouraging smooth transformations by setting and .similar to li et al . , whenever the energy does not change by much , we relax the smoothness weight as to give more weight to the data term .we stop when the relative change in energy + , where is the iteration number , is less than or when is smaller than . to obtain a differentiable energy function, we do not update the associated point of for a fixed set of weights .that is , is updated every time the weight is changed .recall that the distance to the nearest neighbor used in is limited by the template resolution . to allow for larger deformations, we use a multi - resolution approach as follows .we compute a multi - resolution hierarchy of by collapsing edges of the mesh according to garland and heckbert s geometry criterion .we do not collapse edges if this would lead to a self - intersecting mesh .we perform the test whether an edge collapse leads to self - intersections greedily by performing the collapse and testing if self - intersections occur . in each resolution step , we halve the number of vertices .we stop the collapse when the base mesh contains about 1000 vertices or when no more valid edges can be found for the edge collapse operation .once is minimized for resolution level , we consider the mesh of the next higher resolution level .for the vertices of level that are present in level , we initialize the transformation parameters to the result of the previous resolution . 
for the remaining vertices , initialized to , and and are found by minimizing with respect to the indices that are not present in resolution level .this multi - resolution framework works well when the geometric complexity is approximately linked to the amount of deformation .however , in cases where most of the deformation occurs in feature - less regions of the surface , some deformation detail may be lost by this multi - resolution framework .a possible remedy is to use a deformation graph to compute a multi - resolution framework .while the solution yields a globally smooth deformation field by design , it is not guaranteed to give a deformation field that is locally smooth at every vertex . instead, it may happen that a single vertex is transformed by a significantly different deformation than its neighbors , thereby generating a new feature in the geometry .if this happens , we can optionally post - process the result as follows .for every vertex of , we consider the minimum of over all in the one - ring neighborhood of .if this minimum is larger than two , which means that the distance of to each of its neighbors has at least doubled during the deformation , we set the transformation parameters of to the average of the transformation parameters of the one - ring neighbors of .the average rotation axis is computed using spherical linear interpolation . in our experiments, we observed that the post - processing step was usually only required because of the fem step in the previous frame .that is , the fem step caused some vertices to move relatively far from their neighbors when updating the unobserved side of the previous frame .this in turn led to a lack of smoothing as contained few points .a possible remedy to this problem is to use a more complex multi - resolution framework , as discussed above .we chose not to implement this solution , because in practice , we found that this post - processing was usually not crucial in most examples . in our experiments ,less than of the frames were influenced by this post - processing .our preliminary work used a data energy based on a point - to - point distance , a smoothness energy that varied depending on the differences in rotation axes , and an energy designed to discourage self - intersections by repelling close - by vertices that are not neighbors . if we let denote the complexity to find , and assume to have constant complexity ( which holds for regularly sampled templates ) , a single evaluation of the energy used by wuhrer et al .has a complexity of for a template mesh with vertices as their energy requires the computation of all distances between pairs of vertices on the mesh .in contrast , evaluating has a complexity of .furthermore , it is known that using a point - to - plane distance instead of a point - to - point distance leads to faster convergence rates .hence , we reduced the computational complexity of our method .furthermore , as will be shown in section [ sec : results ] , using results in less self - intersections and higher data accuracy than using the energy by wuhrer et al .consider the situation after was deformed to frame using the approach outlined in the previous section , and denote the deformation of by .we call the vertices in that were deformed using valid data observations_ observed vertices _ , and we call the remaining vertices _unobserved vertices_. 
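before moving on to the fem step , two of the tracking heuristics described above can be made concrete : the acquisition - boundary test used to zero out the weights , and the optional post - processing of isolated outlier vertices . both sketches use illustrative names only ; in the second one , the quantity compared against two is taken to be the ratio of deformed to undeformed distances to the one - ring neighbours , which is how we read the description , and slerp_average is a hypothetical helper that averages rotation axes by spherical linear interpolation .

import numpy as np

def boundary_weights(nn_index, nn_dist, normal_dot, dist_thresh, cos_angle_thresh, n_data):
    # nn_index[i]: nearest data point of template vertex i; nn_dist[i]: its distance;
    # normal_dot[i]: dot product of the two unit normals
    w = np.ones(len(nn_index))
    w[nn_dist > dist_thresh] = 0.0            # too far from the data
    w[normal_dot < cos_angle_thresh] = 0.0    # normals disagree too much
    counts = np.bincount(nn_index, minlength=n_data)   # how often each data point was chosen
    chosen = counts[counts > 0]
    if len(chosen):
        boundary = counts > 2.0 * chosen.mean()        # chosen far more often than average
        w[boundary[nn_index]] = 0.0
    return w

def postprocess_outliers(verts_orig, verts_def, one_rings, params, slerp_average):
    # params[i] = (translation, rotation axis, rotation angle) of vertex i
    new_params = list(params)
    for i, ring in enumerate(one_rings):
        ratios = [np.linalg.norm(verts_def[i] - verts_def[j]) /
                  max(np.linalg.norm(verts_orig[i] - verts_orig[j]), 1e-12) for j in ring]
        if ring and min(ratios) > 2.0:        # vertex drifted away from all of its neighbours
            t = np.mean([params[j][0] for j in ring], axis=0)
            a = slerp_average([params[j][1] for j in ring])
            phi = float(np.mean([params[j][2] for j in ring]))
            new_params[i] = (t, a, phi)
    return new_params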
unobserved vertices were deformed using smoothness assumptions on the deformation field only .this section describes how to displace the unobserved vertices using a linear fem .we aim to reposition the unobserved vertices of using a finite element model .we use with as start position for the fem step .a tetrahedral mesh is used to compute the fem .the initial tetrahedral mesh of is obtained by tetrahedralizing a simplified version of .this simplification is necessary to make the algorithm more time and space efficient .the tetrahedral mesh contains vertices on the surface of the model ( these vertices are a subset of the vertices of ) and vertices that are internal to the model . in the following ,let denote this tetrahedral mesh .the fem linearly relates the displacements of the vertices and the forces applied to the tetrahedral mesh using a stiffness matrix that depends on the geometry of the tetrahedral mesh and on two elasticity parameters , the young s modulus and the poisson ratio .let denote the vector of forces applied to the vertices of the tetrahedral mesh and let denote the vector of displacements of the vertices of the tetrahedral mesh . both and have dimension , where is the number of vertices of the tetrahedral mesh .furthermore , let and denote the force and displacement vectors of vertex .then , this equation can be used in three ways .* given all displacements and forces , and can be estimated by solving a linear system of equations , as shown by becker and teschner . in principle , if the displacements and forces at all vertices of a single tetrahedron are known , the approach by becker and teschner can estimate and .however , due to numerical instabilities when using a single tetrahedron , in practice , redundant observations are commonly used to estimate the material parameters . *given , , and , we can compute by a matrix multiplication . * given and along with at least three fixed displacements , equation [ fem_equation ] can be modified such that is invertible .if for each vertex with non - fixed displacement either the force or the displacement are provided , we can compute the missing displacements and force vectors by rearranging the linear system of equations .we rely on forces in addition to displacements in the estimation of unobserved vertices because overspecified boundary conditions are required to estimate material parameters .note that this simple linear fem is only suitable to model small deformations , as large rotations may cause artifacts .however , this problem does not occur in our case as we linearize the deformation locally at each frame by modeling deformations between consecutive frames , which ensures that only small deformations are considered .because of using the deformed tetrahedral template from the previous tracking frame as the rest state , the material parameters estimated by our method are not expected to be physically meaningful .it is easy to see that we do not have enough constraints to use equation [ fem_equation ] directly to solve for all missing information . from the tracking step , we computed displacements for all surface vertices , but not for the internal vertices .these displacements are reliable for observed vertices only , and we aim to use the fem model to find reliable displacements for the unobserved ones .furthermore , we are given the direction of the force at the contact point .note that we can normalize the length of the force direction at the contact point , since changing the length of only scales ( see e.g. 
becker and teschner ( * ? ? ?* equation 17 ) ) .the forces at internal vertices can be assumed to be zero as no external forces can act on the interior of the model .note that other contact surfaces of the model , such as the table the model rests on , are not modeled explicitly in our framework . instead , we rely on the observed surface points to model these additional constraints .this leaves us with the following unknown or unreliable quantities : , , the displacements at internal and unobserved vertices , and the forces at surface vertices that are not contact points .prior work assumed all forces to be zero , solved for and by only considering the points with known displacements and forces , and used the estimated and to solve for the displacements of unobserved vertices .this approach does not model all physical constraints as forces from the contact with supporting surface are being set to zero .hence we did not adopt this approach .we propose an iterative method to find reliable displacements at unobserved vertices and demonstrate in section [ sec : results ] that this change leads to an improvement of the tracking results .the method starts by using the displacements at surface vertices computed using the tracking step as an initial estimate .that is , is computed as the difference between the vertex coordinate of on and its corresponding point on .this estimate is diffused to internal vertices using a thin - plate spline ( tps ) deformation .the following description of tps closely follows the description by dryden and mardia ( * ? ? ?* chapter 10.3 ) .let ] denote the matrix of displacement vectors sorted in the same order as in .the tps deformation is , where is a -dimensional vector , is a matrix , is a matrix , and ^t$ ] is a -dimensional vector with we find by solving the linear system of equations where is a vector containing at each position , and is a matrix containing the vectors as its rows .we then evaluate at the internal nodes of to obtain an initial estimate for the displacements .let denote the vector of length containing all initially estimated displacements .these displacements are used to iteratively update the material parameters using ( a1 ) and the estimated forces using ( a2 ) .finally , the estimated , and along with fixed displacements at observed vertices are used to estimate at unobserved vertices using ( a3 ) .algorithm [ alg_estimate_fem ] summarizes this approach .compute based on and a thin - plate spline deformation initialize a set of indices corresponding to known forces use the estimated to find at unobserved vertices after the fem step , the tetrahedral mesh of is obtained by simply updating the vertex positions of using . in our experiments ,we set ( since we found this to be sufficient in our experiments ) .once the fem step is completed for frame , it remains to adjust the transformation parameters , and to capture the new deformation .we achieve this by minimizing where is the position of the point corresponding to vertex on the deformed tetrahedral mesh .note that we only optimize the energy with respect to parameters that influence unobserved vertices of . in our experiments ,we set and .the implementation of the algorithm is in c++ and uses a quasi - newton method for all of the optimization steps . for each optimization step , at most 1000 iterations are used .the tetrahedralization is computed using tetgen ( http://tetgen.berlios.de ) . 
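before turning to the remaining implementation details , the two linear solves at the core of the fem step can be sketched . the first diffuses the surface displacements to the internal nodes with a thin - plate spline ; the radial kernel phi(r) = r used below is a common three - dimensional choice and is an assumption here , since the exact kernel of the cited formulation is not restated above . the second rearranges k u = f so that the unknown displacements can be computed once some displacements are prescribed , which is how use ( a3 ) is typically implemented .

import numpy as np

def tps_displacements(surf_pts, surf_disp, internal_pts):
    # surf_pts: (k,3) surface vertex positions, surf_disp: (k,3) their displacements
    k = len(surf_pts)
    S = np.linalg.norm(surf_pts[:, None, :] - surf_pts[None, :, :], axis=-1)   # kernel matrix
    Q = np.hstack([np.ones((k, 1)), surf_pts])                                 # affine part
    A = np.block([[S, Q], [Q.T, np.zeros((4, 4))]])
    rhs = np.vstack([surf_disp, np.zeros((4, 3))])
    sol = np.linalg.solve(A, rhs)
    W, c = sol[:k], sol[k:]
    D = np.linalg.norm(internal_pts[:, None, :] - surf_pts[None, :, :], axis=-1)
    ones = np.ones((len(internal_pts), 1))
    return D @ W + np.hstack([ones, internal_pts]) @ c    # interpolated displacements

def solve_with_fixed_displacements(K, f, fixed_idx, fixed_val):
    # K: (3n,3n) stiffness matrix, f: (3n,) force vector, indexing by degree of freedom
    n = K.shape[0]
    free_idx = np.setdiff1d(np.arange(n), fixed_idx)
    u = np.zeros(n)
    u[fixed_idx] = fixed_val
    K_ff = K[np.ix_(free_idx, free_idx)]
    K_fc = K[np.ix_(free_idx, fixed_idx)]
    u[free_idx] = np.linalg.solve(K_ff, f[free_idx] - K_fc @ fixed_val)   # K_ff u_f = f_f - K_fc u_c
    return u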
when tetrahedralizing the model , we find a high quality tetrahedralization by restricting the radius - edge ratio of each tetrahedron to be at most two .this section discusses implementation details , and in particular the parameter settings used in the experiments .the parameters and used during tracking give the relative weights between the different energy terms , the parameters and control which data points influence the data term , and the parameter influences the neighborhoods considered for the smoothing term .finally , the parameter controls how many iterations are performed during the fem estimation .cccc & & & + + & & & + to make the relative influence of the weights and invariant with respect to scaling , we pre - scale all of the input models , such that the length of the diagonal of the bounding box of the template model is one .this allows to set most of the parameters to one constant value for all experiments .the weight schedule used for and as well as the choice of has been discussed in sections [ sec : tracking ] and [ sec : unobserved ] .furthermore , we set .the parameter is the only parameter that is varied .this parameter gives the smoothing radius with respect to the resolution of the template mesh .it needs to be adjusted depending on the ratio between the mesh resolution and the mesh size .if the mesh resolution ( measured as average edge length ) is high compared to the size of the model , then can be set relatively low .if the mesh resolution is low compared to the size of the model , then needs to be set to a higher value .[ ear_parameter_s_sm ] shows the influence of on the result of tracking scan data of an ear model .the larger the parameter , the more localized shape deformations are penalized by the tracking energy .this has the effect that for small , the template can accurately follow the data at the cost of being influenced by data noise and for large , the template is not significantly affected by noise but can not follow localized shape deformations . in our experiments , we set for synthetic data , for the ear model , and for the dinosaur model .this section discusses the datasets used in the experiments and shows a synthetic evaluation of the method as well as experiments based on real data .furthermore , we compare the proposed method to its predecessor , denoted by _wuhrer et al .( 2012 ) _ in the following .more detailed visualizations of some experiments are available in the supplementary material .for all the experiments , the input models are pre - scaled , such that the length of the bounding box diagonal of the template model is one .this information on the scale of the models serves as reference for the numerical evaluations below . * synthetic data . * the synthetic datasets ( shown in fig .[ buste_hand_dog_template ] ) are created using the bust , hand , and bulldog models from the aim repository .we create synthetic deformations of the models by applying different finite element deformations to the models with getfem .first , the shapes are deformed using a linear fem , and second , the shapes are deformed using the incompressible non - linear saint venant - kirchhoff model ( stvk ) .the back sides of the deformed models are removed and the remaining front sides ( shown in fig . [ buste_hand_dog_template ] ) are used as input to the algorithm . in our simulations , the head of the bustis pushed to the left , the middle finger of the hand is pushed to the left , and the head of the bulldog is pushed to the side . 
for all deformations , the lagrange multiplier was set to .refer to table [ info_synthetic ] for more information on the models and the parameters used to generate these deformations , and to fig .[ buste_hand_dog_deformations ] for the start and end poses of the deformations . [ cols="^,^,^",options="header " , ] we compare the results of our method to the results of only using the surface - based template deformation method outlined in section [ sec : tracking ] . choosing this surface - based deformation technique results in deformations that predict the unobserved side using a term that aims to preserve a smooth deformation field . comparing to this techniquedirectly evaluates the influence of the fem step on the result . in the following ,we refer to the surface - based template deformation as our method without fem .we evaluate the influence of the fem correction for tracking noisy scans .consider the neck sequence acquired using the stereo setup .[ neck_fem_eval ] shows the template ( yellow ) and the results at the end of the sequence with ( green ) and without ( blue ) the fem correction .note how the surface - based deformation finds a solution that deforms the template smoothly , which leads to a translation of the leg rather than a bending .furthermore , the tail is merely translated upwards . using our method , the legs slide and bend realistically , and the model s tail lifts up , as in reality .distance + & & ( 2012 ) & & & + [ 0.7cm]1 & & & & [ 1.5 cm ] & [ 1.5 cm ] + [ 0.7cm]4 & & & & & + [ 0.7cm]7 & & & & & + [ 0.7cm]10 & & & & & + fig .[ ear_comparison ] ( third from left ) shows that the fem correction also leads to a more physically plausible result for the ear model acquired using the stereo setup . in this case , using the fem correction prevents a fattening of the object , which is mostly visible at the base and the helix . finally , we compare the proposed method to its predecessor and demonstrate that the proposed changes yield a significant improvement in the performance of the algorithm . 
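the comparisons that follow summarise the tracking quality by per - vertex distances to the ground truth . a minimal sketch of this metric , assuming a one - to - one vertex correspondence between the tracked template and the ground - truth mesh , is :

import numpy as np

def tracking_error(tracked_verts, ground_truth_verts):
    # per-vertex euclidean distances, reported as mean and maximum
    d = np.linalg.norm(tracked_verts - ground_truth_verts, axis=1)
    return d.mean(), d.max()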
to start , consider the synthetic bust model generated with linear fem .we track this sequence using both the method by wuhrer et al .( 2012 ) and our method , and measure the mean and maximum distances over all vertices to the ground truth .[ buste_comp_3dimpvt ] shows the tracking results and the measured distances .note that the mean and maximum distances of our method are less than a third of the corresponding distances of the method by wuhrer et al .as can be seen in the left of the figure , this improvement is obtained because our method tracks the rotation of the head better than the method by wuhrer et al .( 2012 ) and because the back side of the bust is not flattened by our method .the improvement of our method over the method by wuhrer et al .( 2012 ) is especially noticeable for tracking noisy scan data .[ ear_comparison ] shows a comparison of the result obtained using the method by wuhrer et al .( 2012 ) ( blue ) to our result ( green ) for the last frame of the ear dataset acquired using a stereo setup .note that while the method wuhrer et al .( 2012 ) does not track the local deformation of the helix , our method tracks the data correctly without resulting in a noisy output .[ real_data_comp_3dimpvt ] shows the results of the two methods for different frames of datasets acquired using stereo or range cameras .note that our method results in less noise and self - intersections of the model ( especially visible in the areas of the flaps of the dinosaur models ) , while at the same time yielding a higher data accuracy .this significant improvement is a consequence of the improvements of both the tracking step and the fem prediction .input & wuhrer et al . ( 2012 ) & our method & + & & & + & & & [ 2.0 cm ] + + input & wuhrer et al . (2012 ) & our method & + & & & + & & & [ 2.0 cm ] + + input & wuhrer et al .( 2012 ) & our method & + & & & + & & & [ 2.0 cm ] + + input & wuhrer et al .( 2012 ) & our method & + & & & + & & & [ 2.0 cm ] + the proposed method is currently designed to deform models of homogeneous isotropic material that are deformed by applying external forces on a small number of points .modeling complex force fields or force fields acting on heterogeneous material , such as deformations of tissue caused by muscle movements , would require a segmentation of the model into regions of homogeneous materials and the input of a full force field .this is too tedious to acquire to be of practical use .however , by combining template - based tracking with simple physical simulation models , we make a first step in the direction of acquiring the geometry and material properties of an object jointly . we have demonstrated that our method , which uses a linear fem model , allows to track both linear and non - linear material deformations accurately .this flexibility comes at the cost of material parameter estimates that vary over time and are not expected to be physically meaningful . 
in the future, we plan to explore modeling non - linear material behaviour explicitly , and to find stable and physically meaningful material parameters for this scenario by considering all available frames for material parameter estimation .as our approach employs a non - rigid iterative closest point algorithm to fit the template to the data , tracking large deformations may lead to drift .an example of this problem is included in the supplementary material .furthermore , due to the non - rigid iterative closest point algorithm , our method can not deform the template accurately if the initial alignment is poor or if there is significant deformation between consecutive frames .hence , in cases of extreme deformations that are sampled sparsely in time , our tracking may get lost .this is shown in fig .[ hand_large_def ] . here , we simulated the same hand deformation twice ; once sampled sparsely in time using 50 frames , and once sampled densely in time using 350 frames . for a particular frame ( frame 20 in the sparsely sampled simulation , which corresponds to frame 140 in the densely sampled simulation ) ,the ground truth deformation is shown in yellow , the result for tracking using the sparse sequence is shown in blue , and the result for tracking using the dense sequence is shown in green .note that by using more frames , the tracking is able to follow the data more closely at the cost of additional drift .we proposed an approach to track the geometry of a surface over time from a set of input point clouds captured from a single viewpoint .we combine the use of a template and the use of a linear finite element method to track the model . by linearizing the deformation at each frame ,we show that we can accurately track surfaces that deform in a non - linear fashion .we demonstrate the robustness of our approach with respect to noise using a synthetic evaluation and using real data captured with a stereo setup and with a depth camera .we leave the following ideas for future work .the tracking is lost when the distance between consecutive frames is large .this could potentially be addressed by tracking feature points on the model and by using these features to guide the non - rigid motion of the template during tracking .furthermore , our approach assumes that a template is known a priori .while this assumption is commonly used in 3d tracking approaches , it will be interesting to relax this requirement in the future .one way to relax this requirement would be to assume that the undeformed object is observed from a single moving viewpoint before the deformation , which allows to fuse these views into a template shape automatically .this work has partially been funded by nserc , canada , networks of centres of excellence grand , canada , and the cluster of excellence on _ multimodal computing and interaction _ within the excellence initiative of the german federal government .bernd bickel , moritz bcher , miguel otaduy , hyunho richard lee , hanspeter pfister , markus gross , and wojciech matusik .design and fabrication of materials with desired deformation behavior . , 29(3 ) , 2010 .proceedings of siggraph .edilson de aguiar , carsten stoll , christian theobalt , naveed ahmed , hans - peter seidel , and sebastian thrun . performance capture from sparse multi - view video . 
, 27(3):98:198:10 , 2008 .proceedings of siggraph .jennifer hensel , cynthia mnard , peter chung , michael milosevic , anna kirilova , joanne moseley , masoom haider , and kristy brock .development of multiorgan finite element - based prostate deformation model enabling registration of endorectal coil magnetic resonance imaging for radiotherapy planning ., 68(5):15221528 , 2007 .huai - ping lee , mark foskey , marc niethammer , pavel krajcevski , and ming lin .simulation - based joint estimation of body deformation and elasticity parameters for medical image analysis . , 31(11):21562168 , 2012 .julia schnabel , christine tanner , andy castellano - smith , andreas degenhard , martin leach , rodney hose , derek hill , and david hawkes .validation of nonrigid image registration using finite - element methods : application to breast mr images ., 22(2):238247 , 2003 .andrei sharf , dan alcantara , thomas lewiner , chen greif , alla sheffer , nina amenta , and daniel cohen - or .space - time surface reconstruction using incompressible flow ., 27(5 ) , 2008 .proceedings of siggraph asia .gary tam , zhi - quan cheng , yu - kun lai , frank langbein , yonghuai liu , david marshall , ralph martin , xian - fang sun , and paul rosin .registration of 3d point clouds and meshes : a survey from rigid to non - rigid ., 19:11991217 , 2013 .michael wand , philipp jenke , qixing huang , martin bokeloh , leonidas guibas , and andreas schilling .reconstruction of deforming geometry from time - varying point clouds . in _symposium on geometry processing _ , 2007 .qian zheng , andrei sharf , andrea tagliasacchi , baoquan chen , hao zhang , alla sheffer , and daniel cohen - or .consensus skeleton for non - rigid space - time registration . , 29(2 ) , 2010 .proceedings of eurographics .
|
we present an approach to robustly track the geometry of an object that deforms over time from a set of input point clouds captured from a single viewpoint . the deformations we consider are caused by applying forces to known locations on the object s surface . our method combines the use of prior information on the geometry of the object modeled by a smooth template and the use of a linear finite element method to predict the deformation . this allows the accurate reconstruction of both the observed and the unobserved sides of the object . we present tracking results for noisy low - quality point clouds acquired by either a stereo camera or a depth camera , and simulations with point clouds corrupted by different error terms . we show that our method is also applicable to large non - linear deformations . * keywords : * geometry processing , surface tracking , template deformation , linear finite element deformation
|
necessity of the general relativity generalization arises as a result of a geometry progress .now we know nonaxiomatizable ( physical ) geometries , which were unknown 20 years ago .physical geometries are essentially the metric geometries , whose metric is free of almost all conventional restrictions . in a metric geometrythere exists a problem , how one should define geometric concepts and rules of geometric objects construction. one can construct sphere and ellipsoid , which are defined in terms of metric ( world function ) .however , one needs to impose constraints on metric ( triangle axiom ) even for construction of a straight line .it is unclear , how one can define the scalar product and linear dependence of vectors .the deformation principle solves the problem of the geometrical concepts definition , without imposing any restrictions on the metric .the physical geometry is equipped by the deformation principle , which admits one to construct all definitions of the physical geometry as a deformation of corresponding definitions of the proper euclidean geometry . in the physical geometry the information on the geometry dimension and its topology appears to be redundant .it is determined by the metric ( world function ) , and one may not give it independently .a physical geometry is described completely by its world function .the geometry is multivariant and nonaxiomatizable in general .the world function describes uniformly continuous and discrete geometries . as a resultthe dynamic equations in physical space - time geometry are finite difference ( but not differential ) equations . besides , in the physical space - time geometry the particle dynamics can be described in the coordinateless form .it is conditioned by a possibility of ignoring the linear vector space , whose properties are not used in a physical geometry .it is rather uncustomary for investigators dealing with the riemannian geometry , which is based on usage of the linear vector space properties .there is only one uniform isotropic geometry ( geometry of minkowski ) in the set of riemannian geometries , whereas there is a lot of uniform isotropic geometries among physical geometries . in particular , let us consider the world function of the form is the world function of minkowski , and are respectively quantum constant , the speed of the light and some universal constant .the space - time geometry is discrete and multivariant .free particle motion appears stochastic ( multivariant ) .its statistical description is equivalent to quantum description in terms of the schrdinger equation .thus , application of the physical geometry in the microcosm admits one to give a statistical foundation of quantum mechanics and to convert the quantum principles into appearance of the correctly chosen space - time geometry .one should expect , that a consideration of a more general space - time geometry and a refusal from the riemanniance , which is conditioned by our insufficient knowledge of geometry , will lead to a progress in our understanding of gravitation and cosmology .an arbitrary space - time geometry is described completely by the world function , given for all pairs of points .information on dimension and on topology of the geometry is redundant , as far as it may be obtained from the world function . 
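to make the role of the world function concrete , it may help to recall the pattern these definitions usually take in the physical - geometry approach ; the relations below are a sketch of that pattern and of one representative discrete distortion , written here as an assumption for illustration rather than as a verbatim restatement of the expressions referred to elsewhere in the text :

|\mathbf{P_{0}P_{1}}|^{2} = (\mathbf{P_{0}P_{1}} . \mathbf{P_{0}P_{1}}) = 2\sigma(P_{0},P_{1}) ,
(\mathbf{P_{0}P_{1}} . \mathbf{Q_{0}Q_{1}}) = \sigma(P_{0},Q_{1}) + \sigma(P_{1},Q_{0}) - \sigma(P_{0},Q_{0}) - \sigma(P_{1},Q_{1}) ,
\mathbf{P_{0}P_{1}}\ \mathrm{eqv}\ \mathbf{Q_{0}Q_{1}} : \quad (\mathbf{P_{0}P_{1}} . \mathbf{Q_{0}Q_{1}}) = |\mathbf{P_{0}P_{1}}|\,|\mathbf{Q_{0}Q_{1}}| , \qquad |\mathbf{P_{0}P_{1}}| = |\mathbf{Q_{0}Q_{1}}| ,

and a representative member of the discrete uniform isotropic family mentioned above may be written as

\sigma_{\mathrm{d}}(P,Q) = \sigma_{\mathrm{M}}(P,Q) + \frac{\lambda_{0}^{2}}{2}\,\mathrm{sgn}\bigl(\sigma_{\mathrm{M}}(P,Q)\bigr) , \qquad \lambda_{0}^{2} = \frac{\hbar}{bc} ,

with \hbar the quantum constant , c the speed of the light and b a universal constant ; the precise form of the distortion used in the original work should be checked against the corresponding reference .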
the riemannian geometry , which is used in the contemporary theory of gravitation , is considered usually to be the most general possible space - time geometry .however , it can not describe a discrete geometry , or a geometry , having a restricted divisibility .the world function of the riemannian geometry satisfies the equation means , that in the expansion metric tensor determines completely the whole world function .conventional gravitation equations determine only metric tensor .the world function and the space - time geometry are determined on the basis of supposition on the riemannian geometry .generalization of the gravitation equations admits one to obtain the world function directly ( but not only the metric tensor ) .the deformation principle admits one to construct all definitions of a physical geometry as a result of deformation of definitions of the proper euclidean geometry .one uses the fact , that the proper euclidean geometry is an axiomatizable geometry and a physical geometry simultaneously .it means , that all definitions of the euclidean geometry , obtained in the framework of euclidean axiomatics can be presented in terms and only in terms of the world function of the euclidean geometry . replacing in all definitions of the euclidean geometry by a world function of some other geometry , one obtains all definitions of the geometry .definition of the scalar product of two vectors and and their equivalency are the most used definitions they are defined in such a way in the euclidean geometry .they are defined in the same way also in any physical geometry .solution of ( [ a1.4 ] ) is unique in the case of the proper euclidean geometry , although there are only two equations , whereas the number of variables to be determined is larger , than two . for arbitrary physical geometrya solution is not unique , in general . as a resultthere are many vectors at the point , which are equivalent to vector at the point .even geometry of minkowski is multivariant with respect to spacelike vectors , although it is single - variant with respect to timelike vectors .space - time geometry becomes to be multivariant with respect to timelike vectors only after proper deformation .at the generalization of the general relativity on the case of arbitrary space - time geometry the two circumstances are important . 1 . a use of the deformation principle , 2 .a use of adequate relativistic concepts , in particular , a use of relativistic concept of the events nearness ( see details in ) . two events and are near , if and only if in the space - time of minkowski a variation of the metric tensor under influence of the matter have the form is the energy - momentum tensor of the matter, is the gravitational constant. appearance of world function in the -function means , that the condition of the nearness leads to interpretation of gravitational ( and electromagnetic ) interactions as a direct collision of particles .being presented in terms of world function these formulae have the same form in any physical geometry. are arbitrary points of the space - time .summation is produced over all world lines of particles perturbing the space - time geometry .the segment is infinitesimal element of the world line of one of perturbing particles .the point is near to the point , which is a middle of the segment . 
vectors are basic vectors at the point .vector is timelike .if is the unperturbed world function of space - time geometry without particles , then is the world function of the space - time geometry after appearance of perturbing particles .one should use the world function at calculation of scalar products in rhs of ( [ a1.8 ] ) by the formula ( [ a1.3 ] ) . at firstthe world function is unknown , and relation ( [ a1.8 ] ) is an equation for determination of . equation ( [ a1.8 ] ) is solved by the method of subsequent approximations . at the first step onecalculates rhs of ( [ a1.8 ] ) by means of and obtains at the second step one calculates rhs of ( [ a1.8 ] ) by means of and obtains and so on .applying relation ( [ a1.8 ] ) to heavy pointlike particle , one obtains in the first approximation is the mass of the particle .space - time geometry appears to be non - riemannian already at the first approximation , although the metric tensor has the form , which it has in the conventional gravitation theory for a slight gravitational field .the next approximations do not change the situatition .thus , the space - time geometry appears to be non - riemannian .furthermore , supposition on the riemannian space - time leads to an ambiguity of the world function for large difference of times even in the case of a gravitational field of a heavy particle .it is conditioned by the fact , that there are many geodesics , connecting two points .it is forbidden in a physical geometry , where the world function must be single - valued .thus , generalization of the relativity theory on the general case of the space - time geometry is generated by our progress in geometry and by a use of adequate relativistic concepts .the deformation principle is not a hypotheses , but it is the principle , which lies in the basis of physical geometry . the uniform formalism , suitable for both continuous and discrete geometries , is characteristic for physical geometries .this formalism uses dynamic equations in the form of finite difference equations .sometimes these equations have a form of finite relations .the uniform formalism is formulated in coordinateless form .it gets rid of necessity to consider coordinate transformations and their invariants .the contemporary elementary particle theory ( ept ) is qualified usually as the elementary particle physics ( epp ) .however , it should be qualified more correctly as an elementary particle chemistry ( epc ) .the fact is that , the structure of the elementary particle theory reminds the periodical system of chemical elements .both conceptions classify elementary particles ( and chemical elements ) . 
on the basis of the classification both conceptionspredict successfully new particles ( and chemical elements ) .both conceptions are axiomatic ( but not model ) constructions .the periodical system of chemical elements has given nothing new for investigation of the atomic structure of chemical elements .one should not expect any information about elementary particle structure from contemporary ept .for this purpose a model approach to ept is necessary .the simplest particle is considered usually as a point in usual 3d - space .this point is equipped by a mass and by a momentum 4-vector .one may to prescribe an electric charge and some other characteristics to the point .the aggregate of this information forms a nonrelativistic concept of a particle .this concept of a particle is based on the concept of the linear vector space , _ which is based in turn on the concept of axiomatizable continuous space - time geometry_. in the consecutive relativistic theory one should use another concept of a particle .the simplest particle is defined by two points in the space - time .the vector , formed by the two points , is a geometric momentum of the particle .its length is the geometric mass of the particle .the geometric mass and momentum connected with conventional mass and 4-momentum by means of relations is some universal constant , and is the speed of the light .the electric charge appears in the 5d - geometry of kaluza - klein as a projection of 5-momentum on the additional fifth dimension , which is a chosen direction .projection on this direction is invariant , because the direction is chosen . as a resultall parameters of a particle appear to be geometrized .a free motion of the simplest particle in the properly chosen 5d - geometry of the space - time is equivalent to motion of a charged particle in the given gravitational and electromagnetic fields of the minkowskian space - time geometry ._ such a concept of a particle may be used in any space - time geometry ( nonaxiomatizable and discrete)_. a particle may have a complicated structure , in this case the particle is described by its skeleton , consisting of space - time points question : `` what does unite the skeleton points in a particle '' is relevant only in the space - time geometry with unlimited divisibility . in the physical geometry the skeleton pointsmay be connected between themselves simply as points of a geometry with a limited divisibility .the particle evolution is described by a chain of connected skeletons . skeletons of the chain are equivalent. the chain coincide then according to ( [ a2.2 ] ) the leading vector of skeleton is equivalent to the leading vector of skeleton , i.e. rotation of a skeleton is absent .the translational motion is carried out along the leading vector .dynamics is described by means of finite difference equations .it is reasonable , if the space - time geometry may be discrete .the leading vector describes the evolution direction in the space - time . the number of dynamic equations is equal to , whereas the number of variables to be determined is equal to . here is the dimension of the space - time , and is the number of points in the particle skeleton .the difference between the number of equations and the number of variables , which are to be determined , may lead to different results .1 . multivariance , i.e. 
ambiguity of the world chain links position , when .it is characteristic for simple skeletons , which contain small number of points .multivariance is responsible for quantum effects .zero - variance , i.e. absence of solution of equations , when .it is characteristic for complicated skeletons , which contain many points .zero - variance means a discrimination of particles with complicated skeletons . as a result thereexist only particles , having only certain values of masses and other parameters . _ quantum indeterminacy and discrimination mechanism are two different sides of the particle dynamics_. the conventional theory of elementary particles has not a discrimination mechanism , which could explain a discrete spectrum of masses .there are two sorts of elementary particles : bosons and fermions .boson has not its own angular momentum ( spin ) .it is rather reasonable , because motion of elementary particles is translational .however , the fermions have a discrete spin , which looks rather unexpected at the translation motion .spin of a fermion appears as a result of translation motion along a space - like helix with timelike axis .the helix world line of a free particle is possible only for spacelike world line .it is conditioned by multivariance of the space - time geometry with respect to spacelike vectors .this multivariance takes place even for space - time of minkowski .this multivariance takes place for any space - time geometry .it does not vanish in the limit .however , in the space - time geometry of minkowski the helix world chain is impossible , because the temporal component of momentum increases infinitely . for existence of the helix world chain ,the world function is to have the form in the conventional relativity theory the helix spacelike world lines are not considered , because one assumes , that they are forbidden by the relativity principles .fermions are described usually by means of the dirac equation , which needs introduction of such special quantities as -matrices .a use of -matrices generates a mismatch between the particle velocity and its mean momentum .( the quantum mechanics uses the mean momentum always . )this enigmatic mismatch is explained easily by means of the helix world chain .the velocity is tangent to helix , whereas the mean momentum is directed along the axis of helix .besides , the fermion skeleton is to contain not less , than three points .it is necessary for stabilization of the helix world line .existence of the fermion is possible only at certain values of its mass , which depends on the space - time geometry ( the form of function in ( a2.7 ) ) and on a choice of the skeleton points .thus , the spin and magnetic moment of fermions appear to be connected with spacelike world chain and with multivariance of the space - time geometry with respect to space - like vectors . 
at the conventional approach to geometrythe spacelike world lines are considered to be incompatible with the relativity principles .spin is associated with existence of enigmatic -matrices .multivariance with respect to timelike vectors is slight ( it vanishes in the limit ) .multivariance with respect to spacelike vectors is strong ( it is not connected with quantum effects ) the particle motion is free in the properly chosen space - time geometry .however , the particle motion can be described in arbitrary geometry , given on the same point set , where the true geometry is given .the world function of the true geometry is presented in the form is some addition to the world function of of the space - time geometry of kaluza - klein , which is used in the given case as a basic geometry . in this geometrythe particle motion ceases to be free .it turns into a motion in force fields , whose form is determined by the form of addition .progress in the elementary particle dynamics is conditioned by a progress in geometry and by a use of adequate relativistic concepts .the suggested elementary particle dynamics is a model conception .it is demonstrable and simple .multivariance of the geometry explains freely quantum effects .the zero - variance generates a discriminational mechanism , responsible for discrete characteristics of elementary particles .mathematical technique is formulated in a coordinateless form , that gets rid of a necessity to investigate coordinate transformations and their invariants .two - point technique of the dynamics and many - point skeletons contain a lot of information , which should be only correctly ordered .simple principles of dynamics reduce a construction of the elementary particle theory to formal calculations of different skeletons dynamics at different space - time geometries .there is a hope , that true skeletons of elementary particles can be obtained by means of the discrimination mechanism of the true space - time geometry . at any rate ,having been constructed in the framework of simple dynamic principles , this dynamics explains freely discrete spins and discrete masses of fermions and mismatch between the particle velocity and its mean momentum .these properties are described usually by introduction of -matrices , that is a kind of fitting .yu.a.rylov , non - euclidean method of the generalized geometry construction and its application to space - time geometry in _ pure and applied differential geometry _pp.238 - 246 .franki dillen and ignace van de woestyne .shaker verlag , aachen , 2007 .see also _ e - print math.gm/0702552._
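writing the addition as d(p,q) for concreteness , the decomposition just described reads

\sigma(P,Q) = \sigma_{\mathrm{K}}(P,Q) + D(P,Q) ,

where \sigma_{\mathrm{K}} is the world function of the kaluza - klein space - time used as the basic geometry , and the distortion d determines the form of the effective force fields in which the particle appears to move .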
|
contemporary relativity theory is restricted in two points : ( 1 ) a use of the riemannian space - time geometry and ( 2 ) a use of inadequate ( nonrelativistic ) concepts . reasons of these restrictions are analysed in . eliminating these restrictions the relativity theory is generalized on the case of non - riemannian ( nonaxiomatizable ) space - time geometry . taking into account a progress of a geometry and introducing adequate relativistic concepts , the elementary particle dynamics is generalized on the case of arbitrary space - time geometry . a use of adequate relativistic concepts admits one to formulate the simple demonstrable dynamics of particles .
|
kekb asymmetric electron positron collider has been operated for 10 years improving the experimental performance continuously . at the beginning of the commissioningit was anticipated whether the positron injection efficiency was sufficient . in order to enhance the injection efficiency several projectswere performed , including double beam bunches in a pulse and continuous ( top - up ) injection during the experiment .such efforts improved the integrated luminosity , and the linac turned out not to be a bottleneck .further challenge was that the injector is shared between four storage rings , namely , kekb - her ( high energy ring ) , kekb - ler ( low energy ring ) , pf ( photon factory ) and pf - ar ( photon factory advanced ring ) , with quite different energies from 2.5gev to 8gev and charges from 0.1nc to 10nc as in table [ four - beams ] .there were concerns about the beam characteristic reproducibility on the beam mode changes . lcccc injection & beam & energy & charge & no . of + mode & && /bunch & bunches + kekb - her & e & 8.0gev & 1.2nc & 1 or 2 + kekb - ler & e & 3.5gev & 0.6nc & 2 + kekb - ler & e & 4.0gev & 10nc & 2 + pf & e & 2.5gev & 0.1nc & 1 + pf - ar & e & 3.0gev & 0.1nc & 1 + [ four - beams ] for this reason , during the commissioning , many slow closed feedback loops were installed in order to stabilize the beam energies and orbits at several different locations along the 600-m linac .recently , we succeeded in pulse - to - pulse beam mode changes , with modulating many parameters in 20ms .this is called as a simultaneous injection and it enabled both the luminosity tuning stability at kekb and the top - up injection to pf .it is considered whether it may improve the injection stability with beam feedback loops even under simultaneous injection .because beam instabilities were observed at the beginning of the linac commissioning for kekb , many closed loops were installed .if the energy fluctuated , the beam position , where the dispersion function was large , was measured and rf phases at two adjacent rf sources were modulated to opposite directions from the crest phase in order to maintain the energy but not to make energy spread . if the orbit was unstable , beam positions at two locations where betatron phases are 90-degree apart were measured and corresponding steering coils were adjusted . in order to suppress the measurement noises , a weighed average over several locations were often used , instead of a single beam position information , based on the response functions to the energy change or the beam kick .if the energy measurement noise was large , the betatron oscillation was measured using the beam positions at two locations where the dispersion functions were zero , and the information was used to compensate the energy displacement information . those procedures were described employing a scripting computer language and operators could manipulate their parameters via a graphical user interface .the feedback gains were normally small and the time constants were several to 100s .the same procedure was also applied to stabilize hardware such as amplitudes and timings of power supplies . while those loops were simple pi ( proportional - integral ) controllers , they were very effective .the fluctuations often meant that certain equipment or utility became out of order .it was not determined well how much stability was required for such equipment at the beginning . 
because it may take time to analyze and fix those pieces of equipment , feedback loops were effectively applied at many locations . as the fluctuation sources were identified and resolved , those closed loops became unnecessary during the normal operation .however , they are still important during the beam studies because the beam conditions are much different from the normal ones .while a certain parameter is scanned in such studies , other parameters often have to be maintained stable .they are also effective in order to identify the reason when the beam characteristics changed unintentionally .furthermore , they can be occasionally used to predict a certain failures .thus , the information from those loops was valuable .those feedback loops were dependent on the beam modes .the linac accelerates the beams for four storage rings , kekb - her , kekb - ler , pf and pf - ar , with quite different energies .moreover , those beam modes had been exchanged every several minutes .even with such frequent beam mode changes the beam reproducibility and stabilities were maintained with the beam feedback loops .until 2008 it took from 30seconds to 2minutes to switch the beam injection modes between kekb - her , kekb - ler , pf and pf - ar .many accelerator device parameters were changed , and the slowest process was the bending magnet standardization . however , simultaneous top - up injections to three rings , kekb - her , kekb - ler and pf became necessary in order to improve the physics experiments at those rings .several sets of pulsed equipment were installed .beam optics development was also performed to support the wide dynamic range of the beam energy and charge , 3-times different energies and 100-times different charges .the betatron matching condition was established at the entrance to each beam transport line .an event - based control system was introduced , in addition to existent epics control system , in order to achieve global and fast controls .the new control system could change many parameters quickly and globally , and it enabled the pulse - to - pulse beam modulation .the linac pulse repetition is 50hz , and approximately 130 independent parameters over 600-m linac are changed within 20ms .such efforts enabled simultaneous top - up injections , and the beam current stabilized to be 1ma in 1.1 1.6a at kekb and 0.1ma in 450ma at pf , which contributed the physics experiment results .the new event - based control system arranges the linac for a pulse to operate in one of the ten beam modes .the injection beam modes for kekb - her , kekb - ler and pf can be switched quickly , and each beam pulse is separated by 20ms .the event - based control system is composed of three basic parts .the first part is the beam - mode pattern arbitrator / generator .it listens to the beam frequency requests from downstream rings and/or human operators , and then it arbitrates those requests based on pre - assigned priorities , and generates a beam mode pattern for up to 10s ( 20ms 500 ) .there are certain constraints how to organize the pattern because many of the pulsed power supplies expect that they are fired at constant intervals .the main part is the event generator station .the event generator ( evg230 from mrf ) provides an event sequence synchronized to rf clock ( 114.24mhz ) .each event is accompanied with an event code , while additional data is transferred as well .the event generator software extracts adjacent beam - mode elements out of the beam - mode pattern received from the pattern 
generator , and arranges several event codes corresponding to the first mode , then adds another event code to notify the beam mode of the next pulse .finally , event receiver stations accept events from the event generator through optical fiber links .the event receiver ( evr230rf from mrf ) regenerates the rf clock using the bit train . the first part of the event code sequence is used to generate signals with specific delays .the last event code informs the receiver software to prepare specific delay and analog values for the beam mode in the next pulse .approximately 130 parameters on 17 event receiver stations are changed every pulse ( fig .[ fig - config ] ) .each one of those parameters is associated with ten variables that correspond to ten beam modes , and those thousand variables can be manipulated any time by operational software .the beam - mode pattern arbitrator normally accepts the new requests from rings every several seconds , and it regenerates a corresponding pattern . under a typical operation condition ,average injection rates are 25hz for her , 12.5hz for ler and 0.5hz for pf .however , it is often required to assign all the pulses for injection .such a flexible injection pulse arrangement enables efficient use of the injector linac .as described previously , the old read - out system for beam position monitors ( bpm ) could operated at 1hz . however , for simultaneous injection it was required to process signals at 50hz .the new bpm read - out system was designed with oscilloscopes , mainly because one oscilloscope can cover several bpms and the system is simple with only passive components , so that the maintenance becomes easier .the same software as the previous system was embedded into the epics control software framework on windows xp on the oscilloscopes .it accepts the event information through the network .events are used to tag the beam - position and -charge information with the beam - mode information . as approximately 200 bpmsare installed along the linac and bt , each one of which has independent variables for ten beam modes , there are variables provided .the client operational software can receive beam information which is related to a specific beam mode .under the simultaneous injection configuration the event - based control system provides beam - mode dependent control parameters .moreover , those parameters in different beam modes are organized to be independent both for controls and measurements .thus , we can see those independent parameter sets as independent virtual accelerators . for each 20-ms time slot , the event system associates one of the virtual accelerators with the real accelerator . because those control parameters for each virtual accelerator continue to exist , human operators and operational software can act on one of those virtual accelerators without any interference between other beam modes . bpm information and rf control parametersare also handled independently in each virtual accelerator . at first, energy feedback loops at the 180-degree arc and at the end of linac were installed again using event control parameters on each virtual accelerator as in fig .[ fig - virt ] .as parameters are independently managed , no modification to the software was necessary .the performance of those closed loops were observed with small feedback gains during the normal operation . in those feedback operationsno beam stability improvements were achieved . in other words , no signs of instabilitieswere observed other than white noise . 
for the pf bt energy stabilization, it turned out that the separation of the betatron and dispersion functions was not optimal and that the resolution of the bpms was insufficient because of the low beam charges. the procedures of betatron oscillation compensation and the weighted average of beam positions will be applied later. because the processing speed of a scripting language is not sufficient, compiled procedures are being tested as well. the orbit and energy-spread stabilizations can be implemented in the same way. those beam feedback signals will be valuable information for the accelerator operation.

a pulse-to-pulse modulated simultaneous injection into the kekb-her, kekb-ler and pf rings was realized with an event-based fast and global control system. it provides several virtual accelerators for the accelerator operations, each with an independent parameter set. under such an environment, event-based closed beam energy feedback loops were successfully applied, which will provide valuable resources for future operations including superkekb.

k. furukawa _et al._, "new event-based control system for simultaneous top-up operation at kekb and pf", _proc. icalepcs2009_, kobe, japan, 2009, thp052.

n. iida _et al._, "pulse-to-pulse switching injections to three rings of different energies from a single electron linac at kek", _proc. pac2009_, vancouver, canada, 2009, we6pfp110.
|
beam injections to kekb and the photon factory are performed with pulse-to-pulse modulation at 50 hz. three very different beams are switched every 20 ms in order to inject them into kekb-her, kekb-ler and the photon factory (pf) simultaneously. human operators work on one of three virtual accelerators, which correspond to three-fold sets of accelerator parameters. the beam charges for pf injection and for the primary electrons for positron generation differ by a factor of 50, and the beam energies for pf and her injection differ by a factor of 3. thus, the beam stabilities are sensitive to the operational parameters, and if any instability in accelerator equipment occurs, beam parameter adjustments for those virtual accelerators have to be performed. in order to cure such a situation, a beam energy feedback system was installed that can respond to each of the virtual accelerators independently.
|
time series analysis is a central topic in physics, as well as a powerful method to characterize data in biology, medicine and economics, and to understand their underlying dynamical origin. in the last decades the topic has received input from different disciplines such as nonlinear dynamics, statistical physics, computer science or bayesian statistics and, as a result, new approaches like nonlinear time series analysis or data mining have emerged. more recently, the science of complex networks has fostered the growth of a novel approach to time series analysis based on the transformation of a time series into a network according to some specified mapping algorithm, and on the subsequent extraction of information about the time series through the analysis of the derived network. within this approach, a classical possibility is to interpret the interdependencies between time series (encapsulated for instance in cross-correlation matrices) as the weighted edges of a graph whose nodes label each time series, yielding so-called functional networks, which have been used fruitfully and extensively in different fields such as neuroscience or finance. a more recent perspective deals with mapping the particular structure of univariate time series into abstract graphs, with the aim of describing not the correlation between different series, but the overall structure of isolated time series, in purely graph-theoretical terms. among these latter approaches, the so-called visibility algorithms have been shown to be simple, computationally efficient and analytically tractable methods, able to extract nontrivial information about the original signal, classify different dynamical origins and provide a clean description of low-dimensional dynamics. as a consequence, this particular methodology has been used in different domains including earth and planetary sciences, finance or biomedical fields (see for a review). despite their success, the range of applicability of visibility methods has so far been limited to univariate time series, whereas the most challenging problems in several areas of nonlinear science concern systems governed by a large number of degrees of freedom, whose evolution is indeed described by multivariate time series. consider a multivariate time series $\{x^{[\alpha]}(t)\}_{t=1}^{N}$, $\alpha = 1, \ldots, M$, for any value of $N$, measured empirically or extracted from an $M$-dimensional, either deterministic or stochastic, dynamical system. the horizontal visibility graph (hvg) of each component series is constructed, and an $M$-layer multiplex network $\mathcal{M}$, that we call the _multiplex visibility graph_, is then obtained, where layer $\alpha$ corresponds to the hvg associated to the time series of the state variable $\{x^{[\alpha]}(t)\}_{t=1}^{N}$. the multiplex is fully specified by the vector of adjacency matrices $\mathcal{A} = \{A^{[1]}, \ldots, A^{[M]}\}$, where $A^{[\alpha]} = \{a^{[\alpha]}_{ij}\}$ and $a^{[\alpha]}_{ij} = 1$ if and only if node $i$ and node $j$ are connected by a link at layer $\alpha$. this builds a bridge between multivariate series analysis and the recent developments in the study of multilayer networks.
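as a concrete illustration of the construction just described, the following sketch builds the horizontal visibility graph of each component series and stacks them into the layers of the multiplex visibility graph. it is a minimal reference implementation written for clarity (quadratic worst case), not the code used by the authors, and the function names are our own.

```python
import numpy as np

def horizontal_visibility_adjacency(x):
    """adjacency matrix a[i, j] of the horizontal visibility graph of a scalar
    series x: nodes i < j are linked iff every intermediate value is strictly
    smaller than min(x[i], x[j])."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    a = np.zeros((n, n), dtype=int)
    for i in range(n - 1):
        a[i, i + 1] = a[i + 1, i] = 1       # consecutive points always see each other
        top = x[i + 1]                      # running maximum of the points in between
        for j in range(i + 2, n):
            if top < min(x[i], x[j]):       # no intermediate point blocks the view
                a[i, j] = a[j, i] = 1
            top = max(top, x[j])
            if top >= x[i]:                 # nothing further to the right is visible
                break
    return a

def multiplex_visibility_graph(series):
    """series: array of shape (m, n) holding the m component time series.
    returns the m adjacency matrices, one hvg per layer of the multiplex."""
    return np.array([horizontal_visibility_adjacency(layer) for layer in series])
```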
among the different multiplex measures that our mapping makes it possible to exploit also in the context of multidimensional time series, we focus here on one which allows us to detect and quantify inter-layer degree correlations. in such a way we can characterize the information shared across the variables (layers) of the underlying high-dimensional system, an aspect of capital importance in fields such as neuroscience or economics and finance. given a pair of layers $\alpha$ and $\beta$ of $\mathcal{M}$, respectively characterized by the degree distributions $P(k^{[\alpha]})$ and $P(k^{[\beta]})$, we can define an _interlayer mutual information_ as:

$$ I_{\alpha,\beta} = \sum_{k^{[\alpha]}} \sum_{k^{[\beta]}} P(k^{[\alpha]}, k^{[\beta]}) \log \frac{P(k^{[\alpha]}, k^{[\beta]})}{P(k^{[\alpha]}) P(k^{[\beta]})} \label{eq:mi} $$

where $P(k^{[\alpha]}, k^{[\beta]})$ is the joint probability that the same node has degree $k^{[\alpha]}$ at layer $\alpha$ and degree $k^{[\beta]}$ at layer $\beta$. as a benchmark we consider coupled map lattices (cmls), i.e. chains of diffusively coupled maps, where the state $x^{[\alpha]}$ of site $\alpha$ is determined by:

$$ x^{[\alpha]}(t+1) = (1-\epsilon)\, f[x^{[\alpha]}(t)] + \frac{\epsilon}{2}\Big( f[x^{[\alpha-1]}(t)] + f[x^{[\alpha+1]}(t)] \Big), \label{cml} $$

where $\epsilon \in [0,1]$ is the coupling strength and $f$ is typically a chaotic map. for different values of the coupling and of the map parameters, cmls display a very rich phase diagram, which includes different degrees of synchronization and dynamical phases such as fully developed turbulence (fdt, a phase with incoherent spatiotemporal chaos and a high-dimensional attractor), pattern selection (ps, a sharp suppression of chaos in favor of a randomly selected periodic attractor), or different forms of spatio-temporal intermittency (sti, a chaotic phase with a low-dimensional attractor). the origin of such a rich and intertwined structure comes from the interplay between the local tendency towards inhomogeneity, induced by the chaotic dynamics, and the global tendency towards homogeneity in space, induced by the diffusive coupling. fig. [comparison] reports the results obtained for a cml of diffusively coupled, fully chaotic logistic maps, which exhibits several transitions from high-dimensional chaos, to pattern selection, to several forms of partially synchronized chaotic states as the coupling is increased. the plots of fig. [comparison] are based on averages over 100 realisations of the cml dynamics. for each realisation, we constructed a multivariate time series of data points (discarding the transient) and we generated the corresponding multiplex visibility graph. in fig. [comparison](b) we show how the average interlayer mutual information of the multiplex visibility graph associated to the system (see si for other multiplex measures) is able to distinguish between the different phases. in particular, it is a monotonically increasing function of the coupling in the fdt phases, and therefore quantifies the amount of information flow among units. notably, it also detects qualitative changes in the underlying dynamics (such as the chaos suppression in favor of a randomly selected periodic pattern, or the onset of a multi-band chaotic attractor during intermittency) and can therefore be used as a scalar order parameter of the system.
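the interlayer mutual information of eq. (mi) and the cml benchmark of eq. (cml) can be reproduced, at least qualitatively, with the sketch below. it reuses the multiplex_visibility_graph helper from the earlier sketch; the lattice size, number of time steps, coupling and logistic parameter are illustrative choices, not the values used for the figures.

```python
import numpy as np
from itertools import combinations

def interlayer_mutual_information(deg_a, deg_b):
    """empirical estimate of i_{alpha,beta} from the degree sequences of the
    same nodes in two layers (joint vs. marginal degree distributions)."""
    ka, kb = np.asarray(deg_a), np.asarray(deg_b)
    joint, _, _ = np.histogram2d(ka, kb, bins=[np.arange(ka.max() + 2),
                                               np.arange(kb.max() + 2)])
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p_ab * np.log(p_ab / (p_a * p_b))
    return float(np.nansum(terms))             # 0 * log(0) terms are dropped

def simulate_cml(m=32, n=1024, eps=0.3, mu=4.0, transient=1000, seed=0):
    """ring of m diffusively coupled logistic maps f(x) = mu * x * (1 - x)."""
    rng = np.random.default_rng(seed)
    x = rng.random(m)
    out = np.empty((m, n))
    for t in range(transient + n):
        fx = mu * x * (1.0 - x)
        x = (1 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))
        if t >= transient:
            out[:, t - transient] = x
    return out

# average interlayer mutual information <i> as a scalar order parameter
series = simulate_cml(eps=0.3)
degrees = multiplex_visibility_graph(series).sum(axis=2)   # (m, n) degree sequences
pairs = combinations(range(series.shape[0]), 2)
avg_i = np.mean([interlayer_mutual_information(degrees[a], degrees[b]) for a, b in pairs])
print(f"<i> at eps = 0.3: {avg_i:.3f}")
```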
for comparison, in fig. [comparison](c) we also plot the corresponding quantity derived from a standard functional network analysis, namely the average mutual information computed directly on the multivariate time series, after performing the necessary time series symbolization. although there are qualitative similarities, subtle aspects such as the monotonic increase of synchronization with the coupling in fdt, or the onset of multiband attractors in sti, are not captured by this quantity (additional details comparing our method to standard functional network approaches can be found in the si). in panel (a) of the same figure we also report the projections of the three multiplex networks onto the corresponding graphs of layers, whose edge widths are proportional to the values of the mutual information $I_{\alpha,\beta}$ between layers $\alpha$ and $\beta$. a simple visual inspection of such graphs reveals the different types of information flow among units, depending on the dynamical phase of the system. in particular, notice that the diffusive nature of the coupling emerges in the ring-like structure of the graph corresponding to weakly interacting maps (fdt) (the analysis is extended in the si to globally coupled maps, these being a mean-field version of the cml where complete synchronisation is possible, showing that our method correctly detects the onset of this new regime). the previous study suggests that these quantities (see fig. [comparison](a)) accurately capture relevant information about the underlying dynamics. to further explore this aspect and to assess scalability, we considered a larger chain of diffusively coupled logistic maps, each governed by eq. ([cml]). new short dynamical phases, such as the so-called brownian motion of defects (bd), a transient phase between fdt and ps, emerge when the dimension of the system is increased, and as the description gets more cumbersome, projections and coarse-grained variables are needed. since the graph of layers is by construction a complete graph (just as any functional network), for visual reasons in fig. [fig:fig3] we report the structural properties of its backbone, obtained by starting from an empty graph of nodes and adding edges sequentially in decreasing order of mutual information, until the resulting graph consists of a single connected component. the structure of this backbone is unique for each phase and qualitatively different across phases, thus providing a simple qualitative way to portray different dynamics in high-dimensional systems.

as an example of the possible applications of the multiplex visibility graph approach to the analysis of real-world multivariate time series, we report a study of the prices of financial assets. namely, we considered the time evolution of the stock prices of 35 of the largest us companies by market capitalization from nyse and nasdaq (see si for details) over the period 1998-2013. the time series have a very high resolution (one data point per minute), yielding a large number of data points per company. we divided each multivariate time series into non-overlapping periods of three months (quarters), and we constructed a _temporal_ multiplex visibility graph consisting of 64 multilayer snapshots, each formed by the 35-layer multiplex visibility graph corresponding to one of the three-month periods. we then investigated the time evolution of the multiplex mutual information among layers, and how this correlates with the presence of periods of financial instability.
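one possible way to reproduce this temporal analysis is sketched below: the price matrix is split into calendar quarters and, for each quarter, the average interlayer mutual information of the corresponding multiplex visibility graph is computed. it reuses the helpers sketched above; the file name and the pandas-based windowing are our own assumptions about how the data could be organised, not a description of the authors' pipeline.

```python
import numpy as np
import pandas as pd
from itertools import combinations

def quarterly_average_mutual_information(prices):
    """prices: dataframe indexed by timestamp, one column per stock.
    returns a series of <i>, one value per calendar quarter, computed from the
    m-layer multiplex visibility graph of that quarter."""
    result = {}
    for quarter, chunk in prices.groupby(prices.index.to_period("Q")):
        layers = chunk.to_numpy().T                        # shape (m, n_quarter)
        degrees = multiplex_visibility_graph(layers).sum(axis=2)
        pairs = combinations(range(degrees.shape[0]), 2)
        result[quarter] = np.mean([interlayer_mutual_information(degrees[a], degrees[b])
                                   for a, b in pairs])
    return pd.Series(result).sort_index()

# hypothetical usage on a minute-resolution price table:
# prices = pd.read_csv("prices.csv", index_col=0, parse_dates=True)
# mi = quarterly_average_mutual_information(prices)
# mi.plot()   # peaks would be expected around 1998-1999 and 2007-2012
```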
in fig. [miseries_refined](a) we plot the value of the average multiplex mutual information as a function of time. for comparison we also report in fig. [miseries_refined](b) the analogous measure computed directly on the original series, after an appropriate symbolization (see si for details). we find that the multiplex visibility graph approach captures the onset of the major periods of financial instability (1998-1999, corresponding to the .com bubble, and 2007-2012, corresponding to the great recession that took place as a consequence of the subprime mortgage crisis), which are characterised by a relatively increased synchronisation of stock prices, clearly distinguishing them from the seemingly unsynchronised interval 2001-2007, which in turn corresponds to a more stable period of the economy. in direct analogy with the language used for cmls, we could say that in periods of financial stability the system is close to equilibrium: the degrees of freedom evolve in a quasi-independent way, reaching a fully developed turbulent state of low mutual information (hence unpredictable and efficient from a financial viewpoint). on the other hand, during periods of financial instability (bubbles and crises) the system is externally perturbed, hence driven away from equilibrium, and the degrees of freedom share larger mutual information (the system is therefore more predictable and inefficient from a financial viewpoint). as shown in figure [miseries_refined](b), an analogous analysis based on the symbolization of the time series fails to capture all such details (see si for additional analysis). finally, as also seen in the case of the multiplex visibility graphs associated to cmls, the differences in the values of average mutual information corresponding to different phases are indeed related to a different underlying structure of the network of layers.
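the "network of layers" mentioned here, and the maximal spanning trees discussed next, can be obtained in a few lines; the construction below follows the description in the text (a complete weighted graph whose edge weights are the interlayer mutual information), but the variable names and the use of networkx are our own choices, not the authors' code.

```python
import networkx as nx
from itertools import combinations

def network_of_layers(degrees):
    """complete weighted graph: one node per layer (stock or lattice site),
    edge weight = interlayer mutual information between the two layers."""
    g = nx.Graph()
    for a, b in combinations(range(degrees.shape[0]), 2):
        g.add_edge(a, b, weight=interlayer_mutual_information(degrees[a], degrees[b]))
    return g

def maximal_spanning_tree(g):
    """keeps the strongest inter-layer dependencies while staying connected;
    a heavily dominant hub in this tree signals a period in which many
    variables share information with a single driver."""
    return nx.maximum_spanning_tree(g, weight="weight")

# tree = maximal_spanning_tree(network_of_layers(degrees))
# hub = max(dict(tree.degree()).values())    # size of the largest hub in the mst
```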
in fig.[miseries_refined ]we show the maximal spanning trees ( mst ) of the networks of layers associated to six typical time windows .the three networks at the bottom of the figure represents periods of financial stability , while those at the top of the figure correspond to the three local maxima of mutual information .interestingly , the msts in periods of financial instability all have a massive hub which is directly linked to as much as of all the other nodes .conversely , the degree is more evenly distributed in the msts associated to periods of economic stability .the approach based on multiplex visibility graphs introduced in this work provides an alternative and powerful method to analyze multivariate time series .we have first validated our method focusing on signals whose underlying dynamics is well known and showing that measures describing the structure of the corresponding multiplex networks ( which are not affected by the usual problems of standard symbolization procedures ) are able to capture and quantify the onset of dynamical phases in high - dimensional coupled chaotic maps , as well as the increase or decrease of mutual information among layers ( maps ) within each phase .we then have studied an application to the analysis of multivariate financial series , showing that multiplex measures , differently from other standard methods , can easily distinguish periods of financial stability from crises , and can thus be used effectively as a support tool in decision making .+ the proposed method is extremely flexible and can be used in all situations where the dynamics is poorly understood or unknown , with potential applications ranging from fluid dynamics to neuroscience or social sciences . in this article we have focused only on a particular aspect , which is the characterization of the information flow among the different variables of the system , and we have consequently based our analysis on the study of the resulting networks of layers . however , our approach is quite general , and the mapping of multivariate time series into multiplex visibility graphs paves the way to the study of the relationship between specific structural descriptors recently introduced in the context of multiplex networks and the properties of real - world dynamical systems .we are confident that our method is only the first step towards the construction of feature - based automatic tools to classify dynamical systems of any kind . 99 kantz , h. , schreiber , t. nonlinear time series analysis ( cambridge university press , cambridge , 2006 ) .hastie , t. , tibshirani , r. , friedman , j. elements of statistical learning ( springer - verlag , 2009 ) .albert , r. , barabasi , a .-statistical mechanics of complex networks . _ rev .* 74 , * 47 ( 2002 ) .boccaletti , s. , latora , v. , moreno , y. , chavez , m. , hwang , d. u. complex networks : structure and dynamics .rep . _ * 424 , * 175 ( 2006 ) . newman , m. e. j. networks : an introduction .( oxford university press , oxford , 2010 ) .zhang , j. , small , m. complex network from pseudoperiodic time series : topology versus dynamics . _lett . _ * 96 * , 238701 ( 2006 ) .kyriakopoulos , f. , thurner , s. directed network representations of discrete dynamical maps , in _ lecture notes in computer science _ * 4488 * , 625632 ( 2007 ) .xu , x. , zhang , j. , small , m. superfamily phenomena and motifs of networks induced from time series .usa _ * 105 * , 19601 - 19605 ( 2008 ) .donner , r. v. , zou , y. , donges , j. f. , marwan , n. , kurths , j. 
recurrence networks : a novel paradigm for nonlinear time series analysis ._ new j. phys . _ * 12 * , 033025 ( 2010 ) .donner , r. v. , et al .the geometry of chaotic dynamics - a complex network perspective . _ eur . phys .j. b _ * 84 * , 653 - 672 ( 2011 ) .lacasa , l. , luque , b. , ballesteros , f. , luque , j. , nuno , j. c. from time series to complex networks : the visibility graph .usa _ * 105 * , 13 ( 2008 ) .luque , b. , lacasa , l. , luque , j. , ballesteros , f. j. horizontal visibility graphs : exact results for random time series .e _ * 80 * , 046103 ( 2009 ) .lacasa , l. on the degree distribution of horizontal visibility graphs associated to markov processes and dynamical systems : diagrammatic and variational approaches ._ nonlinearity _ * 27 * , 2063 - 2093 ( 2014 ) .gutin , g. , mansour , m. , severini , s. a characterization of horizontal visibility graphs and combinatorics on words ._ physica a _ * 390 * , 12 ( 2001 ) .lacasa , l. , luque , b. , luque , j. , nuo , j. c. the visibility graph : a new method for estimating the hurst exponent of fractional brownian motion ._ europhys .lett . _ * 86 * , 30001 ( 2009 ) .lacasa , l. , toral , r. description of stochastic and chaotic series using visibility graphs .e _ * 82 * , 036120 ( 2010 ) .luque , b. , lacasa , l. , ballesteros , f. j. , robledo , a. analytical properties of horizontal visibility graphs in the feigenbaum scenario . _ chaos _ * 22 * , 013109 ( 2012 ) .luque , b. , ballesteros , f. j. , nunez , a. m. , robledo , a. quasiperiodic graphs : structural design , scaling and entropic properties . _j. nonlin .* 23 * 335 - 342 ( 2013 ) .nunez , a. , luque , b. , lacasa , l. , gomez , j. p. , robledo , a. horizontal visibility graphs generated by type - i intermittency .e _ * 87 * , 052801 ( 2013 ) .aguilar - san juan , b. , guzman - vargas , l. earthquake magnitude time series : scaling behavior of visibility networks .j. b_. * 86 * , 454 ( 2013 ) .donner , r. v. , donges , j. f. visibility graph analysis of geophysical time series : potentials and possible pitfalls ._ acta geophysica _ * 60 * , 589 - 623 ( 2012 ) .zou , y. , small , m. , liu , z. , kurths , j. complex network approach to characterize the statistical features of the sunspot series ._ new j. phys . _* 16 * , 013051 ( 2014 ) .qian , m. c. , jiang , z. q. , zhou , w. x. universal and nonuniversal allometric scaling behaviors in the visibility graphs of world stock market indices ._ j. phys . a _ * 43 * 335002 ( 2010 ). ahmadlou , m. , ahmadi , k. , rezazade , m. , azad - marzabadi , e. global organization of functional brain connectivity in methamphetamine abusers ._ clinical neurophysiology _ * 124 * , 6 , 1122 ( 2013 ) .nuez , a. , lacasa , l. , luque , b. visibility algorithms : a short review in graph theory ( intech ) ( 2012 ) .bianconi , g. statistical mechanics of multiplex networks : entropy and overlap . _ phys .e. _ * 87 * , 062806 ( 2013 ) .nicosia , v. , bianconi , g. , latora , v. , barthelemy , m. growing multiplex networks ._ * 111 * , 058701 ( 2013 ) .de domenico , m. , et al . mathematical formulation of multilayer networks . _x _ * 3 * , 041022 ( 2013 ) .kivel , m. , et al .multilayer networks _ j. complex networks _ , * 2 * ( 3 ) , 203 - 271 ( 2014 ) . boccaletti , s. , et al . the structure and dynamics of multilayer networks .rep . _ * 544*(1 ) , 1 - 122 ( 2014 ) .battiston , f. , nicosia , v. , latora , v. structural measures for multiplex networks . _ phys .* 89 * , 032804 ( 2014 ) .lacasa , l. , nuez , a. , roldan , e. 
, parrondo , j. m. r. , luque , b. time series irreversibility : a visibility graph approach . _j. b _ * 85 * , 217 ( 2012 ) .donges , j. f. , donner , r. v. , kurths , j. testing time series irreversibility using complex network methods ._ europhys ._ * 102 * , 10004 ( 2013 ) .nicosia , v. , bianconi , g. , latora , v. , barthelemy , m. non - linear growth and condensation in multiplex networks .* 90 * , 042807 ( 2014 ) .nicosia , v. , latora , v. measuring and modelling correlations in multiplex networks .arxiv:1403.1546 ( 2014 ) .kaneko , k. pattern dynamics in spatiotemporal chaos : pattern selection , diffusion of defect and pattern competition intermettency ._ physica d _ * 34 * , 1 - 41 ( 1989 ) .kaneko , k. theory and applications of coupled map lattices ( vol .john wiley son ltd .( 1993 ) .beck , c. chaotic scalar fields as models for dark energy .* 69*(12 ) , 123515 ( 2004 ) .bullmore , e. , sporns , o. complex brain networks : graph theoretical analysis of structural and functional systems .. neurosci _ * 10*(3 ) , 186198 ( 2009 ) .tumminello , m. , lillo , f. , mantegna , r. n. correlation , hierarchies , and networks in financial markets ._ journal of economic behavior organization _ , 75(1 ) , 40 - 58 ( 2010 ) .granger , c. w. j. investigating causal relations by econometric models and cross - spectral methods ._ econometrica _ * 37*(3 ) , 424438 ( 1969 ) .sporns , o. structure and function of complex brain networks ._ dialogues clin .neurosci . _ * 15*(3 ) , 247262 ( 2013 ) .kaneko , k. clustering , coding , switching , hierarchical ordering , and control in a network of chaotic elements ._ physica d _ * 41 * , 137 - 172 ( 1990 ) .mantegna , r. hierarchical structure in financial markets ._ europhys j b _ , 11 , 193197 ( 1999 ) .v.n . and v.l .acknowledge support from the project lasagne , contract no.318132 ( strep ) , funded by the european commission . v.l . acknowledges support from the epsrc project gale , ep / k020633/1 .all the authors conceived the study , performed the experiments , analysed the results and wrote the paper .all the authors approved the final version of the manuscript .
|
our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. while a wide range of tools and techniques for time series analysis already exist, the increasing availability of massive data structures calls for new approaches to multidimensional signal processing. we present here a non-parametric method to analyse multivariate time series, based on the mapping of a multidimensional time series into a multilayer network, which allows one to extract information on a high-dimensional dynamical system through the analysis of the structure of the associated multiplex network. the method is simple to implement, general, scalable, does not require _ad hoc_ phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. we show that simple structural descriptors of the associated multiplex networks allow us to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. as a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail.
|