we study the following model . given finitely many classes ( or groups ) each containing a given initial number of members , new members arrive one at a time .for each new member arriving at time , with probability we create a new class in which we place the member ; with probability , we place the member in an existing class .we assume that each existing class attracts new members with probability proportional to a certain positive function of the cardinality of the group , called the _ reinforcement _ or _ weight scheme _ .if the groups are allowed to have different reinforcement schemes , then we show that looking at the asymptotics as time tends to infinity we have exactly three different regimes : one group is infinite and all the others are finite ; all groups are infinite ; all groups are finite .our main result , theorem [ princres1 ] , shows that in the first regime the process will eventually create a unique infinite group : this happens when each group is reinforced quite a bit , but not too much with respect to the other groups . in the second regime, the cardinality of each group goes to infinity . finally , in the last regime , all the groups will be finite ; what happens is that the process creates various peaks : in the beginning one group dominates the others , but sooner or later another group will start dominating , and this change happens infinitely many times . in this way , no group dominates definitively the other groups .this is a kind of `` there is always a faster gun '' principle .our model is a generalization of two models from two different classes : one model from the class of _ preferential attachment models _ , as introduced in and in , and one model from the class of _ reinforcement processes _ , as introduced in .the first main model we are generalizing was introduced and studied independently in and in , and later studied in more detail in and .this model is part of the class of _ preferential attachment models _ , which are models of growing networks , and which were first proposed in the highly - influential papers and . in new verticesarrive at the network one at a time and send a fixed number of edges to already existing vertices ; the probability that a new vertex is linked to a given existing vertex is proportional to the in - degree of the respective existing vertex . here , the in - degree of a vertex is the number of children of that vertex .the model studied in and is as follows : consider a model of an evolving network in which new vertices arrive one at a time , each connecting by an edge to a previously existing vertex with a probability proportional to a function of the existing vertex s in - degree .this function is called _ attachment rule _ , or _weight function _ , and it determines the existence of two main different regimes .the first regime corresponds to , and it was studied in and ; the second regime corresponds to for , and it was studied in .the third regime corresponds to for , and it was studied in . in the first two regimes, it is shown that the degrees of all vertices grow to infinity ; in the third regime there is a second phase as one vertex eventually dominates all other vertices .in the first regime , the so - called plya urn , the urn process is exchangeable and is the only case where exchangeability appears ; see .( for more results on preferential attachment models , see the survey . 
)preferential attachment models have been motivated by real - life problems , especially in regards to network and internet applications .one important example of growing networks is the world wide web , in which the more popular a page ( or vertex ) is the more hits it receives ; a similar principle applies to social interaction or to citation networks .another example is the one of users of a software program who can report bugs on a website .bugs with the highest number of requests get priority to be fixed .if the user can not find an existing report of the bug , they can create a new report .however , it could be that there are duplicate reports , in which case the number of requests is split between the reports , making it less likely that the bug the user found will get fixed . since bugs that have more requests appear higher up the search results , the user is more likely to add a request to an existing report than to a new one .this can be explained by the fact that such networks are built dynamically and that new vertices prefer to attach themselves to existing popular vertices with high in - degree rather than to existing unpopular vertices with low in - degree .the second main model we are generalizing is studied in and .it is known as the _ generalized plya urn process _ ; it belongs to the class of _ reinforcement processes _ and can be described as follows .given finitely many bins each containing one ball , new balls arrive one at a time .for each new ball , with probability we create a new bin in which we place the ball ; with probability , we place the ball in an existing bin .the probability that the ball is placed in an already existing bin is proportional to , where is the number of balls in that bin .the case with and is the well - known _ plya urn problem_. for and no new bins are created , and the process is called a _ finite plya process with exponent . if , then the process is called an _ infinite plya process_. similarly to the preferential attachment models , for generalized plya urn processes with , it is known that for the number of balls in all bins eventually grows to infinity , whereas for one bin eventually comes to dominate all other bins .( a detailed review of a number of other interesting results on plya s urn processes and on reinforcement processes in general is provided in the survey . )the generalized plya urn process has applications to many areas .we briefly mention one such application to biology ; for an extensive overview of other applications of generalized plya urn processes to reinforced random walks , statistics , computer science , clinical trials , biology , psychology and economics , see , for example , chapter 4 in .the generalized plya urn process with is used in and to study a real - life application ; the reinforcement scheme used in these papers is set to , with , and real - life data are compared against different values of and initial configurations .more precisely , the authors study a colony of ants , which explores a chemically unmarked territory randomly , starting from the nest .the exploration is done on a fixed number of paths of various lengths .each ant passes along one of the paths leaves a pheromone mark and in this way infuences the following ant s decision in choosing a particular path .this decision is also influenced by whether the paths of various lengths are discovered at the same time , or whether they are discovered at different times . 
in the real - life experimentit is noticed in the case of paths of equal lengths that , after initial fluctuations , one of the paths becomes more or less completely preferred to the others .we will show in our paper that the above two models , belonging to these two different areas , are in fact closely related because they are both special cases of our much more general model .the first of our results , theorem [ princres ] , proved for our general model , unifies the two above - described phase transition results for a very general class of weight functions ; the result holds in particular both for preferential attachment processes and for generalized plya s urn processes .it is worth noting that our condition on the weight function is much weaker than all previously - proved results for the models we generalize .moreover , in our main result , theorem [ princres1 ] , we show , under no assumptions on the weight function , that we can have only three possible phases ; in the third phase , all groups ( resp . , vertices , bins ) stay finite as time tends to infinity . to the best of our knowledge ,this is the first time when a third regime as described in our theorem [ princres1 ] , has been proved for any model of preferential attachment or plya s urn type . in the case of weight functions which give rise to the second phase , we devise in our theorem [ testo1 ] , and , respectively , in corollary [ rf ] , a test for obtaining an upper bound , and , respectively , a lower bound , on the probability that a given group ends up being dominant .the motivation for our model comes from the class of _ species sampling sequences _ , to which class our model belongs .species sampling sequences are models for exchangeable sequences with a prediction rule , that is , a formula for the conditional distribution of given for more precisely , given the first terms of the sequence , equals the distinct value observed so far with probability , for , and otherwise is a new value with distribution for some probability measure .species sampling sequences were first introduced and studied in and are now used extensively in bayesian nonparametric statistics ; see , for example , or for more on species sampling sequences or for their applications to statistics .we next introduce precisely our model .we consider the following model where at each step a new vertex and at most one new edge appear according to the following rules .the probability that the new vertex is disconnected is positive and may change in time .when a vertex is disconnected from the existing ones , it becomes a _ pioneer vertex_. we label the pioneer vertices in order of appearance .given that the new vertex is connected to an existing one , the latter is chosen with probability proportional to a reinforcement scheme of its degree .the graph formed with this procedure is the union of trees .each tree has a pioneer vertex as a root .the tree with root observed at time , is called the group ( or component ) by time .more formally , fix a collection of positive functions with and for all , and a sequence which takes values in ] converges to as .we denoted by ] is the element of , ordered from the smallest to the largest .for example , if , then =2 ] , = \infty ] for all . notice that ] exists , possibly infinite . 
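To make the dynamics of the model just described concrete, here is a small simulation sketch. It is only an illustration: the probability of creating a new group is kept constant (the model allows it to change in time), the group-dependent weight functions are taken to be a simple power of the group size, and all function and variable names are mine.

```python
import numpy as np

def simulate_groups(T, p, f, seed=0):
    """Simulate T arrivals: with prob. p open a new group, otherwise join group i
    with probability proportional to its weight f(i, size_i)."""
    rng = np.random.default_rng(seed)
    sizes = [1]                                        # one initial group containing one member
    for _ in range(T):
        if rng.random() < p:                           # a new pioneer: a brand-new group of size 1
            sizes.append(1)
        else:                                          # reinforcement step
            w = np.array([f(i, s) for i, s in enumerate(sizes)], dtype=float)
            j = rng.choice(len(sizes), p=w / w.sum())  # group chosen with prob. proportional to its weight
            sizes[j] += 1
    return sizes

# Illustrative run: every group uses the same superlinear weight f_i(k) = k**1.5.
sizes = simulate_groups(T=20_000, p=0.01, f=lambda i, k: k ** 1.5)
print("number of groups:", len(sizes), " largest group sizes:", sorted(sizes, reverse=True)[:5])
```

With a strongly reinforcing weight such as this one, a single group typically ends up absorbing most arrivals, in line with the first of the three regimes described above.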
foreach let be a sequence of independent exponential random variables , with .moreover let be a sequence of i.i.d .bernoulli such that .we are going to use these sequences to generate a .the bernoullis will be used to create new groups , while the exponentials play a central role in the allocation of new individuals into existing groups .we are assuming that all the variables involved are independent of each other .set , for all .then , for , let in words , for each , the processes are independent processes with the property that are distributed like binomial with parameters and , while is a random subset of composed by and all the partial sums of the sequence with .to each element we associate a corresponding bernoulli as follows .let be a random function defined by ) { { \,\stackrel{\mathrm{def}}{=}\,}}r^{{({1})}}_{n} ] , for some , then ) = r^{{({1})}}_{j} ] for some , then ) = r^{{({2})}}_{j} ] is labeled 2 , in fact it belongs to , and it is the smallest point belonging to this random set .the next point on the line , that is , ] then ) ] then a new group is formed , which is labeled . the probability that this happens is .next we want to compute the probability that ] . given , the two events appearing on the right - hand side of ( [ eve ] ) are independent , because the first one depends on the exponentials while the second is determined by the bernoullis .the probability of the second event , conditionally on , is .if the random variable ] was labeled 2 , then it would belong to and would be equal to = \xi_{2}[\tau_{2 } ] + w_{1}^{{({2})}}/f ( 1 ) ] is labeled 2 is . this is consistent with what happens in .suppose we defined .define \bigr ) = 1\bigr\},\ ] ] that is , the time when the group is formed .given let + \sum_{i=1}^{n } \frac{w_{i}^{{({m-1})}}}{f(n_{m}(i))}\dvtx n\ge1 \biggr\}\ ] ] and + \sum_{i=1}^{n } \frac{w_{i}^{{({m-1})}}}{f(n_{m}(i))}\dvtx\mbox { either } n=0\nonumber \\[-8pt ] \\[-8pt ] \nonumber & & \hspace*{72pt}\qquad { } \mbox{or both } n\ge1 \mbox { and } r^{{({m})}}_{n } = 0 \biggr\}.\end{aligned}\ ] ] the elements of are labeled .moreover let be defined as follows .if there exists such that = \xi_{m-1}[j] ] .if = ( \xi_{m}\setminus\xi_{m-1})[j ] ] .denote by .each point belongs , a.s ., to exactly one for some , that is , . in our construction , we label the point with if and only if .define the random function as follows .if = \widetilde{\xi}_{j}[s ] ] notice that can be used to generate a generalized attachment model , as follows . denote by \mbox { has label }\bigr\}.\ ] ] then is distributed like the process introduced in section [ main ] . 
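Before the verification that follows, a minimal computational sketch of this exponential embedding may help fix ideas. It is only an illustration: it uses a single common weight function $f$, a fixed number of groups (ignoring the creation of new groups), a truncation of the infinite sums, and names of my own choosing. Each group accumulates a total clock time of the form $\sum_i W_i/f(i)$ with $W_i$ i.i.d. exponential, and when $\sum_k 1/f(k)<\infty$ these totals are finite; the group attaining the smallest total is the one whose cardinality ends up diverging.

```python
import numpy as np

rng = np.random.default_rng(1)

def total_clock_time(f, n_terms=200_000):
    """X* = sum_{i>=1} W_i / f(i) with W_i i.i.d. Exp(1); a.s. finite iff sum 1/f(i) < infinity."""
    i = np.arange(1, n_terms + 1)
    return float(np.sum(rng.exponential(1.0, n_terms) / f(i)))

f = lambda k: k ** 1.5                                  # superlinear weight, so sum 1/f(k) < infinity
x_star = [total_clock_time(f) for _ in range(5)]        # one total clock time per group
print([round(x, 3) for x in x_star], "eventual leader:", int(np.argmin(x_star)))
```

The argument below verifies, via the memoryless property of the exponential random variables, that the labeling generated in this way has the law of the original process.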
to see this ,suppose that in the set , \mbox { with } i \le n\} ] is labeled , that is , the probability that ) ] is not labeled , then the probability that it is labeled , with , is exactly where we used the memoryless property of the exponential random variables .in fact , using this property , given that ] is distributed like the minimum of exponentials with parameters , for .the probability that the exponential is the minimum is given exactly by ( [ permo ] ) through a simple integration .summarizing , given that in the set , ] is labeled , with , is define + \sum_{i=1}^{\infty } \frac { w_{i}^{{({m})}}}{f(n_{m}(i))},\ ] ] and for any integer , set + \sum_{s=1}^{n } w^{{({j})}}_{s}/f \bigl(n_{j}(s ) \bigr ) \dvtx n \ge0 \biggr\}.\ ] ] in the next result , we prove that is a.s .finite , for any .this , together with ( [ xim ] ) and ( [ ximtil ] ) , implies that is an accumulation point for and .we say that a vertex is generated by if \in\xi^{*}_{j} ] from 1 to 0 , we would have that ] equals 1 , this point might not be able to generate a child in using the exponentials and bernoulli that have been defined so far .this is the case if ) = \infty ] and all the have already been used .this is going to be an important point in the proof of lemma [ genac ] .[ finit ] the random variables , with , are almost surely finite .fix . set , and let . then , with are and are independent of the , with . if , then . hence + \sum_{i=1}^{\infty } \sum _ { j= z_{m}(i)}^{z_{m}(i+1 ) -1}\frac{w_{j}^{{({m})}}}{f(n_{m}(j ) ) } \\ & = & \xi[\tau_{m } ] + \sum_{i=1}^{\infty } \frac1{f(i ) } \sum_{j= z_{m}(i)}^{z_{m}(i+1 ) -1}w_{j}^{{({m})}}.\end{aligned}\ ] ] as the series in the latter expression is composed by nonnegative random variables , it is a.s .finite if its mean is finite .its mean is exactly to see this , notice that , is independent of , , which implies = \frac1{1-p}.\ ] ] moreover , we have that ] is stochastically smaller than an exponential random variable whose mean is smaller than .moreover , the random variable is negative binomial with parameters and .this can be checked by induction ; in fact , is geometrically distributed with mean .suppose this is true for .then we have to wait for an independent to create the next group .combining this fact with ( [ sumf ] ) we have that <\infty ] we are going to prove the following statement : < m ] only depends on with .hence ] which is determined by a disjoint collection of exponentials and bernoullis .the probability that ] are equal is 0 , as they are continuous independent random variables .this is exactly the probability that .as the set of , is countable , are all , a.s . , distinct .next we show that ( [ fqaf ] ) implies ( [ info ] ) .as already mentioned , the sequence ] , a.s . ] .this implies that \le x_{i}^{*} ] .now we turn to the proof of the other inequality which implies ( [ info ] ) .fix .it is sufficient to prove that ( [ fqaf ] ) implies } \le \inf_{i } x^{*}_{i } - { \varepsilon}\bigr ) < \infty.\ ] ] in fact , if ( [ acm1 ] ) holds , only finitely many satisfy ) ] is finite .for each we have that there exists a such that \in\xi^{*}_{u} ] , then belongs to .in fact , \ge\xi^{*}_{u(i)}[1] ] where we set . as we are assuming that ( [ fqaf ] ) holds , the random tree has only finitely many vertices satisfying >m ] , that is , \bigr ] \bigr ) = \infty,\ ] ] then already infinitely many groups have been created . 
hence <\xi^{*}_{j}[n] ] can not generate any new group in using the exponentials and bernoullis defined so far , because they have already been used . to overcome this problem, we create a new tree , larger than , by introducing new random variables which allow also the observations ] satisfies ( [ inft1 ] ) , and the associated bernoulli equals one , a new group , that we label , is created ( notice that we can not use any of the integers as a label , because they are already all taken ) . in this case, we set = \xi^{*}_{j}[n] ] , then belongs to the new tree that we are going to define .but then we would have to allow that is able to generate groups as well , in the same fashion .this approach would require that we introduce new sequences of exponentials and bernoullis , and the notation would be quite awkward . hence we prefer a different approach .before we proceed in a formal description of , notice that for this tree the number of offspring per vertex are independent and identically distributed .in fact , } ] as determined by disjoint sets of exponentials and bernoulli .moreover , analyzing the event } < x^{*}_{j } - \xi^{*}_{\nu}[0]\} ] for some and .this implies that the number of offspring per vertex are i.i.d .now we are ready to give a formal construction of .suppose that to each we associate an extra sequence of exponential random variables , with parameter 1 , and an independent copy , say , of , with let the previous random variable counts also the satysfying ) = \infty ] the time when this vertex was generated and by = \xi^{*}_{u}[0]+ \sum_{j=1}^{n } w^{{({u})}}_{j}/f(n_{u}(j)) ] which decreases faster than for any . for any vertex denote by its distance from the root of the tree .recall that the set of vertices at distance from the root is called level .fix a large parameter .a vertex of is _ good _ if the element which generates is smaller than .a path is a ( possibly finite ) sequence of vertices , such that is generated by .we say that a path connects vertex to level if the first element of the path is and the last lies at level .we build the following random path .we start from and if this vertex has at least one offspring in , we choose one at random assigning the same probability to each offspring .we denote its label as .if has at least one offspring , we choose one of them at random and denote its label by .we follow this procedure until we either reach level or find a vertex with no offspring . the event path connects to a vertex at level equals the event that each of the has at least one offspring .hence notice that each event is independent of \mathbf{u} 1 n+1 ] is stochastically larger than , as is a.s .decreasing in .this proves the relationship between ( [ nz2 ] ) and the first probability in the last equation of ( [ nz1 ] ) .next a simple exponential bound , which uses the fact that are independent , yields where as . for each , is the fenchel legendre transform ( i.e. , we minimize the exponent on ) of in the point .hence , the number of good vertices in at level is smaller or equal to hence only finitely many vertices in are good .this implies that only finitely many vertices in are good , and this , in turn , implies ( [ fqaf ] ) .proof of theorem [ princres ] ( ) first suppose that .the minimum of is a.s .unique , and we denote it by . by lemma [ genac ] = x^{*}_{j^{*}} ] .if , then . 
define )=1\} ] , respectively , .we set )=1\} ] , with is a ) .let .denote by .this implies that the group in is the group in .let , and this implies that is actually a minimum and has a unique minimizer .following the same reasoning given in the previous paragraph , we conclude that the only group whose cardinality grows to infinity is .proof of corollary [ findegree ] we first assume that . for any , denote by the set of groups which are generated by . in virtue of ( [ mo ] ), we have that < x^{*}_{u } \mbox { for only finitely many }\bigr \ } \qquad\mbox{holds a.s.}\ ] ] notice that for which is not a vertex of we have that >\inf_{i}x^{*}_{i}= \lim_{n \ti } \xi[n] u u ] , for all .then ] , a.s .now notice that i \ge1 ] .assume that is a.s .finite and notice that for fixed , as , then . as is an accumulation point for , then ] , as is strictly less than which is the only accumulation point for .the latter statement is a direct consequence of the definitions of and .=-1 on the other hand , if for some , then using again that all the which are finite are also a.s .distinct , we have that = x^{*}_{i}. ] then there would be an accumulation point smaller than , which would yield a contradiction . next we analyze the general case , that is , , for some and all .the problem here is that the reinforcement function is group dependent . in the special case we had that the first point labeled was ] set - \xi^{*}_{i}[0]+ \upsilon_{i-1}\bigl[v(i)\bigr ] \dvtx n \ge0 \bigr\ } , \nonumber \\[-8pt ] \\[-8pt ] \nonumber u^{*}_{i } & { { \,\stackrel{\mathrm{def}}{=}\ , } } & x^{*}_{i } - \xi^{*}_{i}[0]+ \upsilon_{i-1}\bigl[v(i)\bigr].\end{aligned}\ ] ] hence moreover , let .it is easy to check that embeds .we prove the theorem on the event is a.s .finite for at least one created group . repeating the argument we gave for the case , we see that either all the groups remains finite or there exists exactly one dominating the others .proof of theorem [ princres1 ] ( ) first assume that , for some . under the assumptions of this part of the theorem, we have that each , a.s .hence . by our construction , either = \infty ] , a.s. , in which case <\infty ] . hence + \sum _ { \ell=1}^{\infty } w^{{({u})}}_{\ell}/f_{u } \bigl(n_{u}(\ell)\bigr ) > x^{*}_{j } \biggr ) \nonumber\hspace*{-35pt } \\[-8pt ] \\[-8pt ] \nonumber & & \quad\le\mathbb{p } \biggl(\xi_{j}[\tau_{u } ] + \sum _ { \ell=1}^{\infty } w^{{({u})}}_{\ell}/f_{u } \bigl(n_{u}(\ell)\bigr ) > x^{*}_{j } \biggr)\hspace*{-35pt } \\ & & \quad\le\mathbb{p } \biggl(\sum_{\ell=1}^{\infty } w^{{({u})}}_{\ell } /f_{u}\bigl(n_{u}(\ell ) \bigr ) > \sum_{\ell = u^{2}}^{\infty } w^{{({j})}}_{\ell } /f_{j } \bigl(n_{j}(\ell)\bigr ) { | } \tau_{u }< u^{2 } \biggr)+ \mathbb{p } \bigl(\tau_{u } \ge u^{2 } \bigr).\nonumber\hspace*{-35pt}\end{aligned}\ ] ] the last inequality in ( [ seq ] ) is justified as follows . for any pair of events and we have that notice that is measurable with respect to the -algebra in words , if we know the first observations of each , with , and the associated bernoullis , we know if the event holds . hence the latter event is independent of the pair hence the last expression in ( [ seq ] ) equals the last expression is summable . 
to see this , in virtue of ( [ sum1 ] ) , we just need to prove that the first term is summable .then our argument follows from an application of the first borel cantelli lemma .in fact , the summability implies that for infinitely many .set , and recall that is fixed .for any pair of random variables and and any constant , we have that we apply this fact to obtain notice while , by a similar reasoning , notice that in virtue of markov s inequality , we have and the right - hand side is summable in for fixed . in a similar way , using chebyshev s inequality after applying the function to both sides and choosing , we obtain the last expression is summable in , because , for fixed , is larger than for all sufficiently large .suppose that the positive function satisfies the condition .consider an urn with white balls and red one .we pick a ball at random , and it is white with probability .suppose that by the time of the extraction , we picked white balls and red ones .the probability to pick a white ball at the next stage becomes .let denote by the probability measures referring to the urn with initial conditions and dynamics described above .let and recall that .let the process be a standard brownian motion , which starts from the point .denote by the measure associated with this brownian motion .we use this process to generate the urn sequence described at the beginning of this section , as follows . set and let if , then set ; otherwise set .suppose we defined and .set . on the event , we define set by the ruin problem for brownian motion , we have that which is exactly the urn transition probability . in this way we embedded the urn into brownian motion .in fact , the process , with , is distributed like the number of white balls withdrawn from the urn associated to the reinforcement scheme described at the beginning of this section .notice that define this limit exists because the sequence of stopping times is increasing . for this reason is itself a stopping time .define moreover , in virtue of theorem [ hrubin ] we have that exactly one of the collection of events and holds finitely many times , a.s .this implies that the event holds -a.s . by our embedding ,we have that where was defined at the beginning of this section . proof of theorem [ estur ] in order to prove our result we only need to prove the following : let this stopping time can be infinite with positive probability .notice that on , by ( [ punto ] ) , we have that the urn generated by the brownian motion contains , at time , an equal number of white and red balls , and viceversa , if we let then we have that to prove ( [ tie1 ] ) , notice that for , with , the random sequence can not switch sign without becoming .so if and , for some , then there exists an , with , such that . in this case , by time we have a tie .we use this fact throughout the proof . recall that under the brownian motion starts from . for ,let notice that for .moreover , by time , with , on the event , at least red balls have been extracted . to see this , we first focus on , and prove that by this time , on the event , at least one red ball has been picked .suppose that this is not true ; that is , suppose that we picked 0 red balls by time . the reader can check from our embedding that this implies that this would imply that contradicting our hypothesis . 
by reiterating the same reasoning we get that the statement holds true for any .define on the brownian motion , after time , will hit before there is a tie in the urn , because , for .next we prove that for any , if holds , then only a finite number of red balls are extracted , that is , .we split this proof into two parts : we first prove that and then . in order to prove the first inclusion , recall that under the brownian motion starts at .this implies that if , then infinitely many balls will be extracted before the brownian motion hits . as , we have that infinitely many balls will be extracted before hits , that is , before a tie .this implies that , which in turn implies .next we prove that . on the set , by time number of red balls extracted is at least .this implies that this is a consequence of ( [ punto ] ) and the fact that is a nondecreasing random sequence , and if for some , then .let that is , the first time after that a white ball is extracted . the stopping time could be infinite .next we prove that on the random time is a.s .recall that is the first time that the process hits , and that .this implies that by time the number of white balls generated by the brownian motion , plus the initial , overcomes that of the red ones . on , after time , the process will hit before it hits .this implies that a.s . on .in fact if no white balls are extracted after time the process would hit before it hits giving a contradiction .moreover on , we have that , hence by time the white balls are still ahead with respect the red ones .we can repeat the same reasoning with to argue that is a.s .finite and by time the white balls are still in advantage . by reiterating this argument, we get that only finite many red balls will be extracted , because each occurs before a tie , a.s .hence holds when holds .this implies that for each .if holds , then either holds or holds . if the latter event holds , independently of the past , the probability that only finitely many white balls are picked is exactly 1/2 , by symmetry .moreover , the events are independent , because they are determined by the behavior of disjoint increments of the brownian motion . by the standard ruin problem for this process, we have that we get of theorem [ testo1 ] notice that _ lead _ must be a vertex of . under the assumptions of the theorem , the probability that is smaller or equal to the probability that .the latter probability is bounded as follows : we set and , where . in virtue of ( [ cmlf ] ) ,the probability that all the vertices at level are good is at least where were introduced at the end of the proof of lemma [ genac ] , and was introduced in ( [ mo ] ) .recall that is the set of the vertices of at level .moreover , recall that .we have & \le & c_{1}{\operatorname{e}}^{-c_{2 } m } + m^{n } \inf_{r > 1 } \bigl({\operatorname{e}}^{-c_{n}(r , m ) n } + r^{-n}\bigr ) .\end{aligned}\ ] ] proof of corollary [ rf ] set and define recursively .notice that .if a vertex of belongs to , then we have that for some .we have - \mathbb{p}(\mathrm{lead } \in g_{2}).\ ] ] we bound the last probability in the previous expression using theorem [ testo1 ] .order the groups at level one , starting from the smaller . 
as , we have that by the time the group at level 1 is created , there are at least balls in urn 1 .hence , using theorem [ estur ] , we get \le \sum_{k=1}^{\infty } \frac12 \prod _ { \ell=1}^{k-1 } \frac{f(\ell)f_{k}}{1 + f(\ell ) f_{k}}.\vspace*{-1pt}\ ] ]fix two real numbers and , and two sequences of positive real numbers and .suppose we have an urn with ( resp . , ) white ( resp ., red ) balls .if at step there are exactly white balls , with , then the probability to pick a white ball is if a white ( resp ., red ) ball is picked , at time the composition of the urn becomes ( resp ., j ) white balls and ( resp . , ) red ones . denote by let be the measure describing the dynamics of this urn .we have the following result , due to herman rubin ; see the appendix in .[ hrubin ] we have the following 3 cases : if and , then both the number or red balls and the number of white balls in the urn goes to , a.s . , as . if and , then if and , then and both and are strictly positive .we thank two anonymous referees for their suggestions .moreover we thank peter mrters for helpful discussions , patrick lahr for spotting a few typos in an early version and roman koteck for pointing out to us the reference , which was one of the first to introduce preferential attachment schemes .
We study a general preferential attachment and Pólya's urn model. At each step a new vertex is introduced, which can be connected to at most one existing vertex. If it is disconnected, it becomes a pioneer vertex. Given that it is not disconnected, it joins an existing pioneer vertex with probability proportional to a function of the degree of that vertex. This function is allowed to be vertex-dependent, and is called the reinforcement function. We prove that there can be at most three phases in this model, depending on the behavior of the reinforcement function. Consider the set whose elements are the vertices with cardinality tending a.s. to infinity. We prove that this set either is empty, or it has exactly one element, or it contains all the pioneer vertices. Moreover, we describe the phase transition in the case where the reinforcement function is the same for all vertices. Our results are general, and in particular we are not assuming monotonicity of the reinforcement function. Finally, consider the regime where exactly one vertex has a degree diverging to infinity. We give a lower bound for the probability that a given vertex ends up being the leading one, that is, that its degree diverges to infinity. Our proofs rely on a generalization of the Rubin construction given for edge-reinforced random walks, and on a Brownian motion embedding.
In recent years there has been a huge explosion in the variety of sensors and in the dimensionality of the data produced by these sensors, across a large number of applications ranging from imaging to other scientific applications. The total amount of data produced by the sensors is much more than the available storage, so we often need to store only a subset of the data, and we want to be able to reconstruct the entire data from it. The famous Nyquist-Shannon sampling theorem [5] tells us that if we can sample a signal at twice its highest frequency we can recover it exactly. In applications this often results in too many samples, which must be compressed in order to be stored or transmitted. An alternative is compressive sampling (CS), which provides a more general data acquisition protocol by reducing the signal directly into a compressed representation obtained by taking linear combinations. In this paper we present a brief review of the conventional approach to compressive sampling and propose a new approach that makes use of the EM algorithm to reconstruct the entire signal from the compressed signals.

When a signal is sparse in some basis, a few well chosen observations suffice to reconstruct the most significant nonzero components. Consider a signal $\mathbf{x}$ represented in terms of a basis expansion. The basis is such that only a few coefficients have significant magnitude. Many natural and artificial signals are sparse in the sense that there exists a basis where the above representation has just a few large coefficients and many small coefficients. As an example, natural images are likely to be compressible in discrete cosine transform (DCT) and wavelet bases [1]. In general we do not know a priori which coefficients are significant. The data collected by a measurement system consist of some linear combinations of the signals, $\mathbf{y}=\Phi\mathbf{x}+\mathbf{e}$, where $\Phi$ is a measurement matrix (also called sensing matrix) which is chosen by the statistician. The measurement process is non-adaptive, as $\Phi$ is fixed in advance and hence does not depend in any way on the signal. The error $\mathbf{e}$ is assumed to be bounded, or bounded with high probability.

Our aim here is to:

* design a stable measurement matrix $\Phi$ that preserves the information in any $k$-sparse signal during the dimensionality reduction from $n$ to $m$ dimensions;
* design a reconstruction algorithm to recover the original data from the measurements.

We note that the recovery algorithm addresses the problem of solving for $\mathbf{x}$ when the number of unknowns (i.e., $n$) is much larger than the number of observations (i.e., $m$).
In general this is an ill-posed problem, but CS theory provides a condition on $\Phi$ which allows accurate estimation. One such popularly used property is the restricted isometry property (RIP) [2]. The matrix $\Phi$ satisfies the restricted isometry property of order $k$ with parameter $\delta_k$ if
$$(1-\delta_k)\|\mathbf{x}\|_2^2 \le \|\Phi\mathbf{x}\|_2^2 \le (1+\delta_k)\|\mathbf{x}\|_2^2$$
holds simultaneously for all sparse vectors $\mathbf{x}$ having no more than $k$ nonzero entries. Matrices with this property are denoted by RIP$(k,\delta_k)$. The following theorem shows that matrices satisfying RIP will yield accurate estimates of $\mathbf{x}$ with the help of recovery algorithms.

Let $\Phi$ be a matrix satisfying RIP with a sufficiently small constant, and let $\mathbf{y}=\Phi\mathbf{x}+\mathbf{e}$ be a vector of noisy observations, where $\|\mathbf{e}\|_2\le\epsilon$. Let $\mathbf{x}_k$ be the best $k$-sparse approximation of $\mathbf{x}$, that is, the approximation obtained by keeping the $k$ largest entries of $\mathbf{x}$ and setting the others to zero. Then the estimate obtained from the reconstruction
$$\hat{\mathbf{x}} = \arg\min_{\mathbf{z}} \|\mathbf{z}\|_1 \quad\text{subject to}\quad \|\mathbf{y}-\Phi\mathbf{z}\|_2\le\epsilon \tag{3.1}$$
obeys an error bound of the form
$$\|\hat{\mathbf{x}}-\mathbf{x}\|_2 \le C_0\,k^{-1/2}\|\mathbf{x}-\mathbf{x}_k\|_1 + C_1\,\epsilon,$$
where $C_0$ and $C_1$ are constants depending on the RIP constant but not on $\mathbf{x}$ or $\mathbf{e}$. The reconstruction in (3.1) is equivalent to
$$\min_{\mathbf{z}} \|\mathbf{y}-\Phi\mathbf{z}\|_2^2 + \lambda\|\mathbf{z}\|_1,$$
where $\lambda$ is a regularization parameter which depends on $\epsilon$.

In this approach we apply the EM algorithm for the reconstruction of the signal. Since we observe some linear combinations of the signals instead of the entire signals, we can treat the observed linear combinations as our observed data and the entire signals as the complete data, which are unobserved. Hence we apply the EM algorithm, as the most natural tool of missing data analysis, to reconstruct the data. Here we assume that the data come from a population with mean $\boldsymbol{\mu}$ and that $\boldsymbol{\mu}$ is sparse (with respect to some basis). Without loss of generality we assume that $\boldsymbol{\mu}$ is sparse with respect to the Euclidean basis; we assume that at most $k$ elements of $\boldsymbol{\mu}$ are nonzero. Let us assume that the parent population is normal, viz. $N(\boldsymbol{\mu},\sigma^2 I_n)$.
Then we have the signal as $\mathbf{x}=\boldsymbol{\mu}+\boldsymbol{\epsilon}$, where $\boldsymbol{\epsilon}\sim N(\mathbf{0},\sigma^{2}I_{n})$. Then, with the help of the sensing matrix $\Phi$, we have the observed data as $\mathbf{y}=\Phi\mathbf{x}$. Thus, unlike the conventional approach, here we assume that the signals themselves are subject to error, and consequently the observed combinations of the signals are also subject to error. Here we try to reconstruct the unobserved true signals, which are free from error.

We then treat $\mathbf{x}$ as the complete data and $\mathbf{y}$ as the observed data, and try to estimate $\boldsymbol{\mu}$ from the observed data using the EM algorithm. The complete-data likelihood is given by
$$f(\mathbf{x})=\biggl(\frac{1}{\sigma\sqrt{2\pi}}\biggr)^{n}e^{-\frac{1}{2\sigma^{2}}(\mathbf{x}-\boldsymbol{\mu})'(\mathbf{x}-\boldsymbol{\mu})},\qquad \mathbf{x}\in\mathbb{R}^{n},\ \boldsymbol{\mu}\in\mathbb{R}^{n},\ \sigma>0,$$
and the conditional distribution of the complete data given the observed data is again multivariate normal, with mean and covariance obtained from the joint normal distribution of $(\mathbf{x},\mathbf{y})$. After $t$ iterations of the EM algorithm we have the current estimate $\boldsymbol{\mu}^{(t)}$, and the next iteration consists of two steps.

E-step: we compute the expected complete-data log-likelihood with respect to the conditional distribution of $\mathbf{x}$ given $\mathbf{y}$ and $\boldsymbol{\mu}^{(t)}$, and denote it by $Q(\boldsymbol{\mu})$.

M-step: here we try to maximize $Q(\boldsymbol{\mu})$ with respect to $\boldsymbol{\mu}$. We know that $\boldsymbol{\mu}$ is sparse, i.e., some of the $\mu_{i}$ are zero, so we need to maximize $Q(\boldsymbol{\mu})$ w.r.t. $\boldsymbol{\mu}$ belonging to a subset $S$. For this we note that $S=\bigcup_{i=1}^{\binom{n}{k}}S_{i}$, where each $S_{i}$ is the set of vectors $\boldsymbol{\mu}$ whose nonzero elements are confined to a specific set of at most $k$ positions, one set $S_i$ for each of the $\binom{n}{k}$ possible choices of these positions. We then find $\arg\max_{\boldsymbol{\mu}\in S_{i}}Q(\boldsymbol{\mu})$ for each $i$ and call the estimate $\hat{\boldsymbol{\mu}}^{(t+1)}(S_{i})$. Setting $\frac{\partial}{\partial\mu_{j}}Q(\boldsymbol{\mu})=0$ for those $j$ such that $\mu_{j}\neq 0$, we find that
$$\hat{\mu}_{j}^{(t+1)}(S_{i})=\mu_{j}^{(t)}+\alpha_{j}-\beta_{j},$$
where $\alpha_{j}$ is the $j$th element of $\Phi'(\Phi\Phi')^{-1}\mathbf{y}$ and $\beta_{j}$ is the $j$th element of $\Phi'(\Phi\Phi')^{-1}\Phi\boldsymbol{\mu}^{(t)}$. Then we choose the $\hat{\boldsymbol{\mu}}^{(t+1)}(S_{i})$ for which $Q(\hat{\boldsymbol{\mu}}^{(t+1)}(S_{i}))$ is maximum as the new estimate of $\boldsymbol{\mu}$ at the $(t+1)$th iteration; thus the estimate of $\boldsymbol{\mu}$ is $\hat{\boldsymbol{\mu}}^{(t+1)}=\hat{\boldsymbol{\mu}}^{(t+1)}(S_{i})$ such that $Q(\hat{\boldsymbol{\mu}}^{(t+1)}(S_{i}))\geq Q(\hat{\boldsymbol{\mu}}^{(t+1)}(S_{j}))$ for all $j\neq i$. We iterate until convergence.

The approach discussed in the previous section requires, at the M-step of each EM iteration, the maximization of $Q(\boldsymbol{\mu})$ over $\binom{n}{k}$ subspaces and then choosing the one for which it is maximum. This is computationally expensive and practically impossible to implement for large $n$. Hence we suggest an alternative way which, instead of maximizing over $\binom{n}{k}$ subspaces in each EM iteration, identifies a particular subspace where $\boldsymbol{\mu}$ is most likely to belong, and then finds the maximum over that subspace in each M-step.
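Before turning to that modification, a minimal sketch of the naive EM iteration just described may make the procedure, and its cost, concrete. This is only an illustration: the function and variable names are mine, the stopping rule is an arbitrary tolerance, and the brute-force search over all $\binom{n}{k}$ supports is exactly what becomes infeasible for large $n$ (note that $\sigma^2$ cancels from the comparison of $Q$ values, so it is not needed here).

```python
import itertools
import numpy as np

def naive_em(y, Phi, k, n_iter=50, tol=1e-8):
    """Naive EM for y = Phi x, x ~ N(mu, sigma^2 I), with mu having at most k nonzero entries.

    The M-step searches over all (n choose k) supports, so this is feasible only for small n.
    """
    _, n = Phi.shape
    G = Phi.T @ np.linalg.inv(Phi @ Phi.T)          # Phi'(Phi Phi')^{-1}, as in alpha_j and beta_j
    mu = np.zeros(n)
    for _ in range(n_iter):
        # componentwise mu_j^(t) + alpha_j - beta_j, i.e. the conditional mean given y and mu^(t)
        cond_mean = mu + G @ (y - Phi @ mu)
        best_q, best_mu = -np.inf, mu
        # M-step: maximize Q over each support S_i and keep the best one
        for support in itertools.combinations(range(n), k):
            idx = list(support)
            cand = np.zeros(n)
            cand[idx] = cond_mean[idx]              # maximizer of Q restricted to S_i
            q = -np.sum((cond_mean - cand) ** 2)    # Q(mu), up to terms that do not depend on mu
            if q > best_q:
                best_q, best_mu = q, cand
        if np.linalg.norm(best_mu - mu) < tol:      # stop when the estimate stabilizes
            break
        mu = best_mu
    return mu
```

Even at $n=50$ and $k=5$ the support search already visits over two million subsets, which is why the modification below replaces it.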
Let $S_{\boldsymbol{\mu}}$ be the subspace where $\boldsymbol{\mu}$ lies, that is, $S_{\boldsymbol{\mu}}=\{(x_1,x_2,\ldots,x_n)\in\mathbb{R}^n:\forall i,\ \mu_i=0\Rightarrow x_i=0\}$. We note that if we find the unrestricted maximizer of $Q(\boldsymbol{\mu})$ in each M-step of the EM algorithm (henceforth called unrestricted EM), that is, if we find $\hat{\boldsymbol{\mu}}_{\mathrm{un}}=\arg\max_{\boldsymbol{\mu}\in\mathbb{R}^n}Q(\boldsymbol{\mu})$, then the unrestricted EM estimate $\hat{\boldsymbol{\mu}}_{\mathrm{un}}$ should lie close to $S_{\boldsymbol{\mu}}$. Hence the unrestricted estimate should provide an indication of the subspace in which the original parameter lies. Hence we find which components of $\hat{\boldsymbol{\mu}}_{\mathrm{un}}$ are significant, so that we can take the other, insignificant components to be zero and take the corresponding subspace thus formed to be the one in which our estimate should lie. We test which components of $\hat{\boldsymbol{\mu}}_{\mathrm{un}}$ are significantly different from zero.

Now, for the unrestricted EM algorithm, the estimate of $\boldsymbol{\mu}$ should converge to the maximizer of the observed log-likelihood. The observed log-likelihood is that of $\mathbf{y}\sim N(\Phi\boldsymbol{\mu},V)$; setting its derivative with respect to $\boldsymbol{\mu}$ equal to zero we get
$$\Phi'V^{-1}\Phi\boldsymbol{\mu}=\Phi'V^{-1}\mathbf{y},\tag{1}$$
where $V=\sigma^2\Phi\Phi'$ is the covariance matrix of $\mathbf{y}$. The above equation (1) does not have a unique solution, as $\operatorname{rank}(\Phi'V^{-1}\Phi)_{n\times n}=m\ll n$. Hence the observed likelihood does not have a unique maximum, and our unrestricted EM algorithm will produce many estimates of $\boldsymbol{\mu}$. Among these many estimates we choose the sparsest solution. This is taken care of by taking the initial estimate of $\boldsymbol{\mu}$ as $\mathbf{0}$ in the iterative process, as then the estimate will hopefully converge to the nearest solution, which will be the sparsest one. We will justify this later with the help of simulation.

In our simulations we plot the mean residuals along with the standard error bars. For small values of $n$ we plot the average residuals for the three approaches discussed earlier. For $n=10$ we find that the naive approach works uniformly best for different values of $\sigma$. Thus it would have been nice if we could apply this naive approach for all values of $n$, but unfortunately, due to the inapplicability of this procedure, we turn our attention towards the new approach.

For moderate to large values of $n$ we cannot plot the residuals of the naive approach, as it is computationally impossible. Also, the comparison between the new and the conventional approach cannot be performed for very large values of $n$ because of computational time. We find that the new approach works uniformly better for different values of $\sigma$ for both $n=50$ and $n=100$.

The value of $m$ in the above procedures is an important point of consideration. It signifies the sampling fraction, that is, to what extent we can reduce the dimensionality of the problem. We fix $n$ and $\sigma$ and plot the average residuals for varying $m$. The procedure works well if we take $m=500$; that is, at this variance level we can afford a 50% dimensionality reduction.
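As an illustration of the modified procedure compared above, the sketch below runs the unrestricted EM from $\boldsymbol{\mu}^{(0)}=\mathbf{0}$ and then zeroes the components that are not significantly different from zero. The text does not spell out the significance test, so the z-type cutoff used here, based on $\mathrm{Var}(\hat{\boldsymbol{\mu}}_{\mathrm{un}})=\sigma^2\Phi'(\Phi\Phi')^{-1}\Phi$ under $\mathbf{y}=\Phi\mathbf{x}$ with $\mathbf{x}\sim N(\boldsymbol{\mu},\sigma^2 I_n)$, is only a stand-in; all names and settings are illustrative.

```python
import numpy as np

def unrestricted_em(y, Phi, n_iter=100, tol=1e-10):
    """Unrestricted EM: the M-step maximizes Q over all of R^n, with no sparsity constraint."""
    _, n = Phi.shape
    G = Phi.T @ np.linalg.inv(Phi @ Phi.T)             # Phi'(Phi Phi')^{-1}
    mu = np.zeros(n)                                    # start at 0, aiming at the minimum-norm solution
    for _ in range(n_iter):
        mu_new = mu + G @ (y - Phi @ mu)                # unrestricted maximizer of Q at this iteration
        if np.linalg.norm(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu                                           # a solution of equation (1) when Phi has full row rank

def modified_em(y, Phi, sigma, z=1.96):
    """Zero out the components of the unrestricted estimate that are not significantly nonzero."""
    mu_un = unrestricted_em(y, Phi)
    P = Phi.T @ np.linalg.inv(Phi @ Phi.T) @ Phi        # projection onto the row space of Phi
    se = sigma * np.sqrt(np.diag(P))                    # placeholder standard errors: Var(mu_un) = sigma^2 P here
    return np.where(np.abs(mu_un) > z * se, mu_un, 0.0)

# Illustrative run on synthetic data (all settings invented for the example).
rng = np.random.default_rng(0)
n, m, k, sigma = 100, 50, 5, 0.1
mu_true = np.zeros(n)
mu_true[:k] = 3.0
Phi = rng.standard_normal((m, n))
y = Phi @ (mu_true + sigma * rng.standard_normal(n))    # y = Phi x with x ~ N(mu, sigma^2 I_n)
print("selected components:", np.flatnonzero(modified_em(y, Phi, sigma)),
      " true support:", np.flatnonzero(mu_true))
```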
thus we find that the new approach works better than the conventional method of signal reconstruction. the conventional method of reconstructing the signal assumes the noise to be bounded with high probability and thus fails to perform well for large error variance, whereas the new approach allows the error variance to be large and is thus applicable in these situations as well. also, the conventional approach assumes that the signal is sparse, and sparsity is an essential ingredient in the reconstruction algorithm. the newly proposed approach can easily be generalized even to situations where the signals need not be sparse. however, we find that the naive approach we proposed earlier works best if it can be implemented. for moderate to large dimensional problems, which are common in practice, the new algorithm works better than the conventional approach. the present paper treats observations or signals as iid samples from a population. this can be extended by assuming a non-iid setup where the signals may be generated from a stochastic process. further, here we work with linear combinations of all signals. a further extension can be done where we build the model with linear combinations of some signals and apply it to future signals in the process.
|
conventional approaches to sampling signals follow the celebrated theorem of nyquist and shannon. compressive sampling, introduced by donoho and by candes, romberg and tao, is a new paradigm that goes against the conventional methods of data acquisition and provides a way of recovering signals using fewer samples than the traditional methods require. here we suggest an alternative way of reconstructing the original signals in compressive sampling using the em algorithm. we first propose a naive approach, which has certain computational difficulties, and subsequently modify it to a new approach which performs better than the conventional methods of compressive sampling. the comparison of the different approaches and the performance of the new approach have been studied using simulated data.
|
the description of the dynamics of open quantum systems has attracted increasing attention during the last few years .the major reason for this is the identification of the phenomena of decoherence and dissipation , which characterize the dynamics of open quantum systems interacting with their surroundings , as a main obstacle to the realization of quantum computers and other quantum devices .secondly , recent experiments on engineering of environments have paved the way to new proposals aimed at creating entanglement and superpositions of quantum states exploiting decoherence and dissipation . a common approach to the dynamics of open quantum systems consists in deriving a master equation for the reduced density matrix which describes the temporal behavior of the open system .the solution for the master equation can then be searched by using analytical or simulation methods , or the combination of both .this article concentrates on the developing of new monte carlo simulation methods for non - markovian open quantum systems .the general feature of the monte carlo methods is the generation of an ensemble of stochastic realizations of the state vector trajectories . the density matrix and the properties of the system of interestare then consequently calculated as an appropriate average of the generated ensemble .some common variants of the monte carlo methods for open systems include the monte carlo wave - function ( mcwf ) method , the quantum state diffusion ( qsd ) , and the non - markovian wave function ( nmwf ) formulation unravelling the master equation in an extended hilbert space .the mcwf method has been very successfully used to model the laser cooling of atoms .actually , 3d laser cooling has so far been described only by mcwf simulations .qsd in turn has been found to have a close connection to the decoherent histories approach to quantum mechanics , and nmwf method has been recently applied to study the dynamics of quantum brownian particles . the various monte carlo methods and related topics have been reviewed e.g. in refs . in general , simulating open quantum systems is a challenging task . it has been shown earlier that the methods mentioned above can solve a wide variety of problems .nevertheless , sometimes there arise situations in which the complexity of the studied system or the parameter region under study makes the requirement for the computer resources so large that the solution may become impossible in practice , though not in principle .thus , it is important to assess the already existing methods from this point of view , and develop new variants to improve their applicability .this is the key point of this article . here, we address the monte carlo simulation methods for the short time - evolution of non - markovian systems which are weakly coupled to their environments . in this case, the dynamics of the system may exhibit rich features , whereas the weak coupling may lead to extremely small quantum jump probabilities , the consequence being unpractically large requirement for the size of the generated monte carlo ensemble . to overcome this problem, we present below a method which in general allows to reduce the ensemble size . 
by studying the hilbert space path integral for the propagator of a piecewise deterministic process ( pdp ) , we show that part of the expectation value of an arbitrary operator as a function of time , , has scaling properties which can be exploited in monte carlo simulations to speed up the generation of the ensemble , in the optimal case by several orders of magnitude .we derive a scaling equation , from which the result for can be calculated , all the quantities in the equation being easily obtainable from the scaled monte carlo simulations .we concentrate first on the lindblad - type non - markovian case which can be solved by the standard mcwf method , and then focus on the non - lindblad - type case which requires the use of the nmwf simulations in the doubled hilbert space .the paper is structured as follows .section [ sec : dyn ] introduces the master equation , the corresponding stochastic schrdinger equation , and the appropriate simulation schemes for the lindblad- and non - lindblad - type systems .the hilbert space path integral method is then used to calculate the expectation value of an arbitrary operator setting the scene for the scaling method which is presented in sec .[ sec : scaling ] .section [ sec : examples ] shows explicitly how the scaling can be implemented and demonstrates the usability of the method , for the example of quantum brownian motion .finally sec .[ sec : conclusions ] presents discussion and conclusions .we describe first in sec .[ subsec : lindblad ] the master equation for the lindblad - type systems and the corresponding standard mcwf method .we then continue in sec.[subsec : nonlindblad ] with the description of the non - lindblad - type case with the corresponding stochastic schrdinger equation and nmwf unravelling in the doubled hilbert space . the last subsection [ subsec : path ] presents the calculation of the expectation value of an arbitrary operator with the hilbert space path integral method which paves the way for the scaling procedure .we begin by considering master equations obtained from the time - convolutionless projection operator technique ( tcl ) of the form with time - dependent linear operators , , , and .a specific case of the master equation ( [ eq : genmaster ] ) is the one of lindblad - type + \sum_i \gamma_i(t ) \bigg\ { l_i \rho(t ) l_i^\dagger - \nonumber \\ & & \left .\frac{1}{2 } l_i^\dagger l_i\rho(t ) - \frac{1}{2 } \rho(t ) l_i^\dagger l_i \right\ } , \label{eq : master}\end{aligned}\ ] ] where is the system hamiltonian , the time dependent decay rate to channel , and is the corresponding lindblad operator .we define this non - markovian master equation to be of lindblad - type when the time dependent decay coefficients for all times , and non - lindblad type when acquire temporarily negative values during the time - evolution .the lindblad - type case can be treated with the standard mcwf method introduced in this subsection , and the non - lindblad case with the nmwf method in the doubled hilbert space presented in the following subsection .the core idea of the standard mcwf method is to generate an ensemble of realizations for the state vector by solving the time dependent schrdinger equation with the non - hermitian hamiltonian where is the reduced system s hamiltonian and the non - hermitian part includes the sum over the various allowed decay channels , where the jump operator for channel coincides with the lindblad operator appearing in the master equation ( [ eq : master ] ) . 
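as a concrete illustration of this non-hermitian hamiltonian, the sketch below builds $h = h_s - \frac{i}{2}\sum_i \gamma_i(t)\, l_i^\dagger l_i$ for a driven two-level system with a single decay channel (the rabi drive, the form of the time-dependent decay rate and the numerical values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# two-level system, basis (|e>, |g>); L = |g><e| is the decay (lindblad) operator
sx = np.array([[0, 1], [1, 0]], dtype=complex)
L = np.array([[0, 0], [1, 0]], dtype=complex)

omega = 1.0                                    # assumed rabi frequency
gamma = lambda t: 0.1 * (1.0 - np.exp(-t))     # assumed time-dependent decay rate

def h_eff(t):
    """non-hermitian hamiltonian  H_s - (i/2) * gamma(t) * L^dag L."""
    return 0.5 * omega * sx - 0.5j * gamma(t) * (L.conj().T @ L)

# one short deterministic (euler) step; the norm of psi shrinks because of the
# non-hermitian part, and that shrinkage is what the jump rule described below uses
dt, t = 1e-3, 2.0
psi = np.array([1.0, 0.0], dtype=complex)      # excited state
psi_new = psi - 1j * dt * (h_eff(t) @ psi)
print(1.0 - np.vdot(psi_new, psi_new).real)    # ~ gamma(t) * dt
```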
during a discrete time evolution step of length norm of the state vector may shrink due to .the amount of shrinking gives the probability of a quantum jump to occur during the short interval .based on a random number one then decides whether a quantum jump occurred or not . before the next time step is taken , the state vector of the system is renormalized . if and when a jump occurs , one performs a rearrangement of the state vector components according to the jump operator , before renormalization of . the jump probability corresponding to the decay channel for each of the time - evolution steps is the expectation value of an arbitrary operator a is then the ensemble average over the generated realizations where is the number of realizations .the solution of the general master equation ( [ eq : genmaster ] ) can be obtained by using the nmwf unravelling in the doubled hilbert space where is the hilbert space of the system .the state of the system is described by a pair of stochastic state vectors such that becomes a stochastic process in the doubled hilbert space . denoting the corresponding probability density functional by ] , ( d ) .the terms has been shifted to start from the same initial value for easier comparison . here is the number operator .the final result presented in fig . 2 .is mostly given as a sum of the terms displayed in ( a ) and ( d ) . ] in general , non - markovian systems , even when they are weakly coupled to their environments , can posses rich dynamical features despite of the fact that the quantum jump probability per stochastic realization is small during the time evolution period of interest ( see the examples above ) .this is the key area where the scaling method we have presented is useful .the small jump probabilities due to the weak coupling can lead to the situations where the requirement for the size of the generated ensemble in the monte carlo wave function simulations is unconveniently large . in these cases ,the scaling method can be used to reduce and optimize the generated ensemble size for efficient numerical simulation of weakly coupled non - markovian systems . the scaling method presented herecan be used when the master equation of the open quantum system can be expressed in the general form of eq .( [ eq : genmaster ] ) obtained by the time - convolutionless projection operator techniques ( the one - jump restriction still applies , see below ) . to compare our method to the other simulation methods for non - markovian systems one should actually compare the validity of the tcl with respect to the methods presented e.g. in refs . .thus , making a rigorous comparison is an involved task and is left for future studies .we initially note here that our method is not restricted with respect to the temperature of the environment ( while method presented in ref . is valid for the zero - temperature bath ) and is valid , at least in principle , to the order used in the tcl expansion of master equation to be unravelled ( while method presented in ref . is post - markovian , i.e. first order correction to markovian dynamics ) .however , it is worth mentioning that the validity of the tcl expansion is crucially related to the existence of the tcl generator ( see e.g. page of ref . 
) .the scaling method is limited to the cases where there is maximally one jump per realization in the generated monte carlo ensemble .moreover , it is also important to note that the same restriction applies also for the scaled simulations .these limits can be easily checked by calculating the jump probabilities from eqs .( [ eq : jp ] ) and ( [ eq : fw ] ) for the time period of interest or by monitoring the number of jumps in the simulations .as soon as more than one jump per realization in the scaled simulations begin to occur , one can estimate the error by calculating the ratio between the two - jump and the one - jump probabilities per realization . in the examples we have described ,we have not used very aggressive optimization of the ensemble size ( though the ensemble size reduction is on the order of ) , and no error has been introduced .this has been confirmed by monitoring the jumps in the simulations : no two - jump realizations was generated .thus , the error bars displayed in the figs .( [ fig : eg1 ] ) and ( [ fig : eg2 ] ) correspond to the usual statistical error ( standard deviation ) of the monte carlo ensemble . in conclusion, the scaling method has limitations ( one jump per realization ) but it is interesting to note that in the region where the method can not be applied ( more than one jump per realization ) , it is not needed .this is because in this region there already occurs large enough number of jumps enhancing the statistical accuracy of the simulations .in other words , the problem which the scaling solves appears only within the region of validity of the method .the authors thank h .-breuer for discussions in freiburg and acknowledge csc - the finnish it center for science - for the computer resources .this work has been financially supported by the academy of finland ( jp , project no .204777 ) , the magnus ehrnrooth foundation ( jp ) , and the angelo della riccia foundation ( sm ) .expanding the exponential waiting time distribution and taking into account the terms corresponding to maximum one jump per realization for short times and weak couplings , the contribution to the propagator from the path without the jumps is = \left ( 1-f\left [ \psi_0,t \right ] \right ) \delta \left [ \psi - g_t\left(\psi_0\right ) \right ] = \nonumber \\ & & \left ( 1-\int_0^t ds \sum_i \gamma_i(s ) \| l_i g_s(\psi_0 ) \|^2 \right ) \times \nonumber \\ & & \delta\left ( \psi - g_t\left(\psi_0\right ) \right ) \label{eq : t0}\end{aligned}\ ] ] where is the functional delta - function and the deterministic evolution according to the non - hermitian hamiltonian is given by where by using the recursion relation for the propagator and neglecting the terms of the order of or higher , one can now calculate the contribution of the one jump path to the propagator as = \int_0^t ds \int d \psi_1 d \psi_1^ * \int d \psi_2 d \psi_2^*~ \nonumber \\ & & \delta\left ( \psi - g_t\left ( \psi_2 \right ) \right ) \sum_i \gamma_i(s ) \| l_i \psi_1\|^2 \delta\left ( \frac{l_i \psi_1}{\| l_i\psi_1 \| } - \psi_2 \right ) \nonumber \\ & & \delta\left ( \psi_1 - g_s\left ( \psi_0 \right ) \right ) .\label{eq : t1}\end{aligned}\ ] ] where the transition rate summed over the decay channels is = \sum_i \gamma_i(s ) \| l_i \psi_1 \|^2 \delta \left [ \frac{l_i \psi_1}{\| l_i\psi_1 \| } - \psi_2 \right],\ ] ] the physical interpretation of eq .( [ eq : t1 ] ) is straightforward .the integrations sums over the various one jump routes and over all the possible jump times .in the simulations we scale up 
the jump probabilities by a factor , and leave the non - hermitian hamiltonian as it is [ includes also ] , we get the corresponding equation for eq .( [ eq : aini ] ) as = \int d \psi d \psi^*~ \langle \psi | a | \psi \rangle \nonumber \\ & & \times \left\ { - \delta \left ( \psi - g_t(\psi_0 )\right ) \int_0^t ds \sum_i \beta \gamma_i(s ) \|l_ig_s(\psi ) \|^2 \nonumber \right .\\ & & + \int_0^t ds \int d \psi_1 d \psi_1^ * \int d \psi_2 d\psi_2^*~ \delta \left ( \psi - g_t(\psi_2 ) \right ) \nonumber \\ & & \times \sum_i \beta \gamma_i(s ) \| l_i \psi_1 \|^2 \delta \left ( \frac{l_i \psi_1}{\| l_i\psi_1 \| } -\psi_2\right ) \nonumber \\ & & \times \delta \left ( \psi_1 - g_s(\psi_0 ) \right ) \bigg\}.\label{eq : newsc}\end{aligned}\ ] ] for scaling to work , we have to be able to extract from the simulations the information on the r.h.s . of this equation .furthermore , we denote by the second term on the r.h.s . of eq .( [ eq : newsc ] ) as \nonumber \\ & = & \frac{n_j(t)}{n}\sum_{i=1}^{n_j(t ) } \langle \psi_i(t ) | a |\psi_i(t ) \rangle /n_j(t),\end{aligned}\ ] ] where is number of jumps , and the total number of realizations . here ,the second part is the one jump contribution to the expectation value , expressed formally , and the summation is carried over those realizations that have jumped till time .the corresponding simulation presentation ( simulation average ) is given in the last part .equation ( [ eq : newsc ] ) , which includes the quantity we are interested in , can now be written as = - \bar{\langle a \rangle}_{0}(t ) + \bar{\langle a \rangle}_{1}(t ) = \nonumber \\ & & - p_{tot}\langle a \rangle_0 ( t ) - \frac{n - n_j}{n}\langle a \rangle_{0}(t)+ \bar{\langle a \rangle}_{tot}(t).\end{aligned}\ ] ]
|
we demonstrate a scaling method for non-markovian monte carlo wave-function simulations used to study open quantum systems weakly coupled to their environments. we derive a scaling equation from which the expectation values of arbitrary operators of interest can be calculated, all the quantities in the equation being easily obtainable from the scaled monte carlo wave-function simulations. in the optimal case, the scaling method can be used, within the weak coupling approximation, to reduce the size of the generated monte carlo ensemble by several orders of magnitude. thus, the developed method allows faster simulations and makes it possible to solve the dynamics of a certain class of non-markovian systems whose simulation would otherwise be too tedious because of the requirement for large computational resources.
|
in the last two decades , complex network became a popular framework to investigate a large variety of real systems , including internet , social networks , biological networks and financial networks . in recent years, the complex network approach was found to be useful in studying of climate phenomena , using `` climate network'' . in a climate network ,usually , nodes are chosen to be geographic locations and links are constructed based on the similarities between the time variability between pairs of nodes .climate networks have been used to quantify and analyze the structure and dynamics of the climate system .moreover , climate networks have been used to better understand and even forecast some important climate phenomena , such as monsoon , the north atlantic oscillation and el nio .el nio is probably the most influential climate phenomenon on interannual time scales . during el nio ,the eastern pacific ocean is getting warmer by several degrees , impacting the local and global climate .la nia is cold anomaly over the el nio region .the el nio activity is quantified , for example , by the oceanic nio index ( oni ) , which is noaa s primary indicator for monitoring el nio and la nia .el nio can trigger many disruptions around the globe and in this way affect various aspects of human life .these include unusual weather conditions , droughts , floods , declines in fisheries , famine , plagues , political and social unrest , and economic changes .global impacts of el nio had been investigated in by halpert et al . however , the mechanism through which el nio influences the global climate and the impact of el nio are still not fully understood .here we propose a percolation framework analysis to describe the structure of the global climate system during el nio , based on climate networks .percolation theory is also used to analyze the behavior of connected clusters in a network .the applications of percolation theory covers many areas , such as , optimal path , directed polymers , epidemics , immunization , oil recovery , and nanomagnets . in the framework of percolation theory onemay define phase transition based on simplest pure geometrical considerations . in the present study, we construct a sequence of monthly - shifting - climate networks by adding links one by one according to the similarities between nodes .more specifically , the nodes which are more similar ( based on their temperature variations ) will be connected first .we statistically found that around one year prior to the onset of el nio , the climate network undergoes a first order phase transition ( i.e. , exhibiting a significant discontinuity in the order parameter ) , indicating that links with higher similarities tend to localize into two large clusters , in the higher latitudes of the northern and southern hemispheres .however , during el nio times , there is only one big cluster via tropical links .we find that indications the discontinuity in the order parameter is closely related to the oni .our analysis is based on the daily near surface ( hpa ) air temperature of era - interim reanalysis .we pick 726 grid points that approximately homogeneously cover the entire globe ; these grid points are chosen to be the nodes of our climate network . for each node ( i.e. , longitude - latitude grid point ) , daily values within the period 1979 - 2016are used , from which we subtract the mean seasonal cycle and divide by the seasonal standard deviation . 
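a minimal sketch of this preprocessing step, assuming the daily temperatures of one grid point are stored as a `years x 365` array (variable names and the numpy implementation are our own, not the authors'; the precise definition of the filtered record is given right after this sketch):

```python
import numpy as np

def deseasonalize(temp):
    """temp: array of shape (n_years, 365) with daily near-surface temperatures
    for one node. returns anomalies with the mean seasonal cycle removed and
    scaled by the day-of-year standard deviation."""
    day_mean = temp.mean(axis=0)                 # mean over years for each calendar day
    day_std = temp.std(axis=0)
    return (temp - day_mean) / day_std

# example with synthetic data: 38 years (1979-2016) of daily values
rng = np.random.default_rng(0)
seasonal = 10.0 * np.sin(2 * np.pi * np.arange(365) / 365)
temp = 288.0 + seasonal + rng.normal(scale=2.0, size=(38, 365))
anom = deseasonalize(temp)
print(anom.shape, anom.mean(), anom.std())       # (38, 365), ~0, ~1
```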
specifically , given a record , where is the year and stands the day ( from 1 to 365 ) , the filtered record is defined as , where `` mean '' and `` std '' are the mean and standard deviation of the temperature on day over all years . to obtain the time evolution of the strengths of the links between each pair of nodes , we define , the time - delayed cross - correlation function as , and where is the time lag between 0 and 200 days .note , that for estimating the cross - correlation function at day , only temperature data points prior to this day are considered .we then define the link s weight as the maximum of the cross - correlation function .in lattices model , a percolation phase transition occurs if the systems dimension is larger than one .the system is considered percolating if there is a path from one side of the lattice to the other , passing through occupied bonds ( bond percolation ) or sites ( site percolation ) .the percolation threshold usually depends on the type and dimensionality of the lattice .however , for the network system no notion of side exists . for this reason ,a judgment condition to verify whether the system is percolating is the existence of a giant component ( cluster ) containing nodes , where is the total number nodes in the network .if two nodes are in the same cluster then there is at least one path passing through them . in this section ,we discuss the construction of the climate networks , and study the evolution of clusters .initially , given isolate nodes , links are added one by one according to the link strength , i.e. , we first add the link with the highest weight , and continue selecting edges ordered by decreasing weight . during the evolution of our network, we measure the size of the normalized largest cluster and the susceptibility , where represents the size of the largest component .the susceptibility of the climate network ( the average size of the finite clusters ) is defined as , where denotes the average number of clusters of size at edge s weight , and the prime on the sums indicates the exclusion of the largest cluster in each measurement . since our network is finite , we use the following procedure to find the percolation threshold .we first calculate , during the growth process , the largest size change of the largest cluster : . \label{eq5}\ ] ] the step with the largest jump is defined as .the percolation transition in the network is characterized by and corresponds to its transition point .( color online ) .the percolation forecasting scheme and power based on the near surface temperature of the era - interim dataset .we compare the largest gap of the largest cluster during the climate network evolution with a threshold ( red curve , left scale ) and the oni ( blue curve , right scale ) between january 1980 and september 2016 .when the is above the threshold , , we give an alarm and predict that an el nio event will start in the following calendar year .correct predictions are marked by green arrows and false alarms by dashed arrows . 
]( color online ) .the largest cluster ( red curve , left scale ) and the susceptibility ( blue curve , right scale ) as a function of the link strength .( a ) for the network two years before el nio episode , where the end time is dec 1980 ; ( b ) for the network one year before el nio episode , where the end time is dec 1981 ; ( c ) for the network during el nio episode , where the end time is dec 1982 ; ( d ) for the network one year after el nio episode , where the end time is dec 1984 .note the largest jump in one year prior to el nio event . ] for each network , we obtain , and find that , usually around one year ahead of the beginning of el nio , the climate network has the largest .this feature is used here for forecasting the inception of an el nio event in the following year . to this end, we place a varying horizontal threshold and mark an alarm when is above threshold , outside an el nio episode . fig .[ fig:1 ] demonstrates the forecasting power where the red curve depicts , and the blue curve is the oni ; correct predictions are marked by green arrows .the lead time between the prediction and the beginning of the el nio episodes is year .our method forecasts out of events .note the similarity in the power of forecasting to that of ludescher et al .next , we concentrate on specific el nio events to illustrate the evolving cluster structure through el nio .we first focus on one of the strongest el nio event , the event . in fig .[ fig:2 ] we show for this event , and as a function of link strength two years and one year before el nio , during el nio and one year after el nio .we find that exhibits the largest jump in about one year before el nio ; the jump in also becomes very large at the same point .the two quantities yields the same percolation threshold , strengthening the confidence of the threshold value .[ fig:2a ] ( a ) shows the climate network cluster structure in the globe map at the percolation threshold one year before el nio event .it seems like the equatorial region separates the network into two communities , northern and southern hemispheres , where the nodes with green color indicate the largest cluster and the blue indicates the second largest cluster ; after the critical link adding ( marked by thicker green line ) , the largest and second largest cluster merge , and the new largest cluster approximately covers the entire globe ( fig .[ fig:2a ] ( b ) ) .we find that typically during the el nio event ( fig .[ fig:2](c ) ) does not exhibit a large jump at the percolation threshold .[ fig:2a ] ( c ) shows the cluster structure at the percolation threshold .there are more edges in the tropical zone .this is since during the el nio period , the nodes in low latitudes are drastically affected by the el nio , resulting in higher cross - correlation .therefore , we do not find a large gap in the percolation of the network .( color online ) . the cluster structure on map at the percolation threshold for the network one year before the el nio event ( dec 1981 ) .( a ) before the critical link was added ; ( b ) after the critical link ( marked by thicker green line ) was added .( c ) and ( d ) are the cluster structure at the percolation threshold for the network during el nio episode ( dec 1982 ) .different colors represent different clusters , especially , the green represents the largest cluster and the blue represents the second largest cluster . ] another example for the evolution of the network ( represented by and vs. 
) during the el nio event is shown in fig .[ fig:2e ] . also herewe find that one year before event there is a large gap in ( fig .[ fig:2e ] ( b ) ) , however , during the el nio event , two years before the event and one year after the event , the gap becomes smaller ( fig .[ fig:2e ] ( a)(c)(d ) ) .( color online ) .same as fig .[ fig:2 ] for the strong 1997 - 1998 el nio event . ] following the above , we assume that large is an alarm forecasts that el nio will develop in the following calendar year . in the case of multiple alarms in the same calendar year ,only the first one is considered .the alarm results in a correct prediction , if in the following calendar year an el nio episode actually occurs ; otherwise it is regarded as a false alarm .there were 10 el nio events ( years ) between 1979 and 2016 and additional 27 non - el nio years . to quantify the accuracy for our prediction, we use the receiver operating characteristic ( roc)-type analysis when altering the magnitude of the threshold and hence the hit and false - alarm rates .[ fig:3 ] shows , the best hit rates for the false - alarm rates 0 , 1/27 , 2/27 , and 3/27 .the best performances are for ( i ) thresholds in the interval between 0.286 and 0.289 , where the false - alarm rate is 1/27 and the hit rate is 0.7 , for ( ii ) thresholds between 0.264 and 0.266 , where the false - alarm rate is 2/20 and the hit rate is 0.7 , and ( iii ) for thresholds between 0.223 and 0.26 , where the false - alarm rate is 3/20 and the hit rate is 0.7 . to further test our results we also applied the same method for different datasets , the ncep / ncar reanalysis dataset , and the jra-55 dataset . to allow simple comparison to the era - interim reanalysis, we only consider the last 37 years ( 1980 - 2016 ) .the prediction accuracy are summarized in table .[ comparison ] .we basically find very similar results for all three different reanalysis datasets , strengthening the confidence in our prediction method .( color online ) .the prediction accuracy for out method .for the four lowest false - alarm rates = the best hit rates . ].[comparison ] the forecast accuracy for different reanalysis datasets , based on the receiver operating characteristic ( roc)-type . [ cols="^,^,^",options="header " , ] to further test the order of the percolation phase transitions in the climate network before el nio event , we study the finite size effects of our network , and suggest that the transition is a first order phase transition .we change the system s size by altering the resolution of nodes , and at the same time make sure that every node in a given covers the same area on the global ( i.e. 
, we are fewer nodes at the high latitudes ) .first , we define the resolution ( in degree latitude ) at the equator as and then find that the number of nodes is .then the number of nodes in latitude is , where $ ] .the total number of nodes is then .we choose to be , which yields .we then calculate as a function of the system size .if approaches zero as , the corresponding giant component is assumed to undergo a continuous percolation ; otherwise , the corresponding percolation is assumed to be discontinuous .this is since it suggests that the order parameter has a non - zero discontinuous jump at the percolation threshold .the results of as a function of the system size are shown in fig .[ fig:4 ] for two el nio events considered above .the results suggest a discontinuous percolation since tends to a non - zero constant ( fig .[ fig:4 ] ( a ) and ( c ) ) .we also find that follows a scaling form , where is a constant and is a critical exponent .[ fig:4 ] ( b ) and ( d ) show the related results where we find that is very close to , implying it might be an universal scaling exponent .it has been pointed out that a random network always undergoes a continuous percolation phase transition during a random process .the question whether percolation transitions could be discontinuous has attracted much attention .discontinuous percolation in networks was reported in the framework of the explosive percolation model .however , later studies questioned this finding .interestingly , our results indicate the possibility of first order phase transition in climate networks .( color online ) .the finite size effects for the networks before el nio episode .the largest gap as a function of system size for ( a ) dec .1981 and ( b ) nov 1996 .( b),(d ) log - log plot of versus , indicating possible scaling law with scaling exponent ; see eq .( [ eq6 ] ) . ]to summarize , a time - evolving weighted climate network is constructed based on near surface air temperature time series . a percolation framework to study the cluster structure properties of the climate networkis put forward .we find that the structure of the network changes violently approximately one year ahead of el nio events we suggest to use such abrupt transitions to forecast el nio events .the percolation description of climate system ( as reflected by the surface air temperature records ) highlight the importance of such network techniques to understand and forecast el nio events . based on finite size scaling analysis, we also find that the percolation process is discontinuous .the methodology and results presented here not only facilitate the study of predicting el nio events but also can bring a fresh perspective to the study of abrupt phase transitions .j. fan thanks the fellowship program funded by the planning and budgeting committee of the council for higher education of israel .we acknowledge the multiplex ( no .317532 ) eu project , the israel science foundation , onr and dtra for financial support .
|
complex networks have been used intensively to investigate the flow and dynamics of many natural systems, including the climate system. here, we develop a percolation-based measure, the order parameter, to study and quantify climate networks. we find that abrupt transitions of the order parameter usually occur about one year before el nio events, suggesting that they can be used as early-warning precursors of el nio. using this method we analyze several reanalysis datasets and show the potential for good forecasting of el nio. the percolation-based order parameter exhibits discontinuous features, indicating a possible relation to a first-order phase-transition mechanism. * climate conditions influence the nature of societies and economies. el nio events, in particular, have a great influence on climate, which may further cause widespread natural disasters such as floods and droughts across the globe. we have just undergone one of the strongest el nio events since 1948, which brought drought conditions to venezuela and australia and more tropical cyclones within the pacific ocean. improvements in the understanding of el nio, its climate effects and associated impacts are still being made. here, we present a multidisciplinary approach that combines climate, network and percolation theory to study the mechanism of el nio. our method can forecast el nio events one year ahead, with a high prediction accuracy and a low false-alarm rate. the methodology and results presented here not only facilitate the study of predicting el nio events but can also bring a fresh perspective to the study of abrupt phase transitions. *
|
continuous - variable quantum key distribution ( cvqkd ) , which is an alternative to single - photon quantum - key distribution ( qkd ) , has many advantages , such as high repetition rate of communication , high detection efficiency , and ease of integration with standard telecom components , so it has received much more attention in recent years .however , because there exist some imperfections such as loss or noise in practical system , actually the unconditional security of the final key of qkd may be compromised .it has been extensively investigated in single - photon qkd , such as photon - number - splitting ( pns ) attack , passive faraday - mirror attack and parially random phase attack etc . , but not the case in cvqkd .this is because the system of cvqkd is a one - way communication system and needs fewer optical elements than two - way system .furthermore , most of intervention of eve on the system of cvqkd can be detected by the parameter estimation of classical postprocessing of cvqkd . in ref . , the wavelength - dependent property of a beam splitter ( bs ) was exploited by eavesdropper eve to attack single - photon qkd successfully .subsequently , huang _ et al _ extended this attack , the so - called wavelength attack , to the all - fiber system of cvqkd .however , in that paper , two significant problems are not considered .first , the equation that must be satisfied in this attack , so - called _ attacking equation _ , was not solved specifically in some permitted parameter regime , which may make this attack invalid .second , the shot noise of the two beams that eve sends to bob , when transmitting through the homodyne detector , was neglected .however , shot noise is the main contribution to the deviation of bob s measurements from eve s when implementing the wavelength attack , so it must be considered accurately . in this paper , we demonstrate and resolve these two problems , and then we improve the wavelength attack method against the all - fiber cvqkd system by tuning the attacking parameter s regime .finally , we conclude that the wavelength attack will be implemented successfully in some parameter regime .the paper is organized as follows : in sec .[ sec : wave ] , we demonstrate the wavelength attack and solve the attacking equations specifically . then , we analyze the shot noise introduced by the one - port and two - port homodyne detectors and calculate the conditional variance between the two legitimate parties alice and bob considering the deviations introduced by the shot noise . finally , in sec .[ sec : discussion ] , we discuss and make some conclusions about the feasibility of this wavelength attack on a practical cvqkd system based on the conditional variance obtained in sec .[ sec : deviation ] .in a practical cvqkd system , alice first modulates a coherent state by amplitude and phase modulators according to bivariate gaussian distributions centered on of variance , where is the shot - noise variance that appears in the heisenberg relation , , .subsequently , she sends this state to bob through a quantum channel optical fiber and bob implements a homodyne or heterodyne detection after receiving this state .so , after many repetitions , alice and bob can share trains of correlated data and then get the final key after the classical data postprocessing procedure . 
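to fix ideas, here is a toy monte carlo sketch of one such prepare-and-measure round with heterodyne detection (the modulation variance and channel transmittance are illustrative assumptions; excess noise and detector imperfections are ignored, and only one quadrature is shown):

```python
import numpy as np

rng = np.random.default_rng(0)
n0 = 1.0            # shot-noise unit
v_a = 4.0           # assumed modulation variance (in shot-noise units)
T = 0.5             # assumed channel transmittance
n_rounds = 100000

# alice's gaussian modulation of the q quadrature
qa = rng.normal(0.0, np.sqrt(v_a * n0), n_rounds)

# coherent-state fluctuation, loss-channel vacuum, and the extra vacuum unit of heterodyne
coh = rng.normal(0.0, np.sqrt(n0), n_rounds)
vac_ch = rng.normal(0.0, np.sqrt(n0), n_rounds)
vac_het = rng.normal(0.0, np.sqrt(n0), n_rounds)

q_channel = np.sqrt(T) * (qa + coh) + np.sqrt(1 - T) * vac_ch   # pure-loss fibre
qb = (q_channel + vac_het) / np.sqrt(2)                          # heterodyne output

# bob's data are a scaled, noisy copy of alice's
slope = np.cov(qa, qb)[0, 1] / np.var(qa)
print(slope, np.sqrt(T / 2))             # should agree
print(np.var(qb - slope * qa), n0)       # conditional variance ~ one shot-noise unit
```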
however , because the practical system may have some imperfections which will leave a loophole for the potential eavesdropper eve , we must calibrate the system carefully and make efforts to eliminate all the existing loopholes .generally , the transmittance of some practical beam splitters depends on the wavelength of beams , namely , where is the maximal power that is coupled , is the coupling coefficient , and is the heat source width . consequently , proposed in ref . , eve can use this wavelength - dependent property to attack the practical cvqkd system with heterodyne protocol shown in fig .[ fig:1 ] .she can send two beams whose wavelength and intensity can be tuned to control bob s measurement results , which are made identical to eve s .specifically speaking , eve first intercepts alice s sending states and makes a heterodyne detection on them , so she can get the quadratures , which can be given by where , is the shot noise introduced by eve s heterodyne detection .then , she sends to bob two beams whose intensities are denoted as and respectively .because the interference between these two beams is destroyed , eve might be able to control the intensities and wavelengths of them to make these two beams , after transmitting bob s heterodyne detectors , satisfy where represents the transmittance of a practical beam splitter corresponding to the beam whose wavelength is . is the amplitude of local oscillator in the absence of this wavelength attack , and is the channel loss . after scaling with ,then , bob will get measurements and .thus , the wavelength attack may succeed ; however , whether eqs .( [ eq : attackeq ] ) have the real solutions in the practical parameter regime determines the validity of this wavelength attack .so , in what follows we analytically investigate the solutions of eqs .( [ eq : attackeq ] ) in the permitted parameter regime .first , we rewrite eqs .( [ eq : attackeq ] ) as provided that is the same as , then is and eqs .( [ eq : ivattackeq ] ) will be reduced to as ref . says , generally are very small in practical implementation of cvqkd , and eqs .( [ eq : redateq ] ) are always solvable if are both positive or negative at the same time .that is , .\ ] ] however , when are different in sign , $ ] , but by virtue of opting for an appropriate ( ) , we are always able to confirm both the right - hand sides of eqs .( [ eq : ivattackeq ] ) as being either positive or negative at the same time , because both the second terms on the right - hand sides of eqs .( [ eq : ivattackeq ] ) are the same in sign .hence , eqs .( [ eq : ivattackeq ] ) or eqs .( [ eq : attackeq ] ) always hold if we select appropriate . additionally , we point out that we can always make the - right hand side of eqs .( [ eq : ivattackeq ] ) sufficiently small and close to ; thus in the left - hand of eqs . 
([ eq : ivattackeq ] ) can always be small too , especially in discrete modulation protocol of cvqkd in which the signal intensity is always on a single - photon level .consequently , this attack can not be avoided efficiently even if bob added a wavelength filter on his system before detectors , which is the same case in ref .this is because such extremely weak signal is still able to transmit through the practical filter just by increasing the intensity of the incoming light , and the wavelength of the fake local oscillator can be close to the original one which is 1550 nm .the laser after transmitting through the practical filter is permitted to have a line width , so the fake local oscillator can not be filtered either . consequently , by implementing this wavelength attack , theoretically eve can control her attacking parameters to make bob s measurements identical to hers , or rather eve completely knows bob s measurements . however , in eqs .( [ eq : attackeq ] ) we do not consider the interference between eve s sending beams and vacuum mode entering from other ports of bob s beam splitters , as fig . [ fig:1 ] shows .this interference could introduce excess noise into bob s measurements , thus leading them to deviate from eve s .we demonstrate this in the next section .as the previous section demonstrated , the interference with vacuum mode will lead bob s measurements to deviate from eve s .if the deviation is unfortunately large , alice and bob will find that they can not distill any secure keys because bob s measurements are too noisy after comparing partial data within parameter - estimation phase of data postprocessing .so , we must calculate the conditional variance between alice and bob to see whether eve s wavelength attack could be successful .we point out that the shot noise introduced by bob s detectors is the main contribution to the conditional variance , which is not noted in ref .we begin this calculation by analyzing the quantum noise of unbalanced homodyne detectors first and apply the results to bob s apparatus .pulsed homodyne detector is extensively exploited to measure a weak signal with a bright local oscillator . as depicted in fig .[ fig:2 ] , when the transmittance of beam splitter , the homodyne detector is balanced homodyne detector ( bhd ) , or unbalanced homodyne detector ( ubhd ) .figure [ fig:2](a ) shows two - port homodyne detector with a subtractor , and fig .[ fig:2](b ) shows one - port homodyne detector without a subtractor .those two - port or one - port homodyne detector can be used to measure a weak signal s quadratures or vacuum state s quantum noise .generally , two - port balanced homodyne detector can measure the quadratures or of weak signal , and between and the local oscillator the relative phase is .the measurement is and , . , are the vacuum mode quadratures , of which the variance is .the variance of is where is the signal s variance . when , the output is the shot noise , with unbalanced two - port homodyne detector , the measurement will be different from the one in eq .( [ eq : x0 ] ) .the following is the analysis of this case .since the strong local oscillator can be treated as a classical field , the amplitude of it can be denoted as where is the relative phase in eq .( [ eq : x0 ] ) . provided the pulsed local oscillator and signal beam have the same optical frequency and are all in the coherent state respectively , they will interfere with each other when transmitting through the beam splitter [ cf .[ fig:2](a ) ] ; i.e. 
, after optical mixing , their intensity can be written as where we have denoted by , respectively .then , with the subtraction of and , the output of two - port ubhd can be obtained as where has been given by eq .( [ eq : x0 ] ) and when eq .( [ eq : x01 ] ) is reduced to eq .( [ eq : x0 ] ) .if the weak signal is a vacuum state , the amplitude of it can be read as , and , so and . describes the amplitude fluctuation of the vacuum state .thus , we can neglect its square terms , so the output of ubhd is where is obtained from eq .( [ eq : x0 ] ) by .when is selected to be or , we can get the shot noise of the output denoted as or [ , which is consistent with the one of bhd output when .the second term of the right - hand of eq .( [ eq : x02 ] ) is the two ports subtraction of the intensity of the local oscillator because of the unbalanced splitting rate of the asymmetric beam splitter ( abs ) .subsequently , let us analyze the unbalanced one - port homodyne detector . as shown in fig .[ fig:2](b ) , the intensity of local oscillator or signal after transmitting the unbalanced beam splitter has been obtained already by eq .( [ eq : i11 ] ) . when the weak signal is a vacuum state , the intensity of eq .( [ eq : i11 ] ) can be reduced to where and has been selected to be or . the right - hand side of each of eqs .( [ eq : i12 ] ) , except for the first term , which is the unbalanced part of splitting of the bright local oscillator because of the asymmetric splitting rate of abs , is the fluctuation of each port of unbalanced one - port homodyne detector respectively . with the two - port or one - port ubhd , we can compute the noise of the beams eve sends to bob in the next section . however ,keep in mind that the noise of ubhd is introduced by the vacuum state from the other input port of the asymmetric beam splitter of ubhd . by looking back at fig .[ fig:1 ] again , it is clear that there are two types of ubhd at bob s station . on one hand, the first kind of abs ( bs2 and bs3 ) , through which the signal or local oscillator transmits alone , can be viewed as a one - port ubhd , and each port output is the interference between the vacuum state and either signal or local oscillator .so , as analyzed in sec .[ sec : quantum ] , the intensity of the first kind of abs output is ( taking the signal beam as an example and the local oscillator as analogous ) , is the intensity of the signal beam eve sends ( ) .on the other hand , the second kind of abs ( bs4 and bs5 ) is a two - port ubhd , but the output is the interference between signal beam and vacuum state in addition to the one between local oscillator and vacuum state , because the wavelengths of signal beam and local oscillator are different from each other and they can not interfere with each other . recall that the shot - noise amplitude of the two - port ubhd output is or as shown in eq .( [ eq : x02 ] ) . scaled with [ cf .( [ eq : x0 ] ) , generally , for heterodyne detection local oscillator is split into two beams so its intensity should be divided by two ] , the measurements of bob s detection are -(1 - 2t_2)[-2\sqrt{t_2(1-t_2)i_{l\!o}}x_n]+2\sqrt{i_s^r}\delta\ ! 
x_s+2\sqrt{i_{l\!o}^r}\delta\!x_{l\!o}}{\sqrt{2}|\alpha_{l\!o}|}\\ & = \sqrt{\frac{\eta}{2}}\hat{x}_e+\hat{x}_{b|e}~,\\ \hat{p}_b&=\frac{\delta i}{\sqrt{2}q|\alpha_{l\!o}|}=\frac{(1 - 2t_1)i_s^t-(1 - 2t_2)i_{l\!o}^t+2\sqrt{i_s^t}\delta\ !p_s+2\sqrt{i_{l\!o}^t}\delta\!p_{l\!o}}{\sqrt{2}|\alpha_{l\!o}|}\\ & = \sqrt{\frac{\eta}{2}}\hat{p}_e+\frac{(1\!-\!2t_1)\![2\sqrt{t_1(1\!-\!t_1)i_s}x_n]\!-\!(1\!-\!2t_2)\![2\sqrt{t_2(1\!-\!t_2)i_{l\!o}}x_n]+2\sqrt{i_s^t}\delta\!p_s+2\sqrt{i_{l\!o}^t}\delta\!p_{l\!o}}{\sqrt{2}|\alpha_{l\!o}|}\\ & = \sqrt{\frac{\eta}{2}}\hat{p}_e+\hat{p}_{b|e}~ , \end{split}\ ] ] where is the photocurrent subtraction of two port outputs of ubhd proportional to the intensity difference of two beams and the proportional constant is _ or is the deviation of or from or .the conditional variance of bob s measurements conditioned on eve s can be computed as \}^2\rangle \negmedspace+\negmedspace\langle\{(1\negmedspace-\negmedspace2t_2)[-2\sqrt{t_2(1\negmedspace-\negmedspace t_2)i_{l\!o}}x_n]\}^2\rangle \negmedspace+\negmedspace4i_s^r(\delta\ ! x_s)^2\negmedspace+\negmedspace4i_{l\!o}^r(\delta\!x_{l\!o})^2}{2|\alpha_{l\!o}|^2}\\ & \approx\frac{2t_2(1-t_2)(1 - 2t_2)^2i_{l\!o}n_0 + 2(1-t_2)i_{l\!o}[4t_2(1-t_2)n_0]}{|\alpha_{l\!o}|^2}\\ & \approx2t_2(1-t_2)(1 - 2t_2)^2n_0 + 8t_2(1-t_2)^2n_0~,\\ v^p_{b|e}&\approx v^x_{b|e}\equiv v_{b|e}~. \end{split}\ ] ] the first approximate equality holds because , and the last equality of can be reduced for providing that the intensity of the fake local oscillator ( ) that eve sends is the same as that of the original local oscillator . under this assumption , is equivalent to and both of them can be denoted as for short . in order to obtain the conditional variance ( ) of bob s measurements conditioned on alice s random optional gaussian variables or , we can substitute eqs .( [ eq : xapa ] ) and ( [ eq : xepe ] ) into eq .( [ eq : xbpb ] ) to achieve bob s measurements about alice s mode . that s then , we can get the conditional variance interestingly ,it s clear that is always smaller than , so the information between bob and eve is always much larger than that between bob and alice .this is consistent with the intercept - and - resend strategy ; i.e. , when implementing this wavelength attack , eve totally knows bob s measurements except for some small deviations .we analyze this deviation accurately and investigate how eve hides herself with this attack in the next section .as shown in previous sections , if eve implements this wavelength attack against bob s practical system , the conditional variance in eq .( [ eq : vbe ] ) versus the transmittance ( corresponding to the local oscillator ) of abs on bob s side is plotted in fig .[ fig:3 ] .when equals 0.15 or 0.5 , will be and can reach the maximum value when equals 0.3 . in practical heterodyne protocol of cvqkd ,the secure conditional variance is always except for some small excess noise .consequently , eve must select appropriate , making equal , and then she can hide herself completely and get all information between alice and bob . moreover , as discussed in sec .[ sec : wave ] , different values of can always make eq .( [ eq : attackeq ] ) satisfied ; namely eve could implement this wavelength attack successfully in any case . 
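the closed-form conditional variance obtained above is easy to evaluate numerically; the short sketch below (our own illustration, with the shot noise normalized to one) reproduces the values quoted in the following discussion:

```python
import numpy as np

n0 = 1.0   # shot-noise unit

def v_be(t2):
    """conditional variance of bob's quadratures given eve's, as derived above."""
    return 2 * t2 * (1 - t2) * (1 - 2 * t2) ** 2 * n0 + 8 * t2 * (1 - t2) ** 2 * n0

t2 = np.linspace(0.0, 1.0, 2001)
v = v_be(t2)
print(v_be(0.15), v_be(0.5))            # both close to one shot-noise unit
print(t2[np.argmax(v)], v.max())        # maximum near t2 ~ 0.3
```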
besides , from eq .( [ eq : vbe ] ) , it can be seen that the main contribution to conditional variance is the shot noise of two beams ( especially local oscillator , , so the shot noise contributed by weak signal can be neglected ) when transmitting through the two - port ubhd .the noise introduced by the first kind of abs of bob s side ( as one - port ubhd ) ( fig . [ fig:3 ] upper inset ) is very small ( ) and signal s contribution can be neglected because of the large denominator proportional to .additionally , if eve could reduce the intensity of two sending beams which are both smaller than the original local oscillator , the conditional variance can be decreased to any extent because the shot noise , after being enlarged by the intensity of beam or , becomes small . for this purpose andnot being found , eve can exploit the wavelength - dependent property of additional monitoring abs ( splitting rate may be 1:99 ) to make the monitor recording the intensity of the beam or be unchanged . in conclusion, we investigate the feasibility of a wavelength attack combined with the intercept - resend method on practical cvqkd system with heterodyne protocol and conclude that this attack will be implemented successfully if we choose appropriate transmittance corresponding to the appropriate wavelength of the fake local oscillator .this attack can be implemented successfully due to several reasons .first , we analyzed the solution of equations that two sending beams by eve satisfied , and it can be clearly seen that the solutions satisfying all conditions exist .second , the main contribution of deviation of bob s measurements from eve s is due to the fake local oscillator s enlarging shot noise and can be reduced by selecting appropriate transmittance or decreasing the intensity of fake local oscillator .these two aspects are not considered in ref . , which may make their scheme invalid in some parameter regimes .last but not least , all of the practical beam splitters at bob s station have the same wavelength - dependent property . however , if bob inserts optical filters on his system before receiving the light , this attack can not be avoided efficiently , as analyzed in sec . [sec : scheme ] , which is considered to be impossible in ref .so , using high - quality filters and wavelength - independent beam splitters in practical cvqkd systems and carefully monitoring the local oscillator are very important to confirm the security of the heterodyne protocol of cvqkd .additionally , we point out that , for simplicity , in this paper we do not consider the imperfections of bob s homodyne detector such as detection efficiency , electronic noise and other excess noise etc .however , when taking these imperfections into account , eve only needs to slightly adjust her attacking parameters like and or fake local oscillator s intensity , which thus is able to hide her wavelength attack .this work is supported by the national natural science foundation of china , grant no . 61072071 .is supported by the program for new century excellent talents .acknowledge support from nudt under grant no .kxk130201 .here the shot noise variance ( also called shot noise or vaccuum noise ) indicates the variance of vaccum - state quadratures and has been normalized with lo intensity or power .in this paper we all scale as done in ref . , and in other papers has been also scaled to or due to the different definitions between the quadratures ( or ) and creation or annihilation operator ( or ) . 
however , when not scaled , the shot noise is indeed proportional to the local oscillator power .
|
we present a wavelength attack on a practical continuous - variable quantum - key - distribution system with a heterodyne protocol , in which the transmittance of the beam splitters at bob s station is wavelength - dependent . our strategy was developed independently of , but is analogous to , that of huang _ et al_. [ arxiv : 1206.6550v1 [ quant - ph ] ] ; in that paper , however , the shot noise of the two beams that eve sends to bob , after transmission through the homodyne detector , is not taken into account . since shot noise is the main contribution to the deviation of bob s measurements from eve s when the wavelength attack is implemented , it must be treated accurately . in this paper , we first analyze in detail the solutions of the equations that must be satisfied in this attack , which were not treated rigorously by huang _ et al_. we then calculate the shot noise of the homodyne detector accurately and conclude that the wavelength attack can be implemented successfully in some parameter regime .
|
a - order -dimensional real tensor consists of entries in real numbers : is called _ symmetric _ if the value of is invariant under any permutation of its indices . recall the definition of tensor product , is a vector in with its component as a real symmetric tensor of order dimension uniquely defines a degree homogeneous polynomial function with real coefficient by we call that the tensor is positive definite if for all . in 2005 , qi and lim proposed the definition of eigenvalues and eigenvectors for higher order tenors , independently . furthermore , in , these definitions were unified by chang , person and zhang .let and be real - valued , - order -dimensional symmetric tensors .assume further that is even and is positive definite .we call is a * generalized eigenpair * of if when the tensor is an identity tensor such that for all , the eigenpair reduces to -eigenpair which is defined as a pair satisfying another special case is that when with , the real scalar is called an -eigenvalue and the real vector is the associated -eigenvector of the tensor . in the last decade ,tensor eigenproblem has received much attention in the literature , which has numerous applications in magnetic resonance imaging , image analysis , data fitting , quantum information , automatic control , higher order markov chains , spectral graph theory , multi - label learning , and so on . in ,a positive semidefinite diffusion tensor ( psdt ) model was proposed to approximate the apparent diffusion coefficient ( adc ) profile for high - order diffusion tensor imaging , where the smallest z - eigenvalue need to be nonnegative to guarantee the positive definiteness of the diffusivity function .based on all of the z - eigenvalues , a generalized fractional anisotropy ( gfa ) was proposed to characterize the anisotropic diffusion profile for psdt .gfa is rotationally invariant and independent from the choice of the laboratory coordinate system . in automatic control , the smallest eigenvalue of tensors could reflect the stability of a nonlinear autonomous system . in , the principal z - eigenvectors can depict the orientations of nerve fibers in the voxel of white matter of human brain .recently , a higher order tensor vessel tractography was proposed for segmentation of vascular structures , in which the principal directions of a 4-dimensional tensor were used in vessel tractography approach . in general , it is np - hard to compute eigenvalues of a tensor . in , a direct method to calculate all of z - eigenvalueswas proposed for two and three dimensional symmetric tensors . for general symmetric tensors ,a shifted higher order power method was proposed for computing z - eigenpairs in .recently , in , an adaptive version of higher order power method was presented for generalized eigenpairs of symmetric tensor . in order to guarantee the convergence of power method, they need a shift to force the objective to be ( locally ) concave / convex . in this case , the power method is a monotone gradient method with unit - stepsize . by using fixed - point analysis, linear convergence rate is established for the shifted higher order power method .however , similarly to the case of matrix , when the largest eigenvalue is close to the second dominant eigenvalue , the convergence of power method will be very slow .in the recent years , there are various optimization approaches were proposed for tensor eigenvalue problem . 
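before reviewing these approaches , the following python sketch makes the basic objects concrete : it forms the contraction a x^{m-1} for a symmetric tensor and runs a simple shifted higher - order power iteration for z - eigenpairs of the kind mentioned above ; the random test tensor , the fixed shift and the helper names are illustrative assumptions of this sketch , not the algorithms or parameters studied in this paper .

    import math
    from itertools import permutations

    import numpy as np

    def symmetrize(A):
        # average over all index permutations so that A is a symmetric tensor
        m = A.ndim
        S = np.zeros_like(A)
        for p in permutations(range(m)):
            S = S + np.transpose(A, p)
        return S / math.factorial(m)

    def tensor_apply(A, x):
        # contract an order-m tensor with x in all but one index: (A x^{m-1})_i
        out = A
        for _ in range(A.ndim - 1):
            out = np.tensordot(out, x, axes=([out.ndim - 1], [0]))
        return out

    def ss_hopm(A, x0, alpha=4.0, tol=1e-10, max_iter=2000):
        # shifted higher-order power iteration for z-eigenpairs (m even):
        #   x_{k+1} = (A x_k^{m-1} + alpha x_k) / ||A x_k^{m-1} + alpha x_k||
        # alpha is a fixed illustrative shift here, not an adaptive choice
        x = x0 / np.linalg.norm(x0)
        for _ in range(max_iter):
            y = tensor_apply(A, x) + alpha * x
            y = y / np.linalg.norm(y)
            if np.linalg.norm(y - x) < tol:
                x = y
                break
            x = y
        lam = x @ tensor_apply(A, x)   # lambda = A x^m for a unit z-eigenvector
        return lam, x

    rng = np.random.default_rng(0)
    n, m = 3, 4
    A = symmetrize(rng.standard_normal((n,) * m))
    lam, x = ss_hopm(A, rng.standard_normal(n))
    print("lambda = %.6f, z-eigenpair residual = %.2e"
          % (lam, np.linalg.norm(tensor_apply(A, x) - lam * x)))

a large fixed shift makes the update monotone but slows the iteration down , which is precisely the behaviour that motivates the adaptive choices discussed below .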
in , han proposed an unconstrained optimization model for computing generalized eigenpair of symmetric tensors . by using bfgs method to solve the unconstrained optimization , the sequence will be convergent superlinearly .a subspace projection method was proposed in for z - eigenvalues of symmetric tensors .recently , in , hao , cui and dai proposed a trust region method for z - eigenvalues of symmetric tensor and the sequence enjoys a locally quadratic convergence rate . in , ni and qi employed newton method for the kkt system of optimization problem , and obtained a quadratically convergent algorithm for finding the largest eigenvalue of a nonnegative homogeneous polynomial map . in , an inexact steepest descent method was proposed for computing eigenvalues of large scale hankel tensors . since nonlinear optimization methods may stop at a local optimum , a sequential semi - definite programming method was proposed by hu et al . for finding the extremal z - eigenvalues of tensors .moreover , in , a jacobian semi - definite relaxation approach was presented to compute all of the real eigenvalues of symmetric tenors . in practice ,one just need to compute extremal eigenvalues or all of its local maximal eigenvalues , for example in mri . on the other hand ,when the order or the dimension of a tensor grows larger , the optimization problem will become large - scale or huge - scale .therefore , we would like to investigate one simple and low - complexity method for finding tensor eigenpairs . in this paper, we consider an adaptive gradient method for solving the following nonlinear programming problem : where denote the unit sphere , i.e. , , denotes the euclidean norm . by some simple calculations, we can get its gradient and hessian , as follows : and its hessian is where , and is a matrix with its component as according to ( [ gradient ] ) , we can derive an important property for the nonlinear programming problem ( [ max - optimization - problem ] ) that the gradient is located in the tangent plane of at , since let is a constrained stationary point of ( [ max - optimization - problem ] ) , i.e. , that then we can claim that * every constrained stationary point * of ( [ max - optimization - problem ] ) * must be a stationary point * of since should be hold for all otherwise , if , we could choose , and then .suppose and denote .by , we know that any kkt point of ( [ max - optimization - problem ] ) will be a solution of the system of equations ( [ generalized eigenpair ] ) . before end of this section, we would like to state the following theorem and its proof is omitted .if the gradient at vanishes , then is a generalized eigenvalue and the vector is the associated generalized eigenvector .the rest of this paper is organized as follows . in the next section ,we introduce some existed gradient methods for tensor eigenvalue problems . 
in section 3 ,based on a curvilinear search scheme , we present a inexact gradient method .then , we establish its global convergence and linear convergence results under some suitable assumptions .section 4 provides numerical experiments to show the efficiency of our gradient method .finally , we have a conclusion section .the symmetric higher - order power method ( s - hopm ) was introduced by de lathauwer , de moor , and vandewalle for solving the following optimization problem : this problem is equivalent to finding the largest z - eigenvalue of and is related to finding the best symmetric rank-1 approximation of a symmetric tensor } ] , an initial unit iterate .let + * for * * do * + * 1 : * + * 2 : * + * 3 : * + * end for * + ' '' '' the cost per iteration of power method is , mainly for computing . let .set , , then the main iteration could be reformulated as , which is a projected gradient method with unit - stepsize .kofidis and regalia pointed out that s - hopm method can not guarantee to converge . by using convexity theory, they show that s - hopm method could be convergent for even - order tensors under the convexity assumption on . for general symmetric tensors , a shifted s - hopm ( ss - hopm ) methodwas proposed by kolda and mayo for computing z - eigenpairs .one shortcoming of ss - hopm is that its performance depended on choosing an appropriate shift .recently , kolda and mayo extended ss - hopm for computing generalized tensor eigenpairs , called geap method which is an adaptive , monotonically convergent , shifted power method for generalized tensor eigenpairs ( [ generalized eigenpair ] ) .they showed that geap method is much faster than the ss - hopm method due to its adaptive shift choice . ' '' '' + given tensors } ] , and an initial guess .let if we want to find the local maxima ; otherwise , let for seeking local minima. let be the tolerance on being positive / negative definite .+ * for * * do * + * 1 : * precompute ,,,, , + * 2 : * + * 3 : * + * 4 : * + * 5 : * + * 6 : * + * end for * + ' '' '' in , ng , qi and zhou proposed a power method for finding the largest h - eigenvalue of irreducible nonnegative tensors .it is proved in that nqz s power method is convergent for primitive nonnegative tensors .further , zhang et .al established its linear convergence result and presented some updated version for essentially positive tensors and weakly positive tensors , respectively .however , similarly to the case of matrix , when the largest eigenvalue is close to the second dominant eigenvalue , the convergence of power method will be very slow . in ,hao , cui and dai proposed a sequential subspace projection method ( sspm ) for z - eigenvalue of symmetric tensors . in each iteration of sspm method , one need to solve the following 2-dimensional subproblem : let .the point in can be expressed as if , then sspm method will reduce to the power method . for simplicity ,if , the iterate can be expressed as with . 
in order to solve ( [ 2-dimenional subproblem ] ) ,one need to solve a equation like .for each iteration , the computational cost of sspm method is times than that of power method .as shown in , the main computational cost of sspm is the tensor - vector multiplications and ( defined in ) , which requires operations and operations , respectively .indicated by the idea in , we can present a gradient method with optimal stepsize for computing the generalized tensor eigenpairs problem ( [ max - optimization - problem ] ) .but we do nt want to present it here , since the computational cost per iterate is more expensive than power method . in this section ,we firstly present the following inexact gradient method , and then establish its global convergence and linear convergence results under some suitable assumptions . ' '' '' + given tensors } ] , an initial unit iterate , parameter .let be the tolerance .set k=0 ; calculate gradient . + * while * * do * + * 1 : * generate a stepsize such that satisfying * 2 : * update the iterate , calculate . +* end while * + ' '' '' it is clear that .moreover , by using ( [ xtgx ] ) , we can show the first - order gain per iterate is . since the spherical feasible region is compact, is positive and bounds away from zero , we can get that all the functions and gradients of the objective ( [ max - optimization - problem ] ) at feasible points are bounded , i.e. , there exists a constant such that for all , the following theorem indicates that the algorithm 3 is convergent to the kkt point of the problem ( [ max - optimization - problem ] ) .the constructive proof is motivated by the idea in . [ the : globalconvergence ] suppose that the gradient is lipschitz continuous on the unit shpere .let is generated by algorithm 3 .then the inexact curvilinear search condition defined in ( [ linesearch ] ) is well - defined and there exists a positive constant such that furthermore , * proof . *firstly , we have furthermore , we can obtain let , using ( [ xtgx ] ) , we can derived that for any constant , there exists a positive scalar such that for all ] .it follows from ( [ eq : gapfk1 ] ) that ( [ linesearch ] ) holds for all ] be the symmetric tensor defined by to compare the convergence in terms of the number of iterations .figure 1 shows the results for computing z - eigenvalues of from * _ example 1 _ * , and the starting point is ] .for each set of experiments , the same set of random starts was used . for the largest eigenpair, we list the number of occurrences in the 1000 experiments .we also list the median number of iterations until convergence , the average error and the average run time in the 1000 experiments in tables 1 - 4 .as we can see from tables 1 - 4 , adaptive gradient ( ag ) method is much faster than geap method and could reach the largest eigenpair with a higher probability .+ [ cols="^,^,^,^,^,^",options="header " , ] [ fig : ag - geap - z - eigen ] {ag - geap - h - eigenvalues-7.eps } } \end{array}\ ] ] we used 100 random starting guesses to test ag method and geap method for computing h - eigenvalues of from * _ examples 5 - 7_*. for each set of experiments , the same set of random starts was used . 
for the largest eigenpair, we list the number of occurrences in the 100 experiments .we also list the median number of iterations until convergence , the average error and the average run time in the 100 experiments in tables 5 - 7 .as we can see from table 5 , geap method fails to stop in 500 iterations for all of the 100 test experiments for example 5 .but geap can slowly approach to the largest h - eigenvalue in 81 test experiments as shown in figure 3 .adaptive gradient ( ag ) method much faster than geap method and could reach the largest eigenpair with a higher probability .especially , for examples 6 and 7 , adaptive gradient ( ag ) method could find the largest h - eigenvalue in all of the 100 experiments .in this paper , we introduced an adaptive gradient ( ag ) method for generalized tensor eigenpairs , which could be viewed as an inexact version of the gradient method with optimal stepsize for finding z - eigenvalues of tensor in .what we have done is to use an inexact curvilinear search condition to replace the constraint on optimal stepsize .so , the computational complexity of ag method is much cheaper than sspm method in .global convergence and linear convergence rate are established for the ag method for computing generalized eigenpairs of symmetric tensor .some numerical experiments illustrated that the ag method is faster than geap and could reach the largest eigenpair with a higher probability .this work was supported in part by the national natural science foundation of china ( no.61262026 , 11571905 , 11501100 ) , ncet programm of the ministry of education ( ncet 13 - 0738 ) , jgzx programm of jiangxi province ( 20112bcb23027 ) , natural science foundation of jiangxi province ( 20132bab201026 ) , science and technology programm of jiangxi education committee ( ldjh12088 ) , program for innovative research team in university of henan province ( 14irtsthn023 ) .l. bloy , r. verma , on computing the underlying fiber directions from the diffusion orientation distribution function " , in medical image computing and computer - assisted intervention miccai , vol . 2008 .springer : berlin / heidelberg , 2008 : 1 - 8 . k. chang , k. pearson , t. zhang , primitivity , the convergence of the nqz method , and the largest eigenvalue for nonnegative tensors " , _ siam journal on matrix analysis and applications _ 32 ( 2011 ) 806 - 819 .s. hu , g. li , l. qi , y. song , finding the maximum eigenvalue of essentially nonnegative symmetric tensors via sum of squares programming on the largest eigenvalue of a symmetric nonnegative tensor " , _ journal of optimization theory and applications _ 2013 .l. sun , s. ji and j. ye , hypergraph spectral learning for multi - label classification " , in proceedings of the 14th acm sigkdd international conference on knowledge discovery and data mining , acm , 2008 , pp .668 - 676 .
|
higher - order tensors arise increasingly often in signal processing , data analysis , higher - order statistics , and imaging sciences . in this paper , an adaptive gradient ( ag ) method is presented for computing generalized tensor eigenpairs . global convergence and a linear convergence rate are established under suitable conditions . numerical results are reported to illustrate the efficiency of the proposed method . compared with the geap method , an adaptive shifted power method proposed by tamara g. kolda and jackson r. mayo [ siam j. matrix anal . appl . , 35 ( 2014 ) , pp . 1563 - 1581 ] , the ag method is much faster and reaches the largest eigenpair with a higher probability . * keywords : * higher - order tensor , eigenvalue , eigenvector , gradient method , power method .
|
locating oil reservoirs that are economically viable is one of the main problems in the petroleum industry .this task is primarily undertaken through seismic exploration , where explosive sources generate seismic waves whose reflections at the different geological layers are recorded at the ground or sea level by acoustic sensors ( geophones or hydrophones ) .these seismic signals , which are later processed to reveal information about possible oil occurrences , are often contaminated by noise and properly cleaning the data is therefore of paramount importance . in particular , the design of efficient filters to suppress noise that shows coherence in space and time ( and often appears stronger in magnitude than the desired signal ) remains a scientific challenge for which novel concepts and methods are required .in addition , the filtering tools developed to treat such kind of noise may also find relevant applications in other physical problems where coherent structures evolving in a complex spatiotemporal dynamics need to identified properly . in land seismic surveys , the seismic sources generate various type of surface waves which are regarded as noise since they do not contain information from the deeper subsurface .this so - called coherent noise represents a serious hurdle in the processing of the seismic data since it may overwhelm the reflection signal , thus severely degrading the quality of the information that can be obtained from the data .a source - generated noise of particular concern is the ground roll , which is the main type of coherent noise in land seismic records and is commonly much stronger in amplitude than the reflected signals .ground roll are surface waves whose vertical components are rayleigh - type dispersive waves , with low frequency and low phase and group velocities .an example of seismic data contaminated by ground roll is shown in fig .[ fig : sismo ] .this seismic section consists of land based data with 96 traces ( one for each geophone ) and 1001 samples per trace .a typical trace is shown in fig .[ fig : trazo ] corresponding to geophone 58 .the image shown in fig .[ fig : sismo ] was created from the 96 traces using a standard imaging technique .the horizontal axis in this figure corresponds to the offset distance between source and receiver and the vertical axis represents time , with the origin located at the upper left corner .the maximum offset is 475 m ( the distance between geophones being 5 m ) and the maximum time is 1000 ms .the gray levels in fig . 1 change linearly from black to white as the amplitude of the seismic signal varies from minimum to maximum . owing to its dispersive nature, the ground roll appears in a seismic image as a characteristic fan - like structure , which is clearly visible in fig .[ fig : sismo ] .the data shown in this figure was provided by the brazilian petroleum company ( petrobras ) .standard methods for suppressing ground roll include one - dimensional high pass filtering and two - dimensional filtering .such `` global '' filters are based on the elimination of specific frequencies and have the disadvantage that they also affect the uncontaminated part of the signal . 
recently ,`` local '' filters for suppressing the ground roll have been proposed using the karhunen - love transform and the wavelet transform .the wiener - levinson algorithm has also been applied to extract the ground roll .filters based on the karhunen love ( kl ) transform are particularly interesting because of the _ adaptativity _ of the kl expansion , meaning that the original signal is decomposed in a basis that is obtained directly from the empirical data , unlike fourier and wavelet transforms which use prescribed basis functions .the kl transform is a mathematical procedure ( also known as proper orthogonal decomposition , empirical orthogonal function decomposition , principal component analysis , and singular value decomposition ) whereby any complicated data set can be optimally decomposed into a finite , and often small , number of modes ( called proper orthogonal modes , empirical orthogonal functions , principal components or eigenimages ) which are obtained from the eigenvectors of the data autocorrelation matrix . in applying the kltransform to suppress the ground roll , one must first map the contaminated region of the seismic record into a horizontal rectangular region .this transformed region is then decomposed with the kl transform and the first few principal components are removed to extract the coherent noise , after which the filtered data is inversely mapped back into the original seismic section .the advantage of this method is that the noise is suppressed with negligible distortion of the reflection signals , for only the data within the selected region is actually processed by the filter .earlier versions of the kl filter have however one serious drawback , namely , the fact that the region to be filtered must be picked by hand a procedure that not only can be labor intensive but also relies on good judgment of the person performing the filtering . in this article we propose a significant improvement of the kl filtering method , in which the region to be filtered is selected automatically as an optimization procedure .we introduce a novel quantity , namely , the coherence index , which gives a measure of the amount of energy contained in the most coherent modes for a given selected region .the optimal region is then chosen as that that gives the maximum .we emphasize that introducing a quantitative criterion for selecting the ` best ' region to be filtered has the considerable advantage of yielding a largely unsupervised scheme for demarcating and efficiently suppressing the ground roll .although our main motivation here concerns the suppression of coherent noise in seismic data , we should like to remark that our method may be applicable to other problems where coherent structures embedded in a complex spatiotemporal dynamics need to be identified or characterized in a more refined way .for example , the kl transform has been recently used to identify and extract spatial features from a complex spatiotemporal evolution in combustion experiment . a related method the so - called biorthogonal decomposition has also been applied to characterize spatiotemporal chaos and identified structures as well as identify changes in the dynamical complexity , and the spatial coherence of a multimode laser .we thus envision that our optimized kl filter may find applications in these and related problems of coherent structures in complex spatiotemporal dynamics .the article is organized as follows . 
in sec .[ sec : klt ] we define the karhunen love transform , describe its main properties , and discuss its relation to the singular value decomposition of matrices . in sec .[ sec : kl ] we present the kl filter and a novel optimization procedure to select the noise - contaminated region to be parsed through the filter .the results of our optimized filter when applied to the data shown in fig .[ fig : sismo ] are presented in sec .[ sec : results ] .our main conclusions are summarized in sec .[ sec : conclu ] . in appendixesa and b we briefly discuss , for completeness , the relation between the kl transform and two other similar procedures known as proper orthogonal decomposition ( or empirical orthogonal function expansion ) and principal component analysis .consider a multichannel seismic data consisting of traces with samples per trace represented by a matrix , so that the element of the data matrix corresponds to the amplitude registered at the geophone at time .for definiteness , let us assume that , as is usually the case .we also assume for simplicity that the matrix has full rank , i.e. , , where denotes the rank of . letting the vectors and denote the elements of the row and the column of , respectively , we can write with the above notation we have where denotes the element of the vector . ( to avoid risk of confusion matrix elementswill always be denoted by capital letters , so that a small - cap symbol with two subscripts indicates vector elements . ) next consider the following symmetric matrix where the superscript denotes matrix transposition .it is a well known fact from linear algebra that matrices of the form ( [ eg : g ] ) , also called covariance matrices , are positive definite ., as assumed ; if , the matrix has nonzero eigenvalues with all remaining eigenvalues equal to zero .] let us then arrange the eigenvalues of in non - ascending order , i.e. , , and let be the corresponding ( normalized ) eigenvectors .the karhunen - love ( kl ) transform of the data matrix is defined as the matrix given by where the columns of the matrix are the eigenvectors of : the original data can be recovered from the kl transform by the inverse relation we refer to this equation as the kl _ expansion _ of the data matrix .to render such an expansion more explicit let us denote by the , , the elements of the row of the kl matrix , that is , then ( [ eq : au ] ) can be written as where it is implied matrix multiplication between the column vector and the row vector .the eigenvectors are called _ empirical eigenvectors _ , _ proper orthogonal modes _ , or _kl modes_. as discussed in appendix [ sec : pod ] , the total energy of the data can be defined as the sum of all eigenvalues , so that can be interpreted as the energy captured by the empirical eigenvector .we thus define the relative energy in the kl mode as we note furthermore that since is a covariance - like matrix its eigenvalues can also be interpreted as the variance of the respective principal component ; see appendix [ sec : pca ] for more details on this interpretation .we thus say that the higher the more coherent the kl mode is . 
in this context , the most energetic modes are identified with the most coherent ones and vice - versa .an important property of the kl expansion is that it is ` optimal ' in the following sense : if we form the matrix by keeping the first rows of and setting the remaining rows to zero , then the matrix given by is the best approximation to by a matrix of rank in the frobenius norm ( the square root of the sum of the squares of all matrix elements ) .this optimality property of the kl expansion lies at the heart of its applications in data compression and dimensionality reduction , for it allows to approximate the original data by a smaller matrix with minimum loss of information ( in the above sense ) .another interpretation of relation ( [ eq : ak ] ) is that it gives a low - lass filter , for in this case only the first kl modes are retained in the filtered data . on the other hand ,if the relevant signal in the application at hand is contaminated with coherent noise , as is the case of the ground roll in seismic data , one can use the kl transform to remove efficiently such noise by constructing a high - pass filter . indeed ,if we form the matrix by setting to zero the first rows of and keeping the remaining ones , then the matrix given by is a filtered version of where the first ` most coherent ' modes have been removed .however , if the noise is localized in space and time it is best to apply the filter only to the contaminated part of the signal . in previous versions of the kl filterthe choice of the region to be parsed through the filter was made _ a priori _ , according to the best judgement of the person carrying out the filtering , thus lending a considerable degree of subjectivity to the process . in the next section, we will show how one can use the kl expansion to implement an automated filter where the undesirable coherent structure can be ` optimally ' identified and removed . before going into that ,however , we shall briefly discuss below an important connection between the kl transform and an analogous mathematical procedure known as the singular value decomposition of matrices .readers already knowledgeable about the equivalence between these two formalisms ( or more interested in the specific application of the kl transform to filter coherent noise ) may skip the remainder of this section without loss of continuity .we recall that the singular value decomposition ( svd ) of any matrix , with , is given by the following expression : where is as defined in ( [ eq : u ] ) , is a diagonal matrix with elements , the so - called _ singular values _ of , and is a matrix whose columns correspond to the eigenvectors of the matrix with nonzero eigenvalues .the svd allows us to rewrite the matrix as a sum of matrices of unitary rank : in the context of image processing the matrices are called _ eigenimages _ .now , comparing ( [ eq : au ] ) with ( [ svda ] ) we see that the kl transform is related to the svd matrices and by the following relation so that the row vectors of are given in terms of the singular values and the vectors by it thus follows that the decomposition in eigenimages seen in ( [ eq : q ] ) is precisely the kl expansion given in ( [ eq : uxi ] ) . 
furthermore the approximation given in ( [ eq : ak ] ) can be written in terms of eigenimages as similarly , the filtered data shown in ( [ eq : atilk ] ) reads in terms of eigenimages : the svd provides an efficient way to compute the kl transform , and we shall use this method in the numerical procedures described in the paper .as already mentioned , owing to its dispersive nature the ground - roll noise appears in a seismic image as a typical fan - like coherent structure .this space - time localization of the ground roll allows us to apply a sort of ` surgical procedure ' to suppress the noise , leaving intact the uncontaminated region . to do that , we first pick lines to demarcate the start and end of the ground roll and , if necessary , intermediate lines to demarcate different wavetrains , as indicated schematically in fig .[ fig : sector ] . in this figurewe have for simplicity used straight lines to demarcate the sectors but more general alignment functions , such as segmented straight lines , can also be chosen . to make our discussion as general as possible ,let us assume that we have a set of parameters , , describing our alignment functions .for instance , in fig .[ fig : sector ] the parameters would correspond to the coefficients of the straight lines defining each sector .once the region contaminated by the ground roll has been demarcated , we map each sector onto a horizontal rectangular region by shifting and stretching along the time axis ; see fig .[ fig : sector ] . the data points between the top and bottom lines in each sector is mapped into the corresponding new rectangular domain , with the mapping being carried out via a cubic convolution interpolation technique .after this alignment procedure the ground roll events will become approximately horizontal , favoring its decomposition in a smaller space .since any given transformed sector has a rectangular shape it can be represented by a matrix , which in turn can be decomposed in empirical orthogonal modes ( eigenimages ) using the kl transform .the first few modes , which contain most of the ground roll , are then subtracted to extract the coherent noise .the resulting data for each transformed sector is finally subjected to the corresponding inverse mapping to compensate for the original forward mapping .this leaves the uncontaminated data ( lying outside the demarcated sectors ) unaffected by the whole filtering procedure .the kl filter described above has indeed shown good performance in suppressing source - generated noise from seismic data .the method has however the drawback that the region to be filtered must be picked by hand , which renders the analysis somewhat subjective . 
in order to overcome this difficulty, it would be desirable to have a quantitative criterion based on which one could decide what is the ` best choice ' for the parameters describing the alignment functions .in what follows , we propose an optimization procedure whereby the region to be filtered can be selected automatically , once the generic form of the alignment functions is prescribed .suppose we have chosen sectors to demarcate the different wavetrains in the contaminated region of the original data , and let be the set of parameters characterizing the respective alignment functions that define these sectors .let us denote by , , the matrix representing the transformed sector obtained from the linear mapping of the respective original sector , as discussed above .for each transformed sector we then compute its kl transform and calculate the _ coherence index _ for this sector , defined as the relative energy contained in its first kl mode : where are the eigenvalues of the correlation matrix and is the rank of . such as defined above , represents the relative weight of the most coherent mode in the kl expansion of the transformed sector .( a quantity analogous to our is known in the oceanography literature as the similarity index . )next we introduce an overall coherence index for the entire demarcated region , defined as the average coherence index of all sectors : as the name suggests , the coherence index is a measure of the amount of ` coherent energy ' contained in the chosen demarcated region given by the parameters .thus , the higher the larger the energy contained in the most coherent modes in that region . for the purpose of filtering coherent noiseit is therefore mostly favorable to pick the region with the largest possible .we thus propose the following criterion to select the optimal region to be filtered : vary the parameters over some appropriate range and then choose the values that maximize the coherence index , that is , .\label{eq : ci*}\ ] ] once we have selected the optimal region , given by the parameters , we then simply apply the kl filter to this region as already discussed : we remove the first few eigenimages from each transformed sector and inversely map the data back into the original sectors , so as to obtain the final filtered image . in the next sectionwe will apply our optimized kl filtering procedure to the seismic data shown in fig .[ fig : sismo ] .here we illustrate how our optimized kl filter works by applying it to the seismic data shown in fig .[ fig : sismo ] . in this case, it suffices to choose only one sector to demarcate the entire region contaminated by the ground roll .this means that we have to prescribe only two alignment functions , corresponding to the uppermost and lowermost straight lines ( lines ab and cd , respectively ) in fig .[ fig : sector ] . to reduce further the number of free parameters in the problem ,let us keep the leftmost point of the upper line ( point a in fig .[ fig : sector ] ) fixed to the origin , so that the coordinates of point a are set to , while allowing the point to move freely up or down within certain range ; see below .similarly , we shall keep the rightmost point of the lower line ( point c in fig .[ fig : sector ] ) pinned at a point , where and is chosen so that the entire ground roll wavetrain is above this point .the other endpoint of the lower demarcation line ( point in fig .[ fig : sector ] ) is allowed to vary freely . 
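as an illustration of the criterion just described , the following python sketch implements the eigenimage ( kl ) filter through the svd , the coherence index of a set of transformed sectors , and a brute - force search over the two free demarcation parameters on a toy record , with point a pinned at the origin and point c pinned at the lower right corner as above ; the straight - line geometry , the linear ( rather than cubic - convolution ) resampling , the assumption that point d is the left - hand endpoint of the lower line , and all names are simplifications of this sketch , not the exact implementation used for the real data .

    import numpy as np

    def kl_filter(A, k_remove=1):
        # high-pass KL filter: reconstruct A with its first k_remove eigenimages removed
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        s_kept = s.copy()
        s_kept[:k_remove] = 0.0
        return (U * s_kept) @ Vt

    def coherence_index(sectors):
        # CI_j = lambda_1 / sum_k lambda_k for each transformed sector, CI = average over sectors
        ci = []
        for S in sectors:
            sv = np.linalg.svd(S, compute_uv=False)
            ci.append(sv[0] ** 2 / np.sum(sv ** 2))
        return float(np.mean(ci)), ci

    def align_sector(rec, y_b, y_c, y_d, L=64):
        # map the fan between the lines A=(0,0)->B=(n-1,y_b) and D=(0,y_d)->C=(n-1,y_c)
        # onto an (n x L) rectangle by per-trace linear resampling
        n, T = rec.shape
        out = np.zeros((n, L))
        for i in range(n):
            top = y_b * i / (n - 1)
            bot = y_d + (y_c - y_d) * i / (n - 1)
            out[i] = np.interp(np.linspace(top, bot, L), np.arange(T), rec[i])
        return out

    def best_region(rec, y_b_grid, y_d_grid, y_c):
        # grid search for the demarcation parameters that maximize the coherence index
        best_ci, best_par = -np.inf, None
        for y_b in y_b_grid:
            for y_d in y_d_grid:
                ci, _ = coherence_index([align_sector(rec, y_b, y_c, y_d)])
                if ci > best_ci:
                    best_ci, best_par = ci, (y_b, y_d)
        return best_ci, best_par

    # toy record: a dispersive "fan" of strong events plus weak background noise
    rng = np.random.default_rng(1)
    n_traces, n_samples = 48, 400
    rec = 0.1 * rng.standard_normal((n_traces, n_samples))
    for i in range(n_traces):
        centre = int(40 + 4.0 * i)          # ground-roll-like event moving out with offset
        rec[i, max(centre - 5, 0):centre + 5] += 1.0

    ci, (y_b, y_d) = best_region(rec, y_b_grid=range(100, 300, 20),
                                 y_d_grid=range(150, 390, 20), y_c=n_samples - 1)
    print("optimal parameters (y_B, y_D) = (%d, %d), CI = %.3f" % (y_b, y_d, ci))
    filtered_sector = kl_filter(align_sector(rec, y_b, n_samples - 1, y_d), k_remove=1)

the search above is exhaustive over the parameter grid , which mirrors the procedure used in the next section ; the sector alignment dominates the cost and could be cached per parameter value for larger grids .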
with such restrictions ,we are left with only two free parameters , namely , the angles and that the upper and lower demarcation lines make with the horizontal axis .so reducing the dimensionality of our parameter space allows us to visualize the coherence index as a 2d surface . for the case in hand ,it is more convenient however to express not as a function of the angles and but in terms of two other new parameters introduced below .let the coordinates of point , which defines the right endpoint of the upper demarcation line in fig .[ fig : sector ] , be given by , where . in our optimization procedurewe let point move along the right edge of the seismic section by allowing the coordinate to vary from a minimum value to a maximum value , so that we can write where is the number of intermediate sampling points between and , and .similarly , for the coordinates of point in fig .[ fig : sector ] , which is the moving endpoint of the lower straight line , we have and where is the number of sampling points between and , and . for each choice of and in ( [ eq : jb ] ) and ( [ eq : jd ] ) , we apply the procedure described in the previous section and obtain the coherence index of the corresponding region . in fig .[ fig : energ ] we show the energy surface , for the case in which , , , , , and .we see in this figure that possesses a sharp peak , thus showing that this criterion is indeed quite discriminating with respect to the positioning of the lines demarcating the region contaminated by the ground roll . the global maximum of in fig .[ fig : energ ] is located at and , and in fig .[ fig : transf]a we show the transformed sector obtained from the linear mapping of this optimal region . in this figureone clearly sees that the ground roll wavetrains appear mostly as horizontal events . in fig .[ fig : transf]b we present the first eigenimage of the data shown in fig .[ fig : transf]a , which corresponds to about 33% of the total energy of the image in fig .[ fig : transf]a , as can be seen in fig .[ fig : modes ] where we plot the relative energy captured by the first 10 eigenimages .the second eigenimage , shown in fig .[ fig : transf]c , captures about 10% of the total energy , with each successively higher mode contributing successively less to the total energy ; see fig . [fig : modes ] . in fig .[ fig : transf]d we give the result of removing the first kl mode ( fig .[ fig : transf]b ) from fig .[ fig : transf]a .it is clear in fig .[ fig : transf]d that by removing only the first eigenimage the main horizontal events ( corresponding to the ground roll ) have already been greatly suppressed .performing the inverse mapping of the image shown in fig .[ fig : transf]c yields the data seen in the region between the two white lines in fig .[ fig : sismoenerg]a , which shows the final filtered image for this case ( i.e. , after removing the first kl mode from the transformed region ) .we see that the ground roll inside the demarcated region in fig .[ fig : sismoenerg]a has been considerably suppressed , while the uncontaminated signal ( lying outside the marked region ) has not been affected at all by the filtering procedure .if one wishes to filter further the ground roll noise one may subtract successively higher modes .for example , in fig .[ fig : sismoenerg]b we show the filtered image after we also subtract the second eigenimage .one sees that there is some minor improvement , but removing additional modes is not recommended for it starts to degrade relevant signal as well .a. ] a. 
in b ) we show the result after removing the first two eigenimages.,title="fig : " ] a. in b ) we show the result after removing the first two eigenimages.,title="fig : " ]an optimized filter based on the karhunen love transform has been constructed for processing seismic data contaminated with coherent noise ( ground roll ) .a great advantage of the kl filter lies in its local nature , meaning that only the contaminated region of the seismic record is processed by the filter , which allows the ground roll to be removed without distorting most of the reflection signal .another advantage is that it is an adaptative method in the sense the the signal is decomposed in an empirical basis obtained from the data itself .we have improved considerably the kl filter by introducing an optimization procedure whereby the ground roll region is selected so as to maximize an appropriately defined coherence index .we emphasize that our method , require as input , only the generic alignment functions to be used in the optimization procedure as well as the number of eigenimages to be removed from the selected region .these may vary depending on the specific application at hand .however , once these choices are made , the filtering task can proceed in the computer in an automated way .although our main motivation here has been suppressing coherent noise from seismic data , our method is by no means restricted to geophysical applications .in fact , we believe that the method may prove useful in other problems in physics that require localizing coherent structures in an automated and more refined way .we are currently exploring further such possibilities .financial support from the brazilian agencies cnpq and finep and from the special research program is acknowledged .we thank l. lucena for many useful conversation and for providing us with the data .in dynamical systems the mathematical procedure akin to the kl transform is called the proper orthogonal decomposition ( pod ) . in this context, one may view each column vector of the data matrix as a set of measurements ( real or numerical ) of a given physical variable performed simultaneously at space locations and at a certain time , that is , , .for example , in turbulent flows the vectors often represent measurements of the fluid velocity at points in space at a given time .the data matrix thus corresponds to an ensemble of such vectors , representing a sequence of measurements over instants of time . in pod oneis usually concerned with finding a low - dimensional approximate description of the high - dimensional dynamical process at hand .this is done by finding an ` optimal ' basis in which to expand ( and then truncate ) a typical vector of the data ensemble .such a basis is given by the eigenvectors of the time - averaged autocorrelation matrix , which is proportional to the matrix define above : hence the eigenvectors of are also eigenvectors of . in pod parlancethe eigenvectors are called _ empirical eigenvectors _ or _ proper orthogonal modes_. in the continuous case , the corresponding eigenfunctions of the autocorrelation operator are known as _ empirical orthogonal functions _( eof ) . 
from ( [ eq : a ] ) , ( [ eq : psi ] ) and ( [ eq : u ] ), one can easily verify that we thus see that the columns of the kl transform correspond to the coordinates of the vectors in the empirical basis : it is this expansion of any member of the ensemble in the empirical basis that is called the proper orthogonal decomposition or empirical orthogonal function expansion .it now follows from ( [ eq : pod ] ) that where in the last equality we used the fact that where is the diagonal matrix .equation ( [ eq : ey ] ) thus suggests that we can interpret the eigenvalue as a measure of the energy in the empirical orthogonal mode .for example , in the case of turbulent flows where the vector contains velocity measurements at time , the left hand of ( [ eq : ey ] ) yields twice the average kinetic energy per unit mass , so that gives the kinetic energy in the empirical orthogonal mode .similarly , in the case of seismic data the vectors represent amplitudes of the reflected waves , and hence the quantity may be viewed as a measure of the total energy of the data , thus justifying the definition given in ( [ eq : e ] ) .the optimality of the kl expansion also has a nice physical and geometrical interpretation , as follows .suppose we write a vector in an arbitrary orthonormal basis : where .if we now wish to approximate by only its first components , then the optimality of the kl expansion implies that the first proper orthogonal modes capture more energy ( on average ) that the first modes of any other basis .more precisely , the mean square distance is minimum if we use the empirical basis .in statistical analysis of multivariate data , the kl transform is known as principal component analysis ( pca ) . in this case, one views the elements of a row vector of the data matrix as being realizations of a random variable , so that the matrix itself corresponds to samples of a random vector with components : .in other words , the column vectors correspond to the samples of .if the rows of are centered , i.e. , the variables have zero mean , then the matrix is proportional to the covariance matrix of : instead of , but this is not relevant for our discussion here . ] or alternatively in matrix notation [ note that the matrices and defined respectively in ( [ eq : r ] ) and ( [ eq : s ] ) are essentially the same but have different interpretations . ] in the pca context , the diagonal elements of the matrix are thus proportional to the variance of the variables , whereas the off - diagonal elements , , are proportional to the covariance between the variables and . furthermore , the eigenvectors of correspond to the principal axis of the covariance matrix . the idea behindpca is to introduce a new set of variables , each of which being a linear combination of the original variables , such that these new variables are mutually uncorrelated .this is accomplished by projecting the vector onto the principal directions of the covariance matrix .more precisely , we define the principal components , , by the following relation in other words , the vector of principal components is obtained from a rotation of the original vector : the covariance matrix of the principal components is then given by thus showing that as desired . 
the first principal component then represents the particular linear combination of the original variables ( among all possible such combinations that yield mutually uncorrelated variables ) that has the largest variance , with the second principal component possessing the second largest variance , and so on . from ( [ eq : psi ] ) and ( [ eq : p ] )one sees that the elements of the row of the kl transform correspond to the samples or _ scores _ of the principal component . that is , if we denote the sample vector of the principal component by , then .for this reason in the pca context the kl transform is called the matrix of scores .
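a short numerical check of this last statement : the rows of the kl transform ( the scores of the principal components ) are mutually uncorrelated , so their sample covariance matrix is diagonal up to rounding ; the centred random data below is purely illustrative .

    import numpy as np

    rng = np.random.default_rng(2)
    n, p = 5, 1000                               # n variables, p samples
    A = rng.standard_normal((n, p))
    A = A - A.mean(axis=1, keepdims=True)        # centre the rows

    G = A @ A.T                                  # covariance-like matrix
    lam, U = np.linalg.eigh(G)                   # eigenvalues in ascending order
    U = U[:, ::-1]                               # reorder so the first column is the largest mode
    Psi = U.T @ A                                # KL transform / matrix of scores

    C = Psi @ Psi.T / p                          # sample covariance of the principal components
    print("largest off-diagonal entry:", np.max(np.abs(C - np.diag(np.diag(C)))))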
|
signals obtained in land seismic surveys are usually contaminated with coherent noise , among which the ground roll ( rayleigh surface waves ) is of major concern , for it can severely degrade the quality of the information obtained from the seismic record . properly suppressing the ground roll in seismic data is not only of great practical importance but also remains a scientific challenge . here we propose an optimized filter based on the karhunen - love transform for processing seismic data contaminated with ground roll . in our method , the contaminated region of the seismic record , which is the part processed by the filter , is selected in such a way as to correspond to the maximum of a properly defined coherence index . the main advantages of the method are that the ground roll is suppressed with negligible distortion of the remaining reflection signals and that the filtering can be performed on the computer in a largely unsupervised manner . the method has been devised to filter seismic data ; however , it could also be relevant for other applications where localized coherent structures , embedded in a complex spatiotemporal dynamics , need to be identified in a more refined way .
|
in a digraph with vertex set and arc ( directed edge ) set , a vertex _ dominates _ itself and all vertices of the form . a _ dominating set _ , , for the digraph is a subset of such that each vertex is dominated by a vertex in .a _ minimum dominating set _ , , is a dominating set of minimum cardinality ; and the _ domination number _ , ,is defined as , where is the cardinality functional ( ) .if a minimum dominating set is of size one , we call it a _ dominating point_. let be a measurable space and consider a function , where represents the power set of . then given , the _ proximity map _ associates with each point a _ proximity region _ .the region depends on the distance between and . for , the _ -region _ , associates the region with each set . for ,we denote as .if is a set of -valued random variables , then the ( and ) , are random sets . if the are independent and identically distributed , then so are the random sets ( and ) .furthermore , is a random set .notice that , since iff iff for all iff for all iff .consider the data - random proximity catch digraph with vertex set and arc set defined by .the random digraph depends on the ( joint ) distribution of the and on the map ( see priebe et al .( 2001 ) and priebe et al .( 2003 ) ) .the adjective _ proximity _ for the catch digraph and for the map comes from thinking of the region as representing those points in `` close '' to ( see , e.g. , toussaint ( 1980 ) and jaromczyk and toussaint ( 1992 ) ) .for the domination number of the associated data - random proximity catch digraph , denoted , is the minimum number of points that dominate all points in .note that , iff .the random variable depends on explicitly , and on and implicitly . in general ,the expectation ] ; and the variance of satisfies , \le n^2/4 ] . in general ,let and let be three non - collinear points. denote by the triangle including the interior formed by these three points .the most straightforward extension of the data random proximity catch digraph introduced by priebe et al .( 2001 ) is the spherical proximity map which is the ball centered at with radius or the arc - slice proximity map .however , both cases suffer from the intractability of the -region and hence the intractability of the finite and asymptotic distribution of .we propose a new class of proximity regions which does not suffer from this drawback .for ] .notice that , with the additional assumption that the non - degenerate two - dimensional probability density function exists with , implies that the special case in the construction of falls on the boundary of two vertex regions occurs with probability zero .note that for such an , is a triangle a.s .and is a star - shaped polygon ( not necessarily convex ) .[ ht ] [ ht ] let be the ( closest ) edge extremum for edge .then , where is the edge opposite vertex , for .so , for .let the domination number be and }:=\operatorname{argmin}_{x\in { \mathcal{x}}_n \cap r({\mathsf{y}}_j)}d(x , e_j) ] for each . thus \le 3 \text { and } 0 \le { \mathbf{var}\,[}{\gamma}_n(r ) ] \le 9/4.\ ] ]the null hypothesis for spatial patterns have been a contraversial topic in ecology from the early days . collected a voluminous literature to present a comprehensive analysis of the use and misuse of null models in ecology community .they also define and attempt to clarify the null model concept as a pattern - generating model that is based on randomization of ecological data or random sampling from a known or imagined distribution . . 
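as a concrete illustration of the definition of the domination number , the following python sketch computes it by brute force from the boolean adjacency matrix of a small digraph ( a vertex dominates itself and its out - neighbours ) ; building the adjacency matrix from the proximity regions defined above is omitted here , so a random digraph is used purely for illustration .

    from itertools import combinations

    import numpy as np

    def domination_number(adj):
        # adj[i, j] = True if there is an arc from vertex i to vertex j;
        # a set S dominates V if every vertex is in S or has an in-neighbour in S
        n = adj.shape[0]
        cover = adj | np.eye(n, dtype=bool)       # vertex i dominates i and its out-neighbours
        for k in range(1, n + 1):
            for S in combinations(range(n), k):
                if cover[list(S), :].any(axis=0).all():
                    return k
        return n

    rng = np.random.default_rng(3)
    n = 8
    adj = rng.random((n, n)) < 0.3                # illustrative random digraph, not a PCD
    print("domination number:", domination_number(adj))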
.the randomization is designed to produce a pattern that would be expected in the absence of a particular ecological mechanism . " in other words , the hypothesized null models can be viewed as thought experiments , " which is conventially used in the physical sciences , and these models provide a statistical baseline for the analysis of the patterns . for statistical testing , the null hypothesis we consider is a type of _ complete spatial randomness _ ; that is , where is the uniform distribution on .if it is desired to have the sample size be a random variable , we may consider a spatial poisson point process on as our null hypothesis .we first present a geometry invariance " result which allows us to assume is the standard equilateral triangle , , thereby simplifying our subsequent analysis .* theorem 1 * : let be three non - collinear points . for , let , the uniform distribution on the triangle .then for any ] , converges to the area of the superset region , , as .in particular , ] for ] for ] for ] as .* proof : * for , and }) ] has positive area for all pairs .recall that with probability 1 for all and . hence in probability as .for ] as , and \rightarrow \sigma^2 \approx .1918 ] and should hold . for ] , and for , a.s . as .then the test statistic is a constant a.s .and implies that a.s .hence consistency follows for segregation . under , as , for all , a.s .then implies that a.s ., hence consistency follows for association . figure [ fig : csrvsseg ] , we observe empirically that even under mild segregation we obtain considerable separation between the kernel density estimates under null and segregation cases for moderate and values suggesting high power at .a similar result is observed for association . with and , under ,the estimated significance level is relative to segregation , and relative to association . under , the empirical power ( using the asymptotic critical value )is , and under , . with and , under ,the estimated significance level is relative to segregation , and relative to association .the empirical power is for both alternatives .we also estimate the empirical power by using the empirical critical values . with and , under ,the empirical power is at empirical level and under the empirical power is at empirical level . with and , under ,the empirical power is at empirical level and under the empirical power is at empirical level .the extension to for is straightforward .let be non - coplanar points .denote the simplex formed by these points as .( a simplex is the simplest polytope in having vertices , edges , and faces of dimension . ) for ] which goes to as at the rate .see for the details .note that .then we find where is the event such that and , and , , and , and . first letting , then , yields the desired result .see for the details .next , , since .let where is the edge opposite vertex for and let be the realization of for . then iff or or .we find , by finding the asymptotically accurate joint pdf of .let be the triangle formed by the median lines at and for and , and let be small enough such that , for . then the asymptotically accurate joint pdf of is where and with domain with be small enough such that , for .similarly we find , by finding the joint pdf of , where is the triangle with vertices .then the asymptotically accurate joint pdf of is where with domain .
|
priebe et al . ( 2001 ) introduced the class cover catch digraphs and computed the distribution of the domination number of such digraphs for one - dimensional data . in higher dimensions these calculations are extremely difficult due to the geometry of the proximity regions , and only upper bounds are available . in this article , we introduce a new type of data - random proximity map and the associated ( di)graph in . we find the asymptotic distribution of the domination number and use it for testing spatial point patterns of segregation and association .

_ keywords : _ random digraph ; domination number ; proximity map ; spatial point pattern ; segregation ; association ; delaunay triangulation

research was supported by the defense advanced research projects agency as administered by the air force office of scientific research under contract dod f49620 - 99 - 1 - 0213 and by office of naval research grant n00014 - 95 - 1 - 0777 . + corresponding author . + cep.edu ( c.e . priebe )
|
open development platforms like android and the facebook platform have resulted in the availability of hundreds of thousands of third - party applications that end users can install with only a few clicks . consequently , end users are faced with a large and potentially bewildering number of choices when looking for applications .users installation decisions have privacy and security ramifications : android applications can access device hardware and data , and facebook applications can access users profile information and social networks . as such , it is important to help users select applications that operate as the user intends .android and facebook use permission systems to control the privileges of applications .applications can only access privacy- and security - relevant resources if the user approves an appropriate permission request .for example , an android application can only send text messages if it has the ` send_sms ` permission ; during installation , the user will see a warning that the application can `` send sms messages '' if the installation is completed .these permission systems are intended to help users avoid privacy- or security - invasive applications .unfortunately , user research has demonstrated that many users do not pay attention to or understand the permission warnings .a major problem here is that users do not know what permission combinations are typical for applications .our work is a first step in the direction of simplifying permission systems by means of statistical methods .we propose to identify common patterns in permission requests so that applications that do not fit the predominant patterns can be flagged for additional user scrutiny . towards this goal, we apply a probabilistic method for boolean matrix factorization to the permission requests of android and facebook applications .we find that while applications with good reputations ( i.e. , many ratings and a high average score ) typically correspond well to a set of permission request patterns , applications with poor reputations ( i.e. , less than 10 ratings ) often deviate from those patterns .the primary contribution of this paper is the first analysis of permission request patterns with a statistically sound model .our technique captures the concept of identifying `` unusual '' permission requests .our evaluation demonstrates that our technique is highly generalizable , meaning that the found clustering is stable over different random subsets of the data .we find that permission request patterns can indicate user satisfaction or application quality .android and the facebook platform support extensive third - party application markets .they use permission systems to limit applications access to users private information and resources .the android market is the official ( and primary ) store for android applications .the market provides users with average user ratings , user reviews , descriptions , screenshots , and permissions to help them select applications .android applications can access phone hardware ( e.g. , the microphone ) and private user information ( e.g. 
, call history ) via android s api .permissions restrict applications ability to use the api .for example , an application can only take a photograph if it has the ` camera ` permission .developers select the permissions that their applications need to function , and these permission requests are shown to users during the installation process .the user must approve all of an application s permissions in order to install the application .several studies have examined android applications use of permissions .barrera et al .surveyed the most popular applications and found that applications primarily request a small number of permissions , leaving most other permissions unused .they used self - organizing maps ( a dimensionality reduction technique ) to visualize the relationship between application categories and permission requests ; based on this analysis , they concluded that categories and permissions are not strongly related .their focus was on visualization and their findings are not applicable to identifying unusual permission request patterns ; they relied on the minimization of a euclidian cost function to find a low dimensional visualization of the data , whereas we use a generative probabilistic model to learn request patterns . felt et al . and chia et al .surveyed android applications and identified the most - requested permissions .chia et al . also found several correlations between the number of permissions and other factors : a weak positive correlation with the number of installs , a weak positive correlation with the average rating , a positive correlation with the availability of a developer website , and a negative correlation with the number of applications published by the same developer .we expand on these past analyses , and our analysis of permission requests is by far the largest study to date .other research has focused on using machine learning techniques to identify malware .sanz et al .applied several types of classifiers to the permissions , ratings , and static strings of applications to see if they could predict application categories , using the category scenario as a stand - in for malware detection .shabtai et al . similarly built a classifier for android games and tools , as a proxy for malware detection .zhou et al . found real malware in the wild with droidranger , a malware detection system that uses permissions as one input .although our techniques are similar , our goal is to understand the difference between high - reputation and low - reputation applications rather than to identify malware .applications may be of low quality or act in undesirable ways ( i.e. , be risky ) without being malware .additionally , our approach only relies on permission requests ; unlike these past approaches , we do not statically analyze applications to extract features , which makes our technique applicable to platforms where code is not available ( such as the facebook platform ) .enck et al . 
built a tool that warns users about applications that request blacklisted sets of permissions .they manually selected the blacklisted patterns to represent dangerous sets of permissions .in contrast , we advocate a statistical whitelisting approach : we propose to warn users about applications that do not match the permission request patterns expressed by high - reputation applications .these two approaches could be complementary ; human review of the statistically - generated patterns could potentially improve them .the facebook platform supports third - party integration with facebook .facebook lists applications in an `` apps and games '' market alongside information about the applications , including the numbers of installs , the average ratings , and the names of friends who use the same applications . through the facebook platform, applications can read users profile information , post to users news feeds , read and send messages , control users advertising preferences , etc .access to these resources is limited by a permission system , and developers must request the appropriate permissions for their applications to function .applications can request permissions at any time , but most permission requests are displayed during installation as a condition of installation .chia et al . surveyed facebook applications and found that their permission usage is similar to android applications : a small number of permissions are heavily used , and popular applications request more permissions .we collected information about 188,389 android applications from the official android market in november 2011 .this data set encompasses approximately of the android market , which contained 319,161 active applications as of october 2011 . to build our data set , we crawled and screen - scraped the web version of the android market .each application has its own description page on the market website , but the market does not provide an index of all of its applications . to find applications description pages , we first crawled the lists of `` top free '' and `` top paid '' applications .these lists yielded links to 32,106 unique application pages .next , we fed 1,000 randomly - selected dictionary words and all possible two - letter permutations ( e.g. , `` ac '' ) into the market s search engine .the search result listings provided us with links to an additional unique applications .once we located applications description pages , we parsed their html to extract applications names , categories , average rating score , numbers of ratings , numbers of downloads , prices , and permissions .chia et al . provided us with a set of 27,029 facebook applications .they collected these applications by crawling socialbakers , a site that aggregates statistics about facebook applications .after following socialbakers s links to applications , they screen - scraped any permission prompts that appeared .they also collected the average ratings and number of ratings for each application .one limitation of this data set is that it only includes the permission requests that are shown to users as a condition of installation ; they did not attempt to explore the functionality of the applications to collect secondary permission requests that might occur later . as such , our analysis only incorporates the permission requests that are shown to users as part of the installation flow . 
as an overview , we provide global statistics of the application datasets .we investigate overall application features , such as the price , ratings , and most popular permissions .the characteristics of these features play a role in our analysis of permission request patterns ( section [ sec_exps ] ) .table [ tab_globalperms ] lists the 15 most frequently requested android permissions , and table [ tab_fb ] depicts the 15 most frequently requested facebook permissions .as these indicate , a small number of permissions are widely requested , but most permissions are infrequently requested .
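to make the idea of pattern - based flagging concrete , the following is a minimal sketch of the whitelisting intuition on a toy boolean application - permission matrix . it is not the probabilistic boolean matrix factorization used in this work ; a plain frequency - based rarity score stands in for the learned request patterns , and the application and permission names are hypothetical .

```python
# illustrative sketch only ; the paper uses a probabilistic boolean matrix
# factorization , whereas a plain frequency-based score stands in for it here .
# application names and permission names below are hypothetical examples .
import numpy as np

apps = {
    "flashlight":  {"CAMERA"},
    "sms_backup":  {"READ_SMS", "WRITE_EXTERNAL_STORAGE"},
    "weather":     {"INTERNET", "ACCESS_FINE_LOCATION"},
    "news_reader": {"INTERNET"},
    "suspicious":  {"READ_SMS", "SEND_SMS", "READ_CONTACTS", "INTERNET"},
}

permissions = sorted(set().union(*apps.values()))
index = {p: i for i, p in enumerate(permissions)}

# boolean application x permission matrix
X = np.zeros((len(apps), len(permissions)), dtype=bool)
for r, (name, perms) in enumerate(apps.items()):
    for p in perms:
        X[r, index[p]] = True

# frequency of each permission across the corpus
freq = X.mean(axis=0)

# a crude "unusualness" score : average rarity of the requested permissions ;
# applications whose requests are dominated by rare permissions score high
# and could be flagged for additional user scrutiny .
scores = {}
for r, name in enumerate(apps):
    requested = X[r]
    scores[name] = float((1.0 - freq[requested]).mean()) if requested.any() else 0.0

for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} unusualness = {s:.2f}")
```

in practice the score would be replaced by the likelihood of an application s request vector under the fitted clustering , so that low - likelihood ( pattern - violating ) applications are the ones surfaced for extra scrutiny .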
|
android and facebook provide third - party applications with access to users private data and the ability to perform potentially sensitive operations ( e.g. , post to a user s wall or place phone calls ) . as a security measure , these platforms restrict applications privileges with permission systems : users must approve the permissions requested by applications before the applications can make privacy- or security - relevant api calls . however , recent studies have shown that users often do not understand permission requests and lack a notion of typicality of requests . as a first step towards simplifying permission systems , we cluster a corpus of 188,389 android applications and 27,029 facebook applications to find patterns in permission requests . using a method for boolean matrix factorization for finding overlapping clusters , we find that facebook permission requests follow a clear structure that exhibits high stability when fitted with only five clusters , whereas android applications demonstrate more complex permission requests . we also find that low - reputation applications often deviate from the permission request patterns that we identified for high - reputation applications , suggesting that permission request patterns are indicative of user satisfaction or application quality .
|
physiological signals are signals that are measured from sensors that are either placed on or implanted into the body . such physiological signals include those obtained using electromyography ( emg ) , electrocardiography ( ecg ) , electroencephalography ( eeg ) , photoplethysmography ( ppg ) , and ballistocardiography ( bcg ) . the processing and interpretation of such signals is challenging due to a number of different factors . for example , it is often difficult to obtain high - fidelity physiological signals due to noise , resulting in low signal - to - noise ratio ( snr ) . traditionally , signal averaging and linear filters such as band - reject and band - pass filters have been used to process such physiological signals to suppress noise ; however , such approaches have also been shown to result in signal degradation . as such , more advanced methods for handling such physiological signals are desired . multi - scale decomposition has become an invaluable tool for the processing of physiological signals . in multi - scale decomposition , a signal is decomposed into a set of signals , each characterizing information about the original signal at a different scale . a common signal processing task for which multi - scale decomposition has been shown to provide significant benefits is noise suppression , based on the notion that the information pertaining to the noise component would be largely characterized by certain scales that are separate from the scales characterizing the desired signal . much of the literature on multi - scale decomposition for physiological signal processing has focused on scale - space theory and wavelet transforms , with some investigations also conducted using methods such as empirical mode decomposition . in scale - space theory , a signal is decomposed into a single - parameter family of signals , denoted by , with a progressive decrease in fine scale signal information between successive scales : where denotes time , denotes scale , is the signal at the scale , and . by decomposing a signal into a set of signals with a progressive decrease in fine scale signal information between successive scales , one can then analyze signals at coarser scales without the influence of fine scale signal information such as that pertaining to noise , which is mainly characterized at the finer scales . as such , one can utilize scale space theory to suppress noise in a signal by performing scale space decomposition on the signal and then treating one of the signals at a coarser scale as the noise - suppressed signal . however , there are several limitations to the use of scale space theory for physiological signal processing pertaining to noise suppression . first , noise suppression using scale space theory requires the careful selection of which scale represents the noise - suppressed signal , which can be challenging . second , noise suppression using scale space theory does not facilitate fine - grained noise suppression at the individual scales , which limits its overall flexibility in striking a balance between noise suppression and signal structural preservation .
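as a concrete illustration of the scale selection issue discussed above , the following is a minimal sketch of gaussian scale - space decomposition of a noisy 1 - d signal , assuming the standard gaussian kernel as the scale - space generator ; the scale values and the test signal are illustrative choices , not values from this study .

```python
# a minimal sketch of gaussian scale - space decomposition of a 1 - d signal ,
# assuming the standard gaussian kernel as the scale - space generator ; the
# specific scales below are illustrative choices , not values from the study .
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
noisy = clean + 0.4 * rng.standard_normal(t.size)

# single - parameter family of progressively smoother signals
scales = [1, 2, 4, 8, 16]                      # gaussian std in samples
family = [gaussian_filter1d(noisy, s) for s in scales]

# treating one coarser - scale member as the noise - suppressed signal requires
# picking the scale by hand , which is the selection problem noted above
denoised = family[2]
snr = 10 * np.log10(np.sum(clean ** 2) / np.sum((denoised - clean) ** 2))
print(f"snr of the selected scale : {snr:.1f} db")
```

the denoised output depends entirely on which member of the family is picked , and no per - scale processing is possible , which is exactly the limitation that motivates the alternatives considered next .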
in wavelet decomposition , a signal is decomposed into a set of wavelet coefficients obtained using a wavelet transform : where is the wavelet , is the dyadic dilation , and is the dyadic position . wavelet transforms have a number of advantages for the purpose of physiological signal processing , particularly pertaining to noise suppression . first , as signal information at different scales is better separated in the wavelet domain ( i.e. , signal information at one scale is not contained in another scale ) , this facilitates fine - grained noise suppression at the individual scales to strike a balance between noise suppression and signal structural preservation . second , scale selection when performing noise suppression using wavelet transforms is less critical than that for noise suppression using scale space theory , since all scales are considered in noise suppression using wavelet transforms as opposed to a single scale selection with scale space theory . one limitation worth noting pertaining to signal processing using wavelet transforms , particularly for noise suppression , is that signals processed using wavelet transforms can exhibit oscillation artifacts related to the wavelet basis functions used in the wavelet transform , which is particularly noticeable when dealing with low snr scenarios . therefore , given some of the limitations with both scale space theory and wavelet transforms when used for physiological signal processing , one is motivated to explore alternative approaches that can address these limitations . here , we take a different approach by exploring a bayesian perspective on multi - scale signal decomposition . in this perspective , a signal is viewed as an amalgamation of a number of signals , each characterizing unique signal information at a different scale with different statistical characteristics . taking such a perspective to the problem of multi - scale signal decomposition has a number of advantages . first , like the wavelet transform , since signal information at one scale is not contained in another scale , it allows us to take better advantage of fine - grained noise suppression at the individual scales to strike a balance between noise suppression and signal structural preservation . second , since signals are decomposed based on their statistical characteristics as opposed to a set of deterministic basis functions , signals processed using this approach would not exhibit the types of basis - related artifacts associated with the use of wavelet transforms . motivated by this , in this study , we investigate the feasibility of utilizing a new bayesian - based method for multi - scale signal decomposition called the bayesian residual transform ( brt ) for the purpose of physiological signal processing . this paper is organized as follows . first , the methodology behind the proposed bayesian residual transform is described in section [ methods ] . the experimental setup for evaluating the feasibility of using the brt for suppressing noise in physiological signals via signal - to - noise ratio ( snr ) analysis using electrocardiography ( ecg ) signals is described in section [ setup ] . the experimental results and discussion are presented in section [ results ] , and conclusions are drawn and future work discussed in section [ conclusions ] . a full derivation of the proposed bayesian residual transform ( brt ) is given as follows .
in the brt, a signal is modeled as the summation of residual signals , each characterizing signal information from the signal at increasingly coarse scales : where denote a signal representing the summation of all residual signals at scales ] and the summation of all residual signals at scales $ ] .hence , one can treat this as an inverse problem of estimating given , with the analytical solution given by the conditional expectation ( the quantification of the conditional expectation will be explained in more detail in a later section discussing the realization of the brt via kernel regression ) .therefore , given , one can substitute for in eq .[ process2 ] and rearrange the terms to obtain as : given , which is computed to obtain , we can express the relationship between and in a similar manner to eq .[ process2 ] as : which can similarly be treated as an inverse problem of estimating given , with the analytical solution given by the conditional expectation .therefore , given , one can express as : generalizing this , at scale , for , can be obtained by where the last residual signal is computed as to conform with the form expressed in eq .[ sumofprocesses ] . hence , given eq .[ estj ] , we have a deep cascading framework for the brt where we can obtain the residual signal at scale ( i.e. , ) given the previously computed . furthermore, since the residual signal at scale ( i.e. , ) is not involved in the computation of the residual signal at scale ( i.e. , ) ( only obtained from previous cascading step is ) , the information contained within is not contained within . as such , as scale increases , the signal information contained in becomes coarser and coarser , which results in residual signals characterizing coarser and coarser signal information as scale increases .based on eq .[ estj ] , the deep cascading framework for the forward bayesian residual transform ( brt ) is illustrated in fig .[ fig1]a . due to the condition of the summation of residual signals at all scales being equal to signal ( eq .[ sumofprocesses ] ) , the inverse brt is simply the summation of all residual signals : the inverse bayesian residual transform ( inverse brt ) procedure is illustrated in fig .[ fig1]b . in this study, we implement a realization of the brt using a kernel regression strategy , which can be described as follows . at each iteration , we compute ( eq . [ condexpectation ] ) based on nonparametric nadaraya - watson kernel regression using a kernel function . here , we employ the following gaussian kernel function : finally , the residual signal at scale ( i.e. , ) can be set as , which is computed at the step where is computed . by setting , the condition of the summation of signal decompositions at all scales being equal to signal ( i.e. , eq .[ sumofprocesses ] ) is satisfied . a step - by - step summary of the realization of brt via kernel regressionis shown in algorithm [ alg1 ] .a step - by - step summary of the inverse brt is shown in algorithm [ alg2 ] . in this study, we wish to illustrate the feasibility of utilizing the brt for processing physiological signals through the task of noise suppression . as such, we first establish a simple approach to noise suppression of signals using the brt for illustrative purposes . 
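a minimal sketch of the cascading realization described above is given below . the conditional expectation at each scale is approximated with nadaraya - watson kernel regression over a local time window using a gaussian kernel ; the bandwidth schedule ( doubling per scale ) and the window size are illustrative assumptions and not the parameters used in this study .

```python
# a minimal sketch of the cascading brt realization via nadaraya - watson
# kernel regression . the gaussian - kernel bandwidth schedule ( doubling per
# scale ) and the window size are illustrative assumptions , not the
# parameters used in the study .
import numpy as np

def nw_smooth(y, bandwidth, window=32):
    """nadaraya - watson estimate with a gaussian kernel , computed over a
    local window around each sample ."""
    n = y.size
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        d = np.arange(lo, hi) - i
        w = np.exp(-0.5 * (d / bandwidth) ** 2)
        out[i] = np.dot(w, y[lo:hi]) / w.sum()
    return out

def forward_brt(signal, num_scales=4):
    """decompose a signal into residual signals r_1 .. r_J whose sum
    reconstructs the signal exactly ( coarsest component kept as r_J ) ."""
    residuals, current = [], np.asarray(signal, dtype=float)
    for j in range(1, num_scales):
        smooth = nw_smooth(current, bandwidth=2.0 ** j)
        residuals.append(current - smooth)   # fine - scale residual at scale j
        current = smooth                      # recurse on the coarser signal
    residuals.append(current)                 # last residual = coarsest signal
    return residuals

def inverse_brt(residuals):
    """inverse transform : simply the sum of the residual signals ."""
    return np.sum(residuals, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.3 * rng.standard_normal(256)
    r = forward_brt(x, num_scales=5)
    print("reconstruction error :", np.max(np.abs(inverse_brt(r) - x)))
```

by construction the residual signals sum back to the original signal , so the inverse transform is exact , and per - scale processing ( e.g. , the thresholding described next ) can be applied to the residuals before the inverse is taken .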
the noise suppression method chosen for this study is based on the idea that the observed noisy signal is formed as a summation of the desired noise - free signal and an additive noise source . suppose that we have the true noise - free signal and we decompose it using the brt into a series of residual signals , where each of the residual signals characterizes only information from the noise - free signal at a particular scale . much of the information at each scale that characterizes the noise - free signal would be concentrated within only a few of the locations in each of the residual signals . what this means is that much of the information content related to is primarily concentrated within just a few locations at each scale . if we were to decompose the noisy signal using the brt in a similar fashion , the locations of the residual signal at each scale that would otherwise have negligible information content associated with would now have low but not negligible information content that characterizes the noise source . motivated by this , we employ a noise thresholding strategy where we only keep information from locations with information content greater than the noise information content level at each scale . [ algorithm 1 : forward brt via kernel regression ; input : a signal ; output : residual signals ; at each scale , compute the conditional expectation via kernel regression with eq . ( [ kj ] ) and the residual via eq . ( [ estj ] ) ] the noise thresholding strategy employed in this study can be described as follows . we first perform the forward brt on the signal to obtain residual signals characterizing signal information at different scales ( ) . since the noise information content level at each scale is not known , we employ the seminal noise level estimation method proposed by donoho to determine the noise threshold at each scale , which can be described as follows . at each scale , we estimate the noise threshold using the noise - adaptive scale estimate , which can be expressed by : where is the median absolute deviation and is the normal inverse cumulative distribution function . based on , noise thresholding is achieved to obtain the noise - suppressed residual signal by : finally , the inverse brt ( eq . [ inversetransform ] ) is performed on the set of noise - suppressed residual signals at the different scales ( ) to produce the noise - suppressed signal . a step - by - step summary of the noise suppression method using the brt is shown in algorithm [ alg3 ] . [ algorithm 2 : inverse brt ; input : residual signals ; output : a signal ; sum the residual signals over all scales ] [ algorithm 3 : brt - based noise suppression ; input : a noisy signal ; output : a noise - suppressed signal ; perform the forward brt ( algorithm [ alg1 ] ) , compute the noise threshold at each scale ( eq . [ noiseest2 ] ) , compute the noise - suppressed residual signals via thresholding ( eq . [ eqn-2 ] ) , and perform the inverse brt ( algorithm [ alg2 ] ) ] in this study , to illustrate the feasibility of utilizing the brt for processing physiological signals , we performed an snr analysis using electrocardiography ( ecg ) signals to study the performance of the brt for the task of noise suppression . ecg signals from the mit - bih normal sinus rhythm database were used in this study to perform the snr analysis . this database consists of 18 ecg recordings ( recorded at a sampling rate of 128 hz ) of subjects conducted at the arrhythmia laboratory in the beth israel deaconess medical center . the subjects were found to have no significant arrhythmias . a total of 18 low - noise segments of 10 seconds were extracted , one from each recording , based on visual inspection to act as the baseline signals for evaluation . to study noise suppression performance at different snr levels , each of the 18 baseline signals was contaminated by white gaussian noise to produce noisy signals with snr ranging from 12 db to 2.5 db ( with 20 different noisy signals at each snr ) , resulting in 3960 different signal perturbations used in the analysis . for comparison purposes , wavelet denoising methods with the following shrinkage rules were also used : i ) stein s unbiased risk ( sure ) , ii ) heuristic sure ( hsure ) , iii ) universal ( uni ) , and iv ) minimax ( minimax ) . each of these methods uses the noise threshold and shrinkage rule specified in the corresponding original work . to quantitatively evaluate noise suppression performance , we compute the snr improvement as follows : where , , and are the noisy , baseline , and noise - suppressed signals obtained using a noise suppression method , respectively . to study the effect of the number of scales on noise suppression performance , the same snr analysis is performed as described above for . the brt is implemented in matlab ( the mathworks , inc . ) , with the nonparametric conditional expectation estimates implemented in c++ and compiled as a dynamically linked matlab executable ( mex ) to improve computational speed . the only free parameters of the implemented realization of the brt are the standard deviations used to model the residual signals ( e.g. , ) , the number of scales , and the time window size , which can be adjusted by the user to find a tradeoff between noise suppression quality and computational cost . for the snr analysis of ecg signals , is set equally for all scales to the standard deviation of for simplicity , is set at 6 scales , and the time window size is set to . for this configuration , the current implemented realization of the brt can process a 1028-sample signal in second on an intel(r ) core(tm ) i5 - 3317u cpu at 1.70ghz . for the wavelet - based methods tested ( sure , hsure , uni , and minimax ) , as implemented in matlab ( the mathworks , inc .
) , soft thresholding with the coiflet3 mother wavelet at 6 scales and single level rescaling was used as it was found to provide superior results for ecg noise suppression .each of the methods uses their corresponding noise threshold and shrinkage rules as specified in the original works .to illustrate the feasibility of utilizing the brt for processing physiological signals , such as for the task of noise suppression , we first performed the brt on two test signals : i ) a noisy periodic test signal , and ii ) a noisy piece - wise regular test signal .the multi - scale signal decomposition using the brt on a noisy periodic test signal is shown in fig .[ fig2 ] . here, a baseline test signal ( fig . [ fig2]a ) is contaminated by a zero - mean gaussian noise process to produce a noisy signal ( fig .[ fig2]b ) and then decomposed using the brt at different scales ( figs .[ fig2]c - h ) .it can be observed that the noise process contaminating the signal is well characterized in the decompositions at the lower ( finer ) scales ( scales 1 to 3 ) , while the structural characteristics of the test signal is well characterized in the decompositions at the higher ( coarser ) scales ( scales 4 to 6 ) . the multi - scale signal decomposition using the brt on a noisy piece - wise regular test signal ( generated using ) is shown in fig . [ fig3 ] .as with the previous example , a baseline test signal ( fig . [ fig3]a ) is contaminated by a zero - mean gaussian noise process to produce a noisy signal ( fig .[ fig3]b ) and then decomposed using the brt at different scales ( figs .[ fig3]c - h ) .it can be observed that , as with the periodic signal example , the noise process contaminating the signal is well characterized in the decompositions at the lower ( finer ) scales ( scales 1 to 2 ) , while the structural characteristics of the test signal is well characterized in the decompositions at the higher ( coarser ) scales ( scales 3 to 6 ) .furthermore , more noticeable here than in the periodic signal example , it can be seen that that the decomposition at each scale exhibits good signal structural localization .therefore , given the ability of the brt to decouple the noise process from the true signal into different scales , as illustrated in both the periodic and piece - wise regular test signals , the brt has the potential to be useful for performing noise suppression on signals while preserving inherent signal characteristics . in this study ,to illustrate the feasibility of utilizing the brt for processing physiological signals , we introduced a simple thresholding approach to noise suppression using the brt for illustrative purposes ( * see section [ noisesuppression ] * ) .we then performed a quantitative snr analysis using electrocardiography ( ecg ) signals from the mit - bih normal sinus rhythm database to study the performance of the brt for the task of noise suppression , where the snr improvement ( * see section [ noisesuppression ] for formulation * ) . a plot of the mean snr improvement of the tested methods vs. 
the different input snrs ranging from 12 db to 2.5 db is shown in fig .it can be observed that the noise - suppression method using the brt provided strong snr improvements across all snrs , comparable to sure and higher than the other 3 tested methods .it can also be observed that the uni method consistently achieved snr improvements below 0 db .this is primarily due to the tendency to overestimate the noise level , resulting in signal oversmoothing and thus producing a noise - suppressed signal that is less similar to the baseline signal than the actual noisy signal .it can also be observed that the snr improvement increases as the snr of the input noisy signal decreases , which indicates that greater benefits are obtained through the use of noise suppression methods in low signal snr scenarios . to study the effect of the number of scales on noise suppression performance , a plot of the mean snr improvement vs. the different input snrs ranging from 12 db to 2.5 db for the method using the brt with a range of different number of scales shown in fig .it can be observed that a significant gain in snr improvement exists going from to , with smaller snr improvement gains from all the way to .furthermore , it can be observed that the snr improvement gains from increasing the number of scales become smaller and smaller as the input snr decreases , with the snr improvement for to being approximately the same when the input snr is 2.5 db .therefore , this indicates that the effect of selecting the number of scales on noise suppression performance can be significant and thus a balance between snr improvement and the computational complexity of the brt ( which grows linearly with the number of scales ) is necessary , particularly given the snr of the noisy signal . to study the effect of the standard deviation ( sd ) used for on noise suppression performance , a plot of the mean snr improvement vs. the different input snrs ranging from 12 db to 2.5 db for the method using the brt with a range of different multiples of sd used for shown in fig .it can be observed that a significant gain in snr improvement exists going from to , with a significant drop in snr improvements going from to .furthermore , it can be observed that there are noticeable snr improvement gains going from to that grows larger as the input snr decreases .therefore , this indicates that the effect of selecting on noise suppression performance can be significant , and careful selection may be important when dealing with different types of signals .for the signals tested here , it was found that provided the strongest results .typical results of noise - suppressed signals produced by the method using the brt are shown in fig .[ fig5]b and fig . [ fig5]e ( corresponding to two different 12 db noisy input signals shown in fig . 
[ fig5]a and fig .[ fig5]d , respectively ) .visually , it can be seen that the brt was effectively used to produce signals with significantly reduced noise artifacts while preserving signal characteristics .results in this study show that it is feasible to utilize the brt for processing physiological signals for tasks such as noise suppression .in this study , the feasibility of employing a bayesian - based approach to multi - scale signal decomposition introduced here as the bayesian residual transform for use in the processing of physiological signals .the bayesian residual transform decomposes a signal into a set of residual signals , each characterizing information from the signal at different scales and following a particular probability distribution .this allows information at different scales to be decoupled for the purpose of signal analysis and , for the purpose of noise suppression , allows for information pertaining to the noise process contaminating the signal to be separated from the rest of the signal characteristics .this trait is important for performing noise suppression on signals while preserving inherent signal characteristics .snr analysis using a set of ecg signals from the mit - bih normal sinus rhythm database at different noise levels demonstrated that it is feasible to utilize the brt for processing physiological signals for tasks such as noise suppression .given the promising results , we aim in the future to investigate alternative adaptive thresholding schemes for the task of noise suppression in physiological signals characterized by nonstationary noise , so that one can better adapt to the nonstationary noise statistics embedded at different scales . moving beyond low - level signal processing tasks such as noise suppression, we aim with our future work to investigate and devise methods for multi - scale analysis of a signal using the bayesian residual transform , which could in turn lead to improved features for signal classification .finally , we aim to investigate the extension and generalization of the bayesian residual transform for dealing with high - dimensional physiological signals such as vectorcardiographs ( vcg ) , and dealing with high - dimensional medical imaging signals from systems such as multiplexed optical high - coherence interferometry , optical coherence tomography , dermatological imaging , diffusion weighted magnetic resonance imaging ( dwi ) , microscopy , dynamic contrast enhanced mri ( dce - mri ) , and correlated diffusion imaging .this work was supported by the natural sciences and engineering research council of canada , canada research chairs program , and the ontario ministry of research and innovation .kestler , m. haschka , w. kratz , f. schwenker , g. palm , v. hombach , and m. hoher , denoising of high - resolution ecg - signals by combining the discrete wavelet transform with the wiener filter .conf . computers . cardiology _ * 1 , * 233 - 236 ( 1998 ) . s.a .chouakri , f. bereksi - reguig , s. ahmaidi , and o. fokapu , wavelet denoising of the electrocardiogram signal based on the corrupted noise estimation .computers in cardiology _ * 32 , * 1021 - 1024 ( 2005 ) . a.l .goldberger , l. amaral , l. glass , j. hausdorff , p. ivanov , r. mark , j. mietus , g. moody , c. peng , and h. stanley , physiobank , physiotoolkit , and physionet : components of a new research resource for complex physiologic signals ._ circulation _ * 101 , * e215-e220 ( 2000 ) .m. akhbari , m. shamsollahi , c. jutten , and b. 
coppa , ecg denoising using angular velocity as a state and an observation in an extended kalman filter framework .ieee conf .society _ * 1 , * 2897 - 2900 ( 2012 ) . j. glaister , a. wong , and j. glaister , `` msim : multistage illumination modeling of dermatological photographs for illumination - corrected skin lesion analysis , '' _ ieee transactions on biomedical engineering _ vol .60 , no . 7 , pp .1873 - 1883 , 2013 .m. j. shafiee , s. haider , a. wong , d. lui , a. cameron , a. modhafar , p. fieguth and m. haider , `` apparent ultra - high b - value diffusion - weighted image reconstruction via hidden conditional random fields , '' ieee transactions on medical imaging , vol .5 , pp . 1111 - 1124 , 2015 .a. wong , f. kazemzadeh , c. jin , and x. wang , `` bayesian - based aberration correction and numerical diffraction for improved lensfree on - chip microscopy of biological specimens , '' optics letters vol .10 , no . 10 , pp .2233 - 2236 , 2015 .alexander wong ( m 05 ) received the b.a.sc .degree in computer engineering from the university of waterloo , waterloo , on , canada , in 2005 , the m.a.sc .degree in electrical and computer engineering from the university of waterloo , waterloo , on , canada , in 2007 , and the ph.d .degree in systems design engineering from the university of waterloo , on , canada , in 2010 .he is currently the canada research chair in medical imaging systems , co - director of the vision and image processing research group , and an assistant professor in the department of systems design engineering , university of waterloo , waterloo , canada .he has published refereed journal and conference papers , as well as patents , in various fields such as computer vision , graphics , image processing , multimedia systems , and wireless communications .his research interests revolve around imaging , image processing , computer vision , pattern recognition , and cognitive radio networks , with a focus on integrative biomedical imaging systems design , probabilistic graphical models , biomedical and remote sensing image processing and analysis such as image registration , image denoising and reconstruction , image super - resolution , image segmentation , tracking , and image and video coding and transmission .wong has received two outstanding performance awards , an engineering research excellence award , an early researcher award from the ministry of economic development and innovation , two best paper awards by the canadian image processing and pattern recognition society ( cipprs ) , a distinguished paper award from society for information display , and the alumni gold medal .xiao yu wang received the m.a.sc .degree in electrical engineering from concordia university , montreal , canada , in 2006 , and the ph.d . degree in electrical and computer engineering from the university of waterloo , on , canada , in 2011 .she is currently an adjunct assistant professor in the department of systems design engineering , university of waterloo , waterloo , canada .her research interests include stochastic graphical learning and modeling for large - scale networks and data mining and visualization , affective computing , image processing , computer vision , signal processing , femtocell networking , network control theory , wideband spectrum sensing , and dynamic spectrum access .her current focus is on efficient high - resolution , remote spatial biosignals measurements using video imaging for affective computing .
|
multi - scale decomposition has been an invaluable tool for the processing of physiological signals . much of the focus in multi - scale decomposition for processing such signals has been on scale - space theory and wavelet transforms . in this study , we take a different perspective on multi - scale decomposition by investigating the feasibility of utilizing a bayesian - based method for multi - scale signal decomposition called the bayesian residual transform ( brt ) for the purpose of physiological signal processing . in the brt , a signal is modeled as the summation of residual signals , each characterizing information from the signal at different scales . a deep cascading framework is introduced as a realization of the brt . signal - to - noise ratio ( snr ) analysis using electrocardiography ( ecg ) signals was used to illustrate the feasibility of using the brt for suppressing noise in physiological signals . results in this study show that it is feasible to utilize the brt for processing physiological signals for tasks such as noise suppression . a bayesian residual transform for signal processing . _ keywords : _ signal processing , physiological signals , multi - scale , noise suppression , electrocardiography
|
a common way to route data in communication networks is shortest path routing .routing schemes using shortest path are _ single - path _ ; they route all packets of a session through the same dedicated path .although single - path schemes thrive because of their simplicity , they are in general throughput suboptimal .maximizing network throughput requires _ multi - path routing _, where the different paths are used to provide diversity . when the network conditions are time - varying or when the session demands fluctuate unpredictably , it is required to balance the traffic over the available paths using a _ dynamic routing _ scheme which adapts to changes in an online fashion . in the past ,schemes such as _ backpressure _ have been proposed to discover multiple paths dynamically and mitigate the effects of network variability .although backpressure is desirable in many applications , its practicality is limited by the fact that it requires all nodes in the network to make online routing decisions .often it is the case that some network nodes have limited capabilities and can not perform such actions . _ in this paper we study dynamic routing when decisions can be made only at a subset of nodes , while the rest nodes use fixed single - path routing rules . _network overlays are frequently used to deploy new communication architectures in legacy networks . to accomplish this ,messages from the new technology are encapsulated in the legacy format , allowing the two methods to coexist in the legacy network .nodes equipped with the new technology are then connected in a conceptual network overlay , fig .[ fig : intro ] .prior works have considered the use of this methodology to introduce new routing capabilities in the internet .for example , content providers use overlays to balance the traffic across different internet paths and improve resilience and end - to - end performance . in our workwe use a network overlay to introduce dynamic routing to a legacy network which operates based on single - path routing .nodes that implement the overlay layer are called _ routers _ and are able to make online routing decisions , bifurcating traffic along different paths .the rest nodes , called _forwarders _ , rely on a single - path routing protocol which is available to the physical network , see fig .[ fig : intro ] .there are many applications of our overlay routing model . for networks with heterogeneous technologies ,the overlay routers correspond to devices with extended capabilities , while the forwarders correspond to less capable devices .for example , to introduce dynamic routing in a network running a legacy routing protocol , it is possible to use software defined networks to install dynamic routing functions on a subset of devices ( the routers ) . in the paradigm of multi - owned networks ,the forwarders are devices where the vendor has no administrative rights .for example consider a network that uses leased satellite links , where the forwarding rules may be pre - specified by the lease . in such heterogeneous scenarios, maximizing throughput by controlling only a fraction of nodes introduces a tremendous degree of flexibility . in the physical network the set of routers with .also , denote the throughput region of this network with .then , is the throughput of the network when all nodes are routers .we call this the full throughput of , and it can be achieved if all nodes run the backpressure policy . 
also , is the throughput of a network consisting only of forwarders , which is equivalent to single - path throughput . since increasing the number of routers increases path diversity , we generally have . prior work studies the necessary and sufficient conditions for the router set to guarantee full throughput , i.e. , . the results of the study show that using a small percentage of routers ( ) is sufficient for full throughput in power - law random graphs , an accurate model of the internet . although characterizes the throughput region , a dynamic routing policy that achieves this performance is still unknown . for example , in the same work it is showcased that backpressure operating in the overlay is suboptimal . _ in this work we fill this gap under a specific topological assumption explained in detail later . we study dynamic routing in the overlay network of routers and propose a control policy that achieves . our work is the first to analytically study such a heterogeneous dynamic routing policy and prove its optimality . _ we consider a physical network where the nodes are partitioned into routers and forwarders . the physical network has installed single - path routing rules , which we capture as follows . every router is assigned an acyclic path to every other router . figure [ fig : model ] ( left ) shows with bold arrows both paths assigned to router , i.e. , , and . let be the set of all such paths in the network . [ figure caption ( fig : model ) : ( left ) we indicate with bold arrows the shortest paths available to by the single - path routing scheme of the physical network . ( right ) the equivalent overlay network of routers and tunnels . ] we introduce the concept of _ tunnels_ . the tunnel corresponds to a path with end - points routers and intermediate nodes forwarders . we then define the overlay network consisting of routers and tunnels . figure [ fig : model ] ( right ) depicts the overlay network for the physical network on the left , assuming shortest path routing is used . in this work we study the case of _ non - overlapping tunnels_ . let be the set of all physical links of tunnel with the exception of the first input link . an overlay network satisfies the non - overlapping tunnels condition if for any two tunnels we have . whether the condition is satisfied or not depends on the network topology , the set of routers , and the set of paths , which altogether determine , for all . the network of figure [ fig : model ] satisfies the non - overlapping tunnels condition since each of the links belongs to exactly one tunnel . on the other hand , in the network of figure [ fig : model2 ] link belongs to two tunnels , hence the condition is not satisfied .
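a small sketch of checking the non - overlapping tunnels condition is given below : each tunnel is represented as a sequence of directed physical links and , per the definition above , the first input link of each tunnel is excluded before testing for shared links . the example tunnels are hypothetical and do not correspond to the figures .

```python
# a small sketch of checking the non - overlapping tunnels condition .
# each tunnel is a sequence of directed physical links ; per the definition ,
# the first input link of each tunnel is excluded before testing for overlap .
# the example tunnels below are hypothetical , not the figures ' topologies .
from itertools import combinations

def links_excluding_input(tunnel_path):
    """return the set of links of a tunnel except its first ( input ) link ."""
    return set(tunnel_path[1:])

def non_overlapping(tunnels):
    """true iff no physical link ( beyond the input links ) is shared
    by two different tunnels ."""
    for a, b in combinations(tunnels, 2):
        if links_excluding_input(a) & links_excluding_input(b):
            return False
    return True

# tunnels given as lists of directed links ( node pairs )
t1 = [("r1", "f1"), ("f1", "f2"), ("f2", "r2")]
t2 = [("r1", "f3"), ("f3", "r3")]
t3 = [("r3", "f2"), ("f2", "r2")]          # shares link ( f2 , r2 ) with t1

print(non_overlapping([t1, t2]))           # True
print(non_overlapping([t1, t2, t3]))       # False
```

when the check fails , the analysis that follows no longer applies directly , which is the case treated heuristically in the simulation section .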
when tunnels overlap , packets belonging to different tunnels compete for service at the forwarders , which further complicates the analysis . our analytical results focus exclusively on the non - overlapping tunnels case , which still constitutes an interesting and difficult problem . however , in the simulation section we heuristically extend our proposed policy to apply to general networks with overlapping tunnels and showcase that the extended policy has near - optimal performance . the overlay network admits a set of sessions , where each session has a unique router destination , but possibly multiple router sources . time is slotted ; at the end of time slot , packets of session arrive exogenously at router , where is a positive constant ( exogenous arrivals are defined at overlay router nodes ) . the arrivals are i.i.d . over slots , independent across sessions and sources , with mean . for every tunnel , a routing policy chooses the routing function in slot , which determines _ the number of session packets _ to be routed from router into the tunnel . additionally , we denote with the actual number of session packets that exit the tunnel in slot . for a visual association of and to the tunnel links see figure [ fig : functions ] . note that is decided by router while is uncontrollable . [ figure ( fig : functions ) : the routing function at the tunnel input and the tunnel output on the tunnel links ] let the sets represent the incoming and outgoing neighbors of router on . packets of session are stored at router in a _ router queue _ . its backlog evolves according to the following equation where we use since there might not be enough packets to transmit . on tunnel we collect all packets into one _ tunnel queue _ whose evolution satisfies the packets that actually arrive at might be less than , hence the inequality ( [ eq : queuefij ] ) . we remark that is the total number of packets in flight on the tunnel . physically these packets are stored at different forwarders along the tunnel . we only keep track of the sum of these physical backlogs since , as we will show shortly , this is sufficient to achieve maximum throughput . equation ( [ eq : qevol ] ) above assumes that all incoming traffic at router arrives either from tunnels , or exogenously . it is possible , however , to have an incoming neighbor router such that is a physical link , a case we purposely omitted in order to avoid further complexity in the exposition . the optimal policy for this case can be obtained from our proposed policy by setting the corresponding tunnel queue backlog to zero , . we assume that inside tunnels packets are forwarded in a _ work - conserving _ fashion , i.e. , a forwarder does not idle unless there is nothing to send . due to work - conservation and the assumption of non - overlapping tunnels , a tunnel with `` sufficiently many '' packets has instantaneous output equal to its bottleneck capacity . denote by the number of forwarders associated with tunnel . let be the greatest capacity among all physical links associated with tunnel and the smallest , also let . [ lem : leaky ] under any control policy , suppose that in time slot the total tunnel backlog satisfies , for some , where is defined in . the instantaneous output of the tunnel satisfies the proof is provided in the appendix [ app : lem ] . lemma [ lem : leaky ] is a path - wise statement saying that the tunnel output is equal to the tunnel bottleneck capacity in every time slot that the tunnel backlog exceeds . notably , we have not yet discussed how the forwarders choose to prioritize packets from different sessions .
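to make the statement of lemma [ lem : leaky ] concrete , the following toy simulation runs a work - conserving tandem of tunnel links and confirms that , once backlog accumulates , the long - run per - slot output matches the bottleneck capacity ; the capacities and the arrival pattern are made - up illustrative values , and the per - session scheduling inside the tunnel does not affect the property being shown .

```python
# a toy simulation illustrating the property behind lemma [ lem : leaky ] :
# once a work - conserving tandem of forwarders holds enough packets , its
# per - slot output approaches the bottleneck capacity . capacities and the
# arrival pattern below are made - up illustrative values .
import random

random.seed(0)
capacities = [3, 1, 2]          # per - slot capacities of the tunnel links
queues = [0] * len(capacities)  # backlog in front of each link

slots, total_out = 200, 0
for t in range(slots):
    arrivals = random.choice([0, 2, 3])   # packets injected at the tunnel input
    queues[0] += arrivals
    # serve links from the last hop backwards so packets move one hop per slot ;
    # each link is work - conserving : it sends min( backlog , capacity )
    for i in reversed(range(len(capacities))):
        sent = min(queues[i], capacities[i])
        queues[i] -= sent
        if i + 1 < len(queues):
            queues[i + 1] += sent
        else:
            total_out += sent

print("bottleneck capacity       :", min(capacities))
print("long - run output per slot :", total_out / slots)
```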
based on lemma [ lem : leaky ] and the results that follow , we will establish that independent of the choice of session scheduling policy , there exists a routing policy that maximizes throughput .furthermore , we demonstrate by simulations that different forwarding scheduling policies result in the same average delay performance under our proposed routing .hence , in this paper forwarders are allowed to use any work - conserving session scheduling , such as fifo , round robin or even strict priorities among sessions .a choice for the routing function is considered permissible if it satisfies in every slot the corresponding capacity constraint , where denotes the capacity of the input physical link of tunnel , see fig .[ fig : functions ] . in every time slot, a control policy determines the routing functions at every router .let be the class of all permissible control policies , i.e. , the policies whose sequence of decisions consists of permissible routing functions .we want to keep the backlogs small in order to guarantee that the throughput is equal to the arrivals . to keep track of thiswe define the stability criterion adopted from .a queue with backlog is stable under policy if }<\infty.\ ] ] the overlay network is stable if all router and tunnel queues are stable .the _ throughput region _ of class is defined to be ( the closure of ) the set of for which there exists a policy such that the system is stable . avoiding technical jargon, the throughput region includes all achievable throughputs when implementing dynamic routing in the overlay .recall that throughput depends on the actual selection of routers , and that for it may be the case that the achievable throughput may be less than the full throughput of , i.e. , .therefore it is important to clarify that in this work we assume that is fixed and we seek to find a policy that is stable for any , i.e. , a policy that is _ maximally stable_. such a policy is also called in the literature `` throughput optimal '' .the throughput region can be characterized as the closure of the set of matrices for which there exist nonnegative flow variables such that where ( [ eq : region1 ] ) are flow conservation inequalities at routers , ( [ eq : region2 ] ) are capacity constraints on tunnels , and recall that is the bottleneck capacity in the tunnel .we write note , that the conditions for the stability region are the same with the conditions for full throughput , with the difference that the flow variables are defined on the network of routers instead of .indeed the proof that ( [ eq : region1])-([eq : region2 ] ) are necessary and sufficient for stability may be obtained by considering a virtual network where every tunnel is replaced by a virtual link . controlling this system in a dynamic fashion amounts to finding a routing policy which stabilizes the system for any ._ finding such a policy in the overlay differs significantly from the case of a physical network _ , since physical links support immediate transmissions while overlay links are work - conserving tandem queues which induce queueing delays .as discussed in , using backpressure in the overlay may result in poor throughput performance . in this sectionwe propose the threshold - based backpressure ( ) policy , a distributed policy which performs online decisions in the overlay . 
is designed to operate the tunnel backlogs close to a threshold .this is a delicate balance whereby the tunnel output works efficiently ( by lemma [ lem : leaky ] ) while at the same time the number of packets in the tunnel are upper bounded .consider the threshold where is defined in ( [ eq : t0 ] ) and is the capacity of input physical link of tunnel and thus also the maximum increase of the tunnel backlog in one slot .define the condition : the reason we use this threshold is that if ( [ eq : c1 ] ) is false , it follows that both and , and hence we can apply lemma 1 to both slots and .this is used in the proof of the main result . ' '' '' * threshold - based backpressure ( ) policy * ' '' '' at each time slot and tunnel , let be a session that maximizes the differential backlog between routers , ties resolved arbitrarily . then route into that tunnel and .recall , that denotes the capacity of input physical link of tunnel ., then we fill the transmissions with dummy non - informative packets . ] ' '' '' is similar to applying backpressure in the overlay , with the striking difference that _ no packet is transmitted to a tunnel _ if condition ( [ eq : c1 ] ) is not satisfied .therefore the total tunnel backlog is limited to at most plus the maximum number of packets that may enter the tunnel in one slot .formally we have [ lem : detfb ] assume that the system starts empty and is operated under .then the tunnel backlogs are uniformly bounded above by follows from and .this shows that our policy does not allow the tunnel backlogs to grow beyond .to show that our policy efficiently routes the packets is much more involved .it is included in the proof of the following main result .[ th : optimality][maximal stability of ] [ th : opti ] consider an overlay network where underlay forwarding nodes use any work - conserving policy to schedule packets over predetermined paths , and the tunnels are non - overlapping .the policy is maximally stable : the proof is is based on a novel -slot lyapunov drift analysis and it is given in the appendix [ app : th ] . is a distributed policy since it utilizes only local queue information and the capacity of the incident links , while it is agnostic to arrivals , or capacities of remote links , e.g. note that the decision does not depend on the capacity of the bottleneck link .a very simple distributed protocol can be used to allow overlay nodes to learn the tunnel backlogs . specifically can be estimated at node using an acknowledgement scheme , whereby periodically informs of how many packets have been received so far . in practice ,the router nodes obtain a delayed estimate .however , using the concepts in - p.85 , it is possible to show that such estimates do not hurt the efficiency of the scheme .in this section we perform extensive simulations to : * showcase the maximal stability of and compare its throughput performance to other routing policies , * examine the impact of different forwarding scheduling policies ( fifo , hlpss , strict priority , lqf ) on throughput and delay of , * demonstrate that has good delay performance , and * study the extension of to the case of overlapping tunnels . 
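before turning to these comparisons , a minimal sketch of the per - slot decision made by the proposed policy at a single router and tunnel may help make it concrete . the gating condition used here ( inject only while the tunnel backlog is at most the threshold ) is a simplified stand - in for condition ( [ eq : c1 ] ) , whose exact form involves quantities not reproduced here , and the backlog values in the example are hypothetical .

```python
# a minimal sketch of the per - slot tbp decision at one router , as described
# above . the threshold condition used here ( inject only while the tunnel
# backlog is at most the threshold ) is a simplified stand - in for condition
# ( [ eq : c1 ] ) , whose exact form involves quantities elided in the text .
def tbp_route(router_backlogs, downstream_backlogs, tunnel_backlog,
              threshold, input_capacity):
    """return ( session , packets ) to inject into one tunnel in this slot .

    router_backlogs     : dict session -> backlog at this router
    downstream_backlogs : dict session -> backlog at the tunnel 's end router
    tunnel_backlog      : packets currently in flight inside the tunnel
    threshold           : the tbp threshold for this tunnel
    input_capacity      : capacity of the tunnel 's input physical link
    """
    # session with the maximum differential backlog ( ties broken arbitrarily )
    best = max(router_backlogs,
               key=lambda c: router_backlogs[c] - downstream_backlogs[c])
    diff = router_backlogs[best] - downstream_backlogs[best]

    # transmit nothing if the differential backlog is not positive or the
    # tunnel is already operating at / above its threshold
    if diff <= 0 or tunnel_backlog > threshold:
        return best, 0
    return best, min(input_capacity, router_backlogs[best])

# example slot : session "s2" has the larger differential backlog and the
# tunnel is below threshold , so a batch of s2 packets is injected
print(tbp_route({"s1": 4, "s2": 9}, {"s1": 3, "s2": 1},
                tunnel_backlog=2, threshold=6, input_capacity=2))
```

the policy in the text always injects a full input - link batch ( padding with dummy packets if needed ) ; the sketch instead caps the injection by the available backlog , which does not change the idea being illustrated .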
first we present dynamic routing policies from the literature against which we will compare . * backpressure in the overlay ( ) : * for every tunnel define ties resolved arbitrarily . then choose and this corresponds to backpressure applied only to routers , which is admissible in our system , . * backpressure in the physical network ( ) : * for every physical link define ties resolved arbitrarily . then choose and this is the classical backpressure from , applied to all nodes in the network , and thus it is not admissible in the overlay , , whenever . since this policy achieves the full throughput , we use it as a throughput benchmark . * backpressure enhanced with shortest paths bias ( ) : * for every node - session pair define the hop count from to the destination of as . for every physical link define ties resolved arbitrarily . then choose according to ( [ eq : servbp ] ) . this policy was proposed by to reduce delays . when the congestion is small , the shortest path bias introduced by the hop count difference leads the packets directly to the destination without going through cycles or longer paths . such a policy requires control at every node , and thus it is not admissible in the overlay , , whenever . since , however , it is known to achieve and to outperform in terms of delay , it is useful for throughput and delay comparisons . consider the network of figure [ fig : maxsta ] ( left ) , and define two sessions sourced at ; session 1 destined to and session 2 to . we assume that and all the other link capacities are unit as shown in the figure . we choose in this way to make the routing decisions of session 1 more difficult . we show the full throughput region achieved by which however are not admissible in the overlay . then we experiment with and we also show the throughput of plain shortest path routing . for , according to the example settings and ( [ eq : thres ] ) it is ; we choose . since the example satisfies the non - overlapping tunnel condition , by theorem [ th : opti ] our policy achieves . this is verified in the simulations , see figure [ fig : maxsta ] ( right ) . from the figure we can conclude that for this example we have , although . this is consistent with the findings of . from the same figure we see that both backpressure in the overlay and shortest path achieve only a fraction of , and hence they are not maximally stable . for , we have loss of throughput when both sessions compete for traffic , in which case fails to consider congestion information from the tunnel and therefore allocates this tunnel s resources wrongly to the two sessions . for shortest path , it is clear that each session uses only its own dedicated shortest path and hence the loss of throughput is due to no path diversity . [ figure : throughput comparison , including shortest path routing ] to understand why works , we examine a sample path evolution of this system under for the case where , which is one of the most challenging scenarios . for stability , session 1 must use its dedicated path , and send almost no traffic through tunnel . focusing on the tunnel , figure [ fig : samplepath1 ] shows the differential backlogs per session and the corresponding tunnel backlog for a sample path of the system evolution . in most time slots the tunnel is congested , which is indicated by high differential backlogs .
in such slots , the tunnel has more than one packet , which guarantees by lemma [ lem : leaky ] that it outputs packets at the highest possible rate , hence the tunnel is correctly utilized . recall that when the tunnel is full ( = 6 ) no new packets are inserted into the tunnel , preventing it from exceeding . observe that the differential backlog of session 2 always dominates the session 1 counterpart , and hence whenever the tunnel is again ready for a new packet insertion , session 2 will be prioritized for transmission according to ( [ eq : servf ] ) . therefore , the proportion of session 2 packets in this tunnel is close to 100% , which is the correct allocation of the tunnel resources to the sessions in this case . [ figure ( fig : samplepath1 ) : sample path of the individual differential backlogs and of the tunnel backlog over time ( slots ) . ] at every forwarder node there is a packet scheduling decision to be made , namely how many packets per session should be forwarded in the next slot . although by assumption we require the forwarding policy to be work - conserving , our results do not restrict the scheduling policy any further . in particular , our analysis only depends on and hence it is insensitive to the chosen discipline . here we simulate the operation of with different forwarding policies , in particular with first - in first - out ( fifo ) , head - of - line proportional processor sharing ( hlpps ) , strict priority and longest queue first ( lqf ) , where hlpps refers to serving sessions proportionally to their queue backlogs , and lqf refers to giving priority to the session with the longest queue ( a toy sketch of these disciplines is given below ) . figure [ fig : fwddiscipline ] shows sample path differences for several forwarding disciplines on the example of the previous section , while table [ delay_table ] compares the average delay performance for different arrival rates . independent of the discipline used , the average total number of packets in the system is approximately the same . therefore , while our theorem states that the forwarding policy does not affect throughput , the simulations additionally show that the delay is also essentially the same . [ figure ( fig : fwddiscipline ) : total backlog difference over time ( slots ) for different forwarding disciplines ( two panels ) . ] [ table ( delay_table ) : average delay performance of under different underlay forwarding policies . ] we now simulate the delay of different routing policies , comparing the performance of and of the overlay policies , as well as and , which are not admissible in the overlay . we experiment for , and we plot the average total backlogs in the system for two example networks shown to the left of each plot . in fig . [ fig : delay1 ] fails to detect congestion in the tunnel and consequently its delay increases for . we observe that outperforms and , and performs similarly to . this relates to the avoidance of cycles at low loads by use of shortest paths , see . in particular , achieves this by means of the hop count bias , while using the tunnels .
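the four forwarding disciplines compared above admit a compact description . the toy allocator below ( an illustration only , with hypothetical names ) returns , for one forwarder and one slot , how many packets of each session are served ; only the work - conserving property matters for the analysis , which is the point the simulations confirm .

```python
def allocate(queues, capacity, discipline="fifo", hol_order=None):
    """queues: {session: backlog}; capacity: packets servable this slot."""
    served, left = {c: 0 for c in queues}, capacity
    if discipline == "hlpps":
        # head-of-line proportional processor sharing: serve sessions in
        # proportion to their backlogs (integer rounding kept simple here,
        # so a small residue of capacity may be left unused in this toy)
        total = sum(queues.values()) or 1
        return {c: min(q, (capacity * q) // total) for c, q in queues.items()}
    if discipline == "lqf":          # longest queue first
        order = sorted(queues, key=queues.get, reverse=True)
    elif discipline == "priority":   # strict priority on a fixed session order
        order = sorted(queues)
    else:                            # fifo: serve in head-of-line arrival order
        order = hol_order or list(queues)
    for c in order:
        served[c] = min(queues[c], left)
        left -= served[c]
        if left == 0:
            break
    return served
```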
_a remarkable fact is that applies control only at the overlay nodes and yet outperforms , which controls all physical nodes in the network , in terms of delay ._ in fig . [ fig : delay2 ] we study queues in tandem , in which case all policies have maximum throughput since there is a unique path through which all the packets travel . we choose this scenario to demonstrate another reason why has good delay performance . the delay of backpressure increases quadratically with the number of network nodes because it maintains equal backlog differences across all neighbors . in the case of , as well as of any other admissible overlay policy like , the backlogs increase with the number of routers . thus , when we obtain a delay gain by applying control only at the routers . [ fig : delay2 ] showcases exactly this delay gain that and have versus and . we conclude that has very good delay performance , which is attributed to two main reasons : 1 . when the traffic load is low , the majority of the packets follow shortest paths , and the number of packets going in cycles is significantly reduced . 2 . since there is no need for congestion feedback within the tunnels , the backlog buildup is not proportional to the number of network nodes but to the number of routers . [ figures ( fig : delay1 ) and ( fig : delay2 ) : average total backlog versus load for the compared policies on the two example networks . ] next we extend to networks with overlapping tunnels , see the example in fig . [ fig : overlapping ] ( left ) . in this context theorem [ th : opti ] does not apply and we have no guarantees that is maximally stable . the key to achieving maximum throughput is to correctly balance the ratio of traffic from each session injected into the overlapping tunnels . for the network to be stable with load , a policy needs to direct most of the traffic of session 1 through the dedicated link , or equivalently to allocate . since node is the destination of session 1 , and hence , we need to relate this routing decision to the congestion in the tunnel . to make this work , we introduce the following extension . instead of conditioning transmissions on the router differential backlog as in , we use the condition . intuitively , we expect a non - congested node to have a small backlog and thus to avoid sending packets over a congested tunnel . the new policy is called . it can be proven that is maximally stable for non - overlapping tunnels . although we do not have a proof for the case of overlapping tunnels , the simulation results show that by choosing to be large , achieves maximum throughput . ' '' '' * for overlapping tunnels * ' '' '' fix a to satisfy eq . ( [ eq : thres ] ) , and recall condition ( [ eq : c1 ] ) : in slot for tunnel let be a session that maximizes the differential backlog between routers , ties resolved arbitrarily . then route into the tunnel and . recall that denotes the capacity of the physical link that connects the router to the tunnel . ' '' '' figure [ fig : overlapping ] shows the results of an experiment where , , and we vary . achieves full throughput and similar delay to , doing strictly better than . to understand how works , consider the sample path evolution ( fig . [ fig : overlapping2 ] ) , where are shown . most of the time we have , thus by the choice of and the condition used in ( [ eq : servf2 ] ) , session 1 rarely gets the opportunity to transmit packets into the overlapping tunnels .
as increases , session 1 will get fewer and fewer opportunities , hence the behavior will approximate the optimal one . in fig . [ fig : overlapping2 ] ( right ) we plot the average total backlog for different values of . as increases , the performance at high loads improves . [ figure ( fig : overlapping ) : example network with overlapping tunnels ( left ) and average total backlog versus load ( right ) . figure ( fig : overlapping2 ) : sample path of the individual backlogs over time ( slots ) ( left ) and average total backlog versus load for = 2 , 5 , 10 , 25 ( right ) . ] in this paper we propose a backpressure extension which can be applied in overlay networks . from prior work , we know that if the overlay is designed wisely , it can match the throughput of the physical network . our contribution is to prove that the maximum overlay throughput can be achieved by means of dynamic routing . moreover , we show that our proposed scheme makes the best of both worlds : ( a ) it efficiently chooses the paths in an online fashion , adapting to network variability , and ( b ) it keeps the average delay small , avoiding the known inefficiencies of the legacy backpressure scheme . future work involves the mathematical analysis of the overlapping tunnels case and the consideration of wireless transmissions ; in both cases lemma [ lem : leaky ] does not hold , due to the correlation of routing decisions at routers with scheduling at forwarders . we would like to thank dr . chih - ping li and mr . matthew johnston for their helpful discussions and comments . d. andersen , h. balakrishnan , f. kaashoek , and r. morris . resilient overlay networks . in _ proc . acm sosp _ , oct . 2001 . maury bramson . convergence to equilibria for fluid models of head - of - the - line proportional processor sharing queueing networks . _ queueing systems _ , 23(1 - 4):1 - 26 , 1996 . l. bui , r. srikant , and a. stolyar . novel architectures and algorithms for delay reduction in back - pressure scheduling and routing . in _ proc . ieee infocom _ , april 2009 . l. r. ford and d. r. fulkerson . _ flows in networks _ . princeton university press , 1962 . l. georgiadis , m. neely , and l. tassiulas . resource allocation and cross - layer control in wireless networks . _ foundations and trends in networking _ , 1:1 - 147 , 2006 . n. m. jones , g. s. paschos , b. shrader , and e. modiano . an overlay architecture for throughput optimal multipath routing . in _ proc . of acm mobihoc _ , 2014 . m. j. neely . _ stochastic network optimization with application to communication and queueing systems _ . morgan & claypool , 2010 . michael j. neely , eytan modiano , and charles e. rohrs . dynamic power allocation and routing for time - varying wireless networks . _ ieee journal on selected areas in communications _ , 23:89 - 103 , 2005 . m. e. j. newman . _ networks : an introduction _ . oxford university press , inc . , new york , ny , usa , 2010 . g. s. paschos and e. modiano . dynamic routing in overlay networks . technical report , 2014 . l. l. peterson and b. s. davie . _ computer networks : a systems approach _ . morgan kaufmann publishers inc . , san francisco , ca , usa , 4th edition , 2007 . r. k. sitaraman , m. kasbekar , w. lichtenstein , and m. jain . overlay networks : an akamai perspective . john wiley & sons , 2014 . l. tassiulas and a. ephremides . stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks .
_ ieee transactions on automatic control _ , 37(12):1936 - 1948 , 1992 . under any control policy , suppose that in time slot the total tunnel backlog satisfies , for some , where is defined in . the instantaneous output of the tunnel satisfies . consider a tunnel which forwards packets , using an arbitrary work - conserving policy , over the path with underlay nodes . renumber the nodes in the path in the sequence they are visited by packets as , where refers to and to , hence . since the statement is inherently related to packet forwarding internally in the tunnel , we will introduce some notation . denote by the packets waiting at the node at slot , to be transmitted to the , along the tunnel ( the packets may belong to different sessions ) . clearly , it is . also , let be the actual number of session packets that leave this node in slot . for all , due to work - conservation we have , with denoting the capacity of the physical link connecting the nodes . hence , evolve as . first we establish that the instantaneous output of the tunnel cannot be larger than its bottleneck capacity , i.e. , . if the bottleneck link is the last link on , then ( [ eq : req ] ) follows immediately from . else , pick such that and suppose is the bottleneck link . then let us focus on the link . for its input we have , where above and in the remaining proofs we use parentheses to denote the expressions from which equalities and inequalities follow . for the link output , where . starting with the system empty , the backlog cannot grow larger than , since this is the maximum number of arriving packets in one slot and they are all served in the next slot . hence , it is also . by induction , the same is true for for any , and we get ( [ eq : req ] ) . the remaining proof is by contradiction . assume . consider the physical link with . using to understand ( [ eq : leakyback ] ) , note that if the rhs were false , by ( [ eq : fc1 ] ) we would have and thus by ( [ eq : indf ] ) also . [ figure ( fig : leaky ) : illustration of the backward recursion in time and space used in the proof of lemma ( lem : leaky ) . ] since by the premise we have , applying ( [ eq : fc1 ] ) we deduce , from which , applying ( [ eq : leakyback ] ) recursively , we roll back in time and space to obtain . since the maximum backlog increase at any node within one slot is , we roll forward in time to get . summing up over all forwarders we get a total tunnel backlog of at most $m_{ij}\,r_{ij}^{\min}+\tfrac{m_{ij}(m_{ij}-1)}{2}\,r_{ij}^{\max}$ , which by ( [ eq : t0 ] ) equals $t_{0}$ , and this contradicts the premise of the lemma . in order to prove that is maximally stable , we will pick an arbitrary arrival vector in the interior of and show that the system is stable . to prove stability we perform a -slot drift analysis and show that has a negative drift . our system state is described by the vector of queue lengths . by lemma [ lem : detfb ] , the tunnel backlogs are deterministically bounded under , and thus for the purposes of showing stability we choose the candidate quadratic lyapunov function $l(\mathbf{q}(t))=\sum_{i , c}\left[q_{i}^{c}(t)\right]^{2}$ . we will use the following shorthand notation . the -slot lyapunov drift under the policy is . from lemma [ lem : detfb ] we have for every sample path , and thus the -slot lyapunov drift for bp - t becomes . to prove the stability of , it suffices to show that for any in the interior of the stability region there exist positive constants and a finite such that , see the -slot drift theorem in ( a corollary of foster 's criterion ) . the remaining proof shows this fact .
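as a purely numerical illustration ( not part of the proof ) , the quadratic lyapunov function used above and its empirical k - slot drift can be computed from a simulated backlog trace as follows ; the data layout and names are hypothetical .

```python
def lyapunov(Q):
    """Q: dict {(router, session): backlog} at one slot."""
    return sum(q * q for q in Q.values())

def k_slot_drift(trace, K):
    """trace: list of per-slot backlog dicts.  Returns the sequence
    L(Q(t+K)) - L(Q(t)); under a stabilising policy and admissible
    arrivals this should be negative on average for large backlogs."""
    return [lyapunov(trace[t + K]) - lyapunov(trace[t])
            for t in range(len(trace) - K)]
```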
to derive an expression for the -slot drift we first write the -slot queue evolution inequalities where use the notation to denote summations over slots : the inequality is because the arrivals are added at the end of the -slot period some of these packets may actually be served within the -slot period .taking squares on ( [ eq : kfeq ] ) , using lemma 4.3 from , and performing some calculus we obtain the following bound where is a positive constant related to the maximum number of arriving packets in a slot , the maximum link capacity , and the maximum node - degree in graph . denote with the session packets in the tunnel , where .this backlog evolves as we have , and , hence .it follows that for any where is the deterministic upper bound of from ( [ eq : fmax ] ) .hence , \right\},\label{eq : interim}\end{aligned}\ ] ] where the equality comes from the node - centric and link - centric packet accounting in a network , see on page 48 .we design a stationary oracle ( ) policy , whose purpose is to assist us in proving the optimality of policy .the foundation of lies on the existence of a flow decomposition . for any in the interior of the stability region, there exists an such that is also stabilizable , where is a vector of ones .thus , by the sufficiency of the conditions in section [ sec : region ] there must exist a feasible flow decomposition such that and for all . using this particular decompositionwe define a specific policy for the particular as follows . ' '' '' *stationary randomized oracle ( ) policy * ' '' '' in every time slot and at each tunnel , * if ( the tunnel is loaded ) , then choose * else if ( not loaded tunnel ) , choose a session using an i.i.d . process with distribution the routing functions are then determined by and .and the allocation of service to session given by ( [ eq : allocation ] ) are independent . ] ' '' '' observe that satisfies the capacity constraints at every slot , namely .therefore .despite wasting transmissions when the tunnels are loaded , stabilizes : [ cor : stat ] for any in the interior of the stability region we have is also designed to mimic the condition ( [ eq : c1 ] ) used by .because of it , we can show that compares favorably to .[ lem : kslot ] the -slot policy comparison yields for all \right\ } \\ & \hspace{0.1 in } \geq \mathbbm{e}_{\mathbf h}\left\{\sum_{c}\tilde \mu_{ij}^{c}(t,{\boldsymbol\lambda{\text{--or}}})\left [ q_i^c(t)-q_j^c(t)\right ] \right\ } -k^2b_2,\notag\end{aligned}\ ] ] where is a constant .we combine ( [ eq : interim ] ) with lemma [ lem : kslot ] to get \right\},\end{aligned}\ ] ] which can be rewritten as ,\notag\end{aligned}\ ] ] where in the last inequality we used lemma [ cor : stat ] .hence , we finally get q_i^c(t)\ ] ] choose a finite and define the positive constants and . 
then rewrite ( [ eq : lastdrift ] ) as which completes the proof .below we give the proofs for the technical lemmas [ cor : stat ] and [ lem : kslot ] .for any in the interior of the stability region we have first we will need a technical lemma , which states that a non - loaded tunnel can not become loaded under .we emphasize that in the following lemma all backlogs refer to the system evolution under .[ lem : absorption ] consider the system evolution on router edge under for the slots and suppose that is arbitrary .suppose that for a time slot we have , then the proof is by contradiction .suppose there exists such that and .then , there must exist a slot with where a transition occurred , such that and .then use the facts , which hold for any , and ( [ eq : queuefij ] ) to get thus , since we may apply lemma [ lem : leaky ] on slot to conclude that . then combine with and ( [ eq : queuefij ] ) again which is a contradiction . to prove lemma [ cor : stat ], we will first show that for any router edge it is we begin with the rhs of ( [ eq : orefficiency ] ) . for any slot in the observation period ,observe that if the value of is revealed , does not depend further on , i.e. , and are conditionally mutually independent and we may write then , by the law of total expectation we have for where we used by definition of . for immediately get . summing up over all slotsproves the rhs of . to prove the lhs of ( [ eq : orefficiency ] ) we will use lemma [ lem : absorption ] .first assume that the observation period starts with .then invoking lemma [ lem : absorption ] we conclude that for all for any realization of the system evolution . then assume that the observation period starts with , by we have and it follows that the tunnel backlog monotonically decreases until it becomes less than .moreover , since , the maximum number of slots required to become smaller than is at most . on the first slotwhen , we can apply lemma [ lem : absorption ] again .thus , combining the two cases , we conclude that for any realization we have let , we have where the last inequality follows from , see .this proves . to complete the proof, we use the lower bound of eq .( [ eq : orefficiency ] ) for the first term and the upper bound for the second term , and use the fact that node s out - degree is bounded above by the maximum node degree .the -slot policy comparison yields for all \right\ } \\ & \hspace{0.1 in } \geq \mathbbm{e}_{\mathbf h}\left\{\sum_{c}\tilde \mu_{ij}^{c}(t,{\boldsymbol\lambda{\text{--or}}})\left [ q_i^c(t)-q_j^c(t)\right ] \right\ } -k^2b_2,\notag\end{aligned}\ ] ] where is a constant given in eq .( [ eq : b1 ] ) . fix some arbitrary router edge , and a time slot .the concept of the proof is to examine the subsequent slots and compare to with respect to the products \right\} ] . 
also , note that under any policy it is .then , on an underload slot , we have \notag\\ & \hspace{0.1in}\geq \sum_{c}\mu_{ij}^{c}(t+\tau,\text{{{\text{bp - t } } } } ) \left [ q_i^c(t+\tau)-q_j^c(t+\tau)\right]-\tau b_2 \notag\\ & \hspace{0.08 in } \stackrel{\text{(\ref{eq : globalbnd})}}{\geq}\sum_{c}\mu_{ij}^{c}(t+\tau,{\boldsymbol\lambda{\text{--or } } } ) \left [ q_i^c(t+\tau)-q_j^c(t+\tau)\right]-\tau b_2\notag \\ & \hspace{0.1in}= \sum_{c}\mu_{ij}^{c}(t+\tau,{\boldsymbol\lambda{\text{--or } } } ) [ q_i^c(t)-q_j^c(t)+\notag \\ & \hspace{1.7in}+\delta q_i^c(\tau)-\delta q_j^c(\tau ) ]-\tau b_2 \notag \\ & \hspace{0.1in}\geq \sum_{c}\mu_{ij}^{c}(t+\tau,{\boldsymbol\lambda{\text{--or } } } ) \left [ q_i^c(t)-q_j^c(t)\right]-2\tau b_2\end{aligned}\ ] ] our plan is to derive a similar expression to for the overload slots . to proceed with the plan, we develop an analysis which depends on the sign of ] . for this case, we use the concept of an _ overload subperiod _ , which is a period of consecutive overload slots plus an initial underload slot .we formally define the overload subperiod with length consisting of consecutive slots , such that and . in words ,an overload subperiod begins with one underload slot and ends with an overload slot , while all slots within the subperiod are overload and the slot after the subperiod is underload , see a representation of such an overload subperiod in fig .[ fig : trajectory ] .let be the set of slots comprising the overload subperiod for sample path under study .suppose , that there are overload subperiods , where the random variable takes values in .we also define .note that the sets are disjoint , it is , and . by definition of the overload subperiodthe backlog at the last slot is larger than at the first slot , hence for our chosen sample path we have let us now extend the definition of the overload subperiod to the special case of the first subperiod . if the first slot of the observation period is overload , i.e. , , then the first overload subperiod starts at an overload slot ( as opposed to the original definition ) and completes at the last consecutive overload slot ( similar to the original definition ) .this is a natural extension to the above definition of the overload subperiod .the backlog difference between last and first slot of the first overload subperiod is now , let us examine the overload subperiod of slots for , combining ( [ eq : bound1 ] ) and ( [ eq : kfeq2 ] ) we have where the equality follows from applying lemma [ lem : leaky ] to all slots in the overload subperiod ( including the first ) .multiplying both sides with the positive quantity ] \\ & \hspace{0.1in}\geq \hspace{-0.25in}\sum_{\hspace{0.25in}c , t+\tau\in { \cal t}_m}\hspace{-0.25in}\mu_{ij}^{c}(t+\tau,{\boldsymbol\lambda{\text{--or } } } ) \left [ q_i^c(t)-q_j^c(t)\right]\notag\\ & \hspace{0.1in}\geq \hspace{-0.2in}\sum_{\hspace{0.2in}c , t+\tau\in { \cal t}_m}\hspace{-0.2in}\mu_{ij}^{c}(t+\tau,{\boldsymbol\lambda{\text{--or } } } ) \left [ q_i^c(t)-q_j^c(t)\right]- \hspace{-0.25in}\sum_{\hspace{0.15in}t+\tau\in { \cal t}_m}\hspace{-0.15in}2\tau b_2\notag\end{aligned}\ ] ] where in the last step we intentionally relaxed the bound further to make it match ( [ eq : underloadslots ] ) . for and , we repeat the above approach using ( [ eq : bound21 ] ) , and ( [ eq : overloadpos ] ) still holds .however , in case , i.e. the observation period starts in overload , we must replace ( [ eq : bound1 ] ) with ( [ eq : bound2 ] ) , in which case the above approach breaks. 
therefore we deal with this case in a different manner .in particular we will show that if our sample path has then for all time slots in the first overload subperiod , starting from the first slot , and since , observe that both policies will make the same decision . then ( [ eq : kfeq2 ] ) is satisfied with equality , and since does not depend on the chosen policy , we have that is the same for both policies .this process is repeated for all slots in subperiod consisting of overload slots under .thus , we conclude that if the system is in the first overload period under with , then it is also in the first overload period under .therefore , for , we have and ( [ eq : overloadpos ] ) holds for this case as well .we conclude that ( [ eq : overloadpos ] ) is true for all as long as \geq 0 ] and the complement .observing that the remaining slots are underload and combining with ineq .( [ eq : underloadslots ] ) , we condition on the sample path to get \big |\mathbf{q}^+_t , s=\bm s\right\ } \notag\\ & = \mathbbm{e}_{\mathbf h}\left\{\hspace{-0.2in}\sum_{\hspace{0.2in}c , t+\tau\in { \cal t}}\hspace{-0.28 in } \mu_{ij}^{c}(t+\tau,{{\text{bp - t } } } ) \left [ q_i^c(t)-q_j^c(t)\right]\big |\mathbf{q}^+_t , s=\bm s\right\ } \notag\\ & + \mathbbm{e}_{\mathbf h}\left\{\hspace{-0.3in}\sum_{\hspace{0.25in}c , t+\tau\in { \cal k - t } } \hspace{-0.37in}\mu_{ij}^{c}(t+\tau,{{\text{bp - t } } } ) \left [ q_i^c(t)-q_j^c(t)\right]\big |\mathbf{q}^+_t , s=\bm s\right\ } \notag\\ & \hspace{-0.05in}\stackrel{\text{(\ref{eq : underloadslots})\&(\ref{eq : overloadpos})}}{\geq } \mathbbm{e}_{\mathbf h}\left\{\sum_{c } \tilde\mu_{ij}^{c}(t,{\boldsymbol\lambda{\text{--or } } } ) \left [ q_i^c(t)-q_j^c(t)\right]\big |\mathbf{q}^+_t , s=\bm s\right\ } \notag\\ & -\sum_{t+\tau\in { \cal k}}2\tau b_2\end{aligned}\ ] ] \b ) next we study the case where the observation period starts with < 0 ] we get \notag\\ & \hspace{0.1in}\geq \hspace{-0.1in}\sum_{c , t+\tau\in { \cal o}}\hspace{-0.1in}\mu_{ij}^{c}(t+\tau,{\boldsymbol\lambda{\text{--or } } } ) \left [ q_i^c(t)-q_j^c(t)\right]\notag\\ & \hspace{0.1in}>\hspace{-0.1 in } \sum_{c , t+\tau\in { \cal o}}\hspace{-0.1in}\mu_{ij}^{c}(t+\tau,{\boldsymbol\lambda{\text{--or } } } ) \left [ q_i^c(t)-q_j^c(t)\right]-2\tau b_2.\end{aligned}\ ] ] combining with ( [ eq : underloadslots ] ) we obtain \big |\mathbf{q}^-_t , s=\bm s\right\ } \notag\\ & = \mathbbm{e}_{\mathbf h}\left\{\hspace{-0.2in}\sum_{\hspace{0.15in}c , t+\tau\in { \cal o}}\hspace{-0.22 in } \mu_{ij}^{c}(t+\tau,{{\text{bp - t } } } ) \left [ q_i^c(t)-q_j^c(t)\right]\big |\mathbf{q}^-_t , s=\bm s\right\ } \notag\\ & + \mathbbm{e}_{\mathbf h}\left\{\hspace{-0.3in}\sum_{\hspace{0.25in}c , t+\tau\in { \cal k - o } } \hspace{-0.35in}\mu_{ij}^{c}(t+\tau,{{\text{bp - t } } } ) \left [ q_i^c(t)-q_j^c(t)\right]\big |\mathbf{q}^-_t , s=\bm s\right\ } \notag\\ & \hspace{-0.01in}\stackrel{\text{(\ref{eq : underloadslots})\&(\ref{eq : overloadneg})}}{\geq } \mathbbm{e}_{\mathbf h}\left\{\sum_{c } \tilde\mu_{ij}^{c}(t,{\boldsymbol\lambda{\text{--or } } } ) \left [ q_i^c(t)-q_j^c(t)\right]\big |\mathbf{q}^-_t , s=\bm s\right\ } \notag\\ & -\sum_{t+\tau\in { \cal k}}2\tau b_2\end{aligned}\ ] ] in conclusion , depending on the sign of $ ] , we either break the observation into overload subperiods and remaining underload slots to use ( [ eq : overloadpos ] ) and ( [ eq : underloadslots ] ) , or we study separately the overload slots and the remaining underload slots using ( [ eq : overloadneg ] ) and ( [ eq : underloadslots ] ) 
.note that . hence \big|s=\bm s\right\ } \notag\\ & = \mathbbm{e}_{\mathbf h}\left\{\sum_{c } \tilde\mu_{ij}^{c}(t,{{\text{bp - t } } } ) \left [ q_i^c(t)-q_j^c(t)\right]\big |\mathbf{q}^+_t , s=\bm s\right\ } \notag\\ & + \mathbbm{e}_{\mathbf h}\left\{\sum_{c } \tilde\mu_{ij}^{c}(t,{{\text{bp - t } } } ) \left [ q_i^c(t)-q_j^c(t)\right]\big |\mathbf{q}^-_t , s=\bm s\right\ } \notag\\ & \hspace{-0.01in}\stackrel{\text{(\ref{eq : qpos})\&(\ref{eq : qneg } ) } } { > } \hspace{-0.01in}\mathbbm{e}_{\mathbf h}\left\{\sum_{c } \tilde\mu_{ij}^{c}(t,{\boldsymbol\lambda{\text{--or } } } ) \left [ q_i^c(t)-q_j^c(t)\right]\big |s=\bm s\right\}\notag\\ & -k^2 b_2.\end{aligned}\ ] ]let , we have \right\ } \notag\\ & = \sum_{\bm s\in { \cal s}}p(s=\bm s|\mathbf{h}(t))\\ & \hspace{0.4 in } \times \mathbbm{e}_{\mathbf h}\left\{\sum_{c } \tilde\mu_{ij}^{c}(t,{{\text{bp - t } } } ) \left [ q_i^c(t)-q_j^c(t)\right]\big |s=\bm s\right\ } \notag\vspace{-0.2in}\\ & \hspace{-0.05in}\stackrel{(\ref{eq : allcases1})}{\geq } \sum_{\bm s\in { \cal s}}p(s=\bm s|\mathbf{h}(t))\vspace{-0.2in}\\ & \hspace{0.4 in } \times\mathbbm{e}_{\mathbf h}\left\{\sum_{c } \tilde\mu_{ij}^{c}(t,{\boldsymbol\lambda{\text{--or } } } ) \left [ q_i^c(t)-q_j^c(t)\right]\big |s=\bm s\right\ } \notag\\ & \hspace{0.4in}- \sum_{\bm s\in { \cal s}}p(s=\bm s|\mathbf{h}(t))k^2b_2\notag\\ & = \mathbbm{e}_{\mathbf h}\left\{\sum_{c } \tilde\mu_{ij}^{c}(t,{\boldsymbol\lambda{\text{--or } } } ) \left [ q_i^c(t)-q_j^c(t)\right]\right\ } -k^2b_2.\end{aligned}\ ] ]
maximum throughput requires path diversity , enabled by bifurcating traffic at different network nodes . in this work , we consider a network where traffic bifurcation is allowed only at a subset of nodes called _ routers _ , while the remaining nodes ( called _ forwarders _ ) cannot bifurcate traffic and hence only forward packets on specified paths . this implements an overlay network of routers where each overlay link corresponds to a path in the physical network . we study dynamic routing implemented at the overlay . we develop a queue - based policy , which is shown to be maximally stable ( throughput optimal ) for a restricted class of network scenarios where overlay links do not correspond to overlapping physical paths . simulation results show that our policy yields lower delay than dynamic policies that allow bifurcation at all nodes , such as the backpressure policy . additionally , we provide a heuristic extension of our proposed overlay routing scheme for the unrestricted class of networks .
in the last year , progress has been reported on theories providing a framework for human commuting patterns .both papers suggest that the main ingredient in a ` universal ' law predicting human mobility patterns is topological , i.e. it does not directly depend on metrical distance .this discovery aims to rewrite the assumptions that have been made during the last century on mobility patterns and in particular the traditional _ gravity model _ first suggested for use in human interaction systems by carey ( 1859 ) and popularised by zipf in 1946 and the _ intervening opportunities model _ introduced by stouffer in 1940 .it is worth noticing that lately , purely topological relations have also been found to be leading components for the explanation of animal collective behaviour . in particular in a simple theory called the _ radiation model_ , based on diffusion dynamics , has been developed and the model appears to match experimental data well .the model gives exact analytical results and it has the additional desirable feature of being parameter - free , i.e. it has the characteristics of a universal theory . in this contributionwe use three different datasets in order to assess the universality , accuracy , and robustness of the newly proposed radiation model applied to human mobility and public transport infrastructure .the datasets we use are available as : ( i ) a complete multimodal network for transportation in the uk , comprising the road network for bus and coach , the rail networks for tube and rail , and the airline networks for plane .the weights on these networks consist of the volumes of the transport ( vehicles , trains , planes ) from transport time - tables ; ( ii ) commuting patterns for england and wales at ward level resolution from the 2001 population census ; and ( iii ) population density data for the uk at ward level resolution , also from the census .our first concern about the radiation model is the presumption of universality . in our interpretation ,` universality ' means that the model can be applied at all spatial scales , all time periods , and to different places .regarding the system scale , we show that among cities , the radiation model is broadly accurate for commuting , while it is not accurate at all in forecasting both the transportation patterns between cities , or for the commuting flows within london . regarding the applicability of the model to different countries, we notice that the radiation model is normalised to an infinite population system .we derive the correct normalisation for finite systems and we show that it deviates from the one derived in at the thermodynamic limit .this deviation is not really appreciable for large population systems at the scale of counties in the us , but it becomes relevant for smaller systems composed of much smaller but equivalent entities such as wards in the regions including england and wales .the gravity model is based on empirical evidence that the commuting between two places _ i _ and _ j _ , with origin population and destination population , is proportional to the product of these populations and inversely proportional to a power law of the distance between them .many studies have been carried on such a model , where it is often subject to additional constraints on the generation and attractions of flows , and on the total travel distance ( or cost ) observed .these variants can be derived consistently using information minimising or entropy maximising procedures . 
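as a concrete illustration of this family of models , the following minimal sketch ( python / numpy ; a toy , not the code used for the analysis in this paper ) computes unconstrained single - exponent gravity estimates from populations and pairwise distances . the exponent , here called `gamma` , and the normalisation `A` are placeholders that would in practice be obtained by regression on the observed flows .

```python
import numpy as np

def gravity_flows(P, D, gamma=2.0, A=1.0):
    """P: populations of the locations; D: matrix of pairwise distances.
    Returns the gravity estimate T_ij = A * P_i * P_j / d_ij**gamma."""
    P = np.asarray(P, float)
    D = np.asarray(D, float).copy()
    np.fill_diagonal(D, np.inf)      # suppress self-flows cleanly
    return A * np.outer(P, P) / D ** gamma
```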
in our research we employ two models . one is a four - parameter model , the one also used in and first stated in this form by alonso ( 1976 ) : $$t_{ij}=a\,\frac{p_{i}^{\alpha}p_{j}^{\gamma}}{d_{ij}^{\beta}},\qquad(1)$$ where _ a _ is a normalisation factor and $\alpha$ , $\gamma$ and $\beta$ are the parameters of the model , which can be determined by multiple regression analysis . the second is a simpler , perhaps more elegant , model that just carries the parameter $\beta$ as the exponent of the denominator , and it is the one more frequently used in transportation modelling : $$t_{ij}=a\,\frac{p_{i}p_{j}}{d_{ij}^{\beta}}.\qquad(2)$$ the radiation model traces its origin to a simple particle diffusion model , where particles are emitted at a given location and have a certain probability _ p _ of being absorbed by the surrounding locations . it turns out that the probability for a particle to be absorbed is independent of _ p _ : it depends only on the origin population $m_{i}$ , the destination population $n_{j}$ and on the population $s_{ij}$ in a circle whose centre is the origin and whose radius is the distance between the origin and the destination , minus the population at the origin and the population at the destination . then the number of commuters , that we call $t_{ij}$ , from location _ i _ to location _ j _ is estimated to be a fraction of the commuters $t_{i}$ leaving location _ i _ , that is : $$t_{ij}=t_{i}\,\frac{m_{i}n_{j}}{(m_{i}+s_{ij})(m_{i}+n_{j}+s_{ij})}.\qquad(3)$$ [ figure ( fig2 ) : in the left panel , the cumulative frequency distribution for the population size _ p(p * ) _ for the us counties analysed in ; in the right panel , the cumulative frequency distribution for the population size _ p(p * ) _ for the cities of england and wales . ] the most interesting aspect of eq . ( 3 ) is that it is independent of the distance and that it is parameter free . nevertheless , eq . ( 3 ) has been derived in the thermodynamic limit , that is , for an infinite system . it is easy to show that for a finite system the normalisation brings us to a slightly different form of the radiation model , that is $$t_{ij}=\frac{t_{i}}{1-m_{i}/m}\,\frac{m_{i}n_{j}}{(m_{i}+s_{ij})(m_{i}+n_{j}+s_{ij})},\qquad(4)$$ where $m$ is the total sample population , and we have $1/(1-m_{i}/m)\rightarrow1$ for $m\rightarrow\infty$ . in a finite system , eq . ( 3 ) underestimates the commuting flows by a factor $(1-m_{i}/m)$ . for a very large system with uniform population , eq . ( 3 ) is a very good approximation , but actually the city size distribution is not uniform , for it usually follows a very heterogeneous skewed distribution , such as zipf 's law . to understand the deviations of eq . ( 3 ) from eq . ( 4 ) , we measure the factor $m_{i}/m$ for the dataset used in and for a smaller system : the region composed of england and wales . in the former case the us system is very large ; the analysis is performed at the county level and that reduces the population heterogeneity of the system . we find that the largest deviation is in the flows from anderson county ; this is not a particularly large deviation , but the same measure for england and wales brings , for example , a deviation of around 17% for the commuting flows from london , that is , a considerably larger deviation . as we have shown that eq . ( 3 ) is not universal but scale dependent , a better choice for our investigation of uk commuting patterns is eq . ( 4 ) . in , $t_{i}$ is considered to be proportional to the population $m_{i}$ , which is a good estimate , while in our analysis we derive its value directly from the commuting network , i.e. $t_{i}=\sum_{j}t_{ij}$ , which is , in network theory terminology , the out - strength of location _ i _ . moreover , in , the model is based on job opportunities that are considered to be proportional to population . in fact , eq . ( 4 ) can be rewritten in network theory terminology . hence , given that $t_{ij}$ are the elements of the weighted directional adjacency matrix representing the commuting between locations _ i _ and _ j _ , we define the out - strength of vertex _ i _ as $t_{i}^{out}=\sum_{j}t_{ij}$ , and the in - strength as $t_{i}^{in}=\sum_{j}t_{ji}$ . then we have $$t_{ij}=\frac{t_{i}^{out}}{1-m_{i}/m}\,\frac{m_{i}n_{j}}{(m_{i}+s_{ij})(m_{i}+n_{j}+s_{ij})},\qquad(5)$$ where $s_{ij}$ is the population inside the circle of radius _ d(i , j ) _ centred at _ i _ ( excluding the origin and destination populations ) and _ d(i , j ) _ is the distance between _ i _ and _ j _ . eq . ( 5 )
is an interesting relation between the commuting flows of the network which can be verified in itself .in fact the in - strength of a given vertex represents the job opportunities in that location , since it quantifies exactly the number of people going to work in that location .in this section we test the models defined in eq . , eq . and eq . against empirical data . in the first subsection we analyse the commuting between the cities of england and wales( see left panel of fig .[ fig1 ] ) , thereby simulating the models at macro - scales , while in the second subsection we analyse the commuting between the wards of london ( see right panel of fig . [ fig1 ] ) , simulating the models at micro - scales . in this subsection , we test the gravity model defined in eq .and eq . and the radiation model defined in eq . against the empirical data for the cities of england and wales . in this study ,city clusters have been defined via a two step process using the population for the 8850 census area statistics ( cas ) of wards in england and wales from the 2001 population census . in the first step ,those wards with population density above 14 persons per hectare are selected from the rest ; in the second step , adjacent selected wards have been grouped to form a total of 535 city clusters .we show these cities in the left panel of fig .[ fig1 ] . on the top of this socio - geographical dataset, we analyse data for the commuting between these cities using the 2001 census journey to work data which specifies , for all surveyed commuters , the origin ward they travel from their home location and their destination ward their work location . from this data , we have calculated the number of commuters between all pairs of cities in england and wales .for this study , we have also used data for the number of trains and buses moving between these cities .this information has been derived from timetable data held by the national public transport data repository ( nptdr ) .this data includes all public transport services running in england , wales and scotland between the 5th and the 11th of october 2009 .the data is composed by two data sets : the naptan ( national public transport access nodes ) dataset and the transxchange files .the former includes all public transport nodes categorised by travel mode and geo - located in space .the latter one has a series of transport modal files for each county within england , wales and scotland ( 143 counties in total ) , with information on all services running within the county .the travel modes included are air , train , bus , coach , metro and ferry .each service includes routing information as a series of naptan referenced stops each with its corresponding departure and waiting time . 
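before turning to the results , a minimal sketch of how the finite - system radiation prediction ( eq . ( 4 ) above ) can be evaluated on data of this kind is given below ( python / numpy ) . it is an illustration only : the accumulation of the intervening population s_ij by sorting destinations by distance is an implementation choice , not a prescription taken from the paper .

```python
import numpy as np

def radiation_flows(m, D, T_out, finite_size=True):
    """m: populations; D: pairwise distances; T_out: total out-commuters of
    each location (the out-strength, or a value proportional to population).
    Returns the radiation estimate; with finite_size=True the prediction is
    divided by (1 - m_i / M), M being the total sample population."""
    m = np.asarray(m, float)
    N, M = len(m), float(m.sum())
    T = np.zeros((N, N))
    for i in range(N):
        s = 0.0                                   # intervening population s_ij
        for j in np.argsort(D[i])[1:]:            # increasing distance, skip i
            norm = 1.0 - m[i] / M if finite_size else 1.0
            T[i, j] = (T_out[i] / norm) * m[i] * m[j] / (
                (m[i] + s) * (m[i] + m[j] + s))
            s += m[j]                             # j becomes intervening next
    return T
```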
in this paper , we have deduced the number of trains and buses operating between all pairs of cities on a typical working day ( 24h ) by first assigning a ward area to each bus and train stop via spatial point - in - polygon queries , and then extending this assignment to city areas . it is worth noticing that in the analysis has been made over us counties , which are artificial units , while in this analysis we consider cities as natural entities for commuting . the different choice is not merely speculative , since counties have different physical and statistical properties than cities . it is well known that the city size distribution follows zipf 's law ; this means that the city size distribution has a fat tail characterised by the scale of very large cities . the representation of the system in terms of counties introduces an artificial cut - off in the tail of the distribution , cutting down the tail , as we show in fig . [ fig2 ] . it is sufficient to think of the fact that new york city is made up of 5 different counties ( boroughs ) , so that in a county - level analysis its population is split between those 5 counties . [ table ( tab . 2 ) : values of the goodness - of - fit measure calculated for the different models for london . ] in tab . 2 we show the results of the test for the different models . we can observe straight away that the models all perform rather badly , implying that the structure of a metropolis is more complex than the one forecast by both the radiation and the gravity models . in the top panels of fig . [ fig6 ] we show the analysis for the commuting patterns , i.e. the models against the real data . we perform a multiple regression analysis to find the best fit with the data for eq . ( 1 ) , whose results are shown in the figure caption . in the second - from - top , left panel of fig . [ fig6 ] , we show the average number of commuters in london as a function of the distance . the plot shows that the real data decay faster than a power law with the distance , and this behaviour is captured by none of the gravity models , which tend to follow a power - law behaviour . on the other hand , the radiation model forecasts a good amount of commuting for short distances and a rapid decay , but this does not reproduce the data well either . in the adjacent panel we show the correlations between the commuting flows and the destination population . for london , this is counter - intuitive , since the correlation analysis shows a few large peaks for wards with very small population . this phenomenon resides in the fact that the wards where most of the jobs are concentrated in london are not residential wards . this evidence would let us think that the approximation _ ward population_/_ward employment _ is not valid for london and that we should take this bias into account in our analysis . in the right panel of the same figure we show the correlations of the number of commuters and . there are hints of a strong dependency of the commuting flows on this quantity , even if this dependency is weaker than the one reproduced by the radiation model .
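the goodness - of - fit values reported in tab . 2 can be computed , for instance , with the sørensen - dice similarity index often used for comparing commuting tables ; whether this is exactly the measure used here is an assumption , so the snippet below is only indicative .

```python
import numpy as np

def sorensen_dice(T_obs, T_mod):
    """Sørensen-Dice (common part of commuters) index between an observed
    and a modelled flow matrix; 1 means identical, 0 means disjoint."""
    T_obs, T_mod = np.asarray(T_obs, float), np.asarray(T_mod, float)
    return 2.0 * np.minimum(T_obs, T_mod).sum() / (T_obs.sum() + T_mod.sum())
```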
, title="fig:",scaledwidth=50.0% ] in gla .top - right panel : cumulative frequency distribution for the working population in a given ward .bottom left panel : average number of employees as a function of the ward population size .bottom right panel : commuters flow analysis for gla , in this case the flows are modelled via eq.5 . , title="fig:",scaledwidth=50.0% ] in the bottom panels of fig .[ fig6 ] , we show the results for the analysis on the bus flows in gla . in tab . 2the values are displayed and we can see that the models do not perform very well , but still better than for the commuters case .the correlations for the number of buses with the distance display an exponential tail , that has not been picked up by any of the models . as for the commuter case , we see the strongest correlations with distance and , while the correlations with destination populations are ill - defined .one could argue that the poor results obtained applying the commuting models to the london intra - commuting flows could reside in the approximation validity of the employment data with the population size , in the case of the london wards . to address this question , in the top left panel of fig .[ fig12 ] , we show the frequency distribution of the population size for the london s wards .this is well fitted by a gaussian distribution centred around 11300 people .this is not a surprise since the ward boundaries have been designed to have approximatively the same population size . in the top right panel of fig .[ fig12 ] , we show instead the cumulative frequency distribution for the number of people working in a given ward .this is a skew distribution with a broad tail , well fitted by a power law , with exponent .this reflects the fact that the approximation _ population size_/_employment data _ , that has been shown to be valid in the case of counties in us , is not valid in the case of london s wards .in particular , we see that employment follow a distribution that suggests a complex and hierarchical organization for these resources within the city .in fact , from the bottom left panel of fig .[ fig12 ] , where we measure the average number of employees as a function of the ward population size , we can see that apart from some non trivial deviations for small population size , there are not significant correlations . these deviations are related to the fact that the most significant employment locations in london often have a very small population .we can now check whether eq .5 could be a more appropriate choice in order to describe commuting flows inside of a city , instead of eq .4 . in the bottom right panel of fig .[ fig12 ] , we show the results of eq . 5 applied to the commuting between gla wards , versus the real commuting flows , in the same style and notation of fig .we notice that the plot is very similar to the one obtained using eq . 4 and the tells us that using eq.5 instead of eq.4 does not improve the goodness of the fit .this implies that failure of the radiation model in forecasting urban commuting flows does not reside in approximating population size / employment , but in the complexity of the system .human mobility is an outstanding problem in science . 
in more than one century of active work and observation ,the gravity model has been considered the best option to model such a phenomenon .the appearance of a new statistical model based on physical science has re - opened the debate on the topic .in particular the apparent independence of the radiation model from metrical distance and its property of being parameter - free is a significant and desirable change from past practice .the model needs to be tested in many different circumstances so that its wider applicability can be assessed . in this paperwe address the reliability of the radiation model against the gravity model for large scale commuting and transportation networks in england and wales and for the intra - urban commuting and transportation network for the london region .the first thing we notice is that both models fail to describe human mobility within london . in this sensewe argue that commuting at the city scale still lacks a valid model and that further research is required to understand the mechanism behind urban mobility .in fact , the phenomena of socio - geographical segregation and residential / business ward specialisation are key drivers in determining the structure of flows and the density of population in the city and these are not reflected by these statistical models . for england and wales, we first introduce the correct normalisation for finite systems in the radiation model .such a normalisation affects the flows from london by a factor of 17% .then we notice that the models are not very good in describing transportation data , such as bus and train flows , while they can be considered acceptable for modelling the commuting flows .the gravity model ii of eq.[eq_2 _ ] fails to describe commuting models , and confirms that commuting correlations with population at origin and destination is not just linear .the gravity model is satisfactory in describing the commuting flows and surely much better than the radiation model , even if the latter has the advantage to be parameter free that turns out to be useful in the cases where there is no data available to estimate any parameters .nevertheless from the fluctuation analysis it emerges that there is a consistent portion of the distance / destination population phase - space where the radiation model gives better estimates of the gravity model in terms of srensen - dice coefficient .this means that for large distances and small and moderate destination population scales , the principles of the radiation model are reliable and that mobility patterns can be approached by a diffusion model where intervening opportunities on the commuting paths prevail on the distance of such paths .however , the modest overall radiation model performance in terms of indicates that more research on the subject has to be done in order to improve the model reliability .other ways to represent the commuting system are possible .for example , if we were to grid all the data thereby strictly defining population and employment as density measures , this would change the dynamics of the gravity and radiation models in that they have been originally specified to deal with counts of activity data like population and employments rather than their densities .moreover the tradition in this field is to work with data that is available in administrative units rather than approximate that data on a grid because these units reflect changes in the spatial system over time .we believe that the best way to conduct this study is to consider urban 
conglomerations as the natural entities involved in commuting flows .this choice relates to a well settled tradition in statistical physics that consider cities as well defined entities , such as in the zipf s and gibrat s law .js and apm were partially funded by the epsrc scale project ( ep / g057737/1 ) and mb by the erc mechanicity project ( 249393 erc-2009-adg ) .further , we would like to thank the anonymous reviewers for their constructive feedback that has improved the paper , especially the suggestion to use the srensen - dice coefficient as an alternative error metric .m. ballerini , n. cabibbo , r. candelier , a. cavagna , e. cisbani , i. giardina , v. lecomte , a. orlandi , g. parisi , a. procaccini , m. viale , v. zdravkovic , interaction ruling animal collective behavior depends on topological rather than metric distance : evidence from a field study , pnas * 105 * , 1232 ( 2008 ) .t.a . srensen ( 1948 ) a method of establishing groups of equal amplitude in plant sociology based on similarity of species and its application to analyses of the vegetation on danish commons , biol .skr . , * 5 * ( 1948 ) , pp . 1 - 34
we test the recently introduced _ radiation model _ against the _ gravity model _ for the system composed of england and wales , both for commuting patterns and for public transportation flows . the analysis is performed both at macroscopic scales , i.e. at the national scale , and at microscopic scales , i.e. at the city level . it is shown that the thermodynamic limit assumption for the original radiation model significantly underestimates the commuting flows for large cities . we then generalize the radiation model , introducing the correct normalisation factor for finite systems . we show that even if the gravity model has a better overall performance the parameter - free radiation model gives competitive results , especially for large scales .
the optimal entropic uncertainty relation for a couple of conjugate continuous variables ( position and momentum ) has been known for almost 40 years . one decade later , an entropic formulation of the uncertainty principle was developed in the discrete setting as well . even though the topic of entropic uncertainty relations ( eurs ) has a long history ( for a detailed review see ) , one can observe a recent increase of interest within the quantum information community , leading to several improvements and even a deep asymptotic analysis of different bounds . this is quite understandable , because the entropic uncertainty relations have various applications , for example in entanglement detection , security of quantum protocols , quantum memory , or as an ingredient of einstein - podolsky - rosen steering criteria . moreover , the recent discussion about the original heisenberg idea of uncertainty led to the entropic counterparts of the noise - disturbance uncertainty relation ( also obtained with quantum memory ) . my favorite example of an entropic description of uncertainty is situated in between the continuous and the discrete scenario . continuous position and momentum variables , when studied with the help of coarse - grained measurements , lead to discrete probability distributions . this particular formulation of the uncertainty principle has long been recognized to faithfully capture the spirit of position - momentum duality . it also carries a deep physical insight , since the coarse - grained version of the heisenberg uncertainty relation is non - trivial for any coarse - graining ( given in terms of two widths and , in positions and momenta respectively ) provided that both widths are finite . on the practical level , coarse - grained entropic relations are experimentally useful for entanglement and steering detection in continuous - variable schemes . the aim of this paper is thus to strengthen the theoretical and experimental tools based on the coarse - grained eurs by taking advantage of the recent improvements of discrete entropic inequalities , in particular the one based on majorization . let me start with a brief description of the entropic uncertainty landscape , with a special emphasis on the majorization approach developed recently . the standard position - momentum scenario deals with the sum of the continuous shannon ( or , in general , rnyi ) entropies calculated for the densities and describing positions and momenta respectively . the position and momentum wave functions are mutually related by the fourier transformation . the discrete eurs rely on the notion of the rnyi entropy of order $\alpha$ , $$h_{\alpha}\left[p\right]=\frac{1}{1-\alpha}\ln\sum_{i}p_{i}^{\alpha},$$ and the sum - inequalities of the general form $$h_{\alpha}\left[p\left(a;\varrho\right)\right]+h_{\beta}\left[p\left(b;\varrho\right)\right]\geq b_{\alpha\beta}\left(a , b\right)\qquad(\,[\textrm{eurgen}]\,)$$ valid for any density matrix $\varrho$ and two non - degenerate observables $a$ and $b$ . if by $\left|a_{i}\right\rangle$ and $\left|b_{j}\right\rangle$ we denote the eigenstates of the two observables in question , the associated probability distributions entering ( [ eurgen ] ) are $p_{i}\left(a;\varrho\right)=\left\langle a_{i}\right|\varrho\left|a_{i}\right\rangle$ and $p_{j}\left(b;\varrho\right)=\left\langle b_{j}\right|\varrho\left|b_{j}\right\rangle$ . the lower bound does not depend on $\varrho$ , but only on the unitary matrix $u_{ij}=\left\langle a_{i}|b_{j}\right\rangle$ . for instance , the most recognized result by maassen and uffink gives the bound $b_{\alpha\beta}=-2\ln\max_{i , j}\left|u_{ij}\right|$ , valid whenever $$\frac{1}{\alpha}+\frac{1}{\beta}=2 . \qquad(\,[\textrm{conjugate}]\,)$$ the couple $\left(\alpha,\beta\right)$ constrained as in eq . ( [ conjugate ] ) is often referred to as the conjugate parameters .
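as a quick numerical illustration of the discrete relation ( [ eurgen ] ) with the maassen - uffink bound , the snippet below draws a random pure state and a random unitary and checks the shannon case ( α = β = 1 ) ; it is only a sanity check , not part of any derivation in this paper , and all names are illustrative .

```python
import numpy as np

def renyi(p, alpha):
    p = np.asarray(p, float)
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return float(-np.sum(p * np.log(p)))        # Shannon limit
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

rng = np.random.default_rng(1)
d = 6
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
pA = np.abs(psi) ** 2                   # probabilities in the first eigenbasis
pB = np.abs(U.conj().T @ psi) ** 2      # probabilities in the rotated basis
bound = -2.0 * np.log(np.abs(U).max())  # Maassen-Uffink lower bound
assert renyi(pA, 1.0) + renyi(pB, 1.0) >= bound - 1e-10
```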
in the majorization approach onelooks for the probability vectors and which majorize the tensor product and the _ direct sum _ of the involved distributions ( [ distributions ] ) : the majorization relation between any two -dimensional probability vectors implies that for all we have , with a necessary equality when . in agreement with the usual notation , the symbol denotes the decreasing order , what means that , for all . in the casewhen the vectors compared in ( [ maj1 ] ) and ( [ maj2 ] ) are of different size , the shorter vector shall be completed by a proper number of coordinates equal to . the tensor product ( also called the kronecker product ) is a -dimensional probability vector with the coefficients equal to while the direct sum is a -dimensional probability vector given by one of the most important properties of the rnyi entropy of any order is its additivity +h_{\alpha}\left[y\right]=h_{\alpha}\left[x\otimes y\right].\label{add1}\ ] ] moreover , in the special case of the shannon entropies ( \equiv h\left[\cdot\right] ] and ] .on the other hand , when , this bound can be appropriately modified to the weaker form =\frac{2}{1-\alpha}\left[\ln\left(1+\sum_{i}w_{i}\right)-\ln2\right].\label{modified}\ ] ] the whole families of the vectors and fulfilling ( [ maj1 ] ) and ( [ maj2 ] ) have been explicitly constructed in and respectively .the aim of the present paper is to obtain the counterpart of the majorizing vector applicable to the position - momentum coarse - grained scenario described in detail in the forthcoming section [ subsec1b ] . in section [ secionmaj ]we derive this vector using the sole idea of majorization , so that we shall omit here a detailed prescription established in .we restrict the further discussion to the direct - sum approach , since for ( this case covers the sum of two shannon entropies ) , the _ direct - sum entropic uncertainty relation _ is always stronger than the corresponding tensor - product eur .the last set of ingredients we shall introduce , contains the coarse - grained probabilities together with their eurs .due to coarse - graining , the continuous densities and become the discrete probabilities : with , and .the sum of the rnyi entropies ] calculated for the probabilities ( [ rs ] ) is lower - bounded by ,\label{boundy}\ ] ] where and once more the above results are valid only for conjugate parameters ( [ conjugate ] ) , so that we label the bound ( [ ibb ] ) only by the index .the function is the `` 00 '' radial prolate spheroidal wave function of the first kind . when , the spheroidal term in ( [ lr ] ) becomes negligible and we have so that the bound ( [ ibb ] ) dominates in this regime . in the opposite case , when the bound ( [ ibb ] )is negative , so starting from some smaller ( -dependent ) value of the second bound becomes significant .after the short but comprehensive introduction , we are in position to formulate the main result of this paper .assume that a sum of any position probabilities and any momentum probabilities is bounded by , that is ( ) for some indices and .we implicitly assume here that does not depend on the specific choice of the probabilities in the sum ( it bounds any choice ) , and that since the left hand side of ( [ g ] ) can not exceed .denote further by assume now that , is an increasing sequence if that happens , the construction of the vector applicable to the direct - sum majorization relation , i.e. such that can be patterned after : for . 
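the majorization relations ( [ maj1 ] ) and ( [ maj2 ] ) are straightforward to verify numerically . the helpers below ( an illustration only ) check whether one probability vector majorizes another , padding the shorter vector with zeros as described above , and build the direct sum entering the direct - sum relation .

```python
import numpy as np

def majorizes(w, p, tol=1e-12):
    """True if w majorizes p: every partial sum of the decreasingly ordered
    w dominates the corresponding partial sum of p (zero-padded to equal
    length), with equality of the totals."""
    n = max(len(w), len(p))
    w = np.sort(np.pad(np.asarray(w, float), (0, n - len(w))))[::-1]
    p = np.sort(np.pad(np.asarray(p, float), (0, n - len(p))))[::-1]
    return bool(np.all(np.cumsum(w) >= np.cumsum(p) - tol))

def direct_sum(q, p):
    """Direct sum q ⊕ p entering the relation (maj2)."""
    return np.concatenate([np.asarray(q, float), np.asarray(p, float)])
```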
due to ( [ inc ] ) the coefficients are all non - negative , so that they form a probability vector .note that , since one picks up only a single probability ( , or , ) , and that , because whenever the quantum state is localized ( in position or momentum ) in a single bin , the left hand side of ( [ g ] ) is equal to .this is in accordance with an expectation that is the probability vector .one can check by a direct inspection that what together with ( [ g ] ) and ( [ f ] ) is the essence of majorization . as in the case of discrete majorization , there is a whole family ( labeled by ) of majorizing vectors given by the prescription for , , and when . in that notation ,the basic vector ( [ wi ] ) is equivalent to , and the following majorization chain does hold the remaining task is to find the candidates for the coefficients . to this endwe shall define two sets : ,\qquad y\left(\delta\right)=\bigcup_{b=1}^{n}\left[l_{b}^{-}\delta , l_{b}^{+}\delta\right],\ ] ] which are simply the unions of intervals associated with the probabilities present in ( [ g ] ) .the measures of these sets are equal to and respectively .( [ g ] ) rewritten in terms of the above sets simplifies to the form following lenard , we shall further introduce two projectors and , such that for any function , the function has its support equal to and the fourier transform of the function is supported in . if both and are intervals , then according to theorem 4 from ( this theorem in fact formalizes the content of eq .17 from ) the formal candidate for is the square root of the largest eigenvalue of the compact , positive operator . due to proposition 11 ( including the discussion around it ) from ,the above statement remains valid for any sets and . as concluded by lenard , this is a generalization of the seminal results by landau and pollak , who for the first time quantified uncertainty using spheroidal functions .it however happens , that has the largest value exactly in the interval case , so that it can always be upper bounded by the eigenvalue found by landau and pollak : ^{2},\label{17}\ ] ] with being the product of the measures of the two sets in question , that is . since the right hand side of ( [ 17 ] ) is an increasing function of , we can easily find the maximum in ( [ f ] ) .the maximal value of with fixed is given by possibly equal contributions of the both numbers .since and are integers we finally get where and denote the integer valued ceiling and floor functions respectively . and .] if is odd then , and in the simpler case when is an even number .note that the functions ( [ f-1 ] ) form the increasing sequence as desired .the major result of the above considerations is thus the family of new majorization entropic uncertainty relations ( ) : +h_{\alpha}\bigl[p^{\delta}\bigr]\geq\mathcal{r}_{\alpha}^{(n)}\left(\delta\delta/\hbar\right)\equiv h_{\alpha}\bigl[w^{(n)}\left(\delta\delta/\hbar\right)\bigr],\label{meurcg}\ ] ] valid for .as mentioned in section [ secmajrev ] the case of the shannon entropy directly follows from ( [ add2 ] ) , while the range is obtained due to the subadditivity of . 
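To make the majorization machinery above concrete, the following sketch checks a direct-sum majorization relation numerically and evaluates the resulting entropic bound H_alpha[W]. The routine that turns an increasing sequence F_1 <= ... <= F_N = 2 into the candidate vector W via consecutive differences, and the numerical values used for F, are illustrative assumptions; in the paper the F_n come from the Landau-Pollak-type bound discussed above.

```python
import numpy as np

def majorizes(w, r):
    """True if r is majorized by w: partial sums of the decreasingly ordered
    coordinates of w dominate those of r, with equality for the total sum.
    The shorter vector is padded with zeros, as is standard."""
    w = np.sort(np.asarray(w, dtype=float))[::-1]
    r = np.sort(np.asarray(r, dtype=float))[::-1]
    n = max(len(w), len(r))
    w, r = np.pad(w, (0, n - len(w))), np.pad(r, (0, n - len(r)))
    return np.isclose(w.sum(), r.sum()) and np.all(np.cumsum(w) >= np.cumsum(r) - 1e-12)

def renyi_entropy(p, alpha):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p)) if np.isclose(alpha, 1.0) else np.log(np.sum(p ** alpha)) / (1.0 - alpha)

# Two coarse-grained distributions (placeholders for the position and momentum probabilities)
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.6, 0.3, 0.1])
direct_sum = np.concatenate([p, q])          # total weight 2
tensor_product = np.outer(p, q).ravel()      # total weight 1

print(majorizes(q, p))                       # simple check of the routine: True here
print(majorizes([1.0], tensor_product))      # a single unit coordinate majorizes any distribution

# Candidate majorizing vector built from an increasing sequence F_1 <= ... <= F_N = 2
# (the numbers below are purely illustrative stand-ins for F_n).
F = np.array([1.2, 1.6, 1.85, 2.0])
W = np.diff(np.concatenate([[0.0], F]))      # W_1 = F_1, W_k = F_k - F_{k-1}
print(np.all(W >= 0))                        # non-negative, as guaranteed by the monotonicity of F

alpha = 1.0                                  # Shannon case
print(renyi_entropy(W, alpha))               # the candidate lower bound H_alpha[W]
```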
in the case we need to replace the majorization bound according to ( [ modified ] ) , and obtain ] and ] and $ ] .even though , this approach brings the valid unitary matrix , the remaining optimization required by becomes a challenging task .it is my pleasure to thank iwo biaynicki - birula and karol yczkowski for helpful comments .financial support by grant number ip2011 046871 of the polish ministry of science and higher education is gratefully acknowledged .research in freiburg is supported by the excellence initiative of the german federal and state governments ( grant zuk 43 ) , the research innovation fund of the university of freiburg , the aro under contracts w911nf-14 - 1 - 0098 and w911nf-14 - 1 - 0133 ( quantum characterization , verification , and validation ) , and the dfg ( gr 4334/1 - 1 ) . * * .rudnicki , _ uncertainty related to position and momentum localization of a quantum state _ in : `` _ _ proceedings of new perspectives in quantum statistics and correlations _ _ '' , m. hiller , f. de melo , p. pickl , t. wellens , s. wimberger ( eds . ) , universitatsverlag winter , p. 49
|
we improve the entropic uncertainty relations for position and momentum coarse - grained measurements . we derive the continuous , coarse - grained counterparts of the discrete uncertainty relations based on the concept of majorization . the obtained entropic inequalities involve two rnyi entropies of the same order , and thus go beyond the standard scenario with conjugated parameters . in a special case describing the sum of two shannon entropies the majorization - based bounds significantly outperform the currently known results in the regime of larger coarse graining , and might thus be useful for entanglement detection in continuous variables .
|
face recognition ( fr ) is an effortless task for humans while it is difficult task for machines due to pose and illumination variation , ageing , facial growth to mention a few . while having so many difficulties face carries more information as compared to iris , figure , gait etc .fr has advantage that , it can be captured at a distance and in hidden manner whereas other biometric modalities like fingerprint , palm prints and knuckle prints require physical contact with a sensor .the fr field is active research area because of surveillance , pass port , atm etc .the problem with fr is the high - dimensionality of feature space ( original space ) .a straightforward implementation with original image space is computationally costly .therefore , techniques of feature extraction and feature selection are used . in the recent pastmany fr algorithms have been implemented .these algorithms can be roughly classified into three categories : 1 .feature based matching methods that deals with local features such as the eyes , mouth , and nose and their statistics which are fed into the recognition system [ 14 ] .2 . appearance - based schemes that make use of the holistic texture features [ 17 ] .hybrid features that uses local and holistic features for the recognition .it is reported that the hybrid approach gives better recognition results [ 26 ] .one of the implementations of the structure - based schemes is based on the human perception [ 24 ] . in this implementation , a set of geometric face features , such as eyes , nose , mouth corners etc . , is extracted .the features are selected based on the specific facial points chosen by a human expert .the positions of the different facial features form a feature vector which is the input to a structural classifier to identify the subject . in the appearance - based schemes , two of the most popular techniques are principal component analysis ( pca ) [ 20 ] and linear discriminant analysis ( lda ) [ 27 ] . _pca _ : principle component analysis is one of the holistic approaches .this is one of the statistical approaches based on unsupervised learning method . in which the eigen vectors are computed from the covariance matrix .pca model the linear variations in high dimensional data .pca preserves the global structure of the image in the sense the image can be completely reconstructed with pca .it does the guarantees dimensionality reduction by projecting the -dimensional data onto the ( ) dimensional linear subspace .the objective here is to find a set of mutually orthogonal basis functions that capture the directions of maximum variance in the data .since pca is used for the dimensionality reduction , hence it can be used for image compression .we can find many holistic approaches in the literature like kernel pca , 2d - pca complex pca etc _ lda _ : linear discriminant analysis , l is called supervised learning algorithm method .the fisher faces method he build in the concept of lda .lda builds the projection axis in such a way that the data points belonging to same class are nearer . and projection points are for the interclass .lda encodes the discriminating information in a linear separable space using bases which are not necessarily orthogonal unlike in the case of pca .as the face samples involve high dimensional image space , the eigen - faces approach is likely to find the wrong components that leads to poor recognition . 
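As a point of reference for the appearance-based schemes discussed above, the following sketch shows the standard PCA (eigenfaces) projection; the data are random placeholders for vectorized face images and the number of retained components is an arbitrary choice.

```python
import numpy as np

def pca_fit(X, k):
    """X: (n_samples, n_pixels) matrix of vectorized face images.
    Returns the mean face and the k directions of maximum variance
    (eigenvectors of the covariance matrix with largest eigenvalues)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Work with the small n_samples x n_samples Gram matrix, the usual trick
    # when the number of pixels is much larger than the number of images.
    vals, vecs = np.linalg.eigh(Xc @ Xc.T)
    order = np.argsort(vals)[::-1][:k]
    components = Xc.T @ vecs[:, order]            # lift eigenvectors back to pixel space
    components /= np.linalg.norm(components, axis=0)
    return mean, components

def pca_project(X, mean, components):
    return (X - mean) @ components                # low-dimensional feature vectors

X = np.random.rand(40, 64 * 64)                   # toy stand-in for 40 vectorized face images
mean, components = pca_fit(X, k=20)
print(pca_project(X, mean, components).shape)     # (40, 20)
```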
on the other hand ,the fisher - faces are computationally expensive .the ultimate aim of any feature extraction algorithm is , it should build highly describable features .the developed features should produce ideally zero variance within the class . and should produce infinite variance for the interclass .the organization of the rest of paper is as follows .section 2 describes the basic lbp and it s variants .section 3 discusses the fuzzy logic . in section 4 the proposed methodis presented .section 5 describes the experimental setup and results .section 6 gives the conclusions of this paper .in this section brief concepts of basic lbp and it s variants are discussed the basic lbp was proposed by [ 15 ] and some of it s variants are tan and triggs [ 19 ] local ternary pattern ( ltp ) uses three levels ( + 1 , 0,-1 ) , these three level are obtained by quantizing the difference between a central pixel and its neighbouring pixel gray .some of the variants of lbp , are derivative - based lbp [ 9 ] , sobel - lbp [ 25 ] , uniform lbp [ 2 ] , rotation invariant lbp [ 5 ] , multi - dimensional lbp [ 18 ] , dominant lbp [ 13 ] , centre - symmetric lbp [ 21 ] transition lbp [ 1 ] , have been proposed .lbp concept is applied to area like face recognition [ 1 ] , dynamic texture recognition [ 23 ] shape localization [ 9 ] . the local binary pattern ( lbp ) method is widely used in 2d texture analysis .the lbp operator is a non - parametric 3x3 kernel which describes the local spatial structure of an image .it was first introduced by ojala et al [ 15 ] who showed the high discriminative power of this operator for texture classification . at a given pixel position ,lbp is defined as an ordered set of binary comparisons of pixel intensities between the centre pixel and its eight surrounding pixels .the decimal values of the resulting 8-bit word ( lbp code ) leads to 28 possible combinations , which are called local binary patterns abbreviated as lbp codes with the 8 surrounding pixels .the basic lbp operator is a fixed neighbourhood as in fig .[ fig : basiclbpoperator ] . c|c|c|c|c + & 0 & 1 & 0 & + & 1 & & 0 & + & 1 & 0 & 1 & + + the lbp operator can be mathematically expressed as : where corresponds to the gray value of the centre pixel , to the gray values of the 8 surrounding pixels and function s is defined as , uniform lbp binary patterns , proposed by t. ojala et .al [ 16] have certain fundamental properties of texture . in this implementationthey have considered if the frequency of occurrence exceeds 90% then they call it as uniform patterns .the frequency considered here is the transition from zero to one and one to zero . as in the case of basic lbpthe central pixel is taken as threshold and neighbouring pixel is assigned values . in the case of mb - lbp[ 22 ] same procedure is adapted .they consider the rectangular box as the reference .for this reference rectangular box the average value is computed and this average value will be acting as threshed , and the neighbouring blocks are assigned the value based on threshold .3p - lbp [ 3] is obtained by comparing the values of three neighbouring windows to produce a one bit value .a window centred on a pixel and additional windows distributed uniformly in a ring of radius r around each pixel is considered .pairs of windows are compared with the centre patch , and a bit set describing which of the two windows is more similar to the centre patch . 
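For reference, a minimal sketch of the basic 3x3 operator defined above is given here; the clockwise ordering of the neighbours and the use of >= at equality are conventions, and the input image is a random placeholder.

```python
import numpy as np

def lbp_code(window):
    """Basic LBP for a 3x3 window: threshold the 8 neighbours against the centre
    pixel with s(x) = 1 if x >= 0 else 0, and read the bits as a decimal code (0..255)."""
    center = window[1, 1]
    neighbours = [window[0, 0], window[0, 1], window[0, 2], window[1, 2],
                  window[2, 2], window[2, 1], window[2, 0], window[1, 0]]
    return sum((1 if g >= center else 0) << k for k, g in enumerate(neighbours))

def lbp_image(img):
    """Apply the 3x3 operator to every interior pixel of a grayscale image."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i - 1, j - 1] = lbp_code(img[i - 1:i + 2, j - 1:j + 2])
    return out

img = np.random.randint(0, 256, size=(9, 9))
print(lbp_image(img))
```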
in 4p - lbp ,two rings of radii and are considered .the code is then produced by comparing the two centre symmetric windows in the inner ring with two centre symmetric windows in the outer ring positioned patches away along the circle . in lbp variance ( lbpv )[ 5 ] , the contrast of the image is used along with neighbouring points and radii , are used to generate joint distribution of local pattern that yields a dominant texture descriptor .lbpv contains both local pattern and local contrast information .an alternative is the use of a hybrid scheme , lbp variance ( lbpv ) , which uses globally rotation invariant matching with locally variant lbp texture features .lbpv provides efficient joint lbp and contrast distribution where the variance is used as an adaptive weight to adjust the contribution of the lbp code in histogram calculation . in spite of the the great success of lbp in pattern recognition, its underlying working mechanism still needs more investigation . even after having so many versions of lbp the problem still remain to be address other possibilities which can cater better recognition rate in other words which information is missing in the lbp and how to effectively represent the missing information in the lbp style so that better texture classification can be achieved ? looking at the limitation of the pca , lda techniques and taking advantage of information sets , which have been developed by hanmandlu [ 6] to enlarge the scope of fuzzy sets .information set concept is simple understand .the image after portioned is treated as the information source .the information sources contained in windows form the fuzzy sets .the distribution of these information sources in the fuzzy sets requires an appropriate membership function to fit .when fuzzy set coupled with membership function forms the information set .we concentrate on extracting the local fuzzy features from images , instead of considering complete image which is a high - dimensional vector . to extract the simple and robust local features of an image, we use basic local binary pattern ( lbp ) , information set and the central pixel .these features are tested empirically on svm [ 4 ] .fuzzy logic theory has been extensively used in all the fields .it is the extension of crisp set theory and fuzzy logic ( fl ) will helps to deal with imprecise and uncertain data .it tackles the concept of partial truth .it was introduced by prof .zadeh to model the vagueness and ambiguity in complex systems for which there is no mathematical model .the limitation of fuzzy sets is that information source values and their membership function values are treated separately for all the problems dealing with the fuzzy logic theory .the membership function value gives the degree of association of a particular information source value in a fuzzy set [ 6]. the proposed work seeks to combine the information source values with their membership values together called information set .the fuzzifier developed by hanmandlu et.al [ 8 ] the scope to enlarge the fuzzy set . to give an example of agent , consider a set of students graded based on the performance of the class topper as the benchmark and the student s individual performance ( information source value )is determined by comparing his performance with that of the topper ( ) .some of the popular membership functions are as given below . 1 .s shaped membership function is a spline - based curve .the parameters a and b locate the extremes of the sloped portion of the curve as given in equation . 
fig .[ subfig-1:smf ] illustrates s shaped membership function .+ 2 .z shaped membership function is a spline - based function of x. the parameters a and b locate the extremes of the sloped portion of the curve as given in equation .[ subfig-2:zmf ] illustrates z shaped membership function .+ 3 .gaussian membership function the symmetric gaussian function depends on two parameters and as given in equation . where is variance and is center .these two parameter define the shape of the gaussian bell .[ subfig-3:gaussian ] illustrates gaussian membership function .uncertainty appears in the gray values of images . instead of considering the whole image and their gray values, we divide the image into sub images of size ( enclosed in the window ) and extract local information using lbp method .we consider non overlapping sub images .we compute a membership function for each window using s - membership function , z - membership fuction , gaussian membership function , the new proposed membership function and root mean features .for all the features we use central pixel to compute the final feature , the reason being the central pixel value is lost once we obtain the lbp feature .let be the membership value of the window using any of the above method the size of the membership will be when we get this from the window .the procedure is explained in the later part of this section .let s be the window in the image domain .let be the information set obtained from the window , that is the element by element product of the and denoted as , where .x represents the element by element multiplication .after element by element multiplication , we get matrix .we take the sum as given below . where is the information set obtained after the summation .once we obtain the , next we compute the lbp code the example is as given in fig .[ fig : basiclbpoperator ] . in order to take account of the information from the neighbourhood pixels ( information sources ) ,we compute the lbp value for the window .this is given by : we convert the lbp code to the decimal value .let the value of lbp from a window be denoted by .the new membership function based feature is given by where represent the centre pixel in the window .the new membership function is computed from the information of local structural details like average of the window and maximum in the window under consideration .the new membership function is given by where represent the normalized intensity value which we will obtain by dividing the maximum in the image . is the average in the window under consideration and is the maximum in the window under consideration .algorithm [ newmf ] describes various steps involved for the computation of features using various membership functions for a given image . *input : * face image * output : * new membership function based feature vector for image normalize the image by dividing each pixel value with the maximum intensity value partition the image into windows // compute the proposed new membership function values for //central pixel value compute the membership value using , , or .compute information set value using compute lbp value using computer the feature using // store in feature vector this feature depend upon the two parameters on is the fuzzifier and the central pixel value and the lbp code .the fuzzifier developed by prof .hanmandlu has the scope to expand the fuzzy set .the fuzzifier in is devised by hanmandlu et al . 
in [ 7 ] it gives the spread of attribute values with respect to the chosen reference ( symbolized as ref ) .it is defined as can be taken as average or minimum or maximum from the window under consideration .it may be noted that the above fuzzifier gives more spread than is possible with variance as used in the gaussian function . contribution of this method is that it eliminates the shortcoming of lbp approach which ignores the central pixel value and it considers information of both central and neighbourhood pixels .the proposed method is simple and computationally efficient .a large number of experiments were conducted on orl and sheffield databases , shows the effectiveness of our method .olivetti research laboratory ( orl ) in cambridge , u.k .at&t database has 400 images with 40 different subjects with 10 images per subject of size .sheffield database has 20 classes the size of the each image is approximately .with lot of orientation of head , the degree of rotation is about the center axis .the 20 subjects of sheffield database are kept in separate folders .number of images in each older varies in number . in both databases , we normalize all of the gray ( pixel ) values by dividing them by the maximum gray value in that image .so the range of the pixel values lies in the range 0 to 1 ( also known as , the normalized information source values ) . for both databaseswe have resized the images to integer multiplication of 3 so as to avoid the padding of zeros at the end . [ cols="<,^,^,^,^,^ " , ] the experiment is also conducted on sheffield database .firstly the images are resized to . after extracting the features for the complete database ,the database is divided into train set and the test set .the first 50% dataset is used for training and remaining 50% is used for testing . for sheffield database the recognition rate are given in table [ tab : sheffield ] .the good performance is achieved from rms features for the degree of polynomial 1,2 , and the second best performance is given by the new mf .with rms features the recognition rate is 97.14% with svm poly 1.the weak performer is the s - mf as the svm poly 2 has given a recognition rate of 85.71% .the k - fold validation is done on the sheffield data base .[ fig : kfold_sheffield ] shows the k - fold results of the recognition on sheffield database .the tabular representation is shown in the table 2 for the k - fold results .database the best performances are given by z - mf , gaussian - mf and new mf with the average of all ten folds as 100% where as the root mean square ( rms ) and features obtained from s - mf with minimum recognition rate as 90% and maximum recognition rate as 100% and average recognition rate of 99% . here in this casethe new - mf has worked well . whereas the rms features are second best .receiver operating characteristic ( roc ) curve is also plotted . 
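Putting the pieces of algorithm [newmf] and the evaluation protocol just described together, the sketch below extracts one feature per non-overlapping 3x3 window from the information-set value, the LBP code (as in the earlier sketch) and the central pixel. Since the exact combination formula and the proposed new membership function did not survive extraction, the Gaussian membership function and a simple product are used here purely as placeholders.

```python
import numpy as np

def gaussian_mf(window, sigma=None):
    """Gaussian membership values for a window, centred at the window average;
    sigma defaults to the window standard deviation (a simple stand-in for the fuzzifier)."""
    c = window.mean()
    sigma = window.std() + 1e-8 if sigma is None else sigma
    return np.exp(-((window - c) ** 2) / (2.0 * sigma ** 2))

def lbp_code(window):
    center = window[1, 1]
    neighbours = [window[0, 0], window[0, 1], window[0, 2], window[1, 2],
                  window[2, 2], window[2, 1], window[2, 0], window[1, 0]]
    return sum((1 if g >= center else 0) << k for k, g in enumerate(neighbours))

def fuzzy_lbp_features(img, membership=gaussian_mf,
                       combine=lambda x_c, lbp, info: x_c * lbp * info):
    """One feature per non-overlapping 3x3 window: information-set value
    (sum of membership * intensity), LBP code and central pixel, merged by
    the placeholder `combine` function."""
    img = img / img.max()                       # normalized information source values
    h, w = img.shape
    feats = []
    for i in range(0, h - h % 3, 3):
        for j in range(0, w - w % 3, 3):
            win = img[i:i + 3, j:j + 3]
            info = np.sum(membership(win) * win)      # information set value of the window
            feats.append(combine(win[1, 1], lbp_code(win), info))
    return np.array(feats)

img = np.random.rand(90, 90)                    # toy stand-in for a resized face image
print(fuzzy_lbp_features(img).shape)            # (900,) -- one feature per window
```

Feature vectors produced this way can then be fed, for instance, to scikit-learn's SVC with a polynomial kernel of degree 1 or 2, trained on half of each subject's images, which mirrors the 50/50 split and k-fold protocols used in the experiments reported here.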
for obtaining the roc curves we have taken the k - nn classifier .[ fig : roc_sheffield ] shows the roc results of the recognition on sheffield database .the recognition rates at 0.1 of far are as follows with s - mf the recognition rate we got is 62% , with z - mf the recognition rate 65% , with gaussian - mf and the new - mf recognition rate 78% , and with rms features the recognition rate obtained is 73% .the better performance is given by gaussian mf and the least performance is given by s - mf .comparisons with other works : arindam kar et .al [ 12 ] reported the recognition rate on orl database with principle component analysis as 80.5% and with independent component analysis ( ica ) with 85% .arindam kar et .al in another work [ 11 ] have reported the recognition rate with principle component analysis as 82.86% .author in [ 10 ] has reported recognition rate of 63% on locally preserving projection ( lpp ) .all the recognition are done with orl database .in this research work , we have developed five different fuzzy local binary patterns ( fuzzy lbp ) as a local descriptor for face recognition .the features are designed keeping the aim to improve the recognition rate .our main contributions include : the use of information set to lbp and to use it for face recognition to produce highly describable features .the information that is lost in computation of basic lbp is now used for computation of final feature . in order to generate the optimal number of features ,only non overlapping windows are considered , and hence computations of histogram are avoided .the proposed approach is found to be effective on images having variation in expressions , illumination and pose .the experiments reveal that better the representation of the uncertainty , better will be the recognition rates .further work can be extended to use the type 2 fuzzy set , and design of new classifier .timo ahonen , abdenour hadid , and matti pietikainen .face description with local binary patterns : application to face recognition ._ pattern analysis and machine intelligence , ieee transactions on , _ 28(12):2037 - 2041 , 2006 .zhimin cao , qi yin , xiaoou tang , and jian sun .face recognition with learning - based descriptor . in _computer vision and pattern recognition ( cvpr ) , 2010 ieee conference on , _ pages 2707 - 2714 .ieee , 2010 .xiangsheng huang , stan z li , and yangsheng wang .shape localization based on statistical method using extended local binary pattern . 
in _ multi - agent security and survivability , 2004 ieee first symposium on , _ pages 184 - 187 .ieee , 2004 .nisar hundewale .face recognition using combined global local preserving projections and compared with various methods ._ international journal of scientific and engineering research , volume 3 , issue 3 , march-2012 , _ 3:1 - 4 , 2012 .arindam kar , debotosh bhattacharjee , dipak kumar basu , mita nasipuri , and mahantapas kundu .an adaptive block based integrated ldp , glcm , and morphological features for face recognition ._ arxiv preprint arxiv:1312.1512 , _ 2013 .arindam kar , debotosh bhattacharjee , dipak kumar basu , mita nasipuri , and mahantapas kundu .a face recognition approach based on entropy estimate of the nonlinear dct features in the logarithm domain together with kernel entropy component analysis ._ arxiv preprint arxiv:1312.1520 , _ 2013 .timo ojala , matti pietikainen , and topi maenpaa .multiresolution gray - scale and rotation invariant texture classification with local binary patterns ._ pattern analysis and machine intelligence , ieee transactions on , _ 24(7):971 - 987 , 2002 .alex pentland , baback moghaddam , and thad starner .view - based and modular eigenspaces for face recognition . in _computer vision and pattern recognition , 1994 .proceedings cvpr94 . , 1994 ieee computer society conference on , _ pages 84 - 91 .ieee , 1994 .gerald schaefer and niraj p doshi .multidimensional local binary pattern descriptors for improved texture analysis . in _ pattern recognition ( icpr ) , 2012 21st international conference on , _ pages 2500 - 2503 .ieee , 2012 .matthew a turk and alex p pentland .face recognition using eigenfaces . in _computer vi new fuzzy lbp features for face recognition 11 vision and pattern recognition , 1991 .proceedings cvpr91 . ,ieee computer society conference on , _ pages 586 - 591 .ieee , 1991 .peng wang , shankar m krishnan , c kugean , and mp tjoa . classification of endoscopic images based on texture and neural network . in _ engineering in medicine and biology society , 2001 .proceedings of the 23rd annual international conference of the ieee , _ volume 4 , pages 3691 - 3695 .ieee , 2001 . guoying zhao and matti pietikainen .dynamic texture recognition using local binary patterns with an application to facial expressions ._ pattern analysis and machine intelligence , ieee transactions on , _ 29(6):915 - 928 , 2007 . *abdullah gubbi * is currently working as an associate professor in pa college of engineering , mangalore .he obtained his bachelor of engineering from gulbarga university , gulbarga .he received his masters degree in electronics from walchand college of engineering sangli maharashtra .his areas of interest include image processing , pattern recognition and vlsi design .+ * mohammad fazle azeem * is currently working as a professor in amu aligarh ( u.p ) .he obtained his bachelor of engineering from m.m.m .engineering college , university of gorakhpur , gorakhpur ( u.p . ) india .he received his masters degree in electrical engineering from aligarh muslim university , aligarh , india .he obtained his ph.d ., from indian institute of technology , delhi , new delhi , india .his areas of interest include control system , image processing and vlsi design . 
+ *zahid ahmed ansari * is working as professor in the department of computer science engineering , p.a .college of engineering , mangalore , india .he received his m.e .degree from birla institute of technology , pilani , india and his ph.d .degree from jawaharlal nehru technological university , kakinada , india .he has thirty research papers to his credit , published in various international journals and conferences .his areas of research include data mining , soft computing , high performance computing and model driven software development .he is a life member of csi and also a member of acm
|
there are many local texture features, each differing in the way it is implemented, and each algorithm trying to improve the recognition performance. an attempt is made in this paper to present a theoretically very simple and computationally effective approach for face recognition. in our implementation the face image is divided into 3x3 sub-regions, from which the features are extracted using the local binary pattern (lbp) over a window, a fuzzy membership function, and the central pixel. the lbp features possess the texture discriminative property and their computational cost is very low. by utilising the information from the lbp, the membership function, and the central pixel, the limitations of the traditional lbp are eliminated. benchmark databases such as the orl and sheffield databases are used for the evaluation of the proposed features with an svm classifier. for the proposed approach, k-fold and roc curve results are obtained and compared.
*keywords:* face recognition, fuzzy logic, information set, local binary pattern, svm.
|
thermal interaction between fluids and structures plays an important role in many applications .examples for this are cooling of gas - turbine blades , thermal anti - icing systems of airplanes or supersonic reentry of vehicles from space .another is quenching , an industrial heat treatment of metal workpieces . there ,the desired material properties are achieved by rapid local cooling , which causes solid phase changes , allowing to create graded materials with precisely defined properties . gas quenching recently received a lot of industrial and scientific interest .in contrast to liquid quenching , this process has the advantage of minimal environmental impact because of non - toxic quenching media and clean products like air . to exploit the multiple advantages of gas quenching the application of computational fluid dynamicshas proved essential .thus , we consider the coupling of the compressible navier - stokes equations as a model for air , along a non - moving boundary with the nonlinear heat equation as a model for the temperature distribution in steel . for the solution of the coupled problem , we prefer a partitioned approach , where different codes for the sub - problems are reused and the coupling is done by a master program which calls interface functions of the other codes .this allows to use existing software for each sub - problem , in contrast to a monolithic approach , where a new code is tailored for the coupled equations . to satisfy the boundary conditions at the interface, the subsolvers are iterated in a fixed point procedure .our goal here is to find a fast solver in this partitioned setting .one approach would be to speed up the subsolvers and there is active research on that .see for the current situation for fluid solvers . however , we want to approach the problem from the point of view of a partitioned coupling method , meaning that we use the subsolvers as they are . as a reference solver ,we use the time adaptive higher order time integration method suggested in .namely , the singly diagonally implicit runge - kutta ( sdirk ) method sdirk2 is employed . to improve upon this ,one idea is to define the tolerances in the subsolver in a smart way and recently , progress has been made for steady problems .however , it is not immediately clear how to transfer these results to the unsteady case .thus , the most promising way is to reduce the number of fixed point iterations , on which we will focus in the present article .various methods have been proposed to increase the convergence speed of the fixed point iteration by decreasing the interface error between subsequent steps , for example relaxation , interface - gmres , rom - coupling and multigrid coupling . here , we consider the most standard method , namely aitken relaxation and two variants of polynomial vector extrapolation , namely mpe and rre .these have the merit of being purely algebraic and very easy to implement . the second ideawe follow is that of extrapolation based on knowledge about the time integration scheme .this has been successfully used in other contexts , but has to our knowledge never been tried in fluid structure interaction , where typically little attention is given to the time integration . 
here, we use linear and quadratic extrapolation of old values from the time history , designed specifically for sdirk2 .the various methods are compared on the basis of numerical examples , namely the flow past a flat plate , a basic test case for thermal fluid structure interaction and an example from gas quenching .the basic setting we are in is that on a domain the physics is described by a fluid model , whereas on a domain , a different model describing the structure is used .the two domains are almost disjoint in that they are connected via an interface . the part of the interface where the fluid and the structure are supposed to interact is called the coupling interface . note that might be a true subset of the intersection , because the structure could be insulated . at the interface , coupling conditions are prescribed that model the interaction between fluid and structure . for the thermal coupling problem , these conditions are that temperature and the normal component of the heat flux are continuous across the interface .we model the fluid using the navier - stokes equations , which are a second order system of conservation laws ( mass , momentum , energy ) modeling viscous compressible flow .we consider the two dimensional case , written in conservative variables density , momentum and energy per unit volume : -.6 cm here , represents the viscous shear stress tensor and the heat flux . as the equation are dimensionless , the reynolds number and the prandtl number appear .the system is closed by the equation of state for the pressure , the sutherland law representing the correlation between temperature and viscosity as well as the stokes hypothesis .additionally , we prescribe appropriate boundary conditions at the boundary of except for , where we have the coupling conditions . in the dirichlet - neumann coupling, a temperature value is enforced strongly at . regarding the structure model, we will consider heat conduction only .thus , we have the nonlinear heat equation for the structure temperature -.6 cm where denotes the heat flux vector . for steel ,the specific heat capacity and heat conductivity are temperature - dependent and highly nonlinear . here , an empirical model for the steel 51crv4 suggested in is used .this model is characterized by the coefficient functions -.6 cm and -.6 cm with -.6 cm and -.6 cm for the mass density one has . finally ,on the boundary , we have neumann conditions .following the partitioned coupling approach , we discretize the two models separately in space . for the fluid, we use a finite volume method , leading to where represents the spatial discretization and its dependence on the temperatures in the fluid .in particular , the dlr tau - code is employed , which is a cell - vertex - type finite volume method with ausmdv as flux function and a linear reconstruction to increase the order of accuracy .regarding structural mechanics , the use of finite element methods is ubiquitious .therefore , we will also follow that approach here and use quadratic finite elements , leading to the nonlinear equation for all unknowns on here , is the heat capacity and the heat conductivity matrix .the vector consists of all discrete temperature unknowns and is the heat flux vector on the surface . in this caseit is the prescribed neumann heat flux vector of the fluid . 
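A minimal sketch of how the resulting semi-discrete structure equations can be advanced in time is given below; the generic form M dθ/dt + A(θ)θ = q_b, the placeholder matrices and the use of a single implicit Euler step solved by a Newton-type method (scipy's fsolve) are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import fsolve

# Small placeholder model standing in for the quadratic finite element discretization.
n = 5
M = np.eye(n)                                        # heat capacity matrix (placeholder)

def A(theta):                                        # temperature-dependent conductivity matrix
    lam = 1.0 + 0.01 * np.mean(theta)                # toy nonlinear coefficient
    return lam * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))

def q_b(t):                                          # heat flux vector on the coupling surface
    q = np.zeros(n)
    q[0] = 1.0
    return q

def implicit_euler_step(theta_n, t_n, dt):
    """One implicit Euler step for M dtheta/dt + A(theta) theta = q_b(t),
    solved with a Newton-type method."""
    def residual(theta):
        return M @ (theta - theta_n) + dt * (A(theta) @ theta - q_b(t_n + dt))
    return fsolve(residual, theta_n)

theta = 300.0 * np.ones(n)
print(implicit_euler_step(theta, 0.0, 0.1))
```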
for the time integration , a time adaptive sdirk2 methodis implemented in a partitioned way , as suggested in .if the fluid and the solid solver are able to carry out time steps of implicit euler type , the master program of the fsi procedure can be extended to sdirk methods very easily , since the master program just has to call the backward euler routines with specific time step sizes and starting vectors .this method is very efficient and will be used as the base method in its time adaptive variant , which is much more efficient than more commonly used fixed time step size schemes . to obtain time adaptivity ,embedded methods are used .thereby , the local error is estimated by the solvers separately , which then report the estimates back to the master program .based on this , the new time step is chosen . to this end , all stage derivatives are stored by the subsolvers .furthermore , if the possibility of rejected time steps is taken into account , the current solution pair has to be stored as well . to comply with the conditions that the discrete temperature and heat flux are continuous at the interface , a dirichlet - neumann coupling is used .thus , the boundary conditions for the two solvers are chosen such that we prescribe neumann data for one solver and dirichlet data for the other . following the analysis of giles ,temperature is prescribed for the equation with smaller heat conductivity , here the fluid , and heat flux is given on for the structure .choosing these conditions the other way around leads to an unstable scheme . in the following itis assumed that at time , the step size is prescribed . applying a dirk method to equation ( [ eq : odefluid])-([eq : odeheat ] ) results in the coupled system of equations to be solved at runge - kutta stage : -.6 cm -.6 cm { \boldsymbol{\theta}}_i - { \ensuremath{\mathbf{m } } } { \ensuremath{\mathbf{s}}}_i^{\theta } - { \delta t_n\,}a_{ii}{\ensuremath{\mathbf{q}}}_b({\bf u}_i ) = { \ensuremath{\mathbf{0}}}. \end{aligned}\ ] ] here , is a coefficient of the time integration method and and are given vectors , called starting vectors , computed inside the dirk scheme . the dependence of the fluid equations on the temperature results from the nodal temperatures of the structure at the interface .this subset is written as .accordingly , the structure equations depend only on the heat flux of the fluid at the coupling interface .to solve the coupled system of nonlinear equations ( [ eq : fnoneq])-([eq : tlineq ] ) , a strong coupling approach is employed .thus , a fixed point iteration is iterated until a convergence criterion is satisfied . in particular, we use a a nonlinear gau - seidel process : each inner iteration is thereby done locally by the structure or the fluid solver .more specific , a newton method is used in the structure and a fas multigrid method is employed in the fluid . in the base method ,the starting values of the iteration are given by and .the termination criterion is formulated by the relative update of the nodal temperatures at the interface of the solid structure and we stop once we are below the tolerance in the time integration scheme divided by five -.6 cm the vector -.6 cm is often referred to as the interface residual. we will now consider different techniques to improve upon this base iteration , namely using vector extrapolation inside the fixed point iteration and then extrapolation inside the time integration schemes , to obtain better initial values . 
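The coupling loop just described, together with the two acceleration ideas taken up in the next sections, can be summarized in a few lines. In the sketch below, fluid_solve and structure_solve are hypothetical stand-ins for the interface routines of the two codes, the initial relaxation factor 0.5 is a placeholder (the value actually prescribed in the first iteration is not given here), and the Aitken recursion and the Lagrange-type extrapolation formulas are written in their generic forms.

```python
import numpy as np

def coupled_stage(theta_gamma, tol, fluid_solve, structure_solve,
                  relax=False, omega0=0.5, max_iter=50):
    """One implicit stage of the Dirichlet-Neumann coupling: nonlinear Gauss-Seidel
    over the two subsolvers, terminated on the relative update of the interface
    temperatures (tol is the time integration tolerance divided by five).
    With relax=True the update is damped by an Aitken relaxation factor."""
    residual_old, omega = None, omega0
    for nu in range(1, max_iter + 1):
        q_gamma = fluid_solve(theta_gamma)          # fluid step with prescribed temperatures
        theta_tilde = structure_solve(q_gamma)      # structure step with prescribed heat fluxes
        residual = theta_tilde - theta_gamma        # interface residual
        if np.linalg.norm(residual) <= tol * np.linalg.norm(theta_tilde):
            return theta_tilde, nu
        if relax and residual_old is not None:      # Aitken recursion for omega
            dr = residual - residual_old
            omega = -omega * np.dot(residual_old, dr) / np.dot(dr, dr)
        theta_gamma = theta_gamma + (omega if relax else 1.0) * residual
        residual_old = residual
    return theta_gamma, max_iter

def extrapolate_linear(t_prev, y_prev, t_now, y_now, t_target):
    """Linear extrapolation of interface temperatures to the next stage time."""
    return y_now + (t_target - t_now) / (t_now - t_prev) * (y_now - y_prev)

def extrapolate_quadratic(ts, ys, t_target):
    """Quadratic (Lagrange) extrapolation through three known (time, value) pairs."""
    (t0, t1, t2), (y0, y1, y2) = ts, ys
    l0 = (t_target - t1) * (t_target - t2) / ((t0 - t1) * (t0 - t2))
    l1 = (t_target - t0) * (t_target - t2) / ((t1 - t0) * (t1 - t2))
    l2 = (t_target - t0) * (t_target - t1) / ((t2 - t0) * (t2 - t1))
    return l0 * y0 + l1 * y1 + l2 * y2
```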
to improve the convergence speed of the fixed point iteration , different vector extrapolation techniques have been suggested .these are typically classic techniques , where a set of vectors of a convergent vector sequence is extrapolated to obtain a faster converging sequence .we are now going to describe three techniques that we will investigate in this framework .relaxation means that after the fixed point iterate is computed , a relaxation step is added : -.6 cm several strategies exist to compute the relaxation parameter .the idea of aitken s method is to enhance the current solution using two previous iteration pairs and obtained from the gau - seidel - step ( [ eq : gsf])- .an improvement in the scalar case is given by the secant method -.6 cm the relaxation factor in equation ( [ eq : relax_a ] ) for the secant method ( [ eq : secant ] ) is then -.6 cm as customary , we use an added recursion on in which we use the old relaxation factor : -.6 cm in the vector case the division by the residual is not possible .therefore , we multiply the nominator and the numerator formally by to obtain -.6 cm two previous steps are required to calculate the relaxation parameter . for the first fixpoint iteration , the relaxation parameter must be prescribed .we choose , which was reported by other authors to work well .another idea we will follow here are minimal polynomial extrapolation ( mpe ) and reduced rank extrapolation ( rre ) . here , the new approximation is given as a linear combination of existing iterates with coefficients to be determined : -.6 cm for mpe , the coefficients are defined via -.6 cm where the coefficients are the solution of the problem -.6 cm for rre , the coefficients are defined as the solution of the constrained least squares problem -.6 cm these problems are then solved using a qr decomposition . to find good starting values for iterative processes in implicit time integration schemes ,it is common to use extrapolation based on knowledge about the trajectory of the solution of the initial value problem . in the spirit of partitioned solvers, we here suggest to use extrapolation of the interface temperatures only .of course , this strategy could be used as well by the subsolvers , but we will not consider this here . at the first stage , we have the old time step size with value and the current time step size with value .we are looking for the value at the next stage time .linear extrapolation results in regarding quadratic extrapolation , it is reasonable to choose , and the intermediate temperature vector from the previous stage .this results in when applying this idea at the second stage ( or at later stages in a scheme with more than two ) , it is better to use values from the current time interval . thus , we linearly extrapolate at and at to obtain this results in a first test case , the cooling of a flat plate resembling a simple work piece is considered .the work piece is initially at a much higher temperature than the fluid and then cooled by a constant air stream , that is assumed to be laminar , see figure [ fig : testcase ] .the inlet is given on the left , where air enters the domain with an initial velocity of in horizontal direction and a temperature of .then , there are two succeeding regularization regions of to obtain an unperturbed boundary layer . in the first region , , symmetry boundary conditions , , , are applied . in the second region , , a constant wall temperature ofis specified . 
within this region the velocity boundary layer fully develops .the third part is the solid ( work piece ) of length , which exchanges heat with the fluid , but is assumed insulated otherwise , thus .therefore , neumann boundary conditions are applied throughout .finally , the fluid domain is closed by a second regularization region of with symmetry boundary conditions and the outlet . regarding the initial conditions in the structure ,a constant temperature of at is chosen throughout . to specify reasonable initial conditions within the fluid , a steady state solution of the fluid with a constant wall temperature computed .the grid is chosen cartesian and equidistant in the structural part , where in the fluid region the thinnest cells are on the boundary and then become coarser in -direction ( see figure [ fig : grid ] ) . to avoid additional difficulties from interpolation , the points of the primary fluid grid , where the heat flux is located in the fluid solver , and the nodes of the structural grid are chosen to match on the interface . to compare the effect of the different vector extrapolation strategies , we consider the fixed point equation within the first stage of the first time step in the test problem with a time step size of and . in figure[ fig : onesystem ] , we can see how the interface residual decreases with the fixed point iterations . during the first two steps all methods have the same residual norm , since all methods need at least two iterations to start . for this example , the vector extrapolation methods outperform the standard scheme for tolerances below ..[tab : plate - fixed - v - adaptive]total number of iterations for 100 secs of real time without any extrapolation . fixed timestepsizes versus adaptive steering . [ cols="^,^,^,^",options="header " , ] [ tab : plate - fixed - timestep ] we now compare the different schemes for a whole simulation of seconds real time . if not mentioned otherwise , the initial time step size is . to first give an impression on the effect of the time adaptive method, we look at fixed time step versus adaptive computations in tabular [ tab : plate - fixed - timestep ] .thus , the different tolerances for the time adaptive case lead to different time step sizes and tolerances for the nonlinear system over the course of the algorithm , whereas in the fixed time step size , they steer only how accurate the nonlinear systems are solved . for the fixed time step case , we chose and , which roughly corresponds to an error of and , respectively .thus , computations in one line of tabular [ tab : plate - fixed - timestep ] correspond to similar errors . as can be seen , the time adaptive method is in the worst case a factor two faster and in the best case a factor of eight . thus the time adaptive computation serves from now on as the base method for the construction of a fast solver .* 6c & no relax . &aitken & mpe & rre + & 31 & 32 & 31 & 31 + & 39 & 38 & 39 & 39 + & 106 & 103 & 106 & 106 + & 857 & 736 & 857 & 857 + the next computations demonstrate the effect of vector extrapolation . with increasing time the time adaptive algorithm chooses larger time step sizes .the base method needs more fixed point iterations in the end of the time interval , while the other methods have remained roughly constant .the total number of fixed point iterations is shown in tabular [ tab:100secs ] .as we can see , only aitken relaxation has an advantage over the base method and that only for a tolerance of . 
for larger tolerances, all the methods need roughly the same number of iterations, which is also confirmed in figure [fig:onesystem], where all methods overlap for the larger tolerances. essentially, the interplay between the fixed point iteration and the time adaptive scheme results in only two fixed point iterations being necessary per time step (compare the termination criterion). thus, the vector extrapolation methods have no effect.

finally, we consider extrapolation based on the time integration scheme. in table [tab:extrap-plate], the total number of iterations for 100 seconds of real time is shown. as can be seen, linear extrapolation speeds up the computations at every tolerance considered. quadratic extrapolation leads to a smaller speedup and is overall less efficient than the linear extrapolation procedure. overall, we are thus able to simulate 100 seconds of real time for this problem for an engineering tolerance using only 19 calls to the fluid and the structure solver each.

[tab:extrap-plate] total number of iterations for 100 secs of real time with extrapolation based on the time integration scheme (rows ordered from the coarsest to the finest tolerance):

  none   linear   quadratic
    31       19          25
    39       31          32
   106       73          77
   857      415         414

to understand this more precisely, we considered the second stage of the second time step in an adaptive computation. we thus have finished the first time step and the second time step size gets doubled. this is depicted in figure [fig:compare_extrapolation]. to obtain a temperature for the new time, the linear extrapolation method uses the values at the current time and at the first runge-kutta stage. as can be seen, this predicts the new stage value very well. in contrast, the quadratic extrapolation additionally uses the solution from the previous time step, together with the value at the current time and at the first runge-kutta stage. since the exact solution has a more linear behavior in the time step, the quadratic extrapolation provides no advantage, in particular since it slopes upward after some point.

[figure [fig:compare_extrapolation]: comparison of the linear and quadratic extrapolation methods for the time step; the plot shows the linear extrapolation, the quadratic extrapolation and the final solutions as temperature [k] over time [s].]

as a second test case, we consider the cooling of a flanged shaft by cold high pressured air, a process that is also known as gas quenching. the complete process consists of the inductive heating of a steel rod, the forming of the hot rod into a flanged shaft, a transport to a cooling unit and the cooling process.
here, we consider only the cooling , meaning that we have a hot flanged shaft that is cooled by cold high pressured air coming out of small tubes .we consider a two dimensional cut through the domain and assume symmetry along the horizontal axis , resulting in one half of the flanged shaft and two tubes blowing air at it , see figure [ fig : flangesketch ] .we assume that the air leaves the tube in a straight and uniform way at a mach number of 1.2 .furthermore , we assume a freestream in -direction of mach 0.005 .this is mainly to avoid numerical difficulties at mach 0 , but could model a draft in the workshop .the reynolds number is and the prandtl number .the grid consists of 279212 cells in the fluid , which is the dual grid of an unstructured grid of quadrilaterals in the boundary layer and triangles in the rest of the domain , and 1997 quadrilateral elements in the structure .it is illustrated in figure [ fig : flangegrid ] . to obtain initial conditions for the subsequent tests , we use the following procedure: we define a first set of initial conditions by setting the flow velocity to zero throughout and choose the structure temperatures at the boundary points to be equal to temperatures that have been measured by a thermographic camera .then , setting the -axis on the symmetry axis of the flange , we set the temperatur at each horizontal slice to the temperature at the correspoding boundary point .finally , to determine the actual initial conditions , we compute seconds of real time using the coupling solver with a fixed time step size of .this means , that the high pressured air is coming out of the tubes and the first front has already hit the flanged shaft .this solution is illustrated in figure [ fig : temperatureflange ] ( left ) .the wiggles in the structure are due to visualization artifacts .now , we compute 1 second of real time using the time adaptive algorithm with different tolerances and an initial time step size of .this small initial step size is necessary to prevent instabilities in the fluid solver .during the course of the computation , the time step size is increased until it is on the order of , which demonstrates the advantages of the time adaptive algorithm and reaffirms that it is this algorithm that we need to compare to . in total, the time adaptive method needs 22 , 41 , 130 and 850 time steps to reach for the different tolerances , compared to the steps the fixed time step method would need .the solution at the final time is depicted in figure [ fig : temperatureflange ] ( right ) . as can be seen , the stream of cold air is deflected by the shaft .we then compare the total number of iterations for the different vector extrapolation methods , see table [ tab : flangedvectorextr ] .as before , the vector extrapolation methods have almost no effect on the number of iterations .* 6c & & no relax . &aitken & mpe & rre + & & 52 & 52 & 52 & 52 + & & 127 & 128 & 127 & 127 + & & 433 & 430 & 433 & 433 + & & 2874 & 2859 & 2874 & 2874 + finally , we consider extrapolation based on the time integration scheme . in table [ tab : extrap - shaft ] , the total number of iterations for 1 second of real time is shown .as before , the extrapolation methods cause a noticable decrease in the total number of fixed point iterations , with linear extrapolation performing better than the quadratic version .the speedup from linear extrapolation is between 20% and 30% , compared to the results obtained without extrapolation . *4c & none & lin . 
&+ & 52 & 42 & 47 + & 127 & 97 & 99 + & 433 & 309 & 312 + & 2874 & 1805 & 1789 +we considered a time dependent thermal fluid structure interaction problem where a nonlinear heat equation to model steel is coupled with the compressible navier - stokes equations .the coupling is performed in a dirichlet - neumann manner . as a fast base solver ,a higher order time adaptive method is used for time integration .this method is significantly more efficient than a fixed time step method and is therefore the scheme to beat . to reduce the number of fixed point iterations in a partitioned spirit, first different vector extrapolation techniques , namely aitken relaxation , mpe and rre were compared .these have a negligible effect , since they are only useful when a large number of iterations is needed per system and the time adaptive method results in only two iterations being necessary per time step .however , extrapolation based on the time integration method works from the first iteration and reduces the number of iterations by up to .hereby , linear extrapolation works better than quadratic .the combined time adaptive method with linear extrapolation thus allows to solve real life problems at engineering tolerances using only a couple dozen calls to the fluid and structure solver .financial support has been provided by the german research foundation ( dfg ) via the sonderforschungsbereich transregio 30 , projects c1 and c2 . , _ cfd - based nonlinear computational aeroelasticity _ , in encyclopedia of computational mechanics , e. stein , r. de borst , and t. j. r. hughes , eds . ,vol . 3 : fluids , john wiley & sons , 2004 , ch . 13 , pp. 459480 .
|
we consider time dependent thermal fluid structure interaction. the respective models are the compressible navier-stokes equations and the nonlinear heat equation. a partitioned coupling approach via a dirichlet-neumann method and a fixed point iteration is employed. as a reference solver, a previously developed efficient time adaptive higher order time integration scheme is used. to improve upon this, we work on reducing the number of fixed point coupling iterations. thus, first, widely used vector extrapolation methods for convergence acceleration of the fixed point iteration are tested. in particular, aitken relaxation, minimal polynomial extrapolation (mpe) and reduced rank extrapolation (rre) are considered. second, we explore the idea of extrapolation based on data given from the time integration and derive such methods for sdirk2. while the vector extrapolation methods have no beneficial effects, the time integration based extrapolation allows us to reduce the number of fixed point iterations further, by up to a factor of two, with linear extrapolation performing better than quadratic. _ for the mathematical sciences, numerical analysis, lunds university, box 118, 22100 lund, sweden + e-mail: philipp.birken.lu.se + of mathematics, university of kassel, heinrich-plett-str. 40, 34132 kassel, germany + institute of mechanics and dynamics, university of kassel, mönchebergstr. 7, 34109 kassel, germany _ _ keywords: thermal fluid structure interaction, partitioned coupling, convergence acceleration, extrapolation _
|
after decades receiving little attention from non - scientists , the impacts of climate change are now widely discussed through a variety of mediums .originating from scientific papers , newspaper articles , and blog posts , a broad spectrum of climate change opinions , subjects , and sentiments exist .newspaper articles often dismiss or sensationalize the effects of climate change due to journalistic biases including personalization , dramatization and a need for novelty .scientific papers portray a much more realistic and consensus view of climate change .these views , however , do not receive widespread media attention due to several factors including journal paywalls , formal scientific language , and technical results that are not easy for the general public to understand . according to the ipcc fifth assessment report, humans are `` very likely '' ( 90 - 100% probability ) to be responsible for the increased warming of our planet , and this anthropogenic global warming is responsible for certain weather extremes . in april 2013 ,63% of americans reported that they believe climate change is happening .this number , however , drops to 49% when asked if climate change is being caused by humans .the percentage drops again to 38% when asked if people around the world are currently being harmed by the consequences of climate change .these beliefs and risk perceptions can vary by state or by county .by contrast , 97% of active , publishing , climate change scientists agree that `` human activity is a significant contributing factor in changing mean global temperatures '' .the general public learns most of what it knows about science from the mass - media .coordination among journalists , policy actors , and scientists will help to improve reporting on climate change , by engaging the general public and creating a more informed decision - making process .one popular source of climate information that has not been heavily analyzed is social media .the pew research center s project for excellence in journalism in january of 2009 determined that topics involving global warming are much more prominent in the new , social media . in the last decade, there has been a shift from the consumption of traditional mass media ( newspapers and broadcast television ) to the consumption of social media ( blog posts , twitter , etc . ) .this shift represents a switch in communications from `` one - to - many '' to `` many - to - many '' . 
rather than a single journalist or scientist telling the public exactly what to think ,social media offers a mechanism for many people of diverse backgrounds to communicate and form their own opinions .exposure is a key aspect in transforming a social problem into a public issue , and social media is a potential avenue where climate change issues can be initially exposed .here we study the social media site twitter , which allows its users 140 characters to communicate whatever they like within a `` tweet '' .such expressions may include what individuals are thinking , doing , feeling , etc .twitter has been used to explore a variety of social and linguistic phenomena , and used as a data source to create an earthquake reporting system in japan , detect influenza outbreaks , and analyze overall public health .an analysis of geo - tagged twitter activity ( tweets including a latitude and longitude ) before , during , and after hurricane sandy using keywords related to the storm is given in .they discover that twitter activity positively correlates with proximity to the storm and physical damage .it has also been shown that individuals affected by a natural disaster are more likely to strengthen interactions and form close - knit groups on twitter immediately following the event .twitter has also been used to examine human sentiment through analysis of variations in the specific words used by individuals . in , dodds et al .develop the `` hedonometer '' , a tool for measuring expressed happiness positive and negative sentiment in large - scale text corpora .since its development , the hedonometer has been implemented in studies involving the happiness of cities and states , the happiness of the english language as a whole , and the relationship between individuals happiness and that of those they connect with .the majority of the topics trending on twitter are headlines or persistent news , making twitter a valuable source for studying climate change opinions .for example , in , subjective vs objective and positive vs negative tweets mentioning climate change are coded manually and analyzed over a one year time period . in ,various climate hashtags are utilized to locate pro / denialist communities on twitter . in the present study, we apply the hedonometer to a collection of tweets containing the word `` climate '' .we collected roughly 1.5 million such tweets from twitter s gardenhose api ( a random 10% of all messages ) during the roughly 6 year period spanning september 14 , 2008 through july 14 , 2014 .this time period represents the extent of our database at the time of writing .each collected tweet contains the word `` climate '' at least once .we include retweets in the collection to ensure an appropriately higher weighting of messages authored by popular accounts ( e.g. media , government ) .we apply the hedonometer to the climate tweets during different time periods and compare them to a reference set of roughly 100 billion tweets from which the climate - related tweets were filtered .we analyze highest and lowest happiness time periods using word shift graphs developed in , and we discuss specific words contributing to each happiness score .the hedonometer is designed to calculate a happiness score for a large collection of text , based on the happiness of the individual words used in the text .the instrument uses sentiment scores collected by kloumann et al . and dodds et al . 
, where 10,222 of the most frequently used english words in four disparate corpora were given happiness ratings using amazon s mechanical turk online marketplace .fifty participants rated each word , and the average rating becomes the word s score .each word was rated on a scale from 1 ( least happy ) to 9 ( most happy ) based on how the word made the participant feel .we omit clearly neutral or ambiguous words ( scores between 4 and 6 ) from the analysis . in the present study ,we use the instrument to measure the average happiness of all tweets containing the word `` climate '' from september 14 , 2008 to july 14 , 2014 on the timescales of day , week , and month .the word `` climate '' has a score of 5.8 and was thus not included when calculating average happiness . for comparison, we also calculate the average happiness score surrounding 5 climate related keywords .we recognize that not every tweet containing the word `` climate '' is about climate change .some of these tweets are about the economic , political , or social climate and some are ads for climate controlled cars . through manual coding of a random sample of 1,500 climate tweets, we determined that 93.5% of tweets containing the word `` climate '' are about the earth s climate or climate change .we calculated the happiness score for both the entire sample and the sample with the non - earth related climate tweets removed .the scores were 5.905 and 5.899 respectively , a difference of 0.1% .this difference is small enough to conclude that the non - earth related climate change tweets do not substantially alter the overall happiness score .based on the happiness patterns given by the hedonometer analysis , we select specific days for analysis using word shift graphs .we use word shift graphs to compare the average happiness of two pieces of text , by rank ordering the words that contribute the most to the increase or decrease in happiness . in this research ,the comparison text is all tweets containing the word `` climate '' , and the reference text is a random 10% of all tweets .hereafter , we refer to the full reference collection as the `` unfiltered tweets '' .finally , we analyze four events including three natural disasters and one climate rally using happiness time series and word shift graphs .these events include hurricane irene ( august 2011 ) , hurricane sandy ( october 2012 ) , a midwest tornado outbreak ( may 2013 ) , and the forward on climate rally ( february 2013 ) .fig . [ freq ] gives the raw and relative frequencies of the word `` climate '' over the study period .we calculate the relative frequencies by dividing the daily count of `` climate '' by the daily sum of the 50,000 most frequently used words in the gardenhose sample .
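to make the scoring step concrete , the following is a minimal sketch of a hedonometer - style computation ( the word scores below are illustrative stand - ins rather than the actual labmt ratings , and the helper names are our own ) : it computes the frequency - weighted average happiness of a text collection while excluding unscored words and the neutral 4 - 6 band .

```python
# minimal sketch of a hedonometer-style average happiness score; the scores in
# word_happiness are made-up placeholders, not the published labMT ratings
from collections import Counter
import re

word_happiness = {          # hypothetical scores on the 1 (sad) .. 9 (happy) scale
    "love": 8.42, "progress": 7.26, "hurricane": 3.34,
    "disaster": 1.80, "climate": 5.80, "energy": 6.20,
}

def average_happiness(texts, scores, neutral_low=4.0, neutral_high=6.0):
    counts = Counter(w for t in texts for w in re.findall(r"[a-z']+", t.lower()))
    total = weighted = 0.0
    for word, n in counts.items():
        h = scores.get(word)
        if h is None or neutral_low <= h <= neutral_high:  # drop unscored and neutral words
            continue
        total += n
        weighted += n * h
    return weighted / total if total else float("nan")

tweets = ["climate progress gives hope", "hurricane disaster linked to climate"]
print(round(average_happiness(tweets, word_happiness), 3))
```

the same per - word bookkeeping is what a word shift graph rank - orders : roughly , each word contributes its score difference from the reference text weighted by its change in relative frequency between the two texts .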
from this figure, we can see that while the raw count increases over time , the relative frequency decreases over time .this decrease can either be attributed to reduced engagement on the issue since the maximum relative frequency in december 2009 , during copenhagen climate change conference , or an increase in overall topic diversity of tweets as twitter grows in popularity .the observed increase in raw count can largely be attributed to the growth of twitter during the study period from approximately 1 million tweets per day in 2008 to approximately 500 million in 2014 .in addition , demographic changes in the user population clearly led to a decrease in the relative usage of the word `` climate '' .[ happ ] shows the average happiness of the climate tweets by day , by week , and by month during the 6 year time span .the average happiness of unfiltered tweets is shown by a dotted red line .several high and low dates are indicated in the figure .the average happiness of tweets containing the word `` climate '' is consistently lower than the happiness of the entire set of tweets .several outlier days , indicated on the figure , do have an average happiness higher than the unfiltered tweets . upon recovering the actual tweets, we discover that on march 16 , 2009 , for example , the word `` progress '' was used 408 times in 479 overall climate tweets .`` progress '' has a happiness score of 7.26 , which increases the average happiness for that particular day . increasing the time period for which the average happiness is measured ( moving down the panels in fig . [ happ ] ) ,the outlier days become less significant , and there are fewer time periods when the climate tweets are happier than the reference tweets . after averaging weekly and monthly happiness scores ,we see other significant dates appearing as peaks or troughs in fig .for example , the week of october 28 , 2012 appears as one of the saddest weeks for climate discussion on twitter .this is the week when hurricane sandy made landfall on the east coast of the u.s .for the same reason , october 2012 also appears as one of the saddest months for climate discussion .the word shift graph in fig .[ climws ] shows which words contributed most to the shift in happiness between climate tweets and unfiltered tweets .the total average happiness of the reference text ( unfiltered tweets ) is 5.99 while the total average happiness of the comparison text ( climate tweets ) is 5.84 .this change in happiness is due to the fact that many positively rated words are used less and many negatively rated words are used more when discussing the climate. the word `` love '' contributes most to the change in happiness .climate change is not typically a positive subject of discussion , and tweets do not typically profess love for it .rather , people discuss how climate change is a `` fight '' , `` crisis '' , or a `` threat '' .all of these words contribute to the drop in happiness .words such as `` pollution '' , `` denial '' , `` tax '' , and `` war '' are all negative , and are used relatively more frequently in climate tweets , contributing to the drop in happiness .the words `` disaster '' and `` hurricane '' are used more frequently in climate tweets , suggesting that the subject of climate change co - occurs with mention of natural disasters , and strong evidence exists proving twitter is a valid indicator of real time attention to natural disasters . 
on the positive side , we see that relatively less profanity is used when discussing the climate , with the exception of the word `` hell '' .we also see that `` heaven '' is used more often . from our inspection of the tweets, it is likely that these two words appear because of a famous quote by mark twain : `` go to heaven for the climate and hell for the company '' . of the 97 non - earth related climate tweets from our 1,500 tweet sample , 8 of them referenced this quote .the word `` energy '' is also used more during climate discussions .this indicates that there may be a connection between energy related topics and climate related topics . as energy consumption and types of energy sources can contribute to climate change , it is not surprising to see the two topics discussed together .using the first half of our dataset , dodds et al . calculated the average happiness of tweets containing several individual keywords including `` climate '' .they found that tweets containing the word `` climate '' were , on average , similar in ambient happiness to those containing the words `` no '' , `` rain '' , `` oil '' , and `` cold '' ( see table 2 ) . in the following section ,we compare the happiness score of tweets containing the word `` climate '' to that of 5 other climate - related keywords .the diction used to describe climate change attitudes on twitter may vary by user .for example , some users may consistently use `` climate change '' and others may use `` global warming '' .there are also cohorts of users that utilize various hashtags to express their climate change opinions . in order to address this, we collected tweets containing 5 other climate related keywords to explore the variation in sentiment surrounding different types of climate related conversation .
as in , we choose to analyze the keywords `` global warming '' ( 5.72 ) , `` globalwarming '' ( 5.81 ) , `` climaterealists '' ( 5.79 ) , `` climatechange '' ( 5.86 ) , and `` agw '' ( 5.73 , standing for `` anthropogenic global warming '' ) .search terms lack spaces in the cases where they are climate related hashtags .tweets including the `` global warming '' keyword contain more negatively rated words than tweets including `` climate '' .there is more profanity within these tweets and there are also more words suggesting that climate change deniers use the term `` global warming '' more often than `` climate change '' .for example , there is more usage of the words `` stop '' , `` blame '' , `` freezing '' , `` fraud '' , and `` politicians '' in tweets containing `` global warming '' .these tweets also show less frequent usage of positive words `` science '' and `` energy '' , indicating that climate change science is discussed more within tweets containing `` climate '' .we also see a decrease in words such as `` crisis '' , `` bill '' , `` risk '' , `` denial '' , `` denying '' , `` disaster '' , and `` threat '' .the positively rated words `` real '' and `` believe '' appear more in `` global warming '' tweets , however , so does the word `` do nt '' , again indicating that , in general , the twitter users who do not acknowledge climate change use the term `` global warming '' more frequently than `` climate change '' .a study in 2011 determined that public belief in climate change can depend on whether the question uses `` climate change '' or `` global warming '' .tweets containing the hashtag `` globalwarming '' also contain words indicating that this is often a hashtag used by deniers .the word contributing most to the decrease in happiness between `` climate '' and `` globalwarming '' is `` fail '' , possibly referencing an inaccurate interpretation of the timescale of global warming consequences during cold weather .we see an increase in negative words `` fraud '' , `` die '' , `` lie '' , `` blame '' , `` lies '' , and again a decrease in positive , scientific words .there is also an increase in several cold weather words including `` snow '' , `` freezing '' , `` christmas '' , `` december '' , indicating that the `` globalwarming '' hashtag may often be used sarcastically .similarly , tweets including the hashtag `` climaterealists '' use more words like `` fraud '' , `` lies '' , `` wrong '' , and `` scandal '' and less `` fight '' , `` crisis '' , `` pollution '' , `` combat '' , and `` threat '' .the hashtag `` agw '' represents a group that is even more strongly against anthropogenic climate change .we see an increase in `` fraud '' , `` lie '' , `` fail '' , `` wrong '' , `` scare '' , `` scandal '' , `` conspiracy '' , `` crime '' , `` false '' , and `` truth '' .this particular hashtag gives an increase in positive words `` green '' and `` science '' , however , based on the large increase in the aforementioned negative words , we can deduce that these terms are being discussed in a negative light .the `` climatechange '' hashtag represents users who are believers in climate change .there is an increase in positive words `` green '' , `` energy '' , `` environment '' , `` sea '' , `` oceans '' , `` nature '' , `` earth '' , and `` future '' , indicating a discussion about the environmental impacts of climate change .there is also an increase in `` pollution '' , `` threat '' , `` risk '' , `` hunger '' , `` fight '' , and `` problem '' indicating that the `` climatechange ''
hashtag is often used when tweeting about the fight against climate change . with the exception of the `` globalwarming '' hashtag, our analysis of these keywords largely agrees with what is found in .our analysis , however , compares word frequencies within tweets containing these hashtags with word frequencies within tweets containing the word `` climate '' .we find that more skeptics use `` global warming '' in their tweets than `` climate '' , while it may be the case that `` global warming '' and `` globalwarming '' hashtag are also used by activists .while fig .[ climws ] shows a shift in happiness for all climate tweets collected in the 6 year period , we now move to analyzing specific climate change - related time periods and events that correspond to spikes or dips in happiness .it is important to note that tweets including the word `` climate '' represent a very small fraction of unfiltered tweets ( see gray squares comparing text sizes in bottom right of fig .[ climws ] ) .while our analysis may capture specific events pertaining to climate change , it may not capture everything , as twitter may contain background noise that we ca nt easily analyze .[ daysh ] gives word shift graphs for three of the happiest days according to the hedonometer analysis .these dates are indicated in the top plot in fig .the word shift graphs use unfiltered tweets as the reference text and climate tweets as the comparison text for the date given in each title .[ daysh](a ) shows that climate tweets were happier than unfiltered tweets on december 28 , 2008 .this is due in part to a decrease in the word `` no '' , and an increase in the words `` united '' , `` play '' , and `` hopes '' . on this day, there were `` high hopes '' for the u.s .response to climate change .an example tweet by oneworld news is given in fig .[ tweets](a ) .[ daysh](b ) shows that climate tweets were happier than unfiltered tweets on april 9 , 2009 , largely due to the increase in positive words `` book '' , `` energy '' , and `` prize '' .twitter users were discussing the release of a new book called _ sustainable energy without the hot air _ by david jc mackay . also on this date, users were posting about a climate prize given to a solar - powered cooker in a contest for green ideas .example tweets include fig .[ tweets](b ) and ( c ) . finally , fig .[ daysh](c ) shows that climate tweets were happier than unfiltered tweets on april 30 , 2012 .this is due to the increased usage of the words `` dear '' , `` new '' , `` protect '' , `` forest '' , `` save '' , and `` please '' . on this date, twitter users were reaching out to brazilian president dilma to save the amazon rainforest , e.g. , fig .[ tweets](d ) .similarly , fig . [ dayss ] gives word shift graphs for three of the saddest days according to the hedonometer analysis .these dates are indicated in the top panel in fig .[ dayss](a ) shows an increase in many negative words on october 9 , 2008 .topics of conversation in tweets containing `` climate '' include the threat posed by climate change to a tropical species , a british climate bill , and the u.s . economic crisis .example tweets include fig .[ tweets](e - g ) .[ dayss](b ) shows an increase in negative words `` poor'',``assault '' , `` battle '' , and `` bill '' on april 4 , 2010 .popular topics of conversation on this date included a california climate law and president obama s oil - drilling plan .example tweets include fig .[ tweets](h ) and ( i ) . 
finally , fig .[ dayss](c ) shows that the words `` do nt '' and `` stop '' contributed most to the decrease in happiness on august 6 , 2011 .a topic of conversation on this date was the keystone xl pipeline , a proposed extension to the current keystone pipeline .an example tweet is given in fig .[ tweets](j ) .this per day analysis of tweets containing `` climate '' shows that many of the important issues pertaining to climate change appear on twitter , and demonstrate different levels of happiness based on the events that are unfolding . in the following section , we investigate specific climate change events that may exhibit a peak or a dip in happiness . first , we analyze the climate change discussion during several natural disasters that may have raised awareness of some of the consequences of climate change .then , we analyze a non - weather related event pertaining to climate change .natural disasters such as hurricanes and tornados have the potential to focus society s collective attention and spark conversations about climate change .a person s belief in climate change is often correlated with the weather on the day the question is asked .a study using `` climate change '' and `` global warming '' tweets showed that both weather and mass media coverage heavily influence belief in anthropogenic climate change . in this section, we analyze tweets during three natural disasters : hurricane irene , hurricane sandy , and a midwest tornado outbreak that damaged many towns including moore , oklahoma and rozel , kansas .[ hurricane ] gives the frequencies of the words `` hurricane '' and `` tornado '' within tweets that contain the word `` climate '' .each plot labels several of the spikes with the names of the hurricanes ( top ) or the locations ( state abbreviations ) of the tornado outbreaks ( bottom ) .this figure indicates that before hurricane irene in august 2011 , hurricanes were not commonly referenced alongside climate , and before the april 2011 tornado outbreak in alabama and mississippi , tornados were not commonly referenced alongside climate .this analysis , however , will not capture every hurricane or tornado mentioned on twitter , only those that were referenced alongside the word `` climate '' .hurricane arthur , for example , occurred in early july 2014 and does not appear as a spike in fig . [ hurricane ] .this particular hurricane did not cause nearly as much damage or as many fatalities as the hurricanes that do appear in fig .[ hurricane ] , and perhaps did not draw enough attention to highlight a link between hurricanes and climate change on twitter .additionally , a large tornado outbreak in kentucky , alabama , indiana , and ohio occurred in early march 2012 and does not appear as a spike in our analysis .[ hurricane ] shows that the largest peak in the word `` hurricane '' occurred during hurricane sandy in october 2012 .[ decay ] provides a deeper analysis for the climate time series during hurricane sandy .the time series of the words `` hurricane '' and `` climate '' as a fraction of all tweets before , during , and after hurricane sandy hit are given in fig .[ decay](a ) and ( c ) .spikes in the frequency of usage of these words are evident in these plots .the decay of each word is fitted with a power law in fig .[ decay](b ) and ( d ) .a power law is a functional relationship of the following form : here , is measured in days , and is the day hurricane sandy made landfall .
represents the relative frequency of the word `` hurricane '' ( top ) or `` climate '' ( bottom ) , and and are constants . using the power law fit , we calculate the first three half lives of the decay . letting equal the maximum relative frequency , the time at which the first half life of the power law relationship occurs is calculated by equation [ half ] : the first three half lives of the decay in the frequency of the word `` hurricane '' during hurricane sandy are 1.57 , 0.96 , and 1.56 additional days .since the decay is not exponential , these half lives are not constant .the first half life indicates that after about a day and a half , `` hurricane '' was already tweeted only half as often .the second half life indicates that after one more day , `` hurricane '' was tweeted only one fourth as often , and so on .thus , it did not take long for the discussion of the hurricane to decrease. the half lives , however , of the word `` climate '' are much larger at 8.19 , 22.58 , and 84.85 days .[ disasters ] gives happiness time series plots for three natural disasters occurring in the united states .these plots show that there is a dip in happiness on the day that the disasters hit the affected areas , offering additional evidence that sentiment is depressed by natural disasters . the word shift graphs indicate which words contributed to the dip in happiness .the circles on the bottom right of the word shift plots indicate that for all three disasters , the dip in happiness is due to an increase in negative words , more so than a decrease in positive words . during a natural disaster ,tweets mentioning the word `` climate '' use more negative words than tweets not mentioning the word `` climate '' . in this section ,we analyze tweets during the forward on climate rally , which took place in washington d.c . on february 17 , 2013 .the goal of the rally , one of the largest climate rallies ever in the united states , was to convince the government to take action against climate change .the proposed keystone pipeline bill was a particular focus .[ rally ] shows that the happiness of climate tweets increased slightly above the unfiltered tweets during this event , which only occurs on 8% of days in fig .[ happ ] . despite the presence of negative words such as `` protestors '' , `` denial '' , and `` crisis '' ,the forward on climate rally introduced positive words such as `` live '' , `` largest '' , and `` promise '' .the keystone pipeline bill was eventually vetoed by president obama .we have provided a general exploration of the sentiment surrounding tweets containing the word `` climate '' in response to natural disasters and climate change news and events .the general public is becoming more likely to use social media as an information source , and discussion on twitter is becoming more commonplace .we find that tweets containing the word `` climate '' are less happy than all tweets .
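the fitted form and the half - life equation are not reproduced above , so the following sketch assumes the usual power law f(t) = c * t**(-b) for the relative frequency t days after landfall , fits it by log - log least squares on synthetic stand - in data , and reads off the times at which the fit falls to 1/2 , 1/4 , and 1/8 of its initial value ; only the shape of the computation , not the numbers , is meant to match the analysis above .

```python
# sketch: power-law fit f(t) = c * t**(-b) to a word's daily relative frequency
# after its peak, followed by the first three half lives of the fitted decay;
# the frequencies below are synthetic placeholders, not the measured data
import numpy as np

days = np.arange(1, 31)                                         # days after the peak
freq = 2.0e-4 * days ** -1.3                                    # idealized decay
freq *= np.random.default_rng(0).uniform(0.9, 1.1, days.size)   # measurement noise

slope, intercept = np.polyfit(np.log(days), np.log(freq), 1)    # log-log least squares
b, c = -slope, np.exp(intercept)

def time_at_fraction(fraction):
    # solve c * t**(-b) = fraction * c, taking the fitted value at t = 1 as the maximum
    return fraction ** (-1.0 / b)

halves = [time_at_fraction(0.5 ** k) for k in (1, 2, 3)]
additional = [halves[0] - 1] + [t2 - t1 for t1, t2 in zip(halves, halves[1:])]
print("fitted exponent b =", round(b, 2))
print("half lives (additional days):", [round(x, 2) for x in additional])
```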
in the united states ,climate change is a topic that is heavily politicized ; the words `` deny '' , `` denial '' , and `` deniers '' are used more often in tweets containing the word `` climate '' .the words that appear in our climate - related tweets word shift suggest that the discussion surrounding climate change is dominated by climate change activists rather than climate change deniers , indicating that the twittersphere largely agrees with the scientific consensus on this issue .the presence of the words `` science '' and `` scientists '' in almost every word shift in this analysis also strengthens this finding ( see also ) .the decreased `` denial '' of climate change is evidence for how a democratization of knowledge transfer through mass media can circumvent the influence of large stakeholders on public opinion . in examining tweets on specific dates, we have determined that climate change news is abundant on twitter .events such as the release of a book , the winner of a green ideas contest , or a plea to a political figure can produce an increase in sentiment for tweets discussing climate change .for example , the forward on climate rally demonstrates a day when the happiness of climate conversation peaked above the background conversation . on the other hand , consequences of climate change such as threats to certain species , extreme weather events , and climate related legislative bills can cause a decrease in overall happiness of the climate conversation on twitter due to an increase in the use of words such as `` threat '' , `` crisis '' , and `` battle '' .natural disasters are more commonly discussed within climate - related tweets than unfiltered tweets , implying that some twitter users associate climate change with the increase in severity and frequency of certain natural disasters . during hurricane irene , for example ,the word `` threat '' was used much more often within climate tweets , suggesting that climate change may be perceived as a bigger threat than the hurricane itself .the analysis of hurricane sandy in fig .[ decay ] demonstrates that while climate conversation peaked during hurricane sandy , it persisted longer than the conversation about the hurricane itself . while climate change news is prevalent in traditional media , our research provides an overall analysis of climate change discussion on the social media site , twitter . through social media, the general public can learn about current events and display their own opinions about global issues such as climate change .twitter may be a useful asset in the ongoing battle against anthropogenic climate change , as well as a useful research source for social scientists , an unsolicited public opinion tool for policy makers , and a public engagement channel for scientists .the authors are grateful for the computational resources provided by the vermont advanced computing core which is supported by the vermont complex systems center .cmd , ajr , emc , and lm were supported by national science foundation ( nsf ) grant dms-0940271 to the mathematics & climate research network .psd was supported by nsf career award # 0846668 .thanks to mary lou zeeman for helpful discussions .yu - ru lin , drew margolin , brian keegan , and david lazer .voices of victory : a computational focus group framework for tracking opinion shift in real time .
in _ proceedings of the 22nd international conference on world wide web _ , pages 737 - 748 .international world wide web conferences steering committee , 2013 .takeshi sakaki , makoto okazaki , and yutaka matsuo .earthquake shakes twitter users : real - time event detection by social sensors . in _ proceedings of the 19th international conference on world wide web _ , pages 851 - 860 .acm , 2010 .eiji aramaki , sachiko maskawa , and mizuki morita .twitter catches the flu : detecting influenza epidemics using twitter . in _ proceedings of the conference on empirical methods in natural language processing _ , pages 1568 - 1576 .association for computational linguistics , 2011 .peter sheridan dodds , kameron decker harris , isabel m kloumann , catherine a bliss , and christopher m danforth .temporal patterns of happiness and information in a global social network : hedonometrics and twitter . , 6(12):e26752 , 2011 .lewis mitchell ,morgan r frank , kameron decker harris , peter sheridan dodds , and christopher m danforth .the geography of happiness : connecting twitter sentiment and expression , demographics , and objective characteristics of place ., 8(5):e64417 , 2013 . catherine a bliss , isabel m kloumann , kameron decker harris , christopher m danforth , and peter sheridan dodds .twitter reciprocal reply networks exhibit assortativity with respect to happiness . , 3(5):388 - 397 , 2012 .haewoon kwak , changhyun lee , hosung park , and sue moon .what is twitter , a social network or a news media ? in _ proceedings of the 19th international conference on world wide web _ , pages 591 - 600 .acm , 2010 .joseph t ripberger , hank c jenkins - smith , carol l silva , deven e carlson , and matthew henderson . social media and severe weather : do tweets provide a valid indicator of public attention to severe weather risk communication ?, 6(4):520 - 530 , 2014 .
|
the consequences of anthropogenic climate change are extensively debated through scientific papers , newspaper articles , and blogs . newspaper articles may lack accuracy , while the severity of findings in scientific papers may be too opaque for the public to understand . social media , however , is a forum where individuals of diverse backgrounds can share their thoughts and opinions . as consumption shifts from old media to new , twitter has become a valuable resource for analyzing current events and headline news . in this research , we analyze tweets containing the word `` climate '' collected between september 2008 and july 2014 . through use of a previously developed sentiment measurement tool called the hedonometer , we determine how collective sentiment varies in response to climate change news , events , and natural disasters . we find that natural disasters , climate bills , and oil - drilling can contribute to a decrease in happiness while climate rallies , a book release , and a green ideas contest can contribute to an increase in happiness . words uncovered by our analysis suggest that responses to climate change news are predominately from climate change activists rather than climate change deniers , indicating that twitter is a valuable resource for the spread of climate change awareness .
|
symbolic analysis of _ linear hybrid automata _ ( lha ) can generate a symbolic characterization of the reachable state - space of the lha .when _ static parameters _ ( system variables whose values are decided before run - time and never changed in run - time ) are used in lhas , such symbolic characterizations may give engineers important feedback . for example , we may use such symbolic characterizations to choose proper parameter values to avoid unsafe system designs .unfortunately , lha systems are extremely complex and not subject to algorithmic analysis . thus in real - world applications, it is very important to use every measure to enhance the efficiency of lha parametric analysis . in this work ,we extend bdd - like data - structures for the representation and manipulation of lha state - spaces .bdd - like data - structures have the advantage of data - sharing in both representation and manipulation and have shown great success in the vlsi verification industry .one of the major difficulties in using bdd - like data - structures to analyze lhas comes from the unboundedness of the dense variable value ranges and the unboundedness of linear constraints .to explain one of the major contributions of this work , we need to discuss the following issue first . in the research of bdd - like data - structures , there are two classes of variables : _ system variables _ and _ decision atoms_ .system variables are those used in the input behavior descriptions .decision atoms are those labeled on each bdd node . for discrete systems , these two classes are the same , that is , decision atoms are exactly the system variables . but for dense - time systems , decision atoms can be different from state variables . for example , in cdd and crd ,decision atoms are of the form where and are system variables of type clock .previous work on bdd - like data - structures is based on the assumption that decision atom domains are of finite sizes .thus we need new techniques to extend bdd - like data - structures to represent and manipulate state - spaces of lhas .our innovations include using constraints like ( where are dense variables ) as the decision atoms and using total dense orderings among these atoms . in this way , we devise hrd ( hybrid - restriction diagram ) and successfully extend bdd - technology to models with unbounded domains of decision atoms . in total , we define three total dense - orderings for hrd constraints ( section [ sec.vorderings ] ) .we also present algorithms for set - oriented operations ( section [ sec.set ] ) and symbolic weakest precondition calculation ( section [ sec.wpc ] ) , present procedures for symbolic parametric analysis ( section [ sec.wpc ] ) , and discuss our implementation of symbolic convex polyhedra representation normalization ( section [ sec.norm ] ) . in particular , when presenting our previous work on bdd - like data - structures for timed automata , people usually asked for a presentation of our algorithms for weakest precondition construction . in this paper, we endeavor to give such a presentation concisely .we have also developed a technique for fast parametric analysis of lha ( section [ sec.pspsc ] ) .the technique prunes state - space exploration based on static parameter space characterization .the technique gives us very good performance .notably , this technique does not sacrifice the precision of parametric analysis .in particular , for one benchmark , the state - space exploration does not converge without this technique !
to our knowledge , nobody else has come up with a similar technique .finally , we have implemented our ideas in our tool red 5.0 and report our experiments to see how the three dense - orderings perform and how our implementation performs in comparison with hytech 2.4.5 and trex 1.3 .many modern model - checkers for timed automata are built around symbolic manipulation procedures of _ zones _ , which are behaviorally equivalent convex state spaces of timed automata .the most popular data - structure for zones is dbm , which is a two dimensional matrix recording differences between pairs of clocks and is not bdd - like .as far as we know , the first paper that discusses how to use bdd to encode zones is by wang , mok , and emerson in 1993 .they discussed how to use bdd with decision atoms like to model - check timed automata . here and are timing constants with magnitude .however , they did not report implementation and experiments . in the last several years, people have explored this approach in the hope of duplicating the success of bdd techniques in hardware verification for the verification of timed automata .annichini et al have extended dbm to pdbm for the parametric analysis of timed automata and implemented a tool called _ trex _ , which also supports verification with lossy channels .due to the differences in their target systems , it can be difficult to directly compare the performances of trex and our implementation red 5.0 .for example , trex only allows for clocks while red 5.0 allows for dense variables with rate intervals . to construct time - progress weakest preconditions ( or strongest postconditions in forward analysis ) for systems with dense variable rate intervals , red 5.0 needs to use one -variable for each dense variable , which significantly increases the number of decision atoms involving -variables .according to the new formulation of the time - progress weakest precondition algorithm in , for systems with only clocks , no literals involving -variables ever need to be generated .thus the complexity of the algorithm used in red 5.0 is relatively higher than that of the algorithms used in trex . on the other hand, trex may have tuned its performance for the verification of lossy channel systems . for lhas , people also use convex subspaces , called _ convex polyhedra _ , as the basic unit for symbolic manipulation .a convex polyhedron characterizes a state - space of an lha and can be symbolically represented by a set of constraints like .two commonly used representations for convex polyhedra in hytech are ( 1 ) polyhedra and ( 2 ) frames in dense state - space .these two representations are neither bdd - like nor able to represent concave state - spaces .data - sharing among convex polyhedra is difficult .a _ linear hybrid automaton ( lha ) _ is a finite - state automaton equipped with a finite set of dense variables which can hold real values . at any moment , the lha can stay in only one _ mode _ ( or _ control location _ ) . in its operation , one of the transitions can be triggered when the corresponding triggering condition is satisfied . upon being triggered ,the lha instantaneously transits from one mode to another and sets some dense variables to values in certain ranges . in between transitions ,all dense variables change their readings at rates determined by the current mode .
for convenience ,given a set of modes and a set of dense variables , we use as the set of all boolean combinations of atoms of the forms and , where , are integers constants , , `` '' is one of , and is a rational constant .we also let be the set of rational intervals like where is either or ; is either ] . the rate interval in each mode can be different . a _ valuation _ of a set is a mapping from the set to another set .given an and a valuation of , we say _ satisfies _ , in symbols , iff it is the case that when the variables in are interpreted according to , will be evaluated .a _ state _ of is a valuation of s.t .there is a unique such that and for all ; for each , ( the set of reals ) and .given state and such that , we call the mode of , in symbols . for any ( the set of nonnegative reals ) , iff we can go from to merely by the passage of time units . formally speaking , is true iff is a state identical to except that for every with , . for a transition , iff we can go from to with discrete transition . formally speaking , is true iff , , and is identical to except that and ; and for each , if is defined , ; otherwise , ; given an lha , a _ run _ is an infinite sequence of pairs such that is a monotonically increasing real - number ( time ) divergent sequence and for all , is a mapping from ] , .a dense variable in an lha is a _static parameter _iff its rate is always zero in all modes .suppose is the set of static parameters in of lha .static parameter valuation _ of a run is a mapping from to reals such that is consistent with every state along , i.e. , . is a_ parametric solution _ to and iff for all runs with static parameter valuation , .our verification framework is called _ parametric safety analysis problem_. a parametric safety analysis problem instance , in notations , consists of an lha and a safety state - predicate .such a problem instance asks for a symbolic characterization of all parametric solutions to and .the general parametric safety analysis problem is undecidable .given a set of dense system variables , an _ lh - expression ( linear hybrid expression ) _ is an expression like where are integer constants .it is _ normalized _ iff the gcd of nonzero coefficients in is 1 , i.e. , . from now on, we shall assume that all given lh - expressions are normalized .an _ lh - upperbound _ is either or a pair like where and is a rational number .there is a natural ordering among the lh - upperbounds .that is for any two and , iff or . intuitively ,if , then is more restrictive than .an _ lh - constraint _ is a pair of an lh - expression and an lh - upperbound .given an lh - expression and an lh - upperbound , we shall naturally write the corresponding lh - constraint as . 
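as a small , self - contained illustration of these definitions ( the encodings here are our own choices for the sketch , not red 5.0 internals ) , an lh - expression can be stored as a tuple of integer coefficients normalized by their gcd , and lh - upperbounds can be compared with the unbounded upperbound treated as the least restrictive one .

```python
# sketch: normalized lh-expressions and the natural ordering on lh-upperbounds;
# an upperbound is either INF (unbounded) or a pair (strict?, rational bound)
from math import gcd
from functools import reduce
from fractions import Fraction

INF = None

def normalize(coeffs):
    """divide the integer coefficients by their gcd so that the gcd becomes 1."""
    g = reduce(gcd, (abs(c) for c in coeffs if c != 0), 0)
    return tuple(c // g for c in coeffs) if g else tuple(coeffs)

def more_restrictive(u1, u2):
    """is upperbound u1 strictly more restrictive than u2?"""
    if u2 is INF:
        return u1 is not INF
    if u1 is INF:
        return False
    return u1[1] < u2[1] or (u1[1] == u2[1] and u1[0] and not u2[0])

print(normalize((2, -4, 6)))                                        # -> (1, -2, 3)
print(more_restrictive((True, Fraction(3)), (False, Fraction(3))))  # a strict bound beats a non-strict one at the same value
```

an lh - constraint is then simply such a pair of a normalized coefficient tuple and an upperbound .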
a _ convex polyhedron _ is symbolically represented by a conjunction of lh - constraints and denotes a behaviorally equivalent state subspace of an lha .formally , a convex polyhedron can be defined as a mapping from the set of lh - expressions to the set of lh - upperbounds .alternatively , we may also represent a convex polyhedron as the set .we shall use the two equivalent notations flexibly as we see fit .with respect to a given , the set of all lh - expressions and the set of convex polyhedra are both infinite .to construct bdd - like data - structures , three fundamental issues have to be solved .the first is the domain of the decision atoms ; the second is the range of the arc labels from bdd nodes ; and the third is the evaluation ordering among the decision atoms .for modularity of presentation , we shall leave the discussion of the evaluation orderings to section [ sec.vorderings ] . in this section, we shall assume that we are given a decision atom evaluation ordering .we decide to follow an approach similar to the one adopted in .that is , our decision atoms will be lh - expressions while our bdd arcs will be labeled with lh - upperbounds .a node labeled with decision atom together with a corresponding outgoing arc label constitute the lh - constraint of .a root - to - terminal path in an hrd thus represents the conjunction of constituent lh - constraints along the path .figure [ fig.onehrd](a ) is an example of our proposed bdd - like data - structure for the concave space of assuming that precedes ( in symbols ) and precedes in the given evaluation ordering . in this example , the system variables are while the decision atoms are , , and .hrd ( hybrid - restriction diagram ) given dense variable set and an evaluation ordering among normalized lh - expressions of , an _ hrd _ is either or a tuple such that is a normalized lh - expression ; for each , is an lh - upperbound s.t . ; and for each , is an hrd such that if , then . for completeness, we use `` '' to represent the hrd for . in our algorithms , does not participate in comparison of evaluation orderings among decision atoms .also , note that in figure [ fig.onehrd ] , for each arc label , we simply put down for convenience .note that an hrd records a set of convex polyhedra and each root - leaf path represents such a convex polyhedron .in the definition of a dense - ordering among decision atoms ( i.e. , lh - expressions ) , special care must be taken to facilitate efficient manipulation of hrds . here we use the experience reported in and present three criteria in designing the orderings among lh - expressions .the three criteria are presented in decreasing order of importance .first , it is desirable to place a pair of converse lh - expressions next to one another so that simple inconsistencies can be easily detected .that is , lh - expressions and are better placed next to one another in the ordering . for example , with this arrangement , the inconsistency of can be checked by comparing adjacent nodes in hrd paths .
to fulfill this requirement , when comparing the precedence between lh - expressions in a given ordering , we shall first toggle the signs of coefficients of an lh - expression if its first nonzero coefficient is positive .if two lh - expressions are identical after the necessary toggling , then we compare the signs of their first nonzero coefficients to decide the precedence between the two . with the requirement mentioned in the last paragraph , from now on , we shall only focus on the orderings among lh - expressions whose first nonzero coefficients are negative . secondly , according to past experience reported in the literature , it is important to place strongly correlated lh - expressions close together in the evaluation orderings .usually , instead of a single global lha , we are given a set of communicating lhas , each representing a process .thus it is desirable to place lh - expressions for the same process close to each other in the orderings .our second important criterion respects this experience . given a system with processes with respective local dense variables , we shall partition the lh - expressions into groups : . contains all lh - expressions without local variables ( i.e. , coefficients for local variables are all zero ) . for each , contains all lh - expressions with a nonzero coefficient for a local variable of process and only zero coefficients for local variables of processes .then our second criterion requires that for all , lh - expressions in precede those in .if the precedence between two lh - expressions can not be determined with the two above - mentioned criteria , then the following third criterion comes to play .this one is a challenge since for each of can be of infinite size .traditionally , bdd - like data - structures have been used with finite decision atom domains .but now we need to find a way to determine the precedence among infinite number of lh - expressions ( our decision atoms in hrd ) . for this purpose, we invent to use dense - orderings among lh - expressions .we shall present three such orderings in the following .sometimes it is difficult to predict which orderings are better suitable for what kind of verification tasks . in section [ sec.experiments ] , we shall report experiments with these orderings .+ * dictionary ordering : * we can represent each lh - expression as a string , assuming that the ordering among is fixed and no blanks are used in the string .then we can use dictionary ordering and ascii ordering to decide the precedence among lh - expressions . for the lh - expressions in figure [ fig.onehrd ], we then have since precedes and precedes in ascii .the corresponding hrd in dictionary ordering is in figure [ fig.onehrd](c ) .one interesting feature of this ordering is that it has the potential to be extended to nonlinear hybrid constraints .for example , we may say in dictionary ordering since precedes in ascii .+ * coefficient ordering : * assume that the ordering of the dense variables is fixed as . in this ordering ,the precedence between two lh - expressions is determined by iteratively comparing the coefficients of dense variables in sequence . for the lh - expressions in figure [ fig.onehrd ] , we then have the hrd in this ordering is in figure [ fig.onehrd](a ) . + * magnitude ordering : * this ordering is similar to the last one . 
instead of comparing coefficients, we compare the absolute values of coefficients .we iteratively first compare the absolute values of coefficients of , and if they are equal , then compare the signs of coefficients of .for the lh - expressions in figure [ fig.onehrd ] , we then have in this magnitude ordering .the hrd in this ordering is in figure [ fig.onehrd](b ) .please be reminded that an hrd records a set of convex polyhedra . for convenience of discussion ,given an hrd , we may just represent it as the set of convex polyhedra recorded in it .definitions of set - union ( ) , set - intersection ( ) , and set - exclusion ( ) of two convex polyhedra sets respectively represented by two hrds are straightforward .for example , given hrds and , is the hrd for ; is for ; and is for .the complexities of the three manipulations are all . given two convex polyhedra and , is a new convex polyhedron representing the space - intersection of and . formally speaking , for decision atom , if ; or otherwise .space - intersection ( ) of two hrds and , in symbols , is a new hrd for .given an evaluation ordering , we can write hrd - manipulation algorithms pretty much as usual . for convenience of presentation, we may represent an hrd symbolically as . a union operation can then be implemented as follows .
' '' ''
set ; /* database for the recording of already - processed cases */
{
  if , return ; else if , return ;
  ; return ;
}
where , {
  if is or is , return ; else if , return ; ( 1 )
  else if , construct ;
  else if , construct ;
  else {
    while and , do {
      if , { ; }
      else if , { ; }
      else if , { ; }
    }
    if , ;
    if , ;
  }
  ; return ; ( 2 )
}
' '' ''
note that in statement ( 1 ) , we take advantage of the data - sharing capability of hrds so that we do not process the same substructure twice .the set of is maintained in statement ( 2 ) .the algorithms for and are pretty much the same .the one for space intersection is much more involved and is not discussed here due to page - limit .as reported in the experiment with crd ( clock - restriction diagram ) , significant performance improvement can be obtained if an integrated bdd - like data - structure for both dense constraints and discrete constraints is used instead of separate data - structures for them .it is also possible to combine hrd and bdd into one data - structure for fully symbolic manipulation . since hrd only has one sink node : , it is more compatible with bdd without the false terminal node , which is more space - efficient than ordinary bdd .there are two things we need to take care of in this combination .the first is about the interpretation of default values of decision atoms . in bdd ,when we find a decision atom is missing while evaluating variables along a path , the atom s value can be interpreted as either true or false .but in hrd , when we find a decision atom is missing along a path , then the constraint is interpreted as .the second is about the interpretation of hrd manipulations on bdd decision atoms .straightforwardly , `` '' and `` '' on bdd decision atoms are respectively interpreted as `` '' and `` '' on bdd decision atoms . on bdd decision atoms is interpreted as when the root variable of either or is boolean . for ,the manipulation acts as `` '' when either of the roots is labeled with bdd decision atoms .due to page - limit , we shall omit the proof for the soundness of such an interpretation .
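since the mathematical detail of the union procedure is elided in the listing above , the following is a reconstruction sketch in the usual bdd programming style ; it is based on our reading of the text rather than on the actual red 5.0 code , with a memo table playing the role of the already - processed - cases database and with a missing decision atom treated as the unbounded arc .

```python
# reconstruction sketch of the union of two hrds in bdd programming style; the
# encoding (strings or coefficient tuples as lh-expressions, (strict?, value) pairs
# or INF as upperbounds) and the handling of missing atoms are illustrative choices
TRUE = "TRUE"     # the single terminal node
FALSE = None      # the hrd recording no convex polyhedron at all
INF = "INF"       # unbounded upperbound: "no constraint on this atom"

class HRD:
    def __init__(self, expr, arcs):
        self.expr, self.arcs = expr, arcs    # arcs: list of (upperbound, child)

def atom_key(node):
    # TRUE carries no decision atom; let it come after every lh-expression
    return (1,) if node is TRUE else (0, node.expr)

def union(d1, d2, memo=None):
    memo = {} if memo is None else memo      # the "already-processed cases" database
    if d1 is FALSE:
        return d2
    if d2 is FALSE or d1 is d2:
        return d1
    key = (id(d1), id(d2))
    if key in memo:                          # data sharing: a shared pair is processed once
        return memo[key]
    if atom_key(d2) < atom_key(d1):
        d1, d2 = d2, d1                      # d1's decision atom now comes first
    arcs = dict(d1.arcs)
    if d2 is not TRUE and d1.expr == d2.expr:
        for bound, child in d2.arcs:         # merge the two arc lists bound by bound
            arcs[bound] = union(arcs.get(bound, FALSE), child, memo)
    else:
        # d2 does not constrain d1.expr, so all its polyhedra sit under the unbounded arc
        arcs[INF] = union(arcs.get(INF, FALSE), d2, memo)
    memo[key] = result = HRD(d1.expr, sorted(arcs.items(), key=lambda a: str(a[0])))
    return result

# tiny usage example: { -x <= -5 } union { -x <= -2 , x-y <= 10 }
a = HRD("-x", [((False, -5), TRUE)])
b = HRD("-x", [((False, -2), HRD("x-y", [((False, 10), TRUE)]))])
u = union(a, b)
print(u.expr, [bound for bound, _ in u.arcs])
```

the memo table is what gives the bdd - style data sharing : a pair of shared substructures is processed at most once , no matter how many paths pass through it .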
from now on , we shall call such a combination structure of hrd and bdd an hrd+bdd .finally , it is also important to define the evaluation orderings between bdd decision atoms and hrd decision atoms . due to page - limit, we shall simply adopt the wisdom reported in and place bdd decision atoms and hrd decision atoms that are strongly related to the same process close to each other .our tool red runs in backward reachability analysis by default . due to page - limit, we shall only present the algorithm in symbolic fashion without details .suppose we are given an lha .there are two basic procedures in this analysis procedure .the first , , computes the weakest precondition from the state - space represented by hrd through discrete transition .assume that the dense variables that get assigned in are and there is no variable that gets assigned twice in .the characterization of is \equiv d\leq y\leq d' ] .assume that is the same as except that all dense variables are replaced by respectively .here represents the value - change of variable in time - passage . for example, . intuitively , when represents the value of variable in the weakest precondition of time passage , then is the value of in the postcondition of the time - passage .the second basic procedure , , computes the weakest precondition from through time passage in mode .it is characterized as . one basic building block of both and is for the evaluation of .we implement this basic operation with the following symbolic procedure . .procedure eliminates all constraints in involving variables in set .procedure adds to a path every constraint that can be transitively deduced from two peer constraints involving in the same path in .the algorithm of is in table [ tab.xtivity ] .
' '' ''
, ;
{ ; return ; }
{
  if is or , return ; else if , return ;
  else /* assume */ {
    ;
    ; return ;
  }
}
{
  if is or , return ; else if , return ; ( 3 )
  else /* assume */ {
    if , ;
    else ;
    ; return ; ( 4 )
  }
}
' '' ''
is a shorthand for the new upperbound obtained from the xtivity of and .thus we preserve all constraints transitively deducible from a dense variable before it is eliminated from a predicate .this guarantees that no information will be unintentionally lost after the variable elimination .note that in our algorithm , we do not enumerate all paths in hrd to carry out this least fixpoint evaluation .instead , in statement ( 3 ) , our algorithm follows the traditional bdd programming style which takes advantage of the data - sharing capability of bdd - like data - structures .thus our algorithm does not explode due to the combinatorial complexity of path counts in hrd .this can be justified by the performance of our implementation reported in section [ sec.experiments ] .assume that the unsafe state is in mode .with the two basic procedures , the backward reachable state - space from the risk state ( represented as an hrd ) can be characterized by . here is the least fixpoint of function and is very commonly used in the reachable state - space representation of discrete and dense - time systems .
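the following sketch illustrates the variable elimination step on a single convex polyhedron rather than on the shared hrd structure ( and it ignores strict versus non - strict bounds for brevity ) : every constraint deducible from a pair of peer constraints with opposite signs on the eliminated variable is added first , and only then are the constraints mentioning the variable dropped .

```python
# sketch of eliminating a dense variable from one convex polyhedron, fourier-motzkin
# style: first add every constraint transitively deducible from a pair of peer
# constraints with opposite signs on the variable, then drop the constraints that
# still mention it; strictness of bounds is ignored here for brevity
from fractions import Fraction

def eliminate(constraints, var):
    """constraints: list of (coeff_dict, bound) meaning sum(coeff * variable) <= bound."""
    pos = [c for c in constraints if c[0].get(var, 0) > 0]
    neg = [c for c in constraints if c[0].get(var, 0) < 0]
    keep = [c for c in constraints if c[0].get(var, 0) == 0]
    for cp, bp in pos:
        for cn, bn in neg:
            mp, mn = -cn[var], cp[var]            # positive multipliers cancelling var
            combined = {}
            for x in set(cp) | set(cn):
                v = mp * cp.get(x, 0) + mn * cn.get(x, 0)
                if v:
                    combined[x] = v
            keep.append((combined, mp * bp + mn * bn))
    return keep

poly = [({"x": 1, "y": -1}, Fraction(3)),          # x - y <= 3
        ({"x": -1, "z": 1}, Fraction(2))]          # -x + z <= 2
print(eliminate(poly, "x"))                        # deduces z - y <= 5, then drops x
```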
after the fixpoint is successfully constructed ,we conjoin it with the initial condition and then eliminate all variables except the static parameters ( formally speaking , projecting the reachable state - space representations to the dimensions of the static parameters ) .suppose the set of static dense parameters is .the characterization of unsafe parameter valuations is thus . the set of parametric solutions is characterized by the complement of this final result .there can be infinitely many lh - constraint sets that represent a given convex polyhedron .an lh - constraint in such a representation can also be _ redundant _ in that a no less restrictive upperbound can be derived for its lh - expression from peer lh - constraints in the same representation . to control the redundancy caused by recording many lh - constraint sets for the same convex polyhedron , representations of convex polyhedra have to be normalized .due to page - limit , we shall skip much of the details in this regard .we emphasize that much of our implementation effort has been spent in this regard .we use a two - phase normalization procedure in each iteration of the least fixpoint evaluation . : this step eliminates those convex polyhedra contained by a peer convex polyhedron in the hrd for the reachable state - space .first , we collect the lh - expressions that occur in the current reachable state - space hrd and call them _ proof - obligations_. then we try to derive the tightest constraints for these proof - obligations along each hrd path of the reachable state - space representation .then we eliminate those paths which are subsumed by other paths .the subsumption can be determined by pairwise comparison of all lh - constraints along two paths . : along each path , we combinatorially use up to four constraints to check for the redundancy of peer constraints in the same path and eliminate them if they are found redundant .again , our algorithm does not enumerate paths in hrd .instead , it takes advantage of the data - sharing capability of hrd for efficient processing .we have also experimented with techniques to improve the efficiency of parametric analysis .one such technique , called _ pspsc _ , is avoiding new state - space exploration if the exploration does not contribute to new parametric solutions .a constraint is _ static _ iff all its dense variables are static parameters .static constraints do not change their truth values .once a static constraint is derived in a convex polyhedron , its truth value will be honored in all weakest preconditions derived from this convex polyhedron .all states backwardly reachable from a convex polyhedron must also satisfy the static constraints required in the polyhedron .thus if we know that static parameter valuation is already in the parametric solution space , then we really do not need to explore those states whose parameter valuations fall in . with pspsc ,our new parametric analysis procedure is shown in table [ tab.pspsc ] .
' '' ''
{
  while , do {
    ;
    ;
    ; ( 5 )
  }
  return ;
}
' '' ''
in the procedure , we use variable to symbolically accumulate the parametric evaluations leading to the risk states in the least fixpoint iterations .
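to illustrate the pruning at statement ( 5 ) , the following toy sketch replays the idea on a finite abstraction in which explicit sets of ( control location , parameter value ) pairs stand in for the symbolic hrds ; it reflects our reading of the pruning step and is only schematic , not the symbolic procedure itself .

```python
# toy sketch of pspsc-pruned backward exploration on a finite abstraction;
# pred maps a state to its discrete predecessors, and parameter valuations already
# known to reach the risk states are never explored again
def backward_params(risk, initial, pred, all_params):
    reached = set(risk)
    unsafe_params = {p for (m, p) in reached & initial}
    frontier = set(risk)
    while frontier:
        frontier = {s for t in frontier for s in pred.get(t, ())} - reached
        # pruning: a state whose parameter valuation is already known unsafe cannot
        # contribute a new parametric solution, so it is dropped from the frontier
        frontier = {(m, p) for (m, p) in frontier if p not in unsafe_params}
        reached |= frontier
        unsafe_params |= {p for (m, p) in reached & initial}
    return all_params - unsafe_params        # the surviving parametric solutions

pred = {("err", 1): {("a", 1)}, ("a", 1): {("init", 1)},
        ("err", 2): {("a", 2)}, ("a", 2): {("b", 2)}, ("b", 2): {("init", 2)}}
risk = {("err", 1), ("err", 2)}
initial = {("init", 1), ("init", 2), ("init", 3)}
print(backward_params(risk, initial, pred, {1, 2, 3}))   # -> {3}
```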
in statement( 5 ) , we check and eliminate in those state descriptions which can not possibly contribute to new parametric evaluations by conjuncting with .one nice feature of pspsc is that it does not sacrifice the precision of our parametric analysis .[ lemma.pspsc ] is a _ parametric solution _ to and iff satisfies the return result of .+ details omitted due to page - limit .the basic idea is that the intersection at line ( 5 ) in table [ tab.pspsc ] only stops the further exploration of those states that do not contribute to new parameter - spaces .those parameter - spaces pruned in line ( 5 ) do not contribute because they are already contained in the known parameter constraints and along each exploration path , the parameter constraints only get restricter . as mentioned in the proof sketch , pspsc can help in pruning the space of exploration in big chunks .but in the worst case , pspsc does not guarantee the exploration will terminate . in section [ sec.experiments ] , we shall report the performance of this technique . especially ,for one benchmark , the state - space exploration can not converge without pspsc .we have implemented our ideas in our tool red which has been previously reported in for the verification of timed automata based on bdd - like data - structures .red version 5.0 supports full tctl model - checking / simulation with graphical user - interface .coverage estimation techniques for dense - time state - spaces has also been reported .we have also carried out experiments to compare various ideas mentioned in this work .in addition , we have also compared with hytech 2.4.5 , which is the best known and most popular tool for the verification of lha due to its pioneering importance .the following three benchmark series are all adapted from hytech benchmark repository ._ fischer s mutual exclusion algorithm ._ this is one of the classic benchmarks .there are two static parameters and , processes , and one local clock for each process .the first process has a local clock with rate in ]. the algorithm may violate the mutual exclusion property when ._ general railroad crossing benchmarks ._ there is a static parameter cutoff , a gate - process , a controller - process , and train - processes .the local dense variable of the gate - process models the angle of the gate and has rates in ] , and ] .the system may not lower the gate in time for a crossing train when ._ nuclear reactor controller ._ there are rod - processes and one controller process .each process has a clock with rate in $ ] .a rod just - moved out of the heavy water must stay out of water for at least ( a static parameter ) time units .the timing constants used in the benchmarks are , and .the controller may miss the timing - constraints for the rods if ._ csma / cd ._ this is modified from .the two timing constants and , set to 26 and 52 respectively , are now treated as static parameters to be analyzed .we do require that . basically , this is the ethernet bus arbitration protocol with the idea of collision - and - retry .the biggest timing constant used is 808 .we want to verify that mutual exclusion after bus - contending period can be violated if . 
in our experiment ,we compare performance in both forward and backward reachability analyses .the performance data of hytech 2.4.5 and red 5.0 with dictionary ordering ( no pspsc ) , coefficient ordering ( no pspsc ) , magnitude ordering ( pspsc ) , and coefficient ordering with pspsc is reported in table [ tab.perf.bck ] ( for backward analysis ) and table [ tab.perf.fwd ] ( for forward analysis ) ..comparison in backward analysis with hytech w.r.t .number of processes [ cols="<,^ , > , > , > , > , > " , ] + data for trex ( backward analysis ) is collected on a pentium iii 1ghz/900 mb running linux with cpu time normalized with factor .+ data for red and for trex ( forward analysis ) is collected on a pentium 4 m 1.6ghz/256 mb running linux . + s : seconds ; k : kilobytes of memory in data - structure ; + o / m : out of memory ; n / a : not available ; + two additional options of red 5.0 were chosen : coefficient evaluation ordering with pspsc and magnitude evaluation ordering without . at this moment , since we do not have the reduce library , which is not free , in trex for backward analysis , trex team has kindly collected trex s performance in backward analysis for us .although the data set is still small and incomplete , but we feel that the hrd - technology shows a lot of promise in the table .we believe this can largely be attributed to the data - sharing capability of bdd - like data - structures .this work is a first step toward using bdd - technology for the verification of lhas .although the initial experiment data shows good promise , we feel that there are still many issues worthy of further research to check the pros and cons of hrd - technology .especially , we have to admit that we have not developed algorithms to eliminate general redundant constraints in hrds .our present implementation eliminates redundant lh - constraints that can be deduced by four peer lh - constraints along the same paths .we also require that the lh - expression of the redundant lh - constraint must not precede the lh - expressions of these four peer lh - constraints .although our current implementation does perform well against the benchmarks , we still hope that there is a better way to check redundancy . also , subsumption is another challenge .straightforward implementation may use the complement of the current reachable state - space to filter those newly constructed weakest preconditions . since the hrd of the current reachable state - space can be huge ,its complement is very expensive to construct and maintain .r. alur , c.courcoubetis , t.a .henzinger , p .- h .ho . hybrid automata : an algorithmic approach to the specification and verification of hybrid systems .proceedings of workshop on theory of hybrid systems , lncs 736 , springer - verlag , 1993 .r. alur , c. courcoubetis , n. halbwachs , t.a .henzinger , p .- h .ho , x. nicollin , a. olivero , j. sifakis , s. yovine . the algorithmic analysis of hybrid systems .theoretical computer science 138(1995 ) 3 - 34 , elsevier science b.v .graph - based algorithms for boolean function manipulation , ieee trans .c-35(8 ) , 1986 .dill . timing assumptions and verification of finite - state concurrent systems .cav89 , lncs 407 , springer - verlag .j. moller , j. lichtenberg , h.r .andersen , h. hulgaard .difference decision diagrams . in proceedings of annual conference of the european association for computer science logic ( csl ) , sept . 1999 ,madreid , spain .j. moller , j. lichtenberg , h.r .andersen , h. 
hulgaard .fully symbolic model - checking of timed systems using difference decision diagrams , in proceedings of workshop on symbolic model - checking ( smc ) , july 1999 , trento , italy .red : model - checker for timed automata with clock - restriction diagram .workshop on real - time tools , aug .2001 , technical report 2001 - 014 , issn 1404 - 3203 , dept . of information technology , uppsala university .efficient verification of timed automata with bdd - like data - structures , to appear in special issue of sttt ( software tools for technology transfer , springer - verlag ) for vmcai2003 .the conference version is in proceedings of vmcai2003 , lncs 2575 , springer - verlag .f. wang , g .- d .hwang , f. yu .numerical coverage estimation for the symbolic simulation of real - time systems . to appear in the proceedings of forte2003 , sept .- oct .2003 , berlin , germany ; lncs , springer - verlag .
|
we use dense evaluation ordering to define hrd ( hybrid - restriction diagram ) , a new bdd - like data - structure for the representation and manipulation of the state - spaces of linear hybrid automata . we present and discuss various manipulation algorithms for hrd , including the basic set - oriented operations , weakest precondition calculation , and normalization . we have implemented the ideas and carried out experiments to evaluate their performance . finally , we have also developed a pruning technique for state - space exploration based on the characterization of the parameter valuation space . the technique showed good promise in our experiments . * keywords : * data - structures , bdd , hybrid automata , verification , model - checking
|
the current work was motivated by our search for an alternative implementation of the hybrid monte carlo ( hmc ) algorithm in lattice qcd which does not require the use of pseudofermions our proposed algorithm ( which is described in more detail in requires the estimation of determinant ratios where the matrices and satisfy the following conditions : and are huge ( , where ) ; the eigenvalues of and have positive real parts ; are , but is ; and the eigenvalues of are continuous perturbations of the eigenvalues of . in this paper we present a new method for estimating such determinant ratios , which takes advantage of pad approximation and noise vectors ( hence the acronym pz ) .the potential application of related methods to estimating the density of states . is also indicated .our improved algorithm is based on the approximation of with a rational function via the use of pad approximants .the pad approximation to of order ] pad expansion of around is accurate to within on the interval ] pad approximation with up to 50 noise vectors is shown in figure 1 , and the jackknife error is shown in figure 2 .the pz estimate of the real part of ( using ( 1 ) with an additional term and ) is shown in figure 3 , while the corresponding jackknife errors are shown in figure 4 .the pz estimate of the imaginary part of is shown in figure 5 ( the actual value is 0 ) .these results demonstrate that the pz method leads to controlled error in determinant ratios after a relatively small number of column inversions .there are three sources of numerical error in the pz method : pad approximation ; estimation ; and column inversion via gmres .the pad error can be virtually eliminated by taking more terms in the pad expansion , which requires more memory but not more computation time .a theoretical expression for the error is : the column inversion error is reduced by taking more iterations , at the cost of increased computation time .the computation time depends on the number of noise vectors required to get a good trace estimate .if gmres is used , each noise vector used gives rise to one column inversion .it should be mentioned that alternatives to pad approximation have been proposed , including chebyshev polynomials and stieltjes integrals .numerical experiments should be performed to determine which is more efficient .some of the ideas introduced above may be used to estimate the density of states as follows .any term of the form can be used as a `` probe '' to sample spectral information near the specified pole .hence can be estimated via the following procedure : ( 1 ) choose complex numbers which are strategically placed around the support of ; ( 2 ) estimate using the noise method ; ( 3 ) specify a fitting form for using parameters : ; and ( 4 ) find the parameters by solving the equations ritzenhfer , k. schilling , and a. frommer , hep - lat/9605008 .s. bernardson , p. mccarty , and c. thron , comp .* 78 * ( 1994 ) 256 .sexton , d.h .weingarten , systematic expansion for full qcd based on the valence approximation , ibm preprint ( 1995 ) .z. bai , m. fahey , g. golub , some large scale matrix computation problems , jour . comp . andappl . math .( to appear ) .
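as an illustration of the estimator described above , the following sketch combines a partial - fraction pad approximation of the logarithm with z2 noise estimation of the traces . the residues and poles are assumed to be supplied from a diagonal pad approximant written as log(x) ~ c0 + sum_k r_k/(x + q_k) ( the constant c0 cancels in the ratio ) ; dense solves are used only for illustration , whereas in practice each column inversion would be carried out with an iterative solver such as gmres . the matrix sizes , the lowest - order [ 1,1 ] approximant and the test matrices in the usage lines are our own toy choices .

```python
import numpy as np

def logdet_ratio_pz(A, Aprime, residues, poles, n_noise=100, rng=None):
    """noise-vector estimate of log det(A') - log det(A), assuming a
    partial-fraction pade approximation  log(x) ~ c0 + sum_k r_k/(x + q_k)
    valid on a region containing the spectra of A and A'.  each trace
    Tr[(M + q_k I)^{-1}] is estimated with z2 noise vectors."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = A.shape[0]
    eye = np.eye(n)
    acc = 0.0
    for _ in range(n_noise):
        z = rng.choice([-1.0, 1.0], size=n)          # real z2 noise vector
        for r, q in zip(residues, poles):
            acc += r * (z @ np.linalg.solve(Aprime + q * eye, z)
                        - z @ np.linalg.solve(A + q * eye, z))
    return acc / n_noise

# crude check with the lowest-order [1,1] approximant about x = 1,
#   log(x) ~ 2 - 4/(x + 1)   (single residue -4 at the single pole q = 1),
# on small well-conditioned matrices close to the identity
rng = np.random.default_rng(1)
d = 64
A = np.eye(d) + 0.05 * rng.standard_normal((d, d)) / np.sqrt(d)
Ap = A + 0.02 * rng.standard_normal((d, d)) / np.sqrt(d)
est = logdet_ratio_pz(A, Ap, residues=[-4.0], poles=[1.0], n_noise=200, rng=rng)
exact = np.linalg.slogdet(Ap)[1] - np.linalg.slogdet(A)[1]
print(est, exact)
```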
|
we introduce a new method for estimating determinants or determinant ratios of large matrices , which combines pad approximation by rational functions with noise estimation of traces of large matrices . the method requires simultaneously solving several matrix equations , which can be conveniently accomplished using the mr method . we include some preliminary results , and indicate potential applications to non - hermitian matrices and to hybrid monte carlo without pseudofermions .
|
according to the currently accepted formulation , the energy flux density of radiation is the energy flux density of radiation at time and at any position on a sphere , eq .( [ eqs ] ) , is due to the radiation emanating by an accelerated charge at an earlier time and at the center of the sphere . the velocity and the acceleration of the charge in the right - hand side of eq .( [ eqs ] ) are defined at the retarded time .the energy radiated into a solid angle in the direction , and then measured at the position and the time is , for an infinitesimal time interval .hence , is the energy per unit area per unit time measured at the position on the sphere .consequently , the radiated power passing through the surface of a surrounding sphere per unit solid angle in the direction is eq . ( [ eqjak3 ] ) is the angular distribution of radiated power per unit solid angle , as measured by observers on a surrounding sphere .in contrast , according to the new formulation , the energy flux density of radiation is thus , the angular distribution of radiated power per unit solid angle , relative to the position of the charge at the retarded time , is by integrating eq .( [ huangeqjak3 ] ) over a surrounding sphere , the total radiated power crossing any surrounding sphere is the total radiated power of radiation crossing any surrounding sphere is equal to the total power emitted by the charge at the retarded time , the so - called linard s result , as it should be to fulfill the principle of conservation of energy . yet , according to the currently accepted formulation , by integrating eq .( [ eqjak3 ] ) over a surrounding sphere , the total power of radiation crossing any surrounding sphere is not equal to the linard s result .the currently accepted formulation violates the principle of conservation of energy .in the comment , singal first claimed that our reasoning above is fallacious , because we equate the evaluated result of the total radiated power of radiation crossing a sphere to the total power emitted by the charge .then , he presented a resolution to the problem . referring to fig .[ fig1 ] , at time the radiation in the region enclosed by spheres and is due to the radiation emitted by the charge during the time interval from to .the radiation region is not spherically symmetric around , since the charge moves a distance from to .thus , the radiation emitted by the charge during the time interval does not cross the sphere at all points in the time interval , rather it takes . therefore , from eq .( [ eqjak3 ] ) , the total energy of radiation enclosed by spheres and is evaluated as the total energy of radiation is due to the total energy emitted by the charge during the time interval . is equal to the total power emitted by the charge at the retarded time , that is , therefore , the currently accepted formulation does not violate the principle of conservation of energy . in that case , the new formulation violates the principle of conservation of energy .yet , it should be noted that is not the total radiated power crossing the sphere at time . 
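for orientation , the standard textbook expressions entering the currently accepted formulation ( see , e.g. , jackson , chapter 14 ) can be restated as follows , in gaussian units and with the velocity and acceleration of the charge evaluated at the retarded time ; dP(t)/d\Omega is the power received per unit solid angle by observers on a surrounding sphere and dP(t')/d\Omega is the power referred to the retarded time of the charge , the factor relating them being the one whose interpretation is disputed below .

```latex
\frac{dP(t)}{d\Omega}
  = \frac{e^{2}}{4\pi c}\,
    \frac{\bigl|\hat{n}\times[(\hat{n}-\boldsymbol{\beta})\times\dot{\boldsymbol{\beta}}]\bigr|^{2}}
         {(1-\hat{n}\cdot\boldsymbol{\beta})^{6}} ,
\qquad
\frac{dP(t')}{d\Omega} = (1-\hat{n}\cdot\boldsymbol{\beta})\,\frac{dP(t)}{d\Omega} ,
\qquad
P_{\mathrm{Li\acute{e}nard}} = \frac{2e^{2}}{3c}\,\gamma^{6}
  \bigl[\dot{\boldsymbol{\beta}}^{2}-(\boldsymbol{\beta}\times\dot{\boldsymbol{\beta}})^{2}\bigr] .
```

the solid - angle integral of dP(t')/d\Omega reproduces the linard power , whereas the solid - angle integral of dP(t)/d\Omega does not ; if eq . ( [ eqjak3 ] ) is read as dP(t)/d\Omega above , this is precisely the discrepancy at issue .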
according to singal s reasoning , what is the total radiated power crossing the sphere ?in our reasoning , we do not derive the linard s result , and not equate the evaluated result of the total radiated power crossing a sphere to the linard s result .instead , we compare the evaluated result with the linard s result to see which formulation satisfies the principle of conservation of energy .it turns out that currently accepted formulation , instead of the new formulation , violates the principle of conservation of energy .there are problems in the currently accepted formulation .since the charge is accelerated , it does not move uniformly during the time interval .thus , the angular distribution of radiated power in the time interval is not exactly in accordance with eq .( [ eqjak3 ] ) as evaluated at merely one retarded time , because the time - retarded positions and velocities of the charge change during the time interval .the total power emitted by the charge as evaluated in accordance with singal s reasoning should be only approximately valid , as noticed by panofsky and jackson . therefore , it is very unlikely that the _ exact _ linard s result is obtained simply by an approximation approach , unless eq .( [ eqjak3 ] ) is only approximately correct .according to singal , the factor in the time interval is interpreted as just a matter of simple geometry . yetthis interpretation negates the existent interpretations : a lorentz transformation of time between the charge s frame and the observer s frame , or something similar to the doppler effect .which interpretation is correct ?foremost , singal does not resolve the problem that the currently accepted formulation violates the principle of conservation of energy .the total radiated power crossing a surrounding sphere at time is not equal to .suppose that one wants to evaluate the total radiated _ power _ crossing a sphere .one first measures the total radiated _ energy _ crossing the sphere in a time interval .then , the total radiated power crossing the sphere is given as .it should be emphasized that the time interval must be the same , as the measurement of the total radiated energy is carried out _ at all points _ on the sphere .however , according to singal s reasoning , the measurement of the total radiated energy crossing the sphere is carried out with different time intervals at different points on the sphere . is meaningless , and is not the total radiated power crossing the sphere . if singal thinks that the total radiated power crossing a surrounding sphere is , and it is equal to the total power emitted by the charge at the retarded time , then he might make a mistake in the meaning of the total radiated power crossing a sphere .hence , the problem that the currently accepted formulation violates the principle of conservation of energy remains unsolved .another issue in the comment is `` while deriving expressions for the electric and magnetic fields , and , huang and lu in their eqs .( 20)-(25 ) simply replaced with which is not correct as these two quantities are actually related by , where is the doppler factor [ 3 ] .thus their transformed electric and magnetic fields are wrong . 
''the newly derived expressions for the electromagnetic fields become the currently accepted expressions , if , instead of , is employed in the transformation of electric and magnetic fields between inertial frames .first , that the new formulation , rather than the currently accepted formulation , fulfills the principle of conservation of energy reinforces the validity of the replacement in the transformation .furthermore , maxwell s equations of electrodynamics are shown form - invariant via a novel perspective on relativistic transformation transformation of physical quantities , instead of space - time coordinates .an extra transformation of spacial coordinate such as is not necessary in the transformation of electric and magnetic fields to render maxwell s equations form - invariant among inertial frames . according to singal , the expression is considered as due to the doppler effect .yet , the doppler effect is the transformation of physical quantities of waves such as frequency and wave vector relative to inertial frames , instead of space - time coordinates .the expression has nothing to do with the doppler effect , since it involves spatial coordinates only , without frequency and wave vector of waves . a systematic method to derive the doppler effect , without involving transformation of space - time coordinates , was presented in the literature .even more , in a certain case an anomaly the problem of negative frequency of waves , was found by applying the invariance of the phase of waves which is equivalent to relativistic transformation of both physical quantities and space - time coordinates simultaneously .this indicates that the invariance of the phase of waves is invalid .therefore , the doppler effect should be related to the transformation of physical quantities of waves only . in singal s interpretation ,the factor in is a matter of simple geometry , whereas the factor in is due to the doppler effect .the two interpretations seem incompatible .ambiguities on the meaning of and still exist in the currently accepted formulation of electromagnetic radiation of an accelerated charge . owing to a misunderstanding in the meaning ofthe total radiated power crossing a sphere , the serious problem that currently accepted formulation violates the principle of conservation of energy remains unsolved .such controversies as paradoxes in special relativity are hardly resolved just by theoretical reasoning . in order to convincingly determine the validity of the currently accepted formulation , an experimental test on the angular distribution of radiated power was proposed .nonetheless , it is necessary to clarify which the angular distribution of radiated power is : eq .( [ eqjak3 ] ) , or from eq .( [ toteqjak3 ] ) further theoretical and experimental examinations on the currently accepted formulation and the new formulation should be highly welcome . 99 a.k .singal , `` a first principles derivation of the electromagnetic fields of a point charge in arbitrary motion '' , am .* 79 * , 1036 - 1041 ( 2011 ) .young - sea huang and kang - hao lu , `` exact expression for radiation of an accelerated charge in classical electrodynamics '' , found .phys . * 38 * , 151 - 159 ( 2008 ) .singal , `` comment on ' ' exact expression for radiation of an accelerated charge in classical electrodynamics `` '' , found. phys . * 43 * , 267 - 270 ( 2013 ) .young - sea huang , `` is the current formulation of the radiation of an accelerated charge valid ? 
'' , nuovo cimento b * 124 * , 925 - 929 ( 2009 ) .w.k.h . panofsky and m. phillips , _ classical electricity and magnetism _ , ( addison - wesley publishing , inc . , 1962 ) , chapter 20 .jackson , _ classical electrodynamics _ , third edition , ( john wiley & sons inc ., new york , 1999 ) , chapter 14 .griffiths , _ introduction to electrodynamics _ , 2nd edition , ( prentice - hall , inc ., new jersey , 1986 ) , chapters 9 and 10 .landau , and e.m .lifshitz , _ the classical theory of field _ , fourth edition , ( pergamon press ltd. , 1975 ) , chapter 9 .smith , _ an introduction to classical electromagnetic radiation _ , ( cambridge university press , new york , 1997 ) , chapter 6 .young - sea huang , `` a new perspective of relativistic transformation for maxwell s equations of electrodynamics '' , phys .* 79 * , 055001 ( 2009 ) ; `` a new perspective on relativistic transformation versus the conventional lorentz transformation illustrated by the problem of electromagnetic waves in moving media '' , * 81 * , 015004 ( 2010 ) ; `` a new perspective on relativistic transformation : formulation of the differential lorentz transformation based on first principles '' , * 82 * , 045011 ( 2010 ) .young - sea huang , can ., `` formulation of the classical and the relativistic doppler effect by a systematic method '' , * 82 * , 957 - 964 ( 2004 ) . young - sea huang , `` the invariance of the phase of waves among inertial frames is questionable '' , epl * 79 * , 10006 ( 2007 ) ; `` is the phase of plane waves an invariant ? '' , z. naturforsch .* 65a * , 615 ( 2010 ) .
|
flaws and ambiguities are pointed out upon examining the comment attempting to solve a problem raised recently , namely that the currently accepted formulation of electromagnetic radiation of an accelerated charge violates the principle of conservation of energy . this problem is not solved by the comment , due to a misunderstanding of the meaning of the total radiated power crossing a sphere . an experiment is suggested to determine whether or not the currently accepted formulation is valid . in the recent literature , without giving any reason , singal made the hasty conclusion that the newly derived exact expression for the electromagnetic radiation of an accelerated charge by huang and lu is incorrect . in the comment , singal attempted to resolve a serious problem raised recently , namely that the currently accepted formulation of electromagnetic radiation of an accelerated charge violates the principle of conservation of energy . with a view to making a comprehensible reply , the relevant expressions of both the currently accepted formulation and the new formulation , as well as the discrepancy between them , are first presented .
|
the advancements in the physical layer technology has enabled cellular networks ( e.g. , 3 g and 4 g deployments like mobile wimax , lte advanced ) and wlans ( e.g. , ieee 802.11n ) support hundreds of megabits per second . however , with more and more users now accessing the internet using wireless as the last mile , there is a continuous necessity to judiciously use the available network resources .cross - layer strategies have become extremely helpful in supporting the ever increasing demand for bandwidth and stringent qos .opportunistic scheduling and multiuser diversity ( see ) is one such popular cross - layer technique recommended in current cellular standards and in ad hoc deployments for increasing the available network capacity . unlike the wired channel , the wireless channelwill always be constrained by fading and interference .multiuser diversity enhances the network performance by wisely scheduling the users when their relative channel conditions are better .opportunistic scheduling is known to significantly improve the network performance , especially for elastic traffic with loose delay constraints .opportunistic scheduling involves learning the channel state information of the contending users and scheduling the user with a relatively better channel .centralized schemes like polling incur a lot of overhead and may not scale well with the number of users . for such schemes , the rate region of the channel and the set of feasible qosare well known ( see e.g. , ) .the performance of the system with partial channel state information was studied in .there is a lot of interest in developing distributed and semi - distributed algorithms for opportunistic scheduling .one popular technique has been to adjust the backoff parameters of the nodes based on their instantaneous channel gain .a number of works have studied the optimal performance and the achievable throughput of such strategies ( see e.g. , ) . in ,the authors propose a splitting algorithm that resolves contention with feedback from the base station .the distributed strategies incur losses due to collisions but are known to very efficient especially for networks with a large number of users .in this work , we are interested in the contention resolution problem of resolving the identity of the user with the highest channel gain .we formulate the contention resolution problem for opportunistic scheduling as identifying a random threshold ( channel gain ) that separates the best channel from the other samples .we show that the average delay to resolve contention is related to the entropy of the threshold random variable .we illustrate our formulation by studying the opportunistic splitting algorithm .we show that osa is a maximal probability allocation scheme and we conjecture that mpa is an entropy minimizing strategy and a delay minimizing strategy as well . in this work, we have studied opportunistic scheduling for users with i.i.d .channel gains .we believe that our formulation of contention resolution as a source code can help develop optimal strategies for a variety of other network scenarios as well .the idea of splitting with ternary feedback was originally proposed for scheduling users in aloha type networks ( see ) . in ,arrow et al . 
, study a problem of resolving the user with the highest sample value with binary type questions .the optimal strategy was studied when accurate feedback of the number of contending users involved in every slot was available .the near optimality of greedy strategies ( like mpa studied in section [ sec : osa_source_coding ] ) was also discussed in . in , anantharam and varaiyaprove the optimality of binary type questions to minimize the average delay in .the performance of binary type questions in the presence of ternary feedback was first reported in .the optimal thresholds were obtained and the relevance to opportunistic scheduling was discussed . in ,qin and berry study splitting with ternary feedback for opportunistic scheduling for i.i.d . wireless channel .we have briefly described the algorithm in section [ sec : osa ] ; we motivate our formulation of contention resolution as a source coding problem by studying the opportunistic splitting algorithm presented in .splitting algorithms have been studied for other network and channel scenarios as well . in ,kessler and sidi study splitting algorithms for noisy channel feedback . in ,qin and berry report the performance of splitting for different notions of fairness . in , yu and giannakisstudy the performance of splitting with successive interference cancellation in a tree algorithm . in this work ,we restrict to i.i.d .wireless channel under ideal channel assumptions ; our aim is to present an alternate formulation for contention resolution using a source coding framework .there are number of works concerning distributed opportunistic feedback schemes for wireless systems ( see e.g. , ) . in , qin and berry proposes a channel aware aloha and characterizes its performance . in , patil and de veciana discuss about reducing feedback for opportunistic scheduling to support best effort and real time traffic . in this work ,we consider a semi - distributed framework where the base station helps resolve contention with feedback . in section [ sec : network_model ], we describe the network model and the opportunistic resolution problem . in section [ sec : osa ] , we briefly describe the opportunistic splitting algorithm from and motivate our formulation . in section [ sec : source_coding ] , we present contention resolution problem for opportunistic scheduling as a source coding problem . in section [ sec : osa_source_coding ] , we characterize osa using a maximal probability allocation scheme and study its performance . in section [ sec : two_examples ] , we discuss the applicability of our framework for other network scenarios and in section [ sec : conclusion ] , we conclude the paper and discuss future work .we consider the downlink wireless channel of a single cell of a cellular data network ( or of a single cell wlan in an infrastructure setup ) . 
a fixed number of users , , share the slotted wireless channel over time .we assume that the channel gain between the base station and the wireless users is independent and identically distributed with a common continuous distribution .we also assume that the users have knowledge of the common channel distribution and the number of users in the network , .let represent the vector channel gain of the users in slot .we assume that every user would know its instantaneous channel gain at the beginning of every slot , but that information is not available with other users in the network , including the base station .the channel state information can be made available to the user by the transmission of a pilot signal by the base station at the beginning of the slot .the base station seeks to identify and schedule the user with the highest channel gain in every slot ( opportunistic scheduling ) , i.e. , the base station seeks to schedule in slot . define , the cumulative distribution value in the slot .then , the vector is i.i.d .uniform in ] and consider as the channel gain variables .the base station resolves the identity of the user with the highest channel gain by coordinating the contention resolution process and by providing necessary feedback to aid in the resolution .we assume that a time slot comprises of mini slots , where the mini slots are used to resolve the contention .for example , the users can transmit mac packets ( like rts / cts in ieee 802.11 dcf ) , possibly with some channel information , to the base station in a minislot and the base station can feed back the state of the contention in that slot .we assume that the base station feeds back the result of the contention within the minislot and the feedback of the base station is received by all the nodes in the network without any error . at the end of the contention process, the user that succeeded in the contention is permitted to transmit data in the remainder of the slot . in this setup , an objective of the base station would be to minimize the average number of minislots required to identify the user with the highest channel gain .in this section , we briefly describe a contention resolution strategy , opportunistic splitting algorithm ( osa ) from , for a fixed number of users and for i.i.d .block fading wireless channel .polling for opportunistic scheduling requires minislots to identify the user with the highest channel gain .osa is a distributed medium access control protocol that uses ternary feedback to identify the user with the best channel with a constant overhead .a time slot is assumed to comprise of a maximum of minislots which are used for contention resolution . in every minislot, osa describes a continuous range in ] ; only the user(s ) whose channel gain values fall within the range will transmit contention resolution packets in the minislot . at the end of the minislot , every user receives a feedback from the base station of or or , indicating if the minislot was idle ( no transmission ) , contained a successful packet transmission or involved an error due to collision , respectively . if the feedback is , the lone transmitter is declared the winner of the contention and is permitted to transmit data for the remaining duration of the slot . 
if the feedback is or , then the range is suitably adjusted and the contention resolution process continues until either a success occurs or the time - slot ends .the following pseudo - code describes the osa algorithm for a fixed number of users and for i.i.d .channel gain ( see for more details ) . in the pseudo - code, denotes the feedback in a minislot and is the count of the number of minislots used for contention resolution .initialize : initialize : and feedback from ] , the probability of success ( identifying the user with the best channel ) in a minislot with the range ] .when a collision occurs , osa assumes that the most likely scenario is that two users are involved in the collision , and hence , it updates the threshold from ] ( the optimal strategy if there are only two contending users ) .osa is an effective contention resolution strategy with the average number of minislots required to resolve contention known to be less than 2.5070 slots , independent of the number of users and channel gain distribution . in this section, we will discuss in detail the opportunistic splitting algorithm for the two user case .the example will help us motivate the source coding framework described in the section [ sec : source_coding ] .let and let correspond to the vector channel gain of the two users in a slot .define and .then , is the ordered pair of the channel gain values where .osa initializes with and . in the first minislot , only the user(s ) with transmit a control packet .a success ( a single transmission ) happens in the first minislot iff ( ) or ( ) , i.e. , a success happens iff .the probability of the event can easily be computed and is equal to . * thus , contention is resolved in the first minislot whenever and the probability of the event is ; the threshold that resolves the contention successfully for the set is and the base station feeds back a in this case*. in the first minislot , an error due to collision occurs iff and the slot is left idle iff .suppose that the feedback in the first minislot is .then , osa updates the variables as and . in the second minislot , only the user(s ) with transmit a control packet .a success happens now iff and the conditional probability of the event ( conditioned upon a collision in the first minislot ) is . *thus , contention is resolved in the second minislot whenever and the probability of the event is ; the threshold that resolves the contention successfully for the set is and the base station feeds back a in the first two minislots*. .the probability distribution on the threshold / feedback corresponding to osa for users . [ cols="^,^,^,^,^",options="header " , ] in table [ tab : osa_two_users ] , we have listed sets of ordered two tuples along with the threshold ( ) for osa that resolves the set . the feedback from the base station corresponding to the threshold ( equivalently , the set ) and the probability of the threshold ( equivalently , the feedback ) is also listed in the table .[ rem : osa_2 ] we make the following observations from the table [ tab : osa_two_users ] . 1 .the threshold ( ) that resolves is always such that , i.e. 
, osa resolves contention by identifying a threshold between the user channel gains .the threshold is fed back to the users in ternary alphabet .the lone user with a channel gain strictly greater than the threshold value fed back by the base station would learn about its successful contention resolution and the other users would refrain from transmitting any further in the slotthe feedback for a threshold is , in fact , the binary expansion of ( when feedback and feedback is mapped to and feedback is mapped to ) .the feedback is equivalent to feedback followed by an eoc ( end of contention ) in this case .3 . the thresholds that resolve contention for osa form a countable set with a valid probability distribution ( the probabilities sum up to ) .the average delay to resolve contention is equal to the average length of the feedback , which is a function of the probability distribution of the threshold random variable .the probability distribution is a function only of the contention resolution algorithm ( for the i.i.d .an optimal choice of the thresholds can minimize the average description length of the feedback and the delay to resolve contention . in section [ sec : source_coding ], we will propose a general framework for contention resolution for opportunistic scheduling motivated by the above observations .in this section , we will formulate contention resolution for opportunistic scheduling with ternary feedback as identifying a random threshold ( channel gain ) that separates the best channel from the other samples .let correspond to the vector of i.i.d .channel gain values in a slot and let be the ordered n - tuple of channel gain values of the users such that .the base station seeks to identify , or , equivalently , in the slot .we aim to resolve the contention by identifying a threshold such that ; the base station will feedback the threshold using ternary alphabet of and which aids in resolving the contention . and are random variables , and hence , the threshold will also be a random variable . obviously , the uncertainty in would be a measure of the average description length of the threshold / feedback .let \times [ 0,1 ] \rightarrow [ 0,1] ] , such that .let have a discrete distribution , i.e. , let there exist a set and a set of probabilities such that , and . then , the entropy of the random variable ( equivalently , the code ) is defined as clearly , the entropy would approximate the average length of the feedback required for a contention resolution algorithm that resolves a two tuple with threshold .the code can , in general , take a continuous sample space , all of ] .given users and thresholds ] is the unique stationary point of . * remarks * 1 . for any , and with ,the above expression becomes , .the expression is maximized at .hence , for any , .2 . as an example , for , repeating the above procedure will yield us .note that the above values are in fact the thresholds reported in table [ tab : osa_two_users ] .3 . in remark[ rem : osa_2 ] , for the case , we noticed that the feedback from the base station corresponding to a threshold can be viewed as the binary representation of the threshold itself .for general , the feedback from the base station can still be viewed as the binary representation of the threshold , however , with the weights corresponding to a position computed from the thresholds obtained from the pseudo - code .for example , the weight of the first position will be equal to . 
.we have also plotted the entropy of the threshold random variable corresponding to mpa . ] in figure [ fig : osa_vs_mpa ] , we plot the average delay performance of osa ( as described in section [ sec : osa ] ) and the maximal probability allocation code .as expected , the performance of osa and mpa are similar and in fact , mpa performs better than osa as it identifies the optimal thresholds without any approximations ( see remark [ rem : osa ] ) .we have also plotted in the figure [ fig : osa_vs_mpa ] , the entropy of the maximal probability allocation code in bits . as expected , the entropy of the random variable reflects the average delay performance of the contention resolution algorithm as a function of very well .entropy is a concave function of the distribution .the maximal probability allocation code identifies a local minima in the space of probability distributions . from limited numerical work (not reported in this paper ) , we conjecture that the maximal probability allocation code is a globally entropy minimizing strategy as well . the following theorem proves the optimality of mpa for the case .mpa is a delay minimizing strategy and an entropy minimizing strategy for case .let ] independent of the other users .then , osa can be used to resolve contention among the users by identifying the user with the largest value of ; this is a popular strategy to apply osa for discrete channel distributions .the average number of slots required to resolve contention using osa is then slots ( obtained from simulations ) .the osa , in every slot , attempts to identify a such that the probability of a unique user in the interval ] or in .the following algorithm is a contention resolution strategy optimized for this problem .initialize : initialize : feedback for interval $ ] feedback from interval ( ) using simulations , we observe that the average effort needed to resolve contention is slots much less than the slots required by osa .the proposed algorithm makes use of the fact that , in the event of a collision , the probability that two users are involved is significantly higher than the probability that three users are involved in the collision .the contention resolution problem was formulated as identifying a random threshold between and ( ) or between and ( ) .the entropy of the proposed strategy was observed to be strictly smaller than the entropy of the maximal probability allocation scheme of osa .we consider a wireless downlink channel with users .we assume that the wireless channel of the two users is correlated with the sample space , and with the joint probabilities , where .osa would maximize the probability of success in every minislot and hence , would consider the thresholds in the following sequence ( if we restrict to integer thresholds ) and .the average number of minislots required to resolve contention with osa / mpa is . in general ,if there are channel states , then the average number of slots required to resolve contention is approximately . 
consider the following alternative strategy in resolving contention .in the first minislot , we consider the threshold value to resolve contention .if a collision occurs in the first minislot , then the next threshold would be for the second minislot and in the event of an idle first minislot , the next threshold would be set to for the second minislot .similarly , if there is collision in the first two minislots , then , the threshold would be set to for the third minislot and so on .if there is a unique user attempting in a minislot , the contention resolution algorithm stops .the average number of minislots required to resolve contention with this strategy is approximately ; in general , if there are channel states , then the average number of minislots required would be .we note that , for large , the above strategy is strictly optimal than the osa .the contention resolution problem can be formulated as identifying a random threshold such that .clearly , the minimum entropy for the wireless channel is approximately and is equal to the average number of minislots required to resolve contention .the two examples clearly illustrate that a maximal probability strategy like the osa is not optimal for all channel scenarios .also , the source - coding technique could provide us a way to identify the optimal contention resolution strategy under general channel scenarios as well .in this paper , we have modeled contention resolution for opportunistic scheduling as a source - coding problem . the entropy of a certain random variable is seen to approximate the average number of slots required to resolve contention .we characterized osa as a maximal probability allocation scheme and obtained the thresholds for contention resolution ( in osa ) from its source code .we note that mpa provides us a local optima , and we conjecture that mpa is globally optimal as well ( for i.i.d .channel conditions ) .we believe that the information theoretic view point can be used to develop contention resolution algorithms for a variety of other network scenarios as well ( e.g. , partial network information , limited channel feedback ) .
|
we consider a slotted wireless network in an infrastructure setup with a base station ( or an access point ) and users . the wireless channel gains between the base station and the users are assumed to be i.i.d . , and the base station seeks to schedule the user with the highest channel gain in every slot ( opportunistic scheduling ) . we assume that the identity of the user with the highest channel gain is resolved using a series of contention slots and with feedback from the base station . in this setup , we formulate the contention resolution problem for opportunistic scheduling as identifying a random threshold ( channel gain ) that separates the best channel from the other samples . we show that the average delay to resolve contention is related to the entropy of the random threshold . we illustrate our formulation by studying the opportunistic splitting algorithm ( osa ) for i.i.d . wireless channels . we note that the thresholds of osa correspond to a maximal probability allocation scheme . we conjecture that maximal probability allocation is an entropy minimizing strategy and a delay minimizing strategy for i.i.d . wireless channels . finally , we discuss the applicability of this framework to a few other network scenarios .
|
observations of the cosmic microwave background ( cmb ) is fundamental for our understanding the primordial inhomogeneity of the universe .after the successful cobe experiment , attention has been focused on the investigation of small scale perturbations , that can provide unique information about the most important cosmological parameters .one of the major problems in the modern cmb cosmology is to separate noise of various origins ( such as dust emission , synchrotron radiation and unresolved point sources ( see e.g. banday et al . 1996 ) ) from the original cosmological signal .many authors have already applied various methods such as wiener filtering ( tegmark and efstathiou 1996 , bouchet and gispert 1999 ) , maximum entropy technique ( hobson et al .1999 ) , radical compression ( bond et al .1998 ) , power filtering ( gorski et .1997 , naselsky et .al . 1999 ) and wavelet techniques ( e.g. sanz et al . 1999 ) to extract noise from the cmb data .all of these techniques have been tested for removing the noise from the real observational data .it is necessary to note that , for different strategies and for different experiments , different schemes could be chosen as most appropriate .the choice of the algorithm also depends on the particular type of foreground emission to be extracted .the aim of our paper is to overcome the problem of detecting and extracting the background of unresolved point sources from the original map .the measured signal in the real observational data is always smoothed with some filtering angle because of the final antenna beam resolution .therefore , unresolved point sources could make a significant contribution to the resulting signal on all scales .this type of noise should be removed from the original map before any subsequent analysis is made .recently ( cayon et al . 1999 ) have proposed the use of isotropic wavelets for removing noise in the form of point sources .their technique is based on the fact , that the field in the vicinity of the source should be in the form of the antenna profile .unfortunately the gaussian cmb field can also form real peaks with the same profile , so that a lot of artificial sources could be found using this technique .besides , the antenna profile is not necessarily isotropic ( indeed , as a rule it is very anisotropic ) .therefore , isotropic wavelets should not be considered as an absolute cure against such a type of noise . in this paperwe consider an approach , which is based on the distribution of phases .the idea of using phases of random fields was introduced by a.melott et al ( 1991 ) ; coles and chiang ( 2000a , b ) for the large scale structure formation in the universe . belowwe develop the phase - amplitude analysis method for investigation of the cmb anisotropy and foreground .the outline of the paper is as follows . in section 2we briefly review the basic definitions , consider a simulated one - dimensional scan of the cmb first with a single point source , then with a background of such a sources . in section 3we generalize our results into two - dimensional maps . finally , we suggest an algorithm for denoising . 
in section 4we discuss the results and potential of the method for analyzing high resolution maps .in this section we consider 1d cmb scans with a background of point sources .this approach could be very useful for data analysis of one - dimensional experiments with high resolution ( such as ratan 600 ) .we extend this discussion to two - dimensional experiments ( such as the new generation of interferometer experiments ) in section 3 .the investigation of point sources is especially easy in one dimension , can be easily generalized into two - dimensional maps and will help us to understand the advantage of the proposed technique . *definitions * in 1d the deviation of the temperature from its mean value in a scan is described by the simple fourier series : where k is an integer number and can be expressed in terms of of the real angle on the sky ( ) as follows : . here means the total length of the scan .the detected temperature fluctuations can as usual be naturally divided into two parts : cosmological signal and noise : where and denote signal and noise respectively . therefore , the fourier transform components can be also expressed as a sum of fourier decomposition of these two terms : the statistically isotropic distribution of the cmb temperature anisotropy is supposed to be in the form of a random gaussian field with the power spectrum , which determined by the appropriate cosmological model .the coefficients depend on the spectrum of the cmb , the antenna filtering function and the actual realization of the random gaussian process on the sky .in general , they obey the formulae : . here, is the fourier transform of the filtering function and is the antenna resolution angle . is a wavenumber which corresponds to this resolution : . in our simulations we use the usual expression for : where are independent gaussian numbers with zero mean and unit dispersion . in this paperwe consider the noise in the form of isolated unresolved point sources .this means that the average distance between sources is larger than the resolution scale .therefore , the shape of the noise field around the point source determined by the filtering function f : where are the amplitude and the position of the j - th point source , respectively , and is the total number of point sources in the considered scan .according to equation [ 5 ] , the fourier components of the noise can be described by the following very simple and convenient formulae : for further investigation we have to introduce the phase : of the k - th harmonic .using equations [ 1,3 ] one can write : = \arctan\left[\frac{b_k^s+b_k^n}{a_k^s+a_k^n}\right]\ ] ] if the resulting field at the scales is dominated by the gaussian cmb signal ( ) , then . in this casethe phases of the k - th harmonics are random independent uncorrelated values , uniformly distributed from 0 to . on the other hand , if the signal at these scales is much smaller than the noise , then the distribution of phases is determined by the positions and amplitudes of point sources on the scan . in fig.1, we present the spectrum of cmb in one dimension for the standard cdm model together with the spectrum of point sources . 
both spectra are smoothed with the gaussian filtering function .it is well known that the cmb signal disappears when becomes larger than some value .this value corresponds to the damping scale of the cmb fluctuations .therefore , at the small scales the resulting field is dominated by the noise .note that should not be necessary interpreted as the damping scale . roughly speaking ,this is the scale where noise from sources becomes larger then the cmb signal .it is easy to see , from equations [ 4,6,7 ] , that the process of smoothing does not change the phases of the primordial signal .the filtering function has simply disappeared from the right hand side of the equation [ 7 ] .therefore , if , we have the possibility of measuring the phases only for high values of the noise .below we describe how the information about the phase distribution for high values of can be used for very precise detection and extraction of the contribution from the sources for all values of * detection of a single point source * let us consider the simplest example by dealing with a single unresolved point source on the scan . in order to remove the contribution from this source , we have to know its precise location and amplitude ( see fig.2 ) . the contribution from this source to the resulting field according to equation [ 5 ] is then : where is the maximum value of that can be detected in the experiment .as has been already mentioned , for larger then some value , phases are just the phases of the point source . from equations [ 6,7 ] ,we obtain : it suffices to have only two phases ( for example and , ) to find the location of the source : in fig.3 we show the behavior of the phases , together with the phases of the source . for small values of : the phases are distributed uniformly and at large we can definitely see the regular structure that is consistent with equation [ 9 ] . in fig.4we also show the positions of maxima for all harmonics .location of the maxima for the k - th harmonic can be found by the formulae : where n is an integer number .the straight vertical line points to the location of the source because one of the maxima in each harmonic is coincident with this location .the remaining part of the problem is to find the amplitude - .let us defined the field as a part of the field that consists only of the high harmonics : using the formulae [ 8 ] , we now can write down the obvious relation : therefore , according to [ 3,6 ] , we have found the contribution from this source to all harmonics from to .* background of point sources * in this subsection we generalize our algorithm to the case where there are an unknown number of point sources in the considered scan . 
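before turning to the case of many sources , the phase relation just derived can be checked numerically . the toy scan below contains a smooth low - k field and a single beam - smoothed source ; the spectrum , beam width , source amplitude and position are illustrative values only ( they are not the scdm spectrum of fig . 1 ) , and the sign of the phase follows numpy s fft convention . two neighbouring harmonics taken in the noise - dominated range recover the source position .

```python
import numpy as np

N, L = 4096, 1.0
x = np.arange(N) * L / N
k = np.fft.rfftfreq(N, d=L / N)            # 0, 1, 2, ... cycles per scan length
theta_a = 0.002 * L                        # beam width (illustrative)
amp, x0 = 10.0, 0.3182 * L                 # source amplitude and position

rng = np.random.default_rng(3)
# toy smooth 'signal' with power only at low k, standing in for the cmb term
signal = np.fft.irfft(np.fft.rfft(rng.standard_normal(N)) * np.exp(-(k / 40.0) ** 2), n=N)
# one unresolved point source convolved with the gaussian beam
scan = signal + amp * np.exp(-0.5 * ((x - x0) / theta_a) ** 2)

# above the scale where the signal has died out, the phase of the k-th
# harmonic is -2*pi*k*x0/L, so two neighbouring harmonics give the position
c = np.fft.rfft(scan)
k1 = 120                                   # chosen in the noise-dominated range
dphi = np.angle(c[k1 + 1]) - np.angle(c[k1])
x0_est = ((-dphi / (2.0 * np.pi)) % 1.0) * L
print(x0, x0_est)                          # the two agree to a small fraction of L
```

with this check in hand , we return to the case of an unknown number of sources .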
in a situation like this, we have to find not only positions and amplitudes of each source but also the total number of them : .we believe that many different techniques based on the results of the previous subsection could be proposed to solve this problem .we suggest a simple iteration scheme .as has been already noticed above , we can consider the field , which consists only of high harmonics .therefore , only point sources make a contribution to this field : we now introduce the filter function and consider the filtered field : according to [ 14,15 ] , one can write : if we can put , then the second term on the right hand side of equation [ 16 ] is small and : this equation is very close to [ 8 ] and , therefore , the procedure of filtering gives us the possibility of localizing the field in the vicinity of a point source .in reality equation [ 17 ] is not quite correct because and are values of approximately the same order and , therefore , peaks , that are more or less close to each other can interfere ( fig .this is the reason why we choose the iteration technique to remove point sources .we propose the following algorithm .let us construct the field and find its highest maximum .this maximum most probably corresponds to the most powerful isolated point source on the scan .the position and value of this maximum give us the location and the amplitude ( eq[13 ] ) of this source .after that , we construct the field without this point source : the contribution from this source to the field and its interference with other sources is now removed .this allows us to find more precisely the next highest maximum .therefore , we apply the same procedure to the field and find , and so on ( fig.5 ) .we perform these iterations until the dispersion ( ) becomes significantly smaller then ( fig.6 ) .the total number of iterations that is needed to significantly reduce the initial dispersion gives us approximately the number of point sources and each iteration gives the location and the amplitude of the i - th source .note , that and roughly speaking , in fig.7 we can see the cumulative distribution of point sources over the power . finally , since we have the position and amplitude , the contribution to the field from all point sources may be removed in the same manner , as was done for a single point source in the previous subsection .in this section we briefly describe our results in two dimensions . without loss of generality we may consider a small region of the sky and assume that the geometry is approximately flat . under this assumption ,the part of the detected signal which is determined by the noise associated with point sources can be represented according to the previous section by writing : where is the position of the j - th point source in the cartesian coordinate system and corresponds to the antenna resolution .analogously to the one - dimensional case , this field should be filtered with some appropriate function .the convenient filter function that we use in this case is as follows : according to [ 15,20,21 ] one can write : therefore , the filtered function at the point has a peak with amplitude equal to the power of j - th point source times the number of modes that we can use for data analysis in the appropriate experiment . in our simulations of the signal+noisewe use the standard cdm model and background of 100 point sources randomly distributed over the map . 
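a compact sketch of the iterative peak removal described above is given below for the one - dimensional scan ( it reuses the toy scan , theta_a , x0 and amp of the previous snippet ) ; the two - dimensional case proceeds in the same way with the filter of this section . the amplitude is recovered here by dividing the peak of the filtered field by the peak of the band - limited beam template , and the stopping rule , the cutoff k_star and the factor of three are illustrative choices .

```python
import numpy as np

def remove_point_sources(scan, k_star, theta_a, L=1.0, max_iter=50, stop_factor=3.0):
    """iterative removal of unresolved sources from a 1-d scan with a
    gaussian beam of width theta_a: restrict the scan to harmonics above
    k_star (where the signal is negligible), take the highest peak as the
    strongest source, subtract its beam-shaped contribution, and repeat
    until the dispersion of the filtered field has dropped by stop_factor.
    returns the list of (position, amplitude) estimates."""
    N = scan.size
    x = np.arange(N) * L / N
    k = np.fft.rfftfreq(N, d=L / N)
    highpass = (k > k_star).astype(float)
    def band_limited(f):
        return np.fft.irfft(np.fft.rfft(f) * highpass, n=N)
    # band-limited beam template centred on x = 0, used for the subtraction
    template = band_limited(np.exp(-0.5 * (np.minimum(x, L - x) / theta_a) ** 2))
    filtered = band_limited(scan)
    sigma0 = filtered.std()
    sources = []
    for _ in range(max_iter):
        if filtered.std() < sigma0 / stop_factor:
            break
        j = int(np.argmax(filtered))
        a = filtered[j] / template[0]            # peak height -> source amplitude
        sources.append((x[j], a))
        filtered -= a * np.roll(template, j)     # remove this source's contribution
    return sources

# continuing the toy scan above: the single injected source is recovered
print(remove_point_sources(scan, k_star=80.0, theta_a=theta_a))   # ~ [(x0, amp)]
```

we now return to the two - dimensional simulations .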
without loss of generality we use the simple symmetric gaussian antenna profile .all these calculations , of course , could be done for any arbitrary antenna beam . in fig.8 we show the map of the cmb together with the maps of noise , cmb+noise and the filtered map of cmb+noise .the last one shows us more or less clearly the positions and powers of point sources .the significant anisotropy that appears in the last map occurs for the following reason . according to formula [ 21 ] we use only the set of harmonics that obey the relation .therefore , the number of horizontal and vertical waves is larger than the number of waves in any other direction .( this problem does not occur if we use spherical harmonics with ) .we apply the same iteration technique as in the previous section and therefore separate the noise from the cosmological signal ( fig.9 ) .it is necessary to note that , at the beginning of this process , each iteration removes an appropriate point source for the most powerful and well - separated sources . for the weaker sources ,additional iterations are needed .the signal from the j - th source decreases as , where is the distance from the peak ( in one dimension this dependence is linear ) .this affects the neighboring peaks and can change their amplitudes .therefore , this approximation works if , where is the separation between the i - th and j - th peaks ( the i - th peak is the closest to the j - th peak ) .otherwise , the amplitudes that have been found in each iteration would not correspond to the powers of the sources and we therefore have to perform a number of iterations that is larger than the number of sources .in this paper we present a powerful method for the extraction of unresolved point sources from future high resolution cmb maps ( such as map , planck , vsa , cbi , dasi , ami and ratan-600 ) .our method is based on the distribution of phases .the most important advantage of our technique is that we do not make any strong assumptions about the expected cmb signal or about the antenna profile .most other techniques use the estimated power spectrum of the cmb and noise before the data analysis is implemented ( e.g. the wiener filter ) or they require special assumptions about the antenna profile ( e.g. wavelet techniques ) .it is worth stressing that , for example , assumptions about the cmb power spectrum can lead to incorrect interpretations of the observational data .roughly speaking , by making such assumptions one runs the risk of generating the result one wants , and any discrepancies are considered to be errors .our algorithm is numerically very efficient .it is a linear algorithm and requires operations , where n is the number of pixels .therefore it can be easily applied to the analysis of large data sets .we have demonstrated the accuracy which can be achieved using our algorithm to remove the contribution from point sources on all scales .we believe that this technique is potentially a very powerful tool for extracting this type of noise from future high resolution maps .we are very grateful to i.novikov and a.doroshkevich for discussions and to p.coles and r.scherrer for informative communications .this investigation was partly supported by intas under grant number 97 - 1192 , by rffi under grant 17625 and by danmarks grundforskningsfond through its support for tac . + banday , a.j . ,gorski , k.m . ,bennett , c.l . ,hinshaw , g. , kogut , a. , & smoot , g.f . , apj .letters , * 468 * , 85 , 1996 + bond , j.r . , a.n.jaffe and l.knox , phys . rev . d ,
57 , 2117 , 1998 .+ bouchet , f.r . & gispert , r. , 1999 , astro - ph/9903176 + cayon , l. , sanz , j.l . , barreiro , r.b . , martinez - gonzalez , e. , vielva , p. , toffolatti , l. , silk , j. , diego , j.m . & argueso , f. , astro - ph/9912471 + coles , p. & chiang , l.y . , 2000a , mnras , * 311 * , 809 + coles , p. & chiang , l.y . , 2000b , nature , * 406 * , 376 - 378 + hobson , m.p . ,barreiro , r.b ., toffolatti , l. , lasenby , a.n . ,sanz , j.l . ,jones , a.w . & bouchet , f.r . , 1999 , mnras , 306 , 232 + gorski , k.m . , 1997 , proceedings of the 31st rencontres de moriond astrophysics meeting , p. 77 , astro - ph/9701191 + guiderdoni , b. , 1999 , astro - ph/9903112 + melott , a. , shandarin , s. & scherrer , r. , 1991 , apj , * 377 * , 79 + novikov , d.i . ,naselsky , p.d . ,jorgensen , h.e . , christensen , p.r ., novikov , i.d . & norgaard - nielsen , h.u . , astro - ph/0001432 + sanz , j.l . , barreiro , r.b . , cayon , l. , martinez - gonzalez , e. , ruiz , g.a . ,diaz , f.j . ,argueso , f. , silk , j. & toffolatti , l. , 1999 , astro - ph/9909497 + tegmark , m. & efstathiou , g. , 1996 , mnras , 281 , 1297 .
|
we propose a novel method for the extraction of unresolved point sources from cmb maps . this method is based on the analysis of the phase distribution of the fourier components for the observed signal and unlike most other methods of denoising does not require any significant assumptions about the expected cmb signal . the aim of our paper is to show how , using our algorithm , the contribution from point sources can be separated from the resulting signal on all scales . we believe that this technique is potentially a very powerful tool for extracting this type of noise from future high resolution maps . _ subject headings : _ cosmic microwave background , cosmology , statistics , observations .
|
one of the most common marine pests in waterways around the world is algae .harmful algal blooms occur across the world and have a wide range of detrimental impacts ( ) .for example , they can replace or degrade other algal species that act as fish breeding grounds , poison fish and mammal marine life through the production of toxins ( ; ) , adversely affect coastal economies through reduced tourism and fishing ( ) , and affect human health through dermatitis ( ; ) , neural disorders and contamination of other seafood such as shellfish ( ) .one of the most common forms of harmful algae is cyanobacteria , or blue - green algae , and one of the most common species of cyanobacteria in tropical and subtropic coastal areas worldwide is _ lyngbya majuscula _( ; ) ._ lyngbya _ , also known as mermaid s hair , stinging limu or fireweed , appears to be increasing in both frequency and extent ( ; ) .these blooms are due to a complex system of biological and environmental factors , exacerbated by human activities ( ) .thus , while there is a wealth of scientific and social literature on different aspects of the _ lyngbya _ problem , for example , the role that nutrients play in the initiation and extent of _ lyngbya _ blooms , or the effect of industry practices in the catchment on the nutrients available for _ lyngbya _ growth , effective management of _ lyngbya _ requires a `` whole - of - system '' approach that comprehensively integrates the different scientific factors with the available management options ( ) .there is also a need to understand the different factors that trigger the initiation of a bloom versus the sustained growth of the cyanobacteria bloom .bayesian models are natural vehicles for describing complex systems such as these ( ). key attributes of bayesian models in this context include flexibility of the model structure , the ability to incorporate diverse sources of information through priors and the provision of probabilistic estimates that take appropriate account of uncertainty in the system ( ; ; ) .a bayesian network ( bn ) is a graphical bayesian model that uses conditional probabilities to encode the strength of the dependencies between any two variables ( ) .causal and evidential inferential reasoning may be performed by the bn , depending on the nature of the dependencies ( ) .bns are increasingly used to model complex systems ( ) .variables in the model are represented by nodes , and links between variables are represented by directed arrows .each node is then ascribed a probability distribution conditional on its parent nodes .the information used to develop these distributions can be obtained from a variety of sources , including data relevant to the system , related experiments or observations , literature and expert judgement ( ; ) .a common practice is to discretise the variables into a set of states , resulting in a series of conditional probability tables ; hence , under the assumptions of directional separation ( d - separation , so that the nodes are conditionally independent ) and the markov property ( so that the probability distribution of a node depends only on its parents ) , the target response node is quantified as the product of the cascade of conditional probability tables in the network ( ) .the quantified model can then be used to identify influential factors , perform scenario assessments , identify configurations of node states that lead to optimal response outcomes and so on .bns can be expanded into object - oriented and dynamic networks ( ; ) 
; they can include extensions such as decision , cost and utility nodes ( ) ; and they can be linked to other bns to create systems of systems models .in this paper we describe an integrated bayesian network ( ibn ) approach developed by our research team to address the problem of _ lyngbya _ blooms in deception bay , queensland , australia . with its proximity to brisbane , australia s third largest city ,deception bay is a popular tourist destination in the moreton bay region .the many waterways feeding from intensive and rural agricultural activities into the bay and its use for commercial and recreational fishing put pressure on the marine environment and compound the issues resulting from a _ lyngbya majuscula _ bloom ( ) .our project was undertaken as part of the _ lyngbya _ management strategy funded by the local and queensland government s healthy waterways program .the project team comprised a _lyngbya _ science working group and a _ lyngbya _ management working group , representing diverse scientific disciplines , industry groups , government agencies and community organisations .the ibn is now a living part of the healthy waterways program and has been expanded beyond moreton bay .the ibn approach that we developed involved a `` science model '' linked to a `` management model '' .the components of the ibn are detailed below .the science bn [ depicted in supplemental figure 1 ( ) ] comprised the target node , `` bloom initiation '' , and 22 other nodes which were identified by the _lyngbya _ science working group as potentially playing a key role in the initiation of a _ lyngbya _ bloom ( ) .it was transformed into an object - oriented bn with subnetworks describing water ( comprising nodes for past and present rain , groundwater and runoff ) , sea water ( tide , turbidity and bottom current climate ) , air ( wind and wind speed ) , light ( surface light , light quality , quantity and climate ) and nutrients ( dissolved concentrations of iron , nitrogen , phosphorus and organics , particulates , sediments nutrient climate , point sources and available nutrient pool ) ( ) .the nodes of the science model were quantified using a range of information sources and models , including process and simulation models , bayesian hindcasting models , expert elicitation , published and grey literature , and data obtained from monitoring sites , industry records , research projects and government agencies ( ) .the science object - oriented bn model was further extended to incorporate temporal trends through a dynamic bayesian network comprising five time slices , one for each of the summer months november to march ( ) .lag effects of rainwater and groundwater runoff were incorporated in the object - oriented bn , allowing information and influence from one month to flow through to the next ( ) .additional bns were also constructed to more fully evaluate the _ lyngbya _ problem .these included separate bns to model _biomass , duration and decay ( as opposed to initiation ) , and a bn to focus on the critical two month summer period in which most _ lyngbya _ initiations occur ( as opposed to annual averages of rainfall and temperature used in the original model ) .a variety of other statistical models were used to quantify some of the nodes of the bn .for example , random forest models were created to predict benthic photosynthetically active radiation ( ) and bayesian regression models were developed using data obtained from the monitoring stations in the catchment ( ) .the latter data set 
comprised _ lyngbya _ occurrences for each month during january 2000 to may 2007 , a total of 77 observations , and monthly averages of minimum and maximum air temperature ( as proxies for water temperature ) , solar exposure and amount of sky not covered by cloud ( as proxies for light ) , and total rainfall ( as a proxy for nutrients available in the water column ) , measured over the same period .a bayesian probit time series regression model was developed to predict the monthly probability of bloom based on a total of 17 covariates , comprising five main effects , five first - order autoregressive terms and seven selected interactions .covariate selection was performed using a bayesian reversible jump markov chain monte carlo algorithm and bayesian model averaging was used to obtain a final predictive model .eight of the 890 models identified by the algorithm accounted for over 75% of the posterior model probability , and the model comprising a single term , average monthly minimum temperature , accounted for almost 50% .the aim of the management network [ supplemental figure 2 ( ) ] was to facilitate evaluation of options available to government agencies , communities and industry groups that could potentially influence the delivery of nutrients to deception bay .point sources , such as industries ( e.g. , aquaculture , poultry ) and council facilities ( e.g. , waste water treatment plants ) , and diffuse sources , such as landuse ( e.g. , grazing land , forestry ) and urban activities ( e.g. , stormwater ) , were geographically located in the catchment .each of these sources was then quantified with respect to the probability of high or low emissions of different types of nutrients under current , planned and best practice scenarios . while not a bayesian network in the sense of propagating these probabilities , the network structure was a valuable vehicle for collating and displaying this information .a gis - based nutrient hazard map for the catchment was then developed for each unit of land in the catchment , based on the nutrient emissions of the sources , the soil ph and soil type at each source location , and distance of the sources to the nearest waterway ( ) .this included a nutrient risk rating which was interpreted as the perceived risk that there will be `` enough '' of that nutrient to cause an increase in growth , extent and duration of a _ lyngbya _ bloom .the science bn and the management network described above were integrated via a water catchment simulation model that was developed as part of the _ lyngbya _ project .the ibn was conceived as a series of steps , whereby a management intervention is proposed , and the management model is used to inform about the expected nutrient discharge into the deception bay catchment .the catchment model simulates the movement of these nutrients to the _ lyngbya _ site in the bay , and the science network then integrates this nutrient information with the other factors in the bn to determine the probability of bloom initiation .we briefly discuss here three ways in which the ibn was interrogated to learn about _ lyngbya majuscula _ bloom initiation in deception bay .first , the science bn provided an overall probability of _ lyngbya _ bloom initiation based on the bn structure and its inputs .for example , in a typical year , as defined by the _management working group , the probability of a bloom was predicted to be .based on the dynamic network , this probability was much higher in the months of november and december and fell 
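as a concrete illustration of how a quantified network of this kind turns conditional probability tables into a marginal probability of bloom initiation , and of how a scenario query conditions on evidence , the toy calculation below enumerates a small three - parent fragment . the node names , states and probabilities are invented for the example and are not the elicited lyngbya tables .

```python
from itertools import product

# invented toy fragment: rain -> nutrients -> bloom <- temperature
p_rain = {"low": 0.6, "high": 0.4}
p_temp = {"cool": 0.5, "warm": 0.5}
p_nutrients = {"low":  {"low": 0.8, "high": 0.2},   # P(nutrients | rain)
               "high": {"low": 0.3, "high": 0.7}}
p_bloom = {("low", "cool"): 0.02, ("low", "warm"): 0.10,   # P(bloom | nutrients, temp)
           ("high", "cool"): 0.15, ("high", "warm"): 0.55}

# marginal probability of initiation: sum over joint states of the CPT product
p_init = sum(p_rain[r] * p_temp[t] * p_nutrients[r][n] * p_bloom[(n, t)]
             for r, t, n in product(p_rain, p_temp, ("low", "high")))
print(f"P(bloom initiation) = {p_init:.3f}")

# a simple scenario query: fix rain at "high" as evidence and recompute
p_init_wet = sum(p_temp[t] * p_nutrients["high"][n] * p_bloom[(n, t)]
                 for t, n in product(p_temp, ("low", "high")))
print(f"P(bloom initiation | high rain) = {p_init_wet:.3f}")
```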
slightly in march .second , the ibn informed about important factors affecting this probability .for example , based on the science network , the seven most influential factors were available nutrient pool ( dissolved ) , bottom current climate , dissolved iron , dissolved phosphorus , light and temperature . based on both networks , the comparative impact of different management land uses on the probability of a bloomcould be computed : these probabilities were lowest for waste water treatment plant ( ) and grazing ( ) , and highest for waste disposal ( ) , aquaculture ( ) and poultry ( ) .third , the ibn facilitated the evaluation of scenarios , for example , about the impacts of management options such as upgrading nominated point sources from current to best practice ( e.g. , eliminating potassium output from sewage treatment plants ) , climate events ( e.g. , a severe storm ) and conditions most or least favourable for bloom initiation .for example , under optimal light climate and high temperature conditions , a storm event increased the probability of bloom initiation from to and initiation was certain if the available nutrient pool ( dissolved ) was enough .as another example , changing the management land use from natural vegetation to agriculture throughout the catchment area ( based on the management network ) results in an increase of 8.8% in available nutrients compared with baseline levels ( based on the gis hazard map ) , which in turn results in a substantial increase in the probability of a _ lyngbya _ bloom initiation to ( based on the science bn ) .note that the effect of this land use change is diluted by the fact that the proportion of the catchment designated as natural vegetation is only 18.24% .investigation of the bn also revealed unexpected results that required discussion and reflection by the science and management teams .for example , the model supported early suggestions that iron was a key nutrient in _ lyngbya _ bloom initiation ( ) , which motivated additional research into this important issue ( ) .as another example , land runoff and point sources contributed approximately equally to the probability of bloom initiation under the developed science model , provoking questions about the relative effects of population pressure and industrial growth in the catchment .alternatively , it suggests that the information available to quantify these nodes is somewhat uncertain .in fact , it is a methodological challenge to accurately model the nutrient load into deception bay from land runoff ( ) and more accurate models are currently under development .by their nature , a complex system is challenging to model using traditional statistical approaches .this is illustrated well in the _ lyngbya _ case study described here , which is characterised by multiple interacting factors drawn from science and management , piecemeal knowledge and diverse information sources ( ) .furthermore , bayesian models are able to capture the uncertainty in the data and parameter estimates which is generally agreed to be lacking in many ecological modelling paradigms ( ) .more specifically , bayesian networks ( bns ) are capable of diagnostic , predictive and inter - causal ( or `` explaining away '' ) reasoning ( ; ) , which was particularly relevant for the _ lyngbya _ problem described here .there are several alternatives to the ibn approach that could be considered for modelling the _ lyngbya _ problem . 
proposed a decision tree approach , but this was less able to represent the many interactions between the factors in the system .other methods include stochastic petri nets which are able to model concurrent systems ( ) , but require the modeller to have advanced statistical knowledge and were unlikely to engage the diverse group of _ lyngbya _ stakeholders .process - based modelling , which is commonplace in ecology , requires substantial data for calibration and validation of the models , which is very time consuming and resource hungry and may take several years ( ) . in contrast , a bn allows us to assimilate current knowledge and modelling effort without having to wait until `` perfect '' and `` sufficient '' data are available .this is particularly important when dealing with a major environmental hazard such as toxic algal blooms .none of the alternative approaches had the unique combination of qualities of bns which integrated the different sources of information , represented the dependencies and uncertainty in the information , guided future data collection and research , and engaged a diverse group of stakeholders .the ibn described in this paper is the most comprehensive local systems model of _ lyngbya _ that has been developed to date .there are many other examples of the use of bns to solve `` big '' problems .we have employed them to investigate infection control in medicine , airport and train delays , wayfinding , import risk assessment ( ) , peak electricity demand and sustainability of the dairy industry in australia . furthermore , there are other conceptual and methodological approaches to bns ; examples include decision making in business ( ) and protein networks in biology ( ). finally , bns are just one tool in the kit of statistical methods that should be considered for solving these types of problems and that can be considered as complements to other approaches in order to reveal the full picture of a complex system .the authors gratefully acknowledge kerrie mengersen for her significant contribution to this manuscript .
|
toxic blooms of _ lyngbya majuscula _ occur in coastal areas worldwide and have major ecological , health and economic consequences . the exact causes and combinations of factors which lead to these blooms are not clearly understood . _ lyngbya _ experts and stakeholders are a particularly diverse group , including ecologists , scientists , state and local government representatives , community organisations , catchment industry groups and local fishermen . an integrated bayesian network approach was developed to better understand and model this complex environmental problem , identify knowledge gaps , prioritise future research and evaluate management options .
|
the theory of surface waves in homogeneous anisotropic elastic half - spaces has enjoyed remarkable progress in the 1960 s and 1970 s .the general framework developed by stroh for solving static and dynamic elasticity problems has proved to be very fruitful for study of surface waves . in a series of classic papers , e.g. , barnett and lothe employed the framework of stroh to develop an elegant integral matrix formalism that underpins existence and uniqueness considerations for surface waves and that allows one to determine the surface wave speed without having to solve for any partial wave solutions .the barnett - lothe integral formalism was quickly realized to serve as a corner stone for the surface wave theory . on this basis, provided a thorough exposition of the complete theory for surface waves in anisotropic elastic solids , summarizing the major developments up to that time , 1977 .later on , a significant contribution came from alshits . with his co - workers ,he has done much work on extending the formalism of stroh , barnett and lothe to various problems of crystal acoustics , see the bibliography in this special issue . a full historical record and a broad overview of the surface wave related phenomenamay be found in .the purpose of this paper is both to present a fresh perspective on the integral formalism for surface waves and also to provide new results , including a generalization to laterally periodic half - spaces .the central theme is the use of the matrix sign function which allows a quick derivation and a clear interpretation of the integral formalism of .the application of the matrix sign function in the context of the stroh formulation of elastodynamics was apparently first noted by in the course of calculation of impedance functions for a solid cylinder . we reconsider the classical surface wave problem in terms of the matrix sign function , showing in the process that it provides a natural solution procedure .for instance , it is known that the surface impedance matrix satisfies an algebraic riccati equation , but it has not been used to directly solve for . herewe give the first explicit solution of this riccati equation for the impedance .another important attribute of the matrix sign function is that it allows the barnett - lothe formalism to be readily generalized to finding the surface wave speed in a periodically inhomogeneous half - space whose material properties are independent of depth ( i.e. a 2d laterally periodic half - space ) . 
for this case ,the present results provide a procedure that circumvents the need for partial wave solutions .instead it establishes the dispersion equation in terms of the matrix sign function which can be evaluated by one of the optional methods , in particular in the integral form similar to the barnett - lothe representation of the homogeneous case .the outline of the paper is as follows .the surface wave problem is defined in [ sec_2 ] in terms of the stroh matrix .the matrix sign function is introduced and discussed in sec_3 where it is shown to supply a novel and possibly advantageous route to the barnett - lothe integral formalism .an explicit solution of the algebraic riccati equation for the surface impedance matrix is derived in [ sec_4 ] .application of the matrix sign function to formulating and solving the surface wave dispersion equation in a laterally periodic half - space is considered in [ sec_5 ] , with the numerical examples given for a bimaterial configuration .the appendix highlights explicit links between the sign function and some related matrix functions .the equations of equilibrium for time harmonic motion ( with the common factor everywhere omitted but understood ) are is mass density , are the elements of the elastic tensor referred to an orthonormal coordinate system , and are elements of the displacement and stress .we first consider a uniform elastic half - space , ( parallel to the free surface ( , ) : the equations of equilibrium take the form of a differential equation for the 6-vector , , the 3 matrix has components for arbitrary vectors and , and is the identity matrix .the real - valued stroh matrix satisfies where indicates transpose and in block matrix form comprises zero blocks on the diagonal and identity blocks off the diagonal .denote the eigenvalues and eigenvectors of by and ( ) , and introduce the matrix with columns assume in the following the normal situation where all are distinct .then the above symmetry of yields the orthogonality / completeness relations in the form barnett73 the normalization has been adopted .hereafter we use the same notation for the identity matrix regardless of its algebraic dimension . throughout this paperwe restrict our interest to the subsonic surface waves and thus assume that is less than the so - called limiting wave speed , see .this implies that are all nonzero and in pairs of complex conjugates ( denoted below by ) , so the set of eigenvalues and eigenvectors of can be split into a pair of triplets as two triplets are commonly referred to as physical and nonphysical since they define partial modes that , respectively , decay or grow with increasing .the eigenvector matrix partitioned according to is written in the block form as the blocks , and , describe the physical and nonphysical ( decaying and growing ) wave solutions , respectively . 
from and so the orthogonality / completeness relations may be cast in the subsonic domain to the form denotes the hermitian transpose .note that the relations admit interpretation in terms of the energy flux into the depth .the surface wave solution comprises the decaying solutions only and therefore must have the form where is some fixed vector .the surface wave problem for the homogeneous medium is posed as finding for a given wavenumber the surface wave speed for which the surface traction vanishes , and hence is a null vector of ( although this is not a fruitful avenue to follow , which is the whole point of the barnett - lothe solution procedure based on the integral representation ) . a variety of related notations have been used for the surface wave problem .we generally follow where the notation is based upon that of . also provide comparisons of their notation with that used by , which is closer to that of .the slight notational differences are related to the choice of the in , and amounts to different signs for the diagonal or off - diagonal elements of the matrix analogous to .the sign function of a matrix is conveniently defined by analogy with the scalar definition as the principle branch of the square root function with branch cut on the negative real axis is understood ; with if . as a result , the sign function of a matrix with eigenvalues and eigenvectors denoted by and satisfies that is unchanged under , , and it is undefined for eigenvalues lying on the imaginary axis ( ) .we also note for later use the property .evaluation of the matrix sign function is possible with a variety of numerical methods , the simplest being newton iteration of , , with in the limit as , although this can display convergence problems .schur decomposition , which does not require matrix inversion , is very stable , and is readily available , e.g. .the function also has integral representations , which we will use in order to shed fresh light on the integral formalism in the surface wave theory .see kenney:1995:msf , higham:2008:fm for reviews of the matrix sign function .the following expression for the matrix sign function is based on combined with an integral representation for the matrix square root function may be converted into the following form using a change of integration variable the average .differentiation of yields implies for any .this provides alternative identities for the matrix sign function , such as the value of is arbitrary as long as the denominator , or , does not vanish . the special case of eq. corresponds to the limit as in .let us apply the above definition and properties of the matrix sign function to the case where is the stroh matrix given in and considered in the subsonic domain .the eigenspectrum of is assumed partitioned according to ( [ 883 ] ) .hence equation ( [ 274]) taken with reads the blocks of **(** ) as are all real since so is . note that and hence are traceless .appearance of the barnett - lothe notations on the right - hand side will become clear in the course of the upcoming derivation .the involutory property of the matrix sign function , , implies the identities into account the spectral representation and the relation ( [ 883 - 1]) yields the spectral decomposition of the matrix in the form we have noted the projector on physical modes : ( see more in appendix ) . from ( [ def ] ) and ( [ spectral]), assume that and hence are invertible ( we will return to this point later ) . 
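a minimal numerical sketch of the newton iteration mentioned above is given below , checked against the spectral definition of the sign function ; the test matrix is an arbitrary complex matrix rather than a physical stroh matrix , and the tolerances are illustrative .

```python
import numpy as np

def sign_newton(A, tol=1e-12, max_iter=100):
    # newton iteration S <- (S + S^{-1}) / 2 starting from S = A; it converges
    # whenever A has no eigenvalues on the imaginary axis.
    S = A.astype(complex)
    for _ in range(max_iter):
        S_next = 0.5 * (S + np.linalg.inv(S))
        if np.linalg.norm(S_next - S) < tol * np.linalg.norm(S_next):
            return S_next
        S = S_next
    return S

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))

S = sign_newton(A)
w, U = np.linalg.eig(A)
S_spectral = U @ np.diag(np.sign(w.real)).astype(complex) @ np.linalg.inv(U)

print("||S^2 - I||        =", np.linalg.norm(S @ S - np.eye(6)))
print("||S - S_spectral|| =", np.linalg.norm(S - S_spectral))
```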
introduce the surface impedance .it is hermitian due to ( [ 884]) plugging ( [ def ] ) into ( [ eigen]) gives which , with regard for the invertibility of , enables expressing through the blocks of thus from ( [ 7.22 ] ) that , for any ( since is antisymmetric , see also ( [ 7.16]) ) and that is real ( since ). general significance of for surface waves is immediate when one considers the boundary condition which demands that a non - zero surface displacement exerts zero surface traction .since a surface wave is composed of decaying modes only , this means there must exist a vector such that both and must vanish at the surface wave speed . for the case in hand of a uniform material ,these are not two independent conditions but one condition .indeed , the traction - free boundary condition can be posed in the form hence from ( [ = =3 ] ) and ( [ 7.22]) determinants of of and of vanish simultaneously .note that the equation reduces to ( since and ) .also at the common null vector of and can be cast as with linear independent real vectors and hence is rank one . to summarize , the subsonic surface wave speed can be determined from or else from any of the equivalent real dispersion equations ( see ( * ? ? ?( 12.10 - 1)-(12.10 - 4 ) ) ) all are expressed through the blocks of the matrix .equations ( [ 7.16 ] ) , ( [ = =3 ] ) , ( [ 7.22 ] ) , ( [ 36 ] ) are basic relations of the barnett - lothe formalism with regard to surface wave theory .interestingly , we have arrived at these relations by single means of the definition of sign function of times the stroh matrix , without specifying the method of this function evaluation and without explicitly attending to the fundamental elasticity tensor which underlies development of barnett - lothe formalism in .it is instructive to highlight an explicit link between the two lines of derivation , see next .following , introduce the so - called chadwick77 fundamental elasticity tensor \!]^{-1}[\![sr]\ ! ] & [ \![ss]\!]^{-1 } \\[0pt ] \lbrack \![rs]\!][\![ss]\!]^{-1}\![\![sr]\!]-[\![rr]\ ! ] & [ \![rs]\!][\![ss]\!]^{-1}\end{pmatrix } \\ & \mathrm{where}\ [ \![pq]\!]=\left ( pq\right ) -\left [ \mathbf{m\cdot p}\right ] \left [ \mathbf{m\cdot q}\right ] \rho v^{2}\mathbf{i\ \ } \mathrm{with}\ \mathbf{p , q}=\mathbf{r},\mathbf{s } : \\ & \qquad \qquad \mathbf{r}=\cos \theta \mathbf{m}+\sin \mathbf{n},\ \mathbf{s=-}\sin\theta \mathbf{m+}\cos \theta \mathbf{n.}\end{aligned } \label{fund}\]]the matrix is defined in the subsonic domain which means that is small enough to guarantee existence of \!]^{-1} ] in ( [ fund ] ) is non - singular .since ] is positive within the whole subsonic domain ( this of itself is an alternative definition of the subsonic domain ) . using ^{-1}\right\rangle ] ) of the periodic lattice .let define an orthonormal base in and denote material parameters can therefore be expressed as in practical terms the matrices are limited to finite size by restricting the set of reciprocal lattice vectors to where comprises a finite number of elements , say .we look for solutions in the floquet form **-**periodic .the surface wavenumber vector resides in the first brillouin zone of the reciprocal lattice and is otherwise arbitrary . employing a plane wave expansion .becomes an ordinary differential equation for the comprised of all , , , and are matrices with blocks associated with pwe wavenumbers , , given by the definition of is the natural extension of to include the fourier transform , that is negative definite and hence invertible. 
equation is valid regardless of whether the material properties depend on or not .we consider here the case of a laterally periodic material whose properties are periodic along the surface and uniform in the depth direction , so that the system matrix is constant .a method for treating the case of periodic is discussed in . for constant governing equation is analogous to that in the uniform elastic half - space and its solution is in a way analogous to ( [ -91 ] ) , with two major differences .the first is that we are now dealing with large , formally infinite , vectors and matrices ( for convenience , we continue to refer to of size ) .the second is that , in contrast to the real - valued stroh matrix , the system matrix of is generally complex .its symmetry is is a equivalent of the matrix with two zero and two identity blocks on and off the diagonal .denote the eigenvalues and eigenvectors of by and as everywhere above , we restrict our attention to the subsonic domain where , by virtue of ( [ n+ ] ) , the set of eigenvalues of can be split in two halves as correspond to physical ( decaying ) and nonphysical ( growing ) modes . adopting the same partitioning for the eigenvectors, denote also follows from ( [ n+ ] ) that the orthogonality / completeness relations in the subsonic domain hold in the form similar to ( [ 884 ] ) , namely, that , in contrast to the uniform case ( [ 883 ] ) , and are not complex conjugated , i.e. and so ( 884 + ) has no equivalent that would be a generalization of ( [ 883 - 1 ] ) .this is reminiscent of the stroh formalism in cylindrical coordinates , see and .now we can follow the line of derivation proposed in [ 3.2 ] .introduce the sign function associated with namely, denote its blocks as definition ( [ eigen+ ] ) , and so ( [ 884 + ] ) , the spectral decomposition is ( cf . from ( [ def+ ] ) and ( [ spectral+]) assuming invertibility of and hence of and , introduce the surface impedance from ( [ 884 + ] ) is hermitian and combining ( [ eigen+]) with ( [ def+ ] ) relates to the blocks of .thus from ( [ 7.22 + ] ) that is real ( since and are hermitian ) and , unlike the case of a uniform medium ( hence of real and ) , the determinant of is generally nonzero . .finally , the evaluation of is possible by different methods , including the integral representation analogous to , yielding existence of a surface wave with a speed under the traction - free boundary implies that there must exist some non - zero vector at such that ( [ def+ ] ) and have been used .the situation is reminiscent of eq .( [ -0 ] ) for a uniform half - space , certainly apart from the fact that eq .( [ -37 ] ) involves matrices of a large , formally infinite , size .another particularity is that the equality may signify occurrence of either a physical ( if ) or a nonphysical ( if ) surface wave solution . at the same time , the condition is both necessary and sufficient for the physical wave in , specifically , the subsonic domain , where and are invertible ( and so exists ) as can be argued similarly as in 3.3 . 
interestingly , this is no longer so in the upper overlapping band gaps of the floquet spectrum , where the condition is recast in the form with a positive definite matrix provides a single dispersion equation that those considerations above which used spectral decomposition under the assumption of distinct eigenvalues can be reproduced in the invariant terms of the projector matrix , see .significant simplification follows for the case of a symmetric unit cell where the fourier expansion reduces to a cosine expansion with real valued coefficients . in consequencethe matrix is real and hence the basic conclusions obtained above for the case of a uniform half - space can be extended to the present case in hand . in particular by analogy with eq .( [ 36 ] ) it follows that the dispersion equation on the subsonic surface wave speed can be taken in any of the following equivalent real - valued forms is the adjugate matrix .note that the double zero of at makes the two other forms of the dispersion equation more appealing for numerical evaluation of .numerical results are presented for a laterally periodic half - space composed of layers of two alternating materials .interfaces between the layers are normal to the surface of the half - space .note that a periodically bilayered structure infinite along the periodicity axis can be seen as a periodically tri - layered with a symmetric unit cell and hence with a real matrix see 5.2.1 .we assume bimaterial structures of equidistant layers of any two of the three isotropic solids : copper ( cu , young s modulus = 115 gpa , poisson s ratio = 0.355 , = 8920 kg / m ) , aluminum ( al , 69 gpa , 0.334 , 2700kg / m ) or steel ( st , 203 gpa , 0.29 , 7850 kg / m ) .the surface wave propagation direction that is the direction of the wavenumber vector ( see ( [ = 5 ] ) ) , is measured by the angle between and the plane of interlayer interfaces , so that corresponds to in the layering direction , i.e. , normal to the interfaces .the values of wavenumber considered are restricted to the first brillouin zone defined by the cell length in the layering direction , which is taken as unity .figure [ fig1 ] demonstrates the computed surface wave speed as a function of wave number for seven distinct directions of .the curves displayed are azimuthal cross - sections of the subsonic dispersion surface .the numerical results show that surface wave speed decreases as a function of and increases with for all bimaterials considered .the dependence of on at fixed is greatest for waves traveling across the interfaces and least for waves traveling along the interfaces .use of the matrix sign function provides a new and broader perspective of the surface wave problem in anisotropic media .it straightens out the methodology of the underlying matrix formalism and offers a direct method to compute the matrices involved . starting from the defining property of the matrix sign function we have obtained known relations for the barnett - lothe matrices , the impedance matrix and the dispersion equation for the surface wave speed , see eqs . , and .these expressions are achieved using only the sign function of times the stroh matrix without specifying its method of evaluation .an integral representation for the matrix sign function , combined with the explicit structure of the stroh matrix , leads immediately to the barnett - lothe integral relations . 
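the overall structure of such a computation can be sketched as follows . the routine `stroh_matrix(v)` standing for the ( possibly large ) system matrix , and the real - valued dispersion function assembled from the blocks of the sign matrix according to the relations above , are assumed to be supplied by the user ; nothing below encodes a specific material or layering .

```python
import numpy as np

def sign_via_spectrum(A):
    # spectral evaluation of the matrix sign function (schur-based routines
    # are preferable for large or ill-conditioned matrices)
    w, U = np.linalg.eig(A)
    return U @ np.diag(np.sign(w.real)).astype(complex) @ np.linalg.inv(U)

def sign_blocks(stroh_matrix, v):
    # blocks of sign(i * N(v)); the barnett-lothe-type matrices entering the
    # dispersion equations are read off from these blocks
    M = sign_via_spectrum(1j * np.asarray(stroh_matrix(v), dtype=complex))
    m = M.shape[0] // 2
    return M[:m, :m], M[:m, m:], M[m:, :m], M[m:, m:]

def surface_wave_speed(dispersion, v_lo, v_hi, tol=1e-10, max_iter=200):
    # bisection on a real-valued dispersion function that changes sign across
    # the subsonic surface-wave speed, assumed bracketed by [v_lo, v_hi]
    f_lo = dispersion(v_lo)
    for _ in range(max_iter):
        v_mid = 0.5 * (v_lo + v_hi)
        f_mid = dispersion(v_mid)
        if f_lo * f_mid <= 0.0:
            v_hi = v_mid
        else:
            v_lo, f_lo = v_mid, f_mid
        if v_hi - v_lo < tol * v_hi:
            break
    return 0.5 * (v_lo + v_hi)
```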
in this paperwe have concentrated on using the matrix sign function for direct formulation of the dispersion equation , without discussing further properties of the barnett - lothe matrices which underlie the existence and uniqueness considerations in the surface wave theory barnett73,lothe76,chadwick77 .we have constructed an explicit solution of the matrix riccati equation for the impedance by rewriting the riccati equation in a form that involves the matrix sign function . apart from providing for the first time a direct solution of the riccati equation ,the use of the matrix sign function shows how this nonlinear equation is intimately related with the stroh matrix .perhaps the greatest advantage gained by using the matrix sign function is that it provides a natural formalism for framing the problem of subsonic surface waves in laterally periodic half - spaces which are inhomogeneous along the surface and uniform in the depth direction .we have shown how much of the structure for the homogeneous case carries over to the case of laterally periodic materials .for instance , the conditions for surface waves and the form of the block matrices in the matrix sign function mirror their counterparts for the homogeneous case , eqs . and , respectively . naturally , there are major differences between the problems. conditions for the existence of surface waves in periodically inhomogeneous materials have only been recently established but the methods that have been proposed for finding them are not as straightforward as for the homogeneous case .the approach that we have presented for the laterally periodic case , being linked via the matrix sign function to the well known formalism for the homogeneous half - space , offers , we believe , a clear and logical route for finding surface waves .future work will examine this calculation method and the properties of the subsonic waves in more detail and will also consider supersonic solutions in the upper stopbands and passbands - the extension motivated by the classical paper by alshits and lothe .the matrix sign function of is closely related to other standard matrix functions .the matrix _ projector _ functions and are defined and therefore the projector functions can be expressed via integration , e.g. where counter - clockwise encloses the finite right(left)-half plane , arbitrarily large .the schwartz - christoffel transformation reduces this to an integral around the unit circle while are singular if possesses eigenvalues , we note that is unchanged for , , , and is invariant for , , , and hence can always be made regular by selecting in the right half - plane .the integral representation for the matrix sign function may be easily obtained from eqs . and .the _ disk _ function is defined such that if where the eigenvalues of , have magnitudes , , respectively , then .it follows that b. honein , a. m. b. braga , p. barbone , and g. herrmann .wave propagation in piezoelectric layered media with some applications ._ j. intell ._ , 20 ( 4):0 542557 , 1991 .doi : 10.1177/1045389x9100200408 .a. v. shuvalov and a. g. every .some properties of surface acoustic waves in anisotropic - coated solids , studied by the impedance method . _ wave motion _ , 36:0 257253 , 2002 .doi : 10.1016/s0165 - 2125(02)00013 - 6 . y. b. fu and a. mielke . a new identity for the surface - impedance matrix and its application to the determination of surface - wave speeds . _ proc .a _ , 4580 ( 2026):0 25232543 , 2002 .doi : 10.2307/3067326 .
|
the matrix sign function is shown to provide a simple and direct method to derive some fundamental results in the theory of surface waves in anisotropic materials . it is used to establish a shortcut to the basic formulas of the barnett - lothe integral formalism and to obtain an explicit solution of the algebraic matrix riccati equation for the surface impedance . the matrix sign function allows the barnett - lothe formalism to be readily generalized for the problem of finding the surface wave speed in a periodically inhomogeneous half - space with material properties that are independent of depth . no partial wave solutions need to be found ; the surface wave dispersion equation is formulated instead in terms of blocks of the matrix sign function of times the stroh matrix .
|
[ 1intr1 ] easter island history is a very famous example of an evolved human society that collapsed for over exploiting its fundamental resources that in this case were essentially in palm trees .it were covering the island when , few dozens of individuals , first landed around 400 a.d .its advanced culture was developed in a period of one thousand years approximately .its ceremonial rituals and associated construction were demanding more and more natural resources especially palm trees .the over exploitation of this kind of tree , very necessary as a primary resource ( tools construction , cooking , erosion barrier , etc . ) was related with the collapse . in this paper ,a mathematical model concerning growing and collapse of this society is presented .different to usual like lotka - volterra models where the carrying capacity variation becomes from external natural forces , in this work it is directly connected with the population dynamics .namely , population and carrying capacity are interacting dynamics variables , so generalizing the leslie model prey - predator .the general mathematical treatment of a model describing such a complex society is a very hard task and probably not unique .our aim is to settle the most simple model describing with acceptable precision the evolution of easterner society . with the idea of writing a model that could be generalized to a more complex system, we first divide the elements into two categories : the _ resource _ quantity with and the _ inhabitants _ numbers ( species ) with . with the concept of resources we are meaning resources in a very large sense , it could be oil , trees , food and so on .the several kind of resources are described by the index . in similar way , with the concept of inhabitants , we are meaning different species of animals or internal subdivision of human being in country or town or even tribes . leaving the idea of a constant quantity of resources that leads to the logistic equation for the number on inhabitants , the aim of this paper is to include in the dynamical description of the time evolution of the system the resources too which can not be considered as constant .a generalization of the logistic equation to an arbitrary number of homogeneous species interacting among the individuals ( with non constant resources ) can be written as -\sum\limits_{j=1\,,i\neq j}^{m}\chi _ { ij}n_{j}n_{i}. \label{log - inter}\ ] ] where is the usual growing rate for species . in the denominator it appears the carrying capacity of the system with respect to the number of inhabitants . beside the dependence on the resources we could have also a dependence on an other species that would be then a resource for some other species .this fact is expressed even by the quantities that in general are not symmetric expression ( ) since that the prey is a resource for the predator and not the inverse .similarly , for the resources we have : -\sum\limits_{j=1}^{m}\alpha _{ ij}n_{j}r_{i}. 
\label{res - inter}\ ] ] it is clear that set of equations ( [ res - inter ] ) could be formally included into eq .( [ log - inter ] ) redefining the quantities .nevertheless , we shall keep this distinction for the sake of clarity especially referred to resources , such as trees , oil or oxygen , where the carrying capacity is not determined by other species and can be considered a constant ( ) .also the meaning of the parameters is more or less the same of the analogous parameters .it is suitable to define as renewability ratios , since describe the capacity of the resources to renew itself and clearly are depending on the kind of resource .for example , the renewability of the oil is clearly zero since the period of time to get oil from a natural process is of the order of geological time - scale processes . in general all parameters of eqs .( [ log - inter ] ) and ( [ res - inter ] ) are time dependent , including stochasticity .we can assume reasonably slowly time - varying for the ancient societies so that we can consider it as constant , particularly the s .anyway the set of is worthy of a more detailed discussion .we can call this set of parameters _ technological parameters _ in the sense that they carry the information about the capacity to exploit the resources of the habitat .we shall see in the next section that the technological parameters combined with the renewability ratios will be the key point to decide wether , or not , a society is destined to collapse .the particular history of easter island society presents several advantages for modelling its evolution .in fact it can be with very good approximation considered a closed system .the peculiar style of life and culture allows us to consider a basic model where trees are essentially the only kind of resources .many of activities of the ancient inhabitants involved the trees , from building and transport the enormous moai , to build boats for fishing , etc .in fact the cold water was not adapt to the fish life and the impervious shape of the coast made difficult fishing . finally from historical reportsit can be inferred that the inhabitants did not change the way to exploit their main resource , even very near to exhaust it so that we can consider the technological parameter as constant .considering eqs .( [ log - inter ] ) and ( [ res - inter ] ) for one inhabitant species and one kind of resource , we obtain : , \label{log - inter1}\ ] ] -\alpha nr , \label{res - inter1}\ ] ] where we introduce the notation : .the unknown function has to satisfy few properties . for a quantity of primary unlimited resource , , even that means that the population can grow unlimited too . 
in the opposite case clearly also the population must vanish , , and finally when the resource is constant we are back to the ordinary equation ( logistic ) so that .it is clear that the choice of this relation is quite arbitrary but following the simplicity criteria we can select , where is a positive parameter .this choice formalizes the intuition that the maximum number of individuals tolerated by a niche is proportional to the quantity of resources .we note that in ( [ res - inter1 ] ) the interacting term depend on the variable .namely , for no variation of resource exist ( ) corresponding to a biological criterion and different from this one of reference .a more sophisticate model should include also fishing as resource and consider for an expression such as where is the fishing carrying capacity .this resource was limited near the coast and could not be fully exploited without boats , so that , we are going to neglect this resource .we can rewrite eqs .( [ log - inter1 ] ) and ( [ res - inter1 ] ) as : , \label{log - inter2}\ ] ] , \,\,\,\,\textrm{where } \,\,\alpha _ { e}\equiv \frac{\alpha } { r^{\prime } } .\label{res - inter2}\ ] ] the dimensionless parameter is the ratio between the technological parameter , representing the capability of to exploit the resources , and the renewability parameter representing the capability of the resources to regenerate .we will call deforestation parameter since it gives a measure of the rapidity with which the resources are going to exhaust and then a measure of the reversibility or irreversibility of the collapse . using the historical data we can have an estimation of the parameters . at the origin ( )we can assume that the trees were covering the entire island surfaces of 160 km .when the first humans arrived to the island , around the 400 a.d ., their number was of the order of few dozens of individuals and it grew until to reach the maximum around the 1300 a.d .finding the equilibrium points of eqs .( [ log - inter2 ] ) and ( res - inter2 ) we obtain ( see section iii , for stability ) : while the point of eq .( [ equin ] ) represents the trivial fact that in absence of human being the number of trees is constant ( carrying capacity ) .( [ equir ] ) describes the fact that , due to the interaction humans - environment , the more interesting equilibrium point does not coincide with with since . to study the stability of the point , we have linearize the system of eqs .( [ log - inter2 ] ) and ( [ res - inter2])around the equilibrium point .in fact , in the next section we shall show that it is a stable equilibrium point .let us first cast eqs .( [ log - inter2 ] ) and ( [ res - inter2 ] ) in term of dimensionless quantities ; setting , and , we have : \label{adnu}\ ] ] , \,\,\,\,\bar{\alpha}\equiv \alpha _ { e}n_{c},\,\,\,\,\bar{r}\equiv \frac{r^{\prime } } { r}. \label{adro}\ ] ] for sake of clarity we rewrite also the equilibrium point : with obvious meaning of the symbols . perturbing the equilibrium point ] with and infinitesimal functions , after straightforward algebra we obtain eq .( [ equir ] ) : \label{lineps1}\ ] ] the eigenvalues of the system are : restricting ourself to the case of positive values of , eq . 
( [ eig ] ) shows that both the eigenvalues always have a real negative part , so that the equilibrium point is a stable equilibrium point .more in detail we have that for the eigenvalues are real and negative so that the equilibrium point is reached in exponential damped way , otherwise the eigenvalues acquire an imaginary part and the system reach the equilibrium point via exponential damped oscillations .even if , mathematically speaking , the stable point is an acceptable result we have to take in account the biological constraints that allow to a specie to survive . a reasonable number of individuals is required for viability of a given species .this is so because genetic diversity , social structures , encounters , etc ., need a minimum numbers of individuals since under this critical numbers the species is not viable and collapse .it is worthy to stress that while the trees can reach an equilibrium point without the humans , eq .( [ equin ] ) , the opposite does not hold , as stated by eq .( [ equir ] ) . calling the minimum number of humans we can find a upper bound for the parameter so that a civilization can survive .imposing the condition that at the equilibrium point we obtain : as further simplification of inequality ( [ bond ] ) , we assume that and we find that or .it is a natural condition since it tells that collapse exists when the production rate is minor than the deforestation rate .more in general it can be showed , numerically , that considering the standard case with the starting population number , we can have a solution that can exceed the value ( depending on initial conditions and ) .it is worthy to stress that in the case of ordinary logistic map the population number never can exceed this limit value . in the example of fig .[ figlor ] , the maximum of is reached at a value that is almost three times .then , according to the region of the parameters that we are considering , the paths to the final equilibrium is exponentially fast , reaching eventually the point and collapsing .as we saw in the previous section , the collapse condition ( 15 ) gives a sufficient condition on the deforestation rate per individual . on other hand ,the last period of tree extinction was governed essentially by the deforestation rate . in this way, we have the rate of tree extinction as discussed , the path to the equilibrium point is exponentially fast , so that a rough estimation of the left side of eq .( [ 16 ] ) is the time scale of the deforestation , , while the right side can be taken at the end of the collapse process ( the equilibrium point ) : the final number of individuals .it can be deduced pointing that the range of is yrs . to yrs . and , the rate of deforestation ( per individual ) could be estimated as : a range estimation has validity in the case of exponential decay which is the our case , as it has been showed by the analysis performed in sec . equilibrium . assuming the number of trees as proportional to the area we can now estimate the rate - deforestation - area .the island has a surface at order of km and initially it can be supposed that was covered of trees so that we can estimate : comparison we can consider that in the last 500 years the deforestation of amazonian forest rate is 15 . considering that in 500 years the deforestation technology became more and more efficient , especially in the last century , we can consider it as an upper limit , giving us an idea of the technological change . 
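since the displayed equations are not reproduced in this text , the short simulation below uses a commonly assumed form of the model ( logistic population growth with a resource - proportional carrying capacity , and logistic resource regrowth minus a harvesting term proportional to both variables ) ; the parameter values are purely illustrative , and only the qualitative overshoot - and - relaxation behaviour described above should be read from it .

```python
import numpy as np

# assumed dimensional form (an illustration, not the paper's exact equations):
#   dN/dt = r  * N * (1 - N / (beta * R))           # population; carrying capacity beta*R
#   dR/dt = rp * R * (1 - R / R_c) - alpha * N * R  # resource regrowth minus harvesting
r, rp, alpha, beta = 0.04, 0.001, 4.0e-6, 70.0      # illustrative rates (per year)
R_c = 160.0                                         # resource carrying capacity (e.g. km^2 of forest)
N, R = 50.0, R_c                                    # a few dozen settlers on a fully forested island
dt, years = 0.1, 1200.0

N_hist, R_hist = [N], [R]
for _ in range(int(years / dt)):
    dN = r * N * (1.0 - N / (beta * R))
    dR = rp * R * (1.0 - R / R_c) - alpha * N * R
    N, R = N + dt * dN, R + dt * dR
    N_hist.append(N)
    R_hist.append(R)

# with alpha / rp above the critical value the population overshoots and then
# relaxes towards a small equilibrium as the resource is run down
print("peak population :", round(max(N_hist)))
print("final population:", round(N_hist[-1]))
print("final resource  :", round(R_hist[-1], 2))
```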
in human history there are several examples of over - exploitation of natural resources , even if not as well known as easter island . in particular , the history of the copán maya shows certain similarities with respect to the technological level and the over - exploitation of natural resources . in short , this ancient civilization reached almost 20,000 individuals and declined to 5000 individuals in the 9th century . using the estimate of the technological parameter obtained for the easter island civilization , we get a collapse time from eq . ( [ 17 ] ) of years . the collapse time based on historical reports is , showing that the adopted model is consistent with the available data . the estimate of the parameter is consistent with the idea that civilizations at a similar technological level have a similar capacity to exploit natural resources . a mathematical model describing the interaction between carrying capacity and population in an isolated system has been considered . the model takes into account the fact that a population can over - exploit the carrying capacity without saturating it , a fact of major importance on the path leading to the collapse of a society . its application to the collapse of the easter island civilization has been presented . an estimate of the technological parameter is obtained and applied to another ancient civilization , the copán maya , with a reasonably precise expectation for their collapse time , all of which confirms the consistency of the adopted model . moreover , its reasonable predictive power suggests a possible extension to more complex systems . the effort of mathematically modelling ancient civilizations could be important in view of the present growth of the human population and exploitation of resources . an adequate equilibrium between competition , demand and exploitation is the key to survival . the authors acknowledge support from the project uta - mayor 4787 ( 2006 - 2007 ) and the cihde project .
|
in this paper we consider a mathematical model for the evolution and collapse of the easter island society , from the fifth century until the last period of the collapse ( fifteenth century ) . based on historical reports , the available primary resources consisted almost exclusively of the trees . we describe the inhabitants and the resources as an isolated system , with both treated as dynamical variables . a mathematical analysis of why the easter island community collapsed is performed . in particular , we analyze the critical values of the fundamental parameters driving the human - environment interaction and consequently leading to the collapse . the technological parameter , quantifying the exploitation of the resources , is calculated and applied to the case of another extinct civilization ( the copán maya ) , confirming , with a sufficiently precise estimate , the consistency of the adopted model . social system , evolution , ecology 87.23.ge , 87.23.kg , 87.10.+e
|
probabilistic ensembles with one or more adjustable parameters are often used to model complex networks , including social networks , biological networks , the internet , etc . ; see e.g. fienberg , lovsz and newman .one of the standard complex network models is the exponential random graph model , originally studied by besag .we refer to snijders et al . , rinaldo et al . and wasserman and faust for history and a review of recent developments .the phenomenon of phase transitions in exponential random graph models has recently attracted a lot of attention in the literature .the statistical content of such models can be described by the _ free energy density _ , an appropriately scaled version of the probability normalization .the free energy density is also a standard quantity in statistical physics . in particular , its limit as the system size becomes infinite can be used to draw phase diagrams corresponding ( for example ) to the familiar fluid , liquid and solid phases of matter . using the large deviations formula for erds - rnyi graphs of chatterjee and varadhan ,chatterjee and diaconis obtained a variational formula for the limiting free energy density for a large class of exponential random graph models .radin and yin used this to formalize , for the first time , the notion of a phase transition for exponential random graphs , explicitly computing phase diagrams for a family of two - parameter models .a similar three - parameter family was studied by yin .previous non - rigorous analysis using mean - field theory and other approximations can be found in park and newman and the references in hggstrm and jonasson .we consider a family of directed exponential random graphs parametrized by edges and outward directed -stars .such models are standard and important in the literature of social networks , see e.g. holland and the references therein . for directed graphs , recently developed techniques based on the graph limit theory of lovasz and the large deviations formula of chatterjee and varadhan not be directly applied . instead of trying to adapt these techniques to the directed case, we use completely different methods which lead to _ better _asymptotics for the free energy density . from the limiting free energy density , we find that the model has a phase diagram essentially identical to the one of .because our asymptotics are more precise , we are able to build on the results in .in particular , by carefully studying partial derivatives of the free energy density along the phase transition curve , we obtain precise scaling laws for the variance of edge and star densities and we compute exactly the limiting edge probabilities . to explain how our results fit into the phase transition framework of , we need to make the notions of free energy and phase transition more precise .consider the probability measure on the set of graphs on nodes defined by \right),\ ] ] where , are real parameters , is the probability normalization , and ( resp . ) is the probability that a random function from a single edge ( resp .a -star ) into is a homomorphism , i.e. , an edge preserving map between the vertex sets .the quantities and are called homomorphism densities ; see e.g. 
for details and a discussion .we consider both undirected and directed graphs .the model has at least a superficial similarity to the grand canonical ensemble in statistical physics , which describes the statistical properties of matter in thermal equilibrium .the grand canonical ensemble consists of a probability measure , defined on the set of locally finite subsets of ^d ] . note that is essentially identical to the function of the same name studied in : after multiplying and by two the functions differ only by a constant .this allows us to use results from concerning .we rederive the following formula for the limiting free energy density , first obtained in in the undirected graph case : [ free_energy ] for any , we have we restate the following result proved in : [ trans_curve ] there is a certain curve in the -plane with the endpoint such that off the curve and at the endpoint , has a unique global maximizer , while on the curve away from the endpoint , has two global maximizers , and , with .the curve in theorem [ trans_curve ] will be called the _ phase transition curve _ and written .the endpoint will be called the _critical point_. it is not possible to write an explicit equation for the curve ; see for a graph obtained numerically .however , in is is shown that is continuous and decreasing in , with .we have the following more precise result , which , since it concerns only the function , holds for both our model and that of : [ propertycurve ] ( i ) is differentiable for with in particular , \(ii ) is convex in . when , along the line the function is symmetric around .it follows that along this line , so theorem [ propertycurve ] implies .see figure 2(i ) . as discussed in the introduction ,the following theorems give the scaling of the variance of and .note that we compute this for any , including on the phase transition curve and at the critical point ; this extends the results in , which only hold off the phase transition curve .[ mainthm ] off the phase transition curve , on the phase transition curve except at the critical point , at the critical point , [ starvariance ] off the phase transition curve , on the transition curve except at the critical point , at the critical point , [ covariance ] off the phase transition curve , on the transition curve except at the critical point , at the critical point , when , with the critical point labeled by .( ii)-(iv ) : scaling of the variance of ( ii ) off the phase transition curve , ( iii ) at the critical point , and ( iv ) on the phase transition curve away from the critical point . for ( ii)-(iv )we use and values of , and , respectively .the straight lines are obtained from the scaling in theorem [ mainthm ] , and the squares are obtained by monte carlo simulation . ] the next theorem gives the limiting edge densities .[ marginaldensities ] off the phase transition curve and at the critical point , on the phase transition curve except at the critical point , where in it is proved that , off the phase transition curve and at the critical point , for large a typical graph behaves like the erds - rnyi graph , where is the unique global maximizer of .( see theorem 3.4 of for a more precise statement . )it is also shown that , on the phase transition curve except at the critical point , for large a typical graph behaves like , where is a mixture of the two global maximizers of .however , is not determined explicitly . 
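before turning to the directed setting in detail , the statements above can be explored numerically . as a stand - in for the function we take l(x) = beta1*x + beta2*x^p - x*log x - (1-x)*log(1-x) , the form suggested by the row - factorized representation used below ; the paper's normalization may differ by constants , so this is an illustrative assumption . the sketch locates the jump of the global maximizer ( the first - order transition of theorem [ trans_curve ] ) and , at a point off the curve , checks by monte carlo that the mean edge density approaches the unique maximizer , in the spirit of theorem [ marginaldensities ] .

import numpy as np
from math import lgamma

# Stand-in for the function l(x); the entropy normalization is an assumption.
def ell(x, beta1, beta2, p):
    return beta1 * x + beta2 * x ** p - x * np.log(x) - (1.0 - x) * np.log(1.0 - x)

def argmax_ell(beta1, beta2, p, grid=100001):
    x = np.linspace(1e-9, 1.0 - 1e-9, grid)
    return x[int(np.argmax(ell(x, beta1, beta2, p)))]

def sample_edge_density(n, p, beta1, beta2, reps=2000, seed=0):
    """Monte Carlo for the directed model under the assumption (suggested by
    Proposition [zn]) that the out-degrees W_i of the n vertices are i.i.d. with
    P(W = k) proportional to C(n, k) * exp(beta1*k + beta2*k**p / n**(p-1))."""
    rng = np.random.default_rng(seed)
    k = np.arange(n + 1, dtype=float)
    log_binom = np.array([lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)
                          for i in range(n + 1)])
    logw = log_binom + beta1 * k + beta2 * k ** p / n ** (p - 1)
    pmf = np.exp(logw - logw.max()); pmf /= pmf.sum()
    W = rng.choice(np.arange(n + 1), size=(reps, n), p=pmf)
    return W.sum(axis=1) / float(n * n)        # edge densities e(G), one per replicate

if __name__ == "__main__":
    p = 2
    # (i) locate the first-order transition: the global maximizer of l jumps.
    beta2 = 3.0
    prev = None
    for beta1 in np.linspace(-3.4, -2.6, 81):
        x_star = argmax_ell(beta1, beta2, p)
        if prev is not None and abs(x_star - prev) > 0.2:
            print("jump of x* near beta1 = %.3f (from %.3f to %.3f): transition curve"
                  % (beta1, prev, x_star))
        prev = x_star
    print("x* at beta1 = -3.40: %.3f ; at beta1 = -2.60: %.3f"
          % (argmax_ell(-3.4, beta2, p), argmax_ell(-2.6, beta2, p)))
    # (ii) off the curve, the mean edge density approaches the unique maximizer x*.
    beta1, beta2, n = -0.8, 0.5, 200
    e = sample_edge_density(n, p, beta1, beta2)
    print("x* = %.4f   E[e(G)] ~ %.4f   Var[e(G)] ~ %.2e"
          % (argmax_ell(beta1, beta2, p), e.mean(), e.var()))

the sampling step treats the out - degrees of the n vertices as i.i.d. draws from a tilted binomial law ; this is the structure suggested by proposition [ zn ] below and is stated here as an assumption rather than as the paper's exact construction .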
in our model , since we consider only directed graphs , we do not obtain an erds - rnyi graph in the limit .nevertheless , theorem [ marginaldensities ] is a qualitatively similar result about limiting edge probabilities , with an explicit formula for the mixture of edge probabilities , , along the phase transition curve .first we have the following formula for the normalization : [ zn ] let be a binomial random variable with parameters and : then \right)^n.\ ] ] next we approximate the expectation in proposition [ zn ] in terms of an integral : [ e ] let be a binomial random variable with parameters and . then for any , \\ & = \begin{cases } \left(1+o\left(n^{1/2-r}\right)\right ) n^k 2^{-n}\sqrt{\frac{n}{2\pi}}\int_{0}^{1 } \sqrt{\frac{x^{2k}}{x(1-x)}}e^{n\ell(x)}\,dx , & ( \beta_1,\beta_2 ) \ne ( \beta_1^c , \beta_2^c)\\ \left(1+o\left(n^{1/4-r}\right)\right ) n^k 2^{-n}\sqrt{\frac{n}{2\pi}}\int_{0}^{1 } \sqrt{\frac{x^{2k}}{x(1-x)}}e^{n\ell(x)}\,dx , & ( \beta_1,\beta_2 ) = ( \beta_1^c , \beta_2^c ) \end{cases}\end{aligned}\ ] ] lastly we give a technical lemma for computing the integral in proposition [ e ] : [ laplace ] let be an analytic function in with taylor expansion at given by for , define assume that ] and in particular .\(ii ) if , there exist so that on , on and on . moreover . if , has a unique local and hence global maximizer ; if , has a unique local and hence global maximizer . finally , if , then has two local maximizers and so that .since vanishes only at and , we have proved that .\(iii ) if , on ] . note that for all , we use this , the fact that , and the mean value theorem to write where and are between and , and is between and . observe that and that let and .the last three displays show that }\left|e^{n\ell(x ) } - e^{n\ell(y)}\right| = \begin{cases } e^{n\ell(x^*)}o(\exp(-\omega n^t ) ) , & j \notin b_n \\ e^{n\ell(x^*)}o(n^{-q } ) , & j \in b_n \end{cases},\ ] ] and so from , observe that latexmath:[\[\label{anbn } any , now by and proposition [ laplace ] , thus , now from we conclude \\ & = \left(1+o\left(n^{1/2-r}\right)\right ) 2^{-n}\sqrt{\frac{n}{2\pi}}\int_{0}^{1 } \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx.\end{aligned}\ ] ] next , consider on the transition curve away from the critical point . by theorem [ trans_curve ] ,there are two maximizers of , say and .defining it is not hard to see that the arguments above can be repeated to obtain the same result .finally , consider the case at the critical point . here ,equation still holds , but needs to be modified , as follows . 
by proposition [ order ], we have and , so by the mean value theorem we have where , , and are between and , and is between and .let and note that and that let and .the last three displays show that }\left|e^{n\ell(x ) } - e^{n\ell(y)}\right| = \begin{cases } e^{n\ell(x^*)}o(\exp(-\omega n^t ) ) , & j \notin b_n \\ e^{n\ell(x^*)}o(n^{-3q } ) , & j \in b_n \end{cases}.\ ] ] so from , using and , for any , now by and proposition [ laplace ] , thus , now from we conclude \\ & = \left(1+o\left(n^{1/4-r}\right)\right ) 2^{-n}\sqrt{\frac{n}{2\pi}}\int_{0}^{1 } \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx.\end{aligned}\ ] ] we will prove only ( i ) and ( iii ) , as ( ii ) is standard .we first consider ( i ) .note that for and , so for any , now let and , and pick .we use taylor expansions of and at and of at zero , along with proposition [ order ] , to compute e^{n(b_0 + b_1u + b_2 u^2 + \ldots)}\,du\\ & = e^{n\ell(c)}\int_{-\delta}^{\delta}\left[d_0 + d_1u + \ldots\right ] e^{nb_2 u^2 + nb_3u^3 + \ldots}\,du\\ & = e^{n\ell(c)}\int_{-\delta}^\delta\left[d_0 + d_1u + \ldots\right ] \left[1 + ( nb_3u^3 + \ldots ) + \frac{1}{2}(nb_3u^3 + \ldots)^2 + \ldots\right]e^{nb_2 u^2}\,du\\ & = e^{n\ell(c)}\left[n^{-1/2}d_0\alpha_1 + n^{-3/2}\lambda + o(n^{-5/2})\right]\end{aligned}\ ] ] where the last step is obtained by collecting terms of the same order , and the interchange of sum and integral is justified by dominated convergence theorem . since is the unique global maximizer of , we conclude that for some , it follows that .\ ] ] now we turn to ( iii ) .note that for and , so for any , as before we let and , pick and use taylor expansions of and at and at zero , along with proposition [ order ] , to write e^{n(b_0 + b_1u + b_2 u^2 + \ldots)}\,du\\ & = e^{n\ell(c)}\int_{c-\delta}^{c+\delta}\left[d_0 + d_1u + \ldots\right ] e^{nb_4 u^4 + nb_5u^5 + \ldots}\,du\\ & = e^{n\ell(c)}\int_{c-\delta}^{c+\delta}\left[d_0 + d_1u + \ldots\right ] \left[1 + ( nb_5u^5 + \ldots ) + \frac{1}{2}(nb_5u^5 + \ldots)^2 + \ldots\right]e^{nb_4 u^4}\,du\\ & = e^{n\ell(c)}\left[n^{-1/4}d_0\gamma_1 + n^{-3/4}\theta + o(n^{-5/4})\right],\end{aligned}\ ] ] where again the last step is obtained by collecting terms of the same order , and the interchange of sum and integral is justified by dominated convergence theorem .as before , since is the unique global maximizer of , we can conclude that .\ ] ] the remainder of the proofs are for the results in section [ theorems ] . by propositions [ e ] and [ laplace ], we have \\ & = o(n^{-1}\log n ) + \frac{1}{n}\log \int_0 ^ 1 \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx\\ & = o(n^{-1}\log n ) + \ell(x^ * ) .\end{split}\end{aligned}\ ] ] \(i ) along the phase transition curve , we have let be the two local maximizers of in the v - shaped region that contains the phase transition curve except the critical point . by proposition [ order ] , and are nonzero away from the critical point .the implicit function theorem implies that then and are analytic functions of both and .differentiating with respect to and using and , we can show that which implies that . 
as , and both and to the common maximizer .therefore , since and as , we get .\(ii ) differentiating with respect to , we get \frac{\partial x_{1}^{\ast}}{\partial\beta_{1 } } \nonumber \\ & \qquad -\frac{1}{((x_{1}^{\ast})^{p}-(x_{2}^{\ast})^{p})^{2 } } \left[(1-p)(x_{2}^{\ast})^{p } + p(x_{2}^{\ast})^{p-1}x_{1}^{\ast}-(x_{1}^{\ast})^{p}\right ] \frac{\partial x_{2}^{\ast}}{\partial\beta_{1}}.\label{secondderivative}\end{aligned}\ ] ] differentiating and with respect to , we get \frac{\partial x_{1}^{\ast}}{\partial\beta_{1}}=0,\label{eqniv } \\ & 1+pq'(\beta_{1})(x_{2}^{\ast})^{p-1 } + \left[pq(\beta_{1})(p-1)(x_{2}^{\ast})^{p-2}-\frac{1}{x_{2}^{\ast}(1-x_{2}^{\ast})}\right ] \frac{\partial x_{2}^{\ast}}{\partial\beta_{1}}=0.\label{eqnv}\end{aligned}\ ] ] notice that and moreover , in proposition [ order ] , we showed that therefore , from , , , , and , we conclude that and .finally , by noticing that in , we conclude that . in the proofsbelow , let be defined as in proposition [ laplace ] for the function off the phase transition curve , the result follows immediately from theorem [ free_energy ] and results in .thus , we prove only the last two displays in theorem [ mainthm ] . from the second line of , we have } { { \mathbb e}\left[\exp\left(\beta_{1 } w + \frac{\beta_{2}}{n^{p-1}}w^p\right)\right ] } \\ & \qquad\qquad\qquad - \left(\frac{{\mathbb e}\left[w\exp\left(\beta_{1 }w + \frac{\beta_{2}}{n^{p-1}}w^p\right)\right ] } { { \mathbb e}\left[\exp\left(\beta_{1 } w + \frac{\beta_{2}}{n^{p-1}}w^p\right)\right]}\right)^2\bigg\}. \nonumber\end{aligned}\ ] ] we use proposition [ e ] and proposition [ laplace ] to estimate each of the terms in . we first consider the case on the transition curve excluding the critical point . by theorem [ trans_curve ] ,there are two global maximizers of .let us write . by proposition [ laplace ] and proposition [ e ] , for any , we have \\ & = \left[1+o(n^{\frac{1}{2}-r})\right ] \frac{n^{k}2^{-n}\sqrt{n}}{\sqrt{2\pi}}\int_{0}^{1}\sqrt{\frac{x^{2k}}{x(1-x)}}e^{n\ell(x)}dx \nonumber \\ & = \left[1+o(n^{\frac{1}{2}-r})\right ] \frac{n^{k}2^{-n}\sqrt{n}}{\sqrt{2\pi}}\frac{e^{n\ell(x^{\ast})}}{\sqrt{n } } \left[\frac{\sqrt{\frac{(x_{1}^{\ast})^{2k}}{x_{1}^{\ast}(1-x_{1}^{\ast } ) } } } { \sqrt{2\pi\ell''(x_{1}^{\ast } ) } } + \frac{\sqrt{\frac{(x_{2}^{\ast})^{2k}}{x_{2}^{\ast}(1-x_{2}^{\ast } ) } } } { \sqrt{2\pi\ell''(x_{2}^{\ast})}}+o(n^{-1})\right ] \nonumber \\ & = \frac{n^{k}2^{-n}e^{n\ell(x^{\ast})}}{2\pi } \left[\frac{(x_{1}^{\ast})^{k } } { \sqrt{x_{1}^{\ast}(1-x_{1}^{\ast})\ell''(x_{1}^{\ast } ) } } + \frac{(x_{2}^{\ast})^{k } } { \sqrt{x_{2}^{\ast}(1-x_{2}^{\ast})\ell''(x_{2}^{\ast})}}+o(n^{\frac{1}{2}-r})\right ] .\nonumber\end{aligned}\ ] ] hence , next consider the case at the critical point . by proposition [ laplace ] and proposition [ e ] , for any , \\ & = \left[1+o(n^{\frac{1}{4}-r})\right ] \frac{n^{k}2^{-n}\sqrt{n}}{\sqrt{2\pi}}\int_{0}^{1}\sqrt{\frac{x^{2k}}{x(1-x)}}e^{n\ell(x)}dx \nonumber \\ & = \frac{n^{k}2^{-n}\sqrt{n}}{\sqrt{2\pi}}e^{n\ell(x^{\ast})}\left[n^{-1/4}d_{0}^{(k)}\gamma_1 + n^{-3/4}\theta^{(k ) } + o(n^{-r})\right ] , \nonumber\end{aligned}\ ] ] where then it is easy to observe that . by differentiating this identity, we get . 
therefore , by and , +o(n^{-\frac{3}{2 } } ) } { n^{-\frac{1}{2}}(d_{0}^{(0)})^{2}\gamma_{1}^{2}}+o(n^{\frac{5}{4}-r } ) \nonumber \\ & = \frac{n^{\frac{1}{2}}}{(d_{0}^{(0)})^{2}\gamma_{1 } } \left[\gamma_{3}\left(d_{0}^{(2)}d_{2}^{(0)}+d_{0}^{(0)}d_{2}^{(2 ) } -2d_{0}^{(1)}d_{2}^{(1)}\right)\right ] \nonumber \\ & \qquad\qquad + \frac{n^{\frac{1}{2}}}{(d_{0}^{(0)})^{2}\gamma_{1 } } \left[b_{5}\gamma_{7}\left(d_{0}^{(2)}d_{1}^{(0)}+d_{0}^{(0)}d_{1}^{(2 ) } -2d_{0}^{(1)}d_{1}^{(1)}\right)\right ] + o(n^{\frac{5}{4}-r } ) \nonumber \\ & = \frac{n^{\frac{1}{2}}\gamma_{3}}{(d_{0}^{(0)})^{2}\gamma_{1 } } \left(d_{0}^{(2)}d_{2}^{(0)}+d_{0}^{(0)}d_{2}^{(2 ) } -2d_{0}^{(1)}d_{2}^{(1)}\right ) + o(n^{\frac{5}{4}-r } ) \nonumber \\ & = n^{\frac{1}{2}}\frac{\gamma_{3}}{\gamma_{1 } } + o(n^{\frac{5}{4}-r } ) \nonumber \\ & = n^{\frac{1}{2}}\frac{\gamma(\frac{3}{4})}{\gamma(\frac{1}{4 } ) } \frac{1}{\sqrt{\frac{\ell^{(4)}(x^{\ast})}{4 ! } } } + o(n^{\frac{5}{4}-r } ) \nonumber = n^{\frac{1}{2}}\frac{\gamma(\frac{3}{4})}{\gamma(\frac{1}{4 } ) } \frac{2\sqrt{6}(p-1)}{p^{5/2 } } + o(n^{\frac{5}{4}-r } ) , \nonumber\end{aligned}\ ] ] where we used proposition [ order ] in the last line .we prove only the last two displays in theorem [ starvariance ] , since the first display follows immediately from theorem [ free_energy ] and results in . from the second line of, we have } { { \mathbb e}\left[\exp\left(\beta_{1 } w + \frac{\beta_{2}}{n^{p-1}}w^p\right)\right ] } \\ & \qquad\qquad\qquad - \left(\frac{{\mathbb e}\left[\frac{w^{p}}{n^{p-1}}\exp\left(\beta_{1 }w + \frac{\beta_{2}}{n^{p-1}}w^p\right)\right ] } { { \mathbb e}\left[\exp\left(\beta_{1 } w + \frac{\beta_{2}}{n^{p-1}}w^p\right)\right]}\right)^2\bigg\}. \nonumber\end{aligned}\ ] ] consider first the case on the phase transition curve excluding the critical point .then , similar to the proof of theorem [ mainthm ] , for any , now consider the case at the critical point .we have it is easy to observe that . by differentiating this identity, we get .similar to the proof of theorem [ mainthm ] , for any , again we prove only the last two displays in the theorem .from the second line of , we have } { { \mathbb e}\left[\exp\left(\beta_{1 } w + \frac{\beta_{2}}{n^{p-1}}w^p\right)\right ] } \nonumber \\ & \qquad -\frac{{{\mathbb e}\left[w\exp\left(\beta_{1 } w + \frac{\beta_{2}}{n^{p-1}}w^p\right)\right ] } { \mathbb e}\left[\frac{w^{p}}{n^{p-1 } } \exp\left(\beta_{1 } w + \frac{\beta_{2}}{n^{p-1}}w^p\right)\right ] } { \left({\mathbb e}\left[\exp\left(\beta_{1 } w + \frac{\beta_{2}}{n^{p-1}}w^p\right)\right]\right)^{2}}. \nonumber\end{aligned}\ ] ] similar to the proof of theorem [ mainthm ] , on the phase transition curve excluding the critical point , for any , consider now the case at the critical point .it is easy to observe that . 
by differentiating this identity, we get .therefore , similar to the proof of theorem [ mainthm ] , we get for any , +o(n^{-\frac{3}{2 } } ) } { n^{-\frac{1}{2}}(d_{0}^{(0)})^{2}\gamma_{1}^{2}+o(n^{-1 } ) } + o(n^{\frac{5}{4}-r } ) \nonumber \\ & = \frac{n^{\frac{1}{2}}\gamma_{3}}{(d_{0}^{(0)})^{2}\gamma_{1 } } \left(d_{0}^{(p+1)}d_{2}^{(0)}+d_{0}^{(0)}d_{2}^{(p+1 ) } -d_{0}^{(1)}d_{2}^{(p)}-d_{0}^{(p)}d_{2}^{(1)}\right ) + o(n^{\frac{5}{4}-r } ) \nonumber \\ & = p(x^{\ast})^{p-1}\frac{\gamma_{3}}{\gamma_{1}}n^{1/2}+o(n^{\frac{5}{4}-r } ) \nonumber \\ & = p\left(\frac{p-1}{p}\right)^{p-1}\frac{\gamma(\frac{3}{4})}{\gamma(\frac{1}{4 } ) } \frac{2\sqrt{6}(p-1)}{p^{5/2}}n^{1/2}+o(n^{\frac{5}{4}-r } ) .\nonumber\end{aligned}\ ] ] observe first that = \frac{1}{n}\mathbb{e}_{n}[\sum_{j=1}^{n}x_{1j}]$ ] .thus , off the transition curve we have \\ & = \lim_{n\rightarrow\infty}\frac{1}{n}\frac{\mathbb{e}\left[w\exp\left(\beta_{1 } w + \frac{\beta_{2}}{n^{p-1}}w^p\right)\right ] } { \mathbb{e}\left[\exp\left(\beta_{1 } w + \frac{\beta_{2}}{n^{p-1}}w^p\right)\right ] } \\ & = \lim_{n\rightarrow\infty } \frac{\left(1+o\left(n^{1/2 - 4q}\right)\right ) 2^{-n}\sqrt{\frac{n}{2\pi}}\int_{0}^{1 } \sqrt{\frac{x^{2}}{x(1-x)}}e^{n\ell(x)}\,dx}{\left(1+o\left(n^{1/2 - 4q}\right)\right ) 2^{-n}\sqrt{\frac{n}{2\pi}}\int_{0}^{1 } \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx } \\ & = \lim_{n\rightarrow\infty}\frac{\sqrt{\frac{2\pi(x^{\ast})^{2 } } { x^{\ast}(1-x^{\ast})|\ell''(x^{\ast})|}}n^{-\frac{1}{2}}e^{n\ell(x^{\ast } ) } } { \sqrt{\frac{2\pi}{x^{\ast}(1-x^{\ast})|\ell''(x^{\ast})|}}n^{-\frac{1}{2}}e^{n\ell(x^{\ast } ) } } \\ & = x^{\ast}.\end{aligned}\ ] ] similarly , at the critical point , } { \mathbb{e}\left[\exp\left(\beta_{1 } w + \frac{\beta_{2}}{n^{p-1}}w^p\right)\right ] } \\ & = \lim_{n\rightarrow\infty } \frac{\left(1+o\left(n^{1/4 - 4q}\right)\right ) 2^{-n}\sqrt{\frac{n}{2\pi}}\int_{0}^{1 } \sqrt{\frac{x^{2}}{x(1-x)}}e^{n\ell(x)}\,dx}{\left(1+o\left(n^{1/4 - 4q}\right)\right ) 2^{-n}\sqrt{\frac{n}{2\pi}}\int_{0}^{1 } \sqrt{\frac{1}{x(1-x)}}e^{n\ell(x)}\,dx } \\ & = \lim_{n\rightarrow\infty}\frac{e^{n\ell(x^{\ast})}n^{-\frac{1}{4}}d_{0}^{(1)}\gamma_{1 } } { e^{n\ell(x^{\ast})}n^{-\frac{1}{4}}d_{0}^{(0)}\gamma_{1 } } \\ & = x^{\ast}.\end{aligned}\ ] ] finally , on the phase transition curve except at the critical point , } { \mathbb{e}\left[\exp\left(\beta_{1 } w + \frac{\beta_{2}}{n^{p-1}}w^p\right)\right ] } \\ & = \lim_{n\rightarrow\infty}\frac{\left(\sqrt{\frac{2\pi(x_{1}^{\ast})^{2 } } { x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})| } } + \sqrt{\frac{2\pi(x_{2}^{\ast})^{2 } } { x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}}\right)n^{-\frac{1}{2}}e^{n\ell(x^{\ast } ) } } { \left(\sqrt{\frac{2\pi}{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})| } } + \sqrt{\frac{2\pi}{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}}\right)n^{-\frac{1}{2}}e^{n\ell(x^{\ast } ) } } \\ & = \frac{x_{1}^{\ast}\sqrt{\frac{1 } { x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})| } } + x_{2}^{\ast}\sqrt{\frac{1 } { x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})| } } } { \sqrt{\frac{1}{x_{1}^{\ast}(1-x_{1}^{\ast})|\ell''(x_{1}^{\ast})| } } + \sqrt{\frac{1}{x_{2}^{\ast}(1-x_{2}^{\ast})|\ell''(x_{2}^{\ast})|}}}.\end{aligned}\ ] ]the authors are very grateful to mei yin for helpful discussions .
|
we consider a family of directed exponential random graph models parametrized by edges and outward stars . essentially all of the statistical content of such models is given by the _ free energy density _ , which is an appropriately scaled version of the probability normalization . we derive precise asymptotics for the free energy density of _ finite _ graphs . we use this to rederive a formula for the limiting free energy density first obtained by chatterjee and diaconis . the limit is analytic everywhere except along a phase transition curve first identified by radin and yin . building on their results , we carefully study the model along the phase transition curve . in particular , we give precise scaling laws for the variance and covariance of edge and outward star densities , and we obtain an exact formula for the limiting edge probabilities , both on and off the phase transition curve .
|
when dealing with high volumes of vector - valued data of some large dimension , it is often assumed that the data possess some intrinsic geometric description in a space of unknown dimension and that the high dimensionality arises from an unknown stochastic mapping of into .we can pose the problem of _ nonlinear dimensionality reduction _ ( nldr ) as follows : given raw data with values in , we wish to obtain optimal estimates of the intrinsic dimension and of the stochastic map with the purpose of modeling the intrinsic geometry of the data in .one typically considers the following set - up : we are given a sample , where are i.i.d . according to an unknown absolutely continuous distribution .the corresponding pdf has to be estimated from the observation as .the intrinsic dimension of the data may not be known in advance and would also have be estimated as .since the pdf is assumed to arise from a stochastic map of the low - dimensional space into the high - dimensional space , we can use our knowledge about and in order to make inferences about the intrinsic geometry of the data . in the absence of such knowledge, any such inference has to be made based on the estimates and . in this paperwe introduce a complexity - regularized quantization approach to nldr , assuming that the intrinsic dimension of the data is given ( e.g. , as a maximum - likelihood estimate ) .we begin with a quick sketch of some notions about smooth manifolds .a _ smooth manifold _ of dimension is a set together with a collection , where the sets cover and each map is a bijection of onto an open set , such that for all with the map is smooth .the pairs are called _ charts _ of , and the entire collection is referred to as an _ atlas_. intuitively , the charts describe the points of by _ local _ coordinates : given and a chart , maps any point `` near '' ( i.e. , ) to an element of .smoothness of the transition maps ensures that local coordinates of a point transform differentiably under a change of chart . assuming that is compact , we can always choose the atlas in such a way that the indexing set is finite and each is an open ball of radius ( * ? ? ?* thm . 
3.3 ) ( one can always set for all , but we choose not to do this for greater flexibility in modeling ) .the next notion we need is that of a _ tangent space _ to at point , denoted by .let be an open interval such that .consider the set of all curves such that .then for any chart we have a function , such that for all in a sufficiently small neighborhood of .we say that two such curves are equivalent iff , , for all such that , where are the components of .the resulting set of equivalence classes has the structure of a vector space of dimension , and is precisely the tangent space .intuitively , allows us to `` linearize '' around .note that , although all the tangent spaces are isomorphic to each other and to , there is no meaningful way to add elements of and with distinct .next , we specify the class of stochastic embeddings dealt with in this paper .consider three random variables , where takes values in the finite set with , takes values in , and takes values in .conditional distributions of given and of given are assumed to be absolutely continuous and described by densities and , respectively .since for a compact the images of charts in are open balls of radii , let us suppose that the conditional mean ] of given is equal to .it is convenient to think of the eigenvectors of as giving a basis of the tangent space .the unconditional density of is the finite mixture , where .the resulting pdf follows the local structure of the manifold and accounts both for low- and high - dimensional noise . as an example ,let all be -dimensional zero - mean gaussians with unit covariance matrices , , and , , for some means , covariance matrices , and matrices , so that .consider a random vector with an absolutely continuous distribution , described by a pdf .we wish to find a mixture model that would not only yield a good `` local '' approximation to , but also have low complexity , where the precise notion of complexity depends on application . in order to set this up quantitatively, we use a complexity - regularized adaptation of the quantizer mismatch approach of gray and linder .we seek a finite collection of pdf s from a class of `` admissible '' models and a measurable partition of that would minimize the objective function , \label{eq : ibar}%\vspace{-10pt}\ ] ] where is the pdf defined as , is the relative entropy , is a regularization functional that quantifies the complexity of the model pdf relative to the entire collection , and is the parameter that controls the trade - off between the relative - entropy ( mismatch ) term and the complexity term . this minimization problem can be posed as a _ complexity - constrained quantization problem _ with an encoder corresponding to the partition through if , a decoder defined by , and a length function satisfying the kraft inequality . in order to describe the encoder and to quantify the performance of the quantization scheme , we need to choose a distortion measure between an input vector and an encoder output in such a way that minimizing average distortion would yield the -functional ( [ eq : ibar ] ) of the corresponding partition and codebook .consider the distortion ( this is not a distortion measure in the strict sense since it can be negative , but its expectation with respect to is nonnegative by the divergence inequality ) . for a given codebook and length function , the optimal encoder is the minimum - distortion encoder with ties broken arbitrarily .the resulting partition yields the average distortion ,\end{aligned}\ ] ] where . 
then \\ & & \qquad \ge \sum_{m \in \cm}p_m\big[d(f_m\|g_m ) + \mu\phi_\gamma(g_m)\big],\end{aligned}\ ] ] with equality if and only if .thus , the optimal decoder and length function for a given partition are such that the average -distortion is precisely the -functional. we can therefore iterate the optimality properties of the encoder , decoder and length function in a lloyd - type descent algorithm ; this can only decrease average distortion and thus the -functional .note that the term in does not affect the minimum - distortion encoder .thus , as far as the encoder is concerned , the distortion measure is equivalent to .when the distribution of is unknown , we can take a sufficiently large training sample and use a lloyd descent algorithm to empirically design a mixture model for the data : \1 ) * initialization : * begin with an initial codebook , where is the class of admissible models , and a length function .set iteration number , pick a convergence threshold , and let be the average -distortion of the initial codebook .\2 ) * minimum - distortion encoder : * encode each sample into the index .\3 ) * centroid decoder : * update the codebook by minimizing over all the empirical conditional expectation \equiv \frac{1}{n^{(r)}_m } \sum_{i : \alpha^{(r)}(x_i ) = m } \rho_0(x_i , g),\ ] ] where , i.e. , set ] , where is a suitable distortion measure on pairs of -vectors , e.g. , the squared euclidean distance , and the expectation is w.r.t . the empirical distribution of the sample .the first step is to use the above quantization scheme to fit a complexity - regularized gaussian mixture model to the training sample. our class of admissible model pdf s will be the set of all -dimensional gaussians with nonsingular covariance matrices , , and for each finite set we shall define a regularization functional that penalizes those that are `` geometrically complex '' relative to the rest of .the idea of `` geometric complexity '' can be motivated by the example of the gaussian mixture model from sect .[ sec : manifolds ] .the covariance matrix of the component , , is invariant under the mapping , where is a orthogonal matrix , i.e. , . in geometric terms , a copy of the orthogonal group associated with the component of the mixture is the group of rotations and reflections in the tangent space to at .thus , the log - likelihood term in is not affected by assigning arbitrary and independent orientations to the tangent spaces associated with the components of the mixture . however , since our goal is to model the intrinsic _ global _ geometry of the data , it should be possible to smoothly glue together the local data provided by our model .we therefore require that the orientations of the tangent spaces at `` nearby '' points change smoothly as well .( in fact , one has to impose certain continuity requirements on the orientation of the tangent spaces in order to define measure and integration on the manifold ( * ? ? ?xi ) . 
)given a finite set , we shall define the regularization functional as where is a smooth positive symmetric kernel such that as , and is the relative entropy between two gaussians .possible choices for the kernel are the inverse euclidean distance , a gaussian kernel for a suitable value of or a compactly supported `` bump '' , where is an infinitely differentiable reflection - symmetric function that is identically zero everywhere outside a closed ball of radius and one everywhere inside an open ball of radius .the relative entropy serves as a measure of position and orientation alignment of the tangent spaces , while the smoothing kernel ensures that more weight is assigned to `` nearby '' components .this complexity functional is a generalization of the `` global coordination '' prior of brand to mixtures with unequal component weights . with these definitions of and , the -distortion for a codebook and a length function is where we have also removed the term as it does not affect the encoder .the effect of the geometric complexity term is to curve the boundaries of the partition cells according to locally interpolated `` nonlocal information '' about the rest of the codebook . determining the lloyd centroids for the decoder will involve solving simultaneous nonlinear equations for the means and the same number of equations for the covariance matrices . for computational efficiency we can use the kernel data from the previous iteration , which would sacrifice optimality but avoid nonlinear equations .the output of the previous step is a gauss mixture model and a partition of .suppose that for each the eigenvectors of are numbered in the order of decreasing eigenvalues , .the next step is to design the dimension - reducing map and the reconstruction map .one method , proposed by brand , is to use the mixture model of the underlying pdf [ obtained in his case by an em algorithm with a prior corresponding to the average of the complexity over the entire codebook and with equiprobable components of the mixture ] to construct a mixture of local affine transforms , preceded by local karhunen- transforms , as a solution to a weighted least - squares problem .however , we can use the encoder partition directly : for each , let , where is the projection onto the first eigenvectors of , and then define .this approach is similar to local principal component analysis of kambhatla and leen , except that their quantizer was not complexity - regularized and therefore the shape of the resulting voronoi regions was determined only by local statistical data .we can describe the operation of dimension reduction ( feature extraction ) as an encoder , so that , where is the minimum - distortion encoder for the -distortion .the corresponding reconstruction operation can be designed as a decoder which receives a pair , , and computes , where denotes the usual scalar product in .this encoder - decoder pair is a composite karhunen- transform coder matched to the mixture source .if the data alphabet is compact , then the squared - error distortion is bounded by some , and the mismatch due to using this composite coder on the disjoint mixture source can be bounded from above by , where is the norm . provided that the mixture is optimal for in the sense of minimizing the -distortion , we can use pinsker s inequality ( * ? ? ?* ch . 
5 ) and convexity of the relative entropy to further bound the mismatch by .note that the maps and are not smooth , unlike the analogous maps of brand .this is an artifact of the hard partitioning used in our scheme .however , hard partitioning has certain advantages : it allows for use of composite codes and nonlinear interpolative vector quantization if additional compression of dimension - reduced data is required .moreover , the lack of smoothness is not a problem in our case because we can use kernel interpolation techniques to model the geometry of dimension - reduced data by a smooth manifold , as explained next .our use of mixture models has been motivated by certain assumptions about the structure of stochastic embeddings of low - dimensional manifolds into high - dimensional spaces .in particular , given an -dimensional gaussian mixture model , we can associate to each component of the mixture a chart of the underlying manifold , such that the image of the chart in is an open ball of radius centered at the origin , and we can take the first eigenvectors of the covariance matrix of as coordinate axes in the tangent space to the manifold at the inverse image of under the chart . owing to geometric complexity regularization ,the orientations of tangent spaces change smoothly as a function of position .ideally , one would like to construct a smooth manifold consistent with the given descriptions of charts and tangent spaces .however , this is a fairly difficult task since we not only have to define a smooth coordinate map for each chart , but also make sure that these maps satisfy the chart compatibility condition .instead , we can construct the manifold _implicitly _ by gluing the coordinate frames of the tangent spaces into an object having a smooth inner product . specifically , let us fix a sufficiently small , and let be an infinitely differentiable function that is identically zero everywhere outside a closed ball of radius and one everywhere inside an open ball of radius , with both balls centered at .let .the inner product of two vectors , treated as elements of the tangent space , is given by .then for each the map , is a symmetric form , which is positive definite whenever for at least one value of . in addition, the map is smooth . in this way, we have implicitly defined a _riemannian metric _vii ) on the underlying manifold .the functions form a so - called _ smooth partition of unity _ , which is the only known way of gluing together local geometric data to form smooth objects ( * ? ? ?ii ) . in geometric terms , for all if and only if is an image under the dimension - reduction map of a point in whose first principal components w.r.t . each gaussian in the mixture modelfall outside the covariance ellipsoid of that gaussian .if the mixture model is close to optimum , this will happen with negligible probability .a practical advantage of this feature of our scheme is in rendering it robust to outliers .our mixture modeling scheme can also be used to estimate the `` true '' but unknown pdf of the high - dimensional data , if we assume that belongs to some fixed class .indeed , the empirically designed codebook of gaussian pdf s , the corresponding component weights , and the mixture are random variables since they depend on the training sample .we are interested in the quality of approximation of by the mixture . following moulin and liu , we use the relative - entropy loss function . 
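before stating the bound , a minimal numerical sketch of the whole pipeline may be helpful : a hard - assignment lloyd descent ( minimum - distortion encoder , maximum - likelihood centroid update , ideal code lengths ) fits a small gaussian codebook to synthetic data drawn from a known mixture f* , and the relative - entropy loss between f* and the fitted mixture is then estimated by monte carlo on fresh samples . the geometric - complexity term is omitted from the centroid step for brevity , so this is a simplified illustration of the scheme described above , not the full algorithm ; all data and parameter choices are illustrative .

import numpy as np

def log_gauss(X, mean, cov):
    # multivariate normal log-density for the rows of X
    d = X - mean
    _, logdet = np.linalg.slogdet(cov)
    q = np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)
    return -0.5 * (q + logdet + X.shape[1] * np.log(2.0 * np.pi))

def lloyd_gauss_codebook(X, M=2, mu=1.0, n_iter=50, reg=1e-6, seed=0):
    """Hard-assignment Lloyd descent for a Gaussian codebook: minimum-distortion
    encoder with distortion -log g_m(x) + mu*l(m), maximum-likelihood centroid
    update per cell, and ideal code lengths l(m) = -log p_m.  The geometric
    complexity term Phi of the text is omitted here (a simplifying assumption)."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    idx = [int(rng.integers(N))]                 # farthest-point initialization
    for _ in range(M - 1):
        d2 = np.min([((X - X[i]) ** 2).sum(axis=1) for i in idx], axis=0)
        idx.append(int(np.argmax(d2)))
    means = X[idx].astype(float)
    covs = np.array([np.cov(X.T) + reg * np.eye(D)] * M)
    lens = np.full(M, np.log(M))
    for _ in range(n_iter):
        scores = np.stack([-log_gauss(X, means[m], covs[m]) + mu * lens[m]
                           for m in range(M)], axis=1)
        labels = scores.argmin(axis=1)           # minimum-distortion encoder
        for m in range(M):                       # centroid and code-length update
            sel = labels == m
            if sel.sum() > D:
                means[m] = X[sel].mean(axis=0)
                covs[m] = np.cov(X[sel].T) + reg * np.eye(D)
            lens[m] = -np.log(max(sel.mean(), 1e-12))
    weights = np.exp(-lens); weights /= weights.sum()
    return means, covs, weights

def mixture_logpdf(X, means, covs, weights):
    comps = np.stack([np.log(w) + log_gauss(X, m, c)
                      for m, c, w in zip(means, covs, weights)], axis=1)
    hi = comps.max(axis=1)
    return hi + np.log(np.exp(comps - hi[:, None]).sum(axis=1))

if __name__ == "__main__":
    # synthetic data from a known two-component Gaussian mixture f*
    rng = np.random.default_rng(1)
    true_means = [np.array([0.0, 0.0]), np.array([4.0, 1.0])]
    true_cov = 0.25 * np.eye(2)
    X = np.vstack([rng.multivariate_normal(m, true_cov, 400) for m in true_means])
    means, covs, weights = lloyd_gauss_codebook(X, M=2)
    # Monte Carlo estimate of the relative-entropy loss D(f* || f_hat) on fresh samples
    Y = np.vstack([rng.multivariate_normal(m, true_cov, 2000) for m in true_means])
    log_f_star = mixture_logpdf(Y, true_means, [true_cov] * 2, [0.5, 0.5])
    log_f_hat = mixture_logpdf(Y, means, covs, weights)
    print("estimated weights:", np.round(weights, 3))
    print("estimated relative-entropy loss = %.4f nats" % float(np.mean(log_f_star - log_f_hat)))

on this toy example the estimated loss comes out small , and it is precisely this kind of quantity that the index of resolvability controls in what follows .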
we shall give an upper bound on the loss in terms of the _ index of resolvability _ ,\ ] ] where , which quantifies how well can be approximated , in the relative - entropy sense ( and , by pinsker s inequality , in sense ) , by a gaussian of moderate geometric complexity relative to the rest of the codebook .we have the following result : let the codebook of gaussian pdf s be such that the log - likelihood ratios uniformly satisfy the _bernstein moment condition _ , i.e. , there exists some such that for all . let be the smallest number such that for all ( owing to the bernstein condition , it is nonnegative and finite ) . then , for any and , where .the expected loss satisfies \le \frac{1+\alpha}{1-\alpha}r_{\mu , n}(f^ * ) + \frac{4\abs{\cm}\mu}{(1-\alpha)n}. \label{eq : lossbound2}\ ] ] the probabilities and expectations are all w.r.t .the pdf .due to the fact that for all , the composite complexity satisfies the kraft inequality .then we can use a strategy similar to that of moulin and liu to prove that for each .hence , by the union bound for all , except for an event of probability at most . by convexity of the relative entropy , for all implies that for .therefore with probability at least . to prove ( [ eq : lossbound1 ] ) , we use the fact that if is a random variable with , then \le \int^\infty_0 \pr[z\ge t]dt ] , which proves ( [ eq : lossbound2 ] ) . to discuss consistency in the large - sample limit ,consider a sequence of empirically designed mixture models .this is different from the usual empirical quantizer design , where we increase the training set size but keep the number of quantizer levels fixed .the scheme is consistent in the relative - entropy sense if as , where and the expectation is with respect to .a sufficient condition for consistency can be determined by inspection of the upper bound in eq .( [ eq : lossbound2 ] ) .specifically , we require that the codebooks satisfy : ( a ) , ( b ) for all , and ( c ) . condition ( c ) can be satisfied by initializing the lloyd algorithm by a codebook of size much smaller than the training set size , which is usually done in practice in order to ensure good training performance .the first two conditions can also be easily met in many practical settings .consider , for instance , the class of all pdf s supported on a compact and lipschitz - continuous with lipschitz constant . then, if we take as our class of admissible gaussians for suitably chosen constants independent of , the relative entropy of any two can be bounded independently of , and condition ( a ) will be met with proper choice of the component weights .condition ( b ) is likewise easy to meet since the maximum value of any depends only on the set , the lipschitz constant , and the dimension . in general , the issue of optimal codebook design is closely related to the problem of universal vector quantization : we can consider , e.g. , a class of pdf s with disjoint supports contained in a compact . then a sequence of gaussian codebooks that yields a consistent estimate of each in the large - sample limit is weakly minimax universal for and can also be used to quantize any source contained in the -closed convex hull of .we have introduced a complexity - regularized quantization approach to nldr .one advantage of this scheme over existing methods for nldr based on gaussian mixtures , e.g. 
, , is that , instead of fitting a gauss mixture to the entire sample , we design a codebook of gaussians that provides a good trade - off between local adaptation to the data and global geometric coherence , which is key to robust geometric modeling .complexity regularization is based on a kernel smoothing technique that allows for a meaningful geometric description of dimension - reduced data by means of a riemannian metric and is also robust to outliers .moreover , to our knowledge , the consistency proof presented here is the first theoretical asymptotic consistency result applied to nldr .work is currently underway to implement the proposed scheme for applications to image processing and computer vision .also planned is future work on a quantization - based approach to estimating the intrinsic dimension of the data and on assessing asymptotic _ geometric _ consistency of our scheme in terms of the gromov - hausdorff distance between compact metric spaces .e. levina and p. bickel , `` maximum likelihood estimation of intrinsic dimension , '' in _ adv .neural inform . processing systems _ , l. saul , y. weiss , and l. bottou , eds .17.1em plus 0.5em minus 0.4emcambridge , ma : mit press , 2005 .s. roweis , l. saul , and g. hinton , `` global coordination of locally linear models , '' in _ adv .neural inform .processing systems _ , t. dietterich , s. becker , and z. ghahramani , eds . ,14.1em plus 0.5em minus 0.4emcambridge , ma : mit press , 2002 , pp .889896 .m. brand , `` charting a manifold , '' in _ adv .neural inform . processing systems _ ,s. becker , s. thrun , and k. obermayer , eds .15.1em plus 0.5em minus 0.4emcambridge , ma : mit press , 2003 , pp .
|
we consider the problem of nonlinear dimensionality reduction : given a training set of high - dimensional data whose `` intrinsic '' low dimension is assumed known , find a feature extraction map to low - dimensional space , a reconstruction map back to high - dimensional space , and a geometric description of the dimension - reduced data as a smooth manifold . we introduce a complexity - regularized quantization approach for fitting a gaussian mixture model to the training set via a lloyd algorithm . complexity regularization controls the trade - off between adaptation to the local shape of the underlying manifold and global geometric consistency . the resulting mixture model is used to design the feature extraction and reconstruction maps and to define a riemannian metric on the low - dimensional data . we also sketch a proof of consistency of our scheme for the purposes of estimating the unknown underlying pdf of high - dimensional data .
|
imaging atmospheric cherenkov telescopes ( iact ) have been very successful in detecting very high energy ( 0.130tev ) -rays from cosmic sources .the key component is a pixelated camera which has to resolve flashes of cherenkov light from air showers ( duration 15ns for -induced air showers , main wavelength range 300650 nm ) .high - sensitivity photo - sensors are needed , even using light - collecting optics , since e.g. a 1tev primary photon hitting the atmosphere only results in about one hundred cherenkov photons per square meter . until now , matrices of photomultiplier tubes ( pmt ) have always been employed for this task .this is a well - known technology which , however , comes with some intrinsic disadvantages for telescope applications .pmts are rather heavy and bulky , but at the same time fragile .they require high voltages of several 100v or even kv and are damaged when exposed to sunlight .typical pmts furthermore have photon detection efficiencies ( pde ) of only 2030 . since a few years, a new type of semiconductor light - sensor is being developed , the so - called geiger - mode avalanche photodiode ( g - apd ) .these light - weight devices are built up from multiple apd cells operated in geiger - mode .all the cells are connected in parallel and the overall signal is the sum of all simultaneously fired cells .they need bias voltages of 50100v , usually 15v above the breakdown voltage .a high gain similar to pmts is reached ( ) and , potentially , higher pdes of up to 50 .the market for g - apds is continuously growing , and several manufacturers are working on their improvement . for an iact application under realistic ambient conditions , several technical challenges have to be met .this mostly concerns the necessity to compensate for gain variations due to changes in temperature or night - sky background light ( nsb ) .while the former is an intrinsic feature of g - apds , the latter is a general problem and applies to other photo - sensors as well .an advantage of g - apds is that they can be operated at nsb rates up to several ghz per sensor , thus allowing measurements under twilight and moonlight conditions .first tests to detect cherenkov light with small g - apd arrays have been performed in the past . in order to develop a complete g - apd camera andfind solutions for the technical challenges , the fact project ( first g - apd camera test ) has been launched .the prototype presented in this paper marks the first step towards a large camera with a field of view of about ( 0.10.2 per pixel ) .this device is foreseen for the dwarf telescope ( dedicated multi - wavelength agn research facility ) which will perform monitoring of strong and varying -ray sources .the prototype module comprises 144 g - apds of type hamamatsu mppc .each has a sensitive area of 3 mm 3 mm , covered by 3600 cells of 50 m 50 m size .the operating bias voltage ranges from 71.15v to 71.55v for the 144 sensors ( at ) , corresponding to a gain of .the dark count rate is below 5mhz .a non - imaging light collector is placed on top of each g - apd to compensate for dead spaces due to the diode - chip packaging ( see figure [ fig : schematic ] , top part ) .open aluminum cones with an affixed reflecting foil ( esr vikuitiby 3 m ) and a quadratic base are in use ( bottom side : 2.8 mm 2.8 mm , top side : 7.2 mm 7.2 mm , height : 17.5 mm , effective solid angle : 0.55sr ) . with such collectors , the nsb rate during the darkest nights at e.g. 
the observatorio roque de los muchachos , la palma , is per sensor .the signals from four g - apds are summed up in front - end electronics boards ( feb ) , resulting in 36 quadratic pixels of 14.4 mm 14.4 mm size .these boards also perform a signal shaping and amplification on the analog level ( 4mv voltage output per a current input , pulse decay times 10ns ) .each feb comprises twelve channels , with a power consumption of 150mw per channel .figure [ fig : camera ] ( left ) shows a photograph of the camera module during assembly , including the febs .also 16 of the 144 g - apds are visible , and a block of four light collectors which are usually mounted on top of the sensors .such a block corresponds to one pixel ( readout channel ) . in order to keep the gain of the g - apds stable in case of changes of the ambient temperature and nsb conditions ,a bias voltage feedback system has been foreseen .a calibration signal from a pulsed and temperature - stabilized light emitting diode can be monitored continuously and , if a change in amplitude is detected on a certain channel , the voltage of the corresponding pixel is modified to readjust the amplitude .special power supply modules have been developed for this task , which can communicate with the camera control software ( see also section [ sec : daq ] ) via an usb interface and allow the bias voltage of each pixel to be regulated individuallymv .this translates into gain variations of a few percent . ] .in addition , a water cooling system has been installed .an external cooling unit provides water of an adjustable temperature , which is pumped through copper tubes soldered onto a copper plate .the g - apds are coupled to this plate by a thermally conductive but electrically insulating filling material .several temperature sensors and one humidity sensor are used to monitor the conditions inside the box and on the cooling plate , as can be seen from figure [ fig : camera ] ( middle ) .the cooling system is not obligatory to operate the sensors , but can be used optionally .the whole camera module is mounted inside a water - tight box ( see figure [ fig : camera ] , right ) , which has a protection window , a separate shutter ( not shown in the photograph ) and connectors for the signal and voltage supply cables . 
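the bias - voltage feedback described above can be summarized , per pixel , as a simple control loop : the amplitude of the calibration pulse is compared with its target value and the bias voltage is nudged accordingly . the following sketch shows one such proportional update ; the gain , the clamping range and the control law itself are illustrative assumptions and not the actual fact implementation .

def feedback_step(v_bias, amp_measured, amp_target, k_p=0.002, v_min=70.0, v_max=72.5):
    """One proportional feedback step for a single pixel: nudge the bias voltage to pull
    the measured calibration-pulse amplitude back towards its target value.
    The gain k_p and the clamping range are illustrative placeholders."""
    dv = k_p * (amp_target - amp_measured)       # G-APD gain rises with over-voltage
    return min(max(v_bias + dv, v_min), v_max)

if __name__ == "__main__":
    v, target = 71.35, 100.0                     # volts, arbitrary amplitude units
    for amp in (100.0, 92.0, 95.5, 98.8, 101.0): # simulated drifting calibration amplitudes
        v = feedback_step(v, amp, target)
        print("measured %.1f -> new bias %.4f V" % (amp, v))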
( caption of figure [ fig : camera ] : left : photograph of the camera module during assembly , with the g - apd sensors attached to the front and the three front - end electronics boards attached to the back ; a block of four light collectors ( 1 pixel ) is mounted separately for demonstration purposes . middle : fully assembled module including cooling plate . right : integration into water - tight box ; signal and voltage supply cables attached . ) the analog signals are transferred from the camera box to the counting room by means of 20 m long coaxial cables . they are fed into linear fan - out modules ( nim standard ) . one module comprises twelve sub - units , each with one input channel and two normal and one inverted output channels . one of the ( positive ) outputs is connected to the data acquisition ( daq ) system , while the inverted ( negative ) signal is used for triggering purposes ( see figure [ fig : schematic ] ) . the trigger logic consists of a caen v812 vme board where a majority coincidence of the innermost pixels is formed . individual trigger thresholds can be set via the vme bus . the daq itself is based on the drs2 ( domino ring sampling ) chip containing ten analog pipelines of capacitive cells . signal sampling is performed with a frequency of , generated on - chip by a series of inverters . such high sampling frequencies are desired for an iact , since excellent timing significantly improves the reconstruction of the properties of the primary particle . each pipeline is read out at and externally digitized by a multiplexed flash adc ( analog - to - digital converter ) . the drs2 chips are mounted on mezzanine cards , which are hosted by vme boards ( two chips per card and two cards per vme board ) . in this design , eight channels of each chip are available for external signals ( see also ) . a single - board computer is used as vme controller , with an attached hard disk for data storage . the daq and bias voltage control software are also running on this computer . for a trigger rate of , typical data rates are of the order of / s . sampling the signal with the ring sampler at frequencies in the ghz range allows a precise photon arrival time measurement , provided the so - called fixed - pattern aperture jitter of the drs2 chip is corrected . this relatively large , but systematic and gradual spread in its bin widths results from the manufacturing process . as the occurrence of a trigger is random with respect to the physical pipeline , the time measured between the trigger and the signal , being the sum of the widths of the involved bins , is also randomized to some degree .
at 2ghz sampling ,this jitter can amount to about 5ns and can have a complicated distribution , depending on the distance between trigger and signal .it can be corrected by calibrating with a high - frequency signal from a precision generator .the procedure involves measuring the period of this signal with the drs2 , and stretching or shrinking of the bin widths within that particular period to match the generator output .as the revolution frequency of the domino wave is phase locked to a stable oscillator , the revolution time ( the sum over all 1024 individual bin widths ) itself is fixed .therefore , the inverse correction is applied to all bins outside the current period . doing this for all periods of a sampled waveform and iteratively for many waveforms with random phase relative to the domino wave converges towards the required correction values .the effectiveness of the jitter correction is shown in figure [ fig : jitter ] ( left ) .the histograms show the time distributions of the rising edge of a pulse relative to the trigger signal with and without the correction at 2ghz sampling frequency .both the ( square ) test - pulse and the trigger were derived from a single output of a pulse generator .the correction values were determined in this case using a 200mhz rectangular signal . after applying them, the pulse time distribution has a width of 390ps root - mean - square .in figure [ fig : jitter ] ( right ) the experimental setup used for the detection of cosmic air showers is shown .the camera was mounted in the focus of a zenith - pointing spherical mirror ( 90 cm diameter ) .the focal length of the mirror was 80 cm which , taking into account the pixel dimensions , translates into a field of view of per pixel .the measurements were performed during the night of july 2 , 2009 , near the city center of zurich .thus the nsb from surrounding buildings and also from partial moonlight was rather high .it was determined with an external sky quality meter ( unihedron ) to be 300mhz per g - apd ( 1.2ghz per pixel ) . during the measurements , which took about an hour , the outdoor temperature was around .the g - apd plane was cooled to and the bias voltage for all pixels set to the nominal values for this temperature ( lower than for operation at ) . under these stable conditions ,no bias voltage feedback had to be used .a sampling frequency of 2ghz was set for the drs2 chips .the trigger threshold was adjusted for each of the innermost 16 pixels , such that single pixel trigger rates of 13khz were obtained for an opened camera shutter .this was achieved by applying discriminator thresholds of to the analog signals . a majority coincidence of four channels above threshold within a time window of 20nswas furthermore required . under these circumstances , the daq system recorded data with an average trigger rate of 0.02hz ( also including noise triggers ) .the latency of the whole system was about 350ns .for the offline analysis , events corresponding to cherenkov light flashes have been filtered out from the recorded data .several neighboring channels containing a clear signal , at the time position expected from the trigger latency , have been required .this is demonstrated in figure [ fig : signal ] , which presents the raw data for a certain pixel ( event # 16 of run # 207 ) .the large plot shows the full drs2 pipeline with a cherenkov - light signal at about 160ns , while the inset presents a zoom to the data between 0 and 150ns . 
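the bin - width calibration described in the daq section above can be illustrated with a toy simulation : true bin widths with a fixed - pattern spread are generated , a periodic calibration signal with random phase is recorded onto them , and the estimated widths are iteratively stretched or shrunk within each measured period ( with the inverse correction applied outside , so that the fixed revolution time is preserved ) . all numbers below are illustrative and the update scheme is a simplified stand - in for the actual procedure .

import numpy as np

def calibrate_bin_widths(n_bins=1024, f_samp=2.0e9, f_cal=200.0e6, jitter_rms=0.15,
                         n_waveforms=1500, relax=0.05, seed=0):
    """Toy model of the fixed-pattern jitter calibration: a signal of known period
    is recorded on bins with unknown true widths, the estimated widths inside each
    measured period are stretched or shrunk towards the generator period, the bins
    outside get the inverse correction (the revolution time is fixed), and the update
    is relaxed and repeated for many waveforms with random phase."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / f_samp
    t_rev = n_bins * dt                                    # fixed revolution time
    t_cal = 1.0 / f_cal                                    # known calibration period
    true_w = dt * np.clip(1.0 + jitter_rms * rng.normal(size=n_bins), 0.2, None)
    true_w *= t_rev / true_w.sum()
    true_edges = np.cumsum(true_w)                         # end times of the true bins
    est_w = np.full(n_bins, dt)                            # start from nominal widths
    for _ in range(n_waveforms):
        phase = rng.uniform(0.0, t_cal)
        times = np.arange(phase, t_rev, t_cal)             # rising edges of the signal
        bins = np.searchsorted(true_edges, times)          # bin in which each edge is sampled
        for b0, b1 in zip(bins[:-1], bins[1:]):
            t_meas = est_w[b0:b1].sum()
            t_rest = est_w.sum() - t_meas
            gain_in = 1.0 + relax * (t_cal / t_meas - 1.0)
            gain_out = 1.0 + relax * ((t_rev - t_cal) / t_rest - 1.0)
            est_w *= gain_out
            est_w[b0:b1] *= gain_in / gain_out
        est_w *= t_rev / est_w.sum()                       # revolution time stays fixed
    return true_w, est_w

if __name__ == "__main__":
    true_w, est_w = calibrate_bin_widths()
    rms_nominal = np.sqrt(np.mean((true_w - true_w.mean()) ** 2))
    rms_calib = np.sqrt(np.mean((true_w - est_w) ** 2))
    print("bin-width RMS error: nominal %.1f ps, after calibration %.1f ps"
          % (rms_nominal * 1e12, rms_calib * 1e12))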
in the inset of figure [ fig : signal ] , the fluctuating signals from nsb photons are visible , which are very frequent and therefore pile up . the red line indicates the trigger threshold . from the data analysis it has been estimated that the trigger rate for air shower events was about 0.01hz . in figure [ fig : shower ] ( left ) the intensity distribution for the whole event introduced above is plotted . the maximum sample amplitude is shown on each of the 36 pixels , searched for within a time window of 100ns around the expected signal position . each amplitude has been corrected for the drs2 baseline , evaluated event - by - event from the data samples recorded before the signal arrival ( see figure [ fig : signal ] ) . on the right side of figure [ fig : shower ] , the jitter - corrected ( cf . section [ sec : daq ] ) arrival time of the signals is presented , each taken at the sample with the maximum of its amplitude ( no interpolation ) . only pixels with a signal amplitude above 60mv have been used for the timing calculation , the others are displayed in white color . a clear shower development is apparent for this event , starting in the top left corner of the display and extending to the bottom right corner within almost 20ns . figure [ fig : shower2 ] shows an event with a less extended time distribution ( event # 14 of run # 206 ) . especially the core pixels with the largest amplitudes are within a few ns . the two outer pixels with a time offset of 34ns compared to the core pixels are likely due to a sub - shower . taking into account that the camera covers a field of view of in both dimensions ( see also section [ sec : setup ] ) , such events certainly come from air showers induced by very high energetic cosmic - ray particles . dedicated simulations have been carried out , concluding that the experimental setup described in this paper has an energy threshold of several tev . primary protons with off - axis angles up to 5 have been simulated , and photon arrival time distributions consistent with the event presented in figure [ fig : shower2 ] have been observed . the event shown in figure [ fig : shower ] most probably corresponds to a primary particle with an off - axis angle of 10 or larger . because of the comparatively long trigger coincidence window , it was possible to trigger such events .

figure [ fig : shower ] : the raw signal of figure [ fig : signal ] belongs to the pixel in the second column from the left , third row from the top . left : intensity distribution over the 6 by 6 pixels . right : corresponding signal arrival time distribution ( see text ) .

figure [ fig : shower2 ] : left : intensity distribution over the 6 by 6 pixels . right : corresponding signal arrival time distribution ( see text ) . the color scales are the same as in figure [ fig : shower ] .

a 36-pixel prototype g - apd camera for cherenkov astronomy has successfully been constructed and commissioned . a daq system based on the drs2 chip has been set up . in a self - triggered mode , images of air showers induced by cosmic rays have been recorded . for the first time , this has been achieved with a complete g - apd camera . stable operation at room temperature and under high nsb light conditions is possible .
in summary , these photo - sensors have proven to fulfill the requirements of iact applications , with the potential to replace or complement pmts for future projects like the planned cherenkov telescope array ( cta ) .

i. braun et al . , _ first avalanche - photodiode camera test ( fact ) : a novel camera using g - apds for the observation of very high - energy gamma - rays with cherenkov telescopes _ , in proceedings of _ 5th international conference on new developments in photodetection _ , june 15 - 20 , 2008 , aix - les - bains , france , in press at nucl . instrum . meth . * a * .
t. bretz et al . , _ long - term monitoring of bright blazars with a dedicated cherenkov telescope _ , in proceedings of _ 4th international meeting on high energy gamma - ray astronomy _ , july 7 - 11 , 2008 , heidelberg , germany , aip conf . proc . * 1085 * ( 2008 ) 850 .
t. bretz and d. dorner , _ mars - cheobs goes monte carlo _ , in proceedings of _ 31st international cosmic ray conference _ , july 7 - 15 , 2009 , lodz , poland , to be published online at http://icrc2009.uni.lodz.pl .
|
geiger - mode avalanche photodiodes ( g - apd ) are promising new sensors for light detection in atmospheric cherenkov telescopes . in this paper , the design and commissioning of a 36-pixel g - apd prototype camera is presented . the data acquisition is based on the domino ring sampling ( drs2 ) chip . a sub - nanosecond time resolution has been achieved . cosmic - ray induced air showers have been recorded using an imaging mirror setup , in a self - triggered mode . this is the first time that such measurements have been carried out with a complete g - apd camera .
|
the einstein equations of general relativity are highly nonlinear and their solution presents a challenge that has been addressed by many researchers . an early solution of these equations is credited to schwarzschild for the field exterior to a star . however , interior solutions ( inside space occupied by matter ) are especially difficult to find due to the fact that the matter energy - momentum tensor is not zero. solutions for this case were derived for static spherical and cylindrical symmetry .in addition various constraints were derived on the structure of a spherically symmetric body in static gravitational equilibrium .a conjecture stating that general relativistic solutions for shear - free perfect fluids which obey a barotropic equation of state are either non - expanding or non - rotating has been discussed in a recent review article .interior solutions in the presence of anisotropy and other geometries were considered also .in addition , interior solutions to the einstein - maxwell equations have been presented in the literature .an exhaustive list of references for exact solutions of einstein equations ( up to the year 2009 ) appears in . in most cases the interior solutions derived in the past considered idealized physical conditions such as constant density and pressure and ignored thermodynamic irreversible processes that might take place in the interior of the ( compact ) object which lead to the emission of radiation and heatthese processes are important in the process of protostar formation due to self gravitation ( prior to nuclear ignition ) .to take this fact into account at least partially , we shall assume that the gas in the interior of these objects is isentropic .that is , the entropy produced within the object ( due to the irreversible thermodynamic and turbulent processes taking place ) is removed by heat and radiation and the gas maintains a constant entropy .the same reasoning may apply to mass ejections during gamma ray bursts . for isentropic gaswe have the following relationship between pressure and density where is constant and is the * isentropy index*. two models for will be considered in this paper , one with constant and the other with as a function of , the distance from the sphere center .it is our objective in this paper to derive interior solutions for spheres which consist of isentropic gas .in particular we shall investigate solutions to the einstein equations which represent spheres in which mass is arranged in shells .this structure might then evolve to represent the early stages of the process that leads to the formation of a solar system .in fact it was laplace in 1796 who originally put forth the hypothesis that planetary systems evolve from a family of isolated rings formed from a primitive solar nebula " .such a system of rings around a protostar was observed recently by the atacama large millimeter / submillimeter array in the constellation taurus .the plan of the paper is as follows : in section 2 we review the basic theory and equations that govern mass distribution and the components of the metric tensor . in section 3 we present exact , approximate and numerical solutions to these equations for spheres made of isentropic gas in which the isentropic index is a function of . in section 4we do the same for spheres with constant isentropic index but with .we summarize with some conclusions in section 5 .in this section we present a review of the basic theory , following chapter in . 
the general form of the einstein equations is where and are respectively the contracted form of the riemann tensor and the ricci scalar , is the matter stress - energy tensor , is newton s gravitational constant , is the speed of light in a vacuum and is the metric tensor .the general expression for the stress - energy tensor is where is the proper density of matter and is the four vector velocity of the flow . in the followingwe shall assume that , and a metric tensor of the form .\ ] ] where , and are the spherical coordinates in 3-space .when matter is static and takes the following form , after some algebra one obtains equations for , , , and ( where is the total mass of the sphere up to radius ) .these are + \frac{1}{2r}\left(\frac{d\nu}{dr}+\frac{d\lambda}{dr}\right)- \frac{1}{2}\frac{d^2\nu}{dr^2}\ ] ] where is the speed of light .in addition we have the tolman - oppenheimer - volkoff ( tov ) equation which is a consequence of ( [ 2.5])-([2.8 ] ) , in the following we normalize to ; remains .assuming that is known we can solve ( [ 2.7 ] ) algebraically for and substitute the result in ( [ 2.8 ] ) to derive the following equation for , although this is a nonlinear equation it can be linearized by the substitution which leads to the equations given in the previous section one can derive a single equation for for a generalized isentropic gas where both and are functions of to this end we substitute the isentropy relation ( [ 7.1 ] ) in ( [ 2.8 ] ) to obtain using ( [ 2.5 ] ) to substitute for in ( [ 7.2 ] ) , normalizing to and using the fact that it follows that using ( [ 2.6 ] ) to substitute for in ( [ 7.3 ] ) and solving the result for yields , differentiating this equation to obtain an expression for and substituting in ( [ 2.10 ] ) leads finally to the following general equation for \ } \left(\frac{dm(r)}{dr}\right)^{\alpha(r)+1 } \\ \notag & & + 2r^{4 - 2\alpha(r)}b^{1-\alpha(r)}a(r)(2m(r)-r ) \left(\frac{dm(r)}{dr}\right)^{\alpha(r)+2}+ 2r^{6 - 4\alpha(r)}b^{2 - 2\alpha(r)}a(r)^2 \left(\frac{dm(r)}{dr}\right)^{2\alpha(r)+1 } \\\notag & & -2r^{4 - 2\alpha(r)}b^{1-\alpha(r)}(2m(r)-r)^2 \left(\frac{dm(r)}{dr}\right)^{\alpha(r)+1 } \left[\frac{da(r)}{dr } + a(r)\ln\left(\frac{\frac{dm(r)}{dr}}{br^2}\right)\frac{d\alpha(r)}{dr}\right]=0.\end{aligned}\ ] ] this is a highly nonlinear equation but it simplifies considerably when is a constant or is an integer .we explore some of the numerical solutions of this equation in the next two sections .a solution of this equation can be used then to compute the metric coefficients using ( [ 2.6 ] ) and ( [ 7.4 ] ) . with this equationit is feasible to investigate the dependence of the mass distribution on the parameters and .although ( [ 7.5 ] ) is highly nolinear , one can obtain analytic solutions for some predetermined functional values for . 1 .this ansatz leads to the following relation between and : this relation implies that under present assumptions must be negative .2 . , yields 3 . and ( where is a constant ) leads to the following value for : here is an integration constant .a similar but algebraically more complicated result can be obtained for where is a constant .4 . 
for and it follows that } { b(2r+\sqrt{6})(\sqrt{6}-2r)(c_1r+\sqrt{2r^2 - 3})}\ ] ] where is an integration constant .a similar result can be obtained for where is a constant .it should be observed that the material density for the last three examples is constant .these examples might therefore represent different routes for the evolution of a uniform interstellar gas towards the creation of a protostar ( and nuclear ignition ) .however we were able to obtain also analytic solutions in terms of hypergeometric and heun functions for with and or .in the following we solve ( [ 2.5 ] ) through ( [ 2.8 ] ) for an isentropic gas sphere in which the isentropy index varies with .we discuss three examples .the first presents an analytic solution of these equations while the other two utilize numerical computations .when is a constant and is a function of it natural to start by choosing a functional form for the density and then solve ( [ 2.5 ] ) for .( [ 2.6 ] ) becomes an algebraic equation for while ( [ 2.7 ] ) is a differential equation for .finally , substituting this result in ( [ 7.2 ] ) one can compute the isentropy index .the following illustrates this procedure and leads to an analytic solution for the metric coefficients .consider a sphere of radius ( where ) with the density function where is the constant in ( [ 2.5 ] ) . using ( [ 2.5 ] ) with the initial condition we then have for observe that although is singular at the total mass of the sphere is finite . using ( [ 2.6 ] ) yields substituting ( [ 3.3 ] ) in ( [ 2.12 ] ) we obtain a general solution for which is valid for , and .it is where for r=1 the solution is \ ] ] at we have and the metric is singular at this point .this reflects the fact that the density function ( [ 3.2 ] ) has a singularity at ( but the total mass of the sphere is finite ) . to determine the constants and we use the fact that at the value of should match the classic schwarzschild exterior solution and the pressure ( see [ 2.8 ] ) is zero .these conditions lead to the following equations : the solution of these equations is a plot of on a semi - log scale is given for this example in fig .this graph displays an unexpected feature which shows that remains close to zero except within a region in the middle of the sphere .a possible interpretation of this may relate to ongoing thermodynamic processes within the sphere .for the differential equation for is the solution of this equation is \ ] ] and applying the boundary conditions on and the pressure at we find that a plot of exhibits several local spikes in the range but is zero otherwise .for the metric coefficient in ( [ 2.3 ] ) becomes therefore for this metric coefficient is positive and the space has euclidean structure .however for this metric coefficient is negative and the space has a lorentzian signature . for whole interior of the sphere has a euclidean metric .we consider these solutions spurious and have no physical interpretation for their peculiar properties at this time .for the corresponding differential equation for is whose general solution is consider a sphere of infinite radius with the density function where , are constants and the division by normalizes the density to at . solving ( [ 2.5 ] ) with the initial condition yields - 4k^2\}.\ ] ] observe that although the sphere is assumed to be of infinite radius the density approaches zero exponentially as and the total mass of the sphere is finite. 
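the same first step of this procedure - integrating the mass equation for a prescribed density profile - can also be carried out numerically when no closed form is available . the python sketch below assumes the mass equation ( [ 2.5 ] ) in the form dm / dr = 4 pi r^2 rho ( r ) in units with g = c = 1 , and evaluates the metric coefficient exp ( - lambda ) = 1 - 2 m ( r ) / r that follows from ( [ 2.6 ] ) ; the density profile , its normalization and the outer radius are placeholders chosen only for the illustration .

```python
import numpy as np
from scipy.integrate import solve_ivp

def mass_profile(rho, r_max, r_eval=None):
    """integrate dm/dr = 4*pi*r**2*rho(r) with m(0) = 0 (units g = c = 1).

    rho   : callable giving the density at radius r (placeholder profile below)
    r_max : outer radius of the sphere
    """
    sol = solve_ivp(lambda r, m: 4.0 * np.pi * r**2 * rho(r),
                    (1e-12, r_max), [0.0], t_eval=r_eval, rtol=1e-10)
    return sol.t, sol.y[0]

# placeholder density: exponentially decaying profile with a small central
# value so that 2 m(r) < r everywhere and the metric coefficient stays positive
rho = lambda r: 1e-3 * np.exp(-r)

r, m = mass_profile(rho, r_max=10.0, r_eval=np.linspace(1e-6, 10.0, 200))
metric_coefficient = 1.0 - 2.0 * m / r        # exp(-lambda)
print("total mass:", m[-1])
print("metric coefficient at the surface:", metric_coefficient[-1])
```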
substituting this result for into ( [ 2.10 ] ) or ( [ 2.12 ] ) we can solve numerically for and then , using ( [ 7.2 ] ) , for . depicts for and .we consider a sphere of radius with density function > from ( [ 2.5 ] ) with we then have ( the total mass of the sphere is ) .( [ 2.10 ] ) was used to solve for numerically with the boundary conditions and so that the value of matches that of the schwarzschild exterior solution at this point .we then used ( [ 7.2 ] ) to solve for . and depict respectively , and for .we considered spheres of radius , total mass of , , and different fluctuating .two different sets of functions were used in these simulations to compute using ( [ 7.5 ] ) . in the first set we used the functions : * a. , * b. , * c. .the results of these simulations are presented in fig .we observe that in this figure there are intervals where is constant which implies that in these regions . on the other handa step function " in the value of corresponds to a spike in .therefore for the functions and the mass is distributed in two shells , one around the middle " of the sphere and the other at the boundary . for the second set we used the functions * d. , * e. * f. .the results of these simulations are presented in fig . . in this figurethe plot for the function represents a two shell structure .however , for the functions and there are only ripples in ( which imply the existence of similar ripples in ) .in this section we consider isentropic spheres where or with different functions . to solve for the mass distribution under these constraints we use the proper reductions of ( [ 7.5 ] ) . when ( [ 7.5 ] ) reduces to \}\frac{dm(r)}{dr}- \\ \notag & & m(r)^2(2m(r)-r-1)-r^2(2m(r)-r)^2\frac{dm(r)}{dr}\frac{da(r)}{dr}=0\end{aligned}\ ] ] for a sphere of radius and we display the numerical solution of this equation with , , and in fig .the corresponding densities are displayed in fig . .for this set of functions the total mass is represented by smooth functions .however there are two peaks in the density , one near and the other at the boundary .similarly when we obtain for a sphere of radius with and we display the numerical solutions of this equation with , in fig . . in this case is wavy and as a result frequent fluctuations occur in the corresponding density function .a shell structure emerges clearly for .in this paper we considered the steady states of a spherical protostar or interstellar gas where general relativistic considerations have to be taken into account .in addition we considered the gas to be isentropic , thereby removing the ( implicit or explicit ) assumption that it is isothermal . under these assumptions we were able to derive a single equation for the total mass of the sphere as a function of . from a solution of this equation ,the corresponding metric coefficients may be computed in straightforward fashion .our approach was two - pronged .in the first we chose the density distribution and derived the isentropic index throughout the gas or we let be a predetermined non - constant function of and computed . in the second approach we set the isentropy index to a constant and solved the corresponding equation for . in both caseswe were able to derive solutions in which the mass is organized in shells .these solutions represent a new and different class of interior solutions to the einstein equations which has not yet been explored in the literature .
|
in the process of protostar formation , astrophysical gas clouds undergo thermodynamically irreversible processes and emit heat and radiation to their surroundings . due to the emission of this energy one can envision an idealized situation in which the gas entropy remains nearly constant . in this setting , we derive in this paper interior solutions to the einstein equations of general relativity for spheres which consist of isentropic gas . to accomplish this objective we derive a single equation for the cumulative mass distribution in the protostar . from a solution of this equation one can readily infer the coefficients of the metric tensor . in this paper we present analytic and numerical solutions for the structure of the isentropic self - gravitating gas . in particular we look for solutions in which the mass distribution indicates the presence of shells , a possible precursor to solar system formation . another possible physical motivation for this research comes from the observation that gamma ray bursts are accompanied by the ejection of large amounts of thermodynamically active gas at relativistic velocities . under these conditions it is natural to use the equations of general relativity to inquire about the structure of the ejected mass .
|
the gravitational potential influences the rate at which time passes .this means that a hypothetical measurement of the age of a massive object like the sun or the earth would yield different results depending on whether performed at the surface or near the center . in this connection ,clearly , issues such as the initial assembly of cosmic dust to form the protoplanet eventually leading to the earth is not what is alluded to when considering the age .rather , the age is understood as e.g. the aging of radioactive elements in the earth , i.e. that fewer radioactive decays of a particular specimen have taken place in the earth center than on its surface .furthermore , arguments based on symmetry will convince most skeptics , including those from the general public , that there is no gravitational force at the earth center .consequently , such an effect can not be due to the force itself , but may instead be due to the accumulated action of gravity ( a layman expression for the gravitational potential energy being the radial integral of the force ) .thus , there is also a good deal of pedagogical value in this observation . in a series of lectures presented at caltech in 1962 - 63 ,feynman is reported to have shared this fascinating insight with the audience using the formulation `` ... since the center of the earth should be a day or two younger than the surface ! ''this thought experiment is just one among a plethora of fascinating observations about the physical world provided by richard feynman .although this time difference has been quoted in a few papers , either the lecturer or the transcribers had it wrong ; it should have been given as years instead of days. in this paper , we first present a simple back - of - the - envelope calculation which compares to what may have been given in the lecture series .we then present a more elaborate analysis which brings along a number of instructive points .we believe that this correction only makes the observation of age difference due to gravity even more intriguing . we stress that this paper is by no means an attempt at besmearing the reputation of neither feynman nor any of the authors who trustingly replicated his statement ( including one of the authors of the present paper , uiu ) .instead the , admittedly small , mistake is used as a pedagogical point much like the example the human failings of genius that ohanian has used in his book about einstein s mistakes . 
realising that even geniuses make mistakes may make the scientist more inclined towards critically examining any postulate on his / her own .we initially suppose that the object under consideration is a sphere with radius and mass , homogeneously distributed .its gravitational potential as a function of distance to its center is then given by such that the potential on its surface is and the potential in its center is .the difference between the gravitational potential at the center and at the surface is then a difference in gravitational potential implies a time dilation at the point with the lower potential .this is given by the standard gravitational redshift which here relates the ( angular ) frequencies at the center , , and at the surface .being the inverse of the period , the frequency is indirectly a measure of how quickly time passes .it is customary to use the symbol in this connection , and we emphasise that this variable has nothing to do with the earth rotation .we combine equation with the result for in equation and use that , we note that this treatment is based on equation which `` ... refers only to identically constructed clocks located at different distances from the center of mass of a gravitating body along the lines of force .all that is required is that the clocks obey the weak equivalence principle [ ... ] and the special theory of relativity . ''see ref . for a recent , instructive example that can be easily performed in the undergraduate laboratory to display one aspect of the equivalence principle . for the case of the earth , upon rewriting and setting the surface acceleration m / s , with being the earth radius , equation becomes such that the earth mass and the gravitational constant are not needed explicitly .for the sake of a back - of - the - envelope calculation we may exploit that year ( within 3% , although there is no direct connection between the motion of the earth around the sun and ) . the earth age is years and its average radius is 6371 km so that is approximately 10 ms .a year is approximately s. clearly , the use here of is a mnemonic device , not an expression of precision , although it is precise to about half a percent ( one could use instead of which , however , is imprecise to 5 percent ) .thus the difference between the age of the earth surface and its center becomes approximately years , with the center being youngest .this is the _ type _ of back - of - the - envelope calculation that one could imagine that feynman had in mind when he expressed his `` ... since the center of the earth should be a day [ which thus should have read year ] or two younger than the surface ! ''where the mistake actually entered in the lecture and transcription process is unlikely to ever be ascertained , and its exact origin is not important for the following discussion . with tabulated values for and a more precise number for the homogeneous earth is obtained : with the center being youngest . rather than assuming a homogeneous earth, we now turn to a more realistic density distribution .this yields a significantly different result and reveals some insights to the origin of the time difference . 
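before doing so , it is worth noting that the homogeneous estimate above is easy to check numerically . the short python sketch below evaluates the potential difference g r / 2 of the homogeneous sphere and multiplies the resulting fractional time dilation by the age of the earth ; the numerical constants ( surface gravity , mean radius , speed of light , age , length of a year ) are standard tabulated values inserted here only for the illustration .

```python
# homogeneous-earth estimate of the age difference between surface and center
g    = 9.82       # m / s^2, surface acceleration
R    = 6.371e6    # m, mean earth radius
c    = 2.998e8    # m / s, speed of light
T    = 4.54e9     # yr, age of the earth (standard tabulated value)
year = 3.156e7    # s, one year (roughly pi * 1e7 s, as used above)

dphi     = g * R / 2.0          # potential difference, homogeneous sphere
per_year = dphi / c**2 * year   # seconds gained by the surface per year
total    = dphi / c**2 * T      # years accumulated over the age of the earth

print(f"per year : {per_year * 1e3:.1f} ms")   # roughly 10 ms
print(f"total    : {total:.2f} yr")            # roughly 1.6 yr
```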
a rather precise description , but not the only one available , of the earth density profile is tabulated in the so - called preliminary reference earth model ( prem ) .very recently , the prem has been applied to give a detailed description of the earth gravity tunnel problem .we shall consider a spherically symmetric earth with a density only dependent on radius , , as given by the prem , see figure [ fig : density ] ..,scaledwidth=50.0% ] the gravitational potential caused by this sphere is then given by where is the mass specific force , or acceleration , due to gravity , with being the gravitational force ( which is why is the gravitational potential and not the gravitational potential _ energy _ ) .the gravitational potential energy is equal to the work done by taking a test particle of mass from infinity to a distance away from the center of the earth .we split the expression in two parts : with the first term being the work per unit mass done inside the object - in this case the earth - and the last term the work per unit mass done moving the test particle from infinity to the earth surface .the gravitational acceleration at a distance , , outside a sphere of mass is , the sign showing that it is directed towards the center . inside the sphere ,when , only the mass closer than to the center matters .we denote this by now we can write the sum specifically for as where the last term is the potential at the surface of the object .the integrand in the first term is the gravitational acceleration as a function of .when evaluated at the surface , , the result is the normal gravitational acceleration , .this can be seen on figure [ fig : acceleration ] where the acceleration felt at different distances to the earth center is shown . due to the mass distribution kink seen in figure [ fig : density ] at a radius of about 3500 km ,the acceleration becomes almost constantly equal to its surface value from this radius , outwards .m / s at the surface .the analytical curve is given by the simple scaling by assuming a homogeneous mass distribution.,scaledwidth=50.0% ] using the prem density distribution in eq .as an input to eq . , the more elaborate result for the age difference of the earth center and the surface is with the center being youngest . as a , perhaps , intriguing side - effect ,we show the time difference as a function of radius , see figure [ fig : tvariatione ] .as expected , the two theories predict similar time differences near the surface of the earth .closer to the center , the prem yields a larger result than the homogeneous distribution .this is because for small .in fact , assuming for simplicity that the object of radius consists of a region of high density for and zero density for , respecting that the total mass equals , and with , , the potential difference between center and surface becomes where is that of the homogeneous distribution .thus , the factor yields the increase in time difference compared to the homogeneous model .so for the earth , where this approximation is rather crude , we may set to be 3480 km as seen from the prem curve in figure [ fig : density ] , i.e. 
such that somewhat above the factor 1.7 obtained by the numerical method , as expected from the crudeness of this approximation .we end this section by showing that the time dilation due to the rotational speed of the surface of the earth makes a negligible contribution .the surface speed is given from the period of rotation as m where s is the stellar day ( the earth rotation period with respect to the fixed stars ) .since the time dilation in special relativity is given from the lorentz factor as we get with that years , which can be neglected in the present discussion .clearly , the calculations performed in connection with the earth can be performed for essentially any other cosmic object with known mass and radius , at least in the limit of a homogeneous mass distribution .however , we limit the additional cases to our cosmic neighbourhood , i.e. to that of the sun , in order to demonstrate the applicability of eq . .for the sun , in analogy with the prem which is based on seismic data , we choose the so - called model s for its density distribution , a model in good agreement with helioseismic data . in the homogeneous case ,the age difference between the sun center and surface , which can be rewritten as with being the surface escape velocity , is whereas with the model s solar model it becomes see figure [ fig : tvariations ] .the factor of difference between these two numbers is substantially larger than that between the same two numbers for the earth .this is a result of the earth being relatively homogeneous while for the sun , a significantly larger part of its mass is located close to its center . using eq . andapproximating from the density distribution , the model s curve in figure [ fig : densitysun ] , we get a factor , a much better approximation than for the case of the earth .as a final discussion we address the question : why did famous , respectable and clever physicists publish feynman s claim ( although not verbatim , actually ) that `` [ feynman ] concluded that the center of the earth should be a day or two younger than its surface '' , or `` [ feynman ] concludes that the center of the earth should be by a day or two younger than its surface '' and reversely `` atoms at the surface of the earth are a couple of days older than at its center '' , in the latter case even with the comment `` this was confirmed by airplane experiments in 1970s '' ? and why did other , equally talented physicists not correct _ that _ particular mistake in the foreword to the transcribed lectures , in spite of quite extensive discussions , spanning 24 pages , of among other things a few misconceptions etc . ? not to mention the transcribers - postdocs with feynman - who , along the way , probably have corrected a few mistakes here and there ? or the editor , who also provided introductory notes on quantum gravity ? why did one of us ( uiu ) , repeat the same mistake in a science book for the layman ?this , of course , was not because any of these physicists were unable to check the original claim , or found it particularly laborious to do so .instead , it seems likely that they knew that the qualitative effect had to be there , and simply trusted that feynman and his transcribers had got the number right .this is here considered an example of proof by ethos .the term proof by ethos refers to cases where a scientist s status in the community is so high that everybody else takes this person s calculations or results for granted . 
in other words ,nobody questions the validity of that scientist s claim because of the particular ethos that is associated with that person .the result is accepted merely by trust .indeed , the proof by ethos is not really a proof as it does not follow logically from a set of premises .but it is a proof in the sense that it is persuasive , and tells us something about how scientists work in practice when they accept a calculation or an experimental result .scientists must to a large extent rely on the validation of other fellow s work , and it happens to be a psychological default condition among many ( scientists ) , that if a famous peer has publicly announced a result , it is accepted at face value .this seems also to be the situation in the case of the flawed estimate of the relativistic age of the earth s core . in science , one route to becoming famous is being right on some important topics .however , just because someone has become famous , this person is evidently not necessarily right on all matters .feynman himself would most likely have agreed with this and he would probably not have fallen for his own miscalculation : for a long time , his own theory of beta decay was at odds with the then prevalent , but false , understanding of existing experimental results . upon finally realizing and correctingthis community - wide misunderstanding feynman wrote : `` since then i never pay any attention to ' ' experts `` .i calculate everything myself . '' . andwhen faced with a mistake of his own , he put it even more bluntly : `` what it says in the book [ i have written ] is absolutely wrong ! ''+ in spite of the small numerical mistake , feynman s observation that the center of the earth is younger than its surface is a fascinating demonstration of time dilation in relativity , and as such a very illustrative example for use in the classroom .ohanian , _einstein s mistakes , the human failings of genius _ , ( w.w .norton 2008 ) r.p .feynman , f.b .morinigo og w.g .wagner , _ feynman lectures on gravitation _ , edited by b. hatfield , ( westview press advanced book program , 2003 ) .a.m. nobili _ et al ._ , `` on the universality of free fall , the equivalence principle , and the gravitational redshift , '' am. j. phys . *81 * , 527 - 536 ( 2013 ) ._ , `` laboratory test of the galilean universality of the free fall experiment '' phys . ed .* 49 * , 201 ( 2014 ) .adam m. dziewonski and don l. anderson , `` preliminary reference earth model , '' physics of the earth and planetary interiors * 25 * , 297 - 356 ( 1981 ) .
|
we treat , as an illustrative example of gravitational time dilation in relativity , the observation that the center of the earth is younger than the surface by an appreciable amount . richard feynman first made this insightful point and presented an estimate of the size of the effect in a talk ; a transcription was later published in which the time difference is quoted as one or two days. however , a back - of - the - envelope calculation shows that the result is in fact a few years . in this paper we present this estimate alongside a more elaborate analysis yielding a difference of two and a half years . the aim is to provide a fairly complete solution to the relativity of the aging of an object due to differences in the gravitational potential . this solution - accessible at the undergraduate level - can be used for educational purposes , as an example in the classroom . finally , we also briefly discuss why exchanging years for days - which in retrospect is a quite simple , but significant , mistake - has been repeated seemingly uncritically , albeit in a few cases only . the pedagogical value of this discussion is to show students that any number or observation , no matter who brought it forward , must be critically examined .
|
the four fundamental measurements made in astronomy are the intensity , flux density , or surface brightness of the electromagnetic radiation emitted by a celestial object , the wavelength , or frequency , of the radiation , its location on the sky , and the polarization of the radiation .measurements of the latter two , location and polarization , follow the statistics of direction .the association of directional statistics with the measurement of location is obvious , but the application of directional statistics to polarization measurements is not immediately apparent until one recalls that the stokes parameters q , u , and v describe the orientation of a polarization vector within the poincar sphere .the stokes parameter v defines the circular polarization of the radiation and establishes the z - coordinate " of the polarization vector in the poincar sphere .the stokes parameters q and u describe the radiation s linear polarization and establish the vector s x- and y - coordinates , respectively . here, polarization measurements are shown to follow directional statistics , and these statistics are applied to polarization observations of radio pulsars .pulsars are rapidly rotating , highly magnetized neutron stars .their rotation periods range between about 1ms and 10s , and the strength of the magnetic field at their surfaces ranges from g for the oldest pulsars to over g for the youngest .a beam of radio emission is emitted from each of the star s magnetic poles .a pulse of radio emission is observed as the star s rotation causes the beam to sweep across an observer s line of sight .pulsar radio emission is generally thought to originate from charged particles streaming along open magnetic fields lines above the star s magnetic pole , but unlike other astrophysical radiative processes ( e.g. synchrotron radiation , maser emission , and thermal radiation ) , it is poorly understood .polarization observations of the individiual pulses from pulsars are made in an attempt to understand the radio emission mechanism and to study the propagation of radio waves in ultra - strong magnetic fields .polarization observations of individual pulses ( lyne et al . 1971 ; manchester et al .1975 ; backer & rankin 1980 ; stinebring et al .1984 ) show that the radiation can be highly elliptically polarized and highly variable , if not stochastic . in many cases ,the mean of the polarization position angle varies in an s - shaped pattern across the pulse . 
buthistograms of position angle created from the single pulse observations show the angles follow the pattern in two parallel paths separated by about 90 degrees ( stinebring et al .furthermore , histograms of fractional linear polarization show that the radiation is significantly depolarized at pulse locations where these orthogonally polarized ( opms ) modes occur .the opms are thought to be the natural modes of wave propagation in pulsar magnetospheres ( allen & melrose 1982 ; barnard & arons 1986 ) .the narrow bandwidths and short sampling intervals used in single pulse observations cause the instrumental noise in these observations to be large .the narrow bandwidths are used to overcome pulse smearing effects caused by the dispersion measure of , and multipath scattering in , the interstellar medium .the short sampling intervals , typically of order 100us , are needed to adequately resolve the short duration radio pulse .the combination of the stochastic nature of the intrinsic emission and the high instrumental noise suggests that a statistical approach is needed to analyze the single pulse data .most results from single pulse polarization observations have been reported as histograms of fractional linear polarization , fractional circular polarization , and polarization position angle ( backer & rankin 1980 ; stinebring et al .while these display methods are extremely useful , they do not provide a complete picture of pulsar polarization because they force a separate interpretation of the circular and linear polarization , instead of a combined one as the observed elliptical polarization of the radiation would suggest .a complete , three - dimensional view of the polarization can be made by plotting the polarization measurements from a specific pulse location in the poincar sphere and projecting the result in two dimensions .the projections show how the orientation of the polarization vector fluctuates on the poincar sphere and reveal a wide variety of quasi - organized patterns .for example , in the cone emission at the edges of the pulse in psr b0329 + 54 ( edwards & stappers 2004 ) , the patterns consist of two clusters of data points , each in a separate hemisphere of the poincar sphere . in the precursor to the pulsar s central core component, the pattern is a single cluster of data points . within the pulsar s core emission at the center of the pulse ,one of the two clusters seen in the cone emission stretches into an ellipse or bar , while the other spreads into an intriguing partial annulus .the signatures of these patterns are not apparent in histograms of fractional polarization or position angle , emphazing the benefit of analyzing the stokes parameters together. any viable model of pulsar polarization must be able to replicate the observed patterns in addition to the histograms of fractional polarization .the details of the statistical model for pulsar polarization are summarized in a series of papers by mckinnon and stinebring ( mckinnon & stinebring 1998 , 2000 ; mckinnon 2003 , 2004 , 2006 , 2009 ) .the main hypothesis of the model is the radiation s polarization is determined by the simultaneous interaction of two , highly polarized , orthogonal modes . 
by definition , the unit vectors representing the orthogonal modes are antiparallel in the poincar sphere and thus form a " mode diagonal " in the sphere . the model accounts for the statistical nature of the observed polarization fluctuations by assuming the mode intensities are independent random variables . the assumption of statistical independence requires the difference in mode phases to be large ( melrose 1979 ) and greatly simplifies the model by allowing the mode intensities to be added ( chandrasekhar 1960 ) . the model also accounts for the additive instrumental noise in each of the stokes parameters . by assuming the mode intensities and instrumental noise are normal random variables , one can derive analytical expressions for the distributions of total intensity , polarization , and fractional polarization , as well as distributions for the orientation angles of the polarization vector . the main result from the model for the purposes of this paper is the derivation of the conditional density of the polarization vector s orientation angles . the conditional density is the joint probability density of the vector s colatitude , , and longitude , , at a fixed value of polarization amplitude , . it captures the functional form of the more general joint density in a simple analytical expression . it is known as the bingham - mardia ( bingham & mardia 1978 ) , or von mises - fisher , distribution .

\[ f(\theta,\phi \mid p) = \frac{\exp\left[\pm\kappa(\cos\theta - \gamma)^{2}\right]}{w(\kappa,\gamma)} \label{eqn : bm} \]

the conditional density is parameterized by the constants and and is normalized by the constant . the constant can be regarded as a signal - to - noise ratio in polarization . the constant satisfies the relation . by construction , the distribution is symmetric in longitude , which is uniformly distributed over . the vector s longitude and colatitude are statistically independent of one another . the plus signs in the argument of the exponential in equation [ eqn : bm ] occur when the polarization fluctuations are predominantly parallel to the mode diagonal . they are caused by the randomly varying intensities of the opms . in this case , the functional form of the colatitude conditional density is generally bimodal . the polarization pattern formed by a projection of the conditional density generally consists of a set of concentric circular contours in each hemisphere of the projection . the circular shape of the pattern arises from the symmetry in longitude . the minus signs in the argument of the exponential in equation [ eqn : bm ] occur when the polarization fluctuations are predominantly perpendicular to the mode diagonal . the origin of these perpendicular fluctuations is not known , but is discussed in the following section .
in this case , the conditional density is always unimodal because it is normal in .the polarization pattern formed by the projection of this conditional density is generally a complete annulus in only one of the two projection hemispheres ( mckinnon 2009 ) .the general applicability of equation [ eqn : bm ] can be illustrated with a few special cases .when , the polarization fluctuations are dominated by instrumental noise , and the conditional density becomes isotropic , as one would expect for pure noise .when , the fluctuations are very small in comparison to the polarized signal , and the conditional density becomes a fisher distribution ( fisher et al .when and the fluctuations are predominantly along the mode diagonal , as caused by opms , the mean intensities of the modes are equal , the modes occur with equal frequency , and the conditional density becomes the watson bipolar distribution ( mckinnon 2006 ; fisher et al .1987 ) .the joint probability density of the vector s colatitude and longitude has been shown to be a reasonable representation of the distribution of angles that are actually observed ( mckinnon 2006 ) .the conditional density has been shown to produce projections of the poincar sphere that are qualitatively consistent with the polarization patterns observed in pulsar radio emission ( mckinnon 2009 ) .two aspects of the model and its application to the observations require additional explanation .these are ( 1 ) an explanation for the mechanism that causes the difference in mode phases to be large , thereby providing additional justification for the assumption of independent mode intensities and ( 2 ) a physical explanation for the mechanism that creates the fluctuations perpendicular to the mode diagonal , which were incorporated in the model to account for annular polarization patterns .the explanation for both may reside with generalized faraday rotation ( gfr ; edwards & stappers 2004 ; mckinnon 2009 ) . in general terms ,faraday rotation is the physical process that alters the difference between the phases of the modes as they propagate through a plasma ( melrose 1979 ) .the modes are incoherent when the difference in their phases _ at a given wavelength _ is large ( ) and are coherent ( coupled ) as long as the phase difference is small ( ) .the modes retain their individual polarization identity in an observation when they are incoherent , but effectively lose their individual identity when they are coherent .faraday rotation can become stochastic when the fluctuations in phase difference are large ( ; melrose & macquart 1998 ) .gfr alters the component of the radiation s polarization vector that is perpendicular to the polarization vectors of the plasma s wave propagation modes . for any plasma ,the unit vectors representing the polarization states of the two modes are anti - parallel on a diagonal through the poincar sphere . 
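before specializing to particular plasmas , it is perhaps instructive to see how the two opm clusters described above emerge from the model assumptions . the python sketch below draws independent , normally distributed mode intensities and adds gaussian instrumental noise to each stokes parameter , with the mode diagonal placed along the q axis of the poincaré sphere ; all numerical values are arbitrary illustration choices and are not fitted to any pulsar .

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_orientations(n=5000, mu1=3.0, mu2=2.0,
                          sigma_mode=1.0, sigma_noise=0.5):
    """toy realization of the two-mode model: the polarization vector is the
    difference of the two (independent, normal) mode intensities along the
    mode diagonal, plus gaussian instrumental noise in each stokes parameter.
    all parameters are arbitrary illustration values."""
    i1 = rng.normal(mu1, sigma_mode, n)                 # intensity of mode 1
    i2 = rng.normal(mu2, sigma_mode, n)                 # intensity of mode 2
    q = (i1 - i2) + rng.normal(0.0, sigma_noise, n)     # mode diagonal along q
    u = rng.normal(0.0, sigma_noise, n)
    v = rng.normal(0.0, sigma_noise, n)
    p = np.sqrt(q**2 + u**2 + v**2)
    theta = np.arccos(q / p)    # colatitude measured from the mode diagonal
    phi = np.arctan2(v, u)      # longitude about the diagonal
    return theta, phi

theta, phi = simulate_orientations()
# two clusters near theta = 0 and theta = pi signal the orthogonal modes;
# their relative populations reflect which mode is instantaneously stronger
print(np.mean(theta < np.pi / 2), np.mean(theta > np.pi / 2))
```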
for the cold , weakly - magnetized plasma that is the interstellar medium ( ism ) , the propagation modesare circularly polarized , and the mode diagonal defined by their polarization vectors connects the poles of the poincar sphere .faraday rotation in the ism causes the orientation of the radiation s polarization to vary in a plane perpendicular to the mode diagonal , either on the poincar sphere s equator or on a small circle parallel to it , depending upon the polarization state of the plasma - incident radiation .for the relativistic plasma in the strong magnetic field of a pulsar s magnetosphere , the modes are thought to be linearly polarized ( allen & melrose 1982 ; barnard & arons 1986 ; melrose 1979 ) so that the mode diagonal lies in the equatorial plane of the poincar sphere .similar to faraday rotation in the ism , gfr in a pulsar s magnetosphere causes the polarization vector to rotate on a small circle in the poincar sphere that is perpendicular to and centered on the mode diagonal ( e.g. see fig . 3 of kennett & melrose 1998 ) .random fluctuations in ( i.e. stochastic gfr ) would appear as a partial annulus around the mode diagonal , as is observed in the core component of psr b0329 + 54 ( edwards & stappers 2004 ) .figure [ fig : gfr ] is a plot of versus and summarizes the discussion above .the plot is divided into four regions , i through iv , that define the conditions under which opm and stochastic gfr can occur .opms can occur only in regions iii and iv , to the right of , where the modes are incoherent .the modes are coherent when ; therefore , opms will not be observed when conditions in the pulsar magnetosphere ( or the ism ) are consistent with those in regions i and ii .the faraday rotation that is typically observed in the ism or in the lobes of extragalactic radio jets occurs in region ii where the modes are coherent , but the fluctuations in are small .stochastic gfr can occur only under the conditions specific to region i , where the modes are coherent but the fluctuations in are large .returning now to the observations , the bimodal polarization pattern observed in the cone emission of psr b0329 + 54 arises from opms .the mean and standard deviation of at this location of the pulse would reside in region iii or iv of figure [ fig : gfr ] .the properties of in the pulsar s core precursor also likely reside in region iii or iv of the figure , even though the polarization pattern at this pulse location consists of a single cluster of data points .opms clearly occur everywhere else within the pulse .opms may also occur in the precursor , but one of the modes may be so strong that the other mode is never detected .however , one can not rule out the possibility that the properties of in the precursor reside within region ii of the figure .the polarization pattern in the core component of psr b0329 + 54 is much more complicated because both modes are present , but one of them reveals itself as a partial annulus .the statistical model decribed here can not completely explain this behavior .the pattern may arise from a condition that falls on the regional boundaries of figure [ fig : gfr ] , where the modes are occasionally coherent with large fluctations in ( i.e. in region i of the figure ) , thus explaining the partial annulus , but are otherwise incoherent ( i.e. 
in region iii or iv ) to account for the bimodal aspect of the polarization pattern .

a statistical model has been developed for the polarization of pulsar radio emission . the model can explain a wide variety of polarization patterns observed in the radio emission . the observations are thus consistent with the model s hypothesis that the polarization of the radiation is determined by the simultaneous interaction of two , highly polarized , orthogonal modes . the analysis of the polarization data shows that polarization signatures of physical processes can become apparent when the stokes parameters are analyzed together , instead of separately . an interpretation of the model s assumptions and its application to the observations suggest that generalized faraday rotation may be operative in pulsar magnetospheres . the model shows , in a rigorous way , that polarization measurements follow the statistics of direction .

allen , m. c. & melrose , d. b. 1982 , proc . aust . , 4 , 365
backer , d. c. & rankin , j. m. 1980 , , 42 , 143
barnard , j. j. & arons , j. 1986 , , 302 , 138
bingham , c. & mardia , k. v. 1978 , biometrika , 65 , 379
chandrasekhar , s. 1960 , radiative transfer ( new york : dover )
edwards , r. t. & stappers , b. w. 2004 , a&a , 421 , 681
fisher , n. i. , lewis , t. , & embleton , b. j. j. 1987 , statistical analysis of spherical data ( cambridge : cambridge )
kennett , m. & melrose , d. 1998 , proc . aust . , 15 , 211
lyne , a. g. , smith , f. g. , & graham , d. a. 1971 , , 153 , 337
manchester , r. n. , taylor , j. h. , & huguenin , g. r. 1975 , , 196 , 83
melrose , d. b. 1979 , aust . , 32 , 61
melrose , d. b. & macquart , j .- p . 1998 , , 505 , 921
mckinnon , m. m. 2003 , , 148 , 519
mckinnon , m. m. 2004 , , 606 , 1154
mckinnon , m. m. 2006 , , 645 , 551
mckinnon , m. m. 2009 , , 692 , 459
mckinnon , m. m. & stinebring , d. r. 1998 , , 502 , 883
mckinnon , m. m. & stinebring , d. r. 2000 , , 529 , 435
stinebring , d. r. et al . 1984 , , 55 , 247
|
radio polarimetry is a three - dimensional statistical problem . the three - dimensional aspect of the problem arises from the stokes parameters q , u , and v , which completely describe the polarization of electromagnetic radiation and conceptually define the orientation of a polarization vector in the poincar sphere . the statistical aspect of the problem arises from the random fluctuations in the source - intrinsic polarization and the instrumental noise . a simple model for the polarization of pulsar radio emission has been used to derive the three - dimensional statistics of radio polarimetry . the model is based upon the proposition that the observed polarization is due to the incoherent superposition of two , highly polarized , orthogonal modes . the directional statistics derived from the model follow the bingham - mardia and fisher family of distributions . the model assumptions are supported by the qualitative agreement between the statistics derived from it and those measured with polarization observations of the individual pulses from pulsars . the orthogonal modes are thought to be the natural modes of radio wave propagation in the pulsar magnetosphere . the intensities of the modes become statistically independent when generalized faraday rotation ( gfr ) in the magnetosphere causes the difference in their phases to be large . a stochastic version of gfr occurs when fluctuations in the phase difference are also large , and may be responsible for the more complicated polarization patterns observed in pulsar radio emission .
|
in this paper we introduce a new method for constructing randomized online algorithms , which we call the _ knowledge state _ model .the purpose of this method is the address the trade - off between memory and competitiveness .the model is introduced and fully described for the first time in this publication , but we note that a number of published algorithms are implicitly consistent with the model although not in its full power .for example , the algorithm equitable is a knowledge state algorithm for the -cache problem that achieves the optimal randomized competitiveness of for each , using only memory , as opposed to the prior algorithm , partition , that uses the full information contained in the work function , and hence requires unlimited memory as the length of the request sequence grows . at the other end of the scale ,the randomized algorithm random_slack is in fact an extremely simple knowledge state algorithm , which achieves randomized 2-competitiveness for the 2-server problem for all metric spaces , and which achieves randomized -competitiveness for the -server problem on some spaces , including trees .we also note that random_slack is _ trackless _ and is an order 1 knowledge state algorithm , _i.e. _ , its distribution is supported by only one state .( see the recent acm sigact column for a summary of tracklessness ; see also . )we also note that we have recently used the knowledege state technique to develop an optimally competitive algorithm for the caching problem in shared memory multiprocessor systems .it is still an open question , whether there exists an optimally competitive order bookmark randomized algorithm for the -cache problem .an affirmative answer to this question would settle an open problem listed in . in this paperwe describe progress on this question .we give an order 2 knowledge state algorithm which is provably -competitive .since an equivalent behavioral algorithm must keep one bookmark , " namely the address of an ejected page , it is not an improvement over our earlier result , but it does illustrate the knowledge state technique in a simple way .we then give an order 3 knowledge state algorithm which is provably -competitive , which is an improvement , in terms of memory requirements , over equitable for the case ( section [ sec : ks cache ] ) .we also consider the problem of breaking the 2-competitive barrier for the randomized competitiveness of the 2-server problem , a goal which has , as yet , been achieved only in special cases ( section [ sec : server ] ) .for the class of uniform spaces , this barrier was broken by partition .for the line , a -competitive algorithm was given by bartal _et al . _ . in this paperwe give a formal description of the knowledge state method .it is defined using the mixed model of online computation , which is described in section [ sec : online opt prob ] .this section relates the mixed model to the standard models of online computation , and explains how a behavioral algorithm can be derived from a mixed model description .section [ sec : know ] defines the knowledge state method ( in terms of the mixed model ) and shows how potentials can be used to derive the competitive ratio of a knowledge state algorithm . 
even though the concepts in section [ sec : online opt prob ] and [ sec : know ] are natural and intuitive some of the formal arguments to prove our method are somewhat involved .in section [ sec : ks cache ] the method is applied to the paging problem ; two optimally competitive algorithms are presented .we discuss ongoing experimental work for the server problem in section [ sec : server ] .we will introduce a new model of randomized online computation which is a generalization of both the classic behavioral and distributional models .we assume that we are given an online problem with states ( also called configurations ) , a fixed _start state _ , and a requests .if the current state is and a request is given , an algorithm for the problem must _ service _ the request by choosing a new state and paying a cost , which we denote .it is convenient to assume that there is a distance " function on , and it is possible to choose to move from state to state at cost at any time , given no request .we will assume that and for any states .it follows that for any states and request . formally in this paperwe refer to an online problem as an ordered triple .examples of online problems satisfying these conditions abound , such as the server problem , the cache problem , _etc._. given a _ request sequence _ , an algorithm must choose a sequence of states , the _service_. the _ cost _ of this service is defined to be . an _ offline _algorithm knows before choosing the service sequence , while an _ online _ algorithm must choose without knowledge of the future requests .we will assume that there is an optimal offline algorithm , , which computes an optimal service sequence for any given request sequence .as is customary we say that a deterministic online algorithm is _ -competitive _ for a given number if there exists a constant ( not dependent on ) such that for any request sequence .similarly , we say that a randomized online algorithm is -competitive for a given number if there exists a constant ( not dependent on ) such that for any request sequence , where denotes expected value . in order to make the description of various models of randomized online computation more precise , we introduce the following notation .let be the set of all finite distributions on .if and , we say that _ supports _ the distribution if .the _ distributional support _ ( or _ support _ " for short ) of any is defined to be the unique minimal set which supports . by an abuse of notation , if the support of is a singleton , we write .an instance of the _ transportation problem _ is a weighted directed bipartite graph with distributions on both parts .formally , an instance is an ordered quintuple where and are finite non - empty sets , is a distribution on , is a distribution on , and is a real - valued function on .a _ solution _ to this instance is a distribution on such that 1 . for all . for all .then , and is a _ minimal _solution if is minimized over all solutions , in which case we call the _ minimum transportation cost_. there are three standard models of randomized online algorithms ( see , for example ) .we introduce a new model in this paper , which we call the _mixed model_. those three standard models are : distribution of deterministic online algorithms , the behavioral model , and the distributional model .we very briefly describe the three standard models . in this model, is a random variable whose value is a deterministic online algorithm . 
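to make the transportation problem above concrete: the minimum transportation cost between two finite distributions is a small linear program and can be computed directly. the following python sketch is our own illustration (the helper name and the use of scipy's linprog are assumptions, not part of the original formulation):

```python
import numpy as np
from scipy.optimize import linprog

def min_transport_cost(mu, nu, d):
    """minimum transportation cost between distributions mu (on U) and nu (on V)
    for a cost matrix d[i][j]; returns (cost, plan), where plan[i, j] is the mass
    moved from the i-th point of U to the j-th point of V."""
    m, n = len(mu), len(nu)
    c = np.asarray(d, dtype=float).reshape(m * n)   # objective: sum_{i,j} d(i,j) * x(i,j)
    A_eq, b_eq = [], []
    for i in range(m):                              # row marginals: sum_j x(i,j) = mu[i]
        row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1.0
        A_eq.append(row); b_eq.append(mu[i])
    for j in range(n):                              # column marginals: sum_i x(i,j) = nu[j]
        col = np.zeros(m * n); col[j::n] = 1.0
        A_eq.append(col); b_eq.append(nu[j])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
    return res.fun, res.x.reshape(m, n)

# toy example: two 2-point distributions with 0/1 costs; optimal cost is 0.25
cost, plan = min_transport_cost([0.5, 0.5], [0.25, 0.75], [[0, 1], [1, 0]])
```

the returned plan, not only its cost, is useful below when a step of a randomized algorithm has to be emulated by moving probability mass between configurations.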
if the random variable has a finite distribution , we say that is _ barely random_. in this model uses randomization at each step to pick the next configuration .we assume that has memory .let be the set of all possible memory states of .we define a _full state _ of to be an ordered pair .let be the initial memory state , and let be the memory state of after servicing the first requests .then uses randomization to compute , the full state after steps , given only and . a behavioral algorithmcan then be thought of as a function on whose values are random variables in . if , let be the support of and be the support of then define to be the minimum transportation cost of the transportation problem , and if , we define to be the minimum transportation cost of the transportation problem , where .a distributional online algorithm is then defined as follows . 1 .there is a set of memory states of .there is a start memory state .2 . a _ full state _ of is a pair .the initial full state is , where .3 . for anygiven full state and request , deterministically computes a new full state , using only the inputs , , and .we write or alternatively .thus , is a function from to .4 . given any input sequence , computes a sequence of full states , following the rule that for all .define .we note that a distributional online algorithm , despite being a model for a randomized online algorithm , is in fact deterministic , in the sense that the full states are computed deterministically .the following theorem is well - known .( it is , for example , implicit in chapter 6 of . )[ thm : all models equivalent ] all three of the above models of randomized online algorithms are equivalent , in the following sense . if is an algorithm of one of the models , there exist algorithms , , of each of the other models , such that , given any request sequence , the cost ( or expected cost ) of each for is no greater than the cost ( or expected cost ) of .the _ mixed model _ of randomized algorithms is a generalization of both the behavioral model and the distributional model .a mixed online algorithm chooses a distribution at each step , but , as opposed to a distributional algorithm , which must make that choice deterministically , can use randomization to choose the distribution . a _ mixed _ online algorithm for an online problem is defined as follows . as before , let be the set of finite distributions on . 1 .there is a set of memory states of .there is a start memory state .2 . a _ full state _ of is a pair .the initial full state is , where .3 . for any given full state and request , there exists a finite set of full states and probabilities , where , such that if the current full state is and the next request is , uses randomization to compute a new full state , by selecting for some . the probability that selects each given is .we call the the _ subsequents _ and the the _ weights _ of the subsequents , for the request from the full state .+ is a function on whose values are random variables in . we can write .alternatively , we write . for fixed and ; , and can be regarded as random variables .4 . given any input sequence , computes a sequence of full states , following the rule that for all .note that , for all , , , and are random variables .computing the cost of a step of a mixed model online algorithm is somewhat tricky .we note that it might seem that would be that cost ; however , this is an overestimate . 
without loss of generality , is sensible .let and let .let be the support of .let be the subsequents and the weights of the subsequents , for the request from the full state .let be the union of the supports of the .define .note that , and its support is .define . finally ,if is the input request sequence , and the sequence of full states of is , we define .we now prove that the mixed model for randomized online algorithms is equivalent to the three standard models .[ lem : mixed yields beh ] if is a mixed online algorithm , there is a behavioral online algorithm such that , for any request sequence , .a memory state of will be a full state of , _ i.e. _ , we could write . by a slight abuse of notation ,we also define a full state of to be an ordered triple such that is a full state of and .intuitively , keeps track of its true state , while remembering the full state of an emulation of . for clarity of the proof ,we introduce more complex notation for some of the quantities defined earlier .let , , and .if is a full state of , define to be the probability that , _i.e. _ , the conditional probability that chooses to be the next full state , given that the current full state is and the request is .we assume that there can be at most finitely many choices of for which . in case not a full state of , then is defined to be zero . if is a full state of and , write , and choose a finite distribution on which is a minimal solution to the transportation problem , where . thus for ; for ; .we now formally describe the action of the behavioral algorithm .the initial full state of is .given that the full state of is and the next request is , and given any , we define , the probability that chooses the next full state to be , as follows : if , then . otherwise , let be a given request sequence .we now prove that . forany and any knowledge state of , define to be the probability that the full state of is after steps . additionally ,if , define to be the probability that the full state of is after steps . to prove the lemma we consider first the following two claims : 1 . for any , , , and , . for any , , and , .we prove claims 1 and 2 by simultaneous induction on .if , both claims are trivial by definition .now , suppose .we verify claim 1 for . by the inductive hypothesis ,claim 2 holds for .write .let . if is not a full state of or , we are doneotherwise , recall that for all , and we obtain which verifies claim 1 for . claim 2 for follows trivially . for the conclusion of the lemma , let , and let .we use claim 1 for . recall that for any full state of . then and we are done .[ thm : mixed implies all standard ] if is a mixed model online algorithm for an online problem , there exist algorithms , , and for , of each of the standard models , such that , given any request sequence , the cost ( or expected cost ) of each for is no greater than the cost ( or expected cost ) of . from lemma [ lem : mixed yields beh ] and theorem [ thm : all models equivalent ] .[ cor : mixed is comp imples comp ] if there is a -competitive mixed model online algorithm for an online problem , there is a -competitive online algorithm for for each of the three standard models of randomized online algorithms .we say that a function is _ lipschitz _ if for all . an _estimator _ is a non - negative lipschitz function .if , we say that _ supports an estimator _ if , for any there exists some such that . 
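as an illustration of the emulation argument in the lemma above, the behavioral move can be sampled from a minimal transport plan conditioned on the current configuration. this is a hedged sketch under our own conventions, reusing the min_transport_cost helper sketched earlier: configurations are assumed hashable, mu and nu are dicts mapping configurations to probabilities, the current configuration is assumed to lie in the support of mu, and d is the distance function.

```python
import random

def behavioral_move(current_state, mu, nu, d):
    """given the old distribution mu and the new target distribution nu over
    configurations, move from current_state by sampling from a minimal transport
    plan conditioned on the current configuration."""
    states_old, states_new = list(mu), list(nu)
    cost_matrix = [[d(x, y) for y in states_new] for x in states_old]
    _, plan = min_transport_cost([mu[x] for x in states_old],
                                 [nu[y] for y in states_new], cost_matrix)
    row = plan[states_old.index(current_state)]   # mass leaving current_state
    if row.sum() <= 0:                            # no mass to move; stay put
        return current_state
    probs = (row / row.sum()).tolist()            # conditional distribution over new configurations
    return random.choices(states_new, weights=probs, k=1)[0]
```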
if is supported by a finite set , then there is a unique minimal set which supports , which we call the _ estimator support _ of .( we use the term _ support _ " instead of estimator support " if the context excludes ambiguity . )we note that all estimators considered in this paper have finite support .we say that an estimator has _ zero minimum _ if .the next lemma allows us to compare estimators by examining finitely many values .[ lem : just check support ] suppose and are estimators , and is the support of .then for all if and only if for all .one direction of the proof is trivial .suppose and for all . then there exists such that .it follows that , contradiction .an example of an estimator is the _ work function _ of a request sequence . if , we write to denote the minimal cost of servicing the request sequence starting at configuration and ending at configuration . then , if is a request sequence , the _ work function _ is defined by . if is a request sequence , the _ offset function _ is defined to be , a zero minimum estimator . if is an estimator and if is a request , we define function as . we call " the _ update operator_. the following lemma allows us to compute the update in finitely many steps .[ lem : update uses only support ] if is supported by , then .trivially , . pick such that .pick such that . then and we are done .we note that it is easy to verify that is also an estimator .we briefly note the following lemma , which is well - known ( see , for example , ) .[ lem : update work ] if , let for all .then for all and for all .we use _ estimators _ and _ adjustments _ to analyze the competitiveness of an online algorithm .more specifically , the combination of estimators and adjustments allows us to estimate the optimal cost .an online algorithm does not know the optimal offline algorithm s cost at any given time , but can keep track of the estimator , and use it as a guide .the estimator is a real - valued function on configurations that is updated at every step , and which estimates the cost of the optimal offline algorithm , while the adjustment is a real number that is computed at every step .both the estimator and the adjustment may be calculated using randomization .a _ knowledge state algorithm _ is a mixed online algorithm that computes an adjustment and an estimator at each step , and uses the current estimator as its memory state .more formally , if is a knowledge - state algorithm , then : 1 . at any given step ,the full state of is a pair , where and is the current estimator .we call that pair the _ current knowledge state .if is the knowledge state and the next request is , then computes an adjustment , a number which we call , and uses randomization to pick a new knowledge state . more precisely , there are subsequent knowledge states and subsequent weights for such that 1 .[ eqn : las vegas update ] for each .2 . for each , chooses to be with probability .3 . let .define .( as defined in the previous section in terms of the transportation problem ) 3 . 
finally , if is the input request sequence , and the sequence of full states of is , where , we define if , we say that a knowledge state is _ supported as a knowledge state _ by if is supported by ( in the estimator sense ) and is supported ( distributionally ) by .note that , in this case , can be represented by the finite set of triples .we say that a knowledge state algorithm _ has finite support _ if there is a uniform bound on the cardinality of the supports of the knowledge states .this bound is also called the _ order _ of the knowledge state algorithm .we say that _ is -competitive as a knowledge state algorithm _ if there is a constant such that for any request sequence and any .[ lem : is opt ] given a request sequence , then for all let be the optimal service of that ends in .thus : . by ( [ eqn :las vegas update ] ) : for all . by definition : for all . summing the inequalities over all , and adding to the equation, we obtain the result . [lem : ks comp implies comp ] if a knowledge state algorithm is -competitive as a knowledge state algorithm , then is -competitive. let be the constant given in the definition of -competitiveness for a knowledge state algorithm .let be any request sequence , and let be the optimal service of . since is -competitive as a knowledge state algorithm : we now define a -knowledge state potential ( -ks - potential , for short ) for a given knowledge state algorithm .let be a real - valued function on knowledge states .then we say that is a _-ks - potential for _ if 1 . for any .if is the current knowledge state and is the next request , are the subsequents of that request , and are the weights of the subsequents , let . then [ thm : ks pot implies comp ] if a knowledge state algorithm has a -ks - potential , then is -competitive .the proof follows easily from the definition of a -ks - potential and lemmas [ lem : is opt ] and [ lem : ks comp implies comp ] by straightforward arguments .let be a request sequence .let be the sequence of knowledge states of given the input , where .let , a random variable for each .note that is a constant .let .note that .let be the configuration of the optimal algorithm after steps . then the first inequality above is from lemma [ lem : is opt ] .the last two inequalities are from the definition of a -ks - potential .it follows that , and , by lemma [ lem : ks comp implies comp ] , we are done .we can define a _ forgiveness _ online algorithm to be a knowledge state algorithm with the special restriction that there is always exactly one subsequent .we note that historically , forgiveness came first , so we can think of the knowledge state approach as being a generalization of forgiveness .a forgiveness algorithm can be deterministic , such as equipoise , a deterministic online 11-competitive algorithm for the 3-server problem ( that was the best known competitiveness for that problem at that time ) , or distributional , such as equitable , an -competitive distributional online algorithm for the -cache problem .( see . )we now consider the -cache problem for fixed .the -cache problem reduces to online optimization , as defined in section [ sec : online opt prob ] of this paper , as follows : 1 .there is a set of _pages_. 2 . is the set of all -tuples of distinct pages . if the configuration of an algorithm is , that means that the pages that constitute are in the cache .the initial configuration is the initial cache .4 . if , then is the cost of changing the cache from to . 
since we assume that it costs 1 to eject a page and bring in a new page , is the cardinality of the set . is simply the set of all pages .if a page is requested , it means that the algorithm must ensure that is in the cache at some point as it moves between configurations .thus , for any and any , we have to complete the reduction , we observe that the support of any configuration request pair is finite . if , that support has only one element , namely , while otherwise , it has elements , namely .we introduce a convenient notation , a modification of the bar notation of koutsoupias and papadimitriou , for offset functions for the -cache problem , which we call the _ bar notation_. let be a string consisting of at least page names and exactly bars , with the condition that at least page names are to the left of the bar .then defines an offset function as follows .let be the set of all configurations such that , for each , the names of at least members of are written to the left of the bar .let be the estimator such that is the support of , and such that for each .for example for , denotes the estimator whose support consists of just the configuration , and which takes the value zero on that configuration . for , denotes the estimator whose support consists of the configurations , , , , and , and which takes the value zero on those configurations . from , we have : [ lem : bar lemma ] a function is an offset function for the -cache problem if and only if it can be expressed using the bar notation .recall that partition ( introduced in ) is optimally competitive for the -cache problem , but uses unbounded memory to achieve the optimal competitiveness of .the memory state of partition is , in fact , the classic offset function , which , in the worst case , requires keeping track of every past request .we now show how the use of knowledge states simplifies the definition , and in fact the memory requirement , of an optimally competitive randomized algorithm for the 2-cache problem , which we call .we will follow the rule that , at each step , the adjustment is as large as possible , so that the minimum of the estimator will always be zero .this guarantees that any potential will always be non - negative .if there are infinitely many pages , has infinitely many knowledge states , but , up to symmetry , it has only two .each such knowledge state of is supported by a set of cardinality at most 2 , hence has at most three active pages , and therefore its equivalent behavioral algorithm has at most one bookmark . in the definitions given below , we say that two pages to are _ equivalent _ for a given knowledge state if they can be transposed without changing the knowledge state . 0.2 in 1 .if are pages , let . in this case , and are equivalent , _i.e. _ , .2 . if are pages , let , where denotes the distribution which is on the configuration and on the configuration .in this case and are equivalent , _i.e. _ , . we list below the action of . in each case , are distinct pages . 1 .if is the initial cache , the initial knowledge state is .if the current knowledge state is then a. [ act : k2 trivial a ] if the request is , the new knowledge state is .b. [ act : k2 ac ] if the request is , then the new knowledge state is .if the current knowledge state is then a.[ act : k2 trivial b ] if the new request is , the new knowledge state is .b. [ act : k2 bb ] if the new request is , the new knowledge state is . c. 
[ act : k2 bd ] if the new request is , then there are three subsequents , namely , , .the distribution on the subsequents is uniform , _ i.e. _ , each is chosen with probability .actions [ act : k2 trivial a ] and [ act : k2trivial b ] are requests to the first block of pages , in the sense of the bar notation .since the bar notation implies that each page in the first block can be assumed to be in the cache , such a request is ignored by any sensible online algorithm , which means , in our case , that the estimator is unchanged and the adjustment is zero .we call such requests _ trivial_. we define a potential by and . [lem : pot for k2 ] is a -ks - potential for .let be the current knowledge state and the new request .write for increase in potential in the given step. we will show that in all cases . in trivial actions , namely cases [ act : k2 trivial a ] and [ act : k2 trivial b ] , , and we are done .we first note that : by lemma [ lem : just check support ] , the last inequality need only be verified for configurations in , the support set of . in this case and is a new page , . .thus . since the algorithm must bring in a new page , and since the probability is zero that the minimum transport brings in any other page , . , and we are done . , _i.e. _ , and . recall .note that , since , as functions , on the set of all configurations . , since the probability is that the algorithm does nothing , and the probability is that it ejects and brings in . , and we are done . , _i.e. _ , and is a new page , .recall , thus . since the algorithm must bring in a new page , andsince the probability is zero that the minimum transport brings in any other page , . , and we are done .this completes the proof of all cases .we have : [ thm : k2 comp pot ] is -competitive .we note that the number of active pages , _i.e. _ , pages contained in a support configuration , is never more than three .the number three is minimal , as given by the theorem below : [ thm : at least 3 active pages ] there is no knowledge state algorithm for the 2-cache problem that is -competitive as a knowledge state algorithm , and which never has more than two active pages , _i.e. _ , no bookmarks .if a knowledge state algorithm for the 2-cache problem never has more than two active pages , then it can have no bookmarks , hence is trackless . by theorem 2 of , there is no -competitive trackless online algorithm for the 2-cache problem .we define a knowledge state algorithm which is -competitive for the 3-cache problem .recall that . up to symmetry , has six knowledge states . the number of active pages , _i.e. _ , pages contained in a support configuration , is never more than five .the knowledge states of will be defined as follows . as in the case of , we say that two pages are _ equivalent _ if they can be transposed without changing the knowledge state . 1 . for any three pages .the pages , , and are all equivalent , _i.e. _ , , _ etc . for any four pages .the pages , , and are all equivalent .3 . for any four pages .the pages and are equivalent , and and are equivalent .4 . for any five pages .the pages are equivalent . for any five pages . the pages and are equivalent , and and are equivalent . for any five pages .the pages and are equivalent , and and are equivalent .the actions are of are formally defined below . in each case , are distinct pages .we do not need to consider separate cases for requests to pages which are equivalent . 
1 .[ act : k3 initial a ] if is the initial cache , the initial knowledge state is .if the current knowledge state is then a.[ act : k3 aa ] if the new request is , the new knowledge state is .b. [ act : k3 ad ] if the new request is some page , the new knowledge state is .if the current knowledge state is then a.[ act : k3 ba ] if the new request is , the new knowledge state is .b. [ act : k3 bb ] if the new request is , the new knowledge state is . c. [ act: k3 be ] if new request is some page , the new knowledge state is .if the current knowledge state is then a.[ act : k3 ca ] if the new request is , the new knowledge state is .b. [ act : k3 cc ] if the new request is , the new knowledge state is . c. [ act :k3 ce ] if the new request is some page , the new knowledge state is .if the current knowledge state is then a.[ act : k3 da ] if the new request is , the new knowledge state is .b. [ act : k3 db ] if the new request is , the new knowledge state is c. [ act : k3 df ] if the new request is some page , then the new knowledge state is chosen uniformly from among the following ten knowledge states : , , , , , , , , , and .if the current knowledge state is then a. [ act : k3 ea ] if the new request is , the new knowledge state is .b. [ act : k3 ec ] if the new request is , the new knowledge state is . c. [ act :k3 ed ] if the new request is , the new knowledge state is .d. [ act : k3 ef ] if the new request is some page , then the new knowledge state is .. if the current knowledge state is then a. [ act : k3 fa ]if the new request is , the new knowledge state is .b. [ act : k3 fb ] if the new request is , the new knowledge state is . c. [ act :k3 fd ] if the new request is , the new knowledge state is .d. [ act : k3 ff ] if the new request is some page , the new knowledge state is chosen uniformly from among the following six knowledge states : , , , , , and .we define a potential on the knowledge states as follows : , , , , , and . [lem : pot for k3 ] is an -ks - potential for . for each action of , let be the increase in potential .we will show that in each case , the value of can be computed by simple subtraction .we need only compute the values of and for each action , after which the inequality ( [ eqn : k3 pot ] ) follows by simple arithmetic . .these actions are trivial , and thus , and we are done . .in these actions , the request is to a new page , and the probability that any other page is in the cache after the action does not increase : thus .we also know that because the remainder of the verification of ( [ eqn : k3 pot ] ) for each of those actions consists of simple arithmetic . .note that since in each case , we must keep and and eject the other two unrequested pages .the probability is that is in our cache , and that is in our cache , thus for action [ act : k3 ec ] , and for action [ act : k3 ed ] . since for both actions , we are done . .note that since for action [ act : k3 db ] , recall that the distribution of is uniform on six configurations . to compute , we describe a minimal transport between the distribution of and the distribution of . that transport is defined as follows: if the previous configuration is , , or , do nothing .if the previous configuration is , eject .if the previous configuration is , eject .if the previous configuration is , eject with probability , and eject with probability .thus , .it is a routine verification that the required distribution for is achieved . since , we have verified ( [ eqn : k3 pot ] ) for action [ act : k3db ] . 
for action[ act : k3 fb ] , recall that the distribution of is on , and is on each of , , , and .a minimal transport can be defined as follows : if is already in the cache we do nothing , while otherwise , we eject . thus , .it is a routine verification that the required distribution for is achieved .since , we have verified ( [ eqn : k3 pot ] ) for action [ act : k3 fb ] . .note that since for action [ act : k3 bb ] , recall that the distribution of is uniform on , , and .if is already in the cache we do nothing , while otherwise , we eject with probability and eject with probability . thus , .it is a routine verification that the required distribution for is achieved . since , we have verified ( [ eqn : k3 pot ] ) for action [ act : k3 bb ] . for action[ act : k3 cc ] , recall that the distribution of is uniform on and .if is already in the cache we do nothing , while otherwise , we eject . thus , .the resulting distribution is concentrated at , as required for the knowledge state . since , we have verified ( [ eqn : k3 pot ] ) for action [ act : k3 cc ] . for action[ act : k3 fd ] , recall that the distribution of is on , and is on each of , , , and .if is already in the cache we do nothing .if is in the cache , we eject . otherwise , the cache must be , in which case we eject or with equal probability .thus , .it is a routine verification that the required distribution for is achieved . since , we have verified ( [ eqn : k3 pot ] ) for action [ act : k3 fd ] . .note that since . by lemma [ lem : just check support ] , this inequality need only be verified for the configurations in the support of .whatever the initial configuration is , and are in the cache . simply eject the other page .thus , . , and we are done . .let and let we note : by lemma [ lem : just check support ] , these inequalities need only be verified for the configurations in the support of and , respectively .we thus have for [ act : k3 df ] , and for [ act : k3 ff ] . to compute ,we give minimal transportations from the distribution of , respectively , to the weighted sum of distributions of the subsequents , for each of the two cases . for action[ act : k3 df ] , whatever the initial configuration is , is in the cache .eject with probability , and eject each of the other two pages with probability each .it is a routine verification that the required distribution is achieved .thus , . , and we are done . for action[ act : k3 ff ] , the probability is that the initial configuration is . in this case , eject one of the three pages , each with probability .otherwise , the cache will contain , and either or but not both : eject with probability , and otherwise eject either or .it is a routine verification that the required distribution is achieved .thus , . , and we are done .this completes the proof of all cases .[ thm : k3 comp pot ] is -competitive .it is our hope that our technique will yield an order 2 knowledge state algorithm whose competitiveness is provably less than 2 for all metric spaces .we mention briefly progress by giving results for a class of is one step up " in complexity from the class of uniform metric spaces .we consider the class of metric spaces , which consists of all metric spaces where every distance is either 1 or 2 , and where the perimeter of every triangle is either 3 or 4 .( the classic octahedral graph , which has six points , is a member of this class , as defined by schlfli . 
)we have a computer generated order 2 knowledge state algorithm for the 2-server problem in this class : its competitiveness is .we note that we also have calculated ( through computer experimentation ) the minimum value of in the sense that no lower competitiveness for any order 2 knowledge state algorithm for can be proved using the methods described here .this value is .we briefly mention that there is an order 3 knowledge state algorithm for which has , up to equivalence , only seven knowledge states , and is -competitive .we also can prove that no randomized online algorithm for the 2-server problem for can achieve competitiveness less than .all knowledge states and probabilities in this order 3 algorithm can be described using only rational numbers .these results , as well as our results for the server problem in uniform spaces ( equivalent to the caching problem ) , indicate a natural trade - off between competitiveness and memory of online randomized algorithms .wolfgang bein , lawrence l. larmore , and rdiger reischuk .knowledge states for the caching problem in shared memory multiprocessor systems . in _ proceedings of the 7th international symposium on parallel architectures , algorithms and networks _ , pages 307312 , ieee , 2004 .don coppersmith , peter g. doyle , prabhakar raghavan , and marc snir .random walks on weighted graphs and applications to online algorithms . in _ proc .22nd symp .theory of computing ( stoc ) _ , pages 369378 .acm , 1990 .
|
we introduce the concept of knowledge states; many well-known algorithms can be viewed as knowledge state algorithms. the knowledge state approach can be used to construct competitive randomized online algorithms and to study the tradeoff between competitiveness and memory. a knowledge state simply states the conditional obligations of an adversary, by fixing a work function, and gives a distribution for the algorithm. when a knowledge state algorithm receives a request, it calculates one or more " subsequent " knowledge states, together with a probability of transition to each. the algorithm then uses randomization to select one of those subsequents to be the new knowledge state. we apply the method to the paging problem. we present optimally competitive algorithms for paging for the cases where the cache sizes are and . these algorithms use only a very limited number of bookmarks. _ keywords: _ design of algorithms; online algorithms; randomized algorithms; paging.
|
massive multiple - input and multiple - output ( mimo ) has been considered as a key technology for the next - generation cellular system , where multiple users ( mu ) are simultaneously served at the same frequency band by the base station ( bs ) equipped with a large number of antennas .the benefits brought by massive bs antennas include high energy efficiency , high spectrum efficiency , high spatial resolution , and so on . for mu massive mimo systems , downlink ( dl )transmission relies on the precocding to reduce inter - user interference ( iui ) . in timedivision duplex ( tdd ) systems where channel reciprocity holds , the channel state information ( csi ) required by dl precoding can be obtained via uplink ( ul ) pilots to avoid the significant training overhead .clearly , the ul channel estimation and the dl precoding are the foundation of the tdd massive mimo systems . when the number of the antennas approaches infinity , channel vectors corresponding to different users are spatially orthogonal and the optimal dl precoding is simply the matched filtering .however , if there are a finite number of antennas in practical systems , such an orthogonality does not hold and the massive mimo system naturally turns into a non - orthogonal multiple access ( noma ) system . therefore , more sophisticated precoding schemes are necessary .the concept of noma has been originally proposed to enhance the spectrum efficiency by allowing multiple users allocated different power levels to share the same resource block .another dominant noma category is code - domain multiplexing ( cdm ) , including multiple access low - density spreading ( lds ) cdma , sparse - code multiple access ( scma ) , multi - users shared access ( musa ) , and so on .several other multiple access schemes have also been put forward , such as pattern - division multiple access ( pdma ) , bit - division multiplexing ( bdm ) , and interleave - division multiple access ( idma ) .an earlier study of spatially non - orthogonal multiple access scheme has been presented in for conventional mimo where multiple users are served by non - orthogonal beams . like other noma systems , understanding the rationale behind the channel non - orthogonality in massive mimo will definitely benefit the system design .another critical issue of massive mimo is the practical cost associated with a large number of rf chains . as a cost - effective solution ,massive antennas with limited rf chains have attracted a great deal of attention , where the precoding is performed in a hybrid manner by combining phase shifters based analog precoding and a much smaller size digital precoding . since its first appearance under the name of antenna soft selection ,hybrid precoding has been studied extensively , as we can see from and the references therein .however , it is still unclear how to well form analog precoding , especially for low - cost phase shifters with _ finite resolution_. meanwhile , most existing hybrid precoding schemes depend on the perfect knowledge of the channel . 
however , when there are only limited rf chains available , channel estimation becomes challenging .an adaptive algorithm to estimate the hybrid mmwave channel parameters has been developed in , where the poor scattering nature of the channel is exploited and adaptive compressed sensing ( cs ) has been employed .the accuracy of the cs method is limited by the finite grid and its computational complexity is also high for the practical deployment .a beam training procedure has been provided in , which aims to search only several candidate beam pairs for fast channel tracking .although this category of schemes work well for the point - to - point scenarios , the pilot overhead is very high for the multi - user scenarios .a beam cycling method has been developed in , where channel estimation comparable to the full digital system is achieved by sweeping the beam directions over all spatial region .a priori knowledge aided hybrid channel tracking scheme has been developed in for tera - hertz beamspace massive mimo systems , which excavates a temporal variation law of the physical direction for predicting the support of the beamspace channel . however , the prior knowledge of the user is not always known in practice and the method can not be applied for the more general case .recently , an array signal processing aided channel estimation scheme has been proposed in , where the angle information of the user is exploited to simplify the channel estimation .nevertheless , the scheme is only applicable for full digital systems and can not be directly extended to hybrid systems . in this paper, we develop a new hybrid massive mimo transmitter from the angle domain perspective .we first analyze _ instantaneous _ channels in a tdd massive mimo system , where the bs is equipped with an -antenna uniform linear array ( ula ) and each user has single antenna .it is shown that the channel vectors corresponding to different users are asymptotically orthogonal as goes large when the angles of arrival ( aoas ) of users are different . using the discrete fourier transform ( dft ) ,the _ cosine _ of the aoa can be estimated with a resolution proportional to .the resolution can be further enhanced by using zero padding technique with fast fourier transform ( fft ) .we then decompose the channel matrix into an angle domain basis matrix and a corresponding channel matrix .the former can be formulated either by the orthogonal or the non - orthogonal steering vectors and the latter has the same size as rf chains .accordingly , the precoding scheme consists of either orthogonal or non - orthogonal beamforming towards users and an angle domain precoding dealing with the iui . by mapping the above beamforming and precoding matrices to the analog and the digital precoding , respectively, the proposed scheme perfectly matches the hybrid precoding with _ finite resolution _phase shifters . from the aoa - based analysis, the mu massive mimo can be viewed as an angle division multiple access ( adma ) system . as a result, a novel hybrid channel estimation scheme can be designed that significantly saves the overhead compared to the conventional beam cycling method .the rest of the paper is organized as follows : section ii investigates the channel orthogonality from the viewpoint of aoa , and then training - based aoa estimation is discussed in section iii .next , angle - domain decomposition aided hybrid precoding is proposed in section iv , followed by a novel hybrid channel estimation scheme in section v. 
simulation results are presented in section vi , and section vi concludes the paper .consider an mu massive mimo system , where the bs is equipped with a ula of elements to serve single - antenna users .for the sake of convenience , only one cell is considered here and therefore there is no pilot contamination while similar principle can be easily extended to multi - cell cases . from the well - established narrowband transmission model, the ul channel vector between the user and the bs can be expressed as where is the number of i.i.d paths , is the complex gain of the path of the user , and is the steering vector . the steering vector can be expressed as ,\end{aligned}\ ] ] where is the bs antenna spacing , is the wavelength of the carrier frequency , and ] and ^t ] for , where the angular spread ._ this happens when the bs is installed on very tall building and there are a limited number of surrounding scattering or when millimeter wave ( mmwave ) scenario is exploited . with the above assumption , s are very close to each other and s are highly correlated .hence , in ( [ eq : hkmatrix ] ) can be re - expanded more compactly by a much smaller number of steering vectors .denote \end{aligned}\ ] ] as the dft matrix , where .it is obvious that . as in fig.[aoa_narrow_as_ly ] , for any ] and is the row of the dft matrix . owing to the channel reciprocity , the dl channel is , where is an equivalent dl mimo channel whose size is .then , the corresponding dl precoding matrix is given by where and are the orthogonal beamforming matrix and the beam domain precoding matrix , respectively .because of , as long as , can be calculated from using the existing precoding schemes , for example , the well - known minimum - mean square error ( mmse ) precoding in .let the transmission power constraint be , the mmse precoder can be calculated as and then normalized by .since the beamforming vectors are orthogonal , no interference exists among formulated beams .the iui left is to be handled by the subsequent beam domain precoding matrix , which is much smaller in size .meanwhile , from ( [ precoder ] ) , it is clear that the obs - based precoding is a perfect match to the hybrid precoding since consists only phase shifters and can be implemented in the analog domain and is with much smaller size and can be implemented in the digital domain .furthermore , _ the aoa estimation resolution can be directly linked to the finite resolution of phase shifter in the analog precoding_. let be the number of available rf chains .when , a beam selection process to find the best beams for transmission is necessary . 
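a minimal numpy sketch of the obs decomposition and the resulting hybrid precoder described above; the array size, user count, path count, channel generator, and regularization below are illustrative assumptions rather than values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
M, K, P = 64, 4, 6                          # illustrative sizes: antennas, users, paths per user

def steering(theta, M, d_over_lambda=0.5):
    # ula steering vector for an aoa theta (radians), half-wavelength spacing assumed
    return np.exp(-2j * np.pi * d_over_lambda * np.arange(M) * np.cos(theta))

H = np.zeros((M, K), dtype=complex)         # uplink channel, one column per user
for k in range(K):
    center = rng.uniform(0.2, np.pi - 0.2)
    aoas = center + rng.uniform(-0.05, 0.05, P)          # narrow angular spread around the user
    gains = (rng.standard_normal(P) + 1j * rng.standard_normal(P)) / np.sqrt(2 * P)
    H[:, k] = sum(g * steering(t, M) for g, t in zip(gains, aoas))

G = np.fft.fft(H, axis=0) / np.sqrt(M)      # beam (angle) domain channel; nearly sparse per user
beams = np.argmax(np.abs(G), axis=0)        # strongest orthogonal beam per user (beam collisions ignored here)
n = np.arange(M)[:, None]
A_rf = np.exp(2j * np.pi * n * beams[None, :] / M) / np.sqrt(M)   # analog part: dft columns, phase shifters only

H_dl = H.conj().T                           # downlink channel by reciprocity (K x M)
H_eq = H_dl @ A_rf                          # reduced K x K channel seen by the digital precoder
alpha = 0.1                                 # illustrative regularization for the mmse-style digital precoder
W_bb = np.linalg.solve(H_eq.conj().T @ H_eq + alpha * np.eye(K), H_eq.conj().T)
W_bb /= np.linalg.norm(A_rf @ W_bb)         # normalize the overall precoder to the power constraint
```

the analog matrix contains only unit-modulus entries, so it can be realized with phase shifters, while the small digital matrix handles the residual iui on the reduced channel.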
in practical design , is usually fixed and a smaller one is always better in terms of the cost .it is known that the minimum number of rf chains for a -user system is , we then can limit the following discussion to due to the stringent cost requirement , for example , in a mmwave massive mimo system .below , we provide a beam selection algorithm , which basically selects the most significant beam of each user by substituting into the function .obtain and from the size of return error record to record to and record its size form the matrix from using sort the rows of in descending order of norm reorder accordingly let be the first indexes of return return the obs based scheme relies on the fixed - directional beams towards the equally spaced aoas due to the usage of the dft matrix .hence , obs has two critical drawbacks : * the aoa resolution is only ; * the orthogonal beams obtained may not necessarily point to the strongest direction of users , and thus will suffer some performance loss . if the aoa of the user is not exactly an integer times of , then the dft leakage will cause wider beam occupancy and subsequently a large . in other words , the orthogonal beamforming constraint may bring unnecessary beam spread .since we only have rf chains , that is , one beam for each user , it is desirable to improve the accuracy of the analog beamforming such that the beam domain channel gain of a single element can be maximized .an effective way is to consider non - orthogonal beams to suppress the dft leakage and narrow the beam spread . for the cs based beamforming , there is no obvious way to form non - orthogonal beams because the dictionary matrix should be predesigned . on the contrary , for angle - based beamforming , the non - orthogonal beams can be easily obtained via rotating the beam by a small angle such that the beams will point to the strongest direction of the users .for rotating obs , let us introduce a refining angle for each user .denote to be the refining angle for user , and we can also denote similar to ( [ gkapprox ] ) , we can calculate then , the optimal refining angle for the user will be determined by where is the maximum element of and the position of in is denoted as .the estimation of the optimal refining angle in ( [ bia ] ) needs a one - dimensional search over the range in ( [ iar ] ) .however , bearing in mind that is a small , we actually need a high resolution frequency " estimation via digital approach with finite grid . with the dft operation, such high resolution estimation can be performed by padding zeros at the end of channel vector .let be an integer lager than , we can calculate ,\end{aligned}\ ] ] where is the dft matrix .then , we can obtain vectors where the vector is formed by the entry of with . 
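in code, the refinement just described amounts to a zero-padded fft of each user channel; continuing the previous snippet (the padding factor and the gain comparison at the end are our own illustrative choices):

```python
Q = 8                                                    # illustrative padding factor: grid refined from 2/M to 2/(Q*M)
G_fine = np.fft.fft(H, n=Q * M, axis=0) / np.sqrt(M)     # zero-padded dft of each user channel
fine = np.argmax(np.abs(G_fine), axis=0)                 # strongest refined bin per user
A_rf_noas = np.exp(2j * np.pi * n * fine[None, :] / (Q * M)) / np.sqrt(M)   # rotated, non-orthogonal beams
H_eq_noas = H.conj().T @ A_rf_noas                       # K x K reduced channel; the ibi is left to the digital precoder

# per-user beamforming gain of the fixed orthogonal beams versus the rotated beams
gain_obs = np.abs(np.diag(A_rf.conj().T @ H))
gain_noas = np.abs(np.diag(A_rf_noas.conj().T @ H))      # typically no smaller than gain_obs
```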
from the basic sampling theory , we know which implies that the cost function in ( [ bia ] ) is evaluated for .then , one can easily obtain and by comparing the elements in , .compared to the original , the beam domain resolution improves from to , as illustrated in fig .[ beamsinc ] with for a simple case of .hence , a valid approximation to with one beam vector is written as where is the row of .in fact , steps ( [ gvm ] ) to ( [ eq : gao3 ] ) can be viewed as high - resolution aoa estimation for the user .since we have spatial samples of the user channel , the -point dft only provide angle / beam domain responses within a bandwidth " of .the zero - padding and the -point dft actually further divide the bandwidth " by to obtain angle / beam domain responses . the overall channel matrix for all users can be approximated by ,\end{aligned}\ ] ] for which the strongest beam of each user channel is chosen and is handled with only one rf chain .hence , the non - orthogonal analog beamforming vector for the user can be selected as .bearing in mind that is a diagonal matrix , the analog precoding matrix becomes ^t.\end{aligned}\ ] ] the non - orthogonal beams actually form a non - orthogonal beamspace , or strictly speaking , non - orthogonal angle space ( noas ) , pointing to the strongest direction of each user while the caused inter beam interference ( ibi ) will be handled by the digital precoding part . with the noas - based beamforming , the precoding matrix can be calculated from , where the calculation of can be significantly simplified using the result in ( [ ckn ] ) .the structure of the proposed noas - based hybrid precoding schemes is shown in fig .[ noas ] . in practice , the phase shifter with continuous variable phase is not only inaccurate but also expensive .in contrast , a phase shifter with finite phase shift is with low - cost and can be controlled precisely . in the obs - based hybrid precoding ( ) , we basically need phase shifters with a relatively - lower resolution of .the high - resolution noas - based hybrid precoding ( ) requires phase shifters with a high resolution of .therefore , only finite resolution phase shifters are required in the proposed hybrid precoding scheme and a low - cost implementation is possible .till now , there are three beamforming methods in the literatures from different space viewpoint , which are summarized as follows : 1 . as in fig .[ fig:1a ] , the eigen - space method utilizes the best eigen - directions for beamforming , .specifically , the beamforming vectors of this method do not physically formulate beams towards users but rather change the amplitude and the phase of each antennas such that the optimal signal - to - interference - noise ( sinr ) ratio ( or other criterion ) can be achieved at users .hence , the method is only valid for full digital operation .2 . as in fig .[ fig:1b ] , the angle - space method utilizes the orthogonal steering vectors for beamforming , as proposed in section iv.a and in , ( sometimes called beamspace method ) .specifically , the beamforming vectors of this method are chosen from the steering vectors with equally spaced , such that the beamforming vectors physically formulate orthogonal beams towards the fixed directions .however , the beam directions generally do not point to the exact directions of users and will suffer from power leakage , since users are randomly distributed .this method is valid both for full digital and hybrid operation .3 . 
as in fig .[ fig:1c ] , the angle - space method utilizes the non - orthogonal steering vectors for beamforming , as presented in section iv.b .specifically , the beamforming vectors of this method are also chosen from the steering vectors but the beams point to the exact directions of users .hence , the beams are non - orthogonal to each other and there will be ibi . this method is valid both for full digital and hybrid operation .the observation from the spatial and the angle / beam domains implies that in the case of single user , the ula - based massive mimo system can be viewed as an orthogonal beam division multiplexing ( obdm ) system ( with more than one rf chains for this user from orthogonal columns of dft matrix ) , which is analogous to the orthogonal frequency division multiplexing ( ofdm ) system as compared in table i. it is then clear that the existing research results of ofdm systems can be directly applied to the su massive mimo system by projecting the user channel onto the dft - based obs . .obdm versus ofdm [ cols="^,^",options="header " , ] on the other hand , for the case of multi - user multiple access , there is a fundamental difference between the massive mimo and the orthogonal frequency division multiple access ( ofdma ) systems . in ofdma systems ,the bandwidth is defined on the frequency domain , and therefore and the orthogonality among different users can be easily maintained .in contrast , the `` bandwidth '' in the massive mimo systems is defined on the angle / beam domain and depends on the users spatial location .therefore the orthogonality among different users is out of our control .consequently , an mu massive mimo system could either be an orthogonal adma system by using obs or be a non - orthogonal adma system by using noas , where the former removes ibi with the sacrifice on power leakage while the latter does reversely .it can then be imagined that when the number of the rf chains is very small , the power leakage will be the dominant issue and noas would performs better , while when the number of the rf chains is very large , the power leakage is very small and the obs may perform better .all previous discussions are based on the availability of channel matrix and the corresponding aoas . with limited rf chains ,such channel estimation is normally obtained by sequentially sending the pilot and sweeping the beam on all directions , named as beam cycling . however , from the angle domain viewpoint , it is possible to reduce the amount of beam sweeping , and thus greatly reduce the training overhead .let denote the unitary pilot matrix in the ul .the received pilot signal at the bs is where the additive white gaussian noise has been ignored for the sake of convenience .let us consider the phase shifters at the rf chains as an analog combiner represented by a matrix .then , we right - multiply the output of the rf chains by and left - multiply by to obtain a matrix where we have used the property of .we next explore the sparsity of in the angle domain to recover from .the similarity of adma to the ofdm / ofdma system inspires us that we are actually dealing a problem similar to frequency domain ofdma channel estimation .in particular , we know that the user channel is quite sparse in frequency domain even if we do not know the position. 
moreover , due to the limited number of rf chains , we only have part of its spatial ( time ) domain impulse response depending on how the rf chains are connected to antennas .the problem becomes _ ofdm sparse frequency domain channel estimation with insufficient impulse response _ . due to the orthogonal pilots , these portions of user channels have been separated ,namely , the column of can be expressed as since we do not know the aoa information of the users , we require the training to be sent twice .the key idea is that if the index of significant beams in ( [ dotgindex ] ) is known , i.e. , the aoa information , we can let the phase shifters at each row of the analog combiner act as the corresponding rows of the dft matrix to obtain , which will become a fine estimate of the angle / beam domain response of the spatial channel . in other words , each row of the analog combiner can become a _ receive beamformer _ if the aoa information is available .for the current case of rf chains , we actually need to know the index of the most significant beam ( imsb ) for each user .hence , in the first time training , we target at roughly achieving the aoa information of each user . without loss of generality , we propose the following structure for the analog combiner matrix , ,\end{aligned}\ ] ] that is , we simply connect rf chains to the first antennas .then , from the previously discussions , we can apply the dft approach to achieve aoa estimation by .\end{aligned}\ ] ] since is the dft of an deficient sampling of , does not have the same envelope as but is rather a `` fatter version '' of . hence , would have the its maximum value at a position close to that of .an example of and calculated from a normalized is given in fig .[ zp ] with , , and . from the figure , and ,though with different shape , would have the same maximum value at the same position . therefore , we can obtain an estimate of imsb from in the first place . th user ., width=340 ] + when , i.e. , the obs scheme , let be the imsb of the user .the analog combiner at the second time training is set as ^t.\end{aligned}\ ] ] with , which is exactly an obs - based receive beamformer , we can obtain in fact , can be considered as a size - reduced approximation of the beam domain equivalent mimo channel . for , i.e. , noas scheme, we can obtain an approximation of because becomes the noas - based receive beamformer .in summary , the proposed channel estimation approach includes two steps , the imsb estimation and the beam domain channel estimation ( ) , which are shown in fig . [ fig_bdoe ] and fig .[ fig_bdce ] , respectively .since the dl transmitting and the ul receiving share the same beamforming vectors , once the analog receive beamformer is fixed , the dl transmission is just the calculation using the digital precoding matrices , corresponding the obs or the noas based beamformers , respectively . for conventional beam cycling based hybrid channel estimation , it needs to sweep the beam over all antennas , which costs times of training . in comparison ,the proposed method needs two times of training , which greatly saves overhead .in this section , we will present simulation results to verify our discussion before . in our simulation, we consider a tdd massive mimo system , where the ula at the bs has antennas .there are single - antenna users uniformly located inside a semicircular cell with a radius of one kilometer , and the users angular spread is . 
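to make the two-round procedure concrete before turning to the numerical results, a hedged sketch follows; the sizes, names, and the noiseless, ideal-pilot assumptions are our own illustration of the steps described above:

```python
import numpy as np

def two_round_estimation(H, K, Q=8):
    """sketch of the two-round hybrid estimation: H is the M x K uplink channel
    (one column per user) and K is the number of rf chains (= number of users).
    orthogonal pilots are assumed to have already separated the users, and noise
    is omitted, so each round effectively observes W_rf @ h_k for every user k."""
    M = H.shape[0]
    # round 1: connect the K rf chains to the first K antennas; the zero-padded dft
    # of this short snapshot is a "fatter" spectrum whose peak roughly locates the
    # index of the most significant beam (imsb) of each user
    coarse = np.fft.fft(H[:K, :], n=Q * M, axis=0)
    imsb = np.argmax(np.abs(coarse), axis=0) // Q        # map refined bins back to the M-point grid
    # round 2: each rf chain becomes a receive beamformer, i.e. the dft row at that
    # user's imsb; this yields a K x K beam-domain channel estimate
    n = np.arange(M)
    W_rf = np.exp(-2j * np.pi * np.outer(imsb, n) / M) / np.sqrt(M)
    H_beam = W_rf @ H
    return imsb, H_beam

# usage with the H generated in the earlier sketch: imsb, H_beam = two_round_estimation(H, K)
```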
the channel fading coefficients are generated from the urban micro model in the 3gpp standard with . the path loss is given by where is the distance between a user and the bs, and . also, a bulk log-normal shadowing with a standard deviation of db is applied to all sub-paths. for the ul training, the signal-to-noise ratio ( snr ) observed at the bs is db. for the dl precoding, the bs power constraint is dbm, and the noise variance at the user side is dbm. the proposed noas-based hybrid precoding ( noas-hp ) and the obs-based hybrid precoding ( obs-hp ) are compared with the matched-filter-based maximum ratio transmission ( mrt ) precoding and the mmse precoding in terms of sum rate. the mrt and the mmse precodings are full-digital ( fd ), while the digital part in the proposed hybrid precoding schemes is an mmse precoder. the cumulative distribution function ( cdf ) of the sum rate of all users is shown in fig. [ fig : cdf ] for different and , respectively. from fig. [ cdf1 ] and fig. [ cdf2 ], it can be seen that when the angular spread is , both the proposed obs- and noas-based hybrid precoding schemes combined with the proposed channel estimation significantly outperform the mrt. the reason is that when is not infinitely large and the channel paths are not infinitely rich ( ), the real channels have very poor orthogonality. however, our angle domain approaches do not require such a property, and hence the performance is satisfactory. in particular, the noas-based hybrid precoding scheme performs very close to the high-complexity all-digital mmse precoding, because it forms more `` focusing '' beams to increase the gain of the equivalent mimo channel. for the same reason, the noas-based scheme has a sum rate bps / hz higher than that of the obs-hp. with the larger angular spread of , the results in fig. [ cdf3 ] and fig. [ cdf4 ] show that mrt is still not good, while mmse becomes noticeably better than the proposed methods. the reason is that the proposed methods assign only rf chains for users, i.e., one rf chain per user. in this case, the one-beam approximation of the whole channel becomes worse as increases. nevertheless, such performance degradation could be mitigated if more rf chains are available for transmission, say in full-digital transmission. moreover, the performance gap between the obs-hp and the noas-hp narrows down to around bps / hz, because under a wide angular spread a single rotation does not capture much more of the channel power with a single beam than the non-rotated case. hence, in terms of throughput, the proposed hybrid schemes, especially the noas one, are more suitable for narrow angular spread scenarios, such as the mmwave case. figure [ nmse ] shows the normalized mse of the proposed channel estimation for different . from the figure, with the increase of snr, both methods perform better but will meet an error floor at high snr.
in all cases ,the noas - based channel estimation can achieve better nmse performance than the obs - based estimation because it forms more `` focusing '' beams towards the strongest direction of the users by sacrificing the orthogonality among users .meanwhile , increasing also increases the number of rf chain .the reason is that we assume spatial sampling in the first around and increasing will improve the accuracy of imsb estimation , which in turn results in a better estimation performance .moreover , the performance gap between the noas - based channel estimation and the obs - based one reduces as increases , because when is large , the step one can already provide good prediction of imsb with . ., width=340 ]in this paper , we have proposed a novel hybrid transmission scheme for a mu massive mimo system from the array signal processing perspective .efficient channel estimation and aoa estimation algorithms under limited rf chains have been investigated . with the aoa information, the channel matrix can be decomposed into an angle domain basis matrix and the corresponding angle domain channel matrix .we then present two adma schemes , i.e. , the orthogonal obs and the non - orthogonal noas , to serve multiple users wth hybrid precoding .it has been shown that by pointing to the strongest direction of the user , the noas scheme can alleviate power leakage effect and performs better than the obs .from ( [ eq : hkmatrix ] ) , the orthogonality between two user channel vectors and can be measured by the correlation where .the entry of can be written as where . since for , for .therefore , is bounded as , and .similar to ( [ ckn ] ) , we have and for .direct calculation yields that and therefore .99 e. g. larsson , f. tufvesson , o. edfors , and t. l. marzetta , massive mimo for next generation wireless systems , " _ ieee commun .2 , pp . 186195 , feb . 2014 .f. rusek , d. persson , b. k. lau , e. g. larsson , t. l. marzetta , o. edfors , and f. tufvesson , scaling up mimo : opportunities and challenges with very large arrays , " _ ieee signal process . mag .4060 , jan . 2013 .l. lu , g. y. li , a. l. swindlehurst , a. ashikhmin , and r. zhang , an overview of massive mimo : benefits and challenges , " _ieee j. sel .topics signal process . _ , vol . 8 , no .5 , pp . 742758 , oct . 2014 .t. l. marzetta , noncooperative cellular wireless with unlimited numbers of base station antennas , " _ ieee trans .wireless commun ._ , vol . 9 , no . 11 , pp .35903600 , nov .2010 .y. liu , z. qin , m. elkashlan , et al ., `` enhancing the physical layer security of non - orthogonal multiple access in large - scale networks , '' to appear in _ ieee trans . on wireless commun ._ , available at arxiv preprint arxiv:1612.03901 , 2016 .r. hoshyar , f. p. wathan , and r. tafazolli , `` novel low density signature for synchronous cdma systems over awgn channel , '' _ ieee trans .signal prococessing _ , vol .4 , pp . 16161626 , apr . 2008 . m. al - imari et al . , `` uplink nonorthogonal multiple access for 5 g wireless networks , '' _ proc .wireless commun .2014 , pp . 781785 .h. nikopour and h. baligh , `` sparse code multiple access , '' _ in proc . of inter .symp . on pers ., ind . and mob .( pimrc ) _ , sept .2013 , pp . 332336 .z. yuan , g. yu , and w. li , `` multi - user shared access for 5 g , '' _ telecommun .network technology _ , vol . 5 , no .5 , pp . 2830 , may 2015. j. huang et al . , `` scalable video broadcasting using bit division multiplexing , '' _ ieee trans . broadcast .701706 , dec . 
2014 .k. kusume , g. bauch , and w. utschick , `` idma vs. cdma : analysis and comparison of two multiple access schemes , '' _ ieee trans .wireless commun .1 , pp . 7887 , jan .. m. xia , y .- c .wu , and s. aissa , non - orthogonal opportunistic beamforming : performance analysis and implementation , " _ ieee trans . wireless .4 , pp . 14241433 , apr . 2012 .a. alkhateeb , o. e. ayach , g. leus , and r. w. heath jr . , channel estimation and hybrid precoding for millimeter wave cellular systems , " _ ieee j. sel .topics signal process . _ , vol .5 , pp . 831846 , oct . 2014 .j. wang , z. lan , c .- w .pyo , t. baykas , et al . , beam codebook based beamforming protocol for multi - gbps millimeter - wave wpan systems , " _ ieee j. sel .areas commun .27 , no . 8 , pp .13901399 , oct .r. w. heath , n. g. prelcic , s. rangan , w. roh , and a. sayeed , `` an overview of signal processing techniques for millimeter wave mimo systems , '' _ ieee j. sel .topic signal process .436453 , apr .2016 .j. tsai , r.m .buehrer , b.d .woerner , the impact of aoa energy distribution on the spatial fading correlation of linear antenna array , " _ proc .ieee vtc spring _ , 2002 .h. yin , d. gesbert , m. filippou , and y. liu , a coordinated approach to channel estimation in large - scale multiple - antenna systems , " _ ieee j. sel .area commun .2 , pp . 264273 , feb . 2013 .a. adhikary , j. nam , j. -y ahn , and g. caire , joint spatial division and multiplexing : the large - scale array regime , " _ ieee trans .inf . theory _64416463 , oct .c. sun , x. gao , s. jin , m. matthaiou , z. ding , and c. xiao , beam division multiple access transmission for massive mimo communications , " _ ieee trans .63 , no . 6 , pp. 21702184 , june 2015 .x. gao , o. edfors , f. tufvesson , and e. g. larsson , `` massive mimo in real propagation environments : do all antennas contribute equally ? '' _ ieee trans .39173928 , july 2015 .m. joham , k. kusume , m. h. gzara , w. utschick , and j. a. nossek , transmit wiener filter for the downlink of tddds - cdma systems , " _ieee 7th international symposium on spread spectrum techniques and applications _ , 2002 , pp .913 . c. b. peel , b. m. hochwald , and a. l. swindlehurst , a vector - perturbation techniquefor near - capacity multiantenna multiuser communcations - part i : channel inversion and regularization , " _ ieee trans .1 , pp . 195202 , jan .
|
this paper introduces a new view of multi-user (mu) hybrid massive multiple-input and multiple-output (mimo) systems from an array signal processing perspective. we analyze a time division duplex massive mimo system where the base station (bs) is equipped with a uniform linear array and each user has a single antenna, and show that the _instantaneous_ channel vectors corresponding to different users are asymptotically orthogonal as the number of antennas at the bs grows large, provided the angles of arrival (aoas) of the users are different. applying the discrete fourier transform (dft), the _cosine_ of the aoa can be estimated with a resolution inversely proportional to the number of antennas at the bs, and this resolution can be enhanced via a zero-padding technique with the fast fourier transform (fft). we then decompose the channel matrix into an angle domain basis matrix and the corresponding angle domain channel matrix. the former can be formulated by steering vectors, and the latter has the same size as the number of rf chains, which perfectly matches the structure of hybrid precoding. hence, the mu massive mimo system with the proposed hybrid precoding can be viewed as angle division multiple access (adma), either orthogonal or non-orthogonal, serving multiple users simultaneously in the same frequency band. based on this view of hybrid massive mimo, a novel hybrid channel estimation scheme is designed that saves considerable training overhead compared to the conventional beam cycling method. finally, the performance of the proposed scheme is validated by computer simulation results. massive mimo, angle division multiple access (adma), angle of arrival (aoa), channel estimation, hybrid precoding.
|
@ @ plus .1pt 40004000 = 1000 = biblabel#1 bcite#1#2(#1 , # 2 ) pcite#1#2#1 , # 2 citefmta#1#2#1 ( # 2 ) citefmtb#1#2#1 # 2 = citefmta citex[#1]#2 citeaciteforciteb:=#2#1 c pthree dimensional patterns formed by the spatial distribution of galaxies in the universe have already been described and quantified by various methods : correlation functions , counts in cells , the void probability function , the genus , the multifractal spectrum , skewness and kurtosis , and minkowski functionals ( , ) .some of these descriptors are complementary and suggest a physical interpretation of cosmic patterns by emphasising different spatial features of the galaxy distribution .the treatment of the galaxy distribution as a realization of a spatial point process promises useful insights through the application of methods from the field of spatial statistics .the forthcoming three dimensional galaxy catalogues with more than half a million redshifts additionally motivate the development of new statistical techniques . in this articlewe want to reinforce a morphological measure for the study of the distribution of galaxies , the , which has recently been introduced into the field of spatial statistics by and is related to the nearest neighbor distribution and the spherical contact distribution . indeed , the is equal to the first conditional correlation function ( , ) , and was used by to test a hierarchical ansatz for correlation functions .we will focus on different features of the showing its discriminative power as a measure of the strength of clustering .our article is organised as follows : in sect .[ sect : nearest ] we present the distribution functions and and show how the function is constructed .a matrn cluster process is considered as a simple example of a clustering point distribution . in sect .[ sect : galaxies ] we study the clustering properties of a galaxy sample and of galaxies in groups extracted from the perseus pisces redshift survey ( pps ) .we compare the observed galaxy distribution with mock samples extracted from a mixed dark matter ( mdm ) simulation in sect .[ sect : nbody ] .we summarise and conclude in sect .[ sect : outlook ] .in the theory of spatial point processes the distribution of a point s distance to its nearest neighbor is a common tool for the analysis of point patterns .we consider the redshift space coordinates of galaxies inside a region as a realization of the point process describing the spatial distribution of galaxies in the universe .the nearest neighbor distribution is the distribution function of the distance of a point of the process to the nearest other point of the process .similarly , the spherical contact distribution is the distribution function of the distance of an arbitrary point in to the nearest point of the process . is equal to the volume fraction occupied by the set of all points in which are closer than to a point of the process .hence , coincides with the volume density of the first minkowski functional ( and ) and is related to the void probability function via . for a homogeneous poisson processwe have where is the number density . boundary corrected estimators for both the nearest neighbor distribution and the spherical contact distribution used in our studies are provided by minus ( reduced sample ) estimators ( , also detailed in ) . 
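as a rough illustration of how these quantities can be estimated in practice, the python sketch below computes minus-sampling (border-corrected) estimates of g, f and j for a point pattern in a cubic box; the rectangular window and the use of scipy's kd-tree are simplifying assumptions made for the example, since the actual survey volume is a wedge.

```python
import numpy as np
from scipy.spatial import cKDTree

def j_function(points, box, radii, n_test=20000, rng=None):
    """Minus-sampling (border-corrected) estimates of G, F and J = (1-G)/(1-F)
    for a 3-d point pattern observed in the box [0, box]^3."""
    rng = np.random.default_rng(rng)
    tree = cKDTree(points)
    d_nn = tree.query(points, k=2)[0][:, 1]          # nearest-neighbour distances
    test = rng.uniform(0.0, box, size=(n_test, 3))   # random test locations for F
    d_es = tree.query(test, k=1)[0]                  # empty-space distances
    b_pts = np.minimum(points, box - points).min(axis=1)   # distance to window boundary
    b_tst = np.minimum(test, box - test).min(axis=1)
    G, F, J = [], [], []
    for r in radii:
        ok_p = b_pts >= r                            # minus sampling: drop anything within r of the edge
        ok_t = b_tst >= r
        g = (d_nn[ok_p] <= r).mean() if ok_p.any() else np.nan
        f = (d_es[ok_t] <= r).mean() if ok_t.any() else np.nan
        G.append(g); F.append(f)
        J.append((1.0 - g) / (1.0 - f) if f < 1.0 else np.nan)
    return np.array(G), np.array(F), np.array(J)

# sanity check: a binomial (Poisson-like) pattern should give J close to 1 at all radii
pts = np.random.default_rng(1).uniform(0.0, 60.0, size=(800, 3))
print(j_function(pts, 60.0, np.linspace(0.5, 5.0, 10))[2])
```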
in a recent paper , have suggested to use the quotient for characterising a point process ; in that way the surroundings of a point belonging to the process and the neighborhood of a random point are compared .they consider several point process models and provide limits and exact results on ( see also section [ sect : matern ] ) .if the process under consideration is clustered , an arbitrary point usually lies farther away from a point of the process than in the case of a poisson process .hence , clustering is indicated by .consistently , , since clustered points tend to lie closer to their nearest neighbors than randomly distributed points .so , for a clustered point distribution , . in case of anti correlated , `` regular '' structures the situation is the opposite : on average a point of a regular process is farther away from the nearest other point of the process , so , and a random point is closer to a point of the process , resulting in .therefore , regular structures are indicated by . for a homogeneous poisson processwe obtain , separating regular from clustering structures . before attempting to apply to galaxy samples ,we want to test it on a model with non trivial yet analytically tractable behaviour of . in order to describe the clustering of galaxies , suggested a class of point processes that was subsequently named after them .we concentrate on a subclass called matrn cluster processes .they are constructed by first distributing uniformly cluster centres . around each cluster centre , which is itself not included in the final point distribution , galaxies are placed randomly within a sphere of radius , where is a poisson distributed random variable with mean . in figure[ fig : matern ] we show a sketch of such a process .note that overlapping clusters are allowed. for a matrn cluster process , proved that is monotonically decreasing from 1 at and attains a constant value for , where is the radius of a cluster .this constant value can be interpreted as a relic of the uniform distribution of the cluster centres . in three dimensions where denotes the ratio of the volume of the intersection of two balls to the volume of a single ball . here is a ball of radius centred at the point , while is a ball of radius centred at the origin .this quantity can be calculated from basic geometric considerations , both in two and in three dimensions , where the result is with and in figure [ fig : jmatern ] we show denotes the value of the hubble parameter measured in units of . ] for and several values of ; this represents typical situations of galaxy clustering .obviously discriminates between the varying richness classes of the matrn cluster processes .in this section we want to go one step further by applying to catalogues of galaxies and groups of galaxies , and compare them with a matrn cluster process .the pps database was compiled in the last decade ( , ) .the full redshift survey is magnitude limited down to a zwicky magnitude of , and at least 95% complete to ( see figure 1 in ) .we extract a volume limited subsample with and radius , confined to and , i.e. a solid angle of .redshifts are corrected for the motion of the sun relative to the rest frame of the cosmic microwave background ( cmb ) as in , and we also correct zwicky magnitudes for interstellar extinction as in . the final volume limited sample pps79 contains 817 galaxies . 
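a matérn cluster pattern of the kind just described is easy to simulate, which is useful for checking an estimator of j against the analytic expectation above. the python sketch below is minimal; the box size, number of centres, mean occupancy and cluster radius are illustrative values only (the text's own parameter values are not reproduced here), and no special treatment is applied at the box boundary.

```python
import numpy as np

def matern_cluster(box, n_centres, mu, radius, rng=None):
    """Matérn cluster process in [0, box]^3: uniformly placed cluster centres (not part of
    the pattern), a Poisson(mu) number of daughters per centre, each placed uniformly at
    random inside a ball of the given radius around its centre. Overlapping clusters are
    allowed and nothing is wrapped or clipped at the box boundary."""
    rng = np.random.default_rng(rng)
    centres = rng.uniform(0.0, box, size=(n_centres, 3))
    pts = []
    for c in centres:
        n = rng.poisson(mu)
        if n == 0:
            continue
        v = rng.normal(size=(n, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)          # random directions
        r = radius * rng.uniform(size=(n, 1)) ** (1.0 / 3.0)   # uniform-in-ball radii
        pts.append(c + r * v)
    return np.vstack(pts) if pts else np.empty((0, 3))

# illustrative parameters only, roughly in the spirit of the grouped-galaxy comparison
pattern = matern_cluster(box=60.0, n_centres=50, mu=5, radius=1.0, rng=0)
print(pattern.shape)
```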
to find groups , we use the redshift space friends of friends algorithm of , suitably adapted to our case .it is a truncated percolation algorithm with two independent linking parameters and .briefly , two galaxies are `` friends '' if their transverse and radial separations and satisfy and , respectively .friendship is transitive , and a set of _ three _ or more friends is called a loose group of galaxies . usually , loose groups are identified in magnitude limited samples . here, we consider only volume limited samples .values of and give very good agreement of global properties ( e.g. the total fraction of galaxies in groups , the ratio of groups to galaxies , or the median velocity dispersion ) between our volume limited group catalogue and the magnitude limited catalogue constructed by .the final sample contains 230 galaxies in 48 loose groups .a typical group has 5 observed members , a `` virial mass '' of some , and an observed luminosity of some .both its radius and its inter member pairwise separation are around , and the line of sight velocity dispersion amounts to roughly , so the groups appear thin and elongated in redshift space .we calculated for all galaxies from the pps79 sample ; the results are shown in figure [ fig : j_pps_gal ] . with lying outside the area occupied by realizations of a poisson process , one can clearly see that galaxies are strongly clustered not a particularly surprising result . in sect .[ sect : pps - simulation ] , somewhat more interesting comparisons with galaxy mock samples extracted from simulations are performed .figure [ fig : j_pps_gr ] displays the results for grouped galaxies . since each group contains at least three members , the nearest neighbour of a grouped galaxy is certainly found within the largest link length used in the friends of friends procedure .hence we observe and subsequently for in the grouped galaxy sample . is in general _ not _ invariant under changes of the number density . to compare the for grouped galaxies with the for all galaxies , we subsample the denser pps79. is calculated from 50 subsamples of 230 galaxies randomly selected from the whole pps79 sample . with measure the strength of clustering , which is emphasised when we consider galaxies in groups only , and is less pronounced when we look at the whole sample with field galaxies included .similarly the value of for the sub sampled pps79 is higher than for the whole pps79 , because random sub sampling ( thinning ) tends to increase towards the poisson value .the centers of loose groups show a strong correlation themselves , therefore a matrn cluster process can only serve as a rough approximation to the true distribution of galaxies in groups . 
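the structure of the group finder can be sketched as follows. the catalogue itself uses the two-parameter redshift-space friends-of-friends algorithm referred to above, whose detailed linking-length scalings are not reproduced here; the python sketch below instead applies a fixed projected linking length and a fixed line-of-sight velocity link with a simple union-find, which captures the logic of the procedure on a small sample.

```python
import numpy as np

def friends_of_friends(ra_deg, dec_deg, cz, d_link, v_link, min_members=3, H0=100.0):
    """Simplified redshift-space friends-of-friends group finder: two galaxies are friends
    if their projected separation (at the mean of their Hubble-flow distances) is at most
    d_link and their line-of-sight velocity difference is at most v_link; loose groups are
    connected sets of at least min_members friends. O(n^2) pair search, union-find merging."""
    cz = np.asarray(cz, dtype=float)
    n = len(cz)
    dist = cz / H0                                   # h^-1 Mpc for H0 = 100 km/s/Mpc
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    u = np.column_stack([np.cos(dec) * np.cos(ra),
                         np.cos(dec) * np.sin(ra),
                         np.sin(dec)])               # unit vectors on the sky

    parent = np.arange(n)
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if abs(cz[i] - cz[j]) > v_link:
                continue
            cosang = np.clip(u[i] @ u[j], -1.0, 1.0)
            d_perp = 0.5 * (dist[i] + dist[j]) * np.arccos(cosang)
            if d_perp <= d_link:
                parent[find(i)] = find(j)

    roots = np.array([find(i) for i in range(n)])
    groups = [np.where(roots == r)[0] for r in np.unique(roots)]
    return [g for g in groups if len(g) >= min_members]
```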
despite this, a matrn cluster process with galaxies per group ( cluster ) and a group radius of shows a comparable to the obtained from the galaxies in groups , where in the mean 4.8 galaxies reside in a group ( see fig .[ fig : jmatern ] ) .we see a low , almost constant value of for .this suggests that we are indeed looking at highly clustered galaxies with small contamination by `` field '' galaxies .the of a matrn cluster process gets constant for radii twice as large as the cluster radius .already , express their hope to deduce a cluster scale in a point distribution from .however , this must be taken with extreme caution .as can be seen from fig .[ fig : jmatern ] we may be fooled by a factor of three by the fluctuations in the estimated .the uncertainty becomes even worse when we consider certain cox processes , where decreases strictly monotonically towards a constant value , and in principle no scale can be deduced from the comparison with the oversimplified matrn cluster process .either we have to restrict ourselves to qualitative statements , or come up with more refined and realistic models .the preceding section showed that the qualitative features of the galaxy distribution are well described by the . in this sectionwe explicate that the is also suitable for a quantitative comparison , and allows us to constrain cosmological models .we extract 64 mock pps catalogues from a cosmological n body simulation of a mixed dark matter ( mdm ) model .we consider a mdm model with one species of massive neutrinos , dimensionless hubble parameter and density parameters , for cold and hot dark matter , respectively .the analytical expressions for the mdm power spectra was taken from .the initial was normalised to the cobe 4-yr data , giving a corresponding value of for the r.m.s .mass fluctuation in an sphere .the simulation was run from an initial expansion factor down to using a p m code with particles of mass , on a cubic grid of cells , with a force softening radius , in a box of side .the integration was performed in comoving coordinates using as time variable for a total of 225 steps .we identify `` galaxies '' in our simulation with a method similar to the one discussed by : first , we associate with each particle a number of galaxy scale peaks calculated from the initial density contrast field . in the peak background split approximation ( , , ) is the number of galaxy peaks with height , where denotes the field smoothed with a gaussian kernel of width , and gives the smoothed field s variance .the field is subject to the constraint that it takes the value when smoothed on a scale ( see for more details ) .choosing , at the particle two point correlation function , weighted according to , matches in slope and amplitude the galaxy two point correlation function . for the adopted parameters ,the total number of peaks in the box is .then , we select the particle as a galaxy if , where is a uniformly distributed random variable , and is a constant of proportionality . the latter is set by the requirement that the mean number density of `` galaxies '' in the box matches the mean density of galaxies expected from the schechter luminosity function with , , appropriate for pps (; ) .this monte carlo procedure makes the implicit assumption that the higher the peak , the more luminous the associated galaxy .the mock pps catalogues were built as follows .the simulation cube was divided into sub cubes of side length . 
within each sub cube we fit a pps like wedge of radius .space coordinates , , were assigned to all the `` galaxies '' of the sub cubes .finally , we kept only the `` galaxies '' within the redshift space boundaries of the mock pps catalogues .although we are looking at a large volume with a depth 79 and a solid angle of , we observe large fluctuations of 25% in the number of points per mock sample ( fig .[ fig : counts_histogram ] ) .this is consistent with the large scale fluctuations of the clustering properties of iras galaxies , as found by out to scales of 200 , and expresses cosmic variance in agreement with expected sample to sample variations .as we will see , this slightly complicates the analysis . at firstwe investigate the mock samples selected in redshift space .if we use all the 64 mock samples we are dominated by the fluctuations between samples with a different number density ( see fig . [ fig : red - all - simul ] ) . therefore , we restrict ourselves to mock samples with approximately the same number of points as in the observed galaxy sample : , with . for only six samplesenter , whereas for we already have seventeen mock samples to analyse .the mean value of hardly changes between samples with different .obviously , samples with low density tend to be centred on voids , and high density samples typically include large , coma like clusters .so large fluctuations in the number density lead to large fluctuations in the clustering properties measured by but cancel in the mean .these fluctuations decrease for smaller ( see fig .[ fig : red - all - simul ] ; this was confirmed by inspecting samples with and ) . in order to look at structures comparable to the pps sample we consider mock samples with a similar number density as in the observed galaxy sample , and do not subsample the mock samples with high number density . in fig .[ fig : real - red - data ] the results of the mock samples in real and redshift space are compared .samples selected in redshift space show a weaker clustering than mock samples selected in real space on small scales out to at least , as can be deduced from the higher . this can be traced back to redshift space distortions .the peculiar motions act by erasing small scale clustering ; therefore the j value of redshift - space samples is larger ( less clustering ) than that of real - space samples .this effect changes at a given distance ( 2 ) .the same effect was found by in volume limited subsamples , extracted from cfa - i , by means of the two point correlation function . in fig .[ fig : redshift - data ] the results of the mock samples in redshift space are compared with the results of the observed galaxy distribution in the pps .the mock samples show insufficient clustering on small scales out to at least , as can be deduced from the higher .this is probably due to the high velocity dispersion in mdm models . in real space , which is_ not _ directly comparable with the pps data , the mock samples reproduce the clustering on small scales out to 1 , but again show not enough clustering , even though they become marginally consistent with the observed galaxy distribution on larger scales .we have to conclude that this mdm simulation is unable to reproduce the observed strong clustering of galaxies on small scales .of course this result depends on our method of galaxy identification .a different biasing prescription might change this . 
on large scalesa definitive answer is not possible , since for larger than 6 an estimation of becomes unreliable ; the empirical and approach unity , and the quotient is ill defined .we have highlighted promising properties of the global morphological descriptor .it connects the distribution functions and and , hence , incorporates all orders of correlation functions . measures the strength of clustering in a point process and distinguishes between correlated and anti correlated patterns .the example of a matrn cluster process illustrates that sensitively depends on the richness of the clusters or groups .since is built from cumulated distribution functions , we do not encounter spurious results due to binning .this becomes particularly important on small scales .the application of the -function to galaxies in a volume limited sample and to a sample of galaxies in loose groups clearly showed the stronger clustering of galaxies in groups . in a comparison with a matrn cluster we found that internal properties , like the richness of loose groups , are satisfactorily modelled .however , for the large scale distribution of galaxies , the matrn cluster process clearly is an over simplification .we used the -function for a comparison of the observed galaxy distribution with galaxy mock samples .although the mock samples extracted from a mdm simulation cover a large volume , we detected large fluctuations of the order of 25% in the number of points per sample .on small scales , out to 1 , the clustering in real space is as strong as in the observed galaxy distribution , but the comparable redshift space mock samples show too weak clustering . on larger scales from 26both real and redshift space mock samples show too weak clustering .hence , this mdm simulation is not able to reproduce the observed strong clustering of the galaxies on small scales .the function has proved to achieve comparable discriminative power as the minkowski functionals , and is most suitable for addressing the question of `` regularity '' on large scales as demonstrated in an analysis of the distribution of superclusters . in this articlewe have shown that the function is a useful tool for quantifying the clustering of galaxies on small scales and is capable of constraining cosmological models of structure formation .it is a pleasure to thank adrian baddeley , bryan scott , claus beisbart and herbert wagner for useful discussions and comments .we thank simon d.m .white for pointing out the relation to the conditional correlation function .this work was partially supported by the ec network of the program human capital and mobility no . chrx - ct93 - 0129 , the accin integrada hispano alemana ha-188a ( mec ) , the sonderforschungsbereich sfb 375 fr astroteilchenphysik der deutschen forschungsgemeinschaft , and the spanish dges ( project number pb96 - 0797 ) .
|
we present the j function as a morphological descriptor for point patterns formed by the distribution of galaxies in the universe. this function was recently introduced in the field of spatial statistics, and is based on the nearest neighbor distribution and the void probability function. the descriptor makes it possible to distinguish clustered (i.e. correlated) from `` regular '' (i.e. anti-correlated) point distributions. we outline the theoretical foundations of the method, perform tests with a matérn cluster process as an idealised model of galaxy clustering, and apply the descriptor to galaxies and loose groups in the perseus pisces survey. a comparison with mock samples extracted from a mixed dark matter simulation shows that the descriptor can be profitably used to constrain (in this case reject) viable models of cosmic structure formation.
|
cooperative control of multi - agent systems constitutes a highly active area of research during the last two decades .typical objectives are the consensus problem , which is concerned with finding a protocol that achieves convergence to a common value , reference tracking and formation control .a common feature in the approach to these problems is the design of decentralized control laws in order to achieve a global goal . in the case of mobile robot networks with limited sensing and communication ranges ,connectivity maintenance plays a fundamental role .in particular , it is required to constrain the control input in such a way that the network topology remains connected during the evolution of the system .for instance , in the rendezvous and formation control problems are studied while preserving connectivity , whereas in swarm aggregation is achieved by means of a control scheme that guarantees both connectivity and collision avoidance . in our approachwe provide a control law for each agent comprising of a decentralized feedback component and a free input term , which ensures connectivity maintenance , for all possible free input signals up to a certain bound of magnitude .the motivation for this approach comes from distributed control and coordination of multi - agent systems with locally assigned linear temporal logic ( ltl ) specifications . in particular , by virtue of the invariance and robust connectivity maintenance properties , it is possible to define well posed decentralized abstractions for the multi - agent system which can be exploited for motion planning .the latter problem has been studied in our recent work for the single integrator dynamics case . in this work ,we design a bounded control law which results in network connectivity of the system for all future times provided that the initial relative distances of interconnected agents and the free input terms satisfy appropriate bounds .furthermore , in the case of a spherical domain , it is shown that adding an extra repulsive vector field near the boundary of the domain can also guarantee invariance of the solutions and simultaneously maintain the robust connectivity property .the latter framework enables the construction of finite abstractions for the single integrator case .the rest of the report is organized as follows .section 2 introduces basic notation and preliminaries . in section 3 ,results on robust connectivity maintenance are provided and explicit controllers which establish this property are designed . in section 4 ,the corresponding controllers are appropriately modified , in order to additionally guarantee invariance of the solution for the case of a spherical domain .we summarize the results and discuss possible extensions in section 5 .we use the notation for the euclidean norm of a vector . for a matrix we use the notation for the induced euclidean matrix norm and for its transpose . for two vectors we denote their inner product by .given a subset of , we denote by , and its closure , interior and boundary , respectively , where .for , we denote by the closed ball with center and radius . given a vector define the component operators , .likewise , for a vector we define the component operators , . consider a multi - agent system with agents .for each agent we use the notation for the set of its neighbors and for its cardinality .we also consider an ordering of the agent s neighbors which we denote by . 
stands for the undirected network s edge set and iff .the network graph is connected if for each there exists a finite sequence with , and , for all .consider an arbitrary orientation of the network graph , which assigns to each edge precisely one of the ordered pairs or .when selecting the pair we say that is the tail and is the head of edge . by considering a numbering of the graph s edge set we define the incidence matrix corresponding to the particular orientation as follows : the graph laplacian is the positive semidefinite symmetric matrix . if we denote by the vector , then .let be the ordered eigenvalues of .then each corresponding set of eigenvectors is orthogonal and iff is connected .we focus on single integrator multi - agent systems with dynamics we aim at designing decentralized control laws of the form which ensure that appropriate apriori bounds on the initial relative distances of interconnected agents guarantee network connectivity for all future times , for all free inputs bounded by certain constant .in particular , we assume that the network graph is connected as long as the maximum distance between two interconnected agents does not exceed a given positive constant .in addition , we make the following connectivity hypothesis for the initial states of the agents . * ( ich ) * we assume that the agents communication graph is initially connected and that we proceed by defining certain mappings which we exploit in order to design the control law and prove that network connectivity is maintained .let be a continuous function satisfying the following property . *( p ) * is increasing and .also , consider the integral for each pair with we define the potential function as notice that .furthermore , it can be shown that is continuously differentiable and that where stands for the derivative with respect to the -coordinates .notice that we are only interested in the values of the mappings and in the interval ] be a lipschitz continuous function that satisfies we define the vector field as with as given above and appropriate positive constants , which serve as design parameters .then , it follows from , and the lipschitz property for that the vector field is lipschitz continuous on .having defined the mappings for the extra term in the dynamics of the modified controller which will guarantee the desired invariance property , we now state our main result .[ invariance : result ] for the multi - agent system , assume that , for certain and that ( ich ) is fulfilled .furthermore , let , and as defined by and , respectively and assume that the initial states of all agents lie in .then , there exists a control law ( with free inputs ) which guarantees both connectivity and invariance of for the solution of the system for all future times and is defined as with given in and certain satisfying property ( p ) .we choose the same positive constant in both and and select the constant in greater that 1 .then the connectivity - invariance result is valid provided that the parameters , and the function satisfy the restrictions , and the input terms , satisfy .we break the proof in two steps . in the first step ,we show that as long as the invariance assumption is satisfied , namely , the solution of the closed loop system - is defined and remains in , network connectivity is maintained . 
in the second step , we show that for all times where the solution is defined , it remains inside a compact subset of , which implies that the solution is defined and remains in for all future times , thus providing the desired invariance property .* step 1 : proof of network connectivity . * + the proof of this step is based on an appropriate modification of the corresponding proof of proposition [ connectivity : maintainance ] . in particular , we exploit the energy function as given by and show that when , namely , when the maximum distance between two agents exceeds then its derivative along the solutions of the closed loop system is negative .thus by using the same arguments with those in proof of proposition [ connectivity : maintainance ] we can deduce that the system remains connected .indeed , by evaluating the derivative of along the solutions of - we obtain by taking into account and using precisely the same arguments with those in proof of steps 1 and 2 of proposition [ connectivity : maintainance ] it suffices to show that the first term of inequality , which by virtue of is equal to is nonpositive for all .given the partition , of , we consider for each agent the partition , of its neighbors set , corresponding to its neighbors that belong to and , respectively .also , we denote by the set of edges with both . then , by taking into account that due to , for , it follows that \label{lyapunov : function : derivative : extra : terms}\end{aligned}\ ] ] in order to prove that both terms in are less than or equal to zero and hence derive our desired result on the sign of , we exploit the following facts .+ * fact v. * consider the vectors with the following properties : then for every quadruple satisfying it holds where we provide the proof of fact v in the appendix . +* fact vi . * for any with , and it holds the proof of fact vi is based on the elementary properties and and .hence we have that we are now in position to show that both terms in the right hand side of are nonpositive , which according to our previous discussion establishes the desired connectivity maintenance result .* proof of the fact that the first term in is nonpositive .* for each in the first term in we get by applying fact vi with and that and hence that the first term is nonpositive .* proof of the fact that the second term in is nonpositive .* we exploit fact v in order to prove that for each the quantity in the second term of is nonpositive as well . notice that both and without loss of generality we may assume that latexmath:[\[\label{distance : xixj : boundary } namely , that is farther from the boundary of than .then by setting with and it follows from and that and from , , , , and that furthermore , we get from and that . thus , it follows from , , and application of fact vi with , and that and similarly that it follows that all requirements of fact v are fulfilled .furthermore , by taking into account - , we get that thus we establish by virtue of , , , , , , and that as desired . * step 2 : proof of forward invariance of with respect to the solution of - . 
* + we proceed by proving that the control law also guarantees the desired invariance property for the solutions of system - , provided that the input terms , satisfy .let be the maximal forward interval for which the solution of - with exists and remains inside .we claim that for all the solution remains inside with and where and are given in the statement of the proposition and , respectively .then , it also follows from the fact that remains in the compact subset of for all , that , which provides the desired result . in order to prove our claim , we need to define certain auxiliary mappings . for each define the functions and where denotes the distance of agent from at time and is the maximum over those distances for all agents .hence , for all and all $ ] we have the following equivalences and for all that \ ] ] notice that the functions , and are continuous and due to our hypothesis that , satisfy we claim that with as given in . indeed , suppose on the contrary that there exists such that and define :m(t)\ge\frac{1}{2}\left(\tilde{\varepsilon}+\frac{\varepsilon}{\tilde{c}}\right),\forall t\in[\tilde{\tau},t]\right\rbrace\ ] ] then it follows from that is well defined and from , and the continuity of that and that there exists a sequence with from , , and the infinite pigeonhole principle we deduce that there exists and a subsequence of such that thus , it follows by virtue of and that set and notice that due to it holds , .the latter implies that on the other hand , we have that \end{aligned}\ ] ] by taking into account and we get that and hence from , and that for certain .then we get from , and the fact that that & \le x_{i}(\tau)^t g(x_{i}(\tau))+|x_{i}(\tau)||v_{i}(\tau)| \nonumber \\ & = -|x_{i}(\tau)||g(x_{i}(\tau))|+|x_{i}(\tau)||v_{i}(\tau)| \nonumber \\ & = -|x_{i}(\tau)|(|g(x_{i}(\tau))|-|v_{i}(\tau)|)<0\end{aligned}\ ] ] furthermore , we have from and that for all and from that .thus , it follows from fact vi that from and , we obtain that is negative , which contradicts .hence , holds , which implies that remains in the compact subset of for all .thus , and we conclude that the solution of the system remains in for all .we have provided a distributed control scheme which guarantees connectivity of a multi - agent network governed by single integrator dynamics .the corresponding control law is robust with respect to additional free input terms which can further be exploited for motion planning . for the case of a spherical domain , adding a repulsive vector field near the boundary ensures that the agents remain inside the domain for all future times .the latter framework is motivated by the fact that it allows us to abstract the behaviour of the system through a finite transition system and exploit formal method tools for high level planning .further research directions include the generalization of the invariance result of section [ invariance : analysis ] for the case where the domain is convex and has smooth boundary and the improvement of the bound on the free input terms , by allowing the bound to be state dependent .in the appendix , we provide the proofs of facts i , ii , iii and iv which were used in proof of proposition [ connectivity : maintainance ] and of fact v , in proof of proposition [ invariance : result ] . 
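before the appendix, a short simulation sketch may help illustrate the mechanism behind the connectivity-maintaining law. since the explicit potential and weight functions of the report are not reproduced here, the python sketch uses a generic edge-tension weight that grows as an edge length approaches the connectivity radius, together with a small bounded free input; it is an illustration of the mechanism, not the exact controller analyzed above.

```python
import numpy as np

def simulate(x0, edges, R, v_fun, k=1.0, dt=1e-3, steps=20000):
    """Single-integrator agents x_i' = u_i with a decentralized edge-tension law plus a
    bounded free input v_i(t). Each edge contributes an attractive term whose weight grows
    as the edge length approaches the connectivity radius R, which is what keeps
    interconnected agents within range."""
    x = x0.copy()
    max_edge = []
    for t in range(steps):
        u = v_fun(t, x)                              # free inputs, assumed bounded
        for (i, j) in edges:
            d = x[i] - x[j]
            dist = np.linalg.norm(d)
            w = k / max(R - dist, 1e-6)              # tension blows up near the radius R
            u[i] -= w * d
            u[j] += w * d
        x = x + dt * u
        max_edge.append(max(np.linalg.norm(x[i] - x[j]) for i, j in edges))
    return x, np.array(max_edge)

def algebraic_connectivity(n, edges):
    """lambda_2 of the graph Laplacian; it is positive iff the graph is connected."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    return np.sort(np.linalg.eigvalsh(L))[1]

# toy run: four agents on a path graph in the plane, small random free inputs
rng = np.random.default_rng(0)
x0 = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
edges = [(0, 1), (1, 2), (2, 3)]
v_fun = lambda t, x: 0.05 * rng.standard_normal(x.shape)
xf, max_edge = simulate(x0, edges, R=2.0, v_fun=v_fun)
print(algebraic_connectivity(4, edges), max_edge.max())   # lambda_2 > 0, edge lengths stay below R
```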
for convenience we state the elementary inequality * proof of fact i.* let be an orthonormal basis of eigenvectors corresponding to the ordered eigenvalues of .then , for each we have that and hence , that thus , we get that which establishes .* proof of fact ii . * by taking into account the cauchy schwartz inequality we obtain and hence holds .* proof of fact iii .* by the definition of and , it follows that there exists such that .hence , we have that where stands for the edge set of the complete graph with vertex set .then , it follows from that which provides the desired result .* proof of fact iv .* notice that is equivalently written as with as in proof of fact iii .let such that .then , by taking into account we have and thus is fulfilled .* proof of fact v. * by taking into account - and we evaluate and hence holds .this work was supported by the eu strep reconfig : fp7-ict-2011 - 9 - 600825 , the h2020 erc starting grant bucophsys and the swedish research council ( vr ) .a. ajorlou , a. momeni , and a. g. aghdam .a class of bounded distributed control strategies for connectivity preservation in multi - agent systems ._ ieee transactions on automatic control _ , 55(12):28282833 , 2010. z. kan , a. p. dani , j. m. shea , and w. e. dixon . network connectivity preserving formation stabilization and obstacle avoidance via a decentralized controller . _ ieee transactions on automatic control _ , 57(7):18271832 , 2012 .p. yang , r. a. freeman , g. j. gordon , k. m. lynch , s. s. srinivasa , and r. sukthankar .decentralized estimation and control of graph connectivity for mobile sensor networks ._ automatica _ , 46(2):390396 , 2010 .m. m. zavlanos , m. b. egerstedt , and g. j. pappas .graph - theoretic connectivity control of mobile robot networks ._ in proceedings of the ieee : special issue on swarming in natural and engineered systems _, 99(9):15251540 , 2011 .
|
in this report we provide a decentralized robust control approach , which guarantees that connectivity of a multi - agent network is maintained when certain bounded input terms are added to the control strategy . our main motivation for this framework is to determine abstractions for multi - agent systems under coupled constraints which are further exploited for high level plan generation .
|
social products and services from fax machines and cell phones to online social networks inherently exhibit ` network effects ' with regard to their value to users .the value of these products to a user is inherently non - local , since it typically grows as members of the user s social neighborhood use the product as well . yet randomized experiments ( or ` a / b tests ' ) , the standard machinery of testing frameworks including the rubin causal model , critically assume what is known as the ` stable unit treatment value assumption ' ( sutva ) , that each individual s response is affected only by their own treatment and not by the treatment of any other individual . addressingthis tension between the formalism of a / b testing and the non - local effects of network interaction has emerged as a key open question in the analysis of on - line behavior and the design of network experiments . under ordinary randomized trials where the stable unit treatment value assumption is a reasonable approximation for example when a search engine a / b tests the effect of their color scheme upon the visitation time of their users the population is divided into two groups : those in the ` treatment ' group who see the new color scheme a and those in the control group who see the default color scheme b. assuming there are negligible interference effects between users , each individual in the treated group respondsjust as he or she would if the entire population were treated , and each individual in the control group responds just as he or she would if the entire population were in control . in this manner , we can imagine that we are observing results from samples of two distinct ` parallel universes ' at the same time ` universe a ' in which color scheme a is used for everyone , and ` universe b ' in which color scheme b is used for everyone and we can make inferences about the properties of user behavior in each of these universes . this tractable structure changes dramatically when the behavior of one user can have a non - trivial effect on the behavior of another user as is the case when the feature or product being tested has any kind of social component .now , if is placed in universe a and is placed in universe b , then our analysis of s behavior in a is contaminated by properties of s behavior in b , and vice versa ; we no longer have two parallel universes . [[ average - treatment - and - network - exposure ] ] * average treatment and network exposure * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + our goal is to develop techniques for analyzing the average effect of a treatment on a population when such interaction is present . 
as our basic scenario ,we imagine testing a service by providing it to a subset of an underlying population ; the service has a ` social ' component in that s reaction to the service depends on whether a neighbor in the social network also has the service .we say that an individual is in the _ treatment group _ if the individual is provided with the service for the test , and in the _ control group _ otherwise .there is an underlying numerical response variable of interest ( for example , the user s time - on - site in each condition ) , and we want to estimate the average of this response in both the universe where everyone has the service , and the universe where no one has the service , despite the fact that since the population is divided between treatment and control we do nt have direct access to either universe .we express this question using a formalism introduced by aronow and samii for causal inference without this stable unit treatment value assumption , with strong similarities to similar formalism introduce by manski , and adapt it to the problem of interference on social networks .let be the treatment assignment vector , where means that user is in the treatment group and means the user is in the control .let be the potential outcome of user under the treatment assignment vector .the fundamental quantity we are interested in is the average treatment effect , , between the two diametrically opposite universes and , }. \label{eq : avg - effect}\end{aligned}\ ] ] this formulation contains the core problem discussed in informal terms above : unlike ordinary a / b testing , no two users can ever truly be in opposing universes at the same time . a key notion that we introduce for evaluating ( [ eq : avg - effect ] ) is the notion of _ network exposure_. we say that is ` network exposed ' to the treatment under a particular assignment if s response under is the same as s response in the assignment , where everyone receives the treatment .. ] we define network exposure to the control condition analogously . with this definition in place , we can investigate several possible conditions that constitute network exposure .for example , one basic condition would be to say that is network exposed to the treatment if and all of s neighbors are treated . another would be to fix a fraction and say that is network exposed if and at least a fraction of s neighbors are treated .the definition of network exposure is fundamentally a modeling decision by the experimenter , and in this work we introduce several families of exposure conditions , each specifying the sets of assignment vectors in which a user is assumed to be ` network exposed ' to the treatment and control universes , providing several characterizations of the continuum between the two universes .choosing network exposure conditions is crucial because they specify when we can observe the potential outcome of a user as if they were in the treatment or control universe , without actually placing all users into the treatment or control universe .[ [ graph - cluster - randomization ] ] * graph cluster randomization * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + following the formulation of network exposure , a second key notion that we introduce is a generic graph randomization scheme based on graph clustering , which we call _ graph cluster randomization_. 
at a high level , graph cluster randomization is a technique in which the graph is partitioned into a set of _ clusters _ , and then randomization between treatment and control is performed at the cluster level .the probability that a vertex is network exposed to treatment or control will then typically involve a graph - theoretic question about the intersection of the set of clusters with the local graph structure near the vertex .we show how it is possible to precisely determine the non - uniform probabilities of entering network exposure conditions under such randomization . using inverse probability weighting , we are then able to derive an unbiased estimator of the average treatment effect under any network exposure for which we can explicitly compute probabilities .we motivate the power of graph cluster randomization by furnishing conditions under which graph cluster randomization will produce an estimator with asymptotically small variance .first , we observe that if the graph has bounded degree and the sizes of all the clusters remain bounded independent of the number of vertices , then the estimator variance is , a simple but illustrative sufficient condition for smallness .the key challenge is the dependence on the degrees in general , a collection of bounded - size clusters can produce a variance that grows exponentially in the vertex degrees .more precisely , when performing graph cluster randomization with single - vertex clusters , the variance of the estimator admits a _ lower bound _ that depends _ exponentially _ on the degrees .this raises the important algorithmic question of how to choose the clustering : bounded - size clusters provide asymptotically small variance in the number of vertices , but if the clusters are not chosen carefully then we get an exponential dependence on the vertex degrees which could cause the variance to be very large in practice .[ [ cluster - randomization - in - restricted - growth - graphs ] ] * cluster randomization in restricted - growth graphs * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we identify an important class of graphs , which we call _ restricted - growth graphs _ , on which a non - trivial clustering algorithm admits an _ upper bound _ on the estimator variance that is _ linear _ in the degrees of the graph .the restricted - growth condition that we introduce for graphs is an expansion of the bounded - growth condition previously introduced for studying nearest - neighbor algorithms in metric spaces , designed to include low - diameter graphs in which neighborhoods can grow exponentially .formally , let be the set of vertices within hops of a vertex ; our restricted - growth condition says that there exists a constant , independent of the degrees of the graph , such that for all vertices and all , we have .note the comparison to the standard bounded - growth definition , which requires , a much stronger condition and not necessary for our results to hold . 
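a minimal python sketch of the scheme described above is given below, using networkx and a toy clustering and outcome model chosen purely for illustration: whole clusters receive independent bernoulli(p) assignments, the full-neighborhood exposure probability of a vertex is p raised to the number of distinct clusters meeting its closed neighborhood, and the average treatment effect is estimated by inverse probability (horvitz-thompson) weighting.

```python
import numpy as np
import networkx as nx

def full_exposure_probs(G, cluster_of, p):
    """Under independent Bernoulli(p) assignment of whole clusters, a vertex is fully
    neighborhood exposed to treatment iff every cluster meeting its closed neighborhood
    is treated, so pi_1 = p**c and pi_0 = (1-p)**c with c the number of such clusters."""
    probs = {}
    for v in G:
        c = len({cluster_of[v]} | {cluster_of[u] for u in G[v]})
        probs[v] = (p ** c, (1.0 - p) ** c)
    return probs

def ht_estimate(G, cluster_of, z_cluster, y, probs):
    """Horvitz-Thompson estimate of the average treatment effect between the all-treated
    and all-control universes, using full neighborhood exposure as the exposure condition."""
    n = G.number_of_nodes()
    tau = 0.0
    for v in G:
        states = {z_cluster[cluster_of[v]]} | {z_cluster[cluster_of[u]] for u in G[v]}
        pi1, pi0 = probs[v]
        if states == {1}:                       # v and all neighbours treated
            tau += y[v] / (pi1 * n)
        elif states == {0}:                     # v and all neighbours in control
            tau -= y[v] / (pi0 * n)
    return tau

# toy experiment: random geometric graph, an arbitrary clustering, outcomes that respond
# to the treated fraction of the closed neighbourhood (true effect 0.5 between universes)
G = nx.random_geometric_graph(200, 0.1, seed=1)
cluster_of = {v: v // 10 for v in G}
rng = np.random.default_rng(2)
z_cluster = {c: int(rng.random() < 0.5) for c in set(cluster_of.values())}
y = {v: 1.0 + 0.5 * np.mean([z_cluster[cluster_of[u]] for u in list(G[v]) + [v]]) for v in G}
probs = full_exposure_probs(G, cluster_of, 0.5)
print(ht_estimate(G, cluster_of, z_cluster, y, probs))
```

running the toy example over repeated cluster assignments shows the estimate scattering around the effect of 0.5 built into the outcome model, as expected from an unbiased estimator.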
for restricted - growth graphs ,we provide a clustering algorithm for which the estimator variance grows only linearly in the degree .the challenge is that the variance can grow exponentially with the number of clusters that intersect a vertex s neighborhood ; our approach is to form clusters from balls of fixed radius grown around a set of well - separated vertices .the restricted growth condition prevents balls from packing too closely around any one vertex , thus preventing vertex neighborhoods from meeting too many clusters .we note that for the special case of restricted - growth graphs that come with a uniform - density embedding in euclidean space , one can use the locations of vertices in the embedding to carve up the space into clusters directly ; the point , as in work on the nearest - neighbor problem , is to control this carving - up at a graph - theoretic level rather than a geometric one , and this is what our technique does .our class of restricted - growth graphs provides an attractive model for certain types of real - world graphs .restricted - growth graphs include graphs for which there exists an embedding of the vertices with approximately uniform density in a euclidean space of bounded dimension , such as lattices or random geometric graphs , where edges connect neighbors within some maximal metric distance .[ [ summary ] ] * summary * + + + + + + + + + our work thus occupies a mediating perch between recent work from the statistical literature on causal inference under interference , as well as recent work from the computer science literature on network bucket testing .our contribution extends upon the ordinary inference literature by developing exposure models and randomization schemes particularly suited for experiments on large social graphs , also showing how previous approaches are intractable .meanwhile , we show that reducing estimator variance involves non - trivial graph - theoretic considerations , and we introduce a clustering algorithm that improves exponentially on baseline randomization schemes .our contribution also connects to existing work on network bucket testing by contributing an exposure framework for the full graph and a randomization scheme that is capable of considering multiple exposure conditions at once , a necessity for true concurrent causal experimentation . in section 2we describe our models of network exposure . in section 3we present our graph cluster randomization scheme , an algorithm for efficiently computing exposure probabilities , and an unbiased estimator of average treatment effects under graph cluster randomization . in section 4we introduce restricted - growth graphs , and show how the estimator has a variance that is linearly bounded in degree for such graphs .section 5 concludes .for a / b randomized experiments , the _ treatment condition _ of an individual decides whether or not they are subject to an intervention .this typically takes two values : ` treatment ' or ` control ' . in most randomized experiments ,the experimenter has explicit control over how to randomize the treatment conditions , and generally individuals are assigned independently .meanwhile , the _ exposure condition _ of an individual determines how they experience the intervention in full conjunction with how the world experiences the intervention . 
without the stable unit treatment value assumption ,at worst each of the possible values of define a distinct exposure condition for each user .aronow and samii call this `` arbitrary exposure '' , and there would be no tractable way to analyze experiments under arbitrary exposure .consider the potential outcomes for user . in the arbitrary exposure " case, is completely different for every possible .this means that we will never be able to observe for either or without putting all users into the treatment or control universes .thus , to make progress on estimating the average treatment effect under any other conditions , we require further assumptions . we do this here by assuming that multiple treatment vectors can map to the same potential outcomes : essentially , as long as treatment vectors and are `` similar enough '' from the perspective of a vertex , in a sense to be made precise below , then will have the same response under and .specifically , let be the set of all assignment vectors for which experiences outcome .we refer to as an _ exposure condition _ for ; essentially , consists of a set of assignment vectors that are `` indistinguishble '' from s point of view , in that their effects on are the same .our interest is in the particular exposure conditions and , which we define to be the sets that contain and respectively . in this way, we are assuming that for all , we have , and for all , we have .-sized bins , and define the `` exposure conditions '' as all assignment vectors that produce a potential outcome in that bin . in cases where no other potential outcomes correspond to the outcomes for or , it may be more appropriate to manage bias using distances on potential outcomes this way . ]note that it is possible that and belong to the same exposure condition and that , which corresponds to a treatment that has no effects .we define an _ exposure model _ for user as a set of exposure conditions that completely partition the possible assignment vectors .the set of all models , across all users , is the exposure model for an experiment . for our purposes though , it is unnecessary to entirely specify an exposure model , since we are only trying to determine the average treatment effect between the extreme universes .we only care about the exposure conditions and for which each user experiences exposure to the treatment or control universe and could become relevant . ] .of course , the _ true _ exposure conditions and for each user are not known to the experimenter a priori , and analyzing the results of an experiment requires choosing such conditions in our framework . if the wrong exposure conditions are chosen by the experimenter , what happens to the estimate of the average treatment effect ?if users are responding in ways that do not correspond to and , we will be introducing bias into the average treatment effect .the magnitude of this bias depends on how close the outcomes actually observed are to the outcomes at and that we wanted to observe. it may even be favorable to allow such bias in order to lower variance in the results of the experiment .[ [ neighborhood - exposure ] ] * neighborhood exposure * + + + + + + + + + + + + + + + + + + + + + + + we now describe some general exposure conditions that we use in what follows . 
in particular , we focus primarily on _ local exposure conditions _ , where two assignments are indistinguishable to if they agree in the immediate graph neighborhood of .we consider absolute and fractional conditions on the number of treated neighbors .note we are not asserting that these possible exposure conditions are the _ actual _ exposure conditions with respect to the actual potential outcomes in an experiment , but rather that they provide useful abstractions for the analysis of an experiment , where again the degree of bias introduced depends on how well the exposure conditions approximate belonging to the counterfactual universes .* _ full neighborhood exposure : _vertex experiences full neighborhood exposure to a treatment condition if and all s neighbors receive that treatment condition . *_ absolute -neighborhood exposure : _vertex of degree , where , experiences absolute -neighborhood exposure to a treatment condition if and neighbors of receive that treatment condition . *_ fractional -neighborhood exposure : _ vertex of degree experiences fractional -neighborhood exposure to a treatment condition if and neighbors of receive that treatment condition .the -absolute and -fractional neighborhood exposures can be considered relaxations of the full neighborhood exposure for vertex in that they require fewer neighbors of to have a fixed treatment condition for to be considered as belonging to that exposure condition . in fact , the set of assignment vectors that correspond to -absolute and -fractional neighborhood exposures are each nested under the parameters and respectively .increasing or decreases the set of assignment vectors until reaching full neighborhood exposure for vertex .it is natural to consider heterogeneous values or values that differ for each user but we limit our discussion to exposure conditions that are homogeneous across users as much as possible . we do incorporate a mild heterogeneity in the definition of -neighborhood exposure when vertices have degree : for these vertices we consider full neighborhood exposure instead .fractional exposure does not require this adjustment .[ [ core - exposure ] ] * core exposure * + + + + + + + + + + + + + + + full neighborhood exposure is clearly only an approximation of full immersion in a universe . beyond local exposure conditions, we also consider exposure condition with global dependence .as one approach , consider individuals as exposed to a treatment only if they are sufficiently surrounded by sufficiently many treated neighbors who are in turn also surrounded by sufficiently many treated neighbors , and so on .this recursive definition may initially appear intractable , but such recursive exposure can in fact be characterized precisely by analyzing the -core and more generally the heterogeneous -core on the induced graph of treatment and control individuals .recall that the -core of a graph is the maximal subgraph of in which all vertices have degree at least .similarly , the heterogeneous -core of a graph , parameterized by a vector , is the maximal subgraph of in which each vertex has degree at least . using the definition of heterogeneous -core, we introduce the following natural fractional analog . the fractional -core is the maximal subgraph of in which each vertex is connected to at least a fraction of the vertices it was connected to in .thus , for all , .equivalently , if is the degrees of vertex , the fractional -core is the heterogenous -core of for . 
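The heterogeneous core and its fractional variant defined above are easy to compute by the usual peeling procedure. The sketch below is a minimal illustration; the data format and the example graph are assumptions made here, and the thresholds are taken relative to the degrees in the original graph, as in the definition.

```python
# Heterogeneous k-core and fractional q-core by iterative peeling (a sketch).
import math

def heterogeneous_core(adj, k):
    """Maximal set of vertices such that, inside the set, each vertex v still
    has degree at least k[v]; found by repeatedly peeling violating vertices."""
    alive = set(adj)
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            if sum(1 for u in adj[v] if u in alive) < k[v]:
                alive.remove(v)
                changed = True
    return alive

def fractional_q_core(adj_H, original_degree, q):
    """Fractional q-core of a subgraph H of G: each remaining vertex keeps at
    least a fraction q of the neighbors it had in G, i.e. the heterogeneous
    core of H with thresholds ceil(q * original_degree)."""
    k = {v: math.ceil(q * original_degree[v]) for v in adj_H}
    return heterogeneous_core(adj_H, k)

if __name__ == "__main__":
    # G is a path 0-1-2-3-4; H is G restricted to {0,1,2,3} (e.g. a treated set)
    G = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
    H = {v: [u for u in G[v] if u != 4] for v in range(4)}
    deg_G = {v: len(G[v]) for v in G}
    print(sorted(fractional_q_core(H, deg_G, q=0.5)))   # [0, 1, 2, 3]
    print(sorted(fractional_q_core(H, deg_G, q=1.0)))   # [] : q = 1 behaves like component exposure
```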
since the heterogeneous -core is a well - defined object , so is the fractional -core . using this definition , we now define exposure conditions that are all stricter versions of corresponding earlier neighborhood conditions . * _ component exposure : _vertex experiences component exposure to a treatment condition if and all of the vertices in its connected component receive that treatment condition .* _ absolute -core exposure : _ vertex with degree experiences absolute -core exposure to a treatment condition if belongs to the -core of the graph ] , the subgraph of induced on the vertices that receive that treatment condition .component exposure is perhaps the strongest requirement for network exposure imaginable , and it is only feasible if the interference graph being studied is comprised of many disconnected components .we include it here specifically to note that the fractional -core exposure for reduces to component exposure .again like the neighborhood exposure case , absolute core exposure requires heterogeneity in across users for it to be a useful condition for all users . a parsimonious solution analogous to the solution for -neighborhood exposure may be to consider heterogeneous max(degree , )-core exposure .fractional -core exposure , like fractional -neighborhood exposure , is again free from these parsimony problems .core exposure conditions are strictly stronger than the associated neighborhood exposure conditions above . in fact, every assignment vector in which a vertex would be component or core exposed corresponds to neighborhood exposure , but not vice versa .so the assignment vectors of core and component exposure are entirely contained in those of the associated neighborhood exposure .[ [ other - exposure - conditions ] ] * other exposure conditions * + + + + + + + + + + + + + + + + + + + + + + + + + + + other exposure conditions may prove relevant to particular applications .in particular , we draw attention to the intermediate concept of placing absolute or fractional conditions on the population of vertices within hops , where is the neighborhood exposure conditions above. we also note that on social networks with very high degree , for many applications it may be more relevant to define the exposure conditions in terms of a lower degree network that considers only stronger ties .using the concept of network exposure , we can now consider estimating the average treatment effect between the two counterfactual universes using a randomized experiment . recall that is the treatment assignment vector of an experiment . to randomize the experiment ,let be drawn from , a random vector that takes values on , the range of .the distribution of over given by is what defines our randomization scheme , and it is also exactly what determines the relevant probabilities of network exposure . for a user , is the probability of network exposure to treatment and is the probability of network exposure to control . 
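Anticipating the Horvitz-Thompson estimator introduced next, the sketch below illustrates cluster-level randomization, a plain Monte Carlo estimate of the exposure probabilities under fractional neighborhood exposure, and the resulting inverse-probability-weighted contrast. The paper describes an efficient exact computation of these probabilities; sampling is used here only as a simple stand-in, and all names, the 0/1 treatment encoding, and the toy graph are illustrative.

```python
# Graph cluster randomization, Monte Carlo exposure probabilities, and an
# inverse-probability-weighted treatment-effect estimate (a sketch).
import random

def fractional_q_exposed(adj, z, i, condition, q):
    deg = len(adj[i])
    agreeing = sum(z[j] == condition for j in adj[i])
    return z[i] == condition and (deg == 0 or agreeing >= q * deg)

def cluster_randomize(clusters, p, rng):
    """Assign each cluster to treatment (1) w.p. p, else control (0), and
    propagate the cluster label to every one of its vertices."""
    label = {c: (1 if rng.random() < p else 0) for c in set(clusters.values())}
    return {v: label[c] for v, c in clusters.items()}

def exposure_probabilities(adj, clusters, p, q, draws=20000, seed=0):
    """Monte Carlo estimate of pi_i^1 and pi_i^0 under cluster randomization
    (a stand-in for the exact computation referenced in the text)."""
    rng = random.Random(seed)
    hit1 = {i: 0 for i in adj}
    hit0 = {i: 0 for i in adj}
    for _ in range(draws):
        z = cluster_randomize(clusters, p, rng)
        for i in adj:
            hit1[i] += fractional_q_exposed(adj, z, i, 1, q)
            hit0[i] += fractional_q_exposed(adj, z, i, 0, q)
    return ({i: hit1[i] / draws for i in adj}, {i: hit0[i] / draws for i in adj})

def ipw_estimate(adj, y, z, pi1, pi0, q):
    """Inverse-probability-weighted contrast over network-exposed vertices."""
    total = 0.0
    for i in adj:
        if fractional_q_exposed(adj, z, i, 1, q) and pi1[i] > 0:
            total += y[i] / pi1[i]
        if fractional_q_exposed(adj, z, i, 0, q) and pi0[i] > 0:
            total -= y[i] / pi0[i]
    return total / len(adj)

if __name__ == "__main__":
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}      # a path on 4 vertices
    clusters = {0: 0, 1: 0, 2: 2, 3: 2}               # two clusters of size 2
    pi1, pi0 = exposure_probabilities(adj, clusters, p=0.5, q=0.75)
    z = cluster_randomize(clusters, 0.5, random.Random(1))
    y = {i: 1.0 + 0.5 * z[i] for i in adj}            # a toy response
    print("pi^1:", pi1)
    print("estimate:", ipw_estimate(adj, y, z, pi1, pi0, q=0.75))
```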
in general, these probabilities will be different for each user and each treatment condition , and knowing these probabilities makes it possible to correct for allocation bias during randomization .in particular , it becomes possible to use the horvitz - thompson estimator , , to obtain an unbiased estimate of , here given by }{\pr(z \in \sigma^1_i ) } -\frac{y_i(z)\mathbf{1}[z \in \sigma^0_i]}{\pr(z \in \sigma^0_i)}\right)},\end{aligned}\ ] ] where ] and ] for both exposure to treatment and control , .the reason for the positive lower bounds is that without them the users could all be responding zero to all treatments , making the variance zero regardless of the treatment scheme .we also assume the randomization probability is not degenerate , i.e. .we present the results for -regular graphs to keep expressions manageable , but analogous results can be derived for arbitrary degrees .we first establish an exponential lower bound for the variance under vertex - level randomization , and then we show a contrasting linear upper bound for the variance under our 3-net cluster randomization scheme .the variance of the ht estimator under full neighborhood exposure for vertex randomization of a graph with vertices is lower bounded by an exponential function in the degree of the graph , \ge o(1/n ) ( p^{-(d+1)}+(1-p)^{-(d+1)}-1) ] , which is exponential in the degree of each vertex , meaning that even a single high degree vertices can easily explode the variance .we now turn to our linear upper bound for growth - restricted graphs when using our 3-net clustering .the variance of the ht estimator under full , -fractional , or -absolute neighborhood exposure for a 3-net cluster randomization of a restricted - growth graph is upper bounded by a function linear in the degree of the graph .recall that the variance of the estimator is given by : we begin by upper bounding the variance of , and the upper bound for follows the same principle .we conclude by bounding the covariance term . by proposition [ claim2 ] , each vertex is connected to at most clusters .thus we have the lower bound , for both full and fractional neighborhood exposure . \le \frac{y_m^2}{n^2 } \bigg [ n(\frac{1}{p^{\kappa^3}}-1 ) + \sum_{i=1}^{n } \sum_{\substack{j=1 \\ j\ne i}}^n ( \frac { \pi^1_{ij}}{\pi^1_i \pi^1_j } - 1 ) \bigg ] .\end{aligned}\ ] ] for each vertex , the inner of the two sums is only nonzero at those vertices for which the assignments are dependent . if the assignments for and are dependent , then they must each have neighbors in the same cluster associated with a vertex in the set of cluster centers . since the proof of proposition [ claim2 ] established that , it follows that and are each within distance 3 of and hence within distance 6 of each other .thus , any whose assignment is dependent on s must lie within , and so by the restricted - growth condition , there can be at most such vertices .thus the sum over such has at most terms .also , applies , since the two vertices must depend on at least one cluster .we obtain \le y_m^2 [ ( p^{-\kappa^3}-1 ) + \kappa^5 ( d+1 ) ( p^{-2\kappa^3 - 1 } - 1 ) ] \frac{1}{n}.\end{aligned}\ ] ] now , consider the contribution of the covariance term to the variance , , a positive quantity . 
starting from equation ( [ eq : covar ] ) , we apply the upper bound for the responses to obtain \le - \frac{2y_m^2}{n^2 } \sum_{i=1}^{n } \sum_{\substack{j=1 \\ j\ne i}}^n \left ( \frac{\pi_{ij}^{10}}{\pi_i^1 \pi_j^0 } - 1 \right ) + \frac{2y_m^2}{n}.\end{aligned}\ ] ] as with the previous analogous expression , for each the inner sum is non - zero for at most other vertices . for the remaining terms ,the quantity is trivially upper bounded by .thus we obtain \le \frac{2y_m^2}{n } [ \kappa^5(d+1 ) + 1].\end{aligned}\ ] ] combining the upper bounds , we obtain a total upper bound that is linear in degree , as desired .the restricted - growth condition we used was derived for regular graphs , but as we noted earlier , for restricted - growth graphs with arbitrary degree distributions we can apply a weaker but still constant bound on the cluster dependencies to obtain a variance bound that is still linear in the degree .the design of online experiments is a topic with many open directions ( see e.g. ) ; in this work we have focused on the open question of a / b testing when treatment effects can spill over along the links of an underlying social network .we introduced a basic framework for reasoning about this issue , as well as an algorithmic approach graph cluster randomization for designing a / b randomizations of a population when network spillover effects are anticipated .appropriate clustering can lead to reductions in variance that are exponential in the vertex degrees .we emphasize that beyond the class of graphs where we prove bounds , graph cluster randomization is a technique that can be applied to arbitrary graphs using arbitrary community detection or graph partitioning algorithms , though we do not provide any variance bound guarantees for these scenarios .there are many further directions for research suggested by the framework developed here .a first direction is to formulate a computationally tractable objective function for minimizing the variance of the horvitz - thompson estimator .one approach would be via minimizing an adversarial variance , as in .another problem that may be relevant is to find a clustering that minimizes a / a variance for full neighborhood exposure under the assumption of known control potential outcomes .can good clusterings for a / a variance lead to good solutions for a / b testing ?we note that a / a variance minimization would not be useful when the treatment is expected to be dominated by heterogeneous responses . adding further structure to the potential treatment responses is another interesting direction .we currently have a discrete notion of network exposure to treatment and control , but one could ask about responses that depend continuously on the _ extent _ of exposure . as one simple example , we could consider a response that was linear in , when a vertex had exposed neighbors . how could we properly take advantage of such structure to get better estimates? methods for analyzing bias under network exposure condition misspecification would also be a natural addition to the framework .8 e. airoldi , e. kao , p. toulis , d. rubin . causal estimation of peer influence effects . in _icml _ , 2013 .p. aronow and c. samii .estimating average causal effects under general interference . ,september 2012 .b. bollobs . .cambridge univ . press , 2001 .d. cellai , a. lawlor , k. dawson , j. gleeson .critical phenomena in heterogeneous k - core percolation . , 87(2):022134 , 2013 .s. 
fienberg .a brief history of statistical models for network analysis and open challenges . , 2012 .s. fortunato .community detection in graphs ., 486(3):75174 , 2010 .d. horvitz , d. thompson .a generalization of sampling without replacement from a finite universe ., 1952 d. karger , m. ruhl .finding nearest neighbors in growth - restricted metrics . in _ stoc _ , 2002 .l. katzir , e. liberty , o. somekh .framework and algorithms for network bucket testing . in _www _ , 2012 .r. kohavi , a. deng , b. frasca , r. longbotham , t. walker , y. xu .trustworthy online controlled experiments : five puzzling outcomes explained . in _kdd _ , 2012 . c. manski .identification of treatment response with social interactions ., 16(1):s1s23 , 2013 .d. rubin . estimating causal effects of treatments in randomized and nonrandomized studies . , 1974 .e. tchetgen , t. vanderweele . on causal inference in the presence of interference . , 2012 .j. ugander , l. backstrom .balanced label propagation for partitioning massive graphs . in _ wsdm _ , 2013 .
|
a / b testing is a standard approach for evaluating the effect of online experiments ; the goal is to estimate the ` average treatment effect ' of a new feature or condition by exposing a sample of the overall population to it . a drawback with a / b testing is that it is poorly suited for experiments involving social interference , when the treatment of individuals spills over to neighboring individuals along an underlying social network . in this work , we propose a novel methodology using graph clustering to analyze average treatment effects under social interference . to begin , we characterize graph - theoretic conditions under which individuals can be considered to be ` network exposed ' to an experiment . we then show how graph cluster randomization admits an efficient exact algorithm to compute the probabilities for each vertex being network exposed under several of these exposure conditions . using these probabilities as inverse weights , a horvitz - thompson estimator can then provide an effect estimate that is unbiased , provided that the exposure model has been properly specified . given an estimator that is unbiased , we focus on minimizing the variance . first , we develop simple sufficient conditions for the variance of the estimator to be asymptotically small in , the size of the graph . however , for general randomization schemes , this variance can be _ lower bounded _ by an _ exponential _ function of the degrees of a graph . in contrast , we show that if a graph satisfies a _ restricted - growth condition _ on the growth rate of neighborhoods , then there exists a natural clustering algorithm , based on vertex neighborhoods , for which the variance of the estimator can be _ upper bounded _ by a _ linear _ function of the degrees . thus we show that proper cluster randomization can lead to exponentially lower estimator variance when experimentally measuring average treatment effects under interference .
|
spectrum sensing is one of the key elements in opportunistic spectrum access design , . in literature , there are a few main categories of practical spectrum sensing schemes for osa including : energy detection , cyclostationarity based detection , and eigenvalue - based detection ( ebd ) ; see , e.g. and references therein .energy detection is simple for implementation , however , it requires accurate noise power information , and a small error in that estimation may cause snr wall and high probability of false alarm . for cyclostationarity based detection, the cyclic frequency of pu s signal needs to be acquired _ a priori_. on the other hand , ebd , first proposed and submitted to ieee 802.22 working group , performs signal detection by estimating the signal and noise powers simultaneously , thus it does not require the cyclic knowledge of the pu and is robust to noise power uncertainty . in general ,two steps are needed to complete a sensing design : ( 1 ) to design the test statistics ; and ( 2 ) to derive the probability density function ( pdf ) of the test statistics . for spectrum sensing with multiple antennas , the test statistics can be designed based on the standard principles such as generalized likelihood ratio testing ( glrt ) , or other considerations , , , .surprisingly these studies all give the test statistics using the eigenvalues of the sample covariance matrix .it is thus important to derive the pdf of the eigenvalues so that the sensing performance can be quantified . for ebd schemes ,the pdf of the test statistics is usually derived using random matrix theory ( rmt ) ; see , e.g. , , , , . in fact , the maximum and minimum eigenvalues of the sample covariance matrix have simple explicit expressions when both antenna number ( ) and ample size ( ) are large .it is reasonable to assume that is large especially when the secondary user is required to sense a weak primary signal , in practice , however , the number of antennas equipped at a single secondary user is usually small , say or .thus the results obtained under the assumption that both and are large may not be accurate for practical multi - antenna cognitive radios . in this paper ,our objective is to derive the asymptotic distributions of the eigenvalues of the sample covariance matrices for arbitrary but large .the asymptotic results obtained form the basis for quantifying the pdf of the test statistics for ebd algorithms .it is noticed that there are studies on the exact distribution of the condition number of sample covariance matrices for arbitrary and , the formulas derived however are complex and can not be conveniently used to analyze the sensing performance . furthermore , there are no results published for the case when the primary signals exist .the rest of the paper is organized as follows .section [ sec : spectrum sensing ] presents the system model for spectrum sensing using multiple antennas .two ebd algorithms are reviewed in section [ sec : ebd ] , including maximum eigenvalue detection ( med ) and condition number detection ( cnd ) algorithms . in section [ sec : asym-1 ] , we derive the asymptotic distributions of the test statistics of the two ebd algorithms for the scenario when the primary users are inactive . 
in section [ sec : asym-2 ] , the results are derived for the scenario when there are active primary users in the sensed band .performance evaluations are given in section [ sec : evaluations ] , and finally , conclusions are drawn in section [ sec : conclusions ] .the following notations are used in this paper .matrices and vectors are typefaced using bold uppercase and lowercase letters , respectively .conjugate transpose of matrix is denoted as . stands for the norm of vector . ] is the channel matrix from the active pus to the su . * scenario 0 ( ) : * there are no active pus in the sensed band , thus the sampled outputs collected at su are given by we make the following assumptions for the above models : * the noises are independent and identically distributed ( iid ) both spatially and temporally .each element follows gaussian distribution with mean zero and variance . * for each , the elements of are iid , and follow gaussian distribution with mean zero and variance .thus the covariance matrix of is = \sigma_{s}^{2 } { \bf i} ] , the objective of spectrum sensing design is to choose one of the two hypothesis : : there are no active primary users in the sensed band , and : there exist active primary users .for that , we need to design the test static , , and a threshold , and infer if , and infer if .let us define the _ sample covariance matrix _ and _ covariance matrix _ of the measurements of the sensed band as , \label{eq : rmx1}\end{aligned}\ ] ] respectively .for a fixed , under scenario and when , we have which means that all the eigenvalues of the _ covariance matrix _ are equal to . however , under and when , approaches based on assumption ( iv ) , the eigenvalues of can be ordered as .denote the ordered eigenvalues of the sample covariance matrix as .we consider the following two ebd algorithms . ** maximum eigenvalue detection ( med ) * : for med , the test static is chosen as : * * condition number detection ( cnd ) * : the cnd chooses the following test statistic : an essential task to complete the sensing design is to determine the test threshold , which affects both probability of detection and probability of false alarm .to do so , it is important to derive the pdf of the test statistics . for arbitrary pair , the closed - form expressions of the pdf of the test statics are in general complex . in practice ,the primary users need to be detectable in low snr environment .for example , in ieee 802.22 , the tv signal needs to be detected at snr with target probability of detection and target probability of false alarm . to achieve that, the number of samples , , required for spectrum sensing is usually very large . in the paper, we thus turn our attention to derive the pdfs of the test statistics for any fixed , but large . the asymptotic distributions ( when ) of the test statics of the ebd algorithms will be derived for and scenarios , respectively .let for , and denote .define for , and . note that . under , by theorem 1 and ( 2.12 ) in , we have the following proposition .[ prop : p1 ] for real - valued case , when , the limiting distribution of , , is given by where }.\end{aligned}\ ] ] here is the gamma function . for real - valued case , and when , we have , i.e. 
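As a concrete illustration of the two test statistics just defined, the short sketch below forms the sample covariance matrix from K snapshots of the n-antenna receive vector and evaluates the MED and CND statistics under both hypotheses. The scenario (one primary signal, unit noise variance) and all names are illustrative choices made here.

```python
# Sample covariance matrix and the MED / CND test statistics (a sketch).
import numpy as np

def sample_covariance(X):
    """X has shape (n_antennas, K_samples); returns (1/K) X X^H."""
    return (X @ X.conj().T) / X.shape[1]

def med_statistic(X):
    """Maximum eigenvalue detection: largest eigenvalue of the sample covariance."""
    return np.linalg.eigvalsh(sample_covariance(X))[-1]

def cnd_statistic(X):
    """Condition number detection: ratio of largest to smallest eigenvalue."""
    eig = np.linalg.eigvalsh(sample_covariance(X))
    return eig[-1] / eig[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, K, sigma = 4, 1000, 1.0
    X0 = sigma * rng.standard_normal((n, K))           # H0: noise only
    h = rng.standard_normal((n, 1))                     # H1: one primary signal
    s = rng.standard_normal((1, K))
    X1 = h @ s + sigma * rng.standard_normal((n, K))
    print("MED under H0 / H1:", med_statistic(X0), med_statistic(X1))
    print("CND under H0 / H1:", cnd_statistic(X0), cnd_statistic(X1))
```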
, .when , we have for _ complex - valued case _ , by lindberg s central limit theorem , converges to a hermitian matrix with elements above diagonal being complex gaussian distribution , a so - called gaussian unitary matrix .thus , according to the joint density of the eigenvalues of a gaussian unitary matrix , similar to theorem 1 in we have the following : [ prop : p2 ] for complex - valued case , when , the limiting distributions of , , is given by where for complex - valued case , when , we have , i.e. , . when , we have for a given , the limiting distribution , , of , can be calculated from or . for med , , thus .we have the following : under , for a fixed and when , whose distribution , , is the same as the limiting distribution of , . for cnd , .the following theorem states the limiting distribution of the condition number of the sample covariance matrix . under , for a fixed and when , whose pdf and cumulative density function ( cdf ) are given by respectively , where is the joint distribution of and .see .next , we derive the closed - form expressions of the limiting distributions of the condition number of the sample covariance matrix for case .[ lemma-1 ] consider scenario . for and , pdf is for real - valued case , and for complex - valued case . from the above two subsections, it is seen that for a given and when , the _ regulated test statistics _ , converges to a random variable with pdf and cdf , where for med and for cnd . here , we look at how to set the decision threshold to achieve a target probability of false alarm , .since thus the decision threshold is determined by ^{-1}(1-\bar{p}_{f}),\end{aligned}\ ] ] where ^{-1}(y)$ ] denotes the inverse function of .in this section , we derive the asymptotic distributions of the test statistics under . consider the ordered eigenvalues of covariance matrix , , and the ordered eigenvalues of sample covariance matrix , .with the same notation as in , let the multiplicities of the eigenvalues of be .that is where moreover , let where and and are corresponding eigenvector matrices .we then partition the matrices and into block matrices as follows : note in both and , the diagonal sub - block is matrix .following the proofs of proposition 1 and proposition 2 , and theorem 1 of , we have the following results . [ theorem : h1 ] the limiting distribution of is given by and for real - valued case and complex - valued case , respectively , with the parameter being replaced by .furthermore , is asymptotically independent of for .let and . from theorem [ theorem : h1] , and are asymptotically independent , and the limiting distributions of and are and , respectively , which are defined in section [ sec : asym-1 ] . for med , , thus where .we have the following theorem . under , when , whose pdf is .if there are no repeated maximum eigenvalues ( i.e. , ) , for real - valued case , and for complex - valued case . for cnd , . since and are asymptotically independent , and when , , , similar to theorem 2 , we have the following theorem . under , for a fixed andwhen , where , and the pdf of is given by now let us look at the case when .since both and follow the same gaussian distribution asymptotically , we can easily derive that for real - valued case , and for complex - valued case .computer simulations are presented in this section to validate the effectiveness of the results obtained in this paper .we choose , and there is one primary signal under . 
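The threshold rule above can also be mimicked empirically: estimate the distribution of the test statistic under the noise-only hypothesis by Monte Carlo, take its quantile at one minus the target probability of false alarm as the threshold, and then estimate the probability of detection under the alternative. The sketch below does this for the condition number statistic; it is a numerical stand-in for the asymptotic distributions derived above, and all parameter values are illustrative.

```python
# Empirical threshold selection and detection probability by Monte Carlo (a sketch).
import numpy as np

def cnd_statistic(X):
    eig = np.linalg.eigvalsh((X @ X.conj().T) / X.shape[1])
    return eig[-1] / eig[0]

def simulate(n=4, K=500, snr_db=-5.0, trials=5000, p_fa=0.1, seed=1):
    rng = np.random.default_rng(seed)
    sigma2 = 1.0
    sig_power = sigma2 * 10.0 ** (snr_db / 10.0)
    t0, t1 = [], []
    for _ in range(trials):
        noise = np.sqrt(sigma2) * rng.standard_normal((n, K))
        t0.append(cnd_statistic(noise))                    # H0: noise only
        h = rng.standard_normal((n, 1))
        h *= np.sqrt(n * sig_power) / np.linalg.norm(h)    # set per-antenna signal power
        s = rng.standard_normal((1, K))
        t1.append(cnd_statistic(h @ s + np.sqrt(sigma2) * rng.standard_normal((n, K))))
    gamma = np.quantile(t0, 1.0 - p_fa)    # empirical (1 - Pfa) quantile under H0
    p_d = float(np.mean(np.array(t1) > gamma))
    return gamma, p_d

if __name__ == "__main__":
    gamma, p_d = simulate()
    print(f"threshold = {gamma:.3f}, estimated Pd = {p_d:.3f}")
```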
monte carlo runs are carried out in order to compute cdf of the test statistics .we first compare the cdfs of the regulated test statistics for med derived using computer simulations , and theoretic analysis under fixed assumption of this paper and large assumption of .the results for real - valued case with are shown in fig .[ fig : fig1 ] .it is seen that the simulated cdf and theoretical cdf derived in this paper are close to each other , while the result derived under large assumption are far from the simulated one .we next evaluate the accuracy of threshold setting for the cnd detection using the formula in section iv .[ threshold1 ] shows the threshold values at different probability of false alarms for real - valued case with . for comparison , the true thresholds ( based on simulations ) and the thresholds by using the theory for large in also included in the figure .the proposed theory based on fixed is much more accurate than the theory in and gives the threshold values approaching to the true values . :real - valued case , cnd with .,width=302 ] we then compare the detection probability results predicted by the formulas derived in this paper and reference with large assumption . the probabilities of detection for cnd by using simulations and different theoretical formulas are shown in fig .[ pd1 ] with and snr = ( real - valued case ) . for a target probability of false alarm, we first use simulations to determine the decision threshold , then obtain the probability of detection based on simulations or the related formulas .again , the proposed formula based on fixed is much more accurate than the formula in and gives the values approaching to the true values . from fig .[ pd1 ] , it is also seen that , interestingly , for a target probability of false alarm , the probability of detection predicted by our method tends to be conservative , which seems to be good to primary users in terms of protection requirement .finally , as the results derived in this paper are asymptotic , they will approach the simulated ones more accurately when the sample size increases .: real - valued case , cnd with and snr = .,width=302 ]in this paper , theoretic distributions of the test statistics for some eigenvalue based detections have been derived for any fixed but large , from which the probability of detection and probability of false alarm have been obtained .the results are useful in practice to determine the detection thresholds and predict the performances of the sensing algorithms .extensive simulations have shown that theoretic results have higher accuracy than existing stuties .y. zeng , y .- c .liang , a. t. hoang , and r. zhang `` a review on spectrum sensing for cognitive radio : challenges and solutions , '' _ eurasip journal on advances in signal processing _, article number : 381465 , 2010 .l. cardoso , m. debbah , p. bianchi , and j. najim , `` cooperative spectrum sensing using random matrix theory , '' in _ proc .3rd international symposium on wireless pervasive computing ( iswpc ) _ , may 2008 , pp.334338 .f. penna , r. garello , and m. a. spirito , `` cooperative spectrum sensing based on the limiting eigenvalue ratio distribution in wishart matrices , '' _ ieee communications letters , _vol.13 , no.7 , pp.507509 , july 2009 .
|
in recent years , some spectrum sensing algorithms using multiple antennas , such as the eigenvalue based detection ( ebd ) , have attracted a lot of attention . in this paper , we are interested in deriving the asymptotic distributions of the test statistics of the ebd algorithms . two ebd algorithms using sample covariance matrices are considered : maximum eigenvalue detection ( med ) and condition number detection ( cnd ) . the earlier studies usually assume that the number of antennas and the number of samples are both large , thus random matrix theory ( rmt ) can be used to derive the asymptotic distributions of the maximum and minimum eigenvalues of the sample covariance matrices . while assuming the number of antennas being large simplifies the derivations , in practice , the number of antennas equipped at a single secondary user is usually small , say or , and once designed , this antenna number is fixed . thus in this paper , our objective is to derive the asymptotic distributions of the eigenvalues and condition numbers of the sample covariance matrices for any fixed but large , from which the probability of detection and probability of false alarm can be obtained . the proposed methodology can also be used to analyze the performance of other ebd algorithms . finally , computer simulations are presented to validate the accuracy of the derived results . spectrum sensing , cognitive radio , random matrix theory .
|
we consider a simple model of concurrent processes in computer systems , or more generally , a logical representation of the physical universe that consists of causal sequences of events occurring at different spatial locations .these locations are here referred to as _sites_. formally , a site is a sequence of processes ordered by the happened - before relation .the term _ process _ denotes one of the continuing states in a site , including in particular nothing happening states .it is also assumed for simplicity that ( i ) the processes in each site occur consecutively with no time gaps , ( ii ) each process lasts for some non - zero time duration .any change of processes is referred to as an _ event _ ( fig .for the later comparison , this section is devoted to introduce a formalization of time in non - relativistic situations .the earliest such treatment is due to russel ( 1926 ) .since we assume that any process is not instantaneous , but occupies some non - zero time duration , we can say that a process _ is earlier than _ a process if ends before begins , and that _ is simultaneous with _ if partly or completely overlaps with , i.e. neither is earlier than nor vice versa . in particular, is simultaneous with itself .note that this relation of simultaneity is reflexive and symmetric , but not transitive . a maximal set of simultaneity , which is referred to as a _ time point _, is then defined as a maximal set of processes ( with respect to the set inclusion ordering ) , any two members of which are simultaneous with each other .the collection of all time points is denoted by , i.e. where * proc * denotes the set of all processes , denotes the power set of * proc * ( the set of all subsets of * proc * ) , and denotes the simultaneity relation . in fig .2 , we have , , , , , , , , , .the broken lines represent the simultaneous time points .2 ] it is worth noting that the happened - before relation on * proc * defines a linear ordering on . for any distinct time points and , there must be a process that is in but not in , and a process that is in but not in . by the definition of time point , either of the following holds : `` is earlier than '' or `` is earlier than . '' in the former case let and in the latter case .time points are used to introduce the concept of _ time intervals ._ for each process , its time interval ] , = \ { t_1\} ] . we are now led to the logic of simultaneity by considering a process and its time interval ] ( set - theoretic complement relative to ) , which amounts to the time interval that is not true . *for any propositions and , denotes the proposition that the propositions and are both true .the truth value of is defined as \equiv [ \ , p\ , ] \cap [ \ , q\ , ] ] ( set - theoretic union ) , which amounts to the time interval that at least one of and is true .since the operations coincide with the usual set - theoretic ones , it is obvious that the resulting logic is boolean .in the preceding section , we have implicitly assumed the existence of the global clock , which is represented by the linearly arranged time points .however , the classical concept of simultaneity loses its meaning in a real distributed system or a relativistic universe .what the principle of special relativity says is that it does take a non - zero time duration to transmit any causal signals between spatially separated sites ( for a basic reference , see taylor ( 1992 ) ) . as shown in fig .3 , we refer to any signal capable of transmitting information between sites as a _message_. 
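As a small illustration of the construction above, the sketch below represents processes as time intervals, builds the simultaneity (overlap) graph, and extracts the time points as its maximal cliques. Treating intervals as half-open, so that consecutive processes at one site are not simultaneous, is a modeling choice made here for concreteness; the networkx library is assumed to be available for the clique enumeration, and the process labels are illustrative.

```python
# Time points as maximal sets of pairwise simultaneous processes (a sketch).
import itertools
import networkx as nx   # assumed available; used only for maximal cliques

def simultaneous(p, q):
    """p, q are (name, start, end); overlap test for half-open intervals."""
    return p[1] < q[2] and q[1] < p[2]

def time_points(processes):
    g = nx.Graph()
    g.add_nodes_from(p[0] for p in processes)
    for p, q in itertools.combinations(processes, 2):
        if simultaneous(p, q):
            g.add_edge(p[0], q[0])
    # maximal cliques of the simultaneity (overlap) graph are the time points
    return [sorted(c) for c in nx.find_cliques(g)]

if __name__ == "__main__":
    processes = [("p1", 0, 2), ("p2", 2, 5), ("p3", 5, 7),   # site 1
                 ("q1", 0, 3), ("q2", 3, 4), ("q3", 4, 7)]   # site 2
    for t in sorted(time_points(processes)):
        print(t)
```

On this toy example the maximal cliques recover the linearly ordered time points, each consisting of one process per site.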
while a message transmission is needed to synchronize two clocks at different sites , the transmission time duration is not measurable without reference to synchronized clocks located at both sites ; it is a vicious circle .hence we must abandon the attempt to provide the global time points , i.e. the clock common to all sites . thus the simultaneity based on the overlap relation can not be defined since it can not generally be determined whether two processes at different sites overlap temporally .we should therefore focus on the special cases where we can say with certainty that two or more processes run simultaneously . in simple cases where a message is sent from a site to a site and then another messageis sent back from to , it is verifiable that the processes in that occur consecutively with no time gaps between the sending event and the receiving event _ temporally contain _ the processes in that occur between the receiving event and the sending event . to sum up , a relativistic universe admits simultaneity only in the sense of temporal containment shown in fig 5 . in the following discussion, we drop the idea of defining time interval via simultaneous time points , but directly construct the logic employing temporal containment relation between processes .we say that two processes have a _ causal relationship _ in a relativistic universe if they are linked with the happened - before relation ( lamport , 1978 ) .the happened - before relation on * proc * is defined as the smallest relation satisfying the following conditions : ( i ) if and are processes of the same site , and occurs before , then .( ii ) if ends with an event of sending a message and begins with an event of receiving that message , then .( iii ) if and , then . the causality relation on * proc * is then defined as the smallest relation satisfying the following condition : if and only if or .note that the relation is irreflexive and symmetric , but not transitive .using the same notation as before , we informally denote by ] .this containment relation is characterized by the fact that _ any process that has a causal relationship with has a causal relationship with _, i.e. to formalize a more general setting where a process is covered by two or more processes ( fig .6 ) , we need a slight modification of the formula ( [ eq : pcc ] ) . letting ] if _ any process that has a causal relationship with any process of has a causal relationship with , i.e. now taking the containment relation as fundamental , we can conceive of a non - boolean model for the logic of spatiotemporal structures .letting be the collection of all time intervals , we stipulate that the time interval ] ( orthocomplement relative to ) , which + amounts to the time interval that is not true .the following facts follow from the definition . * * \in { \cal i} ] * * \subseteq [ \ , q\ , ] ] * for any propositions and , denotes the proposition that the propositions and are both true .the truth value of is defined as \equiv [ \ , p\ , ] \cap [ \ , q\ , ] ] , which amounts to the time interval that at least one of and is true .since has the above - mentioned properties and is the infimum operator on , corresponds to the supremum operator on with respect to the set inclusion ordering . 
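A small computational sketch of these notions follows: the happened-before relation is the transitive closure of the per-site order together with the message edges, and the containment test is read here as "q temporally contains p when every process causally comparable with q is also causally comparable with p", which is how the round-trip example above works out. That reading of the (symbol-stripped) characterization, the data format, and all names are assumptions made for illustration.

```python
# Happened-before, causal relationship, and verifiable temporal containment (a sketch).

def happened_before(processes, site_of, index_of, messages):
    """Transitive closure of the per-site order plus the message edges;
    `messages` is a list of (sender_process, receiver_process) pairs."""
    succ = {p: set() for p in processes}
    for a, b in messages:
        succ[a].add(b)
    for p in processes:
        for q in processes:
            if p != q and site_of[p] == site_of[q] and index_of[p] < index_of[q]:
                succ[p].add(q)
    reach = {}
    for p in processes:                       # DFS reachability from each process
        stack, seen = list(succ[p]), set()
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(succ[u])
        reach[p] = seen
    return lambda p, q: q in reach[p]

def causally_related(hb, p, q):
    return p != q and (hb(p, q) or hb(q, p))

def contains(processes, hb, q, p):
    """True when every process causally related to q is causally related to p,
    read here as: the interval of p is contained in the interval of q."""
    return all(causally_related(hb, r, p)
               for r in processes if r != p and causally_related(hb, r, q))

if __name__ == "__main__":
    # round trip: site a sends to site b, b replies to a
    procs = ["a1", "a2", "a3", "b1", "b2", "b3"]
    site = {p: p[0] for p in procs}
    idx = {p: int(p[1]) for p in procs}
    hb = happened_before(procs, site, idx, [("a1", "b2"), ("b2", "a3")])
    print(contains(procs, hb, "a2", "b2"))   # True : a2 temporally contains b2
    print(contains(procs, hb, "b2", "a2"))   # False
```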
note that we have = [\ , p_1 \vee p_2 \vee \ldots\ , ] ] and = \{ q_3 \} ] , while since = \phi ] , we infer = \phi$ ] .the failure of distributivity illustrates the fact that the analysis of spatiotemporal structures deduces non - boolean orthologic when global synchronized clocks are not available .100 birkhoff , g. : 1967 , _ lattice theory _ , 3rd ed ., american mathematical society colloquium publications , vol .xxv , american mathematical society , providence .dalla chiara , m .. l. : 2001 , quantum logic , in d. m. gabbay and f. guenthner , editors , _ handbook of philosophical logic _ ,volume 6 , 2nd ed . ,kluwer , dordrecht , pp .129 - 228 .lamport , l. : 1978 , time , clocks and the ordering of events in a distributed system , _ communications of the acm _ * 21 * ( 7 ) , 558 - 565 .russell , b. : 1926 , _ our knowledge of the external world as a field for scientific method in philosophy _ , revised and reset , george allen and unwin , london .taylor , e. f. , j. a. wheeler : 1992 , _ spacetime physics : an introduction to special relativity _ ,2nd ed . , w. h. freeman and company , new york
|
a logical model of spatiotemporal structures is pictured as a succession of processes in time . one usual way to formalize time structure is to assume the global existence of time points and then collect some of them to form time intervals of processes . under this set - theoretic approach , the logic that governs the processes acquires a boolean structure . however , in a real distributed system or a relativistic universe where the message - passing time between different locations is not negligible , the logic has no choice but to accept time interval instead of time point as a primitive concept . from this modeling process of spatiotemporal structures , orthologic , the most simplified version of quantum logic , emerges naturally .
|
as we retire more and more synchronous machines and replace them with renewable sources interfaced with power electronic devices , the stability of the power grid is jeopardized , which has been recognized as one of the prime concerns by transmission system operators .both in transmission grids as well as in microgrids , low inertia levels together with variable renewable generation lead to large frequency swings .not only are low levels of inertia troublesome , but particularly spatially heterogeneous and time - varying inertia profiles can lead to destabilizing effects , as shown in an interesting two - area case study . it is not surprising that rotational inertia has been recognized as a key ancillary service for power system stability , and a plethora of mechanisms have been proposed for the emulation of virtual ( or synthetic ) inertia through a variety of devices ( ranging from wind turbine control over flywheels to batteries ) .also inertia monitoring and markets have been suggested . in this article, we pursue the questions raised in regarding the detrimental effects of spatially heterogeneous inertia profiles , and how they can be alleviated by virtual inertia emulation throughout the grid .in particular , we are interested in the allocation problem `` where to optimally place the inertia '' ? the problem of inertia allocation has been hinted at before , but we are aware only of the study explicitly addressing the problem . in , the grid is modeled by the linearized swing equations , and eigenvalue damping ratios as well as transient overshoots ( estimated from the system modes ) are chosen as optimization criteria for placing virtual inertia and damping .the resulting problem is non - convex , but a sequence of approximations led to some insightful results . in comparisonto , we focus on network coherency as an alternative performance metric , that is , the amplification of stochastic or impulsive disturbances via a quadratic performance index measured by the norm . 
as performance index ,we choose a classic coherency criterion penalizing angular differences and frequency excursions , which has recently been popularized for consensus and synchronization studies as well as in power system analysis and control .we feel that this performance metric is not only more tractable than spectral metrics , but it is also very meaningful for the problem at hand : it measures the effect of stochastic fluctuations ( caused by loads and/or variable renewable generation ) as well as impulsive events ( such as faults or deterministic frequency errors caused by markets ) and quantifies their amplification by a coherency index directly related to frequency volatility .finally , in comparison to , the damping or droop coefficients are not decision variables in our problem setup , since these are determined by the system physics ( in case of damping ) , the outcome of primary reserve markets ( in case of primary control ) , or scheduled according to cost coefficients , ratings , or grid - code requirements .the contributions of this paper are as follows .we provide a comprehensive modeling and analysis framework for the inertia placement problem in power grids to optimize an coherency index subject to capacity and budget constraints .the optimal inertia placement problem is characteristically non - convex , yet we are able to provide explicit upper and lower bounds on the performance index .additionally , we show that the problem admits an elegant and strictly convex reformulation for a performance index reflecting the effort of primary control which is often advocated as a remedy to low - inertia stability issues . in this case , the optimal inertia placement problem reduces to a standard resource allocation problem , where the cost of each resource is proportional to the ratio of expected disturbance over inertia .a similar simplification of the problem is obtained under some reasonable assumptions on the ratio between the disturbance and the damping coefficient at every node . for the case of a two - area network ,a closed - form global allocation is derived , and a series of observations are discussed .furthermore , we develop a computational approach based on a gradient formula that allows us to find a locally optimal solution for large networks and arbitrary parameters .we show how the combinatorial problem of allocating a limited number of inertia - emulating units can be also incorporated into this numerical method via a sparsity - promoting approach .finally , any system norm such as assumes that the location of the disturbance ( or a distribution thereof ) is known .while empirical fault distributions are usually known based on historical data , the truly problematic faults in power grids are rare events that are poorly captured by any disturbance distribution . to safeguard against such faults, we also present a robust formulation of the inertia allocation problem in which we optimize the norm with respect to the worst possible disturbance .a detailed three - region network has been adopted as case study for the presentation of the proposed method .the numerical results are also illustrated via time - domain simulations , that demonstrate how an optimization - based allocation exhibits superior performance ( in different performance metrics ) compared to heuristic placements , and , perhaps surprisingly , the optimal allocation also uses less effort to emulate inertia . 
from the methodological point of view, this paper extends the performance analysis of second - order consensus systems to non - uniform damping , inertia , and input matrices ( disturbance location ) .this technical contribution is essential for the application that we are considering , as these parameters dictate the optimal inertia allocation in an intertwined way .the remainder of this section introduces some notation .section [ section : problem formulation ] motivates our system model and the coherency performance index .section [ section : optimal inertia allocation ] presents numerical inertia allocation algorithms for general networks and provides explicit results for certain instances of cost functions and problem scenarios .section [ section : case study ] presents a case study on a three - region network accompanied with time - domain simulations and a spectral analysis .finally , section [ section : conclusions ] concludes the paper .[ [ notation ] ] notation + + + + + + + + we denote the -dimensional vectors of all ones and zeros by and .given an index set with cardinality and a real - valued array , we denote by the vector obtained by stacking the scalars and by the associated diagonal matrix .the vector is the -th vector of the canonical basis for .consider a power network modeled by a graph with nodes ( buses ) and edges ( transmission lines ) .we consider a small - signal version of a network - reduced power system model , where passive loads are eliminated via kron reduction , and the network is reduced to the sources with dynamics where and refer to the power input and electrical power output , respectively .if bus is a synchronous machine , then describes the electromechanical swing dynamics for the generator rotor angle , is the generator s rotational inertia , and accounts for frequency damping or primary speed droop control ( neglecting ramping limits ) .if bus connects to a renewable or battery source interfaced with a power electronics inverter operated in grid - forming mode , then is the voltage phase angle , is the droop control coefficient , and accounts for power measurement time constant , a control gain , or arises from virtual inertia emulation through a dedicated controlled device .finally , the dynamics may also arise from frequency - dependent or actively controlled frequency - responsive loads .in general , each bus will host an ensemble of these devices , and the quantities and are lumped parametrizations of their aggregate behavior . 
under the assumptions of constant voltage magnitudes , purely inductive lines , and a small signal approximation , the electrical power output at the terminalsis given by where is the susceptance between nodes .the state space representation of the system ( [ eq : basic])-([eq : elec ] ) is then = \left [ \setlength\arraycolsep{1.5pt}\begin{array}{cc } { 0 } & i \\ -{m}^{-1}l & -{m}^{-1}{d}\\ \end{array } \right]\left [ \setlength\arraycolsep{1.5pt}\begin{array}{cc } \theta\\ \omega\\ \end{array } \right ] + \left [ \setlength\arraycolsep{1.5pt}\begin{array}{cc } 0\\ { m}^{-1}\\ \end{array } \right ] { { p_\text{in}}}\,,\ ] ] where and are the diagonal matrices of inertial and damping / droop coefficients , and is the network laplacian ( or susceptance ) matrix with off - diagonal elements and diagonals .the states are the stacked vectors of angles and frequencies and is the net power input all of which are deviation variables from nominal values .we consider the linear power system model driven by the inputs accounting either for faults or non - zero initial values ( modeled as impulses ) or for random fluctuations in renewables and loads .we are interested in the energy expended in returning to the steady - state configuration , expressed as a quadratic cost of the angle differences and frequency displacements : here , are positive scalars and we assume that the nonnegative scalars induce a connected graph not necessarily identical with the power grid itself .we denote by the matrix , and by the laplacian matrix of the graph induced by the . in this compact notation, would be an example of local error penalization , while penalizes global errors . aside from consensus and synchronization studies the coherency metric has recently also been also used in power system analysis and control . following the interpretation proposed in ,the above metric can represent a generalized energy in synchronous machines .indeed , for ( where are the power line conductances ) and , the metric accounts for the heat losses in the grid lines and the mechanical energy losses in the generators . adopting the state representation introduced in , the performance metric can be rewritten as the time - integral of the performance output } _ { = c } \left [ \setlength\arraycolsep{1.5pt}\begin{array}{cc } \theta \\\omega \end{array}\right ] \ , .\ ] ] in order to model the localization of the disturbances in the grid , we parametrize the input as where is assumed to be known from historical data amongst other sources .we therefore obtain the state space model in the following , we refer to the input / output map , as . if the inputs are dirac impulses , then measures the squared norm of the system .there is a number of interpretations of the norm of a power system .the relevant ones in our context are : 1 .the squared norm of measures the energy amplification , i.e. , the sum of norms of the outputs , for unit impulses at all inputs : these impulses are of strength for each node and can model faults or initial conditions .2 . the squared norm of quantifies the steady - state total variance of the output for a system subjected to unit variance stochastic white noise inputs where denotes the expectation .the white noise inputs can model stochastic fluctuations of renewable generation or loads .the matrix quantifies the probability of occurrence of such fluctuations at each node .in general , the norm of a linear system can be calculated efficiently by solving a linear lyapunov equation . 
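A minimal numerical sketch of this computation is given below: the marginally stable average-angle mode (discussed next) is removed by expressing the angles in a basis orthogonal to the all-ones vector, after which the observability Gramian of the reduced system is obtained from a standard Lyapunov solver and the squared norm is the trace of B transposed times the Gramian times B. The reduction to the orthogonal complement is one standard way to handle the zero mode, chosen here for concreteness; numpy and scipy are assumed, and the 3-bus network data and penalty weights are illustrative.

```python
# Squared coherency (H2) norm of the swing dynamics via a reduced Lyapunov
# equation (a sketch with illustrative data).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, null_space

def psd_sqrt(M):
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def h2_swing(L, m, d, t, N, s):
    """L: network Laplacian; m, d, t: vectors of inertia, damping and disturbance
    strengths; N: Laplacian-like angle penalty; s: vector of frequency penalties."""
    n = len(m)
    Minv = np.diag(1.0 / np.asarray(m, float))
    U = null_space(np.ones((1, n)))            # orthonormal basis of span(1)-perp
    # reduced state x = [U' theta ; omega]: the average-angle mode is removed
    A = np.block([[np.zeros((n - 1, n - 1)), U.T],
                  [-Minv @ L @ U, -Minv @ np.diag(d)]])
    B = np.vstack([np.zeros((n - 1, n)), Minv @ np.diag(np.sqrt(t))])
    C = np.block([[psd_sqrt(N) @ U, np.zeros((n, n))],
                  [np.zeros((n, n - 1)), np.diag(np.sqrt(s))]])
    P = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian
    return float(np.trace(B.T @ P @ B))            # squared H2 norm

if __name__ == "__main__":
    L = np.array([[ 2., -1., -1.],
                  [-1.,  2., -1.],
                  [-1., -1.,  2.]])                 # 3-bus ring, unit susceptances
    m = np.array([1.0, 0.5, 2.0])                   # inertia
    d = np.array([1.0, 1.0, 1.0])                   # damping / droop
    t = np.array([1.0, 1.0, 1.0])                   # disturbance strengths
    print(h2_swing(L, m, d, t, N=L, s=np.ones(3)))
```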
in our casean additional linear constraint is needed to account for the marginally stable and undetectable mode ^\mathsf{t} ] : following the typical derivation of the norm for state - space systems , we have , where is the observability gramian . note fromthat the mode ^\mathsf{t} ] and therefore holds for .the fact that satisfies can be verified by inspection , as it remains to show that the is the unique solution of and . to this end , note that and the rank nullity theorem imply that the kernel of is given by a vector .it can be verified that holds for ^{\mathsf{t}} ] which implies .this fact together with the identity , implies that .then , by using the ring commutativity of the trace , and its invariance with respect to transposition of the argument , we obtain on the other hand , equation ( 2,2 ) of implies that similarly as before we left - multiply by , use trace properties and the commutativity of and , and obtain thus , and together deliver from ( [ eq : tracenew ] ) we obtain the relations which can be further bounded as the structural similarity of and allows us to state upper and lower bounds by rewriting as in .notice that in the bounds proposed in theorem [ theorem : bounds on performance index ] , the network topology described by the laplacian enters only as a constant factor , and is decoupled from the decision variables .moreover , in the case ( short - range error penalty on angles differences ) , this offset term becomes just a function of the grid size : .theorem [ theorem : bounds on performance index ] ( and its proof ) sheds some light on the nature of the optimization problem that we are considering , and in particular on the role played by the mutual relation between disturbance strengths , damping coefficients , their ratios , frequency penalty weights , and the decision variables .these insights are further developed in the next section . in this section, we consider some special choices of the performance metric and some assumptions on the system parameters , which are practically relevant and yield simplified versions of the general optimization problem , enabling in most cases the derivation of closed - form solutions .we first consider the performance index corresponding to the effort of primary control . as a remedy to mitigate low - inertia frequency stability issues ,additional fast - ramping primary control is often put forward .the primary control effort can be accounted for by the integral quadratic cost hence , the effort of primary control mimics the performance where the performance matrices in are chosen as and .this intuitive cost functions allows an insightful simplification of the optimization problem . 
*( primary control effort minimization ) * [ theorem : performance index for primary control cost ] consider the power system model - , the squared norm , and the optimal inertia allocation problem .for a performance output characterizing the cost of primary control : and , the optimization problem can be equivalently restated as the convex problem [ eq : primarycontrol] where , we recall , describes the strength of the disturbance at node .with and , the lyapunov equation together with the constraint is solved explicitly by the performance metric as derived in therefore becomes this concludes the proof .the equivalent convex formulation yields the following important insights .first and foremost , the optimal solution to is unique ( as long as at least one is greater than zero ) and also independent of the network topology and the line susceptances .it depends solely on the location and strength of the disturbance as encoded in the coefficients .for example , if the disturbance is concentrated at a particular node with and for , then the optimal solution is to allocate the maximal inertia at node : . if the capacity constraint is relaxed , the optimal inertia allocation is proportional to the square root of the disturbance .we now consider a different assumption that also allows to derive a similar simplified analysis in other notable cases . *( uniform disturbance - damping ratio ) * [ ass : uniform disturbance - damping ratio ] the ratio is constant for all .notice that the droop coefficients are often scheduled proportionally to the rating of a power source , to guarantee fair power sharing .meanwhile , it is reasonable to expect that the disturbances due to variable renewable fluctuations scale proportionally to the size of the renewable power source .hence , assumption [ ass : uniform disturbance - damping ratio ] can be justified in many practical cases , including of course the case where both damping coefficient and disturbances are uniform across the grid . aside from that , assumption [ ass : uniform disturbance - damping ratio ] may be of general interest since it is common in many studies with a _ spatially invariant _ setting . under this assumption , we have the following result .* ( optimal allocation with uniform disturbance - damping ratio ) * [ theorem : properties of general performance index ] consider the power system model - , the squared norm , and the optimal inertia allocation problem .let assumption [ ass : uniform disturbance - damping ratio ] hold .then the optimization problem can be equivalently restated as the convex problem [ eq : uniformratio] where we recall that is the penalty coefficient for the frequency deviation at node . from assumption [ass : uniform disturbance - damping ratio ] , let be constant for all . then we can rewrite as this is equal , up to the scaling factor , to the left hand side of .we therefore have which is equivalent , up to multiplicative factors and constant offsets , to the cost of the optimization problem . again , as in theorem [ theorem : performance index for primary control cost ] , theorem [ theorem : properties of general performance index ] reduces the original optimization problem to a simple convex problem for which the optimal inertia allocation is _ independent _ of the network topology , and in most cases can be derived as a closed form expression of the problem parameters .while this conclusion can also be drawn from physical intuition , it clearly shows that cost function needs to be chosen insightfully . 
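Since, up to constants, the equivalent convex cost is the sum over buses of the ratio of disturbance strength to allocated inertia, the allocation reduces to a classical resource allocation problem: with only a budget constraint the optimal inertia at each bus is proportional to the square root of its disturbance strength, and with box constraints the solution can be recovered by bisection on the budget multiplier. The sketch below implements this; the numbers are illustrative, a strictly positive lower capacity bound is assumed, and the problem is assumed feasible.

```python
# Resource-allocation solution of  min sum_i t_i/m_i  under budget and capacity
# constraints (a sketch).
import numpy as np

def budget_only_allocation(t, budget):
    """Closed form with only the budget constraint: m_i proportional to sqrt(t_i)."""
    s = np.sqrt(t)
    return budget * s / s.sum()

def box_budget_allocation(t, budget, m_min, m_max, iters=100):
    """min sum t_i/m_i  s.t.  sum m_i <= budget,  m_min <= m_i <= m_max.
    Stationarity gives m_i = clip(sqrt(t_i / lam), m_min, m_max); bisect on lam."""
    lo, hi = 1e-12, t.max() / m_min.min() ** 2 + 1.0
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        m = np.clip(np.sqrt(t / lam), m_min, m_max)
        if m.sum() > budget:
            lo = lam          # allocation too large -> increase the multiplier
        else:
            hi = lam
    return np.clip(np.sqrt(t / hi), m_min, m_max)

if __name__ == "__main__":
    t = np.array([4.0, 1.0, 1.0])                        # disturbance strengths
    print(budget_only_allocation(t, budget=8.0))         # ~[4, 2, 2]
    print(box_budget_allocation(t, 8.0, np.full(3, 1.0), np.full(3, 3.0)))
```

On the example, the unconstrained square-root rule would place inertia 4 at the first bus; the capacity bound caps it at 3 and the bisection redistributes the remaining budget over the other buses.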
under assumption[ ass : uniform disturbance - damping ratio ] , we can also identify an interesting special case .assume that the frequency penalty is chosen proportional to inertial coefficients , for some : this choice corresponds to penalizing the change in kinetic energy a reasonable and standard penalty in power systems .we have the subsequent result , that follows directly by evaluating for this specific choice of ( which also includes the case where no frequency penalty is considered , i.e. , and therefore only angle differences are penalized ) . *( kinetic energy penalization with uniform disturbance - damping ratio ) * let assumption [ ass : uniform disturbance - damping ratio ] hold , and let the penalty on the frequency deviations be proportional to the allocated inertia , that is , .then the performance metric is independent of the inertia allocation , and assumes the form where , for all , is the uniform disturbance - damping ratio . in this subsection ,we focus on a two - area power grid as in to obtain some insight on the nature of this optimization problem .we also highlight the role of the ratios , which play a prominent role in assumption [ ass : uniform disturbance - damping ratio ] and the bounds . in the case of a two - area system , it is possible to derive an analytical solution of the lyapunov equation , as a closed form function of the vector of inertia allocations .we thus obtain an explicit expression for the cost as where in the two - area case reduces to a rational function of polynomials of orders 4 in the numerator and the denominator , in terms of inertial coefficients .as the explicit expression is more convoluted than insightful , we will not show it here , but only report the following statements which can be verified by a simple but cumbersome analysis of the rational function : 1 .the problem admits a unique minimizer .2 . for sufficiently large bounds , the budget constraint becomes active , that is , the optimizers satisfy . in this case, can be eliminated , and can be reduced to a scalar problem .3 . in the absence of capacity constraints and for identical ratios and frequency penalties , the optimal inertial coefficients are identical ( as predicted by theorem [ theorem : properties of general performance index ] ) . if , then ( see the example in figure [ fig:2nodetrace ] , where we eliminated ) .4 . for sufficiently uniform ratios , the problem is strongly convex .we observe that the cost function is fairly flat over the feasible set ( see figure [ fig:2nodetrace ] ) .5 . for strongly dissimilar ratios ,we observe a less flat cost function . if the disturbance affects only one node , for example , and , strong convexity is lost .ratios for the two - area case.,width=326 ] + , with non - identical damping coefficients , with disturbances inputs varying from = [ 0,1]],title="fig : " ] + from the above facts , we conclude that the input scaling factors play a fundamental role in the determination of the optimal inertia allocation . 
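the qualitative observations above ( unique minimizer , active budget constraint , flatness of the cost ) can be reproduced numerically without the closed - form expression . the sketch below builds a reduced two - machine swing model in the coordinates ( angle difference , two frequencies ) , which removes the marginally stable uniform - angle mode , and sweeps the h2 - type cost trace(b^t p b) along the budget line m_1 + m_2 = m_total ; damping , susceptance , disturbance strengths and penalty weights are illustrative guesses , not the values behind figure [ fig:2nodetrace ] .

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def two_area_cost(m1, m2, d=(1.0, 1.0), b=5.0, t=(1.0, 0.5), s=(1.0, 1.0), a=1.0):
    """H2-type cost for a reduced two-machine swing model.

    States x = (theta1 - theta2, omega1, omega2); the marginally stable
    uniform-angle mode is removed by working with the angle difference.
    Disturbances of strength sqrt(t_i) drive the frequency equations; the
    output penalises the angle difference (weight a) and frequencies (weights s_i).
    """
    d1, d2 = d
    A = np.array([[0.0,      1.0,      -1.0],
                  [-b / m1, -d1 / m1,   0.0],
                  [ b / m2,  0.0,      -d2 / m2]])
    B = np.array([[0.0,                 0.0],
                  [np.sqrt(t[0]) / m1,  0.0],
                  [0.0,                 np.sqrt(t[1]) / m2]])
    C = np.diag([np.sqrt(a), np.sqrt(s[0]), np.sqrt(s[1])])
    P = solve_continuous_lyapunov(A.T, -C.T @ C)      # A^T P + P A + C^T C = 0
    return np.trace(B.T @ P @ B)

m_total = 4.0
grid = np.linspace(0.2, m_total - 0.2, 41)
costs = [two_area_cost(m1, m_total - m1) for m1 in grid]
print("cost range along the budget line : %.4f ... %.4f" % (min(costs), max(costs)))
print("approximate minimiser            : m1 = %.2f , m2 = %.2f"
      % (grid[int(np.argmin(costs))], m_total - grid[int(np.argmin(costs))]))
```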
to obtain a more complete picture ,we linearly vary the disturbance input matrices from = [ 0,1] ] , that is , from a disturbance localized at node 2 to a disturbance localized at node 1 .the resulting optimizers are displayed in figure [ fig_simb ] showing that inertia is allocated dominantly at the site of the disturbance , which is in line with previous case studies .notice also that depending on the value of the budget , the capacity constraints , and the ratios , the budget constraint may be active or not .thus , perhaps surprisingly , sometimes not all inertia resources are allocated .overall , the two - area case paints a surprisingly complex picture . in subsections [ subsec : analytic closed - form results ] and[ subsec : two - area ] , we considered a subset of scenarios and cost functions that allowed the derivation of tractable reformulations and solutions of the inertia allocation problem . in this section, we consider the optimization problem in its full generality .similarly as in section [ subsec : two - area ] and in , we denote by the solution to the lyapunov equation , and we express the cost function as a function of the vector of inertia allocations . in the following ,we derive an efficient algorithm for the computation of the explicit gradient of in . in general ,most computational approaches can be sped up tremendously if an explicit gradient is available . in our case ,an additional significant benefit of having a gradient of is that the large - scale set of nonlinear ( in the decision variables ) lyapunov equations can be eliminated and included into the gradient information . in the following , we provide an algorithm that achieves so , using the routine , which returns the matrix that solves together with . \smallskip ] * ( gradient computation ) * [ theorem : gradient computation ] consider the objective function , where is a function of via the lyapunov equation .the objective function is differentiable for , and its gradient at is given by algorithm [ algorithm ] .the proof of theorem [ theorem : gradient computation ] is partially inspired by and relies on a perturbation analysis of the lyapunov equation combined with taylor and power series expansions . in order to compute the gradient of at , we make use of the relation where is the directional derivative of in the direction , defined as whenever this limit exists . fromwe have that where is a solution of the lyapunov equation and where by we denote the system matrix defined in , evaluated at .the matrices and viewed as functions of scalar can thus be expanded in a taylor series around as with coefficients and , . to compute the coefficients of the taylor expansion in ,we recall the scalar series expansion of around : using the shorthand , we therefore have accordingly , the solution to the lyapunov equation can be expanded in a power series as and therefore the lyapunov equation becomes where we dropped the subscript for readability . by collecting terms associated with powers of , we obtain two lyapunov equations determining and : [ lyap0 ] by the same reasoning as used for equation , the first lyapunov equation is feasible with a positive semidefinite satisfying . the second lyapunov equation is feasible by analogous arguments . 
finally ,by using together with and , we obtain where and from , it follows that as defined in , thereby implicitly establishing differentiability of .this concludes the proof , as the algorithm computes each component of the gradient by using the relation with the special choice of for . in this subsection ,we focus on the planning problem of optimally allocating virtual inertia when economic reasons suggest that only a limited number of virtual inertia devices should be deployed ( rather than at every grid bus ) . since this problem is generally combinatorial , we solve a modified optimal allocation problem , where an additional -regularization penalty is imposed , in order to promote a sparse solution . the regularized optimal inertia allocation problem is then [ eq : sparmin ] where trades off the sparsity penalty and the original objective function . as in the allocations are lower bounded by a positive , the objective can be rewritten as : observe that the regularization term in the cost is linear and differentiable .thus , problem fits well into our gradient computation algorithm , and a solution can be determined within the fold of algorithm [ algorithm ] by incorporating the penalty term .likewise , our analytic results in section [ subsec : analytic closed - form results ] can be re - derived for the cost function .we highlight the utility of the performance - sparsity trade - off in section [ section : case study ] .thus far we have assumed knowledge of the disturbance strengths encoded in the matrix . while empirical disturbance distributions from historical data are generally available to system operators , the truly problematic and devastating faults in power systems are rare events that are poorly predicted by any ( empirical ) distribution .given this inherent uncertainty , it is desirable to obtain an inertia allocation profile which is optimal even in presence of the most detrimental disturbance .this problem fits into the domain of robust optimization or a zero - sum game between the power system operator and the adversarial disturbance .the robust inertia allocation problem can then be formulated as the min - max optimization problem [ eq : genminprobfb] where = and where is convex hull of the set of possible disturbances with a non - empty interior . as a special instance , consider where we normalized the disturbances by .recall from , that the objective is linear in , and we can write it as , where for . hence , by strong duality , we can rewrite the inner maximization problem as the equivalent dual minimization problem where and are the dual variables associated with the constraints .the min - max problem is then equivalentto : [ eq : genminprobfb-2] the minimization problem has a convex objective and constraints , barring .however , we already have the gradient of the individual elements , which can be computed from algorithm [ algorithm ] as by substituting and for .the availability of the gradient of this set of non - linear equality constraints considerably speeds up the computation of the minimizer . by direct inspection or computation( see section [ section : case study ] ) we observe that the robust optimal allocation profile tends to make the cost indifferent with respect to the location of the disturbance , as is customary for similar classes of min - max ( adversarial ) optimization problems . 
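returning to the gradient computation of theorem [ theorem : gradient computation ] , the same quantity can also be obtained from a standard adjoint identity : if the cost is trace(b^t p b) with a(m)^t p + p a(m) + q(m) = 0 , then each partial derivative equals trace[ ( da_k^t p + p da_k + dq_k ) l ] , where l is the controllability gramian of ( a , b ) , so a single extra lyapunov solve yields the whole gradient . the sketch below verifies this on a random stable system with an affine parameterization against finite differences ; it is a generic illustration of the mechanism behind algorithm [ algorithm ] , not a reimplementation of it , and in the actual inertia problem a , b and q all depend on the allocation , which contributes analogous trace terms .

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n = 5

# illustrative affine parameterisation A(m) = A0 + m*A1, Hurwitz around m0
G = rng.standard_normal((n, n))
A0 = G - (np.abs(np.linalg.eigvals(G)).max() + 1.0) * np.eye(n)
A1 = 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 2))
Q = np.eye(n)

def cost(m):
    P = solve_continuous_lyapunov((A0 + m * A1).T, -Q)     # A^T P + P A + Q = 0
    return np.trace(B.T @ P @ B)

def grad(m):
    A = A0 + m * A1
    P = solve_continuous_lyapunov(A.T, -Q)
    L = solve_continuous_lyapunov(A, -B @ B.T)              # A L + L A^T + B B^T = 0
    S = A1.T @ P + P @ A1                                   # perturbation of the Lyapunov residual
    return np.trace(S @ L)

m0, eps = 0.3, 1e-6
fd = (cost(m0 + eps) - cost(m0 - eps)) / (2 * eps)
print("adjoint gradient :", grad(m0))
print("finite difference:", fd)
```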
for the special case of _ primary control effort minimization _ as in theorem [ theorem : performance index for primary control cost ] , the min - max problem simplifies to [ eq : genminprobfb-3] in this case , the robust optimal allocation profile tends to make the inertia allocations equal for all s , inducing a _ valley - filling _ strategy that allocates the entire inertia budget and prioritizes buses with lowest inertia first .in this section , we investigate a 12-bus case study illustrated in figure [ fig_sim ] .the system parameters are based on a modified two - area system from ( * ? ? ?* example 12.6 ) with an additional third area , as introduced in .after kron reduction of the passive load buses , we obtain a system of 9 buses , corresponding to the nodes where inertia can be allocated . + we investigate this example computationally using algorithm [ algorithm ] to drive standard gradient - based optimization routines , while highlighting parallels to our analytic results .we analyze different parametric scenarios and compare the inertia allocation and the performance of the proposed numerical optimization ( which is a locally optimal solution ) with two plausible heuristics that one may deduce from the conclusions in and the special cases discussed following theorem [ theorem : performance index for primary control cost ] : namely the uniform allocation of the available budget , in the absence of capacity constraints , that is , ; or the allocation of the maximum inertia allowed by the bus capacity constraints in the absence of a budget constraint , that is , ( which we set as ) .[ [ uniform - disturbance ] ] uniform disturbance + + + + + + + + + + + + + + + + + + + we first assume that the disturbance affects all nodes identically , . in figure[ fig : grid1 ] we consider the case where there are only capacity constraints at each bus , and we compare the different allocations vis - - vis : the initial inertia , a locally optimal solution , and the maximum inertia allocation .figure [ fig : grid2 ] compares the results in the case where there is only a budget constraint on the total allocation .we compare the initial inertia , the locally optimal allocation , and the uniform placement .[ [ localized - disturbance ] ] localized disturbance + + + + + + + + + + + + + + + + + + + + + we then consider the scenario where a localized disturbance affects a particular node , in this example , node 4 with . as in figures [ fig : grid1 ] and [ fig : grid2 ] , a comparison of the different inertial allocations and the performance values is presented in figures [ fig : grid3 ] and [ fig : grid4 ] for the cases of capacity and budget constraints . for ( a ) uniform disturbance , ( b ) localized disturbance at node 4 with capacity constraints .0% performance loss corresponds to the optimal allocation , 100% performance loss corresponds to no additional inertia allocation . ]we draw the following conclusions from the above test cases some of which are perhaps surprising and counterintuitive . 1 .first , our locally optimal solution achieves the best performance among the different heuristics in all scenarios .2 . in the case of uniform disturbances with only capacity constraints on the individual buses ( figure [ fig : grid1 ] ) , the optimal solution does not correspond to allocating the maximum possible inertia at every bus .3 . 
in the case of uniform disturbances with only the total budget constraint ( figure [ fig : grid2 ] ) ,the optimal solution is remarkably different from the uniform allocation of inertia at the different nodes .4 . in case of uniform disturbances ,the performance improvement with respect to the initial allocation and the different heuristics is modest .this confirms the intuition developed for the two - area case ( section [ subsec : two - area ] ) regarding the flatness of the cost function .5 . in stark contrast is the case of a localized disturbance , where adding inertia dominantly to disturbed node is an optimal choice in comparison to heuristic placements .the latter is also in line with the results presented for the two - area case and the closed - form results in theorem [ theorem : performance index for primary control cost ] .6 . in the case of a localized disturbance ,adding inertia to all undisturbed nodes may be detrimental for the performance , even for the same ( maximal ) allocation of inertia at the disturbed node , as shown in figure [ fig : grid3 ] .the optimal robust allocation approach proposed in section [ subsection : robustness ] is investigated in figures [ fig : robust1 ] and [ fig : robust2 ] .these figures depict the optimal inertia profiles which are robust to disturbance location and furthermore have a significantly lower worst - case cost compared to the heuristics .the sparsity - promoting approach proposed in section [ subsection : l1_norm ] is examined in figure [ fig : gridn ] . for a uniform disturbance without a sparsity penalty ,inertia is allocated at all nine buses of the network . for an allocation at only seven buses is optimal with hardly a 1.3% degradation in performance . for sparser allocations ,the trade - off with performance becomes more relevant .the sparsity effect is significantly pronounced for localized disturbances .the optimal solution for for a localized disturbance at node 4 , requires allocating inertia at buses ( 4 , 6 ) .however , for , we observe an allocation of inertia only at bus 4 does not affect the performance significantly , while being preferable from an economic perspective .figure [ fig : grid7 ] shows the time domain responses to a localized impulse at node 4 , modeling a post - fault condition .subfigure ( a ) ( respectively , ( b ) ) show that the optimal inertia allocation according to the proposed performance criteria is also superior in terms of peak ( overshoot ) for angle differences ( respectively , frequencies ) .subfigure ( c ) displays the frequency response at node 5 of the system .note from the scale of this plot that the deviations as potentially insignificant .similar comments apply to other signals which are not displayed here .finally subfigure ( d ) shows the control effort expended by the virtual inertia emulation at the disturbed bus .perhaps surprisingly , observe that the optimal allocation requires the least control effort .figure [ fig : grid8 ] plots the eigenvalue spectrum for different inertia profiles .the case of no additional allocation , , marginally outperforms with respect to both the best damping asymptote ( most damped nonzero eigenvalues ) as well as the best damping ratio ( narrowest cone ) . 
as apparent from the time - domain plots in figure [ fig : grid7 ] , this case also leads to the worst time - domain performance ( with respect to overshoots ) compared to the optimal allocation , which has a slightly poorer damping asymptote and damping ratio . these observations reveal that the spectrum holds only partial information , and they advocate the use of the h2 norm as opposed to spectral performance metrics ( as in ) . we considered the problem of placing virtual inertia in power grids based on an h2 norm performance metric reflecting network coherency . this formulation gives rise to a large - scale and non - convex optimization program . for certain cost functions , problem instances , and in the low - dimensional two - area case , we could derive closed - form solutions yielding some , possibly surprising , insights . next , we developed a computational approach based on an explicit gradient formulation and validated our results on a three - area network . suitable time - domain simulations demonstrated the efficacy of our locally optimal inertia allocations over intuitive heuristics . we also examined the problem of allocating a finite number of virtual inertia units via a sparsity - promoting regularization . all of our results showcased that the optimal inertia allocation is strongly dependent on the location of the disturbance . our computational and analytic results are well aligned and suggest insightful strategies for the optimal allocation of virtual inertia . we envision that these results will find application in stabilizing low - inertia grids through strategically placed virtual inertia units . as part of our future work , we consider the extension to more detailed system models and specifications , as well as a comparison with the results in . the authors wish to thank mihailo jovanovic , andreas ulbig , theodor borsche , dominic gross , and ulrich münz for their comments on the problem setup and analysis methods .
|
a major transition in the operation of electric power grids is the replacement of synchronous machines by distributed generation connected via power electronic converters . the accompanying `` loss of rotational inertia '' and the fluctuations by renewable sources jeopardize the system stability , as testified by the ever - growing number of frequency incidents . as a remedy , numerous studies demonstrate how virtual inertia can be emulated through various devices , but few of them address the question of `` where '' to place this inertia . it is however strongly believed that the placement of virtual inertia hugely impacts system efficiency , as demonstrated by recent case studies . in this article , we carry out a comprehensive analysis in an attempt to address the optimal inertia placement problem . we consider a linear network - reduced power system model along with an performance metric accounting for the network coherency . the optimal inertia placement problem turns out to be non - convex , yet we provide a set of closed - form global optimality results for particular problem instances as well as a computational approach resulting in locally optimal solutions . further , we also consider the robust inertia allocation problem , wherein the optimization is carried out accounting for the worst - case disturbance location . we illustrate our results with a three - region power grid case study and compare our locally optimal solution with different placement heuristics in terms of different performance metrics .
|
since the atomic force microscope ( afm ) was first demonstrated , it has become an indispensable tool for probing the physical characteristics of microscopic systems .working by hooke s law , = , the tip of the afm is displaced proportional to an applied force , transducing forces into a detectable signal .this has been used to great effect for surface imaging , where interatomic forces between an afm tip and substrate are measured as raster images of the surface structures down to the atomic scale and beyond .the ability to use afms in liquid environments has led to their widespread use in biological applications , such as live imaging of biological specimens , and non - scanning applications like studying receptor - ligand binding of surface proteins and deciphering the mechanics of proteins through unfolding experiments . for both aqueous and high - speed afm ( hs - afm ) ,it is advantageous to use low - mass , high - frequency cantilevers , yet current technology is limited in detecting the motion of such cantilevers . in biological applications , where the sample is often continuously moving, the time resolution of the measurement process is critical .high - speed afm has enabled the dynamics of molecular systems to be visualized at speeds of up to 80 ms for a pixel image .this has permitted the real - time imaging of individual motor proteins , proteins diffusing and interacting in lipid bilayers , and the folding of synthetic dna origami structures .when operated dynamically , the maximum time resolution of the measurement is limited by the frequencies of the structural modes of the cantilever . in the simple harmonic approximationthese are , where and are the effective spring constant and mass of a particular mode . since is chosen to optimize the displacement response of the afm to the application s characteristic forces , minimizing the dimensions , and therefore , grants access to the regime of both delicate force sensing and exceptional time resolution through increased mechanical frequencies .( a ) sem image of the optomechanical device with a 20 optical microdisk evanescently coupled to an 8 cantilever .coordinates are aligned such that is parallel to the axis of the cantilever , points along in - plane motion of the cantilever , and points out - of - plane .( b ) 10 , 4 and ( c ) 5 , 2 ; scale bars 5 all panels .( d)-(f ) fem simulations reveal the first three modes of the 8 cantilever as an example : an out - of - plane mode , an in - plane mode and a second out - of - plane mode .mechanical modes of the shorter cantilevers are similar .colour scale indicates relative displacement ., width=4 ] common methods to detect the displacement of a cantilever include reflecting a laser beam off the cantilever onto a position sensitive photodetector , termed optical beam deflection ( obd ) , or recombining the reflected beam interferometrically .an important benchmark of a displacement detection system is the displacement noise floor : the noise corresponding to the minimum displacement resolvable by the detection system .obd has obtained displacement noise floors of 5 , while an all - fiber interferometer has achieved noise floors of 2 , both with standard low - frequency cantilevers ( 300 khz ) . 
however , these detection methods scale poorly as the dimensions of the nanomechanical devices fall below the spot size of the laser beam ( 1 ) , creating an effective limit on cantilever sizes ( and frequencies ) that has already been reached .the technique of optomechanics offers unprecedented displacement sensitivity while being well suited for nanoscale devices . by spatially localizing optical cavity modes with a mechanical resonator , motional degrees of freedomare coupled to frequency ( or phase ) shifts of the optical modes .monitoring the transmission of laser light coupled to the optical cavity then provides sensitive readout of the mechanical motion , exemplified by experiments measuring the motion of nanomechanical resonators to the standard quantum limit ( sql)the theoretical noise floor of a continuous measurement determined from dynamical backaction and photodetector shot noise as well as observing quantum behavior .here , three sizes of low mass , mhz frequency , optomechanical devices suited to afm applications are presented .they consist of cantilever - style nanomechanical resonators coupled to the whispering gallery modes of optical microdisks and are commercially fabricated from a 215 nm thick silicon layer of a silicon - on - insulator ( soi ) wafer , ensuring simple fabrication with automatic and reproducible optomechanical cavity formation .the cantilevers have lengths of 8 , 4 , and 2 , and are on average 400 nm wide , broadening towards the end to allow binding of molecules to the cantilever , for pulling experiments , without compromising the optical cavity quality ( for 20 disk ) .they couple to disks of 20 , 10 and 5 respectively , and scanning electron microscopy ( sem ) images , and finite element method ( fem ) simulations of the first three structural modes of the 8 cantilever , are shown in figure [ fig.devs ] .we envision single molecule force ( folding / unfolding ) experiments as the ideal afm application for these devices , as this would not degrade the optical of micro disk due to a sample , nor would a separate tip need to be attached .l @ l * 7c & & & & ( air ) & ( air ) & ( air ) & + & ] & ] & ] + 2 & out - of - plane & 140 & 2.2 & 20.1 & 3,600 ( 120 ) & 20 ( 18 ) & 290 ( 1,500 ) & 2,000 + 2 & in - plane & 180 & 3.3 & 21.4 & 5,000 & 120 & 280 & 340 + 4 & out - of - plane & 240 & 0.30 & 5.43 & 4,300 ( 35 ) & 2 ( 3 ) & 180 ( 2,000 ) & 720 + 4 & in - plane & 260 & 0.48 & 7.04 & 4,400 & 300 & 200 & 6 + 8 & out - of - plane & 610 & 0.087 & 1.90 & 6,500 ( 22 ) & 18 ( 17 ) & 135 ( 2,300 ) & 150 + 8 & in - plane & 610 & 0.11 & 2.18 & 7,800 & 390 & 132 & 7 + 8 & 2 out - of - plane & 610 & 13 & 23.2 & 5,600 & 55 & 510 & 57 + despite the slightly larger displacement noise floor , the design presented here has a number of advantages over other optomechanical cantilevers .our devices were fabricated at a commercial foundry ( imec ) using deep uv photolithography with a feature resolution of approximately 130 nm .this allows for the straightforward , high - throughput fabrication of many such devices for commercial applications , which time - consuming electron beam lithography does not provide .further , our devices showcase resonators with two nearly orthogonal modes : in - plane ( figure [ fig.devs](d ) ) and out - of - plane ( figure [ fig.devs](e ) ) .the out - of - plane mode is a better fit to traditional raster scanning or molecule pulling experiments , as current generations of afms also operate out - of - plane .additionally , these orthogonal modes 
occur at similar frequencies , providing multi - dimensional force sensing capabilities a feature especially suited for applications such as protein unfolding experiments , where the unfolding energy landscapes can be highly dependent on the direction of applied force . to measure the motion of our device s cantilever , single mode light from a tunable diode laser ( new focus tlb-6330 , 1550 - 1630 nm ) is passed through a dimpled , tapered optical fiber placed on the top edge of the optical microdisk opposite to the mechanical device ( figure [ fig.how](b ) ) using three axes of nanopositioning stages . by slightly detuning the laser from an optical resonance of the disk ,modulations in the frequency of the optical modes induced by the movement of the mechanical resonator are transduced to a voltage signal from a photodetector ( pd ) measuring the transmission through the tapered fiber .the high - frequency spectral density of the pd voltage ( ) is analyzed to identify peaks corresponding to cantilever modes .a lock - in amplifier ( zurich h2fli ) is then used to measure across a narrow bandwidth at the mechanical peak frequency , and the polarization and frequency of the tunable laser is iteratively optimized to maximize the mechanical signal ( see figure [ fig.how ] ) .finally , the lock in amplifier is discretely stepped to measure across the desired frequency range . by analyzing the power spectral density of laser transmission through the tapered fiber coupled to the microdisk , mechanical motion of the cantilevers can be observed .for each device we were able to identify peaks in the voltage spectral density corresponding to thermodynamic actuation of the fundamental in - plane and out - of - plane modes , while the second out - of - plane mode ( figure [ fig.devs](f ) ) was additionally visible for the 8 cantilever ( figure [ fig.peaks](e ) , ( f ) ) .mode identity was verified by directional piezo actuation and comparison to fem simulations .the voltage spectral density , , was calibrated thermomechanically to displacement spectral density ( ) of the cantilever s tip .displacement noise floors of 2 observed for the out - of - plane motion of the 4 , equivalent to the best noise floors observed using traditional afm detection methods , yet for radically smaller , lighter , and higher - frequency cantilevers . the linear susceptibility , , relates displacements of the cantilever s tip , , to applied forces , . by dividing the measured displacement spectral density by ,the observed force spectral density can be found ( figure [ fig.peaks](b ) , ( d ) , ( f ) , ( h ) ) .the thermal forces on the cantilever impose a minimum force sensitivity , and in all cases in which the thermomechanical motion of the cantilever was detected , the force noise reached a minimum at the cantilever resonant frequency equal to the thermal noise , . 
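the thermal limit quoted here follows from the standard fluctuation - dissipation expression s_f = 4 k_b t m_eff omega_0 / q ( conventions differ by constant factors ) . as a rough consistency check , the snippet below evaluates it for the 8 cantilever out - of - plane parameters as read from table [ thetable ] ( m_eff of about 610 fg , f_0 of about 1.90 mhz , q of about 6,500 in vacuum ) and reproduces the roughly 130 aN/sqrt(Hz) level reported in the text .

```python
import numpy as np

k_B, T = 1.380649e-23, 300.0            # J/K, assumed room temperature

def thermal_force_noise(m_eff, f0, Q):
    """sqrt(4 k_B T m_eff omega_0 / Q), in N/sqrt(Hz) (one-sided convention)."""
    return np.sqrt(4.0 * k_B * T * m_eff * 2.0 * np.pi * f0 / Q)

# 8 um cantilever, out-of-plane mode, vacuum values as read from Table 1
m_eff, f0, Q = 610e-18, 1.90e6, 6500.0
print("thermal force noise : %.0f aN/sqrt(Hz)" % (thermal_force_noise(m_eff, f0, Q) / 1e-18))
```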
for both the in - plane and out - of - plane modes of the 8 , a force sensitivity of achieved , figure [ fig.peaks](b ) , ( d ) .this is less than a factor of two higher than a recent hybrid device with a very high mechanical quality factor .while reached regardless of detector noise , low displacement noise floors broadened the frequency range over which thermally limited force noise was observed .therefore , small displacement noise floors , achievable with optomechanics , allow for more accurate , larger bandwidth ( faster ) force measurements .a similar effect could be achieved for optomechanical devices such as these through dissipative feedback with optical cooling , broadening the width of the peaks without affecting , and allowing wide bandwidth measurements at the thermal noise level for fast scanning .operating these devices at low bath temperatures would reduce thermal noise on the cantilevers , and a thermal force noise of 1 10 mk is expected to be detectable with the observed noise floors of the 8 s out - of - plane motion , making them excellent candidates for low - temperature precision force measurements .device geometry plays a large part in determining the thermal forces on the cantilevers , and minimizing , where is the full width half max of the spectral peak , will optimize force sensing ability .the extremely low effective masses of these devices , ranging here from 140 to 610 fg ( table [ thetable ] ) , has enabled delicate force sensing despite the modest quality factors of the cantilevers ( 5000 in vacuum ) .when using the cantilevers to detect forces in liquid or air , thermal force noise on the cantilever is drastically increased due to additional damping of the cantilever , lowering the quality factor of the devices ( table [ thetable ] ) .higher frequency cantilevers are less affected by viscous dissipation and therefore exhibit better quality factors and force sensitivity .this is the case for our devices , as the smallest cantilevers are dampened the least in air ( see table [ thetable ] , figure 3(g)-(h ) , and appendix d ) .( a ) tilted sem image of a device with a 4 ; scale bar 500 nm .side walls have a slope of approximately 10 from vertical , creating asymmetries in the optomechanical coupling .( b ) fem simulation of an optical mode in cylindrically symmetric coordinates .color bar indicates the relative log magnitude of the electric field .( c ) the whispering gallery cavity modes used to optomechanically detect the cantilever s motion provide an additional restoring force . by increasingthe blue - detuned laser power dropped into the optical disk used to detect the 4 s motion , stiffening of the cantilever is observed .the frequency of the out - of - plane motion increased by % , while the in - plane motion showed negligible effect due to its smaller , table [ thetable ] .error bars from numerical fits are smaller than the marker size , and dashed lines are guides to the eye . , width=4 ] the small displacement noise floors achieved with these devices are a direct result of the efficiency in which displacements of the cantilever are transduced into frequency changes in the optical disk .this efficiency can be described to first order by the optomechanical coupling coefficient , , where is the optical mode resonance frequency .cantilevers curve with the microdisk to optimize by increasing overlap between the optical whispering gallery modes and the cantilever s motion ( table [ thetable ] ) . 
in all devices ,the out - of - plane motion of the cantilever had considerably better optomechanical coupling than the in - plane motion .the apparent symmetry of the out - of - plane motion would suggest a very small linear optomechanical coupling for the out - of - plane mode , however slanted sidewalls of the devices due to fabrication ( figure [ fig.couple](a),(b ) ) , and the placement of the dimpled fiber touching the top of the optical disk introduce sufficient asymmetries to explain the large linear optomechanical coupling observed . finally , optomechanics provides schemes to introduce feedback and control over afm cantilevers , such as optomechanical heating and cooling of the mechanical modes , and optical gradient forces .for example , we show in figure 4(c ) the optical spring effect can be used to tune the cantilever frequency _ in situ _ by controlling the optical gradient force through the laser power used for detection .such techniques introduce new methods for manipulating optomechanical afms for increased functionality .optomechanical afm provides the path to ultra - sensitive molecular force probe spectroscopy , hs - afm , and other afm applications .we have demonstrated optomechanical detection of sub - picogram effective mass multidimensional afm cantilevers that are commercially fabricated , with displacement noise floors down to 2 , and 130 an sensitivity at room temperature .challenges remain , including selective attachment of relevant molecules , yet we envision that extension of the devices presented here to aqueous environments will open new doors in high - speed , high - resolution molecular force measurements .the authors wish to thank university of alberta , faculty of science ; the canada foundation for innovation ; the natural sciences and engineering research council , canada ; and alberta innovates technology futures for their generous support of this research .we also thank greg popowich for technical support , along with mark freeman , wayne hiebert and paul barclay for helpful discussions .to calibrate the voltage spectral density , , measured from a photodetector into a displacement spectral density , , of the movement of the cantilever s tip , the thermal forces on the cantilever can be used . by way of the fluctuation - dissipation theorem ,the thermal forces acting on a cantilever s mode are constant across frequencies with a spectral density of where is the boltzmann constant , is the system temperature , is the mode s resonance frequency , is the quality factor , and is the effective mass of the cantilever ( described further below ) . using the linear susceptibility of a damped harmonic oscillator , },\ ] ] the theoretical displacement spectral density corresponding to thermomechanical actuation of the cantilever modeis known : .further , assuming the voltage measured is linearly proportional to cantilever displacement , and the noise from the measurement apparatus is constant across frequencies of interest , a theoretical fit to the voltage spectrum can be found , where is the voltage noise floor density and is a conversion factor between volts and meters , ie . . substituting in the thermal displacement noise , because it is not possible to differentiate both and from the fit , is calculated beforehand from measured cantilever dimensions . 
by modelling the structural modes of the cantilever using the finite element method ( fem ), the mode shape of interest , , which is the mechanical displacement of the mode from it s undeformed position , , normalized to the maximum displacement , can be determined and the effective mass can be computed by carrying out an integral over the volume of the cantilever , by fitting the measured to ( [ eq.fit ] ) , the resonance frequency ( ) , quality factor ( ) , noise floor ( ) , and the voltage - displacement conversion factor ( ) used to calibrate the spectrum , can be determined .calibrated displacement spectral densities and force spectral densities for the cantilevers discussed above are shown in appendix d.by performing thermomechanical calibration , the voltage - displacement conversion factor , , was found . since linearly converts displacements of the cantilever ( ) to volts from the photodetector ( ) , .examining the optomechanical detection mechanism , the displacement to voltage transduction can be divided into two steps , displacement to optical cavity frequency ( ) shifts , and to transmission ( voltage ) transduction .therefore , with help of the chain rule , ) . here is the optomechanical coupling coefficient , . by calculating the slope of laser transmission _ vs. _ laser frequency at the frequency of lightthe mechanical signal was detected at ( _ e.g. _ from figure 2(c ) in the main text ) , can be determined , enabling calculation of .the optical power coupled into the microdisk was estimated by splitting off a small portion ( 10% ) of the light before the vacuum chamber and sending it to a power meter ( thorlabs pm100d ) , instead of the wavelength meter ( figure 2(a ) ) .power meter readings were calibrated to photodector voltages in the absence of an optical resonator to compensate for the wavelength - dependent response of the photodetector and intrinsic resonances in the fibers used .the total optical power in the fiber was then calculated by monitoring the power at the small split - off as the attenuation was modified using the variable attenuator .the net power dropped into the disk was found by comparing the transmission through the dimpled fiber before and after coupling to the optomechanical devices .* spectral peaks for the 8 cantilever .* * a * , spectral displacement density and * b * , spectral force density corresponding to thermal motion of the out - of - plane mode at atmospheric pressure .* c * , * d * out - of plane motion in vacuum , and * e * , * f * , in - plane motion in vacuum ., width=576 ] * spectral peaks for the 4 cantilever . * * a * , spectral displacement density and * b * , spectral force density corresponding to thermal motion of the out - of - plane mode at atmospheric pressure . * c * , * d * out - of plane motion in vacuum , and * e * , * f * , in - plane motion in vacuum . ,width=576 ] * spectral peaks for the 2 cantilever . * * a * , spectral displacement density and * b * , spectral force density corresponding to thermal motion of the out - of - plane mode at atmospheric pressure . * c * , * d * out - of plane motion in vacuum , and * e * , * f * , in - plane motion in vacuum . 
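as a closing illustration of the calibration procedure of this appendix , the sketch below generates a synthetic photodetector voltage spectrum from the damped - oscillator thermal model , with a flat detection noise floor and bins that mimic an averaged periodogram , and fits it with scipy to recover the resonance frequency , quality factor , noise floor and the volts - to - metres conversion factor ; the effective mass is assumed known from fem , and every numerical value is invented for illustration rather than taken from the measured devices .

```python
import numpy as np
from scipy.optimize import curve_fit

k_B, T = 1.380649e-23, 300.0
m_eff = 610e-18                          # assumed known beforehand from FEM (kg)

def s_x_thermal(f, f0, Q):
    """thermal displacement PSD of a damped oscillator, m^2/Hz (one-sided convention)"""
    w, w0 = 2 * np.pi * f, 2 * np.pi * f0
    return (4 * k_B * T * w0 / (Q * m_eff)) / ((w0**2 - w**2)**2 + (w0 * w / Q)**2)

def s_v_model(f, f0, Q, alpha, floor):
    """measured voltage PSD: calibrated thermal peak plus a flat detection noise floor"""
    return alpha**2 * s_x_thermal(f, f0, Q) + floor

# synthetic 'measurement' standing in for real photodetector data
rng = np.random.default_rng(1)
f = np.linspace(1.88e6, 1.92e6, 2000)
true = dict(f0=1.90e6, Q=6500.0, alpha=2e7, floor=1e-11)           # alpha in V/m
data = s_v_model(f, **true) * rng.gamma(50.0, 1.0 / 50.0, f.size)  # ~50 averaged periodograms

p0 = (1.9001e6, 5000.0, 1e7, 5e-12)
popt, _ = curve_fit(s_v_model, f, data, p0=p0, maxfev=20000)
print("fitted f0 = %.4f MHz, Q = %.0f, alpha = %.2e V/m, floor = %.1e V^2/Hz"
      % (popt[0] / 1e6, popt[1], popt[2], popt[3]))
```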
|
high - frequency atomic force microscopy has enabled extraordinary new science through large bandwidth , high speed measurements of atomic and molecular structures . however , traditional optical detection schemes restrict the dimensions , and therefore the frequency , of the cantilever - ultimately setting a limit to the time resolution of experiments . here we demonstrate optomechanical detection of low - mass , high - frequency nanomechanical cantilevers ( up to 20 mhz ) that surpass these limits , anticipating their use for single - molecule force measurements . these cantilevers achieve 2 noise floors , and force sensitivity down to 132 an . furthermore , the ability to resolve both in - plane and out - of - plane motion of our cantilevers opens the door for ultrasensitive multidimensional force spectroscopy , and optomechanical interactions , such as tuning of the cantilever frequency _ in situ _ , provide new opportunities in high - speed , high - resolution experiments .
|
though quantum entanglement is a concept which has attracted much of the attention of physicists working in various fields , still , there remains room for further progress on its understanding .one of the main open problems is the efficient detection and characterization of multipartite entanglement of density matrices representing open quantum systems undergoing non - unitary evolution .all experimentally addressable information about a quantum physical system is summarized in its density matrix .we focus on a multipartite quantum system , which comprises a finite number of parts numerated by index , each of which has the hilbert space of a finite dimensionality , whence is the dimensionality of the hilbert space of the entire system .this system assembly of parts , is called entangled ( or inseparable ) if and only if its density matrix can not be caste as a statistical sum ( , ) of various ( ) direct products of the density matrices of pure states of each part .this condition provides the most general case of entangled systems opposite to a separable quantum system comprised of statistically independent elements , where eq.([eq0 ] ) holds as an equality .many approaches have been developed so far aiming to answer the question whether or not a density matrix is separable . concerning exact analytic results , up to now ,there is no method applicable to the multipartite problem , and we believe that such a solution does not exist at all . an algorithmic solution to the `` decision '' problem associated with separability has been conjectured to be a np hard problem but valuable progress has been done ( mainly on the bi - separability problem ) in approaches - where semidefinite programming is merged with analytic criteria . in this workwe provide a geometric point of view on the problem of inseparability that suggests an efficient solution based on linear programming . 
employing simple geometric arguments we suggest an algorithm that results to a _unique _ decomposition of the density matrix as where is , what we call in this work , the _ separable component _, the _ essentially entangled _ part which can not have any separable states as components and is a positive number in the range ] .the requirement of the unit trace in this representation means that the inner product of a vector representing a density matrix and a vector representing the unit matrix equals to unity .henceforth we call this manifold `` liouville vector space '' .furthermore , the density matrix of a pure state has rank one , which implies that the length of the vector corresponding to a pure state , equals to unity .the density matrix manifold is thus a convex hull at the unit - length length vectors having unit projection on the unity matrix .a natural basis exists for such a vector space suggested by the properly normalized generators of the unitary group , including the unity .this basis allows one to cast a density matrix of a quantum system as with ] , but in contrast to the bloch vector of pure quantum states , these states do not cover all the surface of the hypersphere of dimension but are confined at a manifold of lower dimensionality , .this can be easily understood when the characteristic polynomial =\lambda ^{n}+c_{1}(\left\{r_{i}\right\ } ) \lambda ^{n-1}+c_{2}(\left\{r_{i}\right\ } ) \lambda ^{n-2}+\ldots + c_{n}(\left\ { r_{i}\right\ } ) ] ) even though for .one can find the maximum separable and the essentially entangled components of an arbitrary density matrix straightforwardly with the help of the linear programming algorithm applied to the convex hull of general pure states and the `` polytope '' of pure separable quantum states .the main obstacle on this way is a high dimensionality of the corresponding liouville vector space , which makes intractable the direct approach within any approximation .in fact , even for the simplest multipartite system of three qubits , the dimensionality ( ) of the density matrix space is , such that even for the rather low - accuracy approximation attributing just points per dimension , one encounters a polytope of already vertices . here, we suggest a way to crucially decrease the number of the vertices that enter as samples in the algorithm and , in consequence , the computational complexity of the procedure .we first notice the fact that the solution of the problem and , in general , any convex decomposition of the form eq.([eq1 ] ) , allows for at most non - zero coefficients and . this observation can be formally justified by a theorem of carathodory as mentioned in . in the limit pure states are the vertices associated with the corners of the facets corresponding to the solution , as illustrated in fig .[ fig1 ] ( e)-(f ) , while other vertices can be discarded. 
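the liouville - vector representation described above is straightforward to reproduce numerically . the sketch below constructs an orthonormal hermitian basis ( identity plus generalized gell - mann matrices ) for two qubits , expands a density matrix in it , and checks both the reconstruction and the fact that the squared length of the coefficient vector equals the purity , so that pure states indeed have unit length ; the example state and the two - qubit dimension are chosen only for illustration , and the same coordinates are what the linear program described next operates on .

```python
import numpy as np

def hermitian_basis(N):
    """orthonormal Hermitian basis: identity/sqrt(N) plus generalized Gell-Mann matrices,
    normalised so that tr(G_i G_j) = delta_ij"""
    basis = [np.eye(N, dtype=complex) / np.sqrt(N)]
    for j in range(N):
        for k in range(j + 1, N):
            sym = np.zeros((N, N), dtype=complex)
            sym[j, k] = sym[k, j] = 1 / np.sqrt(2)
            asym = np.zeros((N, N), dtype=complex)
            asym[j, k], asym[k, j] = -1j / np.sqrt(2), 1j / np.sqrt(2)
            basis += [sym, asym]
    for d in range(1, N):
        diag = np.zeros((N, N), dtype=complex)
        diag[:d, :d] = np.eye(d)
        diag[d, d] = -d
        basis.append(diag / np.sqrt(d * (d + 1)))
    return basis

N = 4                                        # two qubits, chosen for illustration
G = hermitian_basis(N)

# example mixed state: 70% singlet + 30% maximally mixed
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = 0.7 * np.outer(psi, psi.conj()) + 0.3 * np.eye(N) / N

r = np.array([np.trace(rho @ g).real for g in G])     # Liouville/Bloch coordinates
rho_back = sum(c * g for c, g in zip(r, G))           # reconstruction of the density matrix

print("reconstruction error :", np.abs(rho_back - rho).max())
print("|r|^2 =", np.sum(r**2), "  tr(rho^2) =", np.trace(rho @ rho).real)
# a pure state would give |r| = 1, i.e. pure states sit on the unit sphere of this space
```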
therefore , at first step we may randomly take product states , general states and in order to ensure the algorithmic stability , complement this set by the eigenvectors of the given density matrix .we find the solution of the linear programming problem , which typically has complexity , and thereby identify at most product states and general states with nonzero coefficients and , respectively .the linear constraint imposed on the algorithm is the minimization of and the solution provided is a ` local ' minimum , for the given set of vectors fed to the algorithm .our aim is to find the global minimum value of that is equal to and to this end we create an iterative optimization loop which guides us there . at the second and subsequent steps ,we take the product states resulting from the solution of the optimization problem at the former step and by applying to each of them randomly chosen local transformations we generate new product states .we also generate new entangled states by applying random generic transformations to each of the entangled states obtained at the former step . here numerates generators of the group while mean generators of the subgroup of local transformation .random parameters and are normally distributed with width gradually decreasing with the number of the iteration step .we again solve the linear programming problem for vertices at these two new polytopes and iteratively repeat all the procedure till the result converges .note that each next step , the presence of the solution of the former step of the loop is essential in order to guarantee an outcome from the linear programming algorithm .the set of the eigenvectors of the density matrix plays this role for the first step .numerical inspection shows that the final results of the algorithm i.e. , the product component and the essentially entangled part , eqs.([sep])-([ent ] ) , are always the same for different runs . the algorithm described above concerns the case of full separability of a state or else , the identification of the essentially -entangled component .the same steps , can be applied if we make a repartition of the initial system and consider -separability of the state with . furthermore ,if the set of separable states is enlarged to include other special classes of pure states e.g. states of the class , then one can apply the idea of the algorithm for revealing the classification of mixed multipartite entangled state as the one introduced in for three qubits .we would like to add here that for the specific case of three qubits in mixed state , a lot of progress has been recently made on the classification of entanglement via analytic criteria and efficient algorithms - .finally it is important to mention that linear programming scales polynomially with the dimension of the vector space under consideration in the general case but not always still a zero - measure of non - polynomial cases may exist . 
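the first pass of the loop described above is a plain linear program and can be sketched compactly for two qubits : sample random pure product states and random generic pure states , append the eigenvectors of the given density matrix to guarantee feasibility , and minimize the total weight assigned to the generic states under the linear constraint that the mixture reproduces the density matrix . the value returned is an upper bound on the essentially entangled weight which the subsequent iterations with random local and general rotations would tighten ; the sample sizes , the test state and the use of scipy 's lp solver are illustrative choices .

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N = 4                                         # two qubits

def herm_to_vec(M):
    """real coordinates of a Hermitian matrix: diagonal, real and imaginary upper triangle"""
    iu = np.triu_indices(N, 1)
    return np.concatenate([np.diag(M).real, M[iu].real, M[iu].imag])

def random_pure(dim):
    v = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# test state: isotropic mixture of a singlet with the identity (separable for this weight,
# so the true essentially entangled weight is zero)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = 0.2 * np.outer(psi, psi.conj()) + 0.8 * np.eye(N) / N

# vertices: random pure product states, random generic pure states, and the eigenvectors
# of rho (the latter guarantee that the linear program is feasible)
products = [np.kron(random_pure(2), random_pure(2)) for _ in range(2000)]
generic = [random_pure(N) for _ in range(200)] + [v for v in np.linalg.eigh(rho)[1].T]

A_eq = np.array([herm_to_vec(np.outer(v, v.conj())) for v in products + generic]).T
b_eq = herm_to_vec(rho)
c = np.r_[np.zeros(len(products)), np.ones(len(generic))]   # minimise weight on generic states

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print("upper bound on the essentially entangled weight 1 - p :", res.fun)
```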
in consequence ,the same additional ` rule ' has to be applied to the proposed algorithm and the identification of the special cases where the algorithm becomes non - polynomial is an interesting open problem , not resolved in this work .however , on a practical level even in this case , a small random variation of the initial density matrix brings the problem back to a polynomially complexity .we may claim that all information relevant to entanglement is contained in the essentially entangled part of the density matrix .though this is not the main object of this work , we make some simple suggestions for analyzing entanglement properties of employing previous results about characterization of entanglement for pure states . for pure quantum states ,entanglement is directly related to the factorizability at state vectors , and therefore one can characterize entanglement by identifying the orbit of local transformations for a given state .this orbit can be marked by a complete set of polynomial invariants or alternatively by the coefficients of the tanglemeter of a given state .the state defined as \left\vert 0\right\rangle ] describing the time evolution of the density matrix of the assembly , can be averaged over these rapid fluctuations , yielding the following lindblad master equation -i\sum_{i , j=1}^{3}\overline{\delta f_{i}(t)\delta f_{j}(t)}\left [ \widehat{\lambda }_ { i},\left [ \widehat{\lambda } _ { j},\widehat{\rho } \right ] \right ] , \label{eq1d}\ ] ] where the upper bar denotes time average .substitution to this master equation in the liouville representation of the density matrix in terms of the generators of the unitary group , yields a system of linear , first - order differential equations \right\ } -i\mathcal{r}_{k , m}\right ) r_{m } \label{eq2d } \\\mathcal{r}_{k , m } & = & \sum_{i , j=1}^{3}\overline{\delta f_{i}(t)\delta f_{j}(t)% } \mathrm{tr}\left\ { \widehat{g}_{k}^{12}\left [ \widehat{\lambda } _ { i},\left[\widehat{\lambda }_ { j},\widehat{g}_{m}^{12}\right ] \right ] \right\ } \notag\end{aligned}\ ] ] for the real vector components .straightforward analytic solution of eq.([eq2d ] ) gives oscillations with time for some of the coefficients while others die off with rates determined by the relaxation operator .a considerable amount of work on the understanding of the dynamics of entanglement has been performed so far and we refer an interested reader to for a complete review and reference list . in fig . [ fig2 ] we graphically represent a generic solution for this example , as a spiral in the liouville space , gradually approaching a stationary solution .this picture also provides a complementary point of view on the phenomenon of sudden death and revival of entanglement . with the course of time, we expect the essentially entangled part to oscillate between different subspaces and eventually to vanish for sometime when the density matrix is passing inside the polytope of separable states as it is illustrated in fig .the revival of entanglement is marked by the exit of the density matrix from the polytope .this graphical representation is justified by the calculations which we present in the following .we now solve the model eq.([eq2d ] ) for a set of given values for presented in fig .[ fig2 ] , and reconstruct the density matrix with the help of eq.([re ] ) .we summarize the results of calculations in fig .[ fig3 ] . 
in fig .[ fig3 ] ( a ) we plot the purity ] , vanishes implying that the state enters inside the polytope of separable states .this physical situation describes a sudden death and sudden revival of entanglement a phenomenon - which has been studied extensively with other methods .our geometric decomposition offers additional information on the origin of this phenomenon , see fig .[ fig2 ] . in order to analyze the entanglement properties of the essentially entangled component we first note that for the chosen model system in the vast majority of the time steps , there is a dominant eigenvector for with a corresponding eigenvalue ,see fig .[ fig3 ] ( d ) .therefore , for this specific example and assigned parameters , it makes sense just to analyze entanglement properties of , whenever the condition is satisfied , and to conclude from this analysis the entanglement properties of .naturally , this analysis together with the weight , give all the information necessary to describe entanglement in .we analyze the entanglement properties of with the help of the method of nilpotent polynomials . in the appendix we provide an explicit method for deriving the general expression for the tanglemeter of a wavevector describing an assembly of a three - level system and two two - level systems : with being positive numbers and being complex .the matrix representation of the nilpotent variables ( operators ) , , is also provided in the appendix .concerning now the physical meaning of the coefficients .the coefficients of the tanglemeter even though are not entanglement monotones in the strict sense , these are invariant under the action of local transformations and the presence of any non - zero term in the tanglemeter ensures the presence of entanglement .more precisely , the coefficient ensures the presence of genuine tripartite entanglement in the state while the rest of the coefficients are related to bipartite entanglement . in fig .[ fig3 ] ( e)-(h ) we plot those coefficients which are positive , and we observe that these oscillate with time without dissipation . the same holds for the real and imaginary parts of the complex ones not shown on the figure . with this example , in addition to the death and revival of entanglement , we observe two interesting phenomena which need more case study in order to decide whether are specific to this example or general .the first is the presence of a dominant eigenvector in the essential entangled component and the second is the oscillations without dissipation of the entanglement characteristics of the essential entangled component .in this work we have studied a concept related to entanglement of mixed states namely the essentially entangled component of a mixed multipartite state and more important , we have suggested an efficient algorithm for its identification .the essentially entangled component is the complementary part to the best separable approximation introduced in and this naturally contains all the entanglement of the density matrix .we analyze some properties of the essentially entangled components and we suggest methods for characterizing its entanglement content . our main tool is the accustomed geometric description of mixed quantum states in the spirit of bloch vector representation , which results from the decomposition of a density matrix over the generators of the relevant group . 
we have shown that pure states are not everywhere on the surface of this hypersphere , in the contract to the bloch vector , and that the convex hull of pure states from a convex `` body '' inside the sphere .the convex hull of separable states forms a convex `` polytope '' inside the `` body '' of general states . as a consequencethe entangled states inside the body and outside the polytope can be represented as sum of a separable state on the surface of the polytope and an essentially entangled component located on the surface of the `` body '' .this geometric picture gives the guidance for constructing the algorithm and for analyzing the properties of the essentially entangled component .the latter being located on the surface of the `` body '' , form there sets of lower dimensions , such that the rank of the relevant density matrix does not exceed a number which depends on the dimensions of the total system , and on its chosen partition . finally , at a particular example we study the dynamics of an open quantum system and we reconstruct the time trajectory of the decomposed density matrix inside the convex `` body '' .sudden death and sudden birth of entanglement can be seen as the results of crossing of the of the trajectory of the density matrix with the surface of the `` polytope '' of separable states .there are some other interesting phenomena appearing in this example but these still need further studies to lead to general conclusions . concerning possible applications of the results .the algorithm introduced in this work scales polynomially with the dimension of the system in the general case , and it can be employed to study open questions about entanglement in mixed states .for instance , this can be applied straightforwardly to address the question of the relative volume of separable states over entangled mixed states as function of the total purity of the system and the total dimension of the system .an answer to this question can serve to the evaluation of emerging quantum technologies and their quantum limits .moreover , the essentially entangled component containing all entanglement properties of the density matrix may also provide new directions to entanglement detection and entanglement distillation techniques .va acknowledges stimulating and useful discussions with mikhail tsvasman and sergey pirogov and the hospitality accorded to him at laboratoire j .- v .poncelet cnrs .va and am are thankful to jens siewert for indicating to them important related works .am acknowledges financial support from the ministry of education and science of the republic of kazakhstan ( contract ) ._ the maximum rank of an essentially entangled component is , where is the dimension of the cartan subgroup of the group of all transformations on the state and is the dimension of cartan ( sub)subgroup generating only local transformations . _consider a density matrix and its decomposition to the essentially entangled and separable part . 
since corresponds to a minimum value of all possible weights ,we conclude that no and product vector exist such that is a positive matrix .considering now the essentially entangled subspace spanned by the eigenvectors with non - zero eigenvalues of with , this condition means that no product state exists in .indeed , for the case where with for every one identifies the vector orthogonal to the subspace of eigenvectors which makes and therefore extremality implies that no product state is orthogonal to the orthogonal compliment of spanned by the eigenvectors of with zero eigenvalues and . in other words , in order to find such a state we have to satisfy equations with for a product state given by specification of its parametersthis is impossible when , which determines the maximum rank of .the system under consideration consists of the two modes of the field interacting with a three - level atom .the hilbert space thus is of dimension , a direct product of the spaces of two two - level systems ( qubits ) and of one three level system ( qutrit ) . in the standard computational basisa state vector of the system is expressed as or alternatively using the nilpotent creation operators the next step that should be performed is the application of all the available local transformations ( , , ) on the given state in order to construct the corresponding canonic state which marks the orbit of local transformations . to simplify the procedure ,we apply the local transformations on a given in the following order : _ ( a ) _ we first apply local operations generated by the operators and we require that the polulation of the reference level is getting maximum . under this conditionthe populations of the levels : are vanishing .the final step for arriving to the tanglemeter of the state is to take the logarithm of the polynomial on the nilpotent variables , in eq.([f ] ) .it is easy to show that with , _etc_. l. amico , r. fazio , a. osterloh , and v. vedral rev .phys . * 80 * , 517 ( 2008 ) .r. horodecki , p. horodecki , m. horodecki , and k. horodecki , rev .phys . * 81 * , 865 ( 2009 ) .l. aolita , f. de melo , and l. davidovich , rep .prog . phys . *78 * , 042001 ( 2015 ) .l. gurvits , in stoc 03 : procedding of the thirty - fifth annual acm symposium on theory of computing ( acm press , new york , 2003 ) , pp .a. c. doherty , p. a. parrilo and f. m. spedalieri , phys .* 88 * , 187904 ( 2002 ) . h. j. woerdeman , phys . rev .a * 67 * , 010303(r ) ( 2003 ) .l. m. ioannou , b. c. travaglione , d. c. cheung and a. k. ekert , phys .a * 70 * , 060303(r ) ( 2004 ) .f. hulpke and d. bru j. phys .a : math . gen . * 38 * , 5573 ( 2005 ) . f. m. spedalieri , phys . rev .a * 76 * , 032318 ( 2007 ) .l. m. ioannou , quant .* 7 * , 335 ( 2007 ) .a. peres , phys .lett . * 77 * , 1413 ( 1996 ) ; m. horodecki , p. horodecki and r. horodecki , physics letters a * 223 * , 1 - 8 ( 1996 ) .m. lewenstein and a. sanpera , phys .* 80 * , 2261 ( 1998 ) .w. dr , g. vidal and j. i. cirac , phys .a * 62 * , 062314 ( 2000 ) .a. acin , d. bruss , m. lewenstein , and a. sanpera , phys .87 * , 040401 ( 2001 ) .r. lohmayer , a. osterloh , j. siewert , and a. uhlmann , phys .lett . * 97 * , 260502 ( 2006 ) . b. jungnitsch , t. moroder and o. ghne , phys .lett . * 106 * , 190502 ( 2011 ) .s. rodriques , n. datta , and p. love , phys .rev . a * 90 * , 012340 ( 2014 ) .t. yu and j. h. eberly , phys .* 93 * , 140404 ( 2004 ) ; j. h. eberly and t. yu , science * 316 * , 555 ( 2007 ) .b. v. fine , f. mintert and a. 
buchleitner , phys .b * 71 * , 153105 ( 2005 ) . c. e. lopez , g. romero , f. lastra , e. solano and j. c. retamal , phys . rev .* 101 * , 080503 ( 2008 ) .k. zyczkowski , p. horodecki , a. sanpera , and m. lewenstein , phys .a * 58 * , 883 ( 1998 ) .ghne and g. tth , phys . rep . *474 * , 1 ( 2009 ) . c. h. bennett , g. brassard , s. popescu , b. schumacher , j. a. smolin , and w. k. wootters , phys .. lett . * 76 * , 722 ( 1996 ) .
|
we introduce , by geometric means , a density matrix decomposition of a multipartite quantum system of finite dimension into two density matrices : a separable one , also known as the best separable approximation , and an essentially entangled one , which contains no product - state components . we show that this convex decomposition , which solves the separability problem , can be achieved in practice with the help of an algorithm based on linear programming , which in the general case scales polynomially with the dimension of the multipartite system . furthermore , we suggest methods for analyzing the multipartite entanglement content of the essentially entangled component and derive analytically an upper bound for its rank . we illustrate the algorithm with an example of a composite system of total dimension undergoing loss of coherence due to classical noise , and we trace the time evolution of its essentially entangled component . we suggest a `` geometric '' description of entanglement dynamics and show how it explains the well - known phenomena of sudden death and revival of multipartite entanglement .
|
since the beginning of the last century the non - homogeneous spatial properties of solar activity have been studied extensively . these early investigations conjectured initially that the longitudinal distribution of sunspot groups or sunspot numbers shows non - homogeneous behaviour .these analyses concluded that there are preferred longitudes , where solar activity concentrates .later , different approximations and assumptions were applied to understand the essence of this phenomenon .the topic soon became controversial ( see e.g. * ? ? ?* ; * ? ? ?overall , three approaches can be distinguished .the first approach is the quasi - rigid structure model by .this model describes a constantly rotating frame which carries the persistent domains of activity . applied an autocorrelation statistical method based on long - term sunspot number data . applied period - analysis to the greenwich photoheliographics results ( gpr ) .these studies concluded that the angular velocity of the quasi - rigid rotating frame varies .the angular velocity depends on the solar cycle , but during one cycle the angular velocity is constant . the second approach , promoted by e.g. and discovered the active nest and defined it as a small and isolated area on the solar surface . here , the enhanced longitudinal activities are considered as individual entities .these isolated entities can be absent for several rotations .the third group of models assumes a migrating activity in the carrington coordinate system . found persistent als under the influence of the differential rotation . and introduced a dynamic reference frame. this frame describes the longitudinal migration of active longitude in carrington coordinate system and the frame has a similar dynamics to differential rotation . concluded that the migration of the enhanced activity is just apparent . in the dynamic reference frame ,the rotation of the active longitude remains constant and the active longitude itself is a persistent quasi - rigidly rotating phenomenon . proposed that a seemingly migrating al may occur as a result of interaction between the equatorward propagating dynamo wave and a quasi - rigidly rotating non - axisymmetric active zone .the role of differential rotation is also controversy .various studies concluded that the differential rotation is not the reason of the migration of the al . in our previous work ( i.e. , hereafter gy16 ) , we found evidence supporting the third group of models . moreover , the migration of als does not appear to correspond very well to the form of the 11-year cycle as suggested by previous studies .the half - width of the active longitudinal belt is fairly narrow during moderate activity but is wider at maximum activity .we also found that the al is not always identifiable .several studies suggested that the spatial distributions of eruptive solar phenomena also show non - axisymmetric properties . analysed the coordinates of energetic solar flares based of 5 years of time period and concluded that longitudinal spatial distribution is non - homogeneous . concluded that the dominant and co - dominant al contains 80% of c- and x - flares . 
in gy16, we conducted a similar study based on four solar cycles and we did not find significant co - dominant activity ; instead , we found that only the dominant al contains 60% of the solar flares .the flares and cmes could occur independently of each other .numerous cmes have associated flares but several non - flaring filament lift - offs also lead to cme .furthermore , in the case of the stealth cmes there are no easily identifiable signatures to locate the source of the eruption on the solar surface . hence , separate flare - cme spatial distribution investigations are justified .we used the soho / lasco halo cme catalog by the http://cdaw.gsfc.nasa.gov/cme_list/halo/halo.html[cdaw data centre ] .the cme catalog spans over 20 years , i.e. between 1996 and 2016 .this is the most extensive catalog that contains the source location of cmes .only halo cmes are reported , i.e. their angular width is 360 degrees in c3 coronagraph field of view . describes the catalog in great details .the cme source is defined as the centre of the associated active region .the source of the cme is identified using soho eit running difference images .later , the ability of the stereo mission to observe the backside of sun was used to identify the source of back - sided halo cmes .this catalog also provides the space speed of cmes which is the actual speed with which the cme propagates in the interplanetary space . the plane of sky speed obtained from the single soho viewpoint , converted into space speed using the cone model .the http://fenyi.solarobs.unideb.hu/dpd/ [ debrecen photoheliographic data ] ] ( dpd ) sunspot catalogue is used for estimating the longitudinal position of the al .the catalog provides information about the date of observation , position and area for each sunspot .the precision of the position is 0.1 heliographic degrees and the estimated accuracy of the area measurement is percent .the first step of our identification procedure is to divide the solar surface by 18 equally sliced longitudinal belts .hence , one bin equals to a zone with width .we take into account all sunspot groups from the moment when they reach their maximum area .this filtering criteria is chosen for the following reasons : firstly , if we select all sunspot groups at every moment of time then the statistics will be biased by the long - lived sunspot groups .secondly , the maximum area of the sunspot groups is a well - defined and easily identifiable moment .this is different from the used practice of considering only the first appearance of each sunspot group .let us define the matrix by : here , the area of all sunspot groups ( ) is summed up in each longitudinal bin for each cr . is divided by the summarised area over the entire solar surface ( ) .the range of must be always between and , depending on the local appearance of the activity . in the case of ,all of the flux emergence takes place in one single longitudinal strip .a significance level threshold is applied to filter the noise .moving average is also applied for data smoothing with a time - window of 3 crs .we standardised the matrix ( defined in eq [ w ] ) by removing the mean of the data and scaling to unit variance .then , cluster analysis was performed for grouping the obtained significant peaks . 
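the construction of the activity matrix and its smoothing can be sketched as follows ; this is a minimal python illustration , not the authors' pipeline . the column names of the catalogue , the rescaling of longitudes into 18 bins of 20 degrees , and the significance threshold of 2 standard deviations are assumptions made only for the example ( the threshold actually used above is not recoverable from the extracted text ) .

```python
import numpy as np
import pandas as pd

N_BINS = 18                      # 20-degree Carrington longitude bins

def activity_matrix(df):
    """Fraction of total sunspot-group area per longitude bin and Carrington rotation.
    `df` is assumed to hold one row per sunspot group at the moment of its maximum
    area, with columns 'cr', 'longitude' (deg) and 'area'."""
    df = df.copy()
    df["bin"] = (df["longitude"] // (360 // N_BINS)).astype(int) % N_BINS
    area = df.pivot_table(index="cr", columns="bin", values="area",
                          aggfunc="sum", fill_value=0.0)
    area = area.reindex(columns=range(N_BINS), fill_value=0.0)
    return area.div(area.sum(axis=1), axis=0)   # normalise by total area per CR

def smooth_and_standardise(w, window=3):
    """3-rotation moving average followed by zero-mean, unit-variance scaling
    (here applied globally; the exact scaling used by the authors may differ)."""
    w_smooth = w.rolling(window=window, center=True, min_periods=1).mean()
    return (w_smooth - w_smooth.values.mean()) / w_smooth.values.std()

# toy usage with a fabricated catalogue
rng = np.random.default_rng(0)
toy = pd.DataFrame({"cr": rng.integers(1940, 2020, 500),
                    "longitude": rng.uniform(0, 360, 500),
                    "area": rng.lognormal(3.0, 1.0, 500)})
w_std = smooth_and_standardise(activity_matrix(toy))
significant = w_std > 2.0        # illustrative significance threshold (assumed)
```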
here, the dbscan clustering algorithm was chosen which is a density - based algorithm .the method groups together points that are relatively closely packed together in a high - density region and it marks outlier points that stand alone in low - density regions ( for details see ) .the parameter epsilon ( ) defines the maximum distance between two points to be considered to be in the same group .the parameter m ( ) specifies the desired minimum cluster size .clusters , containing less than three points , were omitted .the longitudinal location of the clusters ( ) represent the position of the al .panel of figure [ al ] shows an example of the initial identification steps outlined above .the sample time period covers 6 years , and it corresponds to 80 cr between cr and cr .the quantity is represented by the shades with blue colour .the dark blue regions denote the significance ( ) presence of activity .the brighter shades stand for a weaker manifestation of activity ( and ) .the grey squares shows the sunspot group clusters . in panel of figure[ al ] , the carrington longitudes of the most significant cluster is transformed into carrington phase for each cr : the range of the quantity must be between and where represents the entire circumference of the sun . in panel of figure [ al ] , panel is repeated three times so that we are able to track the migration of the activity thought the phases .to the data panels and , we applied a polynomial least squares fitting based on multiple models .linear , quadratic , cubic and higher - order polynomial models were tested . the quadratic or parabolic regression ( )shows the best goodness of fit , hence this model is chosen .table [ coefficient ] shows the coefficients and uncertainties .the shape of migration clearly follows parabolic - shaped path as found in several earlier studies .[ cols= " < , < , < , < , < " , ] {figure2_a.pdf } & \includegraphics[width=77mm]{figure2_b.pdf } \\\includegraphics[width=77mm]{figure2_c.pdf } & \includegraphics[width=77mm]{figure2_d.pdf } \end{array} ] , and is divided by equal bins ( ) .the range of ] shows the direction of the maximum variance of the data ( ) , i.e where the data is most spread out .based on the result of the pca we performed principal component regression ( pcr ) . 
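a minimal sketch of the clustering and of the parabolic fit of the migration path is given below , assuming scikit-learn's dbscan implementation and a simple numpy polynomial fit . the rescaling of the ( carrington rotation , phase ) pairs and the values of eps and min_samples are illustrative assumptions ; the actual parameter values used above are not recoverable from the extracted text .

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_peaks(cr, phase, eps=0.8, min_samples=3):
    """Group significant activity peaks with DBSCAN; points are (CR, phase)
    pairs rescaled to comparable units (crude rescaling, assumed)."""
    X = np.column_stack([(cr - cr.min()) / 10.0, phase])
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)   # -1 marks outliers

def parabolic_migration(cr, phase):
    """Least-squares quadratic fit phase(CR) = a*CR^2 + b*CR + c, as used for
    the migration path of the active longitude."""
    return np.poly1d(np.polyfit(cr, phase, deg=2))

# toy usage with a synthetic, parabola-like migration plus noise
rng = np.random.default_rng(1)
cr = np.arange(1940, 2020, dtype=float)
phase = 0.00005 * (cr - 1940) ** 2 + 0.002 * (cr - 1940) + 0.3 \
        + rng.normal(0, 0.02, cr.size)
labels = cluster_peaks(cr, phase)
fit = parabolic_migration(cr[labels != -1], phase[labels != -1])
print("fitted coefficients:", fit.coeffs)
```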
in figure[ tilt_stat ] , the solid black line is the regression fit and the grey halo represents a standard deviation of the sample along the regression line .the difference between the samples of two hemispheres is statistically insignificant , therefore the regression was applied to the data of both hemispheres .the obtained statistics suggests that there is a clear relationship between the tilt angle and the separateness parameter of the sunspot groups .in this section , the connection between the cme occurrence and al is revealed .panel of figure [ cmespatial ] shows the kernel probability density function of the longitudinal distribution of cme occurrence .this statistics is based on data from both the northern and southern hemispheres .there is only one significant peak visible around .besides this remarkable peak , there is a long plateau with some insignificant local peaks .only one more peak , above the significance level at , is present , but with a relatively weak activity when compered to the first peak .a random - generated control group is also used in this statistic .the longitudinal position of al now is a random position .this test was inspired by who expressed a critical view on the identification method of employed for al . in our study, we applied the methodology introduced by , who reconstructed the distribution of al with random sunspot longitude data .the kde plot of the control group does not show any peaks .this homogeneous distribution means that al identification does not cause false significant peaks , which would affect the results .panel of figure [ cmespatial ] shows the cumulative distribution of the above - defined spatial distributions .the blue and red lines have a steep increasing phase between values of and followed by a less steep increasing trend .these results allow us to estimate that most of cmes ( around ) occur in a belt around the position of al .hence , the width of the longitudinal belt of cme occurrences is equal to the width of the longitudinal belt of solar flare occurrences ( gy16 ) .the black line is the cumulative distribution obtained from the analysis applied to random longitudinal positions .this distribution would only contain of cmes .this latter finding means that al plays a significant role in the spatial distribution of cme occurrences . and ( apparent velocity of cme , upper panel ) , ( space velocity of cme , lower panel ) .the shade of the grey colour represent the probability density .the significant islands are indicated by blue ( ) , dark green ( ) and bright green ( ) colours .the northern and souther hemispheres are not distinguished from each other.,width=321 ] let us now consider the apparent and space velocities ( and ) of cme events .two - dimensional kernel density estimations are applied with an axis - aligned bi - variate gaussian kernel , evaluated on a square grid of the and space .figure [ cmespatial2 ] shows the result , based on data from both hemispheres .the significance levels are indicated by coloured contour lines . in both panels of figure [ cmespatial2 ] , there are four islands above the significance level .the statistics shows that the source of fast cmes ( speeds between km / s and km / s ) is indeed an active region , located within al .however , slow ( i.e. 
speed less than km / s ) cmes can occur outside of al .above the significance level of , there are only two islands .these are only slow cmes inside and outside of al .analysis of this statistics also indicates that the probability of a slow cme is two standard deviation units higher than the probability that of a fast cme .the al identification method presented here reveals new spatial properties of the longitudinal distribution of the sunspot groups ( panels , , of the fig [ al ] ) .the spatial distribution of smaller sunspot groups ( less then ) is homogeneous .there is no enhanced longitudinal belt identifiable based on small sunspots ; small groups appear everywhere as function of longitude .moderate sunspot groups ( between ) show already inhomogeneous properties. however , these results still have to be treated with caution . only sunspot groups above the significance level have signatures of obvious and remarkable inhomogeneous spatial distribution .the idea of two , almost equally significant longitudinal zones is widely accepted by numerous studies , see e.g. .the dominant and co - dominant active longitude is separated by 180 degrees .however , we do not find such equally strong als ( neither here nor in gy16 ) . in our investigation ,the co - dominant al plays a less important role . the spatial distribution of the separateness parameter ( defined by eq .[ eq5 ] ) shows that complex active regions with a high cme capability appear near mostly the al ( figure [ cmp_stat ] ) .the appearance of moderate and simple complex configurations are everywhere on the solar surface .these groups are also able to have cmes with a significantly lower probability .we also found that , the most tilted sunspot groups have a complex configuration ( figure [ tilt_stat ] ) .simple bipolar sunspot groups show relatively small tilt angle . and concluded , that there is positive correlation between magnetic helicity and sunspot tilt angle .the sunspot rotation could play important role in helicity transport across the photosphere .sunspot rotation may increase helicity in the corona leading to flares and cmes .this property may also have the consequence of a more complex built - up of the underlying magnetic structure , and , the well - studied magnetic arches of the upper solar layer could be oriented at a large angle to the equator .the more complex active regions are the more flares and they will be associated with cmes .hence , we conclude that the above physical process can take place within al and anywhere else but only with low probability there .several studies have investigated east - west asymmetry of cmes occurrence . found asymmetry using data provided by the soho - lasco .the asymmetric behaviour could be a consequence of al .our result obtained here shows ( see e.g. fig [ cmespatial2 ] ) that the number of cme occurrence is marginally higher within al .the mean of the apparent and space velocity of cme occurrences is around km / s considering the entire surface of the sun .this mean velocity is known an slow cmes , found in a number of earlier studies .however , the mean velocity is significantly higher if only the al itself is considered . within al the average ( or space ) velocity is around km / s ( see e.g. * ? ? ?there is no fast cme occurrence found outside of al .therefore , interestingly and notably , the fast halo cmes are also al cmes .our new findings ( together with the results of gy16 ) could provide novel aspects both for space weather forecast and for solar dynamo theory . 
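the kernel density statistics used in the previous section ( the longitudinal distribution of cme sources and the bivariate phase - speed density ) can be sketched as follows with scipy ; the sample is synthetic , the bandwidths are scipy defaults rather than the authors' choices , and the shuffled control sample only mimics the randomisation test described above .

```python
import numpy as np
from scipy.stats import gaussian_kde

def longitude_kde(phases, grid=np.linspace(0, 1, 200)):
    """Kernel density estimate of CME source positions in Carrington phase."""
    return grid, gaussian_kde(phases)(grid)

def speed_phase_kde(phases, speeds, n=100):
    """Bivariate Gaussian KDE of (phase, speed) evaluated on a square grid."""
    gx = np.linspace(0, 1, n)
    gy = np.linspace(speeds.min(), speeds.max(), n)
    X, Y = np.meshgrid(gx, gy)
    kde = gaussian_kde(np.vstack([phases, speeds]))
    Z = kde(np.vstack([X.ravel(), Y.ravel()])).reshape(X.shape)
    return X, Y, Z

# toy usage, including a uniform control sample as a sanity check
rng = np.random.default_rng(2)
phases = np.concatenate([rng.normal(0.45, 0.07, 300) % 1, rng.uniform(0, 1, 200)])
speeds = rng.lognormal(6.0, 0.5, phases.size)          # km/s, illustrative only
grid, dens = longitude_kde(phases)
_, dens_ctrl = longitude_kde(rng.uniform(0, 1, phases.size))
X, Y, Z = speed_phase_kde(phases, speeds)
print("peak phase:", grid[dens.argmax()])
```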
usually , the flare and/or cme prediction tools are based on only the behaviour of active regions , such as complexity of magnetic fields or other morphological properties .however , the spatial distribution of active regions can also assist in forecasting as suggested by e.g .we conclude , that the main source of cme and solar flare ( gy16 ) occurrences is the al .hence , the detection of this enhanced longitudinal belt may allow us to find the most flare- and cme - capable regions of the sun preceding the appearance of an active region .this potential flare and cme source is predictable even several solar rotations in advance .the observed properties of the non - axisymmetric solar activity need to be taken into account in developing and verifying suitable dynamo theory : the observations analysed here show that there is only one significant al with a relatively wide ( degrees ) belt .furthermore , the tilt angle of the active regions is also an important observed constraint for dynamo theory : the tilt angle of sunspot groups shows non - axisymmetric behaviour , which is a completely new ( and surprising ) finding .the results of this research was enabled partially by sunpy , an open - source and free community - developed python solar data analysis package .ng thanks for the support received from the university of sheffield .ng also thanks for the laborious work done by the assistant fellows at debrecen heliophysical observatory for composing the dpd sunspot catalog . re acknowledges the support received by the chinese academy of sciences president s international fellowship initiative , grant no .2016vma045 , the science and technology facility council ( stfc ) , uk and royal society ( uk ) .a.k.s . acknowledges the respond - isro ( dos / paogia205 - 16/130/602 ) project and the serb - dst project ( yss/2015/000621 ) grant .bai , t. , 1987 , apj , 314 , 795 bai , t. , 1988 , apj , 328 , 860 bai , t. , 2003a , apj , 585 , 1114 bai , t. , 2003b , apj , 591 , 406 balthasar , h. , 2007 , a&a , 471 , 281 balthasar , h. , schssler , m. , 1983 , solar phys ., 87 , 23 balthasar , h. , schssler , m. , 1983 , solar phys . , 93 , 177 baranyi t. mnras , 2015 becker , u. , 1955 , z. astrophys . , 37 , 47 berdyugina , s.v . ,usoskin , i.g . , 2003 ,a&a , 405 , 1121 berdyugina , s.v . , 2004 , solar phys , 224 , 123 - 131 berdyugina , s.v . , 2005 , asp conference series , 346 berdyugina , s.v . , moss , d. , sokoloff , d.d . ,usoskin , i.g .. astron .445 , 703714 , 2006 .brouwer , m. p. , zwaan , c. , 1990 , solar phys .129 , 221 bumba , v. et al .1965 , the astrophys .j. , 141 , 14921517 .bogart , r.s . : 1982 , solar phys .76 , 155 165 .bornmann , p.l . &d. shaw , v.i ., 1994 , , 150 , 127 bumba , v. , obridko , v. n. , 1969 , solar phys ., 6 , 104 bumba , v. , garcia , a. , klvana , m. , 2000 , solar phys . , 196 , 403 carrington , r. c. , 1863 , observations of the spots on the sun from november 9 , 1853 , to march 24 , 1861 made at redhill , williams and norgate , london , , section vi , p. 246 .castenmiller , m. j. m. , zwaan , c. , van der zalm , e. b. j. , 1986 , solar phys .105 , 237 canfield , r. c. , & pevtsov , a. a. 1998 , sol ., 182 , 145 chidambara , a. 1932 , mon . not .r. astron .soc . , 93 , 150152 .cui , y. , li , r. , zhang , l. , he , y. , & wang , h. 2006 , soph , 237 , 45 colak , t. & qahwaji r. , 2008 , , 248 , 277 connolly , a. j. , genovese , c. , moore , a. w. nichol , r. c.,schneider , j. and wasserman , l. 
, 2000 , arxiv astrophysics e - prints , http://adsabs.harvard.edu/abs/2000astro.ph..8187c gyenge , n. , baranyi , t. , ludmny , a. , 2014 , solar phys . , 289 , 579 gyenge , n. , baranyi , t. , ludmny , a. , 2016 , apj , 818:127 ( 8pp ) gopalswamy , n. ; yashiro , s. ; michalek , g. ; stenborg , g. ; vourlidas , a. ; freeland , s. ; howard , r. , earth , moon , and planets , volume 104 , issue 1 - 4 , pp . 295 - 313 n. gopalswamy , h. xie , s. akiyama , p. mkel , s. yashiro , and g. michalek ; the astrophysical journal letters , 804:l23 ( 6pp ) , 2015 may 1 ; doi:10.1088/2041 - 8205/804/1/l23 gosling , j.t . , hildner , e. , macqueen , r.m . , munro , r.h . ,poland , a.i . & ross , c.l . , 1976 , , 48 , 389 - 397 grigorev , v. , m. , ermakova , l. , v. and khlystova , a. , i. , 2012 , astronomy reports , 56 , 878 gyri , l. , baranyi , t. , ludmny , a. 2011 , iau symp .273 , 403 einbeck , j. , evers l. , bailer - jones , c. , gorban , a. , kegl , b. , wunsch , d. , zinovyev a. , lecture notes in computational science and engineering , springer , 2007 , pp .180204 hale , g. e. , e. , ferdinand , nicholson , s. b. & joy , a. h. 1919 , , 49 , p.153 harrison , r.a . , 1995 ,astrophys . , 304 , 585 henney , c. j. , durney , b. r. , 2005 , aspc , 346 , 381 howard r. , 1991b , solar phys . , 136 , 251 howard , t. a. & harrison , r. a. 2013 , , 285 , 1 - 2 , 269 - 280 huang , x. , zhang , l. , wang , h. , li , l. , 2013 , a&a , 549 , 127 isobe t. , feigelson e. d. , akritas m. g. , babu g. j. , 1990 , apj , 364 , 105 ivanov , e. v. , 2007 , adv .40 , 959 jetsu , l. , pohjolainen , s. , pelt , j. , tuominen , i. , 1997 , a&a , 318 , 293 juckett , d. a. , 2006 , solar phys . , 245 , 37 juckett , d. a. , 2007 , solar phys . , 245 , 37 kitchatinov , l. l. , olemskoi , s. v. , 2005 , astron.lett . , 31 , 280 kiepenheuer , k.o ., 1953 , the university of chicago press , p.322 korss , m. , b. , ludmny , a. ; baranyi , t. 2015 , apj , 789 , 107 korss , m. , b. , ludmny , a. ; erdlyi , r. ; baranyi , t. 2015 , apj lett , 802 , l21 korss , m. , b. ; erdlyi , r. ; 2016 , apj losh , h. m. , 1939 , publ.obs.michigan , 7 , no .5 . , 127 - 145 mason , j. p. , & hoeksema , j. t. 2010 , apj , 723 , 634 maunder , e. w. , 1905 , mnras , 65 , 538 mcintosh , p. s. , 1990 , , 125 , 251 michalek , g. ; gopalswamy , n. ; yashiro , s. , , volume 260 , issue 2 , pp.401 - 406 mumford , s. _ et al . _ 2013 , sunpy : python for solar physicists , proceedings of the 12th python in science conference , 74 - 77 .obridko , v n ; chertoprud , v e ; ivanov , e v. solar physics , 272.1 ( aug 2011 ) : 59 - 71 .pelt , j. , tuominen , i. , brooke , j. , 2005 , a&a , 429 , 1093 pelt , j. , brooke , j. m. , korpi , m. j. , tuominen , i. , 2006 , a&a , 460 , 875 pevtsov a. , astrophysics and space science proceedings , volume 30 .isbn 978 - 3 - 642 - 29416 - 7 .springer - verlag berlin heidelberg , 2012 , p. 83 - 91sakurai , t. and hagino , m. , 2003 .journal of the korean astronomical society , 36 , 7 - 12 schrijver , c. 2007 , apjl , 655 , l117 skirgiello , m. 2005 , annales geophysicae , 23 , 31393147 usoskin , i. g. , berdyugina , s. v. , poutanen , j. , 2005 .a&a , 441 , 347 i.g .usoskin , s.v .berdyugina , d. moss , d.d .sokoloff , 2007 , advances in space research 40 .951958 waldmeier , m. , 1938 , zeitschrift fr astrophysik , 16 , 276 waldmeier , m. , 1947 , publ .zrich , band ix , heft 1 .warwick , c. s. , 1965 , apj , 141 , 500 zhang , l.y . ,wang , h.n . ,du , z.l . , cui , y. m. , he , h. , 2007 , a&a , 471 , 711 zhang , l.y . 
,wang , h.n . ,du , z.l . , 2008 , , * 484 * , 523 - 527 .zhang , l.y . ,mursula , k. , usoskin , i. g. , wang , h.n . , 2011 ,a&a , 529 , 23 zhang , l.y . , mursula , k. , usoskin , i. g. , wang , h.n . , 2011 , jastp , 73 .258 zhang , l. ; mursula , k. ; usoskin , i. , 2015 , a&a l. , 575 ester , m. , h. p. kriegel , j. sander , and x. xu , in proceedings of the 2nd international conference on knowledge discovery and data mining , portland , or , aaai press , pp .1996 warwick , c. s , 1966 , , 145 , 215 xie , h. , ofman , l. , lawrence , g. , `` cone model for halo cmes : application to space weather forecasting '' , j. geophys .109 , 2004 , a03109 .ying et al ., the astrophysical journal supplement series , 222 , 2 , http://stacks.iop.org/0067-0049/222/i=2/a=23
|
the spatial inhomogeneity of the distribution of coronal mass ejection ( cme ) occurrences in the solar atmosphere could provide a tool to estimate the longitudinal position of the most probable cme - capable active regions in the sun . the anomaly in the longitudinal distribution of active regions themselves is often referred to as active longitude ( al ) . in order to reveal the connection between the al and cme spatial occurrences , here we investigate the morphological properties of active regions . the first morphological property studied is the separateness parameter , which is able to characterise the probability of the occurrence of an energetic event , such as a solar flare or cme . the second morphological property is the sunspot tilt angle . the tilt angle of sunspot groups allows us to estimate the helicity of active regions . the increased helicity leads to a more complex build - up of the magnetic structure and can also trigger cme eruptions . we found that the most complex active regions appear near the al , and the al itself is associated with the most tilted active regions . therefore , the number of cme occurrences is higher within the al . the origin of the fast cmes is also found to be associated with this region . we conclude that the source of the most probable cme - capable active regions is at the al . by applying this method we can potentially forecast a flare and/or cme source several carrington rotations in advance . this finding also provides new information for solar dynamo modelling .
|
computer science is an original discipline combining engineering and natural sciences as well as mathematics .it concerns itself with the representation and processing of information using algorithmic techniques .research in computer science includes two main flavors : _ theory _ , developing conceptual frameworks for understanding the many aspects of computing , and _ systems _ , building software artifacts and assessing their properties .a distinctive feature of computer science publication is the importance of prestigious conferences .acceptance rates at selective computer science conferences range between 10% and 20% ; for instance , in 2007 - 2008 , icse ( software engineering ) 13% , oopsla ( object technology ) 19% , popl ( programming languages ) 18% .journals have their role , but do not necessarily carry more prestige .the story of the development of computer science conferences is well reported by , page 33 : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the growth of computers in the 1950s led nearly every major university to develop a strong computer science discipline over the next few decades . 
as a new field ,computer science was free to experiment with novel approaches to publication not hampered by long traditions in more established scientific and engineering communities .computer science came of age in the jet age where the time spent traveling to a conference no longer dominated the time spent at the conference itself .the quick development of this new field required rapid review and distribution of results .so the conference system quickly developed , serving the multiple purposes of the distribution of papers through proceedings , presentations , a stamp of approval , and bringing the community together ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ these peculiarities of the field the dualities theory / system and journal / conference make computer science an original discipline within the sciences .it is , therefore , interesting to investigate how these distinctive features of the discipline impact on the classical laws of informetrics , e.g. , lotka s law of scientific productivity , bradford s law of scatter , and skewness of citations to scientific publications . in the present contribution, we study the skewness of the citation distribution of computer science papers .we distinguish between journal and conference papers , exploiting the conference proceeding index recently added by thomson reuters to web of science .we furthermore tackle the problem of finding a theoretical model that well fits the empirical citation distributions .finally , we compare the strength of impact of journal and proceeding papers as measured with bibliometric indicators , including the highly celebrated hirsch index .the outline of the paper is as follows . 
in section [ skewness ]we study the shape of the citation distributions of computer science journal and conference papers .section [ model ] is devoted to the finding of a theoretical model that well fits such citation distributions .finally , in section [ conclusion ] we summarize our findings and their implications .is the distribution of citations to computer science ( cs ) papers symmetric or skewed ?a distribution is symmetric if the values are equally distributed around a typical figure ( the mean ) ; a well - known example is the normal ( gaussian ) distribution .a distribution is right - skewed if it contains many low values and a relatively few high values .it is left - skewed if it comprises many high values and a relatively few low values .the power law distribution , for instance , is right - skewed . as a rule of thumb ,when the mean is larger than the median the distribution is right - skewed and when the median dominates the mean the distribution is left - skewed .a more precise numerical indicator of distribution skewness is the third standardized central moment , that is the expected value of , divided by , where and are the mean and the standard deviation of the distribution of the random variable , respectively .a value close to 0 indicates symmetry ; a value greater than 0 corresponds to right skewness , and a value lower than 0 means left skewness . in order to answer the posed research question about the skewness of the citation distribution of cs papers ,we analysed the citations received by both cs journal and proceeding papers . as to journal articles , we accessed thomson reuters journal citation reports ( jcr ) , 2007 science edition .the data source contains 281 computer science journals classified into the following six sub - fields corresponding to as many jcr subject categories : artificial intelligence ( accounting for 29.2% of the journals ) , theory and methods ( 27.8% ) , software engineering ( 27.6% ) , information systems ( 22.9% ) , hardware and architectures ( 19.3% ) , cybernetics ( 7.2% ) .notice that the classification is overlapping .we purposely excluded category interdisciplinary applications , since the journals therein are only loosely related to computer science .for each journal title , we retrieved from thomson reuters web of science database all articles published in the journal in 1999 ( 9,140 items in total ) and the citations they received until 1st august 2009 ( an overall of 106,849 references ) . a citation window of 9 yearshas been chosen because it corresponds to the mean cited half - life of cs journals tracked in web of science .the cited half - life for a journal is the median age of its papers cited in the current year .half of citations to the journal are to papers published within the cited half - life .it follows that the citation state of the analysed papers is steady and will not _ significantly _ change in the future . 
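a minimal sketch of the skewness indicator used throughout this section ( the third standardised central moment ) , together with the mean / median rule of thumb , is given below ; the citation counts are invented for illustration only .

```python
import numpy as np

def skewness(x):
    """Third standardised central moment, the skewness indicator used here."""
    x = np.asarray(x, dtype=float)
    return np.mean((x - x.mean()) ** 3) / x.std() ** 3

# toy usage: a right-skewed citation sample
cites = np.array([0, 0, 0, 1, 1, 2, 3, 5, 8, 40])
print("mean:", cites.mean(), "median:", np.median(cites),
      "skewness:", round(skewness(cites), 2))
# mean (6.0) > median (1.5) and skewness > 0: the signature of right skewness
```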
as to conference papers, we used the conference proceedings index recently added by thomson reuters to web of science database .unfortunately , for this index , corresponding proceedings citation reports are not yet published by thomson reuters .furthermore , an annoying limitation of web of science is the impossibility of retrieving all papers belonging to a specific subject category .therefore , we retrieved conference papers by country of affiliation addresses for authors .we took advantage of the country premier league compiled by .the ranking contains countries in declining order with respect to the share of top 1% of highly cited publications .publications refer to period 1997 - 2001 and citations are collected in year 2002 .the top-10 compilation reads : united states , united kingdom , germany , japan , france , canada , italy , switzerland , the netherlands , and australia .we hence retrieved all conference papers with at least one author affiliated to one of the mentioned top-10 countries that were published in 1999 and we tallied the citations they received until 1st august 2009 .this amounts to 9,013 papers and 38,837 citations .journal papers are cited on average 11.69 times , the median ( 2nd quartile ) is 4 , the 1st quartile is 1 and the 3rd quartile is 11 . the most cited journal paper received 1014 citations .the standard deviation , which measures the attitude of the data to deviate from the mean , is 31.66 , that is , 2.71 times the mean .the average number of authors per paper is 2.33 .the hirsch index , or simply h index , is a recent bibliometric indicator proposed by .the index , which found immediate interest both in the public and in the bibliometrics community ( see for opportunities and limitations of the measure ) , favors publication sets containing a continuous stream of influential works over those including many quickly forgotten ones or a few blockbusters .it is defined , for a publication set , as the highest number such that there are papers in the set each of them received at least citations .geometrically , it corresponds to the size of the durfee square contained in the ferrers diagram of the citation distribution .the egghe index , or simply g index , is a variant of the h index measuring the highest number of papers that received together at least citations .it has been proposed by to overcome some limitations of the h index , in particular the fact that it disadvantages small but highly - cited paper sets too strongly .we computed both indexes for the publication set of computer science journal papers : the h index amounts to 106 , while the g index is 170 .the citation distribution is right skewed ( figure [ pareto ] depicts the lorenz curve ) : 76% of the papers are cited less than the average and 21% are uncited .the most cited 7% of the articles collect more than half of the citations ; these papers are cited , on average , 13 times more than the other papers .the most cited half of the articles harvest 95% of the citations ; these papers are cited , on average , 19 times more than the other papers . 
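the two indicators quoted above can be computed directly from a list of citation counts ; the following is a minimal sketch of the standard definitions of the h index and the g index , with an invented toy record .

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cs = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cs, start=1) if c >= i)

def g_index(citations):
    """Largest g such that the g most cited papers have at least g^2 citations in total."""
    cs = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cs, start=1):
        total += c
        if total >= i * i:
            g = i
    return g

# toy usage
cites = [10, 8, 5, 4, 3, 0, 0]
print(h_index(cites), g_index(cites))   # h = 4, g = 5
```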
in particular, we noticed that 78% of the citations come from 22% of the papers , matching quite well the pareto principle or 80 - 20 rule , which claims that 80% of the effects come from 20% of the causes .the skewness indicator is 13.08 , well beyond the symmetry value of 0 .the gini index is a measure of concentration of the character ( citations ) among the statistical units under consideration ( journals ) .the two extreme situations are equidistribution , in which each journal receives the same amount of citations ( the gini index is equal to 0 ) , and maximum concentration , in which the total amount of the citations is attributed to a single journal ( the gini index is equal to 1 ) .the gini index for the journal citation distribution is 0.73 , indicating a high concentration of citations among journal papers .proceedings papers are cited on average 4.31 times , significantly less than journal articles ( the median is 0 ) .nevertheless , the most cited proceeding paper collects a whopping number of citations ( 2707 ) .standard deviation is 34.11 , or 7.9 times the mean .citations to conference papers deviate even more from the average citation value than do citations to journal articles .the conference h index is 65 , which amounts to 61% of the journal h index .the conference g index is 129 , or 76% of its journal counterpart .notice that the disproportionately large number of citations harvested by the top - cited conference paper significantly shorten the gap between the conference g index and its journal counterpart , whereas this citational blockbuster has little influence on the conference h index score .the average number of authors per paper ( 2.85 ) is higher than that for journal papers , meaning that computer science authors are more motivated to collaborate when writing proceeding papers .the distribution of citations to conference papers is even more skewed than that of journal papers : the concentration curve for conference articles markedly dominates that for journal papers ( figure [ pareto ] ) .indeed , the majority of proceeding papers ( 56% ) sleep uncited , and 84% of the papers are cited less than the average .the most cited 3% of the papers harvest more than half of the citations , and 85% of the citations come from 15% of the papers .the skewness indicator amounts to 57.94 , much higher than the value computed for journal papers .citations are highly concentrated among few conference papers , as indicated by the gini index , which amounts to 0.88 , a higher score with respect to what we measured for journal articles .we observed a certain degree of skewness in the distribution of citations to articles published in each venue ( journal or conference ) as well .for instance , the 135 papers published in 1999 in the flagship acm magazine , _ communications of the acm _, received 3003 citations , with an average of 22 citations per paper , which largely dominates the median citation rate ( 8) and is even greater than the third quartile ( 20 ) .a share of 76% of the papers are cited less than the average paper .the distribution skewness is 3.65 and the coefficient of variation is 1.82 . 
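before continuing with the venue - level examples , a minimal sketch of the gini concentration index and of the lorenz curve used above is given below ; the toy citation record is invented , and the formula is the standard lorenz - curve based estimator , which may differ in small - sample corrections from the one used by the authors .

```python
import numpy as np

def gini(citations):
    """Gini concentration index (0 = equidistribution, values near 1 = extreme concentration)."""
    x = np.sort(np.asarray(citations, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # standard Lorenz-curve based formula
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

def lorenz_curve(citations):
    """Cumulative share of citations versus cumulative share of papers."""
    x = np.sort(np.asarray(citations, dtype=float))
    share_papers = np.arange(1, x.size + 1) / x.size
    share_cites = np.cumsum(x) / x.sum()
    return share_papers, share_cites

# toy usage
cites = [0, 0, 0, 1, 1, 2, 3, 5, 8, 40]
print("gini:", round(gini(cites), 2))
```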
the flagship ieee magazine , _ ieee computer _, published in the same year 130 papers receiving 1346 citations , a mean of about 10 citations per paper , which is bigger than the median value ( 2 ) and close to the third quartile ( 11 ) .the percentage of the papers cited less than the average is 73% .the distribution skewness is 4.01 and the coefficient of variation amounts to 2.09 .two related conferences are _ acm sigmod international conference on management of data _ ( held in philadelphia , pennsylvania , in 1999 ) and _ ieee international conference on data engineering _ ( located in sydney , australia , in 1999 ) .sigmod published 76 papers with an average of 3 citations per paper , a median of 1 citation , and a maximum of 26 citations .three papers over four are cited less than the average article ; the distribution skewness is 2.54 and the coefficient of variation amounts to 1.71 .icde published 67 papers ; the average paper collects 10 citations and the median one has 3 citations .the top - cited article received 102 citations .a share of 81% of the published articles receive less citations than the average article .skewness is at 3.23 and variation amounts to 1.94 .interestingly , the skewness of citations to papers within venues , albeit significant , is less noticeable than the asymmetry of citations to all papers in the field .this might indicate that venues ( journals or conferences ) represent citational homogeneous samples of the field .the distribution of mean citedness of different journals is also skewed . the -year impact factor for a journal with respect to a fixed census year and a target window of yearsis the mean number of citations that occurred in the census year to the articles published in that journal during the previous years .typical target windows are 2 and 5 years long .we analysed the distribution of 2007 impact factors of cs journals .the mean 2-year impact factor is 1.004 which is greater than the median 2-year impact factor that is equal to 0.799 ; the distribution skewness is 1.64 .the mean 5-year impact factor is 1.218 and dominates the median 5-year impact factor that is equal to 0.914 ; the distribution skewness is 2.69 .again , the skewness of journal mean citedness is less important than the asymmetry of citations to all papers in the field .unfortunately , till today , thomson reuters does not provide impact factor scores for conference proceedings .what theoretical model best fits the empirical citation distribution for cs papers ? having a theoretical model that well fits the citation distribution would increase our understanding of the dynamics of the underlying citational complex system .we compared our empirical samples with the following three well - known right - skewed distributions . 
the _ power law distribution _ ,also known as _ pareto distribution _ , is named after the italian economist vilfredo pareto who originally observed it studying the allocation of wealth among individuals : a larger share of wealth of any society ( approximately 80% ) is owned by a smaller fraction ( about 20% ) of the people in the society .examples of phenomena that are pareto distributed include : degree of interaction ( number of distinct interaction partners ) of proteins , degree of nodes of the internet , intensity ( number of battle deaths ) of wars , severity ( number of deaths ) of terrorist attacks , number of customers affected in electrical blackouts , number of sold copies of bestselling books , size of human settlements , intensity of solar flares , number of religious followers , and frequency of occurrence of family names ( see and references therein ) .bibliometric phenomena that provably follow a power law model are word frequency in relatively lengthy texts , scientific productivity of scholars , and , interestingly , number of citations received between publication and 1997 by top - cited scientific papers published in 1981 in journals catalogued by the isi ( the former name of thomson reuters ) .the probability density function for a pareto distribution is defined for in terms of the scaling exponent parameter as follows : the _ stretched exponential distribution _ is a family of extensions of the well - known exponential distribution characterized by fatter tails . show that different phenomena in nature and economy can be described in the regime of the exponential distribution , including radio and light emission from galaxies , oilfield reserve size , agglomeration size , stock market price variation , biological extinction event , earthquake size , temperature variation of the earth , and citation of the most cited physicists in the world .the probability density function is a simple extension of the exponential distribution with one additional stretching parameter : where , and .in particular , if the stretching parameter , then the distribution is the usual exponential distribution . when the parameter is not bounded from , the resulting distribution is better known as the _ weibull distribution_. the _ lognormal distribution _ is the distribution of any random variable whose logarithm is normally distributedphenomena determined by the multiplicative product of many independent effects are characterized by a lognormal model .the lognormal distribution is a usual suspect in bibliometrics . 
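since the inline formulas for these densities were lost in the text extraction , the following sketch restates the three candidate models in standard parameterisations ( the normalisations actually used by the authors may differ ) ; the parameter values are illustrative only .

```python
import numpy as np

def pareto_pdf(x, alpha, x_min):
    """Power-law (Pareto) density for x >= x_min, standard normalisation."""
    return (alpha - 1) / x_min * (x / x_min) ** (-alpha)

def stretched_exp_pdf(x, lam, beta, x_min=0.0):
    """Stretched exponential density; beta = 1 recovers the plain exponential
    (for unrestricted beta this is the Weibull density)."""
    z = (x / lam) ** beta - (x_min / lam) ** beta
    return beta / lam * (x / lam) ** (beta - 1) * np.exp(-z)

def lognormal_pdf(x, mu, sigma):
    """Lognormal density: log(x) is normal with mean mu and standard deviation sigma."""
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (x * sigma * np.sqrt(2 * np.pi))

# illustrative evaluation of the three candidate models
x = np.linspace(1, 100, 500)
curves = {"pareto": pareto_pdf(x, 2.8, 1.0),
          "stretched exp": stretched_exp_pdf(x, 5.0, 0.5),
          "lognormal": lognormal_pdf(x, 1.0, 1.2)}
```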
in a study based on the publication record of the scientific research staff at brookhaven national laboratory, observes that the scientific publication rate is approximately lognormally distributed .more recently , study the citation distribution for individual journals indexed in web of science and show that there exists a steady state period of time , specific to each journal , such that the number of citations to papers published in the journal in that period will not significantly change in the future .they also demonstrate that , with respect to the journal steady state period , the citations to papers published in individual journals follow a lognormal model .finally , analyse the distribution of the ratio between the number of citations received by an article and the average number of citations received by articles published in the same field and year for papers in different sub - fields corresponding to thomson reuters jcr categories ( the category closest to computer science is cybernetics ) .they find a similar distribution for each category with a good fit with the lognormal distribution .the lognormal probability density function is defined in terms of parameters and as follows : for .we compared the empirical article citation distributions of journal and conference articles and the mentioned theoretical models with the following methodology : 1 .we gauge the distribution parameters using the _ maximum likelihood estimation method _ ( mle ) , which finds the parameters that maximize the likelihood of the data with respect to the model ; 2 .we estimate the goodness - of - fit between the empirical data and a theoretical model taking advantage of the kolmogorov - smirnov ( ks ) test .the test compares an empirical and a theoretical model by computing the maximum absolute difference between the empirical and theoretical cumulative frequencies ( this distance is the ks statistic ) . to appreciate if the measured distance is statistically significant , we adopted the following monte carlo procedure , as suggested in : 1. we compute the ks statistic for the empirical data and the theoretical model with the mle parameters estimated for the empirical data ; 2 .we generate a large number of synthetic data sets following the theoretical model with the mle parameters estimated for the empirical data ; 3 . for each synthetic data set ,we compute its own mle parameters and fit it to the theoretical model with the estimated parameters ( and not to the model with the parameters of the original distribution from which the data set is drawn ) .we record the ks statistic for the fit ; 4 .we count what fraction of the time the resulting ks statistic for synthetic data is larger than or equal to the ks statistic for the empirical data .this fraction measures the fitness significance ( _ p - value _ ) .following , we generated 2500 synthetic data sets .this guarantees that the p - valued is accurate to 2 decimal digits .moreover , the hypothesis of goodness of fit of the observed data with respect to the theoretical model is ruled out if the p - value is lower than 0.1 , that is , if less than 10% of the time the distance of the observed data from the model is dominated by the very same distance for synthetic data .we performed all statistical computations using r .table [ ks ] contains the results of our tests . 
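the fitting procedure described above ( maximum likelihood estimation , kolmogorov - smirnov distance , and the monte carlo p - value ) was carried out by the authors in r ; the following is a hedged python sketch of the same semi - parametric procedure for the lognormal case , with a synthetic sample and far fewer synthetic data sets than the 2500 used in the paper .

```python
import numpy as np
from scipy import stats

def ks_distance(data, dist, params):
    """Kolmogorov-Smirnov statistic between the data and a fitted model."""
    return stats.kstest(data, lambda x: dist.cdf(x, *params)).statistic

def semiparametric_pvalue(data, dist=stats.lognorm, n_sets=2500, seed=0):
    """Monte Carlo goodness-of-fit: fit the model by maximum likelihood, then
    compare the observed KS distance with the KS distances of synthetic samples
    drawn from the fitted model (each synthetic sample refitted to its own MLE)."""
    rng = np.random.default_rng(seed)
    params = dist.fit(data)
    d_obs = ks_distance(data, dist, params)
    exceed = 0
    for _ in range(n_sets):
        synth = dist.rvs(*params, size=len(data), random_state=rng)
        exceed += ks_distance(synth, dist, dist.fit(synth)) >= d_obs
    return d_obs, exceed / n_sets       # p-value; reject the model if p < 0.1

# toy usage (counts shifted by 1 so that uncited papers do not break the log)
rng = np.random.default_rng(3)
cites = rng.lognormal(1.2, 1.4, 2000).astype(int) + 1
d, p = semiparametric_pvalue(cites, n_sets=200)   # fewer sets than the paper, for speed
print("KS distance:", round(d, 3), "p-value:", round(p, 2))
```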
for both journal and conference data sets ,the best fit is achieved by the lognormal model .furthermore , for each surveyed theoretical model , the journal citation distribution fits better the model than the conference counterpart .nevertheless , the computed p - values are not statistically significant , hence we can not accept the hypothesis that the _ entire _ observed citation distributions follow one of the surveyed theoretical distributions ..mle distribution parameters and kolmogorov - smirnov statistic .all p - values are not significant . [ cols="<,^,^,^,^,^,^,^,^,^",options="header " , ] in practice , few empirical phenomena obey power laws on the entire domain. more often the power law applies only for values greater than or equal to some minimum . in such case, we say that the _ tail _ of the distribution follows a power law .for instance , analysed 24 real - world data sets from a range of different disciplines , each of which has been conjectured to follow a power law distribution in previous studies. only 17 of them passed the test with a p - value of at least 0.1 , and all of them show the best adherence to the model when a suffix of the distribution is considered .notable phenomena that were ruled out from the pareto fitting are size of files transmitted on the internet , number of hits to web pages , and number of links to web sites . for the latter two, the lognormal model represents a more plausible hypothesis .the relative sizes of the tails with respect to the size of the entire distribution for the power law distributed phenomena range from 0.002 to 0.61 , with a median relative tail length of 0.11 . in particular , for the data set containing citations received by scientific papers in journals catalogued by the isi , the relative tail size is 0.008 , meaning that only the distribution of citations to articles cited at least 160 times well fits the pareto model .we tested the hypothesis that a significantly large tail of the distribution of citations to computer science papers follows a power law model .for the estimation of the lower bound parameter , that is , the starting value of the distribution tail , we followed the approach put forward by .the idea behind this method is to choose the value of that makes the empirical probability distribution and the best - fit power law model as similar as possible beginning from .we used the ks statistic to gauge the distance between the observed data and the theoretical ones .finally , we estimate the significance of the goodness - of - fit between the empirical data and the best - fit power law model following the above - described monte carlo procedure .the results are that the citations to articles published in computer science journals that are cited at least 56 times indeed follow a power law distribution with scaling exponent .this means that the probability density distribution for citations to journal articles is , where and .the ks statistic is 0.033 and the computed p - value is 0.38 , well beyond the significance threshold of 0.1 .the pareto - distributed tail is 355 articles long , or 4% of the entire distribution . as to conference papers , the power law behaviour shows up for articles cited at least 26 times , which corresponds to a distribution tail of 260 articles , or 3% of the entire data set .the scaling exponent and , hence , the probability density distribution reads , where and . 
the ks statistic is 0.046 and the computed p - value is 0.23 ; the fit is hence statistically significant , but less good than that computed for journal papers . figure [ powerlaw ] depicts the theoretical models for journal and conference papers starting from the corresponding lower thresholds . notice that the conference scaling exponent ( 2.38 ) is lower than the journal exponent ( 2.80 ) , meaning that the asymptotic decay of the probability density function is slower for conference papers than for journal ones . the journal multiplicative constant ( 2525 ) is , however , much bigger than the conference counterpart ( 124 ) . the consequence is that the probability density for journal papers dominates the probability density for conference papers up to a certain citation value , showing , up to this point , a fatter tail , as depicted in figure [ powerlaw ] . from that point onwards , however , the asymptotic behaviour shows up , and the conference tail turns out to be heavier , a consequence of the extraordinary number of citations ( 2707 ) collected by the top - cited conference paper . the meeting point of the two curves is , however , around 1313 , above the biggest citation score for journal papers ( 1014 ) . it is worth mentioning that , according to our experiments , the _ best _ cutoff , that is , the starting point that minimizes the ks statistic , is also the lowest _ good enough _ cutoff , that is , the smallest threshold that guarantees a power law fitting with a p - value of at least 0.1 . in other words , the cutoff we have found guarantees both that the distance from the theoretical model is minimum and that the length of the distribution tail starting at the cutoff is maximum . finally , we checked that both the lognormal and the stretched exponential models do not fit the identified power - law distributed tails well . our main findings and their implications are summarized in the following . _ the citation distribution for computer science papers is severely skewed . _ such an extreme asymmetry is the combination of the skewness of the mean citedness of different venues and of the skewness of the citedness of articles published in each venue . a similar two - leveled citational hierarchy has been noticed in the field of biomedicine when authors ( and not journals ) are taken to represent the functional units of the scientific system . the skewness of the citation distribution has an important consequence : the mean is not the appropriate measure of the central tendency of the citations received by articles . indeed , only a small number of articles are cited near or above the mean value and the great majority of them are endorsed less than the average , with a significant share of the papers that sleep uncited . assigning the same value to all articles levels out the differences that evaluation procedures should highlight . a more appropriate measure of central tendency in the case of skewed distributions is the _ median _ . since the thomson reuters impact factor is , roughly , the mean number of recent citations received by papers published in a given journal , such a popular measure of journal impact is not immune to the skewness property of citation distributions and , therefore , it might be misused .
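as a quick consistency check , the meeting point of the two fitted tails follows from equating the two densities : solving $c_j x^{-\alpha_j} = c_c x^{-\alpha_c}$ gives $x^{\ast} = ( c_j / c_c )^{1 / ( \alpha_j - \alpha_c )}$ , and plugging in the rounded constants quoted above lands close to the crossing value mentioned in the text .

```python
# meeting point of the two fitted tails, with the rounded constants quoted
# above; the result (~1307) is consistent with the value of about 1313 given
# in the text.
c_j, a_j = 2525.0, 2.80   # journals
c_c, a_c = 124.0, 2.38    # conferences
x_star = (c_j / c_c) ** (1.0 / (a_j - a_c))
print(round(x_star))
```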
while sorting journals according to the mean or the median can yield rankings that differ little overall in statistical terms , the absolute magnitude of the differences in mean citedness between journals is oftentimes misleading . similarly , it is biased to gauge the impact of individual papers or authors using the impact factor of the publishing journals ; it would be fairer to judge individual contributions and their contributors by their own citation scores , as soon as these data are robustly available . a simple example might better convince the reader . let a and b be journals such that each of them published 4 papers . suppose papers in journal a are cited 1 , 1 , 1 , and 97 times , respectively , while those in journal b are each cited 25 times . clearly , the average paper in both journals has the same number of citations ( 25 ) . however , can we conclude that journals a and b have the same impact score ? to find a good answer , we have to start from the right question . my assessment is that the appropriate way to pose the question is : what is the _ probability _ that , for two randomly drawn papers published in journals a and b , respectively , the number of citations of the paper from a is greater than the number of citations of the paper from b ? this probability is , interestingly , rather low : 25% ( the top - cited a paper beats all b papers , but any other a paper loses against any b paper ) . hence , while the mean citedness assigns the same impact to both journals , there are high chances to find better papers in journal b . put another way , i would certainly buy journal b , if i were a librarian facing the choice to purchase only one of the two journals due to budget limitations . the reader might rightly argue that this is an artificial example , with scant bearing on real journal citation records ; indeed , the citation record of journal b is rather uncommon . hence , let us consider a real example . ieee transactions on information theory ( tit ) published 364 articles during the period 1997 - 1999 that received an average of 26 citations until 1st august 2009 . ieee transactions on computers ( tc ) issued 375 articles in the same period , collecting an average of 13 citations . hence , the mean impact of tit is twice as big as the mean impact of tc . nevertheless , both journals have a median impact of 8 citations . even more amazingly , the probability of finding a more highly cited paper in tit is only 50.7% , the probability of finding a more highly cited paper in tc is 45.5% , while in 3.8% of the cases we have a tie . the surprise vanishes as soon as we compute the relative deviation from the mean ( the coefficient of variation ) for the two journal distributions : this is significantly larger for tit ( 2.68 ) than for tc ( 1.34 ) . curiously , the deviation of tit is exactly twice that of tc . networks with a power law node degree distribution have been extensively studied , and the field offers captivating and elegantly written introductions . these graphs are referred to as _ scale - free networks _ : they show a continuous hierarchy of nodes , spanning from the rare kings to the numerous tiny nodes , with no single node that might be considered to be characteristic of all the nodes . by contrast , in random networks the degree distribution resembles a bell curve , with the peak of the distribution corresponding to the characteristic scale of node connectivity .
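the pairwise comparison used in these examples is straightforward to compute ; the short sketch below reproduces the 25% figure for the toy journals a and b and also reports the median and the coefficient of variation discussed above .

```python
# pairwise-comparison impact measure: probability that a random paper from
# one venue outcites a random paper from another, plus median and coefficient
# of variation.  journals a and b are the four-paper toy example of the text.
import numpy as np

def outcite_probability(x, y):
    """(P(random paper of x beats random paper of y), P(tie))."""
    x, y = np.asarray(x)[:, None], np.asarray(y)[None, :]
    return (x > y).mean(), (x == y).mean()

def summary(cites):
    cites = np.asarray(cites, dtype=float)
    return {"mean": cites.mean(), "median": np.median(cites),
            "cv": cites.std() / cites.mean()}

a = [1, 1, 1, 97]
b = [25, 25, 25, 25]
print(summary(a), summary(b))        # same mean (25), very different spread
print(outcite_probability(a, b))     # (0.25, 0.0): the 25% figure above
```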
in a random citation network , the vast majority of articles receive the same number of citations , and both poorly and highly endorsed papers are extremely rare .articles with a truly extraordinary knack of grabbing citations are the _ authorities _ of the citation network .articles , like review papers , that cite a considerably number of references are the network_ hubs _ .highly - cited review papers are both authorities and hubs : they are _connectors _ , with the peculiar ability to relate ostensibly different topics and to create short citation paths between any two nodes in the system , making the citation network look like a _ small world_. the emergence of scale - free networks has been theoretically explained with a simple model encompassing _ growth _ and _ preferential attachment _ . according to this model ,the network starts from a small nucleus and expands with the addition of new nodes .the new nodes , when deciding where to link , prefer to attach to the nodes having more links .preferential attachment matches the previously investigated bibliometric principle of _ cumulative advantage _ : a paper which has been cited many times is more likely to be cited again than one which has been little cited .moreover , extraordinary citation scores may be also the consequence of a number of recognized citation biases , including advertising ( self - citations ) , comradeship ( in - house citations ) , chauvinism , mentoring , obliteration by incorporation , flattery , convention , and reference copying .the role of conference publications in computer science is controversial .conferences provide fast and regular publication of papers , which is particularly important since computer science is a relatively young and fast evolving discipline .moreover , conferences help to bring researchers together .it is not a mere coincidence that the average conference article has more authors than the typical journal paper . lately, however , many computer scientists highlighted many flaws of the conference systems , in particular when compared to archival journals . gives a bibliometric perspective on the role of conferences in computer science and concludes that , wearing bibliometric lens , the best strategy to gain impact is that of publishing few , final , and well - polished contributions in archival journals , instead of many premature ` publishing quarks ' in conference proceedings .the present contribution reinforces these conclusions .radicchi , f. , fortunato , s. , castellano , c. , 2008 . universality of citation distributions : toward an objective measure of scientific impact .proceedings of the national academy of sciences of usa 105 ( 45 ) , 1726817272 .
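the growth - plus - preferential - attachment mechanism recalled above is also easy to simulate ; in the following illustrative sketch every new paper cites a few earlier ones chosen with probability proportional to the citations they have already received plus one ( the `` plus one '' gives uncited papers a chance of being picked , in the spirit of price 's cumulative advantage ) , and the resulting citation counts come out strongly heavy - tailed . all numbers are arbitrary .

```python
# toy simulation of cumulative advantage: each new paper cites m earlier
# papers chosen with probability proportional to (citations so far + 1).
import numpy as np

def cumulative_advantage(n_papers=20000, m=3, seed=1):
    rng = np.random.default_rng(seed)
    cites = np.zeros(n_papers, dtype=int)
    for t in range(1, n_papers):
        k = min(m, t)
        w = cites[:t] + 1.0
        refs = rng.choice(t, size=k, replace=False, p=w / w.sum())
        cites[refs] += 1
    return cites

cites = cumulative_advantage()
print(np.sort(cites)[-5:], int(np.median(cites)))   # a few "kings", many tiny nodes
```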
|
computer science is a relatively young discipline combining science , engineering , and mathematics . the main flavors of computer science research involve the theoretical development of conceptual models for the different aspects of computing and the more applicative building of software artifacts and assessment of their properties . in the computer science publication culture , conferences are an important vehicle to quickly move ideas , and journals often publish deeper versions of papers already presented at conferences . these peculiarities of the discipline make computer science an original research field within the sciences , and , therefore , the assessment of classical bibliometric laws is particularly important for this field . in this paper , we study the skewness of the distribution of citations to papers published in computer science publication venues ( journals and conferences ) . we find that the skewness in the distribution of mean citedness of different venues combines with the asymmetry in citedness of articles in each venue , resulting in a highly asymmetric citation distribution with a power law tail . furthermore , the skewness of conference publications is more pronounced than the asymmetry of journal papers . finally , the impact of journal papers , as measured with bibliometric indicators , largely dominates that of proceeding papers . research evaluation ; bibliometric indicators ; citation distributions ; power law distributions .
|
we consider the following problem .we are given a set of computational agents connected by a ( physical or logical ) ring , and a set of items , each associated to one color from a given set .initially each agent holds a set of items and items with the same color may be held by different agents ( _ e.g. _ see fig [ fig : problem].(a ) ) .we wish the agents to agree on an assignment of colors to agents in such a way that each color is assigned to one agent only and that the maximum over all agents of the number of different colors assigned to the same agent is minimum .we call this a _ balanced assignment _ : fig [ fig : problem].(b ) and fig [ fig : problem].(c ) show two possible balanced assignments . among all such assignments ,we seek the one that minimizes the total number of items that agents have to collect from other agents in order to satisfy the constraints .for example , agent in fig [ fig : problem].(b ) is assigned colors and , and therefore needs just to collect four items colored , since no other agent has items colored .[ fig : problem ] the problem can be formalized as follows .let be a set of agents connected by a ring and let be a set of colors .let be the number of items with color initially held by agent , for every , and for every .[ def : bc ] a balanced coloring is an assignment of the colors to the agents in such a way that : * for every color , there is at least one agent such that ; * for every agent , ; _ i.e. _ , the number of color assigned to agents has to be balanced .in particular , colors are assigned to ] .then for stage such that , _ i.e. _ , agent speaks up when . observe that at the end of stage the leader sets , as all agents must have spoken up by that time . therefore , considering also the last extra stage in which the agents are informed of the value of , phase 2 ends after stages , i.e. time units . for what concerns message complexity , in each stage , for , either no messages are sent , or a message traverses a portion of the ring .observe that , as each agent speaks up only once during this phase , messages circulating on the ring must always be originated by different agents .hence , the number of stages in which a message circulates on the ring is at most and there must be at least silent stages . in conclusion ,phase 2 message complexity is bounded by . as for the ratio between the actual value of and its approximation computed in phase 2 , by construction we have that and * phase 3 . * as a preliminary step , each agent computes the number of colors it will assign to itself and stores it in a variable .namely , each agent , for computes and then sets as follows ( recall definition [ def : bc ] ) : in the rest of this phase , the agents agree on a color assignment such that each agent has exactly colors .algorithms [ b3.1 ] and [ b3.2 ] report the pseudo - code of the protocol performed by a general agent in this phase and that is here described .let be the upper bound on computed in phase 2 .phase 3 consists of stages . in each stage , for , the agents take into consideration only colors whose weights fall in interval defined as follows : observe that in consecutive stages , agents consider weights in decreasing order , as . at the beginning of each stage , all agents have complete knowledge of the set of colors that have already been assigned to some agent in previous stages . 
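as an aside before the per - stage details , the following small centralized sketch may help keep the main objects in mind : the weights ( the number of items of each color initially held by each agent ) , the balance constraint of definition [ def : bc ] , and the movement cost . the sketch deliberately ignores the ring , the messages and the timing : it simply gives each color , heaviest first , to an agent currently holding the most items of it , subject to the capacity constraint , and then computes the cost ; it is an illustration of the cost function , not the distributed protocol .

```python
# centralized illustration only (not the distributed protocol): q[a][c] is the
# number of items of color c initially held by agent a.  each color, heaviest
# first, goes to an agent holding the most items of it among those with spare
# capacity; the cost counts the items that must be collected from other agents.
from math import ceil, floor

def balanced_assignment(q):
    n, m = len(q), len(q[0])                   # agents, colors
    cap_lo, cap_hi = floor(m / n), ceil(m / n)
    n_hi = m - n * cap_lo                      # agents allowed to reach ceil(m/n)
    owner, load = {}, [0] * n
    for c in sorted(range(m), key=lambda c: -max(q[a][c] for a in range(n))):
        cands = [a for a in range(n)
                 if load[a] < cap_lo
                 or (load[a] < cap_hi and sum(l == cap_hi for l in load) < n_hi)]
        best = max(cands, key=lambda a: q[a][c])
        owner[c], load[best] = best, load[best] + 1
    cost = sum(q[a][c] for c in range(m) for a in range(n) if a != owner[c])
    return owner, cost

# toy instance with 3 agents and 5 colors
q = [[4, 0, 1, 0, 2],
     [0, 3, 1, 0, 0],
     [1, 1, 0, 5, 2]]
print(balanced_assignment(q))
```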
at the beginning of this phase , is the empty set , and after the last stage is performed , must be the set of all colors .stage is , in general , composed of two steps ; however , the second step might not be performed , depending on the outcome of the first one . in the first step ,the agents determine if there is at least one agent with a weight falling in interval , by forwarding a message around the ring only if one of the agents is in this situation .if a message circulates on the ring in step one , then all agents proceed to step two in order to assign colors whose weight fall in interval and to update the set of assigned colors .otherwise , step two is skipped .now , if there are still colors to be assigned ( _ i.e. _ , if ) , all agents proceed to stage ; otherwise , the algorithm ends . in more details : step 1 .agent ( leader included ) waits time units ( zero for the leader ) from the beginning of the stage , and then acts according to the following protocol : case 1 : : : if receives a message from its preceding neighbor containing the label of some agent , it simply forwards the same message to its following neighbor and waits for time units ; otherwise case 2 : : : if has a weight falling into interval , then it sends a message containing its label to its following neighbor and waits for time units ; otherwise case 3 : : : it does nothing and waits for time units .if case 1 or case 2 occurred , then agent knows that step 2 is to be performed and that it is going to start after waiting the designed time units .otherwise , if case 3 occurred , after units of time , agent might receive a message ( containing label ) from its preceding neighbor , or not .if it does , then learns that case 2 occurred at some agent having label and that step 2 is to be performed . hence , it forwards the message to its following neighbor in order to inform all agents having labels in the interval ] ) : analogously , by definition , , for , and thus , _i.e. _ , the cost associated with color is exactly the same for balance and .notice also that the term appears in both cost expressions .hence , to prove that , it is sufficient to show that we can assume without loss of generality that is a multiple of . indeed , if otherwise does not divide , we can add dummy colors ( for ) , i.e. such that for all agents and dummy color . since in our algorithmthe agents consider the weights in decreasing order , the dummy colors will be processed at the end and therefore they have no effect on the assignment of the other colors .moreover , as their weights are zero , they do not cause any change in the cost of the solution . to prove ( [ eq : diff ] ) , we build a partition of the set according to the following procedure .we start from any in and find another index such that for some .note that , since is a multiple of , every agent must have colors and therefore such an index must exist .if the procedure ends , otherwise we have found another index such that .again , if the procedure ends , otherwise we repeat until , for some , we eventually get . we then set if during this procedure we considered all indices in we stop , otherwise , we pick another index not appearing in and repeat the same procedure to define a second set , and so on until each index of appears in one . 
observe that each contains at least two pairs of indices and that each index appears in exactly two pairs of exactly one .then , using lemma [ massimo ] , we get the following theorem shows that the approximation factor given in theorem [ 3approx ] is tight . [ epsilon ] for any , there exist instances of the balanced color assignment problem such that is a factor larger than the optimal cost , for some .consider the following instance of the balanced color assignment problem .for the sake of presentation , we assume that and that is even , but it is straightforward to extend the proof to the general case . fix any rational , and let be such that is an integer and .consider an instance of the problem such that colors are distributed as follows : and that is the leader elected in the first stage of algorithm balance , and that the labels assigned to agents are , respectively .consider agents and , for any .we can always assume that is such that for some .that is , the weights of color for agents and belong to the same interval .it is easy to see that the optimal assignment gives to and to .the corresponding cost is . on the other hand , algorithm balance assigns to and to , with a corresponding cost .hence , for the approximation factor , we get even if the approximability result is tight , if we are willing to pay something in message complexity , we can get a -approximation algorithm .algorithm balance can be transformed into a -approximation algorithm , by paying an additional multiplicative factor in message complexity .algorithm balance is modified in the following way : colors in stage of step 2 in phase 3 are assigned to the agent having the largest number of items ( falling in the interval ) and not to the one close to the leader .this can be achieved by making the agent forward on the ring , not only their choice of colors , but also their for those colors .this requires extra bits per color , increasing total message complexity of such a multiplicative factor .for what concerns the approximation factor , this modification to the algorithm allows to restate the thesis of lemma [ stessostage ] without the multiplicative factor and , following the same reasoning of theorem [ 3approx ] , conclude the proof . finally ,if we are not willing to pay extra message complexity , but we are allowed to wait for a longer time , we get a -approximation algorithm . assuming , for some constant , for any , there is a -approximation algorithm for the distributed balanced color assignment problem with running time and message complexity .modify the two interval threshold values of algorithm sync - balance in the following way : and redefine accordingly , the statement of lemma [ stessostage ] becomes , and the statement of lemma [ massimo ] can be rewritten as the result on the approximation factor then follows by the same arguments of the proof of theorem [ 3approx ] .the message complexity is not affected by these changes , while the running time now depends on the number of stages in phase 3 , that is this paper we have considered the distributed balanced color assignment problem , which we showed to be the distributed version of different matching problems . 
in the distributed setting , the problem models situations where agents search a common space and need to rearrange or organize the retrieved data . our results indicate that these kinds of problems can be solved quite efficiently on a ring , and that the loss incurred by the lack of centralized control is not significant . we have focused our attention on distributed solutions tailored for a ring of agents . a natural extension would be to consider different topologies and analyze how our techniques and ideas have to be modified in order to give efficient algorithms in more general settings . we believe that the main ideas contained in this work could be useful to extend the results even to arbitrary topologies ; indeed , a polylogarithmic distributed leader election protocol ( which is needed in our algorithm ) is also available for arbitrary ad hoc radio networks . as far as the ring topology is concerned , it is very interesting to note that the value never appears in the message complexity for the synchronous case ( not even if the polynomial relation between and does not hold ) , while a factor appears in the asynchronous case . it is still an open question whether it is possible to devise an asynchronous algorithm with optimal message complexity under the same hypotheses as the synchronous one , _ i.e. _ , whether it is possible to eliminate the extra factor .
|
consider a set of items and a set of colors , where each item is associated to one color . consider also computational agents connected by a ring . each agent holds a subset of the items , and items of the same color can be held by different agents . we analyze the problem of distributively assigning colors to agents in such a way that ( a ) each color is assigned to one agent only and ( b ) the number of different colors assigned to each agent is minimum . since any color assignment requires the items to be distributed according to it ( _ e.g. _ , all items of the same color are to be held by only one agent ) , we define the cost of a color assignment as the amount of items that need to be moved , given an initial allocation . we first show that any distributed algorithm for this problem requires a message complexity of , and then we exhibit an optimal message complexity algorithm for synchronous rings that in polynomial time determines a color assignment with cost at most three times the optimal . we also discuss solutions for the asynchronous setting . finally , we show how to get a better cost solution at the expense of either the message or the time complexity . * keywords : * algorithms ; distributed computing ; leader election ; ring .
|
theoretical progress in understanding proteins in the recent years was concentrated on folding , along with connected questions of sequence design and evolution ( see book and references therein for the recent overview ) . folding attracts theorists not only because it is so important for fundamental biology and for pharmaceutical industry , but also because it is a robust universal phenomenon .vast number of proteins exhibit the ability to fold , and there is a widely recognized necessity to understand the physical principle behind the selection of sequences capable of fast and reliable folding . in our opinion, there is one more aspect of proteins which is equally robust and appealing for theoretical analysis in terms of some minimal model .we mean here the ability of many proteins to function in a machine - like fashion through ordered conformational rearrangement .this is most obvious for motor proteins whose function is directly related to certain mechanical ( conformational ) movements .this is also clear for ion channels whose function is to mechanically move molecules ( or ions ) from one place to the other ( e.g. , across the membrane ) .although less obvious , conformational motions appear to be also very much at play in proteins whose function is purely electronic , such as , e.g. , electron transfer in bioenergetics or catalysis of a chemical reaction .this latter point was first formulated by mcclare and independently by blumenfeld .more recently , it was extensively discussed in the book .somewhat different viewpoint on this subject was also recently presented in the book . the new experimental data by h. gruler et al support the idea of slow conformational relaxation being important for the operation of enzimatic molecular machines .more detailed phenomenological models of enzyme operation based on the concept of conformational relaxation as a biased diffusion process have been successfully implemented to interpret the experimental results .there is now the data base of conformational movements in proteins .we emphasize two general properties of function - related conformational movements in proteins .first , they occur without significant opening of the dense globular structure . viewed in the context of contemporary folding theories , this property seems quite exciting .indeed , as native globule is pretty dense , frequently modeled as a _ maximally _ compact self - avoiding polymer , the inside movements may be expected to be strongly suppressed .counterargument to this suggests that in fact real protein globule does have certain voids and is not absolutely dense .nevertheless , the density of a typical protein globule is similar to that of a polymer melt , for which reason it may be expected to be extremely viscous if not altogether glassy .the observation of significant conformational movements inside such a dense polymeric conglomerate challenges theory to offer an explanation .second general property we would like to mention is the presence of some preferred collective degree of freedom - which is almost a synonym to a functioning device .for instance , enzymes work in cycles , and each cycle means a turn around some loop in the conformational space . for channel - forming protein , a part of this loop corresponds to a transported ion moving from one place to the other , the rest of the loop corresponds to the protein coming back . 
according to the arguments developed a long time ago ( see the book and references therein ) ,it is important that there is only one collective degree of freedom along the loop of function ( which , of course , does not rule out `` transverse '' fluctuations ) .importantly , machine - like function is realized well away from equilibrium conditions .that means , there is no detailed balance , and the system moves along the loop in one direction and not in the other - there is no , and should not be , detailed balance .this , however , does not rule out the possibility that some parts of the cycle may present themselves as being the motions along the same path in opposite directions , like , e.g. , a piston moving up and down in a steam engine .we shall return to this point later .the notion of preferred function - related degree of freedom may be compared in some respects to the concept of reaction coordinate much discussed in folding studies . in both cases , the presence of transverse fluctuations is important . in case of folding , this gives rise to the understanding that , e.g. , transition `` state '' is not a microstate , determined to atomic details , but rather an ensemble of ( micro)states .the preferred functional degree of freedom must be considered in pretty much the same way .it is an exciting question whether these two collective degrees of freedom , relevant for folding and function , are connected to one another .one could even speculate that they may be the same , or similar to some approximation .as of today , this question remains open .however , we note in passing that turn - around times reported in modern single molecule experiments on enzymes such as , on the order of a fraction of a millisecond , are not drastically different from typical folding times . in the present paper ,our goal is modest , but two - fold .first , we want to see , at least for the simplest model , how one can imagine a collective degree of freedom allowing orderly motion without opening the compact globule .second , we want to design such a sequence that the globule energy changes in some desirable fashion while moving along the preferred coordinate .this way , we want to mimic a molecular spring .thus , our work is organized as follows .we first describe the model and formulate our problem in a more explicit way for that model .we then discuss the possibility to design the native state conformation , or , better to say , the ensemble of nearly - native state conformations , in such a way that they can realize the one - dimensional motion within an almost compact globule .after that , we design the sequence capable to fold into such a functioning conformation .at the end we study some properties of thus designed toy protein .we shall work with the toy lattice model of protein .we understand that these words will run the emotions high and negative with many readers , and so we want to answer that from the very beginning .everyone understands that there are no lattices in biological world , but this argument itself , however obvious , is too cheap to prove the lattice models useless .indeed , for example , everyone laughs at the famous anecdote about a spherical cow , but at the same time everyone tacitly agrees that the model of a spherical cow is useful , e.g. 
, to understand the scaling laws relating animal body mass and the rate of oxygen consumption .thus , the dispute about usefulness , or the lack of one , for the lattice models in protein studies can not be resolved on the level of philosophy , this is the question of specific purpose of certain studies .not entering the details , there are questions for which lattice models are totally inappropriate ( and may deserve laugh ) , there are some other questions for which using lattice models is legitimate . as we hope to prove by the results , our present paper belongs to the latter category .thus , we use standard toy lattice model in which protein is represented as a self - avoiding walk of the desirable length .the protein changes its conformation by means of elementary moves , including corner flips , end flips , crankshafts , and null moves .the advantage of this move set is that the resulting system is known to be ergodic . for this model ,polymer moves by making discrete succession of steps from one conformation to the next .accordingly , the preferred degree of freedom must be associated with certain a linear ( one - dimensional ) succession of conformations , in which every conformation may only move into either previous or next conformation of the same group .we stated above why the protein needs to have a selected degree of freedom for functional work . here , we design lattice toy protein in such a way that its conformations , while remain compact , can rearrange along one - dimensional linear path in conformational space . the concept of linear path can be easily explained if the conformational space of lattice protein is visualized as a graph . in this representation ,every conformation is denoted as a node of the graph , and two nodes are connected if and only if the transition between corresponding conformations is possible via single elementary move ( see figure [ fig : graph ] ) .we are looking for linear paths in the conformational space graph ( csg ) . obviously , linear path is the succession of such nodes each of which is connected to exactly two other nodes . for swollen ,non - compact polymer , a multitude of conformational motions is possible , the corresponding nodes of the graph have very many connections , and so none of the swollen conformations belong to the linear path . by constrast , in compact conformationsconformational freedom of the polymer is very limited , and there is a hope to find linear paths . in order to find one - dimensionally - connected paths of compact conformations , we examined from this point of view the properties of csg of the short lattice polymer .we build on the findings of the work . in that work, it was shown that placing the polymer inside a restricted size box makes the conformational space graph disconnected , consisting of several disjoint pieces , or chambers .this finding was based on computer simulation of lattice polymers of various lengths , each confined in the box on the lattice . for our present purposes , it is important to address another physical situation , in which is fixed , while the degree of compactness of the chain may change .we achieve this by restricting the gyration radius of the chain and then looking at various specific values of .more specifically , this was done in the following manner .we consider lattice polymer of the length .we start from maximally compact conformation and allow it to make all elementary moves consistent with the chain self - exclusion , but possibly ( and necessarily ! 
) violating the compactness .we accept the conformation and place it as a node on the graph if and only if the chain gyration radius in this conformation is less than the chosen threshold .all the accepted conformations are pictured on the graph as nodes and their connections with all other accepted conformations are established through exhaustive search .then these new conformations again allowed to make elementary moves , new conformations are accepted if they do not exceed the same threshold , and the process repeats . as regards the limiting value of , we choose it experimentally , and it regulates both the compactness of the conformations and the number of conformations in the graph .the graph constructed in such a manner consists of several disjoint regions , or chambers .that means , for two conformations , which belong to different chambers , there is no sequence of elementary moves , which transforms one of them into another without breaking the restriction on .thus , the procedure must be repeated starting from different maximally compact conformations to list all the chambers of the graph .the number of conformations in different chambers varies , reflecting the distribution of the clusters in the bond percolation problem ( in this case , we deal with percolation in conformation space ) .figure [ fig : percolation ] shows the dependence of the number of the small chambers in the conformational space graph on the number of conformations locked in chamber . in terms of the underlying physical idea , this figure is similar to the result reported earlier in the work for a different model . in the present model, we vary at the fixed number of monomers . in the work , the similar alleged percolation in the conformational spacewas controlled by changing while locking the polymer inside the cube on the lattice , which of course implies fixed .further , we established the connectivities of the nodes which belong to the largest chamber in the csg .the distribution of the connectivities of the nodes is shown in figure [ fig : connectivity]a , curve 1 .it is compared with the distribution of the numbers of neighboring nodes for the same set of conformations , but not restricted with the -condition ( curve 2 ) .curve 2 is indistinguishable from the binomial distribution ( which does not contradict the idea that non - restricted csg is a small - world network ) .the csg built under the restriction on the values of of toy - protein conformations ( curve 1 ) is significantly different , one can imagine this graph as a percolation cluster of the bond percolation problem on the lattice with the topology of the small - world network .for the small values of ( weakly connected cluster ) the peak of the distribution corresponds to the graph nodes connected to only two neighbors .the sharp peak on the curve 1 corresponding to the poorly connected conformations can be easily explained .the change of the geometry of the voids in the bulk of the protein globule is possible only via small number of local moves , because the excluded volume effect is very strong in compact conformations .such subtle conformational moves do not affect significantly the value of the of the protein chain , whereas opening of some loop on the surface of the globule leads to the increase of the .on the other hand , the majority of the conformational moves accessible to the unrestricted protein chain occurs on the surface .accordingly , the limiting of from above forbids surface moves but does not restrict the changes in the 
bulk of the globule .that is why for small limiting values the sharp peak of the distribution rises at the poorly connected conformations .thus , the study of -restricted conformations of the -mer provides an example in which there is the peak of connectivity distribution which corresponds to the graph nodes with connectivity .these are desired conformations forming linear paths . however , most of these linear paths are rather short and consist of only few elementary moves .the distribution of the lengths of the paths is shown in figure [ fig : connectivity]b , it is exponential distribution .thus , our conclusion so far is this .long linear paths exist , but they are not very common , they are rare . in this sense , the situation is similar to that of selection of sequences for toy proteins capable of folding . in sequence selection case ,the `` good '' sequences are exponentially rare among the random ones .the goal of sequence design , or selection , is to fish them out .similarly , our goal now is to identify the rare conformations which are connected by long linear paths in the conformation space . in order to do that , we need to understand better the local geometry of conformations belonging to the linear paths .it can not be done for the chain length of which is too small .it is also small compared with typical protein lengths , about one hundred or more in average .thus , we repeated the same procedure of csg mapping for the toy - protein of the length . of course , in this caseno exhaustive enumeration of conformations is possible , and so we performed random sampling instead .we started from the limiting value , which is equal to the gyration radius of the maximally compact conformation . in this case , no conformational movement is possible , and , therefore , csg consists of as many disconnected nodes as the number of compact conformations .we now increase by a very small amount . 
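the ingredients of this construction - the corner - flip moves , the gyration - radius restriction , the breadth - first mapping of a chamber , and the extraction of the linear paths - can be summarized in a short sketch . the data layout ( a conformation as a tuple of integer lattice sites ) and all function names are ours ; end flips and crankshafts are omitted for brevity .

```python
# A conformation is a tuple of integer (x, y, z) lattice sites, consecutive
# sites being lattice neighbours.  Only corner flips are generated here; a
# full implementation would also produce end flips and crankshafts.
import math
from collections import Counter, deque

def corner_flip_moves(conf):
    """All corner flips available from `conf`, as (index, new_site) pairs."""
    occupied = set(conf)
    moves = []
    for i in range(1, len(conf) - 1):
        a, b, c = conf[i - 1], conf[i], conf[i + 1]
        d = tuple(a[k] + c[k] - b[k] for k in range(3))
        # d == b means (a, b, c) lie on a straight line, so there is no corner
        if d != b and d not in occupied:
            moves.append((i, d))
    return moves

def apply_move(conf, move):
    i, d = move
    return conf[:i] + (d,) + conf[i + 1:]

def gyration_radius(conf):
    n = len(conf)
    cm = [sum(r[k] for r in conf) / n for k in range(3)]
    return math.sqrt(sum((r[k] - cm[k]) ** 2 for r in conf for k in range(3)) / n)

def explore_chamber(start, r_max):
    """Breadth-first mapping of the chamber reachable from `start` without
    ever exceeding the gyration-radius threshold; returns an adjacency dict
    (one piece of the conformational space graph)."""
    start = tuple(start)
    adj = {start: set()}
    queue = deque([start])
    while queue:
        conf = queue.popleft()
        for mv in corner_flip_moves(conf):
            nxt = apply_move(conf, mv)
            if gyration_radius(nxt) > r_max:
                continue
            if nxt not in adj:
                adj[nxt] = set()
                queue.append(nxt)
            adj[conf].add(nxt)
            adj[nxt].add(conf)
    return adj

def linear_path_lengths(adj):
    """Sizes of the connected pieces of the subgraph of nodes having exactly
    two neighbours, i.e. the lengths of the linear paths."""
    deg2 = {v for v, nb in adj.items() if len(nb) == 2}
    seen, sizes = set(), []
    for v in deg2:
        if v in seen:
            continue
        size, queue = 0, deque([v])
        seen.add(v)
        while queue:
            u = queue.popleft()
            size += 1
            for w in adj[u]:
                if w in deg2 and w not in seen:
                    seen.add(w)
                    queue.append(w)
        sizes.append(size)
    return Counter(sizes)

# tiny usage example: an L-shaped 4-mer in the z = 0 plane
conf = ((0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 2, 0))
print(corner_flip_moves(conf))          # monomer 1 can flip to (0, 1, 0)
```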
by several attempts ,we choose the step of increment in such a way that after one step conformation space remains disintegrated , non - ergodic , consisting of disconnected chambers most of which contain only few conformations .we then increase step by step , every time mapping the newly obtained parts of conformation space on the graph , as described before .when approaches , the percolation transition occurs and there appears an `` infinite '' cluster .we mapped into the graph conformations from this cluster and searched for the long linear paths on the graph , as it was done in case of -mer .the longest paths we found for the -mer are organized as follows .the conformations along the path transform into one another in a set of subsequent corner flips in the bulk of compact conformation .when the first corner flip occurs , the vacant site opens producing the opportunity for another monomer to move .then the similar process repeats many times , leading effectively to the vacancy traveling through the globule , finding the new corner flips on its way .these subsequent transformations are demonstrated in figure [ fig : path ] which for the simplicity of presentation shows a smaller polymer , namely -mer .now , when the mechanism of rearrangements within the compact conformation for the toy protein is clarified , we can propose the effective and straightforward approach to design the linear path of compact conformations .the algorithm includes two main stages .first , we arrange the switching elements along the path .second , the rest of the polymer is computationally designed such that it is ( almost ) maximally compact and contains the necessary set of switching elements . as a result , we obtain one - dimensional path of compact conformations of the lattice toy protein . we build the linear path using flipping corners as switching elements . in the beginning of the procedurewe have an empty cubic lattice .we start from choosing initial position of the vacancy ( figure [ fig : design_of_path]a , node ) .the first flipping corner is drawn on the lattice in such a way that when it flips , the corner and the vacancy exchange their positions .the edges forming next switching element should be drawn in such a way , that when it will flip , vacancy will hop to the former corner position opening the way for the next switching element . in the figure [ fig : design_of_path]btwo subsequent flipping corners are shown .after they both will move , the vacancy will take the position .other flipping corners are drawn subsequently as many times as needed ( figure [ fig : design_of_path]c ) . at the same time as the switching elements arranged on the lattice we can control the linearity of the path .every time when the new flipping corner is set on the lattice there are several ways to locate it relative to the vacancy position .we choose only one location , but the problem is that the other flipping corners may appear later , on the stage of the design of the whole conformation ( see figure [ fig : design_of_path]d ) . to prevent appearance of such parasitic switches we draw additional pieces of the conformation surrounding the pathway sites , as it is shown in figure [ fig : design_of_path]e .we now have the set of switching elements , which are supposed to be just the disconnected pieces of the polymer .we have to find now the rest of the polymer , such that it fills ( almost ) completely the whole volume of the cube , and connects all switching elements into a linear chain . 
for this purpose, we developed computational method which is the modification of the approach proposed by ramakrishnan et al to generate maximally compact conformations .here , we describe our modified algorithm .the conformation design starts with placing on the lattice the new edges connecting randomly chosen neighboring vertices ( figure [ fig : filling_volume ] a , b ) .this process soon brings us to the state , where some vertex can not be connected to its randomly chosen neighbor , because the neighbor already has two edges incident on it ( figure [ fig : filling_volume ] c ) . in this case , special procedure called _ two - matching _ is applied . during this and the following steps of the algorithm , some randomly chosen edges can be removed from the lattice and changed by others .however , we impose the condition that the edges forming switching elements _ can not be removed _ at any step of conformation design . _two - matching _ starts from picking up the vertex , which is either not connected , or has only one incident edge .then its neighbor is chosen randomly as an opposite end of the new edge . if belongs to the linear subchain , then the ends of this subchainare checked for the possibility to be connected with their neighbors .if it can be done , a new edge ( the edge at the figure [ fig : filling_volume]b ) is added .one of the edges incident on is replaced by the edge .thus , in this procedure , the number of edges increases by one ( figure [ fig : filling_volume]d ) .if , on the other hand , the vertex belongs to the looped subchain , one of its incident edges is removed and replaced by .so , the loop is broken , and the total number of edges on the lattice remains unchanged .typically , as a result of the work of this procedure , we obtain several looped and one linear subchain packed into the cubic lattice ( figure [ fig : filling_volume]e ) .now , the subchains should be merged into one chain .this is achieved in the following way .suppose four neighboring vertices form a square .the connecting edges and belong to the different subchains . excluding these edges and including instead and , we would have merged subchains ( see figure [ fig : filling_volume]f ) .such an operation is known as _patching_. each patching operation transforms a pair of subchains into one subchain .the edges to be involved in patching are chosen randomly .the process is stopped when there is no more and no less than one linear chain on the lattice , which is the desired polymer conformation . in the original work ,this method was applied to generate maximally compact lattice conformations .it is worth repeating that for our purposes we use this method starting from a complicated lattice which is the cube minus elements chosen for switching elements . generally , it is possible to use also crankshafts and flipping ends to design the switching elements .one just needs to forbid all the states of these elementary moves but two .this should be done by placing on the lattice additional edges ( as it was done to prevent parasitic corner flips ) , which restrict the extra states of these moves .however , the end flip can be used only twice as switching element , because there are two ends of the chain .as regards the crankshaft move , it needs two vacant sites to make switching possible , which means the conformation in question should be slightly less compact . 
for these reasons, we use only corner flips as switching elements in this work .after we have chosen conformations , we should find the sequence which fits the target conformations with low energies . for this purposethe sequence of the model protein is annealed and monte carlo optimization in the sequence space is performed .the details of the algorithm are previously published ( see also review article and references therein for further details ) .it must be emphasized that we plan to work with the sufficiently large set of monomer species ; in fact , we shall even use the so - called independent interaction model , in which the number of distinct species is as large as the number of monomers in the chain .this allows us to avoid difficulties well known in the case of sequence design for the two - letter heteropolymers , such as the -model . in the context of the present work ,sequence design method had to be modified in two respects .first of all , we need not only one target conformation to have a low energy , as it is typically assumed in protein folding simulations .we need the whole family of conformations - all conformations belonging to the rearrangement path - to have distinctly lower energies than all other states .second , the more ambitious goal is to design the sequence in such a way that moving of the system along its pathway changes energy in an orderly fashion . since our rearrangement path is one - dimensional , we can pretend energy to increase monotonically along this path , in this sense making our toy protein a model of a molecular spring .let us consider these two aspects of sequence design one by one .the model protein is determined by the set of the coordinates of the monomers and the sequence of monomers ( the species denote the identity of each monomer , is the total number of species ) , index counts monomers along the chain .the hamiltonian is written as follows where the energy of the conformation is determined by the matrix of species - species energies for the contacting monomers and function is defined such that it is equal to if monomers are lattice neighbors and otherwise .we use independent interactions model ( iim ) and miyazawa - jernigan ( mj ) matrices for monte carlo simulations in this article . in our approachthe goal is to design sequences for which the whole set of conformations have energies sufficiently below that of the rest of conformation space . of course, our candidates for the target states are the conformations which belong to the previously designed linear rearrangement path in conformation space . these conformations are supposed to form a deep valley in the energy landscape .for the set of two target conformations , sequence design was performed in the paper . in that work ,the goal was to model proteins which can fold into two ( or more ) distinct `` native '' conformations , like prions .accordingly , two target conformations were chosen to be totally dissimilar ( non - overlapping , or weakly overlapping ) . in more details ,such design was examined in . 
in our case , the problem is almost the opposite : the conformations in question are very closely related , and they can be mutually transformed into one another in just a few moves . accordingly , the overlap between neighboring conformations along the path is very high . of course , as the system walks along its pathway from one end to another , the overlap decreases , but it still remains significant ; for example , even the overlap between the conformations at the opposite ends of the path shown in figure [ fig : path ] is still considerable . the sequence optimization is governed by the following hamiltonian : $H_{\rm des} = \sum_{k=1}^{K} E\left( \mathcal{C}_k , s \right)$ , where $K$ is the total number of conformations along the rearrangement pathway . the question which arises now is this : how efficient is the sequence optimization in the case of multiple closely related target states ? let us consider the simple situation when only the energies of two target conformations are optimized . these conformations , which we denote $\mathcal{C}_1$ and $\mathcal{C}_2$ , could be , for example , the ends of the rearrangement pathway . how deeply can their energies be lowered during the sequence design in comparison with an arbitrary other compact conformation ? to make this estimate we can calculate the energy of a conformation $\mathcal{C}$ averaged over the sequence space , the average being taken with the probability for the sequences made randomly from independent monomer species with given occurrence probabilities . the details of similar calculations are described in the review article . for the present case of two target conformations , the result , expression ( [ estar ] ) , states that the average energy of the designed sequence in conformation $\mathcal{C}$ is lowered below the random - sequence value , set by the mean of the interaction matrix , by an amount that grows with the variance of the interaction matrix , with the total number of contacts in the conformation , and with the overlaps of $\mathcal{C}$ with the two target conformations , the overlap with a target conformation being defined as the fraction of contacts that the two conformations share . as we can see from the expression ( [ estar ] ) , the energy of the designed sequence in a conformation depends on the similarity of this conformation to the target states . this similarity is measured by the overlap parameters , which take values between 0 and 1 ; for the conformations lying between the two targets , the sum of the two overlaps takes values close to the maximum . therefore , not only energies of the target conformations are optimized , but conformations between them ( e.g. along the designed linear path ! ) are optimized , too . in principle , the sequence design method may also lower the energies of other states , which are related to the target states but do not belong to the rearrangement pathway , and thus are not desirable for us here . this is a well known problem , generally addressed through `` negative design '' . luckily , in our specific case , conformation design , as discussed above , helps to address this problem : since there are no allowed conformations at all on the sides of the pathway , the only possible low energy decoys are structurally unrelated ones . the sequence design method employed here may seem to contradict the results of a recent work in which the authors estimated the maximal possible number of `` native '' states which may be `` memorized '' by the sequence . they showed that this number is very limited : it is independent of the protein length and is fully determined by the alphabet , i.e. , the number of distinct monomer species ( for instance , for the system considered in that work , there can be no more than four or five `` native '' states ) . in fact , there is no contradiction : the estimate of that work determines the maximal number of unrelated conformations which can be designed into the sequence .
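to make the multi - target design step concrete , the following sketch spells out one possible implementation : the conformational energy is the sum of contact energies over non - bonded lattice neighbors , and a monte carlo move in sequence space swaps two monomer identities ( so the composition is conserved ) and is accepted by the metropolis rule applied to the sum of the energies over all conformations of the path . the data layout , parameter values and function names are illustrative and not those of the original simulations .

```python
# Illustrative sketch only: contact energies and a Metropolis search in
# sequence space for a set of target conformations (the conformations of the
# rearrangement path).  B is a symmetric matrix of species-species contact
# energies, e.g. a Miyazawa-Jernigan-like table or an independent-interaction
# model matrix.
import math
import random

def contacts(conf):
    """Pairs (i, j) of monomers occupying adjacent lattice sites and not
    bonded along the chain (|i - j| > 1)."""
    pos = {site: i for i, site in enumerate(conf)}
    pairs = set()
    for i, (x, y, z) in enumerate(conf):
        for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
            j = pos.get((x + dx, y + dy, z + dz))
            if j is not None and abs(i - j) > 1:
                pairs.add((min(i, j), max(i, j)))
    return sorted(pairs)

def energy(contact_pairs, seq, B):
    return sum(B[seq[i]][seq[j]] for i, j in contact_pairs)

def design_sequence(path_confs, seq, B, t_des=0.5, sweeps=20000, seed=0):
    """Minimize H_des = sum_k E(C_k, s) over sequences by swapping monomers."""
    rng = random.Random(seed)
    path_contacts = [contacts(c) for c in path_confs]
    seq = list(seq)
    h = sum(energy(c, seq, B) for c in path_contacts)
    for _ in range(sweeps):
        i, j = rng.randrange(len(seq)), rng.randrange(len(seq))
        seq[i], seq[j] = seq[j], seq[i]
        h_new = sum(energy(c, seq, B) for c in path_contacts)
        if h_new <= h or rng.random() < math.exp(-(h_new - h) / t_des):
            h = h_new                        # accept the swap
        else:
            seq[i], seq[j] = seq[j], seq[i]  # reject: undo the swap
    return seq, h
```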
in our case ,all target states are very closely related , and they in fact belong to the same potential well , or funnel , in the free energy landscape .they , of course , can not be considered as independent . as we show below, our sequence design process works successfully even when the number of states in the rearrangement path is as large as about ten .so far , we have been discussing the sequence design procedure for which all the target states were equal . now ,following orwell , we shall consider some of the conformations more equal than others .specifically , we should remember that conformations form a one - dimensional path .we did it on purpose , and we should use it now . to be specific ,let us assume that conformations are labeled with index in a natural way , such that changes orderly from at the one end of the rearrangement pathway to the maximal value at the other end .then , we shall require that , say , has the lowest energy , has energy a little higher - preferably , by a certain amount higher ; we want to be about higher in energy than , ... , and this continues all the way up to , which we want to be about above , but still much lower than all the other conformations. why do we want such energy landscape ?first of all , being all connected in one valley , the conformations form _ together _ a basin of attraction for folding , or folding funnel .second , when correctly folded and at the bottom of the funnel , the system can still travel back and forth along the one - dimensional rearrangement path .this travel may be either due to fluctuations or due to some externally applied force . to achieve this aim we modified the design hamiltonian ( [ hdes ] ) in the following way : ^ 2 \ , \end{aligned}\ ] ] where is the `` experimentally '' adjusted parameter , and is the desired energy gap between neighboring conformations .the monte - carlo optimization ruled by the hamiltonian ( [ hdes2 ] ) has a bias towards sequences with lower energy in conformation and subsequently higher energies in conformations .various regimes are possible here depending on the relation between the design temperature , real temperature , and the value of .not entering the discussion of all these regimes , we mention that in what follows we have chosen the sequence optimization temperature to be .to demonstrate the work of our design method , we generated two sets of conformations of lattice -mer as described above in the section [ switches ] .the linear rearrangement path of the conformation shown in figure [ fig : path ] includes 7 conformations . 
in another example , shown below in figure [ fig : substrate ] , there are 6 conformations in the path .the location of the switching elements in these two cases is chosen differently .switching elements in the former case are located in the bulk of the globule , whereas in the latter case the switching elements are all located near the surface .for these two sets of target conformations , the sequence optimization procedure was applied .the values of parameters were as follows : , , , and the interaction matrix was chosen to be correspond to the independent interactions model ( iim ) .first of all , we have to check that the chains can correctly fold into the conformation as designed .we used the set of conformations shown in the figure 4[fig : path ] as target states .we compared folding rates for the sequence designed as proposed in this article and for the control sequence , designed in a more traditional way , with conformation as the only purported ground state .all the folding experiments were started from different random unfolded conformation .the monte carlo simulations were performed at temperatures in the range , where was the midpoint temperature .mean first passage time ( mfpt ) at every temperature was calculated by averaging over folding runs .the results are shown in the figure [ fig : rates ] .the folding times for the sequence which has multiple ground states are approximately times longer than for the sequence with the unique native conformation .further inspection suggests that this happens because the depth of the global minimum for the sequence with multiple ground states can not be as well optimized as that for the sequence with unique ground state .nevertheless , the emphasis of our result here is on the good news , not the bad ones : it is not important that our sequences are slower , it is important that they are insignificantly slower , _ only _ by a factor of about slower .that means , they do fold , and their folding time is of the same order of magnitude .it is worth to emphasize that during this folding experiments , the chain was not confined in any restricted volume .we used volume restriction in the preliminary stage of this work , to elucidate the method of conformation design .now , as we are done with the design , we let the polymer do whatever it wants , and the result is that it folds and spontaneously arrives into the valley where it has a linear chain of conformations at its disposal . now , since we have established that our model does fold , we have to check if it can move along the designed path . in reality ,conformational relaxation of a functioning protein machine is triggered by the attachment or detachment of a substrate or other ligand ; we shall consider this in the next section . here, we want to perform the simpler test to see what happens without any stimuli .in this situation , we expect our toy protein to move randomly back and forth along the designed path. it should be mostly in the lowest energy state , but since the energy difference between states along the path is only a fraction of thermal energy , bias towards the end should be relatively weak , there should be plenty of fluctuations along the path .the point to be checked is that the system , while performing this random walk along the path , should not open up too frequently , the globule should stay compact . 
to examine how this happens in the toy - protein shown in the figure [ fig : path ] , we run a long monte carlo simulations at different temperatures starting from the conformation .the events of passing the conformations of the pre - designed path are recorded .coordinate along the pathway takes the value if the vacancy position along the path coincides with conformation . a typical `` trajectory '' of conformational changes for the first designed toy protein is shown in figure [ fig : excursion ] .the inset of figure [ fig : excursion ] displays the details of one particular passage along the path from conformation to and back .the events when conformation changes in a way other than walking along the path ( say , some loop opens on the surface of the globule ) , are pictured as the change of the coordinate in the perpendicular plane .as one can see , below the midpoint temperature the toy - protein stays steadily on the rearrangement path and makes random walks back and forth along it , with very limited fluctuations in the multitude of transverse directions in the conformation space .thus , what we observe confirms that our toy protein performs thermal fluctuations in the form of one - dimensional movement along the designed path .this is important , because if fluctuations occur that way , that means , under some different circumstances , e.g. , applied external force , the system can perform also a forced movement along that same pathway , as it is expected for the function .thus , we need to test carefully that the observed fluctuations are indeed the random walks effectively in one dimension . to address this quantitativelywe calculate the correlation function for the toy protein and compare it with that for the artificial auxiliary random walker which diffuses on the discrete set of states with the same energy profile as for the protein .the good agreement between the two correlation functions is evident in figure [ fig : corr_function ] .hence the designed toy protein does indeed move along the one - dimensional pathway and in this sense it remembers its `` function . '' in the previous section we demonstrated that toy - protein can use the designed rearrangement path , performing a biased , but random motion .this biased walk can be triggered if the protein molecule is externally stimulated .as in the real biochemical world , this can be most efficiently done by attaching the ligand molecule to the protein globule . in this sense , the typical cycle of a protein work may be described as follows .we start from the protein molecule in its ground state . at some moment , a ligand molecule gets attached to the protein . withthe ligand attached , protein conformation is no longer the ground state , some other conformation is now lower in energy , and , therefore , attachment of ligand initiates the relaxation process of the globule into the new conformational state , which is the ground state for the protein - plus - ligand complex . when this relaxation is completed ( or nearly completed ), the new conformation turns out well suitable for certain chemical ( or other small length scale ) changes , the result of which for the protein is the desorbtion of a ligand initiating again the relaxation process , this time back into the original ground state .this type of conformational relaxation processes coupled with the ligand binding plays is well known for motor proteins , heme - containing proteins , etc . 
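As a sketch of the effectively one-dimensional picture tested above: the auxiliary walker is simply a Metropolis random walk of the path coordinate on the designed discrete energy profile, and the quantity compared in figure [ fig : corr_function ] is its normalized autocorrelation. The move set (nearest-neighbour steps), the specific profile values, and all names below are illustrative assumptions rather than the exact recipe used in the simulations.

```python
import numpy as np

def walk_on_profile(energies, n_steps, temp, rng):
    """Metropolis random walk of the path coordinate s over the K discrete states of the
    designed rearrangement path (assumed move set: +/-1 nearest-neighbour steps)."""
    k = len(energies)
    s = np.zeros(n_steps, dtype=int)
    for t in range(1, n_steps):
        trial = s[t - 1] + rng.choice((-1, 1))
        accept = (0 <= trial < k and
                  rng.random() < np.exp(-(energies[trial] - energies[s[t - 1]]) / temp))
        s[t] = trial if accept else s[t - 1]
    return s

def autocorrelation(s, max_lag):
    """Normalized autocorrelation C(tau) of the path coordinate, with C(0) = 1."""
    x = s - s.mean()
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - lag], x[lag:]) / ((len(x) - lag) * var)
                     for lag in range(max_lag)])
```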
in this work , following our general strategy , we want to design a toy ligand with which our toy protein can perform the entire cycle of its `` function . ''we designed a toy protein which has binding site and is able to change its shape .the initial protein conformation and conformational changes activated by the adsorption of the toy ligand are shown in figure [ fig : substrate ] .the toy protein was designed as explained above .the linear rearrangement path of the molecule includes conformations .the conformation is the lowest energy state for the protein with no ligand .the energies of interaction of the ligand with the `` active center '' of the protein are chosen such that as soon as the complex forms , the energies of linear path conformations change in comparison with those in isolated protein molecule .the profile of the energy changes along the path of conformational rearrangements in the protein - ligand complex is shown in figure [ fig : cycle_energies ] .when the ligand is attached , the conformation of the path has the lowest energy .hence as the small molecule binds to the protein in the state , it induces the cascade of conformational changes which drives the protein to the conformation .the energy profile along the path of the conformational changes corresponds to the case of the suicide inhibitor due to the high barrier for the dissociation of the ligand in the conformation .thus , our model exhibits the conformational response to the ligand binding . when the ligand binds , it jump - starts the cascade of orderly occurring conformational transitions .in the computer experiment , we have measured the relaxation times - the number of monte carlo steps in which the toy protein goes from into upon ligand binding .we repeated this measurement in independent mc runs , and figure [ fig : relax_time ] shows the distribution of relaxation times .the tail of the distribution demonstrates exponential decay that is expected for the biased diffusion in one - dimensional system .these simulation results are in good agreement with some experimental data .for example , beece et . studied the rebinding of carbon monoxide to myoglobin protein after dissociation induced by 1 laser impulse . such a short pulse synchronized conformational transformations of many protein molecules in solution . under proper circumstances , the population of excited protein molecules exhibited exponential decay , quite similar to our data presented on figure [ fig : relax_time ] . to reproduce also the multi - exponential or power law decay on the medium time scaleobserved in some experiments will require some modifications of the present model .in this study , we designed a toy model which exhibits quite a few protein - machine - like properties . first and foremost , it has the possibility to move like a mechanical system along the one - dimensional path , which we called rearrangement path . like real protein, our model can fold into the valley of conformations the bottom of which is the rearrangement path .thus , it folds , but not only it just folds , it does so to form a functional , movable state . while folded into its low energy valley , our toy protein can move along the path which is the valley bottom . 
making only few transverse excursions , it performs a biased random walk along this path .attachment and detachment of the ligand can switch the direction of the bias , thus making toy protein to go orderly through the closed cycle mimicking protein function .the usefulness of a toy is the possibility to play . playing withour toy allows to gain some insights into the general concept of machine - like function of proteins .one question which we can discuss is that of storing energy in the deformed globule .our model is designed in such a way that the energy of the globule at one end of the rearrangement path is higher than at the other - see , e.g , figure [ fig : cycle_energies ] , left panel .can we say this globule acts as a molecular spring ? to begin with , this seems to contradict the formulation of the design hamiltonian which purports to make energy _linear _ in the coordinate along the path ( ) , while regular hookean spring , of course , should have energy quadratic in deformation .in fact , this is unimportant , because design hamiltonian can be easily modified to purport quadratic ( or any other ) energy profile ; only discrete character of the lattice renders such fine tuning of the model useless .more importantly , all energies involved in our model are in fact free energies .it should be understood that proteins , although machines , are not heat engines : they function in essentially isothermic environment . therefore , when we say , for instance , that contact of monomers and has energy , we should have in mind that pre - averaging has been performed over a multitude of small scale , rapidly relaxing degrees of freedom ( e.g. , -angles of the residues side groups ) , and then is the free energy of the corresponding contact . from that point of view , we can say that at the upper - energy end of the rearrangement path our model protein stores free energy rather than energy .in other words , it acts not so much like a hookean spring , but rather like a piece of rubber .this analogy gets even better if we remember that linearity of the strain - stress relation and reversibility of deformation are totally unrelated in the case of rubber . as in other mechanical devices ,the question of reversibility in our toy machine is related to its speed : the slower is the rate of operation , the closer it can approach to the limit of being reversible .of course , when our toy protein undergoes fluctuations along its designed path , it exchanges energy with the surrounding heat bath , because the process occurs at the constant temperature .when conformational relaxation takes place , because random walk along the path is strongly biased - then energy exchange with thermal bath is dominated by the energy transfer into the bath .however , when we imagine that ligand energy has driven the enzyme to the high energy end of its path , then this energy remains available and drives the subsequent conformational relaxation . in this delicate sense, our toy can be said to work like a molecular spring - or , once again , as a piece of rubber .it goes without saying that this is an overdamped spring ( or rubber ) , no oscillations are in question in the course of conformational relaxation .our approach is , no doubt , very schematic .it uses to the full extent the discrete geometry of moves on the cubic lattice .this is the heavy price we have to pay for the tractability of the model . 
in our opinion , the wonderful properties which we foundjustify the study of this model .we hope also that the fundamental ideas behind our approach will be useful for the more realistic models , including off - lattice ones .these fundamental ideas are the search for a one - dimensional rearrangement pathways in the compact globule and the design of sequences capable of folding into a `` collective funnel '' formed together by all states belonging to this pathway .design of the static protein backbone conformation , as well as sequence design for the static limited flexibility backbones are all familiar computational approaches .it is a challenge to incorporate the search for the one - dimensional rearrangement pathways into this already difficult area .we acknowledge minnesota supercomputing institute whose computational resources were employed in this study . 99 e. i. shakhnovich , r. a. broglia , g. tiana ( editors ) , protein folding , evolution and design . in : nato sci .i , ( 2001 ) r.d .vale , t. funatsu , d.w .pierce , l. romberg , y. harada , t. yanagida , _ nature _ * 380 * 451 - 3 ( 1996 ) c. mcclare _ j. theor. biol . _ * 30 * , 1 ( 1991 ) .problemy biologicheskoi fiziki ( problems of biological physics ) _( _ in russian _ ) .( moscow : nauka , 1977 ) [ translated into english ( berlin : springer - verlag , 1981 ) ] d. s. chernavskii , n. m. chernavskaya _ _ p__rotein as a machine .biological macromolecular constructions ( moscow university publishing , 1999 , _ in russian _ ) .l. a. blumenfeld , a. n. tikhonov _biophysical thermodynamics of intracellular processes : molecular machines of the living cell _( springer , ny , 1994 ) .h. gruler et al , _ phys .* 56 * , 7116 ( 1997 ) .b. hess et al , _ j. phys .b _ , * 102 * , 6273 ( 1998 ). m. gerstein , w. krebs , _ nucleic acid res . _ * 26*(18 ) 4280 - 4290 ( 1998 ) e. i. shakhnovich , _ curr_ * 7*(1 ) , 29 - 40 ( 1997 ) ; n. d. socci , j. n. onuchic , p. g. wolynes , _ proteins : struct . , func .gen . _ * 32*(2 ) , 136 - 158 ( 1998 ) ; k. a. dill et al . _protein science _ * 4 * 561 - 602 ( 1995 ) ; r. melin , h. li , n. s. wingreen , c. tang , _ j. chem . phys ._ * 110 * 1252 - 1262 ( 1999 ) ; a. maritan , c. micheletti , a. trovato , j. r. banavar , _ nature _ * 406*(6793 ) 287 - 290 ( 2000 ) j. liang , k.a .dill , _ biophys . j. _ , * 81*(2 ) 751 - 766 ( 2001 ) h. p. lu , l. xun. s. xie , _ science _ * 282 * 1877 - 1882 ( 1998 ) ; l. edman , z. fldes - papp , s. wennmalm , r. rigler _ chem .phys . _ * 247 * 11 - 22 ( 1999 ) r. du , v. s. pande , a. yu .grosberg , t. tanaka , e.i .shakhnovich , _ j. chem . phys . _ * 108*(1 ) 334 - 350 ( 1998 ) a. yu .grosberg , contribution in the book edited by p. nielaba and m. mareshal .r. l. baldwin nature , * 369 * , 183 - 184 , 1995 ; r. l. baldwin j. biomol .nmr , * 5 * , 103 - 109 , 1995 .f. gai , k.c .hasson , j.c .mcdonald , p.a .anfinrud , _ science _ , * 279 * 1886 - 1891 ( 1998 ) g.b .west , j.h .brown , b.j .nature _ , * 413*(6856 ) , 628 - 31 , 2001 ; g.b .west , j.h .brown , b.j .enquist _ science _ ,* 284 * ( 5420 ) , 1677 - 9 , 1999 ; g.b .west , j.h .brown , b.j .enquist _ science _ , * 276 * ( 5309 ) , 122 - 6 , 1997 .p. h. verdier , w. h. stockmmayer , _ j. chem. phys . _ * 36 * 227 ( 1962 ) this means , every state can be reached via some finite set of allowed moves starting from every other state .bryngelson , j.n .onuchic , n.d .socci , p.g .wolynes , _ proteins _ * 21*(3 ) 167 - 195 ( 1995 ) m. r. betancourt , j.n .onuchic , _ j. chem. phys . 
_ * 103*(2 ) 773 - 787 ( 1995 ) r. du , v. s. pande , a. yu .grosberg , t. tanaka , e.i .shakhnovich , _ j. chem .phys . _ * 111*(22 ) 10375 - 10380 ( 1999 ) r. du , a. yu .grosberg , t. tanaka , m. rubinstein , _ phys .* 84*(11 ) 2417 - 2420 ( 2000 ) a. scala , l. a. nunes amaral , m. barthlmy , _ europhys ._ * 55 * ( 4 ) 594 - 600 ( 2001 ) c. moore , m.e.j .newman , _ phys .e _ , * 62*(5 ) 7059 - 7064 ( 2000 ) r. ramakrishnan , j. f. pekny , j. m. caruthers ,_ j. chem . phys . _ * 103*(17 ) 7592 ( 1995 ) e.i .shakhnovich , a. m. gutin , _ proc .usa _ * 90 * 7195 ( 1993 ) a. yu .grosberg , _ physics - uspekhi _ * 40*(2 ) 125 - 158 ( 1997 ) k. yue , k. m. fiebig , p. d. thomas , h. s. chan , e. i. shakhnovich , and k. a. dill _ proc .usa _ , * 92 * , 325 - 329 ( 1995 ) v. i. abkevich , a. m. gutin , e.i .shakhnovich , _ proteins _ * 31 * 335 - 344 ( 1998 ) e.i .shakhnovich , a. m. gutin , _ biophys .chem . _ * 34 * 187 ( 1989 ) s. miyazawa , r. jernigan , _ macromolecules _ * 18 * 534 ( 1985 ) s.b .prusiner , _ proc .usa _ * 95 * , 13363 - 13383 , 1998 .locker , r. hernandez , _ proc .* 98 * , 9074 - 9079 , 1997 d. b. gordon , s. a. marshall , and s. l. mayo , _ current opinion in struct .biol . _ * 9 * 509 - 513 ( 1999 ) ; a. g. street , and s. l. mayo _ structure with folding and design _ , * 7 * , r105-r109 ( 1999 ). t. m. a. fink , r. c. ball , _ phys ._ * 87*(19 ) 198103 ( 2001 ) e.p .sablin , r.j .fletterick , _ curr . op .biol . _ * 11*(6 ) 716 - 724 ( 2001 ) d. beece , l. eisenstein , h. frauenfelder , d. good , m.c .marden , l. reinisch , a.h .reinolds , l.b .sorensen , t.k .yue , _ biochemistry _ * 19*(23 ) 5147 - 5157 ( 1980 ) g. orwell , 1984 _ new american library classics , ny , 1990 _ j. miller , c. zeng , n.s .wingreen , c. tang , _ proteins _ * 47 * 506512(2002 ) b. i. dahiyat , s. l. mayo , _ science _ , * 278 * 82 - 87 ( 1997 ) p.b .harbury , j.j .plecs , b. tidor , t. alber , p.s .kim , _ science _ * 282 * 14621467 ( 1998 )
|
we design a toy protein mimicking the machine - like function of an enzyme . using insight gained from the study of the conformation space of compact lattice polymers , we demonstrate the possibility of a large - scale conformational rearrangement which occurs ( i ) without opening the compact state , and ( ii ) along a linear ( one - dimensional ) path . we also demonstrate that the sequence design method can be extended such that it yields a `` collective funnel '' landscape in which the toy protein ( computationally ) folds into the valley with the rearrangement path at its bottom . the energies of the states along the path can be designed to be about equal , allowing for diffusion along the path , or to provide a significant bias in one particular direction . together with a toy ligand molecule , our `` enzymatic '' machine can perform the entire cycle , including conformational relaxation in one direction upon ligand binding and relaxation in the opposite direction upon ligand release . this model , however schematic , should be useful as a test ground for phenomenological theories of the machine - like properties of enzymes .
|
dense small cell networks ( scns ) are considered as the most promising approach to rapidly increase network capacity and meet the ever - increasing capacity demands in the 5th generation ( 5 g ) systems . however , up to now , most theoretical studies on scns only consider simple path loss models that do not differentiate line - of - sight ( los ) and non - line - of - sight ( nlos ) transmissions [ 2 - 5 ] .the major conclusion [ 2 - 5 ] is that neither the number of small cells nor the number of cell tiers changes the coverage probability in interference - limited fully - loaded cellular networks .such conclusion implies that the area spectral efficiency ( ase ) will monotonically grow as small cells go dense .an intriguing question is : does this optimistic conclusion still hold when practical los and nlos transmissions are considered in scns ?it is well - known that los transmission often occurs when the distance between a transmitter and a receiver is small , while nlos is more common in long - distance transmissions as well as in office environments and in central business districts . for a given network environment ,when the distance between transmitter and receiver decreases , the probability that an los path exists between them increases , causing a _ transition _ from nlos transmission to los transmission . to the best of authors knowledge , up tonow , performance analysis considering both los and nlos transmissions are and . in , the authors assumed a multi - slope piece - wise path loss function .such multi - slope piece - wise path loss function does not fit well with the nlos and los model defined by the 3rd generation partnership project ( 3gpp ) standards , in which the path loss function is not a one - to - one mapping to the distance . in , the authors treated the event of los or nlos transmission as a probabilistic event for a millimetre wave communication scenario . to simplify the analysis ,the los probability function was approximated by a moment - matched equivalent step function .the single - piece path loss model and the proposed step function for modeling the transition from nlos to los transmissions are also not compatible with the model recommended by the 3gpp [ 8 , 9 ] . in this paper, we use a general path loss model that features piece - wise path loss functions with probabilistic los and nlos transmissions .note that the proposed model is very general and includes almost all existing models used to capture los and nlos transmissions [ 6 - 9 ] as its special cases .the main contributions of the paper are as follows : * analytical results are obtained for the coverage probability and the ase using a general path loss model incorporating both los and nlos transmissions . *using the above results , closed - form expressions are further obtained for the coverage probability and the ase for a special case based on the 3gpp standards . *our theoretical analysis reveals an important finding , i.e. , the ase will initially increase with the increase of the small cell density , but when the density of small cells becomes sufficiently large , the network coverage probability will decrease as small cells become denser .this in turn makes the ase suffer from a slow growth or even a notable _ decrease_. thereafter , when the small cell density is very large , the ase will then grow almost linearly with the network densification. 
these results are not only quantitatively but also qualitatively different from previous study results [ 2 - 7 ] .thus , our results shed new insights on the design and deployment of future dense / ultra - dense scns in realistic environments .the remainder of this paper is structured as follows .section [ sec : system - model ] describes the system model .section [ sec : general - results ] presents our main analytical results on the coverage probability and the ase , followed by their application in a 3gpp case in section [ sec : a-3gpp - special - case ] .the derived results are validated using simulations in section [ sec : simulation - and - discussion ] .finally , the conclusions are drawn in section [ sec : conclusion ] .we consider a dl cellular network in which bss are deployed in a plane according to a homogeneous poisson point process ( hppp ) of intensity .ues are poisson distributed in the considered network with an intensity of . is assumed to be sufficiently larger than so that each bs has at least one associated ue in its coverage area .the distance between an arbitrary bs and an arbitrary ue is denoted by in .considering practical los and nlos transmissions , we propose to model the path loss associated with distance as ( [ eq : general_pl_model_our_work ] ) , shown on the top of the next page . in ( [ eq : general_pl_model_our_work ] ) , the path loss function is segmented into pieces with the -th piece , . for each , is the -th piece of the path loss function for los transmission , is the -th piece of the path loss function for nlos transmission and is the -th piece of the los probability function . in more detail , * is modeled as with and being the path losses at a reference distance and and being the path loss exponents for the los and the nlos cases in , respectively . in practice , , , and are constants obtained from field tests [ 8 , 9 ] .* is the -th piece probability function that a transmitter and a receiver separated by a distance has an los path , which is usually a monotonically decreasing function of . for convenience , is further stacked into a piece - wise los probability function as our model is consistent with the ones adopted in the 3gpp [ 8 , 9 ] .note that the considered path loss model in ( [ eq : general_pl_model_our_work ] ) will degenerate to that adopted in and when and , respectively . as a common practice in the field[ 2 - 6 ] , each ue is assumed to be associated with the nearest bs to the ue , and the multi - path fading between an arbitrary bs and an arbitrary ue is modeled as independently identical distributed ( i.i.d . ) rayleigh fading , i.e. , the channel gain is denoted by and is modeled as an i.i.d . 
exponential random variable ( rv ) .the transmit power of each bs and the additive white gaussian noise ( awgn ) power at each ue are denoted by and , respectively .using the properties of the hppp , we study the performance of scns by considering the performance of a typical ue located at the origin .we first investigate the coverage probability and thereafter the ase .the coverage probability is defined as the probability that the signal to interference plus noise ratio ( sinr ) of the typical ue , denoted by , is above a threshold : ,\label{eq : coverage_prob_def}\end{aligned}\ ] ] where the sinr is computed by where is the cumulative interference given by where is the bs associated with the typical ue and located at distance from the typical ue , and and are the path loss and the multi - path fading channel gain associated with the -th interfering bs , respectively . according to and , the area spectral efficiency ( ase ) in for a given can be computed by where is the minimum working sinr for the considered scn , and is the probability density function ( pdf ) of sinr observed at the typical ue at a particular value of .since can be defined as the complementary cumulative distribution function ( ccdf ) of sinr , can be expressed as [ thm : p_cov_uas2]considering the path loss model of ( [ eq : general_pl_model_our_work ] ) , can be derived as where {r , n}^{\textrm{l}}\left(r\right)dr ] , and and are respectively defined as and .moreover , and are represented as and furthermore , ] are respectively computed by & = & \exp\left(-\frac{\gamma n_{0}}{p\zeta_{n}^{\textrm{l}}\left(r\right)}\right)\mathscr{l}_{i_{r}}\left(\frac{\gamma}{p\zeta_{n}^{\textrm{l}}\left(r\right)}\right),\label{eq : pr_sinr_req_uas2_los_thm}\end{aligned}\ ] ] and & = & \exp\left(-\frac{\gamma n_{0}}{p\zeta_{n}^{\textrm{nl}}\left(r\right)}\right)\mathscr{l}_{i_{r}}\left(\frac{\gamma}{p\zeta_{n}^{\textrm{nl}}\left(r\right)}\right),\label{eq : pr_sinr_req_uas2_nlos_thm}\end{aligned}\ ] ] where is the laplace transform of rv evaluated at .see appendix a. given the definition of the coverage probability and the ase presented in ( [ eq : coverage_prob_def ] ) and ( [ eq : ase_def ] ) respectively , and using the path loss model of ( [ eq : general_pl_model_our_work ] ) , we present our main result on in theorem [ thm : p_cov_uas2 ] shown on the next page . plugging from ( [ eq : theorem_2 ] ) of theorem[ thm : p_cov_uas2 ] into ( [ eq : cond_sinr_pdf ] ) , we can get the ase using ( [ eq : ase_def ] ) . as can be seen from theorem [ thm : p_cov_uas2 ] ,the coverage probability , , is a function of the piece - wise path loss function and the piece - wise los probability function .we will investigate their impacts in the sequel .as a special case of theorem [ thm : p_cov_uas2 ] , we consider the path loss function , , together with the linear los probability function , which is both respectively recommended in the 3gpp [ 8 , 9 ] . considering the general path loss model presented in ( [ eq : general_pl_model_our_work ] ) , the path loss model presented in ( [ eq : pl_bs2ue_2slopes ] ) and ( [ eq : los_prob_func_linear ] ) can be deemed as a special case of ( [ eq : general_pl_model_our_work ] ) with the following substitution : , , , , and . for clarity , this 3gpp special case is referred to as 3gpp case 1 in the sequel . according to theorem [ thm : p_cov_uas2 ] , can be obtained as in the following subsections , we investigate , , , and , respectively . 
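Before specializing further, a simple way to sanity-check integral expressions of this type is a direct Monte Carlo experiment: drop a PPP of BSs around a typical UE at the origin, draw LoS/NLoS independently per link from the LoS probability function, apply Rayleigh fading, and count how often the SINR exceeds the threshold. The sketch below does this for a single-piece model with a linear LoS probability; all numerical values (path loss exponents, reference losses, d1, noise power) are placeholders rather than the 3GPP values used later, and the function names are illustrative.

```python
import numpy as np

def los_probability(r, d1=0.3):
    """Assumed linear LoS probability, decaying from 1 at r = 0 to 0 at r = d1 (km)."""
    return np.clip(1.0 - np.asarray(r) / d1, 0.0, 1.0)

def path_loss(r, rng, a_l=1e-10, a_nl=1e-14, alpha_l=2.1, alpha_nl=3.8):
    """Draw LoS/NLoS for each link of length r and return the attenuation zeta(r)."""
    r = np.asarray(r, dtype=float)
    los = rng.random(r.shape) < los_probability(r)
    return np.where(los, a_l * r ** (-alpha_l), a_nl * r ** (-alpha_nl))

def coverage_probability_mc(lam, gamma, p_tx=1.0, n0=1e-15, radius=5.0,
                            n_trials=20000, rng=None):
    """Monte Carlo estimate of Pr[SINR > gamma] for the typical UE at the origin,
    with BSs forming a PPP of intensity lam (BSs/km^2) in a disc of the given radius,
    nearest-BS association and i.i.d. Rayleigh fading."""
    rng = np.random.default_rng() if rng is None else rng
    covered = 0
    for _ in range(n_trials):
        n_bs = rng.poisson(lam * np.pi * radius ** 2)
        if n_bs == 0:
            continue
        r = radius * np.sqrt(rng.random(n_bs))       # radii of points uniform in the disc
        rx = p_tx * path_loss(r, rng) * rng.exponential(1.0, n_bs)
        k = np.argmin(r)                             # serving BS: the nearest one
        covered += rx[k] / (rx.sum() - rx[k] + n0) > gamma
    return covered / n_trials
```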
from theorem [ thm :p_cov_uas2 ] , can be derived as where according to theorem [ thm : p_cov_uas2 ] and ( [ eq : los_prob_func_linear ] ) , becomes furthermore , to compute in the range of , we propose lemma [ lem : laplace_term_uas2_los_seg1 ] .[ lem : laplace_term_uas2_los_seg1] in the range of can be calculated by where {}_{2}f_{1}\left[1,\frac{\beta+1}{\alpha};1+\frac{\beta+1}{\alpha};-td^{\alpha}\right],\label{eq : rou1_func } \end{aligned}\ ] ] and {}_{2}f_{1}\left[1,1-\frac{\beta+1}{\alpha};2-\frac{\beta+1}{\alpha};-\frac{1}{td^{\alpha}}\right],\label{eq : rou2_func } \end{aligned}\ ] ] where ] , the ase first exhibits a slowing - down in the rate of growth ( when ) or even a notable _ decrease _ in its absolute value ( when ) .this is attributed to the fast decrease of the coverage probability at around \,\textrm{bss / km}^{2} ] , which in turn causes the notable _ decrease _ of the ase at that range of when in fig .[ fig : ase_3gpp_linear ] . with the defined thresholds and , scns can be roughly classified into 3 categories , i.e. , the sparse scn ( ) , the dense scn ( ) and the very dense scn ( ) .the ases for both the sparse scn and the very dense scn grow almost linearly with the increase of , while the ase of the dense scn shows a slow growth or even a notable _ decrease _ with the increase of . from fig .[ fig : ase_3gpp_linear ] , we can get a new look at the ultra - dense scn , which has been identified as one of the key enabling technologies of the 5 g networks .up to now , there is no consensus in both industry and academia on that at what density a scn can be categorized as an ultra - dense scn . according to our study , for 3gpp case 1, we propose that the 5 g systems should target the third category of scns as ultra - dense scns , i.e. , the scns with , because the associated ase will grow almost linearly as increases . numerically speaking , is around from fig .[ fig : ase_3gpp_linear ] .it is particularly important to note that the second category of scns ( ) is better avoided in practical scn deployments due to its cost - inefficiency shown in fig .[ fig : ase_3gpp_linear ] .in this paper , we have shown that a sophisticated path loss model incorporating both los and nlos transmissions has a significant impact on the performance of scns , measured by the two metrics of the coverage probability and the ase .such impact is not only quantitative but also qualitative .specifically , our theoretical analysis have concluded that the ase will initially increase with the increase of the small cell density , but when the density of small cells is larger than a threshold , the network coverage probability will decrease , which in turn makes the ase suffer from a slow growth or even a notable _ decrease _ as the small cell density increases .furthermore , the ase will grow almost linearly as the small cell density increases above another larger threshold . according to our study , for 3gpp cases ,we propose that the 5 g systems should target the scns with as ultra - dense scns . numerically speaking, appears to be around several .the intuition behind our conclusion is that when the density of small cells is larger than a threshold , the interference power will increase faster than the signal power due to the transition of a large number of interference paths from nlos to los , and thus the small cell density matters ! 
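Since the ASE follows from the coverage probability through its definition, curves such as those in fig. [ fig : ase_3gpp_linear ] can be reproduced numerically from any p_cov(gamma), be it the analytical result above or the Monte Carlo estimate sketched earlier. Integrating the defining integral by parts, and assuming that log2(1+gamma) p_cov(gamma) vanishes at large gamma, gives the form evaluated below; the truncation at gamma_max and all names are numerical conveniences of this sketch.

```python
import numpy as np
from scipy.integrate import quad

def area_spectral_efficiency(p_cov, lam, gamma0, gamma_max=1e4):
    """ASE(lam, gamma0) = lam * int_{gamma0}^{inf} log2(1 + g) f(g) dg with f = -dp_cov/dg,
    evaluated via integration by parts as
    lam * ( log2(1 + gamma0) * p_cov(gamma0)
            + int_{gamma0}^{gamma_max} p_cov(g) / ((1 + g) ln 2) dg )."""
    tail, _ = quad(lambda g: p_cov(g) / ((1.0 + g) * np.log(2.0)), gamma0, gamma_max)
    return lam * (np.log2(1.0 + gamma0) * p_cov(gamma0) + tail)

# usage sketch: area_spectral_efficiency(lambda g: coverage_probability_mc(1e3, g), 1e3, 1.0)
```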
as our future work , we will incorporate more sophisticated ue association strategies and more practical multi - path fading model into the analysis of scns because the multi - path fading model is also affected by los and nlos transmissions .from ( [ eq : coverage_prob_def ] ) and ( [ eq : sinr ] ) , we can derive straightforwardly as {r}\left(r\right)dr\nonumber \\ & = & \int_{r>0}\textrm{pr}\left[\left.\frac{p\zeta\left(r\right)h}{i_{r}+n_{0}}>\gamma\right|r\right]f_{r}\left(r\right)dr\nonumber \\ & \stackrel{\bigtriangleup}{= } & \sum_{n=1}^{n}\left(t_{n}^{\textrm{l}}+t_{n}^{\textrm{nl}}\right),\label{eq : p_cov_general_form}\end{aligned}\ ] ] where and are piece - wise functions defined as {r , n}^{\textrm{l}}\left(r\right)dr ] , respectively .besides , and are respectively defined as and .moreover , and are the piece - wise pdfs of the event that the ue is associated with the nearest bs with an los path at distance and the event that the ue is associated with the nearest bs with an nlos path at distance , respectively .regarding , we define two events in the following , whose joint event is equivalent to the event that the ue is associated with a bs with an los path at distance .* event : the nearest bs is located at distance * event : the bs is one with an los path according to , the cumulative density function ( cdf ) of event with regard to is given by hence , taking the derivative of with regard to , yields the pdf of event as the pdf should be further thinned by the probability of event on condition of , which is , so that we can get the pdf of the joint event of and as regarding , we also define two events in the following , whose joint event is equivalent to the event that the ue is associated with a bs with an nlos path at distance .* event : the nearest bs is located at distance * event : the bs is one with an nlos path similar to ( [ eq : geom_dis_pdf_uas2_los ] ) , the pdf should be further thinned by the probability of event on condition of , which is , so that we can get the pdf of the joint event of and as as for the calculation of ] denotes the expectation operation by taking the expectation over the variable and denotes the complementary cumulative density function ( ccdf ) of rv .since we assume to be an exponential rv , we have and thus ( [ eq : pr_sinr_req_uas1_los ] ) can be further derived as &\hspace{-0.4cm}=\hspace{-0.3cm}&\mathbb{e}_{\left[i_{r}\right]}\left\ { \exp\left(-\frac{\gamma\left(i_{r}+n_{0}\right)}{p\zeta_{n}^{\textrm{l}}\left(r\right)}\right)\right\ } \hspace{1.2cm}\nonumber \\ & \hspace{-0.4cm}=\hspace{-0.3cm}&\exp\hspace{-0.1cm}\left(-\frac{\gamma n_{0}}{p\zeta_{n}^{\textrm{l}}\left(r\right)}\right)\hspace{-0.1cm}\mathscr{l}_{i_{r}}\hspace{-0.1cm}\left(\frac{\gamma}{p\zeta_{n}^{\textrm{l}}\left(r\right)}\right),\label{eq : pr_sinr_req_wlt_uas1_los}\end{aligned}\ ] ] where is the laplace transform of evaluated at . 
as for the calculation of ] in ( [ eq : laplace_term_los_uas1_seg1_proof_eq1 ] ) should consider interference from both los and nlos paths .thus , can be further derived as plugging into ( [ eq : laplace_term_los_uas1_seg1_proof_eq2 ] ) , and considering the definition of and in ( [ eq : rou1_func ] ) and ( [ eq : rou2_func ] ) , we can obtain shown in ( [ eq : lemma_6 ] ) , which concludes our proof .following the same approach in appendix b , it is ready to derive in the range of as similarly , in the range of can be calculated by our proof is thus completed by plugging ( [ eq : rou1_func ] ) and ( [ eq : rou2_func ] ) into ( [ eq : proof_lemma4_eq1 ] ) and ( [ eq : proof_lemma4_eq2 ] ) .following the same approach in appendix b , it is ready to derive in the range of as 10 d. l - p , m. ding , h. claussen , and a. h. jafari , `` towards 1 gbps / ue in cellular systems : understanding ultra - dense small cell deployments , '' arxiv:1503.03912 [ cs.ni ] , mar .2015 .m. haenggi , j. g. andrews , f. baccelli , o. dousse , and m. franceschetti , stochastic geometry and random graphs for the analysis and design of wireless networks , _ ieee j. sel .areas commun .27 , no . 7 , pp .10291046 , sep .
|
in this paper , we introduce a sophisticated path loss model incorporating both line - of - sight ( los ) and non - line - of - sight ( nlos ) transmissions into the stochastic geometry analysis to study their performance impact in small cell networks ( scns ) . analytical results are obtained for the coverage probability and the area spectral efficiency ( ase ) assuming both a general path loss model and a special case of the path loss model recommended by the 3rd generation partnership project ( 3gpp ) standards . compared with previous work that does not differentiate los and nlos transmissions , the performance impact of los and nlos transmissions in scns on the coverage probability and the ase is shown to be significant both quantitatively and qualitatively . in particular , our analysis demonstrates that when the density of small cells is larger than a threshold , the network coverage probability will decrease as small cells become denser , which in turn makes the ase suffer from a slow growth or even a notable _ decrease _ . for the practical regime of small cell density , the performance results derived from our analysis are distinctly different from previous results , and shed new insights on the design and deployment of future dense / ultra - dense scns .
|
in comparison to their passive counterparts , active suspensions exhibit a wide range of non - classical phenomena .such systems share the feature of self - motility which induces motions and mechanical stresses through the conversion of stored chemical - energy or ambient potential - energy into mechanical work .general descriptions and discussions of such systems can be found in the review articles by pedley and kessler , ramaswamy , koch and subramanian , and marchetti et al . .one of the most ubiquitous phenomena observed in active suspensions is the presence of correlated large - scale motions over length scales which exceed those associated with the energy injected by active agents .such patterns have been observed in experiments involving sessile droplets of self - motile bacterial suspensions ( dombrowski et al . , tuval et al . , cisneros et al . ) , two - dimensional free - standing self - motile bacterial films ( wu and libchaber , sokolov et al . , sokolov and aranson , sokolov et al . , and liu and i ) , quasi - two - dimensional thin layers of self - motile bacterial suspensions on surfaces ( zhang et al. , cisneros et al . , wensink et al . , and peruani et al . ) , as well as in bacterial suspensions in three - dimensional microfluidic chambers ( dunkel et al . see also the associated viewpoint by aranson ) .moreover , spatially extended motion patterns similar to those observed in self - motile bacterial suspensions have been observed in motility assays consisting of protein filaments propelled by molecular motors ( see , for example , ndlec et al . , surrey et al . , schaller et al . , simuno et al . , sanchez et al . ) and in the context of the swarming , herding , and flocking of fish , bird , or other animal colonies , as studied theoretically and numerically in the pioneering work of vicsek et al . .the observation of similar phenomena in active - matter systems for which disparate mechanisms underlie self - motility points at the presence of universal phenomena . however , the exact nature of the underlying kinematical and kinetic mechanisms leading to such phenomena are only partially understood .further , to what extent these mechanisms share universal features seems unclear .various different models aimed at reproducing , explaining , and predicting spatially extended motion in active suspensions have been proposed and investigated over the last decade .the present article focuses on agent - based models .some agent - based approaches treat self - propelled agents as point particles that interact via with repulsive and attractive forces ( see , for example , dorsogna et al . , chuang et al . , and carrillo et al .other agent - based models rely on local and non - local interaction and alignment rules motivated from phenomenological observations in swarms and flocks ( see , for example , vicsek et al . , cucker and smale , carrillo et al . , and degond and motsch ) .since the velocity and direction of motion of each individual self - propelled agent adjusts to the velocity and direction of its neighbours , numerical simulations based on such models exhibit behaviour in agreement with observed features of spatially extended correlated motion , flocking , or aggregation and alignment . 
however , the aggregation and alignment phenomena are _ prescribed _ through phenomenological interaction rules .it is therefore unreasonable to expect such models to provide insights regarding the mechanisms _ underlying _ these phenomena .a complimentary approach to modeling and understanding active suspensions is provided by agent - based models that do _ not _ include prescribed alignment rules or attractive forces but rather rely on mechanically motivated forces , such as volume exclusion forces penalizing particle overlap , frictional forces accounting for interaction with the surrounding solvent , and random forces accounting for thermal fluctuations and intrinsic noise due to the swimming mechanisms of individual agents .these interaction forces are usually combined with the assumption that the self - propelled agents have rod - like geometry .for example , peruani et al . found particle clustering for sufficiently large aspect ratios and rod concentrations in a two - dimensional model of self - propelled rods that interact solely through a soft volume exclusion force that penalizes overlap .similar results were obtained with self - propelled rod models by yang et al . , who used a collection of shifted and truncated lennard jones point potentials aligned along the axis of each rod , and by wensink et al . and wensink and lwen , who used a discrete set of repulsive yukawa point potentials aligned along the axis of each rod . in these models ,the agents are assumed to be immersed in an overdamped solvent modeled through stokes - type friction forces that act on each agent . however , none of these models accounts for pairwise hydrodynamic interactions between self - propelled agents . a recent experimental investigation by drescher et al . suggests that mechanical interactions between self - motile bacteria are dominated by short - range lubrication forces . and are the orthogonal unit - valued basis vectors in the two - dimensional periodic domain.,title="fig : " ] ( -60 , 30) ( -150,70) ( -100,30) ( -70,100) ( -17 , 47) ( -190 , 43) ( -180 , 0) ( -210 , 30) ( -160 , 28) ( -48 , 50) to overcome this drawback of agent - based models , we propose a model based on purely repulsive self - propelled soft - core dumbbells with random forces and an additional type of interaction forces , namely pairwise dissipative forces .these forces are conventional ingredients of the dissipative particle dynamics ( dpd ) method , which is a mesoscopic particle - based simulation method widely used to study complex fluids and colloidal suspensions .importantly , in agent - based models , where there is no explicit accounting for the suspending solvent , such pairwise dissipative interactions provide a simple means to account for hydrodynamic interactions between the agents .the inclusion of such pairwise dissipative forces in the dpd dumbbell model yields a novel model system to study the influence of hydrodynamic interaction forces in active suspensions .the dpd method was pioneered by hoogerbrugge and koelman . as a mesoscale particle simulation method ,it was originally designed to explore hydrodynamic behavior in complex fluids , as discussed , for example , in a review article by moeendarbary et al . . 
since their governing equations are usually simpler than the partial - differential equations arising from suitable continuum alternatives , mesoscale particle simulation methods are promising tools for simulating complex fluid flows in nonsimple geometries .further , structure of the equations of the dpd method resembles that of molecular dynamics ( md ) and , thus , can be solved efficiently in existing md codes .moreover , as groot and warren explain , the use of soft - core interaction potentials in the dpd method allows for large integration timesteps and stable simulations , providing a significant speedup in comparison to md simulations with hard - core interaction potentials .while the dpd method has a number of other desirable properties , such as , for example , net - momentum conservation and galilean invariance ( see , for example , allen and schmid ) , the ability to produce hydrodynamic behaviour with computational efficiency also makes it a potentially powerful tool for the simulation of active suspensions . an active brownian point - particle model with supplemental dissipative dpd - type interactionswas recently studied by lobaskin and romenskyy .they report a transition to an ordered swarming " state with increasing particle concentration and energy input through increasing magnitude of the self - propulsion force . based on comparison of correlation functions and order parameters obtained from their model with these of the vicsek model , lobaskin and romenskyy that pairwise dissipative interactions can be seen as an alternative alignment mechanism . in the present article, the dpd method is adopted to study a system of self - propelled agents .while the model is potentially applicable to a wide range of self - propelled particle systems , the specific class of self - motile bacterial suspensions is considered as a modeling scenario .using this model , the central aim of this study is to investigate the influence of hydrodynamic interaction forces .particular attention is placed on the influence of stokes - type friction forces and pairwise dissipative interactions on the behavior of the system along with limiting cases wherein either of the two dissipative forces dominates .such a generic study of hydrodynamic interaction forces is meant to assess the importance of different interaction mechanisms .this article is structured as follows . in section [ sec : sppmodel ] , the self - propelled soft - core dumbbell model is introduced . in section [ sec : nondim ] , the salient dimensionless parameters of the model are derived . in section [ sec : forces ] , the roles of the pairwise interaction forces and non - pairwise forces are explained in detail . 
in section [ sec :sppnumericalstudy ] , numerical simulations are performed to examine the influences of pairwise and non - pairwise dissipative forces on the statistics of the system in a two - dimensional square domain with periodic boundary conditions .finally , a summary and concluding remarks are provided in section [ sec : sppsummary ] .the motion of a particle with constant particle mass in a system of particles is governed by newton s equations where a superposed dot denotes time differentiation , and denote the location and velocity of particle , is the pairwise particle interaction force between particles and , and is the external force excerted on particle .the pairwise interaction forces of the model are based on the pairwise interaction forces of the dpd method ( see , for example , groot and warren ) where is purely conservative force , is a purely dissipative force , is a random force , is the distance between particles and , and is the cutoff radius for pairwise interactions . following groot and warren ,these forces are given as \hat{{{\bf e}}}^{ij } , \\ { { \bf f}}^{ij}_{\text{r}}= \sqrt{2\gamma k_b t } w(r ) \alpha^{ij } ( \delta t)^{-1/2 } \hat{{{\bf e}}}^{ij } , \end{array } \!\ !\right\}\ ] ] in which defined such that is the soft - core weighting function , is the unit vector directed from particle to particle , is a random number , is the conservative force parameter , and is the pairwise friction parameter . based on the assumption that all particle interactions are governed by the same parameters , and are constant and independent of and . in , , with and boltzmann s constant and the absolute temperature , provides a reference energy scale and is the simulation timestep . together with the pairwise friction parameter , characterizes the magnitude of the pairwise random forces accounting for brownian fluctuations . to mimic the rod - like geometry of a bacterium ,two dpd particles are connected by a stiff harmonic spring to form an aggregate dpd molecule , as depicted schematically in figure [ dpdschematic01 ] .the bond between two dpd particles forming a single dumbbell is modeled with a harmonic spring potential ( see , for example , schwarz linek et al . ) of the form where is the constant spring stiffness and is the equilibrium length between two bonded particles .notice that must be chosen consistent with the relatively inextensible nature of a bacterium .since two particles form a single agent , the bond forces are considered exclusively between these two particles and other possible interaction forces are switched off .apart from standard pairwise dpd forces and dumbbell bond forces associated with the harmonic spring potential , self - propulsion and stokes friction are incorporated through non - pairwise forces with the constant stokes friction parameter and the magnitude of the constant self - propulsion force along the unit - valued director of each agent , as shown in figure [ dpdschematic01 ] .the unit - valued director is defined through the location vectors of the particles and forming the single agent notice that indices and are used to refer to particles , whereas the index is used to refer to one agent consisting of two particles .importantly , the model involves two types of dissipative forces : 1 .the stokes friction acting on particle .the dissipative interaction force defined in acting between a pair of particles and . 
while the former is proportional to the velocity of particle , the latter is proportional to the relative velocity of the two particles and the soft - core weighting function defined in .the magnitudes of the dissipative forces are determined by the friction parameters and , which correspond respectively to the pairwise dissipative interaction force and the stokes friction .to obtain a dimensionless version of the formulation , introduce the characteristic length of a bacterium ( as modeled by a dumbbell ) , a characteristic magnitude of the self - propulsion force , and a characteristic friction parameter .define dimensionless counterparts , , , , and of , , , , and by since the equations involve the product , which carries dimensions of energy , there is no need to explicitly include a temperature in the nondimensionalization . applying to yields the dimensionless equations of motion , where a superposed dot now denotes differentiation with respect to the nondimensional time .similarly , in view of and , the dimensionless versions of the pairwise forces are given by \hat{{{\bf e}}}^{ij } , \\ { { \bf f}}^{*ij}_{\text{r}}= \sqrt{2\frac{\gamma}{\gamma_0 } \frac{k_b t}{f_0 l_0 } } w(r^ * ) \alpha^{ij } ( \delta t^*)^{-1/2 } \hat{{{\bf e}}}^{ij } , \end{array } \!\ !\right\}\ ] ] and the dimensionless self - propulsion force and stokes friction are given by inspection of and identifies the following important dimensionless parameters : while characterizes the ratio of conservative pairwise interaction forces to self - propulsion forces , is an active " pclet number ( see schwarz - linek et al . ) characterizing the importance of self - propulsion energy relative to the energy associated with random forces due to thermal fluctuations and intrinsic fluctuations of the swimming mechanisms of self - propelled agents . for ,random forces dominate over self - propulsion forces .the limit encompasses the scenario in which random forces are negligible in comparison to self - propulsion forces .the dimensionless friction parameters and respectively characterize pairwise dissipative interactions and stokes - type dissipative forces . with, the model explicitly allows for the limiting cases of vanishing stokes friction or vanishing pairwise dissipative interactions .more particularly , the limiting case represents the limit of negligible hydrodynamic interaction forces and dominant stokes friction .conversely , the limit case of represents the limit case of negligible stokes friction and dominant pairwise hydrodynamic interaction forces . from a physical perspective , it is reasonable to expect that while stokes friction should dominate in the regime of dilute agent concentrations , the influence of pairwise interaction forces become significant with increasing agent concentration . on substituting into and ,the pairwise and non - pairwise forces become \hat{{{\bf e}}}^{ij } , \\ { { \bf f}}^{*ij}_{\text{r}}= \sqrt{2\gamma \text{pe}^{-1 } } w(r^ * ) \alpha^{ij } ( \delta t^*)^{-1/2 } \hat{{{\bf e}}}^{ij } , \end{array } \!\ !\right\}\ ] ] and respectively . for convenience ,asterisks appearing in , , and are omitted hereafter .the model involves a collection of pairwise and non - pairwise interaction forces and .the nature of these forces may be summarized as follows : 1 .the self - propulsion force : this non - pairwise entity acts on each agent and accounts for the motility of each agent contained in the active suspension .2 . 
the stokes friction : this non - pairwise entity accounts for hydrodynamic forces exerted by the surrounding solvent , acting on each particle forming the aggregate agent in the active suspension .3 . the pairwise conservative force : this pairwise entity acts as a soft volume - exclusion force . together with the cutoff radius ,this force defines the geometry and size of each particle .( see also the discussion of the hydrodynamic radius of a dpd particle in section [ sec : param ] . )4 . the pairwise random force : this pairwise entity accounts for thermal fluctuations due to brownian motion and intrinsic fluctuations in the swimming mechanisms of each agent . 5 .the pairwise dissipative interaction force : this pairwise entity accounts for dissipative hydrodynamic interaction forces distinct from those encompassed by non - pairwise stokes friction .since this force is proportional to velocity differences of interacting particle pairs , it formally resembles a lubrication force ( see , for example , kim and karrila ) .further , the pairwise dissipative forces of the model depend quadratically on the dimensionless separation distance between interacting particle pairs through the square of the soft - core weighting function , which differs from the dependence of lubrication forces on a normalized separation distance. lubrication forces of two approaching spheres possess a hard - core functional dependence on the normalized separation distance between the spheres that is proportional to as .the model under consideration is intended to characterize the influence of such pairwise dissipative forces in a general context inspired by the dpd framework . at this stage ,the model does not account for alternative kinds of weighting functions .however , the impact of choosing different weighting functions should be a subject of future research .being a purely mechanical theory based on pairwise interaction forces and non - pairwise forces , the present model deviates from other available agent - based models .most importantly , the model does not include a priori biological alignment " rules or ad - hoc coordination forces .these ingredients are often incorporated through long - range attraction forces or specifically prescribed alignment or clustering requirements . 
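To make the force ingredients summarized above concrete, the sketch below evaluates the dimensionless pairwise forces for one particle pair and the non-pairwise self-propulsion and Stokes terms for one set of dumbbells. The sign and direction conventions, the Groot-Warren choice w_D = w^2 for the dissipative weight, and the Gaussian draw for the random amplitude alpha are assumptions of this illustration, as are all names; the bond force of the stiff harmonic spring is omitted for brevity.

```python
import numpy as np

def dpd_pair_force(xi, xj, vi, vj, a, gamma_pair, pe, r_c, dt, rng):
    """Dimensionless pairwise force on particle i due to particle j:
    conservative + dissipative + random, cut off at r_c and weighted by the
    soft-core function w(r) = 1 - r / r_c."""
    rij = xi - xj
    r = np.linalg.norm(rij)
    if r >= r_c or r == 0.0:
        return np.zeros_like(xi)
    e = rij / r
    w = 1.0 - r / r_c
    f_c = a * w * e                                        # soft volume exclusion
    f_d = -gamma_pair * w ** 2 * np.dot(e, vi - vj) * e    # pairwise dissipation
    f_r = np.sqrt(2.0 * gamma_pair / pe) * w * rng.standard_normal() / np.sqrt(dt) * e
    return f_c + f_d + f_r

def non_pairwise_forces(x, v, f_sp, gamma_stokes):
    """Self-propulsion of magnitude f_sp along each dumbbell's director plus Stokes
    friction -gamma_stokes * v; x, v are (2M, 2) arrays in which particles 2k and
    2k+1 form agent k."""
    d = x[0::2] - x[1::2]
    n = d / np.linalg.norm(d, axis=1, keepdims=True)       # unit director of each agent
    return f_sp * np.repeat(n, 2, axis=0) - gamma_stokes * v
```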
here ,the only interaction forces are short - ranged repulsive , random , and dissipative ; in particular , no long - range attraction forces are introduced .further , it is left to the underlying dynamics of the system to determine the extent to which alignment or clustering develops .the approach taken here allows for investigations of the statistical behaviour of a generic self - propelled particle model system without biological rules , but with pairwise dissipative hydrodynamic interaction forces .the ensuing investigations are aimed at revealing universal features of self - propelled particle systems .while biological alignment and clustering rules or coordination forces might well exist , the characterization of such rules seems elusive , at least at present .the model used in the present work can not be expected to capture phenomena associated with such rules or forces .however , it enables a clear separation of purely mechanical phenomena from other erstwhile biological phenomena and provides a basis for characterizing regimes in which certain phenomena might dominate over other phenomena .in this section , results from numerical simulations based on the equations of motion with the pairwise interaction forces and non - pairwise forces along with the stiff harmonic spring constraint are presented .these simulations are conducted in a two - dimensional square domain with periodic boundary conditions .simulations are performed using a customized version of the molecular dynamics package lammps .the standard velocity - verlet time integration scheme with a dimnesionless integration timestep of is used for all simulations .initially , the agents are taken to be randomly distributed and randomly oriented throughout the computational domain .the initial velocities of the agents are set to zero .focus is on the influences of the volume fraction and two friction parameters and on the statistics of the system .all results presented are from the ( statistically ) steady - state regime and represent space - time averages . to compute statistical objects , snapshots of the agent and velocity distribution at different timesteps in the steady - state regimeare collected every timesteps .next , statistical objects computed for at least 10 such snapshots are averaged , the resulting statistical object representing a space - time average . the parameters entering the equations of motion along with pairwise and non - pairwise forces are determined from estimates and references in the literature .apart from the cutoff radius , pan et al . observed that the characteristic linear dimension of an individual dpd particles may be distiguished through a nondimensional hydrodynamic radius .the quantity and the nondimensional length of a single agent ( figure [ dpdschematic01 ] ) are used to define the aspect ratio ; for illustrative purposes , it is assumed that and . following pan et al . , the nondimensional cutoff radius takes the value . using the choice , the magnitude of the conservative interaction parameter may be estimated by adopting the view that , together with the nondimensional cutoff radius , the magnitude of the repulsive volume - exclusion force defines of the linear dimensions of a single particle . 
in other words , the repulsive volume - exclusion force is expected to ( statistically ) balance other forces acting on the particle to prevent particle overlap beyond the hydrodynamic radius . granted that the self - propulsion forces are dominant , an approximate balance between the self - propulsion force and the magnitude of the conservative volume - exclusion force of two isolated interacting particles leads to the estimate and , thus to where inertia and other forces have been neglected ; the estimate gives an approximate relationship between the nondimensional self - propulsion force and the nondimensional repulsive volume - exclusion force depending on the nondimensional cutoff radius and the nondimensional hydrodynamic radius . with , , and the choice , gives the estimate . since accounts for only two isolated particles and neglects inertia and all other interaction forces , it should be interpreted as a lower bound for ; it hence seems reasonable to choose . furthermore , the value of the active péclet number is taken to be , as estimated by schwarz - linek et al . since bacteria are nearly inextensible , a large value for the nondimensional spring stiffness is selected to ensure that the length of the agents remains practically constant . with all the previously discussed parameters set , the remaining free parameters are the friction coefficients and along with the concentration of the bacteria . to measure the latter quantity , a nondimensional area fraction is introduced , with the dimensionless edge length of the square computational domain and the number of agents in the computational domain , ranging from for to for . the influence of the pairwise dissipative interactions along with the volume fraction on the agent distributions is now considered . figure [ fig : particlefield01 ] displays the agent distribution fields with various values of for a fixed value of and four choices of the stokes friction parameter , namely , , , and the limiting case of . visual inspection of the agent distribution fields indicates that , with decreasing stokes friction and in the limiting case of vanishing stokes friction ( dominating pairwise dissipative interactions ) , the system develops agent density fluctuations with regions of high - density agent aggregation and regions with lower agent concentration . as might be expected , this effect is more prominent with increasing area fraction . a qualitative understanding of the spatial structures of the velocity fields of the self - propelled agents is provided by considering a filtered and projected velocity field . the filtering and projection procedure is explained in detail in section [ sec : filtesp ] . vortical structures in the velocity field are studied by considering the scalar measure of vorticity , where is a unit vector normal to the computational domain and the vorticity . an illustrative qualitative comparison of the velocity and vorticity fields between the case and the limiting case is provided in figure [ fig : velfield01 ] , showing snapshots of the filtered and projected velocity field along with contours of the scalar vorticity in a region of the computational domain . visual inspection of figure [ fig : velfield01 ] suggests that velocity fields of both cases exhibit vortical structures of different characteristic size , specific to each case .
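as a concrete illustration of the vorticity measure used here , the following sketch evaluates the z - component of the curl of a gridded two - dimensional velocity field with periodic central differences ; the grid layout and spacings are assumptions and not the paper 's implementation .

```python
import numpy as np

def scalar_vorticity(ux, uy, dx, dy):
    """z-component of the curl of a 2D velocity field on a uniform periodic grid.

    ux, uy: 2D arrays indexed as [iy, ix]; dx, dy: grid spacings.
    """
    # central differences built from periodic shifts, so the stencil wraps at the edges
    duy_dx = (np.roll(uy, -1, axis=1) - np.roll(uy, 1, axis=1)) / (2.0 * dx)
    dux_dy = (np.roll(ux, -1, axis=0) - np.roll(ux, 1, axis=0)) / (2.0 * dy)
    return duy_dx - dux_dy
```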
the limiting case of leads to larger structures with correlated velocities over spatially extended regions of greater characteristic size , as can be seen from the scalar vorticity contours and the velocity vectors shown in figure [ fig : velfield01 ] ( b ) .further , the limiting case of exhibits larger positive and negative peak magnitudes of , as is evident from the different color scales in figure [ fig : velfield01 ] ( a ) and ( b ) . a more quantitative understanding of the spatial structure of the velocity field is provided by the normalized one - time spatial two - point velocity correlation function ( vcf ) defined by ( see , for example , cisneros et al . ) with the dimensionless center of mass ( com ) velocity of agent and the spatial average over the computational domain conditioned on the separation distance . specifically , given a quantity depending on and , denotes its spatial average over all agents with separation distance in the computational domain , where is the number of agents satisfying the condition .the vcf is a measure of the correlation of the velocity field over different length scales given through the separation distance .for example , the limit corresponds to a velocity field with statistically perfectly correlated velocities on length scales of linear dimension . on the other hand ,the limit corresponds to a velocity field with statistically uncorrelated velocities on length scales of linear dimension .heuristically , if a velocity field exhibits a high correlation at a certain length scale , this means that on average spatial regions of size move with the same velocity , forming spatial structures of size .the vcfs of three different types of systems , representing the previously discussed limiting cases for the two friction parameters and for selected representative volume fractions are considered . in particular , the vcfs of systems with stokes friction only are shown in figure [ fig : vcf ] , the vcfs of three systems with stokes friction and pairwise dissipative interactions are shown in panels ( a)(c ) of figure [ fig : vcf02 ] , and the vcfs of a system with pairwise dissipative interactions only are shown in panel ( d ) of figure [ fig : vcf02 ] .the weakest correlations , evident through non - zero values of the vcf only for very small separation distances , are found in systems with stokes friction only , as shown in figure [ fig : vcf ] .further , systems with stokes friction and pairwise dissipative interactions exhibit correlated velocities over a broader range of separation distances as shown for three examples of , , and in panels ( a)(c ) of figure [ fig : vcf02 ] , respectively . 
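a minimal implementation of the pair - averaged vcf described above might look as follows ; the bin widths , the minimum - image treatment of the periodic domain , and the normalization by the mean squared velocity are standard choices and stand in for details that the text does not specify .

```python
import numpy as np

def velocity_correlation(pos, vel, box, r_max, nbins=50):
    """Normalized two-point velocity correlation C(r) from agent positions/velocities.

    pos, vel: (N, 2) arrays of centre-of-mass positions and velocities;
    box: edge length of the periodic square domain.
    """
    n = len(pos)
    edges = np.linspace(0.0, r_max, nbins + 1)
    num = np.zeros(nbins)
    cnt = np.zeros(nbins)
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)              # minimum-image convention
        r = np.hypot(d[:, 0], d[:, 1])
        dots = vel[i + 1:] @ vel[i]               # pairwise velocity dot products
        idx = np.searchsorted(edges, r, side="right") - 1
        ok = (idx >= 0) & (idx < nbins)
        np.add.at(num, idx[ok], dots[ok])
        np.add.at(cnt, idx[ok], 1.0)
    corr = np.where(cnt > 0, num / np.maximum(cnt, 1), 0.0)
    return 0.5 * (edges[:-1] + edges[1:]), corr / np.mean(np.sum(vel**2, axis=1))
```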
confirming the observations from visual inspection of the simulation snapshots , in the limiting case of pairwise dissipative interactionsonly , spatially extended strong correlations are evident up to large separation distances , as shown in panel ( d ) of figure [ fig : vcf02 ] .for the case of pairwise dissipative interactions only , the correlations increase consistently with increasing ( panel ( d ) of figure [ fig : vcf02 ] ) , in contrast to the cases of combined stokes and pairwise dissipative interactions ( panels ( a)(c ) of figure [ fig : vcf02 ] ) and stokes friction only ( figure [ fig : vcf ] ) , where the area fraction has only a weak influence on the vcfs .based on the vcf , it is useful to define a dimensionless integral length scale ( see , for example , pope ) which is the ( nondimensional ) area under the vcf and quantifies the average characteristic linear dimension of spatial structures of the com velocity field .the features of for the entire range of volume fractions and pairwise friction parameters considered are presented for two cases of stokes friction and pairwise dissipative interactions ( and ) and pairwise dissipative interactions only ( ) , as shown in panels ( a ) , ( b ) , and ( c ) of figure [ fig : integralscale ] , respectively .notice that the cases and shown in panels ( a ) and ( b ) of figure [ fig : integralscale ] both include the limiting case of stokes friction only that is , the case . for , is nearly constant for different values of the area fractions and increases only weakly with an increase in , as shown in panel ( a ) of figure [ fig : integralscale ] . for lower values of stokes friction , increases with increasing , as shown in panel ( b ) of figure [ fig : integralscale ] .consistent with the previously discussed vcfs , the smallest values of over the entire range of arise if only stokes friction is active , as shown in the limiting cases of in panels ( a ) and ( b ) of figure [ fig : integralscale ] . in contrast , for , significantly increases with increasing , as shown in panel ( c ) of figure [ fig : integralscale ] .this effect becomes substantially more prominent with increasing . 
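the integral length scale is then simply the area under the vcf ; the short sketch below uses trapezoidal quadrature and integrates up to the largest binned separation , which is an assumption about the upper limit of the integral .

```python
import numpy as np

def integral_length_scale(r, corr):
    """(Nondimensional) area under the velocity correlation function.

    r, corr: bin centres and C(r) values, e.g. as returned by the routine above.
    """
    return np.trapz(corr, r)
```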
in conclusion , vanishing stokes friction in combination with large pairwise dissipative interactions leads to spatially extended correlated motion patterns . while present at all considered values of , this effect becomes substantially more prominent with increasing . bearing in mind that self - propulsion forces inject energy into the system on a length scale on the order of the dimensionless length of an individual agent , this behaviour points to the presence of upscale energy transfer and energy condensation leading to the formation of spatially extended structures in the com velocity field . an explanation for the observed behavior can be found by considering the microscopic interaction mechanism between particle pairs . dominating pairwise dissipative interactions reduce the elasticity of the forces between particle pairs . for example , consider two approaching particles . without pairwise dissipative interactions , the particles experience an elastic repulsive force that eventually reverses their direction of motion . however , this behaviour is not observed in actual systems of interacting self - motile bacteria . instead , short - ranged hydrodynamic interactions between the agents make the interactions inelastic . the model accounts for this mechanism through the simple pairwise dissipative interactions . although pairwise dissipative interactions provide only a very simple model for hydrodynamic interaction forces , the observed behavior suggests that such forces drive the formation of spatially extended correlated motion patterns in self - propelled particle systems . to explore the energetic properties of the system , two - dimensional kinetic - energy spectra are considered . to compute these spectra , the particle velocity fields at all particle locations are filtered and projected onto a uniform grid with equidistant spacing in both coordinate directions . the uniform grid consists of a set of discrete spatial points , where is the number of uniformly spaced grid points in each coordinate direction . further , let denote the filtered and projected velocity field at a spatial point on the uniform grid , as computed in terms of the particle velocity fields through the convolution filter operation where is a sufficiently rapidly decaying filtering kernel and the number of particles for which . in the present study , a square grid filter with the nondimensional filter width , is used to filter and project the particle velocity field . whereas the special case of , with the uniform spacing between grid points , represents a non - overlapping grid filter , a choice of results in overlapping filter bins and , thus , is referred to as an overlapping grid filter . in the present study , an overlapping grid filter is considered and the bin size is determined based on the desired average number of agents in each filter bin . importantly , , , and are related by where is the desired average number of agents in each filter bin . to compute meaningful averages , is chosen according to and such that every filter bin contains approximately agents . to estimate the number of points of the uniform grid necessary to resolve the energy spectrum in sufficient detail , consider the approximate length scale of small - scale energy injection through the self - propulsion forces . this scale may be estimated to be on the order of the length of an individual bacterium , that is , .
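the filtering and projection step can be sketched as a top - hat average of particle velocities in overlapping square bins centred on the grid points ; the rule used below to set the filter width from a target number of agents per bin mirrors the relation described above , but the specific choices are illustrative .

```python
import numpy as np

def filter_velocity_to_grid(pos, vel, box, n_grid, n_per_bin=10):
    """Project particle velocities onto a uniform grid with an overlapping top-hat filter.

    The filter width is set so that each square bin contains roughly n_per_bin
    agents on average (an assumption standing in for the paper's choice).
    Returns (ux, uy) on an n_grid x n_grid periodic grid.
    """
    n = len(pos)
    width = box * np.sqrt(n_per_bin / n)          # bin area times number density ~ n_per_bin
    centres = (np.arange(n_grid) + 0.5) * box / n_grid
    ux = np.zeros((n_grid, n_grid))
    uy = np.zeros((n_grid, n_grid))
    for iy, yc in enumerate(centres):
        dy = pos[:, 1] - yc
        dy -= box * np.round(dy / box)
        in_y = np.abs(dy) <= 0.5 * width
        for ix, xc in enumerate(centres):
            dx = pos[:, 0] - xc
            dx -= box * np.round(dx / box)
            mask = in_y & (np.abs(dx) <= 0.5 * width)
            if mask.any():
                ux[iy, ix] = vel[mask, 0].mean()
                uy[iy, ix] = vel[mask, 1].mean()
    return ux, uy
```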
to capture the motion at length scales on the order of , it is thus required that the grid spacing used to compute the energy spectrum obeys . bearing in mind the ratio , the number of points in each coordinate direction on the uniform grid must therefore obey . for the present purpose , it is assumed that . let denote the complex fourier coefficient corresponding to , so that with a finite collection of fourier modes and the two - dimensional normalized wavenumber vector . in view of , the two - dimensional kinetic - energy spectrum of the filtered velocity field is defined as where is the magnitude of . the choice of the number of points on the uniform grid corresponds to the maximum representable normalized wavenumber and the energy injection length scale corresponds to the normalized energy injection wavenumber . to investigate the relevance of power - law scalings , the energy spectrum is normalized by the energy contained in the first wavenumber shell . for two- and three - dimensional flows of newtonian fluids , energy spectra exhibit power - law scalings of the general form with and dimensionless constants . in particular , corresponds to kolmogorov 's scaling for the inertial range of three - dimensional homogeneous and isotropic navier - stokes ( ns ) turbulence ( see , for example , frisch ) . notice that the vcf and the energy spectrum form a fourier transform pair and that the intermediate wavenumber power - law slope of the energy spectrum is related to the integral length scale of the vcf ( see , for example , pope ) . in classical turbulence the discussion of spectral energy distribution and the corresponding scaling laws is usually accompanied by a notion of scale separation ( see , for example , pope ) . in the spectral distribution of energy , low wavenumber contributions are associated with an integral range , intermediate wavenumber contributions are associated with an inertial range , and high wavenumber contributions are associated with a dissipation range . despite a tentative resemblance between the statistics for the energy spectrum obtained in the present setting and phenomena observed in classical turbulence , the notion of scale separation in the context of active suspensions is unorthodox . it therefore seems reasonable to distinguish between large , intermediate , and small scales associated with low , intermediate , and high wavenumbers , respectively , rather than integral , inertial , and dissipation ranges ( as is conventional in discussions of turbulent fluid flow ) . in the present investigation , power - law scaling exponents are computed with a least - squares fit of the energy spectrum to the power law in the interval of the first 10 wavenumber shells ( that is , ) . the influences of stokes friction only and mixed stokes and pairwise dissipative interactions on are presented in figure [ fig : espn01 ] . in particular , plots of are provided for different combinations of stokes friction and pairwise dissipative interactions . two different cases of stokes friction are displayed , namely in panels ( a)-(c ) and in panels ( d)-(f ) . the pairwise dissipative interaction force includes the limiting case of and two mixed cases ( and , ) .
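a compact way to obtain the shell - averaged spectrum and the low wavenumber exponent from the gridded velocity field of the previous sketch is shown below ; the normalization by the first shell and the least - squares fit over the first 10 shells follow the procedure described in the text , while prefactors and windowing are omitted for simplicity .

```python
import numpy as np

def energy_spectrum(ux, uy):
    """Shell-averaged 2D kinetic-energy spectrum of a gridded velocity field.

    Returns integer wavenumber shells k = 1, 2, ... and E(k) normalized by the
    energy contained in the first shell.
    """
    n = ux.shape[0]
    uxh = np.fft.fft2(ux) / n**2
    uyh = np.fft.fft2(uy) / n**2
    e2d = 0.5 * (np.abs(uxh)**2 + np.abs(uyh)**2)
    kx = np.fft.fftfreq(n, d=1.0 / n)             # integer wavenumbers
    kmag = np.hypot(*np.meshgrid(kx, kx, indexing="ij"))
    shells = np.arange(1, n // 2)
    ek = np.array([e2d[(kmag >= s - 0.5) & (kmag < s + 0.5)].sum() for s in shells])
    return shells, ek / ek[0]

def low_k_exponent(shells, ek, n_shells=10):
    """Least-squares power-law exponent of E(k) over the first n_shells shells."""
    k, e = shells[:n_shells], ek[:n_shells]
    return np.polyfit(np.log(k), np.log(e), 1)[0]
```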
while the energy spectra exhibit a positive scaling law with the scaling exponent at low wavenumbers , they quickly decrease at high wavenumbers with an approximate power - law slope of . these results indicate that the low and high wavenumber behaviours appear to be universal in the sense that all the considered cases feature a low wavenumber power - law scaling with and a high wavenumber scaling with . this result is confirmed in more detail in panels ( a ) and ( b ) of figure [ fig : espscale ] . the figure displays the low wavenumber scaling exponents versus the area fraction for with various values of ranging from to . strikingly , the low wavenumber scaling exponent is approximately constant for all considered pairwise dissipative interaction parameters , including the limiting case , with a weak decrease for very small values of the area fraction . furthermore , the positive low wavenumber scaling of the cases with stokes friction is accompanied by a global maximum of the energy spectrum at intermediate wavenumbers around . the exact location of this maximum appears to depend on the volume fraction of the system , with higher volume fractions leading to peaks at larger wavenumbers . since the energy injection through self - propulsion is associated with a band of high wavenumbers around ( marked with an arrow in figure [ fig : espn01 ] ) , the peak at corresponds to an accumulation of energy at the corresponding intermediate length scales . in the case of large stokes friction ( ) and pairwise dissipative interactions , this accumulation of energy appears to be essentially independent of the choice of . for lower stokes friction ( ) , pairwise dissipative interactions have more influence on the energy spectra , moving the peak in the energy spectrum to lower wavenumbers , as shown in panels ( d)-(e ) of figure [ fig : espn01 ] . figure [ fig : espn02 ] shows energy spectra with the objective of investigating the influence of the pairwise friction parameter by setting . plots are displayed for the cases of , , and , , , . for all three choices of , the energy spectra exhibit low wavenumber scalings that are significantly more sensitive to the area fraction than in the cases with stokes friction . in contrast , the high wavenumber range exhibits a scaling law similar to that arising in the cases with stokes friction , with . as and increase , the slope of the energy spectrum at low wavenumbers decreases . in particular , for , the slope at the low wavenumbers more strongly depends on than for larger values of , as is evident from panel ( a ) of figure [ fig : espn02 ] . interestingly , while all energy spectra for show a low wavenumber scaling with a negative scaling exponent , as shown in panel ( c ) of figure [ fig : espn02 ] , the scaling exponent of the energy spectra for the other choices of changes from positive to negative with increasing . panel ( c ) of figure [ fig : espscale ] provides the scaling exponents obtained from a least - squares fit of the energy spectra . these results further confirm that the pairwise friction parameter has a significant impact on the low wavenumber scaling of the normalized energy spectrum ; lower values of result in positive slopes and higher values of combined with high area fractions lead to negative low wavenumber scaling slopes , where appears to be the lower bound observed for the current combinations of parameters .
in agreement with the previously discussed vcf statistics and the representative snapshots of the velocity and vorticity fields , the case of zero stokes friction induces a distinct behaviour of the normalized energy spectrum . for low values of the pairwise friction parameter , the low wavenumber slope of strongly depends on the area fraction , as shown in panel ( a ) of figure [ fig : espn02 ] . increasing leads to a decrease in the slope of . further , increasing leads to a decrease in the slope at low wavenumbers of for all considered values of , as shown in panels ( a)-(c ) of figure [ fig : espscale ] . the observed decrease in the slope of the low and intermediate wavenumber range of is more prominent for large ; however , for the largest considered pairwise friction coefficient , namely , all normalized energy spectra possess a low wavenumber scaling with a negative scaling exponent , as shown in panel ( b ) of figure [ fig : espscale ] . the behaviour of the normalized energy spectra points to the presence of different energy transfer mechanisms in the cases with and without stokes friction . as is apparent from figure [ fig : espn01 ] , the energy spectra with stokes friction exhibit a global maximum at intermediate wavenumbers for all considered values of the area fraction and pairwise friction parameters . however , without stokes friction , the energy spectra exhibit a global maximum at intermediate wavenumbers only if both and are sufficiently small , as depicted in figure [ fig : espn02 ] . in addition , for larger values of and , the normalized energy spectrum of systems without stokes friction decreases monotonically with increasing wavenumber and attains its global maximum at the smallest wavenumber shell . these results indicate that , in systems of self - propelled agents without the effect of stokes friction ( that is , ) , the energy , after being injected at large wavenumbers , cascades upwards to the smallest wavenumber shell , corresponding to the largest length scale in the system . this effect becomes more pronounced as the influence of pairwise dissipative interactions and the area fraction increases . adding the effect of stokes friction results in the accumulation of the energy at an intermediate wavenumber , forming a peak in the normalized energy spectrum , irrespective of the value of or . the discussed behavior can be understood by considering the spectral distribution of the two different energy dissipation mechanisms . * the energy dissipation due to the stokes friction is proportional to the ( constant ) stokes friction coefficient and the magnitude of the squared velocity , which is twice the ( dimensionless ) kinetic energy . consequently , from a spectral perspective , the dissipation due to stokes friction acts mainly on wavenumbers containing large amounts of kinetic energy , namely the low wavenumbers . in two - dimensional flow , stokes friction , or similar types of friction mechanisms , for example rayleigh friction , therefore remove energy on large scales , that is , low wavenumber components of the velocity field ( see , for example , boffetta and ecke ) . such dissipation mechanisms penalize the formation of spatially extended correlated structures in the velocity field .
*the pairwise dissipative interactions depend on local velocity differences rather than on the magnitude of the velocity .heuristically , they are thus expected to remove energy at very small scales , allowing for spatially extended correlated motion if the system is not over - damped by dissipation due to stokes friction . for the limiting case of vanishing stokes friction , this explains the observed spatially extended correlated motion patterns .a system of self - propelled soft - core dumbbells interacting with dpd - type forces was studied . more particularly , the influences of agent concentration and two different friction mechanisms , namely pairwise dissipative interactions and stokes friction , on the statistics of the system were presented .the pairwise dissipative forces provide a simple model system for studying the influence of hydrodynamic interactions in active suspensions .high agent concentrations combined with dominant pairwise dissipative interactions and vanishing stokes friction result in dynamic particle aggregation and spatially extended correlated motions . the characteristic length scales of the spatial structures exceed the characteristic size of an individual agent and point at the presence of phenomena reminiscent of the upscale energy transfer and energy condensation phenomena observed in classical two - dimensional turbulent flows , as discussed , for example , by boffetta and ecke . for cases with stokes friction , the normalized energy spectra possess a small wavenumber scaling with positive scaling exponent and a peak at intermediate wavenumbers .in contrast , the cases with dominant pairwise dissipative interactions have low wavenumber scalings with negative scaling exponents , pointing at the presence of an accumulation of energy at large scales .since the formulation includes neither ad - hoc biological alignment and coordination rules nor attractive forces , the obtained results demonstrate the potential importance of pairwise dissipative interactions in the formation of spatially extended structures in active suspensions .further , the soft - core interaction potentials used in dpd afford numerical stability and efficiency , even at large integration timesteps .larger timesteps enable efficient and robust simulations , allowing for the consideration of scenarios more complex than would be possible otherwise . at the present stage , only one simple form of pairwise dissipative hydrodynamic interactions has been considered .the investigation of different forms of such hydrodynamic interactions should be subject of future research .possible changes to the hydrodynamic interactions include modifying the weighting function through changing the exponent with which enters the dpd interactions .this possibility is discussed by pan et al . , fan et al . , fedosov et al . , and symeonidis et al . , with the goal of adapting bulk rheological properties like the viscosity or the schmidt number of a classical dpd fluid .alternatively , a generalization of might account for lubrication - type interactions .however , the asymptotic behavior of lubrication interactions as the separation distance vanishes can be expected to compromise part of the stability of the standard soft - core dpd method .dfh acknowledges the partial support of the antje graupe pryor foundation and the graduate research mobility award of the department of mechanical engineering at mcgill university along with the hospitality of the department of mathematics at washington state university .
|
a simple model for simulating flows of active suspensions is investigated . the approach is based on dissipative particle dynamics . while the model is potentially applicable to a wide range of self - propelled particle systems , the specific class of self - motile bacterial suspensions is considered as a modeling scenario . to mimic the rod - like geometry of a bacterium , two dissipative particle dynamics particles are connected by a stiff harmonic spring to form an aggregate dissipative particle dynamics molecule . bacterial motility is modeled through a constant self - propulsion force applied along the axis of each such aggregate molecule . the model accounts for hydrodynamic interactions between self - propelled agents through the pairwise dissipative interactions conventional to dissipative particle dynamics . numerical simulations are performed using a customized version of the open - source lammps ( large - scale atomic / molecular massively parallel simulator ) software package . detailed studies of the influence of agent concentration , pairwise dissipative interactions , and stokes friction on the statistics of the system are provided . the simulations are used to explore the influence of hydrodynamic interactions in active suspensions . for high agent concentrations in combination with dominating pairwise dissipative forces , strongly correlated motion patterns and fluid - like spectral distributions of kinetic energy are found . in contrast , systems dominated by stokes friction exhibit weaker spatial correlations of the velocity field . these results indicate that hydrodynamic interactions may play an important role in the formation of spatially extended structures in active suspensions . dissipative particle dynamics , bacterial suspensions , hydrodynamic interactions , two - dimensional turbulence , upscale energy transfer , integral length scale
|
chaotic attractors typically possess a dense set of unstable periodic orbits ( upos ) . this form of phase space skeleton was exploited in a control scheme developed by ott , grebogi and yorke in the 1990s . their approach provided a method for stabilizing targeted upos of chaotic systems for application in both numerical simulations and laboratory experiments . this method spawned the development of a number of related and alternative control schemes , with similar goals , which can be utilized in systems for which strictly periodic behavior is attractive . one of these schemes , which has been especially well investigated and tested , was proposed by pyragas . pyragas control exploits the symmetry of a periodic orbit in a natural way by providing , in its simplest realization , additive feedback in the form . here is the state vector of the dynamical system at time , is the period of the targeted upo , and is a constant feedback gain matrix . the scheme is manifestly noninvasive , since the feedback vanishes when the system reaches the -periodic target state . setting aside the difficult questions related to basins of attraction , there are then just two key ingredients to the successful implementation of this approach : the period of the targeted upo is needed , and the feedback gain matrix needs to guarantee stabilization . only to the extent that an appropriate choice of is required does the method rely on detailed knowledge , beyond the period , of the structure of the upo in phase space . for a review of the extensive literature on applications of pyragas feedback , including successful experimental implementations , see . this paper is motivated by the question of how to choose the feedback gain in pyragas control to ensure that it will be effective . we focus on a simple , generic mechanism for the creation of an unstable periodic orbit : the subcritical hopf bifurcation of a stable equilibrium . other generic mechanisms for creating upos in dynamical systems include homoclinic bifurcations , saddle - node ( or fold ) bifurcations of limit cycles , saddle - node bifurcations of fixed points on an invariant circle , and period - doubling bifurcations . there have been a number of successful demonstrations of pyragas control of periodic orbits destabilized through a period - doubling bifurcation ( see , for instance , ) . postlethwaite has shown that pyragas - type feedback can stabilize a upo arising from a subcritical bifurcation from a robust heteroclinic cycle in a three - dimensional system of equivariant ordinary differential equations . interestingly , pyragas feedback works in this case even though the period of the targeted orbit , and hence the time - delay , diverges as the heteroclinic bifurcation point is approached . fiedler _ et al . _ have investigated an example of pyragas control that stabilizes a circular limit cycle ( _ i.e.
_ a `` rotating wave '' ) near a fold bifurcation in a planar system of ordinary differential equations with -symmetry , and successfully applied this to a higher - dimensional model taken from nonlinear optics that possesses a similar rotational symmetry . the first example that demonstrated the successful stabilization of a upo arising from a subcritical hopf bifurcation was given in , and further analyzed in , with an experimental implementation described in . in these papers , the authors added pyragas feedback directly to the hopf normal form . here is the period of the upo , and the complex number plays the role of the feedback gain matrix . a beauty of this simple example is that it represents a rare instance in which solutions of a nonlinear delay differential equation can be computed analytically in closed form , and their bifurcations can be studied with comparable finesse . specifically , using methods of bifurcation theory , the authors were able to understand the mechanism for stabilization in this example . for instance , they showed that the feedback control leads to additional delay - induced hopf bifurcations of the equilibrium , and consequently it is possible to change the equilibrium 's stability so that the original _ subcritical _ hopf bifurcation to the upo turns into a _ supercritical _ bifurcation to a stable periodic orbit . an important contribution of the fiedler _ et al . _ paper was that it also provided a counterexample to a published claim that pyragas control is impossible when the upo has an odd number of real positive floquet multipliers greater than one . in the ten years between the published claims of the odd number limitation and the first counterexample to it , a number of modifications of the pyragas control scheme were developed . one of these , based on introducing an additional unstable direction via the controller , was proposed in order to stabilize upos created by a subcritical hopf bifurcation , including an example applied to the lorenz equations . recently , an analysis of the lorenz equations with the standard pyragas feedback provided a second example of stabilization of a upo resulting from a subcritical hopf bifurcation . specifically , postlethwaite and silber demonstrated that the stabilization mechanism identified by fiedler _ et al . _ can also apply to upos in higher - dimensional systems , provided that the feedback gain matrix is chosen correctly . the strategy they outlined is to add feedback of the type investigated in in the directions tangent to the center manifold of the uncontrolled lorenz system . stabilization of the upos is then possible over a broad range of control parameter values . the reduction of higher - dimensional systems to the two - dimensional normal form near a hopf bifurcation is a standard procedure . likewise , such systems with additive pyragas feedback , now infinite - dimensional , can also be reduced to the standard two - dimensional normal form in the vicinity of a hopf bifurcation , where the parameters of the feedback gain matrix modify the coefficients in the normal form .
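to make the normal - form example concrete , the following sketch integrates a pyragas - controlled hopf normal form of the type studied by fiedler et al . with an explicit euler scheme and a circular history buffer for the delayed term ; the coefficient values are purely illustrative and are not taken from the original papers .

```python
import numpy as np

def integrate_pyragas_hopf(lam=-0.005, gamma=-10.0, b0=0.3, beta=np.pi/4,
                           dt=1e-3, t_end=200.0, z0=0.01 + 0.0j):
    """Explicit-Euler integration of a Pyragas-controlled Hopf normal form.

    dz/dt = (lam + i) z + (1 + i*gamma)|z|^2 z + b0*exp(i*beta)*(z(t - tau) - z(t)),
    with tau equal to the period of the target orbit of this cubic normal form,
    2*pi / (1 - gamma*lam). All parameter values are illustrative.
    """
    tau = 2.0 * np.pi / (1.0 - gamma * lam)
    n_delay = int(round(tau / dt))
    n_steps = int(round(t_end / dt))
    hist = np.full(n_delay, z0, dtype=complex)   # constant initial history
    amps = np.empty(n_steps)
    z = z0
    for k in range(n_steps):
        z_delay = hist[k % n_delay]              # value stored n_delay steps ago
        dz = (lam + 1j) * z + (1 + 1j * gamma) * abs(z)**2 * z \
             + b0 * np.exp(1j * beta) * (z_delay - z)
        hist[k % n_delay] = z                    # store current state before overwriting
        z = z + dt * dz
        amps[k] = abs(z)
    return amps   # if the control succeeds, |z| settles near sqrt(-lam)
```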
in the fiedler example , the feedback terms are added directly to the hopf normal form , but a surprising result of was that the same sequence of bifurcations identified in the simpler normal form example also appears in this higher - dimensional example . this result is generalized further in the current paper . a further motivation for the work we present in this paper is to understand the origin of a particular degenerate hopf bifurcation problem that acts as the organizing center in both the simple normal form example and the lorenz example . specifically , we generalize the results of by studying an -dimensional system of equations containing a subcritical hopf bifurcation of a stable equilibrium . as in the lorenz example , the gain matrix for this system is such that the pyragas feedback only acts in the directions tangent to the center manifold of the uncontrolled system near the hopf bifurcation point , and in this tangent plane the gain collapses to a matrix that is proportional to a rotation matrix . thus , we consider a family of gain matrices that are parameterized by a magnitude and a phase ( _ cf . _ written in terms of real variables ) . the additive pyragas feedback results in a delay differential equation . we use methods of bifurcation theory to show that pyragas control can stabilize the small - amplitude upo in a neighborhood of its bifurcation provided and are chosen appropriately . specifically , our analysis applies in a neighborhood of a threshold value for , which depends on the phase angle , which must lie in a particular interval that we determine . the interval depends only on the cubic coefficient of the hopf normal form for the uncontrolled problem . in particular , we find that this interval for always exists provided that the imaginary part of the cubic coefficient of this normal form is nonzero . the threshold value for is associated with a highly degenerate hopf bifurcation of the zero solution of the delay differential equation . specifically , for this gain modulus , the critical eigenvalue of the linearized problem does not cross the imaginary axis as the bifurcation parameter is varied , so the `` nonzero - speed '' eigenvalue crossing condition for a generic hopf bifurcation is violated . moreover , a center manifold reduction of the delay differential equation to hopf normal form reveals that the cubic coefficient is purely imaginary for this threshold value of , necessitating that one go to higher order than cubic in any analysis of the bifurcating periodic orbits . the analysis of this degenerate bifurcation problem provides the basis for our claims that pyragas control can stabilize upos that are born from a generic subcritical bifurcation of a stable equilibrium in the uncontrolled problem . it also explains why the same sequence of bifurcations occurs in the normal form example as in the lorenz example analyzed in . the remainder of this paper is organized as follows . section 2 reviews the stabilization mechanisms identified by fiedler _ et al .
_ for the hopf normal form example and formulates our generalized problem .section 3 contains our key results .it determines the restrictions on for effective stabilization .it also identifies and analyzes the degenerate bifurcation that acts as an organizing center for the control problem .section 4 presents a rigorous center manifold reduction for the delay differential equation , with certain details relegated to an appendix .it thereby substantiates our heuristic arguments made in section 3 .section 5 summarizes our findings and discusses some open questions and future directions of research .in this section we review the mechanism of stabilization identified by fiedler _et al . _ for the pyragas - controlled hopf normal form .we then formulate the generalized problem that will be studied in this paper : an -dimensional system of differential equations containing a generic subcritical hopf bifurcation of a stable equilibrium , with pyragas - type delay terms added only in particular directions .we use a center manifold reduction for the uncontrolled problem to estimate the period of the upo , which we use as the delay time for the feedback .fiedler _ et al . _ consider equation , where and parameters .the feedback gain is a complex number . for and have bifurcating unstable periodic orbits , or pyragas orbits " , with amplitude , coexist with the stable trivial equilibrium for .the goal is to stabilize this branch of periodic orbits in a neighborhood of by adding the feedback term ( ) .the pyragas orbits have minimal period , which is chosen as the delay time in .we now summarize the bifurcation structure associated with the solution of in the -plane to inform our discussion in subsequent sections . these results , and more details , can be found in .figure [ fig : intro](a ) shows two curves of hopf bifurcations in the -plane .one is the original hopf bifurcation to the pyragas orbit , which occurs at for every value of .the other hopf bifurcation is a consequence of the additive delay terms and occurs along the curve .it produces a branch of _ delay - induced periodic orbits _ ,i.e. a periodic orbit that arises due to the addition of delay terms and one for which the feedback does not vanish .the two bifurcation curves intersect at the point = .a curve of transcritical bifurcations also emanates from this point , which acts as the organizing center for the bifurcation structure of the problem .stabilization of the pyragas orbits involves two bifurcations . without feedback ,the trivial equilbrium is stable for and unstable for , and the hopf bifurcation at is subcritical .however , the delay terms can change the stability of the trivial equilibrium . for ,we find that the stability of the trivial equilibrium switches to being _ unstable _ for and _ stable _ for ( in a neighborhood of , ) . since both the location of the hopf bifurcation at , and the location of the pyragas orbits ( in ) are independent of , then the hopf bifurcation must change criticality from subcritical to supercritical .this in turn means that the pyragas orbits must now be stable .the second bifurcation involved in the stabilization mechanism occurs for .the pyragas orbit is unstable for small values of , but as the feedback magnitude is increased , the pyragas orbit and the delay - induced periodic orbit exchange stability in a transcritical bifurcation .these mechanisms can both be seen in fig .[ fig : intro ] . 
+ the generalized problem is formulated for an -dimensional parameterized system of differential equations where is ( ) and is a bifurcation parameter .we assume there is an equilibrium solution branch in a neighborhood of , which loses stability at as a simple complex conjugate pair of eigenvalues of the linear stability matrix cross the imaginary axis , i.e. at a hopf bifurcation .we assume the hopf bifurcation is subcritical , that is , it gives rise to a branch of unstable upos which coexist with a stable equilibrium . without loss of generalitywe can introduce shifted variables and bifurcation parameter so that the equilibrium is located at for in a neighborhood of , and is proportional to .specifically , we define so that , in a neighborhood of , is stable for and unstable for . the bifurcating branch of upos exists for . in terms of the shifted variables ,the system is and the jacobian matrix has a pair of complex conjugate eigenvalues , such that where we further assume that the remaining eigenvalues of have negative real parts . in order to estimate the period , , of orbits on the branch of upos, we perform an ( extended ) center manifold calculation to reduce to hopf normal form in a neighborhood of . in polar coordinates , this is we assume that depends smoothly on so that the coefficients can be expanded in taylor series about : then becomes neglecting the higher order terms in we find : and it can be shown that the dynamics of this truncated normal form are qualitatively unchanged when one considers the influence of the higher order terms provided .pyragas orbits exist with amplitude for .( from we have , so for a subcritical hopf bifurcation we must have 0 . )these orbits have period where captures the dependence of the oscillation frequency on the amplitude of oscillations . choosing the delay such that ensures that the feedback vanishes whenthe targeted periodic orbit is reached .our estimate of in , which is based on the cubic normal form , is good through .next , we add pyragas - type feedback to which gives where is the constant gain matrix . as in , feedback is added only in the directions associated with the linear center eigenspace of at the hopf bifurcation point . 
specifically , after a ( -dependent ) coordinate transformation and a rescaling of time by the ( -dependent ) delay , takes the form : where and are -dimensional column vectors , is a zero matrix with rows and columns , and is an matrix in jordan normal form .the eigenvalues of have negative real part for sufficiently small .the feedback depends upon two parameters : an amplitude and a phase angle .we analyze the bifurcation structure of system by considering the appropriate two - dimensional hopf normal form in a neighborhood of .we show that as increases through some critical value , the hopf bifurcation at changes from subcritical to supercritical , provided the phase angle is chosen appropriately .hence there is a range of for which the pyragas orbit bifurcates stably .we show further that the hopf bifurcation at the point is degenerate for two reasons : ( a ) the nonzero eigenvalue crossing condition of the hopf bifurcation theorem is violated , and ( b ) the cubic coefficient of the normal form is purely imaginary .first we perform a linear stability analysis to show that the nonzero crossing condition is violated , which leads to an explicit formula for .another linear consideration , specifically the requirement that the pyragas branch bifurcates from a stable equilibrium , determines restrictions on the parameters and .in particular , we must assume that , and we require that lie within a specified range .we use results from the linear analysis , together with information on the pyragas branch , to argue that the real part of the cubic coefficient of the hopf normal form also vanishes at , .this result is later substantiated in section [ sec : cmreduction ] ( with details in appendix b ) by a center manifold reduction of the delay differential equation .the degeneracy at cubic order necessitates that quintic terms in the hopf normal form be retained .the bifurcation analysis for this problem is performed at the end of this section .we first perform a linear stability analysis to determine the feedback magnitude at which the eigenvalue crossing condition is violated . since the feedback terms in only act in the center directions , and all the other coordinates are linearly decaying , we focus our linear stability analysis on the and equations . in terms of the complex variable obtain the linear delay differential equation : with and .solutions take the form , where satisfies the characteristic equation , with at , this equation becomes since , , and .this has a solution , independent of , which is as expected since the original hopf bifurcation is not affected by the feedback .we next evaluate the eigenvalue crossing condition , that is , we consider to be a function of with , and compute ] when in order to have a positive ( finite ) value for , the phase angle must satisfy the restriction moreover , as discussed in , the feedback introduces additional delay - induced instabilities of the solution of . in order to ensure that the pyragas branch can bifurcate from a stable solution, we need to ensure that is below a -dependent cut - off where these additional instabilities set in .these considerations will determine a more stringent inequality for , which can be met provided that , where in .we know that as ( i.e. 
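roots of a transcendental characteristic equation of this type are conveniently found numerically ; the sketch below applies newton 's method to an illustrative equation of the form lambda = p + b0*exp(i*beta)*(exp(-lambda) - 1) , with time rescaled by the delay and p collecting the feedback - free eigenvalue , and it finds only the branch selected by the starting guess . this generic form is an assumption standing in for the equation discussed in the text .

```python
import numpy as np

def characteristic_root(p, b0, beta, lam0, tol=1e-12, max_iter=100):
    """Newton iteration for one root of lam = p + b0*exp(i*beta)*(exp(-lam) - 1).

    p: feedback-free eigenvalue (complex); lam0: starting guess, which selects
    either the original Hopf root or one of the delay-induced branches.
    """
    lam = complex(lam0)
    for _ in range(max_iter):
        f = lam - p - b0 * np.exp(1j * beta) * (np.exp(-lam) - 1.0)
        df = 1.0 + b0 * np.exp(1j * beta) * np.exp(-lam)
        step = f / df
        lam -= step
        if abs(step) < tol:
            break
    return lam
```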
the feedback vanishes ) , solutions of the characteristic equation at include , and a countable set that have real parts tending to .we determine the value of ( with ) for the onset of the first delay - induced bifurcation of by seeking solutions of for .specifically , satisfy these equations have one solution at for all values of ( which corresponds to the original hopf bifurcation ) , and a sequence , indexed by , where for the hopf bifurcation with to be from a stable equilibrium when , we need to ensure that , for each , either or .since with , this condition is satisfied if where hence we require that satisfies note that for , so automatically ensures that is satisfied .* claim : * if , then there exists a value of such that is satisfied .* proof : * we rewrite as so , equivalently , we must show that there exists some such that which we can rewrite as where .( note , for , one obtains the smallest possible value of given by ) .now , observe that for . since ,then it is clear that there is an open interval of for which is satisfied .this interval will include the point where , that is , where . note that when , condition becomes , or equivalently , which is clearly never satisfied . if , then and is still never satisfied . since condition does not hold for any when , the mechanism of stabilization which we discuss in this paper is not valid for .henceforth we assume that and that is chosen to satisfy .we focus on a neighborhood of , expanding the eigenvalue in in a two - variable taylor series in order to zoom in on the behavior near the point : where note that there is no linear term in , since we already know that at , .also , when , is a solution to for all , which immediately implies that .we assume , generically , that .note that to determine we would need , among other quantities , the delay time through order , which can not be computed using the cubic truncation of the hopf normal form . substituting into and equating terms at yields where the inequality follows from and .finally , defining , we have where by , and ( generically ) .from this we are able to deduce the arrangement of the regions of stability of the zero equilibrium around the point .the arrangement depends on the sign of , and the two cases are shown in fig .[ fig : lin_stab ] .the linear stability analysis reveals that the nonzero eigenvalue crossing condition is violated at .we now argue that the cubic coefficient of the normal form equation is purely imaginary at , which is the second degeneracy of the hopf bifurcation at .this follows directly , as we now show , from the linear calculations in section [ sec : lin ] and the fact that the existence of the pyragas orbit is unaffected by the addition of the feedback terms .according to bifurcation theory for dynamical systems , including delay equations , restricted to its center manifold in a neighborhood of the hopf bifurcation point at can be converted into normal form via a series of nonlinear , near - identity coordinate transformations .this normal form is where .here we have retained the quintic terms in anticipation of the result that at .we will assume that , and analyze the quintic truncation of .we demonstrate for a specific numerical example in appendix a that the coefficient of the fifth - order term does not vanish , which we expect to be true generically . 
rewriting in polar coordinates , truncating the terms above fifth order , and considering only the real part of the equation, we have where is the expansion given by and , are the taylor series expansions for the real parts of the coefficients of the cubic and quintic terms , respectively . the zeros of correspond to the limit cycle solutions on the center manifold of the original problem in a neighborhood of .( although the periodic orbits are not circular in the original coordinates on the center manifold , the normal form transformation is the coordinate transformation that makes them circular . ) the pyragas orbit is , by construction , unaffected by the control and must exist as a zero of for . from, we have that the pyragas orbit satisfies , for which the first order approximation is . we then factor to obtain where , and .thus where is known from the linear problem , and are known from the original uncontrolled hopf bifurcation and are independent of . to find a taylor expansion of in terms of and ,we substitute the taylor expansions of ( given by ) and the taylor series of ( given by ) to obtain it follows immediately that .we have thus shown that the real part of the cubic coefficient of the normal form vanishes at .we now analyze in a neighborhood of with a goal of determining all qualitatively distinct bifurcation diagrams associated with the distinguished bifurcation parameter , and showing that there is a region of parameter space in which the pyragas orbit is stable .expanding all coefficients in in taylor series in and , we have , at leading order , where the inequality for follows from the fact that and .the pyragas orbits always exist for .on the other hand , the delay terms in create additional hopf bifurcations that give rise to delay - induced periodic orbits . from , we see that the delay - induced periodic orbits exist provided from which we find the delay - induced hopf bifurcation line in the -plane . if , the delay - induced periodic orbits exist in the region of the -plane above this line , while if they exist in the region below this line .a transcritical bifurcation of periodic orbits occurs when the pyragas and delay - induced periodic orbits have the same amplitude , so that the right - hand side of is a perfect square .the equation for the line of transcritical bifurcations is with .note that and are only correct through and that this bifurcation curve must terminate at since the pyragas orbits only exist for .the arrangement of the curves and in the -plane gives two qualitatively different cases for the bifurcation structure of , each of which may be divided into three subcases when the distinguished bifurcation parameter is varied with fixed .recall that and ; we assume the generic situation where .if , then the sign of is determined , but otherwise , it can be of either sign .( if , then the line of transcritical bifurcations occurs at . )the possible cases are thus : + for each subcase , we show the regions of existence of the pyragas orbits and delay - induced orbits , their stability , and the stability of the trivial equilibrium in fig .[ fig : fourcases ] .bifurcation diagrams showing the amplitude of the pyragas and delay - induced periodic orbits as a function of the parameter are shown in fig .[ fig : deltavaries ] . 
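the zeros of the truncated radial equation and their stability can be tabulated directly ; the sketch below assumes an amplitude equation of the generic form dr/dt = r*(c1*lam + c3*r^2 + c5*r^4) with illustrative coefficients , which mirrors the structure of the quintic truncation described above but is not the paper 's actual expansion .

```python
import numpy as np

def amplitude_branches(lam, c1, c3, c5):
    """Nontrivial equilibria of dr/dt = r*(c1*lam + c3*r**2 + c5*r**4) and their stability.

    Returns a list of (r, stable) pairs for the real positive roots; these play the
    role of the Pyragas and delay-induced periodic orbits in the reduced problem.
    """
    # equilibria with r > 0 satisfy a quadratic in u = r**2
    roots_u = np.roots([c5, c3, c1 * lam])
    branches = []
    for u in roots_u:
        if np.isreal(u) and u.real > 0:
            r = np.sqrt(u.real)
            # radial stability: sign of d/dr [ r*(c1*lam + c3*r^2 + c5*r^4) ]
            slope = c1 * lam + 3 * c3 * r**2 + 5 * c5 * r**4
            branches.append((r, slope < 0))
    return branches
```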
in both cases ( 1 ) and ( 2 )the pyragas orbits are stabilized as soon as is increased beyond the transcritical bifurcation .however , for case ( 1 ) , there is a smooth transition from the stable delay - induced orbit to the stable pyragas orbit , whereas for case ( 2 ) , the zero solution can coexist stably with the delay - induced or pyragas orbit , and hence hysteresis is expected . in fig .[ fig : lambdavaries ] we present bifurcation diagrams showing the amplitude of the periodic orbits vs. the distinguished bifurcation parameter .six distinct cases are manifest , but , if , there is always a region in which the pyragas orbits are stable in a neighborhood of .it is on the basis of this observation that we assert that pyragas control can stabilize the subcritical branch of periodic orbits , provided and satisfies .+ + the degenerate hopf bifurcation can be related to a degenerate steady - state bifurcation problem with a symmetry by focusing on the equation .this takes the form with defining condition ensuring that a bifurcation of the solution occurs at .the degenerate bifurcation of interest is defined by this is codimension - two as a bifurcation problem , or a codimension - three phenomena ( i.e. including ) .this bifurcation problem is analyzed using methods of singularity theory in the book by golubitsky and schaeffer ( chapter vi ) .they prove that it has the normal form provided , where , by suitable rescaling , it is possible to set and .they analyze its universal unfolding because we assume that the time delay coincides _ exactly _ with the period of the upo , the unfolding parameter in our problem .specifically , we have , and we retain a term proportional to in place of .the singularity theory unfolding results are expected to apply directly if we were to consider deviations of from the period of the upo .in this section , we use a center manifold reduction for delay differential equations to confirm that the cubic coefficient of the normal form is purely imaginary at .the theory is well - developed and is described thoroughly in , for example , . in general , because the center manifold can not be determined exactly , an approximation must be constructed , and this calculation can be facilitated by using a computer algebra program such as maple .we perform the reduction at so that . herewe will focus on the simple case in which has no quadratic nonlinearities when taylor - expanded about the origin .we relegate the general case , in which quadratic nonlinearities are also present , to appendix b. we rewrite as where and the matrices and are given by as in , is an zero matrix and is an matrix in real jordan normal form containing the decaying eigenvalues . for the casewe consider here , after taylor - expanding about the origin , the vector field of nonlinear terms takes the form where and ( cubic ) terms containing any of the , have not been explicitly written because they do not contribute to the subsequent calculation .the quantities are scalars and are vectors of dimension .we follow in performing the center manifold reduction of to hopf normal form at . in order to construct an appropriate phase space for the solutions of the delay differential equation ,we define : ( recall we rescaled time by the delay , so that the delay time is fixed to be equal to . 
) we write as a functional differential equation evolving in the banach space of continuous functions mapping [ -1 , 0 ] into \mathbb{r}^{n } . second , we need a basis for the center eigenspace of a linear problem dual to , where , , and are -dimensional vector - valued functions . as shown in , must satisfy the equation subject to the boundary condition where is given by . solving , for the coefficients of the first two rows of yields where is the identity matrix , are given by and to determine the vectors , , and we solve the following system of equations : where is a matrix of size , is the real matrix defined in and evaluated at , is a zero matrix , and is the identity matrix . finally , the equation on the center manifold is given by or in complex form , where contains both quadratic and cubic terms proportional to , , , , etc . after performing a near identity transformation , the real part of the cubic coefficient takes the form , with . once again , from equation we can see that when . for completeness , we find which are precisely the coefficients in the uncontrolled hopf normal form . b. fiedler , s. yanchuk , v. flunkert , p. hövel , h .- j . wuensche , e. schöll , delay stabilization of rotating waves near fold bifurcation and application to all - optical control of a semiconductor laser , phys . rev . e 77 ( 6 ) ( 2008 ) 066207 . r. qesmi , m. ait babram , m. l. hbid , a maple program for computing a terms of a center manifold , and element of bifurcations for a class of retarded functional differential equations with hopf singularity , applied mathematics and computation 175 ( 2 ) ( 2006 ) 932 - 968 . k. engelborghs , t. luzyanina , g. samaey , dde - biftool v. 2.00 user manual : a matlab package for bifurcation analysis of delay differential equations , technical report tw-300 , department of computer science , k. u. leuven , leuven , belgium ( 2001 ) .
|
we show that pyragas delayed feedback control can stabilize an unstable periodic orbit ( upo ) that arises from a generic subcritical hopf bifurcation of a stable equilibrium in an -dimensional dynamical system . this extends results of fiedler _ et al . _ [ _ prl _ * 98 * , 114101 ( 2007 ) ] , who demonstrated that such feedback control can stabilize the upo associated with a two - dimensional subcritical hopf normal form . pyragas feedback requires an appropriate choice of a feedback gain matrix for stabilization , as well as knowledge of the period of the targeted upo . we apply feedback in the directions tangent to the two - dimensional center manifold . we parameterize the feedback gain by a modulus and a phase angle , and give explicit formulae for choosing these two parameters given the period of the upo in a neighborhood of the bifurcation point . we show , first heuristically , and then rigorously by a center manifold reduction for delay differential equations , that the stabilization mechanism involves a highly degenerate hopf bifurcation problem that is induced by the time - delayed feedback . when the feedback gain modulus reaches a threshold for stabilization , _ both _ of the genericity assumptions associated with a two - dimensional hopf bifurcation are violated : the eigenvalues of the linearized problem do not cross the imaginary axis as the bifurcation parameter is varied , and the real part of the cubic coefficient of the normal form vanishes . our analysis of this degenerate bifurcation problem reveals two qualitatively distinct cases when unfolded in a two - parameter plane . in each case , pyragas - type feedback successfully stabilizes the branch of small - amplitude upos in a neighborhood of the original bifurcation point , provided that the phase angle satisfies a certain restriction .
|
since i had the privilege to be the last phd student of lochlainn oraifeartaigh ( lor ) , and since the way i achieved it is related to the matter discussed here , it is perhaps worth to recollect that story in a conference in memory of lor .indeed , it is yet another anecdote about the rare human qualities of this first rank scientist , and , i believe , this is also an important aspect of his legacy . in the last years of his ground - breaking career , lor ( see fig.1 ) was way too advanced to have the time to rise - up phd students .the youngest collaborators he had been admitting in dias were smart post - docs .thus , when a stubborn , no - one - word - of - proper english character came along in his office in 1996 , to propose himself for a phd with `` _ _ professore oraifferti _ _ '' as supervisor , lor was probably disoriented .but lor was not a person to give - up for matters related to cultural barriers , so he handed over to me bailin and love s book on gauge theories , and said , slowly articulating the words : `` _ _ this is what i do _ _ '' .a few weeks later , as i came back to him , lor gave me a more important chance .he had me seat in front of his office s desk in dias ( see fig.2 , not a true picture of his hands , but a faithful reproduction of what i used to see for a few months ) .he showed to me a draft - paper by himself , ivo sachs and chris wiesendanger , on when scale invariance implies full conformal invariance .there local weyl symmetry plays a crucial role .the paper was nearly complete , and he said ( again , articulating the words one by one ) : `` _ _ apply this to fields of any spin , and spacetime dimensions _ _ '' .lor meant that to be an msc thesis work , but the assignment was finished in a couple of weeks only , for the surprise of lor ( and for the extra work of the two brilliant post - doc coauthors , who had to include my hand - written results in a revised version of the paper , and one more author , that is ref. ) . with this , lor eventually gave me the most valuable opportunity he could give me , and accepted me as his phd student .the recollection could go on for a long while , but then it would depart too much from a scientific paper , so i leave it to fig . 3 ( as above , not original pictures , but faithful reproductions of the images i remember ) .let me then move on to physics , by first summarizing here what , in the occasion described above , i learned to be weyl symmetry .suppose that the following action for fields of any spin , at most quadratic in the derivatives of the fields ( here refers to minkoswki spacetime , while refers to a generic , not necessarily curved ) is symmetric under rigid ( ) scaling that is a spatiotemporal symmetry , but , due to invariance under diffeomorphisms , is symmetric under ] are einstein indices , are lorentz indices , are spin indices , with , , are the lorentz generators , and is the spin connection coming from the metricity condition .we also introduced the vielbein and its inverse , satisfying , , , where . for more notationssee . ] hence rigid weyl transformations can be seen as _ internal _ transformations ( i.e. , they involve only the fields , the metric and , not the coordinates ) . as any other internal symmetry ,also weyl s can be gauged .first one promotes ( [ localweyl ] ) to a local transformation , i.e. , then one promotes the original action to a `` weyl - gauged '' one , , where , with weyl field and derivative responding to ( [ localweyl ] ) as respectively . 
herei used here the virial tensor , introduced in at the end of this ( standard ) procedure , the transformations ( [ localweyl ] ) , alongside with ( [ wfielddtrf ] ) , are indeed a symmetry of the new action : .the status of the massless dirac action in any dimension is very special because with a parameter with the dimensions of velocity , for graphene it is , the fermi velocity , for truly relativistic dirac it is , of course , . ] due to , where i used and the definition of the lorentz generators ] , $ ] , solves all those problems at once : it gives raise to a three - dimensional conformally flat spacetime . in the frame of reference , where the time is the lab time , the conformal factor is such that the predictability is valid globally .hence , through weyl symmetry , we are able to produce ( possibly exact ) formulae , that hold all over the surface , and for any time .the full line element is \;,\ ] ] and we see that , we have an extra bonus : the line element in square brackets is the rindler line element .this does not come as a surprise , if one looks at ( [ lobspacetime ] ) , where in square brackets we see _ what would be a rindler line element if the tilde coordinates were actually real , measurable coordinates_. it comes as a surprise , though , if one considers the argument given above about global validity of the coordinates and reference frames practically realizable in a laboratory ( for the beltrami , we are saying that all one needs to do is to curve the graphene sheet in that fashion ) .this does not mean that for the other surfaces of negative curvature we shall not be able to see any of the effects we are going to describe now for the beltrami .it means , though , that , without changing coordinates , we shall be able to import the beltrami results ( valid there over the whole surface ) only in a small neighbor of the surface , and that we shall not be able to infer from there what is going to happen in far parts of the surface .this might appear mind - twisting , but it is just a practical realization of what are the typical _ gedanken _ experiments of qft in curved spacetime , brought on the laboratory table .now , let us make use of this rindler line element .we use the customary results of qft in curved spacetimes , relating different observers to different quantum vacua ( an instance introduced earlier here ) .we have the graphene frame , and we need an inertial frame to give meaning to the operational _ measuring procedure_. we model that by requiring the quantum vacuum of reference to be always the minkowskian ( inertial ) one .so , the pseudoparticles live in a curved spacetime , the measuring apparatus , by following the profile of the surface , as well ( remember that the lab time is a rindler time , see later ) . but the measuring apparatus can not truly leave in a curved spacetime , there must be a place in the model where we stick in the information that the lab is in an inertial frame ( no gravity ! ) .this is done in by ascribing the inertial part of the measurement to the quantum vacuum of reference , hence chosen to be .the full explanation of this choice is in , and in the forthcoming . 
herelet me just add to the above that , with this choice , we also try to take into account the fact that the pseudoparticles , once they leave the lattice , will `` collapse '' all their relevant dirac properties , and will `` become '' again standard electrons .the positive frequency wightman 2-point function to consider is then through weyl , , , with , hence , where is the rindler green s function .the metric is that of a flat _ fictitious _ spacetime ( but weyl - equivalent to the real beltrami spacetime ) in the _ real _ coordinates .the physical result is recovered by simply multiplying the fictitious result by the proper factors .the fictitious spacetime s line element is .we can use minkowski coordinates ( somehow hidden in the vacuum ) , , , , so that for constant correspond to worldlines of observers moving at constant `` proper acceleration '' \;.\ ] ] this spacetime differs from standard rindler spacetime in the following aspects i ) a maximal acceleration , , ii ) confinement to one rindler wedge ( ) , and iii ) confinement on one side of the `` hilbert horizon '' ( ) as the beltrami world ends at . introducing the customary rindler coordinates , , , , we see that worldlines of constant proper acceleration in the fictitious rindler spacetime correspond to a fixed point on the real beltrami , with running time so , to see the unruh means to stay at a fixed point and `` wait '' till the horizon is reached , . for the typical curvature we shall consider ( we have to refer to the lattice spacing : ) we have typical waiting times of the order of for mm , or for m . the other time scale here is , i.e. the size in `` natural units '' of the detector .thinking of an stm needle or tip , we have : for a tungsten needle , while for a typical tip .thus , the conditions are : consider , where , and stay at each point for the largest among and the .this is doable ! the _ power spectrum _ \;,\ ] ] a part from inessential numerical factors , coincides here with the ( not yet physical ! ) electronic ldos , where is the degeneracy .as we are in a massless case , this quantity can be computed exactly where .note the bose - einstein distribution , due to the ( theoretically predicted ) phenomenon known as `` statistical swapping '' .going back to the physical spacetime is easy : .hence , the ldos we predict for a graphene sheet shaped as a beltrami pseudosphere is ( dimensional units are re - introduced ) }-1 } \label{ldosfinal } \;,\ ] ] where the temperature is a hawking temperature note that the exact does not reduce to for ( ) .this would be the case for a perturbative computation .currently , we are interacting with experimentalists towards a , direct or indirect , experimental detection of the behavior ( [ ldosfinal ] ) .the preliminary results of the theoretical setting - up for the experiment indicate that it should be possible to use more generic negatively curved surfaces .the logic is somehow the reverse , of what explained earlier about the impossibility to have globally valid predictions , for surfaces rather than the beltrami , within the reference frame of the laboratory , and by acting solely on the geometry of the surface ( i.e. 
, not considering external electromagnetic fields , or other interactions ) .if the approach we described here is valid , due to the local isometry between every surface of constant negative curvature , a sort of hawking - unruh effect should be visible on any portion of those surfaces .it is matter of using the isometry ( that is a change of _ spatial _ coordinates ) in a small neighbor , and of knowing that the results can not be extended beyond that neighbor .nonetheless , an adapted form of the prediction ( [ ldosfinal ] ) should be at work .these matters will be made more explicit in forthcoming publications .this research is the merging of various branches , but it is carried out mainly as an attempt to reconstruct in a laboratory a system as close as possible to what is believed to be a quantum field in a curved spacetime .the obvious `` classification '' of this type of activity would be under the class of `` analogue gravity '' . in a way it is , of course , correct to say so .in another way , though , the analogies on the table here are many more than for other analogue systems . herewe have : i ) a quantum pseudo - relativistic field description , ii ) the possibility to see the effects of gravity emerging , in the form of curvature of the spacetime , iii ) a low - dimensional setting , that points to the use of exact results , both on the field theory side , and on the gravity side , not least the weyl symmetry that points towards the importance of conformally flat spacetimes .this last instance , immediately suggests scenarios like the conformally flat black hole space time ( of constant negative curvature ) , discovered by baados , teitelboim and zanelli ( btz ) .some attempts towards trying to identify the physical conditions to reproduce such a situation on graphene were already mentioned in the first version of the arxiv entry .more recently , following the same general assumptions illustrated here , in it was actually found how to relate the btz on graphene to a different pseudosphere , namely the hyperbolic pseudosphere with .we are also currently investigating the consequences of this latter proposal . in the title of this sectioni use the words `` quantum gravity '' , with an abuse of language ( mitigated by the quotation marks ) , because i want to indicate that graphene has the potential to become a real laboratory to test fundamental ideas that go beyond the mere `` analogue gravity '' .indeed , as indicated in , many structures , typical of conformal field theories , arise in this contest .for instance , virasoro and liouville structures point to the possibility to test here ( a)ds / cft type of ideas .the amount of symmetries here , due to the dimensionality , and due to the lattice structure , is so high that the hawking effect , although of great importance on its own , could truly be just the beginning of a series of investigations of fundamental ideas of nature realized in a laboratory. a natural next goal , after the experimental testing of the hawking effect , could be the understanding of the `` black - hole thermodynamics '' , starting from entropy and relative holography .this far as for speculations , but there are various steps to be taken before getting anywhere near these goals .a crucial one is on the condensed matter side of the enterprise , and that is a solid geometrical / relativistic description of the elastic properties of the graphene membrane .it is matter there to have in the game the other actor of the story , i.e. 
the -bonds , and that is the other actor of the story also on the theory side , as this would tell us what sort of three - dimensional gravity theory is hiding here .there are also purely theoretical ( if not entirely mathematical ) questions to be fully addressed .the essential point , for this part , is that these spacetimes , e.g. what we called all along the beltrami spacetime , are the result of fitting the models into the real laboratory , that , spatially , is .now , the subtleties are everything in delicate constructions like the btz black - hole .it is clear , for instance , that only certain global identifications will make a simple spacetime a true black hole .but even the standard , to start with , is an issue here , because there is no such a thing as a spacetime with signature where to embed the avoiding `` hilbert horizons '' ( i.e. , the singular boundaries that we expect all the time from the hilbert theorem ) .`` real '' deserve a thoroughly study , that gives them the status of proper spacetimes , not of accidents due to a wrong choice of coordinates .if not for other reason , this should be done even simply because this is what we can actually make in our laboratories .a last side remark is fully on the mathematics .it is an interesting problem to find the negative curvature `` object '' corresponding to the icosahedron .it is a pleasure to thank ivo sachs and siddhartha sen for the invitation , and gaetano lambiase for the enduring collaboration .a. iorio , _ the hawking - unruh phenomenon on graphene _ , invited talk at `` qft aspects of condensed matter physics '' , workshop , 6 - 9 sept . 2011 ,infn headquarters - frascati , italy .
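As a back-of-the-envelope complement to the discussion above, the sketch below (Python) evaluates a Hawking-Unruh temperature scale for a curved graphene sheet and the Bose-Einstein-weighted spectral factor entering the predicted LDOS. It is a toy illustration only: the temperature is taken here as hbar*v_F/(2*pi*k_B*r), an assumption standing in for the exact expression quoted in the text, whose geometry-dependent factors are omitted; the curvature radii and frequencies are likewise illustrative choices.

```python
# Toy numerical illustration, not the paper's exact formulae: a Hawking-Unruh
# temperature scale for a curved graphene sheet and the Bose-Einstein-weighted
# spectral factor that enters the predicted LDOS.  Assumptions for this
# sketch: T is taken as hbar*v_F/(2*pi*k_B*r); geometric factors are omitted.
import numpy as np

hbar = 1.054571817e-34    # J s
kB = 1.380649e-23         # J / K
vF = 1.0e6                # m / s, approximate Fermi velocity of graphene

def hawking_temperature(r):
    """Unruh-like temperature scale for a curvature radius r (metres)."""
    return hbar * vF / (2.0 * np.pi * kB * r)

def thermal_factor(omega, T):
    """Toy spectral shape ~ omega / (exp(hbar*omega/(kB*T)) - 1); prefactors omitted."""
    return omega / np.expm1(hbar * omega / (kB * T))

for r in (1e-3, 1e-6):    # curvature radii of 1 mm and 1 micron
    print(f"r = {r:7.0e} m  ->  T ~ {hawking_temperature(r):.2e} K")

T = hawking_temperature(1e-6)
omega = np.logspace(9, 12, 4)      # illustrative angular frequencies (rad/s)
print("thermal factor:", thermal_factor(omega, T))
```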
|
in the first attempt to introduce gauge theories in physics , hermann weyl , around the 1920s , proposed certain scale transformations to be a fundamental symmetry of nature . despite the intense use of weyl symmetry that has been made over the decades , in various theoretical settings , this idea never found its way to the laboratory . recently , building - up from work by lochlainn oraifeartaigh and collaborators on the weyl - gauge symmetry , applications of weyl - symmetry to the electronic properties of graphene have been put forward , first , in a theoretical setting , and later , in an experimental proposal . here i review those results , by enlarging and deepening the discussion of certain aspects , and by pointing to the steps necessary to make graphene a testing ground of fundamental ideas .
|
the social world is populated by organizations .the organizational landscape is changing all the time , with organizations being created , restructured and dissolved . in this dynamical environment, potentially several thousands of agents interact following different motivations , both at the individual level and at the collective level .the coevolution of various institutional settings , private and public , as well as the existence of actors and activities at different levels of aggregation , makes research on organizational dynamics a complex subject matter .we study _ organizational growth processes _ , that is to say , the time evolution of company size for a system of organizations .these processes exhibit statistical regularities , despite their complexity .one main striking regularity concerns the nature of the probability distribution for the growth rate , i.e. how fast the size changes in time .this distribution has two features .first , it follows a fat - tailed pattern , meaning that organizational size changes very little most of the time , but dramatically every once in a while , leading to rare ( yet possible ) booms and catastrophic crashes .the second feature is that fluctuations ( i.e. the variance ) in growth rates are less severe for larger organizations , so that the variance in growth is not uniform across all size scales .these empirical regularities are relevant for several reasons .the large fluctuations observed in real systems are rare , but certainly more likely than if the process were governed by a gaussian distribution .this kind of behaviour constitutes a counter - intuitive observation , since traditional models ( in economics , for example ) would expect it to be gaussian . the fat - tailed pattern adds unpredictability to the system , allowing for extreme events to take place in a short period of time .this has practical consequences for economic development and societal stability .finally , similar growth statistics have been observed in a wide variety of natural and artificial systems , which makes the understanding of underlying mechanisms behind growth processes an interdisciplinary topic .the reported systems can be mapped along several dimensions , as we illustrate below : country : : : us , italy , japan ( * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , united kingdom , brazil , sweden , profitability : : : commercial e.g. and voluntary organizations , industrial sector : : : pharmaceutical , furniture , printing , shoes , textiles , metals , chemicals , food , process : : : industrial production , firm growth of countries in the g7 , investments in mutual funds , stock price fluctuations , country gross domestic product ( gdp ) , exports / imports , bird population dynamics , university research output .our theoretical approach to the problem is the theory of social mechanisms in the analytical sociology framework . 
in this tradition ,models about social phenomena contribute to our theoretical understanding if they make clear the micro - level mechanisms that bring about a certain macro - level outcome , in this case the non - gaussian growth - rate pattern .it is individuals that , through their actions , bring about macro - level outcomes .the network of individual contacts is the setting in which the actors can influence each other , with activities in economic life being embedded in social networks .our proposed model has thus a focus on individuals in organizations as the relevant actors , and the network connecting them is of fundamental importance .this approach makes explicit the interplay between individuality and social influence , and shows how it can lead to an unexpected macro outcome . within this context, there are at least two possible reasons contributing to the features we observe in the growth - rate pattern .the fat tails could be caused by asymmetries in the structure of the social network that make certain actors more salient or popular than others .their involvement in the process of influencing group membership would generate these occasional large growth fluctuations . along this line ,the fat - tailed pattern would be a result of the underlying fat - tailed nature of the network structure .concerning fluctuations being smaller for larger companies , there could be a rich - get - richer phenomenon at play , by which large organizations become larger by a positive feedback process , and thus their size fluctuates less , while small organizations are more sensitive to perturbations caused by member entrance and exit .this is the firm diversification argument .several models have been proposed over a range of approaches ( see overview below ) , but there is no general consensus about a dominant mechanism to account for the emergence of fat - tailed growth - rate distributions .additionally , the majority of models in the literature do not focus on the network of agents that are members of an organization , but are rather aggregated models based on economic considerations or on various types of stochastic processes .for this reason , there is a need for models that describe the mechanism by which social actions at the micro level generate the pattern at the macro - level .we address these two possible reasons by modelling a social network where agents are subject to social influence when it comes deciding on organizational membership .we build and simulate a first model based on by , the _ saf model_. this first model represents the localized , network - dependent aspect of social influence .therefore , our first modelling aim is to implement the saf model and simulate it on different network topologies , in order to explore if a fat - tailed network structure is necessary for observing the non - gaussian growth pattern .secondly , we add a context - dependent aspect of social influence .we implement this by in the _ extended saf model _ with an influence parameter that weights an individual s membership choice by contextual influence .we propose an alternative to the diversification argument as a possible explanation of fluctuation dependence with organization size , in terms of a combination of network - dependent and context - dependent social influence . in this subsection, we describe existing models . 
none of the existing models , except for the one we take our departure from , incorporate micro - level mechanisms grounded on individuals and their interactions .we classify models into three categories : economic , physical and stochastic .we shall use ` group ' and ` organization ' as synonyms from here on ._ economic models _account for a large fraction of the models of growth processes in organizations .we find economic models for firm sizes as far back as the mid - twentieth century .simon developed the concept of growth opportunities ; sutton later elaborated on this concept .lucas considers on the distribution of managerial talent ; more recent models also use this notion .jovanovic introduced a model for firm learning through an evolutionary - like process .in more recent times , amaral et al . built a model on the concept of optimal size .a study by bottazzi used the concept of market diversification .finally , dosi studied the relation of growth with innovation and production efficiency .another model category is _ physical models _ , which combine physical and socio - economic concepts .we name three of them : microcanonical models , models using bose - einstein statistics derived from an urn - and - ball scheme , and percolation models ( see for a much deeper review ) . within the third category of _ stochastic models _is one of the oldest contributions to the literature , namely the _ gibrat model _ ( 1931 ) ( see also the reviews in ) .many of the classical models assume gibrat s law , or some more sophisticated version of it .therefore , we describe it in some more detail .the model is based on the following assumptions : 1 ._ law of proportionate effect _ or _ gibrat s law _ : the absolute growth rate of a company is independent of its size , i.e. : with the time period between measurements , an uncorrelated random noise , usually taken to be normally distributed with and , 2 .successive growth rates are uncorrelated in time , 3. companies do not interact . in order to measure growth , the central variable we look at is the growth rate , defined as .the choice of ( typically one year ) is conditioned by the sampling frequency in the data sets .we use here . naming , we write a rate in general as and call initial size ( this term is not to be confused with the size at the initial time step ; it is rather the size from which the growth rate is computed ) .we should also note that the statistical distributions depend on .the size distribution is typically approximated by a log - normal .on the other hand , the growth rate in the gibrat model follows a random - walk - like dynamics , and its distribution is normal .however , the literature agrees that a good way to describe at least the body of the growth - rate distribution for empirical data is through a laplace ( or `` tent - shaped '' ) distribution where is the mean value of the growth rates in the bin , and its standard deviation .this means that the gibrat pattern is qualitatively very different from what one observes in real data .the growth - rate distributions for all initial - size bins are alike ( due to gibrat s law ) , contrary to real observations , in particular regarding the decay of the fluctuations as increases .it is reported in the literature that a power law of the form provides a good description for this decay .moreover , the laplace tails are `` fatter '' than gaussian , i.e. 
extreme values have higher probability .this implies that large growth rates ( both positive and negative ) are more likely in reality than in the gibrat model . in other words ,organizational size changes very little most of the time , but it can occasionally also change dramatically . additionally , there is a subcategory of models called _ subunit models _ , in which the size of a company is constructed as the sum of the contributions of internal subunits , e.g. different divisions .one well - known model is by amaral et al .a variation of this model is the transactional model .another variation represents groups as classes composed of subunits . in the hierarchical tree model organizational hierarchy comes into play explicitly . a final model in this categoryis based on additive replication in its general form .we call it _ general saf model _ ,for schwarzkopf , axtell and farmer who first proposed it .a specific case of this model is the base for ours ( see saf model below ) . at each time step , each member of an organization is replaced by new ones , this last value taken from a replication distribution .there is a competition rule : the new element is either taken from another group with probability , or created from scratch with probability .the model is implemented on a social network , where vertices are individuals and edges are acquaintance relationships .the general saf model is the only model among the reviewed literature that makes an explicit reference to a social network .this model follows our approach when it comes to designing a micro simulation model that generates a macro - level pattern through a defined mechanism .the range of approaches from different disciplines shows the interdisciplinary nature of the phenomenon and the potential application of meaningful generative models to different scientific fields .the dynamics by which people become members of an organization has many different aspects .one of them is influence through contacts in social networks , which we call _ contact influence_. the setting for the _ saf model _ consists then of a contact network .see the illustration in fig .[ fig_syst ] .there are vertices representing individuals , and the arcs between them are links of contact influence .we follow here .we work in the strong - competition limit ( ) : each agent added to a group must be taken from another group .consequently , the total number of agents is constant .the number of groups is fixed as well .the only interaction we consider among agents is social influence , through edges in their contact networks .the network arc meaning that agent is influenced by agent .this simplification leaves out interactions coming from the formal ( or informal ) hierarchical structure of the organization .the underlying structure in the model is rather the social network of individual contacts , which we assume static over the time span of the problem .the model variables are the sizes of each organization at time , with .the size of an organization at a certain time is the sum of the individuals in that group at that time .previous research has shown that other size definitions for instance in terms of sales produce similar statistics ( see e.g. ref . ) .regarding the time evolution , at each time step , a vertex is picked at random . 
the probability for vertex to switch to group we call switching probability , and is computed as where is the degree of vertex in group , counting its own group .that is to say , from the number of vertices that influence vertex ( counting itself ) , how many belong to group .this rule conditions the decision to switch group on the group membership of network neighbours , and allows for the possibility to stay in the current organization .+ [ figure 1 here ] + we impose an extra rule stating that no group can die out permanently .this is done to avoid that groups hit the absorbing state at size zero and thereby keeping the system in equilibrium .we implement the rule as follows : every monte carlo step a check is performed ; if a group has zero size , a random vertex is switched to the empty group .a non - equilibrium version of the model is also possible .it would lead to different dynamics , with all system realizations going to a final one - group absorbing state .we tested this version as well , and found that the fat - tailed pattern is qualitatively reproduced .the implications are different , though .for instance , simulation time in the non - equilibrium version of the model could be translated more directly into some function of real time , while for an equilibrium model the association is not so direct .one advantage with this model is that it has a clear sociological interpretation : the more close acquaintances a person has in a certain group , the more likely it is that the person will choose to become member of that group .the decision is mediated by a contact network , which puts an emphasis on the relational component of membership choice .so far we have proposed a model where an individual is more influenced to join a group the more acquaintances she has in that group at the time .influence is exerted via the individual s contact neighbourhood .but social influence can be broader than that .several models for social influence have been proposed , for example regarding culture , opinion formation , information sharing in groups , etc .specifically , there is a contextual aspect of social influence .different settings can entail different pressures towards homogeneity of opinions or membership .we add one parameter to our model that represents the degree of this kind of social influence , which we call _ contextual influence_. the key element we want to incorporate is that influence in a certain setting is not a property of individual agents , but rather a property that affects all the members in the mentioned context . we assume , for simplicity , a uniform contextual influence . we model it through a parameter of_ contextual influence _ , with . in the limit , the person does not feel any pressure to align herself with the neighbourhood . on the contrary , in the limit the person acts solely based on the majority opinion in her surrounding neighbourhood .we now define the _ extended saf model_. 
the assumptions , parameters , and variables listed before are still valid .however , the time evolution is now governed by the following switching probability : setting recovers the first model .in the situation of low contextual influence ( ) a vertex can change its state at random , not being influenced by the groups of the contact vertices , while still retaining information on the possible groups she can choose to switch to .the system configuration tends thus to a random one .this can be thought as analogous to a high - temperature ( disordered ) situation in a physical system .in the situation of high contextual influence ( ) the vertex looks highly upon her contacts .configurations where the vertex is not aligned with the majority of her neighbours become less and less likely , and the system tends to polarize itself in domains .this is the analogous of a low - temperature ( ordered ) situation , the difference being that a physical unit does not have global information about the total number of groups in the system .we implement the saf model in an erds - rnyi ( er ) undirected network , by monte carlo simulation .the size distribution fits a log - normal distribution . the growth rate distribution for the basic case is shown in fig .[ fig_safgrdist]a .all initial - size bins fit a laplace distribution .the variance decay with an increase of is also verified .additionally , fig .[ fig_safgrdist]b plots the so - called scaled distributions ( used in e.g. ) . given the function in eq .( [ eqlaplace ] ) , one can rescale the variables under this rescaling , the distributions for the different initial - size bins should collapse onto a single curve close to the laplace distribution , as the figure shows .+ [ figure 2 here ] + looking at the simulated growth - rate distributions , the upper tail tends in general to underestimate the corresponding laplace curve , while the lower tail follows it more closely .we interpret this as a consequence of working with constant . 
in our model ,membership growth in one organization is done at the expense of membership decline in the rest of the groups .this is reflected in less frequent positive growth rates .the fit is still good because the laplace is a highly - peaked distribution , concentrating much of the mass around zero , so the main deviations represent a small fraction of the total deviation .the fact that real systems exhibit fatter tails on both sides could be due many factors , including the fact that growth - rate distributions could be a superposition of the distributions for different .we then implement the simulation with different network properties , in order to see the impact of network structure on the observables .+ [ figure 3 here ] + as a first change , we change the network from undirected to directed .we do so , keeping the mean degree constant , which implies that the number of influence arcs ( incoming and outgoing ) to a vertex remains on average unchanged .the reciprocity of each individual edge is lost , but the situation is still balanced on average , because the arc distribution along the network is random .that is to say , each agent on average influences and is influenced by the same number of alters .the comparison is illustrated by fig .[ fig_netparam]a b .we can observe that the pattern is similar , both in terms of variance and of the ranges of initial - size bins .next , we change the network degree distribution from er to scale - free ( sf ) , again keeping the mean degree constant .the plots are shown in fig .[ fig_netparam]c d .the undirected case is qualitatively similar , while the difference comes with the scale - free directed case . in the latter the distributions have larger variance in all initial - size bins , and more importantly , the higher initial - size bin comprises a much larger size range . the sf degree distribution appears to induce this behaviour , and our interpretation of this is as follows . in a sf network , by definition , there will be few `` popular '' vertices , and a lot of vertices with low popularity .as long as the influence is symmetric , the popular vertices drag its neighbours to their groups , but after a while , the equilibrium condition turns the tables the high degree of a few vertices just makes the process faster in certain moments . imposing a directed network breaks the symmetry . in the directed case ,some popular vertices are highly influential ( they receive a lot of arcs , and consequently them changing their group impacts many other vertices ) while other popular vertices are highly susceptible ( they radiate many arcs , but a group change is not as influential ) .we understand that this asymmetry manifests itself on the dynamics , causing the growth - rate distributions to be broader , and a lot of peaks of high group size reflected in the higher range .the fact that both er and sf networks are able to generate the laplace pattern can be interpreted in the light of the time - evolution rule of the model in eq .( [ safswitchp ] ) . in effect , it is a rule implementing some kind of preferential attachment , with the probability to switch group becoming greater as the vertex degree increases .using the extended saf model , we now test different contextual influences .the analysis is done on a square lattice to facilitate the visualization of domains of vertices belonging to the same organization .the lattice has periodic boundary conditions , and we use the moore neighbourhood ( nearest neighbours ) . 
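Before turning to the results, a minimal simulation sketch (Python) of the extended SAF dynamics on the square lattice may help fix ideas. The switching rule used below, a normalised power delta of the neighbourhood fraction counting the agent herself, is one natural form consistent with the limits described in the text (delta = 1 recovering the original SAF rule, delta tending to 0 giving an essentially random choice among the m groups), but it is an assumption, not a transcription of the paper's exact expression; lattice size, number of groups and run length are likewise illustrative.

```python
# Minimal sketch of the extended SAF dynamics on an L x L square lattice with
# periodic boundaries and Moore (8-neighbour) neighbourhoods.  The rule
# P(g) ~ (n_g / (k+1))**delta, normalised over the m groups, is an assumed
# form consistent with the limits described in the text (delta = 1 recovers
# the original SAF rule; delta = 0 gives a uniformly random choice, since
# 0**0 evaluates to 1).  Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
L, m, delta = 50, 40, 1.0               # lattice side, number of groups, contextual influence
sweeps = 200
group = rng.integers(m, size=(L, L))    # random initial membership
moore = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]

def neighbourhood_counts(i, j):
    counts = np.zeros(m)
    counts[group[i, j]] += 1            # the agent counts her own membership
    for di, dj in moore:
        counts[group[(i + di) % L, (j + dj) % L]] += 1
    return counts

for t in range(sweeps * L * L):
    i, j = rng.integers(L), rng.integers(L)
    frac = neighbourhood_counts(i, j) / (len(moore) + 1)
    w = frac ** delta
    group[i, j] = rng.choice(m, p=w / w.sum())
    if t % (L * L) == 0:                # once per sweep: no group may die out permanently
        sizes = np.bincount(group.ravel(), minlength=m)
        for g in np.flatnonzero(sizes == 0):
            group[rng.integers(L), rng.integers(L)] = g

sizes = np.sort(np.bincount(group.ravel(), minlength=m))[::-1]
print("ten largest group sizes:", sizes[:10])
```

Recording the group sizes at regular intervals and taking one-step log growth rates from them yields the distributions analysed below.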
in fig .[ fig_modsafdelta ] , we plot the group spacial distribution across the lattice , as well as the growth - rate distribution , for three values of .the first situation , , corresponds to a situation of low contextual influence the system has no clear domains , the organization assignment tending to a random uniform one .this is reflected in the growth - rate distribution with a pattern similar to the one encountered in the gibrat model .the second situation , , recovers the original saf model .the third situation , , corresponds to a situation of high contextual influence .there are clear domains where a few organizations absorb the majority of agents .this is reflected in the growth - rate distribution as a collapse of all distributions on highly - peaked curves close to .+ [ figure 4 here ] +in our study we show how individual agents , having local information on membership alternatives and interacting with local simple rules through their social network , can generate fat - tailed macro patterns of organizational growth . in doing so, we have not assumed any institutional constrain or external perturbation .rather , it is the internal dynamics of the interaction that bring the distribution about .individual agents are subject to contact influence in their localized network neighbourhoods , but the aggregation of their individual membership decisions brings about unexpected macro - level outcomes .sometimes , like in the case of large values of growth rates , the consequences at the macro level are quite extreme .this result is relevant for the design of policies and regulations , which are usually much more grounded in traditional approaches .non - gaussian patterns tend to challenge our worldview of how a lot of processes typically work .the growth - rate fat - tailed pattern shows up in erds - rnyi , scale - free and square - lattice regular networks , both in terms of the laplace distribution and in the decrease of the variance with .this suggests that the mechanism driven by influence - based rules is more relevant to the pattern s qualitative replication than the details of network topology .going back to our aims at the introduction , we find then that the scale - free character of the network is not a necessary condition to get a laplace - like growth - rate distribution .looking at the results from the extended saf model , the parameter region around provides a mechanism able to replicate the system s growth process features .when , the system has no clear group clusters , the organization assignment tending to a random one . 
on the contrary ,when , there are clear domains where a few organizations absorb the majority of agents .the intermediate situation best describes the real system s behaviour , and is modelled as a combination of contact and contextual influence .while the high- situation produces stable rich - get - richer dynamics , to the extent that single - group clusters do not break up , in the intermediate situation the rich - get - richer dynamics is no more stable .the situation around is also the only one where fluctuations are different for different initial sizes .this addresses our second aim , so that our model does not resort to the diversification argument to generate the decreasing variance with initial size .the sociological interpretation of our results is that the transition zone where the real system exists is an intermediate situation , dominated by neither totally random behaviour nor totally compliant behaviour .it is possible to interpret this from the point of view of the information an agent has to have in order to act .one way to implement the saf model ( ) is to think that , at a given time step , an agent chooses a random link amongst her neighbours , and switches to that contact s group .such an implementation means that , on average , each organization will be picked with a probability equal to the corresponding extended vertex degree of that agent .each agent needs to know only the group membership of the contact she last encounters , making this situation a reasonable model of a dynamics where people successively meet contacts without any further information .the high- situation demands more information , since the agent should know the membership of all her contacts at a given time to be able to determine which is the majority membership . on the side of low contextual influence , choosing at random requires to know at least how many groups there are .so the intermediate case offers the agent a localized decision rule with minimal information requirements .we therefore get to a realistic model without invoking any argument of the real system being self - organized around a critical region .the parameter can thus be reinterpreted as a way to weight different choice strategies , and tuning it around recovers a case of bounded rationality .therefore , we have to interpretations for : the degree of contextual influence , and the tuning of membership choice strategy .the former is external to the individual , while the latter is internal .both have an impact on the agent s behaviour , and both produce the aggregate pattern we observe empirically when tuned around the value for the saf model .this suggests a duality between agent and social context , where the two views are consistent with the statistics we observe , and compatible with each other .we think this way of thinking exemplifies how to model one of the core issues in sociology , i.e. , the interplay between individuality and social influence .further research should try to identify quantitatively how the statistical properties of the growth - rate distribution respond to systematic variations in both model and network parameters .for instance , in our explorations we found that the typical size of a group , given by , seems to affect the distribution s variance . 
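As an illustration of how such a quantitative study could proceed, the sketch below (Python) bins one-step log growth rates by initial size, fits each bin by maximum likelihood to the Laplace form of eq. ([eqlaplace]), and extracts the exponent of the variance-size relation. The input size matrix is a synthetic placeholder, and bin edges and sample-size cut-offs are arbitrary choices made for the example.

```python
# Hedged sketch of the measurement pipeline: bin log growth rates by initial
# size, fit each bin by maximum likelihood to the Laplace distribution, and
# estimate the exponent of sigma(s0) ~ s0**(-beta).  `sizes` is a synthetic
# placeholder for simulated or empirical group-size time series.
import numpy as np

rng = np.random.default_rng(0)
sizes = rng.lognormal(mean=4.0, sigma=1.5, size=(2000, 30)).astype(int) + 1  # placeholder

s0 = sizes[:, :-1].ravel().astype(float)
s1 = sizes[:, 1:].ravel().astype(float)
g = np.log(s1 / s0)                       # one-step log growth rates

edges = np.logspace(np.log10(s0.min()), np.log10(s0.max()), 6)
points = []
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (s0 >= lo) & (s0 < hi)
    if sel.sum() < 50:
        continue
    gi = g[sel]
    mu_hat = np.median(gi)                # Laplace MLE for the location
    b_hat = np.mean(np.abs(gi - mu_hat))  # Laplace MLE for the scale
    points.append((np.sqrt(lo * hi), np.sqrt(2.0) * b_hat))  # sigma = sqrt(2)*b

s_mid, sig = map(np.array, zip(*points))
beta_hat = -np.polyfit(np.log(s_mid), np.log(sig), 1)[0]
print("estimated variance-decay exponent beta ~", beta_hat)
```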
a significant model extension to consider would be to allow the system size to change in time .this variation would have to be implemented with care in this network approach , because the properties of the growing network should be monitored dynamically throughout the simulation .other interesting extensions could be to incorporate community and hierarchical structure .the possibility to belong to more than one organization is another important point .another discussion concerns extensions to the parameter . in this studywe have introduced it as a parameter quantifying the effect of an agent s social context .contextual influence enters as an exponent that weights the probability to switch group . in our formulation ,the degree of contextual influence is uniform for all agents and constant in simulation time .there are relevant extensions to consider .for instance , one could assume that different types of organizations have particularities as to their social settings , and model this with a parameter that depends on organizational type .these different parameters can then be related to the growth - rate statistics .additionally , this framework of analysis should be quite dependent on the size rage of the organizations under study , i.e. small voluntary - oriented organizations with local range have a setting where the social networks may dominate the dynamics , while large formal organizations have other structural elements in place so that a direct application of our model would not be advisable .finally , a better understanding of organizational growth processes could be applicable to other processes producing similar statistical features , from bird populations to financial and economic systems .this being said , one should still be careful in signaling the apparent universal presence of these common features as evidence of the systems belonging to the same class . on that line , it is reported that the exponent of the variance power - law relation , despite its value being similar for different systems , may not be universal. 
however , it is likely that different growth processes share similarities in terms of the underlying mechanisms driving them .hm and fl conceived the model ; hm ran the simulations ; hm , ph and fl analyzed the results ; hm , ph and fl wrote the paper .xx amaral lan , buldyrev sv , havlin s , leschhorn h , maass p , et al .( 1997 ) scaling behavior in economics i : empirical results for company growth .j phys i ( france ) 7 : 621 - 633 .buldyrev sv , amaral lan , havlin s , leschhorn h , maass p , et al .( 1997 ) scaling behavior in economics ii : modeling of company growth .j phys i ( france ) 7 : 635 - 650 .amaral lan , buldyrev sv , havlin s , maass p , salinger ma , et al .( 1997 ) scaling behaviour in economics : the problem of quantifying company growth .physica a 244 : 1 - 24 .amaral lan , buldyrev sv , havlin s , salinger ma , stanley he ( 1998 ) power law scaling for a system of interacting units with complex internal structure .phys rev lett 80 : 1385 - 1388 .bottazzi g , secchi a ( 2003 ) common properties and sectoral specificities in the dynamics of u.s .manufacturing companies .review of industrial organization 23 : 217 - 232 .gupta h , campanha jr , de aguiar dr , queiroz ga , raheja cg ( 2007 ) gradually truncated log - normal in usa publicly traded firm size distribution .physica a 375 : 643 - 650 .plerou v , gopikrishnan p , amaral lan , meyer m , stanley he ( 1999 ) scaling of the distribution of price fluctuations of individual companies .phys rev e 60 : 6519 - 6529 .stanley mhr , amaral lan ( 1996 ) scaling behaviour in the growth of companies .nature 379 : 804 - 806 . bottazzi g ( 2001 ) firm diversification and the law of proportionate effect. lem paper series .bottazzi g ( 2007 ) on the irreconcialability of pareto and gibrat laws .lem paper series .bottazzi g , secchi a ( 2002 ) on the laplace distribution of firm growth rates .lem paper series .bottazzi g , secchi a ( 2003 ) a stochastic model of firm growth .physica a 324 : 213 - 219 .bottazzi g , secchi a ( 2006 ) explaining the distribution of firm growth rates .the rand journal of economics 37 : 235 - 256 .dosi g ( 2005 ) statistical regularities in the evolution of industries .a guide through some evidence and challenges for the theory .lem paper series .ishikawa a ( 2006 ) derivation of the distribution from extended gibrat s law .physica a 367 : 425 - 434 .ishikawa a ( 2006 ) pareto index induced from the scale of companies .physica a 363 : 367 - 376 .ishikawa a ( 2007 ) the uniqueness of firm size distribution function from tent - shaped growth rate distribution .physica a 383 : 79 - 84 .hart pe , oulton n ( 1996 ) growth and size of firms .the econ j 106 : 1242 - 1252 .singh a , whittington g ( 1975 ) the size and growth of firms .rev econ studies 42 : 15 - 26 .liljeros f ( 2001 ) the complexity of social organizing . ph.d .thesis , dept . 
of sociology , stockholm university , stockholm , sweden .de fabritiis g , pammolli f , riccaboni m ( 2003 ) on size and growth of business firms .physica a 324 : 38 - 44 .fu d , pammolli f , buldyrev sv , riccaboni m , matia k , et al .( 2005 ) the growth of business firms : theoretical framework and empirical evidence .pnas 102 : 18801 - 18806 .matia k , fu d , buldyrev sv , pammolli f , riccaboni m , et al .( 2004 ) statistical properties of business firms structure and growth .europhys lett 67 : 498 - 503 .fagiolo g , napoletano m , roventini a ( 2007 ) how do output growth - rate distributions look like ?some cross - country , time - series evidence .eur phys j b 57 : 205 - 211 .gaffeo e , gallegati m , palestrini a ( 2003 ) on the size distribution of firms : additional evidence from the g7 countries .physica a 324 : 117 - 123 .schwarzkopf y , axtell rl , farmer jd ( 2010 ) an explanation of universality in growth fluctuations .preprint ssrn ssrn.com/abstract=1597504 .castaldi c , dosi g ( 2009 ) the patterns of output growth of firms and countries : scale invariances and scale specificities .empir econ 37 : 475 - 495 .lee y , amaral lan , canning d , meyer m , stanley he ( 1998 ) universal features in the growth dynamics of complex organizations .phys rev lett 81 : 3275 - 3278 .podobnik b , horvatic d , pammolli f , wang f , stanley he , et al .( 2008 ) size - dependent standard deviation for growth rates : empirical results and theoretical modeling .phys rev e 77 : 056102 1 - 8 .keitt t , amaral lan , buldyrev sv , stanley he ( 2002 ) scaling in the growth of geographically subdivided populations : invariant patterns from a continent - wide biological survey .phil trans r soc lond b 357 : 627 - 633 .plerou v , amaral lan , gopikrishnan p , meyer m , stanley he ( 1999 ) similarities between the growth dynamics of university research and of competitive economic activities .nature 400 : 433 - 437 .ijiri y , simon ha ( 1977 ) skew distributions and the sizes of business firms .amsterdam : north - holland .simon ha ( 1964 ) comment : firm size and rate of growth .j pol econ 72 : 81 - 82 .simon ha , bonini c ( 1958 ) the size distribution of business firms .amer econom rev 48 : 607 - 617 .lucas r ( 1978 ) on the size distribution of business firms .bell j econ 9 : 508 - 523 .gupta h , campanha jr ( 2003 ) firms growth dynamics , competition and power - law scaling .physica a 323 : 626 - 634 .jovanovic b ( 1982 ) selection and the evolution of industry .econometrica 50 : 649 - 670 .sutton j ( 2002 ) the variance of firm growth rates : the ` scaling ' puzzle . physica a 312 : 577 - 590 .wyart m , bouchaud jp ( 2003 ) statistical models for company growth .physica a 326 : 241 - 255 .castellano c , fortunato s , loreto v ( 2009 ) statistical physics of social dynamics .rev mod phys 81 : 591 - 646. 
fu d , buldyrev sv , salinger ma , stanley he ( 2006 ) percolation model for growth rates of aggregates and its application for business firm growth .phys rev e 74 : 036118 1 - 7 .gibrat r ( 1931 ) les ingalits conomiques .paris : recueil sirey .kalecki m ( 1945 ) on the gibrat distribution .econometrica 13 : 161 - 170 .sutton j ( 1997 ) gibrat s legacy .j econ literature 35 : 40 - 59 .axtell rl ( 2001 ) zipf distribution of u.s .firm sizes .science , new series 293 : 1818 - 1820 .hymer s , pashigian p ( 1962 ) firm size and rate of growth .j pol econ 70 : 556 - 569 .mansfield e ( 1962 ) entry , gibrat s law , innovation , and the growth of firms .amer econom rev 52 : 1023 - 1051 .schweiger a , buldyrev sv , stanley he ( 2007 ) a transactional theory of fluctuations in company size . preprint arxiv physics/0703023v1 .riccaboni m , pammolli f , buldyrev sv , ponta l , stanley he ( 2008 ) the size variance relationship of business firm growth rates .pnas 105 : 19595 - 19600 .coleman js ( 1986 ) social theory , social research , and a theory of action .amer j sociology 91 : 1309 - 1335 .hedstrm p ( 2005 ) dissecting the social : on the principles of analytical sociology .cambridge : cambridge university press .granovetter m ( 1985 ) economic action and social structure : the problem of embeddedness .amer j sociology 91 : 481 - 510 .wilson j ( 2000 ) volunteering .annu rev sociol 26 : 215 - 240 .axelrod r ( 1997 ) the dissemination of culture : a model with local convergence and global polarization .j conflict resolut 41 : 203 - 226 .sznajd - weron k ( 2005 ) sznajd model and its applications .acta phys pol b 36 : 2537 - 2547 .carley km ( 1991 ) a theory of group stability .american sociological review 56 : 331 - 354 .kuran t ( 1995 ) private truths , public lies : the social consequences of preference falsification .cambridge : harvard university press .kahneman , d ( 2003 ) maps of bounded rationality : psychology for behavioral economics .the american economic review 93 : 1449 - 1475 .of organization at time is the number of vertices belonging to that group at that time . at a certain time step ,the probability for an individual to switch group depends on the group membership of its neighbourhood ( highlighted in the figure ) .note the loop on the individual to represent that the agent takes into account her own membership in the decision.,width=313 ] the size at one year , and the size after one year , we define the growth rate . herewe plot the conditional pdf to have a growth rate given an initial size , in log - scale .the data is binned by initial - size ranges , and shown by organization type .we also plot a fit by mle to the laplace distribution in eq .( [ eqlaplace ] ) .the overall fit is good , because that distribution carries most of its probabilistic mass in the body .( b ) same information , now in the scaled form of eq .( [ eqlaplacescaled ] ) .[ erds - rnyi ( er ) undirected network , , , .],width=313 ] increases , we observe how domains gradually appear .the higher the contextual influence , the more likely is that a vertex would align herself with her neighbours .( a ) . low contextual influence , random behaviour similar to gibrat model pattern .no domains exist .( b ) .original saf behaviour .domains begin to appear .( c ) .high contextual influence .presence of clear domains .[ square lattice , , , .],width=313 ]
|
organizational growth processes have consistently been shown to exhibit a fatter - than - gaussian growth - rate distribution in a variety of settings . long periods of relatively small changes are interrupted by sudden changes in all size scales . this kind of extreme events can have important consequences for the development of biological and socio - economic systems . existing models do not derive this aggregated pattern from agent actions at the micro level . we develop an agent - based simulation model on a social network . we take our departure in a model by a schwarzkopf et al . on a scale - free network . we reproduce the fat - tailed pattern out of internal dynamics alone , and also find that it is robust with respect to network topology . thus , the social network and the local interactions are a prerequisite for generating the pattern , but not the network topology itself . we further extend the model with a parameter that weights the relative fraction of an individual s neighbours belonging to a given organization , representing a contextual aspect of social influence . in the lower limit of this parameter , the fraction is irrelevant and choice of organization is random . in the upper limit of the parameter , the largest fraction quickly dominates , leading to a winner - takes - all situation . we recover the real pattern as an intermediate case between these two extremes .
|
let be a time-homogeneous diffusion process which is the unique (strong) solution of the following stochastic differential equation: if is a time-dependent boundary, we are interested in estimating either the pdf or the cdf of the first passage time (fpt) of the diffusion process through this boundary, that is, we will study the following random variable: in general, there is no explicit expression for the first-passage-time density of a diffusion process through a time-varying boundary. to date, only a few specific cases provide closed-form formulas, for example when the process is gaussian and the boundary is of a daniels curve type. thus, we mainly rely on simulation techniques to estimate this density in a general setting. the main goal of this work is to develop a computationally efficient algorithm that will provide reliable fpt density estimates. the paper is organized as follows. in section 2, we review existing techniques, followed by the mathematical foundations leading to a novel algorithm. finally, section 3 is devoted to various examples enabling us to evaluate the algorithm s performance. this is the simplest and best-known approach, based on the law of large numbers. after fixing a time interval, we divide it into smaller subintervals, simulate a path of the process along those time points and, if an upcrossing occurs, note the subinterval where the first upcrossing takes place. generally, the midpoint of this subinterval forms the estimated first passage time of this simulated path. we repeat the procedure a large number of times to construct a pdf or cdf estimate of this stopping time. consider a brownian motion, a linear boundary, then for and, from standard theory, the first passage time probability has an explicit form given by where denotes the cdf of a standard normal distribution. setting, and, table 1 gives estimates of the fpt probability for various numbers of simulated paths and time-step discretizations. clearly, even in a simple case such as this one, in order to have a suitable estimation of the true value we have to rely on a large number of paths and a very fine partition of the time interval. [table 1: monte carlo estimates of the fpt probability of a brownian motion through the boundary, for various numbers of simulated paths and time-step discretizations.] another drawback of the crude monte carlo approach is that it tends to overestimate the true value of the first passage time, since an upcrossing may occur in between simulated points of a complete path, as illustrated in figure [onepath]. instead of continuously repeating the whole monte carlo procedure with an ever finer interval partition to obtain better estimates, let us see how one could improve on the initial estimates without discarding the simulated paths. an astute idea that has been put forward by several authors is to obtain, ideally, the probability law of an upcrossing between simulated points: if one knows the probability of an upcrossing in a given subinterval conditional on the simulated endpoints, one can draw a uniform random variable and assert that there is an upcrossing whenever the draw falls below this probability. since the exact fpt probability of a diffusion bridge will more often than not be unavailable, we need to consider an adequate estimation of this probability.
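as a concrete illustration of the two estimators discussed above, the following minimal python sketch compares the crude monte carlo estimate of the fpt probability of a brownian motion through a linear boundary with the exact closed-form value, and then adds the standard brownian-bridge upcrossing correction on each subinterval; the boundary parameters and sample sizes below are illustrative choices, not the ones behind table 1.

```python
import numpy as np
from math import erf, exp, sqrt

def Phi(x):                       # standard normal cdf
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# illustrative boundary S(t) = a + b*t and horizon T (not the values of table 1)
a, b, T = 1.0, 0.5, 1.0
n_paths, n_steps = 20_000, 100
dt = T / n_steps
rng = np.random.default_rng(0)

# exact result for a Brownian motion started at 0 and the boundary a + b*t (a > 0)
exact = Phi((-a - b * T) / sqrt(T)) + exp(-2.0 * a * b) * Phi((b * T - a) / sqrt(T))

crude_hits, bridge_est = 0, 0.0
t = np.linspace(0.0, T, n_steps + 1)
bound = a + b * t
for _ in range(n_paths):
    w = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, sqrt(dt), n_steps))))
    if np.any(w >= bound):                     # crude estimate: grid points only
        crude_hits += 1
        bridge_est += 1.0
    else:
        # Brownian-bridge probability of an upcrossing inside each subinterval,
        # with the boundary treated as linear between the grid points
        p_up = np.exp(-2.0 * (bound[:-1] - w[:-1]) * (bound[1:] - w[1:]) / dt)
        bridge_est += 1.0 - np.prod(1.0 - p_up)

print(f"exact P(tau <= T)   : {exact:.4f}")
print(f"crude Monte Carlo   : {crude_hits / n_paths:.4f}")
print(f"bridge-corrected MC : {bridge_est / n_paths:.4f}")
```

with these settings the crude estimate typically falls noticeably short of the exact value, while the bridge-corrected one recovers most of the missed crossings without refining the grid.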
for each subinterval, one could consider simulating paths of approximate tied-down processes as proposed by giraudo, sacerdote and zucca, where they basically used a kloeden-platen approximation scheme with order of convergence 1.5, or we could make use of more recent results from lin, chen and mykland or sørensen and bladt to improve on the fpt probability estimates. although all of these may constitute adequate approximations of the true fpt probability, they may prove costly in computation time, since these methods amount to generating numerous simulations of bridge paths on successive subintervals for each of the original sample paths. another alternative, as first proposed in strittmatter, is to consider that for a small enough interval the diffusion part of the process should remain fairly constant, and then to consider a brownian bridge approximation of the diffusion bridge and exploit known results on the fpt of brownian bridges. for example, giraudo and sacerdote considered solving numerically the volterra-type integral equation linked to the generalized brownian bridge fpt probability through general time-varying boundaries. since a certain number of iterations may be needed to obtain adequate solutions of integral equations specific to each sample path and each successive subinterval, this may considerably increase computing time. finally, it is worth mentioning that all of the above methods could, in many cases, be improved significantly as far as accuracy is concerned by first applying a lamperti transform to both the original process and the frontier, as described for example in iacus. indeed, define and apply it to the original process and the time-varying boundary. assuming that is one-to-one, the original problem is then equivalent to finding the fpt density of where the new boundary is given by and, by itô s formula, the diffusion process follows the dynamics where since the diffusion part of the process is constant, the simple brownian bridge will constitute a good approximation of the diffusion bridge. in our approach, while still considering a brownian bridge approximation of the diffusion bridges after a lamperti transform as described previously, we propose to consider localized daniels curve approximations of the time-varying boundary. since explicit formulas for the first passage time probability are available in this case, one readily gets an adequate approximation of the true probability. furthermore, under mild assumptions, a (unique) daniels curve approximation can easily be obtained by simply taking the endpoints of the segment and the value at the midpoint (or another point of our choosing). indeed, we will show that this leads to a non-linear system of three equations that can be explicitly solved. before describing our algorithm, we will need the following key results: * proposition 1*.
consider a brownian bridge defined on a time interval. * proposition 2*. let be a time interval, consider the points , , and set . if then there is a unique daniels curve ([dan]) passing through the three points with parameters. _proof._ the set of points generates a non-linear system of three equations of the form
$$\frac{\alpha}{2}-\frac{\delta t}{\alpha}\,\ln\!\left[\frac{\beta+\sqrt{\beta^{2}+4\gamma\, e^{-\alpha^{2}/\delta t}}}{2}\right] = c ,$$
one for each of the three prescribed values. obviously the first equation gives , while simple algebraic manipulations on the last two equations lead us to solve the following linear system, which can be rewritten in the form. since , there exists a unique solution given by. this would constitute the solution to the original system provided that and . notice first that and therefore and if furthermore then and clearly is satisfied. so if we assume now that then , thus we need to verify that , which is the case since
$$\frac{a^{2}}{\left(a^{3}b-c\right)^{2}}\left(\left(a^{4}b^{2}-c^{2}\right)^{2}+4a^{3}bc\left(c-ab\right)\left(a^{3}b-c\right)\right)
=\frac{a^{2}}{\left(a^{3}b-c\right)^{2}}\left(a^{4}b^{2}+c^{2}-2a^{3}bc\right)^{2}.$$
the final step is to make sure that it solves the original system. substituting back in ([eq1]) (where only positive square roots are involved), we see that this is the case only if , or equivalently , which is verified through ([cond1]).
the fpt algorithm is described as follows:
1. apply the lamperti transform ([lamp]) to the original diffusion process ([diffx]) and frontier to obtain the new process ([diffy]) and boundary
2. select a time interval and construct a partition
3. initialize the fpt vector counter
4. initialize the path counter; while it is less than the number of desired paths do the following:
   1. simulate a path of the process
   2. initialize the subinterval counter; while it is less than the number of desired subintervals do the following:
      1. if then set the fpt vector component to and the path counter to , go to step 5
      2. set , , , , and finally set , , , , and as in ([param]) of proposition 2
      3. if then set ; if then set ; if then set
      4. set the upcrossing probability to
      5. generate a value from a uniform random variable
      6. if then set the fpt vector component to and the path counter to , go to step 5; else set and go to step 7
note that step 9 includes extreme cases where the middle point of the frontier in a subinterval may not be reached by a daniels curve; in that case we use the closest curve possible. we will focus our examples on diffusion processes whose paths can be simulated exactly. together with known results on fpt densities and bounds, this allows us to better visualize the approximation error due essentially to the algorithm.
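before turning to the examples, proposition 2 can be illustrated with a short numerical sketch. it assumes a daniels-type boundary of the form S(t) = α/2 − (t/α) ln[(β + √(β² + 4γ e^{−α²/t}))/2], with the first point placed at t → 0⁺ (where the curve tends to α/2, so the first equation simply fixes α); the placement of the three points below is an illustrative choice and may differ from the one used in the paper, but it shows how the remaining two equations become linear in (β, γ) and can be solved explicitly.

```python
import numpy as np

def daniels(t, alpha, beta, gamma):
    """Daniels-type boundary; for beta > 0 it tends to alpha/2 as t -> 0+."""
    return alpha / 2.0 - (t / alpha) * np.log(
        (beta + np.sqrt(beta**2 + 4.0 * gamma * np.exp(-alpha**2 / t))) / 2.0)

def fit_daniels(a, b, c, tm, t2):
    """Fit (alpha, beta, gamma) so that S(0+) = a, S(tm) = b, S(t2) = c.

    S(t) = d  is equivalent to  2*R*beta + 4*exp(-alpha^2/t)*gamma = R^2
    with R = 2*exp(alpha*(alpha/2 - d)/t), i.e. a system linear in (beta, gamma)."""
    alpha = 2.0 * a
    def row(t, d):
        R = 2.0 * np.exp(alpha * (alpha / 2.0 - d) / t)
        return [2.0 * R, 4.0 * np.exp(-alpha**2 / t)], R**2
    (r1, s1), (r2, s2) = row(tm, b), row(t2, c)
    beta, gamma = np.linalg.solve(np.array([r1, r2]), np.array([s1, s2]))
    return alpha, beta, gamma

# sanity check: recover the parameters of a known curve from three of its points
alpha0, beta0, gamma0 = 1.4, 0.8, 0.3
tm, t2 = 0.5, 1.0
a = alpha0 / 2.0                                    # value at t -> 0+
b, c = daniels(tm, alpha0, beta0, gamma0), daniels(t2, alpha0, beta0, gamma0)
print(fit_daniels(a, b, c, tm, t2))                 # ~ (1.4, 0.8, 0.3)
```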
where is the probability density function of the ornstein - uhlenbeck process starting at .+ figure [ ex1 ] compares the true fpt density with the empirical density histogram obtained through our algorithm using a time step discretization of 0.01 and 10 000 simulated paths .furthermore , the algorithm gives us a fpt probability estimate of over the whole interval compared to the true value of representing a relative error of about .example 2 .consider the following geometric brownian process and linear boundary by applying the lamperti transform to both the process and boundary we obtain respectively as in example 1 , this transformed diffusion process is also a gauss - markov process and , although the new frontier does not allow an explicit fpt density , using the deterministic algorithm in di nardo et al . with a 0.01 time step discretization , we can obtain a reliable approximation .+ figure [ ex1 ] compares the di nardo fpt density approximation with the empirical density histogram obtained through our algorithm using the same time step discretization with 10 000 simulated paths .in addition , the algorithm offers a fpt probability estimate of over the whole interval agreeing with the actual value of ( a relative error of about ) .example 3 .consider the modified cox - ingersoll - ross process and linear boundary by applying the lamperti transform to both the process and boundary we obtain respectively as opposed to the preceding examples , this transformed diffusion process is not gaussian however using beskos and roberts exact algorithm we can simulate exact sample paths .although an explicit fpt density is not available , using results of downes and borovkov we can , in this case , obtain the following lower and upper bounds : where is the probability density function of a standard brownian motion .+ figure [ ex1 ] compares the fpt bounds with the empirical density histogram obtained through our algorithm using a 0.01 time step discretization starting initially with 15 000 simulations and obtaining 11 768 valid paths through the exact algorithm .moreover , the algorithm suggests a fpt probability estimate of over the whole interval which lies within the values and .
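the role of the lamperti transform in example 2 can also be made concrete with a small sketch: applying F(x) = ln(x)/σ to exactly simulated geometric brownian motion paths produces a process with unit diffusion coefficient, so its increments have variance close to Δt irrespective of the level, which is what justifies the plain brownian bridge approximation between grid points. the drift and volatility below are illustrative values rather than those used in the example.

```python
import numpy as np

mu, sigma, x0 = 0.05, 0.4, 1.0          # illustrative GBM parameters
T, n_steps, n_paths = 1.0, 200, 2000
dt = T / n_steps
rng = np.random.default_rng(1)

# exact GBM paths: X_{t+dt} = X_t * exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*Z)
Z = rng.normal(size=(n_paths, n_steps))
logX = np.log(x0) + np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1)
logX = np.concatenate([np.full((n_paths, 1), np.log(x0)), logX], axis=1)

# Lamperti transform F(x) = ln(x)/sigma; a boundary S(t) is mapped to ln(S(t))/sigma
Y = logX / sigma
incr = np.diff(Y, axis=1)
print("empirical variance of the Y increments:", float(incr.var()))   # ~ dt
print("dt                                     :", dt)
```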
|
in this paper, we develop a monte carlo-based algorithm for estimating the fpt density of a time-homogeneous sde through a time-dependent frontier. we consider brownian bridges as well as localized daniels curve approximations to obtain tractable estimations of the fpt probability between successive points of a simulated path of the process. under mild assumptions, a (unique) daniels curve local approximation can easily be obtained by explicitly solving a non-linear system of equations.
imene allab and francois watier
|
while we all have a feeling of what `` complex '' means , it is notoriously hard to find quantitative measures .furthermore , there are various types of complexity .the three main examples in present - day science seem to be computational complexity , process complexity , and state complexity .computational complexity refers to the amount of resources required to perform a certain computation task , be it in terms of time , memory space or number of queries , contributing to different complexity classes in computer science .process complexity is often associated with the chaotic ( but not random ) behavior of the process , interconnectivity of many components in the process , and possibly the phenomenon of emergence .finally , state complexity , the focus of this paper , refers to the amount of information that is required to describe , generate or simulate a state of a physical system . why do we study the complexity of quantum states ? from a foundational point of view, complexity could be a parameter to test the limits of quantum mechanics .the direct extension of quantum effects ( coherent superpositions ) to daily objects might result in bizarre paradoxes , as schrdinger famously noticed .it is a current experimental trend to push the tests of quantum mechanics towards the macroscopic domain , see for example refs . . in all these experiments ,the superposition indeed involves large number of particles or excitations , nevertheless the states produced are somewhat `` simple '' : some involve the superposition of a single degree of freedom , the center of mass ; others target the ghz state , or the dicke state with few excitations , as ideal macroscopic states .the macroscopic objects of our daily experience are not only large in size , mass and number of particles , but at the same time also interconnected in a non - trivial manner : a cat , besides being large , is a complex object .do complex objects still obey quantum physics ?if yes , as most physicists would argue , can we create them in a controlled way ?these questions loomed behind the discussion on the possibility of large - scale quantum computation . in order to refine this discussion ,aaronson took a technical step and proposed the concept of tree size ( ts ) , as a measure of complexity for quantum states .this highlights that , besides exploring the limits of quantum mechanics , quantum state complexity is a way of capturing the deep relation between complexity and computation .the origin of quantum speed up might be sought in some features of entanglement .promisingly , early studies showed that states used in various quantum algorithm display multipartite entanglement ; while states with little entanglement could be efficiently simulated with classical computing .nonetheless , large entanglement is neither necessary nor sufficient condition for quantum speed up : to the contrary , having too much entanglement might be detrimental to performing computation . measures of entanglement developed with other operational meanings do not seem to capture the computational power of the state .another candidate is the phenomena of interference .previous works propose how to quantify interference with `` ibits '' and investigate how many ibits were `` actually used '' in various quantum algorithms .the different amount of actually used ibits seems to explain the different amount of speed up in shor s and grover s algorithm .the relation with success probability in algorithm with imperfections was studied in . 
in this work ,we look into the relation between complexity of quantum states and quantum speed up .it seems intuitive that , _ in order to be useful in computational tasks , a state must be complex to describe and yet be simple to prepare_. indeed , if on the one hand a state is simple to describe , it should be possible to simulate it efficiently with classical computers ; on the other hand , the preparation of the state from easily available resources is part of the overall computation process . here , we focus on the first aspect : how to quantify _ the complexity of describing a quantum state _ ? among the different measures of state complexity , quantum kolmogorov complexity is defined by length of the shortest possible program that would generate the state .this very common definition suffers from the setback that it is not computable .moreover , kolmogorov complexity captures the complexity of _ generating _ the state .the tree size ( ts ) complexity that we mentioned before and that we are about to discuss relates more to the _ description and simulation complexity _ of quantum states .the most common way to represent quantum states is the dirac notation .tree size complexity can be understood as the size of the minimal description using this notation .this article provides a concise summary of our knowledge of the tree size , as well as some new results on complex states with superpolynomial tree size , verification of complex states , and the connection between tree size complexity and the power of quantum computation .the definition of tree size complexity is given in sec ., we review the works on states of two , three and four qubits , and describe the most complex states according to this measure . moving to the case of qubits ,we first discuss a few examples of simple states with polynomial tree size . in sec . 4 ,a theorem by raz for showing superpolynomial lower bound on multilinear formula sizes , which in turn lower bounds tree size , is revisited . with this theorem at hand , we show some families of states with superpolynomial tree size : the immanant states , the deutsch - jozsa states and the subgroup states . based on numerical evidence , we construct an explicit example of a subgroup state with superpolynomial tree size .more importantly , the tree size of the 2d cluster state is shown to be superpolynomial . in sec . 5 , we describe how to verify the superpolynomial tree size of the complex subgroup states and the 2d cluster state with polynomial effort via measuring a witness . the possible relation between state complexity and quantum computation speed upis discussed in sec .finally , we offer a list of open problems and technical conjectures .just as bits to classical information , qubits are the basic building blocks of quantum information .any -qubit pure states can be written in the computational basis with at most coefficients : where each vector in is identified as a bit string in .this decomposition on a computational basis is not the most compact way of writing a pure state . in the case of two qubits ,the most economic representation is given by the schmidt decomposition .this decomposition can be iterated to deal with multi - partite states , but already for three qubits a different _ ad hoc _ decomposition is more compact .the shortest possible representation of a multiqubit state in dirac notation is given by the _minimal tree size _ introduced by aaronson in ref . 
as a measure of complexity of a pure state : any multiqubit state written in dirac s notation can be described by a rooted tree of and gates ; each leaf vertex is labelled with a single qubit state , which needs not be normalized .the three - qubit biseparable state , for instance , is represented by the tree in fig .the size of a tree is defined as the number of its leaves : thus , the size of the tree of fig . [ tb ] is five .a given state can have many different tree representations ( for instance , the biseparable state in fig .[ tb ] can also be written as whose size would be six ) .the minimal tree of a state is the tree with the smallest size that describes it ; and the tree size is the size of the minimal tree .this measure of complexity is in principle _computable _ , though we lack efficient algorithms .moreover , a relation with multilinear formulas leads to _ lower bounds _ on the tree size .it is thus possible to show that the tree size of some states is definitely superpolynomial in the number of qubits .such states can be considered genuinely complex , in the sense that they can not have a polynomial ( i.e. computationally efficient ) representation in dirac notation nor in a matrix - product representation with matrices of constant size .in contrast , if the ts scales _ polynomially _ with , the state can be considered simple as it can be described efficiently on a classical computer . before moving to the -qubit case ,let us familiarize ourselves with by looking at the states of a few qubits .one observation that is very useful for finding of few - qubit states is the fact that is invariant under invertible local operations ( ilos ) .formally , [ thm : ilo ] if , where all the single - qubit operators are invertible , then .any two states that can be transformed to each other by ilos as above are said to be equivalent under stochastic local operation and classical communication ( slocc ) .the above proposition implies that all states in a slocc equivalent class have the same .[ [ two - qubits ] ] two qubits : + + + + + + + + + + + any two - qubit state can be written in the schmidt decomposition as : where and are nonnegative real numbers satisfying , and form an orthonormal basis .the state is said to be separable if one of the coefficients or vanishes and entangled otherwise .the schmidt decomposition has size at most , hence the of any two - qubit state is at most .there are only two different rooted trees of size at most that can describe a two - qubit state , which are shown in fig .[ f2 ] . from this figurewe see that a two - qubit state has if it is entangled and if it is separable .this concludes the case for two qubits .[ [ three - qubits ] ] three qubits : + + + + + + + + + + + + + for three qubits , a useful decomposition that has a similar role as the schmidt decomposition does for two qubits is the canonical form derived by acn _ : any three - qubit state can be written as where the prime and double prime indicate different bases .the is upper bounded by the size of this decomposition and thus is at most .similarly to the case of two - qubit states , we first find all the possible trees for three qubits with at most eight leaves , and then try to see which one is the minimal tree of a given state . 
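before moving on to three qubits, the two-qubit rule above is easy to check numerically: the schmidt coefficients are the singular values of the 2x2 coefficient matrix, and the tree size is 2 for a product state and 4 for an entangled one. the helper below is a minimal sketch of this classification.

```python
import numpy as np

def schmidt_coefficients(psi):
    """psi: length-4 vector in the basis |00>, |01>, |10>, |11>."""
    M = np.asarray(psi, dtype=complex).reshape(2, 2)
    return np.linalg.svd(M, compute_uv=False)

def two_qubit_tree_size(psi, tol=1e-12):
    s = schmidt_coefficients(np.asarray(psi) / np.linalg.norm(psi))
    return 2 if s[1] < tol else 4   # a single nonzero Schmidt coefficient means a product state

product = np.kron([1, 1], [1, 1]) / 2.0              # |+>|+>
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)           # (|00> + |11>)/sqrt(2)
print(two_qubit_tree_size(product), two_qubit_tree_size(bell))   # 2 4
```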
as stated in proposition [ thm : ilo ] , since all the states in a slocc class has the same , we need only find of one state in a class to know of all the states in that class .for three qubits , it is known that the pure states can be categorized into six different classes : the product class , three biseparable classes due to permutation , the ghz class and the w class .examples of states in these classes are , respectively , so , a state is said to be in a particular slocc class , say the w class , if there exist ilos such that .an exhaustive search shows that , , , and .so the of a three - qubit state can adopt only one of these four different values depending on what slocc class the state belongs to .state ] the most complex three - qubit states are obviously the ones in the w class , whose minimal tree is drawn in fig .interestingly , this complexity class is unstable in the sense that an arbitrarily small deviation could bring the w state to a state in the ghz class . for studying how changes in the presence of fluctuation , we define a smoothed version of the tree size over a small neighbourhood of the desired state . for a positive constant ,the -approximate tree size of is the minimal tree size over all pure states such that , that is , since in practice we can not know a state with arbitrary precision , the is a more physical measure .the instability of the state could be now phrased precisely as follows : for arbitrarily small , there exists a state in the ghz class such that therefore , . [ [ four - qubits ] ] four qubits : + + + + + + + + + + + + as in the case of three qubits , slocc equivalent classes can be used to find the tree size of four - qubit states . in ref . it is shown that the maximal of four - qubit state is 16 and a set of criteria for identifying whether a given state has this maximal is given .the class of most complex four - qubit states were found to belong to a slocc class not described in previous inductive classifications .these states have the following minimal decomposition : where and are two - qubit entangled states . noting the switching of order of qubits in the second branch, this form seems to preclude a recursive construction of the most economic description in terms of tree size .forms that look recursive do require 18 leaves for some states .an example of the most complex four - qubit states with is .\end{aligned}\ ] ] in this computational basis expansion its size is , but it can be shown that its minimal decomposition has indeed the form given in eq . with size .this state has already been created in experiments with four - photon down conversion . 
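before turning to mixed states, the instability of the w class noted above can be seen directly: the state ((|0> + ε|1>)^{⊗3} − |000>)/(√3 ε), a superposition of just two product states and hence (for ε ≠ 0) a ghz-class state, approaches the w state as ε → 0, so an arbitrarily small deviation from the w state already lands in the ghz class. this is a standard construction, shown here only as a numerical illustration of the smoothed tree size discussion.

```python
import numpy as np

def kron_all(*vectors):
    out = np.array([1.0 + 0j])
    for v in vectors:
        out = np.kron(out, np.asarray(v, dtype=complex))
    return out

# the three-qubit W state
W = (kron_all([0, 1], [1, 0], [1, 0]) +
     kron_all([1, 0], [0, 1], [1, 0]) +
     kron_all([1, 0], [1, 0], [0, 1])) / np.sqrt(3)

for eps in [0.5, 0.1, 0.01]:
    v = [1.0, eps]
    # superposition of two product states: a GHZ-class state for eps != 0
    approx = (kron_all(v, v, v) - kron_all([1, 0], [1, 0], [1, 0])) / (np.sqrt(3) * eps)
    approx /= np.linalg.norm(approx)
    print(eps, abs(np.vdot(W, approx))**2)    # fidelity with the W state -> 1 as eps -> 0
```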
when fluctuation is taken into account, the maximal -tree size of four - qubit states reduces to 14 for : any state in the most complex class can be -approximated by a state with has the decomposition where and are two states in the class of three - qubit states .[ [ mixed - states ] ] mixed states : + + + + + + + + + + + + + the concept of tree size may be extended to a mixed state as follows : where the minimization is done over all the possible pure state decomposition of the mixed state .this is the same approach for extending an entanglement measure to mixed states discussed in ref .the intuition behind this definition is that the tree size of a mixed state should be at least as complex as the most complex pure state in its decomposition .according to the classification of three - qubit mixed sates introduced by acn _ , there are four different slocc classes : class , the set of states that can be written as combination of pure separable states ; class , for states that can be written as combination of separable and biseparable states ; class , for states that can be written as combination of separable , biseparable and states ; and class ghz , for states that can be written as combination of all possible three qubit states. clearly , . from the definition of for mixed states one sees that states belongs to the class have tree size 3 , tree size 5 . for states in ,the tree size is 8 if a state is required in the decomposition , otherwise it is 6 . as an examplewe look at a family of one - parameter mixed state , the so called generalized werner state , by looking at what slocc class belongs to for different values of , it is shown that ( ) when , ( ) when . with an obvious decomposition into and product states , for , even though when and when , where .we now move to the case of -qubit states and consider how scales as . a state ( more precisely , a family of states indexed by ) is simple if its scales polynomially with the number of qubits . for showing that a state is simple , it suffices to find an explicit decomposition with polynomial size .some examples of simple states are given in table [ tab : simple ] ..summary of -qubit simple states [ cols="^,^ " , ] the product state is the simplest state in terms of tree size .it is usually regarded as the input for the circuit model of quantum computation .obviously , which is the minimal for -qubit states .the -qubit ghz state , , which saturates most of the macroscopicity measures , has , which is linear in the number of qubits .this is a clear evidence that complexity is a different notion from macroscopicity. a maximally macroscopic state can yet be very simple .the dicke states represents the equal superposition of -qubit string with excitations ; formally it is the ( unnormailized ) uniform superposition of all the -bit strings with hamming weight : where the summation is over , all the distinct element subset of .we show that has tree size . to see this, one can consider the uniform superposition of the following fourier form ( omitting normalization ) : with tree size .a direct expansion yields .when , for some integer , ; when is not a multiple of , .hence , .for , can be only 0 or 1 , thus . for , can be 0 , 1 and 2 ; thus . for , by interchanging 0 and 1 we obtain the case .so for any , . 
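the fourier-type decomposition used above for the dicke states can be checked directly for small n. the sketch below uses the common variant with ω = e^{2πi/(n+1)}, for which the sum over m of ω^{−mk} (|0> + ω^{m}|1>)^{⊗n} is proportional to the weight-k dicke state, a decomposition with O(n²) leaves; the precise placement of the phases in the paper's version may differ slightly.

```python
import numpy as np
from itertools import product

def dicke(n, k):
    """Unnormalized Dicke state: uniform superposition of n-bit strings of Hamming weight k."""
    psi = np.zeros(2**n, dtype=complex)
    for idx, bits in enumerate(product([0, 1], repeat=n)):
        if sum(bits) == k:
            psi[idx] = 1.0
    return psi

def dicke_fourier(n, k):
    """Sum_m w^(-m k) (|0> + w^m |1>)^{tensor n} with w = exp(2 pi i / (n+1))."""
    w = np.exp(2j * np.pi / (n + 1))
    psi = np.zeros(2**n, dtype=complex)
    for m in range(n + 1):
        term = np.array([1.0 + 0j])
        for _ in range(n):
            term = np.kron(term, np.array([1.0, w**m]))
        psi += w**(-m * k) * term
    return psi / (n + 1)

n, k = 6, 2
print(np.allclose(dicke(n, k), dicke_fourier(n, k)))   # True
```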
the -qubit w state , which is , though representing the most complex class for the three - qubit case , has polynomial tree size .finally , matrix product states ( mps ) are a well - studied family because they approximate well the ground state of one - dimensional gapped hamiltonians .the tree size of an mps is related to the bond dimension .a recursive argument provides the upper bound for the of an mps : consider the following form of an mps : where and are matrices of dimension at most . by partitioning the qubits into two halves , we have where now is an mps of qubits .we can see that . by repeating this partitioning, we have .thus , if is bounded as increases , then ts is polynomial .one example of mps , the 1d cluster state , has , hence its tree size is . note that the same recursive argument applied to a more general form of tensor network states , the projected entangled pair states ( peps ) , gives the superpolynomial upper bound and for 2d and 3d peps respectively ; and indeed , as we are going to discuss later in sec .[ sec : computation ] , peps that are universal for measurement - based quantum computation ( mbqc ) should have superpolynomial tree size , assuming that factoring is not in .one of the advantages of tree size as a complexity measure is that there are tools for proving lower bound on tree size , hence certifying complex states .one way is to use _ counting argument _ as aaronson did in theorem 7 of .the fact that there are fewer states that has polynomial tree size than there are in the whole hilbert space ( or the state space of interest ) , some states are bound to have superpolynomial or even exponential tree size . another method , which will be discussed more often in this paper , is _ a theorem first proved by raz _ in the context of multilinear formula size ( mfs ) . although counting arguments could show that states with superpolynomial tree size must exist , but raz s theorem allows us to construct explicit examples .let us present here this important theorem , first in raz s original formulation , then in an equivalent way in terms of schmidt rank .a multilinear formula is a formula that is linear in all of its inputs .the mfs of a multilinear formula is defined as the number of leaves in its _ minimal tree representation _ similar to the tree size of a quantum state .consider a multilinear formula , let be a bipartition of the input variables into two sets , and .we now view as a function .then denote by the matrix whose rows and columns are labeled by and , respectively .the entry of this matrix is defined as .finally , let be the rank of over the complex numbers , and be the uniform distribution over all the possible bipartitions .we now state raz s theorem : if [ thm : raz ] = n^{-o(\log n ) } , \end{aligned}\ ] ] then . for any quantum state , we can define the associated multilinear formula .note that this formula computes the coefficients in the computational basis expansion of . given a tree representation of a quantum state, a tree for the associated multilinear formula can be obtained by interchanging and .thus , given the minimal tree of quantum state , we can obtain a tree for the associated multilinear formula with the same size .the true mfs of the formula can only be smaller , therefore : [ thm : tsmfs] .therefore , if satisfies raz s theorem , then . in fact , in the original paper of aaronson , he showed that the other way of the inequality is also true up to , .but for the purpose of the rest of this article , is sufficient . 
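returning briefly to matrix product states: the recursive bound above ultimately rests on the fact that, for an open-boundary mps, any contiguous left/right cut has schmidt rank at most the bond dimension. a quick numerical check on a random mps:

```python
import numpy as np

def random_mps_state(n, chi, d=2, seed=0):
    """Contract a random open-boundary MPS with bond dimension chi into a full state vector."""
    rng = np.random.default_rng(seed)
    tensors = [rng.normal(size=(1, d, chi))]
    tensors += [rng.normal(size=(chi, d, chi)) for _ in range(n - 2)]
    tensors += [rng.normal(size=(chi, d, 1))]
    psi = tensors[0]
    for A in tensors[1:]:
        psi = np.tensordot(psi, A, axes=([psi.ndim - 1], [0]))   # contract the shared bond
    psi = psi.reshape(-1)
    return psi / np.linalg.norm(psi)

n, chi = 8, 3
psi = random_mps_state(n, chi)
for cut in range(1, n):                                   # every contiguous left/right cut
    rank = np.linalg.matrix_rank(psi.reshape(2**cut, -1), tol=1e-10)
    print(cut, rank)                                      # never exceeds chi
```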
for its application in complexity of quantum states , we rephrase raz s theorem in terms of the schmidt rank , a well - known concept in quantum information : for a pure quantum state of qubits , consider all the uniformly distributed bipartitions , if = n^{-o(\log n ) } , \end{aligned}\ ] ] where is the schmidt rank of a particular partition , then .the statement follows indeed from raz s theorem , because partitioning of the input of the associated multilinear formula is the same as partitioning the qubits of the state .note that is a matrix each of whose element is a coefficient of the state in its computational basis .thus , the rank of is exactly the schmidt rank of the state for the bipartition . interestingly , from the point of view of complex systems and statistical physics , multipartite entanglement were found to be related to the average entanglement across equal bipartitions . instead of the schmidt rank ,the distribution of purity of the partial states over all equal bipartition was studied in those works .this suggests a possible deeper link between tree size complexity and multipartite entanglement . with the help of these theorems, we shall identify some explicit multiqubit states with superpolynomial tree size .an explicit family of states with superpolynomial tree size can be constructed based on the immanant of a ( 0,1 ) matrix .consider the case when the number of qubit is a square number , , for each bit string we arrange the bits row by row to an matrix such that so .the immanant states are defined in its computational basis expansion as here the immanant of a matrix is given by where is an element of the symmetric group of all the permutation of , and is the corresponding complex coefficient .when for all the immanant reduces to the permanent , and when for even permutations and for odd permutations it reduces to the determinant .it is proved in ref . 
that the immanant states as defined above have if the coefficients are all nonzero .the proof relies on raz s technique to show that the multilinear formula size of the immanant with nonzero coefficients is superpolynomial , and this theorem follows immediately from theorem [ thm : tsmfs ] .the permanent and the determinant states , are two examples in this family of complex states .the smallest known formula for computing permanent is the ryser s formula , which is multilinear : let be one of the subsets of and the number of its elements , then the permanent of the matrix is by substituting this formula to the permanent state and carrying out the summation over we obtain a decomposition with size .we conjecture that the tree size of the permanent state is , see sec .[ sec : exponential ] for detailed discussion .a common confusion sometimes arises : why do we treat the permanent state and the determinant state on the same footing while determinant is known to be much easier to compute than the permanent : in fact , there exists a formula that computes the determinant of a matrix with size .this does not contradict with raz s result of , since the optimal algorithm does not use a multilinear formula ; and only multilinear formulas can be used to find an upper bound on the of the corresponding state .the deutsch - jozsa algorithm outperforms its classical counterparts in the deterministic case .it is an algorithm that solves the following hypothetical question : a function is called _ balanced _ if exactly half of its input is mapped to 0 and the other half to 1 , and _ constant _ if all the inputs are mapped to 0 or 1 .given the promise that the function is either balanced or constant , how many queries do we need to find out whether the function is balanced or constant ? classically , in the deterministic and worse case scenario , it requires queries , in which case the function outputs all 0 or all 1 for the first queries .the deutsch - jozsa algorithm solves the quantum version of this problem with only one query , which is exponentially faster than the classical algorithm . in the quantum version ,a query is replaced by the quantum oracle . in this algorithm ,one first prepares the input state as , then applies the hadamard transformation to all the registers , resulting in . after applying the oracle ,the state becomes .since is either 0 or 1 , we can simplify this to , where the last qubit register can be left out at this point . applying the hadamard transformation to all the qubits once again, we have , where represents the sum of bitwise product . finally , a projection onto has probability , which evaluates to 1 if is constant and 0 if is balanced .this concludes the algorithm , now we switch the focus to the tree size of the state . if is constant , then the state is a simple product state with tree size .if is balanced , we would like to show that an overwhelmingly large fraction of balanced functions correspond to states with superpolynomial tree size . consider a function randomly drawn from the uniform distribution of all the balanced functions , let be a random equal bipartition of the input into and , then the matrix whose entries are .note that for a balanced , the matrix has exactly half entries equal to and half equal to . 
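stepping back to the permanent state discussed above: ryser's formula is itself a short piece of code, and checking it against the brute-force definition for a small (0,1) matrix makes the exponential size of the resulting multilinear expression tangible. this is only a sketch of the formula, not of any tree-size bound.

```python
import numpy as np
from itertools import permutations, combinations

def permanent_bruteforce(A):
    n = A.shape[0]
    return sum(np.prod([A[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

def permanent_ryser(A):
    """Ryser's inclusion-exclusion formula: perm(A) = sum_S (-1)^(n-|S|) prod_i sum_{j in S} a_ij."""
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            row_sums = A[:, list(S)].sum(axis=1)
            total += (-1) ** (n - r) * np.prod(row_sums)
    return total

rng = np.random.default_rng(2)
A = rng.integers(0, 2, size=(6, 6)).astype(float)     # a random (0,1) matrix
print(permanent_bruteforce(A), permanent_ryser(A))    # the two values agree
```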
let be the event that has full rank , we need to compute the probability that happens , in order to see whether the balanced function leads to a state with superpolynomial ( c.f .theorem [ thm : raz ] ) .let us call a matrix with exactly half entries equal to and the other half a _ balanced ( 1,-1 ) matrix_. denote by a random balanced ( 1,-1 ) matrix , can be chosen by first drawing a random balanced function , then picking a random bipartition and assigning .now let be the event that has full rank , we have next , we split the set of into those which give rise to a complex state ( i.e. satisfy raz s theorem ) and those which do not .explicitly , let , where is a constant to be specified later , and be the complement of , then since for all and for all , we have note that the sums of the probability that is chosen from and give the fraction of states in the respective sets , that is , and . substituting this to the above inequality , we arrive at thus , to know how large is we need to know , the probability that is invertible where is a random balanced ( 1,-1 ) matrix .our numerical evidence shows that approaches 1 quickly as becomes large ( see figure [ fig : probe2 ] ) . if one believes that for large , which is strongly suggested by the numerical evidence , then by setting to a constant not close to 1 , say 0.5 , we see that .this means that nearly all balanced functions give rise to states with superpolynomial tree size .one may argue that the large tree size that arises from the deutsch - jozsa algorithm has its root in the oracle s access to completely - random balanced function .the link between large tree size and the usefulness of the algorithm is unclear .nonetheless , this provides us with an example of complex states that appear in a quantum algorithm .more on the relation between state complexity and quantum computation will be discussed in sec .[ sec : computation ] .the probability of the event versus the number of input bits . is the event that a random balanced ( 1,-1 ) matrix has full rank . ]shor s algorithm factors an integer in time , which is exponentially faster than the most efficient known classical algorithm .do states arising from this algorithm have superpolynomial tree size ?aaronson showed the answer is yes assuming a number - theoretic conjecture . to factorize , pick a pseudo random integer , coprime to ,consider the shor s state of qubits , which is given by to simplify the proof of the lower bound on the of the shor s state , it is convenient to measure the second register . since a measurement in the computational basis does not increase tree size for any outcome of the measurement , we can assume that the measurement outcome to be .then , the state of the first register has the form where is the order of modulo and . here is represented in binary with bits , so is a -qubit state . provides a lower bound for the tree size of the state of the two registers given in .the associated formula for this state is a function of a -bit string such that if and otherwise . lower bounds , so we shall focus on this formula .aaronson showed assuming the following number - theoretic conjecture : there exist constants and a prime for which the following holds .let the set consists of elements of chosen uniformly randomly .let consists of all sums of subsets of , and let . then = n^{-o(\log n)}.\end{aligned}\ ] ]subgroup states used in quantum error correction also exhibit superpolynomial tree size .let the element of be labeled by -bit strings . 
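the kind of numerical evidence summarized in fig. [fig:probe2] is straightforward to reproduce: draw random balanced (1,-1) matrices and record how often they have full rank over the complex numbers. the sketch below does this for small matrix sizes (the relevant side length is 2^{n/2}, so only modest n are reachable this way).

```python
import numpy as np

def prob_full_rank(dim, trials=500, seed=3):
    """Fraction of random balanced (1,-1) dim x dim matrices that have full rank."""
    rng = np.random.default_rng(seed)
    half = dim * dim // 2
    hits = 0
    for _ in range(trials):
        entries = np.array([1.0] * half + [-1.0] * (dim * dim - half))
        rng.shuffle(entries)
        if np.linalg.matrix_rank(entries.reshape(dim, dim)) == dim:
            hits += 1
    return hits / trials

for n_half in [2, 3, 4, 5]:          # matrix side 2^{n/2} for n = 4, 6, 8, 10 input bits
    dim = 2 ** n_half
    print(dim, prob_full_rank(dim))  # grows towards 1 with the matrix size
```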
given a subgroup ,a subgroup state is defined as one way to construct a subgroup state is by considering the subgroup to be the null space of a matrix over the field .given a binary matrix , a bit string is in the null space of if and the subgroup state is the equal superposition of all such bit strings .aaronson shows in ref . that , if is drawn from the set of all possible binary matrices , then at least of these matrices give rise to subgroup states with superpolynomial .let us describe briefly how to prove that a subgroup state has superpolynomial tree size .consider a random equal bipartition of into and .denote by the submatrix of the columns in that applies to ( see eq . ) , and similarly the submatrix of the columns that applies to .then , the element of the partial derivative matrix is when , which means , and otherwise .so long as both and are invertible , for each there is only one unique value of that gives . in other words, is a permutation of the identity matrix , hence it has full rank .based on this observation , one sees that [ thm : epsilon ] let be a binary matrix and over the field . for random equal bipartitions of into and as described above , if and are both invertible with probability , then .moreover , with , where .the first part follows from the fact that both and being invertible implies that has full rank .if this happens with probability , then raz s theorem is satisfied , hence . for the second part, we use a lemma proved by aaronson in ref . : denote by a state close to a complex state that satisfies theorem [ thm : epsilon ] , such that .then , for a fraction of of all equal bipartitions , the rank of the partial derivative matrix is in order to satisfy raz s theorem , we want .a comparison with the above equation gives where .therefore , if .since is exponentially small in , one might think that most states in the hilbert space satisfy , and hence theorem [ thm : epsilon ] can be used to show that most states have superpolynomial .this is not correct : indeed , if is randomly and uniformly chosen from the hilbert space according to a haar measure , the probability that is smaller than $ ] , which is exponentially small .however , it is true that most states in the hilbert space have _ exponential _tree size , as showed by a counting argument in ref . .aaronson first showed an explicit construction by vandermonde matrix that leads to a superpolynomial complex subgroup state . herewe present a different construction of the matrix , for which strong numerical evidence suggests that the corresponding subgroup state has superpolynomial . consider the matrix , where is the identity matrix and a binary jacobsthal matrix , both of size .jacobsthal matrices are used in the paley construction of hadamard matrices .the binary version is defined as follows : for a prime number , one can define the quadratic character that indicates whether the finite field element is a perfect square .we have if for some non - zero element ; and otherwise. then is equal to .we study the partitioning of into and randomly .numerical evidence ( see fig . [fig : jacobsthal ] ) shows that when _ is a prime and with _, then , and are both invertible with a probability approaching to a constant around . from theorem [ thm : epsilon ]we see that the subgroup state defined by has where . the probability of both and being invertible over random equal bipartitions of . is the matrix , where is the jacobsthal matrix of size , where and is a prime . 
for large , the probability approaches a constant around . ] it is known that measurement-based quantum computation (mbqc) on the 2d cluster state is as strong as the circuit model of quantum computation. in this scheme of computation, after the initial resource state is prepared, one only performs single-qubit projective measurements and feeds forward the outcomes. the power of the computation seems to lie in the initial resource state. therefore, an initial state that is universal for quantum computation, such as the 2d cluster state, should be highly complex. it is conjectured in ref. that the 2d cluster state has superpolynomial ts. by studying the generation of a complex subgroup state via mbqc on the 2d cluster state, we can prove that this conjecture is true: the 2d cluster state of qubits has . suppose we aim to produce an -qubit complex subgroup state (as described in sec. [complexsubgroup]) that has tree size . these states are known to be stabilizer states. aaronson and gottesman showed that any -qubit stabilizer state can be prepared using a stabilizer circuit with gates. a stabilizer circuit is one that consists only of cnots, -phase gates and hadamard gates. in the mbqc scheme, each of these gates can be implemented by measuring a constant number of qubits: 15 qubits for a cnot, and 5 qubits for the phase gate and the hadamard gate. in order to obtain a -qubit complex subgroup state, one needs to prepare a -by- lattice (see fig. [fig:mbqc]), so the number of qubits in the 2d cluster state is . since single-qubit projective measurements only decrease tree size (cf. the proof of theorem [thm:mbqc]), we have so the -qubit 2d cluster state has superpolynomial tree size. a schematic diagram of measurement-based quantum computation: starting from a 2d cluster state, single-qubit measurements are performed; represents a measurement, and the other arrows represent measurements in the plane. the logical input state enters from the left and propagates to the right. single-qubit rotations and controlled gates are realized by a certain sequence of adaptive measurements, from left to right. implementing a circuit on qubits with gates requires a cluster state of size -by-.
]we can also show that the -tree size of the 2d cluster state is also superpolynomial : for , .assume that we have prepared a state close to the 2d cluster state , , such that the fidelity .then we apply the same measurement sequence to the erroneous 2d cluster state as if we would to the ideal 2d cluster for preparing a complex subgroup state .consider the state after one of the single - qubit measurement in the orthonormal basis ; the single - qubit projectors are and .we now show that one of these outcomes will increase the fidelity between the two cases .if the measurement outcome is not observed , the resulting states on the ideal and -deviated 2d cluster states are : where and are the states of the remaining qubits in the cluster ; and and are the probability of the measurement outcomes .clearly , the above map is completely positive and trace - preserving ( cptp ) .thus , the fidelity of these two states should not decrease due to monotonicity of fidelity under cptp maps , with a bit of algebra , we can express in terms of the fidelity of the post - selected states for the same outcome : let us denote to be the larger overlap between the two , then since .combining eq ., we have therefore , for at least one of the outcomes , we have a non - decreasing fidelity on the unmeasured parts of the states . for every measurements we post - select on the outcome that do not decrease the fidelity . note that the complex subgroup states can be realized by a clifford circuit , which can be implemented by a series of non - adaptive measurements .this means that , regardless of the outcome , the state obtained from the ideal 2d cluster is a complex subgroup state upto local pauli operators . for the erroneous 2d cluster state ,we would obtain a state such that . from theorem [ thm : epsilon ], we see that when is large enough , , and hence , if . thus , for .in this section we address the problem of verifying the large of complex states .suppose one wants to create complex states such as the complex subgroup states and the 2d cluster state in the lab , in reality the produced states are at some distance away from the target states due to experimental imperfection .how do we verify that the produced state is superpolynomially complex ?full state tomography requires exponentially many operations and is hence not practical .nonetheless , _ for complex states that are stabilizer states , there exists a complexity witness that can be measured with only a polynomial number of basic operations_. this witness can be used for verifying the superpolynomial of _ pure states_. proving and verifying superpolynomial of _ mixed states _ remains an open problem .the subgroups states described in sec . [ complexsubgroup ] belong to the class of stabilizer states . a -qubit stabilizer state is uniquely defined by mutually commutative stabilizing operators in the pauli group , , satisfying the eigenvalue equation : the generators of the subgroups states can be read off from the corresponding matrix . let ; then there are linearly independent rows in . for the first generators , one simply replace 0 by and 1 by for each of the first linear independent row . 
for example , if row is , we write , where the position of the operators denotes the qubit on which they operate on .the remaining generators can be found from the linearly independent vectors that span the null space of .one replaces 0 with and 1 with for each vector , and the generator is the ordered product of these operators .the operators defined above are the generators of the stabilizer of .recall that is the uniform superposition of where is a vector in the null space of .for the first generators , we have , for all , hence for . for the generators obtained from the linearly independent vectors in the null space of , we have , where is the bitwise addition modulo 2 .note that is in the null space of , so , and hence .this shows that the stabilize . for the commutation relation ,it is obvious that the first generators commutes with each other and so do the obtained from the null space .it remains to show that from row commutes with from . implies the number of positions where the entries of both and are 1 must be even .the single - qubit operators in at these positions are ; and since there are an even number of these pairs we see that .now we show how to construct a complexity witness based on the complex subgroup states .consider a state that satisfies theorem [ thm : epsilon ] .for large , any -qubit state such that must have .the superpolynomial of these states can be verified by measuring the witness a negative value of implies that the overlap of the produced state and is larger than , and hence the of the produced state is superpolynomial .however , as such is not measurable in practice , under the natural constraint that only local measurements are feasible .if one decomposes into a sum of locally measurable operators , the number of such measurements increases exponentially with the number of qubits . nonetheless , when is a stabilizer state , it is possible to construct a _ stabilizer witness _ with the following properties : if then ; and can be decomposed into a sum of a linear number of operators in the pauli group , which in turn can be measured by a _number of basic operations .the stabilizer witness is defined as : to show that implies , one considers all the eigenvalue equations of the form but with possible eigenvalues .this defines the set of common eigenstates of the generators . since all the generators are hermitian operators , the common eigenstates are mutually orthogonal and form a complete basis .one can verify that , in this basis , the operator is a diagonal matrix with non - negative diagonal entries .thus , is a positive semi - definite operator ; so implies .if in an experiment the expectation value of the stabilizer witness is found to be negative , then one can certify that the produced state indeed has . while the witness detects all complex states with a fidelity ( with respect to ) larger than , detects a smaller set .it is necessary to know how close to a state needs to be for to be negative .if the required fidelity is exponentially close to 1 then no state would be detected by in practice .for this purpose , we first expand as where is a state orthogonal to and .we have since is a positive semi - definite matrix , .therefore , thus , when the overlap .so , the loss of fidelity must be smaller than for a state to be detected by .one needs to measure all the generators to estimate . 
with the help of an ancilla qubit ,all the generators , each with _ two _ possible outcomes , can be measured by applying a circuit of size followed by a measurement on the ancilla qubits ( see fig . [ measure ] ) .these measurements need to be repeated to obtain the desired accuracy .suppose the produced state has a fidelity with is a constant , we have . if the random error in each is then .thus , to be confident that one needs , or , which is achievable with a polynomial number of repetitions .therefore , a correct negative expectation value of can be obtained with polynomial effort .s ( only two are shown here ) .the controlled- gate can be decomposed into at most two - qubit controlled - pauli gates , and there are generators to be measured .projective measurements of the ancilla qubits in the computational basis give the outcome of . ]there is a similar stabilizer witness for detecting complex states close to the 2d cluster states .indeed , the 2d cluster state has and is also a stabilizer state .thus , the witness for the 2d cluster state has the same form as , with the replaced by the generators of the 2d cluster state .these generators are described in ref .one of the main motivation of this study is to investigate the relation between state complexity and quantum computation . to elaborate on this, we can divide all the quantum states into four categories according to their preparation complexity and state complexity ( see fig .[ fig : useful ] ) .the set of states with large preparation complexity but small state complexity is presumably empty because preparing simple states should not be too difficult .the states with small state complexity are not useful for quantum computation because they are too simple and hence a classical computer can simulate them efficiently .the states with large preparation complexity are not useful either because quantum computation with these states requires too much resource in space and time .states that are useful for quantum computation should be the ones that have large state complexity yet small preparation complexity .if tree size is a good measure of state complexity , then we might ask : is superpolynomial tree size a necessary condition for the state to provide advantage in some computational task ? in this section , we are going to discuss this link in the framework of measurement - based quantum computation and the circuit model of quantum computation .note that the complex subgroup states presented in sec .[ complexsubgroup ] belongs to the class of stabilizer states .they have superpolynomial tree size and can be realized by a quantum circuits consist of number of gates .therefore , these states belong to the bottom left corner of fig . [fig : useful ] .but they are not useful for quantum computation since stabilizer circuit can be simulated efficiently on a classical computer . dividing all quantum states into four categories according to their state complexity and preparation complexity .one out of the categories is presumably empty , two are not useful for quantum computation . the states that are useful for quantum computation should have large state complexity and small preparation complexity . ]there are several theoretical models of quantum computation , including the circuit model and the mbqc model . for the circuit model, the input state can always be the simple product state .the quantum power of the computation lies in the gates applied for coherently manipulating single qubits and entangling different qubits . 
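the generator construction described above (Z-type operators from the rows of the matrix, X-type operators from a basis of its null space over gf(2)) can be checked directly on a small example. the matrix A below is an arbitrary illustrative choice, not taken from the paper, and the check confirms that the resulting operators mutually commute and stabilize the corresponding subgroup state.

```python
import numpy as np
from itertools import product

def gf2_null_space(A):
    """Basis of the null space of A over GF(2), via Gaussian elimination."""
    A = A.copy() % 2
    m, n = A.shape
    pivots, row = [], 0
    for col in range(n):
        pivot_rows = [r for r in range(row, m) if A[r, col]]
        if not pivot_rows:
            continue
        A[[row, pivot_rows[0]]] = A[[pivot_rows[0], row]]
        for r in range(m):
            if r != row and A[r, col]:
                A[r] ^= A[row]
        pivots.append(col)
        row += 1
    basis = []
    for free in [c for c in range(n) if c not in pivots]:
        v = np.zeros(n, dtype=np.int64)
        v[free] = 1
        for r, p in enumerate(pivots):
            v[p] = A[r, free]
        basis.append(v)
    return basis

Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def pauli_string(kind, mask):
    op = np.array([[1.0 + 0j]])
    for b in mask:
        op = np.kron(op, (Z if kind == 'Z' else X) if b else I2)
    return op

A = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0]])        # an arbitrary small example matrix
n = A.shape[1]

# subgroup state: uniform superposition over the null space of A
psi = np.zeros(2**n, dtype=complex)
for bits in product([0, 1], repeat=n):
    if not np.any(A.dot(bits) % 2):
        psi[int("".join(map(str, bits)), 2)] = 1.0
psi /= np.linalg.norm(psi)

generators = [pauli_string('Z', row) for row in A]
generators += [pauli_string('X', v) for v in gf2_null_space(A)]

print(all(np.allclose(g @ psi, psi) for g in generators))                      # stabilization
print(all(np.allclose(g @ h, h @ g) for g in generators for h in generators))  # commutation
```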
on the contrary , for mbqc ,after the initial resource state is prepared , we perform projective measurements on single qubits and feedforward the results for choosing the basis of the next round of measurement . loosely speaking ,all the quantum advantage is contained in the resource state .if this resource state is simple , then mbqc will not offer any real speed up over classical computation . to make this intuition more rigorous, we proved that : [ thm : mbqc]if the resource state has , then mbqc can be simulated efficiently with classical computation .consider the resource state in its minimal tree representation , one sees that at the lowest layer there are a polynomial number of leaves . we will show that it requires polynomial effort to update the tree given a measurement outcome : assume we measure the qubit in the basis and obtain the the result , then for every leaf containing qubit , say , we update it to .this requires evaluation of the inner products for a polynomial number of leaves .the size of the tree after updating can only get smaller and thus is still polynomial .so , both the tree representation of the state at each step of the computation and the update of the state after a measurement can be carried out with polynomial effort .it follows that mbqc on resource states with polynomial can be simulated on a classical computer with polynomial overhead . for the circuit model , rather than checking for each algorithm, one would like to have a general proof that small tree size does not provide any computational advantage . in , aaronson raised the question of whether .this remains an open conjecture , here we prove a weaker version of it .first let us define what is .bounded - error quantum polynomial - time ( ) is the class of decision problems solvable with a quantum turing machine , with at most probability of error .is essentially with the restriction that at each step of the computation , the state is exponentially close to a state with polynomial tree size . in other words ,the of the state is polynomial with ( see eqn . ) . since we impose more restrictions , clearly .we can also simulate , the classical counterpart of , in : one simply implements reversible classical computation , applies a hadamard gate on a single qubit and measures in its computational basis to generate random bits if needed . since each classical bit string can be represented by a quantum product state , is at every steps , so this simulation is in .thus , we have : . if , then large tree size is a necessary condition for quantum computers to outperform classical ones .unfortunately , we can only prove a weaker version of this .for this purpose , we first show a proposition that relates tree size and schmidt rank .note that one can draw a rooted tree in a binary form ( each gate has only two children ) without changing the number of leaves ( its size ) .next , for any gate we denote as the set of qubits in the state described by the subtree with as the root .let be a bipartition of the qubits into two sets and .a gate is called _ separating with respect to _ when at least one of its children has the property or .a gate is called _ strictly separating _ if its children satisfy and . then , [ thm : separating]for a bipartition of the qubits into and , if there exists a polynomial sized tree such that all the gates are separating with respect to , then the schmidt rank of the state with respect to the bipartition is polynomial . identify all the strictly separating gates in the binary tree . 
since the number of leaves is polynomial and the total number of gates in the binary tree is , the number of strictly separating gates , , is also polynomial . it is clearer to look at a representative example in fig . [ fig : treebqp ] . focus on the gate that joins two such gates , and . since this gate contains qubits in both and , and the gate at the top is separating , the qubits under the sibling of the gate must be contained strictly in either or . without loss of generality , let them be contained in and denote their state as . we can exchange the gate and the gate at the top so that the state becomes . now let us relabel as and as ; the state can be written as and . the same process can be applied upward until these gates join at the root . in the final form of the tree , one sees that the state has a form similar to the schmidt decomposition : where contain qubits in and qubits in . the number of terms in this schmidt - like decomposition upper bounds the true schmidt rank , hence the schmidt rank is polynomial . now suppose that at every step of the quantum computation , proposition [ thm : separating ] is satisfied for all bipartitions ; then the schmidt rank is polynomial for all bipartitions . it follows from a theorem by vidal that the computation can be efficiently simulated with classical computers . [ figure : a gate that joins two gates with the following property : one of its children is contained in the set of qubits and the other in . the sibling of such a gate must be strictly contained in either or , for it to be separating . now we can exchange the order of and by distributing to and . this gate has the same property as before , and this process can be repeated upward until it reaches the root , transforming the tree into a form similar to the schmidt decomposition . ] there are states that do not satisfy the condition of proposition [ thm : separating ] ; one example is the optimal tree of the most complex four qubit states ( see eqn . ) . there are also states with polynomial that do not satisfy vidal 's criterion , hence do not satisfy proposition [ thm : separating ] for some bipartitions . for example , the state has polynomial , but there is a bipartition for which the schmidt rank is . even though some properties of tree size have been studied , there are still many open problems that remain to be addressed . here we list a few of the most interesting . is ? if this is true , then the role of tree size in quantum computation is clear : polynomial means efficient classical simulation , and hence large is a necessary condition for quantum speed up . is large tree size a resource for any particular task in quantum information ? given an explicit family of states , can raz 's theorem be modified to provide an exponential lower bound , instead of ? is there an algorithm ( other than exhaustive search ) to find the optimal tree given a quantum state ? how can one prove and verify the superpolynomial tree size of mixed states ? has any family of states with superpolynomial tree size been produced in experiments ? as mentioned in sec . 3 , the ground state of a 1d gapped hamiltonian , which can be described by an mps , has polynomial . is there a physically reasonable 1d two - local hamiltonian whose ground state has superpolynomial ? a ground state at phase transition no longer obeys the area law because of high entanglement . at this point the state is not an mps , so one can expect that its tree size is large .
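as a small numerical companion to the schmidt - rank discussion above , the sketch below computes the schmidt rank of an n - qubit pure state across a given bipartition by reshaping the state vector and counting its nonzero singular values . it is a generic illustration ( using numpy and an arbitrary qubit - ordering convention ) , not the construction used in the paper .

```python
import numpy as np

def schmidt_rank(state, part_a, n, tol=1e-10):
    """schmidt rank of an n-qubit pure state across the bipartition (part_a, rest)."""
    psi = np.asarray(state).reshape([2] * n)
    a = sorted(part_a)
    b = [q for q in range(n) if q not in a]
    # move the qubits of part_a to the front, then flatten to a (2^|a|, 2^|b|) matrix
    m = np.transpose(psi, a + b).reshape(2 ** len(a), 2 ** len(b))
    s = np.linalg.svd(m, compute_uv=False)
    return int(np.sum(s > tol))

n = 6
ghz = np.zeros(2 ** n)                     # (|0...0> + |1...1>)/sqrt(2): small tree size
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
rng = np.random.default_rng(0)
rand = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
rand /= np.linalg.norm(rand)               # a generic random state for comparison
for k in range(1, n // 2 + 1):
    cut = list(range(k))                   # bipartition: first k qubits vs the rest
    print(k, schmidt_rank(ghz, cut, n), schmidt_rank(rand, cut, n))
```

the ghz - like state keeps schmidt rank 2 across every cut , in line with its small tree size , whereas the random state saturates the maximal rank ; a polynomial bound on such ranks for every bipartition is exactly what vidal 's criterion needs for an efficient classical simulation .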
below are a few more technical open problems : tree size for qubit states : one observation we made for the tree size of a few qubits is that the most complex state has for . is this a pure coincidence , or is this generally true for any ? stable tree size : for the three and four qubit cases , the most complex states are unstable . infinitesimal perturbations in suitable directions in the hilbert space could reduce their tree size to that of the second most complex class , hence the maximal value of the stable tree size is different from the maximal tree size . is the maximal stable tree size always smaller than the maximal tree size ? is it always equal to the second largest tree size ? exponential tree size : the tree sizes of the permanent state and the 2d cluster state are shown to be superpolynomial . we conjecture that they in fact have exponential tree size , and with some respectively . there is strong evidence to believe these two states have exponential tree size : for the permanent state it is known that computing the permanent of a matrix is -hard , and a subexponential tree size for a permanent state would imply a subexponential formula to compute the permanent , contradicting the exponential time hypothesis ( or a variant of it ) ; for the 2d cluster state , if we assume the contrary that it has subexponential tree size , by theorem [ thm : mbqc ] we can simulate the polynomial - time quantum factoring protocol with subexponential effort . this contradicts the belief that quantum computing offers an exponential speed up compared with classical computing : shor 's algorithm takes polynomial time while the best known classical algorithm takes about . since 2d cluster states are 2d peps with a bond dimension , the tree size is thus upper bounded by . in this paper , we revisited the complexity measure , called tree size , for pure -qubit states . for few qubits , the state with the largest tree size is identified for qubits . for 4 qubits , the most complex state admits an optimal tree that is not recursive . the generalization of tree size to incorporate small fluctuations and to mixed states is also discussed . raz 's theorem on the superpolynomial lower bound of multilinear formula size can be utilized to show that some multiqubit states have superpolynomial tree size . examples of such complex states , the immanant states and the subgroup states , are described . moreover , the conjecture that the 2d cluster state has superpolynomial is proved . we also show how to verify the superpolynomial tree size of stabilizer states , such as the complex subgroup states and the 2d cluster state , with polynomial effort by measuring a stabilizer witness . the relation between tree size and quantum computation is discussed . in measurement - based quantum computation , if the initial resource state has polynomial tree size , then the computation can be simulated on a classical computer with polynomial overhead . for the circuit model of quantum computation , we show that most of the states arising in the deutsch - jozsa algorithm have large tree size .
a similar result for shor s algorithmis also reviewed , although a number - theoretic conjecture need to be made in this case .finally , we present a proof for a weaker version of the conjecture , which says that if the tree size of the quantum state is polynomial through out the computation and obeys some extra conditions , then the computation can be simulated efficiently . in conclusion ,tree size of quantum states as a complexity measure possesses some desirable properties .the most important one is that it is possible to derive non - trivial lower bound on tree size .we have seen some signs on the complex relation between tree size and the usefulness of a state for quantum computation speed up , but the picture is still unclear . by further investigating tree size and other complexity measures ,we hope to identify state complexity as the resource for quantum computation . with that understanding one can rule out states which do not provide quantum advantage , and concentrate on producing and characterizing states with high complexity , and possibly identify new quantum algorithms based on complex states .we thank an anonymous referee for helpful comments and suggestions on how to prove the superpolynomial tree size of the 2d cluster state , and for showing us that the dicke states have polynomial tree size .this research is supported by the national research foundation singapore , partly under its competitive research programme ( crp award no .nrf - crp12 - 2013 - 03 ) and the ministry of education , singapore .the centre for quantum technologies is a research centre of excellence funded by the ministry of education and the national research foundation singapore .
|
the complexity of a quantum state may be closely related to the usefulness of the state for quantum computation . we discuss this link using the tree size of a multiqubit state , a complexity measure that has two noticeable ( and , so far , unique ) features : it is in principle computable , and non - trivial lower bounds can be obtained , hence identifying truly complex states . in this paper , we first review the definition of tree size , together with known results on the most complex three and four qubits states . moving to the multiqubit case , we revisit a mathematical theorem for proving a lower bound on tree size that scales superpolynomially in the number of qubits . next , states with superpolynomial tree size , the immanant states , the deutsch - jozsa states , the shor s states and the subgroup states , are described . we show that the universal resource state for measurement based quantum computation , the 2d - cluster state , has superpolynomial tree size . moreover , we show how the complexity of subgroup states and the 2d cluster state can be verified efficiently . the question of how tree size is related to the speed up achieved in quantum computation is also addressed . we show that superpolynomial tree size of the resource state is essential for measurement based quantum computation . the necessary role of large tree size in the circuit model of quantum computation is still a conjecture ; and we prove a weaker version of the conjecture .
|
silicon single - photon avalanche diode ( spad ) operating in geiger mode has become a standard device to detect ultra - weak optical signal in many fields , such as astronomy , biology , lidar , quantum optics , quantum information . compared with photomultiplier tube , for single - photon detection, si spad has higher detection efficiency , lower dark count rate , and does not require a high voltage operation . currently , there are two primary types of si spads , structure and thin depletion layer structure .the representative devices based on are spcms produced by perkinelmer ( now excelitas ) .the detection efficiency of a typical device is higher than for the spectral range from 600 nm to 800 nm and its peak efficiency can be around .this performance benefits from a thick depletion layer(20 - 25 m ) , which guarantees an adequate absorption of single photons .however , the thick depletion layer induces a large timing jitter , their typical time resolution full - width at half maximum ( fwhm ) is about 400 ps . in the thin depletion layer structure ,the thickness of depletion region is around 1 m .its timing resolution can be reduced down to 30 ps fwhm at room temperature . however, the thin depletion layer may result in inadequate absorption of single photons .the representative devices based on the narrow depletion layer are pdm photon counting modules produced by mpd .the peak detection efficiency of these devices is blue - shifted compared with spcms and the detection efficiency at 800 nm is about .+ one may ask an interesting question , whether it is possible to combine the excellent performance of high detection efficiency and low timing jitter in the same spad device .solutions for this question have been previously investigated , e.g. , using resonant cavity to enhance the efficiency without increasing the thickness .the cavity enhanced structure , however , induces a small spectral bandwidth of around a few nanometers , which severely limits its use in practice . over the spectral range from 400 nm to 1000 nm .the difference between the two lines clearly shows the advantages of the nanostructure , particularly in the near - infrared region .( b ) the detection efficiency as a function of wavelength with an excess bias voltage .here we calculate the detection efficiency at typical wavelength represented by solid dots and the dotted line between solid dots is just for guiding.,width=309 ] in this paper , we first propose a nanostructured silicon spad to solve such problem .the nanostructured spad achieves high detection efficiency with excellent timing jitter over a broad spectral range .the enhancement mechanism is based on the same principles as light trapping enhancement used in solar cells . in the following sections , we first describe the new device structure and its enhancement mechanism .then we provide optical and electric simulations to demonstrate the performance improvements .finally , we briefly discuss about the feasibility of device fabrication .photon detection efficiency depends on three parameters , i.e. , photon absorption efficiency , carrier collection efficiency , and avalanche probability . in this section, we analyze the light absorption in the semiconductor film of a spad , and show how to enhance the absorption efficiency using a nanostructure . 
for a thin film with a thickness of d , the single - pass absorption is given by a = 1 - exp ( -α d ) , where α is the absorption coefficient . we assume that there is no reflection of light at the interfaces between the semiconductor film and the air . for silicon , the thickness has to be tens or even hundreds of micrometers in order to achieve an adequate absorption . such thickness , however , severely limits other parameters of the spad . we aim to enhance the photon absorption over a broad spectral range while reducing the required thickness . the same challenge arises in the development of solar cells , where the aim is to reduce the material cost of silicon by using thinner films . for this purpose , light trapping has been developed to improve the light absorption in solar cells . it allows a film to absorb much more light by using judiciously designed nanostructures to scatter light such that light propagates along a longer path in the film . a variety of structures has been proposed and implemented . it has been shown that the upper limit of light absorption by a film is a = α d / ( α d + 1/(4 n^2 ) ) , where n is the refractive index of the material . to see the effect of the absorption enhancement , we can take the limit of a very thin thickness such that α d << 1 ; the absorption calculated from eq.[eq2 ] is then 4 n^2 times larger than that from eq.[eq1 ] . based on light trapping , we design a highly efficient spad with enhanced light absorption over a broad spectral range . the structure is based on a thin film with nanocone gratings coated on both sides as shown in fig.[fig1]a . the top nanocone grating serves the purpose of broadband anti - reflection . the periodicity of the grating is 400 nm . the base diameter and the height of the nanocone are 400 nm and 800 nm , respectively . the choices of parameters are optimized for efficient anti - reflection . on the other hand , the bottom grating aims to scatter the light strongly toward the lateral direction . for this purpose , the periodicity of the grating is chosen as 800 nm , which optimizes the scattering efficiency for near infrared light . the base diameter and the height of the nanocones are 750 nm and 250 nm , respectively . the combination of the effective anti - reflection and strong scattering is critically important in reaching the upper limit of the light absorption . we choose silicon nitride as the material for the nanocone gratings . apart from the nanocone gratings , the spad is designed as a typical pin structure as shown in fig.[fig1]a , with a p - layer of 300 nm thickness , an i - layer of 1 μm thickness , and an n - layer of 300 nm thickness ; both the p - type region and the n - type region are heavily doped . on the back side of the device there is a silver mirror with 200 nm thickness , and a silicon oxide layer with 2000 nm thickness is inserted between the lower gratings and the silver layer to reduce the absorption by the mirror . the optical absorption is simulated using s4 , which is based on the rigorous coupled wave analysis ( rcwa ) method . to verify the enhancement , a conventional spad with a flat thin film as shown in fig . [ fig1]b is also simulated for comparison . the upper gratings are replaced by an antireflection layer with 100 nm thickness , and the lower gratings are removed . the silicon pin structure is the same as in the nanostructured spad . there are also a silver mirror and a silicon oxide spacer on the back side of the film .
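as a back - of - the - envelope check before the full rcwa results , the following minimal sketch compares the single - pass absorption 1 - exp ( -α d ) with the 4 n^2 light - trapping limit for a 1 μm silicon film . the absorption coefficients are rough , order - of - magnitude values for crystalline silicon and the refractive index is fixed at n = 3.6 ; both are illustrative assumptions , not the parameters of the simulated devices .

```python
import numpy as np

# illustrative absorption coefficients of crystalline silicon (1/cm); rough values only
alpha_per_cm = {400: 9e4, 500: 1.1e4, 600: 4.0e3, 700: 1.9e3,
                800: 8.5e2, 900: 3.0e2, 1000: 6.5e1}

d = 1e-4          # film thickness: 1 micrometer expressed in cm
n_si = 3.6        # assumed (constant) refractive index of silicon

print("lambda(nm)  single-pass   4n^2 limit   enhancement")
for lam, alpha in alpha_per_cm.items():
    a_single = 1.0 - np.exp(-alpha * d)                      # eq.(1): one pass, no reflection
    a_trap = alpha * d / (alpha * d + 1.0 / (4 * n_si**2))   # eq.(2): lambertian light-trapping limit
    print(f"{lam:10d}  {a_single:11.3f}  {a_trap:11.3f}  {a_trap / a_single:11.1f}x")
```

in the weakly absorbing near - infrared range the ratio approaches 4 n^2 , which is the qualitative reason why light trapping helps most at long wavelengths .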
for a fair comparison ,a anti - reflection layer with a thickness of 100 nm is used on the top of the pin .the light absorption in the pin junction in both structures are calculated and shown in fig.[fig2]a . for the flat thin film structure ,the absorption is around in the short wavelength range .however , in the near infrared ( nir ) region , the absorption falls below , indicating a poor efficiency for spad . in great contrast , the nanostructured spad mains high absorption around from visible to nir regions .based on the simulation results of the absorption efficiency , we can further characterize the parameters of interest for single - photon detection , e.g. , detection efficiency and timing jitter , through the simulations in geiger mode . for the spads as shown in fig .[ fig1 ] there are three regions that can absorb incoming photons , i.e. , the upper p - type region , the middle i - type region and the lower n - type region . in the different regions , the probabilities that the photo - generated carriers reach the depletion region , i.e. , ,are also different .when a photon is absorbed in the depletion region , an avalanche is initiated immediately so that is close to 1 .since the depletion region is mainly in the intrinsic layer , the lifetime of photo - generated carriers is longer than the avalanche process such that the recombination effect can be ignored in this region .when a photon is absorbed in the p - type region or the n - type region , due to the absence of high electric field the photo - generated minority carriers move randomly before reaching the depletion region .this process has an effect on detection efficiency and timing jitter as discussed previously . in this paper , we use 1d random walk model to simulate such process .the diffusion coefficient in the simulations is taken from the literature . as the boundary conditions of the model, we assume that random walk starts when the first electron - hole pair is generated in the neutral region , meanwhile because high quality of interface , we ignore the recombination at the interface . as the last step for the evaluation of detection efficiency , we calculate the avalanche probability , i.e. , the probability that an electron - hole pair triggers a self - sustaining avalanche , by using the random path length ( rpl ) model . in the rpl model, the random ionization path length of electrons and holes can be described by their probability density functions , which are used as inputs . for electrons ,the probability density function is \!\!\!\ ! & , \xi > d_e , \end{array } \right.\ ] ] .the inset shows the excess voltage dependence of timing jitter at 900 nm for the nanocone spad.,width=309 ] where is the enabled ionization coefficient of electrons , and is the dead space length . , where is the ionization threshold energy of electrons and is the electric field in the depletion region .the parameters of and are calculated using the values of local ionization coefficient and taken from the literatures .the probability density function of holes can be obtained similarly .in such a way , the rpl model can be effectively simulated using the monte carlo method . for each trial ,when an avalanche breakdown occurs , a detection event and threshold crossing time of avalanche current are recorded . 
after enough trials , the avalanche probability can be obtained by calculating the ratio of the number of recorded events to the number of trials . the results of the detection efficiency simulation for the two spads are shown in fig . both spads are biased with the same excess voltage ; the nanostructured spad exhibits much higher detection efficiency than the conventional spad at any wavelength . particularly in the near - infrared range , the improvement of the detection efficiency is significant , similar to the trend shown in fig . [ fig2]a . timing jitter is another key parameter of a spad , characterizing the time uncertainty between the photon absorption and the avalanche detection . it can be calculated as , where is defined as the time for the avalanche current to reach the threshold . for the spads , the timing jitter is mainly attributed to the time dispersion of the photo - generated carriers in the neutral region reaching the high - field region , and to the intrinsic randomness of the avalanche process . in the simulations , we use the random walk method to evaluate the contribution of the time dispersion in the neutral region . the intrinsic randomness of the avalanche process includes contributions from two parts , i.e. , the avalanche buildup process and the propagation process . here , a low threshold current of 0.2 ma is used to detect avalanche occurrence , such that the avalanche process is still confined around the seed point . in this scenario , the avalanche buildup process dominates the intrinsic randomness . the time uncertainty contribution due to the avalanche buildup process is extracted directly from the simulations of the rpl model . [ figure : ( a ) absorption spectrum for both structures ; for the nanostructured spad ( green solid line ) , the absorption efficiency is higher over the spectral range from 400 nm to 1000 nm . ( b ) the detection efficiency as a function of wavelength ; the detection efficiency is calculated at typical wavelengths represented by solid dots and the dotted line between the solid dots is just for guiding . ] fig . [ fig3 ] shows the timing jitter of the two spads as a function of wavelength with . the overlap of the two lines indicates that the timing jitter of the nanocone spad can be as low as that of the flat thin - film spad . the inset in fig . [ fig3 ] shows that the timing jitter of the nanocone spad decreases as the bias voltage increases . combining the simulation results of the detection efficiency and timing jitter , one can conclude that the nanostructured spad with a depletion width of 1 μm presents the advantages of high efficiency and low jitter simultaneously . therefore , the nanostructure is an effective approach to solve the aforementioned coexistence problem of spad performance . with the help of light absorption enhancement , the nanostructured spad achieves high absorption efficiency with a very thin depletion layer . by increasing the thickness of the depletion layer , such high absorption efficiency can also be achieved using the conventional flat - film spad . however , as the thickness of the depletion layer increases , the jitter performance of the spad significantly degrades .
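the random path length simulation described above can be illustrated with a deliberately simplified monte carlo sketch . it tracks electrons and holes in a 1d multiplication region , draws dead - space - corrected ionization path lengths , and declares breakdown once the number of generated carriers exceeds a threshold . all parameter values ( region width , dead spaces , enabled ionization coefficients , carrier threshold ) are placeholders for illustration , not the values used here , and the diffusion of carriers generated in the neutral regions ( modeled in the text by a 1d random walk ) is omitted .

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative parameters (placeholders, not the values used in the paper)
W = 1.0                     # width of the multiplication region (arbitrary units)
d_e, d_h = 0.05, 0.07       # dead-space lengths for electrons and holes
a_e, a_h = 3.0, 2.0         # enabled ionization coefficients (1/length)
N_TH = 200                  # carrier count taken to indicate a self-sustaining avalanche

def path_length(dead_space, alpha):
    """random ionization path length: dead space plus an exponential tail."""
    return dead_space + rng.exponential(1.0 / alpha)

def avalanche_triggered(x0):
    """one trial of the simplified rpl model, seeded by one e-h pair at position x0.
    electrons drift toward x = W, holes toward x = 0; each ionization event adds
    a new electron-hole pair at the ionization position."""
    carriers = [("e", x0), ("h", x0)]
    generated = 2
    while carriers:
        kind, x = carriers.pop()
        if kind == "e":
            y = x + path_length(d_e, a_e)
            if y < W:                                   # ionization before leaving the region
                carriers += [("e", y), ("e", y), ("h", y)]
                generated += 2
        else:
            y = x - path_length(d_h, a_h)
            if y > 0.0:
                carriers += [("h", y), ("e", y), ("h", y)]
                generated += 2
        if generated >= N_TH:
            return True                                 # treat threshold crossing as breakdown
    return False                                        # all carriers left the region first

trials = 2000
p_a = sum(avalanche_triggered(rng.uniform(0.0, W)) for _ in range(trials)) / trials
print(f"estimated avalanche trigger probability: {p_a:.2f}")
```

recording the time at which the threshold is crossed in each successful trial , instead of only the binary outcome , yields the buildup - time distribution from which the jitter contribution can be extracted .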
to verify such effect, we compare the jitter of the two spads at the same absorption efficiency .the depletion layer thickness of the flat - film spad is increased to keep the same absorption efficiency .the timing jitter results are shown in fig .[ fig4 ] , at the wavelength of 900 nm .the large difference between the two lines in fig .[ fig4 ] shows a remarkable improvement on the timing jitter using the nanostructured spad .for instance , with a depletion layer thickness of m , the nanostructured spad has a detection efficiency of . for the flat - film spad , in order to achieve the same efficiency the thickness of the depletion layer has to be increased from m to m . as a result of such thickness, the timing jitter increases with one order of magnitude . by increasing the thickness of the depletion layer, we can also increase the absorption efficiency of the nanostructured spad .however , unlike the flat film structure , the nanostructured spad does not require an extremely thick depletion layer in order to obtain high absorption efficiency .considering a nanostructured spad with a depletion layer of m thickness , a p - layer of 100 nm thickness and a n - layer of 100 nm thickness , when nanostructured spad is biased with excess voltage , is higher than in the range from 400 nm to 1000 nm as shown in fig . reaches around at the wavelength of 900 nm as shown in fig .[ fig5]b , which is much higher than that in the case of m thickness as shown in fig .the flat - film spad with the same layer parameters is also simulated for comparison . without nanostructure the detection efficiency of flat - film spaddrop to around at the same wavelength .similarly , we calculate the timing jitter performance of this thick nanostructured spad . in fig .[ fig6 ] , the timing jitter as a function of excess bias voltage is plot , and here we assume the wavelength of photon is 900 nm . with , the timing jitter is only increased to 20 ps . on the other hand , for the flat film structure spad, the thickness of the depletion layer has to be m to achieve a detection efficiency of , indicating a much longer jitter time .in addition to better device performance , this nanostructure is easy to be fabricated .silicon nitride nanocone structure can be made by applying self - assembled nickel nano - particle and dry etching .the double - sided nanocone structure can be processed sequentially and one may even directly process on thin film silicon wafer .after nanocone etching , device backside can be coated with oxide and metal to form the final devices .in summary , we have proposed and theoretically simulated a nanostructured spad that has the remarkable performance of high detection efficiency over a broad spectral range and low timing jitter at the same time . 
our approach effectively solves the coexistence problem of high efficiency and low jitter for silicon spads .moreover , by optimizing the structure , the detection efficiency of the nanostructured spad could be as high as the conventional thick silicon spad , particularly in the near - infrared range , while maintaining a pretty low timing jitter .such nanostructured spad is well suited for many practical applications requiring high efficiency and low timing jitter .jian ma and ming zhou contributed equally to this work .the authors would like to thank xintao bi , jin wang , jinrong wang for providing workstation for computation .this work has been supported by the national fundamental research program ( under grant no .2011cb921300 , 2013cb336800 ) , the national natural science foundation of china , the chinese academy of science .d. dravins , d. faria , and b. nilsson , `` avalanche diodes as photon - counting detectors in astronomical photometry , '' in `` astronomical telescopes and instrumentation , '' ( international society for optics and photonics , 2000 ) , pp. 298307 .m. a. albota , r. m. heinrichs , d. g. kocher , d. g. fouche , b. e. player , m. e. obrien , b. f. aull , j. j. zayhowski , j. mooney , b. c. willard _et al . _ , `` three - dimensional imaging laser radar with a photon - counting avalanche photodiode array and microchip laser , '' applied optics * 41 * , 76717678 ( 2002 ) .m. ghioni , g. armellini , p. maccagnani , i. rech , m. k. emsley , and m. s. unlu , `` resonant - cavity - enhanced single - photon avalanche diodes on reflecting silicon substrates , '' ieee photonics technology letters * 20 * , 413415 ( 2008 ) .j. zhu , z. yu , g. f. burkhard , c .- m .hsu , s. t. connor , y. xu , q. wang , m. mcgehee , s. fan , and y. cui , `` optical absorption enhancement in amorphous silicon nanowire and nanocone arrays , '' nano letters * 9 * , 279282 ( 2008 ) .k. x. wang , z. yu , v. liu , y. cui , and s. fan , `` absorption enhancement in ultrathin crystalline silicon solar cells with antireflection and light - trapping nanocone gratings , '' nano letters * 12 * , 16161619 ( 2012 ) .a. gulinatti , i. rech , s. fumagalli , m. assanelli , m. ghioni , and s. d. cova , `` modeling photon detection efficiency and temporal response of single photon avalanche diodes , '' in `` spie europe optics+ optoelectronics , '' ( international society for optics and photonics , 2009 ) , pp . 73550x73550x .k. c. sahoo , m .- k .lin , e .- y .chang , t. b. tinh , y. li , and j .- h .huang , `` silicon nitride nanopillars and nanocones formed by nickel nanoclusters and inductively coupled plasma etching for solar cell application , '' japanese journal of applied physics * 48 * , 126508 ( 2009 ) .
|
silicon single - photon avalanche diode ( spad ) is a core device for single - photon detection in the visible and the near - infrared range , and widely used in many applications . however , due to limits of the structure design and device fabrication for current silicon spads , the key parameters of detection efficiency and timing jitter are often forced to compromise . here , we propose a nanostructured silicon spad , which achieves high detection efficiency with excellent timing jitter simultaneously over a broad spectral range . the optical and electric simulations show significant performance enhancement compared with conventional silicon spad devices . this nanostructured devices can be easily fabricated and thus well suited for practical applications .
|
dags and other graphical models encode conditional independence ( ci ) relationships in probability distributions .therefore , ci tests are a natural building block of algorithms that infer such models from data . for example, the pc algorithm for learning dags and the fci and rfci algorithms for learning maximal ancestral graphs are all based on ci tests .ci testing is still an ongoing research topic , to which the uai community is contributing ( e.g. * ? ? ?* ; * ? ? ?but at least for continuous variables , ci testing will always remain more difficult than testing marginal independence for quite fundamental reasons . intuitively , the difficulty is that two variables and could be dependent `` almost nowhere '' , e.g. , for only a few values of the conditioning variable .this suggests a two - staged approach to structure learning : first try to learn as much as possible from simpler independence tests before applying ci tests . here , we present a theoretical basis for extracting as much information as possible from the simplest kind of stochastic independence pairwise marginal independence .more precisely , we will consider the following problem .we are given the set of pairwise marginal independencies that hold amongst some variables of interest .such sets can be represented as graphs whose missing edges correspond to independencies ( figure [ fig : marginalbutnotmarkov]a ) .we call such graphs _ marginal independence graphs_. we wish to find dags on the same variables that entail exactly the given set of pairwise marginal independencies ( figure [ fig : marginalbutnotmarkov]b ) .we call such dags _ faithful_. sometimes no such dags exist ( e.g. , figure [ fig : marginalbutnotmarkov]c ) .else , we are interested in finding the set of _ all _ faithful dags , hoping that this set will be substantially smaller than the set of all possible dags on the same variables .those candidate dags could then be probed further by using joint marginal or conditional independence tests .= [ circle , fill , inner sep=1pt ] ; = [ bi ] ; ( a1 ) at ( -0.5,0 ) ; ( a2 ) at ( 0,0 ) ; ( a3 ) at ( 0.5,0 ) ; ( b1 ) at ( -0.25,0.5 ) ; ( b2 ) at ( 0.25,0.5 ) ; ( c1 ) at ( 0,1 ) ; ( b1 ) edge ( c1 ) edge ( b2 ) edge ( a1 ) edge ( a2 ) ( a2 ) edge ( b2 ) edge ( a3 ) edge ( a1 ) ( b2 ) edge ( c1 ) edge ( a3 ) ; at ( 0,-.5 ) ( a ) ; = [ circle , fill , inner sep=1pt ] ; = [ dir ] ; ( a1 ) at ( -0.5,0 ) ; ( a2 ) at ( 0,0 ) ; ( a3 ) at ( 0.5,0 ) ; ( b1 ) at ( -0.25,0.5 ) ; ( b2 ) at ( 0.25,0.5 ) ; ( c1 ) at ( 0,1 ) ; ( a1 ) edge ( b1 ) edge ( a2 ) ; ( a3 ) edge ( a2 ) edge ( b2 ) ; ( c1 ) edge ( b2 ) edge ( b1 ) ; at ( 0,-.5 ) ( b ) ; = [ circle , fill , inner sep=1pt ] ; = [ bi ] ; ( a1 ) at ( -0.5,0 ) ; ( a2 ) at ( 0,0 ) ; ( b1 ) at ( -0.25,0.5 ) ; ( b2 ) at ( 0.25,0.5 ) ; ( c1 ) at ( 0,1 ) ; ( b2 ) edge ( c1 ) ( a2 ) edge ( b1 ) edge ( b2 ) ( a1 ) edge ( a2 ) edge ( b1 ) ( b1 ) edge ( b2 ) edge ( c1 ) ; at ( 0,-.5 ) ( c ) ; other authors have represented marginal ( in)dependencies using bidirected graphs , instead of undirected graphs like we do here .we hope that the reader is compensated for this small departure from community standards by the lower amount of clutter in our figures , and the greater ease to link our work to standard graph theoretical results .we also emphasize that we model only pairwise , and not higher - order joint dependencies .however , for gaussian data , pairwise independence entails joint independence . 
in that case ,our marginal independence graphs are equivalent to _ covariance graphs _ , whose missing edges represent zero covariances .our results generalize the work of who showed ( but did not prove ) how to find _ some _ faithful dags for a given covariance graph .we review these and other connections to related work in section [ sec : concepts ] where we also link our problem to the theory of partially ordered sets ( posets ) .this connection allows us to identify certain maximal and minimal faithful dags .based on these `` boundary dags '' we then derive a characterization of all faithful dags ( section [ sec : consistency ] ) , and construct related enumeration algorithms ( section [ sec : algo ] ) .we use these algorithms to explore the combinatorial structure of faithful dag models ( section [ sec : combinatorics ] ) which leads , among other things , to a quantification of how much pairwise marginal independencies reduce structural causal uncertainty . finally , we ask what happens when a set of independencies can _ not _ be explained by any dag . how many additional variables will we need ?we prove that this problem is np - hard ( section [ sec : latents ] ) .preliminary versions of many of the results presented in this paper were obtained in the master s thesis of the second author .in this paper we use the abbreviation _iff _ for the connective `` if and only if '' .a graph consists of a set of nodes ( variables ) and set of edges .we consider undirected graphs ( which we simply refer to as graphs ) , directed graphs , and mixed graphs that can have both undirected edges ( denotes as ) and directed edges ( denoted as ) .two nodes are _ adjacent _ if they are linked by any edge .a _ clique _ in a graph is a node set such that all are adjacent .conversely , an _independent set _ is a node set in which no two nodes are adjacent .maximal clique _ is a clique for which no proper superset of nodes is also a clique . for any ,the _ neighborhood _ is the set of nodes adjacent to and the _ boundary _ is the neighborhood of including , i.e. . a node is called _ simplicial _ if is a clique .equivalently , is simplicial iff for all .a clique that contains simplicial nodes is called a _simplex_. every simplex is a maximal clique , and every simplicial node belongs to exactly one simplex .the _ degree _ of a node is .if for two graphs and we have , then is an _ edge subgraph _ of and is an _ edge supergraph _ of .the _ skeleton _ of a directed graph is obtained by replacing every edge by an undirected edge .a _ path _ of length is a sequence of distinct nodes in which successive nodes are pairwise adjacent . a _ directed path _ consists of directed edges that all point towards . in a directed graph, a node is an _ ancestor _ of another node if or if there is a directed path . for each edge , we say that is a _ parent _ of and is a _ child _ of .if two nodes in a directed graph have a common ancestor ( which can be or ) , then the path is called a _ trek _ connecting and .a dag is called _ transitive _ if , for all , it contains an edge whenever there is a directed path from to .given a dag , the _ transitive closure _ is the unique transitive graph that implies the same ancestor relationships as , whereas the _ transitive reduction _ is the unique edge - minimal graph that implies the same ancestor relationships . in this paperwe encounter several well - known graph classes , e.g. , chordal graphs and trivially perfect graphs. 
we will give brief definitions when appropriate , but we direct the reader to the excellent survey by for further details .in this section we define the class of graphs which can be explained using a directed acyclic graph ( dag ) on the same variables . we will refer to such graphs as _ simple marginal independence graphs _ ( smigs ) . [ definition : dependency : graph ] a graph is called the _ simple marginal independence graph _ ( smig ) , or _ marginal independence graph _ of a dag if for all , iff and have a common ancestor in . if is the marginal independence graph of then we also say that is _ faithful _ to . is the set of all graphs for which there exists a faithful dag .note that each dag has exactly one marginal independence graph .again , we point out that marginal independence graphs are often called ( and drawn as ) _ bidirected graphs _ in the literature , though the term `` marginal independence graph '' has also been used by various authors ( e.g. * ? ? ?in this subsection we recall briefly the general setting for modeling ( in)dependencies proposed by and show the relationship between that model and smigs . in the definitions below denotes a set of variables and , and are three disjoint subsets of .a _ dependency model _ over is any subset of triplets which represent independencies , that is , asserts that is independent of given .a _ probabilistic dependency model _ is defined in terms of a probability distribution over . by definition iff for any instantiation , and of the variables in these subsets . a _ directed acyclic graph dependency model _ is defined in terms of a dag . by definition iff and -separated by in ( for a definition of -separation by a set see ) .we define a _ marginal _ dependency model , resp . marginal probabilistic and marginal dag dependency model , analogously as with the restriction that the second component of any triple is the empty set .thus , such marginal dependency models are sets of pairs .it is easy to see that the following properties are satisfied .let be a marginal probabilistic dependency model or a marginal dag dependency model .then is closed under : + symmetry : and + decomposition : .+ moreover , if is a marginal dag dependency model then it is also closed under + union : .the marginal probabilistic dependency model is not closed under union in general .for instance , consider two independent , uniformly distributed binary variables and and let , where denotes xor of two bits .for the model defined in terms of probability over we have that and belong to but does not .in this paper we will _ not _ assume that the marginal independencies in the data are closed under union .instead , we only consider pairwise independencies , which we formalize as follows .let be a marginal probabilistic dependency model over .then the simple marginal independence graph of is the graph in which iff .thus , in general , marginal independence graphs do not contain any information on higher - order _ joint _ independencies present in the data .however , under certain common parametric assumptions , dependency models would be closed under union as well .this holds , for instance , if the data are normally distributed . 
in that case ,marginal independence is equivalent to zero covariance , pairwise independence implies joint independence , and marginal independence graphs become covariance graphs .the following is not difficult to see .a marginal dependency model which is closed under symmetry , decomposition , and union coincides with the transitive closure of over symmetry and union .this proposition entails that if the marginal dependencies in the data are closed under these properties , then the entire marginal dependency model is represented by the marginal independence graph . to reach our aim of a complete and constructive characterization of the dags faithful to a given smig, it is useful to observe that marginal independence graphs are invariant with respect to the insertion or deletion of transitive edges from the dag .we formalize this as follows .a ( labelled ) _ poset _ is a dag that is identical to its transitive closure .the marginal independence graphs of a dag and its transitive closure are identical .two nodes are not adjacent in the marginal independence graph iff they have no common ancestor in the dag .transitive edges do not influence ancestral relationships .we thus restrict our attention to finding _ posets _ that are faithful to a given smig .note that faithful dags can then be obtained by deleting transitive edges from faithful posets ; since no dag obtained in this way can be an edge subgraph of two different posets , this construction is unique and well - defined .in particular , by deleting _ all _ transitive edges from a poset , we obtain a sparse graphical representation of the poset as defined below .given a poset , its _ transitive reduction _ is the unique dag for which and is the smallest set where .transitive reductions are also known as _hasse diagrams _ , though hasse diagrams are usually unlabeled .different posets can have the same marginal independence graphs , e.g. the posets with hasse diagrams and .similarly , markov equivalence is a sufficient but not necessary condition to inducing the same marginal independence graphs ( adding an edge to changes the poset and the markov equivalence class , but not the marginal independence graph ) .we first recall existing results that show which graphs admit a faithful dag at all , and how to find such dags if possible .note that many of these results have been stated without proof , but our connection to posets will make some of these proofs straightforward .the following notion related to posets is required . for a poset , the _ bound graph _ of is the graph where iff and share a _ lower bound _ , i.e. 
, have a common ancestor in . [ definition : boundgraph ] [ thm : graphcharact ] is the set of all graphs for which every edge is contained in a simplex . this is theorem 2 in ( who referred to simplexes as `` exterior cliques '' ) . alternatively , we can observe that the marginal independence graph of a poset ( definition [ definition : dependency : graph ] ) is equal to its bound graph ( definition [ definition : boundgraph ] ) . the characterization of bound graphs as `` edge simplicial '' graphs has been proven by noting that simplicial nodes in correspond to possible minimal elements in . we note that this result predates the equivalent statement in . though all bound graphs have a faithful poset , not all bound graphs have one with the same skeleton ; see figure [ fig : marginalbutnotmarkov]a , b for a counterexample . however , the graphs for which a poset with the same skeleton can be found are nicely characterizable in terms of forbidden subgraphs . given a graph , a dag that is faithful to and has the same skeleton exists iff is trivially perfect ( i.e. , it contains neither a path on four nodes nor a cycle on four nodes as an induced subgraph ) . [ thm : triviallyperfect ] it is known that the trivially perfect graphs are the intersection of the bound graphs and the chordal graphs ( figure [ fig : inclusions ] ) . [ figure : inclusions between the graph classes : the trivially perfect graphs are the intersection of the chordal graphs and the bound graphs ( the smigs ) , illustrated with example graphs . ] this nice result begs the question whether a similar characterization is also possible for . as the following observation shows , that is not the case . every graph is an induced subgraph of some graph . [ proposition : induced : subgraph : mdg ] take any graph and construct a new graph as follows . for every edge in , add a new node to and add edges and . obviously is an induced subgraph of .
to see that is in ,consider the dag consisting of the nodes in and the edges and for each newly added node in .then is the marginal independence graph of .the graph class characterization implies efficient recognition algorithms for smigs .it can be tested in polynomial time whether a graph is a smig .[ thm : recognize : sdgs ] verifying the graphical condition of theorem [ thm : graphcharact ] amounts to testing whether all edges reside within a simplex .however , knowing that smigs are bound graphs , we can apply an efficient algorithm for bound graph recognition that uses radix sort and simplex elimination and achieves a runtime of , where is the number of simplexes in the graph .this is typically better than because large implies small and vice versa .alternatively , we can apply known fast algorithms to find all simplicial nodes .we now ask how to find faithful dags for simple marginal independence graphs .we observed that marginal independence graphs can not distinguish between transitively equivalent dags , so a perhaps more natural question is : which _ posets _ are faithful to a given graph ? as pointed out before , we can obtain all dags from faithful posets in a unique manner by removing transitive edges . a further advantage of the poset representation will turn out to be that the `` smallest '' and `` largest '' faithful posets can be characterized uniquely ( up to isomorphism ) ; as we shall also see , this is not as easy for dags , except for marginal independence graphs in a certain subclass .our first aim is to characterize the `` upper bound '' of the faithful set .that is , we wish to identify those posets for which no edge supergraph is also faithful .we will show that a construction described by solves exactly this problem . for a graph , the _ sink graph _ is constructed as follows : for each edge in , add to : ( 1 ) an edge if ; ( 2 ) an edge if ; ( 3 ) an edge if . for instance, the sink graph of the graph in figure [ fig : marginalbutnotmarkov]a is the graph in figure [ fig : marginalbutnotmarkov]b .a _ sink orientation _ of a graph is any dag obtained by replacing every undirected edge of by a directed edge .we first need to state the following .every sink orientation of is a poset .[ thm : sinkposet ] fix a sink orientation and consider any chain . by construction , this implies that .hence , if and are adjacent in the sink graph , then the only possible orientation is .there can be two reasons why and are not adjacent in the sink graph : ( 1 ) they are not adjacent in .but then would not be faithful , since implies the edge .( 2 ) the edge was not added to the sink graph . but this contradicts .this lemma allows us to strengthen theorem 2 by in the sense that we can replace `` dag '' by `` maximal poset '' ( emphasized ) : is a _ maximal poset _faithful to iff is a sink orientation of .[ thm : sinkor ] the following is also not hard to see . 
for a smig , every dag that is faithful to is a subgraph of some sink orientation of . [ thm : sinkor2 ] obviously the skeleton of can not contain edges that are not in . so , suppose is an edge in but conflicts with the sink orientation ; that is , the sink graph contains the edge . that is the case only if is a proper subset of . however , in the marginal independence graph of , any node that is adjacent to ( has a common ancestor ) must also be adjacent to . thus , the marginal independence graph of can not be . every maximal faithful poset for can be generated by first fixing a topological ordering of and then generating the dag that corresponds to that ordering , an idea that has also been mentioned by . this construction makes it obvious that all maximal faithful posets are isomorphic . as a curiosity , we note that can also be viewed as a _ complete partially directed acyclic graph _ ( cpdag ) , which represents the markov equivalence class of edge - maximal dags that are faithful to . cpdags are used in the context of inferring dags from data , which is only possible up to markov equivalence . a minimal faithful poset to is one from which no further relations can be deleted without entailing more independencies than are given by . let be a graph and let be an independent set . then is the poset consisting of the nodes in , their neighbors in , and directed edges for each where . for example , figure [ fig : minposet]b shows the unique for the graph in figure [ fig : minposet]a . [ figure : a marginal independence graph ( a ) , its minimal faithful poset ( b ) , and further faithful dags for the same graph ( c , d ) . ] let . then a poset is a minimal poset faithful to iff for a set consisting of one simplicial vertex for each simplex . [ thm : minimalposet ] we first show that if is a set consisting of one simplicial node for each simplex , then is a minimal faithful poset . every edge resides in a simplex , so it is either adjacent to or both of its endpoints are adjacent to some . in both cases , implies . also does not imply more edges than are in .
now , suppose we delete an edge from .this edge must exist in , else was not simplicial .but now no longer implies this edge .thus , is minimal .second , assume that is a minimal faithful poset .assume would contain a sequence of two directed edges . then would also contain the edge .but then could be deleted from without changing the dependency graph , and was not minimal .so , does not contain any directed path of length more than 1 .next , observe that for each simplex in , the nodes must all have a common ancestor in . without paths of length , this is only possible if one node in the simplex is a parent of all other nodes , and there are no edges among the child nodes of .finally , each such must be a simplicial node in ; otherwise , it would reside in two or more simplexes , and would have to be the unique parent in those simplexes .but then the children of would form a single simplex in .like the maximal posets , all minimal posets are thus isomorphic .we point out that the minimal posets contain no transitive edges and therefore , they are also edge - minimal faithful dags. however , this does not imply that minimal posets have the smallest possible number of edges amongst all faithful dags ( figure [ fig : minposet ] ) .there appears to be no straightforward characterization of the dags with the smallest number of edges for marginal independence graphs in general .however , a beautiful one exists for the subclass of trivially perfect graphs .a _ tree poset _ is a poset whose transitive reduction is a tree ( with edges pointing towards the root ) .a connected smig has a faithful tree poset iff it is trivially perfect .the bound graph of a tree poset is identical to its _ comparability graph _ , which is the skeleton of the poset .comparability graphs of tree posets coincide with trivially perfect graphs .since no connected graph on nodes can have fewer edges than the transitive reduction of a tree poset on the same nodes ( i.e. , ) , tree posets coincide with faithful dags having the smallest possible number of edges .how do we construct a tree for a given trivially perfect graph ?every such graph must have a _ central point _, which is a node that is adjacent to all other nodes .we set this node as the sink of the tree , and continue recursively with the subgraphs obtained after removing the central point .each subgraph is also trivially perfect and can thus be oriented into a tree .after we are done , we link the sinks of the trees of the subgraphs to the original central point to obtain the full tree .if a given marginal independence graph admits faithful dag models , then it is of interest to enumerate these .a trivial enumeration procedure is the following : start with the sink graph of , choose an arbitrary edge , and form all 2 or 3 subgraphs obtained by keeping ( if it is directed ) , orienting ( if it is undirected ) , or deleting it .apply the procedure recursively to these subgraphs . during the recursion , do not touch edges that have been previously chosen .if the current graph is a dag that is faithful to , output it ; otherwise , stop the recursion .however , we can do better by exploiting the results of the previous section , which will allow us to derive enumeration algorithms that generate representations of multiple dags at each step . 
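to make the trivial enumeration concrete , the following sketch builds the sink graph and then brute - forces over deletions and orientations of its edges , keeping exactly those dags whose marginal independence graph equals the input . it assumes that the conditions in the sink - graph definition ( stripped in the text above ) are the usual closed - neighbourhood comparisons , i.e. a directed edge u -> v is added when bd(u) is a proper subset of bd(v) and an undirected edge when bd(u) = bd(v) ; it is an illustrative python fragment for small graphs , not the polynomial - delay algorithm developed next .

```python
from itertools import product

def boundary(G, v):
    """closed neighbourhood of v in an undirected graph given as {node: set_of_neighbours}."""
    return G[v] | {v}

def sink_graph(G):
    """assumed reading of the sink-graph construction: for an edge {u, v} of G, add
    u -> v when bd(u) is a proper subset of bd(v) (symmetrically for v -> u), and an
    undirected edge when bd(u) == bd(v); otherwise no edge is added."""
    directed, undirected, seen = [], [], set()
    for u in G:
        for v in G[u]:
            e = frozenset((u, v))
            if e in seen:
                continue
            seen.add(e)
            bu, bv = boundary(G, u), boundary(G, v)
            if bu == bv:
                undirected.append((u, v))
            elif bu < bv:
                directed.append((u, v))
            elif bv < bu:
                directed.append((v, u))
    return directed, undirected

def ancestor_sets(nodes, edges):
    """reflexive 'ancestors of' sets, computed by fixed-point iteration."""
    anc = {v: {v} for v in nodes}
    changed = True
    while changed:
        changed = False
        for p, c in edges:
            new = anc[p] - anc[c]
            if new:
                anc[c] |= new
                changed = True
    return anc

def is_acyclic(nodes, edges):
    anc = ancestor_sets(nodes, edges)
    return all(c not in anc[p] for p, c in edges)

def marginal_independence_graph(nodes, edges):
    """edges {u, v} such that u and v have a common ancestor in the directed graph."""
    anc = ancestor_sets(nodes, edges)
    return {frozenset((u, v)) for u in nodes for v in nodes
            if u != v and anc[u] & anc[v]}

def faithful_dags(G):
    """brute-force variant of the trivial enumeration: every faithful dag is an edge
    subgraph of a sink orientation, so only sink-graph edges need be considered."""
    nodes, target = set(G), {frozenset((u, v)) for u in G for v in G[u]}
    directed, undirected = sink_graph(G)
    result = []
    for keep in product([False, True], repeat=len(directed)):
        for orient in product([0, 1, 2], repeat=len(undirected)):
            edges = [e for e, k in zip(directed, keep) if k]
            for (u, v), c in zip(undirected, orient):
                if c == 1:
                    edges.append((u, v))
                elif c == 2:
                    edges.append((v, u))
            if is_acyclic(nodes, edges) and \
               marginal_independence_graph(nodes, edges) == target:
                result.append(edges)
    return result

# example: the path b -- a -- c is a smig
G = {"a": {"b", "c"}, "b": {"a"}, "c": {"a"}}
print(faithful_dags(G))   # the single faithful dag b -> a <- c, as a list of directed edges
```

for the path b -- a -- c the procedure returns the single faithful dag b -> a <- c .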
having characterized the maximal and minimal faithful posets , we are now ready to construct an enumeration procedure for all dags that are faithful to a given graph . we first state the following combination of theorem [ thm : sinkor ] and theorem [ thm : minimalposet ] . a dag is faithful to a smig iff ( 1 ) is an edge subgraph of some sink orientation of and ( 2 ) the transitive closure of is an edge supergraph of for some node set consisting of one simplicial node for each simplex . [ prop : dagcharact ] from this observation , we can derive our first construction procedure for faithful dags . a dag is faithful to a smig iff it can be generated by the following steps . ( 1 ) pick any set consisting of one simplicial node for each simplex . ( 2 ) generate any dag on the nodes that is an edge subgraph of some sink orientation of . ( 3 ) add any subset of edges from such that the transitive closure of the resulting graph contains all edges of . [ prop : dagalgorithm ] while step ( 3 ) may seem ambiguous , figure [ fig : dagalgorithm ] illustrates that after step ( 2 ) , the edges from decompose nicely into _ mandatory _ and _ optional _ ones . this means that we can in fact stop the construction procedure after step ( 2 ) and output a `` graph pattern '' , in which some edges are marked as optional . this is helpful in light of the potentially huge space of faithful models , because every graph pattern can represent an exponential number of dags . [ figure : a marginal independence graph and the graph patterns produced after step ( 2 ) of the construction ; mandatory edges are drawn thick and optional edges are dashed . ] the dags resulting from the procedure in proposition [ prop : dagalgorithm ] are in general redundant because no care is taken to avoid generating transitive edges . by combining propositions [ prop : dagcharact ] and [ prop : dagalgorithm ] , we obtain an algorithm that generates sparse , non - redundant representations of the faithful dags . a poset is faithful to iff it can be generated by the following steps . ( 1 ) pick any set consisting of one simplicial node for each simplex . ( 2 ) generate a poset on the nodes that is an edge subgraph of some sink orientation of . ( 3 ) add to . [ th : posetenum ] a nice feature of this construction is that step ( 3 ) is unambiguous : every choice for in step ( 1 ) and in step ( 2 ) yields exactly one poset . figure [ algo : enumsep ] gives an explicit pseudocode for an algorithm that uses theorem [ th : posetenum ] to enumerate all faithful posets . our algorithm is efficient in the sense that at every internal node in its recursion tree , it outputs a faithful poset . at every node we need to evaluate whether the current is acyclic and atransitive ( i.e. , contains no transitive edges ) , which can be done in polynomial time . also simplexes and their simplicial vertices can be found in polynomial time . thus , our algorithm is a _ polynomial delay enumeration algorithm _ similar to the ones used to enumerate adjustment sets for dags . figure [ fig : consistentposets ] shows an example output for this algorithm . [ figure : a marginal independence graph and example faithful posets enumerated by the algorithm . ]
1.5,1.5 ) ; ( v2 ) at ( 3.5,0 ) ; ( v3 ) at ( 4,1 ) ; ( v4 ) at ( 2.25,1.5 ) ; ( v5 ) at ( 1,0.5 ) ; ( v8 ) at ( 2.5,.5 ) ; ( v6 ) at ( 3,1 ) ; ( v7 ) at ( 2.5,2 ) ; = [ dir ] ( v1 ) edge ( v5 ) ( v2 ) edge ( v5 ) ( v1 ) edge ( v6 ) ( v3 ) edge ( v6 ) ( v4 ) edge ( v6 ) ( v1 ) edge ( v7 ) ( v3 ) edge ( v7 ) ( v4 ) edge ( v7 ) ( v3 ) edge ( v8 ) ( v5 ) edge ( v8 ) ; ( v1 ) at ( 1.5,1.5 ) ; ( v2 ) at ( 3.5,0 ) ; ( v3 ) at ( 4,1 ) ; ( v4 ) at ( 2.25,1.5 ) ; ( v5 ) at ( 1,0.5 ) ; ( v8 ) at ( 2.5,.5 ) ; ( v6 ) at ( 3,1 ) ; ( v7 ) at ( 2.5,2 ) ; = [ dir ] ( v1 ) edge ( v5 ) ( v2 ) edge ( v5 ) ( v7 ) edge ( v6 ) ( v1 ) edge ( v7 ) ( v3 ) edge ( v7 ) ( v4 ) edge ( v7 ) ( v1 ) edge ( v8 ) ( v2 ) edge ( v8 ) ( v3 ) edge ( v8 ) ; + ( v1 ) at ( 1.5,1.5 ) ; ( v2 ) at ( 3.5,0 ) ; ( v3 ) at ( 4,1 ) ; ( v4 ) at ( 2.25,1.5 ) ; ( v5 ) at ( 1,0.5 ) ; ( v8 ) at ( 2.5,.5 ) ; ( v6 ) at ( 3,1 ) ; ( v7 ) at ( 2.5,2 ) ; = [ dir ] ( v1 ) edge ( v5 ) ( v2 ) edge ( v5 ) ( v7 ) edge ( v6 ) ( v1 ) edge ( v7 ) ( v3 ) edge ( v7 ) ( v4 ) edge ( v7 ) ( v3 ) edge ( v8 ) ( v5 ) edge ( v8 ) ; ( v1 ) at ( 1.5,1.5 ) ; ( v2 ) at ( 3.5,0 ) ; ( v3 ) at ( 4,1 ) ; ( v4 ) at ( 2.25,1.5 ) ; ( v5 ) at ( 1,0.5 ) ; ( v8 ) at ( 2.5,.5 ) ; ( v6 ) at ( 3,1 ) ; ( v7 ) at ( 2.5,2 ) ; = [ dir ] ( v1 ) edge ( v5 ) ( v2 ) edge ( v5 ) ( v1 ) edge ( v6 ) ( v3 ) edge ( v6 ) ( v4 ) edge ( v6 ) ( v7 ) edge [ rdir ] ( v6 ) ( v1 ) edge ( v7 ) ( v1 ) edge ( v8 ) ( v2 ) edge ( v8 ) ( v3 ) edge ( v8 ) ; ( v1 ) at ( 1.5,1.5 ) ; ( v2 ) at ( 3.5,0 ) ; ( v3 ) at ( 4,1 ) ; ( v4 ) at ( 2.25,1.5 ) ; ( v5 ) at ( 1,0.5 ) ; ( v8 ) at ( 2.5,.5 ) ; ( v6 ) at ( 3,1 ) ; ( v7 ) at ( 2.5,2 ) ; = [ dir ] ( v1 ) edge ( v5 ) ( v2 ) edge ( v5 ) ( v1 ) edge ( v6 ) ( v3 ) edge ( v6 ) ( v4 ) edge ( v6 ) ( v7 ) edge [ rdir ] ( v6 ) ( v1 ) edge ( v7 ) ( v3 ) edge ( v8 ) ( v5 ) edge ( v8 ) ;in this section , we apply the previous results to explore some explicit combinatorial properties of smigs and their faithful dags . we revisit the question : when can a marginal independence graph allow a causal interpretation ?more precisely , we ask _ how many _ marginal independence graphs on variables are smigs .we reformulate this question into a version that has been investigated in the context of poset theory .let the _ height _ of a poset be the length of a longest path in .the following is an obvious implication of theorem [ thm : minimalposet ] .the number of non - isomorphic smigs with nodes is equal to the number of non - isomorphic posets on variables of height 1 .enumeration of posets is a highly nontrivial problem , and an intensively studied one .the online encyclopedia of integer sequences ( oeis ) tabulates for up to 40 .we give the first 10 entries of the sequence in table [ tab : numbers ] and compare it to the number of graphs in general ( up to isomorphism ) . as we observe , the fraction of graphs that admit a dag on the same variables decreases swiftly as increases . .comparison of the number of unlabeled connected graphs with nodes to the number of such graphs that are also smigs . for (not shown ) , non - smigs outnumber smigs by more than . 
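as a concrete aside , the faithfulness notion that underlies all of the constructions above can be tested by brute force : two nodes are adjacent in the marginal independence graph exactly when they have a common ancestor in the dag ( this formulation is spelled out in the auxiliary - node section below ) . the following is a minimal python sketch of such a test ; the networkx - based representation and all names are our own illustrative assumptions , not the paper's algorithm or implementation .

```python
# a minimal sketch, assuming faithfulness means: u and v are adjacent in the
# marginal independence graph iff they share a common ancestor in the dag.
import itertools
import networkx as nx

def marginal_independence_graph(dag):
    """undirected graph with an edge u-v iff u and v have a common ancestor in dag."""
    mig = nx.Graph()
    mig.add_nodes_from(dag.nodes)
    # every node counts as its own ancestor here
    anc = {v: nx.ancestors(dag, v) | {v} for v in dag.nodes}
    for u, v in itertools.combinations(dag.nodes, 2):
        if anc[u] & anc[v]:
            mig.add_edge(u, v)
    return mig

def is_faithful(dag, graph):
    """check whether the dag is faithful to the given marginal independence graph."""
    mig = marginal_independence_graph(dag)
    return set(map(frozenset, mig.edges)) == set(map(frozenset, graph.edges))

# tiny hypothetical usage example
dag = nx.DiGraph([("a", "b"), ("a", "c")])
g = nx.Graph([("a", "b"), ("a", "c"), ("b", "c")])
print(is_faithful(dag, g))   # True: b and c share the common ancestor a
```

such an exhaustive check is of course only practical for small graphs ; the enumeration procedures above avoid it by construction .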
we note that a similar but more technical analysis is possible for uncertainty reduction with respect to dags instead of posets . we omit this due to space limitations .

in this section we consider situations in which a graph is not a smig ( which can be detected using the algorithm in theorem [ thm : recognize : sdgs ] ) . similarly to the definition proposed in for the general dependency models , to obtain faithful dags for such graphs we will extend the dags with some auxiliary nodes . we generalize definition [ definition : dependency : graph ] as follows . let be a graph and let , with , be a set of auxiliary nodes . a dag is faithful to if for all , iff and have a common ancestor in . the result below follows immediately from proposition [ proposition : induced : subgraph : mdg ] . for every graph there exists a faithful dag with some auxiliary nodes . obviously , if then there exists a faithful dag to with . for , from the proof of proposition [ proposition : induced : subgraph : mdg ] it follows that there exists a set of at most nodes and a dag such that is faithful to with auxiliary nodes . the problem then arises of minimizing the cardinality of . the problem of deciding whether , for a given graph and an integer , there exists a faithful dag with at most auxiliary nodes is np - complete . it is easy to see that the problem is in np . to prove that it is np - hard , we show a polynomial time reduction from the edge clique cover problem , which is known to be np - complete . recall that the edge clique cover problem asks , for a graph and an integer , whether there exists a set of subgraphs of such that each subgraph is a clique and each edge of is contained in at least one of these subgraphs . let and be an instance of the edge clique cover problem , with . we construct the marginal independence graph as follows . then and . obviously , can be constructed from in polynomial time . we claim that can be covered by cliques iff for there exists a faithful dag with at most auxiliary nodes . assume first that can be covered by at most cliques , let us say , with . then we can construct a faithful dag for with auxiliary nodes as follows . its set of nodes is , where . the edges can be defined as . it is easy to see that is faithful to . now assume that a dag , with at most auxiliary nodes , is faithful to . from the construction of it follows that for all different nodes there is no directed path from to in . if such a path existed , then would be an ancestor of in . since is an edge of , the nodes and have a common ancestor in , which must also be a common ancestor of and , a contradiction because and are not incident in . thus , all treks connecting pairs of nodes from in must contain auxiliary nodes . next , we slightly modify : for each we remove all incident edges and add the new edge . the resulting graph is a dag which remains faithful to . indeed , we can not obtain a directed cycle in since no has an in - edge and the original was a dag .
to see that the obtained dag remains faithful to note first that after the modifications , and have a common ancestor in whereas and , with , do not .otherwise , it would imply a directed path from to since is the only possible ancestor of both nodes a contradiction .finally , note that any trek connecting and in can not contain a node from .similarly , no trek between and in contains a node from .we get that and have a common ancestor in iff they have a common ancestor in .thus , in the auxiliary nodes are incident to , but not to nodes from .below we modify further and obtain a dag , in which every auxiliary node is incident with a node in via an out - edge only .to this aim we remove from all edges going out from a node in to a node in . obviously , if and have a common ancestor in , then they also have a common ancestor in , because .the opposite direction follows from the fact we have shown at the beginning of this proof that for all different nodes there is no directed path from to in .this is true also for .thus , if and have a common ancestor , say , in then and there exist directed paths and such that also all and belong to . but from the construction of it follows that both paths belong also to .since is faithful to , for every auxiliary node the subgraph induced by its children in is a clique in .moreover every edge of the graph belongs to at least one such clique .thus the subgraphs induced by , with , are cliques that cover .given a graph that represents a set of pairwise marginal independencies , which causal structures on the same variables might have generated this graph ?here we characterized all these structures , or alternatively , all maximal and minimal ones .furthermore , we have shown that it is possible to deduce how many exogenous variables ( which correspond to simplicial nodes ) the causal structure might have , and even to tell whether it might be a tree . for graphs that do not admit a dag on the same variables , we have studied the problem of explaining the data with as few additional variables as possible , and proved it to be np - hard .this may be surprising ; the related problem of finding a mixed graph that is markov equivalent to a bidirected graph and has as few bidirected edges as possible is efficiently solvable .the connection to posets emphasizes that sets of faithful dags have complex combinatorics . indeed ,if there are no pairwise independent variables , then we obtain the classical poset enumeration problem .our current , unoptimized implementation of the algorithm in figure [ algo : enumsep ] allows us to deal with dense graphs up to about 12 nodes ( sparse graphs are easier to deal with ) .we point out that our enumeration algorithms operate with a `` template graph '' , i.e. , the sink orientation .it is possible to incorporate certain kinds of background knowledge , like a time - ordering of the variables , into this template graph by deleting some edges .such further constraints could greatly reduce the search space .another additional constraint that could be used for linear models is the precision matrix , though finding dags that explain a given precision matrix is np - hard in general , we observed that the pairwise marginal independencies substantially reduce structural uncertainty even in the worst case ( table [ tab : numbers ] ) . causal inference algorithms could exploit this to reduce the number of ci tests .the pc algorithm , for instance , forms the marginal independence graph as a first stage before performing any ci tests . 
at that stage, it could be immediately tested if the resulting graph is a smig , and if not , the algorithm can terminate as no faithful dag exists . in summary , we have mapped out the space of causal structures that are faithful to a given set of pairwise marginal independencies using constructive criteria that lead to well - structured enumeration procedures .the central idea underlying our results is that faithful models for marginal independencies are better described by posets than by dags .our results allow to quantify how much our uncertainty about a causal structure is reduced when we invoke the faithfulness assumption and observe a set of marginal independencies .
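for completeness , the clique - cover construction used in the np - hardness argument above can also be sketched directly : one auxiliary node is introduced per clique and pointed at all members of that clique , so that exactly the covered pairs acquire a common ancestor . the code below is only an illustration of that idea ( the names and the networkx representation are our own assumptions ) , not the authors' construction verbatim .

```python
# sketch: build a dag with one auxiliary parent per clique of an edge clique cover.
# assumption: faithfulness = "adjacent in g iff common ancestor in the dag".
import networkx as nx

def dag_from_clique_cover(graph, cliques):
    """cliques: list of node sets covering every edge of `graph`."""
    dag = nx.DiGraph()
    dag.add_nodes_from(graph.nodes)
    for i, clique in enumerate(cliques):
        aux = ("u", i)                    # auxiliary node for clique i
        for v in clique:
            dag.add_edge(aux, v)          # aux -> every member of the clique
    return dag

# usage on a triangle plus a pendant edge, covered by two cliques
g = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4)])
d = dag_from_clique_cover(g, [{1, 2, 3}, {3, 4}])
# nodes 1 and 2 now share the ancestor ("u", 0); nodes 1 and 4 share no ancestor
```

the number of auxiliary nodes used equals the number of cliques in the cover , which is exactly the quantity the np - hardness reduction exploits .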
|
we consider graphs that represent pairwise marginal independencies amongst a set of variables ( for instance , the zero entries of a covariance matrix for normal data ) . we characterize the directed acyclic graphs ( dags ) that faithfully explain a given set of independencies , and derive algorithms to efficiently enumerate such structures . our results map out the space of faithful causal models for a given set of pairwise marginal independence relations . this allows us to show the extent to which causal inference is possible without using conditional independence tests .
|
hamiltonian models of 3-d collinear vortex filaments or 2-d point vortices inside closed geometries have for many years attracted interest due to its central role in understanding the evolution of vorticity in real flows .2-d point vortex dynamics is also relevant in a series of real experimental applications such as vortices in superfluid helium , superconductors , and dynamics of non - neutral plasma filaments .it has also been shown recently that point - vortex models can provide a good formulation for developing control algorithms in realistic flows , since they are able to capture in certain limits most of the qualitative features of the vorticity dynamics .these results indicate that control algorithms based on these low - dimensional point vortex models can be successfully implemented for certain geometries even in fully - viscid navier - stokes equations . in this letter we will likewise attempt to develop a low - dimensional model for a fluid control algorithm , aimed at modifying a realistic flow of vorticity inside a bounded fluid domain . the model will be based on hamiltonian point vortex dynamics , and make use of unstable periodic trajectories in the natural flow dynamics to modify the flow via the ott , grebogi , yorke ( ogy ) scheme .the principal result here is that we develop a general control algorithm which can be utilized in higher - dimensional systems ( phase space ) , such as multi - vortex systems .we demonstrate the method s effectiveness using numerical examples that model an experimental system of confined non - neutral plasma . to develop the control scheme , we first briefly review the formulation of point vortex dynamics . let denote the complex coordinate of a single vortex .the effect of the boundary of the circular domain is simply accounted for by image charges at .the hamiltonian of a system of identical point vortices each of circulation inside a circle of radius is then given by , \label{ham}\ ] ] each vortex moves by being passively advected in the velocity field created by the other vortices and the image vortices , with the dynamical equations being obtained from the canonical hamilton relations : that is , \;\;\;\;\;k= 1 , \ldots , n. \label{dyn}\ ] ] here , the first sum includes the velocity field created by the other vortices at the position of the -th vortex ( infinite self - interaction excluded ) while the second sum gives the velocity field of the image vortices .the hamilton relations ( 2 ) imply that each vortex contributes one degree - of - freedom and therefore two dimensions to the phase space .in addition , since the hamiltonian does not contain time explicitly , the `` energy '' of the vortex system is a constant of the motion. there exists another constant of motion for a circular domain resulting from the invariance of under rotation , i.e. the `` angular momentum '' : because of these two integrals of the motion , the dynamics of the -vortex system is restricted to a dimensional manifold in the dimensional phase space .the hamilton relations for point vortices imply the unusual situation that the phase space is simply a re - scaled version of the configuration space .hence , integrability conditions which normally rely on symmetries of the hamiltonian via noether s theorem , can partly be deduced by looking for symmetries in the physical domain . in this view, one has global non - integrability ( chaos ) if , where ( number of vortices ) - ( number of remaining integrals of motion ) . for the circular domain discussed here , for . 
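to make the dynamics of eq . ( [ dyn ] ) concrete , the following is a small numerical sketch of identical point vortices advected inside the unit circle , with one image vortex per real vortex placed at the inverse point outside the boundary . the sign and normalization conventions , the use of the unit disc , and the simple fixed - step integrator are our own assumptions for illustration , not a statement of the paper's exact equations .

```python
# sketch: n identical point vortices in the unit disc with boundary images.
# assumed form: conj(dz_k/dt) = (gamma/(2*pi*i)) * [ sum_{j!=k} 1/(z_k - z_j)
#                                                   - sum_j 1/(z_k - 1/conj(z_j)) ]
import numpy as np

GAMMA = 1.0          # circulation of each vortex (assumption)

def velocities(z):
    """complex velocities dz/dt for vortex positions z (1d complex array)."""
    n = len(z)
    dzdt = np.zeros(n, dtype=complex)
    images = 1.0 / np.conj(z)                  # image charges outside the unit circle
    for k in range(n):
        s = 0.0 + 0.0j
        for j in range(n):
            if j != k:
                s += 1.0 / (z[k] - z[j])       # interaction with the other vortices
            s -= 1.0 / (z[k] - images[j])      # interaction with every image vortex
        dzdt[k] = np.conj(GAMMA / (2.0j * np.pi) * s)
    return dzdt

def step(z, dt):
    """one classical fourth-order runge-kutta step."""
    k1 = velocities(z)
    k2 = velocities(z + 0.5 * dt * k1)
    k3 = velocities(z + 0.5 * dt * k2)
    k4 = velocities(z + dt * k3)
    return z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# three vortices started near a symmetric configuration
z = np.array([0.4, 0.4 * np.exp(2j * np.pi / 3), 0.4 * np.exp(4j * np.pi / 3)])
for _ in range(1000):
    z = step(z, 1e-3)
```

in such a sketch each vortex also feels its own image , which is what confines the motion to the disc ; the co - rotating frame and the control terms discussed below can be added on top of this basic integrator .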
in the followingwe will restrict our discussion to the three - vortex case , which is the smallest number of vortices inside a cylinder that leads to chaotic dynamics .our central aim here is to achieve control and avoid chaos by stabilizing simple unstable periodic orbits of the three - vortex system , thereby taking advantage of the natural dynamics of the flow .although there are a large number of low - order periodic orbits of the generic vortex dynamics , we will restrict ourselves without loss of generality to those associated with symmetric configurations of vortices , with the center - of - vorticity in the origin .examples of such configurations are shown in fig . 1 for .we can simplify the discussion by rendering the equations - of - motion dimensionless with the substitution : performing a further transformation into a reference frame co - rotating with the vortex system with uniform angular velocity , again without loss of generality , eq .( [ dyn ] ) will read as : , \;\;\;\;\;k= 1 , 2 , 3.\ ] ] the main advantage of this co - rotating frame is that some of the periodic orbits in the 6-d phase space reduce to a single fixed point ( with proper choice of ) .the stability of such symmetric configurations has been first analyzed by havelock . in spite of the global non - integrability of the system ,the symmetric orbits are stable for some parameter regimes , i.e. they are elliptic quasiperiodic orbits embedded in the four - dimensional manifold determined by the constants of motion ( [ ham ] ) and ( [ ang ] ) .these orbits correspond to states where the vortices are away from the boundary of the cylinder , hence the effect of the `` image '' vortices is a weak perturbation , of the integrable three - vortex dynamics in free space .these regions are separated from the chaotic areas by kam tori containing orbits of marginal stability .many of these stable configurations have been found experimentally in the dynamics of magnetically confined non - neutral plasma columns , where the dynamics is very close to those of point vortices .although many examples of chaos control algorithms have been developed in recent years , these have been restricted to a relatively low - dimensional phase space . here, we develop a control algorithm for higher - dimensional systems which assumes explicit knowledge of the dynamical equations and therefore the location , eigenvalues and eigenvectors of the fixed point or periodic orbit to be stabilized .this method can also certainly be reformulated in the language of experimentally measured time series ( as in previous works ) , however this is not our immediate aim here . for the particular applicationwe discuss in this letter , we aim to modify the dynamics of the fully - viscid continuous fluid using a control model based on the point vortex dynamics .hence , the model governing equations are known , and the high - dimensional control is straightforward to calculate . in the case of an experimental system , the method can be implemented in an identical fashion , using the jacobian and the perturbation matrix obtained from experimental time - series .to begin , let represent the -dimensional unperturbed dynamical system of eq .( 6 ) . 
here , in addition to eq .( [ dyn ] ) we have introduced a set of experimentally accessible system parameters , represented by the -dimensional vector .we assume the number of such parameters is usually much smaller then the dimensionality of the phase space , typically one .suppose that is an unstable fixed point in the -dimensional phase space , i.e. , , where is a fixed set of parameters of interest .allowing small perturbations in the parameters , then in a neighborhood of this fixed point the dynamics is described by the linearized equations where is the standard jacobian and the matrix describes the effect of small parameter perturbations on the system ( i.e. the perturbation matrix ) .a stability analysis can be performed around the fixed point , by studying the properties of the jacobian , and typically can reveal a multitude of topologies .these topologies are defined by the set of eigenvalues of , which can be real , complex or zero .the instability of the fixed point implies that there is at least one eigenvalue with .since the system is hamiltonian ( ) there is also at least one eigenvalue with negative real part .although there is no guarantee of purely real eigenvalues and eigenvectors , with a proper transformation one can eliminate the imaginary part for at least one of the stable eigenvalues ( and the corresponding eigenvector ) without changing the stability . in the following , we will suppose without loss of generality that there is at least one purely real negative eigenvalue with a corresponding normalized , stable real eigenvector .to achieve control of the vortex trajectories using small perturbations , we impose the condition that after a short time the trajectory has approached the fixed point , i.e. .this can generally be accomplished in many different ways .one possible criteria is suggested by the low - dimensional chaos control of ogy , where the new point is driven onto the stable manifold of the fixed point .other possible choices are the self - locating ( geometric control ) method originally developed for low - dimensional chaos or a method using a newton algorithm developed for higher - dimensional systems . here, we adopt the approach of ogy , i.e. we require that the trajectory lies on the stable manifold of the fixed point after a time : where alpha is a small real number ( ) .intuitively , this implies that after time the trajectory lies on the stable manifold and simultaneously the distance from the fixed point has been decreased by . using a discretization of ( [ dev ] ) , this equation can be written explicitly as \ ;\delta{\bf r}(t ) + { \bf g}\;\delta{\bf \phi}(t ) \delta t , \label{con}\ ] ] where for simplicity the notations and have been introduced .this represents a system of linear equations with unknown perturbation parameters that typically has no solution when .this means that it is not possible to control such a system with one or a few control parameters in a one - step process , as described by eq .( [ con ] ) . here , however we develop an alternative way to overcome this difficulty .let us introduce the following notation for the rhs of eq .( [ con ] ) : \;\delta{\bf r}(t ) + { \bf g}\ ; \delta { \bf \phi}(t ) \delta t.\ ] ] then , instead of eq . ( [ cri ] )one can impose a weaker condition : namely , that after steps the trajectory lies on the stable manifold , i.e. or where the shorthand notation has been introduced for the perturbation at time , i.e. . 
eq ( [ iter ] ) then can be written explicitly as where .this provides us with linear equations and unknown perturbations .the number is choosen as the smallest integer satisfying the condition . in the following , we will assume that there is a such as .if is not divisible by , there are perturbation parameters that can be freely set to zero .we note that the size of the linear neighborhood of the fixed point gives a natural limit for the allowed size of the perturbations , and we assume that none of the perturbation parameters can be larger than a preselected value .however , since eq .( 10 ) contains the yet unspecified control time parameter , the perturbations scale with this parameter as .this means that we should impose a limit on the product rather than on alone .the upper bound of this product should naturally be set by the system s physical limitations .for example , in the 3-vortex problem is a length that is naturally limited by the typical size of the linearized region , i.e. .it is possible to derive general conditions for the controllability of the system . to have a solution for the -step control process ( 13 ) , the square matrix formed from the matrices of size , i.e. has to have nonvanishing determinant. in two - dimensions ,the system is always controllable provided the vector is not collinear with .intuitively , this means that small perturbations of the system do not move the fixed point exclusively along the stable direction . in higher - dimensions ,such a condition is not strong enough ; the perturbation may access only a very limited subspace , or may have a certain symmetry that leads to un - controllability , even if the two - dimensional analog condition is satisfied .instead incorporates all these additional conditions , and can be viewed as the basic criteria of controllability .this relation is also a useful tool to select appropriate perturbations for each particular control problem , and to find the minimum number of perturbations that can successfully control the system .the control parameter selection is probably the most important step in any such problem , especially when the jacobian is highly - symmetric as in the case of symmetric periodic orbits of identical vortices .for such orbits it is straightforward to show that most of the symmetric perturbations lead to un - controllability .an example of this is the simple perturbation of `` squeezing '' the boundary , i.e. making it slightly elliptic with a small eccentricity . here , the equations of motion can be simply deduced by using a simple conformal mapping that maps the unit circle into an ellipse , with semi - axis and , respectively ( for ) . a careful analysis of the collinear configurations of fig .1(a ) shows that the determinant of the corresponding controllability matrix vanishes , due to the symmetry of the jacobian and the perturbation matrix with respect to the non - central vortices .this means that it is not possible to control this particular configuration by using cylinder squeezing .in fact such behaviour is expected to be quite general in systems of interacting identical subsytems that respond in the very same way to external perturbations .to illustrate the above control algorithm , we actively stabilized the two simplest unstable periodic orbits of the three vortex system inside a cylinder , i.e. the symmetric states shown in fig .1(a , b ) . 
for the collinear state of fig .1(a ) , the angular velocity of the configuration is , while for the triangular state of fig .1(b ) , it is . here is the radial distance of the non - central vortices from the center of the cylinder . to simplify the equations of motion, the dynamics is viewed in a reference frame co - rotating with the vortices with this constant angular velocity . in this frame , the periodic orbits become single fixed points . analytic calculation of the jacobian for the collinear state at reveals the existence of four real eigenvalues , and , and two purely complex ones , .the triangular state at leads to two purely real eigenvalues and four complex ones , .the stable eigendirection is then obtained in each case as the eigenvector associated with the single negative real eigenvalue .one can observe that although the two topologies are completely different , the controller requires knowledge of only one of the possibly many stable directions . to control the vortex dynamics we require non - symmetric and non - homogeneous perturbatios .we have introduced simple perturbations applied only at the cylinder surface : namely , a uniform distribution of sources and sinks with variable strength .although such a perturbation may be difficult to implement in a real fluid experiment , it is a real possibility in the plasma analog of the vortex dynamics , where similar perturbations can be generated by external electric fields . for ,this leads to a uniform , a quadrupole , and a sextapole field , respectively , as shown in fig .2(a - c ) .since all these fields are highly symmetric , none of them alone can effectively control the vortex dynamics .however , their linear combinations , resulting in strongly asymmetric and inhomogenous fields , proved to be a successful choice that satisfies the controllability conditions ( [ cond ] ) . here , we have introduceed three linearly independent external fields as shown in fig .2(d - f ) . using these ,the equations of motion can be written : \nonumber \\ & & + \phi_1 \ ; ( 1+z_k+z_k^2)+ \phi_2 \;(-1+z_k - z_k^2)+ \phi_3 \ ; ( 1+z_k - z_k^2 ) \;\;\;\;\;k= 1 , 2 , 3.\end{aligned}\ ] ] we previously showed that there are two constants of motion in the unperturbed system , the energy and the angular momentum . due to the presence of the time - dependent perturbations that do not preserve these first integrals , however , the effective dynamics is no longer restricted to a 4-d manifold , but rather can occupy in the full 6-d phase space .note that if the dynamics initially lies in the same energy and angular momentum manifold that contains our fixed point , ergodicity ensures that the trajectory sooner or later reaches a close neighborhood of the desired fixed point , and then the control algorithm is switched on .if this is not the case , then in general this procedure can be supplemented with a targeting algorithm that first drives the system close to the proper energy and angular momentum values , then takes advantage of the infinite number of unstable periodic orbits in that manifold to reach the neighborhood of the desired fixed point in a short time .the development of such a targeting algorithm in higher dimensions is clearly an important step that deserves further study .since this is not the primary emphasis of this letter , in our numerical simulations we have started the system close to the unstable fixed point . 
to demonstrate this fig .3(a - b ) shows the time evolution of a single coordinate pair of the vortex system in the collinear case , at first without control ( up to mark a ) .then , after the dynamics reaches a small neighborhood of the desired fixed point ( mark b ) , the controller is switched on , and the trajectory is stabilized .3(c - e ) also shows the time evolution of the required perturbations .clearly this demonstrates that the algorithm can effectively control the dynamics with only tiny perturbations applied on the boundary . for better visualization , fig .4 shows a three - dimensional projection of the phase space trajectory with and without control .as previously mentioned , other control criteria can be used instead of the ogy method .for example , one can consider the high - dimensional control of xu and bishop , based on the newton root finding algorithm .this method does not assume knowledge of the stable direction , and it can be useful in cases where all eigenvalues are complex ( with at least two having non - vanishing real part ) .then , instead of condition ( 11 ) one can use the usual newton root finding algorithm however , this method fails when the jacobian is not invertible .( we note here that the transition from eq .( 5 ) to eq . ( 6 ) in ref . is mathematically incorrect , i.e. the solution ( 6 ) does not fulfill eq .therefore the weaker condition ( [ nw ] ) shown above should be used . ) in a realistic navier - stokes simulation our algorithm is effective on time scales significantly shorter than the viscous one ( ) .this is attractive since this time scale can be rather large in some plasma systems , where the viscosity is small and there are no boundary - layer effects due to the free - slip conditions on the boundary . even in these circumstances, it should be noted that there is an additional non - viscous effect that is not present in the hamiltonian model , the vortex merger due to the finite size of the vortices .therefore effective control can be reached only with concentrated vortices that are far from each other , typically with , where is the distance between vortices and the vortex radius computed according to ref . .although this condition can be easily fulfilled while the controller is working , during the targeting algorithm or full chaotic dynamics the vortices may easily come close to each other . in this context, the principal usefulness of such a control scheme will be to _ prevent _ the vortex merger on short time scales from symmetric unstable initial conditions .in conclusion , we have demonstrated that the control algorithm derived in this letter can effectively control higher - dimensional systems .we have also derived a controllability condition that does not depend on the particular control method used , since it contains only the jacobian and the perturbation matrix as input parameters . in this sense , the result is quite general , and can be used to decide the type and number of perturbations needed to control a higher - dimensional system . for the particular numerical example presented here , i.e. three - vortex dynamics inside a cylinder ,the controller was designed in such a way that it could be implemented in a magnetically confined plasma experiment . 
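as a rough illustration of the linearized multi - step control computation described earlier ( eqs . ( [ con ] ) and ( [ iter ] ) ) , one can assemble the small linear system for the boundary perturbations explicitly . in the sketch below the jacobian , the perturbation matrix and the stable eigenvector are generic placeholders standing in for the quantities computed at the vortex fixed point ; the construction of the controllability - type matrix , not the specific numbers , is the point .

```python
# sketch of the p-step control step: find perturbations phi_1..phi_p (each of size m)
# such that after p linearized steps the deviation lies on the stable direction,
# delta_r(t + p*dt) = alpha * |delta_r(t)| * e_s,  with  B = 1 + A*dt.
import numpy as np

def control_perturbations(A, G, e_s, delta_r, dt, alpha, p):
    n, m = G.shape
    B = np.eye(n) + A * dt
    target = alpha * np.linalg.norm(delta_r) * e_s - np.linalg.matrix_power(B, p) @ delta_r
    # columns of M: effect of the perturbation applied at step i, propagated to step p
    blocks = [np.linalg.matrix_power(B, p - 1 - i) @ G * dt for i in range(p)]
    M = np.hstack(blocks)                       # n x (m*p) controllability-type matrix
    phi, *_ = np.linalg.lstsq(M, target, rcond=None)
    return phi.reshape(p, m)                    # one row of perturbations per step

# toy numbers only: a 6-dimensional phase space, one control parameter, p = 6 steps
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
G = rng.normal(size=(6, 1))
e_s = np.zeros(6); e_s[0] = 1.0
phi = control_perturbations(A, G, e_s, delta_r=0.01 * rng.normal(size=6),
                            dt=1e-2, alpha=0.5, p=6)
```

when the assembled matrix has nonvanishing determinant , the system of equations can be solved exactly , which is precisely the controllability condition discussed above ; a least - squares solve is used here only so that the toy example runs for any placeholder matrices .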
at present , we also plan to implement the control algorithm in a realistic viscous fluid framework , and hence an effective targeting algorithm has to be developed . in addition , in the present paper the controller was formulated using exactly known vortex coordinates . however , the control scheme itself can be entirely reformulated in a phase space reconstructed purely from wall signal ( e.g. boundary pressure or voltage ) measurements , thereby providing an algorithm which can be directly implemented in an experimental framework . after the acceptance of this paper for publication , a different formulation of the control algorithm that yields a similar controllability matrix was brought to our attention . the authors wish to thank f. driscoll , g. pedrizzetti and t. tél for very useful discussions . ap and jk wish to acknowledge partial support for this project by a grant from the office of naval research , no . n00014 - 96 - 1 - 0056 , u.s .- hungarian science and technology joint fund under project jfno . 286 and 501 , and by the hungarian science foundation under grant nos . otka t17493 , f17166 . zt wishes to acknowledge r.k.p . zia and b. schmittmann for their support and permanent encouragement . zt has also been sponsored by the national science foundation through the division of material research .
|
a chaos control algorithm is developed to actively stabilize unstable periodic orbits of higher - dimensional systems . the method assumes knowledge of the model equations and a small number of experimentally accessible parameters . general conditions for controllability are discussed . the algorithm is applied to the hamiltonian problem of point vortices inside a circular cylinder with applications to an experimental plasma system .
|
large networks are everywhere. can we understand their structure and exploit it ?for example , understanding key structural properties of large - scale data networks is crucial for analyzing and optimizing their performance , as well as improving their reliability and security . in prior empirical and theoretical studiesresearchers have mainly focused on features like small world phenomenon , power law degree distribution , navigability , high clustering coefficients , etc .( see ) .those nice features were observed in many real - life complex networks and graphs arising in internet applications , in biological and social sciences , in chemistry and physics .although those features are interesting and important , as it is noted in , the impact of intrinsic geometrical and topological features of large - scale data networks on performance , reliability and security is of much greater importance .recently , a few papers explored a little - studied before geometric characteristic of real - life networks , namely the _ hyperbolicity _( sometimes called also the _ global curvature _ ) of the network ( see , e.g. , ) .it was shown that a number of data networks , including internet application networks , web networks , collaboration networks , social networks , and others , have small hyperbolicity .it was suggested in that property , observed in real - life networks , that traffic between nodes tends to go through a relatively small core of the network , as if the shortest path between them is curved inwards , may be due to global curvature of the network .furthermore , paper proposes that hyperbolicity in conjunction with other local characteristics of networks , such as the degree distribution and clustering coefficients , provide a more complete unifying picture of networks , and helps classify in a parsimonious way what is otherwise a bewildering and complex array of features and characteristics specific to each natural and man - made network " .the hyperbolicity of a graph / network can be viewed as a measure of how close a graph is to a tree metrically ; the smaller the hyperbolicity of a graph is the closer it is metrically to a tree .recent empirical results of on hyperbolicity suggest that many real - life complex networks and graphs may possess tree - like structures from a metric point of view . in this paper, we substantiate this claim through analysis of a collection of real data networks .we investigate few more , recently introduced graph parameters , namely , the _ tree - distortion _ and the _ tree - stretch _ of a graph , the _ tree - length _ and the _ tree - breadth _ of a graph , the gromov s _ hyperbolicity _ of a graph , the _ cluster - diameter _ and the _ cluster - radius _ in a _ layering partition _ of a graph .all these parameters are trying to capture and quantify this phenomenon of being metrically close to a tree and can be used to measure metric tree - likeness of a real - life network .recent advances in theory ( see appropriate sections for details ) allow us to calculate or accurately estimate those parameters for sufficiently large networks . by examining topologies of numerous publicly available networks , we demonstrate existence of metric tree - like structures in wide range of large - scale networks , from communication networks to various forms of social and biological networks . 
throughout this paperwe discuss these parameters and recently established relationships between them for unweighted and undirected graphs .it turns out that all these parameters are at most constant or logarithmic factors apart from each other .hence , a constant bound on one of them translates in a constant or almost constant bound on another .we say that a graph _ has a tree - like structure from a metric point of view _( equivalently , _ is metrically tree - like _ ) if anyone of those parameters is a small constant .recently , paper pointed out that `` although large informatics graphs such as social and information networks are often thought of as having hierarchical or tree - like structure , this assumption is rarely tested , and it has proven difficult to exploit this idea in practice ; ... it is not clear whether such structure can be exploited for improved graph mining and machine learning ... '' . in this paper , by bringing all those parameters together ,we not only provide efficient means for detecting such metric tree - like structures in large - scale networks but also show how such structures can be used , for example , to efficiently and compactly encode approximate distance and almost shortest path information and to fast and accurately estimate diameters and radii of those networks .estimating accurately and quickly distances between arbitrary vertices of a graph is a fundamental primitive in many data and graph mining algorithms .graphs that are metrically tree - like have many algorithmic advantages .they allow efficient approximate solutions for a number of optimization problems .for example , they admit a ptas for the traveling salesman problem , have an efficient approximate solution for the problem of covering and packing by balls , admit additive sparse spanners and collective additive tree - spanners , enjoy efficient and compact approximate distance and routing labeling schemes , have efficient algorithms for fast and accurate estimations of diameters and radii , etc .. we elaborate more on these results in appropriate sections . for the first time such metric parameters , as tree - length and tree - breadth , tree - distortion and tree - stretch , cluster - diameter and cluster - radius , were examined , and algorithmic advantages of having those parameters bounded by small constants were discussed for such a wide range of large - scale networks . this paper is structured as follows . in section [ sec : notions ] , we give notations and basic notions used in the paper . in section [ sec : datasets ] , we describe our graph datasets .the next four sections are devoted to analysis of corresponding parameters measuring metric tree - likeness of our graph datasets : layering partition and its cluster - diameter and cluster - radius in section [ sec : layer - partit ] ; hyperbolicity in section[ sec : hyperbol ] ; tree - distortion in section [ sec : td ] ; tree - breadth , tree - length and tree - stretch in section [ sec : tb ] . in each sectionwe first give theoretical background on the parameter(s ) and then present our experimental results .additionally , an overview of implications of those results is provided . in section [ appl ] , we further discuss algorithmic advantages for a graph to be metrically tree - like .finally , in section [ sec : concl ] , we give some concluding remarks .all graphs in this paper are connected , finite , unweighted , undirected , loopless and without multiple edges . 
for a graph , we use and interchangeably to denote the number of vertices in . also , we use and to denote the number of edges . the _ length of a path _ from a vertex to a vertex is the number of edges in the path .the _ distance _ between vertices and is the length of the shortest path connecting and in .the _ ball _ of a graph centered at vertex and with radius is the set of all vertices with distance no more than from ( i.e. , ) .we omit the graph name as in if the context is about only one graph .the _ diameter _ of a graph is the largest distance between a pair of vertices in , i.e. , .eccentricity _ of a vertex , denoted by , is the largest distance from that vertex to any other vertex , i.e. , .the _ radius _ of a graph is the minimum eccentricity of a vertex in , i.e. , .the _ center_ of a graph is the set of vertices with minimum eccentricity .definitions of graph parameters measuring metric tree - likeness of a graph , as well as notions and notations local to a section , are given in appropriate sections .our datasets come from different domains like internet measurements , biological datasets , web graphs , social and collaboration networks .table [ tab : datasets ] shows basic statistics of our graph datasets .each graph represents the largest connected component of the original graph as some datasets consist of one large connected component and many very small ones ..graph datasets and their parameters : number of vertices , number of edges , diameter , radius . [ cols="^,^,^,^,^ " , ] recall that the _ eccentricity _ of a vertex of a graph , denoted by , is the maximum distance from to any other vertex of , i.e. , .the _ diameter _ of is the largest eccentricity of a vertex in , i.e. , .the _ radius _ of is the smallest eccentricity of a vertex in , i.e. , .a vertex of with ( i.e. , a smallest eccentricity vertex ) is called a _ central vertex _ of .the _ center _ of is the set of all central vertices of .let also be the set of vertices of furthest from . in general ( even unweighted ) graphs ,it is still an open problem whether the diameter and/or the radius of a graph can be computed faster than the time needed to compute the entire distance matrix of ( which requires time for a general unweighted graph ) . on the other hand, it is known that both , the diameter and the radius , of a tree can be calculated in linear time . that can be done by using 2 breadth - first - search ( bfs ) scans as follows .pick an arbitrary vertex of . run a bfs starting from to find run a second bfs starting from to find then i.e. , is a _diametral pair _ of , and . to find the center of it suffices to take one or two adjacent middle vertices of the -path of .interestingly , in , chepoi et al . established that this approach of 2 bfs - scans can be adapted to provide fast ( in linear time ) and accurate approximations of the diameter , radius , and center of any finite set of -hyperbolic geodesic spaces and graphs . in particular , for a -hyperbolic graph , it was shown that if and then and furthermore , the center of is contained in the ball of radius centered at a middle vertex of any shortest path connecting and in . since our graph datasets have small hyperbolicities ,according to , few ( 2 , 3 , 4 , ... 
) bfs - scans , each next starting at a vertex last visited by the previous scan , should provide a pair of vertices and such that is close to the diameter of . surprisingly ( see table [ tab : diamradius ] ) , few bfs - scans were sufficient to get exact diameters of all of our datasets : for 13 datasets , 2 bfs - scans ( just like for trees ) were sufficient to find the exact diameter of a graph . two datasets needed 3 bfs - scans to find the diameter , and only one dataset required 4 bfs - scans to get the diameter . we also computed the eccentricity of a middle vertex of a longest shortest path produced by these few bfs - scans and reported this eccentricity as an estimation for the graph radius . it turned out that the eccentricity of that middle vertex was equal to the exact radius for 6 datasets , was only one apart from the exact radius for 8 datasets , and only for 2 datasets was two units apart from the exact radius .

based on solid theoretical foundations , we presented strong evidence that a number of real - life networks , taken from different domains like internet measurements , biological datasets , web graphs , social and collaboration networks , exhibit metric tree - like structures . we investigated a few graph parameters , namely , the tree - distortion and the tree - stretch , the tree - length and the tree - breadth , the gromov hyperbolicity , the cluster - diameter and the cluster - radius in a layering partition of a graph , which capture and quantify this phenomenon of being metrically close to a tree . recent advances in theory allowed us to calculate or accurately estimate these parameters for sufficiently large networks . all these parameters are at most constant or ( poly)logarithmic factors apart from each other . specifically , graph parameters , , , , are within small constant factors from each other . parameters and are within a factor of at most from , , , , . tree - stretch is within a factor of at most from hyperbolicity . one can summarize those relationships with the following chains of inequalities : if one of these parameters or its average version has a small value for a large - scale network , we say that that network has a metric tree - like structure . among these parameters the theoretically smallest ones are , and ( being at most ) . our experiments showed that the average versions of and of also have very small values for the investigated graph datasets . in table [ tab : allmeasures ] , we provide a summary of metric tree - likeness measurements calculated for our datasets . fig . [ fig : tree - likeness-1charts ] shows four important metric tree - likeness measurements ( scaled ) in comparison . fig . [ fig : tree - likeness - charts ] gives pairwise dependencies between those measurements ( one as a function of another ) .

table [ tab : allmeasures ] :

graph | diameter | radius | cluster-diameter | avg. diameter of clusters in a layering partition | hyperbolicity | tree avg. distortion* (rounded) | avg. distortion | avg. distortion | cluster-radius
ppi | 19 | 11 | 8 | 0.118977384 | 3.5 | 1.38471 | 5.70566 | 5.29652 | 4
yeast | 11 | 6 | 6 | 0.119575699 | 2.5 | 1.32182 | 4.37781 | 3.79318 | 4
dutchelite | 22 | 12 | 10 | 0.070211316 | 4 | 1.41056 | 5.45299 | 6.53269 | 6
epa | 10 | 6 | 6 | 0.06698375 | 2.5 | 1.26507 | 4.50619 | 4.06901 | 4
eva | 18 | 10 | 9 | 0.031879981 | 1 | 1.13766 | 5.83084 | 7.77752 | 5
california | 13 | 7 | 8 | 0.092208234 | 3 | 1.35380 | 4.15785 | 4.98668 | 4
erdős | 4 | 2 | 4 | 0.001113232 | 2 | 1.04630 | 3.08843 | 3.06705 | 2
routeview | 10 | 5 | 6 | 0.063264697 | 2.5 | 1.23716 | 4.28302 | 4.80363 | 3
homo release 3.2.99 | 10 | 5 | 5 | 0.03432595 | 2 | 1.18574 | 4.64504 | 3.96703 | 3
as_caida_20071105 | 17 | 9 | 6 | 0.056424679 | 2.5 | 1.22959 | 4.24314 | 4.76795 | 3
dimes 3/2010 | 8 | 4 | 4 | 0.056582633 | 2 | 1.19626 | 3.43833 | 3.35917 | 2
aqualab 12/2007-09/2008 | 9 | 5 | 6 | 0.05826733 | 2 | 1.28390 | 4.23183 | 4.54116 | 3
as_caida_20120601 | 10 | 5 | 6 | 0.055568105 | 2 | 1.16005 | 4.10547 | 4.53051 | 3
itdk0304 | 26 | 14 | 11 | 0.270377048 | | 1.57126 | 5.370078 | 5.710122 | 6
dblp - coauth | 23 | 12 | 11 | 0.45350002 | | 1.74327 | 5.57869 | 5.12724 | 7
amazon | 47 | 24 | 21 | 0.489056144 | | 2.47109 | 8.81911 | 7.87004 | 12

from the experiment results we observe that in almost all cases the measurements seem to be monotonic with respect to each other : the smaller one measurement is for a given dataset , the smaller the other measurements are . there are also a few exceptions . for example , the eva dataset has a relatively large cluster - diameter , , but small hyperbolicity , . on the other hand , the erdős dataset has while its hyperbolicity is equal to 2 ( see figure [ fig : delta - clusdiam ] ) . yet the erdős dataset has better embeddability ( smaller average distortions ) to trees and than that of eva , suggesting that the ( average ) cluster - diameter may have greater impact on the embeddability into trees and . comparing the measurements of erdős vs. homo release 3.2.99 , we observe that both have the same hyperbolicity 2 , but erdős has better embeddability ( average distortion ) to trees . this could be explained by the smaller and average diameter of clusters in the erdős dataset . comparing measurements of ppi vs. california ( the same holds for as_caida_20071105 vs. as_caida_20120601 ) , both have the same and values but california ( as_caida_20120601 ) has smaller hyperbolicity and average diameter of clusters . we also observe that the datasets routeview and as_caida_20071105 have the same values of , and , but as_caida_20071105 has a relatively smaller average diameter of clusters . this could explain why as_caida_20071105 has relatively better embeddability to and than routeview . we can see that the difference in average diameters of clusters was relatively small , resulting in a small difference in embeddability . from these observations , one can suggest that for the classification of our datasets all these tree - likeness measurements are important ; they collectively capture and explain the metric tree - likeness of these networks . we suggest that metric tree - likeness measurements , in conjunction with other local characteristics of networks such as the degree distribution and clustering coefficients , provide a more complete unifying picture of networks .
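the repeated bfs sweeps used above to obtain the diameters and radii reported in table [ tab : diamradius ] are simple to state in code . the sketch below is our own minimal rendering of that heuristic for an unweighted graph given as an adjacency list ; it returns a diameter estimate after a fixed number of sweeps , together with the eccentricity of a middle vertex of the last long path as a radius estimate ( one extra bfs ) .

```python
# sketch: iterated bfs sweeps for estimating the diameter (and radius) of an
# unweighted graph; `adj` is a dict mapping a vertex to a list of neighbours.
from collections import deque

def bfs(adj, source):
    dist, parent = {source: 0}, {source: None}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w], parent[w] = dist[v] + 1, v
                queue.append(w)
    far = max(dist, key=dist.get)            # a vertex farthest from the source
    return far, dist, parent

def estimate_diameter_and_radius(adj, sweeps=4):
    start = next(iter(adj))
    u, _, _ = bfs(adj, start)                # first sweep: find a far vertex u
    best, endpoints = 0, (u, u)
    for _ in range(sweeps):
        v, dist, _ = bfs(adj, u)             # next sweep starts from the last far vertex
        if dist[v] > best:
            best, endpoints = dist[v], (u, v)
        u = v
    # middle vertex of the endpoints' shortest path; its eccentricity estimates the radius
    _, _, parent = bfs(adj, endpoints[0])
    path, w = [], endpoints[1]
    while w is not None:
        path.append(w)
        w = parent[w]
    mid = path[len(path) // 2]
    _, dist_mid, _ = bfs(adj, mid)
    return best, max(dist_mid.values())

# tiny usage example on the path a-b-c-d
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(estimate_diameter_and_radius(adj))     # (3, 2)
```

each sweep is a single linear - time bfs , which is what makes this heuristic practical even for the larger datasets in the tables above .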
|
based on solid theoretical foundations , we present strong evidence that a number of real - life networks , taken from different domains like internet measurements , biological data , web graphs , social and collaboration networks , exhibit tree - like structures from a metric point of view . we investigate a few graph parameters , namely , the tree - distortion and the tree - stretch , the tree - length and the tree - breadth , the gromov hyperbolicity , the cluster - diameter and the cluster - radius in a layering partition of a graph , which capture and quantify this phenomenon of being metrically close to a tree . by bringing all those parameters together , we not only provide efficient means for detecting such metric tree - like structures in large - scale networks but also show how such structures can be used , for example , to efficiently and compactly encode approximate distance and almost shortest path information and to quickly and accurately estimate the diameters and radii of those networks . estimating the diameter and the radius of a graph or distances between its arbitrary vertices is a fundamental primitive in many data and graph mining algorithms .
|
in chemical reactions , it is common that a certain reaction should in principle be allowed , but in reality can not take place ( or occurs at extremely low rates ) because of the presence of some large energy barrier . fortunately , the situation is sometimes redeemed by the presence of certain chemical substances , referred to as catalysts , which effectively lower the energy barrier across the transformation .that is to say , catalysts significantly increase the reaction rates . importantly, these catalysts can remain unchanged after the occurrence of the reaction , and hence a small amount of catalytic substance could be used repeatedly and is sufficient to facilitate the chemical reaction of interest .the basic principles of chemical reactions are governed by thermodynamic considerations such as the second law .there have specifically been a number of recent advances in the quest of understanding the fundamental laws of thermodynamics .these efforts are especially focused on the quantum nano - regime , where finite size effects and quantum coherences are becoming increasingly relevant .one particularly insightful approach is to cast thermodynamics as a resource theory , reminding of notions in entanglement theory . in this framework , thermodynamics can be seen as the theory that describes conditions for state transformation from some quantum state to another under _ thermal operations _ ( to ) . the notion of to means allowing for the full set of global unitaries which are energy preserving in the presence of some thermal bath .this is a healthy and fruitful standpoint , and allows the application of many concepts and powerful tools derived from information theory . in the context of thermal operations , catalysts emerge as ancillatory systems that facilitate state transformation processes : there are cases where is not possible , but there exists a state such that _ is _ possible .the metaphor of catalysis is appropriate indeed : this implies that by using such a catalyst , one is enabled to perform the thermodynamic transformation , while returning the catalyst back in its _ exact _ original form .this is called _exact catalysis_. 
the inclusion of catalyst states in thermal operations serves as an important step towards an eventual complete picture of quantum thermodynamics ; it allows us to describe thermodynamic transformations in the full picture , where the system is interacting with experimental apparatus , for example a clock system . furthermore , it has been shown that one can obtain necessary and sufficient conditions for exact catalysis in terms of a whole family of generalised free energies . the ordinary second law of ever - decreasing free energy is but the constraint on one of these free energies . naturally , for physically realistic scenarios inexact catalysis is anticipated , where the catalyst is returned except for a slight degradation . however , rather surprisingly , it has been shown that at least in some cases , the conditions for catalytic transformations are highly non - robust against small errors induced in the catalyst . the form of the second law thus depends crucially on the measure used to quantify inexactness . in particular , if inexactness is defined in terms of small trace distance , then there is no second law at all : for any one could pick _ any _ two states and , and starting from , get -close in terms of trace distance to via thermal operations . we refer to this effect as _ thermal embezzling _ : here one observes that instead of merely catalysing the reaction , energy / purity has possibly been extracted from the catalyst and used to facilitate thermodynamic transformations , while leaving the catalyst state arbitrarily close to being intact . on physical grounds , such a setting seems implausible , even though it is formally legitimate . a clarification of this puzzle seems very much warranted . arguing formally , a first hint towards a resolution may be provided by looking at how the error depends on the system size . naturally , the trace distance error depends on the dimension of the catalyst states ; nevertheless one can find examples of catalysts where as approaches infinity . while examples show that in principle thermal embezzling may occur , hardly anything else is known otherwise . indeed , it would be interesting to understand the crucial properties that distinguish between a catalyst and an active reactant in thermodynamics . from a physical perspective , it seems highly desirable to understand to what extent the effect of embezzling can even occur for physically plausible systems . in this work , we highlight both the power and limitations of thermal catalysis , by providing comprehensive answers to the questions raised above . firstly , we construct a family of catalyst states depending on dimension , which achieves the optimal trace distance error while facilitating the state transformation , for and being some arbitrary -dimensional states . this is done for the regime where the hamiltonians of the system and catalyst are trivial . secondly , we show that thermal embezzling with arbitrary precision can not happen under reasonable constraints on the catalyst . more precisely , whenever the dimension of the catalyst is bounded , we derive non - zero bounds on the trace distance error .
by making use of splitting techniques to simplify the optimization problems of interest, such bounds can also be obtained when the expectation value of energy of the catalyst state is finite, for catalyst hamiltonians with _ unbounded _ energy eigenvalues and a finite partition function. we hence set very strong limitations on the possibility of enlarging the set of allowed operations in quantum thermodynamics, if systems with reasonable hamiltonians are considered. we begin by exploring the case of vanishing, trivial hamiltonians, where it is known that thermal embezzling can occur. this is also the simplest case of thermodynamics as a resource theory: all energy levels are fully degenerate, and the hamiltonian is simply proportional to the identity operator. entropy and information, instead of energy, become the main quantities that measure the usefulness of resources. in such cases, the sole condition governing a transition from some quantum state to another is that the eigenvalue vector of the initial state majorizes that of the final state. this is commonly denoted as . such a condition also implies that entropy can never decrease under thermal operations. to investigate thermal embezzling in this setting, one asks: given fixed dimensions, what is the smallest error such that there exists a catalyst pair satisfying eq. , with the trace distance between the input catalyst and the output catalyst not greater than that error? this trace distance is used as the measure of catalytic error throughout our analysis. if some catalyst pair satisfies condition eq. with this trace distance, then it also facilitates the transformation between any -dimensional states. this is because a pure state majorizes any other state, while the maximally mixed state is majorized by any other state. since majorization conditions depend solely on the eigenvalues of the density matrices and , one can phrase this problem of state transformation as a linear minimization program over catalyst states diagonal and ordered in the same basis ( see appendix ). in fact, the catalyst eigenvalues which give rise to the optimal trace distance error can be found by such a linear program, although these eigenvalues are non-unique. whenever and where is an integer, we provide an analytic construction of catalyst states, which we later show to be optimal for the state transformation in eq. . let the initial catalyst state , where . note that our catalyst state does not have full rank, and this is crucial for the majorization condition in eq. to hold: the majorization relation bounds the rank of the majorizing state, and the initial joint state can therefore have at most rank . the output catalyst can be obtained from the input by subtracting a small value from the largest eigenvalue and distributing the amount equally over the remaining indices, which makes the output catalyst a state of full rank. we show that this family achieves a trace distance error which we prove by mathematical induction to be optimal for fixed dimensions ( see appendix ). [ figures 1 and 2: eigenvalues of our catalyst ( blue ) versus those of the state proposed in ref. ( red, dashed ), for two parameter choices. ] figs. [ fig:1 ] and [ fig:2 ] compare our final catalyst state with the state , with being the normalization constant; the family was proposed in ref. for embezzling in the locc setting. in fig. [ fig:3 ], we compare the trace distance error achieved by the catalyst from ref.
with the error achieved by our catalyst .we see that for small dimensions , our catalyst outperforms , however asymptotically the error scales with for both catalysts .and [ fig:2 ] ( red , dashed ) , for the case where . ] in this section , we are interested in finding additional physical restrictions which prevent thermal embezzling . to do so , we look at general hamiltonians for both the system and catalyst , where the energy of the system comes into play . in , it is shown that the monotonicity of quantum rnyi divergences form necessary conditions for state transformations .more , precisely , for arbitrary and , if is possible via catalytic thermal operations , then for all , holds , where is the thermal state of system , at temperature of the thermal bath .. implies that one can use the monotonicity of rnyi divergences to find lower bounds on thermal embezzling error for state transformation between arbitrary states and . for simplicity , we present the case where and are diagonal ( in the energy eigenbasis of )the case for arbitrary states can be treated similarly , and details are given in the appendix .for the case where two states and are diagonal , the rnyi divergences are defined as where are the eigenvalues of , and . again , for states and diagonal , it suffices to look at a single transformation where is the pure energy eigenstate with energy .note that both and are diagonal in the energy eigenbasis . as explained in the appendix , eq .is sufficient to ensure universal thermal embezzling for aribtrary states and as long as they are diagonal in the same energy eigenbasis .similarly , one can take and to be diagonal in the energy eigenbasis of .this can be written as the following minimization problem , being the solution of where is the thermal state of the catalyst and system .the system hamiltonian is assumed to be finite . a straightforward relaxation of eq .allows us to now consider an alternative problem for some fixed from ref . , we know that any feasible for eq . is also feasible for eq . .therefore , for any , . by choosing can arrive at much simpler optimization problems , that provide lower bounds for the trace distance error .we apply this to study two cases , detailed as below . *1 . bounded dimension : * consider the case where both the system and catalyst hamiltonians have fixed dimensions , and denote the maximum energy eigenvalues as respectively .one sees that the solution of eq .is lower bounded by eq . for that w.l.o.g .we can assume that and are diagonal in the same basis , which we take to be the energy eigenbasis .eq . can be rewritten as where are the probabilities defined by the thermal state of the catalyst hamiltonian , and is the partition function of the catalyst system . to solve this problem, we note that the optimal strategy to maximize the quantity within the of is to increase one of the eigenvalues by , so that the quantity is maximized . with further details in the appendix, we show that the trace distance error can therefore be lower - bounded by where are the partition functions of the system and catalyst .although this bound is valid for arbitrary finite - dimensional hamiltonians , it is not tight . indeed , in the case of trivial hamiltonians where all states have constant energy value , normalized to 0 , the partition functions reduce to the dimension of the system and catalystthis bound then yields , which is much weaker than the optimal trace distance we derived in eq . 
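to make the role of the rényi divergences concrete, the short sketch below computes d_alpha for diagonal (classical) states relative to a gibbs distribution and illustrates the monotonicity invoked above: under a map that mixes a state towards the thermal state (used here only as a convenient example of a free operation), every divergence to the thermal state can only decrease, by the data processing inequality. the toy spectrum, inverse temperature and mixing parameter are our own illustrative choices, not taken from the text.

```python
import numpy as np

def renyi_div(p, q, alpha):
    """classical renyi divergence d_alpha(p || q) in nats (q must have full support)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if np.isinf(alpha):
        return float(np.log(np.max(p / q)))
    if np.isclose(alpha, 1.0):
        m = p > 0
        return float(np.sum(p[m] * np.log(p[m] / q[m])))
    return float(np.log(np.sum(p ** alpha * q ** (1.0 - alpha))) / (alpha - 1.0))

# gibbs distribution of an illustrative four-level system
beta, energies = 1.0, np.array([0.0, 0.5, 1.0, 2.0])
gibbs = np.exp(-beta * energies); gibbs /= gibbs.sum()

p = np.array([0.7, 0.2, 0.05, 0.05])        # a diagonal state, far from thermal
p_after = 0.7 * p + 0.3 * gibbs             # partially thermalized version of p

for a in (0.5, 1.0, 2.0, np.inf):
    print(f"alpha = {a}: {renyi_div(p, gibbs, a):.4f} -> {renyi_div(p_after, gibbs, a):.4f}")
```

every printed divergence decreases from the first to the second column, mirroring the necessary conditions for state transformations used above.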
.hamiltonians with unbounded energy levels : * a more general result holds for unbounded dimension and energy levels where the partition function is finite .more precisely , for such cases , we show that setting an upper bound on the average energy of the catalyst state limits thermal embezzling .let us now explain the proof of our results .consider some with unbounded energy levels .for simplicity , we restrict ourselves to the case where the catalyst states are diagonal in the energy eigenbasis , and assume the system hamiltonian to be trivial with dimension . a more general derivation involving arbitrary system hamiltoniansmay be found in the appendix .+ _ a ) formulation of the problem : _ consider the minimization of catalytic error under the relaxed constraint that monotonicity for the -rnyi divergence is satisfied .using eq .with , by substituting , the first constraint can be simplified as follows furthermore , we want that the initial catalyst state must have a expectation value of energy no larger than some finite . in summary , we now look at the minimization of trace distance under the following constraints where .denote the solution of this problem as .in the subsequent steps , our goal is to show that is lower bounded by a non - zero constant , by making use of techniques of convex relaxations of optimisation problems . as such, this is an intricate problem , as it is a non - convex problem both in and ._ b ) splitting a relaxed minimization problem : _ the key idea to proceed is to suitably split the problem into two independent optimization problems in a relaxation , which then turn out to be convex optimization problems the duals of which can be readily assessed .the starting point of this approach is rooted in the observation that for any ] , the rnyi divergence of relative to is defined as \ ] ] for diagonal in the same basis , let and denote the eigenvalue vectors of the respectively .then the rnyi divergences reduce to the form it has been shown that the quantities are thermal monotones , where is the thermal state of the system of interest .intuitively , this implies that thermal operations can only bring the system of interest closer to its thermal state with the same temperature as the bath .we detail this in the following lemma [ lem : secondlaw ] .[ lem : secondlaw ] given some hamiltonian , consider arbitrary states , where is possible via catalytic thermal operations .denote by the thermal state of system .then for any , furthermore , for any diagonal in , if eq . 
holds for all , then is possible via catalytic thermal operations .in essence , lemma [ eq : lemsecondlaw ] implies that the monotonicity of rnyi divergences are necessary conditions for arbitrary state transformation , and for the case of states diagonal ( in the energy eigenbasis ) , they are also sufficient .let us also use a notation which was introduced in for diagonal states : we say that there exists a catalyst such that , if via catalytic thermal operations .we refer to the notion as thermo - majorization .now , let us consider the scenario of preparing a pure excited state of maximum energy from a thermal state .intuitively , if we concern ourselves only with diagonal state transformations , then this is the hardest thermal embezzling scenario possible .this is because for is possible for any diagonal .therefore , whenever we investigate the case where involved states are diagonal , it suffices to analyse the preparation of such a pure excited state .the necessary and sufficient conditions are in the next lemma , we show that given fixed hamiltonians and dimensions , any catalyst state that succeeds in preparing such a state can also be used to facilitate any other state transformation .[ lem : univ_emb ] suppose there exists diagonal ( in ) such that holds , and .then for any states diagonal ( in ) , holds as well .this can be proven by noting that is equivalent with the existence a thermal operation denoted by , such that .it remains to show that for any , there exists a thermal operation such that . since the thermal state is thermo - majorized by any state , and thermo - majorizes any other state , there exist thermal operations such that and .finally , consider then one sees that .this implies that .in this section we look at a specific thermodynamic transformation involving system ( ) and catalyst ( ) states of any dimension and respectively . for the trivial hamiltonian where all states have same energy ,the thermal state of the system is simply the fully mixed state , while any pure state corresponds to , so we simply pick without loss of generality .note that thermo - majorization conditions are reduced to the simplest form , i.e. that is possible if and only if the initial state majorizes the latter , i.e. in this section we give a construction of catalyst states which allow this transformation , and prove that our construction achieves the optimal trace distance in any fixed dimension .furthermore , these states are universal embezzlers , since any catalyst which successfully creates from would also allow to obtain any from any , as shown in lemma [ lem : univ_emb ]. 
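the two elementary facts used in this proof, namely that a pure state (thermo-)majorizes everything and that the thermal state is majorized by everything, reduce for trivial hamiltonians to ordinary majorization of eigenvalue vectors, which is easy to test numerically. the sketch below (the helper names are ours) checks the partial-sum criterion and also shows how one would test the catalytic condition for the hardest diagonal transformation, from the maximally mixed state to a pure state.

```python
import numpy as np

def majorizes(p, q, tol=1e-12):
    """true if the distribution p majorizes q: every partial sum of the decreasingly
    sorted p dominates the corresponding partial sum of q."""
    p = np.sort(np.asarray(p, float))[::-1]
    q = np.sort(np.asarray(q, float))[::-1]
    return bool(np.all(np.cumsum(p) >= np.cumsum(q) - tol))

def joint(system, catalyst):
    """spectrum of the tensor product of two diagonal states."""
    return np.outer(system, catalyst).ravel()

rng = np.random.default_rng(0)
m = 6
pure = np.eye(m)[0]                    # spectrum of a pure state
mixed = np.full(m, 1.0 / m)            # maximally mixed state
generic = rng.dirichlet(np.ones(m))    # a generic full-rank state

print(majorizes(pure, generic))        # True: a pure state majorizes any state
print(majorizes(generic, mixed))       # True: any state majorizes the maximally mixed one

# catalytic condition for the hardest diagonal transformation, mixed -> pure:
# does (maximally mixed) x (input catalyst) majorize (pure) x (output catalyst)?
c_in, c_out = rng.dirichlet(np.ones(8)), rng.dirichlet(np.ones(8))
print(majorizes(joint(mixed, c_in), joint(pure, c_out)))   # generically False
```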
consider integers and where .let be the set of -dimensional catalyst state pairs enabling the transformation let .[ def : catalystset ] we offer the following construction of catalyst input and output states in any dimension where and are integers .we take the output catalyst , where a simple way to visualise this is as follows : for the first elements , the distribution is uniform with some probability ; for the next up to elements the distribution is uniform again , with probability ; and so on up to .the initial is then chosen so that the full distribution is normalised .we choose the input catalyst state to be , where such a state is obtained from by setting all the probabilities for to be zero , while renormalizing by increasing the largest peak of the probability distribution .note that while for all .the trace distance between and can be calculated to be this shows that since we have constructed a specific state pair achieving this trace distance . in the next section we will see that for catalysts satisfying eq . , smaller values of trace distance can not be achieved , which implies that eq .( [ ineq ] ) is true with equality , and the family presented above is optimal . in this sectionwe show by induction that recall that our problem is to minimize over states the trace distance such that eq .is satisfied .we first show that it suffices to minimize over states which are diagonal in the same basis .[ lem : min_diag_ord ] consider fixed n - tuples of eigenvalues and , such that and are diagonal in two different bases .if satisfies eq ., then there exists such that and that also satisfies eq . .there are two steps in this proof : firstly , we construct from and show that the trace distance decreases by invoking data processing inequality .then , we use schur s theorem to show that majorization holds .let , where is the fully dephasing channel in the basis .note that since is already diagonal in , .because the trace distance is non - increasing under quantum operations , we have on the other hand , we will show that . for any matrix ,let be the vector of its eigenvalues .we want to show that .recall that and , from the definition of , observe that the eigenvalues are precisely the diagonal elements of in the basis .schur s theorem ( , chapter 9 , theorem b.1 . ) says that for any hermitian matrix , the diagonal elements of are majorized by .therefore , and thus . making use of the initial assumption , we now see that which concludes the proof .we are now ready to establish our lower bound on , where will use the fact established in the previous lemma [ lem : min_diag_ord ] that we can take both states to be diagonal in the same basis .consider integers and where .then where is defined in eq . .hence , the family of catalyst states from section [ def : catalystset ] is optimal .the majorization condition only depends on the eigenvalues of and . for fixed eigenvalues ,the trace distance is minimized if the two states share the same eigenbasis and the eigenvalues are ordered in the same way , _e.g. _ , in decreasing order , as discussed in lemma [ lem : min_diag_ord ] .hence , from now on we consider only diagonal states and , where and .here , denotes the diagonal matrix with the corresponding diagonal elements . to prove the theorem we only need to show that as the other inequality follows from the family of embezzling states exhibited in section [ sec : optfamily ] .we use induction on the power . 
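before turning to the proof, the family just described can be checked numerically. the sketch below reconstructs it as we read the construction: the output catalyst is uniform at some value on the first m indices, at 1/m of that value on the next m^2 - m indices, and so on down the staircase, while the input catalyst zeroes everything beyond the first m^(k-1) indices and piles the removed mass on the largest eigenvalue. the explicit normalisation is our reconstruction and should be treated as illustrative; the script verifies the majorization condition and prints the resulting trace distance.

```python
import numpy as np

def staircase_pair(m, k):
    """reconstructed catalyst pair (c_in, c_out) of dimension n = m**k for the
    transformation (maximally mixed, dim m) -> (pure state, dim m)."""
    n = m ** k
    v = 1.0 / (k * (m - 1) + 1)                 # first plateau height (our normalisation)
    c_out = np.empty(n)
    c_out[:m] = v
    for l in range(2, k + 1):                   # block l occupies indices (m**(l-1), m**l]
        c_out[m ** (l - 1): m ** l] = v / m ** (l - 1)
    c_in = c_out.copy()
    cut = m ** (k - 1)
    c_in[0] += c_in[cut:].sum()                  # pile the truncated tail on the largest peak
    c_in[cut:] = 0.0
    return c_in, c_out

def majorizes(p, q, tol=1e-12):
    p, q = (np.sort(np.asarray(x, float))[::-1] for x in (p, q))
    return bool(np.all(np.cumsum(p) >= np.cumsum(q) - tol))

m, k = 3, 4
c_in, c_out = staircase_pair(m, k)
mixed, pure = np.full(m, 1.0 / m), np.eye(m)[0]
lhs = np.outer(mixed, c_in).ravel()              # spectrum of (mixed) x (input catalyst)
rhs = np.outer(pure, c_out).ravel()              # spectrum of (pure) x (output catalyst)
print("majorization holds:", majorizes(lhs, rhs))
print("trace distance    :", 0.5 * np.abs(c_in - c_out).sum())
```

with this illustration in hand, we return to the proof of lemma [ lem : min_diag_ord ].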
for the base case , we need to show that .consider any feasible solution in dimension .from the majorization condition it follows that and for . hence , and .since is the largest of the values , we get for all . finally , a simple calculation reveals that , which establishes the base case . for the inductive step , we assume that for some and aim to show that for .the main idea is to consider an optimal catalyst pair and from it construct a catalyst pair in dimension .since our construction will allow to relate to , we then obtain a lower bound on in terms of as in eq .( [ indu ] ) .let us start by using the state pair that satisfies eq .and achieves , and from it derive some useful properties .firstly , pick so that .as before , without loss of generality , we assume that and where and .the majorization condition again implies that and for . to further simplify matters, we can also assume that for all .this is because we can always replace with , where for and is chosen so that . in essence , all the majorization advantage of against can be piled upon the first , largest eigenvalue of .this replacement is valid since still satisfies the majorization condition .furthermore , implies that the distance is unchanged .subsequently , we proceed to bound . to do this , construct a catalyst pair in dimension .essentially , this is done by directly applying a cut to the dimension of the final catalyst state , reducing it to having dimension .similarly , the same amount of probability is cut from the initial state , and both states are renormalized .let us decribe this in more detail : denote and pick index and value so that .note that , since the majorization condition eq .implies that this inequality is obtained by summing up the first elements of both distributions in the l.h.s . and r.h.s .we now define since the states and are properly normalized . to establish that , we need to show that the majorization condition holds true .we consider two separate cases : when , and when .if , then the inequalities in the majorization condition for have already been enforced by the majorization condition of .hence , is a valid catalyst pair in dimension , _i.e. _ , .let us now make the following two observations .1 . . to see this , recall that for , and thus 2 . . to see this , note that since only the first diagonal element of is strictly larger than the corresponding diagonal element of . combining observations( 1 ) and ( 2 ) gives d(\sigma,\sigma ' ) \ge ( 1-d_{m , k } ) d_{m , n},\ ] ] since rearranging gives us and we have completed the inductive step . if , then the majorization inequalities involving might fail to hold .therefore , instead of we consider the following , slightly different , pair of states where the diagonal elements of are still in descending order , and the state is properly normalized . to argue that is a valid pair of catalyst states , we need to verify the majorization inequalities that are not directly implied by the majorization condition for .that is , we need to verify that for all , where and .we can see that this is true for the state pair because in this regime of eq ., both sides increase linearly with the indices , and for the endpoints and , the l.h.s . 
is higher than the r.h.s ., which is guaranteed by the majorization condition for , therefore , for any .taking yields the desired inequality ( [ eq : majcond ] ) and hence is a valid catalyst pair .lastly , note that reasoning similar to the one in equation can be used to deduce that therefore , and we can use the argument from the previous case to complete the inductive step . by this proof of inductionwe have shown that for all and .this together with the conclusion in section [ sec : optfamily ] that proves that and the state pair described in eq . andis optimal .in our work , we use two particular quantities , which are the rnyi divergences for and , which for classical probability distributions have the following form : as mentioned in section [ subsec : d_alpha ] , given hamiltonians and , it suffices to consider here , we prove whenever the dimension of the catalyst ( and system ) are finite , there exists a lower bound on the accuracy of thermal embezzling .such a bound is dependent on and . to do so , consider the problem in ref . , it has been shown that for initial and target states commuting with the hamiltonian ,it is sufficient to consider catalyt states commuting with . therefore , since and both commute with , it is sufficient to consider input and output catalysts states which are diagonal in the basis of . since all rnyi divergences are thermal monotones according to lemma [ lem : secondlaw ] , in particular the min - relative entropy ( ) , for , where and are the eigenvalues of respectively . therefore , satisfying the thermo - majorization conditions in eq .implies that to further simplify this expression , note that and that .the additivity of rnyi divergences under tensor products holds for all states .furthermore , for any .therefore , we arrive at the expression where is the partition function of the system .the spectral values of and are denoted as and , respectively .using the definition of as shown in eq . , we obtain where are the eigenvalues of the thermal state for the catalyst , for the energy eigenstate with energy eigenvalue , with normalization , the partition function of the catalyst .since is the minimum trace distance between states , and depends only on the maximum of across the distribution , the optimal strategy to increase while going from to is to increase a specific by an amount .therefore , we can consider a relaxation of eq . in the next lemma , we show that whenever .[ lem : lb_err_catalysis ] consider system and catalyst hamiltonians which are finite - dimensional , and denote , to be the set of energy eigenvalues respectively . then for some fixed , consider any probability distribution ( which corresponds to eigenvalues of a catalyst ) , and such that where . note that index runs over all energy levels .then in other words , thermal embezzling of diagonal states with arbitrary accuracy is not possible .firstly , let indicate the pair such that .then the first term of l.h.s . is equal to , and therefore can be grouped with the r.h.s . 
to form since we know that , therefore .finally , taking the maximization of over gives 1/ , recall that corresponds to probabilities of the thermal state being in the eigenstate with energy .therefore , , and we get the case of arbitrary states are treated separately , since our lemma [ lem : univ_emb ] on universal embezzlers hold only for diagonal states , where necessary and sufficient conditions are known for state transformations .nevertheless , since the monotonicity of is necessary for arbitrary state transformations , one can use techniques very similar to those in section [ subsec : thermalcat_diag ] to lower bound the embezzling error , if we minimize over diagonal catalysts .more precisely , denote to be the solution of recall that , and that is additive under tensor products .therefore , by defining we can rearrange the first constraint in eq . note that this is almost equivalent to eq ., except the constant previously is now replaced with . by following the same steps used to prove lemma [ lem : lb_err_catalysis ] , we obtain a lower bound depending on .[ lem_err_catalysis_arb ] consider system and catalyst hamiltonians which are finite - dimensional , and denote and to be the set of energy eigenvalues respectively .then for some fixed , consider any probability distribution ( which corresponds to eigenvalues of a catalyst ) , and such that where and .note that index runs over all energy levels .then \frac{e^{-\beta e^c_{\rm max}}}{z_c } \neq 0.\ ] ] this implies thermal embezzling with arbitrary accuracy , using a diagonal catalyst is not possible .comparing lemma [ lem : lb_err_catalysis ] and lemma [ lem_err_catalysis_arb ] which are very similar , one sees that for non - diagonal states lemma [ lem_err_catalysis_arb ] gives a state - dependent lower bound on the embezzling error .however for diagonal states , the bound in lemma [ lem : lb_err_catalysis ] can be made state - independent because of the existence of universal embezzlers . rather than bounding the dimension of the catalyst, one can ask if restrictions on other physical quantities such as the average energy of the catalyst would prevent indefinitely accurate embezzling from occurring . while this by itself is an independently interesting problem, we can first note that such restrictions are sometimes related to restrictions on the dimension . in one directionthis is straightforward : if the catalyst is finite - dimensional , then the average energy and all other moments of energy distribution would be finite as well . here, we show that by restricting the first and second moments of the energy distribution of the catalyst to be finite , this implies that the states involved are always close to finite - dimensional states .in other words , if we consider the set of catalysts such that the average and variance of energy is finite , then for any such catalyst state from this set , there always exists a finite - dimensional state -close to it .this can be shown by invoking a simple theorem , namely the chebyshev inequality which says that for given any finite non - zero error , the support of the energy distribution must be finite .[ lem : chebyshev ] consider a random variable with finite mean and finite variance , then for all , \leq \frac{\sigma_x^2}{k^2}.\ ] ] consider a probability distribution over some non - degenerate energy values , where both mean , and variance ^ 2\rangle ] . 
for any , choosing sufficiently large, lemma [ lem : chebyshev ] guarantees that the probability of the energy deviating from its mean by at least that amount is at most . in this section we provide lower bounds for the error in catalysis, given constraints on the average energy of the catalyst state. we do so by adding a constraint on the average energy of the catalyst to the problem stated in eq. . by looking at the rényi divergence for , we can show a non-zero lower bound on the catalytic error for cases where the partition function of the catalyst hamiltonian is finite. this minimal assumption covers most physical scenarios, especially if we want the thermal state to be a trace class operator to begin with. again we start with diagonal states, and later generalize to arbitrary states. firstly, let us recall the problem stated in eq. . we aim at minimizing the trace distance between input and output catalyst states, such that the most significant thermal embezzlement of a smaller system can be achieved. we denote again the initial and final catalysts by and , with spectral values and . again, by restricting ourselves to catalysts diagonal in the hamiltonian basis, and by invoking only the thermal monotone , one can find the alternative relaxed problem where and . furthermore, since with forming a probability distribution ( that of a thermal state ), one can deduce that whenever the dimension of the system is , holds as well. the solution of this minimization problem serves as a lower bound on the optimal trace distance error. this problem can be relaxed to a convex optimisation problem; we can, however, arrive at a simple bound with rather non-technical means. in essence, we introduce split bounds, so that the optimization can be written as two independent, individually much simpler optimization problems. we make use of an elementary inequality which holds true for a ≥ 2.
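the chebyshev truncation discussed above, which links the energy constraints of this section back to the finite-dimensional bounds, is easy to make concrete: for a catalyst whose diagonal is a gibbs distribution over an unbounded, equally spaced spectrum, a finite mean and variance confine all but an epsilon of the probability mass to finitely many levels. the spectrum, inverse temperature and epsilon below are illustrative choices of ours.

```python
import numpy as np

# gibbs weights over an unbounded, equally spaced spectrum e_j = j
# (truncated numerically far beyond where any relevant mass remains)
beta, levels = 0.7, np.arange(0, 200)
p = np.exp(-beta * levels); p /= p.sum()

mean = np.dot(p, levels)
var = np.dot(p, (levels - mean) ** 2)

eps = 1e-3
k = np.sqrt(var / eps)                       # chebyshev: P[|E - mean| >= k] <= var / k**2 = eps
window = (levels >= mean - k) & (levels <= mean + k)
print("levels kept      :", int(window.sum()), "of", len(levels))
print("actual tail mass :", 1.0 - p[window].sum())
print("chebyshev bound  :", eps)
```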
|
quantum thermodynamics is a research field that aims at fleshing out the ultimate limits of thermodynamic processes in the deep quantum regime . a complete picture of quantum thermodynamics allows for catalysts , i.e. , systems facilitating state transformations while remaining essentially intact in their state , strongly reminiscent of catalysts in chemical reactions . in this work , we present a comprehensive analysis of the power and limitations of such thermal catalysis . specifically , we provide a family of optimal catalysts that can be returned with minimal trace distance error after facilitating a state transformation process . to incorporate the genuine physical role of a catalyst , we identify significant restrictions on arbitrary state transformations under dimension or mean energy bounds , using methods of convex relaxations . we discuss the implications of these findings for possible thermodynamic state transformations in the quantum regime .
|
sampling is at the heart of many dbmss , data warehouses , and data streaming systems . it is used both internally , for query optimization , enabling selectivity estimation , and externally , for speeding up query evaluation , and for selecting a representative subset of data for visualization .extensions to sql to support sampling are present in db2 and sqlserver ( the tablesample keyword ) , oracle ( the sample keyword ) , and can be simulated for other systems using syntax such as order by random ( ) limit 1 .users can also ensure sampling is used for query optimization , for example in oracle ( using dynamic - sampling ) .mathematically , we are here dealing with a set of weighted items and want to support queries to arbitrary subset sums . with unit weights , we can compute subset sizes which together with the previous sums provide the subset averages .the question addressed here is which sampling scheme we should use to get the most accurate subset sum estimates .more precisely , we study the variance of sampling based subset sum estimation .we note that there has been sevaral previous works in the data base community on sampling based subset sum estimation ( see , e.g. , ) .the formal set - up is as follows .we are dealing with a set of items ] .a subset ] of the items , that is , . as an estimate , we then use the sum of the sampled items from the subset , that is , . by linearity of expectation this is also unbiased , that is , from ( [ eq : single ] ) we get =w_i \quad\quad \forall i\subseteq[n].\ ] ] we are particularly interested in cases where the subset is unknown at the time the sampling decisions are made . for example , in an opinion poll , the subset corresponding to an opinion is only revealed by the persons sampled for the poll . in the context of a huge data base , sampling is used to reduce the data so that we can later support fast approximate aggregations over arbitrary selected subsets .applied to internet traffic analysis , the items could be records summarizing the flows streaming by a router. the weight of a flow would be the number of bytes .the stream is very high volume so we can only store samples of it efficiently .a subset of interest could be flow records of a newly discovered worm attack whose signature has just been determined .the sample is used to estimate the size of the attack even though the worm was unknown at the time the samples were chosen .this particular example is discussed in , which also shows how the subset sum sampling can be integrated in a data base style infrastructure for a streaming context . in use the threshold sampling from which is one the sampling schemes that we will analyze below .generally there are two things we want to minimize : ( a ) the number of samples viewed as a resource , and ( b ) the variance as a measure for uncertainty in the estimates . for several sampling schemes ,we already understand the optimality with respect to the sum of the individual variances } { \textnormal{var}}[\hat w_i]\ ] ] as well as the variance of the total sum }]\quad \left(={\textnormal{var}}[\sum_{i\in[n ] } \hat w_i]\right)\ ] ] however , what we are really interested in is the estimation of subsets of arbitrary sizes . 
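as a minimal illustration of the estimators just introduced, the sketch below draws independent (poisson) samples with fixed inclusion probabilities, forms the horvitz-thompson estimates of ( [ eq : single ] ), and checks empirically that the subset sum estimate of ( [ eq : sum ] ) is unbiased; the weights, inclusion probabilities and subset are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.array([100.0, 40.0, 8.0, 5.0, 2.0, 1.0])     # item weights
p = np.clip(w / w.max(), 0.05, 1.0)                 # some inclusion probabilities in (0, 1]
subset = [0, 2, 3]                                  # the subset whose weight we estimate

n_draws = 200_000
sampled = rng.random((n_draws, len(w))) < p          # poisson sampling, one row per repetition
w_hat = np.where(sampled, w / p, 0.0)                # horvitz-thompson estimates
est = w_hat[:, subset].sum(axis=1)                   # subset sum estimates

print("true subset sum  :", w[subset].sum())
print("mean of estimates:", est.mean())
print("std  of estimates:", est.std())
```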
before continuing , we note that there is an alternative use of sampling for subset sum estimation in data bases ; namely where data are organized to generate a sample from any selected subset .generating such samples on - the - fly has been studied with different sampling schemes in .when each subset gets its own sample , we are only interested in the variance of totals . in this paper, we generate the sample first , and then we use this sample to estimate the weight of arbitrary subsets . as discussed in ,this is how we have to do in a high volume streaming context where items arrive faster and in larger quantities than can be saved ; hence where only a sample can be stored efficiently .the sampling first is also relevant if we want to create a reduced approximate version of a large data ware house that can be downloaded on smaller device .the purpose of our sampling is later to be able to estimate arbitrary subset sums . with no advance knowledge of the subsets of interest ,a natural performance measure is the expected variance for a random subset .we consider two distributions on subsets : : : denoting the uniform distribution on subsets of size . : : denoting the distribution on subsets where each item is included independently with probability .often we are interested in smaller subsets with or .the corresponding expected variances are denoted \\ { { w_{{p}}}}&= & { \textnormal{e}}_{i\leftarrow { { \mathcal s_{{p}}}}}[{\textnormal{var}}[\hat w_i]]\end{aligned}\ ] ] note that and .we are not aware of any previous analysis of the average variance of subset sum estimation . our basic theorem below states that our subset sum variances are simple combinations of and .the quantities and are often quite easy to analyze , and from them we immediately derive any . [thm : main ] for any sampling scheme , we have theorem [ thm : main ] holds for arbitrarily correlated random estimators ] .that is , we have an arbitrary probability space over functions mapping indices ] , ] .we say that is sampled if for some ] .if is now sampled , we use the horvitz - thompson estimator .we denote this scheme .[ [ threshold - sampling - thr ] ] threshold sampling ( thr ) + + + + + + + + + + + + + + + + + + + + + + + + the threshold sampling is a kind of poisson sampling . in poisson sampling ,each item is picked independently for with some probability . for unbiased estimation, we use the horvitz - thompson estimate when is picked . in threshold samplingwe pick a fixed threshold . for the sample , we include all items with weight bigger than .moreover , we include all smaller items with probability .sampled items have the horvitz - thompson estimate . with expected number of samples , we denote this scheme .threshold sampling is known to minimize relative to the expected number of samples . in survey sampling ,one often makes the simplifying assumption that if we want samples , no single weight has more than a fraction of the total weight . in that case threshold sampling is simply poisson sampling with probability proportional to size as described in .more precisely , the threshold becomes }/k ] , and include in if and only if for some integer , we have it is not hard to see that =p_i ] be the expected number of samples . then the actual number of samples is either or . in particular , this number is fixed if is an integer . belowwe assume that is integer . 
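the theorem can be checked exactly on a small instance by brute force. in the sketch below we use uniform sampling without replacement (so that the covariances are non-trivial), enumerate every possible sample and every subset of size m, and compare the averaged subset variance with m/n * A + m/n * (m-1)/(n-1) * B, where A is the sum of the item variances and B the total covariance; these coefficients are restated from the derivation given later in the text, and the weights are illustrative.

```python
import numpy as np
from itertools import combinations

w = np.array([9.0, 4.0, 2.0, 1.0, 1.0])
n, k, m = len(w), 2, 3        # n items, samples of size k, subsets of size m

# uniform sampling of k items without replacement; horvitz-thompson estimate w_i * n / k
samples = list(combinations(range(n), k))            # all equally likely samples
ests = np.zeros((len(samples), n))
for row, s in enumerate(samples):
    ests[row, list(s)] = w[list(s)] * n / k
probs = np.full(len(samples), 1.0 / len(samples))

def variance(idx):
    """exact variance of the estimated total weight of the item set idx."""
    x = ests[:, idx].sum(axis=1)
    mu = np.dot(probs, x)
    return np.dot(probs, (x - mu) ** 2)

lhs = np.mean([variance(list(I)) for I in combinations(range(n), m)])
A = sum(variance([i]) for i in range(n))             # sum of the individual variances
B = variance(list(range(n))) - A                     # sum of the covariances
rhs = m / n * A + m / n * (m - 1) / (n - 1) * B
print(lhs, rhs)                                      # identical up to rounding
```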
in systematic threshold samplingwe perform systematic sampling with exactly the same sampling probabilities as in threshold sampling , and denote this scheme . hence for each item ,we have identical marginal distributions with and .[ [ priority - sampling - pri ] ] priority sampling ( pri ) + + + + + + + + + + + + + + + + + + + + + + + in priority sampling from we sample a specified number of samples . for each item ,a we generate a uniformly random number , and assign it a priority .we assume these priorities are all distinct .the highest priority items are sampled .we call the highest priority the threshold . then is sampled if and only if , and then the weight estimate is .this scheme is denoted .note that the weight estimate depends on the random variable which is defined in terms of all the priorities .this is not a horvitz - thompson estimator . in is proved that this estimator is unbiased , and that there is no covariance between individual estimates for .below we compare and for the different sampling schemes . using theorem [ thm : main ] most resultsare derived quite easily from existing knowledge on and .the derivation including the relevant existing knowledge will be presented in sections [ s : near - optimal][s : anti - optimal ] .when comparing different sampling schemes , we use a superscript to specify which sampling scheme is used .for example means that the sampling scheme obtains a smaller value of than does . for a given set of input weights , we think abstractly of a sampling scheme as a probability distribution over functions mapping items into estimates .we require unbiasedness in the sense that =w_i ] .we are aiming at samples .we assume that and . herefor this concrete example , we will show that > \\[-.55em]\sim\end{array } } & \ell^2m / k\\ { v_{{m}:n}}^{{\textnormal{p+r}_{{k}}}}&{\begin{array}{c}\\[-1em ] > \\[-.55em]\sim\end{array } } & \ell m / k\ ] ] here and >\\[-.55em]\sim\end{array}}y \iff x\geq ( 1-o(1 ) ) y ] . also , if is a horvitz - thompson estimator , then so is , that is , if is sampled , then . in survey sampling ,the main challenge is often to estimate the total } ] , we would simply accumulate the weights in a counter .hence , in our context , the challenge of survey sampling is trivial .one thing that makes reservoir sampling hard is that sampling decisions are made on - line .this rules out off - line sampling schemes such as sunter s method , where we have to sort all the items before any sampling decisions are made .a cultural difference between survey sampling and our case is that survey sampling appears less focused on heavy tailed distribution . for threshold or systematic threshold sampling onecan then assume that the threshold is bigger than the maximal weight , hence that these schemes use probabilities proportional to size . in our kind of applications ,heavy tailed distributions are very prominent . with a concrete internet example, we will now illustrate the selection of subsets and the use of reservoir sampling for estimating the sum over these subsets . for the selection , the basic point is that an item , besides the weight , has other associated information , and selection of an item may be based on all its associated information . 
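the near-optimal schemes above are short to state in code. the sketch below implements threshold sampling for a given threshold and priority sampling for a given sample size, each returning the weight estimates described in the text; systematic threshold sampling is omitted since it only changes how the inclusion decisions are correlated, not the marginal probabilities or the estimates. the function names are ours.

```python
import random

def threshold_sample(weights, tau):
    """thr: keep item i with probability min(1, w_i / tau); a kept item gets
    the horvitz-thompson estimate max(w_i, tau)."""
    return {i: max(w, tau)
            for i, w in enumerate(weights)
            if w >= tau or random.random() < w / tau}

def priority_sample(weights, k):
    """pri: priority q_i = w_i / u_i with u_i uniform in (0, 1]; keep the k highest
    priorities and estimate max(w_i, t), where t is the (k+1)-st highest priority."""
    prios = sorted(((w / (1.0 - random.random()), i, w) for i, w in enumerate(weights)),
                   reverse=True)
    if len(prios) <= k:
        return {i: w for _, i, w in prios}           # everything sampled exactly
    t = prios[k][0]                                  # threshold = (k+1)-st highest priority
    return {i: max(w, t) for _, i, w in prios[:k]}

def subset_sum(estimates, subset):
    """estimate of the total weight of `subset` from a sample."""
    return sum(v for i, v in estimates.items() if i in subset)

weights = [50, 20, 10, 5, 2, 2, 1, 1]
print(subset_sum(priority_sample(weights, 4), {0, 2, 5}))
print(subset_sum(threshold_sample(weights, 15.0), {0, 2, 5}))
```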
as stated in ( [ eq : sum ] ) , to estimate the total weight of all selected items , we sum the weight estimates of all selected sampled items .internet routers export information about transmissions of data passing through .these transmissions are called flows .a flow could be an ftp transfer of a file , an email , or some other collection of related data moving together .a flow record is exported with statistics such as application type , source and destination ip addresses , and the number of packets and total bytes in the flow .we think of the byte size as the weight .we want to sample flow records in such a way that we can answer questions like how many bytes of traffic came from a given customer or how much traffic was generated by a certain application .both of these questions ask what is the total weight of a certain selection of flows . if we knew in advance of measurementwhich selections were of interest , we could have a counter for each selection and increment these as flows passed by .the challenge here is that we must not be constrained to selections known in advance of the measurements .this would preclude exploratory studies , and would not allow a change in routine questions to be applied retroactively to the measurements .a killer example where the selection is not known in advance was the tracing of the _ internet slammer worm _ .it turned out to have a simple signature in the flow record ; namely as being udp traffic to port 1434 with a packet size of 404 bytes .once this signature was identified , the worm could be studied by selecting records of flows matching this signature from the sampled flow records .we note that data streaming algorithms have been developed that generalize counters to provide answers to a range of selections such as , for example , range queries in a few dimensions .however , each such method is still restricted to a limited type of selection to be decided in advance of the measurements . in ,the above internet application is explored with experiments on a stream segment of 85,680 flow records exported from a real internet gateway router .these items were heavy tailed with a single record representing 80% of the total weight .subsets considered were entries of an traffic matrix , as well as a partition of traffic into traffic classes such as ftp and dns traffic .figure [ fig : matrix ] shows the results for the traffic matrix with all the above mentioned sampling schemes ( systematic threshold sampling was not included in , but is added here for completeness ) .the figure shows the relative error measured as the sum of errors over all entries divided by the total traffic . the error is a function of the number of samples , except with thr , where represents the expected number of samples .we note that u is worst .it has an error close to 100% because it failed to sample the large dominant item .the p is much better than u , yet much worse than the near - optimal schemes pri , thr , and sys . to qualify the difference , note that p use about 50 times more samples to get safely below a 1% relative error . among the near - optimal schemes, there is no clear winner . from our theory, we would not expect much of a difference . 
we would expect thr to be ahead of pri by at most one sample .also , we are studying a partitioning into sets , and then , as noted in section [ s : theorem ] , the average variance advantage of sys is a factor , which is hardly measurable .the experiments in figure [ fig : matrix ] are thus consistent with our theory .the strength of our mathematical results is that we now know that no one can ever turn up with a different input or a new sampling scheme and perform better on the average variance .conversely , experiments with real data could illustrate subsets with relevant special properties that are far from the average behavior .our analysis names systematic threshold sampling the best possible sampling scheme .however , in reservoir sampling we often have a resource bound on the number of samples we can store , e.g. , we may only have a certain amount of memory available for the samples .priority sampling is ideally suited for this context in that a standard priority queue can maintain the items of highest priority ( when a new item arrives , it is first assigned a priority , then it is added to the queue , and finally we remove the item of smallest priority item from the queue in time . however , with both threshold sampling and priority sampling it appears that we need to know the threshold in advance ( item is sampled with probability ) .this threshold is a function of all items such that .hence can only be determined after the whole stream has been investigated .as described in it is possible , though a bit more complicated , to adapt threshold sampling for a stream to provide an expected number of samples .the essential idea is that we increase the threshold as we move along the stream in such away that it always gives an expected number of samples from the items seen thus far .thus an item gets dropped from the sample when the threshold falls below its priority .however , if we want to be sure to no more than samples are made , we have to shoot for substantially less than samples .for example , to stay below with 99% probability , using normal approximation for larger , we should only go for an expected number of samples .in contrast , with priority sampling , we do better than threshold sampling with an expected number of samples .thus priority sampling works better when we are allowed at most samples . for systematic threshold sampling ,the problem is more severe because if one changes the threshold marginally , it may completely change the set of samples .one could conceivably resolve this if we only increased the threshold by an exact doubling starting .however , a doubling of the threshold can be shown to at least double the variance .another objection to systematic threshold sampling in a streaming context is that we may have a very strong correlations between items in a subset depending on how they are placed in the stream .normally , it is recommended that the items are appropriately shuffled , but that is not possible in reservoir sampling from a stream . with threshold and priority samplingthere is no such dependence as there is no covariance between different item estimates . 
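in the reservoir setting just discussed, priority sampling only needs a bounded min-heap: keep the k+1 highest priorities seen so far and read the threshold off the lowest of them. a minimal sketch using the standard library heap (the class name is ours):

```python
import heapq
import random

class PriorityReservoir:
    """one-pass reservoir priority sampler: retains the k+1 highest-priority items;
    the lowest of them supplies the threshold for the weight estimates."""

    def __init__(self, k):
        self.k = k
        self.heap = []                                # min-heap of (priority, item, weight)

    def add(self, item, weight):
        prio = weight / (1.0 - random.random())       # q_i = w_i / u_i with u_i in (0, 1]
        heapq.heappush(self.heap, (prio, item, weight))
        if len(self.heap) > self.k + 1:
            heapq.heappop(self.heap)                  # discard the lowest priority

    def estimates(self):
        """current weight estimates {item: estimate} for the sampled items."""
        if len(self.heap) <= self.k:
            return {item: w for _, item, w in self.heap}
        tau = self.heap[0][0]                         # (k+1)-st highest priority seen so far
        return {item: max(w, tau) for _, item, w in sorted(self.heap)[1:]}

res = PriorityReservoir(k=3)
for item, w in [("a", 50.0), ("b", 20.0), ("c", 10.0), ("d", 5.0), ("e", 2.0)]:
    res.add(item, w)
print(res.estimates())
```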
as demonstrated in , it is possible to get good confidence bounds with priority sampling and threshold sampling so that we statistically know when we get good estimates for a subset .the correlation between items in systematic threshold sampling prevents us from providing good confidence intervals , so even if systematic threshold sampling gives better variance on the average , we have no way of knowing if we get these good estimates for a concrete subset .thus , among our near optimal sampling schemes , priority sampling is the most appropriate for resource constrained reservoir sampling .recall from ( [ eq : pri - opt ] ) that in our internet application we typically have thousands of samples .hence we are not concerned about the difference between and samples .the factor is only significant for larger sets .however , for larger sets , we expect to do great anyway because they relatively speaking have much smaller errors .more precisely , we typically expect that we have plenty of samples go get very a good estimate of the total , or in other words , that the relative standard deviation } ] . for a subset achieving both of these averages ,the relative standard deviation would be }}=\sqrt{n / m}\,{\varepsilon}_{n : n}\ ] ] however , if is big , then the optimality factor is close to .thus , it is when our variance is expected comparatively small that our relative distance to opt is greatest , the most extreme being in the estimation of the total .the estimate of the total has the smallest relative standard deviation , but since it is positive , it is infinitely worse than .another case where we do not need to worry so much about the non - optimality factor is if we are interested in the relative weight of a subset of size . as an estimator , we use } ] .most of the error in this estimate stems from the estimate \setminus i} ] , but for this small set size , we are at most a factor from optimality .as discussed in section [ s : k - sampling ] , we do not know if there is a scheme performing better than priority sampling in practice in the context of resource constrained reservoir sampling .the conclusion of this section is that even if there is a better scheme , it is not going to help us much .in this section we prove ( [ eq : main - mn ] ) and ( [ eq : main - p ] ) by the definitions of variance and covariance , for any subset ] with element . then for any , =m / n ] , so by linearity of expectation , &=&\sum_{i , j \in [ n ] , i\neq j}\pr[i , j\in i]{\textnormal{cov}}[\hat w_i,\hat w_j]\\ & = & m / n\cdot ( m-1)/(n-1)\cdot b_{[n]}.\end{aligned}\ ] ] thus = m / n\cdot a_{[n ] } + m / n\cdot ( m-1)/(n-1)\cdot b_{[n]}\ ] ] by definition , }={{\sigma}v} ] is picked independently for with probability . by linearity of expectation , =p a_{[n]}.\ ] ] also , for any , =p^2 ] and a large weight .we are aiming at samples .we assume that and that . as in the last section , we focus on the subset size rather than the inclusion probability . we will now analyze the variance with threshold sampling for the bad example . the variance with systematic and priority samplingwill then follow from ( [ eq : opt - sys - thr - mn ] ) and ( [ eq : pri - thr - mn ] ) . threshold sampling ( ) will use the threshold .this will pick the large weight with probability and weight estimate . 
hence =0 ] , we note that }^2]\geq { \textnormal{e}}[\hat w_n^2]= n\ell^2/k.\ ] ] hence }^2]-w_{[n]}^2\geq n\ell^2/k-(\ell+n-1)^2 \approx n\ell^2/k\ ] ] since and are both lower bounded by , it follows from ( [ eq : main - mn ] ) that for any subset size > \\[-.55em]\sim\end{array}}(m / n)n\ell^2/k = m\ell^2/k\ ] ] this is roughly a factor worse than what we had with any of the near optimal schemes . in probability proportional to size sampling with replacement ( ) ,each sample ] , is independent , and equal to with probability } ] .this happens with probability })^k ] , or equivalently , that by definition }]-w_iw_{[i-1]}\right),\ ] ] so }-{\textnormal{e}}[\hat w_i\hat w_{[i-1]}]\right)\nonumber\\ & = & 2\sum_{i>1}\left(w_i(w_{[i-1]}-{\textnormal{e}}[\hat w_{[i-1]}|i\in s]\right ) \label{eq : sum1}\end{aligned}\ ] ] to bound this sum , first we consider the term with .}={\textnormal{e}}[\hat w_{[n-1]}]=p_n{\textnormal{e}}[\hat w_{[n-1]}| n\in s]+(1-p_n ) { \textnormal{e}}[\hat w_{[n-1]}| n\not\in s]\ ] ] so }-{\textnormal{e}}[\hat w_{[n-1]}| n\in s]\geq ( 1-p_n ) { \textnormal{e}}[\hat w_{[n-1]}| n\not\in s]\ ] ] here =((n-1)/(\ell+n-1))^k < ( n/\ell)^k ] , so }| n\not\in s]=o(n / k) ] .then the total is right in the sense that }=x_{[n]} ] in the total. then . to minimize , the optimal choice is to pick the largest weights , setting the rest to . for the largest weights , we distribute the error equally , setting . then .the last term is fixed , so to optimize , we should choose so as to minimize for , we choose , and then for as discussed above .obviously , picking the largest weights and giving them a specific estimate is not a good `` sampling '' scheme .the above more illustrates the danger of just looking at averages and the deceptiveness of biased estimation . for non - random subsets such as a large set of small items ,the above scheme would always return a zero .this kind of unfairness is nt right .recall that we had a similar criticism of systematic sampling in section [ s : k - sampling ] if we could not shuffle the items .an ideal sampling scheme should both have a reasonable fairness and perform reasonably well on the average .threshold and priority sampling have no covariance , so all partitions have the same total variance . here by considering partitions rather than individual subsets , we ensure that each item is counted exactly once .moreover , among unbiased schemes , they essentially got within a factor from optimality on the average variance for subsets of size , so when is not too close to , this is close to ideal .as a formal measure for ability to estimate subset sums of a set of weighted items , we suggested for each set size , to study the average variance over all subsets : , |i|=m}\left[{\textnormal{var}}[\hat w_i]/m\right]\ ] ] we discovered that was the following simple combination of the sum of variances and the variance of the total sum : a corresponding formula was found for the expected variance for subsets including each item independently with probability .we then considered different concrete sampling schemes .the optimality of and was already known for some sampling schemes , and this now allow us to derive the optimality with respect to for arbitrary subset size .we found that systematic threshold sampling was optimal with respect to , and that threshold sampling was off exactly by a factor .finally , we know that priority sampling performs like threshold sampling modulo one extra sample . 
we argued that this distance to optimality is not significant in practice when we use many samples .this was important to know in the context of resource constrained reservoir sampling , where priority sampling is the better choice for other reasons .for contrast , we also showed that more classic schemes like uniform sampling with replacement and probability proportional to size sampling without replacement could be arbitrarily far from optimality .the concrete example was stylistic heavy tailed distribution .duffield , c. lund , and m. thorup .flow sampling under hard resource constraints . in _ proc .acm ifip conference on measurement and modeling of computer systems ( sigmetrics / performance ) _ , pages 8596 , 2004 .
|
for high volume data streams and large data warehouses , sampling is used for efficient approximate answers to aggregate queries over selected subsets . mathematically , we are dealing with a set of weighted items and want to support queries to arbitrary subset sums . with unit weights , we can compute subset sizes which together with the previous sums provide the subset averages . the question addressed here is which sampling scheme we should use to get the most accurate subset sum estimates . we present a simple theorem on the variance of subset sum estimation and use it to prove variance optimality and near - optimality of subset sum estimation with different known sampling schemes . this variance is measured as the average over all subsets of any given size . by optimal we mean there is no set of input weights for which any sampling scheme can have a better average variance . such powerful results can never be established experimentally . the results of this paper are derived mathematically . for example , we show that appropriately weighted systematic sampling is simultaneously optimal for all subset sizes . more standard schemes such as uniform sampling and probability - proportional - to - size sampling with replacement can be arbitrarily bad . knowing the variance optimality of different sampling schemes can help deciding which sampling scheme to apply in a given context .
|
in many cases , an electromagnetic field shows near - field or far - field behavior when it is observed in a region near or far from the field source .the division of the near field and far field is not only helpful for simplifying theoretical calculation in different regions , but also easy to highlight the features of the field in respect regions so as to provide physical interpretations of the field behaviors .an important question of how to distinguish the near and far fields naturally arose .usually , the field can be expanded by negative powers of the distance between the field point and source .the far field is dominated by the lowest order term , and the near field by the higher - order terms .the former arises from the requirement of energy conservation and is call a `` far field approximation '' or `` leading - order approximation ( loa ) '' . for dipole radiations in a vacuum background , the near - filed / far - field ( nfff )boundary is of the order of light wavelength , .as the source becomes larger , this dimension should be corrected as by the diffraction theory , where represents slit widths or antenna lengths .however , what we frequently encounter in practical situations are stratified backgrounds , rather than the vacuum one .stratified backgrounds are of complex configurations , in which the nfff boundary has not been properly addressed yet .to reveal the nfff boundary in such backgrounds is not only a complement in theory , but also very important for many practical applications , such as remote sensing , antenna design , nfff transformation and so on . in these applications ,the distances between sources and observation points are usually much larger than operating wavelengths , so that the sources can be treated as dipoles . herewe investigate the far - field asymptotic behaviors of dipole radiations in stratified structures , and address a universal nfff boundary .calculation of the dipole radiations in stratified structures is mathematically equivalent to dealing with sommerfeld integrals ( sis ) . in general , numerical evaluations of the sisare not easy since the integrals have an oscillatory feature and possess singularities along or near the integration paths .several numerical methods have been proposed to calculate the sis so far .et al_. developed a direct numerical integration method by appropriately choosing the integration path that avoided all possible singularities in the complex plane .this method has been considered as a standard test for the accuracies of other methods .the discrete complex image method and the rational function fitting method tried to expand the integrands by simple functions to obtain closed - form solutions . on the other hand , under the far - field approximation , closed - form expressions could be obtained from sis by asymptotic methods or reciprocal theorem .compared with the numerical ones , the asymptotic methods are of advantages of simplicity , easy programming , and clear physical interpretations . the stationary phase method andthe steepest descent method are two commonly used asymptotic methods .both of them approximate the value of a rapid - oscillating integral by the contributions around stationary points ( sps ) or saddle points , and provide the same loa results for the sis . 
in this paper, we adopt a simplified version of the stationary phase method to perform the asymptotic analysis .this simplified version has been successfully applied in the research of underwater communication , antenna design , and nfff transformation .theoretically , the loa can acquired precise results from sis when the observation point moves to infinity .but for practical usage , an empirical distance is needed to justify its applicability , which is the nfff boundary .although an error analysis by estimation of higher - order contributions may reveal the boundary , the analysis becomes very difficult in stratified structures .here we compare the asymptotic results with the accurate numerical ones in different stratified structures . in this way a universal empirical boundary is obtained .it is found that the nfff boundary is mainly affected by lateral waves , which correspond to the branch point contributions in sis . besides, the boundary is sensitive to the structure configurations and the location of the dipole .moreover , two kinds of treatments in the asymptotic method are carried out and their numerical results are compared .the equivalence between the field expressions obtained from the asymptotic method and reciprocal theorem is also demonstrated .the paper is arranged as follow : section 2 sets our model of the stratified structures and presents the loa formalism .section 3 and 4 discuss the far - field distributions and the nfff boundary in bilayer and trilayer structures , respectively .then the results are generalized to multilayer structures and proved to be universally applicable in sec .5 . finally ,presents our concluding remarks .the model studied in this paper consists of a lossless layered structure with the stratification along the direction .it contains layers and interfaces from top to bottom , as depicted in fig .1 . since we concentrate on the far field , the two semispaces , i. e. , the top and bottom layers , are focused on and the intermediate regions are ignored when their details are not needed .the lower surface of the top layer is at and the upper surface of the bottom layer is at , respectively .a vertical electric dipole ( ved ) is located on the origin inside the structure . for convenience ,we consider the component of the electric fields generated by the ved .other field components and other dipole orientations will be discussed in sec .5 . in fig .1 , the cylindrical coordinates are set .if the ved is either in the top layer or in the intermediate region , the field in the top and bottom layers is expressed as \delta_{1q}-\frac{\omega\mu_{0}\mu_{1}j_{z}}{8\pi k_{1}^{2 } } \int_{-\infty}^{\infty}dk_{\rho}\frac{k_{\rho}^{3}}{k_{qz}}h_{0}^{(1 ) } ( k_{\rho}\rho)[c_{1}e^{ik_{1z}(z - d_{1 } ) } ] , & & { z >d_{1}.}\\ \displaystyle -\frac{\omega\mu_{0}\mu_{q}j_{z}}{8\pi k_{q}^{2}}\int_{-\infty}^{\infty } dk_{\rho}\frac{k_{\rho}^{3}}{k_{qz}}h_{0}^{(1)}(k_{\rho}\rho ) [ c_{q}e^{-ik_{qz}(z - d_{q-1 } ) } ] , & & { z < d_{q-1}. } \end{array } \right . \eqno{(1)}\ ] ] the quantities in this expression are as follows . represents the hankel function of the first kind ; is the dipole moment in the direction ; and are respectively the scattering coefficients in the top and bottom layers ; is the wave vector in the layer ; is the angular frequency ; is light speed in vacuum ; and are respectively the permeability in vacuum and in the layer ; standing for primary field , is the generated by the ved in a homogeneous background ; is kronecker delta . in eq . 
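the integrals in eq. (1) are of sommerfeld type. as a small self-contained check of this kind of integral, the sketch below verifies the sommerfeld identity (invoked again in the next section) by brute-force quadrature; the wavenumber carries a small positive imaginary part so that the branch-point region stays smooth on the real axis, and the observation point and units are illustrative. scipy supplies the bessel function.

```python
import numpy as np
from scipy.special import j0

# sommerfeld identity: exp(i k r)/r = i * int_0^inf dk_rho (k_rho/k_z) J0(k_rho rho) exp(i k_z |z|)
k = 2.0 * np.pi * (1.0 + 0.02j)              # wavenumber with a little loss
rho, z = 1.3, 0.8                            # observation point
r = np.hypot(rho, z)

k_rho = np.linspace(0.0, 40.0, 200_001)
k_z = np.sqrt(k ** 2 - k_rho ** 2)           # principal branch gives Im(k_z) >= 0 here
integrand = 1j * (k_rho / k_z) * j0(k_rho * rho) * np.exp(1j * k_z * abs(z))

lhs = np.exp(1j * k * r) / r
rhs = np.trapz(integrand, k_rho)
print(lhs)
print(rhs)                                   # agrees with lhs to high accuracy
```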
( 1 ) , the ved is not allowed to appear in the bottom layer .however , if the ved is in the bottom layer , one may use the reversed geometry instead .this section discusses the far - field asymptotic behaviors of dipole radiations and the nfff boundary in bilayer structures . in this case , the intermediated region in fig .1 is removed , and the interfaces at and merge into one , as shown by fig .the top and bottom layers are also denoted as layer 1 and 2 , respectively . in the case of bilayer structures shown in fig .2 ( ) , and in eq . ( 1 ) are expressed as where is the relative permittivity of the layer . in this paper , and denote the fresnel scattering coefficients of a single interface when light incidents from layers to . herewe carry out a detailed derivation on the transmission field with two asymptotic treatments . in this process ,the similarities and discrepancies between these two treatments are disclosed .substitution of eq .( 2 ) into ( 1 ) and application of the asymptotic form of hankel function give the expression of in layer 2 : the sp is obtained by taking derivative of the phase as follows : this way determining the sp is called method a. its physical meaning can be explained by ray theory . on the other hand , if the factor in eq . ( 3 ) is a slowly varying one , it can be taken out of the integral , and the remaining phase gives the sp as follow : this way is called method b . in eqs .( 4a ) and ( 4b ) , the subscript `` s '' stands for stationary points .here we emphasize the features of these two methods .method a treats as an oscillatory factor . as a result, the term is included in eq .( 4a ) in determining the sp .it describes the wave paths in both layers and gives the refraction picture shown by the blue lines in fig .2 . by contrast, method b treats as a slowly varying one so that the term is excluded in eq .( 4b ) in determining the sp .this description is merely applicable to the wave path in layer 2 , as shown by the yellow lines in fig .2 . under these points of view , method a gives the electric field as while method b gives the simplification in method b is that the integral in eq .( 5b ) can be expressed as spherical waves using the sommerfeld identity by contrast , eq .( 5a ) can not be further reduced in mathematics because its exponential term contains two different wave vectors .we have to carry out the integral according to eq .the corresponding ray interpretation is demonstrated by blue lines in fig .2 . after integrations ,the asymptotic expressions of in method a and b are respectively given by \\ & = & \displaystyle t_{12}(k_{\rho s})\frac{k_{2zs}}{k_{1zs}}e^{i(k_{1}-k_{2})\triangle r_{1}}\left\{\frac{i\omega\mu_{0}\mu_{2}j_{z}}{4\pi k_{2}^{2}}k_{\rho s}^{2 } \frac{e^{ik_{2}(\triangle r_{1}+\triangle r_{2})}}{\triangle r_{1}+ \triangle r_{2}}\right\}\\ \triangle r_{1 } & = & \displaystyle \frac{k_{1}}{k_{1zs}}|d_{1}|\\ \triangle r_{2 } & = & \displaystyle \frac{k_{2}}{k_{2zs}}|z - d_{1}| \end{array } \right .\eqno{(6a)}\ ] ] and \\ & = & \displaystyle t_{12}(k_{\rho s})\frac{k_{2zs}}{k_{1zs}}e^{-ik_{1zs}d_{1 } } \left\{\frac{i\omega\mu_{0}\mu_{2}j_{z}}{4\pi k_{2}^{2}}k_{\rho s}^{2 } \frac{e^{ik_{2}r_{2}}}{r_{2}}\right\}\\ r_{2 } & = & \sqrt{\rho^{2}+(z - d_{1})^{2 } } \end{array}. \right . 
\eqno{(6b)}\ ] ] the expressions in the square brackets in eqs .( 6a ) and ( 6b ) are the resultants of the integrals in eqs .( 5a ) and ( 5b ) , and those in the curly brackets represent the propagations of along the blue and yellow lines shown in fig .2 , respectively .thus it is easy to see that eq .( 6a ) considers as spherical waves in both layers and describes a refraction process at the interface .the factor ] and ] configuration .since layers 2 and 3 have higher ri , there are two critical angles : and . determines the forbidden region .the range between and is shaded in fig .7 and is termed as the second total internal reflection region . in fig .7(a ) , the thickness of the middle layer is .it is seen that the differences between the asymptotic results and numerical results emerge in the transmission field .these differences are caused by the lateral waves as well .the dimensional parameter in eq .( 9 ) now should equal to the distance between ved and the lower interface at .the calculated nfff boundary is approximately 35 times of the observation distance , which is enough to eliminate the influence of lateral waves ( numerical verified , not show here ) . in fig .7(b ) , the middle layer is thicker : its thickness is .this figure shows two significant features .one is that method a and b have noticeable differences in both of the reflections and transmissions . because method a takes into account the multi - reflection , its results are closer to the numerical ones .the other is that in the second total internal reflection region , the asymptotic curve shows rapid oscillation [ the coefficients and in eq .( 10 ) are of the form of multi - reflection , and in this sense , there is also multi - reflection in method b ] .the formation of the oscillation can be explained as following : although the light decays in the top layer , part of the energy can be transferred into the middle layer through evanescent waves and become propagation ones .then in the middle layer it scatters at the lower boundary , and has a total internal reflection at upper boundary , which further forms a multi - reflection to yield the oscillation .when the ved moves away from the interface , the oscillation will die away .this is verified by fig .7(c ) where changes from to .it is seen from fig .7(c ) that the oscillations disappear and the far - field distribution decreases .it is worth mentioning that the description here is the picture of leaky waves .a comparatively thicker middle layer allows many leaky modes to exist , resulting in the rapid oscillation in the far - field distribution . as a test of the nfff boundary , fig .7(d ) plots the field distribution of the same configuration in fig .7(b ) but with the observation point being at the nfff boundary , i. e. , with the observation distance computed by eq .the results fit each other very well . as a function of angle at distance = for two geometry configurations with [ .the two critical angles are and .structural schematics are illustrated in the insets .( a ) = .( b ) = .the fields in the middle layer are not presented.,width=302 ] fig .8 shows the distributions in $ ] configuration .since the middle layer possesses the largest ri , the structure supports the guided modes .the peaks around of the numerical results are the decays of the guide modes in layers 1 and 3 .moreover , there is no forbidden region in the structure .consequently , the asymptotic result and the numerical results fit very well . 
for reflections ,these two kinds of results show some differences around due to the effect of lateral waves , but they fit each other at the nfff boundary given by eq .( 9 ) . around , the guide modes decay in the lower ri layers , but they are not covered in the discussion here because a guided mode is a bounded state that is equivalence to cylindrical wave and have a decay rate as , which means that its field is much larger than the results of loa .= 0 is demonstrated here .method b sets the start point on the top interface right above the dipole .the transmissions in the bottom layer are similar to that in the top layer , but with the transmission direction reversed.,width=264 ] next , we discuss the case when the ved is located in the middle layer ( ) , as shown in fig . 9 . for this configuration ,the scattering coefficients and are expressed as for the in the top layer , method a expands the two terms in using geometric optics series , which is then substituted in eq .( 1 ) to get the sps : =0,\\ \displaystyle \rho-\frac{k_{\rho s}^{(m)}}{k_{1zs}^{(m)}}(z - d_{1})-\frac{k_{\rho s}^{(m)}}{k_{2zs}^{(m)}}[2(m+1)(d_{1}-d_{2})-d_{2}]=0 .\end{array } \right . \eqno{(14)}\ ] ] on the whole , eq .( 14 ) represents the light transmitting to the top layer after multi - reflections .the first equation represents the case of light propagating upward after being emitted from the ved .the second equation indicates the fact that light propagates downwards first , and after is reflected by the lower interface , it then propagates upwards .the intuitive picture is illustrated in fig .this course is also reflected by the expression of in eq .( 13 ) . concerning the ray interpretation of sps , we can write the asymptotic expressions similar to eq .the transmission picture of method b is the same as the above ones , and its asymptotic expressions are similar to eq .the transmission to the bottom layer is similar to that to the top layer , but with the direction in reverse .as a function of angle at distance = for two configurations .the two critical angles are ()= and ()= .structural schematics are illustrated in the insets .( .( b ) [ .the gray area in ( a ) represents the second total internal reflection region.,width=302 ] the distributions when the ved is in the middle layer of trilayer structures are shown in fig . 10 .figures 10(a ) and 7(b ) have similar far - field distributions , and figs . 10(b ) and 8(b ) as well .therefore , the discussions there are valid for fig . 10 .the correctness of eq .( 9 ) is again verified , where the parameter is now the thickness of the middle layer . moreover , fig .10 also shows that method a provides more accurate results than method b. it is known from the discussions above that conclusions about the nfff boundary in a trilayer structure are similar to that obtained in a bilayer structure . the boundary in a higher ri layeris mainly determined by the lateral waves and satisfies eq .( 9 ) , while the boundary in the lowest ri layer is about ten wavelengths .in this section , we generalize our conclusions to multilayer structures , and verify the universality of eq .( 9 ) . for multilayer structures ,light multi - reflects in each layer in the intermediate region .consequently , the expressions of the field will have a recursive fashion , which will make the ray interpretations very complicated and limit the applicability of method a. however , method b is still of the simplicity in mathematics and is suitable for multilayered structures . 
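the geometric - optics series and its quick convergence can be illustrated with a small , self - contained numerical sketch . it is only an illustration : it uses s - polarized plane - wave fresnel coefficients for a single lossless film ( the paper itself deals with the tm coefficients entering the sommerfeld integrals ) , and the refractive indices , thickness and angle below are arbitrary assumptions .

```python
import numpy as np

# single film of index n1 and thickness d between semispaces n0 (top) and n2 (bottom);
# s-polarized plane wave, free-space wavelength lam0, angle of incidence theta in the top layer.
n0, n1, n2 = 1.0, 1.5, 1.33
d, lam0, theta = 0.8e-6, 0.6e-6, 40.0 * np.pi / 180.0

k0   = 2.0 * np.pi / lam0
krho = n0 * k0 * np.sin(theta)                        # conserved transverse wavenumber
kz   = lambda n: np.sqrt((n * k0)**2 - krho**2 + 0j)  # k_z in a layer of index n
k0z, k1z, k2z = kz(n0), kz(n1), kz(n2)

r01 = (k0z - k1z) / (k0z + k1z)                       # fresnel reflection, layer 0 -> 1
r12 = (k1z - k2z) / (k1z + k2z)
t01 = 2.0 * k0z / (k0z + k1z)                         # fresnel transmission coefficients
t10 = 2.0 * k1z / (k0z + k1z)
E   = np.exp(2j * k1z * d)                            # round-trip phase in the film

# closed-form total reflection coefficient of the film
r_closed = (r01 + r12 * E) / (1.0 + r01 * r12 * E)

# the same coefficient as a multiple-reflection (geometric optics) series:
# r = r01 + t01*r12*t10*E * sum_m (r10*r12*E)**m , with r10 = -r01
r10 = -r01
partial = r01
for m in range(6):
    partial += t01 * r12 * t10 * E * (r10 * r12 * E)**m
    print(f"after {m+1} round trips: |r - r_closed| = {abs(partial - r_closed):.2e}")
```

the partial sums reach the closed - form value after only a few round trips , which is the kind of quick convergence of the multi - reflections referred to above .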
the nfff boundary in the multilayered structures is mainly affected by the lateral waves as well . the attenuation behaviors of these waves are similar to the ones shown in fig . 4 , so that eq . ( 9 ) is correct in the order of magnitude . the value of the dimensional parameter depends on whether the ved is in the top layer or in the intermediate region . when the ved is in the top layer , equals the distance between the ved and the lowest interface at ; when the ved is in one of the intermediate layers , is the distance between the highest interface at and the lowest interface at , which is the total thickness of the intermediate region . the multi - reflection processes do not affect the value for the following two reasons . firstly , the multi - reflections converge quickly . all the results of method a shown in the figures converge when is up to , which does not change the order of . secondly , as can be seen from fig . 4 , the lateral waves have a nearly decay when is small , and a more rapid decay when is relatively larger . therefore the influence from the multi - reflections may decay to a negligible one within the distance given by eq . moreover , it is this decay characteristic of the lateral waves that makes the nfff boundary almost independent of the wavelength . for other field components of the dipole radiations and the cases of different dipole orientations , we have numerically verified the correctness of eq . ( 9 ) . besides , all these cases depend on the evaluation of the scattering - coefficient - related sis , but the field expressions are slightly different , which results in different far - field patterns over the observation angles . however , in the asymptotic analysis , they have the same sps and similar asymptotic expressions and branch cut contributions . thus , eq . ( 9 ) is applicable . in the beginning of sec . 2 , we assumed that all of the layers were lossless . now let us discuss the case where the intermediate region is not lossless . in such a situation , two cases are distinguished to discuss the nfff boundary : the loss is either large or small . when the loss is large , as in undersea communications where short waves may not reach the sea bottom , the air - ocean - earth trilayer model simplifies to an air - ocean bilayer one . from the discussions in sec . 3 , a distance of is enough to differentiate the near field and far field in air . when the loss is relatively small , the amplitude of light decreases as it arrives at the bottom interface , which accordingly leads to a decrease of the differences between the asymptotic results and the accurate ones . generally speaking , losses reduce the nfff boundary : its value will be less than that given by eq . our conclusion can be extended to bulk sources . on the one hand , within the scope of the volume integral method , a bulk source can be considered as a superposition of dipoles . the far - field radiations of each dipole have been fully discussed above .
on the other hand , within the scope of nfff transformation , the near fields of an arbitrary source can be converted to equivalence surface dipoles on a virtual closed surface .therefore the dimensional parameter should anchor to the brightest spots in the near field .of course , our conclusions can be considered as an applicable scope of the nfff transformation .in this paper , we have investigated the far - field asymptotic behaviors of dipole radiations in stratified backgrounds and obtain a universal empirical expression of nfff boundary , eq .the asymptotic results are compared with the accurate numerical ones in various configurations to make sure the universality of eq .the nfff boundary is mainly affected by the lateral waves , which correspond to branch point contributions in the sis and are of a higher - order attenuation . in eq .( 9 ) , the parameter plays a key role , and it is much larger than the operating wavelength . as a result ,the nfff boundary in a stratified background is totally different from that in vacuum . to be more specific , eq .( 9 ) describes the boundary between the intermediate and far fields .however , since the intermediate field is vaguely defined in optics , we still call it as the nfff boundary . in the casethat the observation point is in a region where its ri is the lowest in the whole structure ( usually air ) , the lateral wave decay rapidly , and the nfff boundary is about ten wavelengths .it is believed that our conclusions are very helpful in understanding and applying the far - field approximation . in electromagnetic simulations , such as the finite - difference time - domain method and the finite element method ,the far - field results are obtained by a nfff transformation where the far field approximation , or the loa , is employed .the nfff boundary presented here reveals the applicability of the nfff transformation , especially in the forbidden region .moreover , we compare the different treatments for sps in the asymptotic method and improve the accuracy according to the ray theory ( method a ) .the equivalence between the results of the asymptotic method and reciprocal theorem is demonstrated .this work was supported by the china postdoctoral science foundation ( grant no .2013m542222 ) and the national natural science foundation of china ( grant nos .11334015 and 61275028 ) .
|
the division of the near - field and far - field zones for electromagnetic waves is important for simplifying theoretical calculations and applying far - field results . in this paper , we have studied the far - field asymptotic behaviors of dipole radiations in stratified backgrounds and obtained a universal empirical expression of near - field / far - field ( nfff ) boundary . the boundary is mainly affected by lateral waves , which corresponds to branch point contributions in sommerfeld integrals . in a semispace with a higher refractive index , the nfff boundary is determined by a dimensional parameter and usually larger than the operating wavelength by at least two orders of magnitude . in a semispace with the lowest refractive index in the structure ( usually air ) , the nfff boundary is about ten wavelengths . moreover , different treatments in the asymptotic method are discussed and numerically compared . an equivalence between the field expressions obtained from the asymptotic method and those from reciprocal theorem is demonstrated . our determination of the nfff boundary will be useful in the fields such as antenna design , remote sensing , and underwater communication .
|
a connection game is a kind of abstract strategy game in which players try to make a specific type of connection with their pieces . in many connection games ,the goal is to connect two opposite sides of a board . in these games , players take turns placing or / and moving pieces until they connect the two sides of the board .hex , twixt , and slither are typical examples of this type of game .however , a connection game can also involve completing a loop ( havannah ) or connecting all the pieces of a color ( lines of action ) .a typical process in studying an abstract strategy game , and in particular a connection game , is to develop an artificial player for it by adapting standard techniques from the game search literature , in particular the classical alpha - beta algorithm or the more recent monte carlo tree search paradigm .these algorithms explore an exponentially large game tree are meaningful when optimal polynomial time algorithms are impossible or unlikely .for instance , tree search algorithms would not be used for nim and shannon s edge switching game which can be played optimally and solved in polynomial time .the complexity class pspace comprizes those problems that can be solved on a turing machine using an amount of space polynomial in the size of the input .the prototypical example of a pspace - complete problem is the quantified boolean formula problem ( qbf ) which can be seen as a generalization of sat allowing for variables to be both existentially and universally quantified . proving that a game is pspace - hard shows that a variety of intricate problems can be encoded via positions of this game .additionally , it is widely believed in complexity theory that if a problem is pspace - hard , then it admits no polynomial time algorithms . for this reason , studying the computational complexity of gamesis a popular research topic .the complexity class of chess and go was determined shortly after the very definition of these classes and other popular games have been classified since then .more recently , we studied the complexity of trick taking card games which notably include bridge , skat , tarot , and whist .connection games have received less attention . besides even and tarjan s proof that shannon s vertex switching game is pspace - complete and reisch s proof that hex is pspace - complete , the only complexity results on connection games that we know of are the pspace - completeness of virtual connection detection in hex , the np - completeness of dominated cell detection in shannon s vertex switching game , as well as an unpublished note showing that a problem related to twixt is np - complete .the two games that we study in this paper rank among the most notable connection games .they were the main topic of multiple master s theses and research articles , and they both gave rise to competitive play .high - level online competitive play takes place on www.littlegolem.net . 
finally , live competitive playcan also be observed between human players at the mind sports olympiads where an international twixt championship has been organized every year since 1997 , as well as between havannah computer players at the icga computer olympiad since 2009 .havannah is a 2-player connection game played on a hexagonal board paved by hexagons .white and black place a stone of their color in turn in an unoccupied cell .stones can not be taken , moved nor removed .two cells are neighbors if they share an edge .a group is a connected component of stones of the same color via the neighbor relation .a player wins if they realize one of the three following different structures : a circular group , called _ ring _ , with at least one cell , possibly empty , inside ; a group linking two corners of the board , called _ bridge _ ; or a group linking three edges of the board , called _fork_. as the length of a game of havannah is polynomially bounded , exploringthe whole game tree can be done with polynomial space , so havannah is in pspace . in our reduction ,the havannah board is large enough that the gadgets are far from the edges and the corners .additionally , the gadgets feature ring threats that are short enough that the bridges and forks winning conditions do not have any influence . before starting the reduction, we define threats and make two observations that will prove useful in the course of the reduction .a _ simple threat _ is defined as a move which threatens to realize a ring on the next move on a unique cell .there are only two kinds of answers to a simple threat : either win on the spot or defend by placing a stone in the cell creating this very threat .double threat _ is defined as a move which threatens to realize a ring on the next move on at least two different cells .we will use _ threat _ as a generic term to encompass both simple and double threats .a _ winning sequence of threats _ is defined as a sequence of simple threats ended by a double threat for one player such that the opponent s forced move never makes a threat .thus , when a player is not threatened and can initiate a winning sequence of threats , they do win . to be more concise , we will denote by , ; , ; ;(, ) the sequence of moves starting with white s move , black s answer , and so on . is optional , for the last move of the sequence might be white s or black s .similarly , , ; , ; ; (, ) denotes the corresponding sequence of moves initiating by black .we will use the following lemmas multiple times : [ lem : simple - threat ] if a player is not threatened , playing a simple threat forces the opponent to answer on the cell of the threat .otherwise , no matter what have played their opponent , placing a stone on the cell of the threat wins the game .[ lem : double - threat ] if a player is not threatened , playing a double threat is winning . the player is not threatened , so their opponent can not win at their turn .let and be two cells of the double threat . if their opponent plays in , the player wins by playing in . if their opponent plays somewhere else , the player wins by playing in .generalized geography ( gg ) is one of the first two - player games to have been proved pspace - complete .it has been used to reduce to multiple games including hex , othello , and amazons . in gg , players take turns moving a token from vertex to vertex . 
if the token is on a vertex , then it can be moved to a vertex neighboring provided has not been visited yet .a player wins when it is their opponent s turn and the oppoent has no legal moves .an instance of gg is a graph and an initial vertex , and asks whether the first player has a winning strategy in the corresponding game .we denote by the set of predecessors of the vertex in , and the set of successors of .a vertex with in - degree and out - degree is called -vertex .the degree of a vertex is the sum of the in - degree and the out - degree , and the degree of is the maximal degree among all vertices of .if is the set of vertices of and is a subset of vertices , then ] is an equivalent instance , since playing in a predecessor of is losing .all edges coming to the initial vertex can be removed to form an equivalent instance .so , is a - , a - , or a -vertex .if , then ,v') ] .if , then player 1 is winning in if and only if player 1 is losing in at least one of the three instances ,v') ] , and ,v''')$ ] .in those three instances , , and are not -vertices since they had in - degree at least 1 in .therefore , we can also assume that is -vertex .we call an instance with an initial -vertex and then only - , - , and -vertices a _ simplified _ instance . in the following subsections we propose gadgets that encode the different parts of a_ simplified _ instance of gg .these gadgets have starting points and ending points .the gadgets are assembled so that the ending point of a gadget coincides with the starting point of the next one .the resulting instance of havannah is such that both players must enter in the gadgets by a starting point and leave it by an ending point otherwise they lose .wires , curves , and crossroads will enable us to encode the edges of the input graph .in the representation of the gadgets , white and black stones form the proper gadget . dashed stones and gray stones are respectively white and black stones setting the context . in the havannah boardwe name the 6 directions : north , north - west , south - west , south , south - east , and north - east according to standard designation .while figures and lemmas are mostly presented from white s point of view , all the gadgets and lemmas work exactly the same way with colors reversed . [ [ the - wire - gadget . ] ] the wire gadget .+ + + + + + + + + + + + + + + + basically , a wire teleports moves : one player plays in a cell and their opponent has to answer in a possibly remote cell . is called the starting point of the wire and is called its ending point .a wire where white prepares a threat and black answers is called a wb - wire ( fig .[ fig : hav - entire - wire ] ) ; conversely , we also have bw - wires .we say that wb - wires and bw - wires are _ opposite _ wires .note that wires can be of arbitrary length and can be curved with 120 angles ( fig .[ fig : hav - curved - wire ] ) . on an empty board, a wire can link any pair of cells as starting and ending point provided they are far enough from each other .[ lem : hav - wire ] if white plays in the starting point of a wb - wire ( fig .[ fig : hav - entire - wire ] ) , and black does not answer by a threat , black is forced to play in the ending point ( possibly with moves at and interleaved ) . if black does not play neither in nor in , then white plays in which makes a double threat in and and wins by lemma [ lem : double - threat ] . if black plays in ( resp . in ) , at the very least white can play in ( resp . 
in ) which forces black to play in by lemma [ lem : simple - threat ] .[ [ the - crossover - gadget . ] ] the crossover gadget .+ + + + + + + + + + + + + + + + + + + + + the input graph of gg might not be planar , so we have to design a crossover gadget to enable two chains of wires to cross .[ fig : hav - crossroad ] displays a crossover gadget , we have a south - west bw - wire with starting point which is linked to a north - east bw - wire with ending point , and a north bw - wire with starting point is linked to a south bw - wire with ending point .[ lem : hav - crossroad ] in a crossover gadget ( fig .[ fig : hav - crossroad ] ) , if white plays in the starting point , black ends up playing in the ending point and if white plays in the starting point , black ends up playing in the ending point . by lemma [ lem : simple - threat ] , if white plays in , black has to play in , forcing white to play in , forcing finally black to play in .if white plays in , again by lemma [ lem : simple - threat ] , black has to play in .note that the south wire is linked to the north wire irrespective of whether the other pair of wires has been used and conversely .that is , in a crossover gadget two paths are completely independent .we now describe the gadgets encoding the vertices .recall from section [ sec : gg ] that simplified gg instances only feature - , - , and -vertices , and a -vertex .one can encode a -vertex with two consecutive opposite wires .thus , we will only present three vertex gadgets , one for -vertices , one for -vertices , and one for the -vertex .[ [ the-21-vertex - gadget . ] ] the ( 2,1)-vertex gadget .+ + + + + + + + + + + + + + + + + + + + + + + + a -vertex gadget receives two wire ending points .if a stone is played on either of those ending points , it should force an answer in the starting point of a third wire .that simulates a vertex with two edges going in and one edge going out .[ lem : hav-21 ] if black plays in one of the two possible starting points and of a -vertex gadget ( fig .[ fig : hav - reentering-21 ] ) , and white does not answer by a threat , white is forced to play in the ending point .assume black plays in and white answers by a move which is not in nor a threat .this move from white has to be either in or in , otherwise , black has a double threat by playing in and wins by lemma [ lem : double - threat ] .suppose white plays in .black plays in with a simple threat in , so white has to play in by lemma [ lem : simple - threat ] . then black has the following winning sequence : b : , ; , ; , ; .black has now a double threat in and and so wins by lemma [ lem : double - threat ] .if white plays in instead of , the argument is similar .if black plays the first move in , the proof that white has to play in is similar . [[ the-1 - 2-vertex - and-02-vertex - gadgets . ] ] the ( 1 - 2)-vertex and ( 0,2)-vertex gadgets. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + a -vertex gadget receives one ending point of a wire ( fig .[ fig : hav-12 ] ) .if a stone is played on this ending point , it should offer the choice to defend either by playing in the starting point of a second wire , or by playing in the starting point of a third wire .that simulates a vertex with one edge going in and two edges going out . 
the -vertex gadget ( or _ starting - vertex _ gadget )can be seen as a -vertex gadget where a stone has already been played on the ending point of the in - edge .the -vertex gadget represents the two possible choices of the first player at the beginning of the game .[ lem : hav-12 ] if black plays in the starting point of a -vertex gadget ( fig .[ fig : hav-12 ] ) , and white does not play a threat , white is forced to play in one of the two ending points and .then , if black does not answer by a threat , they have to play in the other ending point .black plays in .suppose white plays neither in nor in nor a threatening move .then black plays in . by lemma[ lem : simple - threat ] , white has to play in but black wins by playing in the ending point of the wire starting at by lemma [ lem : hav - wire ] .assume white s answer to is to play in . can now be seen as the ending point of the in - wire , so black needs to play in or make a threat by lemma [ lem : hav - wire ] .[ lem : hav - starting - vertex ] if white is forced to play a threat or to open the game in one of the two opening points and of the -vertex gadget ( fig .[ fig : hav - starting - vertex ] ) .then , if black does not play a threat , they are forced to play in the other opening point .let be a simplified instance of gg , and be its number of vertices . being bipartite, we denote by the side of the partition containing , and the other side .player 1 moves the token from vertices of to vertices of and player 2 moves the token from to .we denote by the reduction from gg to havannah .let us describe the construction of .as an example , we provide the reduction from the gg instance from fig . [fig : gg - instance ] in fig .[ fig : hav - reduction ] . .] the initial vertex is encoded by the gadget displayed in fig .[ fig : hav - starting - vertex ] .each player 1 s -vertex is encoded by the -vertex gadget of fig .[ fig : hav-21 ] , and each player 2 s -vertex is encoded by the same gadget in reverse color .each player 1 s -vertex is encoded by the -vertex gadget of fig .[ fig : hav-12 ] , and each player 2 s -vertex is encoded by the same gadget in reverse color .all white s vertex gadgets are aligned and all black s vertex gadgets are aligned on a parallel line . whenever is an edge in , we connect an exit of the vertex gadget representing to an entrance of a gadget encoding using wires and crossover gadgets .let be the number of vertices in , since is of degree 3 , we know that the number of edges is at most .the minimal size in terms of havannah cells for a smallest wire and the size of a crossover are constants .therefore the distance between black s line and white s line is linear in .note that , two wires of opposite colors might be needed to connect two vertex gadgets or a vertex gadget and a crossover .similarly , we can show that the distance between two vertices on black s line or on white s line is constant .[ lem : hav - reentering ] if black reenters a white s -vertex gadget ( fig .[ fig : hav - reentering-21 ] ) , and black has no winning sequence of threats elsewhere , white wins .if black reenters a white s -vertex by playing in , white plays in . 
as black can not initiate a winning sequence , whatever he plays white can defend until black is not threatening anymore .then white plays in or in with a decisive double threat in and .[ hav - main - th ] havannah is pspace - complete .we already mention that havannah pspace and we just presented a polynomial time reduction from a pspace - complete problem .we shall now prove that the reduction is sound , that is : player 1 is winning in if and only if white is winning in .first we show that the players in the game of havannah lose if they make a move which does not correspond to anything in the instance of gg .such a move will be called a _ cheating _ move .the exhaustive list of non cheating moves is : defending a threat , playing at the end of a wire when the opponent had just play at its starting point , choosing which wire starting point or to block when the opponent had just play in ( fig .[ fig : hav-12 ] ) , which forces them to take the other wire , and playing in the exit of a -vertex gadget when the opponent had just play in or in ( fig .[ fig : hav-21 ] ) . in order to conclude by invoking lemma [ lem : hav - wire ] up to corollary [ lem : hav - starting - vertex ], we should show that making a threat is not helpful in all the above situations .note that those lemmas imply the following invariant : while white and black play a legal game of gg , at their turn , a player is threatened or their opponent can initiate a winning sequence of threats .there is only two kinds of places where one can play a threat : the crossroad gadget ( fig .[ fig : hav - crossroad ] ) and the -vertex gadget while already being entered ( fig .[ fig : hav - reentering-21 ] ) .let us start by showing that playing a threat in a crossroad gadget which does not defend a threat , that is , the action was occurring in a different place , is losing .if white plays in then black plays in which is the starting point of a bw wire . and now , they are at least two places where black can initiate a winning sequence of threats , so white loses ( after possibly playing some additional but harmless threats ) . the same holds by reversing the colors or by reversing and , and is not affected by whether or not stones have been played in , , and . if , instead , white plays in , black answers in , white answers in and black plays in , and again black can initiate a winning sequence of threats in two places .if , instead , white plays in , black answers in and again white is losing .if , instead , black plays in , white plays in and black plays in , and now white plays their winning sequence of threats .now , let us show that the threats in the already entered -vertex gadget are harmless .consider now fig .[ fig : hav - reentering-21 ] .if black plays in , white answers in and there is no more threats for black . if black plays in , white answers in .black can threat again in or but white defends in or , respectively , and there are no more threats .note that this does not affect the fact that reentering in the gadget is losing for black .summing up , white and black has to simulate a proper game of gg in the instance .we now show that if a player in the game of havannah has no more move in the corresponding gg instance , they lose . the only non cheating move would be to reenter in a -vertex but it is losing by lemma [ lem : hav - reentering ] .alex randolph s twixt is one of the most popular connection games .it was invented around 1960 and was marketed as soon as in 1962 . 
in his book devoted to connection games , cameron browne describes twixt as one of the most popular and widely marketed of all connection games .we now briefly describe the rules of twixt and refer to moesker s master s thesis for an introduction and a mathematical approach to the strategy , and the description of a possible implementation .twixt is a 2-player connection game played on a go - like board . at their turn , player white and black place a pawn of their color in an unoccupied place .just as in havannah and hex , pawns can not be taken , moved , nor removed .when 2 pawns of the same color are spaced by a knight s move , they are linked by an edge of their color , unless this edge would cross another edge . at each turn ,a player can remove some of their edges to allow for new links .the goal for player white ( resp .black ) is to link top and bottom ( resp .left and right ) sides of the board .note that sometimes , a player could have to choose between two possible edges that intersect each other .the _ pencil and paper _ version twixtpp where the edges of a same color are allowed to cross is also famous and played online . as the length of a game of twixt is polynomiallybounded , exploring the whole tree can be done with polynomial space using a minimax algorithm .therefore twixt is in pspace .mazzoni and watkins have shown that 3-sat could be reduced to single - player twixt , thus showing np - completeness of the variant . while it might be possible to try and adapt their work and obtain a reduction from 3-qbf to standard two - player twixt, we propose a simpler approach based on hex .the pspace - completeness of hex has already been used to show the pspace - completeness of amazons , a well - known territory game .we now present how we construct from an instance of hex an instance of twixt .we can represent a cell of hex by the twixt gadgets displayed in fig .[ fig : twixt - cell ] .let be the size of a side of , fig .[ fig : twixt - board ] shows how a twixt board can be paved by twixt cell gadgets to create a hex board .it is not hard to see from fig .[ fig : tw - empty ] that in each gadget of fig . [ fig : twixt - board ] , move ( resp . ) is dominating for white ( resp .that is , playing is as good for white as any other move of the gadget .we can also see that the moves that are not part of any gadget in fig .[ fig : twixt - board ] are dominated for both players . as a result , if player black ( resp .white ) has a winning strategy in , then player black has a winning strategy in .thus , is won by black if and only if is won by black . therefore determining the winner in twixt is at least as hard as in hex , leading to the desired result .hex board reduced to a twixt board . ]twixt is pspace - complete .we already mentioned that twixt pspace .we presented a polynomial time reduction from a pspace - complete problem .we shall prove that the reduction is sound .observe that the proposed reduction holds both for the classic version of twixt as well as for the _ pencil and paper _ version twixtpp . 
indeed, the reduction does not require the losing player to remove any edge , so it also proves that twixtpp is pspace - hard .this paper establishes the pspace - completeness of two important connection games , havannah and twixt .the proof for twixt is a reduction from hex and applied to twixtpp .the proof for havannah is more involved and is based on the generalized georgraphy problem restricted to bipartite graphs of degree 3 .this havannah reduction only used the loop winning condition , but it is easy to show that havannah without the loop winning condition can simulate hex and is pspace - hard as well . for both reductions ,the size of the resulting game is only linearly larger than the size of the input instance . in lines of action ,each player starts with two groups of pieces and tries to connect all their pieces by moving these pieces and possibly capturing opponent pieces .while the goal of lines of action clearly makes it a connection game , the mechanics distinguishes it from more classical connection games as no pieces are added to the board and existing pieces can be moved or removed . as a result , it is not even clear that lines of action is in pspace .slither is closer to hex but each move actually consists of putting a new stone on the board and possibly moving another one . obtaining a pspace - hardness result for slither is not so easy since the rules allow a player to influence two different areas of the board in a single turn .yngvi bjrnsson , ryan hayward , michael johanson , and jack van rijswijck .dead cell analysis in hex and the shannon game . in adrian bondy , jean fonlupt , jean - luc fouquet , jean - claude fournier , and jorge l. ramrez alfonsn , editors , _ graph theory in paris _ , pages 4559 .springer , 2007 .douard bonnet , florian jamain , and abdallah saffidine . on the complexity of trick - taking card games . in francescarossi , editor , _23rd international joint conference on artificial intelligence ( ijcai ) _ , beijing , china , august 2013 .aaai press .cameron browne , edward powley , daniel whitehouse , simon lucas , peter cowling , philipp rohlfshagen , stephen tavener , diego perez , spyridon samothrakis , and simon colton .a survey of monte carlo tree search methods . , 4(1):143 , march 2012 .erik d. demaine and robert a. hearn .playing games with algorithms : algorithmic combinatorial game theory . in richardj. nowakowski , editor , _ games of no chance iii _ , pages 356 .cambridge university press , 2009 .timothy furtak , masashi kiyomi , takeaki uno , and michael buro .generalized amazons is pspace - complete . in leslie pack kaelbling and alessandro saffiotti , editors , _19th international joint conference on artificial intelligence ( ijcai ) _ , pages 132137 , 2005 .richard j. lorentz .improving monte carlo tree search in havannah . in h.jaap herik , hiroyuki iida , and aske plaat , editors , _ computers and games _ , volume 6515 of _ lecture notes in computer science _ , pages 105115 .springer berlin heidelberg , 2011 .thomas maarup .ex : everything you always wanted to know about hex but were afraid to ask .master s thesis , department of mathematics and computer science , university of southern denmark , odense , denmark , 2005 .fabien teytaud and olivier teytaud . creating an upper - confidence - tree program for havannah . in h.jaap van den herik and pieter spronck , editors , _ advances in computer games _ , volume 6048 of _ lecture notes in computer science _ , pages 6574 .springer berlin heidelberg , 2010 .
|
numerous popular abstract strategy games ranging from hex and havannah to lines of action belong to the class of connection games . still , very few complexity results on such games have been obtained since hex was proved pspace - complete in the early eighties . we study the complexity of two connection games among the most widely played . namely , we prove that havannah and twixt are pspace - complete . the proof for havannah involves a reduction from generalized geography and is based solely on ring - threats to represent the input graph . on the other hand , the reduction for twixt builds up on previous work as it is a straightforward encoding of hex . , connection game , havannah , twixt , generalized geography , hex , pspace
|
-5 mm during the twentieth century chain complexes , their exactness properties , and commutative diagrams involving them pervaded many branches of mathematics , most notably algebraic topology and differential geometry .recently such homological techniques have come to play an important role in a branch of mathematics often thought quite distant from these , numerical analysis .their most significant applications have been to the design and analysis of numerical methods for the solution of partial differential equations .let us consider a general problem , such as a boundary value problem in partial differential equations , as an operator equation : given data in some space find the solution in some space to the problem .a numerical method discretizes this problem through the construction of an operator and data and defines an approximate solution by the equation .of course the numerical method is not likely to be of value unless it is consistent which means that and should be close to and in an appropriate sense .before we speak of solving the original problem , numerically or otherwise , we should first confront the question of whether it is well - posed .that is , given , does a unique exist , and , if so , do small changes of induce small changes in ? the analogous questions for the numerical method , whether given a unique is determined by the discrete equation , and whether small changes in induce small changes in , is the question of _ stability _ of the numerical method .a common paradigm , which can be formalized in many contexts of numerical analysis , is that a method which is consistent and stable is convergent .well - posedness is a central issue in the theory of partial differential equations .of course , we do not expect just any pde problem to be well - posed .well - posedness hinges on structure of the problem which may be elusive or delicate .superficially small changes , for example to the sign of a coefficient or the type of boundary conditions , can certainly destroy well - posedness .the same is true for the stability of numerical methods : it often depends on subtle or elusive properties of the numerical scheme .usually stability reflects some portion of the structure of the original problem that is captured by the numerical scheme .however in many contexts it is not enough that the numerical scheme be close to the original problem in a quantitative sense for it to inherit stability .that is , it may well happen that a consistent method for a well - posed problem is unstable . in this paperwe shall see several examples where the exactness properties of discrete differential complexes and their relation to differential complexes associated with the pde are crucial tools in establishing the stability of numerical methods . in some casesthe homological arguments have served to elucidate or validate methods that had been developed over the preceding decades . in othersthey have pointed the way to stable discretizations of problems for which none were previously known .they will very likely play a similar role in the eventual solution of some formidable open problems in numerical pde , especially for problems with significant geometric content , such as in numerical general relativity . as in other branches of mathematics , in numerical analysis differential complexes serve both to encode key structure concisely and to unify considerations from seemingly very different contexts . 
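the fact that consistency alone does not guarantee stability can already be seen in a classical finite difference example . the following sketch is a standard textbook illustration , not taken from this paper : for the one - dimensional advection equation , the forward - time / centered - space scheme is consistent but unstable , while the first - order upwind scheme is stable under the usual cfl restriction .

```python
import numpy as np

# u_t + a u_x = 0 on a periodic grid, smooth data plus a tiny high-frequency perturbation
a, nx, cfl, nsteps = 1.0, 200, 0.5, 400
dx = 1.0 / nx
dt = cfl * dx / a
x = np.arange(nx) * dx
u0 = np.sin(2 * np.pi * x) + 1e-3 * np.sin(2 * np.pi * 50 * x)

def step_ftcs(u):
    # forward-time / centered-space: consistent with the pde, but unstable
    return u - a * dt / (2 * dx) * (np.roll(u, -1) - np.roll(u, 1))

def step_upwind(u):
    # first-order upwind: consistent and stable for a*dt/dx <= 1
    return u - a * dt / dx * (u - np.roll(u, 1))

u_ftcs, u_up = u0.copy(), u0.copy()
for _ in range(nsteps):
    u_ftcs, u_up = step_ftcs(u_ftcs), step_upwind(u_up)

print("max |u| after %d steps, ftcs   : %.3e" % (nsteps, np.abs(u_ftcs).max()))  # blows up
print("max |u| after %d steps, upwind : %.3e" % (nsteps, np.abs(u_up).max()))    # stays O(1)
```

both schemes approximate the same well - posed problem to within truncation error , yet only the second inherits enough of its structure to be stable ; this is the phenomenon the homological techniques below are designed to control .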
in this paper we shall discuss only finite element methods since , of the major classes of numerical methods for pde , they are the most amenable to rigorous analysis , and have seen the greatest use of differential complexes . but complexes have recently arisen in the study of finite differences , finite volumes , and spectral methods as well . a finite element space on a domain is a function space defined piecewise by a certain assembly procedure which we now recall ; cf . . for simplicity , here we shall restrict to spaces of piecewise polynomials with respect to a triangulation of an -dimensional domain by -simplices with or ( so implicitly we are assuming that is polygonal or is polyhedral ) . on each simplex we require that there be given a function space of _ shape functions _ and a set of _ degrees of freedom _ , i.e. , a set of linear functionals on which form a basis for the dual space . moreover , each degree of freedom is supposed to be associated with some subsimplex of some dimension , i.e. , in three dimensions with a vertex , an edge , a face , or the tetrahedron itself . for a subsimplex which is shared by two simplices in the triangulation , we assume that the corresponding functionals are in one - to - one correspondence . then the finite element space is defined as those functions on whose restriction to each simplex of the triangulation belongs to and for which the corresponding degrees of freedom agree whenever a subsimplex is shared by two simplices . the simplest example is obtained by choosing to be the constant functions and taking as the only degree of freedom on the order moment ( which we associate with itself ) . the resulting finite element space is simply the space of piecewise constant functions with respect to the given triangulation . similarly we could choose ( by we denote the space of polynomial functions on of degree at most ) , and take as degrees of freedom the moments of degrees and also those of degree , . again all the degrees of freedom are associated to itself . this time the finite element space consists of all piecewise linear functions . of course , the construction extends to higher degrees .
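the role of the degrees of freedom as a basis of the dual space can be made concrete with a small sketch . on the unit triangle , take the shape functions 1 , x , y and , as degrees of freedom , the moments of degree 0 and 1 ( the discontinuous linear element just described ) ; inverting the generalized vandermonde matrix of the degrees of freedom applied to the shape functions gives the local interpolation operator determined by those moments . this is only an illustration under these assumptions , not code from the paper ; the monomial integration formula used is the standard one for the unit triangle .

```python
import numpy as np
from math import factorial

# exact integral of x^p * y^q over the unit triangle {x >= 0, y >= 0, x + y <= 1}
tri_int = lambda p, q: factorial(p) * factorial(q) / factorial(p + q + 2)

shapes  = [(0, 0), (1, 0), (0, 1)]   # shape functions 1, x, y (stored as exponent pairs)
moments = [(0, 0), (1, 0), (0, 1)]   # degrees of freedom: v -> int v * x^a * y^b

# generalized vandermonde matrix V[i, j] = (i-th degree of freedom)(j-th shape function)
V = np.array([[tri_int(p + a, q + b) for (p, q) in shapes] for (a, b) in moments])
assert abs(np.linalg.det(V)) > 1e-14     # the dofs form a basis of the dual space

# local interpolant defined by the dofs: the linear function with the same three
# moments as a given function; here the given function is x**2, whose moments are exact
rhs = np.array([tri_int(2 + a, 0 + b) for (a, b) in moments])
coeff = np.linalg.solve(V, rhs)
print("moment interpolant of x^2 in span{1, x, y}:", coeff)   # coefficients of 1, x, y
```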
a more common piecewise linear finite element space occurs if we again choose , but take as degrees of freedom the maps , one associated to each vertex . in this case the assembled finite element space consists of all _ continuous _ piecewise linear functions . more generally we can choose for , and associate to each vertex the evaluation degrees of freedom just mentioned , to each edge the moments on the edge of degree at most , to each face the moments on the face of degree at most , and to each tetrahedron the moments of degree at most . the resulting finite element space , called the _ lagrange finite element _ of degree , consists of all continuous piecewise polynomials of degree at most . figure [ fg : assemb ] shows a mesh of a two dimensional domain and a typical function in the space of lagrange finite elements of degree with respect to this mesh . [ figure [ fg : assemb ] caption : a mesh marked with the locations of the degrees of freedom for lagrange finite elements of degree and a typical such finite element function . ] mnemonic diagrams as in figure [ fg : elts1 ] are often associated to finite element spaces , depicting a single element and a marker for each degree of freedom . [ figure [ fg : elts1 ] caption : element diagrams . first row : discontinuous elements of degrees , , and in two dimensions . second row : lagrange elements of degrees , , and in two dimensions . third and fourth rows : the corresponding elements in three dimensions . ]
next we describe some finite element spaces that can be used to approximate vector - valued functions . for brevity we limit the descriptions to the -dimensional case , but supply diagrams in both and dimensions . of course we may simply take the cartesian product of three copies of one of the previous spaces . for example , the element diagrams shown on the left of figure [ fg : eltsvec ] refer to continuous piecewise linear vector fields in two and three dimensions . more interesting spaces are the _ face elements _ and _ edge elements _ essentially conceived by raviart and thomas in two dimensions and by nedelec in three dimensions . in the lowest order case , the face elements take as shape functions polynomial vector fields of the form where , and , a 4-dimensional subspace of the 12-dimensional space of polynomial vector fields of degree at most . the degrees of freedom are taken to be the order moments of the normal components on the faces of codimension , where is a face and the unit normal to the face . the element diagram is shown in the middle column of figure [ fg : eltsvec ] .
in the lowest order case the shape functions of the edge elements are polynomial vector fields of the form where , which form a 6-dimensional subspace of . the degrees of freedom are the order moments over the edges of the component tangent to the edge , , as indicated on the right of figure [ fg : eltsvec ] . [ figure [ fg : eltsvec ] caption : element diagrams for some finite element approximations to vector fields in two and three dimensions . multiple dots are used as markers to indicate the evaluation of all components of a vector field . arrows are used for normal moments on codimension subsimplices and for tangential components on edges . left : continuous piecewise linear fields . middle : face elements of lowest order . right : edge elements of lowest order . ] each of these spaces can be generalized to arbitrarily high order . for the next higher order face space , the shape functions take the form where and a linear scalar - valued polynomial . this gives a subspace of of dimension , and the degrees of freedom are the moments of degree at most of the normal components on the faces and the moments of degree of all components on the tetrahedron . this element is indicated on the left of figure [ fg : edgeface ] .
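as a quick check on the two lowest - order spaces introduced above ( the 4-dimensional face space and the 6-dimensional edge space ) , the following sketch writes out their shape functions as explicit vector fields , verifies the dimension counts by sampling , and confirms that the divergence , respectively the curl , of these fields is constant . it is only an illustration of the shape - function spaces , not code from any reference .

```python
import numpy as np

rng = np.random.default_rng(0)

# lowest-order face (raviart-thomas) shape functions: v(x) = a + b*x, a in R^3, b in R
rt0 = [lambda x, e=e: e for e in np.eye(3)] + [lambda x: x]            # 4 fields
# lowest-order edge (nedelec) shape functions: v(x) = a + b x x (cross product)
n0  = [lambda x, e=e: e for e in np.eye(3)] + \
      [lambda x, e=e: np.cross(e, x) for e in np.eye(3)]               # 6 fields

def dimension(fields, npts=8):
    """rank of the sampled fields = dimension of their span."""
    pts = rng.random((npts, 3))
    M = np.array([np.concatenate([f(p) for p in pts]) for f in fields])
    return np.linalg.matrix_rank(M)

print("dim of lowest-order face space :", dimension(rt0))   # 4
print("dim of lowest-order edge space :", dimension(n0))    # 6

# divergence of a + b*x is the constant 3b; curl of a + b x x is the constant 2b
h = 1e-5
def div(f, x):
    return sum((f(x + h*e)[i] - f(x - h*e)[i]) / (2*h) for i, e in enumerate(np.eye(3)))
def curl(f, x):
    J = np.array([(f(x + h*e) - f(x - h*e)) / (2*h) for e in np.eye(3)]).T  # J[i,j]=dF_i/dx_j
    return np.array([J[2,1]-J[1,2], J[0,2]-J[2,0], J[1,0]-J[0,1]])

x = rng.random(3)
print("div of the field x       :", round(div(rt0[3], x), 6))        # 3.0
print("curl of the field e1 x x :", np.round(curl(n0[3], x), 6))     # 2*e1
```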
for the second lowest order edge space ,the shape functions take the form with , giving a -dimensional space .the degrees of freedom are the tangential moments of degree at most on the edges ( two per edge ) and the tangential moments of degree on the faces ( two per face ) .this element is indicated on the right of figure [ fg : edgeface ] .-.16in-.20in-.01in-.0 in the choice of the shape functions and the degrees of freedom determine the smoothness of the functions belonging to the assembled finite element space .for example , the lagrange finite element spaces of any degree belong to the sobolev space of functions whose distributial first partial derivatives also belong to ( and even to ) .in fact , the distributional first partial derivative of a continuous piecewise smooth function coincides with its derivative taken piecewise and so belongs to .thus the degrees of freedom we imposed in constructing the lagrange finite elements are sufficient to insure that the assembled finite element space .in fact more is true : for the lagrange finite element space with shape function spaces , we have this says that , in a sense , the degrees of freedom impose exactly the continuity required to belong to , no less and no more .in contrast , the discontinuous piecewise polynomial spaces are subsets of but not of , since their distributional first derivatives involve distributions supported on the interelement boundaries , and so do not belong to . for the vector - valued finite elements there are more possibilities .the face and edge spaces contain discontinuous functions , and so are not contained in .however , for vector fields belonging to one of the face spaces the normal component of the vector field does not jump across interelement boundaries , and this implies , via integration by parts , that the distributional divergence of the function coincides with the divergence taken piecewise .thus the face spaces belong to , the space of vector fields on whose divergence belongs to . indeed ,for these spaces the degrees of freedom impose exactly the continuity of , no less or more .for the edge spaces it can be shown that the tangential components of a vector field do not jump across element boundaries , and this implies that the edge functions belong to , the space of vector fields whose curl belongs to .again the degrees of freedom impose exactly the continuity needed for inclusion in .-5 mm the de rham complex is defined for an arbitrary smooth -manifold . here denotes the space of differential -forms on , i.e. , for and , is an alternating -linear map on the tangent space .the operators denote exterior differentiation .this is is a complex in that the composition of two exterior differentiations always vanishes .moreover , and if the manifold is topologically trivial , then it is exact .if is a domain in , then we may identify its tangent space at any point with . using the euclidean inner product ,the space of linear maps on may be identified by as usual , so may be identified with the space of smooth vector fields on . moreover , the space of alternating bilinear maps on may be identified with by associating to a vector the alternating bilinear map .thus we have an identification of with as well .finally the only alternating trilinear maps on are given by multiples of the determinant map , and so we may identify with . 
in terms of such _ proxy fields_ , the de rham complex becomes alternatively we may consider -based forms and the sequence becomes the finite element spaces constructed above allow us to form discrete analogues of the de rham complex .given some triangulation of , let denote the space of continuous piecewise linear finite elements , the lowest order edge element space , the lowest order face element space , and the space of piecewise constants . then ( since contains all piecewise constant vector fields belonging to and the gradient of a continuous piecewise linear is certainly such a function ) , ( since contains all piecewise constant vector fields belonging to ) , and .thus we have the discrete differential complex this differential complex captures the topology of the domain to the same extent as the de rham complex .in particular , if the domain is topologically trivial , then the sequence is exact .it is convenient to abbreviate the above statement using the element diagrams introduced earlier .thus we will say that the following complex is exact : {000212.eps } } @>{\operatorname{grad } } > > \raise-.15in\hbox{\includegraphics[width=.5in]{000220.eps } } @>{\operatorname{curl } } > > \raise-.25in\hbox{\includegraphics[width=.5in]{000219.eps } } @>{\operatorname{div } } > > \raise-.15in\hbox{\includegraphics[width=.5in]{000209.eps } } @.\to0 \end{cd}\ ] ] by this we mean that if we assemble finite element spaces , , , and using the indicated finite elements and a triangulation of a topologically trivial domain , then the corresponding discrete differential complex is exact .there is another important relationship between the de rham complex and the discrete complex .the defining degrees of freedom determine projections , , and so on .in fact is just the usual interpolant , is the -projection into the piecewise constants , and the projections and onto the edge and face elements are determined by the maintenance of the appropriate moments. it can be checked , based on stokes theorem , that the following diagram commutes . 
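A plausible rendering of the complexes and of the commuting diagram announced in the preceding sentence is the following; the labels $W_h$, $Q_h$, $V_h$, $P_h$ for the Lagrange, lowest-order edge, lowest-order face, and piecewise-constant spaces, and $\Pi_h$ for the projections defined by the degrees of freedom, are ours and not the paper's notation. The proxy-field de Rham complex reads

\[
C^\infty(\Omega)\xrightarrow{\ \operatorname{grad}\ }C^\infty(\Omega;\mathbb{R}^3)\xrightarrow{\ \operatorname{curl}\ }C^\infty(\Omega;\mathbb{R}^3)\xrightarrow{\ \operatorname{div}\ }C^\infty(\Omega)\to 0,
\]

its $L^2$-based version is

\[
H^1(\Omega)\xrightarrow{\ \operatorname{grad}\ }H(\operatorname{curl};\Omega)\xrightarrow{\ \operatorname{curl}\ }H(\operatorname{div};\Omega)\xrightarrow{\ \operatorname{div}\ }L^2(\Omega)\to 0,
\]

and the discrete analogue built from the finite element spaces is

\[
W_h\xrightarrow{\ \operatorname{grad}\ }Q_h\xrightarrow{\ \operatorname{curl}\ }V_h\xrightarrow{\ \operatorname{div}\ }P_h\to 0.
\]

The commuting diagram linking the two presumably has the form

\[
\begin{array}{ccccccc}
H^1(\Omega)&\xrightarrow{\operatorname{grad}}&H(\operatorname{curl};\Omega)&\xrightarrow{\operatorname{curl}}&H(\operatorname{div};\Omega)&\xrightarrow{\operatorname{div}}&L^2(\Omega)\\[2pt]
\big\downarrow{\scriptstyle\Pi_h^0}&&\big\downarrow{\scriptstyle\Pi_h^1}&&\big\downarrow{\scriptstyle\Pi_h^2}&&\big\downarrow{\scriptstyle\Pi_h^3}\\[2pt]
W_h&\xrightarrow{\operatorname{grad}}&Q_h&\xrightarrow{\operatorname{curl}}&V_h&\xrightarrow{\operatorname{div}}&P_h
\end{array}
\]

(strictly speaking, the canonical interpolants require slightly more smoothness than membership in these Sobolev spaces, so the vertical maps should be understood on suitably smooth fields).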
finite element spaces appearing in this diagram , with one degree of freedom for each vertex for , for each edge for , for each face for , and for each simplex for , are highly geometrical .in fact , recalling the identifications between fields and differential forms , we may view these spaces as spaces of piecewise smooth differential forms .they were in fact first constructed in this context , without any thought of finite elements or numerical methods , by whitney .the spaces were reinvented , one - by - one , as finite element spaces in response to the needs of various numerical problems , and the properties which are summarized in the commutative diagram above were slowly rediscovered as needed to analyze the resulting numerical methods .the connection between low order edge and face finite elements and whitney forms was first realized by bossavit .analogous statements hold for higher order lagrange , edge , face , and discontinuous finite elements .for example , the following diagram commutes and has exact rows : {000213.eps } } @>{\operatorname{grad } } > > \raise-.15in\hbox{\includegraphics[width=.5in]{000224.eps } } @>{\operatorname{curl } } > > \raise-.25in\hbox{\includegraphics[width=.5in]{000222.eps } } @>{\operatorname{div } } > > \raise-.15in\hbox{\includegraphics[width=.5in]{000210.eps } } @.\to 0 \end{cd}\ ] ] we shall see many other discrete differential complexes below .-5 mm consider first the solution of the dirichlet problem for poisson s equation on a domain in : the solution can be characterized as the minimizer of the energy functional over the sobolev space ( consisting of functions vanishing on ) , or as the solution of the weak problem : find such that we may define an approximate solution by minimizing the dirichlet integral over a finite dimensional subspace of ; this is the classical ritz method .equivalently , we may use the galerkin method , in which is determined by the equations after choice of a basis in this leads to a system of linear algebraic equations , and is computable .let denote the discrete solution operator .then it is easy to check that is bounded as a linear operator from to by a constant that depends only on the domain ( and , in particular , does nt increase if the space is enriched ) .this says that the galerkin method is _stable_. a consequence is the _quasioptimality estimate _ for some constant depending only on the domain .note that there is no restriction on the subspace to obtain this estimate .galerkin s method for a coercive elliptic problem is always stable and convergence depends only on the approximation properties of the subspace .a natural choice for is the lagrange finite element space of some degree with respect to some regular simplicial mesh of maximal element size , in which case galerkin s method is a standard finite element method . in this casethe right hand side of is provided that is sufficiently smooth .next consider the related eigenvalue problem , which arises in the determination of the fundamental frequencies of a drum .that is , we seek standing wave solutions to the wave equation on some bounded domain which vanish on . 
assuming that the tension and density of the drum membrane are unity, these solutions have the form $w(x,t)=\alpha\,u(x)\cos(\sqrt{\lambda}\,t+\phi)$, where $\alpha$ and $\phi$ are constants and $u$ and $\lambda$ satisfy the eigenvalue problem
\[
-\Delta u=\lambda u\ \text{in }\Omega,\qquad u=0\ \text{on }\partial\Omega.
\]
the eigenvalues form a sequence of positive numbers tending to infinity. the square roots of the eigenvalues determine the fundamental frequencies of the drum and the eigenfunctions give the corresponding fundamental modes. the eigenvalues and eigenfunctions are characterized variationally as the critical values and critical points of the rayleigh quotient
\[
\mathcal R(u)=\frac{\int_\Omega|\operatorname{grad}u(x)|^2\,dx}{\int_\Omega u(x)^2\,dx}
\]
over the nonzero elements of the sobolev space $\mathring H^1(\Omega)$. the classical rayleigh-ritz method for the approximation of eigenvalue problems determines approximate eigenvalues and eigenfunctions as the critical values and points of the restriction of $\mathcal R$ to the nonzero elements of some finite dimensional subspace $W_h$ of $\mathring H^1(\Omega)$. equivalently, we can write the eigenvalue problem in weak form: find $\lambda\in\mathbb R$ and nonzero $u\in\mathring H^1(\Omega)$ such that
\[
\int_\Omega\operatorname{grad}u\cdot\operatorname{grad}v\,dx=\lambda\int_\Omega u\,v\,dx\quad\text{for all }v\in\mathring H^1(\Omega).
\]
the galerkin approximation of the eigenvalue problem, which is equivalent to the rayleigh-ritz method, seeks $\lambda_h\in\mathbb R$ and nonzero $u_h\in W_h$ such that
\[
\int_\Omega\operatorname{grad}u_h\cdot\operatorname{grad}v\,dx=\lambda_h\int_\Omega u_h\,v\,dx\quad\text{for all }v\in W_h.
\]
we now discuss the convergence of this method. let $\lambda$ denote one of the eigenvalues of the problem. in the interest of simplicity we assume that $\lambda$ is a simple eigenvalue, so the corresponding eigenfunction $u$ is uniquely determined up to sign by the normalization $\|u\|_{L^2}=1$. similarly let $\lambda_h$ and $u_h$ denote the corresponding eigenvalue and eigenfunction of the discrete problem. it can then be proved (see, e.g., for much more general results) that there exists a constant $C$ such that
\[
\|u-u_h\|_{H^1}\le C\inf_{v\in W_h}\|u-v\|_{H^1},\qquad |\lambda-\lambda_h|\le C\,\|u-u_h\|_{H^1}^2,
\]
i.e., the eigenfunction approximation is quasioptimal and the eigenvalue error is bounded by the square of the eigenfunction error. again there is no restriction on the space $W_h$. figure [fg:lapeig] reports on the computation of the eigenvalues of the laplacian on an elliptical domain using lagrange finite elements.

now consider an analogous problem, the computation of the resonant frequencies of an electromagnetic cavity occupying a region $\Omega$. in this case we wish to find standing wave solutions of maxwell's equations. if we take the electric permittivity and the magnetic permeability to be unity and assume a lossless cavity with perfectly conducting boundary, we are led to the following eigenvalue problem for the electric field: find nonzero $E$ and $\lambda$ such that
\[
\operatorname{curl}\operatorname{curl}E=\lambda E,\quad\operatorname{div}E=0\ \text{in }\Omega,\qquad E\times n=0\ \text{on }\partial\Omega.
\]
this is again an elliptic eigenvalue problem and the eigenvalues form a sequence of positive numbers tending to infinity. the divergence constraint is nearly redundant in this eigenvalue problem. indeed if $\operatorname{curl}\operatorname{curl}E=\lambda E$ for $\lambda\ne0$, then $\operatorname{div}E=\lambda^{-1}\operatorname{div}\operatorname{curl}\operatorname{curl}E=0$, since the divergence of a curl vanishes. thus the eigenvalue problem without the divergence constraint has the same eigenvalues and eigenfunctions as the constrained one, except that it also admits $\lambda=0$ as an eigenvalue, and the corresponding eigenspace is infinite-dimensional (it contains the gradients of all smooth functions vanishing on the boundary of $\Omega$). the eigenvalues and eigenfunctions are now critical points and values of the rayleigh quotient
\[
\frac{\int_\Omega|\operatorname{curl}E|^2\,dx}{\int_\Omega|E|^2\,dx}
\]
over the space of nonzero fields for which both of the above integrals exist and are finite and which have vanishing tangential component on the boundary (i.e., $E\times n=0$ on $\partial\Omega$).
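As an aside, the Galerkin and Rayleigh-Ritz procedures described above can be illustrated in the simplest possible setting, piecewise linear elements for the one-dimensional Laplacian on the unit interval. This is an illustration only, not the two-dimensional computations reported in the paper, and all names and parameters below are our own.

```python
import numpy as np
from scipy.linalg import eigh, solve

def p1_matrices(n):
    """Stiffness K and mass M for P1 elements on a uniform mesh of n cells of (0,1),
    restricted to the n-1 interior nodes (homogeneous Dirichlet conditions)."""
    h = 1.0 / n
    m = n - 1
    K = np.zeros((m, m))
    M = np.zeros((m, m))
    for i in range(m):
        K[i, i] = 2.0 / h
        M[i, i] = 4.0 * h / 6.0
        if i + 1 < m:
            K[i, i + 1] = K[i + 1, i] = -1.0 / h
            M[i, i + 1] = M[i + 1, i] = h / 6.0
    return K, M

n = 200
K, M = p1_matrices(n)

# Galerkin solution of the source problem -u'' = f with f = 1, u(0) = u(1) = 0:
h = 1.0 / n
b = np.full(n - 1, h)          # load vector for f = 1
u = solve(K, b)                # discrete solution at the interior nodes
print("max of u_h:", u.max(), "(the exact solution x(1-x)/2 has max 1/8 = 0.125)")

# Rayleigh-Ritz / Galerkin eigenvalue approximation: K u = lambda M u.
lam, _ = eigh(K, M)
print("first 5 discrete eigenvalues:", lam[:5])
print("exact eigenvalues (k*pi)^2  :", (np.arange(1, 6) * np.pi) ** 2)
```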
in figure [fg:nodeig] we show the result of approximating a two-dimensional version of this eigenvalue problem using the rayleigh-ritz method or, equivalently, the galerkin method with continuous piecewise linear vector fields whose tangential components vanish on the boundary (the first element depicted in figure [fg:eltsvec]). for $\Omega$ we take a square of side length $\pi$, in which case the nonzero eigenvalues are known to be all numbers of the form $m^2+n^2$ with $m$ and $n$ nonnegative integers not both zero, and the corresponding eigenfunctions are explicitly known trigonometric vector fields. for the mesh pictured, we find that the computed eigenvalues have no tendency to cluster near the integers which are the exact eigenvalues. thus this numerical method is useless: the computed eigenvalues bear no relation to the true eigenvalues! the analogue of the error estimate above is surely not true.

[figure fg:nodeig: the plot shows the first 73 eigenvalues computed with piecewise linear finite elements for the resonant cavity problem on the square, using the mesh shown. they bear no relation to the exact eigenvalues $1,1,2,4,4,5,5,\dots$ indicated by the horizontal lines.]

if instead we choose the lowest order edge elements as the finite element space (figure [fg:eltsvec], top right), we get very different results. using the same mesh, a sizable number of the computed eigenvalues turn out to be zero (to within round-off), and the subsequent eigenvalues are excellent approximations of the exact eigenvalues. see figure [fg:edgeeig].

[figure fg:edgeeig: the first plot shows the first 100 positive eigenvalues for the resonant cavity problem on the square computed with lowest order edge elements using the mesh of figure fg:nodeig; the inset focuses on the smallest eigenvalues, for which the error is smallest. the second plot shows the vector field associated to the third positive eigenvalue.]

the striking difference between the behavior of the continuous piecewise linear finite elements and the edge elements for the resonant cavity problem is a question of stability. we shall return to this below, after examining stability in a simpler context.

consider now the dirichlet problem $-\operatorname{div}(\kappa\operatorname{grad}u)=f$ in $\Omega$, $u=0$ on $\partial\Omega$, where $\Omega$ is a domain in $\mathbb R^3$ and the coefficient $\kappa$ is a symmetric positive definite matrix at each point. we may again characterize $u$ as the minimizer of an energy functional and use the ritz method. this procedure is always stable. however, for some purposes it is preferable to work with the equivalent first order system in which the flux $\sigma=\kappa\operatorname{grad}u$ is introduced as an independent unknown. the pair $(\sigma,u)$ is then characterized variationally as the unique critical point of a functional over $H(\operatorname{div};\Omega)\times L^2(\Omega)$. note that $(\sigma,u)$ is a saddle-point of this functional, not an extremum. numerical discretizations based on such saddle-point variational principles are called _mixed methods_. it is worth interpreting the system in the language of differential forms, because this brings some insight. the function $u$ is a 0-form, and the operation $\operatorname{grad}$ is just exterior differentiation. the vector field $\sigma$ is a proxy for a 2-form and the operation $\operatorname{div}$ is again exterior differentiation. the loading function $f$ is the proxy for a 3-form.
since is the proxy for a -form , it must be that the operation on differential forms that corresponds to multiplication by takes -forms to -forms .in fact , if we untangle the identifications , we find that multiplication by is a hodge star operation .a hodge star operator defines an isomorphism of onto . to determine a particular such operator, we must define an inner product on the tangent space at each point of .the positive definite matrix does exactly that .many of the partial differential equations of mathematical physics admit similar interpretations in terms of differential forms . for a discussion of this in the context of discretization , see . a natural approach to discretization of the mixed variational principleis to choose subspaces , and seek a critical point .this is of course equivalent to a galerkin method and leads to a system of linear algebraic equations .however in this case , _ stability is not automatic_. it can happen that the discrete system is singular , or more commonly , that the norm of the discrete solution operator grows unboundedly as the mesh is refined . in a fundamental paper , brezzi established two conditions that together are sufficient ( and essentially necessary ) for stability .brezzi s theorem applied to a wide class of saddle - point problems , but for simplicity we will state the stability conditions for the saddle - point problem associated to the functional .* there exists such that for all such that for all .* there exists such that for all there exists nonzero satisfying * theorem ( brezzi ) * _ if the stability conditions ( s1 ) and ( s2 ) are satisfied , then admits a unique critical point over , the solution operator is bounded , and the quasioptimal estimate holds with depending on and ._ the stability conditions of brezzi strongly limit the choice of the mixed finite element spaces and .condition ( s1 ) is satisfied if the indicated functions , those whose divergence is orthogonal to , are in fact divergence - free .( in practice , this is nearly the only way it is satisfied . )this certainly holds if , and so such as inclusion is a common design principle of mixed finite element spaces .on the other hand , condition ( s2 ) is most easily satisfied if , because in this case , given , we can choose with , so , and the second condition will be satisfied as long as we can insure that . in short , we need to know that maps onto and that admits a bounded one - sided inverse .the face elements of raviart - thomas and nedelec were designed to satisfy both these conditions .specifically , let again denote the space of face elements of lowest degree ( whose element diagram is shown in the middle of the second row of figure [ fg : eltsvec ] ) , and the space of piecewise constants . in , a space of discrete -forms , rather than in a space of -forms , since is a -form .the resolution is through a hodge star operator , this time formed with respect to the euclidean inner product on . in the mixed method is a discrete -form , approximating the image of under this star operator . ]we know that so these elements are admissable for the mixed variational principle .moreover , we have , so ( s1 ) holds . to verify ( s2 ) , we refer to the commutative diagram . given , we can solve the poisson equation and take to obtain a function with and .now let .then where we have used the commutativity and the fact that . 
moreover , where we used the boundedness of on .this shows that and establishes a bound on the one - sided inverse , and so verifies ( s2 ) .of course , the same argument shows the stability of a mixed method based on higher order face elements as well .thus we see that the stability of the mixed finite element method depends on the properties of the spaces and encoded in the rightmost square of the commutative diagram .now let us return to the resonant cavity eigenvalue problem for which we explored the galerkin method : find , such that we saw that if is taken to be a space of edge elements this method gives good results in that the positive eigenvalues of the discrete problem are good approximations for the positive eigenvalues of the continuous problem .however , the simple choice of lagrange finite elements did not give good results .we now explain the good performance of the edge elements based on the middle square of the commutative diagram .following boffi et .al we set and introduce the following mixed discrete eigenvalue problem : find , such that it is then easy to verify that if , is a solution to with , then , is a solution to , and if , is a solution to then and , is a solution to . in short , the two problems are equivalent except that the former admits a zero eigenspace which the mixed formulation suppresses .as explained in , the accuracy of the mixed eigenvalue problem hinges on the stability of the corresponding mixed source problem .this is a saddle - point problem of the sort studied by brezzi , and so stability depends on conditions analogous to ( s1 ) and ( s2 ) .the proof of these conditions in case is the space of edge elements follows , as in the preceding stability verification , from surjectivity and commutativity properties encoded in the diagram .the diagram can also be used to explain the zero eigenspace computed with edge elements . recall that in the case of the mesh shown in figure [ fg : nodeig ] , this space had dimension .in fact , this eigenspace is simply the null space of the curl operator restricted to . referring again to the commutative diagram , this is the gradient of the space of linear lagrange elements vanishing on the boundary .its dimension is therefore exactly the number of interior nodes of the mesh .-5 mm let denote the space of symmetric matrices . given a volumetric loading density , the system of linearized elasticity determines the displacement field and the stress field induced in the elastic domain by the equations together with boundary conditions such as on . here is the symmetric part of the matrix , and the elasticity tensor is a symmetric positive definite linear operator describing the particular elastic material , possibly varying from point to point .the solution may be characterized variationally as a saddle - point of the hellinger - reissner functional over ( i.e. , is sought in the space of square - integrable symmetric - matrix - valued functions whose divergence by rows is square - integrable , and is sought among all square - integrable vector fields ) . 
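The elasticity equations and the Hellinger-Reissner functional referred to just above are presumably, up to sign conventions, the standard ones; writing $A$ for the compliance tensor (the inverse of the elasticity tensor), a plausible rendering is

\[
A\sigma=\epsilon(u),\qquad \operatorname{div}\sigma=f\ \text{in }\Omega,\qquad u=0\ \text{on }\partial\Omega,
\]
\[
\mathcal J(\sigma,u)=\int_\Omega\Bigl(\tfrac12A\sigma:\sigma+\operatorname{div}\sigma\cdot u-f\cdot u\Bigr)\,dx,\qquad
(\sigma,u)\in H(\operatorname{div};\Omega;\mathbb S)\times L^2(\Omega;\mathbb R^3),
\]

whose saddle point recovers the stress-displacement pair $(\sigma,u)$.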
for a mixed finite element method, we need to specify finite element subspaces and and restrict the domain of the variational problem .of course the spaces must be carefully designed if the mixed method is to be stable : the analogues of the stability conditions ( s1 ) and ( s2 ) must be satisfied .the functional is quite similar in appearance to and so it might be expected that the mixed finite elements developed for the latter ( the face elements for and discontinuous elements for ) could be adapted to the case of elasticity .in fact , the requirement of symmetry of the stress tensor and , correspondingly , the replacement of the gradient by the symmetric gradient , changes the structure significantly .four decades of searching for mixed finite elements for elasticity beginning in the 1960s did not yield any stable elements with polynomial shape functions . using discrete differential complexes ,r. winther and the author recently developed the first such elements for elasticity problems in two dimensions .( the three - dimensional case remains open . ) for elasticity , the displacement and stress fields can not be naturally interpreted as differential forms and the relevant differential complex is not the de rham complex . in three dimensionsit is instead the _ elasticity complex _ : here the operator is a _ second order differential operator _ which acts on a symmetric matrix field by first replacing each row with its curl and then replacing each column with its curl to obtain another symmetric matrix field .the resolved space is the six - dimensional space of infinitesimal rigid motions , i.e. , the same space of linear polynomials which arose as the shape functions for the lowest order edge elements .if the domain is topologically trivial , this complex is exact .although it involves a second order differential operator , and so looks quite different from the de rham complex , eastwood recently pointed out that it can be derived from the de rham complex via a general construction known as the bernstein - gelfand - gelfand resolution . in the lowest order case ,the finite elements we introduced in , for which the element diagrams can be seen in figure [ fg : aw ] , use discontinuous piecewise linear vector fields for the displacement field and a piecewise polynomial space which we shall now describe for the stress field .the shape functions on an arbitrary triangle are given by which is a -dimensional space consisting of all quadratic symmetric matrix fields on together with the divergence - free cubic fields .the degrees of freedom are * the values of three components of at each vertex of ( 9 degrees of freedom ) * the values of the moments of degree and of the two components of on each edge of ( 12 degrees of freedom ) * the value of the three components of the moment of degree of on ( 3 degrees of freedom ) note that these degrees of freedom are enough to ensure continuity of across element faces , and so will furnish a finite element subspace of .the continuity is not however , the minimal needed for inclusion in .the degrees of freedom also enforce continuity at the vertices , which is not required for membership in .for various reasons , it would be useful to have a mixed finite element for elasticity that does not use vertex degrees of freedom .but , as we remark below , this is not possible if we restrict to polynomial shape functions . 
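In symbols, the three-dimensional elasticity complex described in the preceding paragraph presumably reads

\[
0\to\mathbb{RM}\to C^\infty(\Omega;\mathbb R^3)\xrightarrow{\ \epsilon\ }C^\infty(\Omega;\mathbb S)\xrightarrow{\ J\ }C^\infty(\Omega;\mathbb S)\xrightarrow{\ \operatorname{div}\ }C^\infty(\Omega;\mathbb R^3)\to0,
\]

where the notation is ours: $\mathbb{RM}$ denotes the six-dimensional space of infinitesimal rigid motions, $\epsilon$ the symmetric gradient, and $J$ the second-order operator that takes the curl of each row and then the curl of each column of a symmetric matrix field. As stated above, the complex is exact when the domain is topologically trivial.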
in order to have a well - defined finite element, we must verify that the degrees of freedom form a basis for the dual space of .we include this verification since it illustrates an aspect of the role of the elasticity complex .since , we need only show that if all the degrees of freedom vanish for some , then .now varies cubically along each edge , vanishes at the endpoints , and has vanishing moments of degree and . therefore .letting , a linear vector field on , we get by integration by parts that since the integral of vanishes as well as .thus is divergence - free . in view of the exactness of the elasticity complex , for some smooth function .since all the second partial derivatives of belong to , . adjusting by an element of ( the null space of ) , we may take to vanish at the vertices .now on each edge , whence is identically zero on .this implies that the gradient of vanishes at the vertices . since on each edge ( with a unit vector tangent to the edge ), we conclude that vanishes identically on as well .since has degree at most , it must vanish identically .let denote the projection associated with the supplied degrees of freedom , and the -projection .for any triangle , , and , we have the degrees of freedom entering the definition of ensure that the right hand side vanishes , and from this we obtain the commutativity which is essential for stability .( actually a technical difficulty arises here , since as given is not bounded on .see for the resolution . )note that , by their definitions , and , using the commutativity , we have , i.e. , .035 in is exact . to complete this to a discrete analogue of the elasticity complex, we define to be the inverse image of under .then is exactly the space of piecewise quintic polynomials which are at the vertices of the meshes .this is in fact a well - known finite element space , called the hermite quintic or argyris space , developed for solving order partial differential equations ( for which the inclusion in and therefore continuity is required ) .the shape functions are and the degrees of freedom are the values of the function and all its first and second partial derivatives at the vertices and the integrals of the normal derivatives along edges .we then have a _ discrete elasticity complex _ or , diagrammatically , {000237.eps } } @>j > > \raise-.19in\hbox{\includegraphics[width=.5in]{000233.eps } } @>{\operatorname{div } } > > \raise-.11in\hbox{\includegraphics[width=.5in]{000242.eps } } @.\to0 .\end{cd}\ ] ] moreover this sequence is exact and is coupled to the two - dimensional elasticity sequence via a commuting diagram : the right half of this diagram encodes the information necessary to establish the stability of our mixed finite element method .the hermite quintic finite elements arose naturally from our mixed finite elements to complete the commutative diagram .had they not been long known , we could have used this procedure to devise a finite element space contained in .in fact , on close scrutiny we can see that any stable mixed finite elements for elasticity with polynomial shape functions will give rise to a finite element space with polynomial shape functions contained in . 
however , it is known that such spaces are difficult to construct and complicated .in fact , it can be proved that an finite element space must utilize shape functions of degree at least and the first and second partial derivatives at the vertices must be among the degrees of freedom .this helps explain why mixed finite elements for elasticity have proven so hard to devise . in particular , we can rigorously establish the stress elements must involve polynomials of degree , and that vertex degrees of freedom are unavoidable .in addition to the element just described , elements of all greater orders are also introduced in .the elements of next higher order can be seen as the final two elements in this discrete elasticity complex .{000234.eps } } @>j > > \raise-.19in\hbox{\includegraphics[width=.5in]{000235.eps } } @>{\operatorname{div } } > > \raise-.11in\hbox{\includegraphics[width=.5in]{000236.eps } } @.\to0 .\end{cd}\ ] ] it is also possible to simplify the lowest order element slightly . to do this we reduce the displacement space from piecewise linear vector fields to piecewise rigid motions , and we replace the stress space with the inverse image under the divergence of the reduced displacement space .this leads to a stable element shown in this exact sequence : {000237.eps } } @>j > > \raise-.19in\hbox{\includegraphics[width=.5in]{000238.eps } } @>{\operatorname{div } } > > \raise-.11in\hbox{\includegraphics[width=.5in]{000239.eps } } @.\to0 .\end{cd}\ ] ] because of the unavoidable complexity of finite elements , practitioners solving order equations often resort to _ nonconforming _ finite element approximations of .this means that the finite element space does not belong to in that the function or the normal derivative may jump across element boundaries , but the spaces are designed so that jumps are small enough in some sense ( e.g. , on average ) . the error analysis is more complicated for nonconforming elements , since in addition to stability and approximation properties of the finite element space , one must analyze the _ consistency error _ arising from the jumps in the finite elements . in winther andthe author investigated the the possibility of nonconforming mixed finite elements for elasticity , which , however are stable and convergent , and developed two such elements .these are related to nonconforming elements via nonconforming discrete elasticity complexes , two of which are pictured here : {000240.eps } } @>j > > \raise-.22in\hbox{\includegraphics[width=.5in]{000241.eps } } @>{\operatorname{div } } > > \raise-.13in\hbox{\includegraphics[width=.5in]{000242.eps } } @.\to0 . \\ \\\mathbb p_1\hookrightarrow\,@. \raise-.2in\hbox{\includegraphics[width=.65in]{000240.eps } } @>j > > \raise-.22in\hbox{\includegraphics[width=.5in]{000243.eps } } @>{\operatorname{div } } > > \raise-.13in\hbox{\includegraphics[width=.5in]{000239.eps } } @.\to0 .\end{cd}\ ] ] in both cases the shape function space for the stress is contained between and .the nonconforming finite element depicted in these diagrams was developed for certain order problems in .note the nonconforming mixed elasticity elements are significantly simpler than the conforming ones ( and , in particular , do nt require vertex degrees of freedom ) .aa d. n. arnold & r. winther , mixed finite elements for elasticity , _ numer ._ , 92(2001 ) , 401419 .d. n. arnold & r. winther , nonconforming mixed finite elements for elasticity , _ math .models methods appl ._ , to appear . i. babuka & j. 
osborn, eigenvalue problems, in: _handbook of numerical analysis_, vol. ii, p. g. ciarlet & j. l. lions, eds., elsevier, 1991, 641–788.
d. boffi, p. fernandes, l. gastaldi & i. perugia, computational models of electromagnetic resonators: analysis of edge element approximation, _siam j. numer. anal._, 36 (1999), 1264–1290.
a. bossavit, whitney forms: a class of finite elements for three-dimensional computations in electromagnetism, _ieee proc. a_, 135 (1988), 493–500.
f. brezzi, on the existence, uniqueness and approximation of saddle point problems arising from lagrange multipliers, _rev. française automat. recherche opérationnelle sér. rouge anal. numér._, 8 (1974), 129–151.
p. g. ciarlet, _the finite element method for elliptic problems_, north-holland, 1978.
m. eastwood, a complex from linear elasticity, _rend. circ. mat. palermo (2) suppl._, 63 (2000), 23–29.
r. hiptmair, finite elements in computational electromagnetism, _acta numerica_, 11 (2002), 237–340.
j.-c. nédélec, mixed finite elements in $\mathbb{R}^3$, _numer. math._, 35 (1980), 315–341.
t. k. nilsen, x.-c. tai & r. winther, a robust nonconforming $h^2$-element, _math. comp._, 70 (2001), 489–505.
p. a. raviart & j. m. thomas, a mixed finite element method for second order elliptic problems, springer lecture notes in mathematics vol. 606, springer-verlag, 1977, 292–315.
h. whitney, _geometric integration theory_, princeton university press, 1957.
a. ženíšek, a general theorem on triangular finite elements, _rev. française automat. recherche opérationnelle sér. rouge anal. numér._, 8 (1974), 119–127.
|
differential complexes such as the de rham complex have recently come to play an important role in the design and analysis of numerical methods for partial differential equations . the design of stable discretizations of systems of partial differential equations often hinges on capturing subtle aspects of the structure of the system in the discretization . in many cases the differential geometric structure captured by a differential complex has proven to be a key element , and a discrete differential complex which is appropriately related to the original complex is essential . this new geometric viewpoint has provided a unifying understanding of a variety of innovative numerical methods developed over recent decades and pointed the way to stable discretizations of problems for which none were previously known , and it appears likely to play an important role in attacking some currently intractable problems in numerical pde . 4.5 mm * 2000 mathematics subject classification : * 65n12 . * keywords and phrases : * finite element , numerical stability , differential complex .
|
richard d. gill , a statistician , on the basis that probability and statistics have much to do with quantum mechanics ( qm ) , and by his own declaration fascinated by the exotica of foundations disputes , has called for increased attention to these matters by his profession. to this purpose , he himself has published studies that translate some of the current notions oft seen in the foundations of qm into the jargon and style of his discipline. so far gill s results have tended to support the conventional wisdom , the majority viewpoint held by , for example , several who have done experiments that they credit even with extending the mystical facets of qm right into the realm of practical applications ; e.g. , quantum computing , teleportation and the like .in addition , gill has criticized the views of those who are distressed by the implications of modern foundations orthodoxy of qm , specifically and particularly that the non - locality promised by john bell violates the order of cause and effect .some of their efforts are based on a point first seen , to this writer s best information , by jaynes : bell misapplied bayes sformula. an independent study in this vein initiated by accardi , as an `` anti - bellist , '' led to a local computer simulation of epr - b experiments that he claimed exhibit the so - called non - local statistics of qm. accardi s simulation fascinated gill , who found the tactic meritorious but the results so far unconvincing , and so challenged accardi to find a protocol that is beyond dispute .convinced that it ca nt be done , and to enliven the matter , gill offered a 3000 euro reward , essentially a bet , to accardi should he succeed. the present writer is also one long sceptical of arguments that non - locality is logically well founded. he holds the view that some essential feature is being overlooked , much as von neumann overlooked crucial contrary physics details in formulating his `` theorem '' to the same final effect as bell s .this writer s work led to a series of arguments indicating that bell s argumentation must contain a flaw. eventually this culminated in a study in which all the results from generic experiments credited with verifying bell s results were calculated using only principles of classical physics , without reference to anything intimating that non - locality was in play. in spite of a vigorous e - mail defense , these calculations failed to impress gill , in part , apparently , for lack of clarity on how non - locality was precluded .gill continued to insist that no protocol satisfying certain constraints that he formulated to enforce locality , could be envisioned .it is the purpose of this note to present an extention of these calculations in the form of a simulation algorithm or protocol conforming with gill s criteria . in the following sectionthe conditions set out by gill will be presented ; thereafter , the simulation we propose will be described .finally , we delineate the technical details that make our simulation possible .the purpose of gill s protocol is to preclude absolutely any structure that could covertly exploit non - locality .this is achieved by parceling out subroutines simulating the source and the two detector stations to three separate computers connected by wires conceptually equipped with diodes that allow only one way signaling . 
in this way, one can be sure that there is no feedback in the logic that mimics non-local interaction. this writer holds these specifications as a useful contribution to the discussion of this matter because, for lack of a unique definition of ``non-locality,'' alternates that are, e.g., actually only indirect statistical congruences, have clouded the matter. gill's specifications are as follows, we quote:

1. computer *o*, which we call the _source_, sends information to computers *x* and *y*, the _measurement stations_. it can be anything. it can be random (previously stored outcomes of actual random experiments) or pseudo-random or deterministic. it can depend in an arbitrary way on the results of past trials (see item 5). without loss of generality it can be considered to be the same send to each computer, both its own message and the message for the other.

2. computers *a* and *b*, which we call randomizers, send each a _measurement-setting-label_, namely a 1 or a 2, to computers *x* and *y*. actually, i will generate the labels to simulate independent fair coin tosses (i might even use the outcomes of real fair coin tosses, done secretly in advance and saved on my computers' hard disks).

3. computers *x* and *y* each output an outcome, computed in whatever way [an opponent] likes from the available information at each measurement station. he has all the possibilities mentioned under item 1. what each of these two computers does not have is the measurement-setting-label which was delivered to the other. denote the outcomes $x$ and $y$.

4. computers *a* and *b* each output the measurement-setting-label which they had previously sent to *x* and *y*. denote these labels $a$ and $b$. an independent referee will confirm that these are identical to the labels given to [an opponent] in item 2.

5. computers *x*, *o* and *y* may communicate with one another in any way they like. in particular, all past setting labels are available to all locations. as far as i am concerned, [an opponent] may even alter the computer programs or memories of his machines.

at this point gill proceeds to delineate how the data is to be analyzed. what he calls for, for each setting pair $(a,b)$, is the total number of times the outputs are equal, $N^{=}_{ab}$, the number of times they are unequal, $N^{\ne}_{ab}$, and the total number of trials, $N_{ab}$. with these numbers, the correlation $E(a,b)$ for each setting pair $(a,b)$ is then
\[
E(a,b)=\frac{N^{=}_{ab}-N^{\ne}_{ab}}{N_{ab}}.
\]
these correlations, in turn, are used to compute the chsh contrast
\[
S=|E(1,1)-E(1,2)|+|E(2,1)+E(2,2)|,
\]
which is to be tested for violation of bell's limit of 2, as is well known.

computer *o* is implemented as an equal, flat, random selection of one of two possible signal pairs, one comprised of a vertically polarized pulse to the left, say, and a horizontally polarized pulse to the right; the second signal exchanges the polarizations. after the source pulse pair is selected, the measurements to be simulated at computers *x* and *y* are selected by independent computers *a* and *b*. this they do for each run, or each pulse pair, individually, by randomly specifying which of the two orientation angles for each side and iteration is to be used; i.e., they select the left and right polarizer settings. the measurement stations *x* and *y* are simulated by models of polarizers for which the axis of the left (right) one is set to the selected angle; each polarizer feeds a photodetector. these photodetectors are taken to adhere to malus' law, that is, they produce photoelectron streams for which the intensity is proportional to the incoming field intensity, and the arrival time of the photoelectrons is a random variable described by a poisson process. in so far as these photodetectors are independent, each poisson process is uncorrelated with respect to the other. in conformity with bell's assumptions and the experiments, we take it that the source power and pulse duration are so low that, within a time-window, one and only one photoelectron will be generated if the polarization of the signal and the axis of the polarizer preceding the detector are parallel.
thus ,when they are not parallel , the count rate is reduced in proportion to the usual malus factor .this assumption is structurally parallel to assuming single photon states in qm .it is unrealistic to the extent that , even for the parallel regime , a true poisson distribution would lead to some trials with no hits , which do nt contribute to the photoelectron count statistics , and some trials with two or more hits , which are so seldom they effectively do nt contribute . as a matter of detail ,the photoelectron generation process is modeled by comparing a random number with the intensity of the field entering the polarizer filter .if it is less , it is taken that a photoelectron was generated ; if greater , none was generated .in other words , say , ( where for , indicates the setting of as chosen by * a * or * b * ; and indicates pulse mode from the source : vertical or horizontal ) is increased by , when the random number . also , is increased by when another , independent random number . in each case the polarization axis of the pulse sent from the source .the random input at the detector simulates background signals and detector noise .the final step of the simulation is simply to register the simulated ` creation ' of photoelectrons and to compute the correlations .this algorithm fully satisfies gill s stipulations as presented in section 2 above to preclude non - locality .all process steps can be implemented on separate computers such that the information flow is from * a * , * b * and * o * to * x * and * y*. there is no connection between * x * and * y*. at this point we note , however , that gill s analysis of epr experiments is not faithful to the relevant physics .it fails to take the quadratic relationship between the source intensity and the subsequent measured current density , i.e. , malus law , into account .thus , in our simulation , data acquisition and analysis proceed a bit differently .we take it , that _ currents _ are measured , that the total relative intensity between channels is given purely by geometrical considerations according to malus law ; i.e. , according to for like events , and for unlike events .this is essentially equivalent to requiring internal self consistency with respect to detection physics .that is , if the axis of one polarizer is parallel to the axis of the source , , say , then it is quite obvious that the relative intensity as measured by photodetectors on the output of the other polarizer must follow malus law by virtue of the photocurrent generation mechanism .further , since this geometric fact must be independent of the choice of coordinate system , it follows that a transformation of angular coordinates can be effected using just trigonometric identities .note that although equations ( [ trig ] ) are not in general factorable ( sometimes said to be a criteria for ` non - locality ' ) , all the information required is available on - the - spot at both sides independently , i.e. 
, locally .thus , in modeling the `` coincidence circuitry '' we use the fact that the individual terms on the right side of ( [ trig ] ) are , by virtue of photodetector physics , proportional to the square root of the number of counts in the channel , for example : where is the total number of signals intercepted by the considered photodetector per regime .( since there are four detector- and two source - regimes , for even distributions where is the total number of pairs used in the simulation .unlike experiments , this number is trivially available in simulations . ) in other words , the relative frequency of coincidences is determined after - the - fact as a function of the intensity of the photocurrents .no communication between right and left sides is involved in determining these currents ; what correlation there is , is there on account of the equality of the amplitudes ( and therefore _ intensities _ ) at the source of the signals making up the singlet state . of course, in doing a simulation , provision must be made to resolve sign ambiguities ; doing so , however , also does not violate locality as the required information is all available on - the - spot . in sum , instead of eq .( [ corr ] ) , we use : as required by malus law , but in which the terms are expressed using ( [ trig ] ) and ( [ key ] ) to get the result for each individual ` photoelectron ' locally and independently . to see intuitively how this algorithm works, it is perhaps advantageous to consider the individual steps in reverse order .the final step is the calculation of the chsh contrast .this requires calculating for the four combinations of polarizer settings according to eq .( [ korr ] ) .now , each factor , a coincidence correlation , according to eq .( [ korr ] ) is expressed as the intensity of coincidences in like channels minus the intensity in unlike channels , all according to conventional formulas .the individual terms in , however , involve information from each side , so it would appear that each requires in effect instantaneous communication between the sides for evaluation .however , each term can be expanded using eqs .( [ trig ] ) in which the individual factors , e.g. , , can be computed at the detection event without regard to information from other events .these individual factors , in turn , are provided by eq .( [ key ] ) , which again is just malus law . as the simulationis repeated , and more and more pairs of pulses are generated , the running ratios converge to provide ever improving estimates of the individual factors needed for eq .( [ trig ] ) . as described above , the individual are completely determined by the _ delayed _ input from the source , the local polarizer setting , and a random input at the detector ._ only the after - the - fact calculation of mixes information from both sides ._ thus , there is no non - locality involved contrary to bell s claim that a model without it does not exist .an example of the results from the simulation are presented on fig.1 .the top curve is the chsh contrast ; the lower four curves are the individual correlations for the four combinations of polarizer settings .the statistics stabilize after circa 700 trials and exhibit clear violation of the bell limit of by virtually exactly the amount calculated for the singlet state and observing angles and using bell inequalities : .experimental verification can be found in . 
that these statistics can be found in a sequence of individual events , each of which is calculated without recourse to non - locality ,constitutes a direct , unambiguous counterexample to bell - type `` theorems . '' as an aside , we note that what is described here is a _ simulation , _ not an experiment .as the algorithm runs on a digital computer , in fact it is a realization of an epr - b setup using _ information , _ which accords with some variations of current analysis of the issue .how does this protocol work ?how is it possible that now , after generations have examined and reexamined seemingly everything surrounding epr - b correlations and von neumann s and bell s theorems , a simple protocol can be found that delivers the offending statistics without non - locality ?the answer to these questions is in part , that so far the implied challenge has been so defined , that the option we used in constructing the simulation was precluded .the basic assumption constraining the orthodox approach is that the source emits `` ready - made '' pairs of photons . at an abstract statistical level , this is tantamount to assuming that there is just a single poisson process at the source , for which the time of conception of each pair is a random variable with a poisson distribution , followed by deterministic detection .that is , in the terminology of qm , the probability of the random creation of a photon pair is proportional to the modulus of a solution to schrdinger s equation , followed by detection with a `` quantum '' efficiency of % of converting photons to photoelectrons _ without further stochastic input _ ( except at the polarizer , see below ) .for our protocol we reject these notions .instead , we take it that the source is emitting pulses of classical , continuous radiation in both directions , which are finally registered in photodetectors that are the scene of independent poisson detection processes proportional to the square of the source intensity .although uncorrelated in terms of arrival times of elicited photoelectrons , the two processes , by virtue of the symmetry of the source ( i.e. , the source pulses have equal power and duration ) , nevertheless have correlated _intensities_. for the simulation , consistent with classical physics , polarizers are taken to be variable , linear attenuators , which are , therefore , deterministic . in ` quantum ' imagery ,polarizers are considered biased stochastic absorbers , i.e. , not deterministic .the latter notion is not relevant to the issue of bell inequalities , however , because they are meant to be limits imposed by local realism and are non - quantum from the start . in the end , the patterns seen in the correlations are due simply to the various malus factors attenuating the equal energy source pulses . 
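For reference, the quantum-mechanical prediction against which the simulated correlations are being matched is presumably the standard polarization-correlation formula

\[
E(\theta_1,\theta_2)=\pm\cos 2(\theta_1-\theta_2),
\]

with the sign depending on the particular entangled state; at the usual setting choices of $0^\circ,45^\circ$ on one side and $22.5^\circ,67.5^\circ$ on the other, this yields a CHSH contrast of $2\sqrt2\approx2.83$, compared with the Bell limit of $2$.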
independently, underneath all the confused aspects of the dispute initiated by epr, there is an inviolable mathematical truth, which has many forms. it is that both correlated and uncorrelated equal length dichotomic sequences with values $\pm1$ tautologically satisfy bell inequalities. being an ineluctable mathematical truth, it is also often mistaken for a physical indispensability. there is, however, an intervening complication. consider four dichotomic sequences comprised of $\pm1$'s and of length $N$: $\{a_i\}$, $\{a'_i\}$, $\{b_i\}$ and $\{b'_i\}$. now compose the two quantities $a_ib_i+a_ib'_i$ and $a'_ib_i-a'_ib'_i$, sum them over $i$, divide by $N$, and take absolute values before adding together to get
\[
\Bigl|\frac1N\sum_{i=1}^N\bigl(a_ib_i+a_ib'_i\bigr)\Bigr|+\Bigl|\frac1N\sum_{i=1}^N\bigl(a'_ib_i-a'_ib'_i\bigr)\Bigr|
\;\le\;\frac1N\sum_{i=1}^N\bigl(|b_i+b'_i|+|b_i-b'_i|\bigr)\;=\;2. \qquad\text{(bi)}
\]
the right side equals 2 because, for values $\pm1$, $|b_i+b'_i|+|b_i-b'_i|=2$ for every $i$; so this equation is in fact a bell inequality. this derivation demonstrates that this bell inequality is simply an arithmetic tautology. thus, certain dichotomic sequences comprised of $\pm1$'s identically satisfy bell inequalities. the fact is, however, that real data can not comply with this inequality; it just does not fit the conditions of the derivation of eq. ([bi]). this is a result of the fact that two of the sequences on the left side are _counterfactual_ statements, i.e., what _would have been measured_ if the setting _had been_ otherwise. in real experiments, such data can never be obtained. moreover, real data can not be rearranged to fit eq. ([bi]) either. suppose, to start, that the second term is rearranged so that its factor sequence for the first station (an argument indicating that it comes from the second term's run) is rearranged to match the corresponding sequence in the first term as closely as possible. (for ever longer samples, this matching becomes ever more precise.) with the rearranged version substituted, the second term is expressed in terms of the first term's sequence. a similar rearrangement on the third and fourth terms converts eq. ([bi]) into a relation among sequences taken from four separate experimental runs, from which it is obvious that, unless the remaining factor sequences coincide by virtue of the structure of the setup, eq. ([bi]) is not germane; in other words, data from real experiments can not conform to the conditions of derivation of bell inequalities. variations of this argument hold for all derivations of bell inequalities. note also that the results of our simulation are in full accord with jaynes' criticism of bell's arguments, to wit: bell, by ascribing the correlations to the hidden variables, effectively misused bayes' formula for conditional probabilities with respect to the overt variables. the result is that he encoded statistical independence instead of non-locality, so that the resulting inequalities are valid only for uncorrelated sequences. in ref., gill endeavors to respond to this observation with the argument that jaynes ``refuses to admit'' that the joint probability of the outcomes factors when conditioned on the hidden variable, where $P(A|B)$ denotes the _conditional_ probability for $A$ occurring if $B$ has occurred, $A$ and $B$ represent filar marks on a measurement device, and $\lambda$ represents a ``hidden variable'' specifying the quality whose magnitude influences the magnitude registered as $A$ and $B$. of course, if one is measuring an assortment of objects (nails, say), each engraved with its length ($\lambda$), then we can dispense with the comparison to the filar marks on a ruler ($A$, $B$) in favor of using the $\lambda$'s. but when the $\lambda$'s are not knowable, not to mention uncontrollable, the measured objects must be characterized by the result of comparison with the ruler, i.e., the $A$'s and $B$'s.
as this is exactly the situation with respect to epr and bell s arguments , all statistical characteristics , including correlations , of an ensemble must be specified also in terms of results from measurements in the form of comparisons with the filar marks , the only variables actually available .epr s purpose was to discern the possible existence of such hidden variables exclusively in terms of their effects on quantities expressed in terms of overt variables ; all evidence must appear , therefore , in the overt variables .that is , when jaynes rejects ( [ hidden ] ) , he is just faithful to reality if remains unaccessible .the conventional understanding also overlooks the fact that the space of polarization is fundamentally not quantum mechanical in nature. this is so in the first instance , because its variables , the two states of polarization , are not hamiltonian conjugates , and therefore do not suffer heisenberg uncertainty ( hu ) , and do not involve planck s constant .gill rejected this argument on the basis of the opinion that `` quantum mechanics is as much about incompatible observables as planck s constant . '' in essence this view implies that _ all _ commutivity somehow involves qm , which is manifestly false .non commutivity can arise for different reasons , for example , non parallel lorentz boosts do not commute , and this obviously has nothing to do with qm . in the case at hand ,non commutivity of polarization vectors , or stokes operators , arises only when the -vector common to both polarization states rotates in space , which brings in the geometric fact that the generators of rotations on the sphere do not commute .this has nothing specifically to do with qm , epr or bell inequalities , just geometry .a consequence of the existence of a local - realistic simulation of epr - b correlations is that it shows that such correlations do not arise because of the structure of qm .this could have significant practical consequences .in particular , to the extent that algorithms , usually considered to depend on exploiting quantum entanglement , for example , can be seen not to be dependant on intrinsically quantum phenomena .thus , these algorithms should also not be restricted to realization at the atomic level , where qm reigns . in so far as it is much more practical to fabricate and operate devices at a larger scale , this may be a very promising development for the exploitation of what is ( from this viewpoint , inaccurately ) denoted as quantum computing . in the end , does this simulation put gill in debt ?whatever the call , it is clear that his considerations , like that of many others , was based on some mathematical facts regarding dichotomic sequences , comprised of both hits and non - hits , and not on the actual details of done experiments .these mathematical facts concerning dichotomic sequences can also be simulated by doing the data analysis as envisioned by gill , and indeed this seems not to lead to a violation of bell inequalities no matter how the parameters are adjusted . in short , what is shown hereis , that epr - b correlations are a direct manifestation of malus law , not mystical gibberish or qm .10 r. d. gill , _ accardi contra bell : the impossible coupling , _ in _ mathematical statistics and applications : festschrift for constance von eeden , _ m. moore , c. leger and s. froda ( eds . ) ( lecture notes monograph series , inst . of math .statistics , hayward , ca , to appear ) .( quant - ph/0110137 ) .see references in e. t. 
jaynes , in _ maximum entropy and baysian methods , _ j. skilling ( ed.)(kluwer , dordrecht , 1989 ) p. 1( http://bayes.wustl.edu/etj/articles/cmysteries.ps ) l. accardi and m. ragoli , quant - ph/0007005 , quant - ph/0007019 , quant - ph/01110086 .r. d. gill , in _ foundations of probability and physics - 2 , _a. khrennikov ( ed.)(vxj university press , vxj , 2002 ) , p. 179 .( http://www.math.uu.nl /people /gill /preprints /vaxjo.pdf ) .a. f. kracklauer , ann .l. de broglie , * 25 * , 193 ( 2000 ) .a. f. kracklauer , in _ foundations of probability and physics , _ a. khrennikov ( ed . )( world scientific , singapore , 2001 ) , p. 219 .a. f. kracklauer in _ foundations of probability and physics - 2 , _a. khrennikov ( ed.)(vxj university press , vxj , 2002 ) , p. 385 .( quant - ph/0201121 ) .a. f. kracklauer , _ j. opt .b , quantum . semiclass . _ * 4 * , s121 ( 2002 ) .n. v. evdokimov , d. n. klyshko , d. n. komolov and v. a. yarochkin , _ physics - uspekhi _ * 39 , * 83 ( 1996 ) . a. f. kracklauer , _ phys. essays _ * 14 * ( 2 ) 162 ( 2002 ) .l. de la pea , a. m. cetto and t. brody , _ lett . a nuovo cimento _ * 5*(2 ) 177 ( 1972 ) .
|
as part of a challenge to critics of bell s analysis of the epr argument , framed in the form of a bet , r. d. gill formulated criteria to assure that all non - locality is precluded from simulation - algorithms used to test bell s theorem . this is achieved in part by parceling out the subroutines for the source and both detectors to three separate computers . he argues that , in light of bell s theorem , following these criteria absolutely precludes mimicking epr - b correlations as computed with quantum mechanics and observed in experiments . herein , nevertheless , we describe just such an local algorithm , fully faithful to his criteria , that yields results mimicking exactly quantum correlations . we observe that our simulation - algorithm succeeds by altering an implicit assumption made by bell to the equivalent effect that the source of epr pairs is a single poisson process followed by deterministic detection . instead we assume the converse , namely that the source is deterministic but detection involves _ multiple _ , independent poisson processes , one at each detector with an intensity given by malus law . finally , we speculate on some consequences this might have for quantum computing algorithms .
|
homophily patterns in networks have important consequences .for example , citations across literatures can affect whether , and how quickly , ideas developed in one field diffuse into another .homophily also affects a variety of behaviors and opportunities , with impact on the welfare of individuals connected in social networks . in this paperwe analyze a model that provides new insights into the patterns and emergence of homophily , and we illustrate its implications with an application to a network of scientific citations . our main objective is to study how homophily patterns behave in an evolving network .do nodes become more integrated or more segregated as they age ?how does this evolution depend on the link formation process ?in particular , does the network become more integrated if new connections are formed at random or if they are formed _ through _ the existing network ? to answer these questions , we study a stochastic model of network formation in which nodes come in different types and types , in turn , affect the formation of links .we accomplish this bye introducing individual heterogeneity to the framework of jackson and rogers , allowing us to focus on the issue of homophily generated through specific biases in link formation .a new node is born at each time period and forms links with existing nodes .the newborn node connects to older nodes in two ways .first , she meets nodes according to a random , but potentially type - biased , process .second , the newborn node meets neighbors of the randomly met nodes ( friends of friends ) .this is referred to as the search process and can also reflect type biases . to illustrate ,consider citation patterns .typically when writing a new paper , some references are known or found by chance by the authors while others are found _ because _ they are cited in known papers .biases arise because papers may cite references with greater frequency within their own field .we examine the long - run properties of this model and the structure of the emerging network .the biases could arise from agents preferences over the types of their neighbors and/or from biased meeting opportunities that agents face in connecting to each other .so , in one direction we enrich a growing network model by allowing for types and biases in connections , and in another direction we bypass explicit strategic considerations by studying a process with exogenous behavioral rules .since in the model search goes through out - links only , strategic considerations are to some extent inherently limited , since a node can not directly increase the probability of being found through its choice of out - neighborhood . while this may not be a good assumption in some contexts , such as business partnerships or job contacts , where search presumably goes both directions along a link , it is appropriate in other contexts , such as scientific citations where the time order of publications strictly determines the direction of search .we wish to understand the conditions under which the network becomes increasingly integrated over time .we consider three different notions of integration . under _ weak integration _ , nodes who are old enough are more likely to get new links than young nodes , independent of types . in this sense, age eventually overcomes any bias in link probabilities . 
under _ long - run integration _ , the distribution of types among the neighbors of a given node converges to the population distribution as the node ages .this is a strong property that requires biases among neighbors to eventually disappear , despite the biased formation process . finally , under __ partial integration _ _ , the type distribution among neighbors moves monotonically towards the population distribution as nodes age , although it may maintain some bias in the limit .these notions capture different , but related , aspects of the idea of network integration .our main theoretical results are as follows .first , weak integration is satisfied whenever the probability that a given node is found increases with that node s degree .this holds in any version of our model where at least some links are formed through search and there is some possibility of connecting across types .in contrast , long - run integration holds only when search is unbiased .that is , the random meeting process can incorporate arbitrary biases , but once the initial nodes are met , the new node chooses uniformly from the set of their neighbors , ignoring any further implication of their types .finally , we show that under a particular condition on the biases , the process evolves monotonically and satisfies partial integration . in particular , the biases in nodes links generally decrease with age .to understand where this tendency towards integration comes from , consider unbiased search .observe first that as a node ages , the proportion of his links obtained through search approaches unity , since the number of neighbors grows with age while the probability to be found at random decreases with population size .next , note that unbiased search does _ not _ imply an absence of bias among the neighbors of randomly met nodes . due to homophily ,randomly met nodes are relatively more connected with nodes of their own types . a critical factour analysis uncovers , however , is that bias among neighbors neighbors tends to be lower than among direct neighbors .this is because some nodes of other types are found at random and these nodes are relatively more connected to nodes like themselves .so the set of neighbors neighbors has a more neutral composition than the neighborhoods of same - type nodes .network - based search increases the diversity of connections and , conversely , nodes found through search are being found by a more diverse set of nodes . andsince search plays a larger role with age , older nodes are less biased in their connections . in order to analyze network structure in more detailwe consider a special , but natural , two - type specification of the model where random meetings are organized through a geographic or social space .nodes of a given type are more likely to reside in a given location and random meetings take place without further bias in the various locations . 
in this model , biases in random meetings are inherently tied in a precise way to the type - distribution of the population .this feature allows us to obtain a number of further results .in particular , we derive an explicit formula relating a node s local homophily among neighbors to its age or degree .this illustrates our general results and further shows how partial and long - run integration are affected by changes in types shares .we also study two important structural properties of the resulting network that are less tractable in the general model : degree distributions and group - level homophily .we show how to modify the existing analysis of degree distributions to account for individual heterogeneity and homophily , obtaining new insights .in addition , we obtain results on group - level homophily consistent with empirical results presented in .we find that relative group size has an important impact on how meeting biases map into aggregate properties of the network .in particular , relative homophily in the network is strongest when groups have equal size , and vanishes as the groups take increasingly unequal sizes . turning to degree distributions, we find that the majority and minority groups have different patterns of interactions .in particular , for the minority group , links from their own group are on the one hand rarer due to a size effect , but on the other hand a homophilic bias pulls in the other direction , creating a tension in the overall distribution of links .however a striking result that is , in principle , testable is that the distribution of total in - links is identical for the groups independent of their relative sizes . moving from the theoretical analysis, we illustrate the implications of the model using data on scientific citations in journals of the american physical society ( aps ) published between 1985 and 2003 .we find that the proportion of citations that a paper obtains from other papers in its own field decreases as the paper ages and becomes more cited .the observed citation patterns provide some evidence of the partial integration property and are at least partly consistent search follows a less biased ( possibly unbiased ) pattern in the citation process . in studying this applicationwe are motivated by two factors .first , patterns of scientific citations have important welfare consequences as they affect the diffusion of knowledge , with impacts on different research fields .previous research , such as , generalizing popular concepts such as the _ recursive impact factor _ , stress that the importance of a citation relies on the paths that it allows in the network of citations .we complement this argument by considering under which conditions citations are likely to bridge scientific production across different communities .second , scientific citations possess all the features of the network formation process that we study : nodes ( papers ) appear in chronological order and never die , they link directionally to previously born nodes , they have types ( scientific classifications ) , and they find citations both directly and though search among the citations of other papers .more generally , our study contributes to a growing literature in economics and other disciplines studying the causes and consequences of homophily in social networks . study a matching process of friendship formation .they document several empirical patterns of homophily and explain them through a combination of biases with respect to choice and chance . 
by design , all individuals have the same degree and age has little impact .in contrast , differences in age and degree are central to our analysis . incorporates homophily into the random graph model of . again by design , homophily is not affected by degree or age in this approach .thus , our study and these two papers study homophily patterns in networks from complementary perspectives .in particular , we provide the first study of how homophily patterns change over time and of the relation between homophily and a node s degree .this study also advances the analysis of stochastic models of network formation .earlier work has made great progress in explaining structural network features such as small diameter , high clustering and fat tails in degree distributions , see .however , most of these studies assume homogeneous agents and neglect homophily . with respect to this literature, we develop and study one of the first stochastic model of network formation incorporating individual heterogeneity .the rest of the paper is organized as follows .section [ sec : model ] presents the model with bias only in the random meeting process .section [ model_bias_only_random ] includes the main result about long - run integration in this setting .section sec_location_bias studies the special case of two - types and location based bias . section [ sec_model_biased_search ] studies the integration properties of the model when biases appear also in the search part of the meeting process .section [ sec_citations ] contains the empirical application to citation data .in our model , nodes are born with randomly assigned types and enter sequentially , meeting existing nodes upon entry .meetings result in ( directed ) links .meetings take place through two distinct processes , which we refer to as _random _ and _ search_. the meeting processes depend on the types of the nodes involved . in this section ,we study the impact of type - based biases on the random meeting process .time is indexed by . in each period a new node is born .we index nodes by their birth dates , so that node is born in period .nodes have `` types , '' with a generic type denoted belonging to a finite set ( with cardinality ) .a newborn s type is randomly drawn according to the time invariant probability distribution ( so that types are i.i.d . , across time ) .a newborn node sends ( directed ) links to the nodes that were born in previous periods . of these links ,a fraction selects nodes according to a type - dependent _ random _ process ; is an integer in the underlying process , but allowed to be arbitrary in the mean - field continuous - time approximation we analyze . ]these nodes are called parents .the remaining fraction selects nodes among the neighbors of the parents that have been found via the random process ; we refer to this second part of the process as search .nodes in a sequence , each connected to all predecessors . ] we define to be the ratio of the number of links formed by the random process to the number of links formed by the search process .looking first at the random part of the process , we denote by the probability that a link sent by a node of type reaches a node of type . among nodes of type , the link is formed uniformly at random , so there is no further discrimination in this part of the process . 
if the random meeting process were unbiased , the probability would equal the share of agents in the system .when we say that there is bias .this can be interpreted in different ways .one can view the bias as a reduced form for preferences that nodes have over the type of connections they form .the case of homophilistic preferences for type is then captured by a situation in which .the bias could also arise from constraints in the meeting process , or from spatial differentiation , as in the location model that we will analyze in section [ sec_location_bias ] .turning now to the search part of the process , the way in which friends are drawn from parents neighborhoods may be , in principle , either biased or unbiased .much of the paper will study a process with biases only in the random part , so that in the search part , links are formed according to a uniform distribution on the set of parents neighbors .this assumption has natural interpretations and various applications .it applies , for instance , to cases where agents face a bias in meeting strangers , but then get to meet the `` friends '' of their new friends without bias .when the original bias in meetings comes from biased opportunities , this seems to be a natural assumption ; when the bias originates in preferences , it may still be the case that this bias tends to vanish when meetings are mediated by friends . in section [ sec_location_bias ]we will explicitly analyze a model that relates these biases to location based differences in the meeting process . when search is itself biased , so that the additional nodes are found among parents neighbors using a type - dependent probability distribution , two types of biases are naturally defined : a bias that discriminates according to the types of the parents through which search is made , and a bias that discriminates according to the types of the parents neighbors . which type of bias is more appropriate depends on the instance of network one has in mind , and leads to formally different models of link formation . in section [ sec_model_biased_search ]we study the case of biased search and its consequences for integration .before formally deriving the dynamics of the various processes , in the next section we propose three notions of integration that measure the extent to which the bias in the random and/or search process translates into biases in the long run type - patterns of link formation .the definitions we provide capture different aspects of integration , focusing on how a node s type - pattern of connections evolves with age , and whether it gets progressively more ( or less ) integrated with the rest of the network .it is important to note that there are two different aspects of integration : the evolution of newborns newly formed links ( out - links ) and the evolution of older nodes incoming links .these will exhibit different dynamics . 
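As a concrete illustration of this link-formation process, the sketch below (Python) grows a directed network in which each newborn node draws a type, forms part of its out-links through a type-biased random meeting, and forms the remainder by choosing uniformly among the out-neighbors of the randomly met parents. The number of types, the bias matrix and every other numerical value are assumptions made for the example, not parameters taken from the paper.

```python
# Illustrative growth process: type-biased random meetings plus unbiased
# "friends of friends" search.  All numbers below are example assumptions.
import random

random.seed(1)
T = 5000                      # nodes added after the seed network
m_r, m_s = 2, 2               # out-links formed at random / through search
p = [0.5, 0.5]                # population shares of the two types
B = [[0.8, 0.2],              # B[i][j]: chance a type-i newborn's random
     [0.2, 0.8]]              #          link goes to a type-j node (homophilous)

types = [0, 1] * 5                                             # 10-node seed
out_nbrs = [[(i + k) % 10 for k in range(1, m_r + m_s + 1)] for i in range(10)]
by_type = [[i for i in range(10) if types[i] == h] for h in (0, 1)]

for t in range(10, 10 + T):
    i_type = random.choices([0, 1], weights=p)[0]
    parents = [random.choice(by_type[random.choices([0, 1], weights=B[i_type])[0]])
               for _ in range(m_r)]                   # biased random meetings
    searched = [random.choice(out_nbrs[random.choice(parents)])
                for _ in range(m_s)]                  # unbiased search step
    types.append(i_type)
    out_nbrs.append(parents + searched)
    by_type[i_type].append(t)
```

In this sketch search uses out-neighbors only, matching the citation interpretation in which search can only move toward older nodes.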
given the bias in the random part of the network formation process, it is clear that there will always be some bias in the out - links of newborn nodes .the main questions with regard to the out - links thus pertain to how the links formed by search behave over time , and this is related to the question of how the in - links of older nodes behave .all three notions of integration discussed here pertain to the behavior of in - degrees of nodes .out - degree dynamics are studied in sections sec_out_degree and [ sec_formulas_agg ] .our first notion requires , in particular , that old enough nodes are found by newborn nodes with higher probability than younger nodes , independently of the types of the nodes involved .[ weak]the network formation process satisfies the * weak integration property * if for every , there exists such that , for all and for all , the node born at time has a lower probability than node to receive a link from a node of type born at time .note that this form of integration requires that an old enough node of type ends up receiving a link from a newborn node of type with a higher probability than a young enough node of the same type as the newborn .so , even if link formation probabilities are biased in favor of similar nodes ( homophily ) , old enough nodes are found more often even when of a different type than the newborn .this form of integration is rather weak , and does not bear implications on the type - composition of any given node s in - degree .our second notion of integration requires that as nodes age , their local neighborhood grows to represent more and more the type composition of the population .it is therefore a `` monotonicity '' property , requiring that integration , here defined in terms of how close the composition of neighbors types is to what would obtain in the unbiased case , grows with age .[ partial ] the network formation process satisfies the * partial integration property * if for every node the fraction of each type in the in - degree of is weakly closer to s population share at time than at time , for , and strictly closer for some types .so , under partial integration , the in - neighbors of an agent become more and more representative of the overall population as time elapses .our final notion of integration is stronger and requires that nodes eventually attract in - links according to population shares .[ longrun ] the network formation process satisfies the * long - run integration property * if for every node the proportion of each type in the in - degree of converges to s population share as node ages . 
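These notions can be examined directly on simulated data such as that produced by the sketch above. The helper below (Python; an illustration with a hypothetical binning scheme, not a procedure from the paper) computes, for each node, the share of in-neighbors of its own type and averages that share within age groups.

```python
# Illustrative diagnostic for the integration notions: average own-type share
# of in-neighbors, grouped by node age (larger bin index = older node).
from collections import defaultdict

def own_type_share_by_age(types, out_nbrs, n_bins=10):
    T = len(types)
    in_nbrs = defaultdict(list)
    for u, nbrs in enumerate(out_nbrs):
        for v in nbrs:
            in_nbrs[v].append(u)
    bins = defaultdict(list)
    for v in range(T):
        if in_nbrs[v]:
            share = sum(types[u] == types[v] for u in in_nbrs[v]) / len(in_nbrs[v])
            bins[min(n_bins - 1, (T - v) * n_bins // T)].append(share)
    return {b: sum(s) / len(s) for b, s in sorted(bins.items())}
```

Applied to the `types` and `out_nbrs` arrays generated above, the homophilous bias should show up as own-type shares well above one half in the youngest bins, declining toward the population share in the oldest bins when search is unbiased, which is exactly the long-run integration property.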
in other words , in the long runany surviving difference in the proportion of links received by old nodes from different types is due only to the distribution of types in the population , and the biases in link - formation have no consequences for eventual in - degree patterns .a benchmark model to study long run integration properties of link formation is one where only the random part of the process is biased , and no further bias is present in the search part of the process .more precisely , the search process is unbiased in the sense that additional ties are found through a uniform sample among parents neighbors , but remains indirectly biased through the bias that the random process has induced on the type composition of the parents neighborhoods .this model allows for a clear understanding of the mechanics that lead to integration , and why and when integration may fail .we study a continuous time approximation of the model , using the techniques of mean - field theory .this provides approximations and limiting expressions of the process that ignore starting conditions and other short - term fluctuations that can be important in shaping finite versions of the model , and so the results must be viewed with the standard cautions that accompany such approximations and limit analysis .we consider the expected change in the discrete stochastic process as the deterministic differential of a continuous time process .let us first look at the probability that node is found by newborn node .this depends on the shape of the network that has formed up to time .in particular , it depends on the type - profile of in - neighbors of at time , and on the bias of the newborn node towards such types . since search is not type - biased , each link that agent forms through searchis drawn from a uniform distribution over the set of all neighbors of all parent nodes that agent has found at random .letting denote the probability that a node born in period of type receives a link from a node of type born at time , the following expression is a mean - field approximation of the overall linking probability : the first term on the right hand side captures the probability of node being found at random .the probability that node is of type and links at random to a node of type is .this is divided by the number of nodes of type at time which , under a mean - field approximation , is equal to .it is then multiplied by the number of links formed at random , .the second term is the probability of node being found through search .it is given by the number of search links ( ) formed by the node born at , times the sum , over all possible types , of the probabilities that is found through a node of type . for each possible type , this probability is given by the joint probability of the following events ( corresponding to the four terms in the first summation over types ) : ( ) the newborn node is of type ; ( ) it forms a link with a -type node ; ( ) the -type node has linked to since was born ; from agents up to time as numerator , and the total number of nodes in the system at time as denominator ] ( ) among the neighbors of this -type node , that exactly is found .it is useful to express the terms of the above formula in a compact way . for all , write note that the ratio in the above expression is a measure of the bias that type applies to type , so that when this ratio is there is no bias , while when it is greater ( less ) than there is a positive ( negative ) bias of type towards type . 
in the case of no bias , is simply the probability of birth of a type node , and is times the joint probability that the newborn node is of type and that node is found by drawing uniformly at random from a population of nodes .we can decompose the matrix as the product of two matrices and , where may be seen as a transition matrix of a markov process ( a markov matrix ) , we derive some general results on markov matrices that will be useful in [ proofs ] , where we prove our propositions . ] and is a diagonal matrix where the diagonal is a probability vector : with using the matrix , equation ( [ 5 ] ) becomes : where expresses the expected in - degree , type by type , after time for a node born at time .we define with a continuous approximation : we study equation ( [ 6 ] ) in terms of ordinary differential equations in matrix form : with the initial condition from now on we will always assume that is invertible ( so that the specification of types is not redundant ) . with this assumption , the unique solutions to these differential equations are the following : where a constant to the power of a matrix is defined as follows : let us test the various notions of integration on this model with unbiased search .it is clear that the model with can not satisfy weak integration .we show instead that whenever there is some degree of search ( ) weak integration is satisfied .in fact , in section sec_model_biased_search we strengthen this result to show that weak integration is still satisfied when search is biased as well .[ prop_1 ] if , the model with unbiased search satisfies the _ weak integration property_. the proof ( which appears , along with the proofs of our other results , in [ proofs ] ) shows that the weak integration property is not specific to the unbiased search model .indeed , various models in which the in - degree of a node determines the probability of being found by a newborn node in a sufficiently increasing manner would give the same result .moreover , search is not needed for this type of dependence to take place . another model with type - biasedpreferential attachment in which the probability of receiving a link is positively correlated with a node s in - degree , and which exhibits the same weak integration property ,is discussed in the conclusion of .partial and long - run integration are , again , not satisfied when .the next propositions show that , otherwise , the long run integration property is always satisfied by the model , while the partial integration property needs an additional assumption .if , the model with unbiased search satisfies the _ long run integration property_. [ prop_lri ] partial integration , instead , occurs under an additional condition .consider a markov matrix . as formally stated in [ matrices] , writing , we say that satisfies the _ monotone convergence property _if , for every pair , and for every , the element satisfies : 1 . if , then ; 2 . if , then .the monotone convergence property captures the idea that transition probabilities are monotone over time .even with a strictly positive transition matrix , this condition does impose additional restrictions .it is beyond the scope of this paper to find general or even necessary conditions for monotone convergence of markov matrices .we then have the following result .if and satisfies the monotone convergence property , then the model with unbiased search satisfies the _ partial integration property_. 
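The mechanics behind these propositions can be seen in a small numerical example. The sketch below (Python; the 2x2 matrix is an assumed homophilous example, not derived from the model's primitives) iterates a row-stochastic matrix and checks two things: both rows approach the same stationary distribution, so memory of the starting type is lost, and the approach is monotone here because the second eigenvalue is non-negative (for a 2x2 stochastic matrix this holds whenever both diagonal entries are at least one half), which is the behavior captured by the monotone convergence property. The continuous-time object in the solution above, a constant raised to a matrix power, behaves analogously once each row is normalized.

```python
# Illustration with an assumed homophilous 2x2 Markov matrix: rows of M^n
# converge to the stationary distribution, and monotonically here because the
# second eigenvalue (trace - 1 = 0.5) is non-negative.
import numpy as np

M = np.array([[0.8, 0.2],
              [0.3, 0.7]])

prev = 0.0
for n in range(1, 31):
    P = np.linalg.matrix_power(M, n)
    assert P[0, 1] >= prev - 1e-12        # cross-type weight never decreases
    prev = P[0, 1]
    if n in (1, 2, 5, 10, 30):
        print(n, P[0], P[1])              # both rows -> stationary distribution

print("stationary distribution:", np.array([0.6, 0.4]))   # solves pi = pi M
```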
[ prop_partial_no_bias ] let us focus on the intuition behind the long run integration property of the model with unbiased search . to fix ideas , let us examine the case in which random probabilities have a homophilous bias .a given node can be found by a newborn node of a different type via search in different ways : one is that the newborn finds a neighbor of the given node that is of the same type as the newborn , and another is that the newborn finds a neighbor of the given node that is of the same type as the given node .the first way is relatively more likely given the homophilous bias in the random part of link formation , but the fact that this can also occur via the second route leads this process to be less biased over time .once the process has become less biased , it even easier to be found by nodes of other types , and so the neighborhood becomes even less biased , and this trend reinforces itself leading to an unbiased process in the limit .to summarize , as a node ages it becomes more of a `` hub '' , attracting many links from all types in the search process .this property , that also underlies the weak integration property , together with unbiased search further decreases the bias in the in - degree of hubs . as a result ,the type composition of new connections becomes even less biased for these hubs , and eventually the bias is eliminated .the way in which an individual s neighborhood composition limits to the population frequencies as it ages is non - trivial . notice that if a particular individual became connected to a large proportion of others over time , then his neighborhood would necessarily approximate population frequencies .however , we emphasize that in our model , even though an individual s degree grows without bound , the proportion of others to whom he is connected still vanishes over time , so this effect is not what drives integration .this happens because the entry rate of new individuals is constant , while the probability for existing individuals to acquire a new link in any given period goes to zero .finally , we remark that , while the neighorhood of every node approaches a composition that reflects the aggregate population frequencies , it converges to that distribution from a point that is affected by biases in the link formation process .since those links are perpetually being formed and are subject to biases , the system never approaches a network that has unbiased link patterns .rather , it is always the oldest nodes in the system that have the least biased neighborhoods .in fact , one way to see the persistent bias is to focus on out - degree rather than in - degree .thus , we turn now to analyzing links by tracking where they originate .so far we have mostly focused on the dynamics of agents in - degree .of course , out- and in - degree dynamics are intimately related , as the search part of young nodes out - degree will consist predominantly of old nodes , with respect to whom the search part of the process is both directly and indirectly unbiased ( see section [ sec_formulas_agg ] ) .here we take a close look at the composition of out - degrees and how they evolve over time .this is of interest not only to better understand integration , but also to shed light on the evolution of homophily , that is , the tendency to form ties with agents of the same type .we first look at the steady state composition of the out - degree .let us denote by the proportion of links that originate from a node of type born at time that are directed towards 
nodes of type .the evolution of these proportions is given by : the out degree depends on the random part ( first term ) and on the search part ( second term ) through the average out degree of existing nodes . in matrix form, this is written as follows : to get a feeling for the limit of this process , it is useful to examine the steady state of this system .the steady - state is such that the out - degree of each type remains unchanged in time : [ prop_outdegree ] if , then the steady state equation ( steady_state_outdegree ) has a unique solution , and the system in ( [ out_degree_diff_eq ] ) converges to . for ,the second term approaches the null matrix as .as long as the matrix is more biased than the steady state ( which is true for = ) , the bias in excess of the steady state decreases with time , vanishing in the long run ( see also [ matrices ] ) .this means that the biases in the out - links formed by agents decrease over time , consistent with the homogenization of the search process and the in - degree of older nodes which are dominating the search part of the process .however , unlike the case of the in - degree of old nodes , full homogenization does not occur even in the limit , since the random part of the out - degree formation does not vanish over time .in this section we consider a specific form of bias in random meetings , and restrict the analysis to two types for simplicity . by making explicit how the bias in random meetings is generated , we accomplish two goals . first , we generate a closed form expression that describes the integration of individuals as they age .this formula allows us to study in more detail the integration process , and provides parameters which can be empirically estimated .second , we obtain additional results on other features of the network , specifically on aggregate homophily at the group level and in - degree distributions that are type - sensitive . for each of these categories of results , the location - based nature of the meeting biasesallows us to study the impact of changes in population frequencies on the structure and properties of the emerging networks .nodes belong to one of two types : and . with an abuse of notation ,wee let and in this section .there are two locations and .all biases in the meeting process are captured by the parameter ] . by driving we can impose to the weight of every . in this way {ij} ] , we have the result . satisfies the _ monotone convergence property _ if , for every couple , and for every , the element has the following properties : 1 . if , then ; 2 . if , then . what comes out directly from the definitionis that , if , then there is at least one for which the inequality is strict , i.e. . [ matrix_monotone ] for every couple , and for every if satisfies the _ monotone convergence property _, then 1 . if , then {ij}<0 ] .* proof : * we focus on case 1 , as the other is proven by reversing inequalities .first , note that the function is negative if and only if let us call the minimum integer strictly above , i.e. 
.now we can show that {ij } & = & \frac{1}{\left(e^{x}-1\right)^{2}}\sum_{\mu=1}^{\infty}\frac{x^{\mu}}{\mu!}\left(\frac{\mu}{x}\left(e^{x}-1\right)-e^{x}\right)\left[m^{\mu}\right]_{ij } \notag \\& < & \frac{1}{\left(e^{x}-1\right)^{2}}\sum_{\mu=1}^{\infty}\frac{x^{\mu}}{\mu!}\left(\frac{\mu}{x}\left(e^{x}-1\right)-e^{x}\right)\left[m^{\nu(x)}\right]_{ij } \label{derivative_monotone}\end{aligned}\ ] ] it is a matter of calculus to check that and then the derivative in ( [ derivative_monotone ] ) is strictly negative . finally , we provide a simple sufficient condition for a markov matrix .[ matrix_2x2 ] consider the markov matrix , if , then it has the _ monotone convergence property_. * proof : * by the perron frobenius theorem this matrix converges to , with .it is easy to check that , as , then . by symmetry between and , also .now consider a matrix , such that and .to finish the proof it is enough to show that as it will be proved symmetrically also with respect to and .the middle term of these inequalities is increasing in , as . if it is equal to , if instead , with some algebraic substitution , we have that again both inequalities are satisfied , as .this completes the proof .* proof of proposition [ prop_1 ] ( page ) :* note first that the node born at time in definition [ weak ] has , at the beginning of time ( before node sends its links ) an in - degree of .this directly implies that the probability of to receive a link at time from a node of type , given that such a node is born , is equal to the probability of being found at random among the nodes in the network . this probability is equal to : on the other hand , the probability that node is be found at time is the sum of the probability of being found at random and through search . in the model with homogeneous search, this is : note that in ( [ pi2 ] ) the terms in the vector grow without bound as tends to infinity , while the first terms in ( [ pi2 ] ) and in ( [ pi1 ] ) are constant once is eliminated from the denominator of both expressions .it follows that we can always choose a large enough for ( [ pi2 ] ) to be larger than ( [ pi1 ] ) .* proof of proposition [ prop_lri ] ( page prop_lri ) : * we want to see how the matrix of type by type links for a node born at time develops . to do this we compare its behavior with the behavior of the type blind process , where the in links evolve according to type case studied in jackson and rogers ( 2007 ) . ] to make this comparison in the long run we study limit ( [ limit ] ) implies that ( we use lemma [ matrix_limit ] from matrices ) where the row vector is the unique eigenvector associated with eigenvalue of matrix ( normalized to sum to 1 ) . in this way , in the long run a node of type born at time receives a fraction of in links from nodes of type which is given by the ratio of the overall nodes that it would receive in a type blind process .this proportion is the product of times a term that is constant for type .* proof of proposition [ prop_partial_no_bias ] ( page ) :* the result comes from the expression of matrix as defined in equation ( [ 19 ] ) , in the proof of proposition [ prop_lri ] : here is just a rescaling term so that the matrix in brackets is again a markov matrix ( see lemma [ matrix_markov ] in [ matrices ] ) . 
from the proof of proposition [ prop_lri ] we know that it converges to the distribution of the population shares .as satisfies the monotone convergence property , we can apply lemma [ matrix_monotone ] from [ matrices ] to prove that this convergence is monotonic . in the above expression ,the matrix in brackets is such that , as , the elements of each column homogenize ( see lemma matrix_limit of [ matrices ] ) .however , full homogeneity only occurs at the limit . to obtain some insight on the time evolution of the out - degree ,let us express equation ( [ out_degree_diff_eq ] ) as a differential equation , and solve it explicitly ( as we have done in ( [ 13 ] ) for the in - degree ) .consider a degree distribution obtained implicitly through a process such that the growth of a node born at is governed by and another degree distribution such that , with .assume and are weakly increasing and continuous on and that .* proof of lemma [ fosd ] : * assume for all .pick and arbitrarily .define as the birthdate of the node with degree at time under , and similarly for .we have , which , since is non - decreasing implies that .since and , we have . now take such that .pick and arbitrarily .define and to be the size of node at time under .then set to be the node with degree at time under .we have , which implies that .thus . to show necessity , fix and choose so that , and set .defining as the node with degree at time under , we know that .this implies that , completing the proof .recall that and .some manipulations show that , and symmetrically .is actually the eigenvector of associated to its asymptotic limit ( see the proof of lemma [ matrix_2x2 ] in [ matrices ] ) .considering the location based model , it is reasonable that this limit does not depend on but only on the initial distribution of the two types , given by . ]if we finally consider that , we have the result .* proof of proposition [ prop3 ] ( page ) :* we take the case of .the case of is analogous . define ; hence .next define .notice that at time , the proportion of in - links that a node of type born at time has from its own group is it then follows from lemma 1 that .evaluating this formula delivers the claimed expression . without loss of generality , we can set in what follows .introduce .next , ^{2}g^{\prime\prime}\circ f^{-1}+(f^{-1})^{\prime\prime}g^{\prime}\circ f^{-1} ] .developing , we get that has the same sign as where the first inequality comes from the fact that .thus , and . here , .since , we have . also , . by looking at its derivative ,we see that the function is either decreasing , or increasing and decreasing . in either case , since it is positive when and when , it must be greater than or equal to zero for any .thus , if hence is increasing . since , if .thus , is increasing and as , and if .finally , given that and is increasing in , , .* proof of proposition [ 3part prop ] ( page ) :* for ( i ) , use lemma [ l1 ] to write +n\frac{m_{r}}{m_{s}}\left[(\frac{t}{t_{0}})^{bm_{s}}-1\right ] \\\pi_{t_{0}}^{t}(2,1 ) & = & n\frac{m_{r}}{m_{s}}(1-p)\left[(\frac{t}{t_{0}})^{m_{s}}-(\frac{t}{t_{0}})^{bm_{s}}\right].\end{aligned}\ ] ] given that and that , we know that and the second term in the first equation is non - negative .thus for all , which allows us to apply lemma [ fosd ] .now consider the expressions for and obtained from the above equations by exchanging with .when ( meaning is the majority group ) then , and when ( i.e. 
, , meaning there is at least some inter - group linking ) then for large values of the second term in the expression for becomes negligible , in which case in the upper tail , proving ( ii ) by application of lemma [ fosd ] . since , the second term of the rhs is weakly increasing in are two cases .first , , in which case , thus is weakly increasing and .otherwise , in which case is first negative then positive above ( since ) , hence is first decreasing and then increasing , which also means that is first negative and then positive above .therefore , fosd if and only if . the condition reduces to * proof of proposition [ prop_fosd ] ( page prop_fosd ) : * observe that increases with .this means that and while and .the result then follows from lemma [ fosd ] .* proof of proposition [ prop5 ] ( page ) :* substituting from equation ( [ b1 gm ] ) , we have from this expression , it is easily verified that and that .the first derivative of is which has the same sign as , proving that is increasing below and then decreasing . to show concavity , write the second derivative as the denominator is negative , and the term in the numerator before the asterisk is positive , so is concave if and only if . dividing by and rearranging, we must show that . and ; using these inequalities and collecting terms proves the result .* proof of proposition [ prop - g1 ] ( page ) :* for what concerns the weak integration property , see the proof of proposition [ prop_1 ] .the only thing to change is the right hand part of equation ( [ rtb3 ] ) instead of the formula in ( [ pi2 ] ) . for long run integration ,consider the solution to the general model , with biased search , as described by equation ( [ rtb4 ] ) .we follow the same procedure as in the proof of proposition [ prop_lri ] , since is still a markov matrix .we obtain {ij } & = & \sum_{h=1}^{h}\left(\left[\left(\mathbf{b}_{s}\odot\mathbf{b}_{r}\right)^{-1}\mathbf{b}_{r}\right]_{jh}p(h)\frac{\vec{v}(\mathbf{b}_{s}\odot\mathbf{a})_{i}}{p(i)}\right ) \notag \\ & = & \sum_{h=1}^{h}\left(\sum_{k=1}^{h}\left(\left[\left(\mathbf{b}_{s}\odot\mathbf{b}_{r}\right)^{-1}\right]_{jk}[\mathbf{b}_{r}]_{kh}\right)p(h)\frac{\vec{v}(\mathbf{b}_{s}\odot\mathbf{a})_{i}}{p(i)}\right ) \notag \\ & = & \sum_{h=1}^{h}\left(\sum_{k=1}^{h}\left(\left[\left(\mathbf{b}_{s}\odot\mathbf{b}_{r}\right)^{-1}\right]_{jk}p(k)\frac{p(k , h)}{p(h)}\right)p(h)\frac{\vec{v}(\mathbf{b}_{s}\odot\mathbf{a})_{i}}{p(i)}\right ) \notag \\ & = & \left(\sum_{h=1}^{h}\sum_{k=1}^{h}\left(\left[\left(\mathbf{b}_{s}\odot\mathbf{b}_{r}\right)^{-1}\right]_{jk}p(k)p(k , h)\right)\right)\frac{\vec{v}(\mathbf{b}_{s}\odot\mathbf{a})_{i}}{p(i ) } \notag \\ & = & \left(\sum_{k=1}^{h}\left[\left(\mathbf{b}_{s}\odot\mathbf{b}_{r}\right)^{-1}\right]_{jk}p(k)\right)p(j)\frac{\vec{v}(\mathbf{b}_{s}\odot\mathbf{a})_{i}}{p(i)}\ \ .\label{eq_i}\end{aligned}\ ] ] of the overall links that it would receive in a type blind process , where the last line comes from the fact that .the second term is still a constant for type , but the first term is generically not proportional to . finally , for the partial integration property , the proof is analogous to the proof of proposition [ prop_partial_no_bias ] . in this case as satisfies the monotone convergence property , we can use lemma [ matrix_monotone ] from [ matrices ] , and the same reasoning applies .s. currarini , m.o .jackson , and p. 
pin , identifying the roles of choice and chance in network formation : racial biases in high school friendships , proceedings of the national academy of sciences 107 ( 2010a ) , 48574861 .a. b. jaffe and m. trajtenberg , flows of knowledge from universities and federal laboratories : modeling the flow of patent citations over time and across institutional and geographic boundaries , proc . natl .usa 93 ( 1996 ) , 1267112677 .m. o. jackson , average distance , diameter , and clustering in social networks with homophily , proceedings of the workshop in internet and network economics ( wine 2008 ) , lecture notes in computer science , edited by c. papadimitriou and s. zhang , springer - verlag , berlin heidelberg ( 2008a ) .
|
we model network formation when heterogeneous nodes enter sequentially and form connections through both random meetings and network - based search , but with type - dependent biases . we show that there is `` long - run integration , '' whereby the composition of types in sufficiently old nodes neighborhoods approaches the global type distribution , provided that the network - based search is unbiased . however , younger nodes connections still reflect the biased meetings process . we derive the type - based degree distributions and group - level homophily patterns when there are two types and location - based biases . finally , we illustrate aspects of the model with an empirical application to data on citations in physics journals . _ jel codes _ : d85 , a14 , z13 . _ keywords _ : network formation , social networks , homophily , integration , degree distribution , citations
|
this paper proposes design concepts of the diffusion coefficients and the stochastic control lyapunov functions for stochastic stabilization problems of general deterministic input - affine control systems . in deterministic control problems , such as nonholonomic systems , there exist systems not locally asymptotically stabilizable using any continuous time - invariant feedback law which are controllable .for such systems , previous workers proposed different control laws : the time - varying feedback laws , the discontinuous feedback laws , variable constraint control laws , and time - state control laws . for the deterministic systems , on the other hand , the authors of this paper propose the stochastic feedback laws via the stochastic control lyapunov functions .the aim of the authors previous work has been to design continuous stochastic feedback laws ; however , the continuity of the proposed controllers was not investigated .moreover , the randomizing problems were not well - considered ; i.e. , wong - zakai theorem was not considered when the deterministic systems were randomized . in this paper ,the general deterministic input - affine control systems are randomized using the wong - zakai theorem .then , sufficient conditions for diffusion coefficients are derived so that there exist stochastic control lyapunov functions for the systems .further , the stochastic continuous feedback law is proposed as it enables the nonholonomic system become globally asymptotically stable in probability .this paper is organized as follows . in, the motivation for this reserach is described using brockett integrator , a typical nonholonomic system . in ,the basic results of stochastic stabilities , stochastic stabilizabilities , and randomization problems are summarized . in , the main results of this paper are presented . in ,general deterministic input - affine control systems are randomized via the wong - zakai theorems , besides considering the design strategies of the diffusion coefficients and the stochastic control lyapunov functions . in , the validity of the strategies is confirmed by obtaining a sontag - type stochastic controller for the brockett integrator .the sufficient condition for the proposed controller to be continuous is also obtained .moreover , it is proven that the origin of the resulting closed - loop system is globally asymptotically stable in probability .shows the numerical simulation of the brockett integrator with the proposed controller . concludes the paper . in this paper, denotes an -dimensional euclidean space . for a vector and a mappings and , the lie derivative of represented by the conditional probability of some event , under the condition , is written .a one - dimensional standard wiener process is represented as . for ,the differential forms of the stratonovich and ito integrals in are denoted by and , respectively . in this paper ,one - dimensional wiener processes are used for randomizing , because the aim of this paper is to design continuous feedback laws .if multi - dimensional wiener processes with are applied , the ito mappings of the randomized systems become discontinuous with probability one . in control problems for deterministic nonlinear systems , such as nonholonomic systems , there exist systems which are not locally asymptotically stabilizable by using any continuous state feedback law , although they are controllable . 
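Before turning to the Brockett integrator in detail, the following sketch (Python) shows the kind of simulation used throughout: an Euler-Maruyama integration of an input-affine system whose control inputs carry a one-dimensional white-noise component. The vector fields are those of the standard Brockett integrator, but the deterministic feedback and the noise gains are arbitrary placeholders chosen only so that the sketch runs; they are not the stochastic controller constructed in this paper, and for simplicity the sketch integrates the Ito form directly, ignoring the Wong-Zakai correction terms discussed later.

```python
# Euler-Maruyama integration of an input-affine system with noisy inputs,
# using the Brockett integrator's vector fields.  The feedback u(x) and the
# noise gains k(x) are placeholders for illustration, not the paper's design.
import numpy as np

def g1(x): return np.array([1.0, 0.0, -x[1]])     # input vector field for u1
def g2(x): return np.array([0.0, 1.0,  x[0]])     # input vector field for u2
def u(x):  return np.array([-x[0], -x[1]])        # placeholder deterministic feedback
def k(x):  return np.array([ x[2], -x[2]])        # placeholder noise gains

rng = np.random.default_rng(0)
dt, x = 1e-3, np.array([1.0, 1.0, 1.0])
for _ in range(20_000):
    G = np.column_stack([g1(x), g2(x)])           # 3x2 input matrix
    dW = rng.normal(0.0, np.sqrt(dt))             # one-dimensional Wiener increment
    x = x + G @ (u(x) * dt) + G @ (k(x) * dW)
print("final state:", x)
```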
the brockett integrator is one of typical nonholonomic systems .further , if and , the system is said to be a chained system .] , where satisfy , , and .this system is controllable because the rank of a matrix is for all ; however , the system does not have any continuous feedback stabilizer , because it is a driftless affine nonholonomic system .let the stabilization problem of the brockett integrator be considered from the viewpoint of the control lyapunov theory .a simple positive definite function is not a control lyapunov function for , because is derived for .hence , one has to consider a new positive definite function , which is concave down in ; i.e. , where fig .[ fig : sclf ] implies that if the initial state is not in , and if the feedback control law is designed so that except , then the trajectories of the system converge to the origin .however , the origin of the resulting closed - loop system is not locally asymptotically stable , because any neighborhood of the origin contains the nonempty subset of . on the other hand ,the hessian of in is calculated as ^t \right](x)= \begin{bmatrix } -(1+x_3 ^ 2 ) & 0 & 0 \\ 0 & -(1+x_3 ^ 2 ) & 0 \\ 0 & 0 & 4 \end{bmatrix},\end{aligned}\ ] ] which has negative - valued eigenvalues .this implies that the trajectory starting from possibly converges to the origin , if the hessian has a role in the flow of . in this paper ,to use the hessian effectively , feedback control laws involving the wiener processes are considerd .in other words , the aim of this paper is to solve the stochastic stabilization problems of the deterministic input - affine control systems by using stochastic control lyapunov functions .the foregoing approach is analogous to the globally asymptotically stabilization problems for systems with non - contractible state space .however , the basic idea of this paper is different from that of because , this paper proposes stochastic continuous feedback laws and deterministic discontinuous feedback laws . in ,scaledwidth=40.0% ]in this section , the basic results of stochastic stabilities , stochastic stabilizabilities , and randomization problems are summarized and discussed . in this paper ,the theories of lyapunov stability and stabilizability are used . in this subsection , the previous results of hasminskii and florchinger are described .let a stochastic system and a control stochastic system be considered , where the initial state is given in ; is a control input ; , , and are lipshitz functionals satisfying and . also , it is assumed that a non - negative constant satysfying latexmath:[\ ] ] become and in .if is so designed such that hold , then is obtained .therefore , is a stochastic control lyapunov function of the stochastic system . in , the pre - feedback law is so constructed as to make the wong - zakai correction terms for and vanish ; i.e. 
, is made to simplify the design problem of the diffusion coefficient .converts the stochastic stabilization problem of the brockett integrator into the construction problem of a stochastic stabilizing feedback law for the system via the stochastic control lyapunov function .the following theorem is immediately obtained .[ the : sontag ] a sontag - type controller with makes the origin of the system locally asymptotically stable in probability .yields that while .if , is obtained by substituting with .therefore , the sontag - type controller locally asymptotically stabilizes the origin of the system .then , the following theorem is obtained by using the sontag - type controller . [ the : sscp ] considering , if the diffusion coefficient satisfies then the stochastic control lyapunov function satisfies the small control property . because and are once differentiable , is continuous for all . therefore , the sontag - type controller is continuous except the origin .the controller is in . for , is obtained .therefore , yields that further , for , is obtained .therefore , yields that then , the conditions of are satisfied if . therefore , satisfies the small control property .moreover , the following corollary is derived .[ cor : global ] the origin of the closed - loop system with is globally asymptotically stable in probability .the function is positive definite and proper in .further , is negative definite in for the system with .therefore , the corollary is proved . and yield that any path of the closed - loop system with converges to the origin with probability one .the origin of the closed - loop system with is asymptotically stable in probability ; however , it is not exponentially stable in probability because is concave down in . to improve the convergence rate , for example, one can apply the sliding mode controls .in this section , the randomized brockett integrator with and is considered .because the randomization is operated for stochastic stabilization , the diffusion coefficient should vanish while the eigenvalues of are positive .therefore , is designed by where are the eigenvalues of , and and the design parameters . fig .[ fig : bro - lv ] confirms that is negative definite ; moreover , figs .[ fig : ex1-state ] and [ fig : ex1-input ] show that the sample paths of the state and the input converge to , respectively .the numerical simulation is calculated with the initial state and with the design parameters using euler - maruyama scheme . in with ., scaledwidth=40.0% ] and ).,scaledwidth=40.0% ] and ).,scaledwidth=40.0% ]in this paper , sufficient condition is proposed for the diffusion coefficients such that the origin of the input - affine systems becomes locally asymptotically stable in probability . moreover ,the stochastic continuous feedback law , which makes the origin of the brockett integrator be globally asymptotically stable in probability , is derived .the authors thank professor pavel pakshin for his valuable comments . , _ discontinuous control of nonholonomic systems _ , systems & control letters , vol.27 , pp . 3745 , 1996 ., _ stabilization and tracking in the nonholonomic integrator via sliding modes _ , systems & control letters , vol.29 , pp . 9199 , 1996 ., _ asymptotic stability and feedback stabilization _ , differential geometric control theory , vol.27 , pp . 
181191 , 1983 ., _ a necessary condition for feedback stabilization _ , systems & control letters , vol.14 , 227/232 , 1990 ., _ lyapunov like techniques for stochastic stability _ , siam journal on control and optimization , vol.33 , no.4 , pp . 11511169 , 1995 . ,_ feedback stabilization of affine in the control stochastic differential systems by the control lyapunov function method _ , siam journal on control and optimization , vol.35 , no.2 , pp . 500511 , 1997 ., _ global stabilization onf composite stochastic systems _ , computers & mathematics with applications , vol.33 , no.6 , pp . 127135 , 1997 . , _ stabilization of hamiltonian systems with nonholonomic constraints based on time - varying generalized canonical transformations _ , systems & control letters , vol.44 , pp.309319 , 2001 . , _ applied stochastic processes and control for jump - diffusions _ , siam , 2007 . ,_ stochastic stability of differential equations _ , sijthoff & noordhoff , 1980 . ,stochastic differential equations and diffusion processes _ , north - holland , 1981 . ,_ stochastic stability and control _ , academic press , new york , 1967 . ,_ nonlinear dynamical control systems _ , springer , 1990 . , _ stabilization problems of nonlinear systems using feedback laws with wiener processes _ , in proceedings of joint 48th ieee conference on decision and control and 28th chinese control conference ( cdc / ccc 2009 ) , 2009 . , _stochastic differential equations : an introduction with applications _ , sixth edition , springer , 2003 . ,_ a control strategy for a class of nonholonomic systems time state control form and its application _ , in proceedings of the 35th ieee conference on decision and control , pp . 11201121 , 1994 . , _ a ` universal ' construction of artstein s theorem on nonlinear stabilization _ , systems & control letters , vol . 13 , pp . 117123 , 1989 . , _ exponential stabilization of nonholonomic chained systems _ , ieee transactions on automatic control , vol.40 , no.1 , pp.3549 , 1995 . , _ stabilizing control of symmetric affine systems by direct gradient descent control _ , in proceedings of the 2008 american control conference , 2008 . , _ exponential stabilization of nonholonomic dynamic systems by smooth time - varying control _ , automatica , vol.38 , pp. 11391146 . , _ wong zakai approximations for stochastic differential equations _ , acta applicandae mathematicae , vol.43 , pp . 317359 , 1996 . ,_ searching for control lyapunov - morse functions using genetic programming for global asymptotic stabilization of nonlinear systems _ , in proceedings of 45th ieee conference on decision and control , 2006 ., _ on the relation between ordinary and stochastic differential equations _ ,intenational journal of engineering science , vol .3 , issue 2 , pp . 213229 , 1965 . , _ nonlinear tracking control of a nonholonomic fish robot in chained form _ , in proceedings of the sice annual conference , 2003 . , _ some commnets on stabilizability _ , applied mathematics and optimization , vol.19 , pp . 19 , 1989 .
|
in this paper , a stochastic asymptotic stabilization method is proposed for deterministic input - affine control systems , which are randomized by including gaussian white noise in the control inputs . a sufficient condition is derived for the diffusion coefficients so that stochastic control lyapunov functions exist for the randomized systems . to illustrate the usefulness of this sufficient condition , a stochastic continuous feedback law is proposed that makes the origin of the brockett integrator globally asymptotically stable in probability .
|
financial bubbles are generally defined as transient upward acceleration of prices above fundamental value . however , identifying unambiguously the presence of a bubble remains an unsolved problem in standard econometric and financial economic approaches , due to the fact that the fundamental value is in general poorly constrained and it is not possible to distinguish between exponentially growing fundamental price and exponentially growing bubble price . to break this stalemate , sornette and co -workers have proposed that bubbles are actually not characterized by exponential prices ( sometimes referred to as `` explosive '' ) , but rather by faster - than - exponential growth of price ( that should therefore be referred to as `` super - explosive '' ) .see and references therein .the reason for such faster - than - exponential regimes is that imitation and herding behavior of noise traders and of boundedly rational agents create positive feedback in the valuation of assets , resulting in price processes that exhibit a finite - time singularity at some future time .see for a general theory of finite - time singularities in ordinary differential equations , for a classification and for applications .this critical time is interpreted as the end of the bubble , which is often but not necessarily the time when a crash occurs .thus , the main difference with standard bubble models is that the underlying price process is considered to be intrinsically transient due to positive feedback mechanisms that create an unsustainable regime .furthermore , the tension and competition between the value investors and the noise traders may create deviations around the finite - time singular growth in the form of oscillations that are periodic in the logarithm of the time to .log - periodic oscillations appear to our clocks as peaks and valleys with progressively greater frequencies that eventually reach a point of no return , where the unsustainable growth has the highest probability of ending in a violent crash or gentle deflation of the bubble .log - periodic oscillations are associated with the symmetry of discrete scale invariance , a partial breaking of the symmetry of continuous scale invariance , and occurs in complex systems characterized by a hierarchy of scales .see for a general review and references therein .recent literatures on bubbles and crashes can be summarized as the following kinds : first , the combined effects of heterogeneous beliefs and short - sales constraints may cause large movements in asset . in this kind of models ,the asset prices are determined at equilibrium to the extent that they reflect the heterogeneous beliefs about payoffs .but short sales restrictions force the pessimistic investors out of the market , leaving only optimistic investors and thus inflated asset price levels . however , when short sales restrictions no longer bind investors , then prices fall back down . while in the second type , the role of `` noise traders '' in fostering positive feedback trading has been emphasized .these models says trend chasing by one class of agents produces momentum in stock prices .the empirical evidence on momentum strategies can be found in .after the discussion on bubbles and crashes , the literatures on rebound should be summarized also . on the theoretical side , there are several competing explanations for price decreases followed by reversals : liquidity and time - varying risk . 
stresses the importance of liquidity : as more people sell , agents who borrowed money to buy assets are forced to sell too . when forced selling stops , this trend reverses . shows that it is risky to be a fundamental trader in this environment and that price reversals after declines are likely to be higher when there is more risk in the price , as measured by volatility . on the empirical front concerning the forecast of reversals in price drops, shows that the simplest way to predict prices is to look at past performance . shows that price - dividend ratios forecast future returns for the market as a whole .however , these two approaches do not aim at predicting and can not determine the most probable rebound time for a single ticker of the stock .the innovation of our methodology in this respect is to provide a very detailed method to detect rebound of any given ticker . in this paper, we explore the hypothesis that financial bubbles have mirror images in the form of `` negative bubbles '' in which positive feedback mechanisms may lead to transient accelerating price falls .we adapt the johansen - ledoit - sornette ( jls ) model of rational expectation bubbles to negative bubbles .the crash hazard rate becomes the rally hazard rate , which quantifies the probability per unit time that the market rebounds in a strong rally .the upward accelerating bullish price characterizing a bubble , which was the return that rational investors require as a remuneration for being exposed to crash risk , becomes a downward accelerating bearish price of the negative bubble , which can be interpreted as the cost that rational agents accept to pay to profit from a possible future rally . during this accelerating downward trend, a tiny reversal could be a strong signal for all the investors who are seeking the profit from the possible future rally .these investors will long the stock immediately after this tiny reversal . as a consequence , the price rebounds very rapidly .this paper contributes to the literature by augmenting the evidence for transient pockets of predictability that are characterized by faster - than - exponential growth or decay .this is done by adding the phenomenology and modeling of `` negative bubbles '' to the evidence for characteristic signatures of ( positive ) bubbles .both positive and negative bubbles are suggested to result from the same fundamental mechanisms , involving imitation and herding behavior which create positive feedbacks . 
by such a generalization within the same theoretical framework, we hope to contribute to the development of a genuine science of bubbles .the rest of the paper is organized as follows .section 2.1 summarizes the main definitions and properties of the johansen - ledoit - sornette ( jls ) for ( positive ) bubbles and their associated crashes .section 2.2 presents the modified jls model for negative bubbles and their associated rebounds ( or rallies ) .the subsequent sections test the jls model for negative bubbles by providing different validation steps , in terms of prediction skills of actual rebounds and of abnormal returns of trading strategies derived from the model .section 3 describes the method we have developed to test whether the adapted jls model for negative bubbles has indeed skills in forecasting large rebounds .this method uses a robust pattern recognition framework build on the information obtained from the calibration of the adapted jls model to the financial prices .section 4 presents the results of the tests concerning the performance of the method of section 3 with respect to the advanced diagnostic of large rebounds .section 5 develops simple trading strategies based on the method of section 3 , which are shown to exhibit statistically significant returns , when compared with random strategies without skills with otherwise comparable attributes .section 6 concludes . , , developed a model ( referred to below as the jls model ) of financial bubbles and crashes , which is an extension of the rational expectation bubble model of . in this model , a crash is seen as an event potentially terminating the run - up of a bubble .a financial bubble is modeled as a regime of accelerating ( super - exponential power law ) growth punctuated by short - lived corrections organized according the symmetry of discrete scale invariance .the super - exponential power law is argued to result from positive feedback resulting from noise trader decisions that tend to enhance deviations from fundamental valuation in an accelerating spiral . in the jls model ,the dynamics of stock markets is described as where is the stock market price , is the drift ( or trend ) and is the increment of a wiener process ( with zero mean and unit variance ) .the term represents a discontinuous jump such that before the crash and after the crash occurs . the loss amplitude associated with the occurrence of a crashis determined by the parameter .the assumption of the constant jump size is easily relaxed by considering a distribution of jump sizes , with the condition that its first moment exists .then , the no - arbitrage condition is expressed similarly with replaced by its mean .each successive crash corresponds to a jump of by one unit .the dynamics of the jumps is governed by a crash hazard rate .since is the probability that the crash occurs between and conditional on the fact that it has not yet happened , we have = 1 \times h(t ) dt + 0 \times ( 1- h(t ) dt) ] , where the expectation is performed with respect to the risk - neutral measure , and in the frame of the risk - free rate .this is the standard condition that the price process is a martingale . taking the expectation of expression ( [ eq : dynamic ] ) under the filtration ( or history ) until time reads = \mu(t ) p(t ) dt + \sigma(t ) p(t ) { \rm e}_t[dw ] - \kappa p(t ) { \rm e}_t[dj]~. 
\label{thetyjye}\ ] ] since = 0 ] ( equation ( [ theyjytuj ] ) ) , together with the no - arbitrage condition =0 ] .the index indicates the parameter , where refer to respectively .finally , represents the actual informative parameter . assuming that there are informative parameters in total and using the indexes , then calculated via for ] is the ` good region ' of class i. we consider a single and find that there are two fits in in this group with parameter values of and .we determine the ` answer ' as follows : * if ] and ] , fits near belong to class ii and . more succinctly , m_i \in [ a , b ] , m_j \notin [ a , b ] , i \neq j , i , j \in \{1,2\} m_1,m_2 \notin [ a , b] ] .if is high , then we expect that this day has a high probability that the rebound will start .we choose feature qualification pair ( 10 , 200 ) here , meaning that a certain trait must appear in trait class i at least 11 times _ and _ must appear in trait class ii less than 200 times .if so , then we say that this trait is _ a feature of class i_. if , on the other hand , the trait appears 10 times or less in class i _ or _ appears 200 times or more in class ii , then this trait is _ a feature of class ii_. the result of this feature qualification is shown in figure [ fg : f2212 ] .note that the choice ( 10 , 200 ) is somewhat arbitrary and does not constitute an in - sample optimization on our part .this can be checked from the error diagrams presented below , which scan these numbers : one can observe in particular that the pair ( 10 , 200 ) does not give the best performance .we have also investigated the impact of changing other parameters and find a strong robustness . with this feature qualification ,the rebound alarm index can distinguish rebounds with high significance .if the first number is too big and the second number is too small , then the total number of class i features will be very small and the number of features in class ii will be large .this makes the rebound alarm index always close to 0 . in contrast , if is too small and is too large , the rebound alarm index will often be close to 1 .neither of these cases , then , is qualified to be a good rebound alarm index to indicate the start of the next rebound .however , the absolute values of feature qualification pair are not very sensitive within a large range .only the ratio plays an important role .figures [ fg:22p ] - [ fg:33b ] show that varying and in the intervals and does not change the result much . for the sake of conciseness , only the rebound alarm index of feature qualification pair ( 10 , 200 ) is shown in this paper .once we generate the class i and ii features of the learning set for values of before ( jan . 1 , 1975 ) , we then use these features to generate the predictions on the data after . recall that the windows that we fit are defined such that the end time increases 50 days from one window to the next .also note that all predictions made on days between these 50 days will be the same because there is no new fit information between , say , and .assume that we make a prediction at time : , ~~t > t_p\ ] ] then the fits set is made using the past information before prediction day .we use as the subset mentioned in sec .[ sub : question ] to generate the questionnaire on day and the traits for this questionnaire . comparing these traits with features and allows us to generate a rebound alarm index using the same method as described in sec .[ sec : features - learning ] . 
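to make the feature - qualification step above concrete , here is a minimal sketch in python . the representation of traits as abstract identifiers and the aggregation of a day 's traits into a single alarm value ( here , the fraction of class - i features among the day 's qualified traits ) are assumptions of the sketch , not necessarily the exact construction used in the paper .

```python
from collections import Counter

# hedged sketch of the feature-qualification rule described in the text:
# a trait is a class-i feature if it occurs more than n1 times in the class-i
# sample and fewer than n2 times in the class-ii sample; every other observed
# trait is treated as a class-ii feature.

def qualify_features(class1_traits, class2_traits, n1=10, n2=200):
    c1, c2 = Counter(class1_traits), Counter(class2_traits)
    features_1 = {t for t in c1 if c1[t] > n1 and c2.get(t, 0) < n2}
    features_2 = (set(c1) | set(c2)) - features_1
    return features_1, features_2

def rebound_alarm_index(day_traits, features_1, features_2):
    """map the traits observed on one day to a value in [0, 1] (assumed rule)."""
    relevant = [t for t in day_traits if t in features_1 or t in features_2]
    if not relevant:
        return 0.0
    return sum(t in features_1 for t in relevant) / len(relevant)

if __name__ == "__main__":
    # toy data: traits 'a' and 'b' dominate near rebounds, 'c' elsewhere
    f1, f2 = qualify_features(['a'] * 15 + ['b'] * 12 + ['c'] * 3,
                              ['c'] * 500 + ['a'] * 20)
    print(sorted(f1), sorted(f2))                        # ['a', 'b'] ['c']
    print(rebound_alarm_index(['a', 'b', 'c'], f1, f2))  # 0.666...
```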
using this technique , the prediction day scanned from 1975 - 01 - 01 until 2009 - 07 - 22 in steps of 50 days .we then construct the time series of the _ rebound alarm index _ over this period and with this resolution of 50 days .the comparison of this rebound alarm index with the historical financial index ( figure [ fg : p2212 ] ) shows a good correlation , but there are also some false positive alarms ( 1977 , 1998 , 2006 ) , as well as some false negative missed rebounds ( 1990 ) .many false positive alarms such as in 1998 and 2006 are actually associated with rebounds . but these rebounds have smaller amplitudes than our qualifying threshold targets . concerning the false negative ( missed rebound ) in 1990 , the explanation is probably that the historical prices preceeding this rebound does not follow the jls model specification .rebounds may result from several mechanisms and the jls model only provides one of them , arguably the most important .overall , the predictability of the rebound alarm index shown in figure [ fg : p2212 ] , as well as the relative cost of the two types of errors ( false positives and false negatives ) can be quantified systematically , as explained in the following sections .the major conclusion is that the rebound alarm index has a prediction skill much better than luck , as quantified by error diagrams .we have qualitatively seen that the feature qualifications method using back testing and forward prediction can generate a rebound alarm index that seems to detect and predict well observed rebounds in the s&p 500 index .we now quantify the quality of these predictions with the use of error diagrams .we create an error diagram for predictions after 1975 - 01 - 01 with a certain feature qualification in the following way : 1 .count the number of rebounds after 1975 - 01 - 01 as defined in section [ defreboundh2ysec ] and expression ( [ defreboundh2y ] ) .there are 9 rebounds .2 . take the rebound alarm index time series ( after 1975 - 01 - 01 ) and sort the set of all alarm index values in decreasing order .there are 12,600 points in this series and the sorting operation delivers a list of 12,600 index values , from the largest to the smallest one .the largest value of this sorted series defines the first threshold .4 . using this threshold , we declare that an alarm starts on the first day that the unsorted rebound alarm index time series exceeds this threshold .the duration of this alarm is set to 41 days , since the longest distance between a rebound and the day with index greater than the threshold is 20 days .then , a prediction is deemed successful when a rebound falls inside that window of 41 days .if there are no successful predictions at this threshold , move the threshold down to the next value in the sorted series of alarm index .once a rebound is predicted with a new value of the threshold , count the ratio of unpredicted rebounds ( unpredicted rebounds / total rebounds in set ) and the ratio of alarms used ( duration of alarm period / 12,600 prediction days ) .mark this as a single point in the error diagram . 
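the numbered procedure above can be sketched as follows . the handling of alarms ( every day above the threshold opens a forward 41 - day window , and the windows are simply unioned ) is an assumption , since the text leaves some of these details open , and the toy data are purely illustrative .

```python
import numpy as np

# hedged sketch of the error-diagram construction outlined above.

def error_diagram(alarm_index, rebound_days, window=41):
    """return (fraction of missed rebounds, fraction of alarm days) pairs,
    one point per threshold at which an additional rebound becomes predicted."""
    n_days = len(alarm_index)
    points, predicted = [], set()
    for thr in sorted(set(alarm_index), reverse=True):
        alarm_days = np.zeros(n_days, dtype=bool)
        for d in np.where(alarm_index >= thr)[0]:   # alarm starts on this day
            alarm_days[d:d + window] = True         # and lasts `window` days
        newly = {r for r in rebound_days if alarm_days[r]}
        if len(newly) > len(predicted):             # one more rebound captured
            predicted = newly
            points.append((1.0 - len(predicted) / len(rebound_days),
                           alarm_days.mean()))
        if len(predicted) == len(rebound_days):
            break
    return points

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    idx = 0.9 * rng.random(1000)                    # toy alarm index series
    rebounds = [120, 480, 770]
    idx[118] = idx[478] = idx[768] = 0.99           # alarms just before rebounds
    for p in error_diagram(idx, rebounds):
        print("missed fraction %.2f, alarm fraction %.3f" % p)
```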
in this way , we will mark 9 points in the error diagram for the 9 rebounds .the aim of using such an error diagram in general is to show that a given prediction scheme performs better than random .a random prediction follows the line in the error diagram .a set of points below this line indicates that the prediction is better than randomly choosing alarms .the prediction is seen to improve as more error diagram points are found near the origin ( 0 , 0 ) .the advantage of error diagrams is to avoid discussing how different observers would rate the quality of predictions in terms of the relative importance of avoiding the occurrence of false positive alarms and of false negative missed rebounds . by presenting the full error diagram, we thus sample all possible preferences and the unique criterion is that the error diagram curve be shown to be statistically significantly below the anti - diagonal . in figure [ fg:22p ] , we show error diagrams for different feature qualification pairs .note the 9 points representing the 9 rebounds in the prediction set .we also plot the 11 points of the error diagrams for the learning set in figure [ fg:22b ] . as a different test of the quality of this pattern recognition procedure, we repeated the entire process but with a rebound now defined as the minimum price within a window of days instead of days , as before .these results are shown in figures [ fg:33p]-[fg:33b ] . given a value of the _ predictive _ rebound alarm index , we can also use the _ historical _ rebound alarm index combined with bayesian inference to calculate the probability that this value of the rebound alarm index will actually be followed by a rebound .we use predictions near the end of november , 2008 as an example . from figure[ fg : p2212 ] , we can see there is a strong rebound signal in that period .we determine if this is a true rebound signal by the following method : 1 .find the highest rebound alarm index around the end of november 2008 .calculate , the number of days in the interval from 1975 - 01 - 01 until the end of the prediction set , 2009 - 07 - 22 .calculate , the number of days which have a rebound alarm index greater than or equal to .the probability that the rebound alarm index is higher than is estimated by 5 .the probability of a day being near the bottom of a rebound is estimated as the number of days near real rebounds over the total number of days in the predicting set : where is the number of rebounds we can detect after 1975 - 01 - 01 and is the rebound width , i.e. the number of days near the real rebound in which we can say that this is a successful prediction . for example , if we say that the prediction is good when the predicted rebound time and real rebound time are within 10 days of each other , then the rebound width .the probability that the neighbor of a rebound has a rebound alarm index larger than is estimated as where is the number of rebounds in which 7 .given that the rebound alarm index is higher than , the probability that the rebound will happen in this period is given by bayesian inference : averaging for all the different feature qualifications gives the probability that the end of november 2008 is a rebound as 0.044 . by comparing with observations, we see that this period is not a rebound .we obtain a similar result by increasing the definition of rebound from 200 days before and after a local minimum to 365 days , yielding a probability of 0.060 . 
when we _ decrease _ the definition to 100 days , the probability that this period is a rebound jumps to 0.597 .the reason for this sudden jump is shown in figure [ fg : bn08 ] where we see the index around this period and the s&p 500 index value . from the figure, we find that this period is a local minimum within 100 days , not more .this is consistent with what bayesian inference tells us .however , we have to address that the more obvious rebound in march 2009 is missing in our rebound alarm index . technically , one can easily find that this is because the end of crash is not consistent with the beginning of rebound in this special period .in this case , we then test all the days after 1985 - 01 - 01 systematically by bayesian inference using only prediction data ( rebound alarm index ) after 1975 - 01 - 01 . to show that the probability that is stable , we can not start bayesian inference too close to the initial predictions so we choose 1985 - 01 - 01 as the beginning time .we have 5 ` bottoms ' ( troughs ) after this date , using the definition of a minimum within days . for a given day after 1985 - 01 - 01 , we know all values of the rebound alarm index from 1975 - 01 - 01 to that day .then we use this index and historical data of the asset price time series in this time range to calculate the probability that is the bottom of the trough , given that the rebound alarm index is larger than , where is defined as to simplify the test , we only consider the case of feature qualification pair ( 10 , 200 ) , meaning that the trait is a feature of class i only if it shows in class i more than 10 times and in class ii less than 200 times. figure [ fg : bayes ] shows that the actual rebounds occur near the local highest probability of rebound calculated by bayesian inference .this figure also illustrates the existence of false positive alarms , i.e. , large peaks of the probability not associated with rebounds that we have characterized unambiguously at the time scale of days .in order to determine if the predictive power of our method provides a genuine and useful information gain , it is necessary to estimate the excess return it could generate . the excess return is the real return minus the risk free rate transformed from annualized to the duration of this period .the annualized 3-month us treasury bill rate is used as the risk free rate in this paper .we thus develop a trading strategy based on the rebound alarm index as follows .when the rebound alarm index rises higher than a threshold value , then with a lag of days , we buy the asset .this entry strategy is complemented by the following exit strategy .when the rebound alarm index goes below , we still hold the stock for another days , with one exception .consider the case that the rebound alarm index goes below at time and then rises above again at time .if is smaller than the holding period , then we continue to hold the stock until the next time when the rebound alarm index remains below for days .the performance of this strategy for some fixed values of the parameters is compared with random strategies , which share all the properties except for the timing of entries and exits determined by the rebound alarm index and the above rules .the random strategies consist in buying and selling at random times , with the constraint that the total holding period ( sum of the holding days over all trades in a given strategy ) is the same as in the realized strategy that we test . 
implementing 1000 times these constrained random strategies with different random number realizations provide the confidence intervals to assess whether the performance of our strategy can be attributed to real skill or just to luck. results of this comparison are shown in table [ tb : performance ] for two sets of parameter values .the p - value is a measure of the strategies performance , calculated as the fraction of corresponding random strategies that are better than or equal to our strategies .the lower the p - value is , the better the strategy is compared to the random portfolios .we see that all of our strategies cumulative excess returns are among the top 5 - 6% out of 1000 corresponding random strategies cumulative excess returns .box plots for each of the strategies are also presented in figures [ fg : bp1]-[fg : bp2 ] .the cumulative returns as well as the cumulative excess returns obtained with the two strategies as a function of time are shown in figures [ fg : wt1]-[fg : wt2 ] .these results suggest that these two strategies would provide significant positive excess return . of course , the performance obtained here are smaller than the naive buy - and - hold strategy , consisting in buying at the beginning of the period and just holding the position .the comparison with the buy - and - hold strategy would be however unfair as our strategy is quite seldom invested in the market .our goal here is not to do better than any other strategy but to determine the statistical significance of a specific signal .for this , the correct method is to compare with random strategies that are invested in the market the same fraction of time .it is obvious that we could improve the performance of our strategy by combining the alarm indexes of bubbles and of negative bubbles , for instance , but this is not the goal here .we also provide the sharpe ratio as a measure of the excess return ( or risk premium ) per unit of risk .we define it per trade as follows }{\sigma}\ ] ] where is the return of a trade , is the risk free rate ( we use the 3-month us treasury bill rate ) transformed from annualized to the duration of this trade given in table [ tb : performance ] and is the standard deviation of the returns per trade .the higher the sharpe ratio is , the higher the excess return under the same risk .the bias ratio is defined as the number of trades with a positive return within one standard deviation divided by one plus the number of trades which have a negative return within one standard deviation : \}}{1 + \ # \{r | r \in [ -\sigma , 0)\ } } \label{eq : br}\ ] ] in eq .( [ eq : br ] ) , is the excess return of a trade and is the standard deviation of the excess returns .this ratio detects valuation bias . to see the performance of our strategies, we also check all the possible random trades with a holding period equals to the average duration of our strategies , namely 25 days and 17 days for strategy i and ii respectively .the average sharpe and bias ratios of these random trades are shown in table [ tb : performance ] .both sharpe and bias ratios of our strategies are greater than those of the random trades , confirming that our strategies deliver a larger excess return with a stronger asymmetry towards positive versus negative returns . 
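the two per - trade measures just defined can be written down directly . the toy returns and per - trade risk - free rates below are placeholders ; following the text , the sharpe denominator uses the standard deviation of the trade returns , while the bias ratio uses the standard deviation of the excess returns .

```python
import numpy as np

# minimal sketch of the per-trade sharpe ratio and bias ratio defined above.

def sharpe_ratio(returns, riskfree):
    """mean excess return per trade divided by the std of the trade returns."""
    returns, riskfree = np.asarray(returns), np.asarray(riskfree)
    excess = returns - riskfree
    return excess.mean() / returns.std(ddof=0)

def bias_ratio(returns, riskfree):
    """count of excess returns in [0, sigma] over 1 + count in [-sigma, 0)."""
    excess = np.asarray(returns) - np.asarray(riskfree)
    sigma = excess.std(ddof=0)
    pos = np.sum((excess >= 0) & (excess <= sigma))
    neg = np.sum((excess >= -sigma) & (excess < 0))
    return pos / (1.0 + neg)

if __name__ == "__main__":
    r = [0.021, -0.004, 0.013, 0.008, -0.011, 0.017]  # toy per-trade returns
    rf = [0.001] * len(r)                             # toy per-trade risk-free rates
    print("sharpe per trade:", round(sharpe_ratio(r, rf), 3))
    print("bias ratio      :", round(bias_ratio(r, rf), 3))
```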
as another test, we select randomly the same number of random trades as in our strategies , making sure that there is no overlap between the selected trades .we calculate the sharpe and bias ratios for these random trades .repeating this random comparative selection 1000 times provides us with p - values for the sharpe ratio and for bias ratio of our strategies .the results are presented in table [ tb : performance ] .all the p - values are found quite small , confirming that our strategies perform well .we have developed a systematic method to detect rebounds in financial markets using `` negative bubbles , '' defined as the symmetric of bubbles with respect to a horizontal line , i.e. , downward accelerated price drops .the aggregation of thousands of calibrations in running windows of the negative bubble model on financial data has been performed using a general pattern recognition method , leading to the calculation of a rebound alarm index .performance metrics have been presented in the form of error diagrams , of bayesian inference to determine the probability of rebounds and of trading strategies derived from the rebound alarm index dynamics .these different measures suggest that the rebound alarm index provides genuine information and suggest predictive ability .the implemented trading strategies outperform randomly chosen portfolios constructed with the same statistical characteristics .this suggests that financial markets may be characterized by transient positive feedbacks leading to accelerated drawdowns , which develop similarly to but as mirror images of upward accelerating bubbles .our key result is that these negative bubbles have been shown to be predictably associated with large rebounds or rallies . in summary, we have expanded the evidence for the possibility to diagnose bubbles before they terminate , by adding the phenomenology and modeling of `` negative bubbles '' and their anticipatory relationship with rebounds .the present paper contributes to improving our understanding of the most dramatic anomalies exhibited by financial markets in the form of extraordinary deviations from fundamental prices ( both upward and downward ) and of extreme crashes and rallies .our results suggest a common underlying origin to both positive and negative bubbles in the form of transient positive feedbacks leading to identifiable and reproducible faster - than - exponential price signatures ..[tb : symbol ] list of symbols [ cols="^ , < " , ]49 natexlab#1#1url # 1`#1`urlprefix blanchard , o. , watson , m. , 1982 .bubbles , rational expectations and speculative markets . in : wachtel ,, crisis in economic and financial structure : bubbles , bursts , and shocks .lexington books : lexington .gelfand , i. , guberman , s. , keilis - borok , v. , knopoff , l. , press , f. , e.ya.ranzman , rotwain , i. , sadovsky , a. , 1976 .pattern recognition applied to earthquake epicenters in california .physics of the earth and planetary interiors 11 ( 3 ) , 227283 . jiang , z .- q . , zhou , w .- x . ,sornette , d. , woodard , r. , bastiaensen , k. , cauwels , p. , 2010 .bubble diagnosis and prediction of the 2005 - 2007 and 2008 - 2009 chinese stock market bubbles .journal of economic behavior and organization 74 , 149162 .sornette , d. , woodard , r. , fedorovsky , m. , reimann , s. , woodard , h. , zhou , w .- x . , 2010 .the financial bubble experiment : advanced diagnostics and forecasts of bubble terminations ( the financial crisis observatory ) .sornette , d. , woodard , r. , fedorovsky , m. 
, reimann , s. , woodard , h. , zhou , w .- x . , 2010 . the financial bubble experiment : advanced diagnostics and forecasts of bubble terminations volume ii master document . http://arxiv.org/abs/1005.5675 .
figure caption : ( upper panel ) fitted trajectory from 1973 - 01 - 01 to 1974 - 10 - 01 ( time window delineated by the two black dashed vertical lines ) with very clear log - periodic oscillations , followed by a strong positive rebound ; the best fits from taboo search are used to form a 90% confidence interval for the critical time , shown by the light shaded area , and the dark shaded area corresponds to the 20 - 80 quantiles region of the predicted rebounds . ( lower panel ) the same phenomenon is observed in the foreign exchange market : fitted results for the usd / eur exchange rate from 2006 - 07 - 01 to 2008 - 04 - 01 , which performed a significant drawdown with very clear log - periodic oscillations , followed by a strong positive rebound .
figure caption : over the set of 2,568 time intervals for which negative bubbles are detected by the condition that the fits of the price by expression ( [ eq : lppl ] ) satisfy condition ( [ eq : rb ] ) . ( lower ) plot versus time for the s&p 500 index ; note that peaks in this figure correspond to valleys in actual price .
figure caption : learning set ( both window dates before jan . 1 , 1975 ) . ( upper ) rebound alarm index for the learning set using the chosen feature qualification pair ; the rebound alarm index is in the range [ 0 , 1 ] , and the higher the rebound alarm index , the more likely is the occurrence of a rebound . ( lower ) plot of the s&p index versus time ; red vertical lines indicate rebounds defined by local minima within plus and minus 200 days around them . note that these rebounds are the historical `` change of regime '' rather than only the jump - like reversals ; the jump - like reversals , 1972 and 1974 as examples , are included in these rebounds . they are located near clusters of high values of the rebound alarm index of the upper panel .
figure caption : ( upper ) rebound alarm index , in the range [ 0 , 1 ] ; the higher the rebound alarm index , the more likely is the occurrence of a rebound . ( lower ) plot of the s&p index versus time ; red vertical lines indicate rebounds defined by local minima within plus and minus 200 days . they are located near clusters of high values of the rebound alarm index of the upper panel .
figure caption : feature qualification pair , meaning that if the occurrence of a certain trait in class i is larger than the first number and its occurrence in class ii is less than the second number , then we call this trait a feature of class i , and vice versa ; see text for more information .
figure caption : probability of a rebound , given the value of the rebound alarm index , derived by bayesian inference applied to bottoms at the corresponding time scale in days ; the feature qualification is ( 10 , 200 ) . the conditioning value is the largest rebound alarm index in the past 50 days , and the vertical red lines show the locations of the realized rebounds in the history of the s&p 500 index .
figure caption : box plot of the returns of the 1000 random strategies . lower and upper horizontal edges ( blue lines ) of the box represent the first and third quartiles ; the red line in the middle is the median ; the lower and upper black lines are 1.5 interquartile ranges away from the quartiles ; points between the quartiles and the black lines are outliers , and points outside the black lines are extreme outliers . our strategy return is marked by the red circle , which shows that our strategy is an outlier among the set of random strategies ; the log - return ranked 55 out of 1000 random strategies .
figure caption : cumulative return of strategy i. major performance parameters of this strategy are : 77 trading times ; 66.2% of trades have positive return ; 1894 total holding days , which is 15.0% of total time ; accumulated log - return is 95% and average return per trade is 1.23% ; average trade length is 24.60 days ; the p - value of this strategy is 0.055 .
figure caption : cumulative return of strategy ii . major performance parameters of this strategy are : 38 trading times ; 65.8% of trades have positive return ; 656 total holding days , which is 5.2% of total time ; accumulated log - return is 45% and average return per trade is 1.19% ; average trade length is 17.26 days ; the p - value of this strategy is 0.058 .
|
we introduce the concept of `` negative bubbles '' as the mirror ( but not necessarily exactly symmetric ) image of standard financial bubbles , in which positive feedback mechanisms may lead to transient accelerating price falls . to model these negative bubbles , we adapt the johansen - ledoit - sornette ( jls ) model of rational expectation bubbles with a hazard rate describing the collective buying pressure of noise traders . the price fall occurring during a transient negative bubble can be interpreted as an effective random down payment that rational agents accept to pay in the hope of profiting from the expected occurrence of a possible rally . we validate the model by showing that it has significant predictive power in identifying the times of major market rebounds . this result is obtained by using a general pattern recognition method that combines the information obtained at multiple times from a dynamical calibration of the jls model . error diagrams , bayesian inference and trading strategies suggest that one can extract genuine information and obtain real skill from the calibration of negative bubbles with the jls model . we conclude that negative bubbles are in general predictably associated with large rebounds or rallies , which are the mirror images of the crashes terminating standard bubbles . keywords : negative bubble , rebound , positive feedback , pattern recognition , trading strategy , error diagram , prediction , bayesian methods , financial markets , price forecasting , econophysics , complex system , critical point phenomena
|
grid computing consists of large sets of diverse , geographically distributed resources that are collected into a virtual computer for high performance computation .the success of the grid depends greatly on efficient utilization of these resource .the particle physics data grid ( ppdg ) is an example user of the data grid .ppdg is a collaboration of computer scientists in distributed computing and grid technology , and physicists who work on the major high - energy and nuclear physics experiments .these experiments include atlas , star , cms , d0 , and babar .there are many computing resources involved in ppdg physics experiments .for example , the computing resources at brookhaven national laboratory ( bnl ) includes 1100 dual processor pcs which come from six different vendors .the linux farms at the bnl rhic / usatlas provide 3.115 tflops of computation power .the storage system provides 140 tera - bytes of disk space and 1.2 peta - bytes of robotic tape storage space .the diversity of these computing resources and their large number of users make the grid environment vulnerable to faults and excessive loads .this seriously affects the utilization of grid resources .therefore , it is crucial to get knowledge about the status of all types of computing resources and services to enhance the performance and avoid faults .here we give an example to illustrate how a grid application relies on grid information service .a job scheduler needs information about available cpu resources in order to plan the efficient execution of tasks .a computing farm consists of a set of many hosts available for scheduling via grid resource management protocols .if required by the exact nature of the interrelationship between the farm monitor and the job scheduler , the hosts at a given site may be broken down into multiple clusters that consist of homogeneous nodes , such that the local job manager can assume that any queued job can be run on any available node within the computing farm .grid information service should provide the system status about each cluster , i.e. cluster configuration , associated storage system , and so on .many applications , fault detection , performance analysis , performance tuning , prediction , and schedule need information about the grid environment .good methods need to be designed to monitor resource usage , get the performance information and detect the potential failures . due to the complexity of the grid ,implementing a monitoring system for such a large scale computing resource is not a trivial task .the targets to be monitored in grid resource include cpu usage , disk usage , and network performance of grid nodes . the ability to monitor andmanage distributed computing components is critical for enabling high - performance distributed computing .monitoring data is needed to determine the source of performance problems and to tune the system and application for better performance .fault detecting and recovery mechanisms need monitoring data to determine where the problem is , what is the problem and why it happens .a performance prediction service might use monitoring data as inputs for a prediction model , which would in turn be used by a scheduler to determine which resources to use . 
as more people use the grid , more instrumentation needs will be discovered , and more facility status needs to be monitored .many researchers are focused on monitoring computer facility in a relatively small scale .the proposed systems are autopilot , network weather services , netlogger , grid monitoring architecture ( gma ) and grid information service ( mds ) . due to the diversity of the computing resource and applications in grid computing, existing monitoring architectures can not monitor all of the computing resources belonging to the grid .when the size of a computing facility grows , the existing monitoring strategy will significantly increase the system overhead .the dynamic characteristics of the grid allows the computing resources to participate and withdraw from the resource pool constantly .only a few existing monitoring systems address this characteristic . in this work, we present a grid monitoring system which is adaptive to the grid environment .it includes : * local monitoring : the local monitoring system monitors the facility which consists of computing , storage and network resources .the monitored information will be provided to different types of application with different requirements .* grid monitoring : it uses mds to publish the selected monitoring information into the grid system .the proposed architecture can separate the facility monitoring from the grid environment . by using the mds, it provides a well - designed interface between the grid and the facility .it can provide monitoring information for different grid applications as long they use the grid information protocols .when new hardware is added to the local facility , the local monitoring infrastructure can easily add the new software to monitor the system .the change of hardware and monitoring tools can be hidden from the grid computing environment .the rest of this paper is organized as follows : section [ section - system ] provide the architecture of grid monitoring .section [ section - relate ] summarizes recent work on grid monitoring and grid information service .section [ section - conclusion ] summarizes our conclusions and the scope of future work .in this section , we specify the system requirements of grid monitoring , provide the grid monitoring infrastructure , and describe the design of each component in the system . due to the complexity and dynamics of the grid computing model , the monitoring toolkits built on top of this computing model are also complex . to build an efficient and effective monitoring model , the designers and developers need to keep the following requirements in mind .* the monitoring toolkits can make use of existing monitoring tools .the overhead for incorporating a monitor tool should be minimum .the grid monitoring system should make use of existing facility infrastructures with well designed api , not force its own .* scalability : the system for monitoring and fault management should be scalable .the number of grid nodes will increase every year in order to satisfy the growing computing requirement of henp .the monitoring system should be scalable for the expanding grid system .* flexibility : the system for monitoring should be flexible because the target to be monitored and the grid architecture are likely to change over time .* extensibility and modularity should be implemented , which allows users to include those components easily that they wish to use .all communication flows should not flow through a single central component . 
having a single, centralized repository for dynamic data causes two performance problems .the centralized repository for information represents a single - point - failure for the entire system .the centralized server can create a performance bottleneck .* non - intrusiveness : the grid monitoring system should incur as small a system overhead as possible .it should not disrupt the normal running of the monitored system .this is extremely important if a large number of target systems are monitored . *security : typically , an organization defines policies controlling who can access information about their resource .the monitoring system must comply with these policies .* ability of logging : some important data should be archived . * inter - operability : different monitoring systems could obtain and share each other s monitoring information to avoid the functionality overlapping .grid monitoring infrastructure has a four - tiered structure : sensors , archive system , information providers and grid information browser .figure [ figure - architecture ] shows how monitoring information travels through the four - tiered structure and reaches end users .* sensors probe the target systems and obtain related statistics . *the data importer of the archive system fetches the statistics from sensors and stores them into the database . *the grid information provider retrieves information from the database , processes it according to application requirements and returns the results to grid systems . *the remote web server issues grid commands to fetch monitoring information from the grid information service .a sensor can measure the characteristics of a target system .it generates a time stamped performance statistics .a simple sensor example would typically execute one of the unix utilities , such as top , ps , ping , iperf , ndd , or read directly from system files , such as /proc/ * to extract sensor - specific measurements .sensors are used to monitor cpu usage , memory usage and network traffic .some sensors can monitor and capture abnormal system status .we define the type of measurement generated by a sensor , a `` metric '' .as shown in figure [ figure - architecture ] , four types of computer systems are monitored : file service , high performance storage system ( hpss ) , network equipment and computing nodes .each sensor relies on a set of standard apis and protocols to publish the sensor data .the design of the api and protocol is beyond the scope of this paper .the archival system is used to hold historical data that can be used for predication and analysis .there are two components in archival system : data importer and telemetry database .an importer fetches the monitoring data from sensors via standard api , and saves the received data into the telemetry database .the database consists mostly of telemetry gathered by different sensors of different metrics .some databases might include derived parameters , statistics , and any other data elements required by the grid application .a telemetry database can be any relational database , such as oracle , mysql , postgresql , it can even be a set of flat files .a telemetry database also acts as a server to answer all types of telemetry queries .therefore it needs to support flexible , complicate query operations and powerful query language .sql is a perfect candidate .but the sql s for different relational databases are different from each other , and they do not support interoperability . 
to hide the details of the underlying databases and provide the user a uniform sql interface , we use odbc to wrap the telemetry databases .this implementation allows greater flexibility in the archiving system .the information provider provides detailed , dynamic statistics about instrumentation to grid monitoring service , mds .it is managed and controlled by mds .the information provider either invokes and stops a set of sensors to do active probing , or interacts with running sensors to obtain the current status of resource .an information provider can also query the database to get historical information .we implemented our customized information provider to fetch information from the telemetry database , process the information if necessary , and return the necessary information to the mds which invokes the information provider .gridview was developed at the university of texas at arlington ( uta ) to monitor the u.s .atlas grid .it was the first application software developed for the u.s .atlas grid testbed , released in march , 2001 , as a demonstration of the globus 1.1.3 toolkit .the original text version of gridview provides a snapshot of dynamic parameters like cpu load , up time , and idle time for all testbed gatekeepers through a web page , updated periodically .mds information from gris / giis servers is available through linked pages .in addition , a mysql server is used to store archived monitoring information .this historical information is also presented through gridview .a java applet version of gridview is also available .this applet version presents a hierarchical display of grid services through a graphical map - centric view .as prototype grids become larger and offer more services it becomes desirable to have a quick and easy method for determining which sites have less than a complete set of operational services , along with detailed error messages for services that are failing .gridview fulfills this need by performing tests of the globus toolkit based services at grid computing resources and presenting the results via an applet that provides different views of the state of the testbed .gridview is comprised of two different subsystems , a data collection daemon and a java applet for visualization .the data collection process periodically contacts remote computing systems to ascertain the operational status of the three services offered by the globus toolkit . during the testing at the remote system, a transcript is maintained of the tests performed , the status of the tests and any generated error messages that indicate faulty services .also saved during testing is the information provided by the globus monitoring and discovery service ( mds ) for both grid information index services ( giis ) and grid resource information services ( gris ) . at the completion of a testing cycle, the data collection daemon publishes this information to an http server that provides the applet to users .the visualization applet uses the information recorded by the data collection daemon and presents it in three differing views via an http server .figure [ figure - gridview - map ] presents a geographical representation where top - level computing sites are shown on a map of the continental united states .a status icon for each site shows the combined status of all computers tested at the site .this view provides a quick snapshot of the overall status of the u.s .atlas computing testbed. 
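a minimal sketch of the kind of periodic status - collection loop described above might look as follows . the site list , the tcp reachability test used as a stand - in for the globus service tests , and the json output file are all hypothetical ; the actual gridview daemon and its interfaces are not reproduced here .

```python
import json
import socket
from datetime import datetime, timezone

# hedged sketch of a status-collection pass in the spirit of the data
# collection daemon described above (illustrative stand-ins only).

SITES = {"site-a": ("gatekeeper.site-a.example.org", 2119),
         "site-b": ("gatekeeper.site-b.example.org", 2119)}

def probe(host, port, timeout=5.0):
    """return (status, transcript) for one very coarse reachability test."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "ok", "tcp connection succeeded"
    except OSError as exc:
        return "fail", str(exc)

def collect_once(sites=SITES, out_path="gridstatus.json"):
    snapshot = {"timestamp": datetime.now(timezone.utc).isoformat(), "sites": {}}
    for name, (host, port) in sites.items():
        status, transcript = probe(host, port)
        snapshot["sites"][name] = {"status": status, "transcript": transcript}
    with open(out_path, "w") as fh:          # published for a web front end
        json.dump(snapshot, fh, indent=2)
    return snapshot

if __name__ == "__main__":
    # a real daemon would repeat this on a timer and also push results to mds
    print(json.dumps(collect_once(), indent=2))
```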
a hierarchical view of the data shows users the status information for every test and test sequence along with the related transcripts associated with the tests . by following the color - coded status icons, users can quickly determine which tests failed at which sites . clicking on a particular test or test sequence will automatically bring the associated test transcript into view , as shown figure [ figure - gridview - hie ] .finally , the users can inspect the contents of the mds services offered in the testbed , as shown in figure [ figure - gridview - mds ] .this is a graphical and hierarchical view of the data retrieved during testing .the main goal of this screen is to allow users to view static mds entries as well as the exact values of dynamic entries that have violated the ldap consistency tests .figure [ figure - gridview ] shows the status of usatlas grid testbed as viewed through the gridview text interface .the proposed architecture can separate the facility monitoring from the grid environment .the changes in hardware and monitoring tools can be hidden from the grid computing environment . when new hardware is added in the local facility, the local monitoring infrastructure can easily add the new hardware to the monitoring system .this monitoring system simplifies the design of new sensors : new sensors can be plugged into the monitoring architecture with minimum effort .the sensors do not need to know who wants to subscribe to this metric and the number of the subscribers .the subscribers ( consumers ) can also be simplified because they just need to tell information provider what metric they are interested in .it is up to the information provider to deliver the specified metrics to the subscribers .the mds information provider provides a well - designed interface between the grid and computing facilities .it can provide monitoring information for different grid applications as long they use protocols provided by the mds . by distributing and replicating the telemetry databases and mds servers in different locations, we can avoid the problems caused by a single centralized server .* the mds provides the cache copy of the lastest value from the mysql database . * non - intrusiveness : sensors and local monitoring tools put less than 1 percent cpu load on the entire system. 
the information provider can prevent users from directly accessing to the database server , protect the sensitive information in the database effectively .* scalability : 1100 linux nodes and the network connectivity of eight usatlas testbeds are monitored by our system without adding too much load on the target systems .* flexibility : independent of sensors .many sensors can be easily plugged in as long as they have a well defined protocol and api .another advantage is that the archival system is independent to the underlying database .* ganglia * : ganglia is a scalable distributed monitoring system designed for high performance computing systems such as large localized clusters and even widely distributed grids .it relies on a multicast - based listen / announce protocol to monitor the state within clusters and uses point - to - point connections among representative cluster nodes to federate clusters into a grid and aggregate their states .ganglia has the advantages of low per - node overhead , high concurrency and robustness .ganglia has been deployed at bnl on the rhic and atlas clusters to successfully monitor over 1000 nodes , organized into 10 separate clusters based on experiment .one collection node has been setup to gather all of the data from these clusters and archive it locally to be displayed by ganglia s web front end .the data is made available to each experiment for their own monitoring and job scheduling needs and is also published through globus mds . * network weather service ( nws ) * : the goal of the network weather service is to provide accurate forecasts of dynamically changing performance characteristics from a distributed set of meta - computing resources .it can produce short - term performance forecast based on historical performance measurement .the network weather service attempts to use both existing performance monitoring utilities and its own active sensors to make use of resource , probe its own usage and measure the performance .it can measure the fraction of cpu time available for new processes , tcp connection time , end - to - end tcp network latency , and end - to - end tcp network bandwidth .it has nws sensors , cpu sensors , network sensors .it also has predictors that forecast the system performance .nws was widely adopted by many grid communities .therefore , we will incorporate nws in our grid monitoring toolkits .we can pull out the sensor modules and prediction modules and put them into our monitor architecture .we also need to design an interface that can bridge the communication between the sensors and the telemetry database .* simple network management protocol ( snmp ) * : since snmp was developed in 1988 , the simple network management protocol has become the standard for inter - network management . 
because it is a simple solution , requiring little code to implement , we can easily build snmp sensors for our monitoring architecture .snmp is extensible , allowing us to easily add network management functions to the monitoring system .snmp also separates the management architecture from the architecture of the hardware devices , which broadens the arena of our monitoring architecture .snmp is widely available today and has extensive support from academic , commercial vendors and research institutes .therefore , snmp based tools are widely used for network monitoring and management .snmp based tools and sensors should be evaluated for our grid monitoring system .* monitoring and discovery service ( mds ) * : mds provides the grid information in globus and ogsa .the mds stores the information collected by its information providers in a cache .these information providers are run periodically to update the information about the hosts , networks , memory usage , disk storage and software available on the system and batch queue status .mds is designed to monitor large number of entities and help users to discover and keep track of these resources .it supports a registration protocol which allows individual entities and their information providers to join and leave mds dynamically .the monitoring infrastructure is organized hierarchically , built on top of the ldap server ( light weight directory access protocol ) .mds provides ldap compatible client tools to access the mds server . due to its ldap - based implementation, mds is not designed to handle highly volatile monitoring data . * grid monitoring architecture ( gma ) * : the grid monitoring architecture consists of three components : directory service , producer and consumer .producers publish their existence , description and type of performance data to the directory service .consumers query the directory service and subscribe to the selected producer . the time - stamped performance data , called events , are directly sent from the producers to consumers based on subscription entries stored at the directory service .grid monitoring architecture supports both a streaming publish / subscribe model , and query / response model .compared with mds , the gma supports the highly dynamic monitoring data .the data stream continuously flows from producers to consumers until the subscription becomes invalid .grid computing benefits from a scalable monitoring system .gridmonitor is a promising candidate for this role .it can be used to monitor several thousand computers , geographically distributed among several computing centers .it naturally integrates large scale fabrication monitoring into the grid system .the initial prototype was deployed at brookhaven national laboratory .open challenges include performance , availability of crucial system status information , robustness and scalability .the authors wish to thank rhic / usatlas computing facility group for their valuable comments and discusses for this work . this work is supported by grants from the u.s .department of energy and the national science foundation .czajkowski , k. and fitzgerald , s. and foster , i. and kesselman , c. `` grid information services for distributed resource sharing '' , proceedings of 10th ieee international symposium on high performance distributed computing ( hdpc-10 ) , ieee press , san francisco , california , august , 2001 .wolski , r. , spring , n. , and hayes , j. 
`` the network weather service : a distributed resource performance forecasting service for metacomputing '', journal of future generation computing systems, volume 15, october, 1999. ribler, r.l., vetter, j.s., simitci, h., and reed, d.a., `` autopilot : adaptive control of distributed applications '', proceedings of the 7th ieee international symposium on high performance distributed computing ( hpdc-7 ), ieee press, 1998.
|
grid computing consists of the coordinated use of large sets of diverse, geographically distributed resources for high performance computation. effective monitoring of these computing resources is extremely important for efficient use of the grid. the large number of heterogeneous computing entities available in grids makes the task challenging. in this work, we describe a grid monitoring system, called gridmonitor, that captures and makes available the most important information from a large computing facility. the grid monitoring system consists of four tiers: local monitoring, archiving, publishing and harnessing. this architecture was applied to a large-scale linux farm and network infrastructure. it can be used by many higher-level grid services, including scheduling services and resource brokering. * keywords *: grid monitoring ( gridmonitor ), grid monitoring architecture ( gma ), monitoring and discovery service ( mds ).
|
the search for more adequate statistical models of the forward interest rate curve is essential for both risk control purposes and for a better pricing and hedging of interest rate derivative products .a large number of models have been proposed , but it is the heath - jarrow - morton ( hjm ) model that has become widely accepted as the most appropriate framework for addressing these issues .this model has been the basis for a large amount of research in relation to the pricing and hedging of derivative products .however comparatively little has addressed how well this model describes empirical properties of the forward rate curve ( frc ) . in a previous paper , a series of observations concerning the u.s .frc in the period 1991 - 96 were reported , which were in disagreement with predictions of the standard models .these observations motivated a new interpretation of frc dynamics . *first , the average shape of the frc is well fitted by a square - root law as a function of maturity , with a prefactor very close to the spot rate volatility .this strongly suggests that the forward rate curve is calculated by the money lenders using a _ value - at - risk ( var ) like procedure _ , and not , as assumed in standard models , through an averaging procedure .more precisely , since the forward rate is the agreed value at time of what will be the value of the spot rate at time , a var - pricing amounts to writing : where is the value of the spot rate at time and is the market implied probability of the future spot rate at time .the value of is a constant describing the risk - averseness of money lenders .the risk is that the spot rate at time , , turns out be larger than the agreed rate .this probability is equal to within the above var pricing procedure . if performs a simple unbiased random walk , then eq .( [ var ] ) indeed leads to , where is the spot rate volatility and is some function of . *second , the volatility of the forward rate is found to be ` humped ' around year .this can be interpreted within the above var pricing procedure as resulting from a time dependent _ anticipated trend_. within a var - like pricing , the frc is the envelope of the future anticipated evolution of the spot rate . on average ,this evolution is unbiased , and the average frc is a simple square - root .however , at any particular time , the market actually anticipates a future trend .it was argued in that this trend is determined by the past historical trend of the spot rate itself over a certain time horizon . 
in other words ,the market looks at the past and extrapolates the observed trend in the future .this means that the probability distribution of the spot , , is not centered around but includes a maturity dependent bias whose magnitude depends on the historical spot trend .however , the market also knows that its estimate of the trend will not persist on the long run .the magnitude of this bias effect is expected to peak for a certain maturity and this can explain the volatility hump .the aim of this paper is two fold .first we wish to empirically test the new interpretation of the frc dynamics outlined above .specifically , we report measurements over several different data - sets , of the shape of the average frc and the correlation between the instantaneous frc and the past spot trend over a certain time horizon .we have investigated the empirical behaviour of the frc of four different currencies ( usd , dem , gbp and aud ) , in the period 1987 - 1999 for the usd and 1994 - 1999 for the other currencies .full report of the results can be found in .here we only present detailed results for the usd 94 - 99 , but we also discuss relevant results obtained with the other data - sets .second , for usd 94 - 99 , we wish to compare these empirical results with the predictions of the one - factor gaussian hjm model fitted to the empirical volatility .our study is based on data sets of daily prices of futures contracts on 3 month forward interest rates . in the usd casethe contract and exchange was the eurodollar cme - imm contract . in practice ,the futures markets price three months forward rates for _ fixed _ expiration dates , separated by three month intervals . identifying three months futures rates to instantaneous forward rates ( the difference is not important here ), we have available time series on forward rates , where are fixed dates ( march , june , september and december of each year ) , which we have converted into fixed maturity ( multiple of three months ) forward rates by a simple linear interpolation between the two nearest points such that .in our notation we will identify as the forward rate with fixed maturity .this corresponds to the musiela parameterization .the shortest available maturity is months , and we identify to the spot rate . for the usd 94 - 99 data - set discussed here , we had 38 maturities with the maximum maturity being 9.5 years. we will define the ` partial ' spread , as the difference between the forward rate of maturity and the spot rate : .the theoretical time average of will be denoted as .we will refer to empirical averages ( over a finite data set ) as . for infinite datasetsthe two averages are the same .first we consider the average frc , which can be obtained from empirical data by averaging the partial spread : in figure 1 we show the average frc , along with the following best fit : as first noticed in , the average curve can be quite satisfactorily fitted by a simple square - root law .the corresponding value of ( in per ) is 0.049 which is very close to the daily spot volatility 0.047 ( which we shall denote by ) .we have found precisely the same qualitative behaviour for our 12 year usd data - set and also for the gbp and aud .the only exception was the steep dem average frc which can be explained by its low average spot level .we have therefore greatly strengthened with much more empirical data the proposal of ref . that the frc is on average fixed by a var - like procedure , specified by eq .( [ var ] ) above . 
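as a simple illustration of how such a square-root fit can be carried out, the sketch below fits the law a*sqrt(theta) to an averaged spread curve with scipy and compares the fitted prefactor with the daily spot volatility. the data are synthetic stand-ins generated for the example (the real averaged futures data are not reproduced here), so only the fitting procedure is meant to be indicative.
....
import numpy as np
from scipy.optimize import curve_fit

# illustrative sketch (synthetic stand-in data, not the original futures data):
# fit the average partial spread <f(t, theta) - r(t)> with a square-root law
# a * sqrt(theta) and compare the fitted prefactor with the daily spot volatility.

sigma_spot = 0.047                          # assumed daily spot volatility (in %)
theta = np.arange(63, 2400, 63.0)           # maturities in trading days (quarterly steps)
rng = np.random.default_rng(0)
avg_spread = 0.049 * np.sqrt(theta) + rng.normal(0.0, 0.05, theta.size)

def sqrt_law(theta, a):
    return a * np.sqrt(theta)

(a_fit,), _ = curve_fit(sqrt_law, theta, avg_spread)
print("fitted prefactor a = %.3f, daily spot volatility = %.3f" % (a_fit, sigma_spot))
....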
in figure 2we show the empirical volatility for the usd , defined as : where denotes the daily increment in the forward rates .we see a strong peak in the volatility at 1 year . for all the data - sets we have studied the volatility shows a steep initial _rise _ between the spot rate and 6 - 9 months forward .we also show the fit of the function : it is not _ a priori _ clear why the frc volatility should _ universally _ be strongly increasing for the first few maturities .this is actually in stark contrast to the vasicek model where the volatility is exponentially decaying with maturity .we will see that this universal feature is naturally explained with the anticipated trend proposal .we have studied the frc ` deformation ' determined empirically by : by construction the deformation process vanishes at and has zero mean . for the first few maturitieswe have observed that this quantity is strongly correlated with the past trend in the spot .therefore , in accordance with the anticipated trend proposal , we consider the following simple one - factor model : the function is the ` anticipated trend ' which by construction has zero mean .one of the main proposals of was that the anticipated trend reflects the past trend of the spot rate .in other words , the market extrapolates the observed past behaviour of the spot to the nearby future .here we consider a trend of the form : [ kernel ] b(t)= _ -^t e^-_b(t - t ) d r(t ) , which corresponds to an exponential cut - off in the past and is equivalent to an ornstein - uhlenbeck process for .we have also considered a simple flat window cut - off in .we choose here to calibrate to the volatility . neglecting the contribution of all drifts , we find from eq s .( 2.6 ) and ( 2.7 ) that the two are related simply by : .\ ] ] in accordance with the observed short - end behaviour of the frc volatility , we require to be _ positive _ and strongly increasing for the first few maturities . in our interpretation of the short - end of the frc , as described quantitatively by eq s ( 2.6 - 8 ) , this universal feature is a consequence of the markets extrapolation of the spot trend into the future . to determine the parameter in eq .( 2.7 ) , we propose to measure the following average error : to measure , we must first extract the empirical deformation using eq .we then determine using the empirical spot time series and eq .we actually use detrended spot increments , defined as . ] the error will have a minimum for some .this is the time - scale where the deformation and anticipated trend match up best , thereby fixing the values of .is also simply the average error between the empirical forward rates and the model forward rates as given by eq .( 2.6 ) . ] in figure 3 we plot the error , against the parameter , used in the simulation of .we consider 6 months which is the first maturity beyond the spot - rate .we see a clear minimum demonstrating a strong correlation between the deformation and anticipated trend . for a flat window modelthe minimum is even more pronounced .these results indicate the clear presence of a dynamical time - scale around trading days .we have observed that the time - scale obtained is independent of the maturity used . in figure 4we plot the empirical deformation against , where we have set trading days .indeed , we visually confirm a very close correlation . 
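the construction of the anticipated trend and the scan over the memory time can be illustrated with a short numerical sketch. since the kernel equation above is partly garbled in this version of the text, the sketch assumes that b(t) is an exponentially weighted sum of past spot increments with decay rate epsilon_b = 1/tau, and that the error is a mean-squared difference between the deformation and a rescaled b(t); the two series used below are synthetic.
....
import numpy as np

# illustrative sketch (synthetic data, not the original series): build the
# "anticipated trend" b(t) as an exponentially weighted sum of past spot
# increments and scan the memory time tau = 1/epsilon_b for the value that
# best matches a deformation series, in the spirit of the error minimisation
# described above.  the kernel normalisation and the construction of the
# deformation are assumptions made for the sake of the example.

rng = np.random.default_rng(1)
n = 2000
dr = rng.normal(0.0, 0.05, n)                 # detrended daily spot increments

def anticipated_trend(dr, tau):
    eps = 1.0 / tau
    b = np.zeros_like(dr)
    for t in range(1, len(dr)):
        b[t] = np.exp(-eps) * b[t - 1] + dr[t]
    return b

# synthetic "deformation" generated with a hidden memory time of 100 days
deformation = 0.3 * anticipated_trend(dr, 100.0) + rng.normal(0.0, 0.01, n)

taus = np.arange(20, 301, 20)
errors = []
for tau in taus:
    b = anticipated_trend(dr, tau)
    coeff = np.dot(deformation, b) / np.dot(b, b)   # best linear scale factor
    errors.append(np.mean((deformation - coeff * b) ** 2))

best = taus[int(np.argmin(errors))]
print("memory time minimising the error: %d trading days" % best)
....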
herewe have restricted ourselves to a one - factor model for ease of presentation .in we consider a two and three - factor version of our model where the definition of the deformation now includes the subtraction of a long spread component . in this casewe observe improved and very striking correlations that persist even up to 2 years forward of the spot ! for the other data - sets the strength of the correlation is not as strong ; however the same qualitative features are clearly present .it is important to understand whether the popular hjm framework can capture the empirical properties discussed here .the stationary one - factor gaussian hjm model is described by : where : is the market price of risk and is a brownian motion .the average frc is given by : which corresponds to an average over a finite time period .for comparisons with our empirical average frc , we can consider to be 5 years which was the approximate length of our dataset .there are 3 separate contributions to the average frc .first is the contribution of the initial frc . in this casethe initial frc was somewhat steeper than the average frc .yet its contribution to the average frc is still roughly a factor of 3 less than the observed average .we can expect the magnitude of this contribution to decrease with increasing .the second contribution comes from the factor in eq .the magnitude of this contribution grows linearly with . yeteven for years we find that the size of this term is very small , at least a factor of 10 smaller than the observed average frc for the early maturities .this term can therefore be neglected .more interesting is the contribution of the market price of risk term .we can show that this contribution is always _ negative _ for some initial region of the frc if for all .we found that this condition holds for all the data - sets we studied .this negative contribution has a maximum at . assuming the volatility is constant for large maturities , we find the market price of risk contribution takes the independent form : .\ ] ] in figure 1 we show a plot of eq .( 3.4 ) where we use the empirical volatility eq . ( 2.4 ) and choose ( per ) which gives a best fit to the average frc ; it is clear that this fit is very bad , in particular compared to the simple square - root fit described above . in the usd case the market price of risk contribution is only negative for the first maturity since the usd has a very strong volatility peak .however for the other data - sets it occurs for much longer maturities or may remain negative for the entire maturity spectrum .clearly the hjm model completely fails to account for our empirical results regarding the average frc .the next question to address is whether the hjm model can explain the striking correlation observed between the deformation and anticipated trend .we do this by calculating eq .( 2.9 ) , where all averages are calculated with respect to the hjm model eq . ( 3.1 ) calibrated to the empirical volatility . as before we have also calibrated to the empirical volatility via eq . ( 2.8 ) .an immediate problem arises because , as we have seen , the hjm average frc can not be calibrated to the empirical average frc . 
as a resultthe average deformation will no longer have the required zero mean .we will ignore this problem by defining the deformation as eq .( 2.5 ) but with the empirical average frc now replaced by the hjm average frc .in this case we find the finite contributions of eq .( 2.9 ) are negligible and tend to zero for large .the result is plotted in figure 3 where we again consider months .we see that the hjm model fails to adequately account for the strong anticipated trend effect observed here and more strikingly in .this is even after we have , in effect , assumed that the hjm model does describe the correct average frc . on the other hand ,our model is very close in spirit to the strong correlation limit of the ` two - factor ' spot rate model of hull - white , which was introduced in an _ ad hoc _ way to reproduce the volatility hump .although phrased differently , this model assumes in effect the existence of an anticipated trend following an ornstein - uhlenbeck process driven by the spot rate . it would be interesting to understand better the precise relation , if any , between this model and the hjm framework .our main conclusions are as follows .we confirm with much more data that the average frc indeed follows a simple square - root law , with a prefactor closely related to the spot volatility .this strengthens the idea of a var - like pricing of the frc proposed in .we also confirm the striking correlation between the instantaneous frc and the past spot trend over a certain time horizon .this provides a clear empirical confirmation of the anticipated trend mechanism first proposed in .this mechanism provides a natural explanation for the universal qualitative shape of the frc volatility at the short end of the frc .this point is particularly important since the short end of the curve is the most liquid part of the curve , corresponding to the largest volume of trading ( in particular on derivative markets ) .interest rate models have evolved towards including more and more factors to account for the dynamics of the frc .yet our study suggests that after the spot , it is the _ spot trend _ which is the most important model component .finally , we saw that the one - factor gaussian hjm model calibrated to the empirical volatility fails to adequately describe the qualitative features discussed here .we presented a simple one - factor version of a more complete model described in , which is consistent with the above interpretation .bouchaud , n. sagna , r. cont , n. elkaroui , m. potters , risk magazine , july 1998 ; j.p .bouchaud , n. sagna , r. cont , n. elkaroui , m. potters , _ phenomenology of the interest rate curve _ , to appear in applied mathematical finance ( 1999 ) .available at : http://www.science-finance.fr
|
we present compelling empirical evidence for a new interpretation of the forward rate curve ( frc ) term structure . we find that the average frc follows a square - root law , with a prefactor related to the spot volatility , suggesting a value - at - risk like pricing . we find a striking correlation between the instantaneous frc and the past spot trend over a certain time horizon . this confirms the idea of an anticipated trend mechanism proposed earlier and provides a natural explanation for the observed shape of the frc volatility . we find that the one - factor gaussian heath - jarrow - morton model calibrated to the empirical volatility function fails to adequately describe these features .
|
integrated photonics components such as waveguides , beam - splitters , slow - light devices , lenses and beam - shapers have received considerable attention in the last decade .these components can be manufactured using two - dimensional ( 2d ) photonic crystals ( phcs ) .more specifically , phcs based on air holes in a planar waveguide based dielectric core have been successfully manufactured in silicon - on - insulator material using microfabrication techniques . prior to fabrication, a design step often involves selecting a number of geometric parameters related to the phc and performing optimization of a cost function over a large solution space .for instance , a basic photonic lattice can be defined and holes allowed to be present or absent , thereby enabling a binary encoding of the solution space .optimization may then be carried out using a standard genetic algorithm .this conference paper is concerned with the generation of arbitrary coherent beam profiles using engineered 2d phcs . in other terms , we seek to transform a known input beam into another beam of controlled amplitude _ and _ phase profile .those two conditions constitute a _ multiobjective optimization problem _ ( mop ) , which must be solved by sampling the set of optimal solutions , commonly known as the pareto set .the first part of this contribution briefly describes the optimization procedure for a specific target , namely the generation of coherent hermite - gauss beam profiles using a 2d phc slab . in the latter part, we describe the mop solving procedure used and present some solutions offering an acceptable trade - off between the two aforementioned objectives ( amplitude and phase profile of the beam ) .laser beam shaping is defined as redistributing the irradiance and phase of a beam .this contribution is concerned with finding a phc configuration which , when illuminated with a gaussian beam , produces a scattered wavefunction that matches a desired profile in a given plane .the beam shaping problem can be formulated as the minimization of the following integral where is the location of the target plane , is the computed em field on the target plane , is the desired beam at the device output ( the is the beam propagation axis ) .it was recently proposed to solve this optimization problem using a combination of multiple scattering computations and a genetic algorithm .further details can be found in ref .however , the minimization of does not take into account the phase profile of the beam , only the amplitude , or irradiance distribution .this kind of optimization problem is called _ incoherent beam shaping_. as a result , optimized beams may exhibit large transverse phase fluctuations , which in turn results in an poor field depth .this is a major impediment to applications such as atom guiding and microscopy , where beams with large field depths ( low divergence ) are needed . in order to achieve_ coherent beam shaping _, we must define another objective function related to phase fluctuations of the transverse profile .our proposal is to minimize the following integral \big|^2 dy}{\int |\bar{u}(x_0,y)|^2 dy } \ ] ] where / \mathrm{re } [ u(x , y ) ] ] . 
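the formula for the phase objective is partly garbled above; the following sketch therefore assumes that the first objective measures the mismatch between the normalised amplitude profile and the target beam, and that the second objective is an intensity-weighted variance of the transverse phase. the sampling grid, the trial field and the target order-2 hermite-gauss profile are illustrative only.
....
import numpy as np

# illustrative sketch of how two beam-shaping objectives could be evaluated
# numerically on a sampled transverse field u(x0, y).  the exact functional
# forms used by the authors are not fully reproduced in the text, so f1 is
# taken here as a normalised amplitude mismatch with the target profile and
# f2 as an intensity-weighted variance of the transverse phase (assumptions).

y = np.linspace(-5.0, 5.0, 401)

def hermite_gauss2(y, w=1.0):
    """order-2 hermite-gauss amplitude profile (target beam)."""
    h2 = 4.0 * (y / w) ** 2 - 2.0
    return h2 * np.exp(-(y / w) ** 2)

def objectives(u, target, y):
    """return (f1, f2) for a complex transverse field u sampled on y."""
    norm_u = np.trapz(np.abs(u) ** 2, y)
    norm_t = np.trapz(np.abs(target) ** 2, y)
    mismatch = np.abs(u) / np.sqrt(norm_u) - np.abs(target) / np.sqrt(norm_t)
    f1 = np.trapz(mismatch ** 2, y)
    # weight the phase by the local intensity so dark regions do not dominate
    w = np.abs(u) ** 2 / norm_u
    phase = np.unwrap(np.angle(u))
    mean_phase = np.trapz(w * phase, y)
    f2 = np.trapz(w * (phase - mean_phase) ** 2, y)
    return f1, f2

target = hermite_gauss2(y)
u = (hermite_gauss2(y, w=1.1) + 0.05) * np.exp(1j * 0.2 * y ** 2)  # trial field
print("f1 = %.4f, f2 = %.4f" % objectives(u, target, y))
....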
for each of those values ,48 tabu search processes are executed in parallel , each totalling 5000 iterations .this set of search processes yields a number of final solutions , out of which we extract the optimal solutions .a solution is pareto optimal if there is no solution found that is characterized by a lower value of both and .the dotted lines indicate the best possible value for each of the two objectives.,scaledwidth=50.0% ] .* left column : * solution with ( ) = ( ) .* center column : * solution with ( ) = ( ) . * right column : * solution with ( ) = ( ) . ]pareto optimal results , as well as the various sub - optimal ( non - pareto ) solutions , are presented on fig .[ fig : pareto ] .the set of pareto solutions found offer the best possible compromise between an accurate amplitude profile and a uniform phase front .as illustration , three sample pareto solutions are presented on fig .[ fig : beams ] .the configuration on the left offers the most accurate reproduction of a hermite - gauss beam profile .however , the non - uniformity of the phase front results in a poor field depth . inversely, the configuration on the right exhibits a mostly uniform phase front , but is not close to the amplitude profile of a order 2 hermite - gauss beam .the central configuration exhibits the best trade - off between the two required attributes , keeping a hermite - gaussian profile over a greater distance .it should be noted that the `` optimality '' of the solutions strongly depends on a given application .the results on fig . [ fig : pareto ] rather give a snapshot of the potential of the pre - defined photonic lattice .we also stress the fact that multiobjective techniques can readily take other objectives into account . for instance, one could seek to minimize the backscattering losses associated to the finite photonic cystal .however , since at least all pareto solutions of a objective problem are necessary solutions of the same problem with a greater number of objectives , the computational cost increases accordingly .in this contribution , we have reported the possibility to control the coherent profile ( amplitude and phase ) of the output beam in a phc - based integrated beam shaping device .the shaping problem was formulated in terms of two objective functions , one for the amplitude and one for the phase of the transformed beam .a parallel tabu search algorithm was used to sample the pareto front ( optimal solutions ) of the resulting multiobjective problem .our results show that multiobjective optimization of integrated photonics devices is within reach of currently available algorithms .10 l. h. frandsen , p. i. borel , y. x. zhuang , a. harpth , m. thorhauge , m. kristensen , w. bogaerts , p. dumon , r. baets , v. wiaux , j. wouters , and s. beckx , `` ultralow - loss 3-db photonic crystal waveguide splitter , '' _ opt ._ , vol . 29 , pp .16231625 , july 2004 .p. pottier , s. mastroiacovo , and r. m. de la rue , `` power and polarization beam - splitters , mirrors , and integrated interferometers based on air - hole photonic crystals and lateral large index - contrast waveguides , '' _ opt .express _ ,14 , pp .56175633 , june 2006 .j. marques - hueso , l. sanchis , b. cluzel , f. de fornel , and j. p. martnez - pastor , `` properties of silicon integrated photonic lenses : bandwidth , chromatic aberration , and polarization dependence , '' _ opt ._ , vol .52 , no . 9 , pp . 091710 + , 2013
|
optical devices based on photonic crystals such as waveguides , lenses and beam - shapers , have received considerable theoretical and experimental attention in recent years . the production of these devices has been facilitated by the wide availability of silicon - on - insulator fabrication techniques . in this theoretical work , we show the possibility to design a coherent phc - based beam - shaper . the basic photonic geometry used is a 2d square lattice of air holes in a high - index dielectric core . we formulate the beam shaping problem in terms of objective functions related to the amplitude and phase profile of the generated beam . we then use a parallel tabu search algorithm to minimize the two objectives simultaneously . our results show that optimization of several attributes in integrated photonics design is well within reach of current algorithms .
|
it has been known for a decade that web - document popularity follows the zipf law . nevertheless , the exponent values reported by different authors vary significantly , from 0.60 to 1.03 ( see table [ datasets ] ) .we believe that the scattering of the reported values is due to the small sample size in some cases and to the details of the fitting procedure used to extract the exponent . in this paper , we propose that the rank distribution of the websites follows the zipf law and give arguments supporting our idea .we must note that website statistics are more extensive than web - document statistics , and the distribution parameters can be obtained with higher accuracy .we address the following questions : is the rank distribution of websites zipf - like ?if yes , what are the conditions under which the `` true '' exponent can be obtained ? does the exponent depend on the duration of the observation ? or on the geographical position of the observer ? and does the exponent vary with time , as the internet develops ?we report some answers to these questions .we have studied website statistics , which are indeed more stable than web - document statistics .we have analyzed log files accumulated on cache servers of russian academic networks ( freenet , rasnet , and rssi ) for about six years .these networks differ by their connectivity topology and bandwidth , both national and international .these cache servers have different geographical locations ( moscow , moscow region , and yaroslavl in russia ) .in addition , we analyzed some statistics collected during seven weeks in the fall of 2004 at a number of ircache servers in the united states ( see table [ table - setus ] ) .we found that the statistics studied become stable when the number of queries for the given statistics exceeds .it is therefore meaningful to fit only those data for which the number of queries exceeds this value .this simple criterion can be used to estimate the critical window for the rank interval where the distribution is stable and the power law can be observed .we found that the statistics are independent of the geographical location of the cache server ( observer ) collecting the data , at least for the analyzed data sets .we found that the distribution is independent of the different years of data collection and is therefore stable over internet history and development .nevertheless , we found that the zipf - like law approximation is suitable only in the middle region of several orders of rank magnitude .we propose a modification of the zipf - like law with two additional parameters and explain its possible meaning .we found that if we fit the equation of the modified law to the data , the website popularity distribution becomes quite stable .the value of the exponent is for all datasets studied in this paper .we thus may suggest that website popularity follows the zipf law .we verified that the same modification also works perfectly for the web - document ranked distribution .the paper is organized as follows . 
in section [ nature ] , we present a brief history of the power laws observed in nature and society .we describe the data collection and processing in section [ datasets ] .we discuss the results in section [ discussion ] and present our conclusions in section [ conclusions ] .more than 100 years ago , pareto observed that the income distribution in all countries can be described by the relation where the exponent and is some constant .about 70 years ago , george zipf discovered a striking regularity in english texts : the relative occurrence frequency of the most popular word is inversely proportional to the rank : a more general form of zipf law ( [ zipf - law ] ) with the exponent is often encountered in the literature and is known as a _ zipf - like law _ : a zipf - like law has been found in many areas of human activity and in nature . among examplesare the distribution of words in random texts , of nucleotide `` words '' in dna , of bit sequences in unix executable files , of book popularities in libraries , of countries areas and population sizes , of scientific publication citation indices , of forest - fire areas .many other examples can be found in recent reviews . meanwhile , there are many discussions whether a lognormal or power law is a better fit for some empirical distributions , for example , income distribution , population fluctuations , file size distribution , and some others ( for a short review , see ) . in many casesa lognormal distribution looks like a power law distribution for a several orders of magnitude .we leave this question open and analyse our data using a zipf - like law ..characteristics of published web datasets [ cols="<,^,^,^ , < , < " , ] to check this statement deeper , we also analyze recently available data collected during the period from 11/03/2004 to 12/29/2004 at nine cache - servers of the us national cache - mesh system for science and education built - up within the ircache project . table [ table - setus ] presents data from the following locations : * _ bo _ ncar at boulder , colorado * _ ny _ new york , new york * _ pa _ digital internet exchange in palo alto , california * _ pb _ psc at pittsburgh , pennsylvania * _ rtp _ research triangle park , north carolina * _ sd _ sdsc at san diego , california * _ sj _ mae west exchange point in san jose , california * _ sv _ nasa - ames / fix - west in silicon valley , california * _ uc _ ncsa at urbana - champaign , illinois .the second and third entries from the bottom demonstrate the stability of the fit for two subsets of the data collected at _uc_-location , for 12 days ( set name _ us-12d _ ) and for 1 day ( set _ us-1d _ ) .the last entry represents the fit to the sum of the preceding data sets .results of the fit by expression ( [ bestfit ] ) are close to unity and quite similar to those for russian servers presented in table [ alpha ] .we have presented modified zipf law ( [ bestfit ] ) , which fits the rank distribution of web sites in the full range of ranks rather well .we found that the value of the exponent in expression ( [ bestfit ] ) is stable for the analyzed datasets .it does not vary with ( 1 ) the year of data collection , ( 2 ) the period of data collection , or ( 3 ) the geographical location of the cache server where we collected data .we found that is very close to .we have reasons to suppose this value of is a universal property of web - traffic for the website rank . 
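the exact expression ( [ bestfit ] ) is not reproduced in this version of the text; as an illustration of the fitting step, the sketch below assumes the common two-parameter generalisation of the zipf law, f(r) = c / ( r + b )^alpha ( the zipf-mandelbrot form ), and fits it to synthetic rank-ordered query counts with scipy. the functional form, the synthetic data and the fit in log space are assumptions made only for the example.
....
import numpy as np
from scipy.optimize import curve_fit

# illustrative sketch: fit a two-parameter modification of the zipf law,
# here assumed to be the zipf-mandelbrot form  f(r) = c / (r + b)**alpha ,
# to a ranked popularity distribution built from simulated query counts.

rng = np.random.default_rng(2)
ranks = np.arange(1, 10001).astype(float)
true_counts = 1.0e6 / (ranks + 25.0)              # "true" curve with alpha = 1
counts = rng.poisson(true_counts).astype(float)   # simulated query counts per site
mask = counts > 0                                 # keep only well-sampled ranks

def log_modified_zipf(r, log_c, b, alpha):
    return log_c - alpha * np.log(r + b)

popt, _ = curve_fit(log_modified_zipf, ranks[mask], np.log(counts[mask]),
                    p0=(np.log(counts[0]), 1.0, 1.0))
print("fitted c = %.3g, b = %.2f, alpha = %.3f"
      % (np.exp(popt[0]), popt[1], popt[2]))
....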
we have also presented a clear explanation of the `` trickle - down effect '' based on the properties of our modified zipf law .we suggest that website popularity is universal property of internet and follows the zipf law . in a similar experiment ,fluctuations of the exponent value were checked as a function of the volume of statistics , where cache traces of user requests to different internet domains were analyzed .user requests were sent to internet through the cache triangle , namely , they went to the master server , which sent each odd request to the left cache and each even request to the right cache .clearly , the traces should be nearly equal in the limit of a large number of requests .indeed , it was estimated that exponents extracted separately from the `` left '' traces and `` right '' traces were within five per cent for a set volume larger than ten thousand requests , and that those for set volume less than a few hundred fluctuated strongly .thus , rare statistics may significantly affect the results .the results in this paper may be useful for building mirror sites and cdns as well as for improving software for dns request caching .we also conjecture that fitting with the modified zipf law is suitable for describing the rank distribution of web - document popularity .the authors thank the anonymous referees for the valuable remarks and comments that allowed us to improve this paper .special thanks to duane wessels for access to logs from ircache web - cache servers .steven glassman , _ a caching relay for the world wide web .conference on the world - wide web , cern , geneva ( switzerland ) , may 1994 .computer networks and isdn systems , 27(2 ) , 165 - 173 ( 1994 ) .lee breslau , pei cao , li fan , g. phillips , s. shenker , _ web caching and zipf - like distributions : evidence and possible implications _ieee infocom 99 : 18th annual joint conference of the ieee computer and communications societies , volume : 1 , p. 126 - 134 , 1999 .shchur , _ incipient spanning clusters in square and cubic percolation _ , in springer proceedings in physics , `` computer simulation studies in condensed matter physics xii '' , eds . d.p .landau , s.p .lewis , and h.b .schttler , ( springer - verlag , berlin , 2000 ) azer bestavros ._ www traffic reduction and load balancing through server - based caching_. ieee concurrency , * 5*(1 ) , 56 - 67 ( 1997 ) ; takashi hatashima , toshihiro motoda , shuichro yamamoto ._ an `` interest '' index for www servers and cyberranking_. ieice trans .inf . & syst ., * e83-d * , 729 - 734 ( 2000 ) ; venkata n. padmanabhan , lili qiu . _the content and access dynamics of a busy web site : findings and implications_. proc .acm sigcomm00 , stockholm , sweden , 2000 , pp .111 - 123 ; adeniyi oke , rick bunt ._ hierarchical workload characterization for a busy web server_. in : computer performance evaluation ( ed. t. field , p.g .harrison , j. bradley , u. harder ) .springer - verlag : berlin ea , 2002 , pp.309 - 328 .tools2002 : 12th int . conf . on modeling techniques and tools , london , uk , april 14 - 17 2002 .[ lecture notes in computer science , vol .2324 ] .masaki aida , noriyuki takahashi , tetsua abe . _ a proposal of dual zipfian model for describing http access trends and its application to address cache design_. ieice trans .commun . , * e81-b * ( 7 ) , 1475 - 1485 ( 1998 ) .p. barford , a. bestavros , a. bradley , m. crovella , _ changes in web client access patterns : characteristics and caching implications_. world wide web j. 
, spec .issue on characterization and performance evaluation , * 2 * , 15 - 28 ( 1999 ) .shudong jin and azer bestavros , _ sources and characteristics of web temporal locality_. proc .mascots2000 : the 8th ieee / acm international symposium on modeling , analysis and simulation of computer and telecommunication systems , san francisco , ca , 29 aug - 1 sept 2000 .p.28 - 35 .
|
we present an extensive analysis of long - term statistics of the queries to websites using logs collected on several web caches in russian academic networks and on us ircache caches . we check the sensitivity of the statistics to several parameters : ( 1 ) duration of data collection , ( 2 ) geographical location of the cache server collecting data , and ( 3 ) the year of data collection . we propose a two - parameter modification of the zipf law and interpret the parameters . we find that the rank distribution of websites is stable when approximated by the modified zipf law . we suggest that website popularity may be a universal property of the internet . _ keywords : _ internet , web traffic , rank distribution , zipf law . _ pacs : _ 89.20.hh world wide web , internet - 89.75.da systems obeying scaling laws
|
modern cosmology is based on general relativity ( gr ) and einstein equations .gr requires lengthy ( or cumbersome ) calculations which could be solved by computer algebra methods . during the years, a plethora of ca platforms was used for gr purposes , as reduce ( with excalc package ) , sheep or maxima ( see for example in , or ) . although some advantages as flexibility and speed were obvious , recently , platforms as maple or mathematica are preferred by those working in the field , due to their more advanced graphical facilities - for a comparison between maple and reduce see . in the last years , an increased interest in theoretical cosmology is visible because of the new facts revealed by the experimental astrophysics , mainly in the sense that the universe is actually in an accelerated expansion period - the so called `` cosmic acceleration '' ( see ) . in order to fit these new facts with the standard model of the universesome new mechanisms are proposed , based on dark - matter , dark - energy and/or cosmological constant hypothesis .new models are proposed in the literature practically on a daily basis demanding new specific tools and libraries from the computational science , including ca applications specially designed for theoretical cosmology .thus we concentrate here in symbolic manipulation of einstein equations with maple and grtensorii package ( see at * http://grtensor.org * ) .we packed our procedures in a specific library , containing all the necessary ingredients for theoretical cosmology - friedmann equations , a scalar field minimally coupled with gravity and other matter fields terms to be used specifically .the article is organized as follows : next section 2 introduces shortly grtensorii package and his main facilities .then section 3 presents how we implemented non - vacuum einstein equations in a specific form for cosmology ( based on friedmann - robertson - walker - frw metric ) with the stress - energy tensor components designed for interacting with gravity matter and one real scalar field separately added .the last section is dedicated to some new results we obtained with our maple libraries in the so called `` reverse - technology '' method for treating inflation and cosmic expansion triggered by a real scalar field .our library , called cosmo , can be provided by request to the authors .we mainly used maple 7 and maple9 versions but as far as we know the library can be used with other maple environments starting with maple v.grtensorii is a free package from * http://grtensor.org * for the calculation and manipulation of components of tensors and related objects , embedded in maple . rather than focus upon a specific type or method of calculation, the package has been designed to operate efficiently for a wide range of applications and allows the use of a number of different mathematical formalisms .algorithms are optimized for the individual formalisms and transformations between formalisms has been made simple and intuitive .additionally , the package allows for customization and expansion with the ability to define new objects , user - defined algorithms , and add - on libraries .the geometrical environment for which grtensorii is designed is a riemannian manifold with connection compatible with the riemannian metric .thus there are special commands and routines for introducing and calculating geometrical objects as the metric , christoffel symbols , curvature ( ricci tensor and scalar ) and the einstein tensor - as for a couple of examples . 
manipulating with indices andextracting tensor components are easy to do from some special commands and conventions .grtensorii has a powerful facility for defining new tensors , using their natural definitions . as for an example , for calculating the bianchi identities ( where is the einstein tensor defined with the ricci tensor and the ricci scalar , is the metric and we denoted with the semicolon the covariant derivative of the metric compatible connection ) we can use a short sequence of grtensorii commands for calculating the left side of eq .[ bianchi ] : .... > grtw ( ) ; > qload(rob_sons ) ; > grdef(`bia { ^i } : = g { ^i ^j ; j } ` ) ; > grcalc(bia(up ) ) ; grdisplay(bia(up ) ) ; .... actually above , the first two commands are for starting the grtensorii package and loading the frw metric ( previously constructed and stored in a special directory - grtensorii provides also an entire collection of predefined metrics , but the user can also define his owns using a * gmake ( ... ) * command ) .the last line contains two commands , for effectively calculating the new * bia(up ) * tensor and for displaying the results .if the metric in discussion is compatible with the connection the * bia ( ) * tensor must have all components vanishing .the central point of any calculation with grtensorii is * grcalc ( ) * command. often large terms result in individual tensor components which need to be simplified . for this * gralter ( ) * and * grmap ( ) * commands are provided equiped with several simplifying options , mainly coming from the simplifying commands of maple and some specific to grtensorii .actually the user is free to choose his own simplification strategy inside these commands .special libraries are also available for doing calculation in different frames or basis and in newman - penrose formalism .as we mentioned earlier , in modern cosmology we are using the friedmann - robertson - walker metric ( frw ) , having the line element in spherical coordinates \ ] ] as a generic metric for describing the dynamics of the universe . here is a constant with arbitrary value , positive ( for closed universes ) , negative ( for open universe ) and zero for flat universes .usually , this constant is taken , or respectively . 
is called scale factor , and is only function of time , due to the homogeneity and isotropy of space as in standard model of the universe is presumed .the dynamic equations are obtained introducing ( [ frw ] ) in the non - vacuum einstein equations , namely where is the cosmological constant , the stress - energy tensor , g the gravitational constant , the speed of light and .the matter content of the universe is given by the stress - energy tensor which we shall use as : where the stress - energy tensor of a scalar field minimally coupled with gravity and the stress - energy tensor of the matter ( other than the scalar field ) have the form of a perfect fluid , namely : above the scalar field pressure and density are here we used the 4-velocities obviously having .introducing all these in ( [ ee ] ) and defining the hubble function ( usually called hubble constant ) and the deceleration factor as where a dot means time derivative and is the initial ( actual ) scale factor , we should obtain the dynamical equations describing the behavior of the universe , the so called friedmann equations .the whole package will contain also the conservation laws equations and the klein - gordon equation for the scalar field , separately .we composed a sequence of grtensorii commands for this purpose .first , defining the 4-velocities , the scalar field functions and the einstein equations , we have .... > restart;grtw();qload(rob_sons ) ; >grdef(`u { i } : = -c*kdelta { i $ t } ` ) ; > grdef(`scal : = phi(t ) ` ) ; > grdef(`t1 { i j } : = scal { , i } * scal { , j } - g { i j } * ( g { ^a ^b } * scal { , a } * scal { , b } + v(t))/2 ` ) ; > grdef(`tt1 { i j } : = ( epsilonphi(t)+ pphi(t))*u { i } * u { j } + pphi(t)*g { i j } ` ) ; > pphi(t):=diff(phi(t),t)^2/2/c^2-v(t)/2 ; > epsilonphi(t):=diff(phi(t),t)^2/2/c^2+v(t)/2 ; > grdef(`test { i j } : = t1 { i j } - tt1 { i j } ` ) ; > grcalc(test(dn , dn ) ) ; grdisplay(test(dn , dn ) ) ; > grdef(`t2 { i j } : = ( epsilon(t ) + p(t))*u { i } * u { j } + p(t)*g { i j } ` ) ; >grdef(`t { i j } : = t1 { i j } + t2 { i j } ` ) ; > grdef(`cons { i } : = t { i ^j ; j } ` ) ; grcalc(cons(dn ) ) ; > ecukg:=grcomponent(box[scal ] , [ ] ) -dv(t)/2 ; >grdef(`ein { i j } : = g { i j } - 8*pi*g*t { i j } /c^4 ` ) ; > grcalc(ein(dn , dn ) ) ; gralter(ein(dn , dn),expand ) ; .... here we defined twice the stress - energy components for the scalar field , due to the possibility of a direct definition ( * t1 ( ) * ) and through the corresponding density and pressure ( * tt1 ( ) * ) . because we are working in a coordinate frame, these must have equal components and we can check it through * test(dn , dn ) * tensor as having vanishing components .finally the total stress - energy tensor and the einstein equations are defined , as it is obvious .separately we defined the conservation law - equation ( * cons ( ) * ) as the contracted covariant derivative of the stress - energy tensor and the klein - gordon equation for the scalar field - as the unique component of the dalembertian and adding a special function of the derivative of the potential in terms of the scalar field * dv(t)*. we shall treat this as an extra variable to be extracted solving the equations . 
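as an independent cross-check of the system of equations assembled above, the following sympy sketch verifies symbolically that the klein-gordon equation follows from the two friedmann equations for a minimally coupled scalar field. the sketch uses geometrical units and the more common normalisation rho = phi_dot^2 / 2 + v, p = phi_dot^2 / 2 - v ( the grtensorii worksheet above carries an extra factor 1/2 on the potential ), so it is a consistency check of the standard system rather than a literal transcription of the maple code.
....
import sympy as sp

# symbolic cross-check (units 8*pi*G = c = 1, standard conventions assumed):
# the klein-gordon equation follows from the two friedmann equations.

t, k = sp.symbols('t k')
a = sp.Function('a')(t)
phi = sp.Function('phi')(t)
V = sp.Function('V')(phi)

H = sp.diff(a, t) / a
rho = sp.diff(phi, t)**2 / 2 + V            # scalar-field energy density
p = sp.diff(phi, t)**2 / 2 - V              # scalar-field pressure

constraint = H**2 + k / a**2 - rho / 3                    # first friedmann equation = 0
acc = sp.Eq(sp.diff(a, t, 2), -(rho + 3 * p) / 6 * a)     # acceleration equation

# differentiate the constraint, cancel the cubic term using the constraint
# itself, then substitute the acceleration equation for a''(t)
expr = sp.expand(sp.diff(constraint, t) + 2 * H * constraint)
expr = expr.subs(acc.lhs, acc.rhs)
klein_gordon = sp.simplify(-3 * expr / sp.diff(phi, t))

print(klein_gordon)
# the simplified result is phi'' + 3*(a'/a)*phi' + dV/dphi , i.e. the
# klein-gordon equation, confirming the consistency of the system.
....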
next step is to extract , one by one the components of * ein(dn , dn ) * as the final form of ( [ ee ] ) through a sequence of * grcomponent * commands followed by certain simplifications and rearrangements of terms .as some of the equations are identical we shall restrict only to two of them , coupled with conservation and klein - gordon equations . as a result we denoted with * ecunr1 * and * ecunr2 * the independent einstein equations and with * ecunr3 * the conservation law equation ( * ecukg * remains as it is ) .we also provided a separate equation ( * ecnur22 * ) for one of the above terms written with the acceleration factor * q(t)*. then comes a series of substitution commands for casting the equations in terms of the hubble function , deceleration factor and geometrical factor defined as : .... > ecunr1:=expand(simplify(subs(k = k(t)*rr(t)^2,ecunr1 ) ) ) ; > ecunr2:=expand(simplify(subs(k = k(t)*rr(t)^2,ecunr2 ) ) ) ; > ecunr1:=subs(diff(rr(t),t)=h(t)*rr(t),ecunr1 ) ; > ecunr22:=subs(diff(rr(t),t , t)=-2*h(t)^2*rr(t)*q(t ) , ecunr2 ) ; > ecunr22:=subs(diff(rr(t),t)=h(t)*rr(t),ecunr22 ) ; >ecunr2:=subs(diff(rr(t),t)=h(t)*rr(t),ecunr2 ) ; >ecunr2:=expand(ecunr2 ) ; >ecunr2:=subs(diff(rr(t),t)=h(t)*rr(t),ecunr2 ) ; >ecunr3:=subs(diff(rr(t),t)=h(t)*rr(t),ecunr3 ) ; >ecukg:=subs(diff(rr(t),t)=h(t)*rr(t),ecukg ) ; .... .... > parse(cat("save " , substring(convert([anames ( ) , " cosmo.m"],string),2 .. -2)),statement ) ; .... having this library stored , every - time one need the above equations , it can load fast through a * read * command .it provides all the functions and variables directly without running all the stuff we presented here above .thus , the * cosmo.m * library provides all the necessary environment for doing calculation within the standard model of cosmology , with frw metric and a scalar field and other matter variables included . for these last onesthere are some functions left undefined ( * epsilon(t ) * and * p(t ) * ) where the user can define other matter fields than the scalar field to be included in the model - even a second scalar field and/or the cosmological constant as describing the dark - energy content of the universe .thus our library can be used in more applications than those we presented in the next section . in the same purpose , we left in the library some of the original equations unprocessed - having different names - as for example the components of the einstein tensor ( * ein(dn , dn ) * ) .thus the user can finally save his own library , expanding the class of the possible applications of our * cosmo * library .as an example , we shall next point out some results we obtained by using this library for the so called `` reverse - technology '' treatment of inflation triggered by the scalar field .in the standard treatment of cosmological models with scalar field , it is prescribed a certain potential function for the scalar field ( taking into account some physical reasons specific to the model processed ) and then the dynamical friedmann equations are solved ( if it is possible ) to obtain the time behavior of the scale factor of the universe . 
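for completeness, the standard ( direct ) treatment just described can be illustrated with a short numerical sketch: a potential is prescribed, here the simple quadratic v = m^2 phi^2 / 2 chosen only for the example, and the flat frw friedmann and klein-gordon equations are integrated to obtain the scale factor. the units ( 8 pi g = c = 1 ) and the initial data are illustrative assumptions.
....
import numpy as np
from scipy.integrate import solve_ivp

# illustrative numerical sketch of the "standard" treatment: prescribe V(phi)
# and integrate the flat-frw friedmann and klein-gordon equations for the
# scale factor r(t) and the scalar field phi(t).

m = 1.0

def V(phi):
    return 0.5 * m**2 * phi**2

def dV(phi):
    return m**2 * phi

def rhs(t, y):
    r, phi, dphi = y
    h = np.sqrt((0.5 * dphi**2 + V(phi)) / 3.0)   # friedmann constraint, k = 0
    return [h * r, dphi, -3.0 * h * dphi - dV(phi)]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 3.0, 0.0], dense_output=True, rtol=1e-8)
times = np.linspace(0.0, 20.0, 5)
for ti, (r, phi, dphi) in zip(times, sol.sol(times).T):
    print("t = %5.1f   r = %10.3e   phi = %+.3f" % (ti, r, phi))
....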
as recently some authors pointed out ,a somehow `` reverse '' method is also interesting , where the time behavior of the scale factor is `` a priori '' prescribed ( as a function of time which will model the supposed time behavior of the universe in inflation or in cosmic accelerated expansion ) then solving the friedmann equations we can extract the shape of the corresponding potential for the theory .this is the so called `` reverse technology '' and we shall use it here to illustrate the usage of our * cosmo.m * library . we shall concentrate ourselves to the case of no matter variables other than the scalar field . in this casewe solve first equations ( [ ecunr1 ] ) and ( [ ecunr2 ] ) for the potential and , not before denoting the last one with a special intermediate maple function called * d2phi(t ) * with * subs * command : .... > ecunr1:=subs(diff(phi(t),t)^2=d2phi(t),ecunr1 ) ; >ecunr2:=subs(diff(phi(t),t)^2=d2phi(t),ecunr2 ) ; >solve({ecunr1,ecunr2},{v(t),d2phi(t ) } ) ; .... thus we have : \ ] ] \ ] ] here and in the following pages we have , as usual geometrical units . herewe shall process one of the examples pointed out in ellis and madsen article , namely that one of de sitter exponential expansion , where thus ( [ pot ] ) and ( [ dotphi2 ] ) became after simple evaluations of the corresponding maple expressions .it is obvious that can be simply obtained by square root of the above expression and can also be integrated to give the potential as : the result is that , after evaluating the einstein equations we have automatically satisfied * ecunr1 * , * ecunr2 * and * ecunr3 * and the klein gordon equation has the form : the last one is used to express the * dv(t ) * by solving it , and it is a point to check the calculation if this expression is equal to that one obtained directly from the potential . but this checking can be done only if we express , after a sequence of simple * subs * and * solve * commands , the potential and his derivative in terms of the scalar field , more precisely in terms of .the result is these results are in perfect agreement with the well - known results from .we processed in the same way more examples , some of them completely new .our purpose was to produce maple programs for processing the `` reverse - technology ''- method for these type of potentials with matter added to the model , especially dust or radiative matter .although the steps for computing are the same , there are two points of the calculations where troubles can appear and the solution is not straightforward .the first one is the integration of the * dphi(t ) * obtained as the square root of * d2phi(t)*. sometime it is not trivial to do this , so in several cases we used approximation techniques , by evaluating the cosmological functions at the initial time .our main purpose was to produce good initial data for numerical solving the einstein equations ( with the cactus code , for example ) thus these approximations can be a good solution for short time after the initial time . the second trouble point is to evaluate the potential in terms of the scalar field , namely to extract the time variable from it . sometimes herewe have transcendental equations and again some approximation methods can solve the problem . because these results are not in the topic of this article we plan to report them in a future article .special thanks to one of the referees who revealed many week points of our article . 
this work was partially supported by the romanian space agency ( grant nr .258/2002 - 2004 ) and the albert einstein institute , potsdam , germany .
|
the article presents some results obtained using the maple platform for computer algebra ( ca ) and the grtensorii package for calculations in theoretical and numerical cosmology . _ msc : _ 68w30 , 83c05 , 85a40 _ keywords and phrases : _ computer algebra , general relativity , cosmology
|
in the past few years , social media services as well as the users who subscribe to them , have grown at a phenomenal rate .this immense growth has been witnessed all over the world with millions of people of different backgrounds using these services on a daily basis . this widespread generation and consumption of contenthas created an extremely complex and competitive online environment where different types of content compete with each other for the attention of users .it is very interesting to study how certain types of content such as a viral video , a news article , or an illustrative picture , manage to attract more attention than others , thus bubbling to the top in terms of popularity . through their visibility ,these popular topics contribute to the collective awareness reflecting what is considered important .it can also be powerful enough to affect the public agenda of the community .there have been prior studies on the characteristics of trends and trend - setters in western online social media ( , ) . in this paper , we examine in detail a significantly less - studied but equally fascinating online environment : chinese social media , in particular , sina weibo : china s biggest microblogging network . over the yearsthere have been news reports on various internet phenomena in china , from the surfacing of certain viral videos to the spreading of rumors ( ) to the so called `` human flesh search engines '' : a primarily chinese internet phenomenon of massive search using online media such as blogs and forums ( ) .these stories seem to suggest that many events happening in chinese online social networks are unique products of china s culture and social environment .due to the vast global connectivity provided by social media , netizens all over the world are now connected to each other like never before ; they can now share and exchange ideas with ease .it could be argued that the manner in which the sharing occurs should be similar across countries .however , china s unique cultural and social environment suggests that the way individuals share ideas might be different than that in western societies . for example , the age of internet users in china is a lot younger .so it is likely that they may respond to different types of content than internet users in western societies . the number of internet users in china is larger than that in the u.s , and the majority of users live in large urban cities .one would expect that the way these users share information can be even more chaotic .an important question to ask is to what extent would topics have to compete with each other in order to capture users attention in this dynamic environment .furthermore , as documented by , it is known that the information shared between individuals in chinese social media is monitored .hence another interesting question to ask is what types of content would netizens respond to and what kind of popular topics would emerge u nder such constant surveillance . given the above questions , we present an analysis on the evolution of trends in sina weibo .we monitored the evolution of the top trending keywords in sina weibo for 30 days .first , we analyzed the model of growth in these trends and examined the persistance of these topics over time . 
in this regard, we investigated if topics initially ranked higher tend to stay in the list of top 50 trending topics longer .subsequently , by analyzing the timestamps of tweets , we looked at the propagation and decaying process of the trends in sina weibo and compare it to earlier observations of twitter .our findings are as follows : * we discovered that the majority of trends in sina weibo are arising from frivolous content , such as jokes and funny images and photos unlike twitter where the trends are mainly news - driven .* we established that retweets play a greater role in sina weibo than in twitter , contributing more to the generation and persistence of trends . * upon examining the tweets in detail , we made an important discovery .we observed that many trending keywords in sina weibo are heavily manipulated and controlled by certain fraudulent accounts .the irregular activities by these accounts made certain tweets more visible to users in general .* we found significant evidence suggesting that a large percentage of the trends in sina weibo are due to artificial inflation by fraudulent accounts .the users we identified as fraudulent were 1.08% of the total users sampled , but they were responsible for 49% of the total retweets ( 32% of the total tweets ) . * we evaluated some methods to identify fraudulent accounts .after we removed the tweets associated with fraudulent accounts , the evolution of the tweets containing trending keywords follow the same persistent and decaying process as the one in twitter .the rest of the paper is organized as follows . in section [ background ]we provide background information on the development of internet in china and on the sina weibo social network .in section [ relate ] we survey some related work on trends and spam in social media . in section [ evol ], we perform a detailed analysis of trending topics in sina weibo . in section [ future ] , we provide a discussion of our findings .in this section , we provide some background information on the internet in china , the development of chinese social media services , and sina weibo , the most popular microblog service in china the development of the internet industry in china over the past decade has been impressive . according to a survey from the china internet network information center ( cnnic ) , by july 2008 , the number of internet users in china has reached 253 million , surpassing the u.s . as the world slargest internet market .furthermore , the number of internet users in china as of 2010 was reported to be 420 million . despite this, the fractional internet penetration rate in china is still low .the 2010 survey by cnnic on the internet development in china reports that the internet penetration rate in the rural areas of china is on average .in contrast , the internet penetration rate in the urban cities of china is on average . in metropolitan cities such as beijing and shanghai, the internet penetration rate has reached over , with beijing being and shanghai being . 
according to the survey by cnnic in 2010, china's cyberspace is dominated by urban students between the ages of 18 and 30 (see figure [ age ] and figure [ occupation ], taken from the survey). the government plays an important role in fostering the advance of the internet industry in china. tai points out four major stages of internet development in china, ``with each period reflecting a substantial change not only in technological progress and application, but also in the government's approach to and apparent perception of the internet.'' according to _the internet in china_, released by the information office of the state council of china: ``the chinese government attaches great importance to protecting the safe flow of internet information, actively guides people to manage websites in accordance with the law and use the internet in a wholesome and correct way.'' online social networks are a major part of the chinese internet culture. netizens in china organize themselves using forums, discussion groups, blogs, and social networking platforms to engage in activities such as exchanging viewpoints and sharing information.
according to _the internet in china_: ``vigorous online ideas exchange is a major characteristic of china's internet development, and the huge quantity of bbs posts and blog articles is far beyond that of any other country. china's websites attach great importance to providing netizens with opinion expression services, with over 80% of them providing electronic bulletin service. in china, there are over a million bbss and some 220 million bloggers. according to a sample survey, each day people post over three million messages via bbs, news commentary sites, blogs, etc., and over 66% of chinese netizens frequently place postings to discuss various topics, and to fully express their opinions and represent their interests. the new applications and services on the internet have provided a broader scope for people to express their opinions. the newly emerging online services, including blog, microblog, video sharing and social networking websites, are developing rapidly in china and provide greater convenience for chinese citizens to communicate online.
actively participating in online information communication and content creation, netizens have greatly enriched internet information and content.'' sina weibo was launched by the sina corporation, china's biggest web portal, in august 2009. it has been reported by the sina corporation that sina weibo now has 250 million registered accounts and generates 90 million posts per day. similar to twitter, a user profile in sina weibo displays the user's name, a brief description of the user, and the number of followers and followees the user has. there are three types of user accounts in sina weibo: regular user accounts, verified user accounts, and the expert (star) user account. a verified user account typically represents a well-known public figure or organization in china. twitter users can address tweets to other users and can mention others in their tweets. a common practice in twitter is ``retweeting'', or rebroadcasting someone else's messages to one's followers. the equivalent of a retweet in sina weibo is instead shown as two amalgamated entries: the original entry and the current user's actual entry, which is a commentary on the original entry. sina weibo has another functionality
absent from twitter: the comment. when a sina weibo user makes a comment, it is not rebroadcast to the user's followers; instead, it can only be accessed under the original message. in this section, we provide a survey of papers in two related areas: spam detection and the study of trends in social networks. in each area, we present work on both western social networks and chinese social networks. spam and bot detection in social networks is a relatively recent area of research, motivated by the vast popularity of social websites such as twitter and facebook. it draws on research from several areas of computer science such as computer security, machine learning, and network analysis. in the 2010 work by benevenuto et al., the authors examine spam detection in twitter by first collecting a large dataset of more than 54 million users, 1.9 billion links, and 1.8 billion tweets. after exploring content and behavior attributes, they developed an svm classifier and were able to detect spammers with 70% precision and non-spammers with 96% precision. as an insightful follow-up, the authors used statistics to evaluate the importance of the attributes they used in their model. the second paper with direct application to spam detection in twitter was by wang. wang motivated his research with the statistic that an estimated 3% of messages in twitter are spam. the dataset used in this study was relatively smaller, gathering information from 25,847 users, 500 thousand tweets, and 49 million follower/friend relationships. wang used decision trees, neural networks, svm, and naive bayesian models. finally, lee et al. described a different approach to detect spammers. they created honeypot user accounts in twitter and recorded the features of users who interact with these accounts. they then used these features to develop a classifier with high precision. in social bookmarking websites, markines et al. used just six features (tag spam, tag blur, document structure, number of ads, plagiarism, and valid links) to develop a classifier with 98% accuracy. on facebook, boshmaf et al. successfully launched a network of social bots. despite facebook's bot detection system, the authors were able to achieve an 80% infiltration rate over 8 weeks. in online ad exchanges, advertisers pay websites for each user that clicks through an ad to their website. the way fraud occurs in this domain is for bots to click through ads on a website owned by the botnet owners. the money at stake in this case has made the bots employed very sophisticated. the botnet owners use increasingly stealthy, distributed traffic to avoid detection. stone et al. examined various attacks and prevention techniques in cost-per-click ad exchanges. another study gave a sophisticated approach to detect low-rate bot traffic by developing a model that examines query logs to detect coordination across bots within a botnet. some studies have been done on spam and bot detection in chinese online social networks. one of them observed the spammers in sina weibo and found that the spammers can be classified into two categories: promoters and robot accounts. lin et al. presented an analysis of spamming behaviors in sina weibo. using methods such as proactive honeypots, keyword-based search, and buying spammer samples directly from online merchants, they were able to collect a large set of spammer samples.
through their analysis they found three representative spamming behaviors: aggressive advertising, repeated duplicate reposting, and aggressive following. they also built a spammer identification system; through tests with real data, it is demonstrated that the system can effectively detect the spamming behaviors and identify spammers in sina weibo. one relevant area of research is the study of the ``online water army'': full-time or part-time paid posters hired by pr companies to help in raising the popularity of a specific company or person by posting articles, replies, and comments in online social networks. according to cctv, these paid posters in china help their customers using one of the following three tactics: 1. promoting a specific product, company or person; 2. smearing or slandering competitors; 3. helping to delete negative posts or comments. such paid posters are active in bbs systems and online social networks. in the work by chen et al., the authors examined comments in chinese news websites such as sina.com and sohu.com and used reply, activity, and semantic features to develop an svm classifier via the libsvm python library with 95% accuracy at detecting paid posters. interesting information discussed in the paper includes the organizational structure of the pr firms which hire the paid posters and the choice of features: percentage of replies, average interval time of posts, active days, and number of reports commented on. for many years the structural properties of various western social networks have been well studied by sociologists and computer scientists. in social network analysis, _social influence_ refers to the concept of people modifying their behavior to bring it closer to the behavior of their friends. a social-affiliation network consists of nodes representing individuals, links representing friendships, and nodes representing _foci_: ``social, psychological, legal, or physical entities around which joint activities are organized (e.g., workplace, social groups)''. if two individuals are friends and one of them participates in a focus, over time the other can come to participate in the same focus through that friend's involvement; this is called a _membership closure_. agarwal et al. examined methods to identify influential bloggers in the blogosphere. they discovered that the most influential bloggers are not necessarily the most active. backstrom et al. studied the characteristics of _membership closure_ in livejournal. crandall et al. studied the adaptation of influences between editors of wikipedia articles. romero et al. measured retweets in twitter and found that passivity was a major factor when it comes to message forwarding. based on this result, they presented a measure of social influence that takes into account the passivity of the audience in social networks. there are various studies on trends in twitter. one of the most extensive investigations into trending topics in twitter was by asur et al. the authors examined the growth and persistence of trending topics in twitter and observed that popularity follows a log-normal distribution.
accordingly, most topics faded from popularity relatively quickly, while a few topics lasted for long periods of time. they estimated the average duration of topics to be around 20-40 minutes. when they examined the content of the trends, they observed that traditional notions of influence such as the frequency of posting and the number of followers were not the main drivers of popularity in trends. rather it was the resonating nature of the content that was important. an interesting finding was that news topics from traditional media sources such as cnn, new york times and espn were shown to be some of the most popular and long-lasting trending topics in twitter, suggesting that twitter amplifies some of the broader trends occurring in society. cha et al. explored user influences on twitter trends and discovered some interesting results. first, users with many followers were found not to be very effective in generating mentions or retweets. second, the most influential users tend to influence more than one topic. third, influences were found not to arise spontaneously, but instead to be the result of focused efforts, often concentrating on one topic. researchers have analyzed the structure of various chinese offline social networks. there have been only a few studies on social influences in chinese online social networks. jin studied the structure and interface of chinese online bulletin board systems (bbs) and the behavioral patterns of its users. xin conducted a survey of bbs's influence on university students in china. another study looked at the adaptation of books, movies, music, events and discussion groups on douban, the largest online media database and one of the largest online communities in china. in a similar area, there are some studies on the structural properties and the characteristics of information propagation in chinese online social networks. yang et al. noted that various information services (e.g., ebay, orkut, and yahoo!) encountered serious challenges when entering china. they presented an empirical study of social interactions among chinese netizens based on over 4 years of comprehensive data collected from mitbbs (www.mitbbs.com), the most frequently used online forum for chinese nationals who are studying or working abroad. lin et al. presented a comparison of the interaction patterns between two of the largest online social networks in china: renren and sina weibo. niu et al. gave an empirical analysis of renren, showing that it follows an exponentially truncated power-law in-degree distribution and has a short average node distance. king et al. studied the concept of _guanxi_, a unique dyadic social construct, as applied to the interaction between web sites in china. chang et al. studied a special case of the propagation of information in chinese online social networks: the sending and receiving of messages containing wishes and moral support. they provided analysis on the data from linkwish, a micro social network for wish sharing with users mainly from taiwan, hong kong, and macao. fan et al. looked at the propagation of emotion in sina weibo. they found that the correlation of anger among users is significantly higher than that of joy, which indicates that angry emotion could spread more quickly and broadly in the network.
also, there is a stronger sentiment correlation between a pair of users if they share more interactions. finally, users with a larger number of friends possess more significant sentiment influence on their neighborhoods. sina weibo offers a list of 50 keywords that appear most frequently in users' tweets. they are ranked according to the frequency of appearances in the last hour. this is similar to twitter, which also presents a constantly updated list of trending topics: keywords that are most frequently used in tweets over a period of time. we extracted these keywords over a period of 30 days (from june 18th, 2011 to july 18th, 2011) and retrieved all the corresponding tweets containing these keywords from sina weibo. we first monitored the hourly evolution of the top 50 keywords in the trending list for 30 days. we observed that the average time spent by each keyword in the hourly trending list is 6 hours, and that the distribution for the number of hours each topic remains on the top 50 trending list follows a power law (as shown in figure [ power ] a). the distribution suggests that only a few topics exhibit long-term popularity. another interesting observation is that many of the keywords tend to disappear from the top 50 trending list after a certain amount of time and then later reappear. we examined the distribution for the number of times keywords reappear in the top 50 trending list (figure [ power ] b). we observe that this distribution follows a power law as well. both the above observations are very similar to the earlier study of trending topics in twitter. however, one important difference with twitter is that the average trending time is significantly higher in sina weibo (in twitter it was 20-40 minutes). this suggests that weibo may not have as many topics competing for attention as twitter. following our observation that some keywords stay in the top 50 trending list longer than others, we wanted to investigate whether topics that are ranked higher initially tend to stay in the top 50 trending list longer. we separated the top 50 trending keywords into two ranked sets of 25 each: the top 25 and the bottom 25. figure [ stay ] illustrates the percentage of topics that placed in the bottom 25 in relation to the number of hours these topics stayed in the top 50 trending list. we can observe that topics that do not last are usually the ones that are in the bottom 25. on the other hand, the long-trending topics spend most of their time in the top 25, which suggests that items that become very popular are more likely to stay longer in the top 50. this intuitively means that items that attract phenomenal attention initially are not likely to dissipate quickly from people's interests. next, we investigate the process of persistence and decay for the trending topics in sina weibo. in particular, we want to measure the distribution for the time intervals between tweets containing the trending keywords. we continuously monitored the keywords in the top 50 trending list, and for each trending topic we retrieved all the tweets containing the keyword from the time the topic first appeared in the top 50 trending list until the time it disappeared. accordingly, we collected complete data for 811 topics over the course of 30 days (from june 20th, 2011 to july 20th, 2011). in total we collected 574,382 tweets from 463,231 users.
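before moving to the tweet-level analysis, note that the two quantities whose distributions were examined above, the number of hours a keyword spends on the top 50 list and the number of times it reappears, can be computed directly from the hourly snapshots of the trending list. the sketch below is a minimal illustration rather than the authors' code: it assumes the snapshots are available as a list of hourly keyword lists (a hypothetical input format) and counts, for each keyword, the trending hours and the re-entries after dropping off the list.

```python
from collections import Counter, defaultdict

def trending_stats(hourly_top50):
    # hourly_top50: list of hourly snapshots, each a list of the 50 keywords
    # trending in that hour (hypothetical input format)
    hours_on_list = defaultdict(int)   # total hours each keyword spends trending
    reappearances = defaultdict(int)   # times a keyword drops off and re-enters
    seen_before, previous_hour = set(), set()
    for snapshot in hourly_top50:
        current = set(snapshot)
        for kw in current:
            hours_on_list[kw] += 1
            if kw in seen_before and kw not in previous_hour:
                reappearances[kw] += 1
        seen_before |= current
        previous_hour = current
    return hours_on_list, reappearances

# plotting the frequency of each duration on a log-log scale gives the kind of
# heavy-tailed, power-law-like shape described for figure [ power ]
hours, reentries = trending_stats(hourly_top50)   # assumes the snapshots above
duration_counts = Counter(hours.values())
```

a simple least-squares fit of the log-log frequencies, or a dedicated power-law fitting routine, can then be used to estimate the exponent of the reported distributions.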
among the 574,382 tweets, 35% of the tweets (202,267 tweets) are original tweets, and 65% of the tweets (372,115 tweets) are retweets. 40.3% of the total users (187,130 users) retweeted at least once in our sample. we measured the number of tweets that each topic gets in 10-minute intervals, from the time the topic starts trending until the time it stops. from this we can sum up the tweet counts over time to obtain the cumulative number of tweets of a topic for any time frame; this is given as the sum, over the intervals elapsed so far, of the number of tweets on the topic in each time interval. we then calculate, for each topic, the ratio of the cumulative counts at two time frames. figure [ ratio_general ] shows the distribution of these ratios over all topics for two arbitrarily chosen pairs of time frames: (10, 2) and (8, 3) (chosen such that the first frame is relatively large and the second is small). these figures suggest that the ratios are distributed according to log-normal distributions. we tested and confirmed that the distributions indeed follow the log-normal distribution. this finding agrees with the result from a similar experiment on twitter trends. asur and others argued that the log-normal distribution occurs due to the multiplicative process involved in the growth of trends, which incorporates the decay of novelty as well as the rate of propagation. the intuitive explanation is that at each time step the number of new tweets (original tweets or retweets) on a topic is multiplied over the tweets that we already have. the number of past tweets, in turn, is a proxy for the number of users that are aware of the topic up to that point. these users discuss the topic on different forums, including twitter, essentially creating an effective network through which the topic spreads. as more users talk about a particular topic, many others are likely to learn about it, thus giving the multiplicative nature of the spreading. on the other hand, the monotonically decreasing decay process characterizes the decay in timeliness and novelty of the topic as it slowly becomes obsolete. however, while only 35% of the tweets in twitter are retweets, there is a much larger percentage of tweets that are retweets in sina weibo. from our sample we observed that a high 65% of the tweets are retweets. this implies that the topics are trending mainly because of some content that has been retweeted many times. thus, sina weibo users are more likely to learn about a particular topic through retweets. for every new trending keyword we retrieved the most retweeted tweets in the past hour and compiled a list of the most retweeted users. table [ trend_setter_retweet ] illustrates the top 20 most retweeted authors appearing in at least 10 trending topics each. the influential authors are ranked according to the ratio between the number of times the authors' tweets are retweeted and the number of trending topics these tweets appeared in. we observe that in some cases, an account with a lower user-retweet ratio can still be a spam account. for example, an account could retweet a number of posts from other spam accounts, thus minimizing the suspicion of being detected as a spam account itself.
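the user-retweet ratio used to rank accounts here comes from a table that is not reproduced in this version of the text, so its exact definition is not visible. one plausible reading, consistent with the behavior described below (accounts repeatedly retweeting the same posts), is the number of retweets an account makes divided by the number of distinct posts it retweets; the sketch below flags accounts by that ratio. both the definition and the threshold are assumptions for illustration, not the paper's.

```python
from collections import defaultdict

def flag_suspect_accounts(retweets, ratio_threshold=5.0):
    # retweets: iterable of (user_id, original_post_id) pairs, one per retweet
    # (hypothetical input format); ratio_threshold is an illustrative cut-off
    n_retweets = defaultdict(int)
    distinct_posts = defaultdict(set)
    for user, post in retweets:
        n_retweets[user] += 1
        distinct_posts[user].add(post)
    # a high ratio means the account retweets the same few posts over and over
    ratios = {u: n_retweets[u] / len(distinct_posts[u]) for u in n_retweets}
    flagged = {u for u, r in ratios.items() if r >= ratio_threshold}
    return flagged, ratios
```

in practice such a score would be combined with other signals (accounts deleted by the administrator, burstiness of the retweet times, gibberish replies) before calling an account fraudulent.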
from our sample, after automatically checking each account, we identified 4,985 accounts that were deleted by the sina weibo administrator. we called these 4,985 accounts ``suspected spam accounts''. there were 463,231 users in our sample, and 187,130 of them retweeted at least once. thus we identified 1.08% of the total users (2.66% of users that retweeted) as suspected spam accounts. next, in order to measure the effect of spam on the weibo network, we removed all retweets from our sample disseminated by suspected spam accounts, as well as posts published by them (and then later retweeted by others). we hypothesize that by removing these retweets, we can eliminate the influences caused by the suspected spam accounts. we observed that after these posts were removed, we were left with only 189,686 retweets in our sample (51% of the original total retweets). in other words, by removing retweets associated with suspected spam accounts, we *successfully removed 182,429 retweets, which is 49% of the total retweets and 32% of the total tweets (both retweets and original tweets) from our sample*. this result is very interesting because it shows that a large number of retweets in our sample are associated with suspected spam accounts. the spam accounts are therefore artificially inflating the popularity of topics, causing them to trend. to see the difference after the posts associated with suspected spam accounts were removed, we re-calculated the distribution of the ratios for arbitrarily chosen pairs of time frames. figure [ correct ] illustrates the distribution for time frames (10, 2). we observed that the distribution is now much smoother and seems to follow the log-normal distribution. we performed the log-normal test and verified that this is indeed the case. we found 6,824 users in our sample whose tweets were retweeted. however, the total number of users who retweeted at least one person's tweet was 187,130, so the distribution is very skewed. figure [ distribution_r ] illustrates the distribution for the number of times users were retweeted. this distribution follows a power law. we discovered that the number of users whose tweets were retweeted by the suspected spam accounts was 4,665, which is a surprising *68%* of the users who were retweeted in our sample. this shows that the suspected spam accounts affect a majority of the trend-setters in our sample, helping them raise the retweet number of their posts and thereby making their posts appear on the trending list. the overall effect of the spammers is very significant. we also observed that a high 98% of the total trending keywords can be found in posts retweeted by suspected spam accounts. thus it can also be argued that *many of the trends themselves are artificially generated*, which is a very important result. next, we investigate the activities of typical spam accounts in sina weibo. we have shown that accounts with high retweet ratios are likely to be spam accounts.
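to make the clean-up and re-fitting steps described above concrete, the sketch below shows one way to drop retweets associated with flagged accounts (both as retweeters and as authors of the retweeted posts) and to re-examine the ratio of cumulative tweet counts between two time frames. the data layout, the 10-minute binning, and the kolmogorov-smirnov check on the log-ratios are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy import stats

def frame_ratio(tweet_times, start, t1=10, t2=2, bin_sec=600):
    # cumulative tweet counts up to the end of 10-minute intervals t1 and t2
    ts = np.sort(np.asarray(tweet_times, dtype=float))
    c1 = np.searchsorted(ts, start + t1 * bin_sec, side='right')
    c2 = np.searchsorted(ts, start + t2 * bin_sec, side='right')
    return c1 / c2 if c2 > 0 else None

def spam_free_ratios(topics, flagged, t1=10, t2=2):
    # topics: dict topic -> {'start': t0, 'tweets': [(retweeter, author, time), ...]}
    # flagged: set of suspected spam accounts (hypothetical structures)
    ratios = []
    for rec in topics.values():
        kept = [t for u, a, t in rec['tweets']
                if u not in flagged and a not in flagged]
        r = frame_ratio(kept, rec['start'], t1, t2)
        if r is not None:
            ratios.append(r)
    return np.array(ratios)

# a log-normal shape for the ratios means the log-ratios should look normal
ratios = spam_free_ratios(topics, flagged)        # assumes the structures above
log_r = np.log(ratios)
print(stats.kstest(log_r, 'norm', args=(log_r.mean(), log_r.std(ddof=1))))
```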
although the majority of the accounts had already been deleted by the administrator, we manually inspected 100 currently existing accounts with high retweet ratios and found that 95 clearly participate in spamming activities. the other 5 were regular users supporting their favorite singers and celebrities by repeatedly retweeting their posts, which can also be construed as spam; however, we exclude those from our list of suspected spam accounts. figure [ example_2 ] illustrates two examples of the activities from suspected spam accounts. first, we observe that the suspected spam accounts we inspected tend to repeatedly retweet the same post with the goal of increasing the retweet number of said post. next, the interval times of these repeated retweets tend to be very close to each other, with long breaks between each set. finally, we observe that the replies left by spam accounts often do not make any sense (see the comments circled in figure [ example_2 ]). prior work on paid posters had similar findings, and explained that this was because the paid posters are mainly interested in finishing the job as quickly as possible; thus they tend to retweet multiple times in short bursts and leave gibberish as replies. we observe that the replies in figure [ example_2 ] a) and b) are not proper sentences. for the 4,665 users whose tweets were retweeted by at least one suspected spam account, we calculate the percentage of retweets from spam accounts and the percentage of suspected spam accounts involved. we selected only the accounts whose tweets were retweeted by suspected spam accounts at least 50% of the time. from our manual inspection we found mainly three types of accounts: 1. verified accounts from celebrities and reality show contestants: we hypothesize that they employ spam accounts to boost the popularity of their posts, making it seem like the posts were retweeted by many fans; 2. verified accounts from companies: we hypothesize that they employ spam accounts to boost the perceived popularity of their products; 3. unverified accounts whose posts consist of ads for products: we hypothesize that these accounts employ spam accounts to distribute the ads and to boost the perceived popularity of their products, hoping other users will notice and distribute them (see figure [ example_beidong ] for an example). we have examined the tweets relating to the trending topics in sina weibo. first we analyzed the growth and persistence of trends. when we looked at the distribution of tweets over time, we observed that there was a significant difference when contrasted with twitter. the effect of retweets in sina weibo was significantly higher than in twitter. we also found that many of the accounts that contribute to trends tend to operate as user-contributed online magazines, sharing amusing pictures, jokes, stories and anecdotes. such posts tend to receive a large number of responses from users and thus retweets. yang et al.
have shown similar results about mitbbs users forwarding amusing messages and `` virtual gifts '' to online friends .the effect of this is similar to that of sending `` a cyber greeting card '' .this phenomenon can also be observed from text messages sent from cell phones between individuals in china .this is interesting in the context of there being strong censorship in chinese social media .it can be hypothesized that under such circumstances , it is these kind of `` safe '' topics that can emerge .when we examined the retweets in more detail , we made an important discovery .we found that 49% of the retweets in sina weibo containing trending keywords were actually associated with fraudulent accounts .we observed that these accounts comprised of a small amount ( 1.08% of the total users ) of users but were responsible for a large percentage of the total retweets for the trending keywords .these fake accounts are responsible for artificially inflating certain posts , thus creating fake trends in sina weibo .we relate our finding to the questions we raised in the introduction .there is a strong competition among content in online social media to become popular and trend and this gives motivation to users to artificially inflate topics to gain a competitive edge .we hypothesize that certain accounts in sina weibo employ fake accounts to repeatedly repeat their tweets in order to propel them to the top trending list , thus gaining prominence as top trend setters ( and more visible to other users ) .we found evidence suggesting that the accounts that do so tend to be verified accounts with commercial purposes .it is clear that the owners of these user contributed online magazines see this as a business opportunity to gain audience for their content .they can start by generating and propagating popular content and subsequently begin inserting advertisements amongst the jokes in their their sina weibo accounts .the artificial inflation makes it an even more effective campaign .we have found that we can effectively detect suspected spam accounts using retweet ratios .this can lead to future work such as using machine learning to identify other spamming techniques . in the future, we would like to examine the behavior of these fake accounts that contribute to artificial inflation in sina weibo to learn how successful they are in influencing trends .b. wang , b. hou , y. yao , and l. yan , `` human flesh search model incorporating network expansion and gossip with feedback , '' in _ proceedings of the 2009 13th ieee / acm international symposium on distributed simulation and real time applications_.1em plus 0.5em minus 0.4emieee computer society , 2009 , pp .8288 . v. king , l. yu , and y. zhuang , `` guanxi in the chinese web , '' in _ proceedings of the 2009 ieee international conference on computational science and engineering _ , vol .4.1em plus 0.5em minus 0.4emieee computer society , 2009 , pp . 917 .f. benevenuto , g. magno , t. rodrigues , and v. almeida , `` detecting spammers on twitter , '' in _ collaboration , electronic messaging , anti - abuse and spam conference ( ceas ) _ , vol .6.1em plus 0.5em minus 0.4em , 2010 .k. lee , j. caverlee , and s. webb , `` uncovering social spammers : social honeypots+ machine learning , '' in _ proceeding of the 33rd international acm sigir conference on research and development in information retrieval_.1em plus 0.5em minus 0.4em , 2010 , pp . 435442 .b. markines , c. cattuto , and f. 
menczer , `` social spam detection , '' in _ proceedings of the 5th international workshop on adversarial information retrieval on the web_.1em plus 0.5em minus 0.4em , 2009 , pp .y. boshmaf , i. muslukhov , k. beznosov , and m. ripeanu , `` the socialbot network : when bots socialize for fame and money , '' in _ proceedings of the 27th annual computer security applications conference_.1em plus 0.5em minus 0.4em , 2011 , pp .93102 .b. stone - gross , r. stevens , a. zarras , r. kemmerer , c. kruegel , and g. vigna , `` understanding fraudulent activities in online ad exchanges , '' in _ proceedings of the 2011 acm sigcomm conference on internet measurement conference_.1em plus 0.5em minus 0.4em , 2011 , pp .279294 .f. yu , y. xie , and q. ke , `` sbotminer : large scale search bot detection , '' in _ proceedings of the third acm international conference on web search and data mining_.1em plus 0.5em minus 0.4em , 2010 , pp . 421430 .y. zhou , k. chen , l. song , x. yang , and j. he , `` feature analysis of spammers in social networks with active honeypots : a case study of chinese microblogging networks , '' in _ advances in social networks analysis and mining ( asonam ) , 2012 ieee / acm international conference on _ , 2012 , pp . 728729 .x. yong , z. yi , and c. kai , `` observation on spammers in sina weibo , '' in _ proceedings of the 2nd international conference on computer science and electronics engineering ( iccsee 2013)_.1em plus 0.5em minus 0.4ematlantis press , 2013 . c. lin , j. he , y. zhou , x. yang , k. chen , and l. song , `` analysis and identification of spamming behaviors in sina weibo microblog , '' in _ proceedings of the 7th workshop on social network mining and analysis _ , ser .snakdd 13 , 2013 , pp .5:15:9 .a. mislove , m. marcon , k. p. gummadi , p. druschel , and b. bhattacharjee , `` measurement and analysis of online social networks , '' in _ proceedings of the 7th sigcomm conference on internet measurement_.1em plus 0.5em minus 0.4emacm , 2007 , pp .2942 .r. kumar , j. novak , and a. tomkins , `` structure and evolution of online social networks , '' in _ proceedings of the 12th acm sigkdd international conference on knowledge discovery and data mining_.1em plus 0.5em minus 0.4emacm , 2006 , pp .611617 .l. backstrom , d. huttenlocher , j. kleinberg , and x. lan , `` group formation in large social networks : membership , growth , and evolution , '' in _ proceedings of the 12th international conference on knowledge discovery and data mining_.1em plus 0.5em minus 0.4emacm , 2006 , pp .d. crandall , d. cosley , d. huttenlocher , j. kleinberg , and s. suri , `` feedback effects between similarity and social influence in online communities , '' in _ proceedings of the 14th acm sigkdd international conference on knowledge discovery and data mining_.1em plus 0.5em minus 0.4em acm , 2008 , pp .160168 .m. mathioudakis and n. koudas , `` twittermonitor : trend detection over the twitter stream , '' in _ proceedings of the 2010 international conference on management of data _ , ser .sigmod 10 , 2010 , pp .11551158 .s. asur , b. a. huberman , g. szabo , and c. wang , `` trends in social media : persistence and decay , '' in _5th international aaai conference on weblogs and social media_.1em plus 0.5em minus 0.4em , 2011 , pp .434437 .m. cha , h. haddadi , f. benevenuto , and k. p. 
gummadi , `` measuring user influence in twitter : the million follower fallacy , '' in _ 4th international aaai conference on weblogs and social media ( icwsm)_.1em plus 0.5em minus 0.4em , 2010 .m. xin , `` chinese bulletin board system s influence upon university students and ways to cope with it ( in chinese ) , '' _ journal of nanjing university of technology ( social science edition ) _ , vol . 4 , pp . 100 104 , 2003. l. yu and v. king , `` the evolution of friendships in chinese online social networks , '' in _ proceedings of the 2010 ieee second international conference on social computing _ , ser .socialcom 10 , 2010 , pp . 8187 .m. chan , x. wu , y. hao , r. xi , and t. jin , `` microblogging , online expression , and political efficacy among young chinese citizens : the moderating role of information and entertainment needs in the use of weibo , '' _ cyberpsy . , behavior , and soc .networking _ ,15 , no . 7 , pp . 345349 , 2012 .h. chen and e. haley , `` the lived meanings of product placement in social network sites ( snss ) among urban chinese white - collar professional users : a story of happy network , '' _ journal of interactive advertising _ , vol .11 , no . 1 , pp . 1116 , 2010 .chu and s. m. choi , `` electronic word - of - mouth in social networking sites : a cross - cultural study of the united states and china , '' _ journal of global marketing _ , vol .24 , no . 3 , pp . 263281 , 2011 .j. yang , m. s. ackerman , and l. a. adamic , `` virtual gifts and guanxi : supporting social exchange in a chinese online community , '' in _ proceedings of the acm 2011 conference on computer supported cooperative work _ ,cscw 11 , 2011 , pp .j. lin , z. li , d. wang , k. salamatian , and g. xie , `` analysis and comparison of interaction patterns in online social network and social media , '' in _ computer communications and networks ( icccn ) , 2012 21st international conference on _ , 2012 , pp .
|
there has been a tremendous rise in the growth of online social networks all over the world in recent years . it has facilitated users to generate a large amount of real - time content at an incessant rate , all competing with each other to attract enough attention and become popular trends . while western online social networks such as twitter have been well studied , the popular chinese microblogging network sina weibo has had relatively lower exposure . in this paper , we analyze in detail the temporal aspect of trends and trend - setters in sina weibo , contrasting it with earlier observations in twitter . we find that there is a vast difference in the content shared in china when compared to a global social network such as twitter . in china , the trends are created almost entirely due to the retweets of media content such as jokes , images and videos , unlike twitter where it has been shown that the trends tend to have more to do with current global events and news stories . we take a detailed look at the formation , persistence and decay of trends and examine the key topics that trend in sina weibo . one of our key findings is that retweets are much more common in sina weibo and contribute a lot to creating trends . when we look closer , we observe that most trends in sina weibo are due to the continuous retweets of a small percentage of fraudulent accounts . these fake accounts are set up to artificially inflate certain posts , causing them to shoot up into sina weibo s trending list , which are in turn displayed as the most popular topics to users . social network ; web structure analysis ; temporal analysis ; china ; social computing
|
the resolution of stochastic differential equations is either difficult in general or we do not have explicit solutions .numerical schemes provide an easy way to integrate these equations but the implementation of a `` good '' integrator , in the sense of convergence order for example , is difficult ( see and ) . + despite the convergence order , a natural question on numerical schemes can be the following : _ does the considered numerical scheme preserve the dynamical properties of the initial system ? _ + the usual numerical schemes , even in the deterministic case ( see and references therein ) , such as euler , runge - kutta and euler - maruyama in the stochastic case , do not preserve dynamical properties without conditions on the time - step of the numerical integration . the question is : _ can we construct a stochastic numerical scheme respecting dynamical properties with a minimum of restrictions ? _ + major work on domain invariance has been done in and with an introduction of a class of stochastic numerical schemes , called _ balanced implicit methods _ ( bim ) . in ,domain invariance is illustrated through multiple examples from biology , finance and marketing sciences with these methods .the domain invariance by these methods is subject to conditions on the time - step even for the numerically computed expectation .+ the aim of this paper is to introduce the notion of nonstandard stochastic scheme based on the rules introduced in the deterministic case by r.e .mickens ( see , , ) .it provides a new way to create numerical schemes which preserve dynamical properties .the nonstandard rules are based on the way to construct exact numerical schemes which lead to multiple consequences to interpret the discrete derivative , integral and nonlinear terms .one of the main differences with the balanced implicit methods , is that we obtain domain invariance unconditionally for the numerically computed expectation .+ the plan of the paper is as follows : + in section 2 , we remind classical definitions about continuous and discrete stochastic differential equations systems . in section 3, we introduce our scheme and the assumptions made for a stochastic differential equation . in section 4 , we study the strong convergence of our scheme in the case where and are not necessarily globally lipschitz functions .it generalizes the result obtained in and ( * ? ? ?* corollary 3.11 and 3.12 ) with the euler - maruyama scheme in a sense of the nonstandard context . in section 5 ,we study the preservation of domain invariance such as positivity which occurs in a lot of problems in scientific fields such as astronomy ( see ) , economics , physics or more often in biology ( see , ) . in section 6 ,we illustrate numerically the scheme and its better behavior compared to the euler - maruyama scheme and balanced implicit methods through the geometric brownian motion .in this section , we remind classical results about continuous and discrete stochastic differential equations systems .we refer in particular to and for more details and proofs .we introduce the definition of stochastic nonstandard scheme based on the rules defined by r.e .mickens in his book , and . 
+ we consider the it stochastic differential equation ( sde ) with for each , is a -dimensional brownian motion , : and : .+ in many applications , the solution of the stochastic differential equation must belong to a given domain .such constraint is called _ domain invariance _ and is defined as follows : a domain is said to be _ invariant _ for the stochastic system if for every initial data the corresponding solution , , satisfies the following theorem ( see ( * ? ? ? * theorem 1 ) ) characterizes the class of functions and such that the stochastic system preserves the domain invariance of solutions .[ thminv ] let be a non - empty subset and such that . then , the set is invariant for the stochastic system if and only if for all and .let with .for , we denote by the discrete time defined by . a general one - step stochastic numerical scheme with step size and brownian motion which computes approximations of the solution of a general system such as ( [ dyt ] ) with can be written in the form where is a function depending on and , for all . in the case where is identically zero , we recover the usual definition of deterministic numerical scheme which approximate in this case is denoted only by to .consider a _ continuous - time extension _ of the discrete approximation .a general one - step stochastic numerical scheme is said to be strongly convergent if its continuous time approximation satisfies =0.\ ] ] as the continuous case , we define the domain invariance for a numerical scheme .a domain is said to be invariant for a general one - step stochastic numerical scheme if for any initial condition , satisfies in the deterministic case , the rules defined by r.e .mickens ( see , , ) can be states as follows : [ nsdef ] a general one - step deterministic numerical scheme is called nonstandard finite difference scheme if at least one of the following conditions is satisfied : * is approximate as or equivalently is approximate as where is a nonnegative function , * is a nonlocal approximation of .the terminology of _ nonlocal approximation _ comes from the fact that the approximation of a given function is not only given at point by but can eventually depend on more points , as for example in the previous definition , we restricted our attention to the easiest case , depending only on , and .of course , more points can be considered if necessary .following the first rule in definition [ nsdef ] , we introduce the nonstandard - euler - maruyama ( nsem ) scheme applied to which is given by where is a nonnegative function with and .a continuous - time extension of is given by where is defined by .+ up to our knowledge , the nsem scheme is a new numerical scheme .it can not be obtained as a specialization of a bim scheme ( see remark [ remark_diff ] ) or a modified version of the classical em scheme found in the literature .[ remark_diff ] the approach to construct balanced implicit methods ( see ( * ? ? ?* section 3 p.1014 ) ) is based on the introduction of an explicit and implicit part in the euler - maruyama scheme . for a one dimensional problem ,these two parts are governed by two constants such that we have where and are positive constants .for example , the bim numerical scheme associate to the geometric brownian motion ( see section 6 ) is given by ( see ( * ? ? 
?* lemma 4.2 p.8 ) ) and the nsem scheme associate to the geometric brownian motion is given by we can see that the construction is completely different .the choice of is given in section 6 with a comparison of these two schemes in section 6.2 .it shows a better behavior of the nsem scheme compared to the bim scheme on this example .as the nsem scheme can not be derived from a known numerical scheme for stochastic differential equations ( see remark [ remark_diff ] ) , we can not deduce the convergence of our scheme using classical proofs ( see remark [ remark_diff2 ] for more details ) . in order to study the strong convergence of the nsem scheme , we make assumptions on the stochastic differential equations : [ assum1 ]we assume that the initial condition is chosen independently of the brownian motion driving the equation .[ assum2 ] for each there exists a constant , depending only on , such that for all with .[ assum3 ] there exist such that for all .we follow the strategy used for the euler - maruyama scheme in the proof of ( * ? ? ?* theorem 2.2 ) to prove the strong convergence of the nsem scheme . from assumptions[ assum1 ] , [ assum2 ] and [ assum3 ] , we deduce two results : [ lemass1 ] under assumptions [ assum1 ] and [ assum3 ] , for any there exists a constant depending on and such that the exact solution and the approximate solution to the equation have the property \vee e\left[\sup_{0\leq t\leq t}|\overline{x}(t)|^{p}\right]\leq c_1(h , p)\ .\ ] ] the proof is given in appendix a.1 [ lemass2 ] under assumptions [ assum1 ] , [ assum2 ] and [ assum3 ] there exists depending on and such that \leq c_2(h , p , r)\left(\varphi(h)^2 + nh\right)\ ] ] the proof is given in appendix a.2 in the result obtained in lemma [ lemass1 ] is an assumption for .as noticed by the authors , it is a strong assumption but it can be recovered by assuming linear growth condition .[ convnsem ] under assumptions [ assum1 ] , [ assum2 ] and [ assum3 ] , the solution of with continuous - time extension is strongly convergent .the proof is given in appendix a.3 the proof of theorem [ convnsem ] encompasses the case where we have the global lipschitz condition , i.e. for all .bounding , such as they are independent of and , we obtain which depend only on as where is a constant .then , choosing and we obtain \leq \alpha\left(\varphi(h)^2 + h+\left(\frac{c(h)}{h}\right)^2\right)\ ] ] where is a constant .moreover , in the standard case , i.e. , we recover the classical result for the euler - maruyama scheme , =o(h)\ ] ] found for example in , , and .[ remark_diff2 ] our proof is not a particular case of the proof of strong convergence by local lipschitz assumption and moment bounds for the euler - maruyama or the balanced implicit methods .indeed , our scheme does not enter in the class considered in ( * ? ? ?* corollary 3.11 , 3.12 and 3.15 ) and ( * ? ? ? * theorem 1 ) by construction .however , different hypothesis to prove the strong convergence of the euler - maruyama scheme can be used to weaken the linear growth condition . for a complete review and details of these possibilities for the euler - maruyama scheme , see ( * ? ? ?* section 3.4 ) .in order to study the invariance describe in theorem [ thminv ] for the numerical approximation of obtained by the nsem scheme , we restrict our intention to the domain and the domain where is a non - empty subset and such that . 
as the condition of invariance for the numerical scheme of would be both conditions of invariance for and .the methods being the same to prove the invariance of and , we write only the proof for . +as there exists a multitude of choices for the function , we select one of the most used in the literature of nonstandard schemes , which is the following : where is a positive constant and is a function which takes values in ,1[ ] then , remains in for all .this concludes the proof .let and let . by hypothesis we have and for .we assume for that .taylor s expansion up to the first order with integral remainder on and gives {il } ( x_k - a)_l , \\g_{ij}(x_k ) & = g_{ij}(a ) + \left[\int_0 ^ 1 g'(a+s(x_k - a))ds\right]_{ijl } ( x_k - a)_l,\end{aligned}\ ] ] where .inserting the taylor s expansion of and in the nsem gives {il } \\& + ( \delta w_k ) _ j \left[\int_0 ^ 1 g'(a+s(x_k - a))ds\right]_{ijl}\bigg)\end{aligned}\ ] ] by hypothesis and . then, using the definition of we obtain as and ,1[ ] and by using a classical property of it s integral ( see ( * ? ? ?* theorem 3.2.1 ( iii ) p. 30 ) ) .then , we obtain from hypothesis , is a constant and the definition of , we obtain as and ,1[$ ] then , remains in for all .this concludes the proof .
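several of the displayed formulas above are not legible in this version of the text, so the sketch below illustrates the idea rather than reproducing the paper's exact scheme. it assumes the nsem update replaces the step size multiplying the drift by a denominator function \varphi(h)=h+o(h^{2}) (here the common mickens-type choice \varphi(h)=1-e^{-h}), and compares it with the classical euler-maruyama scheme on the geometric brownian motion used in section 6, for which the exact solution is available; the paper's actual choice of \varphi and its positivity mechanism may differ.

```python
import numpy as np

def em_gbm(x0, r, sigma, h, dW):
    # classical euler-maruyama for dX = r X dt + sigma X dW
    x = np.empty(len(dW) + 1); x[0] = x0
    for k in range(len(dW)):
        x[k + 1] = x[k] + r * x[k] * h + sigma * x[k] * dW[k]
    return x

def nsem_gbm(x0, r, sigma, h, dW, phi=lambda s: 1.0 - np.exp(-s)):
    # nonstandard euler-maruyama: the step size in the drift is replaced by a
    # denominator function phi(h) = h + o(h^2) (mickens-type rule, assumed form)
    x = np.empty(len(dW) + 1); x[0] = x0
    ph = phi(h)
    for k in range(len(dW)):
        x[k + 1] = x[k] + r * x[k] * ph + sigma * x[k] * dW[k]
    return x

rng = np.random.default_rng(0)
T, n = 1.0, 200
h = T / n
x0, r, sigma = 1.0, 1.5, 0.3
dW = rng.normal(0.0, np.sqrt(h), n)
t = np.linspace(0.0, T, n + 1)
W = np.concatenate(([0.0], np.cumsum(dW)))
exact = x0 * np.exp((r - 0.5 * sigma**2) * t + sigma * W)   # exact gbm path
print("em   error:", np.max(np.abs(em_gbm(x0, r, sigma, h, dW) - exact)))
print("nsem error:", np.max(np.abs(nsem_gbm(x0, r, sigma, h, dW) - exact)))
```

repeating the comparison over many brownian paths and several step sizes gives an empirical view of the strong error discussed in section 4.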
|
we construct a nonstandard finite difference numerical scheme to approximate stochastic differential equations ( sdes ) using the idea of weighed step introduced by r.e . mickens . we prove the strong convergence of our scheme under locally lipschitz conditions of a sde and linear growth condition . we prove the preservation of domain invariance by our scheme under a minimal condition depending on a discretization parameter and unconditionally for the expectation of the approximate solution . the results are illustrated through the geometric brownian motion . the new scheme shows a greater behavior compared to the euler - maruyama scheme and balanced implicit methods which are widely used in the literature and applications . syrte umr cnrs 8630 , observatoire de paris , france
|
in this paper we propose a new probability distribution to handle the problem of survival data. motivated by research developed in recent years, we introduce the kumaraswamy inverse weibull distribution, which includes several well known distributions used in survival analysis. recently, many authors have proposed new classes of distributions, obtained as modifications of distribution functions, that provide hazard rate functions with various shapes. we can cite, for example, the weibull exponential distribution, which also has a hazard rate function with a unimodal form. other works proposed a four-parameter distribution denoted the generalized modified weibull (gmw) distribution, introduced and studied the three-parameter generalized inverse weibull distribution, which possesses failure rates with unimodal, increasing and decreasing forms, and proposed a four-parameter distribution called the beta generalized half-normal distribution. underexplored in the literature and rarely used by statisticians, the kumaraswamy distribution has as its domain the real interval (0,1). this property makes the kumaraswamy distribution a natural candidate to combine with other distributions to produce a more general one. its cumulative distribution function (cdf) is given by 1-\left(1-x^{a}\right)^{b}, and its probability density function (pdf) is given by a\,b\,x^{a-1}\left(1-x^{a}\right)^{b-1}, ~0<x<1, where a>0 and b>0. this density can be unimodal, increasing, decreasing or constant. recently, it was proposed to use the kumaraswamy distribution to generalize other distributions: considering that a random variable has distribution function g, the authors suggest applying the kumaraswamy distribution to g. note that, since 0\le g(x)\le 1 for any distribution function, evaluating equation ([ eq_kum - cdf ]) at g(x) we obtain the cdf 1-\left[1-g(x)^{a}\right]^{b} of the generalized kumaraswamy-g distribution. based on these ideas, we consider the inverse weibull distribution as a candidate for g, using equation ([ eq_kg ]). then, performing some adjustments and mathematical manipulations, we obtain the kumaraswamy inverse weibull (kum-iw) distribution. the rest of the paper is organized as follows.
in section [ sec_kum - iw ], we develop the kum - iw distribution .section [ sec : properties ] is devoted to describe basic properties of the distribution .inference procedures via maximum likelihood and bayesian approaches are presented in section [ sec : estimation ] .section [ application ] is devoted to analyze a real data set and in section [ sec_final ] we present some conclusions of this work .let a random variable with inverse weibull distribution .then its cdf can be written as , , \quad t > 0,\ ] ] where , , and its pdf is given by , .\end{aligned}\ ] ] inserting the function of equation ( [ eq_iw ] ) in equation ( [ eq_kg ] ) it follows that , \right\}^b.\ ] ] we note that the parameters and in ( [ eq_kum - iw-1 ] ) are not identifiable and we adopt the reparameterization so that the kum - iw cdf is rewritten as , \right\}^b,\ ] ] where and are the shape and scale paremeters respectively .accordingly , the kum - iw pdf is now given by , \left\ { 1-\exp \left [ -\left ( \frac{c}{t } \right)^\beta \right ] \right\}^{b-1}.\ ] ] it can be easily seen that when we obtain the pdf of the inverse weibull ( iw ) distrbution given by , .\end{aligned}\ ] ] finally , the corresponding survival and hazard functions are respectively given by , \right\}^b \quad\mbox{and}\quad h_g(t ; b , c , \beta ) = \frac{\beta b c^{\beta } t^{-\left(\beta+1\right ) } \exp \left [ -\left ( \frac{c}{t } \right)^\beta \right ] } { 1-\exp \left [ -\left ( \frac{c}{t } \right)^\beta \right]}.\ ] ] while the quantile function , , of the kum - iw distribution is given by , the following well known and new distributions are special sub - classes of the kum - iw distribution .in this section we describe in detail some properties like expansions , moments , mean deviations , bonferroni and lorenz curves , order statistics and entropies which might be useful in any application of the distribution .we now give simple expansions for the cdf of the kumaraswamy inverse weibull distribution . if and is a non - integer real number , we have if is a positive integer , the series stops at . using expansion in equation ( [ 01 ] ) it follows that , .\ ] ] and .\ ] ] because the integrals involved in the computation of moments , bonferroni and lorenz curves , reliability , shannon and rnyi entropies and other inferential results do not have analytical solutions , these expansions are necessary . we hardly need to emphasize the need and importance of the moments in any statistical analyses , especially in applied work .some of the most important features and characteristics of a distribution can be studied using their moments ( e.g. tendency , dispersion , skewness and kurtosis ) .if the random variable follows the kum - iw distribution , its -th moment about zero is given by , \left [ 1-\exp \left[- \left ( \frac{c}{t } \right)^\beta \right ] \right]^{b-1}dt\\ & = & b c^k \sum^{\infty}_{r=0 } \frac{\left(-1\right)^r \gamma\left(b\right)}{\gamma\left(b - r\right ) r ! } \left(r+1\right)^{\frac{k}{\beta}-1 } \gamma\left(1-\frac{k}{\beta}\right).\end{aligned}\ ] ] the moment generating function of for is , hence , for , the cumulative generating function of is \right].\ ] ] we note that it was necessary to use the expansions previously presented for the results of this section .the amount of scattering in a population may be measured by all the absolute values of the deviations from the mean or the median . 
if x is a random variable with kum - iw distribution with mean ] .the -th moment of the order statistic is ^{r-1}\left[1-f(t)\right]^{n - r}f(t ) dt,\end{aligned}\ ] ] for , and .hence , and we obtain an expression for the moment given by , .2 cm the shannon entropy of a random variable is defined as a measure of the quantity of information .a certain message has more quantity of information the greater degree of uncertainty and is defined mathematically by \} ] and $ ] ; still , represents the censored data and represents the failure data .the maximum likelihood estimate ( mle ) of is obtained by solving the nonlinear likelihood equations , and .these equations can not be solved analytically and statistical software can be used to solve the equations numerically . for interval estimation of , and , and tests of hypotheses on these parameters ,we must obtain the observed information matrix which is given by , under conditions met for parameters obeying the parametric space and not considering the limits of the same , the asymptotic distribution of where is the expected information matrix . this asymptotic behavior is valid if is replaced by , the observed information matrix evaluated at .the asymptotic multivariate normal distribution can be used to construct approximate confidence intervals and confidence regions for the individual parameters and for the hazard and survival functions .the asymptotic normality is also useful for testing goodness of fit of the three parameters the kum - iw distribution and for comparing this distribution with some of its special submodels using one of the two well - known asymptotically equivalent test statistics - namely , the likelihood ratio ( lr ) statistic and the wald and rao score statistics .following the bayesian paradigm , we need to complete the model specification by specifying a prior distribution for the parameters . by bayestheorem , the posterior distribution is then proportional to the product of the likelihood function by the prior density .subjetivism is the predominant philosophical foundation in bayesian inference , although in practice noninformative prior densities ( built on some formal rule ) are frequently used ( ) .since the parameters in the kum - iw distribution are all positive quantities and due to the flexibility generated by the two - parameter gamma distribution this is adopted as prior distribution .so , , and .assuming independence among the prior densities , the posterior density is expressed by , \nonumber\\ & & \left\{\prod^{n}_{i=1 } t^{-\beta-1}_{i } \left\{1- \exp\left[-\left(\frac{c}{t_{i}}\right)^{\beta}\right]\right\}^{b-1 } \right\}. 
\end{aligned}\ ] ] This joint density has no known analytical form, but we can obtain an approximate solution based on the complete conditional distributions of $b$, $c$ and $\beta$. These are given by the following expressions,
\left\{\prod^{n}_{i=1 } t^{-\beta-1}_{i } \left\{1- \exp\left[-\left(\frac{c}{t_{i}}\right)^{\beta}\right]\right\}^{b-1 } \right\},\ ] ] \left\{\prod^{n}_{i=1 } t^{-\beta-1}_{i } \left\{1- \exp\left[-\left(\frac{c}{t_{i}}\right)^{\beta}\right]\right\}^{b-1 } \right\},\ ] ] \left\{\prod^{n}_{i=1 } t^{-\beta-1}_{i } \left\{1- \exp\left[-\left(\frac{c}{t_{i}}\right)^{\beta}\right]\right\}^{b-1 } \right\}.\ ] ]

In this section, we present estimation results for the parameters of the Kum-IW distribution under a Bayesian approach. The commercial production of beef cattle in Brazil, which usually comes from animals of the Nelore breed, seeks to optimize the production process by shortening the time the animals take to reach a specified weight in the period from birth until weaning. For a data set of 69 bulls of the Nelore breed, the time (in days) until each animal reached a weight of 160 kg in the period from birth to weaning was recorded. We compared the Kaplan-Meier and Bayesian survival functions through two graphical methods. Using the expressions above, a routine was written in the software WinBUGS to estimate the vector of parameters of the Kum-IW distribution for the Nelore cattle data. [tete] One of the available methods to check whether the model is well adjusted to the data consists in comparing the survival function of the proposed parametric model with the Kaplan-Meier estimator. Another method consists in plotting the survival function of the parametric model against the Kaplan-Meier estimate of the survival function; if this curve lies close to the straight line, the fit is good (figure [kmt]).

We have worked with a three-parameter lifetime distribution, called the Kumaraswamy inverse Weibull (Kum-IW) distribution, which extends the inverse Weibull distribution widely used in the lifetime literature. The model is much more flexible than the inverse Weibull. The Kum-IW distribution can have increasing, decreasing and unimodal hazard rates. We provide a mathematical overview of this distribution, including the densities of the order statistics, Rényi entropy, Shannon entropy, Bonferroni and Lorenz curves and mean deviations. We also derive an explicit algebraic formula for the $k$-th moment, expressions for the order statistics, and the maximum likelihood estimation for censored data. The performance of the model was analyzed using a real data set, where the Kum-IW distribution performed very well; the estimation was carried out by the Bayesian method.
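As a complement to the estimation section, the sketch below shows one simple way the posterior described above could be explored without WinBUGS: a plain random-walk Metropolis sampler on the log-scale of $(b,c,\beta)$ with independent gamma priors and a right-censored likelihood. It is only a minimal stand-in for the Gibbs/WinBUGS scheme used in the analysis; the vague Gamma(0.01, 0.01) hyperparameters, the starting values, the step size and all function names are our own assumptions.

```python
import numpy as np

def kum_iw_log_post(theta, t, delta, a0=0.01, r0=0.01):
    """Log posterior of theta = (log b, log c, log beta) for right-censored data.

    t      : event/censoring times; delta = 1 for observed failures, 0 for censored.
    a0, r0 : shape and rate of the independent Gamma priors (vague values assumed).
    The Jacobian of the log transform is included so that a random walk on theta
    targets the posterior of the original positive parameters.
    """
    b, c, beta = np.exp(theta)
    z = (c / t) ** beta
    log1m_e = np.log1p(-np.exp(-z))                       # log(1 - exp(-z))
    log_f = (np.log(b * beta) + beta * np.log(c) - (beta + 1) * np.log(t)
             - z + (b - 1) * log1m_e)                     # log density
    log_S = b * log1m_e                                   # log survival
    loglik = np.sum(delta * log_f + (1 - delta) * log_S)
    logprior = np.sum((a0 - 1) * theta - r0 * np.exp(theta))
    return loglik + logprior + np.sum(theta)              # last term: Jacobian

def metropolis(t, delta, n_iter=20_000, step=0.05, seed=1):
    """Plain random-walk Metropolis sampler for (b, c, beta)."""
    rng = np.random.default_rng(seed)
    theta = np.log(np.array([1.0, np.median(t), 1.0]))    # crude starting values
    lp = kum_iw_log_post(theta, t, delta)
    chain = np.empty((n_iter, 3))
    for i in range(n_iter):
        prop = theta + step * rng.normal(size=3)
        lp_prop = kum_iw_log_post(prop, t, delta)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = np.exp(theta)
    return chain                                          # columns: b, c, beta
```

Posterior summaries (means, credible intervals) would then be computed from the chain after discarding a burn-in period, in the usual way.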
|
The Kumaraswamy inverse Weibull distribution has the ability to model failure rates with unimodal shapes, which are quite common in reliability and biological studies. The three-parameter Kumaraswamy inverse Weibull distribution with decreasing and unimodal failure rate is introduced. We provide a comprehensive treatment of the mathematical properties of the Kumaraswamy inverse Weibull distribution and derive expressions for its moment generating function and its generalized moments. Some properties of the model are discussed, together with graphs of the density and hazard functions. We also discuss a Bayesian approach for this distribution and present an application to a real data set. _Keywords_: Kumaraswamy distribution, Weibull distribution, survival, Bayesian analysis.
|
the accurate delivery of various cargoes is of great importance for maintaining the correct function of cells and organisms .particles , such as vesicles , proteins , organelles , have to be transported to their specific destinations . in order to enable this cargo transfer ,cells are equipped with a complex filament network and specialized motor proteins .the cytoskeleton serves as tracks for molecular motors .they convert the energy provided by atp ( adenosine triphosphate ) hydrolysis into active motion along the cytoskeletal filaments , while they simultaneously bind to cargo .in addition to intracellular transport , the dynamic cytoskeleton and its associated motors also stabilize the cell shape , adjust it to different environmental circumstances , and drive cell motility or division .+ the two main constituents of the cytoskeleton involved in intracellular transport are the polarized microtubules and actin filaments . in cells with a centrosome ,the rigid microtubules grow radially from the central mtoc ( microtubule organizing center ) towards the cell periphery . in conjunction with the associated motor proteins kinesin and dynein, microtubules manage fast long - range transport between the cell center and periphery .in contrast to microtubules , which spread through the whole cell , actin filaments are mostly accumulated in a random fashion underneath the plasma membrane and construct the so called actin cortex .myosin motors operate on actin filaments and are therefore specialized for lateral transport in the cell periphery . consequently, the cytoskeletal structure is very inhomogeneous and characterized by a thin actin cortical layer .+ the ` saltatory ' transport by molecular motors is a cooperative mechanism .several motors of diverse species are simultaneously attached to one cargo .this enables a frequent exchange between actin and microtubule based transport , which is necessary for specific search problems .a prominent example of collaborative transport on actin and microtubule networks is the motion of pigment granules in fish and frog melanophores .the activity level of the particular motor species , and thus the share in cooperation , is regulated by cell signaling . in fish and frog , pigment granules are accumulated near the nucleus by extracellular stimuli transduced via pka ( protein kinase a ) . the motor activity is thereby controlled via the level of camp ( cyclic adenosine monophosphate ) . while low concentrations promote the action of dyneins , intermediate values stimulate myosin motors and high amounts of camp active kinesins . moreover , the infection of cells by adenoviruses triggers signaling through pka and map ( mitogen activated protein kinase ) , which enhances transport to the nucleus ; and the net transport of lipid droplets in drosophila is directed to the cell center ( periphery ) by absence ( presence ) of the transacting factor halo . +another aspect of intracellular transport is its intermittent nature .molecular motors perform two phases of motility .periods of directed active motion along cytoskeletal filaments interfere with effectively stationary states .intersection nodes of the cytoskeleton cause the molecular motors to pause until they either manage to squeeze through the constriction and pass it on the same filament or switch to the crossing track and thus change the direction of the transported cargo .motor proteins also detach of the filaments out of chemical reasons . 
in the cytoplasm the cargoes experiencesubdiffusive dynamics due to crowding effects .the displacement is limited to the vicinity of the detachment site and negligible compared to the one of the active motion phase .hence , detachment and reattachment processes effectively contribute to waiting times .however , cargo particles preferentially change their direction of motion at cytoskeletal intersections , which constitute motion barriers .the mean distance between two intersections , the mesh size of a network , is typically smaller than the processive run length of a single molecular motor .+ the spatial organization of the cytoskeleton as well as the activity of the different motor species and their behavior at network intersections establish a typical stochastic movement pattern of intracellular cargo , which suggests a random walk description .the narrow escape problem depicts a common delivery process , where the target destination is represented by a specific , usually small region on the plasma membrane of a cell .typical examples involve regulated secretion processes of neurotransmitters or digestive enzymes . in immune cells , secretory vesiclesare actively transported to the immunological synapse . moreover, the outgrowth of dendrites or axons from neurons as well as repair mechanisms for corruptions of a cell s plasma membrane require the target - oriented transport of mitochondria and vesicles .in the following , we present a coarse grained model of intracellular transport by considering the effective movement between network nodes , while discarding the single steps of individual motors at the molecular level . we introduce a random velocity model with intermittent arrest states where the dynamic cytoskeleton is implicitly modeled by probability density functions for network orientation and mesh size . within that framework, we address the effects of the interplay between inhomogeneous cytoskeletal architecture and motor performance on the search efficiency to narrow escapes alongside the membrane of a cell .cells establish specific search strategies for cargo transport by alterations of the cytoskeletal organization and regulation of the motor behavior at network intersections . in order to study the efficiency of spatially inhomogeneous search strategieswe formulate a random velocity model in continuous space and time composed of two states of motility : ( i ) a ballistic motion state at constant speed , which corresponds to active transport by molecular motors in between two successive intersections of the filamentous network and ( ii ) a waiting state , which is associated to pauses at intersection nodes of the cytoskeleton .the swaps from one state to another are arranged via constant but generally asymmetric transition rates for a switch from motion to waiting and for an inverse transition , see figure [ figure1 ] ( b ) . these leadto exponentially distributed time periods , spent in each state of motility and mean residence times of , for the motion and waiting state , respectively , which is biologically consistent as active lifetimes of cargo particles are exponentially distributed .the event rates are directly connected to biologically tractable properties of the cytoskeleton and the motor proteins .the mesh size of the underlying cytoskeletal network , which reflects the typical distance between two consecutive intersections , defines the rate for a transition from the ballistic motion to the pausing state . 
whereas the event rate is determined by the characteristic waiting time at an intersection node .+ whenever a particle has reached a network intersection and paused , it may either keep moving processively along the same filament with probability or it may change to a crossing track with probability , as sketched in figure [ figure1 ] ( d ) .this provides a typical timescale ^{-1} ] ) of changing ( remaining on ) the filament subsequent to a waiting period . with regard to the rotational symmetry of a cell , the new direction is always chosen with respect to the radial direction .the rotation angle is drawn from a distribution which is characteristic for the underlying cytoskeletal network . due to the inhomogeneous structure of the cytoskeleton, depends on the location of the particle inside the cell .our model system is designed according to the inhomogeneous internal organization of a cell , see figure [ figure1 ] ( a ) .we assume a circular confined geometry of radius , which displays the plasma membrane and will be fixed in the following ( ) .the cytoplasm is split into an interior region , where only microtubules are present , and a periphery , which is dominated by the actin cortex but may also be pervaded by microtubules .the width of the actin cortical layer is denoted by , so that an internal margin of radius emerges . throughout this article ,the target destination of the cargo is assumed to be a narrow escape hole in the plasma membrane with opening angle .the interior of a cell is controlled by the radial network of microtubules with its associated kinesin and dynein motors . since they manage fast long - range transport inside a cell , the rate to switch from the motion to the waiting state fixed to zero for simplicity .this leads to uninterrupted radial movement along the internal microtubule network .however , the particle may be forced into the waiting state due to confinement events , so the transition rate to the motion state is set to a non - zero value .due to the complex network structure of the periphery the searcher frequently encounters intersection nodes at rate and switches to the waiting state .subsequently , the particle may either keep moving along the previously used track at rate or it may change to a crossing filament at rate . the rotation angle is thereby drawn of a distribution which specifies the peripheral environment in terms of the filament orientation ( ) and motor species activity ( ) .the probabilities , , correspond to directional changes induced by kinesin , dynein and myosin , respectively , with .with regard to the radial orientation of microtubules and the directionality of the motors , the rotation angle distributions associated to kinesins and dyneins are delta peaked ,\\ f_\text{d}(\alpha_\text{rot } ) & = \delta(\alpha_\text{rot}-\pi),\qquad\quad\text { for } \alpha_\text{rot } \in ( { -}\pi,\pi].\end{aligned}\ ] ] in case of a directional change initiated by myosins , the rotation angle distribution is assumed to be either uniform or cut - off - gaussian , which takes into account the randomness of actin networks ,\\ f_\text{m}^\text{g}(\alpha_\text{rot } ) & = \frac{1}{2 } f^+ + \frac{1}{2 } f^- , \label{eq : rotanglemyosin}\end{aligned}\ ] ] with ,\\ f^- & = \frac{\mathcal{n}}{\sigma\sqrt{2\pi}}\exp\left({-}\frac{(\alpha_\text{rot}{+}\mu)^2}{2\sigma^2}\right),\text { for } \alpha_\text{rot } \in [ { -}\pi,0],\end{aligned}\ ] ] where and denotes the normalization constant , see figure [ figure1 ] ( c ) . 
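To make the directional rules concrete, the following sketch draws a single rotation angle at a network node according to the motor activities $p_k$, $p_d$, $p_m$ and the distributions given above. It is only illustrative: the values of $\mu$ and $\sigma$ for the cut-off Gaussian are placeholders, and the rejection step used for the truncated normal is one possible implementation, not necessarily the one used in the actual simulations.

```python
import numpy as np

def sample_rotation_angle(p_k, p_d, p_m, myosin="uniform",
                          mu=np.pi / 2, sigma=0.5, rng=None):
    """Draw one rotation angle (relative to the radial direction) at a network node.

    p_k, p_d, p_m : activity levels of kinesin, dynein and myosin (assumed to sum to 1).
    myosin        : 'uniform' or 'gaussian' turning-angle distribution on the actin mesh;
                    mu and sigma parameterize the cut-off Gaussian described above.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform()
    if u < p_k:                       # kinesin: keep moving outward along the radial direction
        return 0.0
    if u < p_k + p_d:                 # dynein: reverse, i.e. move towards the cell centre
        return np.pi
    # myosin-induced turn on the actin mesh
    if myosin == "uniform":
        return rng.uniform(-np.pi, np.pi)
    # cut-off Gaussian: equal-weight mixture of N(+mu, sigma) on [0, pi]
    # and N(-mu, sigma) on [-pi, 0], sampled here by simple rejection
    sign = 1.0 if rng.uniform() < 0.5 else -1.0
    while True:
        alpha = rng.normal(sign * mu, sigma)
        if (0.0 <= alpha <= np.pi) if sign > 0 else (-np.pi <= alpha <= 0.0):
            return alpha
```

The delta peaks at $0$ and $\pi$ implement the purely outward and inward transport on the radial microtubules, while the myosin branch reproduces the randomness of the actin cortex.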
in conclusion , the peripheral network is characterized by a mean mesh size , its structure is reflected by the rotation angle distributions and the motor activity is defined via .the spatial geometry of the model system imposes various confinement events , which are further specified in the following . at the onset, each searcher is assumed to start its walk in the center of the cell , where it is linked to a kinesin and runs in a uniformly distributed initial direction .as soon as a particle encounters the outer membrane margin at radius it will switch into the pausing state , since it is assumed to detach of the filament , and check for the target zone .if it is found , the walk will be terminated , otherwise the particle will wait at rate and the rotation angle distribution will be restricted to allowed values .the same holds in the case that a cargo transported by myosin hits the internal margin , which is created by the structural inhomogeneity of the cytoskeleton .crossovers of the internal margin by kinesins or dyneins , happen uninterruptedly under a change of the characteristic event rates for interior and periphery .whenever a dynein coupled particle reaches the mtoc in the center of the cell , it will wait with rate before it will change to kinesin motion in a uniformly distributed direction .consequently , the mean waiting time at confinement events is not necessarily the same as the one at network intersections in the bulk .in general it will be larger .in the following , we perform extensive monte carlo simulations in order to analyze the dependence of the search efficiency to narrow escapes alongside a cell s membrane on the spatial organization of the cytoskeleton as well as the motor performance at network intersections . at first , we will consider search strategies , where the motors operate on homogeneous cytoskeletal networks .then , we will focus on the inhomogeneous architecture and elaborate the influence of the actin cortical width on the mean first passage properties of narrow escape problems . here , we neglect the inhomogeneous structure of the cytoskeleton and assume that it spreads through the whole cell in a homogeneous manner .since we aim to isolate the impact of the cytoskeletal architecture , we ignore the motor s processivity ( ) and waiting processes ( ) .hence at each network intersection the particle immediately changes its direction according to where . in the case of myosin motorsare deactivated and the transport is managed by kinesin and dynein on a radial network of microtubules , while leads to a pure and uniformly random actin mesh with myosins .intermediate values of correspond to cooperative transport on homogeneous combinations of microtubules and actin filaments with active kinesins , dyneins and myosins .sample trajectories which implicitly reflect the network structure are given in figure [ figure2 ] . in order to investigate the influence of the network composition and motor activity, we measure the mean first passage time ( mfpt ) in dependence of the target size for different event rates and various probabilities .as expected , the mfpt increases monotonically with decreasing opening angle , see figure [ figure2 ] ( c ) .apparently , directional changes at intersections of a homogeneous cytoskeletal network do not improve the search efficiency nor does a cooperative behavior of different motor species ( ;1[ ] is most prominent in the case of , i.e. 
a peripheral motion dominated by transport along actin filaments , achieved by a high activity level of myosins , is most efficient .contrary , a high activity of dyneins would be favourable for virus trafficking to the nucleus .thus , inhomogeneous networks lead to a considerable gain of efficiency in comparison to the corresponding homogeneous limits and . for ,the search is managed on a homogeneous network composed of actin filaments and microtubules according to the probabilities , .hence , for and we retrieve intracellular transport on a homogeneous and uniformly random actin network . for ,the cargo is transported on a homogeneous network of radial microtubules . for comparison purposes ,the resulting mfpt is represented by the dashed black line in figure [ figure4 ] .+ moreover , the mfpt diverges in the limit of for .cargo particles which are transported by myosins get localized for narrow actin cortices , since the resulting prompt collisions with the two margins at radii and inhibit substantially large displacements .thus , the actin cortex can also act as a barrier for transport to a cell s plasma membrane .the results shown in figure [ figure4 ] visualize that an increased chance of directional alterations by increase of , leads to a loss of search efficiency .this directly suggests a profit by a processive behavior of motor proteins at intersection nodes .furthermore , a decrease in the opening angle provokes a more prominent minimum of the mfpt at lower cortical widths , compare figure [ figure4 ] ( a ) and ( b ) .so far the waiting times , which are induced by intersection nodes and confinement events , have been neglected by fixing the rate to switch from waiting to motion to infinity , , . herewe address the impact of the mean waiting time per arrest state . for that purposewe assume radial microtubules in the cell interior , , .the motion in the periphery is non - processive ( ) and performed on a random actin network ( ) of a given mesh size , defined by .the mean waiting time is determined by and the target size is fixed to . + as expected , a systematic decrease of the rate , i.e. increase of the mean waiting time per arrest state , extends the mfpt in comparison to instantaneous directional changes for , as shown in figure [ figure5 ] ( a ) .remarkably , the position of the minimum , which determines the optimal cortical width , is shifted by a reduction in .this is further illustrated in the inset of figure [ figure5 ] ( a ) by normalization of the mfpt to .+ the change in is based on the enhanced impact of waiting times on the first passage properties .the mfpt is composed of the total mean motion time ( mmt ) and the total mean waiting time ( mwt ) which the cargo experiences in the course of the search .figure [ figure5 ] ( b ) shows that the mmt displays pronounced minima and is indeed independent of the rate and thus fully determined by the mesh size of the network given via .in contrast to that , the mwt , given in figure [ figure5 ] ( c ) , depends on both rates and .since the rate determines the mean waiting time per arrest state , it influences the mwt only via a multiplicative factor , as obvious by the shift in figure [ figure5 ] ( c ) for .the impact of the transition rate on the mwt can be excluded by the transformation to the mean number of waiting periods the inset of figure [ figure5 ] ( c ) displays its dependence on and . 
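The decomposition of the MFPT into a mean motion time and a mean waiting time is straightforward to reproduce in a stripped-down Monte Carlo experiment. The sketch below simulates only the homogeneous limit (uniform new directions, i.e. $p_m = 1$, and no separate interior region), which corresponds to the reference case rather than the full inhomogeneous model; all parameter values are illustrative and the function name is ours.

```python
import numpy as np

def narrow_escape_fpt(R=1.0, v=1.0, lam_mw=10.0, lam_wm=100.0,
                      half_angle=np.pi / 12, n_walkers=200, seed=0):
    """Estimate the MFPT and its motion (MMT) and waiting (MWT) parts for a
    deliberately simplified, homogeneous version of the search model:
    constant speed v, exponential run times (rate lam_mw) and waiting times
    (rate lam_wm), uniformly distributed new directions, and a single absorbing
    arc of half-opening angle half_angle at polar angle zero on the circle R."""
    rng = np.random.default_rng(seed)
    t_move, t_wait = np.zeros(n_walkers), np.zeros(n_walkers)
    for w in range(n_walkers):
        pos = np.zeros(2)                                # start in the cell centre
        phi = rng.uniform(0.0, 2.0 * np.pi)
        while True:
            e = np.array([np.cos(phi), np.sin(phi)])
            run = rng.exponential(1.0 / lam_mw)          # duration of the ballistic leg
            b = pos @ e
            s_hit = -b + np.sqrt(max(b * b + R * R - pos @ pos, 0.0))
            at_membrane = False
            if v * run < s_hit:                          # leg ends at a network node
                pos = pos + v * run * e
                t_move[w] += run
            else:                                        # leg ends at the membrane
                pos = pos + s_hit * e
                t_move[w] += s_hit / v
                if abs(np.arctan2(pos[1], pos[0])) <= half_angle:
                    break                                # escaped through the target arc
                at_membrane = True
            t_wait[w] += rng.exponential(1.0 / lam_wm)   # arrest state at node / membrane
            while True:                                  # new direction; only inward
                phi = rng.uniform(0.0, 2.0 * np.pi)      # directions allowed at the membrane
                if not at_membrane or pos @ np.array([np.cos(phi), np.sin(phi)]) < 0.0:
                    break
    return (t_move + t_wait).mean(), t_move.mean(), t_wait.mean()

if __name__ == "__main__":
    mfpt, mmt, mwt = narrow_escape_fpt()
    print(f"MFPT = {mfpt:.1f}, MMT = {mmt:.1f}, MWT = {mwt:.1f}")
```

Because motion and waiting times are accumulated separately per walker, the same run yields all three quantities discussed above without additional bookkeeping.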
remarkably, the number of waiting periods may also exhibit a minimum for small cortical widths .+ consequently , we can solve the question of the shift in for presented in figure [ figure5 ] ( a ) . in the case of ,the mmt exhibits a minimum at roughly ( figure [ figure5 ] ( b ) ) , while a minimum at emerges for the number of waiting periods and thus also for the mwt ( figure [ figure5 ] ( c ) and inset ) .hence , the optimal cortical width of the mfpt , which is the sum of mmt and mwt , undergoes a crossover from to as the impact of waiting times increases with decreasing rate .+ in conclusion , the mwt dominates the behavior of the first passage properties in the limit of .this results in a shift of the optimal width in dependence of the mesh size of the underlying network defined by . for small rates a raising impact of waiting times may lead to a shift of from intermediate values to ( compare to the inset of figure [ figure5 ] ( c ) ) and thus a favour of homogeneous cytoskeleton .larger rates , and hence biologically relevant mesh sizes , conserve the gain of search efficiency by inhomogeneous cytoskeletal organizations , since the number of waiting periods exhibits a minimum for ;1[ ] diverges in the limit of .figure [ figure8 ] ( c ) visualizes , that the mfpt is dominated by the mwt for small rates .consequently , the mfpt exhibits a minimum for in contrast to the mmt , which is minimal for ( compare to figure [ figure7 ] ( c ) ) . due to inevitable waiting processes , it may be more efficient to change directions with a specific probability rather than transport by completely processive motors ( ) .a specific processivity is also reported in biological systems .with the aid of a random velocity model with intermittent arrest states , we studied the first passage properties of intracellular narrow escape problems . via extensive computer simulations we are able to systematically analyze the influence of the cytoskeletal structure as well as the motor performance on the search efficiency to small target zones on the plasma membrane of a cell .+ for a spatially homogeneous cytoskeleton , the mfpt diverges in the limit and directional changes at intersection nodes ( , ) do not improve the search efficiency .moreover , we find that a random actin mesh ( ) is preferable to radial microtubule networks ( ) as well as combinations of both ( ;1[$ ] ) , which underlines the benefit of motor activity regulation by outer stimuli .homogeneous motion pattern are sufficient for large target sizes , but actually fail in the biologically relevant case of narrow escape problems. + by varying the width of the actin cortical layer , we elaborate the impact of the cytoskeletal inhomogeneity on the search efficiency to narrow escapes . 
remarkably , we find that a cell can optimize the detection time by regulation of the motor performance and convenient alterations of the spatial organization of the cytoskeleton .an inhomogeneous architecture with a thin actin cortex constitutes an efficient intracellular search strategy for narrow targets and generally leads to a considerable gain of efficiency in comparison to the homogeneous pendant .however , the mfpt diverges in the limit of , which outlines the task of the actin cortex between functional delivery and transport barrier .+ molecular motors do not necessarily change their direction of motion at each filamentous intersection , they may also overcome the constriction and remain on the same track , a property referred to as processivity .we find that an increased motor processivity systematically improves the search efficiency in the case of instantaneous directional changes on networks of a mesh size defined by . due towaiting processes an optimal value of the motor processivity emerges , which minimizes the detection of membranous targets .specific probabilities to overcome constricting filament crossings are also reported during intracellular transport .+ if we assume a cell radius of m , the opening angle of ( ) leads to an arc length of m nm ( m ) .we further assume that motors move processively at intersections with probability and their velocity typically is about . a transition rate to the waiting state of thus leads to a mesh size of nm , which is biologically reasonable . under these conditions ,figure [ figure9 ] reveals an optimal width of the actin cortex which varies from approximately m to m .this is in good qualitative agreement to biological data .+ , ; periphery parameters : , , , ; velocity : ; search domain : ; target size : .the mmt ( mfpt for ) as well as the mean number of waiting periods display distinct minima at small cortical widths in dependence of the target size .the optimal width of the actin cortex varies according to and the mean waiting time per arrest state from approximately m to m , which is in qualitative agreement to biological measurements ., scaledwidth=37.0% ] in summary , our model indicates that the spatial organization of the cytoskeleton of spherical cells with a centrosome minimizes the characteristic time necessary to detect small targets on the cell membrane by random intermittent search processes along the cytoskeleton filaments ( see also for similar findings in a model with intermittent diffusive search ) .the minimization is achieved by a small width of the actin cortical layer and by regulation of the motor activity and behavior at network intersections .remarkably a thin actin cortex is also more economic than distributing cytoseleton filaments in all directions over the cell body .thus , our work outlines that a thin confinement of the actin cortex , besides its advantages concerning cell stability or motility , also serves as an efficient key to intracellular narrow escape problems .this work was funded by the german research foundation ( dfg ) within the collaborative research center sfb 1027 .qu b , pattu v , junker c , schwarz ec , bhat ss , kummerow c , marshall m , matti u , pfreundschuh m , becherer u , rieger h , rettig j and hoth m 2011 docking of lytic granules at the immunological synapse in human ctl required vti1b - dependent pairing with cd3 endosomes * 186 * 6894 - 6904 papadopulos a , tomatis vm , kasula r and meunier fa 2013 the cortical acto - myosin network : from diffusion barrier to functional 
gateway in the transport of neurosecretory vesicles to the plasma membrane * 4 * 1 - 11
|
intracellular transport is vital for the proper functioning and survival of a cell . cargo ( proteins , vesicles , organelles , etc . ) is transferred from its place of creation to its target locations via molecular motor assisted transport along cytoskeletal filaments . the transport efficiency is strongly affected by the spatial organization of the cytoskeleton , which constitutes an inhomogeneous , complex network . in cells with a centrosome microtubules grow radially from the central microtubule organizing center towards the cell periphery whereas actin filaments form a dense meshwork , the actin cortex , underneath the cell membrane with a broad range of orientations . the emerging ballistic motion along filaments is frequently interrupted due to constricting intersection nodes or cycles of detachment and reattachment processes in the crowded cytoplasm . in order to investigate the efficiency of search strategies established by the cell s specific spatial organization of the cytoskeleton we formulate a random velocity model with intermittent arrest states . with extensive computer simulations we analyze the dependence of the mean first passage times for narrow escape problems on the structural characteristics of the cytoskeleton , the motor properties and the fraction of time spent in each state . we find that an inhomogeneous architecture with a small width of the actin cortex constitutes an efficient intracellular search strategy .
|
computer experiment by the method of molecular dynamics ( md ) is intensively exploited in solving various tasks of chemical physics [ 1 ] , biochemistry [ 2 ] and biology [ 3 ] . among theseare investigations of structure and dynamical properties of molecular liquids which normally are treated as collections of rigid bodies . despite the long prehistory of md simulation ,the development of efficient and stable algorithms for the integration of motion for such systems still remains an actual problem .usually , molecular movements are simulated using constrained dynamics [ 47 ] in which the phase trajectory of each atom is evaluated by newton s equations , while the molecular structures are maintained by holonomic constraints to keep intramolecular bond distances .although the atomic - constraint technique can be applied , in principle , to arbitrary polyatomics regardless of its chemical structure and size , it appears to be very sophisticated to implement for some particular models .for example , when there are more than two , three or four interaction sites per molecule for linear , planar and three - dimensional bodies , bond lengths and angles can not be fixed uniquely [ 5 ] .systems of point molecules with embedded multipoles present additional complexities too , because then the intermolecular forces can not easily be decomposed into direct site - site interactions .the limitation of constrained dynamics is also caused by the fact that constraint forces are calculated at each time step of the produced trajectory to balance all other potential forces in the system .as the number of atoms in each molecule is increased , the number of constraints raises dramatically , resulting in a decreased speed of computations .moreover , to reproduce the rigid molecular structure , cumbersome systems of nonlinear equations must be solved iteratively .this can lead to a problem for molecules with light hydrogen atoms or with linear or planar fragments . in this case, the algorithm converges rather slowly [ 8 ] already at relative small step sizes and , thus , it requires a considerably portion of the computational time .recently , it was shown that a non - iterative calculation of constraint forces is possible [ 9 ] , but this is practical only for simple models in which the problem can be reduced to inversion of a banded matrix [ 10 , 11 ] .some of the limitations just mentioned are absent in the molecular approach , when the displacements of rigid bodies are analyzed in view of translational and rotational motions .the translational dynamics is defined by motion of molecular centres of masses , whereas the orientations typically are expressed in terms of quaternions [ 1214 ] or principal - axis vectors [ 13 ] .the straightforward parameterization of orientational degrees of freedom , euler angles , is very inefficient for numerical calculations because of singularities inherent in the description [ 12 , 15 , 16 ] .multistep predictor - corrector methods were applied to integrate rotational motion in early investigations [ 1720 ] . as was soon established , the extra order obtained in these methods is not relevant , because the forces existing in a real system are not sufficiently smooth . 
as a result , high - order schemes appear to be less accurate at normal step sizes than low - order integrators , such as verlet [ 21 ] , velocity verlet [ 22 ] and leapfrog [ 23 ] ones .the last algorithms are also the most efficient in view of cost measured in terms of force evaluations .that is why , they are widely used in different approaches , for instance , in the atomic - constraint technique , to integrate translational motion .these traditional algorithms were derived , however , assuming that velocities and forces are coordinate- and velocity - independent , respectively . in general ,the time derivatives of orientational positions may depend not only on angular velocities but also on these positions themselves resulting in the explicit velocity - dependence of angular accelerations .therefore , additional revisions are necessary to apply the standard integrators to rotational motion . in the atomic approach, the problem with the coordinate and velocity dependencies is circumvented by involving fundamental variables , namely , the individual cartesian coordinates of atomic sites .similarly , this problem can be solved within the molecular approach choosing appropriate generalized coordinates in orientational space .ahlrichs and brode proposed a method [ 24 ] in which the principal axes of molecules are treated as pseudo particles and constraint forces are introduced to maintain their orthonormality .considered the entire rotation matrix and the corresponding conjugate momentum as dynamical variables [ 25 ] .the rotation matrices can be evaluated within the usual verlet or leapfrog frameworks , using either recursive [ 24 ] or iterative [ 25 ] procedures , respectively .the recursive method behaves relatively poor with respect to long - term stability of energy , whereas the iterative procedure requires again , as in the case of constrained dynamics , to find solutions for systems of highly nonlinear equations . in general , the convergence of iterations is not guaranteed and looping becomes possible even at not very large step times .examples for not so well behaved cases are models with almost linear or planar molecules , when the diagonal mass matrices are hard to numerical inversion since they have one or two elements which are very close to zero .the extension of the atomic and pseudo - particle approaches to temperature - conserving dynamics is also a difficult problem , given that the rigid - reactions and temperature - constraint forces are coupled between themselves in a very complicated manner .a viable alternative to integrate the rigid - body motion has been provided by explicit and implicit angular - momentum algorithms of fincham [ 2627 ] .this was the first attempt to adopt the leapfrog framework to rotational motion in its purely classical treatment .the chief advantage of these rotational leapfrog algorithms is the possibility to perform thermostatted simulations .however , even in the case of a more stable implicit algorithm , the total energy fluctuations in energy - conserving simulations are too big with respect to those identified in the atomic - constraint technique . 
moreover , despite the fact that no constraint forces are necessary in the rotation dynamics , the rigidness of molecules is not satisfied automatically , because the equations of motion are not solved exactly .usually , the artificial rescaling method [ 19 , 27 ] is used to preserve the unit norm of quaternions and , as a consequence , to ensure the molecular rigidity .recently [ 28 ] , it has been shown that the crude renormalization can be replaced by a more rigorous procedure introducing so - called numerical constraints . as a result ,quaternion [ 28 ] and principal - axis [ 29 ] algorithms were devised within the velocity verlet framework .it was demonstrated [ 29 , 30 ] that these algorithms conserve the total energy better than the implicit leapfrog integrator [ 27 ] , but worse with respect to the atomic - constraint method , especially in the case of long - duration simulations with large step sizes . quite recently , to improve the stability ,a new angular - velocity leapfrog algorithm for rigid - body simulations has been introduced [ 30 ] .the automatic preservation of rigid structures and good stability properties can be related to its main advantages . but a common drawback , existing in all long - term stable integrators on rigid polyatomics , still remained here , namely , the necessity to solve by iteration the systems of nonlinear equations .although such equations are much simpler than those arising in the atomic and pseudo - particle approaches , the iterative solution should be considered as a negative feature . moreover , since the nonlinear equations are with respect to velocities , it is not so simple matter to extend the angular - velocity algorithm to a thermostatted version .this study presents a modified formulation of the angular - momentum approach within the leapfrog framework . unlike the standard approach by fincham [ 27 ], the new formulation is based on more natural interpolations of dynamical variables and it uses no extrapolation .the algorithm derived appears to be free of all the drawbacks inherent in previous descriptions .it can easily be implemented to arbitrary rigid bodies and applied to temperature - conserving dynamics .the integrator exhibits an excellent energy conservation , intrinsically reproduces rigid structures and allows to avoid any iterative procedures at all .let us consider a system of interacting rigid bodies .according to the classical approach , any movements of a body can be presented as the sum of two motions , namely , a translational displacement of the centre of mass and a rotation about this centre .the translational displacements in the system are expressed in terms of the centre - of - mass positions and velocities , where , given in a space - fixed laboratory frame . the time evolution of such quantities can described by writing newton s law in the form of two per particle three - dimensional differential equations of first order , \\ [ -6pt ] \frac{{\rm d } { \bf r}_i}{{\rm d } t}&=&{\bf v}_i \ , , \nonumber\end{aligned}\ ] ] where is the total force acting on body due to the interactions with all the rest of particles and denotes the mass of the body . 
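For the translational part, both the standard and the revised schemes use the ordinary leapfrog discretization of the centre-of-mass equations above. A minimal sketch, assuming the forces are supplied by an external routine and that positions and velocities are stored as arrays:

```python
import numpy as np

def leapfrog_translation(r, v_half, forces, masses, h):
    """One leapfrog step for the centre-of-mass motion.

    r       : (N, 3) centre-of-mass positions at time t
    v_half  : (N, 3) velocities at time t - h/2
    forces  : callable returning the (N, 3) forces for given positions
    masses  : (N,) particle masses
    h       : time step
    Returns positions at t + h and velocities at t + h/2.
    """
    f = forces(r)                                  # f_i(t)
    v_new = v_half + h * f / masses[:, None]       # v_i(t + h/2)
    r_new = r + h * v_new                          # r_i(t + h)
    return r_new, v_new
```

The rotational part, discussed next, is where the standard and revised formulations differ.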
to determine the rotational motion , one needs to use frames attached to each body together with the laboratory system of coordinates .it is more convenient for further consideration to direct the body - fixed - frame axes along the principal axes of the particle , which pass through its centre of mass .then the matrix of moments of inertia will be diagonal and time - independent in the body - fixed frame .we will use the convention that small letters stand for the representation of variables in the fixed laboratory frame , whereas their counterparts in the body frame will be designated by capital letters .the transitions and between these both representations of vectors and in the laboratory and body frames , respectively , can be defined by the time - dependent rotation matrix .such a matrix must satisfy the orthogonormality condition , or in other words , to ensure the invariance of quadratic norms for vectors and . in our notations and are the matrices inversed and transposed to , correspondingly , and denotes the unit matrix .let be an arbitrary vector fixed in the body . by definition ,such a vector does not change in time in the body - fixed frame , .the angular velocity is introduced differentiating its counterpart in the laboratory frame over time , .then , using the equality and the orthonormality of , the rate of change in time of the orientational matrix can be expressed in terms of either laboratory or principal angular velocity as where is a skewsymmetric matrix , i.e. , , and , and are components of vector . from the orthogonormality conditionit follows that maximum three independent parameters are really necessary to describe orientations of a rigid body and to evaluate the nine elements of the rotation matrix .however , the well - known parameterization of in terms of three eulerian angles [ 12 ] is unsuitable for numerical calculations because of the singularities . in the body - vector representation[ 13 , 24 , 25 , 29 ] , all the elements of the rotation matrix are considered as dynamical variables .these variables present , in fact , cartesian coordinates of three principal axes of the body in the laboratory frame .the alternative approach applies the quaternion parameterization [ 4 , 9 ] of rotation matrices , where is a vector - column consisting of four quaternion components . using the normalization condition , which ensures the orthonormality of ] appear implicitly , and they are computed via relation ( 4 ) using quaternion values .as far as the quantities and are not known at mid - step level , it was assumed to propagate the time derivatives of by means of the relation + { \cal o}(h^2) ] , where , and are nonzero elements of matrix , is the boltzmann s constant and denotes the number of degrees of freedom per particle ( for linear bodies ) .this allows to synchronize in time the temperature with the potential energy and , therefore , to calculate the total energy of the system . in the temperature - conserving dynamics , on - step velocities and angular momentaare modified as and using the scaling factor , where is the required constant temperature [ 27 , 31 ] .the velocity integration is completed by { \bf v}_i'(t)+{\textstyle \frac{h}{2 } } { \bf f}_i(t)/m \ , , \\ [ 3pt ]{ \bf l}_i'(t+{\textstyle \frac{h}{2}})&= & [ 2-\beta^{-1}(t ) ] { \bf l}_i'(t)+{\textstyle \frac{h}{2 } } { \bf k}_i(t)\end{aligned}\ ] ] which satisfy the interpolations ] and the constant - temperature condition , where . 
finally , the translational and orientational positions are updated according to the same equations replacing by and by .as has been established [ 29 , 30 ] , the rotational leapfrog algorithm , described in the preceding subsection , exhibits rather poor long - term stability of energy with respect to atomic - constraint integrators [ 47 ] , for example .moreover , it requires iterative solutions and does not conserve the unit norm and orthonormality of quaternions and orientational matrices . for this reason, a question arises how about the existence of a revised scheme which is free of all these drawbacks and which has all advantages of the standard approach .we shall show now that such a scheme really exists .first of all , one points out some factors which can explain bad stability properties of the standard scheme .when calculating orientational variables , the fincham s algorithm uses up three additional estimators , namely , the propagations for on - step angular momentum ( eq . ( 12 ) ) and mid - step time derivative ( eq . ( 14 ) ) as well as the prediction ( eq . ( 16 ) ) of angular momentum . among these only the first two evaluationscan be classified as interpolations which correspond to a simple averaging over the two nearest neighbouring values . at the same time , the last prediction ( 16 ) presents , in fact , an extrapolation that is , strictly speaking , beyond the leapfrog framework . indeed , applying equation ( 12 ) for the next step time yields the following interpolated values for angular momenta , which differ from previously predicted ones , i.e. , and , as a consequence , .extrapolations are commonly used in low - precision explicit schemes and they should be absent in more accurate implicit integrators . the main idea of the revised approach is to derive an implicit equation for reducing the number of auxiliary interpolations to a minimum and involving no extrapolations .this can be realized starting from the same evaluation ( 10 ) for mid - step angular momenta , but treating the time derivatives in a somewhat other way .as was mentioned earlier , these derivatives are necessary to evaluate orientational positions ( eq . ( 13 ) ) , and they require the knowledge of two per body quantities , namely , and .it is crucial to remark that since the advanced angular momenta are already known , such two quantities are not independent but connected between themselves by the relation then , as can be seen easily , the calculation of is reduced to a propagation of one variable only , namely , .it is quite naturally to perform this propagation by writing + { \cal o}(h^2)\ ] ] and the algorithm proceeds as follows taking into account expressions ( 19 ) and ( 20 ) , matrix equation ( 21 ) constitutes an implicit system for unknown elements of . as for the usual scheme , the system can be solved iteratively , putting initially for in all nonlinear terms collected in the right - hand side of ( 21 ) .then the obtained values for in the left - hand side are considered as initial guesses for the next iteration .the convergence of iterations is justified by the smallness of nonlinear terms which are proportional to the step size . 
in such a way, we have derived a new leapfrog algorithm to integrate orientational degrees of freedom .it involves only one auxiliary interpolation ( 20 ) which is completely in the spirit of the leapfrog framework .moreover , this interpolation concerns the most slow variables , rather than their more fast time derivatives and angular momenta , thus , leading to an increased precision of the calculations .when on - step temperature is required , for instance to check the energy conservation , we can apply usual interpolation ( 12 ) of angular momenta and relation ( 11 ) for velocities .it is worth underlining that , unlike the standard rotational integrator , the angular - momentum interpolation errors are not introduced into trajectories ( 21 ) produced by the revised algorithm at least within the energy - conserving dynamics .the extension of the revised scheme to a thermostatted version is trivial . using the calculated temperature define the scaling factor .the mid - step angular momenta are then replaced by their modified values ( see eq . ( 18 ) ) and substituted into eq .( 19 ) to continue the integration process according to equations ( 20 ) and ( 21 ) .besides the evident simplicity of the revised approach with respect to the standard scheme , a very nice surprise is that the unit norm of quaternions and the orthonormality of orientational matrices appear to be now by numerical integrals of motion . indeed , considering the quantity as a parameter and explicitly using coordinate interpolation ( 20 ) , we can present eq .( 21 ) in the equivalent form ^{-1 } [ { \bf i}+{\textstyle \frac{h}{2 } } { \bf h}_i(t+{\textstyle \frac{h}{2 } } ) ] { \bf s}_i(t ) \ , , \ ] ] where and it is understood that designates either three- or four - dimensional unit matrix in the principal - axis or quaternion domain , respectively .it can be checked readily that the matrix is orthonormal for an arbitrary skewsymmetric matrix .as far as the matrix is skewsymmetrical by definition , the following important statement emerges immediately .if initially the orthonormality of is fulfilled , it will be satisfied perfectly for the advanced matrices as well , despite an approximate character of the integration process .thus , no artificial or constraint normalizations and no recursive procedures are necessary to conserve the rigidness of molecules .the alternative presentation ( 22 ) may be more useful for iterating since it provides the orthonormality of at each iteration step and leads to an increased speed of the convergence .because of this , we show eq .( 22 ) more explicitly , +h { \bf q}_i}{1+{\textstyle \frac{h^2}{16 } } \omega_i^2(t+{\textstyle \frac{h}{2 } } ) } \ , { \bf q}_i(t ) \equiv { \bf g}_i(t , h ) \ , { \bf q}_i(t ) \ , , \\ [ 15pt ] { \bf a}_i(t+h)&=&\frac{{\bf i}\,[1-{\textstyle \frac{h^2}{4 } } \omega_i^2(t+{\textstyle \frac{h}{2}})]+h{\bf w}_i+ { \textstyle \frac{h^2}{2}}{\bf p}_i}{1 + { \textstyle \frac{h^2}{4 } } \omega_i^2(t+{\textstyle \frac{h}{2 } } ) } \ , { \bf a}_i(t ) \equiv { \bf d}_i(t , h ) \ , { \bf a}_i(t ) \ , , \end{aligned}\ ] ] for the cases of quaternion and entire - rotation - matrix representations , respectively , where expressions ( 3 ) and ( 5 ) for matrices and have been taken into account , and are orthonormal evolution matrices , and {\alpha \beta}={\omega}_\alpha^i { \omega}_\beta^i ] and ] , where and {t+\frac{h}{2}} \rho ] . 
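The evolution matrices in equations (22) and (23) have the structure of a Cayley transform, $[\,{\bf I}-\frac{h}{2}{\bf H}\,]^{-1}[\,{\bf I}+\frac{h}{2}{\bf H}\,]$, which is exactly orthogonal whenever ${\bf H}$ is skewsymmetric; this is the mechanism by which the unit quaternion norm becomes a numerical invariant. The sketch below demonstrates this property in isolation. It assumes the common quaternion kinematics $\dot{\bf q}=\frac12\,{\bf\Omega}(\boldsymbol\omega)\,{\bf q}$ (the sign and factor conventions may differ from those of equation (4)), and it takes the mid-step angular velocity as given; in the full algorithm that velocity is obtained from the short fixed-point iteration described above.

```python
import numpy as np

def omega_matrix(w):
    """4x4 skewsymmetric kinematic matrix for the body-frame angular velocity
    w = (wx, wy, wz), assuming the convention dq/dt = 0.5 * Omega(w) q."""
    wx, wy, wz = w
    return 0.5 * np.array([[0.0, -wx, -wy, -wz],
                           [wx,  0.0,  wz, -wy],
                           [wy, -wz,  0.0,  wx],
                           [wz,  wy, -wx,  0.0]])

def cayley_step(q, w_mid, h):
    """Advance the quaternion over one step with the Cayley-transform propagator
    (I - h/2 H)^(-1) (I + h/2 H); exactly orthogonal for skewsymmetric H, so the
    unit norm of q is preserved regardless of the step size."""
    H = omega_matrix(w_mid)                       # evaluated at the mid-step
    A = np.eye(4) - 0.5 * h * H
    B = np.eye(4) + 0.5 * h * H
    return np.linalg.solve(A, B @ q)

if __name__ == "__main__":
    q = np.array([1.0, 0.0, 0.0, 0.0])
    w = np.array([1.3, -0.7, 2.1])                # arbitrary angular velocity
    for _ in range(10_000):
        q = cayley_step(q, w, h=0.05)
    print(np.linalg.norm(q))                      # remains 1 up to round-off
```

Writing out the inverse in closed form, using the fact that ${\bf H}^2$ is a multiple of the identity in the quaternion representation, reproduces the explicit $1\pm\frac{h^2}{16}\Omega^2$ structure of equation (23).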
for molecules with cylindric distribution of mass sites ,when two of three of principal moments of inertia are equal , the numerical trajectory can also be determined in a simpler manner .let us assume for definiteness that and .then arbitrary two perpendicular between themselves axes , lying in the plane perpendicular to the principal axis , can be considered initially as - and -th principal orths . since now , the component of the angular velocity is found automatically , . as in the general case ,the two rest solutions and of system ( 28 ) are calculated on the basis of eq .( 29 ) taking into account that , whereas the orientational matrices are evaluated via eq .( 24 ) . a special attention must be paid for purely linear molecules when and each body has two , instead of free , orientational degrees of freedom .the relative positions of all atoms within a linear molecule can be expressed in terms of an unit vector and besides one finds that = j_{zz}^i { \omega_z^i}=0 \rho ] putting formally , where two other components of are and ( this immediately follows from eq . ( 28 ) ) .planar molecules do not present a specific case within our approach and they are handled in the usual way as tree - dimensional bodies .the system chosen for numerical tests was the tip4p model ( ) of water [ 32 ] at a density of g and a temperature of k. because of the low moments of inertia of the water molecule and the large torques due to the site - site interactions , such a system should provide a very severe test for rotational algorithms . in orderto reduce cut - off effects to a minimum we have applied an interaction site reaction field geometry [ 33 ] and a cubic sample with molecules . all runs were started from an identical well equilibrated configuration .the md simulations have been carried out in both energy - conserving ( nve ) and thermostatted ( nvt ) ensembles .the equations of rotational motion were integrated using the standard quaternion integrator [ 27 ] and our revised leapfrog algorithm .as far as water is usually [ 34 ] simulated in an nve ensemble by the atomic - constraint technique [ 4 , 5 ] , the corresponding calculations on this approach and the angular - velocity verlet method [ 28 ] were performed for the purpose of comparison as well .all the approaches required almost the same computer time per step given that near 97% of the total time were spent to evaluate pair interactions .the following thermodynamic quantities were evaluated : total energy , potential energy , temperature , specific heat at constant volume , and mean - square forces and torques .the structure of the tip4p water was studied by determining the oxygen - oxygen and hydrogen - hydrogen radial distribution functions ( rdfs ) .orientational relaxation was investigated by evaluating the molecular dipole - axis autocorrelations .centre - of - mass and angular - velocity time autocorrelation functions were also found . to reduce statistical noise ,the measurements were averaged over 20 000 time steps . in the case of nve dynamics to verify whether the phase trajectories are produced properly , we applied the most important test on conservation of total energy of the system .the total energy fluctuations ^{1/2}/|\langle e \rangle| ] . 
using these velocities in the quaternion space causes an inconsistency of such an interpolation with the corresponding interpolation ] .therefore , to follow rigorously the leapfrog framework , the auxiliary mid - step angular velocities must be involved within the principal - axis representation exclusively .no shift of the total energy and temperature was observed during the revised leapfrog trajectories at fs over a length of 20 000 time steps . as is well known , to reproduce features of an nve ensemble correctly, the ratio of the total energy fluctuations to the corresponding fluctuations of the potential energy must be within a few per cent .the following levels of at the end of the revised leapfrog trajectories have been obtained : 0.0016 , 0.0066 , 0.017 , 0.030 , 0.051 and 0.11 % .they correspond to 0.29 , 1.2 , 3.0 , 5.4 , 9.1 and 20 % at 1 , 2 , 3 , 4 , 5 and 6 fs , respectively , where it was taken into account that for the investigated system .the deviations in all the rest measured functions with respect to their benchmark values ( obtained in the atomic - constraint nve simulations with 2 fs ) were in a complete agreement with the corresponding relative deviations in the total energy conservation .for example , the results of the revised leapfrog algorithm at 2 fs were indistinguishable from the benchmark ones , whereas they differed as large as around 5% , 10% and 20% with increasing the time step to 4 fs , 5 fs and 6 fs , respectively . however , the differences were much smaller than in the case of the standard rotational integrator .we see , therefore , that step sizes of order 5 fs are still suitable for precise nve calculations .even a time step of 6 fs can be acceptable when a great precision is not so important , for instance , to achieve an equilibrium state . what about the nvt simulations? it is well established [ 27 , 35 ] that thermostatted versions allow to perform reliable calculations with significantly greater step sizes than those used within the energy - conserving dynamics . to confirm such a statement ,we have made nvt runs on the basis of our non - iterative revised leapfrog algorithm ( within principal - axis variables ) and a thermostatted version of the standard implicit integrator [ 27 ] of fincham .the oxygen - oxygen and hydrogen - hydrogen rdfs , calculated during the revised leapfrog trajectories for three different step sizes , 2 , 8 and 10 fs , are plotted in fig .2a by the curves in comparison with the benchmark result ( open circles ) .note that the rdfs corresponding to and 6 fs coincide completely with those for fs and they are not included in the graph .a similar behaviour of rdfs was identified for the standard rotational integrator , but the results are somewhat worse especially at 8 and 10 fs .no drift of the potential energy was observed at fs and fs for the revised and standard algorithms , respectively . 
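For completeness, the stability measure quoted in this section, the relative fluctuation of the total energy, can be computed from a stored energy time series with the following one-line sketch (the function name is ours):

```python
import numpy as np

def relative_energy_fluctuation(E):
    """Relative total-energy fluctuation <(E - <E>)^2>^(1/2) / |<E>| used as the
    stability measure for the NVE runs; E is the time series of total energies."""
    E = np.asarray(E, dtype=float)
    return np.sqrt(np.mean((E - E.mean()) ** 2)) / abs(E.mean())
```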
from the above, we can conclude that the revised leapfrog integrator is suitable for simulating even with huge step sizes of 10 fs , because then there is no detectable difference in rdfs .other thermodynamic quantities such as the centre - of - mass and angular - velocity time autocorrelation functions appeared to be also close to genuine values .quite recently , it was shown [ 36 ] that the time interval of 10 fs should be considered as an upper theoretical limit for the step size in md simulations on water .we see , therefore , that this limit can be achieved in practice using the revised leapfrog algorithm .the molecular dipole - axis time autocorrelation function is the most sensitive quantity with respect to varying the step size .such a function obtained within the standard ( s ) and revised ( r ) schemes at five fixed step sizes , 2 , 4 , 6 , 8 and 10 fs , is presented in fig .2b . for 6 fs the results of s- and r - schemes are indistinguishable between themselves .with increasing the step size to 8 fs or higher we can observe a systematic discrepancy which is smaller in the case of the r - scheme .reliable results can be obtained here at time steps of 8 fs for both the standard and revised schemes .however within the standard approach , the solutions converged too slow already at 6 fs and they began to diverge at greater step sizes . to perform the simulations in this case , special time - consuming transformations to unsure the convergence have been applied . forthe revised integrator which is free of iterations , the computer time did not depend on the step size .during last years there was a slow progress in the improvement of existing md techniques concerning the numerical integration of motion for systems with interacting rigid bodies .we have attempted to remedy such a situation by formulating a revised angular - momentum approach within the leapfrog framework . as a result , a new integrating algorithm has been derived .the revised approach reduces the number of auxiliary interpolations to a minimum , applies the interpolations to the most slow variables and avoids any extrapolations .this has allowed to achieve the following two significant benefits : ( i ) all final expressions are evaluated explicitly without involving any iterative procedures , and ( ii ) the rigidity of bodies appears to be a numerical integral of motion .another positive feature of the algorithm is its simplicity and universality for the implementation to arbitrary rigid structures with arbitrary types of interactions . as has been shown on the basis of actual simulations of water , the proposed algorithm exhibits very good stability properties and conserves the total energy in microcanonical simulations with the same precision as the cumbersome atomic - constraint technique . in the case of temperature - conserving dynamics , reliable calculations are possible with huge step sizes around 10 fs .such sizes are very close to the upper theoretical limit and unaccessible in usual approaches .j. p. ryckaert , g. ciccotti , and h. j. c. berendsen , `` numerical integration of the cartesian equations of motion of a system with constraints : molecular dynamics of -alkanes '' , j. comput .phys . , 23 , 327 ( 1977 ) .d. levesque , j. j. weis , and g. n. patey , `` fluids of lennard - jones spheres with dipoles and tetrahedral quadrupoles . a comparison between computer simulation and theoretical results '' , mol ., 51 , 333 ( 1984 ) .w. c. swope , h. c. andersen , p. h. berens , and k. r. 
wilson , `` a computer simulation method for the calculation of equilibrium constants for the formation of physical clusters of molecules : application to small water clusters '' , j. chem .phys . , 76 , 637 ( 1982 ) .d. brown and j. h. r. clarke , `` a comparison of constant energy , constant temperature and constant pressure ensembles in molecular dynamics simulations of atomic liquids '' , mol .phys . , 51 , 1243 ( 1984 ) .1 . the total energy fluctuations as functions of the length of the nve simulations on the tip4p water , evaluated in various techniques at four fixed time steps : * ( a ) * 1 fs , * ( b ) * 2 fs , * ( c ) * 3 fs and * ( d ) * 4 fs . fig .oxygen - oxygen ( o - o ) and hydrogen - hydrogen ( h - h ) radial distribution functions * ( a ) * , and orientational relaxation * ( b ) * , obtained in nvt simulations on the tip4p water using the revised ( * ( a ) * , * ( b ) * ) and standard ( * ( b ) * ) leapfrog algorithms .the results corresponding to the step sizes 2 , 8 and 10 fs are plotted by bold solid , short - dashed and thin solid curves , respectively .additional long - short dashed and dashed curves in * ( b ) * correspond to cases of 4 and 6 fs .the sets of curves related to standard and revised integrators are labelled in * ( b ) * as `` s '' and `` r '' , respectively .the benchmark data are shown as open circles .note that the standard- and revised - algorithm curves are indistinguishable in * ( b ) * at 2 , 4 and 6 fs .
|
a new algorithm is introduced to integrate the equations of rotational motion . the algorithm is derived within a leapfrog framework and the quantities involved in the integration are mid - step angular momenta and on - step orientational positions . contrary to the standard implicit method by fincham [ mol . simul . , 8 , 165 ( 1992 ) ] , the revised angular momentum approach presented corresponds completely to the leapfrog idea of interpolating dynamical variables without using any extrapolations . the proposed scheme intrinsically preserves rigid molecular structures and considerably improves stability properties and energy conservation . as is demonstrated on the basis of simulations for water , it allows one to reproduce correct results with extra large step sizes of order 5 fs and 10 fs in the cases of energy- and temperature - conserving dynamics , respectively . we also show that iterative solutions can be avoided within our implicit scheme by shifting from quaternions to the entire rotation - matrix representation . * keywords : * numerical algorithms ; long - term integration ; motion of rigid bodies ; polyatomic molecules
|
the goal of this work is to investigate the concept of `` effective free surfaces '' which are defined here as stationary interfaces corresponding to the time averaged balance of mass , momentum and , if applicable , energy in a time dependent flow with free surfaces .in other words , given an unsteady two phase flow with fluctuating boundaries , the effective free surface represents the boundary between the two phases in the corresponding mean flow which satisfies the time averaged form of the original system of governing equations subject to a number of modeling assumptions .the motivation for this work comes from the field of flow control where many emerging applications involve control and optimization of free surface phenomena .the particular applications underlying this research concern optimization of complex thermo fluid phenomena occurring in liquid metals during welding , see volkov _ _et al.__ .while the mathematical foundations for the optimal control of time dependent free boundary problems are relatively well understood , such approaches tend to result in computational problems of significant complexity even for simple models .the main difficulty arising when methods of the optimal control , or more broadly , the calculus of variations are applied to such problems is that some of the optimality conditions have the form of partial differential equations ( pdes ) defined on interfaces which evolve with time .needless to say , such problems tend to be quite hard to solve for non trivial applications . on the other hand , this framework becomes much more tractable when time _ independent _ free boundary problems are considered instead .moreover , on a more practical level , fluid flows with free surfaces may generate `` subgrid scale '' features which are particularly difficult to compute , and it is therefore desirable to account for their effect in the average balance of mass and momentum in a systematic manner . in this paper we propose and test a simple mathematical model , in the form of a system of coupled pdes of the free boundary type , representing the time averaged conservation of mass and momentum in a given time dependent problem with free surfaces .while such averaging approaches are well established in the study of turbulent flows in domains with fixed boundaries , giving rise to the well known reynolds averaged navier stokes ( rans ) equations , see , e.g. , pope , the additional complication in the present problem is that one also needs to take into account the effect of the fluctuations of the location of the free surfaces on the average mass and momentum balance .our approach to this problem relies on a number of simplifying assumptions which are all clearly identified . 
in the spirit of the `` closure problem '' arising in turbulence modeling ,see ref ., in order to close the resulting system of equations one needs to express average products of fluctuating quantities in terms of average quantities .however , in contrast to the classical closure problem where the reynolds stresses are modeled with terms defined in the bulk of the fluid , in the present problem , subject to certain assumptions , such closure terms will appear in the boundary conditions defined on the effective free boundary .we will also discuss some very simple strategies for constructing such closures .the question of ensemble averaged , or time averaged , description of flows with interfaces has received some attention in the literature and we mention here the work of dopazo , hong & walker and brocchini & peregrine which also contains references to a number of earlier attempts .these problems were recently revisited in the context of the derivation and validation of suitable models for multiphase reynolds averaged navier stokes ( rans ) equations and large eddy simulation ( les ) .we also mention the recent investigation by wacawczyk & oberlack where a number of closure strategies were proposed for this type of flows . finally , we add that the related question of homogenization of free boundary problems is an emerging topic in the mathematical analysis of pdes , see , e.g. , schweizer .a detailed description of various computational methods applied to multiphase flows can be found in the monograph by prosperetti & tryggvason . as compared to these earlier studies , novel aspects of the present investigationare that , first , we want to compute steady state solutions , which is motivated by the optimization applications mentioned above , and secondly , we want our averaged flows to feature _ sharp _ effective surfaces , so that the free boundary property of the original problem is preserved in its averaged version .in contrast , we note that the formulations developed in refs . and lead to interfaces , referred to as `` surface layers '' , characterized by finite thickness .we also wish to highlight that although brocchini & peregrine derived averaged equations taking into account the fluctuations of the free boundaries and also proposed a simple closure model , to the best of our knowledge , there have been no attempts to actually compute such solutions for nontrivial problems which is one of the contributions of the present work .the resulting system of pdes represents the averaged balance of mass and momentum which has the form of a steady state free boundary problem .since such problems tend to be difficult to solve numerically , we also propose a solution approach based on shape optimization which is well adapted to the numerical solution of this class of problems . in order to test our approachwe choose a very simple model problem which , while allowing us to focus on certain methodological aspects , still captures some essential features of the motivating application , namely , the transfer of mass and momentum to the weld pool via droplets , see figure [ fig : model ] .this model describes the two dimensional ( 2d ) , time periodic impingement of droplets on the free surface of the fluid in a container . 
in view of the comments made above, we see that formulation of an optimal control problem for the original time dependent system would require us to satisfy certain optimality conditions on the boundary of each individual moving droplet in addition to conditions on the free surface of the liquid in the pool .on the other hand , the concept of the effective free surface allows us to replace this optimization problem with a simpler one , which is also computationally more tractable , where the optimality conditions have to be imposed on the stationary effective surfaces .thus , as one application , the proposed approach will allow us to extend the optimization formulation developed in volkov __ et al.__ to include the effect of the mass transfer into the weld pool via droplet impingement .we remark that droplet impingement onto a thin liquid film is a phenomenon with manifold manifestations in technology , including chocolate processing , spray painting , corrosion of turbine blades , fuel injection in internal combustion engines , and aircraft icing .it also occurs in many natural phenomena such as the erosion of soil and the dispersal of spores and micro - organisms .a considerable amount of literature is available as concerns the numerical modeling of droplet impingement onto a solid surface .harlow & shannon were the first to simulate this phenomenon and several other authors have applied the volume of fluid ( vof ) based approaches such as ripple and sola vof to understand droplet impingement phenomena ._ also performed a numerical investigation and experimental characterization of the heat transfer from a periodic impingement of droplets .the structure of this paper is as follows : in the next section we present the formulation of the problem in general terms , in the following section we introduce our model problem and in section [ sec : efs_closure ] we discuss a very simple closure strategy which may be suitable for this problem , in section [ sec : efs_steady ] we introduce a shape optimization approach to the numerical solution of the resulting averaged equations , whereas in section [ sec : efs_results ] we present some computational results together with a discussion ; final conclusions are deferred to section [ sec : final ] and some technical results concerning solution of the shape optimization problem in section [ sec : efs_steady ] are collected in appendix [ sec : efs_grad ] .in order to simplify the presentation of our approach , we will consider a two dimensional problem formulated in a general domain , shown schematically in figure [ fig : domains ] , where and represent the subdomains occupied , respectively , by the immiscible liquid and gas phases , whereas represents the liquid gas interface ( e.g , droplet boundary or the free surface of the weld pool ) . for a general description of the equations and boundary conditions governing multiphase flows we refer the reader to monograph by prosperetti & tryggvason .we assume that our model problem involves the following dependent variables 1 .velocity ^t \ , : \ , \omega \times ( 0,t ] \quad \to \mathbb{r}^2 ] and 3 .position of the free surface } , \\gamma_{lg}(t ) \triangleq \overline{\omega}_l(t ) \cap \overline{\omega}_g(t) ] , denotes the identity matrix and the viscosity coefficient or in the liquid and gas phase , respectively .the symbol denotes the gravitational acceleration .equations and represent conservation of mass , whereas equations and represent conservation of momentum in both the liquid and gas phase . 
systems and are subject to the following boundary conditions on the liquid gas interface [ bc_interface ] ^g . \mathbf{n } & = \gamma \kappa\ , \mathbf{n } \quad & & \text{on } \quad \gamma_{lg } , \label{bc_interface_b}\end{aligned}\ ] ] where and are the unit normal and tangential vectors on the interface , is the interface curvature , the surface tension ( a material property assumed constant ) , whereas the subscripts and ( with or without the vertical bar ) denote quantities defined in the corresponding phases ( figure [ fig : domains ] ) .we note that the vector valued condition implies the balance of both the normal and tangential stresses . for simplicity , on the far boundary , cf .figure [ fig : domains ] , we adopt the no slip boundary condition as regards the mathematical description of free boundary problems , there are two main paradigms , namely , ( i ) `` interface tracking '' approaches , see neittaanmaki __ and ( ii ) `` interface capturing '' approaches , see sethian . while description , featuring the location of the interface as the dependent variable , belongs to the first category , for the purpose of developing our formulation an interface capturing approach will be more suitable and we employ a technique known as `` volume of fluid '' ( vof ) .however , our computations of the effective surfaces will be ultimately carried out with an `` interface tracking '' approach , see section [ sec : efs_steady ] .a detailed description of the vof methodology can be found in the paper by hirt & nichols , see also monograph by prosperetti & tryggvason .this method employs the `` volume fraction '' as an indicator function to mark different fluids } \qquad f(\x , t ) = \left \ { \begin{array}{ll } 1 & \x \in \omega_l \\ 0 & \x \in \omega_g \end{array } \right .. \ ] ] while in the continuous setting the interface is sharp and the vof function may take the values of 0 and 1 only , in a numerical approximation there may exist a transition region where and the fluid can be treated as a mixture of the two fluids on each side of the interface .the values of the indicator function are associated with each fluid and hence are propagated as lagrangian invariants .therefore , the indicator function obeys a transport equation of the form based on the indicator function , local material properties such as the density of the fluid can be expressed as \rho_g .\label{rho_lg } \ ] ] relationship allow us to rewrite formulation in an equivalent form as one system of conservation equations defined in the _ entire _ flow domain where the fluid properties are , in general , discontinuous across the interface between the two fluids . in this single field representation the two fluids are identified by indicator function , whereas the material properties are expressed as piecewise constant functions and can be written in terms of their values on either side of interface , cf . .[ ns_domain ] & = 0 & & \text{in } \\omega , \label{ns_domain_a}\\ \frac{\partial \rho(f)\ , { \mathbf{v}}}{\partial t}+\bnabla \cdot\left[\rho(f ) \ , { \mathbf{v}}{\mathbf{v}}\right]&= \bnabla \cdot { \boldsymbol{\sigma}}+ \int_{\gamma_{lg } } \gamma \kappa\ , \n \delta(\x-\x ' ) \ , ds(\x ' ) \quad & & \text{in } \ \omega , \label{ns_domain_b } \\\frac{\partial f}{\partial t } + ( { \mathbf{v}}\cdot \bnabla ) f & = 0 & & \textrm{in } \\omega , \label{ns_domain_c}\end{aligned}\ ] ] where denotes the dyadic product , i.e. 
, the tensor defined as {ij } = \left [ { \mathbf{v}}\right]_{i}\left [ { \mathbf{v}}\right]_{j} ] , , we thus define the pointwise time average as where the time window is assumed large compared to the time scale of the random fluctuations associated with free boundaries . since in the present problemwe are interested in steady solutions , we take , so that the averaged variables do not depend on time . in the conventional reynolds decomposition , the chaotically varying flow variables are replaced by the sums of their time averages and fluctuations , i.e. , by definition , the time average of a fluctuating quantity is zero , i.e. , , and .we also note that the averaging operator commutes with differentiation with respect to the space variables .we shall furthermore assume that our derivation of the averaged equation follows the general development presented in hong & walker , although we use a somewhat different notation adapted to the present problem .we begin with continuity equation and decompose the dependent variables as in .the equation is then time averaged and we obtain we need to re express the right hand side of equation to eliminate . from and applying the reynolds decomposition to the indicator function we obtain \rho_l + [ 1-\langle f \rangle(\x ) - f'(t,\x ) ] \rho_g \\ & = \langle \rho \rangle+ f ' ( t,\x)(\rho_l-\rho_g ) \end{aligned}\ ] ] which allows us to identify .using we can now deduce so that becomes \langle { \mathbf{v}}\rangle ) \big\ } = - ( \rho_l-\rho_g)\bnabla \cdot \langle f ' { \mathbf{v } } ' \rangle.\ ] ] which is the reynolds averaged form of the continuity equation , where the right hand side ( rhs ) terms represent the average effect of the fluctuations of the free boundary .we now turn our attention towards momentum equation . in order to simplify the formulation of the present problem we make the following the fluctuations of viscosity ] .this is in addition to the closures required for the `` classical '' reynolds stress tensor . while modeling the latter expressions is a rather well advanced area , development of closures for product terms corresponding to fluctuations of the free boundaries has been the subject of relatively few investigations , see , e.g. , refs . and , which focused on the case with a diffuse interface . a very simple closure model for these terms adapted to the present formulation of the problem with a sharp interface , cf .assumption [ assume1 ] , will be presented in section [ sec : efs_closure ] .the question of closure models for the reynolds stress terms will not be considered in this work . in the derivation of the closure modelsthe quadratic and cubic products involving the fluctuation fields , and will need to be expressed _solely _ in terms of the time averaged fields , and .as regards the dependence on , this means that expressions for these closures will depend on the location relative to the effective free surface and , evidently , the components of the tensors and are nonvanishing only in a close proximity of the free boundary , cf .figure [ fig : domains ] . 
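to make the averaging operations above concrete , the sketch below performs the reynolds decomposition on sampled time series and evaluates the correlation of indicator and velocity fluctuations entering the averaged continuity equation ; the synthetic on / off signal imitates a point on the axis of the droplet train considered later in the paper ( rigid droplets of radius R falling at constant speed v_d through a motionless gas , released every period T ) , and all numerical values are illustrative assumptions .

```python
import numpy as np

# droplet-train parameters (illustrative; they satisfy 2R < v_d*T, so that
# consecutive droplets do not overlap)
T, v_d, R = 1.0, 3.0, 0.5
dt = 1e-3
t = np.arange(0.0, 200 * T, dt)      # average over many release periods

tau = 2.0 * R / v_d                  # time one droplet needs to cross the point
f = ((t % T) < tau).astype(float)    # indicator function f(t) at the point
v_y = -v_d * f                       # vertical velocity: -v_d inside a droplet,
                                     # 0 in the (assumed motionless) gas

f_mean, v_mean = f.mean(), v_y.mean()
f_prime, v_prime = f - f_mean, v_y - v_mean    # Reynolds decomposition
flux = (f_prime * v_prime).mean()              # averaged product <f' v_y'>

print(f"<f> = {f_mean:.4f}, <v_y> = {v_mean:.4f}, <f'v_y'> = {flux:.4f}")
# sanity check: for an on/off signal like this one, <f'v_y'> = -v_d <f>(1 - <f>)
print(-v_d * f_mean * (1.0 - f_mean))
```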
from the point of view of the formulation of a computation oriented modelit is therefore not `` economical '' to introduce new terms into the averaged equations which would be nonzero only in a very small fraction of the domain .we therefore propose the following simplifying approach in which the averaged fluctuation terms involving tensors and defined in the bulk are approximated with suitable terms defined on the effective boundary .this can be done by integrating the terms involving and in over their support and then using the divergence theorem ( in principle , the supports of these two terms may in general be different , but for the sake of simplicity we assume here that they coincide ; this simplification does not in any way affect the accuracy of the proposed approach ) .we remark that analogous ideas were also pursued by brocchini & peregrine and by brocchini. one important difference between these approaches and the formulation explored here concerns the description of the effective boundary ( explicit in refs and versus intrinsic considered here ) . noting that the fields and are discontinuous at the effective surface ( which is contained inside the integrations domain ) , and vanish on , we obtain [ eq : i1 ] ^g \ , d\sigma & = & \int_{\tg } a \ , d\sigma , \label{eq : i1a } \\ i_2 & = \int_{\omega ' } \bnabla\cdot\bb \ , d\omega & = & \int_{\tg } \left [ \n\cdot\bb\right]_l^g \ , d\sigma & = & \int_{\tg } \b \ , d\sigma \label{eq : i1b}\end{aligned}\ ] ] in which the fields and are defined in terms of the jumps of and as ^g , \qquad \b = \begin{bmatrix } b_1 \\ b_2 \end{bmatrix } \triangleq \left [ \n\cdot\bb\right]_l^g .\label{eq : ab}\ ] ] we thus see that in the _mean sense _ the fluxes due to the fluctuating terms and in the averaged mass and momentum equations and can be realized by the terms and , cf . , defined on the effective boundary .this leads to the following which has two parts 1 .we replace the source term in averaged mass conservation equation with an additional term in the corresponding boundary condition , 2 .we replace the source term in averaged momentum conservation equation with an additional term in the corresponding boundary condition , so that the following system of equations is obtained ( rewritten here in the two subdomains together with all boundary conditions ) [ nsr2 ] ^g & = a\,\n \quad & & \textrm{on } \\tilde{\gamma}_{lg } , \label{nsr2e } \\\n \cdot \left[\langle { \boldsymbol{\sigma}}\rangle \right]_l^g & = \gamma \kappa\ , \mathbf{n } + \b \quad & & \textrm{on } \\tilde{\gamma}_{lg } , \label{nsr2f } \\\left({\langle \mathbf{v } \rangle }+ { \langle \mathbf{v } \rangle } \big|_g \right ) \cdot \n&=0 & & \textrm{on } \\tilde{\gamma}_{lg } , \label{nsr2g}\end{aligned}\ ] ] where boundary condition corresponds to condition in the situation when the normal velocity at the effective surface is allowed to have a discontinuity , cf . .[ assume2 ] as is evident from figure [ fig : domains ] , this assumption is satisfied when the subregion forms narrow bands along the effective free boundary which happens when the fluctuations of the free boundary occur at a length scale significantly smaller than the characteristic dimension of the entire domain . 
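the following fragment sketches how the surface closure terms could be assembled on a discretized effective surface once the averaged fluctuation fluxes are available on both sides of the interface ; the field names , the jump convention ( gas - side value minus liquid - side value ) , the identification of the mass flux with a multiple of the averaged indicator - velocity correlation , and the sample values are all assumptions made here for illustration and are not taken from the text .

```python
import numpy as np

def surface_closure_terms(normals, A_l, A_g, B_l, B_g):
    """Assemble the scalar term a and the vector term b on surface nodes.

    normals : (N, 2) unit normals on the effective surface
    A_l, A_g: (N, 2) averaged fluctuation flux vector (e.g. a multiple of
              <f'v'>) evaluated on the liquid and gas sides
    B_l, B_g: (N, 2, 2) averaged fluctuation flux tensor from the momentum
              balance, evaluated on the two sides
    The jumps are taken as (gas value) - (liquid value), an assumed convention.
    """
    a = np.einsum("ni,ni->n", normals, A_g - A_l)     # scalar per node
    b = np.einsum("ni,nij->nj", normals, B_g - B_l)   # vector per node
    return a, b

# tiny synthetic example on three surface nodes with upward normals
n = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])
A_l = np.array([[0.0, -0.5], [0.0, -0.4], [0.0, -0.3]])
A_g = np.zeros_like(A_l)
B_l = np.tile(np.diag([0.0, -0.2]), (3, 1, 1))
B_g = np.zeros_like(B_l)
print(surface_closure_terms(n, A_l, A_g, B_l, B_g))
```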
as regards the averaged conservation equations , assumption [ assume2 ] has the effect of reducing , or localizing , the influence of the averaged terms involving fluctuation of the free boundary to the effective free boundary .the as of now undefined functions and represent the required closure models and need to be determined separately for every flow problem .we add , that since these functions depend on the location of the effective free surface , boundary conditions and are in fact geometrically nonlinear .we also remark that in ref .closure models for certain free boundary problems were derived based on an analogous concept of integral balances in the surface layer .construction of a very simple closure model for functions and applicable to a model problem introduced in the next section will be presented in section [ sec : efs_closure ] .while up to this point our discussion has been concerned with a generic two phase , free boundary problem , we will from now on focus on a specific flow configuration .thus , to fix attention , we will consider the flow set up shown schematically in figure [ fig : model ] .it features droplets entering the domain periodically through the top boundary and impinging on the free surface resulting in sloshing . on the lateral boundaries slip boundary condition is applied and we observe that , respectively , an unsteady or steady contact line will appear where the time dependent interface , or the corresponding effective surface , intersects the boundary .while it is well known that subject to the classical no slip and free surface boundary conditions the contact line problem is not well posed , development of a both mathematically and physically consistent description of this problem still remains an open question .addressing this issue is beyond the scope of the present investigation , and our treatment of the contact line is a standard one : in the solution of the time dependent problem a suitable regularizing effect is achieved by discretization of the governing equations ( described further below ) , whereas in the solution of the steady problem with the effective surface regularization is introduced via formulation in terms of variational shape optimization .application of this numerical approach to a closely related problem with a contact line singularity is analyzed in detail by volkov and protas . in order to maintain a constant average ( over one period of droplet impingement ) mass of the fluid , the fluid is drained through the bottom boundary of the domain ( i.e. , suitable nonzero velocity boundary condition is applied there ) .solutions to the problem described above depend on the following three parameters 1 .length of the interval at which droplets are released , 2 .velocity of the droplet , and 3 .radius of the droplets .we emphasize that the choice of this particular model problem was in fact inspired by an industrial application described in volkov _ et . which has also motivated our broader research program. numerical solution of this time dependent free boundary problem is obtained using the solver ` interfoam ` which is a part of the library ` openfoam ` is based on the vof method .details of the numerical method and its implementation in ` interfoam ` can be found for instance in the ph.d .thesis of rusche .the resolution used in our calculations was grid points with a nondimensional time step of 0.05 . 
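as a small worked example of the mass balance implied by the drainage condition mentioned above , the sketch below estimates the uniform outflow speed at the bottom boundary that compensates , on average , the mass carried by one two - dimensional droplet per release period , and checks that consecutive droplets do not overlap ; the container width and the uniform - outflow profile are assumptions introduced here , not quantities specified in the text .

```python
import numpy as np

def drain_velocity(R, T, W):
    """Uniform outflow speed at the bottom boundary that balances, on average,
    the area (2D mass) of one droplet -- a disc of radius R -- injected every
    period T, for a container of width W.  W and the uniform-outflow profile
    are modelling assumptions made for this illustration."""
    return np.pi * R**2 / (T * W)

def droplets_do_not_overlap(R, T, v_d):
    """Droplets released every T and falling at speed v_d stay separated
    provided the spacing v_d*T exceeds one diameter 2R."""
    return v_d * T > 2.0 * R

R, T, v_d, W = 0.5, 1.0, 3.0, 4.0       # illustrative nondimensional values
print(drain_velocity(R, T, W))           # about 0.196
print(droplets_do_not_overlap(R, T, v_d))
```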
in order to characterize the time dependent and mean fields obtained as solutions to this problem , in figure [ fig : vof ] we present several snapshots of the indicator function at different time levels spanning two periods of droplet impingement . to fix attention ,the results presented in figure [ fig : vof ] were obtained using the following parameters , and .in section [ sec : efs_avr ] we introduced the reynolds decomposition of the flow variables into the time averaged quantities ( denoted with angle brackets ) and fluctuating quantities ( denoted with primes ) .the terms involving averaged products of fluctuating quantities appear as unknowns in averaged equations and must be closed with suitable `` closure models '' , analogous to those which arise in classical turbulence modeling approaches .the most commonly used methods of turbulence modeling are surveyed in monograph by pope .briefly speaking , depending on their mathematical structure , such approaches fall into two main categories , namely , algebraic models and differential models in which evolution of the quantities introduced to close the system is governed by additional pdes .some attempts at deriving closure models for two phase flows were already made by brocchini & peregrine and by brocchini who obtained such models for regimes characterized by different values of the turbulent kinetic energy and the turbulent length scale . in this section , we make an attempt to derive an extremely simple algebraic closure relationship based on an elementary model of the process defined by the following set of assumptions ( see also figure [ figavg ] ) .1 . droplets are spherical with radius and move as rigid objects , 2 .there is no collision or coalescence of droplets , 3 .droplets are falling periodically with frequency and constant velocity , 4 .the fluid outside droplets ( i.e. , the gas phase ) is motionless , 5 .the mean fields do not depend on the vertical coordinate .[ assume3 ] we observe that assumption [ assume3]b constrains the problem parameters so that .it is also to be noted that assumption [ assume3]e implies that the model is effectively one dimensional with variations only in the direction normal to the effective surface .while the above assumptions are rather far reaching ( in particular , the model does not include any effects of droplet impingement on the free surface ) , our objective here is to provide some preliminary insights concerning computation of effective free surfaces , and development of closures based on more accurate models is left to future research ( some possible directions are discussed briefly in section [ sec : final ] ) . we thus proceed to use assumptions [ assume3 ] in order to derive expressions for the fluctuating fields , and which will be given in terms of the mean fields ( or ) , and . these expressions will be in turn used to determine the fields and in . the coordinate system is shown in figure [ figavg ] .we begin by observing that in the model problem considered the horizontal velocity component vanishes identically , i.e. , , , , as do its mean and the corresponding fluctuation fields since the model considered assumes periodic behavior , without loss of generality we are going to focus on a single period of droplet impingement , i.e. 
, ] is the corresponding arclength coordinate .the new gradient with zero mean displacement in the normal direction is then obtained as }.\ ] ] the cost functional gradient is replaced with zero mean gradient in expressions and .in this section we present sample computations for the problem of determining the effective free surfaces in the flow described in section [ sec : model ] , see also figure [ fig : model ] . in order to calculate the cost functional gradient ( given by expression in appendix [ sec : efs_grad ] ) we need to solve `` direct '' system and adjoint system . both these solutions are obtained using the finite element method implemented in the comsol script environment. the domain ( figure [ fig : geom ] ) is discretized using approximately 4000 lagrangian elements with mesh size varying between 0.9 to 0.01 . in all computations presented here we used the physical parameters with values indicated in table [ val ] and these calculationswere performed using the navier stokes and poisson solvers available in comsol . for illustration purposes , in figures [ figsec]a and [ figsec]b we show the fields of the direct and adjoint vorticity obtained at the first iteration . before analyzing the solutions with effective free surfaces obtained for different parameters of the closure models , we validate the calculation of the cost functional gradient which is the main element of our computational approach , cf . section [ sec : efs_steady ] and appendix [ sec : efs_grad ] ..values of the physical parameters used in the computation . [ cols="^,^",options="header " , ] in this section we demonstrate the consistency of the gradients obtained using expression in appendix [ sec : efs_adj ] .a standard test consists in computing the gteaux differential of cost functional in some arbitrary direction using a finite difference technique and comparing it with the expressions for the same differential obtained using the gradient and riesz representation formula .the ratio of these two expressions , which is a function of the finite difference step size , is defined as proximity of to the unity is thus a measure of the accuracy of the cost functional gradient computed based on the adjoint field .figure [ figkappa1 ] shows the behavior of the quantity as a function of the parameter for different perturbations .we note that in all cases the quantity is quite close to unity for spanning over 5 orders of magnitude which indicates that our gradients are evaluated fairly accurately .figure [ figkappa1 ] reveals deviations of from unity for large values of , which is due to truncation errors , and also for very small , due to round off errors , both of which are well known effects .these inaccuracies do not affect the optimization process , since the deviations observed for very small are only an artifact of how expression is evaluated , whereas large values of ( or , equivalently , ) are outside the range of validity of the linearization on which the optimization approach is based .we also performed a grid refinement study of the cost functional gradients which indicated that the calculation of the gradients is not sensitive to the resolution .figure [ fig : cf ] shows the decrease of cost functional in the case with and without the closure terms and in boundary conditions as a function of the number of iterations .we observe that the proposed algorithm results in a steady convergence despite the complicated nature of the problem , although the rate of convergence is relatively slow , especially in 
the case when the closure model is present . in this sectionwe employ the computational approach developed in section [ sec : efs_steady ] and validated in section [ sec : efs_kappa ] to construct effective surfaces corresponding to different values of the three parameters characterizing the algebraic closure model introduced in section [ sec : efs_closure ] . in order to reveal different trends , in figures [ fig : ef]a , b ,c we show the effective surfaces obtained by changing one parameter with the other two held fixed . for comparison , in these figureswe also include the effective surfaces obtained without any closure model ( i.e. , with in ) .the parameters are chosen in such a way that the case with is present in all three figures [ fig : ef]a , b , c where it represents the intermediate solution .first of all , we observe that in all cases smooth effective surfaces have been obtained .as regards the results shown in figures [ fig : ef]a and [ fig : ef]b , we observe that the effective surfaces approach the effective surface obtained in the case with no closure as and , respectively .this is consistent with the fact that with and fixed , and with and fixed , cf . .in the other limits , i.e. , for large ( figure [ fig : ef]a ) and small ( figure [ fig : ef]b ) , we observe that the liquid column becomes much thinner . as regards the dependence on droplet velocity , from we observe that would correspond to the case with a vanishing closure model ( i.e. , ) , however , this value of is outside the range of parameter values consistent with assumption [ assume3]b .hence , convergence of effective surfaces to the surface corresponding to the case with no closure is not observed in figure [ fig : ef]c .we observe that in the proposed model the closure terms contribute additional flux of momentum in the direction tangential to the effective surface which can be interpreted as additional shear stress , cf . and .in the cases with large and small , corresponding to a thinner liquid column , the effect of the closure model could be compared to an increase of the surface tension ( although this analogy is rather superficial , since the surface tension contributes to the normal stresses ) .we also add that , in addition to the effective surfaces presented in figures [ fig : ef]a , b , c , for some parameter values we also found solutions featuring asymmetric effective surfaces .this nonuniqueness of solutions is a consequence of the nonlinearity of the governing system which is reflected in the nonconvexity of optimization problem . since these asymmetric solutions are not physically relevant , at least not from the point of view of the actual applications of interest to us , we do not discuss them in this work . in problems with multiple solutionsin which such selection can not be done based on the properties of symmetry , one can identify the relevant solution as the one corresponding to the smallest value of cost functional reflecting the smallest residual . and( b ) , cf . , evaluated based on ( solid line ) the averaged solution of the time dependent problem and ( dashed line ) the closure model described in section [ sec : efs_closure ] , cf . 
.the data is shown as a function of the normalized arclength coordinate along the effective surface measured from the top , and the values of the parameters are , and .,scaledwidth=100.0% ] finally , in figure [ fig : ef]d , we perform a comparison between the effective free surfaces constructed using the algebraic closure model from section [ sec : efs_closure ] and using the time averaged solutions of the original unsteady problem to evaluate the terms and via relations and .the parameters used in this case are , and , and the corresponding data for obtained by averaging over 200 periods of droplet impingement in the time dependent case is shown in figure [ fig : b ] ( the data for the term is not shown as it vanishes by construction in both cases , cf . ) .for comparison , in figure [ fig : b ] we also indicate the values of the components of obtained from expressions , and we see that the predictions of the simple algebraic closure model developed in section [ sec : efs_closure ] are not too far off from the actual data : as regards the vertical component , they are within the same order of magnitude ( figure [ fig : b]b ) , whereas for the horizontal component the difference is ( figure [ fig : b]a ) . we add that for both and there exists a part of the effective surface , located towards the bottom of the liquid column , where the agreement between the closure model and the actual data is particularly good . in figure[ fig : ef]d we note an overall fairly good agreement between the effective surfaces obtained with terms and evaluated in the two different ways .this is , in particular , the case as regards the top part of the liquid column which , somewhat interestingly , does not coincide with the region where the closure model is the most accurate according to the data from figure [ fig : b ] .we attribute this effect to the nonlinear and nonlocal character of the averaged free boundary problem .finally , we conclude that , despite its simplicity , our closure model performs relatively well in the present problem .in this investigation we revisited the concept of `` effective free surfaces '' arising in the solution of time averaged hydrodynamic equations in the presence of free boundaries .the novelty of our work consists in formulating the problem such that there is a _sharp _ interface separating the two phases in the time averaged sense , an approach which appears preferable from the point of view of a number of possible applications .the resulting system of equations is of the free boundary type and we also propose a flexible and efficient numerical method for the solution of this problem which is based on the shape optimization formulation .subject to some clearly stated assumptions , the terms representing the average effect of the boundary fluctuations appear in the form of interface boundary conditions , and a simple algebraic model is proposed to close these terms ( this is to be contrasted with the `` classical '' reynolds stresses which are defined in the bulk of the flow ) .this work is motivated by applications of optimization and optimal control theory to problems involving free surfaces . 
in such problems dealing with time dependent governing equations leads to technical difficulties , many of which are mitigated when methods of optimization are applied to a _steady _ problem with effective free surfaces .the model problem considered in this study concerns impingement of free falling droplets on a liquid with a free surface in two dimensions andis motivated by optimization of the mass and momentum transfer phenomena in certain advanced welding processes , see volkov _ _ et al.__ .the computational results shown in this paper are , to the best of our knowledge , the first ever presented for a problem of this type where the effective boundary has the form of a sharp interface .the computed effective free surfaces exhibit a consistent dependence on the problem parameters introduced via the closure model , and despite the admitted simplicity of this model , these results match well the effective surfaces obtained using data from the solution of the time dependent problem .a key element of the proposed approach is a closure model for the fluctuation terms representing the motion of the free surfaces .the model we developed here is a very elementary one resulting in simple algebraic relationships . as in the traditional turbulence research ,more advanced and more general closure models can be derived based on the pdes describing the transport of various relevant quantities such as the turbulent kinetic energy , the turbulent length scale , etc .in fact , such approaches have already been explored in the context of free boundary problems leading to more general , albeit less explicit , closure models than the model considered here .in addition to investigating such more advanced closure models , our future work will focus on quantifying the effect of and , ultimately , weakening the assumptions employed to derive the present approach , so that it can be applied to a broader class of problems , especially interfacial . at the same time, we will seek to incorporate this approach into the optimization oriented models of complex thermo fluid phenomena occurring in welding processes , such as discussed in volkov _. al__ .while the present investigation responded to needs arising from a certain class of applications , it has also highlighted a number of more fundamental research questions which it will be worthwhile to explore based on even simpler model problems such as , e.g. , capillary or gravity waves on a flat interface .the authors wish to acknowledge generous funding provided for this research by the natural sciences and engineering research council of canada ( collaborative research and development program ) and general motors of canada .the authors are also thankful to dr .jrme hoepffner for interesting discussions , and to the two anonymous referees for insightful and constructive comments .in this appendix we obtain expressions for the gradient of the cost functional with respect to the position of the effective interface .characterization of this gradient requires one to differentiate solutions of governing pdes system with respect to the shape of the domain on which these solutions are defined .this is done properly using tools of the shape differential calculus which are briefly introduced below . in the calculations we will assume that the problem parameters are given .hereafter primes ( ) will denote perturbations ( shape differentials ) of the different variables which is consistent with the convention used in the literature . 
since fluctuation variables do not appear in this appendix ,there is no risk of confusion resulting from this abuse of notation . in the shape calculus perturbations of the interface geometrycan be represented as where is a real parameter , is the original unperturbed boundary and is a `` velocity '' field characterizing the perturbation .the gteaux shape differential of a functional such as with respect to the shape of the interface and computed in the direction of perturbation field is given by in the sequel we will need the following fundamental result concerning shape differentiation of functionals defined on smooth domains and on their boundaries , and involving smooth functions and as integrands where and are the shape derivatives of and , and is the curvature of the boundary . in order to obtain the gradients of the cost functional with respect to the control variable , we first need to obtain the gteaux ( directional ) derivative of . using relation and substituting and , we obtain the following expression for the cost functional differential where ( \mathbf{e_y } \cdot \n ) ( \mathbf{e_y } \cdot \n').\end{gathered}\ ] ] using the following identities of shape calculus and , where and are the tangential gradient and the laplace beltrami operator , we obtain ^g - 2 \bnabla_{\gamma } ( \x ' \cdot \n ) \cdot { \langle \boldsymbol{\sigma}_l^{\mu } \rangle}\cdot \n + 2 \bnabla_{\gamma } ( \x ' \cdot \n ) \cdot { \langle \boldsymbol{\sigma}_g^{\mu } \rangle}\cdot \n + \gamma \delta_{\gamma } ( \x ' \cdot \n)\bigg ) \\ + 2 ( \rho_l- \rho_g)\left[\frac{r}{t^2 } \sqrt{4 v_d^2t^2-\pi^2 r^2}- \frac{r^2}{v_d^2 t^4}\left ( 4 v_d^2t^2-\pi^2 r^2 \right ) \right ] ( \mathbf{e_y } \cdot \n ) \left(\mathbf{e_y } \cdot \bnabla_{\gamma } ( \x ' \cdot \n ) \right ) \\ + \left ( \frac{1}{2 } \kappa\ , \chi^2 + \chi \,\frac{\partial \chi}{\partial n}\right ) ( \x ' \cdot \n)\bigg ] d\sigma.\end{gathered}\ ] ] considering gteaux differential and invoking the riesz representation theorem allows us to extract the cost functional gradient through the following identity where for simplicity the inner product was used and which implies that the gradient is a scalar valued function describing the sensitivity to shape perturbations in the normal direction .we note that expression contains terms which are already in the riesz form with the perturbation appearing as factor , in addition to terms involving the shape derivatives of the state variables , namely and .the presence of these terms makes it impossible at this stage to use to identify the gradient . 
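before continuing with the adjoint analysis , the following self - contained check illustrates the shape - differentiation identity for domain integrals quoted above on a case where everything is known in closed form : a disc perturbed uniformly in the normal direction ; the test functional and all numerical values are illustrative and unrelated to the actual cost functional of the paper .

```python
import numpy as np

# Check that d/dtau of J(Omega_tau) = \int_{Omega_tau} f dx, with the boundary
# perturbed as x_tau = x + tau*Z, equals the boundary integral of f (Z . n).
# Test case: Omega = disc of radius r0, f = x^2 + y^2, Z = n (uniform outward
# normal displacement), so Omega_tau is a disc of radius r0 + tau and
# J(tau) = pi*(r0 + tau)**4 / 2 in closed form.
r0, tau = 1.3, 1e-5

def J(radius):
    # closed form of the domain integral of x^2 + y^2 over a disc
    return np.pi * radius**4 / 2.0

# finite-difference derivative with respect to the normal displacement
fd_derivative = (J(r0 + tau) - J(r0 - tau)) / (2.0 * tau)

# boundary-integral prediction \int_Gamma f (Z . n) ds with Z . n = 1
m = 20_000
theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
x, y = r0 * np.cos(theta), r0 * np.sin(theta)
f_on_gamma = x**2 + y**2                          # equals r0^2 on the circle
boundary_integral = np.sum(f_on_gamma * 1.0 * r0) * (2.0 * np.pi / m)

print(fd_derivative, boundary_integral)           # both close to 2*pi*r0**3
```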
in order to transform the remaining part of relation into a form consistent with the riesz representation it is necessary to define suitable _ adjoint variables _ and the corresponding adjoint system .consider the weak form of system for the variables and \cdot { \langle \mathbf{v } \rangle } ^{*}- ( \bnabla \cdot { \langle \mathbf{v } \rangle } ) \langle p \rangle ^ { * } d\x + \\ \label{ns_weak } \int_{\omega_g}\left[\rho_g{\langle \mathbf{v }\rangle } \cdot \bnabla { \langle \mathbf{v } \rangle } - \bnabla \cdot { \langle \boldsymbol{\sigma}_g^{\mu } \rangle}- \rho_g \g\right ] \cdot { \langle \mathbf{v } \rangle } ^{*}- ( \bnabla\cdot{\langle \mathbf{v } \rangle } ) \langle p \rangle ^ { * } d\x=0\end{aligned}\ ] ] after integrating the second order terms by parts , becomes \cdot { \langle \mathbf{v } \rangle } ^{*}- { \langle \mathbf{v } \rangle } \cdot \bnabla \langle p\rangle^ * + { \langle \boldsymbol{\sigma}_l^{\mu } \rangle } : \bnabla{ \langle \mathbf{v } \rangle } ^ { * } d\x + \int_{\omega_g}\left[\rho_g{\langle \mathbf{v } \rangle } \cdot \bnabla { \langle \mathbf{v } \rangle } - \rho_g \g\right ] \cdot { \langle \mathbf{v } \rangle } ^ { * } \\ \label{ns_weak_int } - { \langle \mathbf{v } \rangle } \cdot\bnabla \langle p \rangle^ * + { \langle\boldsymbol{\sigma}_g^{\mu } \rangle } : \bnabla{ \langle \mathbf{v } \rangle } ^ * \ , d\x - \int_{\tg } \n\cdot { \langle \boldsymbol{\sigma}_l^{\mu } \rangle}\cdot { \langle \mathbf{v } \rangle } ^ { * } d\sigma - \int_{\tg } \n\cdot { \langle \boldsymbol{\sigma}_g^{\mu } \rangle}\cdot { \langle \mathbf{v } \rangle } ^ { * } d\sigma \\ - \int_{\tg } \n\cdot { \langle \mathbf{v } \rangle } \ , \langle p \rangle^ * d\sigma = 0 \end{gathered}\ ] ] next , using relation and shape differentiating , we obtain \cdot { \langle \mathbf{v } \rangle } ^{*}- \bnabla \langle p \rangle ^ * \cdot{\langle \mathbf{v } \rangle } ' + { \langle \boldsymbol{\sigma}_l^{\mu } \rangle } ' : \bnabla { \langle \mathbf{v } \rangle } ^ * d\x + \\ \nonumber \int_{\omega_g } \left[\rho_g { \langle \mathbf{v } \rangle } \cdot\bnabla { \langle \mathbf{v } \rangle } ' + \rho_g { \langle \mathbf{v } \rangle } ' \cdot \bnabla { \langle \mathbf{v } \rangle } \right ] \cdot { \langle \mathbf{v } \rangle } ^{*}- \bnabla \langle p \rangle ^*\cdot{\langle \mathbf{v } \rangle } ' + { \langle \boldsymbol{\sigma}_g^{\mu } \rangle } ' : \bnabla { \langle \mathbf{v } \rangle } ^ * d\x+ \mathcal{i}=0 \label{ns_weak_int_a}\end{gathered}\ ] ] where \cdot { \langle \mathbf{v } \rangle } ^{*}- { \langle \mathbf{v } \rangle } \cdot \bnabla \langle p \rangle^ * + { \langle \boldsymbol{\sigma}_l^{\mu } \rangle } : \bnabla { \langle \mathbf{v } \rangle } ^*+\kappa\ , \n\cdot { \langle \boldsymbol{\sigma}_l^{\mu } \rangle}\cdot{\langle \mathbf{v } \rangle } ^ * \\ + \frac{\partial } { \partial n } \left ( \n\cdot { \langle \boldsymbol{\sigma}_l^{\mu } \rangle}\cdot{\langle \mathbf{v } \rangle } ^*\right ) + \left [ \rho_g { \langle \mathbf{v } \rangle } \cdot \bnabla { \langle \mathbf{v } \rangle } - \rho_g \g\right ] \cdot { \langle \mathbf{v } \rangle } ^{*}-{ \langle \mathbf{v } \rangle } \cdot\bnabla \langle p \rangle^ * + { \langle \boldsymbol{\sigma}_g^{\mu } \rangle } : \bnabla \mathbf{v^*}+ \kappa\ , \n\cdot { \langle \boldsymbol{\sigma}_g^{\mu } \rangle}\cdot{\langle \mathbf{v } \rangle } ^ * \\ + \frac{\partial } { \partial n } \left ( \n\cdot { \langle \boldsymbol{\sigma}_g^{\mu } \rangle}\cdot{\langle \mathbf{v } \rangle } ^ * \right ) \bigg \ } ( \x ' \cdot \n)\ , d\sigma - 
\int_{\tg } \big [ \n\cdot { \langle \boldsymbol{\sigma}_l^{\mu } \rangle}'\cdot{\langle \mathbf{v } \rangle } ^*+ \n'\cdot { \langle \boldsymbol{\sigma}_l^{\mu } \rangle}\cdot{\langle \mathbf{v } \rangle } ^*+\n\cdot { \langle \boldsymbol{\sigma}_g^{\mu } \rangle}'\cdot { \langle \mathbf{v } \rangle } ^*+ \n'\cdot { \langle \boldsymbol{\sigma}_g^{\mu } \rangle}\cdot{\langle \mathbf{v } \rangle } ^ * \\ + \n\cdot { \langle \mathbf{v } \rangle } ' \ , \langle p \rangle^*+ \n'\cdot{\langle \mathbf{v } \rangle } \,\langle p \rangle ^ * \big]\ , d\sigma.\end{gathered}\ ] ] performing one more time integration by parts in we obtain \cdot { \langle \mathbf{v } \rangle } ' - { \langle p \rangle } ' \ , \bnabla\cdot{\langle \mathbf{v } \rangle } ^ * - \mu\ , { \langle \mathbf{v } \rangle } ' \cdot \delta{ \langle \mathbf{v } \rangle } ^* - { \langle \mathbf{v } \rangle } ' \cdot \bnabla { \langle p \rangle}^ * \bigg\ } d\x \\ + \int_{\omega_g } \bigg\{\left[-\rho_g { \langle \mathbf{v } \rangle } \cdot\bnabla { \langle \mathbf{v } \rangle } ^ * + \rho_g{ \langle \mathbf{v }\rangle } ^*\cdot ( \bnabla { \langle \mathbf{v } \rangle } ) ^t\right]\cdot { \langle \mathbf{v } \rangle } ' - \langle p \rangle'\ , \bnabla\cdot{\langle \mathbf{v } \rangle } ^ * \ , - \mu\,{\langle \mathbf{v } \rangle } ' \cdot \delta{ \langle \mathbf{v } \rangle } ^ * \\ - { \langle \mathbf{v } \rangle } ' \cdot \bnabla { \langle p \rangle}^*\bigg\ } d\x + \mu \int_{\tg } \left[(\n \cdot \bnabla{ \langle \mathbf{v } \rangle } ^*)\cdot { \langle \mathbf{v } \rangle } ' + ( \n\cdot ( \bnabla { \langle \mathbf{v } \rangle } ^*)^t)\cdot { \langle \mathbf{v } \rangle } ' \right]d\sigma \\ + \mu \int_{\tg } \left[(\n \cdot \bnabla{ \langle \mathbf{v } \rangle } ^*)\cdot { \langle \mathbf{v } \rangle } ' + \n\cdot ( \bnabla { \langle \mathbf{v } \rangle } ^*)^t\cdot { \langle \mathbf{v } \rangle } ' \right ] d\sigma+ \mathcal{i } = 0,\end{gathered}\ ] ] where and can be identified as the adjoint variables with respect to and , provided they satisfy the following adjoint equations substituting for in we can simplify the expression for as follows \cdot { \langle \mathbf{v } \rangle } ^{*}- { \langle \mathbf{v } \rangle } \cdot\bnabla { \langle p \rangle}^ * + { \langle \boldsymbol{\sigma}_l^{\mu } \rangle } : \bnabla { \langle \mathbf{v } \rangle } ^*+ \ \kappa\ , \n\cdot { \langle \boldsymbol{\sigma}_l^{\mu } \rangle}\cdot{\langle \mathbf{v } \rangle } ^ * \\ + \frac{\partial } { \partial n } \left ( \n\cdot { \langle \boldsymbol{\sigma}_l^{\mu } \rangle}\cdot{\langle \mathbf{v } \rangle } ^*\right ) + \left[\rho_g { \langle \mathbf{v } \rangle } \cdot \bnabla { \langle \mathbf{v } \rangle } - \rho_g \g\right ] \cdot { \langle \mathbf{v } \rangle } ^{*}- { \langle \mathbf{v } \rangle } \cdot\bnabla { \langle p \rangle}^ * + { \langle \boldsymbol{\sigma}_g^{\mu } \rangle } : \bnabla { \langle \mathbf{v } \rangle } ^ * \\+ \kappa\ , \n\cdot { \langle \boldsymbol{\sigma}_g^{\mu } \rangle}\cdot{\langle \mathbf{v } \rangle } ^*+ \frac{\partial } { \partial n } ( \n\cdot { \langle \boldsymbol{\sigma}_g^{\mu } \rangle}\cdot{\langle \mathbf{v } \rangle } ^*)\bigg\ } ( \x ' \cdot \n)\ , d\sigma - \int_{\tg } \bigg[\left[\n\cdot { \langle \boldsymbol{\sigma}^{\mu } \rangle}'\cdot{\langle \mathbf{v } \rangle } ^*\right]_l^g- \bnabla_{\gamma } ( \x'\cdot \n)\cdot { \langle \boldsymbol{\sigma}_l^{\mu } \rangle}\cdot{\langle \mathbf{v } \rangle } ^ * \\ - \bnabla_{\gamma } ( \x'\cdot \n)\cdot { \langle \boldsymbol{\sigma}_g^{\mu } 
\rangle}\cdot{\langle \mathbf{v } \rangle } ^ * - \bnabla_{\gamma } ( \x'\cdot \n)\cdot{ \langle \mathbf{v } \rangle } \ , { \langle p \rangle}^ * \bigg]\ , d\sigma .\end{gathered}\ ] ] imposing the boundary conditions the expression for the differential of the cost functional becomes ( \mathbf{e_y } \cdot \n ) \left ( \mathbf{e_y } \cdot \bnabla_{\gamma } ( \x'\cdot \n ) \right )\\ + \left ( \frac{1}{2 } \kappa\ , \chi^2+\chi \frac{\partial \chi}{\partial n}\right ) ( \x'\cdot\n ) - \bigg[\left[\rho_l { \langle \mathbf{v } \rangle } \cdot \bnabla { \langle \mathbf{v } \rangle } - \rho_l \g\right ] \cdot { \langle \mathbf{v } \rangle } ^{*}- { \langle \mathbf{v } \rangle } \cdot \bnabla { \langle p \rangle}^ * + { \langle \boldsymbol{\sigma}_l^{\mu } \rangle } : \bnabla \mathbf{v^ * } \\+ \kappa\ , \n\cdot { \langle \boldsymbol{\sigma}_l^{\mu } \rangle}\cdot { \langle \mathbf{v }\rangle } ^*+ \frac{\partial } { \partial n } ( \n\cdot { \langle \boldsymbol{\sigma}_l^{\mu } \rangle}\cdot{\langle \mathbf{v } \rangle } ^ * ) + \left[\rho_g { \langle \mathbf{v } \rangle } \cdot \bnabla { \langle \mathbf{v } \rangle } - \rho_g \g\right ] \cdot { \langle \mathbf{v } \rangle } ^{*}- { \langle \mathbf{v } \rangle } \cdot \bnabla { \langle p \rangle}^ * + { \langle \boldsymbol{\sigma}_g^{\mu } \rangle } : \bnabla { \langle \mathbf{v } \rangle } ^ * \\ + \kappa \ , \n \cdot { \langle \boldsymbol{\sigma}_g^{\mu } \rangle}\cdot{\langle \mathbf{v } \rangle } ^*+\frac{\partial } { \partial n } \left ( \n\cdot { \langle \boldsymbol{\sigma}_g^{\mu } \rangle}\cdot{\langle \mathbf{v } \rangle } ^*\right)\bigg ] ( \x ' \cdot \n ) \bigg\ } d \sigma\end{gathered}\ ] ] which is now consistent with riesz s representation .finally , after applying tangential green s formula to the terms involving , the cost functional gradient can be identified as follows ( \mathbf{e_y } \cdot \n ) - \left[\rho_l { \langle \mathbf{v }\rangle } \cdot \bnabla { \langle \mathbf{v } \rangle } - \rho_l \g\right ] \cdot { \langle \mathbf{v } \rangle } ^ { * } \\ - { \langle \mathbf{v } \rangle } \cdot \bnabla { \langle p \rangle}^ * + { \langle \boldsymbol{\sigma}_l^{\mu } \rangle } : \bnabla { \langle \mathbf{v } \rangle } ^ * + \kappa\ , \n\cdot { \langle \boldsymbol{\sigma}_l^{\mu } \rangle}\cdot{\langle \mathbf{v } \rangle } ^*+ \frac{\partial } { \partial n } \left ( \n\cdot { \langle \boldsymbol{\sigma}_l^{\mu } \rangle}\cdot{\langle \mathbf{v } \rangle } ^*\right ) + \left[\rho_g { \langle \mathbf{v } \rangle } \cdot \bnabla { \langle \mathbf{v } \rangle } - \rho_g \g\right ] \cdot { \langle \mathbf{v } \rangle } ^ { * } \\ -{\langle \mathbf{v } \rangle } \cdot \bnabla { \langle p \rangle}^ * + { \langle \boldsymbol{\sigma}_g^{\mu } \rangle } : \bnabla { \langle \mathbf{v } \rangle } ^*+ \kappa \,\n\cdot { \langle \boldsymbol{\sigma}_g^{\mu } \rangle}\cdot{\langle \mathbf{v } \rangle } ^*+\frac{\partial } { \partial n } \left ( \n\cdot { \langle \boldsymbol{\sigma}_g^{\mu } \rangle}\cdot{\langle \mathbf{v } \rangle } ^ * \right)+ \gamma \kappa \bigg ] \quad \text{on } \quad \tg\cdot\end{gathered}\ ] ] consistency of this expression for the cost functional gradient is demonstrated in section [ sec : efs_kappa ] .e. labourasse , d. lacanette , a. toutant , p. lubin , s. vincent , o. lebaigue , j .-caltagirone and p. sagaut , `` towards large eddy simulation of isothermal two - phase flows : governing equations and a priori tests '' , _ int .j. multiphase flow _ * 33 * , 139 , ( 2007 ) .a. toutant , e. labourasse , o. 
lebaigue and o. simonin , `` dns of the interaction between a deformable buoyant bubble and a spatially decaying turbulence : a priori test for les two - phase flow modelling '' , _ comp . fluids _ * 37 * , 877 - 886 , ( 2008 ) . m. f. trujillo , j. alvarado , e. gehring , and g. s. soriano , `` numerical simulations and experimental characterization of heat transfer from a periodic impingement of droplets '' , _ j. heat trans . _ * 133 * , 122201 , ( 2011 ) .
|
in this investigation we revisit the concept of `` effective free surfaces '' arising in the solution of the time - averaged fluid dynamics equations in the presence of free boundaries . this work is motivated by applications of optimization and optimal control theory to problems involving free surfaces , where time - dependent formulations lead to many technical difficulties which are however alleviated when steady governing equations are used instead . by introducing a number of precisely stated assumptions , we develop and validate an approach in which the interface between the different phases , understood in the time - averaged sense , is sharp . in the proposed formulation the terms representing the fluctuations of the free boundaries and of the hydrodynamic quantities appear as boundary conditions on the effective surface and require suitable closure models . as a simple model problem we consider the impingement of free - falling droplets onto a fluid in a pool with a free surface , and a simple algebraic closure model is proposed for this system . the resulting averaged equations are of the free - boundary type , and an efficient computational approach based on a shape optimization formulation is developed for their solution . the computed effective surfaces exhibit consistent dependence on the problem parameters and compare favorably with the results obtained when data from the actual time - dependent problem are used in lieu of the closure model .
|
social media is growing at an explosive rate , with millions of people all over the world generating and sharing content on a scale barely imaginable a few years ago . this has resulted in massive participation , with countless updates , opinions , news items , comments and product reviews being constantly posted and discussed on social web sites such as facebook , digg and twitter , to name a few . this widespread generation and consumption of content has created an extremely competitive online environment where different types of content vie with each other for the scarce attention of the user community . in spite of the seemingly chaotic fashion with which all these interactions take place , certain topics manage to attract an inordinate amount of attention , thus bubbling to the top in terms of popularity . through their visibility , these popular topics contribute to the collective awareness of what is trending and at times can also affect the public agenda of the community . at present there is no clear picture of what causes these topics to become extremely popular , nor of how some persist in the public eye longer than others . there is considerable evidence that one aspect that causes topics to decay over time is their novelty . another factor responsible for their decay is the competitive nature of the medium . as content starts propagating through a social network it can usurp the positions of earlier topics of interest , and due to the limited attention of users it is soon rendered invisible by newer content . yet another aspect responsible for the popularity of certain topics is the influence of members of the network on the propagation of content . some users generate content that resonates very strongly with their followers , thus causing the content to propagate and gain popularity . the source of that content can originate in standard media outlets or from users who generate topics that eventually become part of the trends and capture the attention of large communities . in either case , the fact that a small set of topics become part of the trending set means that they will capture the attention of a large audience for a short time , thus contributing in some measure to the public agenda .
when topics originate in media outlets , the social medium acts as filter and amplifier of what the standard media produces and thus contributes to the agenda setting mechanisms that have been thoroughly studied for more than three decades .in this paper , we study trending topics on twitter , an immensely popular microblogging network on which millions of users create and propagate enormous content via a steady stream on a daily basis .the trending topics , which are shown on the main website , represent those pieces of content that bubble to the surface on twitter owing to frequent mentions by the community .thus they can be equated to crowdsourced popularity .we then determine the factors that contribute to the creation and evolution of these trends , as they provide insight into the complex interactions that lead to the popularity and persistence of certain topics on twitter , while most others fail to catch on and are lost in the flow .we first analyze the distribution of the number of tweets across trending topics .we observe that they are characterized by a strong log - normal distribution , similar to that found in other networks such as digg and which is generated by a stochastic multiplicative process .we also find that the decay function for the tweets is mostly linear .subsequently we study the persistence of the trends to determine which topics last long at the top .our analysis reveals that there are few topics that last for long times , while most topics break fairly quickly , in the order of 20 - 40 minutes .finally , we look at the impact of users on trend persistence times within twitter . we find that traditional notions of user influence such as the frequency of posting and the number of followers are not the main drivers of trends , as previously thought .rather , long trends are characterized by the resonating nature of the content , which is found to arise mainly from traditional media sources .we observe that social media behaves as a selective amplifier for the content generated by traditional media , with chains of retweets by many users leading to the observed trends .there has been some prior work on analyzing connections on twitter .huberman et al . studied social interactions on twitter to reveal that the driving process for usage is a sparse hidden network underlying the friends and followers , while most of the links represent meaningless interactions .jansen et al . have examined twitter as a mechanism for word - of - mouth advertising .they considered particular brands and products and examined the structure of the postings and the change in sentiments .galuba et al . proposed a propagation model that predicts which users will tweet about which url based on the history of past user activity .yang and leskovec examined patterns of temporal behavior for hashtags in twitter .they presented a stable time series clustering algorithm and demonstrate the common temporal patterns that tweets containing hashtags follow .there have also been earlier studies focused on social influence and propagation .agarwal et al . studied the problem of identifying influential bloggers in the blogosphere .they discovered that the most influential bloggers were not necessarily the most active .aral et al , have distinguished the effects of homophily from influence as motivators for propagation . as to the study of influence within twitter , cha et al . 
performed a comparison of three different measures of influence - indegree , retweets , and user mentions .they discovered that while retweets and mentions correlated well with each other , the indegree of users did not correlate well with the other two measures .based on this , they hypothesized that the number of followers may not a good measure of influence .recently , romero and others introduced a novel influence measure that takes into account the passivity of the audience in the social network .they developed an iterative algorithm to compute influence in the style of the hits algorithm and empirically demonstrated that the number of followers is a poor measure of influence .twitter is an extremely popular online microblogging service , that has gained a very large user following , consisting of close to 200 million users .the twitter graph is a directed social network , where each user chooses to follow certain other users .each user submits periodic status updates , known as _tweets _ , that consist of short messages limited in size to 140 characters .these updates typically consist of personal information about the users , news or links to content such as images , video and articles .the posts made by a user are automatically displayed on the user s profile page , as well as shown to his followers .a _ retweet _ is a post originally made by one user that is forwarded by another user .retweets are useful for propagating interesting posts and links through the twitter community . twitter has attracted lots of attention from corporations due to the immense potential it provides for viral marketing . due to its huge reach ,twitter is increasingly used by news organizations to disseminate news updates , which are then filtered and commented on by the twitter community .a number of businesses and organizations are using twitter or similar micro - blogging services to advertise products and disseminate information to stockholders ._ trending topics _ are presented as a list by twitter on their main twitter.com site , and are selected by an algorithm proprietary to the service .they mostly consist of two to three word expressions , and we can assume with a high confidence that they are snippets that appear more frequently in the most recent stream of tweets than one would expect from a document term frequency analysis such as tfidf .the list of trending topics is updated every few minutes as new topics become popular .twitter provides a search api for extracting tweets containing particular keywords . to obtain the dataset of trends for this study, we repeatedly used the api in two stages .first , we collected the trending topics by doing an api query every 20 minutes .second , for each trending topic , we used the search api to collect all the tweets mentioning this topic over the past 20 minutes . for each tweet , we collected the author , the text of the tweet and the time it was posted . 
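as an illustration of this two - stage procedure , a minimal sketch of the polling loop is given below ; the `fetch_trending_topics` and `search_tweets` functions are hypothetical placeholders for the authenticated trend and search calls , whose actual endpoints , credentials and rate - limit handling are not specified here .

```python
import time
from collections import defaultdict

def fetch_trending_topics():
    # placeholder for the authenticated trends call; returns the current list of topics
    return ["#example_topic"]

def search_tweets(topic, since):
    # placeholder for the search call; returns (author, text, timestamp) tuples posted
    # after `since`, capped by the API's per-query limit
    return []

def collect(num_polls, poll_interval=20 * 60, store=None):
    """Two-stage collection: (1) poll the trending-topic list every `poll_interval`
    seconds, (2) for each trending topic fetch the tweets posted during the last window."""
    store = defaultdict(list) if store is None else store
    for _ in range(num_polls):
        window_start = time.time() - poll_interval
        for topic in fetch_trending_topics():
            store[topic].extend(search_tweets(topic, since=window_start))
        time.sleep(poll_interval)
    return store  # topic -> chronologically ordered list of (author, text, timestamp)
```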
using this procedure for data collection, we obtained 16.32 million tweets on 3361 different topics over a course of 40 days in sep - oct 2010 .we picked 20 minutes as the duration of a timestamp after evaluating different time lengths , to optimize the discovery of new trends while still capturing all trends .this is due to the fact that twitter only allows 1500 tweets per search query .we found that with 20 minute intervals , we were able to capture all the tweets for the trending topics efficiently .we noticed that many topics become trends again after they stop trending according to the twitter trend algorithm .we therefore considered these trends as separate sequences : it is very likely that the spreading mechanism of trends has a strong time component with an initial increase and a trailing decline , and once a topic stops trending , it should be considered as new when it reappears among the users that become aware of it later .this procedure split the 3468 originally collected trend titles into 6084 individual trend sequences .\(a ) ( b ) we measured the number of tweets that each topic gets in 20 minute intervals , from the time the topic starts trending until it stops , as described earlier . from thiswe can sum up the tweet counts over time to obtain the cumulative number of tweets of topic for any time frame , where is the number of tweets on topic in time interval .since it is plausible to assume that initially popular topics will stay popular later on in time as well , we can calculate the ratios for topic for time frames and .figure [ fig : count_densities](a ) shows the distribution of s over all topics for four arbitrarily chosen pairs of time frames ( nevertheless such that , and is relatively large , and is small ) .these figures immediately suggest that the ratios are distributed according to log - normal distributions , since the horizontal axes are logarithmically rescaled , and the histograms appear to be gaussian functions . to checkif this assumption holds , consider fig .[ fig : count_densities](b ) , where we show the q - q plots of the distributions of fig .[ fig : count_densities](a ) in comparison to normal distributions .we can observe that the ( logarithmically rescaled ) empirical distributions exhibit normality to a high degree for later time frames , with the exception of the high end of the distributions .these 10 - 15 outliers occur more frequently than could be expected for a normal distribution .log - normals arise as a result of multiplicative growth processes with noise . in our case , if is the number of tweets for a given topic at time , then the dynamics that leads to a log - normally distributed over can be written as : n_q(t - 1 ) , \label{eq : multiplicative_growth}\ ] ] where the random variables are positive , independent and identically distributed as a function of with mean and variance .note that time here is measured in discrete steps ( expresses the previous time step with respect to ) , in accordance with our measurement setup . is introduced to account for the novelty decay .we would expect topics to initially increase in popularity but to slow down their activity as they become obsolete or known to most users .since is made up of decreasing positive numbers , the growth of slows with time . to see that eq .( [ eq : multiplicative_growth ] ) leads to a log - normal distribution of , we first expand the recursion relation : n_q(0 ) .\label{eq : nq_expressed}\ ] ] here is the initial number of tweets in the earliest time step . 
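before completing the argument analytically , a quick numerical illustration of the multiplicative model may help ; the sketch below iterates the recursion for many synthetic topics and checks that the logarithm of the ratio of counts at two times is approximately gaussian across topics . the gamma - distributed positive noise and the decay gamma(t) = 1/t ( the form measured later in the text ) are assumptions made purely for illustration .

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_topics(n_topics=5000, n_steps=200, mu=1.0, sigma=0.5):
    """Simulate N_q(t) = [1 + gamma(t) * xi_t] * N_q(t-1) for many topics q,
    with gamma(t) = 1/t and positive i.i.d. noise xi_t of mean mu and std sigma
    (a gamma distribution is an illustrative choice for the noise)."""
    N = np.ones(n_topics)                                # N_q(0) = 1 for every topic
    history = [N.copy()]
    shape, scale = (mu / sigma) ** 2, sigma ** 2 / mu    # gamma law with mean mu, std sigma
    for t in range(1, n_steps + 1):
        xi = rng.gamma(shape, scale, size=n_topics)      # positive, i.i.d.
        N = (1.0 + xi / t) * N                           # gamma(t) = 1/t
        history.append(N.copy())
    return np.array(history)                             # shape (n_steps+1, n_topics)

history = simulate_topics()
ratios = history[150] / history[10]                      # C_q(t_i) / C_q(t_j) across topics
log_ratios = np.log(ratios)
stat, pval = stats.normaltest(log_ratios)                # log-normal ratios <=> normal log-ratios
print(f"skew={stats.skew(log_ratios):.3f}, normality p-value={pval:.3f}")
```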
taking the logarithm of both sides of eq .( [ eq : nq_expressed ] ) , \label{eq : summed_noise_terms}\ ] ] the rhs of eq .( [ eq : summed_noise_terms ] ) is the sum of a large number of random variables .the central limit theorem states thus that if the random variables are independent and identically distributed , then the sum asymptotically approximates a normal distribution .the i.i.d condition would hold exactly for the term , and it can be shown that in the presence of the discounting factors ( if the rate of decline is not too fast ) , the resulting distribution is still normal . in other words ,we expect from this model that $ ] will be distributed normally over when fixing .these quantities were shown in fig .[ fig : count_densities ] above . essentially , if the difference between the two times where we take the ratio is big enough , the log - normal property is observed . the intuitive explanation for the multiplicative model of eq .( [ eq : multiplicative_growth ] ) is that at each time step the number of _ new _ tweets on a topic is a multiple of the tweets that we already have .the number of past tweets , in turn , is a proxy for the number of users that are aware of the topic up to that point .these users discuss the topic on different forums , including twitter , essentially creating an effective network through which the topic spreads .as more users talk about a particular topic , many others are likely to learn about it , thus giving the multiplicative nature of the spreading .the noise term is necessary to account for the stochasticity of this process . on the other hand ,the monotically decreasing characterizes the decay in timeliness and novelty of the topic as it slowly becomes obsolete and known to most users , and guarantees that does not grow unbounded . to measure the functional form of , we observe that the expected value of the noise term in eq .( [ eq : multiplicative_growth ] ) is . thus averaging over the fractions between consecutive tweet counts yields : the experimental values of in timeare shown in fig .[ fig : gamma_in_time ] .it is interesting to notice that follows a power - law decay very precisely with an exponent of , which means that . in time as measured using eq .( [ eq : gamma_measurement ] ) .the log - log plot exhibits that it decreases in a power - law fashion , with an exponent that is measured to be exactly -1 ( the linear regression on the logarithmically transformed data fits with ) .the fit to determine the exponent was performed in the range of the solid line next to the function , which also shows the result of the fit while being shifted lower for easy comparison .the inset displays the same function on standard linear scales . * ]the interesting fact about the decay function is that it results in a _ linear increase _ in the total number of tweets for a topic over time . 
to see this, we can again consider eq .( [ eq : summed_noise_terms ] ) , and approximate the discrete sum of random variables with an integral of the operand of the sum , and substitute the noise term with its expectation value , as defined earlier ( this is valid if is changing slowly ) .these approximations yield the following : d\tau \approx \int_{\tau = 0 } ^ { t } \frac{1}{\tau } d\tau = \ln t.\ ] ] in simplifying the logarithm above , we used the taylor expansion of , for small , and also used the fact that as we found experimentally earlier .it can be immediately seen then that for the range of where is inversely proportional to .in fact , it can be easily proven that no functional form for would yield a linear increase in other than ( assuming that the above approximations are valid for the stochastic discrete case ) .this suggests that the trending topics featured on twitter increase their tweet counts linearly in time , and their dynamics is captured by the multiplicative noise model we discussed above . to check this, we first plotted a few representative examples of the cumulative number of tweets for a few topics in fig .[ fig : growth_examples ] .it is apparent that all the topics ( selected randomly ) show an approximate initial linear growth in the number of tweets.we also checked if this is true in general .figure [ fig : diff_curvature ] shows the second discrete derivative of the total number of tweets , which we expect to be if the trend lines are linear on average .a positive second derivative would mean that the growth is superlinear , while a negative one suggests that it is sublinear .we point out that before taking the average of all second derivatives over the different topics in time , we divided the derivatives by the average of the total number of tweets of the given topics .we did this so as to account for the large difference between the ranges of the number of tweets across topics , since a simple averaging without prior normalization would likely bias the results towards topics with large tweet counts and their fluctuations .the averages are shown in fig .[ fig : diff_curvature ] .we observe from the figure that when we consider all topics there is a very slight sublinear growth regime right after the topic starts trending , which then becomes mostly linear , as the derivatives data is distributed around .if we consider only very popular topics ( that were on the trends site for more than 4 hours ) , we observe an even better linear trend .one reason for this may be that topics that trend only for short periods exhibit a concave curvature , since they lose popularity quickly , and are removed from among the twitter trends by the system early on . so that they can be shown on the same plot .the randomly selected topics were ( from left to right ) : `` earnings '' , `` # pulpopaul '' , `` sheen '' , `` deuces remix '' , `` isaacs '' , `` # gmp24 '' , and `` mac app '' .* ] these results suggest that once a topic is highlighted as a trend on a very visible website , its growth becomes linear in time .the reason for this may be that as more and more visitors come to the site and see the trending topics there is a constant probability that they will also talk and tweet about it .this is in contrast to scenarios where the primary channel of information flow is more informal . 
in that casewe expect that the growth will exhibit first a phase with accelerated growth and then slow down to a point when no one talks about the topic any more .content that spreads through a social network or without external `` driving '' will follow such a course , as has been showed elsewhere .+ an important reason to study trending topics on twitter is to understand why some of them remain at the top while others dissipate quickly . to see the general pattern of behavior on twitter, we examined the lifetimes of the topics that trended in our study . from fig [ sequences](a ) we can see that while most topics occur continuously , around 34% of topics appear in more than one sequence .this means that they stop trending for a certain period of time before beginning to trend again . a reason for this behavior may be the time zones that are involved . for instance ,if a topic is a piece of news relevant to north american readers , a trend may first appear in the eastern time zone , and 3 hours later in the pacific time zone .likewise , a trend may return the next morning if it was trending the previous evening , when more users check their accounts again after the night .given that many topics do not occur continuously , we examined the distribution of the lengths sequences for all topics . in fig[ sequences](b ) we show the length of the topic sequences .it can be observed that this is a power - law which means that most topic sequences are short and a few topics last for a very long time .this could be due to the fact that there are many topics competing for attention .thus , the topics that make it to the top ( the trend list ) last for a short time .however , in many cases , the topics return to trend for more time , which is captured by the number of sequences shown in fig [ sequences](a ) , as mentioned .we first examine the authors who tweet about given trending topics to see if the authors change over time or if it is the same people who keep tweeting to cause trends . when we computed the correlation in the number of unique authors for a topic with the duration ( number of timestamps ) that the topic trends we noticed that correlation is very strong ( 0.80 ) .this indicates that as the number of authors increases so does the lifetime , suggesting that the propagation through the network causes the topic to trend . to measure the impact of authors we compute for each topicthe active - ratio as : the correlation of active - ratio with trending duration is as shown in fig [ active ] .we observe that the active - ratio quickly saturates and varies little with time for any given topic .since the authors change over time with the topic propagation , the correlation between number of tweets and authors is high ( 0.83 ) . of 0.9112.*,width=264,height=226 ] on twitter each topic competes with the others to survive on the trending page . 
as we now show , for the long trending ones we can derive an expression for the distribution of their average length .we assume that , if the relative growth rate of tweets , denoted by , falls below a certain threshold , the topic would stop trending .when we consider long - trending topics , as they grow in time , they overcome the initial novelty decay , and the term in equation ( 3 ) becomes fairly constant .so we can measure the change over time using only the random variable as : since the are independent and identical distributed random variables , would be independent with each other .thus the probability that a topic stops trending in a time interval , where is large , is equal to the probability that is lower than the threshold , which can be written as : is the cumulative distribution function of the random variable . given that distribution we can actually determine the threshold for survival as : from the independence property of the , the duration or life time of a trending topic , denoted by , follows a geometric distribution , which in the continuum case becomes the exponential distribution .thus , the probability that a topic survives in the first time intervals and fails in the time interval , given that is large , can be written as : the expected length of trending duration would thus be : we considered trending durations for topics that trended for more than 10 timestamps on twitter .the comparison between the geometric distribution and the trending duration is shown in fig [ geom1 ] . in fig [ geom2 ]the fit of the trending duration to density in a logarithmic scale suggests an exponential function for the trending time .the r - square of the fitting is 0.9112 .we consider two types of people who contribute to trending topics - the sources who begin trends , and the propagators who are responsible for those trends propagating through the network due to the nature of the content they share .we examined the users who initiate the most trending topics .first , for each topic we extracted the first 100 users who tweeted about it prior to its trending .the distribution of these authors and the topics is a power - law , as shown in fig [ auth_top ] .this shows that there are few authors who contribute to the creation of many different topics . to focus on these multi - tasking users , we considered only the authors who contributed to at least five trending topics . when we consider people who are influential in starting trends on twitter , we can hypothesize two attributes - a high frequency of activity for these users , as well as a large follower network . 
to evaluate these hypotheses we measured these two attributes for these authors over these months . * frequency : * the tweet - rate can effectively measure the frequency of participation of a twitter user . the mean tweet - rate for these users was tweets per day , indicating that these authors tweeted fairly regularly . however , when we computed the correlation of the tweet - rate with the number of trending topics that they contributed to , the result was a weak positive correlation of 0.22 . this indicates that although people who tweet a lot do tend to contribute to the trending topics , the rate by itself does not strongly determine the popularity of the topic . in fact , these authors happen to tweet on a variety of topics , many of which do not become trends . we found that a large number of them tended to tweet frequently about sporting events and the players and teams involved . when some sports - related topics begin to trend , these users are among their early initiators , by virtue of their high tweet - rate . this suggests that the nature of the content plays a strong role in determining whether a topic trends , rather than the users who initiate it . * audience : * when we looked at the number of followers for these authors , we were surprised to find that they were almost completely uncorrelated ( correlation of 0.01 ) with the number of trending topics , although the mean is fairly high ( 2481 ) . the absence of correlation indicates that the number of followers is not an indication of influence , similar to observations in earlier work . we have observed previously that topics trend on twitter mainly due to propagation through the network . the main way to propagate information on twitter is by retweeting ; 31% of the tweets of trending topics are retweets . this reflects a high volume of propagation that garners popularity for these topics . further , the number of retweets for a topic correlates very strongly ( 0.96 ) with the trend duration , indicating that a topic is of interest as long as there are people retweeting it . each retweet credits the original poster of the tweet . hence , to identify the authors who are retweeted the most in the trending topics , we counted the number of retweets for each author on each topic . *top 22 retweeted users in at least 50 trending topics each* [ infs ] * domination : * we found that in some cases , almost all the retweets for a topic are credited to one single user . these are topics that are entirely based on the comments by that user . they can thus be said to be dominating the topic . the _ domination - ratio _ for a topic can be defined as the fraction of the retweets of that topic that can be attributed to the largest contributing user for that topic . however , we observed a negative correlation of between the domination - ratio of a topic and its trending duration . this means that topics revolving around a particular author 's tweets do not typically last long . this is consistent with the earlier observed strong correlation between the number of authors and the trend duration . hence , for a topic to trend for a long time , it requires many people to contribute actively to it . * influence : * on the other hand , we observed that there were authors who contributed actively to many topics and were retweeted significantly in many of them . for each author , we computed the ratio of retweets to topics , which we call the _ retweet - ratio_.
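a minimal sketch of how the two ratios just defined could be computed is given below ; the input layout ( per - topic lists of credited authors , one entry per retweet ) is assumed for illustration .

```python
from collections import Counter, defaultdict

def domination_ratio(retweet_credits):
    """retweet_credits: list of credited (original) authors, one entry per retweet
    observed for a single topic. Returns the fraction of those retweets credited
    to the single most-retweeted author."""
    counts = Counter(retweet_credits)
    return max(counts.values()) / sum(counts.values()) if counts else 0.0

def retweet_ratios(topic_retweets):
    """topic_retweets: dict topic -> list of credited authors (one per retweet).
    Returns, for each author, total retweets divided by the number of distinct
    topics in which that author was retweeted (the retweet-ratio)."""
    total = Counter()
    topics_per_author = defaultdict(set)
    for topic, credits in topic_retweets.items():
        for author in credits:
            total[author] += 1
            topics_per_author[author].add(topic)
    return {a: total[a] / len(topics_per_author[a]) for a in total}
```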
the list of influential authors who are retweeted in at least 50 trending topics is shown in table [ infs ] .we find that a large portion of these authors are popular news sources such as cnn , the new york times and espn .this illustrates that social media , far from being an alternate source of news , functions more as a filter and an amplifier for interesting news from traditional media .to study the dynamics of trends in social media , we have conducted a comprehensive study on trending topics on twitter .we first derived a stochastic model to explain the growth of trending topics and showed that it leads to a lognormal distribution , which is validated by our empirical results .we also have found that most topics do not trend for long , and for those that are long - trending , their persistence obeys a geometric distribution .when we considered the impact of the users of the network , we discovered that the number of followers and tweet - rate of users are not the attributes that cause trends .what proves to be more important in determining trends is the retweets by other users , which is more related to the content that is being shared than the attributes of the users .furthermore , we found that the content that trended was largely news from traditional media sources , which are then amplified by repeated retweets on twitter to generate trends .w. galuba , d. chakraborty , k. aberer , z. despotovic , and w. kellerer .outtweeting the twitterers - predicting information cascades in microblogs . in _3rd workshop on online social networks ( wosn 2010 ) _ , 2010 .
|
social media generates a prodigious wealth of real - time content at an incessant rate . from all the content that people create and share , only a few topics manage to attract enough attention to rise to the top and become temporal trends which are displayed to users . the question of what factors cause the formation and persistence of trends is an important one that has not been answered yet . in this paper , we conduct an intensive study of trending topics on twitter and provide a theoretical basis for the formation , persistence and decay of trends . we also demonstrate empirically how factors such as user activity and number of followers do not contribute strongly to trend creation and its propagation . in fact , we find that the resonance of the content with the users of the social network plays a major role in causing trends .
|
mathematical modeling and computer simulation of biological systems is in a stage of burgeoning growth .advances in computer technology but also , perhaps more importantly , breakthroughs in simulational methods are helping to reduce the gap between quantitative models and actual biological behavior .the main challenge remains the wide and disparate range of spatio - temporal scales involved in the dynamical evolution of complex biological systems . in response to this challenge ,various strategies have been developed recently , which are in general referred to as `` multiscale modeling '' .some representative examples include hybrid continuum - molecular dynamics algorithms , heterogeneous multiscale methods , and the so - called equation - free approach .these methods combine different levels of the statistical description of matter ( for instance , continuum and atomistic ) into a composite computational scheme , in which information is exchanged through appropriate hand - shaking regions between the scales .vital to the success of this information exchange procedure is a careful design of proper hand - shaking interfaces .kinetic theory lies naturally between the continuum and atomistic descriptions , and should therefore provide an ideal framework for the development of robust multiscale methodologies .however , until recently , this approach has been hindered by the fact that the central equation of kinetic theory , that is , the boltzmann equation , was perceived as an equally demanding approach as molecular dynamics from the computational point of view , and of very limited use for dense fluids due to the lack of many - body correlations . as a result ,multiscale modeling of nanoflows has developed mostly in the direction of the continuum / molecular dynamics paradigm . over the last decade and a half , major developments in latticekinetic theory are changing the scene .minimal forms of the boltzmann equation can be designed on the lattice , which quantitatively describe the behavior of fluid flows in a way that is often computationally more advantageous than the continuum approach based on the navier - stokes equations .moreover , lattice kinetic theory has proven capable of dealing with complex flows , such as flows with phase transitions and strong heterogeneities , for which continuum equations are exceedingly difficult to solve , if at all known ( for a recent review see ) .these advances have opened the road to developing new mesoscopic multiscale solvers .the present work provides a successful implementation of such an approach .we will focus on the coupling of a _ mesoscopic _fluid solver , the lattice boltzmann method , with simulations at the atomistic scale employing explicit molecular dynamics . a unique feature of our approach is the dual nature of the mesoscopic kinetic solver , which propagates coarse - grained information ( the single - particle boltzmann probability distribution ) , along straight particle trajectories .this dual field / particle nature greatly facilitates the coupling between the mesoscopic fluid and the atomistic levels , both on conceptual and computational grounds .the paper is organized as follows . in section [basic ] we present the basic elements of the multiscale methodology , namely the lattice boltzmann treatment of the fluid solvent , and its coupling to a molecular dynamics simulation of the solute biopolymer . 
in section [applymethod ] , we present an application of this multiscale methodology to the problem of long polymer translocation through a nanopore ; in particular , we analyze in detail the role of hydrodynamics in accelerating the translocation process . in section [relate2dna ] we elaborate on the relevance of our results to the problem of dna translocation , which has attracted much theoretical and experimental attention recently .we conclude in section [outlook ] with general remarks and outlook for future extensions .we consider the generic problem of tracing the dynamic evolution of a polymer molecule interacting with a fluid solvent .this involves the simultaneous interaction of several physical mechanisms , often acting on widely separate temporal and spatial scales .essentially , these interactions can be classified in three distinct categories as solute - solute , solvent - solvent and solvent - solute .the first category includes the conservative many - body interactions among the single monomers in the polymer chain .being atomistic in nature , these interactions usually set the shortest scale in the overall multiscale process .they are typically handled by molecular dynamics techniques for constrained molecules .the second category , the solvent - solvent interactions , refer to the dynamics of the solvent molecules , which are usually dealt with by a continuum fluid - mechanics approach ; in the present work these will be described by the mesoscopic lattice boltzmann equation .the second and third category have also been handled by simulating the solvent explicitly via molecular dynamics , implicit solvent particles via brownian dynamics including hydrodynamic interactions , or solving the corresponding fokker - planck equation .finally , the solvent - solute dynamics will be treated by augmenting the molecular dynamics side with dissipative fluid - molecule interactions ( langevin picture ) and including the corresponding reaction terms in the fluid - kinetic equations .we consider a polymer consisting of monomer units ( also referred to as beads ) .the polymer is advanced in time according to the following set of molecular dynamics - langevin equations for the bead positions and velocities : where we distinguish four types of forces : the first term represents the conservative bead - bead interactions through a potential which we will take to have the standard lennard - jones form , \label{lj_potential}\ ] ] truncated at a distance of r= .this was combined with a harmonic part to account for the energy cost of distorting the angular degrees of freedom , with the relative angle between two consecutive bonds .torsional motions will not be included in the present model , but can easily be incorporated if needed .we consider next the solute - solvent interactions . 
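before turning to the solute - solvent coupling , the conservative intra - chain terms just described can be sketched as follows ; the cutoff radius , the bending stiffness , the equilibrium angle and the exact angular convention are assumptions chosen for illustration , since only the generic lennard - jones and harmonic forms are stated above .

```python
import numpy as np

def lj_energy(r, eps, sigma, rcut):
    """Standard 12-6 Lennard-Jones pair energy, truncated and shifted at rcut
    (the cutoff used in the paper is left here as a free parameter)."""
    if r >= rcut:
        return 0.0
    u = lambda x: 4.0 * eps * ((sigma / x) ** 12 - (sigma / x) ** 6)
    return u(r) - u(rcut)

def angle_energy(theta, k_theta, theta0=0.0):
    """Assumed harmonic bending term, 1/2 k (theta - theta0)^2, where theta is
    taken as the angle between consecutive bond vectors (zero for a straight chain)."""
    return 0.5 * k_theta * (theta - theta0) ** 2

def chain_energy(x, eps=1.0, sigma=1.0, rcut=2.5, k_theta=1.0):
    """Total conservative intra-chain energy for bead positions x (N x 3 array):
    non-bonded LJ between bead pairs (directly bonded neighbours excluded, since
    their separation is fixed by the rigid-bond constraint) plus the bending
    energy at every interior bead."""
    n = len(x)
    e = 0.0
    for i in range(n):
        for j in range(i + 2, n):                      # skip bonded neighbours
            e += lj_energy(np.linalg.norm(x[i] - x[j]), eps, sigma, rcut)
    for i in range(1, n - 1):                          # bending at interior beads
        b1, b2 = x[i] - x[i - 1], x[i + 1] - x[i]
        cos_t = np.dot(b1, b2) / (np.linalg.norm(b1) * np.linalg.norm(b2))
        e += angle_energy(np.arccos(np.clip(cos_t, -1.0, 1.0)), k_theta)
    return e

# example: energy of a slightly perturbed straight chain of 10 beads with unit bonds
x = np.arange(10)[:, None] * np.array([1.0, 0.0, 0.0]) \
    + 0.01 * np.random.default_rng(1).normal(size=(10, 3))
print(chain_energy(x))
```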
the second term on the right - hand - side of eq.([md ] )represents the mechanical friction between the single bead and the surrounding fluid , being the bead velocity and the fluid velocity evaluated at the bead position .in addition to mechanical drag , the polymer feels the effects of stochastic fluctuations of the fluid environment , through the random term , , a gaussian noise obeying the fluctuation - dissipation relations : where is the volume of the cell to which beads and belong .finally , is the reaction force resulting from holonomic constraints for molecules modelled with rigid covalent bonds : being the prescribed bond length , and is the set of lagrange multipliers conjugated to each constraint .the usage of constraints instead of flexible bond lengths makes it possible to eliminate unimportant high - frequency intra - molecular motion which would render the underlying lb propagation prone to numerical instabilities . in this way, the time - step of the molecular dynamics part can be increased by about one order of magnitude , as much as the overall efficiency of the lbmd method , as we shall discuss in section [eff ] .finally , in order to avoid spurious dissipation , the bead velocities are required to be strictly orthogonal to the relative displacements . given the second order atomistic dynamics , the velocities must obey the independent constraints : the constraints ( [ kappa ] ) , ( [ kappadot ] ) are enforced over positions and momenta separately via the shake and the rattle algorithms .the implementation of these constraints requires the iterative solution of the system of equations ( [ kappa])-([kappadot ] ) , typically accomplished via standard newton - raphson techniques .the set of discrete speeds in the standard 19-speed 3d lattice for the lattice - boltzmann method.,scaledwidth=45.0% ] the lattice boltzmann equation is a minimal form of the boltzmann kinetic equation in which all details of molecular motion are removed except those that are strictly needed to recover hydrodynamic behavior at the macroscopic scale ( mass - momentum and energy conservation ) .the result is an elegant equation for the discrete distribution function describing the probability to find a particle at lattice site at time with speed . more specifically , since we are dealing with nanoscopic flows , in this work we shall consider the fluctuating lattice boltzmann equation which takes the following form : where represents the probability of finding a fluid particle at spatial location and time with discrete speed .the particles can only move along the links of a regular lattice defined by the discrete speeds , so that the synchronous particle displacements never take the fluid particles away from the lattice . 
for the present study, the standard three - dimensional 19-speed lattice is used ( see figure [ fig1 ] ) .the right hand side represents the effect of intermolecular solvent - solvent collisions , through a relaxation toward local equilibrium , , typically a second order ( low - mach ) expansion in the fluid velocity of a local maxwellian with speed : \rbrace\ ] ] where is the inverse fluid temperature ( with the boltzmann constant ) , a set of weights normalized to unity , and * i * is the unit tensor in configuration space .the relaxation frequency controls the fluid kinematic viscosity through the relation : where is the sound speed in the solvent .knowledge of the discrete distributions allows the calculation of the local density , flow speed and momentum - flux tensor , by a direct summation upon all discrete distributions : the diagonal component of the momentum - flux tensor gives the fluid pressure , while the off - diagonal terms give the shear - stress . unlike in hydrodynamics ,both quantities are available locally and at any point in the simulation .thermal fluctuations are included through the source term which reads as follows ( index notation ) where is the fluctuating stress tensor ( a stochastic matrix ) . consistency with the fluctuation - dissipation theorem at _all _ scales requires the following conditions where is the fourth - order kronecker symbol . is related to the fluctuating heat flux and is the corresponding basis in kinetic space , essentially a third - order hermite polynomial ( full details are given in ) .the polymer - fluid back reaction is described through the source term , which represents the momentum input per unit time due to the reaction of the polymer on the fluid population : \cdot \vec{c}_i\ ] ] where denotes the mesh cell to which the _ p_ bead belongs .the quantities on the left hand side in the above expression have to reside on the lattice nodes , which means that the frictional and random forces need to be extrapolated from the particle to the grid location .the use of a lb solver for the fluid solvent is particularly well suited to this problem because of the following reasons : + i ) free - streaming proceeds along straight trajectories .this is in stark contrast with hydrodynamics , in which fluid momentum is transported by its own space - time varying velocity field . besides securing exact conservation of mass and momentum of the numerical scheme , this also greatly facilitates the imposition of geometrically complex boundary conditions .+ ii ) the pressure field is available locally , with no need of solving any ( computationally expensive ) poisson problem for the pressure , like in standard hydrodynamics .+ iii ) unlike hydrodynamics , diffusivity is not represented by a second - order differential operator , but it emerges instead from the first - order lb relaxation - propagation dynamics .the result is that the kinetic scheme can march in time - steps which scale linearly , rather than quadratically , with the mesh resolution .this facilitates high - resolution down - coupling to atomistic scales .+ iv ) solute - solute interactions preserve their local nature since they are explicitly mediated by the solvent molecules through direct solvent - solute interactions . 
as a result ,the computational cost of hydrodynamic interactions scales only linearly with the length of the polymer ( no long - range interactions ) .+ v ) since _ all _ interactions are local , the lb scheme is ideally suited to parallel computing .it is worth mentioning that more advanced lattice boltzmann models could equally well be coupled to the atomistic dynamics .the molecular - langevin - dynamics solver is marched in time with a stochastic integrator ( due to extra non - conservative and random terms ) , proceeding at a fraction of the lb time - step , the time - step ratio controls the scale separation between the solvent and solute timescales .the numerical solution of the stochastic equations is performed by means of a modified version of the langevin impulse propagation scheme , derived from the assumption that the systematic forces are constant between consecutive time steps .the propagation of the unconstrained dynamics proceeds according to the scheme where is an array of gaussian random variables with zero mean and variance , and represent temporary positions .the propagator ( [ limod ] ) is particularly suitable for our purposes since it is second order accurate in time and robust , that is , it reduces to the symplectic verlet algorithm for .moreover , at variance with the original langevin impulse scheme , the modified propagator allows for an unambiguous definition of velocities , which are needed to couple the polymer to the hydrodynamic field of the surrounding solvent .the particle positions and velocities corrected via the shake and rattle algorithms read and . for consistency , in considering the momentum exchange with the solvent the corrected velocities appear in the friction forces .the md cycle is repeated times , with the hydrodynamic field frozen at time .the transfer of spatial information from / to grid to / from particle locations is performed at each lb time - stamp . to this purpose , on account of its simplicity , a simple nearest grid point ( ngp ) interpolation scheme is used ( see fig.[fig2 ] ) .momentum conservation was checked to hold up to six digits . with reference to a time slice , the pseudo - algorithm performing a single lb time - step , reads as follows _ _ 1 .interpolation of the velocity : 2 . for : * advance the molecular state from to 3 .extrapolation of the forces : 4 .advance the boltzmann populations from to this time - marching can be formally represented by an operator - splitting multi - step time procedure for two coupled kinetic equations describing the dynamic evolution for the fluid and the polymer distribution functions , respectively .it is worth emphasizing that , while lb and md with langevin dynamics have been coupled before , notably in the study of single - polymer dynamics , to the best of our knowledge , this is the first time such that coupling is put in place for _ long _ molecules of biological interest .transfer of spatial information ( a ) from grid to particle , and ( b ) from particle to grid .black spheres denote beads , while in white are the lattice sites.,scaledwidth=80.0% ] the total cost of the computation scales roughly like where is the cpu time required to update a single lb site per timestep and is the cpu time to update a single bead per timestep , is the volume of the computational domain in lattice units and is the number of polymer beads , with m the lb - md time - step ratio .finally , is the number of lb timesteps . 
in the above equation, includes the overhead of lb - md coupling .note that is largely independent of because i ) the lb - md coupling is local , ii ) the forces are short ranged and iii ) the shake / rattle algorithms are empirically known to scale linearly with the number of constraints .regarding the cost of the lb section , this is known to scale linearly with the volume occupied by the solvent . for the casewhere polymer concentration is kept constant , the volume needed to accommodate a polymer of beads should scale approximately as ; however , for translocation studies such as those discussed later in this paper , we shall consider a box of given volume , independently on the polymer length . from the above expression it is clear that should be chosen as small as possible , consistent with the requirement of providing a realistic description of the polymer dynamics .in the present simulation we typically choose between and , depending on the parameters of the simulation , particularly the temperature .this means that we are taking the lb representation close to the molecular scale .we will return to this important issue in the quantitative discussion of the physical application . a tentative estimate of the computational cost proceeds as follows : assuming flops / site / lb - step and flops / bead / md - step ( including the lb - md coupling overhead ) , and an effective processing speed of mflop / s , the evolution over lb steps= md steps of a typical grid and beads set - up , would take about : /10 ^ 8=(9600 + 1500 ) sec \sim 3 hrs,\ ] ] which is in reasonable agreement with the simulation time observed with the present version of the code ( ) , including the relative md / lb cost ( ) .we wish to emphasize that the key feature of the lb - md approach , namely _ linear _ scaling of the cpu cost with the number of beads ( at constant volume ) is indeed observed .in fact , the execution times for , and beads are , , and sec / step , respectively on a 2ghz amd opteron processor . by excluding hydrodynamics ,these numbers become , , and sec / step .it is worth mentioning that thus far , no effort has been directed to code optimization ; it is quite possible that careful optimization may lower the execution time by an order of magnitude .the static and dynamic behavior of the dna chain obtained by our methodology has been compared to the scaling predictions for a single chain at infinite dilution .given the structure factor , standard theory predicts the scaling law , where is a universal function and is the gyration radius . for large ,the structure factor is independent of , and it follows where experiments , theory and simulations agree on the scaling exponent value .the static scaling law is not affected by the presence of hydrodynamics . however , verification of the scaling law and attainment of the scaling regime for large enough chains is a good check for the correctness of our simulation scheme and for the subsequent validation of the hydrodynamic behavior .the dynamic behavior of the chain is deeply affected by the presence of hydrodynamic interactions .the standard picture of polymer dynamics is based on the rouse ( no hydrodynamics ) or zimm ( hydrodynamic ) description in terms of an underlying gaussian chain . in this case , the chain intermediate scattering function }\rangle$ ] should follow the universal behavior where is another universal function and is the center of mass diffusion constant . 
having introduced the dynamic scaling exponent via , the exponent is found to be . according to zimm theory , and while, according to molecular dynamics simulations , it appears that the actual value is somehow lower , i.e. .we have considered a chain made of 30 monomers with bead - bead lennard - jones parameters taken from previous studies of chains simulated via brownian dynamics ( , , bond length ) in a simulation box of edge .this choice was motivated to verify the range of scaling behavior as compared to previous numerical results .we have computed the structure factor , as reported in fig.[fig3](a ) , and observed that the scaling regime is clearly visible for with an exponent equal . in the range of vectorswhere the static scaling holds , the dynamic scaling has been checked by considering , for given values of the computed scattering function , the loci of points and by fitting via a power law curve ( see ref . for details ) . as illustrated in fig.[fig3](b ) , the resulting scaling exponent is found to be , in excellent agreement with the expected value and similar to previous simulation results on single polymers surrounded by a lattice boltzmann fluid .moreover , by applying a heuristic argument , we have verified that within the scaling regime , the finite size of the periodic box was not biasing the data .( a ) log - log plot of the structure factor of a polymer in solution made of 30 monomers .the straight line is the power law fit in the range with exponent .( b ) log - log parametric plot of vs for ( circles ) and ( triangles ) .the lines represent the power law fits within the scaling region with exponent .,scaledwidth=100.0% ]the scheme described above is general and applicable to any situation where a long polymer is moving in a solvent .this motion is of great interest for a fundamental understanding of polymer dynamics in the presence of the solvent .for example , the translocation of a polymer through a pore of very small size ( of order the separation between monomers ) , is a process in which the coupling of the molecular motion to the solvent dynamics may be of crucial significance . in this section, we will therefore provide a detailed discussion of the polymer dynamics in the presence of a solvent for the example of translocation through a nanopore but without reference to a specific physical system . in the next sectionwe explore the relevance of these results to dna translocation through a nanopore .the polymer is initialized via a standard self - avoiding random walk algorithm and further relaxed to equilibrium by standard molecular dynamics .the solvent is initialized with the equilibrium distribution corresponding to a constant density and zero macroscopic speed .boundary conditions for the fluid are periodic at inlet / outlet sections , and zero - speed at rigid walls , using the standard bounce - back rule . for the polymer ,periodicity is again imposed at inlet / outlet , whereas the interaction with rigid walls is handled by a lennard - jones potential with specific wall - polymer parameters and in lb units .the connection between slip - flow at the wall and intermolecular solid - fluid interactions shall be the objects of future research .we consider a three - dimensional box of size lattice units , with the spacing between lattice points .we will take , ; the separating wall is located in the mid - section of the direction , at with . at the polymer resides entirely in the right chamber at . 
at the center of the separating wall ,a square hole of side is opened , through which the polymer can translocate from one chamber to the other .translocation is induced by a constant electric field which acts along the direction , and is confined in a rectangular channel of size along the streamline ( direction ) and cross - flow ( directions ) .the spatial coarse - graining is such that the presence of the solvent as well as electrostatic forces acting due to charges on the polymer are neglected altogether as being of secondary importance compared to hydrodynamics . here andthroughout we work in lattice boltzmann units , in which length and time are measured in units of the lattice spacing and time - step , respectively .mass is defined as .the dimensionless mass used in the simulations is set to unity , which means that mass is measured in units of the solvent mass .this choice is not restrictive since the present approach is used to model incompressible flows in which density is a parameter which can be rescaled by any arbitrary factor .however , it is of some interest to estimate the number of solvent molecules represented by a single lb computational molecule , since the inverse of this number conveys a measure of the importance of statistical fluctuations at the scale of the lattice spacing .let be this number , which will be defined as , where is the dimensionless density used in the lb simulations . in order for the boltzmann probability distribution to make sense as a statistical observable , .for typical values of gr/ , amu , ( which correspond to water ) , and in the range nm , this yields .this shows that the neglect of many - body fluctuations inherent to the single - particle boltzmann representation is still justified even at the nanoscopic scale of the lattice spacing .we will focus here on the _ fast _ translocation regime , in which the translocation time is much smaller than the zimm time , , i.e. the typical relaxation time of the polymer towards its native ( minimum energy , maximum entropy ) configuration .under fast - translocation conditions , the many - body aspects of the polymer dynamics can not be ignored because different beads along the chain do not move independently . as a result, simple one - dimensional brownian models do not apply .in addition to many - body solute - solute interactions , the present approach also takes full account of many - body solute - solvent hydrodynamic interactions .the conditions for fast - translocation regime can be appraised as follows . the translocation time is estimated by equating the driving force , , to the drag force exerted by a solvent with dynamic viscosity on a polymer with radius of gyration , .this yields . since the zimm timeis given by , the fast - translocation condition becomes : our reference simulation is performed with and , with the mass of one bead ( monomer ) of the polymer .the polymer length is in the range beads .it can be readily checked that by assuming our set of parameters falls safely within the fast translocation regime .however , for , is of the order of which is much closer to breaking the above condition .the main parameters of the simulation are ( in lb units ) and for the lennard - jones potential .the bond length among the beads is set at . according to these values ,the lennard - jones time - scale , , is of the order of .thus , by choosing as a time - gap factor , we obtain , which is adequate for the resolution of the polymer dynamics . 
the solvent is set at a density , with a kinematic viscosity and a damping coefficient .the flexional rigidity for the angular potential between beads will be /rad . in order to resolve the structure of the solvent accurately on the atomistic scale , we should use a higher resolution of at least 3 - 4 orders of magnitude .this means resolving the radial structure of the pore , a task that can only be undertaken by resorting to parallel computing .it is nonetheless hoped , and verified _ a posteriori _ , that this artificial magnification does not affect adversely the most significant dynamical and statistical properties of the translocation process , by which we mean that eventually , the time - scale of the simulated process may not be the same as in the physical process of interest , but the simulated dynamics is related to the physical dynamics by a simple rescaling of the time variable .the most immediate quantity of interest in the translocation process is the dependence of the translocation time on the polymer length .this is usually expressed by a scaling law of the form where is a reference time - scale , formally the translocation time of a single monomer , and a scaling exponent measuring the degree of competition ( ) / cooperation ( ) of the various monomers in the chain .we first turn to the derivation of the scaling behavior of the translocation process in the case where hydrodynamic interactions are included . in order to take into account the statistical nature of the phenomenon ,simulations of a large number of translocation events ( up to ) for each polymer length were carried out .the ensemble of simulations is generated by different realizations of the initial polymer configuration .the duration histograms were constructed by cumulating all events for various lengths .overall , our results are quite similar to the corresponding experimental data for dna translocation through a nanopore , which we discuss in more detail in the following section . at a next step ,our data were shifted and scaled so that the distribution curve starts at zero - time and the total probability is equal to unity .the resulting distributions are on average not gaussians , but skewed towards long translocation times , consistent with experiment .therefore , the translocation time for each length is not assigned to the mean time , but to the most probable time , which is the position of the maximum in the histogram . in fig.[fig4 ] the distribution of all the events for polymer sizes n=50 , 100 and 300 are shown . in this figure , the most probable translocation time for each lengthis denoted by an arrow . from this analysis, a nonlinear relation between the most probable translocation time and the polymer length is obtained that follows closely the theoretically expected scaling , with ( see fig.[fig4 ] ) .probability distributions of the translocation times for various lengths : ( a ) n = 50 , ( b ) n=100 , and ( c ) n=300 , respectively . both axes are scaled to produce normalized probability distributions .the arrows show the most probable translocation time for each length.,scaledwidth=100.0% ] a closer inspection into the polymer dynamics reveals some interesting features .the molecule shows a blob - like conformation on either side of the membrane as it moves through the hole .it may either translocate very fast or move from one chamber to the other intermittently , with pauses . 
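The statistical analysis just described, building a duration histogram for each chain length, reading off the most probable (peak) time rather than the mean, and fitting the power law relating the translocation time to the polymer length, can be sketched as follows. The synthetic, right-skewed samples merely stand in for the measured event durations.

```python
import numpy as np

def most_probable_time(durations, bins=60):
    """Return the histogram-peak (most probable) translocation time."""
    counts, edges = np.histogram(durations, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[np.argmax(counts)]

# durations_by_length[N] would hold the measured durations for chains of N beads;
# log-normal samples are used here only to make the example self-contained.
rng = np.random.default_rng(0)
durations_by_length = {N: 5.0 * N**1.3 * rng.lognormal(0.0, 0.3, size=3000)
                       for N in (50, 100, 200, 300)}

N_vals = np.array(sorted(durations_by_length))
tau_mp = np.array([most_probable_time(durations_by_length[N]) for N in N_vals])

# Exponent of the scaling law tau(N) ~ tau_1 * N**alpha from a log-log linear fit.
alpha, _ = np.polyfit(np.log(N_vals), np.log(tau_mp), 1)
print("fitted exponent alpha ~", alpha)
```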
both types of events are present with and without a fluid solvent .in addition , a careful analysis of all the translocated chains unravels the difference between slower and faster translocation within the same fast translocation regime .the nature of the variations in time is connected to the random fluctuations of the polymer throughout its motion , rather than the temperature or its length .these fluctuations are correlated to the entropic forces ( gradient of the free energy with respect to a conformational order parameter , typically the fraction of translocated beads , see below ) acting on both translocated and untranslocated parts of the polymer .in fact , when a solvent is present , the interplay between these forces and , determines the motion and the shape of the chain and thereby the translocation time . at some point part ofthe chain shapes up in an almost linear conformation increasing in this way the entropic force acting on it .this eventually leads to deceleration of the whole chain .fig.[fig6 ] shows an illustration of this argument , where a polymer chain , surrounded by a solvent is represented at a time where it starts to slow down . in this figure , a polymer with the same length but different initial configuration is also shown at the same time .it is very instructive to monitor the progress in time of the number of translocated monomers .note that serves as a reaction - coordinate , with the translocation time defined by the condition .the translocated monomers for processes with and without hydrodynamics are shown in fig.[fig7 ] .for the former , events related to the polymers of fig.[fig6 ] are shown ( curves , ) , as well as that related to the most probable time ( ) .the arrow in this figure indicates the timestep corresponding to the snapshots in fig.[fig6 ] .the translocation for a given polymer proceeds along a curve virtually related to its initial configuration and its interactions with the fluid .it is clearly visible that there is no general trend .the non - hydrodynamic case is in principle different , especially in terms of the time range which is larger .this reveals the importance of hydrodynamic coherence .additional insight into the dynamics is obtained by altering the parameter set .this has not yet been extensively explored , but it was found that a choice of , , and leads to the frequent retraction of the polymer . in other words , after having translocated a large fraction of its length , the polymer occasionally reverses its motion and anti - translocates away from the hole , never to find its way back into it .moreover , we find that a polymer that retracted in the presence of a solvent , manages to fully translocate if the solvent is absent .it is interesting to observe that , in principle , no such type of anti - translocating behavior has been observed for short polymers .this indicates that hydrodynamics significantly speed - up and alter the nature of translocation , especially for long polymers at low temperatures .this highly irregular dynamics escapes any scaling or statistical analysis , as well as dynamic monte carlo simulations , and can only be revealed by self - consistent many - body hydro - dynamic simulations .polymer configuration ( ) corresponding to ( a ) fast and ( b ) slow translocation events .both snapshots are shown at a timestep where the polymer ( b ) starts to slow down ( see arrow in fig.[fig7 ] ) . 
is applied at the hole region towards a direction indicated by the arrow.,scaledwidth=100.0% ] progress in time of the number of translocated beads for chains with monomers .curves , correspond to slow and fast translocation events ( polymers shown in fig.[fig5 ] ) , while to an event related to the most probable time . the initial configuration for the polymer in the event is the same as for , but in that case no hydrodynamic interactions are included .time is scaled with respect to the value of in the case with hydrodynamics .the arrow indicates the timestep at which the snapshots in fig.[fig6 ] are shown.,scaledwidth=50.0% ]the translocation of biopolymers , such as dna and rna plays a major role in many important biological processes , such as viral infection by phages , inter - bacterial dna transduction , and gene therapy .the importance of this process has spawned a number of _ in vitro _experiments , aimed at exploring the translocation process through micro - fabricated channels under the effects of an external electric field , or through protein channels across cellular membranes . in particular ,recent experimental work has focused on the possibility of fast dna - sequencing by reading the base sequence as the polymer passes through a nanopore .some universal features of dna translocation can be analyzed by means of suitably simplified statistical schemes and non - hydrodynamic coarse - grained or microscopic models .however , a quantitative description of this complex phenomenon calls for state - of - the art modeling of the type described above .accordingly , we explore here to what extent the results discussed above for the generic situation of polymer translocation apply to the dna case .first , we note that , as already mentioned in the previous section , our results are quite similar to the experimental data for dna translocation through a nanopore .three different interpretations of the current model are physically plausible : + ( a ) following the framework used in recent studies of dna packing in bacteriophages , one monomer in our simulation can be thought of as representing a dna segment of about 8 base - pairs , that is , each bead has a diameter of 2.5 nm , the hydrated diameter of b - dna in physiological conditions .+ ( b ) it is also physically plausible to assume that a bead represents a portion of dna equivalent to its persistence length of about 50 nm , which translates into mapping one bead to base - pairs .+ ( c ) alternatively , as is typically done in simulations of the -phage dna in solution , one bead can be taken to correspond to base - pairs .+ in all three cases is equal to the bead size , while the pore , having a width of 2 , will be different from the pores used experimentally , either smaller or larger . 
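For convenience, the bead-to-base-pair conversions implied by interpretations (a) and (b) can be tabulated as below. The 8 bp per bead and the 50 nm persistence length are the figures quoted above; the 0.34 nm rise per base pair used to convert case (b) is a textbook value rather than a parameter of this work, and case (c) is left out because its conversion factor is not fixed here.

```python
RISE_NM_PER_BP = 0.34   # standard rise per base pair of B-DNA, nm

def beads_to_bp(n_beads, bp_per_bead):
    return n_beads * bp_per_bead

# (a) one bead ~ 8 bp (bead size 2.5 nm, the hydrated B-DNA diameter)
# (b) one bead ~ one persistence length (~50 nm), i.e. roughly 50/0.34 ~ 147 bp
for n in (50, 100, 300):
    print(f"N = {n:3d} beads -> (a) ~{beads_to_bp(n, 8)} bp, "
          f"(b) ~{beads_to_bp(n, round(50.0 / RISE_NM_PER_BP))} bp")
```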
in addition , the coarse graining model that handles the dna molecules indicates that the md timescale is stretched over the physical process .a direct comparison between our probability distributions for polymer translocation and the experimental results sets a different md timestep for the cases ( a ) , ( b ) , and ( c ) which is of the order of 3 nsec , 100 nsec and 5 , respectively , leading to a lb timestep of 15 nsec , 500 nsec , and 25 .it is difficult at this stage to assign a unique interpretation of our model in relation to a physical system .a thorough exploration of the parameter space is required before such an assignment can be made convincingly .this is beyond the scope of the present work but will be reported in future publications .translocated ( ) , untranslocated ( ) and effective ( ) radii of gyration for the ( a ) non - hydrodynamic and ( b ) hydrodynamic cases with .all radii are normalized with respect to the initial value .time is also scaled with respect to the total translocation time for each of the events ( a ) and ( b ) .the dotted lines denote the regions where is nearly constant ( see text ) ., scaledwidth=100.0% ] a second encouraging comparison is that the scaling we found for the translocation time with polymer length ( with exponent ) is quite close to the experimental measurement for dna translocation ( ) . beyond the apparent consistency between experiment and theory ,additional insight can be gained by analyzing the polymer dynamics during translocation .a hydrodynamic picture of dna translocation has been presented in ref . . in this work, the authors assume that the electric field drive is in balance with the stokes drag exerted by the solvent on the blob configuration of the polymer , that is where is the density , the kinematic viscocity , the translocation time , and the translocated part of the radius of gyration . in order for this balance to apply at all times ,it is clear that must be constant in time , hence it can not be identified neither with the translocated nor with the untraslocated gyration radius of dna . to this end , in fig.[fig8 ] we represent the time - evolution of the radii of gyration for the two sections of dna , and , the untranslocated ( ) [ with ] and translocated [ with parts , respectively . in order to identify a time - invariant radius ,we define the , where stands for the untranslocated and translocated parts and is the corresponding number of monomers .the exponent is the same as previously noted and is a constant ( for long enough polymers ) .if translocation could be described by the dynamics of a single - blob object , characterized by an effective radius of gyration , defined as then this quantity should be constant in time .since the holds for all , the above relations lead to ^{\zeta } = const \label{r_total}\ ] ] we focus first on the case without hydrodynamics , fig.[fig8](a ) .for very small chains does not scale as and the definition for is not valid at the first and last parts of the event , during which the untranslocated and the translocated parts , respectively , are small . 
outside these limits , as obtained from the definition ( [ r_total ] ) , with the values of directly taken from the simulations , is indeed approximately constant .in addition , the values and do not coincide , since the former is lower than the latter .this is also the case when a solvent is present , fig.[fig8](b ) .thus , regardless of the dynamic pathway and the different conformations the chain may possess during translocation , once the event is completed the polymer is more compact than at .comparison of the cases with and without hydrodynamics reveals that in the latter case the polymer becomes up to more confined than when a solvent is added .the untranslocated part of the radius of gyration at the end of the process shows an abrupt drop . as a consequence the polymer does not fully recover its initial volume .it it plausible , that by allowing the polymer to further advance in time , will become similar to , but this remains to be examined . nevertheless , in this work we have been interested mainly on the chain dynamics related to the first passage times , which correspond to the exact period of time needed until all the beads have translocated .we have presented a multiscale methodology based on the concurrent coupling of constrained molecular dynamics for the solute biopolymers with a lattice boltzmann treatment of solvent dynamics . owing to the dual field - particle nature of the lattice boltzmann technique, this coupling proceeds seamlessy in time and only requires standard interpolation / extrapolation for information - transfer in physical space .this multiscale methodology has been applied to the case of polymer translocation through a nanopore , with special emphasis on the role of hydrodynamic coherence on the dynamic and statistical properties of the translocation process .it is found that hydrodynamic interactions play a major role in accelerating the translocation process , especially for long molecules at low temperature .an attempt to connect these results to the process of dna translocation through a nanopore revealed certain similarities with experiment , especially in the scaling law of the translocation time with polymer length .the presence of hydrodynamic interactions lead to a decrease in the translocation times , compared to the cases without a fluid solvent . inspection of the variation of the translocated beads and the radii of gyration with time reveals interesting aspects of the dna dynamics during translocation .future directions for the simulations include the detailed study of the effects of temperature , finite - length and geometrical details of the nanopore geometry , as well as electrostatic interactions of the dna molecule with the surrounding fluid . to this end ,resort to parallel computing is mandatory , and we expect the favourable properties of lb towards parallel implementations to greatly facilitate the task . work along these lines is currently in progress . , _ heterogeneous atomistic - continuum representations for dense fluid systems _ , intc , 8 ( 1997 ) , pp . 967976 ; a. wagner , e. flekkoy , j. feder , and t. jossanq , _ coupling molecular dynamics and continuum dynamics _ , comp .comm . , 147 ( 2002 ) ,pp . 670673 ; x. nie , s. y. chen and w. e , _ a continuum and molecular dynamics hybrid method for micro- and nano - fluid flow _ , j. fluid mech ., 500 ( 2004 ) , pp .5564 . , _ the lattice boltzmann - equation - theory and applications _ , phys. rep . , 222 ( 1992 ) ,pp . 145197 ; d. a. 
wolf - gladrow , _ lattice gas cellular automata and lattice boltzmann models _ , springer - verlag , new york 2000 ; s. succi , _ the lattice boltzmann equation _ , oxford university press , oxford 2001 ., _ use of the boltzmann - equation to simulate lattice - gas automata _ ,lett . , 61 ( 1988 ) , pp .23322335 ; f. higuera , s. succi , and r. benzi , _ lattice gas - dynamics with enhanced collisions _ , europhys . lett . ,9 ( 1989 ) , pp . 345349 ; f. higuera , j. jimenez , _ boltzmann approach to lattice gas simulations _ ,lett . , 9 ( 1989 ) , pp .663668 ; h. chen , s. chen , w. matthaeus , _ recovery of the navier - stokes equations using a lattice - gas boltzmann method _, phys rev a , 45 ( 1992 ) , pp .53395342 ; y. h. qian , d. dhumieres , p. lallemand , _ lattice bgk models for navier - stokes equation _ , europhys ., 17 ( 1992 ) , pp .479484 ; i.v .karlin , a. ferrante , h.c .ttinger , _ perfect entropy functions of the lattice boltzmann method _ , europhys .lett . , 47 ( 1999 ) , 182 . , _ simple models for complex nonequilibrium fluids _ , phys ., 390 ( 2004 ) , pp .453 - 551 ; martin kr , a. alba - p , m. laso , h. c. , _ variance reduced brownian simulation of a bead - spring chain under steady shear flow considering hydrodynamic interaction effects _ , j. chem .phys . , 113 ( 2000 ) , pp .4767 - 4773 . , _ lattice - boltzmann simulation of polymer - solvent systems _ , int .c , 9 ( 1999 ) , pp .14291438 ; _ simulation of a single polymer chain in solution by combining lattice boltzmann and molecular dynamics _ , j. chem .phys . , 111 , ( 1999 ) 82258239 ; a. chatterji , j. horbach , _ combining molecular dynamics with lattice boltzmann : a hybrid method for the simulation of ( charged ) colloidal systems _ , j. chem .phys . , 122(18 ) , ( 2005 ) , 184903 . ,_ molecular dynamics investigation of dynamic scaling for dilute polymer solutions in good solvent conditions _ , j. chem .phys . , 96 ( 1992 ) , pp.85398551 ;_ excluded volume effects on the structure of a linear polymer under shear flow _ , ibid ., 113 ( 2000 ) , pp.55455558 . , _ characterization of individual polynucleotide molecules using a membrane channel _ , pnas , 93 , ( 1996 ) , pp .1377013773 ; a. meller , l. nivon , e. brandin , j. golovchenko , and d. branton , _ rapid nanopore discrimination between single polynucleotide molecules _ ,pnas , 97 ( 2000 ) , pp . 10791084 . , _ single polymer dynamics in an elongational flow _ , science , 276 ( 1997 ) , 20162021 ; j. s. hur , e. s. g. shaqfeh , and r. g. larson , _ brownian dynamics simulations of single dna molecules in shear flow _ , j. rheol. , 44 ( 2000 ) , pp .713742 ; r. m. jendrejack , j. j. de pablo , and m. d. graham , _stochastic simulations of dna in flow : dynamics and the effects of hydrodynamic interactions _ , j. chem .phys . , 116 ( 2002 ) , 77527759 .
|
We present a multiscale approach to the modeling of polymer dynamics in the presence of a fluid solvent. The approach combines Langevin molecular dynamics (MD) techniques with a mesoscopic lattice Boltzmann (LB) method for the solvent dynamics. A unique feature of the present approach is that hydrodynamic interactions between the solute macromolecule and the aqueous solvent are handled explicitly, yet in a computationally tractable way, owing to the dual particle-field nature of the LB solver. The suitability of the present LB-MD multiscale approach is demonstrated for the problem of fast polymer translocation through a nanopore. We also provide an interpretation of our results in the context of DNA translocation through a nanopore, a problem that has recently attracted much theoretical and experimental attention. multiscale modeling, lattice Boltzmann method, solvent-solute interactions, polymer translocation, DNA 68U20, 92-08, 92C05
|
a temporal graph is a data structure , consisting of nodes and edges in which the edges are associated with time labels .while many temporal graph datasets exist online , none could be found that used the interval labels in which each edge is associated with a starting and ending time .for this reason we generated several synthetic datasets and modified two existing datasets , i.e. , wikipedia reference graph and the facebook message graph , for analysis . in this report, we introduce the creation of the wikipedia reference graph and analyze some properties of this graph from both static and temporal perspectives .this report aims to provide more details of this graph benchmark to those who are interested in using it and also serves as the introduction to the dataset used in the master thesis of wouter ligtenberg .this dataset was created from the wikipedia refence set . in thisdataset edges represent wikipedia articles and every edge represents a reference from one article to another .the edges have 2 values , one with the time stamp of the edge creation or the edge removal , and one indicating if it was added or removed .we transformed this dataset into an interval set in which we connected the creation of an edge to the removal of an edge to create an edge with an interval . in this datasetwe only included edges that were first added and later removed .if an edge is added but never removed we ignore it . since this is the first real interval set we contacted both konect and icon to submit this dataset to their index for further research . both instances accepted the dataset to their database . in details ,this graph dataset contains the evolution of hyperlinks between articles of the dutch version of wikipedia .each node is an article .each edge is a hyperlink from one page to another ( directed ) . in the dataset ,each edge has a start and an end time , which allows us to look at the network in different points in time .the network under study has 67,8907 vertices and 472,9035 edges .the first edge is created on tuesday august 28 , 2001 , and the last edge removal takes place on sunday july 10 , 2011 .so , our data roughly has a time span of 10 years .in this section , we introduce the properties of this temporal graph benchmark in two different perspectives , i.e. , static and temporal . 
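The interval construction described above, pairing each hyperlink-creation event with its subsequent removal and discarding links that were never removed, can be expressed compactly. The record format (source, target, timestamp, action) is an assumption made only for illustration:

```python
def build_intervals(events):
    """events: iterable of (src, dst, timestamp, action), action in {'add', 'remove'}.
    Returns interval edges (src, dst, t_start, t_end), keeping only links that
    were first added and later removed; links never removed are dropped."""
    pending = {}        # (src, dst) -> timestamp of the still-open 'add'
    intervals = []
    for src, dst, t, action in sorted(events, key=lambda e: e[2]):
        key = (src, dst)
        if action == 'add':
            pending.setdefault(key, t)          # ignore duplicate adds of an open link
        elif action == 'remove' and key in pending:
            intervals.append((src, dst, pending.pop(key), t))
    return intervals

# Tiny illustration: the (B, C) link is never removed and is therefore ignored.
sample = [('A', 'B', 1, 'add'), ('B', 'C', 3, 'add'), ('A', 'B', 7, 'remove')]
print(build_intervals(sample))                  # [('A', 'B', 1, 7)]
```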
for the static graph analysis , we ignore the time stamps and view this dataset as a static graph .then we calculate three graph measures on this graph including degree , clustering coefficient and pagerank .the average degree , average in - degree and average out - degree characterize the network in a way that it gives an indication of how many edges there are connected on average to a node .we also calculate the max degree as the comparison .the results are shown in table [ tb : degree ] .the degree distribution using a logarithmic scale for both the degree frequencies as the number of nodes including the degree distribution , in - degree distribution and out - degree distribution , are shown in figure [ fig : degree ] .the curves indicate that these degree distributions follow the power law distribution ..degree statistics of the static graph .[ cols="^,^,^,^",options="header " , ] how much nodes cluster together and form neighborhoods can be measured using the clustering coefficient .this measure quantifies the proportion of a node s neighbors to which the node is connected through triangles ( three - node cycles ) .the clustering coefficient of node is defined as : where is the number of triangles ( three - node cycles ) through and and is the degree of node . if none of the neighbors of a vertex are connected , and if all of the neighbors are connected .remark that the graph is considered as if it was undirected . in this graph dataset ,the average clustering coefficient is 0.114 . for 28638 nodes , which means that for 4.22% of the nodes all neighbors are connected . on the other hand , there are also 57.40% of the nodes have .this means that these nodes are not part of any triangle at all .this measure ranks nodes by their importance .the pagerank value of a node is based on the nodes linking to and from that node .pagerank is mainly used by google to rank websites by calculating the importance each website , but can also be used for other purposes such as rating wikipedia articles . to visualize the results a boxplotwas plotted , which is shown in figure [ fig : pg ] .the figure clearly shows that only a few pages are classified as important , i.e. that have a high pagerank .while a lot of nodes / pages have relative small pagerank , i.e. , less importance .we use the snapshot - based method to analyze the temporal graph .therefore the first step is choose the time window to divide the temporal graph into several snapshots .the time between the creation of the first edge , and the removal of the last edge covers a time period of 311,268,466 seconds , which is almost 10 years ( 9.87 years ) . for simplicity, we use the time window of 1 year in the temporal analysis to divide the temporal graph into 10 snapshots . in figure[ fig : tdegree ] the degree distributions per snapshot are visualized .each year has its own color , ranging from blue ( year 1 ) to red ( year 10 ) .we see a similar pattern for degree , in - degree and out - degree . in the first years, the graph shows a fairly large deviation from the power law . as the yearspass , the graph develops to fit the power line better .the years 5 to 8 show a similar graph as we have seen in the static analysis of the graph .the last two years again start to deviate from this diagonal , although still keeping a fairly diagonal line .this behavior can be partly explained by the fact that the graph is growing at the beginning , and getting smaller again at the end . 
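A sketch of the static measurements reported above (degree statistics, the triangle-based clustering coefficient, PageRank) and of the one-year snapshot slicing used for the temporal analysis, written with networkx. The interval-edge list is assumed to be in the (src, dst, t_start, t_end) form produced when the dataset was built.

```python
import networkx as nx

def static_metrics(intervals):
    """Static view: ignore the time stamps and keep one directed edge per link."""
    G = nx.DiGraph()
    G.add_edges_from((u, v) for u, v, _, _ in intervals)
    degrees = [d for _, d in G.degree()]
    clustering = nx.clustering(G.to_undirected())   # graph treated as undirected, as in the text
    pagerank = nx.pagerank(G)
    return {
        "avg_degree": sum(degrees) / G.number_of_nodes(),
        "max_degree": max(degrees),
        "avg_clustering": sum(clustering.values()) / len(clustering),
        "top_pagerank": sorted(pagerank, key=pagerank.get, reverse=True)[:30],
    }

def snapshot(intervals, t0, t1):
    """Snapshot graph: edges whose active interval overlaps the window [t0, t1)."""
    G = nx.DiGraph()
    G.add_edges_from((u, v) for u, v, s, e in intervals if s < t1 and e >= t0)
    return G
```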
in all snapshots it seems to approximately hold that this network thus is a scale - free network .moreover , the change of degrees in different graph snapshots is shown in figure [ fig : tcdegree ] .figure [ fig : tcoeff ] shows the change of cluster coefficient of the temporal graph in different snapshots and the clustering does change over time .it is with relatively small steps , but we can conclude that the graph shows a downward trend . this trend might indicate that more wikipedia pages tend to link to pages outside their general topic , which would lower the clustering coefficient .we use the pagerank to analyze the importance of nodes change over time . to visualize the results ,we pick the top 30 nodes that scored the highest on pagerank and the results are shown in figure [ fig : tpagerank ] . from the results ,there is not a single most important node over the complete time period .times change , inventions are done and new articles are written about new subjects .this is something one would expect in a data file like this , and thus is confirmed by the data .section [ properties ] , i.e. , the analysis of graph properties , is from the advanced homework of 2d0 - web analytics course .we thank evertjan peer , hilde weerts , jasper adegeest and gerson foks ( students of group 12 in 2d0 ) for their efforts in the network analysis .
|
A temporal graph is a data structure consisting of nodes and edges in which the edges are associated with time labels. To analyze a temporal graph, the first step is to find a proper graph dataset/benchmark. While many temporal graph datasets exist online, none could be found that uses interval labels, in which each edge is associated with a starting and an ending time. We therefore created a temporal graph dataset based on the Wikipedia reference graph for temporal analysis. This report aims to provide more details of this graph benchmark to those who are interested in using it. graph, graph analysis, graph benchmark
|
understanding the dynamics of epidemic spreading is a long - term challenge , and has attracted increasing attention recently .firstly , the fast development of data base technology and computational power makes more data available and analysable to scientific community .secondly , many new objects of study come into the horizon of epidemiologists , such as computer virus , opinions , rumors , behaviors , innovations , fads , and so on .lastly , in addition to the compartment model and population dynamics , novel models and tools appeared recently inspired by the empirical discoveries about network topology , temporal regularities of human activities and scaling laws in human mobility . in the simplest way, we can roughly divide the human - activated spreading dynamics into two classes according to the disseminules : one is the spreading of infectious diseases requiring physical contacts , and the other is the spreading of information including opinions , rumors and so on ( here we mainly consider the information whose value and authenticity need judge and verification by individuals , different from the information about jobs , discounts , etc . ) . in the early stage, scientists tried to describe these two classes by using a unified framework and analogous models ( see , e.g. , ref . ) , emphasizing their homology yet overlooking their essential differences . very recently, scientists started to take serious consideration about the specific features of information spreading , as well as the different mechanisms across different kinds of information .dodds and watts studied the effects of limited memory on contagion , yet did not consider the social reinforcement .some recent works indicate that the social reinforcement plays important role in the propagation of opinions , news , innovations and fads . ., width=302 ] in this paper , we propose a variant of the susceptible - infected - recovered ( sir ) model for information spreading , which takes into account three different spreading rules from the standard sir model : ( i ) memory effects , ( ii ) social reinforcement , and ( iii ) non - redundancy of contacts .the main contributions are twofold .firstly , we show that when the spreading rate is smaller than a certain value , the information spreads more effectively in regular networks than in random networks , which to some extent supports the experiment reported by centola : behavior spreads faster and can infects more people in a regular online social network than in a random one ( with no more than 200 people in the experiment ) .we further show that as the increasing of the network size , the value of will decrease , which challenges the validity of centola s experiment for very large - scale networks .secondly , the effectiveness of information spreading can be remarkably enhanced by introducing a little randomness into the regular structure , namely the small - world networks yield the most effective information spreading .this result is complementary to the traditional understanding of epidemic spreading on networks where the infectious diseases spread faster in random networks than in small - world networks . 
, , and .the results are obtained by averaging over 500 independent realizations.,title="fig:",width=160 ] , , and .the results are obtained by averaging over 500 independent realizations.,title="fig:",width=160 ] , , and .the results are obtained by averaging over 500 independent realizations.,title="fig:",width=160 ] , , and .the results are obtained by averaging over 500 independent realizations.,title="fig:",width=160 ]given a network with nodes and links representing the individuals and their interactions , respectively .hereinafter , for convenience , we use the language of news spreading , but our model can be applied to the spreading of many kinds of information like rumors and opinions , not limited to news . at each time step , each individual adopts one of four states : ( i ) _ unknown_the individual has not yet heard the news , analogous to the susceptible state of the sir model .known_the individual is aware of the news but not willing to transmit it , because she is suspicious of the authenticity of the news .( iii ) _ approved_the individual approves the news and then transmits it to all her neighbors .exhausted_after transmitting the news , the individual will lose interest and never transmit this news again , analogous to the recovered state in the sir model . at the beginning, one node is randomly chosen as the `` seed '' and all others are in the unknown state .this seed node will transmit the news to all her neighbors , and then become exhausted .once an individual ( either in unknown or known state ) receives a news , she will judge whether it is true depending on the number of times he has heard it a news or a rumor is more likely to be approved if being heard many times ( a very recent model allows the infectivity and/or susceptibility of hosts to be dependent on the number of infected neighbors ) .the present rules imply two features of information spreading , namely the memory effects and social reinforcement , which are usually neglected in the standard sir model and its variants for rumor propagation . for regular ( red solid line ) andrandom ( blue dash line ) networks .the parameters are , , and , as the same as those for fig .inset shows the number of final approved nodes on regular network minus that on random networks , against .the results are obtained by averaging over 500 independent realizations.,width=302 ] in our model , we assume that for a given individual if she receives the news at least once at the time step , and she has received times of this news until time ( is a cumulative number ) , the probability she will approve it at time is , where is the approving probability for the first receival . ] and when , the topological statistics are very close to the ones of random networks . in all simulations ,the node degree is set to be , and we have carefully checked that the results are not sensitive to the node degree unless is very large or very small .given ( triangles ) , ( squares ) and ( circles ) .other parameters are , , and . the results are obtained by averaging over 10000 independent realizations .the clustering coefficient , as a monotonic function of , is also displayed.,width=340 ] denote by the number of approved nodes of the news .larger at the final state indicates a broader spreading .we firstly compare the spreading processes on regular and random networks .figure [ fig2 ] reports four typical examples with different and fixed .surprisingly , for small ( e.g. , fig . 
[ fig2](a ) ) , the spreading on regular networks is faster and broader than on random networks .these results are in accordance with the online social experiment of centola , yet against the traditional understanding of network spreading . with the increasing of , the random networks will be favorable for faster and broader spreading .figure [ fig3 ] shows the dependence of the number of approved nodes at the final state on the parameter . there is a crossing point at about , after which of random networks exceeds that of regular networks . the inset shows the difference between numbers of final approved nodes on regular and random networks , namely against . with very large , almost every node will run into the approved state , and thus is not sensitive to the network structure , but the spread on random networks is still faster than on regular networks ( see , for example , fig .[ fig2](d ) ) .figure 4 displays the crossing point as a function of the network size .when is small , decreases sharply with the increasing of , while when gets larger becomes insensitive to . as a whole, shows a non - increasing behavior versus .notice that , the phenomenon that spreading on regular networks is faster and broader than on random networks is more remarkable and easier to be observed if is large .therefore , our result about indicates that for large - scale systems , centola s experimental results may be not hold or will be weaken to some extent . on the strength of social reinforcement given different .the results are obtained by averaging over 500 independent realizations.,width=302 ] in previous study on sir model , it was pointed out that the number of recovered nodes at the end of evolution increases with the increasing of randomness in small - world networks .in contrast , our simulations show that the number of approved nodes in the final state does not monotonously increase with the increasing of , instead , an optimal randomness exists subject to the highest .figure [ fig5 ] shows the dependence of the number of final approved nodes on the randomness given ( triangles ) , ( squares ) and ( circles ) . with strong social reinforcement ,even a very small randomness can bring a remarkable improvement of the number of final approved nodes , .take the case for example , on the regular networks ( i.e. , ) , is 205 , while by introducing a tiny randomness , this number will suddenly increase to 6593 , which is also higher than the random networks ( i.e. , , ) .we also plot the clustering coefficient as a function of in figure 5 .as expected , decreases as the increasing of .the results indicate that the local clustering can to some extent enhance the approving rate of information , which refine the completely negative valuation of clustering coefficient in epidemic spreading .the dependence of optimal randomness on the strength of social reinforcement given different are shown in fig .[ fig6 ] , where one can observe that the stronger social reinforcement ( i.e. , larger ) results in a smaller . in the presence of weak social reinforcement ( i.e. , small ) , our result ( is close to 1 ) is analogous to the well - known one that the speed and range of spreading obey the relation random small - world regular " .in contrast , the small - world networks yield the most effective spreading when the social reinforcement plays a considerable role ( i.e. , large ). 
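For readers who wish to reproduce the qualitative behaviour, a minimal simulation of the model on a Watts-Strogatz substrate is sketched below. The function approve_prob is only an illustrative placeholder capturing that the approval probability grows with the number of receptions; the exact reinforcement rule, parameter values and averaging procedure behind the figures are those given in the text, not the ones used here.

```python
import random
import networkx as nx

def approve_prob(m, lam, b):
    """Placeholder reinforcement rule: approval probability after the m-th reception,
    with lam the probability at the first reception and b the reinforcement strength."""
    return min(1.0, lam * (1.0 + b * (m - 1)))

def spread(G, lam, b, rng=random):
    heard = {v: 0 for v in G}                 # memory: cumulative receptions per node
    exhausted = set()
    frontier = [rng.choice(list(G))]          # randomly chosen seed
    approved = 0
    while frontier:
        exhausted.update(frontier)            # non-redundancy: senders never transmit again
        targets = [w for v in frontier for w in G.neighbors(v)]
        new_frontier = []
        for w in targets:
            if w in exhausted:
                continue
            heard[w] += 1
            if rng.random() < approve_prob(heard[w], lam, b):
                new_frontier.append(w)
        frontier = list(dict.fromkeys(new_frontier))   # de-duplicate within one time step
        approved += len(frontier)
    return approved

G = nx.watts_strogatz_graph(n=10000, k=6, p=0.1)       # small-world substrate, illustrative sizes
print(spread(G, lam=0.05, b=2.0))
```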
individuals for regular , random and small - world networks .the parameters are , , and .these distributions are obtained from 10000 realizations.,width=302 ] to further investigate the advantages of small - world networks for information spreading , we calculate the complementary cumulative distribution , namely the probability that in a realization the information has reached more than individuals . as shown in figure 7 , comparing with random networks , the advantages of small - world networks are twofold . on one hand , it has higher probability to spread out ( see the region when is small ) .for example , in small - world networks , while for random networks , this number is only 0.460 . if the information can spread out , like an epidemic for a disease , in both two kinds of networks it can reach the majority of population .in contrast , comparing with regular networks , information in small - world networks can spread wide . according to figure 7 ,the maximum in regular networks is only 1680 while in small - world networks it can reach 9900 individuals with probability 0.684 .thanks to the fast development of data base technology and computational power , the detailed analysis about information spreading in large - scale online systems become feasible nowadays . in our opinion ,the similarity between information spreading and epidemic spreading are over emphasized in the previous studies ( see , for example , the models summarized in the review article ) , and currently we should turn to the other side of the matter : revealing the essential difference between them. the significant difference may include : ( i ) _ time decaying effects_. an infectious disease can exist more than thousands of years in human society and still keep active , but no one is willing to spread a news one year ago .actually , our attention on information decays very fast , and thus when we model the information spreading , especially if it involves multiple information competing for attention , we have to consider the time decaying effects .( ii ) _ tie strength_.it is well known that in social networks , ties with different strengths play different roles in maintaining the network connectivity , information filtering , information spreading , and so on .we guess the weak ties provide faster paths for information spreading while the strong ties provide trust paths ( i.e. , with high infectivity ) . however , this point is still not clear till far .information content_. information with different contents may have far different spreading paths , and even with the same content , different expressions may lead to far different performances .some of them are born with fashionable features while others are doomed to be kept from known . whether these two kinds of information are only different quantitatively or they follow qualitatively different dynamic patterns are still under investigation .( iv ) _ role of spreaders_. recent analysis on twitter show that different kinds of spreaders , such as media , celebrities , bloggers and formal organizations , play remarkably different roles in network construction and information spreading , which may result in different spreading pathes and outbreaking mechanisms from epidemic spreading .memory effects_. 
previous contacts could impact the information spreading in current time .such memory effects can be direct since an agent may tend to be interested in our disgusted with objects heard many times , and/or indirect since previous contacts could change the tie strength that further impact the current interactions .( vi ) _ social reinforcement_. if more than one neighbor approved the information and transferred it to you , you are of high probability to approve it. generally speaking , is an agent receives twice an information item recommended from her neighbors , the approval probability should be much larger than the twice of the approval probability with a single recommending .( vii ) _ non - redundancy of contacts_. people usually do not transfer an information item more than once to the same guy , which is far different from the sexually transmitted diseases . to name just a few . in this paper, we propose a simple model for information spreading in social networks that considers the memory effects , the social reinforcement and the non - redundancy of contacts . under certain conditions ,the information spreads faster and broader in regular networks than in random networks , which to some extent supports the centola s experiment . at the same time, we show that the random networks tend to be favorable for effective spreading when the network size increases , which challenges the validity of the centola s experiment for large - scale systems .furthermore , simulation results suggest that by introducing a little randomness into regular structure , the small - world networks yield the most effective information spreading .although this simple model can not take into account all the above - mentioned features in information spreading , it largely refines our understanding about spreading dynamics . for example , traditional spreading models on complex networks show that the diseases spread faster and broader in random networks than small - world networks , yet our results suggest that the small world may be the best structure for effective spreading under the consideration of social reinforcement .indeed , information in small - world networks has much higher probability to spread out than in random networks , and can spread much broader than in regular networks .in addition , the local clustering is well - known to play a negative role in spreading , while our model indicates that local clustering are very helpful in facilitating the acceptance / approval of the information for individuals and thus can to some extent fasten the spreading .we acknowledge xiao - ke xu for valuable discussion .this work is supported by the national natural science foundation of china under grant no .90924011 , the fundamental research funds for the central universities , and the swiss national science foundation under grant no . 200020 - 132253 .99 r. l. may and r. m. anderson , infectious diseases of humans : dynamics and control ( oxfrod : oxford university press , 1991 ) .r. pastor - satorras and a. vespignani , phys .lett . * 86 * , 3200 ( 2001 ) .a. barrat , m. barthelemy , and a. vespignani , dynamical processes on complex networks ( cambridge : cambridge university press , 2008 ) .a. vazquez , b. rcz , a. lukcs , and a .-barabsi , phys .* 98 * , 158702 ( 2007 ) .j. l. iribarren and e. moro , phys .lett . * 103 * , 038702 ( 2009 ) .z. yang , a .- x .cui , and t. zhou , physica a * 390 * , 4543 ( 2011 ) .p. wang , m. c. gonzlez , c. a. hidalgo , and a .-barabsi , science * 324 * , 1071 ( 2009 ) .d. 
balcan , v. colizza , b. gonalves , h. hu , j. j. ramasco , and a. vespignani , proc .106 * , 21484 ( 2009 ) .z. liu , y .- c .lai , and n. ye , phys .e * 67 * , 031911 ( 2003 ) .y. moreno , m. nekovee , and a. f. pacheco , phys .e * 69 * , 066130 ( 2004 ) .a. l. hill , d. g. rand , m. a. nowak , and n. a. christakis , plos comput .* 6 * , e1000968 ( 2010 ) .t. house , j. r. soc .interface * 8 * , 909 ( 2011 ) .d. m. romero , b. meeder , and j. kleinberg , proc .20th intl .www ( new york : acm press , 2011 ) .p. s. dodds and d. j. watts , phys .lett . * 92 * , 218701 ( 2004 ) .d. centola , v. m. eguluz , and m. w. macy , physica a * 374 * , 449 ( 2007 ). m. medo , y .- c .zhang , and t. zhou , epl * 88 * , 38005 ( 2009 ) .g. cimini , m. medo , t. zhou , d. wei , and y .- c .zhang , eur .j. b * 80 * , 201 ( 2011 ) .d. wei , t. zhou , g. cimini , p. wu , w. liu , and y .- c .zhang , physica a * 390 * , 2117 ( 2011 ) .p. l. prapivsky , s. redner , and d. volovik , arxiv : 1104.4107 .d. centola , science * 329 * , 1194 ( 2010 ) .d. j. watts and s. h. strogatz , nature ( london ) * 393 * , 440 ( 1998 ). f. j. prez - reche , j. j. ludlam , s. n. taraskin , and c. a. gilligan , phys .lett . * 106 * , 218701 ( 2011 ) . f. c. santos , j. f. rodrigues , and j. m. pacheco , phys .e * 72 * , 056128 ( 2005 ) .s. maslov and k. sneppen , science * 296 * , 910 ( 2002 ) .v. m. eguluz and k. klemm , phys .lett . * 89 * , 108701 ( 2002 ) .t. petermann and p. de los rios , phys .e * 69 * , 066116 ( 2004 ) .t. zhou , g. yan , and b .- h .wang , phys .e * 71 * , 046141 ( 2005 ) .t. zhou , z .- q .fu , and b .- h .wang , prog .* 16 * , 452 ( 2006 ) .d. h. zanette , phys .e * 64 * , 050901(r ) ( 2001 ) . c. castellan , s. fortunato , and v. loreto , rev .phys . * 81 * , 591 ( 2010 ) .f. wu and b. a. huberman , proc .104 * , 17599 ( 2007 ) .onnela , j.saramki , j. hyvnen , g. szab , d. lazer , k. kaski , j. kersz , and a .-barabsi , proc .104 * , 7332 ( 2007 ) .l. l and t. zhou , epl * 89 * , 18001 ( 2010 ) . g. miritello , e. moro , and r. lara , phys . rev .e * 83 * , 045102(r ) ( 2011 ) .r. crane and d. sornette , proc .* 105 * , 15649 ( 2008 ) .s. wu , j. m. hofman , w. a. mason , and d. j. watts , proc .20th intl .www ( new york : acm press , 2011 ) .
|
spreading dynamics of information and diseases are usually analyzed by using a unified framework and analogous models . in this paper , we propose a model to emphasize the essential difference between information spreading and epidemic spreading , where the memory effects , the social reinforcement and the non - redundancy of contacts are taken into account . under certain conditions , the information spreads faster and broader in regular networks than in random networks , which to some extent supports the recent experimental observation of spreading in online society [ d. centola , science * 329 * , 1194 ( 2010 ) ] . at the same time , simulation result indicates that the random networks tend to be favorable for effective spreading when the network size increases . this challenges the validity of the above - mentioned experiment for large - scale systems . more significantly , we show that the spreading effectiveness can be sharply enhanced by introducing a little randomness into the regular structure , namely the small - world networks yield the most effective information spreading . our work provides insights to the understanding of the role of local clustering in information spreading .
|
quantum computation makes it possible to achieve polynomial complexity for many classical problems that are believed to be hard . to preserve coherence , quantum operations need to be protected by quantum error correcting codes ( qeccs ) . with error probabilities in elementary gates below a certain threshold ,one can use multiple layers of encoding ( concatenation ) to reduce errors at each level and ultimately make arbitrarily - long quantum computation possible .the actual value of the threshold error probability strongly depends on the assumptions of the error model and on the chosen architecture , and presently varies from for a chain of qubits with nearest - neighbor couplings and for qubits with nearest - neighbor couplings in two dimensions , to with postselection , or even above if additional constraints on errors are imposed .the quoted estimates have been made using stabilizer codes , an important class of codes which originate from additive quaternary codes , and have a particularly simple structure based on abelian groups .recently , a more general class of codeword stabilized ( cws ) quantum codes was introduced in refs .this class includes stabilizer codes , but is more directly related to non - linear classical codes .this direct relation to classical codes is , arguably , the most important advantage of the cws framework . specifically , the classical code associated with a given cws quantum code has to correct certain error patterns induced by a graph associated with the code .the graph also determines the graph state serving as a starting point for an encoding algorithm exploiting the structure of the classical code . with the help of powerful techniques from the theory of classical codes , already several new families of non - additive codeshave been discovered , including codes with parameters proven to be superior to any stabilizer code .both classical additive codes and additive quantum codes can be corrected by first finding the syndrome of a corrupted vector or quantum state , respectively , and then looking up the corresponding error ( coset leader ) in a precomputed table .this is not the case for non - linear codes .in fact , even the notions of a syndrome and a coset become invalid for general non - linear codes . 
furthermore , since quantum error correction must preserve the original quantum state in all intermediate measurements , it is more restrictive than many classical algorithms .therefore , the design of a useful cws code must be complemented by an efficient quantum error correction algorithm .the goal of this work is to address this important unresolved problem for binary cws codes .first , we design a procedure to _ detect _ an error in a narrower class , the _ union stabilizer _ ( ust ) codes , which possess some partial group structure .then , for a general cws code and a set of graph - induced maps of correctable errors forming a group , we construct an auxiliary ust code which is the union of the images of the original cws code shifted by all the elements of the group .finally , we construct abelian groups associated with correctable errors located on certain _ index sets _ of qubits .the actual error is found by first applying error - detection to locate the index set with the relevant auxiliary ust code , then using a collection of smaller ust codes to pinpoint the error in the group .since we process large groups of errors simultaneously , we make a significant reduction of the number of measurements compared with the brute force error correction for non - linear ( quantum or classical ) codes .more precisely , we consider an arbitrary distance- cws code that uses qubits to encode a hilbert space of dimension and can correct all -qubit errors , where . in sec .[ sec : background ] we give a brief overview of the notations and relevant facts from the theory of quantum error correction . then in sec .[ sec : generic ] , we construct a reference recovery algorithm that deals with errors individually .this algorithm requires up to measurements , where is the total number of errors of size up to ( this bound is tight for non - degenerate codes ) .each of these measurements requires up to two - qubit gates . in order to eventually reduce the overall complexity, we consider the special case of ust codes in sec .[ sec : ust - measurement ] .here we design an error - detecting measurement for a ust code with a translation set of size that requires two - qubit gates to identify a single error .our error grouping technique presented in sec .[ sec : clusters ] utilizes such a measurement to check for several errors at once . foradditive cws codes the technique reduces to stabilizer - based recovery [ sec .[ sec : additive ] ] . in the case of generic cws codes[ sec : generic - cws ] ] , we can simultaneously check for all errors located on size- qubit clusters ; graph - induced maps of these errors form groups of size up to .searching for errors in blocks of this size requires up to measurements to locate the cluster , plus up to additional measurements to locate the error inside the group . in sec .[ sec : conclusions ] we discuss the obtained results and outline the directions of further study . 
finally ,in appendix [ app : orthogonality ] we consider some details of the structure of corrupted spaces for the codes discussed in this work .note that some of the reported results have been previously announced in ref .throughout the paper , denotes the complex hilbert space that consists of all possible states of a single qubit , where and correspondingly , we use the space to represent any -qubit state .also , denotes the pauli group of size , where , , are the usual ( hermitian ) pauli matrices and is the identity matrix .the members of this group are called pauli operators ; the operators in ( [ eq : pauli - group ] ) with form a basis of the vector space that consists of all operators acting on -qubit states .the _ weight _ of a pauli operator is the number of terms in the tensor product ( [ eq : pauli - group ] ) which are not a scalar multiple of identity .up to an overall phase , a pauli operator can be specified in terms of two binary strings , and , hermitian operators in have eigenvalues equal to or .generally , unitary operators ( which can be outside of the pauli group ) which are also hermitian , i.e. , all eigenvalues are , will be particularly important in the discussion of measurements .we will call these operators _ measurement operators_. indeed , for such an operator , a measurement gives a boolean outcome and can be constructed with the help of a single ancilla , two hadamard gates , and a controlled gate ( see fig .[ fig : measurementm ] ) .the algebra of measurement operators is related to the algebra of projection operators discussed in , but the former operators , being unitary , are more convenient in circuits . with all eigenvalues .the first hadamard gate prepares the ancilla in the state , hence .the controlled- gate returns .the second hadamard gate finishes the incomplete measurement , , where we used the projector identities ( [ eq : projector - identities ] ) .if the outcome of the ancilla measurement is , the result is the projection of the initial -qubit state onto the eigenspace of ( ) , otherwise it is the projection onto the eigenspace of ( ) . for an input state with ancilla in the state , the circuit returns . ] a measurement of an observable defined by a pauli operator will be also called pauli measurement . for lack of a better term , other measurements will be called _ non - pauli _ ; typically the corresponding circuits are much more complicated than those for pauli measurements .we say that a state is stabilized ( anti - stabilized ) by a measurement operator if ( . the corresponding projectors onto the positive and negative eigenspace are denoted by and , respectively ; they satisfy the identities we say that a space is stabilized by a set of operators if each vector in is stabilized by each operator in .we use to denote the maximum space stabilized by , and to denote the corresponding orthogonal complement . 
for a set of measurement operators ,each state in is anti - stabilized by some operator in .when discussing complexity , we will quote the two - qubit complexity which just counts the total number of two - qubit gates .thus , we ignore any communication overhead , as well as any overhead associated with single - qubit gates .for example , the complexity of the measurement in fig .[ fig : measurementm ] is just that of the controlled- gate operating on qubits .for all circuits we discuss , the total number of gates ( single- and two - qubit ) is of the same order in as the two - qubit complexity .a general -qubit quantum code encoding quantum states is a -dimensional subspace of the hilbert space .let be an orthonormal basis of the -dimensional code and let be some set of pauli errors .the overall phase of an error [ in eq .( [ eq : pauli - group ] ) ] is irrelevant and will be largely ignored .the code detects all errors if and only if where only depends on the error , but is independent of the basis vectors .the code has distance if it can detect all pauli errors of weight , but not all errors of weight .such a code is denoted by .the necessary and sufficient condition for correcting errors in is that all non - trivial combinations of errors from are detectable .this gives where and , again , is the same for all basis states , .a distance- code corrects all errors of weight such that , that is , .the code is _ non - degenerate _ if linearly independent errors from produce corrupted spaces whose intersection is trivial ( equals to ) ;otherwise the code is _ degenerate _ . a stricter condition that the code is _ pure _ ( with respect to )requires that the corrupted spaces and be mutually orthogonal for all linearly independent correctable errors , . for a degenerate code, we call a pair of correctable errors _ mutually - degenerate _ if the corrupted spaces and coincide .such errors belong to the same _ degeneracy class_. for recovery , one only needs to identify the degeneracy class of the error that happened .the operators like , connecting mutually - degenerate correctable errors and , have no effect on the code and can be ignored . as shown in appendix [ app : orthogonality ] , for all codes discussed in this work , any two correctable errors , yield corrupted spaces , that are either identical or orthogonal .then , errors from different degeneracy classes take the code to corrupted spaces that are mutually orthogonal .also , for these codes , a non - degenerate code is always pure . in terms of the error correction condition ( [ sncondtion - correcting ] ), we have for errors , in different degeneracy classes and for errors in the same degeneracy class .stabilizer codes are a well known family of quantum error - correcting codes that are analogous to classical linear codes .an ] stabilizer code is defined by the generators for this code , the logical operators can be taken as a basis of the code space is ( up to normalization ) latexmath:[\[|\bar{0}\rangle = \prod_{i=1}^{4}{(\openone+g_{i})}|00000\rangle , \quad basis states are stabilized by the generators .the corresponding stabilizer group is .the group of equivalence classes of correctable errors is generated by the representatives ( note the mixed notation , e.g. 
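The defining algebra of a measurement operator (Hermitian, unitary, all eigenvalues equal to +1 or -1) and of its projectors can be verified numerically. The snippet below does so for a two-qubit Pauli string and evaluates the two outcome probabilities that the ancilla-based circuit of the figure would produce for an arbitrary input state; it checks the definitions rather than simulating the circuit gate by gate.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

M = np.kron(X, Z)                       # a Hermitian, unitary measurement operator
P_plus = (np.eye(4) + M) / 2            # projector onto the +1 eigenspace
P_minus = (np.eye(4) - M) / 2           # projector onto the -1 eigenspace

assert np.allclose(P_plus @ P_plus, P_plus)           # projector identities
assert np.allclose(P_plus @ P_minus, np.zeros((4, 4)))
assert np.allclose(P_plus + P_minus, np.eye(4))

psi = np.array([1, 1, 1, 0], dtype=complex)
psi /= np.linalg.norm(psi)
p_plus = np.linalg.norm(P_plus @ psi) ** 2            # probability of the "+1" ancilla outcome
p_minus = np.linalg.norm(P_minus @ psi) ** 2          # probability of the "-1" ancilla outcome
print(p_plus, p_minus, p_plus + p_minus)              # the two probabilities sum to one
```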
, ) the are chosen to commute with the logical operators and also to satisfy .note that the operators of weight one forming the correctable error set do not by themselves form a group .the generators can be used to map correctable errors to the corresponding group elements with the same syndrome .this gives , e.g. , , , . the decomposition ( [ eq : subspace - decomposition ] ) can be viewed as a constructive definition of the abelian group of all _ translations _ of the original stabilizer code in .( in the following , this stabilizer code is denoted . )any two non - equivalent translations belong to different cosets of the normalizer of the code in and , therefore , yield mutually orthogonal shifts a union stabilizer code ( ust ) is a direct sum of shifts of the code by non - equivalent translations , .the basis of the code defined by ( [ eq : ust - defined ] ) is the union of the sets of basis vectors of all . as a result ,the dimension of is .this code is then denoted , where is the distance of the new code .generally , this distance does not exceed the distance of the original code with respect to the same error set , .however , if the code is degenerate and the original code is one - dimensional , this need not be true . the unique state defined by eq .( [ eq : stabilized - state ] ) itself forms a single - state stabilizer code ] with the stabilizer and logical operators , , corresponds to a cws code with the stabilizer and the codeword operator set forming a group of size .generally , an lc transformation is required to obtain standard form of this code .conversely , an additive cws code where the codeword operators form an abelian group ( in which case with integer ) is a stabilizer code ] stabilizer code[ see example 1 ] as a cws code in standard form , we explicitly construct alternative generators of the stabilizer [ eq . ( [ eq : stab1 ] ) ] to contain only one operator each .we obtain and its four cyclic permutations .this does not require any qubit rotations due to a slightly unconventional choice of the logical operators in eq .( [ eq : logical513 ] ) .the corresponding graph is the ring , see fig .[ fig : tria](b ) .the codeword operators are , which by eq .( [ eq : logical513 ] ) correspond to classical binary codewords .note that the error map induced by the graph is different from the mapping to group elements in example 1 ; in particular , , , . this section we construct a generic recovery algorithm which can be adapted to any non - additive code . 
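As an illustration of the graph-induced error map used in Example 4, the following sketch follows the usual standard-form CWS convention, in which a Pauli error of the form X^v Z^u acting on a graph state is equivalent, up to signs, to a Z-only error whose classical pattern is u plus the sum (mod 2) of the adjacency-matrix rows selected by v. The specific images quoted in the example may follow a slightly different convention, so this is only indicative.

```python
# Hedged sketch of the standard-form CWS error map: +/- X^v Z^u on a graph state
# maps to the classical binary pattern u + sum_{i: v_i = 1} r_i (mod 2), where
# r_i is row i of the graph adjacency matrix.
import numpy as np

def cws_error_map(v, u, adj):
    v, u, adj = (np.asarray(x) % 2 for x in (v, u, adj))
    return (u + v @ adj) % 2          # classical binary error pattern

# Ring graph C_5, the graph of the [[5,1,3]] code in standard form
n = 5
adj = np.zeros((n, n), dtype=int)
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1

# Images of the single-qubit errors X, Y, Z on the first qubit
for name, (v, u) in {"X1": ([1, 0, 0, 0, 0], [0] * 5),
                     "Y1": ([1, 0, 0, 0, 0], [1, 0, 0, 0, 0]),
                     "Z1": ([0] * 5,         [1, 0, 0, 0, 0])}.items():
    print(name, cws_error_map(v, u, adj))
# X1 [0 1 0 0 1], Y1 [1 1 0 0 1], Z1 [1 0 0 0 0] under this convention
```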
to our best knowledge ,such an algorithm has not been explicitly constructed before .the basic idea is to construct a non - pauli measurement operator , where is the projector onto the code spanned by the orthonormal basis .the measurement operator is further decomposed using the identity we use the graph state encoding unitary from eq .( [ stabilizedstate ] ) and the following decomposition of the standard - form basis of the cws code in terms of the graph state and the classical states : the measurement operator is rewritten as a product where the measurement operator stabilizes the orthogonal complement of the classical state .the components of the binary vector are the respective complements of , .the operator in the parentheses in eq .( [ eq : factorizationmirror1 ] ) is the -controlled gate ( the -operator is applied to the -th qubit only if all the remaining qubits are in state ) .it can also be represented as -qubit controlled phase gate with , where the operator projects onto the state .this can be further decomposed as a product of two hadamard gates and an -controlled cnot gate [ fig .[ fig : cz - decomposition ] ] which for can be implemented in terms of one ancilla and three - qubit toffoli gates and therefore has linear complexity in . with no ancillas ,the complexity of the -controlled cnot gate is .-qubit controlled- gate in terms of -controlled cnot gate ( -qubit toffoli gate ) for . ]the corresponding ancilla - based measurement for can be constructed with the help of two hadamard gates [ fig .[ fig : measurementm ] ] by adding an extra control to each gate .indeed , this correlates the state of the ancilla with acting on the qubits , and the state of the ancilla with .when constructing the measurement for the product of the operators , it is sufficient to use only one ancilla , since for each basis state only one of these operators acts non - trivially . the classical part of the overall measurement circuit without the graph state encoding is shown in fig .[ fig : gen ] .the complexity of measuring [ eq . ( [ eq : mirroroperator ] ) ] then becomes times the complexity of -qubit toffoli gate for measuring each , plus the complexity of the encoding circuit and its inverse , which is at most .overall , for large , the measurement complexity is no larger than , or for a circuit without additional ancillas .we would like to emphasize that so far we have only constructed the measurement for _ error detection_. actual _ error correction _ for a non - additive code in this scheme involves constructing measurements for all corrupted subspaces corresponding to different degeneracy classes given by different .this relies on the orthogonality of the corrupted subspaces , see appendix [ app : orthogonality ] . for a general -error correcting code, the number of these measurements can reach the same exponential order as the number of correctable errors in eq .( [ eq : sphere ] ) . for non - degenerate codes, we can not do better using this method . note that the measurement circuit derived in this section first decodes the quantum information , then performs the measurement for the classical code , and finally re - encodes the quantum state . .notations as in fig .[ measurementand ] . the result ( cf .[ fig : measurementm ] ) is equivalent to . ]in this section we construct a quantum circuit for the measurement operator of a ust code . to this end , we define the logical combinations of non - pauli measurements in agreement with analogous combinations defined in ref . 
for the projection operators , and construct the circuits for logical combinations and [ figs .[ measurementand ] , [ measurementand_new ] ] and xor [ figs. [ measurementxor ] , [ fig : measurementm1plusm0 ] ] .we use these circuits to construct the measurement for with complexity not exceeding . _logical and _ : given two commuting measurement operators and , let denote the measurement operator that stabilizes all states in the subspace the output of the measurement is identical to the logical and operation performed on the output of measurements and .this measurement can be implemented by the circuit in fig .[ measurementand ] . herethe first two ancillas are entangled with the two measurement outcomes ; the third ancilla is flipped only if both ancillas are in the state , which gives the combination .the projector onto the positive eigenspace of satisfies the identity this identity can be used to obtain a simplified circuit which only uses two ancillas , see fig .[ measurementand_new ] , with the price of two additional controlled - hadamard gates [ fig .[ chgate ] ] .gate based on the identity . ] the circuits in figs .[ measurementand ] and [ measurementand_new ] can be generalized to perform the measurement corresponding to the logical and of commuting measurement operators with the help of associativity , e.g. , .the generalization of the simplified circuit in fig .[ measurementand_new ] requires only two ancillas for any .the corresponding complexity is times the complexity of a controlled- gate , plus times the complexity of a controlled single - qubit gate .when all are -qubit pauli operators , the overall complexity with two ancillas is . _logical xor _ :in analogy to the logical `` exclusive or '' , we define the symmetric difference of vector spaces , , , as the vector space formed by the basis vectors that belong to an odd number of the original vector spaces . for two vector spaces this operation is obviously associative , . for two commuting measurement operators , , let be the measurement operator that stabilizes the subspace .explicitly , \oplus \left[\mathcal{p}(m_{0})\cap\mathcal{p}^\perp(m_{1})\right ] .\label{eq : oplus - space}\end{aligned}\ ] ] the output of measuring is identical to the logical xor operation performed on the outputs of measurements and .the corresponding measurement can be implemented by combining the two ancillas with a cnot gate [ fig .[ measurementxor ] ] . to simplify this measurement ,we show that .indeed , eq . ( [ eq : oplus - space ] ) implies that for the projection operators , , the corresponding measurement operator is factorized with the help of the projector identities ( [ eq : projector - identities ] ) , -\openone\\ & = & -(2p_1-\openone ) ( 2p_0-\openone ) = -m_1m_0.\end{aligned}\ ] ] this implies that .in other words , the measurement of can be implemented simply as an ( inverted ) concatenation of two measurements , see fig . [fig : measurementm1plusm0 ] . the same circuit can also be obtained from that in fig .[ measurementxor ] by a sequence of circuit simplifications ( not shown ) . the circuit in fig . 
[ fig : measurementm1plusm0 ] is immediately generalized to a combination of more than two measurements , .the corresponding complexity for computing the xor of measurements is simply the sum of the individual complexities , implying that this concatenation has no overhead .a ust code is a direct sum ( [ eq : ust - defined ] ) of mutually orthogonal subspaces obtained by translating the originating stabilizer code \bigtriangleup ] .no additional overhead is required to form the logical xor of the results .thus , we obtain the following [ theorem : complexity1 ] error detection for a ust code of length and dimension , formed by a translation set of size , has complexity at most . note that in the special case of a cws code ( ) , the prefactor of is quadratic in whereas the corresponding prefactor obtained in sec .[ sec : generic ] is _ linear _ in .the reason is that in eq .( [ eq : mirroroperator ] ) the graph encoding circuit with complexity is used only twice , and the projections onto the classical states have linear complexity . in eq .( [ eq : ust - decomposition ] ) we are using projections onto basis states of the quantum code .the advantage of the more complex measurement constructed in this section is that it does not involve having unprotected decoded qubits for the entire duration of the measurement .recall from section [ sec : stabilizer - codes ] that for stabilizer codes the representatives of the error degeneracy classes form an abelian group whose generators are in one - to - one correspondence with the generators of the stabilizer .measuring the generators of the stabilizer of a stabilizer code ] [ see eq .( [ eq : err - detection - aux ] ) below ] . in other words, can be viewed as a quantum code which detects errors from .this observation , together with the error - detection measurement for ust codes constructed in the previous section , forms the basis of our error grouping technique .we prove the following for a cws code in standard form and a group formed by graph images of some correctable errors in , the code is a ust code which detects all errors in .[ theorem : error - detect ] * proof*. first , we show that the subspace is a ust code .the corresponding set of basis vectors is these vectors are mutually orthogonal , since every element of the group is a representative of a separate error degeneracy class .further , the group is abelian , and its elements commute with the codeword generators , .therefore , using eq .( [ eq : codeword - def ] ) , we can rearrange the set ( [ eq : aux - basis ] ) as the set in the parentheses on the right hand side is a basis of the additive cws code formed by the group acting on the graph state .then , we can write the subspace explicitly as a ust code [ cf . eq . ( [ eq : ust - defined ] ) ] where the translations are given by the set of codeword operators of the original code . orthogonality condition ( [ eq : ust - orthogonality ] ) is ensured by eq .( [ eq : orthogonality ] ) .second , we check the error - detection condition ( [ sncondtion - detecting ] ) for the code ( [ eq : aux - code ] ) .explicitly , for an error , and for the orthogonal basis states , for all , according to eqs .( [ sncondtion - correcting ] ) , ( [ eq : unrelated ] ) and the group property of . 
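As a quick numerical check of the XOR factorization used above for concatenating detection measurements (the measurement operator of the symmetric difference equals minus the product of the two commuting measurement operators), here is a minimal example with two single-qubit Z measurements on a two-qubit space; the operators and states are toy choices, not taken from the paper.

```python
# Numerical check of the identity  M_{1 xor 0} = -M_1 M_0  for commuting
# measurement operators (Hermitian, unitary, eigenvalues +/- 1).
import numpy as np

I = np.eye(2); Z = np.diag([1.0, -1.0])
M0 = np.kron(Z, I)            # Z measured on qubit 1
M1 = np.kron(I, Z)            # Z measured on qubit 2

P0, P1 = (np.eye(4) + M0) / 2, (np.eye(4) + M1) / 2             # projectors
P_xor = P1 @ (np.eye(4) - P0) + P0 @ (np.eye(4) - P1)           # symmetric difference
M_xor = 2 * P_xor - np.eye(4)

print(np.allclose(M_xor, -M1 @ M0))                             # True
```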
now , to correct errors in groups , we just have to find a suitable decomposition of the graph images of the original error set into a collection of groups , , and perform individual error - detection measurements for the auxiliary codes until the group containing the error is identified . to find an error within a group with generators , we can try all subgroups of with one generator missing .more specifically , for a generator we consider the subgroup and perform error detection for the code . after completing measurements ,we obtain a representative of the actual error class .this is the product of all generators for which the corresponding code detected an error . to actually carry out the discussed program , we need to construct the generators of the stabilizer of the code .the generators have to commute with the generators in the group .this can be done with the gram - schmidt ( gs ) orthogonalization of the graph - state generators [ eq . ( [ eq : canonical - generators ] ) ] with respect to the generators .as a result , we obtain the orthogonalized set of independent generators such that .we can take the last of the obtained generators as the generators of the stabilizer , , .the orthogonalization procedure is guaranteed to produce exactly generators anti - commuting with the corresponding errors , .indeed , the gs orthogonalization procedure can be viewed as a sequence of row operations applied to the original binary matrix with the elements which define the original commutation relation , the generator anti - commutes with at least one operator in if and only if the -th column of is not an all - zero column. then all generators are independent ( no generator can be expressed as a product of some others ) if and only if has full column rank . by this explicit construction , the generators of the stabilizer of the auxiliary code [ eq .( [ eq : aux - code ] ) ] are pauli operators in the original graph - state basis . the complexity of each error - detection measurement is therefore given by theorem [ theorem : complexity1 ] .the procedure described above appears to be extremely tedious , much more complicated than the syndrome measurement for a stabilizer code .however , it turns out that for stabilizer codes this is no more difficult than the regular syndrome - based error correction .indeed , for a stabilizer code ] , see examples 1 and 4 .the graph - induced maps of single - qubit errors form a group of translations of the code , .this group contains all error degeneracy classes , . with the addition of the logical operator , these can generate the entire -qubit hilbert space from the graph state ; we have . indeed , if we form a measurement as for a generic cws code , we first obtain the stabilizer of the auxiliary code [ eq .( [ eq : aux - code ] ) ] which in this case has only one generator , , where , see example 4 . translating this code with the set ( in this case , group ) of codeword operators , we get the auxiliary ust code as the union of the positive eigenspaces of the operators [ see eq .( [ eq : ust - space - decomposition ] ) ] , which is the entire hilbert space , , as expected . to locate the error within the group with generators , we form a set of smaller codes , , where the group is obtained from by removing the -th generator .the corresponding stabilizers are , , etc .the matrices of conjugated generators have the form , e.g. 
, {cc } s_1 , & -s_1\\ s_5 , & -s_5 \end{array } \right ) , \quad m_{i , j}^{(2)}=\left ( \begin{array}[c]{cc } s_2 , & -s_2\\ s_5 , & -s_5 \end{array } \right ) , \ldots\ ] ] the code is formed as the union of the common positive eigenspaces of the operators in the columns of the matrix .clearly , these codes can be more compactly introduced as positive eigenspaces of the operators , .such a simplification only happens when the original code is additive . while the operators are different from the stabilizer generators in eq .( [ eq : stab513 ] ) , they generate the same stabilizer of the original code .it is also easy to check that the same procedure gives the original generators [ eq . ( [ eq : stab513 ] ) ] if we start with the error representatives ( [ eq:513-errors ] ) . now consider the case of a generic cws code . without analyzing the graph structure ,it is impossible to tell whether there is any set of classical images of correctable errors that forms a large group .however , since we know its minimum distance , we know that the code can correct errors located on qubits .all errors located on a given set of qubits form a group . therefore , by taking an _ index set _ of different qubit positions , we can ensure that the corresponding correctable errors form a group with independent generators .the corresponding graph images obey the same multiplication table , but they are not necessarily independent . as a result , the abelian group generally has generators .since all group elements correspond to correctable errors , the conditions of theorem [ theorem : error - detect ] are satisfied . overall , to locate an error of weight or less , we need to iterate over each ( but the last one ) of the index sets of size and perform the error - detecting measurements in the corresponding ust codes until the index set with the error is found .this requires up to measurements to locate the index set , and the error can be identified after additional measurements .this can be summarized as the following a cws code of distance can correct errors of weight up to by performing at most measurements . for any length , this scheme reduces the total number ( [ eq : sphere ] ) of error patterns by a factor {ll } \displaystyle \frac{3n+1}{n+1 } , & \text{if}\;\,t=1,\\ \displaystyle \;3^{t^{\text{\strut } } } , & \text{if}\;\,t>1 .\end{array } \right .\label{factoreven}\ ] ] * example 6 . * consider the code previously discussed in example 3 . while the distance is too small to correct arbitrary errors , we can correct an error located at a given qubit .assume that an error may have happened on the second qubit .then we only need to check the index set .the errors located in form a group with generators ; the corresponding group of classical error patterns induced by the ring graph in fig .[ fig : tria](b ) is .the three generators of the stabilizer of the originating ust code can be chosen as , e.g. , , , . using the classical codewords ( [ classicalcode ] ) for the translation operators , we obtain the conjugated generators {rrrrrr } g_1 , & -g_1 , & g_1 , & g_1 , & g_1 , & -g_1\\ g_2 , & g_2 , & -g_2 , & -g_2 , & g_2 , & -g_2\\ g_3 , & -g_3 , & g_3 , & -g_3 , & -g_3 , & g_3 \end{array}\right).\ ] ] according to eq .( [ eq : ust - space - decomposition ] ) , the auxiliary code is a direct sum of the common positive eigenspaces of the operators in the six columns of the matrix ( [ eq : translated-562 ] ) . 
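As a small numerical aside on the statement just made (the auxiliary code is a direct sum of common positive eigenspaces of sign patterns of commuting generators), the toy example below, with two commuting two-qubit generators, confirms that the common eigenspaces selected by the different sign patterns are mutually orthogonal and together fill the whole Hilbert space; the generators here are illustrative choices, not those of Example 6.

```python
# Toy illustration: for commuting generators, the common eigenspaces picked out
# by all sign patterns decompose the full Hilbert space.
import numpy as np
from itertools import product
from functools import reduce

I = np.eye(2); Z = np.diag([1.0, -1.0]); X = np.array([[0.0, 1.0], [1.0, 0.0]])
g = [np.kron(Z, Z), np.kron(X, X)]              # two commuting generators

total = 0
for signs in product([+1, -1], repeat=2):       # the possible sign "columns"
    P = reduce(lambda A, B: A @ B,
               [(np.eye(4) + s * gi) / 2 for s, gi in zip(signs, g)])
    total += int(round(np.trace(P)))            # dimension of that eigenspace
print(total)                                    # 4 = dimension of the full space
```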
to locate the actual error in this -dimensional space , we consider the two subgroups and of .the stabilizers of the corresponding auxiliary codes and can be obtained by adding and , respectively ; this adds one of the rows to the matrix ( [ eq : translated-562 ] ) .the original code is the intersection of the codes and ; the corrupted space is located in , but not in , while , e.g. , the corrupted space is located in , but not in or . a similar procedure can be carried over for a general ust code , with the only difference that the definitions of the groups and the auxiliary codes [ eq . ( [ eq : aux - code ] ) ] should also include the generators of the originating stabilizer code [ sec . [ sec : ust ] ] .overall , the complexity of error recovery for a generic ust code can be summarized by the following [ main]consider any -error correcting ust code of length and dimension , with the translation set of size .then this code can correct errors using or fewer measurements , each of which has complexity or less . for additive quantum codes, the syndrome measurement locates all error equivalence classes , not only those with `` coset leaders '' of weight .the same could be achieved with a series of clustered measurements , by first going over all clusters of weight , then , etc .this ensures that the first located error has the smallest weight .in contrast , such a procedure will likely fail for a non - additive code where the corrupted spaces and can partially overlap if either or is non - correctable . for instance , the measurement in example 6 may destroy the coherent superposition if the actual error ( e.g. , , ) was not on the second qubit. therefore , if no error was detected after measurements , we can continue searching for the higher - weight errors only after testing the remaining size- index set . with a non - additive cws code , generally we have to do a separate measurement for each additional correctable error of weight .for generic cws and ust codes , we constructed a _ structured recovery algorithm _ which uses a single non - pauli measurement to check for groups of errors located on clusters of qubits .unfortunately , for a generic cws code with large and large distance , both the number of measurements and the corresponding complexity are exponentially large , in spite of the exponential acceleration already achieved by the combined measurement . to be deployed , error - correction must be complemented with some fault - tolerant scheme for elementary gates .it is an important open question whether a fault - tolerant version of our measurement circuits can be constructed for non - additive cws codes .it is clear , however , that such a procedure would _ not _help for any cws code that needs an exponential number of gates for recovery .therefore , the most important question is whether this design can be simplified further .we first note that the group - based recovery [ see theorem [ theorem : error - detect ] ] is likely as efficient as it can possibly be , illustrated by the example of additive codes in sec .[ sec : additive ] where this procedure is shown to be equivalent to syndrome - based recovery . also , while it is possible that for fixed the complexity estimate of theorem [ theorem : complexity1 ] can be reduced in terms of ( e.g. , by reusing ancillas with measured stabilizer values ) , we think that for a generic code the complexity is linear in . 
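To put some numbers on the counting discussed above, the following sketch compares the total number of weight-at-most-t Pauli error patterns entering eq. (eq:sphere) with the reduction factor quoted in eq. (factoreven), namely (3n+1)/(n+1) for t = 1 and roughly 3^t for t > 1; the exact per-code measurement counts of the theorems are not reproduced here.

```python
# Hedged counting sketch: brute-force error patterns vs. the quoted reduction
# factor of the grouped (clustered) measurement scheme.
from math import comb

def n_error_patterns(n, t):
    """Pauli errors of weight at most t on n qubits (identity included)."""
    return sum(comb(n, j) * 3**j for j in range(t + 1))

def reduction_factor(n, t):
    return (3 * n + 1) / (n + 1) if t == 1 else 3**t

for n, t in [(5, 1), (11, 2), (23, 3)]:
    print(n, t, n_error_patterns(n, t), round(reduction_factor(n, t), 2))
```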
however , specific families of cws codes might be represented as unions of just a few stabilizer codes which might be mutually equivalent as in eq .( [ eq : ust - defined ] ) , or non - equivalent . the corresponding measurement complexity for error detectionwould then be dramatically reduced .examples are given by the quantum codes derived from the classical non - linear goethals and preparata codes .another possibility is that for particular codes , larger sets of correctable errors may form groups .indeed , we saw that for an additive code , all error degeneracy classes form a large group of size which may include some errors of weight well beyond .such a group also exists for a cws code which is a subcode of an additive code .there could be interesting families of non - additive cws codes which admit groups of correctable errors of size beyond .for such a code , the number of measurements required for recovery could be additionally reduced .this research was supported in part by the nsf grant no .centre for quantum technologies is a research centre of excellence funded by ministry of education and national research foundation of singapore .the authors are grateful to bei zeng for the detailed explanation of the cws graph construction .as discussed in sec . [ sec : general ] , for a general non - additive quantum code and two linearly independent correctable errors , the corrupted spaces and may be neither identical nor orthogonal .however , for cws and ust codes it is almost self - evident that when and do not coinside , they are mutually orthogonal . this orthogonality is inherited from the originating stabilizer code . in particular , in some previous publications ( e.g. , ref . ) orthogonality is implied in the discussion of degenerate errors for cws codes . however , to our knowledge , it was never explicitly discussed for cws or ust codes . since our recovery algorithms for cws and ust codes rely heavily on this orthogonality , we give here an explicit proof .first , consider a stabilizer code .for any pauli operator , there are three possibilities : ( * i * ) is proportional to a member of the stabilizer group , , where and , , ( * ii * ) is in the code normalizer but is linearly independent of any member of the stabilizer group , and ( * iii * ) is outside of the normalizer , . case ( * i * ) implies that the space is identical to the code ; the errors and are mutually degenerate .indeed , for any basis vector , the action of the error just introduces a common phase ; any vector is mapped to and hence no recovery is needed . in case ( * ii * ) the operator also maps to itself , but no longer identically .therefore , at least one of the two errors , is not correctable .indeed , in this case we can decompose ( see sec . [sec : stabilizer - codes ] ) as the product of an element in the stabilizer and logical operators , i.e. , , where determines the overall phase . while acts trivially on the code , the logical operator specified by the binary - vectors , is non - trivial , . 
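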
using the explicit basis ( [ eq : stabilizer - code - basis ] ) , it is easy to check that the error - correction condition ( [ sncondtion - correcting ] ) is _ not _satisfied for the operators , .finally , in case ( * iii * ) the spaces and are mutually orthogonal .indeed , since is outside of the code normalizer , there is an element of the stabilizer group that does not commute with .therefore , for any two states in the code , , we can write which gives , and the spaces and [ also , and are mutually orthogonal . now , consider the same three cases for a ust code ( [ eq : ust - defined ] ) derived from . in case( * i * ) the code is mapped to itself , .the operator acts trivially on the code ( and the errors , are mutually degenerate ) if either commutes ( [ eq : translations - commute ] ) or anti - commutes ( [ eq : translations - anti - commute ] ) with the entire set of translations generating the code : if neither of these conditions is satisfied , the error - correction condition ( [ sncondtion - correcting ] ) is violated .this is easily checked using the basis .finally , in case ( * iii * ) , the space is either orthogonal to , or the error correction condition is not satisfied .the latter is true if is proportional to an element in one of the cosets , where , .then the inner product , , which contradicts the error - correction condition ( [ sncondtion - correcting ] ) . in the other case , namely , when is linearly independent of any operator of the form , , , must be a member of a different coset of the stabilizer of the code in .this implies orthogonality: where accounts for a possible phase factor .overall , as long as the error correction condition ( [ sncondtion - correcting ] ) is valid for a ust code and the pauli operators , , the spaces and either coincide , or are orthogonal . since cws codes can be regarded as ust codes originating from a one - dimensional stabilizer code , [ sec .[ sec : cws - defined ] ] , the same is also true for any cws code .m. hein et al . , entanglement in graph states and its applications , in _ quant ., algorithms . and chaos : proc .school physics `` enrico fermi '' _ , vol .115218 , ios press , amsterdam , 2005 .
|
codeword stabilized (cws) codes are, in general, non-additive quantum codes that can correct errors by an exhaustive search over different error patterns, similar to the way we decode classical non-linear codes. for an -qubit quantum code correcting errors on up to qubits, this brute-force approach consecutively tests different errors of weight or less and employs a separate -qubit measurement in each test. in this paper we suggest an error grouping technique that allows large groups of errors to be tested simultaneously in a single measurement. this structured error recovery technique exponentially reduces the number of measurements by about times. while it still leaves exponentially many measurements for a generic cws code, the technique is equivalent to syndrome-based recovery for the special case of additive cws codes.
|
nowadays , a large amount of information is produced and shared in unstructured form , mostly unstructured text .this information can be exploited in decision making processes but , to be useful , it should be transformed and presented in ways that make its intrinsic knowledge more readily intelligible . for that ,we need efficient methods and tools that quickly extract useful information from unstructured text collections . such demand can be observed , for instance , in biology , where researchers , in order to be abreast of all developments , need to analyse new biomedical literature on a daily basis .another application is on fraud and corruption studies where the network information the set of actors and their relationships is implicitly stored in unstructured natural - language documents .hence , text mining and information extraction are required to pre - process the texts in order to extract the entities and the relations between them .information extraction is a challenging task mainly due to the ambiguous features of natural - language .moreover , most tools need to be adapted to different human languages and to different domains .in fact , the language of the processed texts is still the decisive factor when choosing among existing information extraction technologies .this is also true for the task of entity extraction ( named entity recognition - ner ) . for several reasons ,text mining tools are typically first developed for english and only afterwards extended to other languages .thus , there are still relatively few text mining tools for portuguese and even less that are freely accessible . in particular , for the named entities recognition task in portuguese texts ,we find three extractors available : _ alchemy _ , _ zemanta _ and _ rembrandt _we also find some studies where the measures ( , and ) for those extractors are computed and compared , but their comparative effectiveness remains domain and final purpose dependent . in this work , we present pampo ( pattern matching and pos tagging based algorithm for ner ) , a new method to automatically extract named entities from unstructured texts , applicable to the portuguese language but potentially adaptable to other languages as well .the method relies on flexible pattern matching , part - of - speech tagging and lexical - based rules .all steps are implemented using free software and taking advantage of various existing packages .the process has been developed using as case - study a specific book written in portuguese , but it has since been used in other applications and successfully tested in different text collections . in this paper , we describe the evaluation procedures on independent textual collections , and produce a comparative study of pampo with other existing tools for ner .in 1991 , lisa f. rau presented a paper describing an algorithm , based on heuristics and handcrafted rules , to automatically extract company names from financial news .this was one of the first research papers on the ner field .ner was first introduced as an information extraction task but since then its use in natural language text has spread widely through several fields , namely information retrieval , question answering , machine translation , text translation , text clustering and navigation systems . 
in an attempt to suit the needs of each application , nowadays ,a ner extraction workflow comprises not only analysing some input content and detecting named entities , but also assigning them a type and a list of uris for disambiguation .new approaches have been developed with the application of supervised machine learning ( sl ) techniques and ner evolved to nerc named entity recognition and classification .the handicap of those techniques is the requirement of a training set , _i.e. _ , a data set manually labelled .therefore , the ner task depends also on the data set used to train the ner extraction algorithm . currently , many existing approaches for ner / nerc are implemented and available as downloadable code , apis or web applications , _i.e. _ , as tools or services available on the web .a thorough search produces the following list : aida , alchemyapi , apache stanbol , cicerolite , dbpedia spotlight , evri , extractiv , fox , fred , lupedia , nerd , open calais , poolparty knowledge discoverer , rembrandt , reverb , saplo , semiosearch wikifier , wikimeta , yahohh ! content analysis ( yca ) , zemanta .more detailed information may be found in , where the authors compare the services strengths and weaknesses and compute some measures for their performance .nadeau _ et al ._ in _ a survey of named entity recognition and classification _ point out three factors that distinguish the nerc algorithms : the language , the textual genre or domain , and the entity type . regarding the third one , based on the grishman _definition , named entity refers to the name of a person or an organization , a location , a brand , a product , a numeric expression ( including time , date , money and percentage ) , found in a sentence , but generally , the most studied types consider the _ enamex _ designation proper names of ` persons ' , ` locations ' and ` organizations ' the ` miscellaneous ' category for the proper names that fall outside the classic _ enamex _ ) . in recent research ,the possible types to extract are open and include subcategories .the language is an important factor to be taken in consideration in the ner task .most of the services are devoted to english and few support ner on portuguese texts .the first reference to work developed in portuguese texts was published in 1997 ; the authors perform the ner task and compute some measures in a portuguese corpus and other five corpora . until now, we have only identified the rembrandt tool as a service developed and devoted to extract named entities in portuguese texts .other tools ( _ alchemyapi , nerd _ and _ zemanta _ ) have been adapted to work and accept portuguese texts but were not specifically developed for that purpose . as recently pointed out by taba and caseli , the portuguese language still lacks high quality linguistic resources and tools .ner is not only one task of the text mining process but also an initial step in the performance of other tasks , such as relation extraction , classification and/or topic modelling .this makes the quality of the ner process particularly important . in the light of the related works and taking in considerationthat most of the approaches optimize but not , we propose pampo to extract named entities in portuguese texts . 
in this workwe do not classify neither disambiguate the entity .our major concern is to increase the without decreasing the of the named entity extractor .in this work , we consider the _ enamex _ definition of entities plus the miscellaneous named entities where we include events like , for instance , _ ` jogos olmpicos ' _ ( _ ` olympic games ' _ ) . to identify those entities ,an information extraction procedure was designed using regular expressions and other pattern matching strategies , along with part - of - speech tagging , _i.e. _ , employing a part - of - speech tagger ( post ) tool .the extraction of the named entities from portuguese unstructured texts is composed of two phases : candidate generation , where we generate a superset of candidate entities , and entity selection , where only relevant candidates are kept .the two phases are described in algorithms [ algo1 ] and [ algo2 ] , respectively . * pampo - candidate generation * in this phase , we provide a customizable base of regular expressions that gathers common candidate entities .typical expressions capture capitalized words , personal titles ( president , deputy , etc . ) and other common words ( assembly ) .this patterns base is extendable and the aim of the process in this phase is to identify all good candidates . , :term pattern base * pampo - entity selection * here , all candidate entities of the previous phase are part - of - speech tagged .the post process tags tokens with their corresponding word type ( lexical category ) .based on the tagging of the terms in candidate entities , we can identify some that can be discarded .this is done by applying a second level of regular expressions . in the entity selection phase ,the regular expressions are defined on the lexical categories instead of terms themselves .for example , if the first word type is a ` pron - det ' ( pos tag meaning determiner pronoun ) the word is removed .another example is the removal of candidate entities that do not have at least one tag ` prop ' or ` n ' ( pos tag meaning a proper noun and a noun ) . : candidate entities , : category clipping patterns , : category pruning patterns , : term pruning pattern base post of the candidate entity remove matching prefix from remove corresponding prefix from modified the program was developed in r and makes use of some specific text mining packages .we have implemented our method using the following r packages : * tm * , * cwhmisc * , * memoise * , * opennlp * , * hmisc * . the opennlp pos tagger uses a probability model to predict the correct pos tag and , for portuguese language , it was trained on conll_x bosque data .the , , and bases adopted , for portuguese texts , and used in this application are described in this section . as a first approach , and to test the pampo algorithm , we selected a book about the portuguese freemasonry . despite being on a specific topic, it contains a rich variety of situations to test our extractor . as an example , the piece of text shown in figure [ fig : livro ] was scanned from the book with current ocr software and will be used here to highlight the contribution of each phase to the final result .the five named entities manually identified in this piece of text are _ `irmandade do bairro ut o ' , ` parlamento do g ' , ` jorge silva ' , ` ian ' _ and _ ` ministro miguel relvas'_. 
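To make the candidate-generation phase concrete before walking through the example of figure [fig:livro], here is a small illustrative sketch in Python (the authors' implementation is in R and uses a much richer pattern base, including all the title words of list1): it captures sequences of capitalized words, optionally joined by lower-case linking words, and allows a leading title word such as "ministro".

```python
# Illustrative candidate-generation sketch (Python; the paper's code is in R).
import re

TITLES = r"(?:presidente|ministro|ministra|deputado|deputada|general)"
LINKERS = r"(?:da|de|do|das|dos|e)"
CAP = r"[A-ZÁÉÍÓÚÂÊÔÃÕÇ][\w'ÁÉÍÓÚÂÊÔÃÕáéíóúâêôãõç-]*"

PATTERN = re.compile(rf"(?:{TITLES}\s+)?{CAP}(?:\s+(?:{LINKERS}\s+)?{CAP})*")

def candidate_entities(text):
    return [m.group(0) for m in PATTERN.finditer(text)]

texto = ("O ministro Miguel Relvas reuniu com Jorge Silva "
         "na Irmandade do Bairro Ut O.")
print(candidate_entities(texto))
# ['O', 'ministro Miguel Relvas', 'Jorge Silva', 'Irmandade do Bairro Ut O']
# stray candidates such as 'O' are removed later, in the selection phase
```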
applying algorithm 1 to the paragraph of figure [ fig : livro ] , the set of ` candidate entities ' found are _ ` irmandade do bairro ut o ' , ` conhecemos ' , ` parlamento do g ' , ` l ' , ` k ' , ` jorge silva ' , ` ian ' _ and _ ` ministro miguel relvas'_. although most of the words in the extracted ` candidate entities ' list start with capital letter , with this algorithm we were able to extract also other important words that are not capitalized like the first word in the last named entity ( _ ministro _ ) .this is possible because the base includes a set of patterns that captures not only words ( or sequence of words ) starting with capital letters but also words that are associated to some entity s name like the ones in list1 on appendix a. having collected the ` candidate entities ' in the previous step , we now proceed by removing from that list the ones that do not correspond to named entities . for that purpose , we use list2 ( see appendix a ) as base , all the tags that are not a noun ( ) or a proper noun ( ) are included in the base and , finally , some terms that are not named entities but that were not excluded by previous actions ( see list3 on appendix a ) , are used as base . applying algorithm 2 with those lists to the set of ` candidate entities ' , from figure [ fig : livro ] , we obtain as named entities _ ` irmandade do bairro ut o ' , ` parlamento do g ' , ` jorge silva ' , ` ian ' _ and _ ` ministro miguel relvas'_. in fact , these five terms are the only named entities in the paragraph . table [ cloud ] shows the most frequent ` candidate entities ' from the whole book , as extracted by algorithm 1 and which of those candidate entities were considered as actual ` named entities ' by algorithm 2 .to give an idea of the improvement introduced by each phase , we represent the ` candidate entities ' set in a word cloud where words with higher frequency have larger font size .as it can be observed in figure [ fig : wordcloud ] , after phase 1 some words that do not refer to entities , such as _ ` idem'(`idem ' ) , ` entre ' ( ` between ' ) _ and _ ` nas ' ( ` at the ' ) _ , are present in the cloud , but , as expected , they disappear in phase 2 . .some results of the application of pampo to the whole book . in the left columnwe have the most frequent ` candidate entities ' ( please note that the column is folded in two ) , in the right column we have a ` ' if the algorithm considers it a ` named entity ' and a ` ' if not . 
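Before turning to the comparison with other extractors, a correspondingly simplified sketch of the selection phase (Algorithm 2): candidates are POS-tagged, leading determiner pronouns are clipped, candidates matching the pruning lists are dropped, and anything left without a noun or proper-noun tag is discarded. A real implementation would call the OpenNLP Portuguese tagger trained on the Bosque data; a toy tag dictionary stands in for it here, and only excerpts of list2 and list3 are used.

```python
# Toy stand-in for the POS tagger; the paper uses Apache OpenNLP (Bosque model).
TOY_TAGS = {"o": "pron-det", "conhecemos": "v-fin", "ministro": "n",
            "miguel": "prop", "relvas": "prop", "jorge": "prop", "silva": "prop"}

PRUNE_TERMS = {"conhecemos", "idem", "entre", "nas"}   # excerpt of list3
CLIP_TAGS = {"pron-det"}                               # excerpt of list2

def select_entities(candidates):
    kept = []
    for cand in candidates:
        words = cand.split()
        tags = [TOY_TAGS.get(w.lower(), "n") for w in words]
        while tags and tags[0] in CLIP_TAGS:           # clip leading determiners
            words, tags = words[1:], tags[1:]
        if not words or words[0].lower() in PRUNE_TERMS:
            continue
        if not any(t in ("n", "prop") for t in tags):  # must contain a noun
            continue
        kept.append(" ".join(words))
    return kept

print(select_entities(["O", "Conhecemos", "ministro Miguel Relvas", "Jorge Silva"]))
# ['ministro Miguel Relvas', 'Jorge Silva']
```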
[ tab : addlabel3 ] based on all the values of the differences between pampo and the other extractors, represented in tables [ tab : addlabel1 ] and [ tab : addlabel3 ], we may say that:

* the of the pampo extractor is the highest in almost all the news;
* does not differ much between pampo and the other extractors;
* as a consequence the of pampo is also the highest in almost all the news;
* the mean difference of between pampo and _ alchemyapi _ seems to be at least 0.25;
* the mean difference of between pampo and _ rembrandt _ seems to be at least 0.35;
* the mean difference of between pampo and _ zemanta _ seems to be at least 0.40;
* the mean difference of is positive but near zero for all three extractors;
* the mean difference of between pampo and _ alchemyapi _ seems to be at least 0.15;
* the mean difference of between pampo and _ rembrandt _ seems to be at least 0.25;
* the mean difference of between pampo and _ zemanta _ seems to be at least 0.30.

to test the null hypothesis that the mean differences between pampo and the other extractors are equal to 0.25, 0.35 and 0.40, for _ alchemyapi _, _ rembrandt _ and _ zemanta _, respectively, a _ z-test _ was performed taking as the alternative hypothesis that the mean differences are greater than those values. based on the results of these two corpora, the p-values are smaller than 9.5e-05. hence, the results obtained so far provide statistical evidence that pampo increases ner by at least 0.25.
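For reference, the per-document measures behind these comparisons are the usual set-based precision, recall and F1 against the manually annotated entities; a minimal sketch with illustrative values only (the entity lists below are made up for the example):

```python
# Per-document precision / recall / F1 against a gold annotation, and the
# per-document difference between two extractors.
def prf(extracted, gold):
    extracted, gold = set(extracted), set(gold)
    tp = len(extracted & gold)
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = ["Jorge Silva", "ministro Miguel Relvas", "Parlamento do G"]
pampo_out = ["Jorge Silva", "ministro Miguel Relvas", "Parlamento do G"]
other_out = ["Jorge Silva"]

p1, r1, f1 = prf(pampo_out, gold)
p2, r2, f2 = prf(other_out, gold)
print(round(r1 - r2, 2))    # recall difference for this document: 0.67
```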
as future work the authors are planning to improve the text mining procedure , by including a classification and a disambiguation step , as well as by automatically characterizing the relations between entities .the authors would like to thank sapo labs ( http://labs.sapo.pt ) for providing the data set of news from _ lusa _ agency .the authors would also like to thank grant # 2014/08996 - 0 and grant # 2013/14757 - 6 , so paulo research foundation ( fapesp ) .this work is partially funded by fct / mec through piddac and erdf / on2 within project norte-07 - 0124-feder-000059 and through the compete programme ( operational programme for competitiveness ) and by national funds through the fct - fundao para a cincia e a tecnologia ( portuguese foundation for science and technology ) within project fcomp-01 - 0124-feder-037281 .* list1 * - \{gro , papa , duque , duquesa , conde , condessa , visconde , viscondessa , rei , ranha , prncipe , princesa , marqus , marquesa , baro , baronesa , bispo , presidente , secretrio , secretria , ministro , ministra , primeiro , primeira , deputado , deputada , general , tenente , capito , capit , sargento , governador , governadora , diretor , director , diretora , directora , ex , filho , filha , irmo , irm , pai , me , tio , tia , padrinho , madrinha , sobrinho , sobrinha , afilhado , afilhada , av , av , neto , neta , enteado , enteada , padrasto , madrasta } + * list2 * - \{pron - det , adv adv , adv prop , adv adj , adv v - fi } + * list3 * - \{aproveitamento , cuidado , decerto , desta , desenvolvimento , lanamento , levantamento , muitos , muitas , nessa , nesse , nessas , nesses , nestes , neste , nesta , nestas , noutro , outros , outro , outra , outras , onde , poucos , poucas , perante , pela , recm , tal , vrios , vrias , vs , aceite , comprometo , cabe , coloca , conhecemos , casado , considerava , desejo , devamos , escolhiam , executa , faa , fica , interrompidas , indicar , includo , leva , morrer , ouvistes , prestaste , praticou , pressiona , pensa , poder , podes , revolta , sabe , ser , ter , toque , toma , trata , vens , verificou , viver , vivemos , venho , reao , sesso , testamento , tolerncia , trmino , vitria , visita , harmonia , iniciado , instalao , ibidem , inventariao , irregularidades , internet , lda , manuteno , nomeado , obedincia , petio , passaporte , proposta , programa , proibio , paz , publicao , questionrio , quadro , relatrio , reduo , reorganizao,revoluo , repblica , reequilbrio , anexo , abertura , atestado , ata , adoo , atualizao , s , , capa , convite , compromisso , condecorao , convocatria , carto , causa , comunicao , corrupo , convergncia , decreto , ditadura , democracia , democrata , estrutura , ficha , fax , fixao , futuro , gabinete , glria , janeiro , fevereiro , maro , abril , maio , junho , julho , agosto , setembro , outubro , novembro , dezembro , dirio , semanal , mensal , minutos , meses , ano , anos , hoje}\{portuguese stopwords on r } n. cardoso ._ rembrandt - a named - entity recognition framework ._ in proceedings of the eight international conference on language resources and evaluation ( lrec 2012 ) , istanbul , turkey ( 2012 ) .european language resources association ( elra ) .n. mendes , m. jakob , a. garca - silva , and c. bizer .dbpedia spotlight : shedding light on the web of documents _ proceedings of the 7th international conference on semantic systems _ , i - semantics 11
|
this paper deals with the entity extraction task ( named entity recognition ) of a text mining process that aims at unveiling non - trivial semantic structures , such as relationships and interaction between entities or communities . in this paper we present a simple and efficient named entity extraction algorithm . the method , named pampo ( pattern matching and pos tagging based algorithm for ner ) , relies on flexible pattern matching , part - of - speech tagging and lexical - based rules . it was developed to process texts written in portuguese , however it is potentially applicable to other languages as well . we compare our approach with current alternatives that support named entity recognition ( ner ) for content written in portuguese . these are _ alchemy _ , _ zemanta _ and _ rembrandt_. evaluation of the efficacy of the entity extraction method on several texts written in portuguese indicates a considerable improvement on and measures .
|
the numerical approximation of initial value ordinary differential equations is a fundamental problem in computational science , and many integration methods for problems of different character have been developed . among different solution strategies ,this paper focuses on a class of iterative methods called spectral deferred corrections ( sdc ) , which is a variant of the defect and deferred correction methods developed in the 1960s . in sdc methods , high - order temporal approximationsare computed over a timestep by discretizing and approximating a series of correction equations on intermediate substeps .these corrections are applied iteratively to a provisional solution computed on the substeps , with each iteration or _ sweep _ improving the solution and raising the formal order of accuracy of the method , see e.g. .the correction equations are cast in the form of a picard integral equation containing an explicitly calculated term corresponding to the temporal integration of the function values from the previous iteration .substeps in sdc methods are chosen to correspond to gaussian quadrature nodes , and hence the integrals can be stably computed to a very high order of accuracy .one attractive feature of sdc methods is that the numerical method used to approximate the correction equations can be low - order ( even first - order ) accurate , while the solution after many iterations can in principal be of arbitrarily high - order of accuracy .this has been exploited to create sdc methods that allow the governing equations to be split into two or more pieces that can be treated either implicitly or explicitly and/or with different timesteps , see e.g. . for high - order sdc methods constructed from low - order propagators , the provisional solution and the solution after the first few correction iterations are of lower - order compared to the final solution .hence it is possible to reduce the computational work done on these early iterations by reducing the number of substeps ( i.e. quadrature nodes ) since higher - order integrals are not yet necessary . in ,the number of substeps used in initial iterations of sdc methods is appropriately reduced to match the accuracy of the solution , and the methods there are referred to as _ladder methods_. ladder methods progress from a low - order coarse solution to a high - order fine solution by performing one or more sdc sweeps on the coarse level and then using an interpolated ( in time and possibly space ) version of the solution as the provisional solution for the next correction sweep . inboth the authors conclude that the reduction in work obtained by using ladder methods is essentially offset by a corresponding decrease in accuracy , making ladder methods no more computationally efficient than non - ladder sdc methods . 
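As a concrete illustration of the correction sweeps just described (and detailed in the next section), the following minimal sketch performs explicit forward-Euler SDC sweeps on the three Gauss-Lobatto nodes {0, dt/2, dt} for a scalar ODE y' = f(y). The notation and the choice of provisional solution (the initial value copied to every node) are assumptions for this sketch, not the paper's implementation.

```python
# Minimal single-level SDC sketch with explicit forward-Euler sweeps.
import numpy as np

def node_to_node_weights(nodes):
    """q[m, j] = integral of the j-th Lagrange basis from node m to node m+1."""
    M = len(nodes)
    Q = np.zeros((M - 1, M))
    for j in range(M):
        e = np.zeros(M); e[j] = 1.0
        antider = np.polyint(np.polyfit(nodes, e, M - 1))   # integrate l_j exactly
        Q[:, j] = np.diff(np.polyval(antider, nodes))
    return Q

def sdc_step(f, y0, dt, n_sweeps=4):
    nodes = dt * np.array([0.0, 0.5, 1.0])      # 3-point Gauss-Lobatto on [0, dt]
    Q = node_to_node_weights(nodes)
    y = np.full(len(nodes), float(y0))          # provisional solution: copy of y0
    for _ in range(n_sweeps):
        f_old = np.array([f(v) for v in y])     # f at the previous iterate
        for m in range(len(nodes) - 1):
            dtm = nodes[m + 1] - nodes[m]
            # y_{m+1}^{k+1} = y_m^{k+1} + dtm*(f(y_m^{k+1}) - f(y_m^k)) + quadrature of f(y^k)
            y[m + 1] = y[m] + dtm * (f(y[m]) - f_old[m]) + Q[m] @ f_old
    return y[-1]

print(sdc_step(lambda y: -y, 1.0, 0.1), np.exp(-0.1))   # approaches exp(-0.1)
```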
on the other hand , in , sdc methods for a method of lines discretizations of pdes are explored wherein the ladder strategy allows both spatial and temporal coarsening as well as the use of lower - order spatial discretizations in initial iterations .the numerical results in indicate that adding spatial coarsening to sdc methods for pdes can increase the overall efficiency of the timestepping scheme , although this evidence is based only on numerical experiments using simple test cases .this paper significantly extends the idea of using spatial coarsening in sdc when solving pdes .a general multi - level strategy is analyzed wherein correction sweeps are applied to different levels as in the v - cycles of multigrid methods ( e.g. ) . a similar strategy is used in the parallel full approximation scheme in space and time ( pfasst ) , see and also , to enable concurrency in time by iterating on multiple timesteps simultaneously .as in nonlinear multigrid methods , multi - level sdc applies an fas - type correction to enhance the accuracy of the solution on coarse levels .therefore , some of the fine sweeps required by a single - level sdc algorithm can be replaced by coarse sweeps , which are relatively cheaper when spatial coarsening strategies are used .the paper introduces mlsdc and discusses three such spatial coarsening strategies : ( 1 ) reducing the number of degrees of freedom , ( 2 ) reducing the order of the discretization and ( 3 ) reducing the accuracy of implicit solves . to enablethe use of a high - order compact stencils for spatial operators , several modifications to sdc and mlsdc are presented that incorporate a weighting matrix .it is shown for example problems in one and two dimensions that the number of mlsdc iterations required to converge to the collocation solution can be fewer than for sdc , even when the problem is poorly resolved in space .furthermore , results from a three - dimensional benchmark problem demonstrate that mlsdc can significantly reduce time - to - solution compared to single - level sdc .the details of the mlsdc schemes are presented in this section . the original sdc method is first reviewed in [ subsec : sdc ] , while mlsdc along with a brief review of fas corrections , the incorporation of weighting matrices and a discussion of different coarsening strategies is presented in [ subsec : mlsdc ] .sdc methods for odes were first introduced in , and were subsequently refined and extended e.g. in .sdc methods iteratively compute the solution to the collocation equation by approximating a series of correction equations at spectral quadrature nodes using low - order substepping methods .the derivation of sdc starts from the picard integral form of a generic ivp given by where ] , which is divided into substeps by defining a set of quadrature nodes on the interval . herewe consider lobatto quadrature and denote nodes such that .we now denote the collocation polynomial on ] .we define by the quadrature weights for node - to - note integration , approximating integrals over ] with periodic boundary conditions and for . for the spatial derivatives ,centered differences of order with points are used on the fine level and of order with points on the coarse .both sdc and mlsdc perform timesteps of length to integrate up to and iterations on each step are performed until .the average number of fine level sweeps over all steps for sdc and mlsdc is shown in table [ tab : wave_eq ] for three different values of . 
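Before turning to the numerical results, here is a hedged sketch of the FAS correction term that enters the coarse-level sweeps. It assumes the same quadrature nodes on both levels, so that restriction acts only in space, and uses pointwise injection as the restriction; with those assumptions the per-substep term is tau = restrict(Q f_fine(y_fine)) - Q f_coarse(restrict(y_fine)), which is then added to the right-hand side of the coarse sweeps. The notation and the toy heat-equation operators are illustrative choices, not the paper's Fortran/C implementation.

```python
# Hedged sketch of the FAS term on the coarse MLSDC level (spatial coarsening
# only; identical quadrature nodes on both levels assumed).
import numpy as np

def fas_correction(y_fine, f_fine, f_coarse, Q, restrict):
    """tau[m] = restrict((Q F_fine)[m]) - (Q F_coarse)[m] for each substep m."""
    F_fine = np.array([f_fine(y) for y in y_fine])        # (nodes, n_fine)
    y_coarse = np.array([restrict(y) for y in y_fine])    # (nodes, n_coarse)
    F_coarse = np.array([f_coarse(y) for y in y_coarse])
    return np.array([restrict(v) for v in Q @ F_fine]) - Q @ F_coarse

# toy periodic heat equation: fine level 8 points, coarse level 4 (injection)
def laplacian(u, dx):
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

n_f, n_c = 8, 4
f_fine = lambda u: laplacian(u, 1.0 / n_f)
f_coarse = lambda u: laplacian(u, 1.0 / n_c)
restrict = lambda u: u[::2]                               # pointwise injection

Q = np.array([[5.0, 8.0, -1.0], [-1.0, 8.0, 5.0]]) / 24   # 3 Lobatto nodes, dt = 1
y_fine = np.tile(np.sin(2 * np.pi * np.arange(n_f) / n_f), (3, 1))
print(fas_correction(y_fine, f_fine, f_coarse, Q, restrict).shape)   # (2, 4)
```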
in all cases, mlsdc leads to savings in terms of required fine level sweeps. we note that for a fine level spatial resolution of only points, using spatial coarsening has a significant negative effect on the performance of mlsdc (not documented here): this suggests that for a problem which is spatially under-resolved on the finest level, further coarsening the spatial resolution within mlsdc might hurt performance, see also [ subsec : shear_layer ].

table [ tab : wave_eq ] : average number of fine level sweeps over all timesteps of sdc and mlsdc for the wave equation example to reach the prescribed residual tolerance; the numbers in parentheses after mlsdc indicate the coarsening strategies used, see [ sec : mlsdc_spatial_coarsening ].

to demonstrate that mlsdc can not only reduce iterations but also runtime, we consider the viscous burgers equation in three dimensions, u_t + u · ∇u = ν Δu on [0,1]^3 for 0 ≤ t ≤ 1, with initial value , homogeneous dirichlet boundary conditions and diffusion coefficients and . the problem is solved using a fortran implementation of mlsdc combined with a c implementation of a parallel multigrid solver (pmg) in space. a single timestep of length is performed with mlsdc, corresponding to cfl numbers from the diffusive term on the fine level of about (for ) and (for ). the diffusion term is integrated implicitly, using pmg to solve the corresponding linear system, and the advection term is treated explicitly. simulations are run on cores of the ibm bluegene/q juqueen at the jülich supercomputing centre. mlsdc is run with , and gauss-lobatto nodes with a tolerance for the residual of . two mlsdc levels are used with all three types of coarsening applied:

1. the fine level uses a point mesh and the coarse level a point mesh.
2. a -order compact difference stencil for the laplacian and a -order weno scheme for the advection term are used on the fine level; a -order stencil for the laplacian and a -order upwind scheme for advection on the coarse level.
3. the accuracy of the implicit solve on the coarse level is varied by fixing the number of v-cycles of pmg on this level.

three runs are performed, each with a different number of v-cycles on the coarse level. in the first run, the coarse level linear systems are solved to full accuracy, whereas the second and third runs use one and two v-cycles of pmg on the coarse level, respectively, instead of solving to full accuracy. these cases are referred to as mlsdc(1,2), mlsdc(1,2,3(1)) and mlsdc(1,2,3(2)). on the fine level, implicit systems are always solved to full accuracy (the pmg multigrid iteration reaches its prescribed tolerance or stalls).

required iterations and runtimes. table [ tab : burger3d_eq ] shows both the required fine level sweeps for sdc and mlsdc as well as the total runtimes in seconds for and , for three different values of . mlsdc(1,2) and mlsdc(1,2,3(2)) in all cases manage to significantly reduce the number of fine sweeps required for convergence in comparison to single-level sdc, typically by about a factor of two. these savings in fine level sweeps translate into runtime savings on the order of .
for and quadrature nodes , there is no negative impact in terms of additional fine sweeps by using a reduced implicit solve on the coarse level and mlsdc(1,2,3(2 ) ) is therefore faster than mlsdc(1,2 ) . however , since coarse level v - cycles are very cheap due to spatial coarsening , the additional savings in runtime are small . for quadrature nodes , using a reduced implicit solve on the coarse level in mlsdc(1,2,3(2 ) ) comes at the price of an additional mlsdc iteration and therefore , mlsdc(1,2 ) is the fastest variant in this case . using only a single v - cycle for implicit solves on the coarse grid in mlsdc(1,2,3(1 ) ) results in a modest to significant increase in the number of required mlsdc iterations compared to mlsdc(1,2,3(2 ) ) in almost all cases . the only exception is the run with nodes and . therefore , mlsdc(1,2,3(1 ) ) is typically significantly slower than mlsdc(1,2 ) or mlsdc(1,2,3(2 ) ) and not recommended for use in three dimensions . for quadrature nodes , using only a single v - cycle leads to a dramatic increase in the number of required fine sweeps and mlsdc becomes much slower than single level sdc , indicating that the inaccurate coarse level has a negative impact on convergence .

method            f-sweeps   runtime (sec)
sdc               9          39.4
mlsdc(1,2)        4          26.2
mlsdc(1,2,3(2))   4          25.6
mlsdc(1,2,3(1))   5          29.7

method            f-sweeps   runtime (sec)
sdc               16         74.1
mlsdc(1,2)        8          49.1
mlsdc(1,2,3(2))   8          47.0
mlsdc(1,2,3(1))   8          46.7

* gauss - lobatto nodes *

method            f-sweeps   runtime (sec)
sdc               7          59.5
mlsdc(1,2)        3          40.8
mlsdc(1,2,3(2))   3          39.8
mlsdc(1,2,3(1))   8          79.7

method            f-sweeps   runtime (sec)
sdc               18         162.7
mlsdc(1,2)        9          105.6
mlsdc(1,2,3(2))   9          101.5
mlsdc(1,2,3(1))   14         142.8

* gauss - lobatto nodes *

method            f-sweeps   runtime (sec)
sdc               5          82.4
mlsdc(1,2)        2          46.1
mlsdc(1,2,3(2))   3          57.2
mlsdc(1,2,3(1))   11         147.2

method            f-sweeps   runtime (sec)
sdc               17         224.7
mlsdc(1,2)        8          139.5
mlsdc(1,2,3(2))   9          148.1
mlsdc(1,2,3(1))   44         560.4

the paper analyzes the multi - level spectral deferred correction method ( mlsdc ) , an extension to the original single - level spectral deferred corrections ( sdc ) as well as ladder sdc methods . in contrast to sdc , mlsdc performs correction sweeps in time on a hierarchy of discretization levels , similar to v - cycles in classical multigrid . an fas correction is used to increase the accuracy on coarse levels . the paper also presents a new procedure to incorporate weighting matrices arising in higher - order compact finite difference stencils into the sdc method . the advantage of mlsdc is that it shifts computational work from the fine level to coarse levels , thereby reducing the number of fine sdc sweeps and , therefore , the time - to - solution .
for mlsdc to be efficient , a reduced representation of the problem on the coarse levels has to be used in order to make coarse level sweeps cheap in terms of computing time . three strategies are investigated numerically , namely ( 1 ) using fewer degrees of freedom , ( 2 ) reducing the order of the discretization , and ( 3 ) reducing the accuracy of the linear solver in implicit substeps on the coarse level . numerical results are presented for the wave equation , the viscous burgers equation in 1d and 3d and for the 2d navier - stokes equation in vorticity - velocity formulation . it is demonstrated that because of the fas correction , the solutions on all levels converge up to the accuracy determined by the discretization on the finest level . more significantly , in all four examples , mlsdc can reduce the number of fine level sweeps required to converge compared to single level sdc . for the 3d example this translates directly into significantly reduced computing times in comparison to single - level sdc . one potential continuation of this work is to investigate reducing the accuracy of implicit solves on the fine level in mlsdc as well . in , so called _ inexact _ spectral deferred corrections ( isdc ) methods are considered , where implicit solves at each sdc node are replaced by a small number of multigrid v - cycles . as with mlsdc , the reduced cost of implicit solves is somewhat offset by an increase in the number of sdc iterations required for convergence . nevertheless , numerical results in demonstrate an overall reduction of cost for isdc methods versus sdc for certain test cases . the optimal combination of coarsening and reducing v - cycles for sdc methods using multigrid for implicit solves appears to be problem - dependent , and an analysis of this topic is in preparation . the mlsdc algorithm has also been applied to adaptive mesh refinement ( amr ) methods popular in finite - volume discretizations of conservative systems . in the amr + mlsdc algorithm , each amr level is associated with its own mlsdc level , resulting in a hierarchy of hybrid space / time discretizations with increasing space / time resolution .
when a new ( high resolution ) level is added to the amr hierarchy, a new mlsdc level is created .the resulting scheme differs from traditional sub - cycling amr time - stepping schemes in a few notable aspects : fine level sub - cycling is achieved through increased temporal resolution of the mlsdc nodes ; flux corrections across coarse / fine amr grid boundaries are naturally incorporated into the mlsdc fas correction ; fine amr ghost cells eventually become high - order accurate through the iterative nature of mlsdc v - cycling ; and finally , the cost of implicit solves on all levels decreases with each mlsdc v - cycle as initial guesses improve .preliminary results suggest that the amr+mlsdc algorithm can be successfully applied to the compressible navier - stokes equations with stiff chemistry for the direct numerical simulation of combustion problems .a detailed description of the amr+mlsdc algorithm with applications is currently in preparation .finally , the impact and performance of the coarsening strategies presented here are also of relevance to the parallel full approximation scheme in space and time ( pfasst ) algorithm , which is a time - parallel scheme for odes and pdes .like mlsdc , pfasst employs a hierarchy of levels but performs sdc sweeps on multiple time intervals concurrently with corrections to initial conditions being communicated forward in time during the iterations .parallel efficiency in pfasst can be achieved because fine sdc sweeps are done in parallel while sweeps on the coarsest level are in essence done serially . in the pfasst algorithm , there is a trade - off between decreasing the cost on coarse levels to improve parallel efficiency and retaining good accuracy on the coarse level to minimize the number of parallel iterations required to converge . in it was shown that , for mesh - based pde discretizations , using a spatial mesh with fewer points on the coarse level in conjunction with a reduced number of quadrature nodes , led to a method with significant parallel speed up . incorporating the additional coarsening strategies presented here for mlsdc into pfasst would further reduce the cost of coarse levels , but it is unclear how this might translate into an increase in the number of parallel pfasst iterations required .alam , j.m . ,kevlahan , n.k.r . ,vasilyev , o.v . : simultaneous spacetime adaptive wavelet solution of nonlinear parabolic differential equations . journal of computational physics * 214*(2 ) , 829 857 ( 2006 ) .ascher , u.m . ,petzold , l.r . : computer methods for ordinary differential equations and differential - algebraic equations .siam , philadelphia , pa ( 2000 ) bolten , m. : evaluation of a multigrid solver for 3-level toeplitz and circulant matrices on blue gene / q . in : k.binder , g. mnster , m. kremer ( eds . ) nic symposium 2014 , pp .john von neumann institute for computing ( 2014 ) .( to appear ) chow , e. , falgout , r.d ., hu , j.j . , tuminaro , r.s ., yang , u.m .: a survey of parallelization techniques for multigrid solvers . in : parallel processing for scientific computing , siam series of software , environements and tools .siam ( 2006 ) christlieb , a. , morton , m. , ong , b. , qiu , j.m . :semi - implicit integral deferred correction constructed with additive runge kutta methods .communications in mathematical science *9*(3 ) , 879902 ( 2011 ) .christlieb , a. , ong , b. , qiu , j.m . 
: comments on high - order integrators embedded within integral deferred correction methods .communications in applied mathematics and computational science *4*(1 ) , 2756 ( 2009 ) .christlieb , a. , ong , b.w . , qiu , j.m . : integral deferred correction methods constructed with high order runge - kutta integrators .mathematics of computation * 79 * , 761783 ( 2010 ) .dai , x. , le bris , c. , legoll , f. , maday , y. : symmetric parareal algorithms for hamiltonian systems .esaim : mathematical modelling and numerical analysis * 47 * , 717742 ( 2013 ) .daniel , j.w ., pereyra , v. , schumaker , l.l .: iterated deferred corrections for initial value problems .acta cient .venezolana * 19 * , 128135 ( 1968 ) emmett , m. , minion , m.l .: efficient implementation of a multi - level parallel in time algorithm . in : proceedings of the 21st international conference on domain decomposition methods , lecture notes in computational science and engineering ( 2012 ) .( in press ) emmett , m. , minion , m.l . : toward an efficient parallel in time method for partial differential equations .communications in applied mathematics and computational science * 7 * , 105132 ( 2012 ) .hagstrom , t. , zhou , r. : on the spectral deferred correction of splitting methods for initial value problems .communications in applied mathematics and computational science *1*(1 ) , 169205 ( 2006 ) .hairer , e. , norsett , s.p . ,wanner , g. : solving ordinary differential equations i , nonstiff problems .springer - verlag , berlin ( 1987 ) huang , j. , jia , j. , minion , m. : accelerating the convergence of spectral deferred correction methods .journal of computational physics * 214*(2 ) , 633 656 ( 2006 ) .hunter , j.d .: matplotlib : a 2d graphics environment .computing in science & engineering * 9*(3 ) , 9095 ( 2007 ) layton , a.t .: on the efficiency of spectral deferred correction methods for time - dependent partial differential equations . applied numerical mathematics *59*(7 ) , 1629 1643 ( 2009 ) .layton , a.t . ,minion , m.l . :conservative multi - implicit spectral deferred correction methods for reacting gas dynamics .journal of computational physics *194*(2 ) , 697715 ( 2004 ) layton , a.t . ,minion , m.l . :implications of the choice of quadrature nodes for picard integral deferred corrections methods for ordinary differential equations .bit numerical mathematics * 45 * , 341373 ( 2005 ) minion , m.l . : a hybrid parareal spectral deferred corrections method .communications in applied mathematics and computational science *5*(2 ) , 265301 ( 2010 ) .pereyra , v. : iterated deferred corrections for nonlinear operator equations .numerische mathematik * 10 * , 316323 ( 1966 ) ruprecht , d. , krause , r. : explicit parallel - in - time integration of a linear acoustic - advection system .computers & fluids * 59*(0 ) , 72 83 ( 2012 ) . south , j.c . , brandt , a. : application of a multi - level grid method to transonic flow calculations . in : transonic flow problems in turbomachinery ,. 180206 . hemisphere ( 1977 ) speck , r. , ruprecht , d. , krause , r. , emmett , m. , minion , m. , winkel , m. , gibbon , p. : a massively space - time parallel n - body solver . in : proceedings of the international conference on high performance computing , networking , storage and analysis , sc 12 , pp .ieee computer society press , los alamitos , ca , usa ( 2012 ) .speck , r. , ruprecht , d. , minion , m. , emmett , m. , krause , r. : inexact spectral deferred corrections using single - cycle multigrid ( 2014 ) . 
arxiv:1401.7824 [ math.na ] xia , y. , xu , y. , shu , c.w . : efficient time discretization for local discontinuous galerkin methods .discrete and continuous dynamical systems series b * 8*(3 ) , 677 693 ( 2007 ) .zadunaisky , p. : a method for the estimation of errors propagated in the numerical solution of a system of ordinary differential equations . in : g. contopoulus ( ed . ) the theory of orbits in the solar system and in stellar systems .proceedings of international astronomical union , symposium 25 ( 1964 )
|
the spectral deferred correction ( sdc ) method is an iterative scheme for computing a higher - order collocation solution to an ode by performing a series of correction sweeps using a low - order timestepping method . this paper examines a variation of sdc for the temporal integration of pdes called multi - level spectral deferred corrections ( mlsdc ) , where sweeps are performed on a hierarchy of levels and an fas correction term , as in nonlinear multigrid methods , couples solutions on different levels . three different strategies to reduce the computational cost of correction sweeps on the coarser levels are examined : reducing the degrees of freedom , reducing the order of the spatial discretization , and reducing the accuracy when solving linear systems arising in implicit temporal integration . several numerical examples demonstrate the effect of multi - level coarsening on the convergence and cost of sdc integration . in particular , mlsdc can provide significant savings in compute time compared to sdc for a three - dimensional problem .
|
studies related to drug delivery draw much attention to present day researchers for their theoretical and clinical investigations .the controlled drug delivery is a process by means of which a drug is delivered at a pre - determined rate , locally or systemically , for a stipulated period of time .controlled drug delivery systems may incorporate the maintenance of drug levels within a desired therapeutic range , the need of less number of administrations , optimal use of the required drug and yet possibility of enhanced patient compliance .the quintessential drug delivery should be not only inert , biocompatible , but also at the same time provide patient compliance , capable of attaining high drug loading .however , it should also have preventive measures for the accidental release of drug and at the same time it should be simple to administer and to remove from the body . controlled release drug delivery involves drug - encapsulating devices from which drug or therapeutic compounds may be released at controlled rates for prolonged span of time .+ while many old and new therapeutics are well tolerated , numerous compounds are in requirement of localized advanced drug delivery technologies to reduce toxicity level , enhance therapeutic efficacy and potentially recast bio - distribution .local drug delivery is the manifestation of drug delivery in which drugs are delivered at a specific site inside the body to a particular diseased organ or tissue .though the drug delivery , in principle , may be monitored , but the most important hazard is that the design of drug delivery is unclear , which must be used to attain the level of control required for a specific purpose .this is because there exists complex interaction between biology , polymer chemistry and pharmacology .+ mathematical modelling of drug delivery and predictability of drug release is a steadily growing field with respect to its importance in academic and industrial areas due to its astronomic future potential . 
in the light of drug dose to be incorporated in desired drug administration and targeted drug release profile, mathematical prognosis will allow for good estimates of the required composition along with other requirements of the respective drug dosage forms .an extremely challenging aspect is to combine mathematical theories with models quantifying the release and transport of drug in living tissues and cells .various works are done in the past on drug delivery devices regarding its therapeutic efficiency , optimal design with the aid of either experimental methods or numerical / modelling simulations and sometimes both procedures are used .the investigation in which drug association / dissociation aspect is taken into account with regard to transdermal drug delivery while in another study , drug release process from a microparticle is considered based on solubilisation dynamics of the drug .very recently , the updated mathematical model considering both the above mentioned aspects is framed and analysed successfully .+ the study of concern is presented by a phenomenological mathematical model of drug release from a local drug delivery device and its subsequent transport to the biological tissue .the model is comprised of two - phase drug release where the drug undergoes solubilisation , recrystallisation and internalization through a porous membrane .an important aspect of the aforementioned model is appropriate judgement of the model parameters of significance such as diffusion coefficient , solid - liquid transfer rate , mass - transfer , drug association / dissociation rate constants , membrane permeability and internalization rate constant .the numerical simulation provides reliable information on different properties of drug release kinetics .to model the local drug delivery device , a two - phase system is considered which is made of : ( a ) a polymeric matrix that operates as a reservoir where the drug is loaded initially , and ( b ) the biological tissue where the drug is being transported as target region . the first phase i.e the drug delivery device is framed as a planar slab , encompassed on one side with an impermeable backing and the other side of the device is in contact with layer ( b ) .a rate - controlling membrane protecting the polymeric matrix is present at the interface of the coupled layers .+ at the beginning , the drug occurs completely in a solid phase embraced within the polymeric matrix ( e.g. in crystalline form ) ( ) at its maximum concentration .being in bound state , it can not be transferred to the tissues directly .water enters into the polymeric matrix and wets the drug encapsulated inside it , permitting solubilisation of the loaded drug crystals into free state ( ) which diffuses out of the matrix into the tissue .the rate of transfer of drug from solid state to free state depends not only on solubilisation phenomenon but is also proportional to the difference between and .again , a fraction of solid drug ( ) is transformed to its free state which is competent to diffuse .conversely , through a recrystallisation process , another fraction of free drug ( ) is transferred back to its bound state .simultaneously , a part of free drug ( ) diffuses into the tissue . 
in the similar way , in tissue , a portion of free drug ( ) is metabolised into bound phase ( ) , which also unbinds ( ) to form free drug .now , the bound drug is engulfed ( internalized ) ( ) by the cell in the tissue through the process of endocytosis .endocytosis is an active energy - using transport phenomenon in which molecules ( proteins , drugs etc . )are transported into the cell .thus , bound drug gets transformed into internalized drug particles ( ) .these internalized drug particles , after a span of time , gets degraded by the lysosomes and the drug remnants after degradation ( ) is expelled out of the cell into the extracellular fluid . the complete drug transport process is schematically demonstrated in fig . 1. schematic diagram of drug transport , width=529,height=113 ]generally , mass transport prevails in the direction normal to the tissue which may be the reason behind the modelling restriction confined to one - dimensional case . in the present study , x- axis is considered to be normal to the layer and aligned with the positive direction outwards .+ the governing equations describing the dynamics of drug release in the polymeric matrix phase are where , is the ratio of accessible void volume to solid volume , denotes available molar concentration of solid drug , is the available molar concentration of free drug , stands for partition coefficient , denotes porosity , is the length of the polymeric matrix , is mass transfer coefficient , stands for drug solubilisation limit , is the dissociation rate constant , is the association rate constant , denotes solid - liquid rate parameter . is the diffusion coefficient of free drug in the matrix .+ the corresponding equations governing the dynamics of drug in the tissue are where , is the available molar concentration of free drug in the tissue , is the available molar concentration of solid drug in the tissue , denotes the molar concentration of internalized drug particles , is the length of the tissue , depicts the binding rate coefficient , is the dissociation rate coefficient , stands for internalization rate coefficient , denotes degradation rate constant in the lysosome , and is the diffusion coefficient of free drug in the biological tissue .the initial conditions are as follows : + , , , , . + a flux continuity must be assigned at the interface , i.e at , .+ no mass flux can pass to exterior environment due to the presence of impermeable backing and hence no flux condition arises . at , .+ lastly , at , is considered to be finite .for the purpose of reducing the number of model parameters , the entire above - mentioned equations together with all the conditions are made dimensionless .the transformed equations subject to the conditions in dimensionless approach are solved analytically by separation of variables procedure .thus , the solutions should be read as + \ ] ] +e^{-(\alpha_0\phi_0+\beta_0)t}\ ] ] \ ] ] \ ] ] where , and are arbitrary constants to be determined from the prevailing conditions , and , and are any positive real numbers . 
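the coupled reaction terms entering these equations can be made concrete with a small numerical sketch . the snippet below deliberately drops the spatial diffusion terms and treats matrix and tissue as well - mixed compartments , so that only the solubilisation / recrystallisation , membrane transfer , association / dissociation , internalization and degradation steps described above remain ; all rate constants and initial values are hypothetical placeholders and not the ( dimensionless ) parameters of the present model .

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative rate constants (hypothetical values, not taken from the paper)
params = dict(
    k_sol=1.0,    # solid -> free solubilisation rate in the matrix
    c_sat=0.8,    # drug solubilisation limit
    k_rec=0.1,    # free -> solid recrystallisation rate
    k_mem=0.5,    # transfer of free drug across the membrane into the tissue
    k_on=0.6,     # free -> bound association in the tissue
    k_off=0.05,   # bound -> free dissociation in the tissue
    k_int=0.2,    # bound -> internalized (endocytosis)
    k_deg=0.05,   # degradation of internalized drug in the lysosome
)

def rhs(t, y, p):
    cs, cf, ce, cb, ci = y   # solid, free (matrix), free (tissue), bound, internalized
    sol = p["k_sol"] * cs * max(p["c_sat"] - cf, 0.0)  # solubilisation limited by saturation
    rec = p["k_rec"] * cf                              # recrystallisation back to solid
    mem = p["k_mem"] * cf                              # membrane transfer into the tissue
    dcs = -sol + rec
    dcf = sol - rec - mem
    dce = mem - p["k_on"] * ce + p["k_off"] * cb
    dcb = p["k_on"] * ce - p["k_off"] * cb - p["k_int"] * cb
    dci = p["k_int"] * cb - p["k_deg"] * ci
    return [dcs, dcf, dce, dcb, dci]

y0 = [1.0, 0.0, 0.0, 0.0, 0.0]        # all drug initially loaded in solid form
sol = solve_ivp(rhs, (0.0, 50.0), y0, args=(params,), dense_output=True)
print(sol.y[:, -1])
```

such a reduced system reproduces the qualitative cascade solid - free - bound - internalized discussed below , while the full spatio - temporal behaviour requires the diffusion terms retained in the analytical treatment .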
+ all the parameters and the variables are expressed in dimensionless form whose expressions are not included here for the sake of brevity .numerical illustration for the present drug release system is performed by taking various parameter values of the model in order to characterise the pharmacokinetic aspects .the graphical representations of the time - variant concentration profiles of the drug in its different states for both the facets are well illustrated through the figs .2 - 6 in order to understand the drug release phenomenon .+ the time - variant concentration profiles for solid ( loaded ) and free drug particles in the polymeric matrix phase for four different axial locations spread over the entire domain are demonstrated in figs. 2 and 3 . time variant concentration profile of at different locations.,width=288 ] time variant concentration profile of at different locations.,width=288 ] the rate of decrease of concentration of loaded drug ( ) becomes higher and higher as one proceeds away from the commencing region of the polymeric matrix to the interface resulting in early disappearance .when the loaded drug gets exposed to water , a solid - liquid mass transfer is initiated causing drug release from the matrix which is on the process of transformation of free drug .one may note on the other hand that grows and acquires certain peak in accordance to a specific instant of time followed by a gradual descend for rest of the times .+ time variant concentration profile of at different locations.,width=220 ] time variant concentration profile of at different locations.,width=220 ] time variant concentration profile of at different locations.,width=220 ] figs . 4 - 6 represent the time - variant concentration profiles for free , bound and internalized drug particles respectively in the biological tissue for different axial locations stretched over the entire domain .it is important to observe that heightens hastily compared to both and towards the inception .this observation , as anticipated , reflects in the realm of drug kinetics that free drug ( ) gets transformed into bound drug ( ) and subsequently the bound drug is metabolised into internalized drug ( ) after a short passage of time .ultimately , it is further noted that the internalized drug takes more time to get absorbed completely in the tissue than the characteristics of both bound and free drug particles .in addition to the present findings , one may append that the extended time span distinctly reveals that both loaded and free drug particles in polymeric matrix melt away in a comparatively small span of time than those in the tissue where they need to take time lengthened to get the drug absorbed completely .+ one may also explore a variety of cases in order to exhibit the behaviour of the concentration profiles for both the phases under present consideration by varying all the parameter values of significance .the sensitivity of the model parameters imply the need of the components to be included in the model formation for future course of studies relevant to this domain .the sensitivity of the model parameters poses challenges to the applicability of drug administration for treatment of patients at large through pharmacotherapy .one may highlight that as both loaded and free drug particles in polymeric matrix melt away in comparatively small span of time than those in the tissue , its influence will certainly persist for a long time before repeated medication occurs and hence care needs to be exercised for maintaining 
appropriate time - gap before redispensation in order to avoid toxicity by the presence of excess drug .

siepmann j and siepmann f 2008 _ int . j. pharm . _ * 364(2 ) * 328
siepmann j and gopferich a 2001 _ adv . drug delivery rev . _ * 48 * 229
crane m , hurley n j , crane l , healy a m , corrigan o i , gallagher k m and mccarthy l g 2004 _ simul . model . pract . & theory _ * 12(2 ) * 147
narasimhan b 2001 _ adv . drug delivery rev . _ * 48 * 195
bozsak f , chomaz j m and barakat a i 2014 _ biomech . model . mechanobiol . _ * 13(2 ) * 327
pontrelli g and monte f 2014 _ mathematical biosciences _ * 257 * 96
casalini t , rossi f , lazzari s , perale g and masi m 2014 _ mol . pharmaceutics _ * 11 * 4036
chakravarty k and dalal d c 2016 _ mathematical biosciences _ * 272 * 24
|
local drug delivery has received much recognition in recent years , yet it is still unpredictable how drug efficacy depends on physicochemical properties and delivery kinetics . the purpose of the current study is to provide a useful mathematical model for drug release from a drug delivery device and the subsequent drug transport in biological tissue , thereby aiding the development of new therapeutic drugs by a systematic approach . in order to study the complete process , a two - layer spatio - temporal model depicting drug transport between the coupled media is presented . drug release is described by considering the solubilisation dynamics of drug particles , diffusion of the solubilised drug through the porous matrix and also other processes such as reversible dissociation / recrystallisation , drug particle - receptor binding and internalization . the model leads to a system of partial differential equations describing the important properties of drug kinetics . this model contributes towards an understanding of the roles played by diffusion , mass - transfer , particle binding and internalization parameters .
|
to understand the generic behavior of a quantum system , effective equations are useful .in contrast to individual wave functions , or even just stationary states , they directly provide ( approximate ) equations for time - dependent expectation values . since the dynamics of expectation values depends on the whole motion of a state by quantum back - reaction , all moments of a state couple to the expectation values these equations in general differ from the classical ones by quantum corrections .if quantum corrections , for instance an effective potential , can be found explicitly , an interpretation of quantum dynamics in generic terms becomes much easier .such results are more general compared to conclusions based on individual states . especially in quantum cosmology ,the ability to draw generic conclusions is important .not much is known about the state of the universe except , perhaps , that it currently can well be considered semiclassical .but semiclassicality is not a sharp notion , and so wide classes of states , differing for instance by the sizes of their quantum fluctuations or correlations , are still allowed . in any such situation , a generic analysisis called for , most crucially when long - term evolution is involved , or when one evolves toward strong - curvature regimes such as the big bang singularity where quantum effects of all kinds are expected to be important .it is sometimes suggested , at least implicitly ( and especially in the context of loop quantization ) , that quantum cosmology might somehow be different from other quantum systems , and that quantum back - reaction could be ignored in its effective equations .quantum back - reaction might be weak for certain states or in certain regimes , especially for models close to solvable ones , but this observation can not be generalized . like the harmonic oscillator in quantum mechanics ,there are harmonic cosmologies where expectation values of states follow exactly the trajectories of a corresponding classical system .such systems are entirely free of quantum back - reaction . for `` small anharmonicity '' , quantum back - reaction might still be weak , as it is realized in quantum cosmology for matter dominated by its kinetic energy density .but the tough reality of stronger deviations from the solvable ideal of harmonic systems can introduce severe quantum back - reaction , which must be studied in an unbiased and systematic way . here, we introduce a model of quantum cosmology which is not solvable but still treatable by two rather different methods : effective constraints and physical states .the model is an anisotropic cosmology of locally rotationally symmetric ( lrs ) bianchi i symmetry type , filled with an isotropic , slightly sub - stiff fluid of negative energy density where is the average scale factor of the anisotropic geometry . with this specific density ,the model becomes treatable by physical coherent states , which justifies its contrived and exotic form .effective constraint techniques , applicable much more widely , do not require such a tailored matter source ; they are considerably more powerful .as we will see by the explicit comparison of this paper , they capture the information in semiclassical physical states to an excellent degree .moreover , effective techniques self - consistently determine their ranges of validity. 
the general applicability of effective constraints will be demonstrated by our final analysis of a model whose matter content , more realistically , is pure radiation .an anisotropic bianchi i model has a line element with three independent scale factors as functions of proper time . for a canonical formulation ,we denote the momenta of by .it is convenient to introduce misner variables and their momenta by the canonical transformation in these definitions , is a reference scale factor ( e.g. at one moment of time ) introduced to be insensitive to coordinate rescalings .canonical dynamics in general relativity is determined by the hamiltonian constraint with the spatial metric , its ricci scalar and its momenta . reduced to bianchi i metrics in misner variables , specified to a lapse function , it simplifies considerably to the form as one can check directly , the constraint generates the correct hamiltonian equations of motion in coordinate time , from which the kasner solutions follow .we now restrict the model further by requiring the anisotropy parameter and its momentum to vanish : . in this way , which can easily be confirmed to be consistent with the equations of motion , we enhance the symmetry and leave two independent gravitational variables : the logarithm of the average scale factor , , and one anisotropy parameter .the resulting hamiltonian constraint is equivalent to that of a free , massless relativistic particle . to introduce a `` potential '', we will work with a matter source whose energy density is where , in addition to already introduced , is the matter energy density at some `` initial '' time . in the presence of matter , its density multiplied with is to be added to the constraint ( [ vacconstr ] ) .it becomes to integrate out homogeneous quantities such as the symplectic form , the momenta depend on the rather arbitrary .the lapse function as chosen here would then be homogeneous in , too , such that is the matter contribution to the constraint . with this ,all its terms scale in the same way if is changed . solving the constrained system, we obtain the same reduced phase space for all choices of . ] if we make convenient ( and irrelevant ) choices for the prefactors and define .our constraint then becomes that of a `` relativistic harmonic oscillator '' as studied in : in this analogy , we consider as our time variable , and as the corresponding `` energy . ''evolution of and with respect to is then generated by the hamiltonian .a more realistic matter content could be chosen as radiation , with an energy density . here , the hamiltonian is .we define , such that our constraint solved for takes the deparameterized form ( from now on we choose positive , to be specific . 
)this constraint generates hamiltonian equations of motion for the anisotropies and , allowing us to identify the time parameter along its flow with as an internal time .all derivatives in hamiltonian equations of motion are then with respect to , specifically since is constant in , we can combine these equations to a second order one for , solved by with integration constants and .the equation of motion for tells us that with this , the integration constants can be related to by .solutions in phase space are ellipses of axis lengths and , traversed in time with frequency .to derive the behavior with respect to proper time , we use the original constraint not yet deparameterized , and remember that we chose a lapse function .thus , the equation of motion for with respect to proper time is from here , we determine proper time as a function of by integrating with our solution . inverting the solution allows us to insert into and , resulting in solutions as functions of proper time .it is not easy to integrate explicitly , but it is clear from the integrand that is a monotonic , one - to - one function which can be inverted globally .thus , and are defined and finite for all values of proper time .there is no singularity in this model .clearly , the negative amount of matter energy violates energy conditions sufficiently strongly to provide the cyclic bouncing solutions embodied by ellipses in the -plane .one can see this directly from the friedmann - type equation resulting from the hamiltonian constraint .we have , where is constant and can be obtained from the equation , relating it to the derivative of by proper time .thus , the constraint equation reads and for implies on the right hand side , we identify as the anisotropic shear term , and the contribution as our exotic energy density .the friedmann equation shows when can vanish , which is realized for , two solutions which correspond to the maximal and minimal along our circles .the extrema indeed give us a maximum for and a minimum for : the sign of , by the raychaudhuri equation , is determined by the sign of of all the matter sources combined . here, pressure is obtained from the energy density as the negative derivative of energy by volume , i.e.by resulting in for the negative energy fluid with , and for the shear contribution with . for ,this gives which at the extrema evaluates to .the extrema provide a maximum at and a minimum at , and the evolution is that of a cyclic model with infinitely many bounces and recollapses . for the model with radiation, we have equations of motion and , with still constant . the second order equation for .for the solution of this model , see section [ sec : rad_universe ] . to represent the model as a quantum system , we start with the obvious kinematical hilbert space of square integrable wave functions of two variables , and .the momentum operators are derivatives times , as usual .to arrive at observable information and physical states , we have to implement the constraint operator for simplicity .it can be absorbed into the variables by first dividing the constraint throughout by , followed by the canonical transformation , , , . ] find its kernel and equip it with a physical inner product . to analyze the quantum constraint , we reformulate it in a first - order way in our time variable by taking a square root : introducing the deparametrized hamiltonian operator .all solutions of this equation can be expressed as where are the eigenstates of with eigenvalues . 
since is the square root of the ( positive definite ) harmonic oscillator hamiltonian ( with `` mass '' and `` frequency '' ) ,its eigenstates are of the well - known form , with eigenvalues . in ( [ states ] ) ,the subscript indicates the sign taken when solving for by a square root .solutions naturally split into two classes , positive - frequency solutions and negative - frequency solutions .the separation into two classes becomes relevant when we introduce the inner product of klein gordon form .although appears on the right hand side , the inner product evaluated on solutions ( [ states ] ) is time independent .however , is not positive definite : it is positive for positive - frequency solutions , but negative for negative - frequency solutions .a positive - frequency solution is automatically orthogonal to a solution with the sign of its frequency flipped . to correct the sign, we define the physical inner product as and extend it linearly to superpositions of positive and negative frequency solutions .( alternatively , we may declare positive and negative frequency solutions , respectively , to define two superselection sectors .the procedure here is analogous to . )this completes the construction of the physical hilbert space . from physical states ,-dependent expectation values ( or moments ) can be computed , playing the role of evolving observables . as with the non - relativistic harmonic oscillator, this is most easily done using ladder operators : , .a direct calculation gives and similarly for moments .physical hilbert spaces can be constructed and physical states decomposed in this way whenever one knows an explicit diagonalization of the hamiltonian . for our model, the closeness to the harmonic oscillator has an additional advantage in that it allows us to use its simple form of coherent states gaussians of arbitrary width expanded in the stationary states as initial values for evolution in . for the non - relativistic harmonic oscillator, the resulting physical states would be dynamical coherent states : their shape would remain unchanged and they keep saturating the uncertainty relation at all times .moreover , their expectation values follow the classical trajectories exactly , without quantum back - reaction . for the relativistic harmonic oscillator , andthus our anisotropic toy model , the dynamical behavior of the states is still to be seen .we thus assume an initial state , at some fixed value of , of the kinematical coherent form : such that from ( [ states ] ) .a time - dependent physical state then has coefficients in its expansion by the . with the square root of , the exponentials can not simply be combined to a -dependent .the shape of the state changes as moves away from zero : the time - dependent coefficients are no longer of the form ( [ cn ] ) for .physical states with initial conditions given by the coherent states of the non - relativistic harmonic oscillator are not dynamical coherent states .the model introduced here does show spreading and quantum back - reaction , which makes it interesting for a comparison with effective constraint techniques . in an effective treatment of a quantum system, we can consider the same dynamics , but focus on the algebra of observables .results will thus be manifestly representation independent . 
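as a concrete illustration of the state - decomposition procedure of this subsection , before moving on to the effective treatment , the physical evolution can be evaluated numerically without closed - form eigenfunctions : discretize the positive operator under the square root on a grid , diagonalize it , expand a gaussian initial state in its eigenvectors and attach positive - frequency phases built from the square roots of the eigenvalues . the sketch below does this in units where planck's constant and the effective mass and frequency are set to one ( an assumption made purely for illustration ) ; the grid size and the initial peak are arbitrary choices .

```python
import numpy as np

n_grid, x_max = 400, 20.0
x = np.linspace(-x_max, x_max, n_grid)
dx = x[1] - x[0]

# discretized positive operator  p^2 + x^2  (hbar = "mass" = "frequency" = 1)
lap = (np.diag(np.ones(n_grid - 1), -1) - 2.0 * np.eye(n_grid)
       + np.diag(np.ones(n_grid - 1), 1)) / dx**2
h_squared = -lap + np.diag(x**2)

evals, evecs = np.linalg.eigh(h_squared)       # columns of evecs are orthonormal eigenvectors
freqs = np.sqrt(np.clip(evals, 0.0, None))     # spectrum of the square-root hamiltonian

# gaussian initial state peaked away from the origin (illustrative numbers)
x0, sigma = 5.0, 1.0
psi0 = np.exp(-(x - x0) ** 2 / (2.0 * sigma**2)).astype(complex)
psi0 /= np.linalg.norm(psi0)
coeff = evecs.T @ psi0                         # expansion coefficients in the eigenbasis

for phi in np.linspace(0.0, 40.0, 5):
    psi = evecs @ (coeff * np.exp(-1j * freqs * phi))   # positive-frequency evolution
    mean = np.real(np.vdot(psi, x * psi))
    spread = np.sqrt(np.real(np.vdot(psi, x**2 * psi)) - mean**2)
    print(f"phi = {phi:5.1f}   <x> = {mean:7.3f}   fluctuation = {spread:6.3f}")
```

because the square - rooted spectrum is not equally spaced , the printed fluctuation grows with the evolution parameter , which is the spreading announced above .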
to set up this framework ,no approximations are required ; we are thus dealing with an exact quantum theory .only when evaluating the equations , which would give us expressions such as or moments of a state as functions of , do approximation schemes typically enter .this is no difference to a representation dependent treatment , where exact evaluations of expectation values such as ( [ alphaexp ] ) or ( [ pexp ] ) are hard to sum explicitly . at the kinematical level ,effective techniques are based entirely on the algebra of basic operators , in our case =i\hbar ] , with all other basic commutators vanishing . as it happens at the quantum level , dynamicsis brought in by a constraint operator which might have more complicated algebraic relationships with the basic operators , no longer forming a closed algebra .a whole representation of these algebraic relationships on wave functions carries much more details than necessary for extracting physical results .instead , it is often convenient to focus directly on expectation values and derive dynamical equations for them , avoiding the detour of computing wave functions .expectation values are not sufficient to characterize a state or its dynamics , but when combined with all moments a complete set of variables results . here , the subscript `` weyl '' in the definition of the moments indicates that we are ordering all operator products totally symmetrically .any of the basic operators , , , and , can appear as the in the basic moments .( to match with standard notation , we will write for fluctuations . )expectation values of basic operators together with the moments can be used to characterize an arbitrary state ( pure or mixed ) ; they can be used as coordinates on the state space .moreover , the commutator of basic operators endows the state space with a poisson structure , defined by \rangle}{i\hbar}\ ] ] for any operators and . by linearity and the leibniz rule ,this defines poisson brackets between all the moments and expectation values . in this way, the quantum phase space is defined .it is not a linear space since there are restrictions for the moments , most importantly the uncertainty relations such as on this kinematical quantum phase space , the constraint must be imposed . for a constraint operator polynomial in the basic operators, defines a function on the state space , expandable in the moments , for any arbitrary polynomial of the basic operators .this infinite set of functions satisfies two important properties : ( i ) all these functions vanish on physical states annihilated by , and ( ii ) they form a first class set in the sense of classical constraint analysis , i.e. vanishes on the subset of the phase space where all vanish .the quantum constraint can thus be implemented on the space of moments by imposing the infinite set as constraints as one would do it on a classical phase space .we have to find the submanifold on which all constraints vanish , and factor out the flow generated by them .if this reduction is completed , we obtain the physical quantum phase space and can look for solutions of observables .this procedure has several advantages .as already mentioned , it is completely representation independent and instead focuses on algebraic aspects of a quantum system . as a consequence , implementing constraint operators with zero in their continuous spectra is no different from implementing those with zero in their discrete spectrum . 
any difficulties in finding physical inner products can be avoided , for the physical normalization arises automatically when the constraints are solved for moments .once we try to find specific solutions , there are of course practical difficulties .we are dealing with infinitely many constraints on an infinite - dimensional phase space .sometimes , this set of equations decouples to finitely many ones , but this happens only in rare solvable cases . in more general systems, we must use approximations to reduce the set to a finite one of relevant equations , with a semiclassical approximation as the main example . here , we look for solutions whose moments satisfy a certain hierarchy , higher moments in the semiclassical case being suppressed by powers of , . toany given order in , only a finite number of moments need be considered , subject to a finite number of non - trivial constraints . in our model ,to second order in moments we have the effective constraints the reduction has been performed in , with the result that the physical state space is equivalent to a deparametrized quantum system with constraint with the reduced hamiltonian to leading order , we reproduce the classical hamiltonian , but there are corrections from quantum fluctuations and correlations coupling to the expectation values . moreover , from the poisson brackets between moments we obtain hamiltonian equations of motion telling us how a state spreads or is being squeezed .we have solutions , , , and provide physical observables , corresponding to expectation values and moments in physical states , telling us how a state moves and spreads .initially , the kinematical coherent state with expansion coefficients given by ( [ cn ] ) yields the expectation values and .the second order moments then saturate the uncertainty relation and their values are for a concrete comparison we select a state that is initially peaked about and , so that .in order for the state to be semiclassical , we need to be `` significantly larger '' than . we make a concrete choice . in fig .[ fig:1 ] we plot alongside each other the classical and two quantum - corrected trajectories of the system starting from the above initial values .the two corrected trajectories were calculated using two different methods : the kinematical coherent state of section [ sec : state ] and effective equations truncated at order , eqs .( [ eq : eff1st])([eq : efflast ] ) of section [ sec : eff ] .classical ( dotted ) , coherent state ( solid ) and effective ( dashed ) phase space trajectories , evolved for .,width=453 ] the two quantum trajectories agree very well for much of the evolution shown . in figs .[ fig:2a][fig:2c ] we plot the quantum evolution of the second order moments generated by and . from fig .[ fig:2b ] in particular it is clear that the semiclassical approximation breaks down somewhere between and is no longer `` much smaller '' than . up until that point both methods for quantum evolution are in close agreement in describing not only the trajectory in the space but also the evolution of the second order moments themselves . 
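since the truncated equations ( [ eq : eff1st])([eq : efflast ] ) are not reproduced here , the following sketch indicates how such a second - order effective system can be generated and integrated generically : the quantum hamiltonian is the classical one plus second - derivative corrections weighted by the moments , and the equations of motion follow from the poisson brackets of expectation values and second - order moments . units with planck's constant and the oscillator parameters equal to one , as well as the particular initial numbers , are illustrative assumptions rather than the values used for the figures .

```python
import numpy as np
from scipy.integrate import solve_ivp

hbar = 1.0

def h_classical(q, p):
    return np.sqrt(q * q + p * p)              # "relativistic harmonic oscillator" hamiltonian

def h_quantum(z):
    """classical hamiltonian plus second-order moment corrections."""
    q, p, gqq, gqp, gpp = z
    h = h_classical(q, p)
    h_qq, h_qp, h_pp = p * p / h**3, -q * p / h**3, q * q / h**3   # second partial derivatives
    return h + 0.5 * h_qq * gqq + h_qp * gqp + 0.5 * h_pp * gpp

def gradient(fun, z, eps=1.0e-6):
    g = np.zeros_like(z)
    for i in range(z.size):
        zp, zm = z.copy(), z.copy()
        zp[i] += eps
        zm[i] -= eps
        g[i] = (fun(zp) - fun(zm)) / (2.0 * eps)
    return g

def poisson_matrix(z):
    """brackets of (q, p) and of the second-order moments g_qq, g_qp, g_pp."""
    _, _, gqq, gqp, gpp = z
    pm = np.zeros((5, 5))
    pm[0, 1] = 1.0
    pm[2, 3], pm[2, 4], pm[3, 4] = 2.0 * gqq, 4.0 * gqp, 2.0 * gpp
    return pm - pm.T                           # antisymmetric poisson tensor

def rhs(phi, z):
    z = np.asarray(z, dtype=float)
    return poisson_matrix(z) @ gradient(h_quantum, z)

sigma = 1.0
z0 = [10.0, 0.0, sigma**2, 0.0, (hbar / (2.0 * sigma))**2]   # unsqueezed gaussian initial data
sol = solve_ivp(rhs, (0.0, 30.0), z0, max_step=0.02)
print("final expectation values and moments:", sol.y[:, -1])
```

changing the initial values of the moments in this kind of integration amounts to changing a single line , which is the practical advantage of the effective method stressed below .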
since semiclassicality was used to obtain the truncated system of equations , there is no reason to expect it to be accurate beyond .coherent state ( solid ) and effective ( dashed ) evolution of the second order moment in units of .,width=453 ] coherent state ( solid ) and effective ( dashed ) evolution of the second order moment in units of .,width=453 ] coherent state ( solid ) and effective ( dashed ) evolution of the second order moment in units of .,width=453 ] the two methods agree excellently where the domains of their applicability overlap . both methods have their own strengths and weaknesses .the weakness of the semiclassically truncated effective equations should by now stand out in some systems semiclassicality eventually breaks down and moments of high order dominate .direct evaluation on the states does not rely on this approximation and remains a valid method for computing the wave function at all times .the shortcomings of the latter technique are less obvious and are of practical nature . in order to apply the method one needs , first of all , to decompose the initial state as a sum of the eigenstates of the hamiltonian , which for an arbitrary state and a typical hamiltonian can be complicated . during evolution , each of the eigenstates acquires a phase factor and they need to be re - summed to compute the wave function at a later time . for an arbitrary state , the sum may converge very slowly requiring one to sum over a very large number of eigenstates to obtain an accurate description of the wave function . finally , for expectation values and moments of observables further integrations must be done . in systems of several degrees of freedom , this will add considerably to computation times . if one is to make robust predictions , a range of initially semiclassical states with a variety of initial values of moments should be considered the procedure is very complicated to implement using state decomposition but amounts to nothing more than simply changing the initial conditions in the case of the effective equations . in the subsections that follow we use the methods separately and exploit their individual strengths . knowing the stateexactly , allows us to plot the magnitude of the wave function and make precise long - term predictions . from fig .[ fig:3 ] we see that after the state becomes highly quantum , spread out over an entire orbit .expectation values can no longer be interpreted as the most probable outcome of a measurement .square - magnitude of the coherent state wave function evolved for .,width=453 ] in figs .[ fig:4a][fig:4c ] we look at the long term behavior of the leading order quantum degrees of freedom .the magnitudes of and rise rapidly until around .the state remains stable during after which all second order moments begin to oscillate with an increasing amplitude .the amplitude reaches its peak around ; thereafter the moments keep oscillating with a stable amplitude for as long as the evolution has been traced , up to around . 
as can be seen from the example of in fig .[ fig:5 ] , third order moments follow a similar pattern .coherent state evolution of the second order moment in units of .,width=453 ] coherent state ( solid ) and effective ( dashed ) evolution of the second order moment in units of .,width=453 ] coherent state ( solid ) and effective ( dashed ) evolution of the second order moment in units of .,width=453 ] coherent state evolution of the third order moment in units of .,width=453 ] as expected , semiclassicality eventually breaks down and quantum fluctuations become large ; this trend is illustrated by the evolving uncertainty measure ( bounded below by the uncertainty relation ) shown in fig .[ fig:6 ] . after the initial increase, there is a period of approximate stability ; then the leading order fluctuations start to oscillate with large amplitudes .even though some moments return to small values during these oscillations , semiclassicality is not regained as shown be the long - term behavior of the uncertainties in fig .[ fig:6b ] .details of this behavior , found numerically , appears rather characteristic , but it is difficult to find an explanation based on the underlying equations .short - term coherent state evolution of the `` quantum uncertainty '' defined as in units of .,width=453 ] long - term coherent state evolution of the `` quantum uncertainty '' defined as in units of .,width=453 ] we now evolve effectively starting from the same initial expectation values , but varying the initial values of second order moments . by the specific choices , some sets of moments used here no longersaturate the uncertainty relations , and some have non - vanishing correlations .they can not correspond to gaussian states , and so their initial configurations would be much more difficult to implement using wave functions of physical states .the results are plotted in fig .[ fig:7 ] . in figs .[ fig:8a][fig:8d ] we plot the corresponding effective evolution of moments for each set of the initial conditions , where the thin horizontal line approximately indicates the threshold of the semiclassical approximation for the second order moments at .effective evolution of semiclassical states with different initial values of moments for .the initial values of moments are as follows .dashed line the original coherent state : , .dotted line : , , .thin solid line : , , .thick solid line: , , .,width=453 ] larger results in a larger deviation from the classical behavior and a faster breakdown of the semiclassical approximation .the above effect is much less sensitive to the momentum spread .this disparity between the effects of the two spreads may seem surprising given the symmetry between and in the expression for the physical hamiltonian .this symmetry , however is broken by the initial state we have chosen , which is peaked about and , so that the spread in , produces a larger spread in the energy than the spread in . 
effective evolution of second order moments ( dashed ) , ( solid ) and ( dotted ) , which initially take the values , and respectively.,width=453 ] effective evolution of second order moments ( dashed ) , ( solid ) and ( dotted ) , which initially take the values , and respectively.,width=453 ] effective evolution of second order moments ( dashed ) , ( solid ) and ( dotted ) , which initially take the values , and respectively.,width=453 ] effective evolution of second order moments ( dashed ) , ( solid ) and ( dotted ) , which initially take the values , and .,width=453 ]in this section we use the effective equations on their own to analyze quantum corrections to the dynamics of a radiation - filled bianchi i universe briefly mentioned in section [ sec : model ] .this model has a more realistic matter content than the one analyzed above .we recall that the energy density of radiation has the form , which results in the hamiltonian constraint .the constraint condition may be implemented effectively in fashion a very similar to the way it was done in section [ sec : eff ] ; however , the physical inner product treatment would require a detailed knowledge of the ( now continuous ) spectrum of an operator that is completely different from was used in ( [ eq : schroedinger ] ) and is not straightforward to obtain . even if we could determine energy eigenstates , expanding gaussians or other general semiclassical states in this basis would be challenging .for this reason we do not attempt to implement the constraint of the radiation - filled model on the kinematical hilbert space and restrict our analysis to the effective procedure , demonstrating its wide applicability .we deparametrize the constraint exactly as in section [ sec : classical ] , selecting as time .the dynamics on and is then generated by the hamiltonian and results in the equations of motion and .noting that is once again a constant of motion we can immediately infer the classical phase - space trajectories , which are of the form .they split the space into three regions as illustrated in fig .[ fig:20 ] .there are no classical orbits in region 3 , as becomes complex .disjoint regions of classical solutions .region 1 : trajectories have the shape , they correspond to expanding universes as increases with .region 2 : trajectories have the shape , they correspond to contracting universes as with .region 3 : no classical solutions.,width=453 ] the explicit solution to the equations of motion for evolution in terms of is given by where the integration constants and can be related to the initial values and via in terms of and , the constant of motion is which requires in order for to be real . using ( [ alpharad ] ), may be recovered from the equations of motion as .for the orbits in region 1 , and reach infinity at a finite positive value of the evolution parameter , namely at .explicit integration of the expression ( [ eq : proper ] ) for the proper time using the above solution for shows that this value of is reached in infinite proper time .in other words , the expansion , as expected for the radiation filled universe , takes infinite proper time , but anisotropy asymptotically approaches a maximum value . 
for the orbits in region 2one can obtain a similar result by tracing evolution backwards in time ; in this case reaches infinity and reaches negative infinity at , which is always negative in that region .following those orbits forward in time , one finds that the collapse happens only as goes to infinity .once again one can use ( [ eq : proper ] ) to convert this to a proper time interval , with the result that , as one would expect , the collapse takes a finite amount of proper time , in this model , with positive energy , the singularity is certainly not resolved . following the effective procedure for solving constraints outlined in section [ sec : eff ] we find the constraint functions truncated at second order : these can be solved and gauge fixed following the same steps as for the model of section [ sec : eff ] , with the result that evolution in on the expectation values and moments of and is generated by the quantum hamiltonian the equations of motionare obtained by taking the quantum poisson bracket between quantum variables and the quantum hamiltonian . as before , these equations reduce to the classical equations of motion at zeroth order in moments and are straightforward to evolve numerically . in this sectionwe take a classical phase space trajectory from each of the regions 1 and 2 and compare the effective trajectories for a variety of semiclassical states initially peaked about these classical solutions .effective and classical trajectories of the expanding universe are plotted in fig .[ fig:21 ] .clearly , the significance of quantum back - reaction can be seen well before the approximation breaks down .phase space trajectories of an expanding universe classical ( dotted ) and effective with different initial values for second order moments : dashed line , , ; thin solid line , , ; thick solid line , , .solutions were evolved for ; the vertical line indicates the breakdown of the semiclassical approximation based on the size of second order moments.,width=453 ] the corresponding evolution of the leading order moments starting from different initial values is plotted in figs .[ fig:22a]-[fig:22c ] , where the horizontal line , as before , indicates an approximate threshold of the semiclassical approximation for the second order moments .evolution of second order moments in an expanding universe . ( dashed ) , ( solid ) , ( dotted ) , with initial values : , , .,width=453 ] evolution of second order moments in an expanding universe . ( dashed ) , ( solid ) , ( dotted ) , with initial values : , , .,width=453 ] evolution of second order moments in an expanding universe . ( dashed ) , ( solid ) , ( dotted ) , with initial values : , , .,width=453 ] phase space trajectories of a contracting universe classical ( thin solid line ) and effective with different initial values for second order moments : dashed line , , ; dotted line , , ; thick solid line , , .solutions were evolved for ; the vertical line indicates the breakdown of the semiclassical approximation ( coming from the right along the -axis).,width=283 ] effective and classical trajectories of the contracting universe are plotted in fig .[ fig:23 ] . the corresponding evolution of the leading order moments starting from different initial values is plotted in figs .[ fig:24a]-[fig:24c ] , where the horizontal line , once again , indicates an approximate threshold of the semiclassical approximation for the second order moments .evolution of second order moments in a contracting universe . 
( dashed ) , ( solid ) and ( dotted ) , with the initial values listed for each of the three panels . ]

most quantum systems of physical relevance can only be analyzed by perturbation methods . quantum gravity and quantum cosmology can not be considered exceptions . as demonstrated by the examples of this article , canonical effective equations , based on the back - reaction of moments of a state on its expectation values , are of wide applicability , capture quantum effects reliably , and approximate the full quantum dynamics in a self - consistent way . they are tractable even in situations where semiclassical wave functions in physical hilbert spaces would be too complicated to be constructed , cases which abound in quantum cosmology and quantum gravity .

this work was supported in part by nsf grant phy 0748336 .

f. cametti , g. jona - lasinio , c. presilla , and f. toninelli , comparison between quantum and classical dynamics in the effective action formalism , in _ proceedings of the international school of physics `` enrico fermi '' , course cxliii _ , pages 431 - 448 , ios press , amsterdam , 2000 , [ quant - ph/9910065 ]
|
a cosmological model with a cyclic interpretation is introduced , which is subject to quantum back - reaction and yet can be treated rather completely by physical coherent state as well as effective constraint techniques . by this comparison , the role of quantum back - reaction in quantum cosmology is unambiguously demonstrated . also the complementary nature of strengths and weaknesses of the two procedures is illustrated . finally , effective constraint techniques are applied to a more realistic model filled with radiation , where physical coherent states are not available .

igc09/114
effective constraints and physical coherent states in quantum cosmology : a numerical comparison
martin bojowald and artur tsobanjan
institute for gravitation and the cosmos , the pennsylvania state university , 104 davey lab , university park , pa 16802 , usa
|
physical networked infrastructures ( ) such as power , gas or water distribution are at the heart of the functioning of our society ; they are very well engineered systems designed to be at least robust i.e. , they should be resilient to the loss of a single component via automatic or human guided interventions .the constantly growing size of has increased the possibility of multiple failures which escape the criteria ; however , implementing robustness to _ any sequence _ of failures ( robustness ) requires an exponentially growing effort in means and investments .in general , since can be considered to be aggregations of a large number of simple units , they are expected to exhibit emergent behaviour , i.e. they show as a whole additional complexity beyond what is dictated by the simple sum of its parts .a general problem of are cascading failures , i.e. events characterized by the propagation and amplification of a small number of initial failures that , due to non - linearity of the system , assume system - wide extent .this is true even for systems described by linear equations , since most failures ( like breaking a pipe or tripping a line ) correspond to discontinuous variations of the system parameters , i.e. are a strong non - linear event .this is a typical example of emergent behavior leading to one of the most important challenges in a network - centric word , i.e. systemic risk .an example of systemic risk in are the occurrence of blackout in one of the most developed and sophisticated system , i.e. power networks .it is important to notice that if such large outages were intrinsically due to an emergent behaviour of the electric power systems , increasing the accuracy of power systems simulation would not necessarily lead to better predictions of black - outs .power grids can be considered an example of complex networks hence cascading failures in complex networks is field with important overlaps with system engineering and critical infrastructures protection ; however , most of the cascading model are based on local rules that are not appropriate to describe systems like power grids that , due to long range interactions , require a different approach .another import issue is increasing interdependent among critical infrastructures ; seminal papers have pointed out the possibility of the occurrence of catastrophic cascades across interdependent networks . however , there is still room for increasing the realism of such models , especially in the case of electric grids or gas pipelines . in this paperwe move a preliminary step in such direction , trying to capture the systemic effect for coupled networks with long range interactions .to highlight the possibility of emergent behavior , we will first abstract in order to understand the basic mechanisms that could drive systemic failures ; in particular , we will consider finite capacity networks where a commodity ( a scalar quantity ) is produced at source nodes , consumed at load nodes and distributed as a kirchoff flow ( e.g. fluxes are conserved ) .for such systems , we will first introduce a simplified model that is amenable of a self - consistent analytical solution .subsequently , we will extended such model to the case of several coupled networks and study the cascading behavior under increasing stress ( i.e. increasing flow magnitudes ) . in section [ sec : model ], we develop our simplified model of overload cascades first in isolated ( sec .[ sec:1sys ] ) and coupled systems ( sec .[ sec : coupledsys ] ) . 
in particular , in subsection [ sec : kirchoff ], we introduce the concept of flow network with a finite capacity and relate conservation laws to kirchoff s equations and to the presence of long range correlation . to account for such correlations , in subsection [ sec:1sys ]we introduce a mean field model for the cascade failures of flow networks ; in subsection [ sec : coupledsys ] , we extend the model to the case of several interacting systems .finally , in section [ sec : discuss ] we discuss and summarize our results .let s consider a network where is the node set , is the set of edges and is the vector characterizing the capacities of the edges .we associate to the nodes a vector that characterize the production ( ) or the consumption ( ) of a commodity .we further assume that there are no losses in the network ( i.e. ) ; hence , the total load on the network is the distribution of the commodity is described by the fluxes on the edges that are supposed to respect kirchoff equations , i.e. the relation among fluxes and demand / load is described by constitutive equations where in general eq.[eq : constitutive ] is non - linear but satisfies eq.[eq : kirchoff ] .the finite capacity constrains the maximum flux on link above which the link will cease functioning . as an example ,power lines are tripped ( disconnected ) when power flow goes beyond a certain threshold . since flows will redistribute after a link failure, it could happen that other lines get above their flow threshold and hence consequently fail , eventually leading to a cascade of failures . a typical algorithm to calculate the consequences of an initial set of line failures the alg.[alg : net_cascade ] . set initial failures calculate flows calculate new failures [ alg : net_cascade ] here calculates the flows subject to the constrains that flows are zero in the failure set of edges . to develop a general model that helps us understanding the class of failures that can affect kirchoff - like flow networks ,let s start from rewriting eq.[eq : kirchoff ] in matrix form using the incidence matrix that associates to each link its nodes and and vice - versa . is an matrix where each column corresponds to an edge ; its columns are zero - sum and the only two non - zero elements have modulus and are on the and on the row .the matrix is related to the laplacian of the system ; in particular , it shares the same right eigenvalues and the same spectrum ( up to a squaring operations ) ; hence , it is a long - range operator since perturbation on a node of the system can be reflected on nodes far away on the network . due to the long range nature of kirchoff s equations , to understand the qualitative behavior of such networks we can resort to a mean field model of flow networks where one assumes that when a link fails , its flow is re - distributed equally among all other links .subsequently , the lines above their threshold would trip again , their flows would be re - distributed and so on , up to convergence ; recalling that is the total load of the system and assuming the each link has an initial flux , we can describe such a model by alg.[alg : mfcascading ] .such model , introduced in , is akin to the fiber - bundle model and has been considered in more details in for the case of a single system . while similar in spirit to the cascade model for black - outs , it yelds different results since it does not describe the statistic of the cascades in power systems but concentrates on the order of the transition in a single system . 
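as an illustration of the cascade loop of alg.[alg : net_cascade ] , before the mean - field simplification spelled out next , the sketch below solves the kirchhoff flow on a toy network through the graph laplacian ( a dc - style constitutive relation chosen here for concreteness , not necessarily the one used in the paper ) , trips every link whose flow exceeds its capacity , and repeats until no further failures occur . the network , capacities and injections are illustrative values only , and islanding after failures is handled only through the pseudo - inverse .

import numpy as np

def edge_flows(edges, injections, n_nodes):
    # incidence matrix b ( n_nodes x n_edges ) and laplacian l = b b^t
    b = np.zeros((n_nodes, len(edges)))
    for e, (i, j) in enumerate(edges):
        b[i, e], b[j, e] = 1.0, -1.0
    laplacian = b @ b.T
    # conservation law : solve l * phi = injections ; the pseudo - inverse handles the zero
    # mode , but islanding of the network after failures is not treated carefully here
    phi = np.linalg.pinv(laplacian) @ injections
    return b.T @ phi                      # edge flow = potential difference across the edge

def cascade(edges, capacities, injections, n_nodes):
    alive = [True] * len(edges)
    while True:
        active = [e for e in range(len(edges)) if alive[e]]
        f = edge_flows([edges[e] for e in active], injections, n_nodes)
        tripped = [e for e, flow in zip(active, f) if abs(flow) > capacities[e]]
        if not tripped:
            return alive
        for e in tripped:                 # overloaded lines trip , flows redistribute , repeat
            alive[e] = False

# toy 4 - node ring , one source ( node 0 ) and one sink ( node 3 ) , with one weak line ;
# the first capacity set stops after a single trip , the second collapses completely
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
injections = np.array([1.0, 0.0, 0.0, -1.0])
for capacities in ([1.1, 1.1, 1.1, 0.5], [0.9, 0.9, 0.9, 0.5]):
    print(capacities, cascade(edges, capacities, injections, 4))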
[ alg : mfcascading : start from the initial number of failed links ; at each stage update the number of working links and the average flux on the working links , and repeat until no further links fail . ]

such an algorithm can be cast in the form of a single equation in the case where the system is composed of a large number of elements with capacity . in fact , in such a limit we can describe the link population by the probability distribution function of their capacities . indicating with the initial number of links , we see that if we apply an overall load to the system , all the links will be initially subject to a flow . thus , a fraction of links would immediately fail , since their thresholds are lower than the flux they should sustain . after the first stage of a cascade , there will be surviving links and the new load per link is . the following stages of the cascade follow analogously ; we can thus write the mean field equations for the stage of the cascade : where is the initial load per link and is the cumulative distribution function of link capacities ; the initial conditions are . the fix - point of eq.[eq : mfmodel1sys ] satisfies the equation and represents the total fraction of links broken at the end of the cascading stages . the behavior of depends on the functional form of . in particular , following we can define and , and we have that , so that we can rewrite eq.([eq : mfmodel1sys ] ) as eq.([eq : dasilveira ] ) . equation ( [ eq : dasilveira ] ) has a trivial fix - point ( representing a total breakdown of the system ) since . such a fix - point is unstable for and becomes stable for . we notice that if does not change convexity ( i.e. has no bumps ) , the transition is first order and the system will break down directly to the totally collapsed state . in general , the behavior of the fix - point depends on the tail of the distribution and is known to present a first order transition for a wide family of curves . depending on the functional form of , eq.([eq : fixpoint1sys ] ) could sometimes be solved analytically . otherwise , the fix - point of eq . ( [ eq : fixpoint1sys ] ) can be found numerically , either by iterating eq . ( [ eq : mfmodel1sys ] ) or by finding the zeros of eq.([eq : fixpoint1sys ] ) via newton - raphson iterations . notice that , if the system is long range , modelling the cascade via homogeneous load redistribution allows one to capture the order of the transition even when it does not give an accurate prediction of the actual location of the transition point . an example of such accordance for the case of power networks is given in , where both synthetic networks , realistic networks and mean - field systems show a first order transition .

commodities are defined as substitutable when they can be used for the same aim ; when commodities are substitutable , they can be expressed in the same units . examples of such commodities are electricity and gas , since both can be used for domestic heating . hence , an increase in the cost of gas ( like the one recently experienced by ukraine ) could induce stress on the electric network of the country , since most customers will possibly switch to the cheaper energy vector .
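the fix - point of eq.([eq : fixpoint1sys ] ) for a single system is easy to explore numerically ; the following minimal python sketch assumes a flat ( uniform ) distribution of link capacities , which is our illustrative choice , and simply iterates eq.([eq : mfmodel1sys ] ) until convergence . sweeping the initial load per link shows the abrupt , first order jump of the broken fraction discussed above .

import numpy as np

def cdf_uniform(x, c_min=0.5, c_max=1.5):
    # cumulative distribution of link capacities ( flat between c_min and c_max )
    return float(np.clip((x - c_min) / (c_max - c_min), 0.0, 1.0))

def broken_fraction(load, cdf=cdf_uniform, tol=1e-12, max_iter=10**5):
    # iterate f_{t+1} = p( load / ( 1 - f_t ) ) up to convergence ; the returned value is
    # the total fraction of broken links ( 1.0 means complete breakdown of the system )
    f = 0.0
    for _ in range(max_iter):
        f_new = 1.0 if f >= 1.0 else cdf(load / (1.0 - f))
        if abs(f_new - f) < tol:
            return f_new
        f = f_new
    return f

# sweeping the initial load per link shows the abrupt ( first order ) jump to f = 1
for load in np.linspace(0.40, 0.70, 7):
    print(round(load, 2), broken_fraction(load))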
to take such effects into account , we will extend the model described by eq.([eq : mfmodel1sys ] ) to the case of several coupled systems that transport substitutable commodities . we will consider coupled systems assuming that when a system is subject to some failures , it sheds a fraction of the induced flow increase on system . in other words , after failure system decreases its stress by a quantity by increasing the load of all other systems by . thus , the coupled systems are described by a set of equations of the form of eq.([eq : mfmodel1sys ] ) where is the load per link experienced by system in the stage of the cascade and is the cumulative distribution function for the capacities of the system . equations ( [ eq : mfmanysys ] ) are not independent , since the coupling among the systems is reflected by the dependence of on the fractions of failed links in all the other systems , i.e. where has the form of a laplacian operator . thus , the full equations for coupled systems are . for simplicity , from now on we will consider the case of two identical systems with a uniform distribution of link capacities . notice that for a single system the transition is first order unless the probability distribution of the capacities is a power - law , an event that is not realistic for real world flow networks . since the functional form of is easy to recover for a uniform distribution , we can solve the fix - point of eq.([eq : mfmanysysfull ] ) numerically by iterating the equations up to convergence ; an alternative methodology would be to use newton - raphson algorithms . we show in fig.([fig:2behaviours ] ) the cascading behavior of two coupled systems ; we observe that , as in the single system case , transitions are in the form of abrupt jumps , i.e. they are first order .

let us rewrite eq.([eq : mfmanysysfull ] ) in the case of symmetric couplings and the same probability distribution for the capacities ; for the second system this gives

f_{2}^{t+1}=p\left(\frac{l_{1}}{1-f_{2}^{t}}\left[1-t\left(f_{2}-\frac{l_{1}}{l_{2}}f_{1}\right)\right]\right) ,

together with the corresponding equation for f_{1} ( eq . [ eq : mf2syssymm ] ) . if the two systems described by eq.([eq : mf2syssymm ] ) are stressed at the same pace ( i.e. with the same load per link ) , we get the case

f_{2}^{t+1}=p\left(\frac{l}{1-f_{2}^{t}}\left[1+t\,\delta f_{12}\right]\right) ,

and similarly for f_{1} ; from the symmetric solution we see that the breakdown of both systems happens at the same critical load as for the uncoupled systems . such a situation is shown in the left panel of fig.([fig:2behaviours ] ) . in the general case , only one of the systems will be the first one to break down ( i.e. its fraction of broken links jumps to ) ; correspondingly , the other systems will also experience a jump in the number of broken links . let us consider the symmetric case described by equations ( [ eq : mf2syssymm ] ) and suppose that , so that system is the first to break down ( i.e. ) ; hence , the equation for the fix - point of the second system becomes

f_{2}^{*}=p\left(\frac{l^{+}}{1-f_{2}^{*}}\right) ,

i.e. the system behaves like a single system starting with a renormalized load l^{+}>l . thus , if ( the critical value of eq.([eq : mfmodel1sys ] ) ) , system will break down at higher values of the stress . such a situation is shown in the right panel of fig.([fig:2behaviours ] ) .
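the coupled fix - point can be explored in the same way ; the sketch below iterates the two coupled maps with the same flat capacity distribution as in the previous sketch , with the load - transfer term written in the equal - load form of eq.([eq : mf2syssymm ] ) and the exchange driven by the difference of the broken fractions ( the precise form of the exchange term is our assumption , not necessarily the paper 's exact expression ) . scanning the initial loads in this way produces the kind of diagram discussed next .

import numpy as np

def cdf_uniform(x, c_min=0.5, c_max=1.5):
    return float(np.clip((x - c_min) / (c_max - c_min), 0.0, 1.0))

def coupled_broken_fractions(l1, l2, t, cdf=cdf_uniform, n_iter=20000):
    # iterate the two coupled mean - field maps ; t is the fraction of the flow increase
    # that each system sheds onto the other one
    f1 = f2 = 0.0
    for _ in range(n_iter):
        g1 = 1.0 if f1 >= 1.0 else cdf(l1 / (1.0 - f1) * (1.0 + t * (f2 - f1)))
        g2 = 1.0 if f2 >= 1.0 else cdf(l2 / (1.0 - f2) * (1.0 + t * (f1 - f2)))
        f1, f2 = g1, g2
    return round(f1, 3), round(f2, 3)

# asymmetric stress : system 1 is loaded more heavily than system 2 ( l2 = 0.8 l1 ) ,
# so system 1 breaks down first and hands system 2 an extra load
for coupling in (0.0, 0.5, 1.0):
    print("t =", coupling,
          [coupled_broken_fractions(l, 0.8 * l, coupling) for l in (0.55, 0.60, 0.70)])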
in fig.([fig : phasediagrams ] ) we show the full phase diagrams of two coupled systems while varying the coupling among them . according to the initial loads , we can distinguish an area near the origin where the system is safe and three separate cascade regimes : and , where either system or fails , and where both systems fail . we notice that , by increasing the coupling among the systems , both the area where the two systems are safe and the area where they fail together grow ; accordingly , the areas where only one system fails shrink .

we have introduced a model for cascade failures due to the redistribution of flows upon overload of link capacities . for such a model , we have developed a mean field approximation both for the case of a single network and for the case of coupled networks . our model is inspired by a possible configuration for future power systems where network nodes are the so - called energy hubs , i.e. points where several energy vectors converge and where energy demand / supply can be satisfied by converting one kind of energy into another . hubs condition , transform and deliver energy in order to cover consumer needs . in such configurations , one can alleviate the stress on a network by using the flows of the other energy vectors ; on the other hand , transferring loads from one network to the other can trigger cascades that can eventually backfire . by analyzing the case of two coupled systems and by varying the strength of the interactions among them , we have shown that at low stresses coupling has a beneficial effect since some of the loads are shed to the other systems , thus postponing the occurrence of cascading failures . on the other hand , with the introduction of couplings the region where not only one system fails but both systems fail together also increases . the higher the couplings , the more the two systems behave like a single one and the area where only one system has failed shrinks . our model also applies to the realistic scenario where existing grids get connected to allow power to be delivered across states ; such a scenario has inspired the analysis of , which , even using an unrealistic model of power redistribution in electric grids , reaches conclusions that are similar to ours . it is worth noting that while fault propagation models do predict a general lowering of the threshold for coupled systems , in the present model a beneficial effect due to the existence of the interdependent networks is observed for small enough overloads , while the expected cascading effects take place only for large initial disturbances . this picture is consistent with the observed phenomena for interdependent electric systems . moreover , the existence of interlinks among different networks may increase their synchronization capabilities .

as and gd acknowledge the support from eu home/2013/cips / ag/4000005013 project ci2c . as acknowledges the support from cnr - pnr national project `` crisis - lab '' . as and gc acknowledge the support from eu fet project dolfins nr 640772 and eu fet project multiplex nr . 317532 . gd acknowledges the support from fp7 project n . 261788 after . any opinions , findings and conclusions or recommendations expressed in this material are those of the author(s ) and do not necessarily reflect the views of the funding parties .

p. w. anderson , more is different , science 177 ( 4047 ) ( 1972 ) 393 - 396 . http://www.sciencemag.org/content/177/4047/393.short
http://arxiv.org/abs/http://www.sciencemag.org/content/177/4047/393.full.pdf [ ] , http://dx.doi.org/10.1126/science.177.4047.393 [ ] .g. a. pagani , m. aiello , http://www.sciencedirect.com/science/article/pii/s0378437113000575[the power grid as a complex network : a survey ] , physica a : statistical mechanics and its applications 392 ( 11 ) ( 2013 ) 2688 2700 .http://dx.doi.org/http://dx.doi.org/10.1016/j.physa.2013.01.023 [ ] .http://www.sciencedirect.com/science/article/pii/s0378437113000575 p. hines , e. cotilla - sanchez , s. blumsack , http://scitation.aip.org/content/aip/journal/chaos/20/3/10.1063/1.3489887[do topological modelsprovide good information about electricity infrastructure vulnerability ? ] , chaos 20 ( 3 ) ( 2010 ) .http://dx.doi.org/http://dx.doi.org/10.1063/1.3489887 [ ] .g. dagostino , a. scala , http://link.springer.com/book/10.1007%2f978-3-319-03518-5[networks of networks : the last frontier of complexity ] , understanding complex systems , springer international publishing , 2014 . http://dx.doi.org/10.1007/978-3-319-03518-5 [ ] . http://link.springer.com/book/10.1007%2f978-3-319-03518-5 h. e. daniels , the statistical theory of the strength of bundles of threads .i , proceedings of the royal society of london .series a. mathematical and physical sciences 183 ( 995 ) ( 1945 ) 405435 . http://dx.doi.org/10.1098/rspa.1945.0011 [ ] .o. yagan , http://link.aps.org/doi/10.1103/physreve.91.062811[robustness of power systems under a democratic - fiber - bundle - like model ] , phys . rev .e 91 ( 2015 ) 062811 . http://dx.doi.org/10.1103/physreve.91.062811 [ ] .http://link.aps.org/doi/10.1103/physreve.91.062811 i. dobson , j. chen , j. thorp , b. carreras , d. newman , examining criticality of blackouts in power system models with cascading events , in : system sciences , 2002 .proceedings of the 35th annual hawaii international conference on , 2002 , p. 10http://dx.doi.org/doi:10.1109/hicss.2002.993975 [ ] .favre - perrod , a vision of future energy networks , in : power engineering society inaugural conference and exposition in africa , 2005 ieee , 2005 , pp .http://dx.doi.org/10.1109/pesafr.2005.1611778 [ ] .h. wang , q. li , g. dagostino , s. havlin , h. e. stanley , p. van mieghem , http://link.aps.org/doi/10.1103/physreve.88.022801[effect of the interconnected network structure on the epidemic threshold ] , phys .e 88 ( 2013 ) 022801 . http://dx.doi.org/10.1103/physreve.88.022801 [ ] . http://link.aps.org/doi/10.1103/physreve.88.022801 j. martin - hernandez , h. wang , p. v. mieghem , g. dagostino , http://www.sciencedirect.com/science/article/pii/s0378437114001526[algebraic connectivity of interdependent networks ] , physica a : statistical mechanics and its applications 404 ( 0 ) ( 2014 ) 92 105 .http://dx.doi.org/http://dx.doi.org/10.1016/j.physa.2014.02.043 [ ] .http://www.sciencedirect.com/science/article/pii/s0378437114001526 of the systems .for simplicity , we present the case of two identical systems with a flat distribution of link capacities and symmetric couplings .we show the result of increasing the total stress in the two systems along the lines .* left panel * : we show the case where both systems are subject to a similar stress while increasing . in such case both system break down together at the same critical load ; in the region both systems have failed .* right panel * : we show the case where when increasing system is more stressed than system . 
in this case , the break down of system at the critical load induces a jump in the number of failures system , but system is still able to sustain stress and will break down only at higher values of .respect to the case , there is now a region where only system has failed . ] ) .the plane of initial loads and is separated in four different regions by critical transition lines .the labels ( ) mark the areas where only system suffers systemic cascades ( , ) , while the label marks the area where both systems suffer system wide cascades ( ) .the label marks the area near the origin where no systemic cascades occur .* left panel : * the case corresponds to two uncoupled systems : thus , each system suffers systemic failure at ( where is the critical load for an isolated system ) ; both systems have failed in the area corresponding to the quadrant .* central panel , right panel : * when couplings are introduced , each system is able to discharge stress on the other one and the area where both systems are safe increases . on the other hand , the area where _ both _systems are in a failed state increases . ]
|
in this manuscript , we investigate the abrupt breakdown behavior of coupled distribution grids under load growth . this scenario mimics the ever - increasing customer demand and the foreseen introduction of energy hubs interconnecting the different energy vectors . we extend an analytical model of cascading behavior due to line overloads to the case of interdependent networks and find evidence of first order transitions due to the long - range nature of the flows . our results indicate that the foreseen increase in the couplings between the grids has two competing effects : on the one hand , it increases the safety region where grids can operate without withstanding systemic failures ; on the other hand , it increases the possibility of a joint systems failure . complex networks , interdependencies , mean field models
|
random 3-sat is a classic problem in combinatorics , at the heart of computational complexity studies and a favorite testing ground for both exactly analyzable and heuristic solution methods which are then applied to a wide variety of problems in machine learning and artificial intelligence .it consists of a ensemble of randomly generated logical expressions , each depending on boolean variables , and constructed by taking the and of clauses .each clause consists of the or of 3 `` literals '' . is taken to be either or at random with equal probability , and the three values of the index in each clause are distinct .conversely , the neighborhood of a variable is , the set of all clauses in which or appear . for each such random formula, one asks whether there is some set of values for which the formula evaluates to be true .the ratio controls the difficulty of this decision problem , and predicts the answer with high accuracy , at least as both and tend to infinity , with their ratio held constant . at small , solutions are easily found , while for sufficiently large there are almost certainly no satisfying configurations of the , and compact proofs of this fact can be constructed . between these limits lies a complex , spin - glass - like phase transition , at which the cost of analyzing the problem with either exact or heuristic methods explodes .a recent series of papers drawing upon the statistical mechanics of disordered materials has not only clarified the nature of this transition , but also lead to a thousand - fold increase in the size of the concrete problems that can be solved this paper provides a derivation of the new methods using nothing more complex than probabilities , suggests some generalizations , and reports numerical experiments that disentangle the contributions of the several component heuristics employed . for two related discussions ,see .an iterative `` belief propagation '' ( bp ) algorithm for k - sat can be derived to evaluate the probability , or `` belief , '' that a variable will take the value true in variable configurations that satisfy the formula considered . to calculate this ,we first define a message ( `` transport '' ) sent from a variable to a clause : * is _ the probability that variable satisfies clause _ in the other direction , we define a message ( `` influence '' ) sent from a clause to a variable : * is _ the probability that clause is satisfied by another variable than _ in 3-sat , where clause depends on variables , and , bp gives the following iterative update equation for its influence . the bp update equations for the transport involve the products of influences acting on a variable from the clauses which surround , forming its `` cavity , '' , sorted by which literal ( or ) appears in the clause : the update equations are then the superscripts and denote iteration .the probabilistic interpretation is the following : suppose we have for all clauses connected to variable .each of these clauses can either be satisfied by another variable ( with probability ) , or not be satisfied by another variable ( with probability ) , and also be satisfied by variable itself .if we set variable to 0 , then some clauses are satisfied by , and some have to be satisfied by other variables .the probability that they are all _ satisfied _ is .similarly , if is set to 1 then all these clauses are satisfied with probability . 
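to make the update rules concrete before continuing , the following python sketch implements the influence and transport messages just described for a random 3-sat formula ; the clause encoding ( variable index , sign ) , the asynchronous update order and the function names are our own choices , so this is an illustration of standard bp rather than the authors ' code .

import random

def random_3sat(n_vars, n_clauses, seed=0):
    rng = random.Random(seed)
    # one clause = three distinct variables , each with a random sign ;
    # sign +1 is taken here to mean the plain literal , -1 the negated one
    return [[(v, rng.choice((+1, -1))) for v in rng.sample(range(n_vars), 3)]
            for _ in range(n_clauses)]

def run_bp(clauses, n_vars, n_iter=200, seed=1):
    rng = random.Random(seed)
    # influence[(a, v)] : probability that clause a is satisfied by another variable than v
    influence = {(a, v): rng.random() for a, cl in enumerate(clauses) for v, _ in cl}
    # clauses around each variable , sorted by the sign with which the variable appears
    around = {v: {+1: [], -1: []} for v in range(n_vars)}
    for a, cl in enumerate(clauses):
        for v, s in cl:
            around[v][s].append(a)

    def transport(j, sj, a):
        # probability that variable j itself satisfies clause a , given the influences of
        # the other clauses in the cavity of j
        w_sat = w_unsat = 1.0
        for b in around[j][-sj]:          # clauses left to others if j does satisfy a
            if b != a:
                w_sat *= influence[(b, j)]
        for b in around[j][sj]:           # clauses left to others if j does not satisfy a
            if b != a:
                w_unsat *= influence[(b, j)]
        return w_sat / (w_sat + w_unsat) if w_sat + w_unsat > 0.0 else 0.5

    for _ in range(n_iter):
        for a, cl in enumerate(clauses):
            for v, _ in cl:
                prod = 1.0
                for j, sj in cl:
                    if j != v:
                        prod *= 1.0 - transport(j, sj, a)
                influence[(a, v)] = 1.0 - prod

    # single - variable beliefs , used to pick the most polarized variable for decimation
    beliefs = {}
    for v in range(n_vars):
        w1 = w0 = 1.0
        for b in around[v][-1]:           # clauses that need other variables if v is true
            w1 *= influence[(b, v)]
        for b in around[v][+1]:           # clauses that need other variables if v is false
            w0 *= influence[(b, v)]
        beliefs[v] = w1 / (w0 + w1) if w0 + w1 > 0.0 else 0.5
    return beliefs

beliefs = run_bp(random_3sat(200, 700), 200)
print(max(beliefs.items(), key=lambda kv: abs(kv[1] - 0.5)))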
the products in ( [ eq : scott - scaled - t - iterations ] )can therefore be interpreted as joint probabilities of independent events .variable can be or in a solution if the clauses in which appears are either satisfied directly by itself , or by other variables .hence a bp - based decimation scheme results from fixing the variables with largest probability to be either true or false .we then recalculate the beliefs for the reduced formula , and repeat .to arrive at sp we introduce a modified system of beliefs : every variable falls into one of three classes : true in all solutions ( 1 ) ; false in all solutions(0 ) ; and true in some and false in other solutions ( ) .the message from a clause to a variable ( an influence ) is then the same as in bp above . although we will again only need to keep track of one message from a variable to a clause ( a transport ) , it is convenient to first introduce three ancillary messages : * is _ the probability that variable is true in clause in all solutions _ * is _ the probability that variable is false in clause in all solutions _ * is _ the probability that variable is true in clause in some solutions and false in others_. note that there are here three transports for each directed link , from a variable to a clause , in the graph . as in bp ,these numbers will be functions of the influences from clauses to variables in the preceeding update step .taking again the incoming influences independent , we have the proportionality indicates that the probabilities are to be normalized .we see that the structure is quite similar to that in bp .but we can make it closer still by introducing with the same meaning as in bp . in spit will then , as the case might be , be equal to to or .that gives ( compare ( [ eq : scott - scaled - t - iterations ] ) ) : the update equations for are the same in sp as in bp , i.e. one uses ( [ eq : scott - scaled - i - iterations ] ) in sp as well .similarly to ( [ eq : single - bit - probabilities ] ) , decimation now removes the most fixed variable , i.e. the one with the largest absolute value of .given the complexity of the original derivation of sp , it is remarkable that the sp scheme can be interpreted as a type of belief propagation in another belief system . 
andeven more remarkable that the final iteration formulae differ so little .a modification of sp which we will consider in the following is to interpolate between bp and sp by considering equations we do not have an interpretation of the intermediate cases of as belief systems .early work on developing 3-sat heuristics discovered that as is increased , the problem changes from being easy to solve to extremely hard , then again relatively easy when the formulae are almost certainly unsat .it was natural to expect that a sharp phase boundary between sat and unsat phases in the limit of large accompanies this `` easy - hard - easy '' observed transition , and the finite - size scaling results of confirmed this .their work placed the transition at about .monasson and zecchina soon showed , using the replica method from statistical mechanics , that the phase transition to be expected had unusual characteristics , including `` frozen variables '' and a highly nonuniform distribution of solutions , making search difficult .recent technical advances have made it possible to use simpler cavity mean field methods to pinpoint the sat / unsat boundary at and suggest that the `` hard - sat '' region in which the solution space becomes inhomogeneous begins at about .these calculations also predicted a specific solution structure ( termed 1-rsb for `` one step replica symmetry - breaking '' ) in which the satisfiable configurations occur in large clusters , maximally separated from each other .two types of frozen variables are predicted , one set which take the same value in all clusters and a second set whose value is fixed within a particular cluster .the remaining variables are `` paramagnetic '' and can take either value in some of the states of a given cluster .a careful analysis of the 1-rsb solution has subsequently shown that this extreme structure is only stable above . between 3.92 and 4.15 a wider range of cluster sizes , and wide range of inter - cluster hamming distancesare expected . as a result, we expect the values , and to separate regions in which the nature of the 3-sat decision problem is distinctly different .[ fig : dependance - on - rho . ] `` survey - induced decimation '' consists of using sp to determine the variable most likely to be frozen , then setting that variable to the indicated frozen value , simplifying the formula as a result , updating the sp calculation , and repeating the process .for we expect sp to discover that all spins are free to take on more than one value in some ground state , so no spins will be decimated . above ,sp ideally should identify frozen spins until all that remain are paramagnetic .the depth of decimation , or fraction of spins reminaing when sp sees only paramagnetic spins , is thus an important characteristic .we show in fig .1 the fraction of spins remaining after survey - induced decimation for values of from to in hundreds of formulae with .the error bars show the standard deviation , which becomes quite large for large values of . to the left of ,on the descending part of the curves , sp reaches a paramagnetic state and halts . 
on the right , or ascending portion of the curves , sp stops by simply failing to converge . fig . 1 also shows how different bp and the hybrids between bp and sp are in their decimation behavior . we studied bp ( ) , underrelaxed sp ( ) , sp , and overrelaxed sp ( ) . bp and underrelaxed sp do not reach a paramagnetic state , but continue until the formula breaks apart into clauses that have no variables shared between them . we see in fig . 1 that bp stops working at roughly , the point at which sp begins to operate . the underrelaxed sp behaves like bp , but can be used well into the rsb region . on the rising parts of all four curves in fig . 1 , the scheme halted as the surveys ceased to converge . overrelaxed sp in fig . 1 may give reasonable recommendations for simplification even on formulae which are likely to be unsat .

next we consider wsat , the random walk - based search routine used to finish the job of exhibiting a satisfying configuration after sp ( or some other decimation advisor ) has simplified the formula . the surprising power exhibited by sp has to some extent obscured the fact that wsat is itself a very powerful tool for solving constraint satisfaction problems , and has been widely used for this . its running time , expressed in the number of walk steps required for a successful search , is also useful as an informal definition of the complexity of a logical formula . its history goes back to papadimitriou 's observation that a subtly biased random walk would with high probability discover satisfying solutions in the simpler 2-sat problem after , at worst , steps . his procedure was to start with an arbitrary assignment of values to the binary variables , then reverse the sign of one variable at a time using the following random process :

* select an unsatisfied clause at random
* select at random a variable that appears in the clause
* reverse that variable

this procedure , sometimes called rwalksat , works because changing the sign of a variable in an unsatisfied clause always satisfies that clause and , at first , has no net effect on other clauses . it is much more powerful than was proven initially . two recent papers have argued analytically and shown experimentally that rwalksat finds satisfying configurations of the variables after a number of steps that is proportional to for values of up to roughly , after which this cost increases exponentially with . [ fig:2 ]
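papadimitriou 's walk is short enough to state as code ; the sketch below follows the three - step process literally ( pick an unsatisfied clause , pick one of its variables , reverse it ) on a random 3-sat instance . the clause encoding matches the earlier bp sketch , the step cap and instance size are arbitrary illustrative values , and the per - step rescan of all clauses is kept for clarity rather than speed .

import random

def random_3sat(n_vars, n_clauses, seed=0):
    rng = random.Random(seed)
    return [[(v, rng.choice((+1, -1))) for v in rng.sample(range(n_vars), 3)]
            for _ in range(n_clauses)]

def rwalksat(clauses, n_vars, max_steps=10**6, seed=0):
    rng = random.Random(seed)
    x = [rng.choice((0, 1)) for _ in range(n_vars)]
    satisfied = lambda cl: any((x[v] == 1) == (s == +1) for v, s in cl)
    for step in range(max_steps):
        unsat = [cl for cl in clauses if not satisfied(cl)]   # full rescan , clear but slow
        if not unsat:
            return x, step                 # satisfying assignment found
        clause = rng.choice(unsat)         # select an unsatisfied clause at random
        v, _ = rng.choice(clause)          # select at random a variable in that clause
        x[v] = 1 - x[v]                    # reverse that variable
    return None, max_steps

assignment, steps = rwalksat(random_3sat(200, 500, seed=2), 200, seed=3)
print("solved" if assignment is not None else "gave up", "after", steps, "flips")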
in fig .2c the solid lines show the wsat costs divided by n , the original number of variables in each formula .if we instead divide the wsat cost after decimation by the number of variables remaining , the complexity measure that we obtain is only a factor of two larger , as shown by the dotted lines .the relative cost of running wsat without benefit of decimation is 3 - 4 decades larger .we measured the actual compute time consumed in survey propagation and in wsat .for this we used the zecchina group s version 1.3 survey propagation code , and the copy of wsat ( h. kautz s release 35 , see ) that they have also employed .all programs were run on a pentium iv xeon 3ghz dual processor server with 4 gb of memory , and only one processor busy .we compare timings from runs on the same 100 formulas with and and 4.2 ( the formulas are simply extended slightly for the second case ) . in the first case ,the 100 formulas were solved using wsat alone in 921 seconds . using sp to guide decimation one variable at a time , with the survey updates performed locally around each modified variable , the same 100 formulas required 6218 seconds to solve , of which only 31 sec was spent in wsat .when we increase alpha to 4.2 , the situation is reversed .running wsat on 100 formulas with required 27771 seconds on the same servers , and would have taken even longer if about half of the runs had not been stopped by a cutoff without producing a satisfying configuration .in contrast , the same 100 formulas were solved by sp followed with wsat in 10,420 sec , of which only 300 seconds were spent in wsat .the cost of sp does not scale linearly with , but appears to scale as in this regime .we solved 100 formulas with using sp followed by wsat in 39643 seconds , of which 608 sec was spent in wsat .the cost of running sp to decimate roughly half the spins has quadrupled , while the cost of the final wsat runs remained proportional to .decimation must stop short of the paramagnetic state at the highest values of , to avoid having sp fail to converge . in those cases we found that wsat could sometimes find satisfying configurations if started slightly before this point .we also explored partial decimation as a means of reducing the cost of wsat just below the 1-rsb regime , but found that decimation of small fractions of the variables caused the wsat running times to be highly unpredictable , in many cases increasing strongly . as a result ,partial decimation does not seem to be a useful approach .the sp and related algorithms are quite new , so programming improvements may modify the practical conclusions of the previous section .however , a more immediate target for future work could be the wsat algorithms .further directing its random choices to incorporate the insights gained from bp and sp might make it an effective algorithm even closer to the sat / unsat transition .we have enjoyed discussions of this work with members of the replica and cavity theory community , especially riccardo zecchina , alfredo braunstein , marc mezard , remi monasson and andrea montanari .this work was performed in the framework of eu / fp6 integrated project evergrow ( www.evergrow.org ) , and in part during a thematic institute supported by the exystence eu / fp5 network of excellence . e.a .acknowledges support from the swedish science council .s.k . and u.gare partially supported by a us - israeli binational science foundation grant .99 kirkpatrick s. & selman b. 
( 1994 ) critical behaviour in the satisfiability of random boolean expressions . _ science _ * 264 * : 1297 - 1301 .
monasson r. & zecchina r. ( 1997 ) statistical mechanics of the random k - sat problem . _ phys . rev . e _ * 56 * : 1357 - 1361 .
montanari a. , parisi g. & ricci - tersenghi f. ( 2003 ) instability of one - step replica - symmetric - broken phase in satisfiability problems . cond - mat/0308147 .
papadimitriou c.h . ( 1991 ) in _ focs 1991 _ .
selman b. & kautz h.a . ( 1993 ) in _ proc . aaai-93 _ * 26 * .
semerjian g. & monasson r. ( 2003 ) _ phys . rev . e _ * 67 * : 066103 .
barthel w. , hartmann a.k . & weigt m. ( 2003 ) _ phys . rev . e _ * 67 * : 066104 .
selman b. , kautz h. & cohen b. ( 1996 ) local search strategies for satisfiability testing . _ dimacs series in discrete mathematics and theoretical computer science _ * 26 * . http://www.cs.washington.edu/homes/kautz/walksat/
|
survey propagation is a powerful technique from statistical physics that has been applied to solve the 3-sat problem both in principle and in practice . we give , using only probability arguments , a common derivation of survey propagation , belief propagation and several interesting hybrid methods . we then present numerical experiments which use wsat ( a widely used random - walk based sat solver ) to quantify the complexity of the 3-sat formulae as a function of their parameters , both as randomly generated and after simplification , guided by survey propagation . some properties of wsat which have not previously been reported make it an ideal tool for this purpose its mean cost is proportional to the number of variables in the formula ( at a fixed ratio of clauses to variables ) in the easy - sat regime and slightly beyond , and its behavior in the hard - sat regime appears to reflect the underlying structure of the solution space that has been predicted by replica symmetry - breaking arguments . an analysis of the tradeoffs between the various methods of search for satisfying assignments shows wsat to be far more powerful than has been appreciated , and suggests some interesting new directions for practical algorithm development .
|
certain nonlinear dynamical systems can demonstrate non - periodic , long - term non - predictive behaviors , known as chaos .chaotic behavior results from the high sensitivity of the system to the initial state , which is never exactly known in practice .any small perturbation can grow exponentially with time within the system leading to a non - predictive behavior of chaotic systems .chaotic waveforms have been extensively used in various research areas such as the modeling of the behavior of human organs as well as the modulation of signals and chaotic encryption of telecommunication data . truly random number generators ( trng )can be utilized in cryptographic systems .a trng is a number generator that is capable of producing uncorrelated and unbiased binary digits through a nondeterministic and irreproducible process .a trng requires a high entropy source , which can be provided by uncertain chaotic sources .a discrete - time chaotic map , formed by the iteration of the output value in a transformation function , can be used for the generation of random numbers .simple piecewise affine input - output ( i / o ) characteristics have been extensively used for the generation of random bits , e.g. , the bernoulli map , and the tent map .the entropy source of a chaotic map is the inherent noise of the system , which is amplified in the positive gain feedback loop by the iteration of the output signal in the map function .high speed , capability of integration , and the high quality of the generated bits make the discrete - time chaotic maps very good candidates for high speed embeddable random number generation . the practical application of both the tent map and the bernoulli map can be hindered by noise and implementation errors , where they are unable to maintain the state of the system confined . in this paper , we present a modified tent map that can be interchanged with the tent map in practical applications .the rest of this paper is organized as follows . in section [ sec : map ] , the fundamentals of discrete - time chaotic maps are reviewed , and the practical problems of the tent map are pointed out . in section[ sec : modified_tent ] , we present the modified tent map and investigate its chaotic characteristics . in section [ sec : circuit ] , we demonstrate the feasibility of implementing the presented modified tent map for true random number generation .section [ sec : conclusion ] concludes the paper .in this section , after a brief introduction to discrete - time chaotic maps , we review the tent map and the previously proposed implementations . discrete - timemarkov chaotic sources are a subclass of discrete - time chaotic nonlinear dynamical systems .a discrete - time chaotic system is formed by the iteration of the output signal through a transformation function as given by in this equation , represents the time step , is the initial state of the system , and is the state of the system at time step . [fig : maps ] the tent map function , shown in figure [ fig : maps ] ( a ) , is given by it can be shown that after several iterations , the density of states in the tent map asymptotically follows a uniform distribution , i.e. 
, the state of the system is uniformly distributed in and the asymptotic density distribution for the tent map satisfies .the lyapunov exponent of a discrete - time map can be calculated from where is the derivative of the map function and is the asymptotic density distribution .a positive lyapunov exponent implies the chaotic behavior in the system .the rate of the separation between the trajectories of very close initial states is given by .for tent map , and , which result in .tent map circuit implementation has been proposed in .these circuit implementations suffer from the confinement problem in practice , i.e. , the output value of the map can be trapped in a point outside the map due to noise or implementation errors .a tailed tent map has been presented in to solve the problem .the tailed tent map maintains the uniform asymptotic density distribution of the states , while not disturbing characteristics of the tent map .for example , the utilization of the tailed tent map for true random number generation results in the generation of correlated output binary sequence . in ,a hardware implementation has been proposed for the tent map based on reducing the slope , which will change the characteristics of the map and degrade the quality of the generated binary sequence in terms of the statistical characteristics .in this section , we present the modified tent map and investigate its chaotic behavior . the presented modified tent map function , shown in figure [ fig : maps ] ( b ) ,is given by in the modified tent map , the sign of the output value alternates in each iteration , i.e , .suppose , , and are the abscissa of three successive output values of the tent map , and , , and are the output values of the modified tent map .if , we have and , as shown in figure [ fig : maps ] ( a , b ) . in other words , for an equal initial state , the absolute value of the output sequence is equal for both the tent map and the modified tent map while the output of the modified tent map alternates between positive and negative values . since the output values of the tent map and the modified tent map have equal absolute values , the lyapunov exponent of the modified tent map is equal to that of the tent map , which was shown to be .) . the generalized modified tent map with three different values of m ( , , and ) is shown in the inset.,width=297 ] [ fig : bifurcation ] in order to further investigate the presented modified tent map , we generalize the map function by introducing a slope parameter , where , as given by here , represents the modified tent map , as given by equation ( [ eq : tent_map ] ) .figure [ fig : bifurcation ] shows the bifurcation diagram of the generalized map function as a function of the bifurcation parameter .a bifurcation diagram is obtained by following the output trajectory to find the possible long - term states of the system . under certain conditions ,the bifurcation diagram can also represent the density of the states due to the ergodicity . in this diagram ,the density of the states is proportional to the darkness , i.e. darker regions have higher density of states .this diagram is obtained assuming that the inherent initial noise is a very small positive value , i.e. , . for , the system could not maintain the state in the input range , which could not be used for chaotic applications .for , the system is chaotic and the bifurcation diagram represents the steady state density distributions .chaotic behavior can also be observed for . 
since the initial state of the systemis assumed to be positive , the output is confined in positive values . for , the system does not demonstrate chaotic behavior .regardless of the initial state of the system , the output will ultimately settle in zero .for , the system also shows chaotic behavior . in this mode , the output alternates between positive and negative values with each iteration . in this region , the system does not have an asymptotic density distribution . for ,the system is chaotic and the bifurcation diagram represents its asymptotic density distribution. for , equation ( [ eq : bifurcation ] ) represents a map that is given by as the equation shows , this is exactly like the tent map for . here ,if is a little bit greater than , the system could still maintain the state confined in the system since the output could jump to the negative side of the map , which is the fundamental difference between this map and the tent map . in the tent map ,if the slope is more than , the system could not maintain the state confined .therefore , the tent map could not be used for practical applications under process variations and noise .although in the neighborhood of , the asymptotic density distribution does not exist , the modified tent map could be used in practical applications such as true random number generation based on the principles explained earlier .in this section , we propose the utilization of a current mode circuit implementation for the presented modified tent map . in current mode circuits ,the signal is represented by the branch current instead of the node voltage in the voltage mode circuits .current mode circuits have recently attracted great attention because of their capability of very low supply voltage operation .while scaling the feature size , the supply voltage is scaled more rapidly than the threshold voltage , which in turn decreases the overdrive voltage of the transistors and limits the voltage swing . in current mode circuits ,high current gains can be achieved while the nodal voltages are floating .current mode circuits will be playing an important role in the future deep sub - micron technologies , e.g. , telecommunications , analog signal processing , and multiprocessors .the presented modified tent map is a _ continuous _ piecewise affine map , where we implemented it by the affine interpolation of the value at the breakpoints , proposed in . in this method , the affine partitionsare implemented using elementary blocks based on the detection of the breakpoints .therefore , it is straight - forward to detect whether the output value is within a certain affine partition of the map , and generate a binary output sequence .we performed hspice simulations in tsmc technology for the modified tent map circuit .the i / o characteristic curve of the map is shown in figure [ fig : map_bit_setti](a ) , where the input and output range are equal to .investigating the transient response of the circuit with an input current pulse , we demonstrated that at the worst case the output would settle within of its final value in less than ns .therefore , the circuit could be utilized at operation frequencies as high as mhz .we demonstrate the feasibility of binary generation from the presented modified tent map . in figure[ fig : map_ideal](a ) , the partition is represented by state , and the partitions and are represented by the states and , respectively . 
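before describing how bits are read out from these partitions , the map dynamics themselves can be prototyped in a few lines ; the sketch below iterates the modified tent map of eq . [ eq : tent_map ] , estimates the lyapunov exponent from the separation of two nearby trajectories ( which should come out close to ln 2 ) , and derives a rough bit sequence by comparing the magnitude of the state with 0.5 . the additive noise amplitude and the 0.5 readout threshold are our illustrative assumptions , not the circuit parameters of the paper .

import math, random

def modified_tent(x):
    # | output | follows the tent map of | x | , while the sign alternates every iteration
    y = 2.0 * abs(x) if abs(x) <= 0.5 else 2.0 * (1.0 - abs(x))
    return -y if x > 0 else y

def lyapunov(x0=0.2, d0=1e-9, n=60):
    # follow two nearby trajectories and renormalize their separation at every step ;
    # the slope magnitude is 2 almost everywhere , so the estimate should be close to ln 2
    x, y, total = x0, x0 + d0, 0.0
    for _ in range(n):
        x, y = modified_tent(x), modified_tent(y)
        d = max(abs(y - x), 1e-300)
        total += math.log(d / d0)
        y = x + (d0 if y >= x else -d0)
    return total / n

def bit_stream(n=20000, x0=1e-6, noise=1e-12, seed=0):
    # the tiny additive noise plays the role of the circuit noise that seeds the chaos ;
    # without it a floating - point trajectory eventually collapses onto a trivial orbit .
    # bits are read out by comparing | x | with 0.5 , an assumed partition of the map .
    rng, x, out = random.Random(seed), x0, []
    for _ in range(n):
        x = modified_tent(x) + noise * (rng.random() - 0.5)
        out.append(1 if abs(x) > 0.5 else 0)
    return out

print("lyapunov exponent ~", lyapunov(), "( ln 2 =", math.log(2.0), ")")
b = bit_stream()
print("fraction of ones :", sum(b[200:]) / len(b[200:]))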
in order to generate a binary output ,the states and are combined in a macro - state .the state corresponds to the bit and the state corresponds to the bit .as discussed earlier , the binary sequence generated from a modified tent map is exactly the same as the binary sequence generated from a tent map due to the symmetry of the bit - generation process around the origin .therefore , the bit - generation process can be modeled as a markov chain similar to the tent map , as shown in figure [ fig : map_ideal](b ) . in this figure, is the probability of the generation of when the latest generated bit is . is the probability of the generation of a , when the previous bits is . since the functionality of the map is similar to the tent map , we have and the generated bits are truly random . in figure[ fig : map_bit_setti](b ) , the simulation result for the bit - generation circuit is presented , where it demonstrates very good agreement with the desired bit - generation pattern .[ fig : map_bit_setti ] [ fig : map_ideal ]the tent map suffers from the inability to maintain the state confined in the input range of the map under noise and implementation errors .this fundamental issue limits the practical application of the tent map . in this paper, we presented a modified tent map , which can solve the state confinement problem of the tent map .the presented modified tent map could be used in all practical applications instead of the tent map .we demonstrated that the presented map could be used for true random number generation .s. callegari , r. rovatti , and g. setti , `` embeddable adc - based true random number generator for cryptographic applications exploiting nonlinear signal processing and chaos , '' _ ieee trans . signal processing _ , vol .53 , no . 2 ,pp . 793805 , february 2005 .t. morie , s. sakabayashi , m. nagata , and a. iwata , `` cmos circuits generating arbitrary chaos by using pulsewidth modulation techniques , '' _ ieee trans .circuits syst .i _ , vol .47 , no . 11 , pp .16521657 , november 2000 .r. rovatti , n. manaresi , g. setti , and e. franchi , `` a current - mode circuit implementing chaotic continuous piecewise - affine markov maps , '' _ international conference on microelectronics for neural , fuzzy and bio - inspired systems _ , pp .275282 , 1999 .
|
tent map is a discrete - time piecewise - affine i / o characteristic curve , which is used for chaos - based applications , such as true random number generation . however , tent map suffers from the inability to maintain the output state confined to the input range under noise and process variations . in this paper , we propose a modified tent map , which is interchangeable with the tent map for practical applications . in the proposed modified tent map , the confinement problem is solved while maintaining the functionality of the tent map . we also demonstrate simulation results for the circuit implementation of the presented modified tent map for true random number generation .
|
cells process information from the outside and regulate their internal state by means of proteins and dna that chemically and physically interact with one another . these biochemical networks are often highly stochastic , because in living cells the reactants often occur in small numbers . this is particularly important in gene expression , where transcription factors are frequently present in copy numbers as low as tens of molecules per cell . while it is generally believed that biochemical noise can be detrimental to cell function , it is increasingly becoming recognized that noise can also be beneficial to the organism . understanding noise in gene expression is thus important for understanding cell function , and this observation has recently stimulated much theoretical and experimental work in this direction . however , the theoretical analyses usually employ the zero - dimensional chemical master equation . this approach takes into account the discrete character of the reactants and the probabilistic nature of chemical reactions . it does assume , however , that the cell is a ` well - stirred ' reactor , in which the particles are uniformly distributed in space at all times ; the reaction rates only depend upon the global concentrations of the reactants and not upon the spatial positions of the reactant molecules . yet , in order to react , reactants first have to move towards one another . they do so by diffusion , or , in the case of eukaryotes , by a combination of diffusion and active transport . both processes are stochastic in nature and this could contribute to the noise in the network .

here , we study by computer simulation the expression of a single gene that is under the control of a repressor in a spatially - resolved model . we find that at low repressor concentration , below roughly 50 nM , the noise in the spatially - resolved model can be more than five times larger than the noise in the well - stirred model . we also show that a cell could minimize the effect of spatial fluctuations , either by tuning the open complex formation rate or by changing the number of repressors and their affinity for the binding site on the dna . in section [ sec : operbind ] , we elucidate the origin of the enhanced noise in the spatially resolved model . in the subsequent section , we show that in the model employed here the effect of spatial fluctuations can be quantitatively described by a well - stirred model in which the reaction rates for repressor binding and unbinding are appropriately renormalized ; however , as we discuss in the last section , we expect that in a more refined model the effect of diffusion will be more complex , impeding such a simplified description . in section [ sec : ps ] , we discuss how the operator state fluctuations propagate through the different stages of gene expression using power spectra for the operator state , elongation complex , mrna and protein . the results show that these power spectra are highly useful for unraveling the dynamics of gene expression . we hope that this stimulates experimentalists to measure power spectra of not only mrna and protein levels , but also of the dynamics of transcription initiation and elongation using , e.g. , magnetic tweezers . as we argue in the last section , such experiments should make it possible to determine the importance of spatial fluctuations for the noise in gene expression .

we explicitly simulate the diffusive motion of the repressor molecules in space . however , since the experiments of riggs _ et al . _
_ and the theoretical work of berg , winter , and von hippel , it is well known that proteins could find their target sites via a combination of 1d sliding along the dna and 3d diffusion through the cytoplasm `` hopping '' or `` jumping '' from one site on the dna to another .this mechanism could speed up the search process and make it faster than the rate at which particles find their target by free 3d diffusion ; this rate is given by ] is the concentration of the ( repressor ) protein .however , while it is clear that the mechanism of 3d diffusion and 1d sliding could potentially speed up the search process , whether this mechanism in living cells indeed drastically reduces the search time is still under debate . in this context, it is instructive to discuss the two main results of recent studies on this topic .the first is that the mean search time is given by ,\ ] ] where is the total length of the dna , is the average distance over which the protein slides along the dna before it dissociates , is the diffusion constant for sliding , is the typical mesh size in the nucleoid , and is the diffusion constant in the cytoplasm .this formula has a clear interpretation : is the sliding time , is the time spent on 3d diffusion , the sum of these terms is thus the time to perform one round of sliding and diffusion , and is the total number of rounds needed to find the target .the other principal result is that the search time is minimized when the sliding distance is under these conditions , a protein spends equal amounts of time on 3d diffusion and 1d sliding ( a protein is thus half the time bound to the dna ) .[ eq : lambda ] is a useful result , because it shows that the average sliding distance depends upon the ratio of diffusion constants and on the typical mesh size in the nucleoid .if we now assume that and are equal ( which is not obvious given that proteins bind relatively strongly to dna could thus very well be much smaller than ) and if we take the mesh size to be given by , where is the volume of an _ escherichia coli _ cell and , we find that ( 30 bp ) .this corresponds to the typical diameter of a protein or dna double helix and is thus not very large .interestingly , recent experiments seem to confirm this : experiments from halford _et al . _ on restriction enzymes ( ecorv and bbcci ) with a series of dna substrates with two target sites and varying lengths of dna between the two sites , suggest that under the _ in vivo _ conditions , sliding is indeed limited to relatively short distances , _i.e. _ to distances less than 50 bp ( ) .now , it should be realized that on length scales beyond the sliding length , the motion is essentially 3d diffusion : the sliding / hopping mechanism corresponds to 3d diffusion with a jump distance given by the sliding distance .moreover , since the sliding distance is only on the order of a particle diameter , as discussed above , we have therefore decided to model the motion of the repressor molecules as 3d diffusion .but it should be remembered that on length scales smaller than , this approach is not correct .we discuss the implications of this for our results in the discussion section .most repressors bind to a site that ( partially ) overlaps with the core promoter the binding site of the rna polymerase ( rnap ) .when a repressor molecule is bound to its operator site , it prevents rnap from binding to the promoter , thereby switching off gene expression . 
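the sliding - plus - diffusion estimate discussed above can be made concrete in a few lines ; the genome length , mesh size and diffusion constants below are illustrative assumptions rather than numbers from the text , but the two expressions implemented ( the round - based search time and the optimal sliding distance at which the 1d and 3d contributions per round balance ) follow the interpretation given above .

```python
import numpy as np

def search_time(L, lam, D1, xi, D3):
    # (number of rounds L/lam) x (sliding time lam^2/D1 + 3d excursion time xi^2/D3)
    return (L / lam) * (lam ** 2 / D1 + xi ** 2 / D3)

# illustrative parameters (not taken from the text): an e. coli-like genome length,
# a 10 nm mesh size and equal 1d and 3d diffusion constants of 1 um^2/s
L, xi = 1.5e-3, 1.0e-8        # metres
D1 = D3 = 1.0e-12             # m^2 / s

lam_star = xi * np.sqrt(D1 / D3)   # optimum: equal time spent sliding and diffusing
for lam in (1e-9, lam_star, 1e-7):
    t = search_time(L, lam, D1, xi, D3)
    print("sliding distance %6.1f nm -> search time %8.1f s" % (lam * 1e9, t))
```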
only in the absence of a repressor on the operator site , can rnap bind to the promoter and initiate transcription and translation , ultimately resulting in the production of a protein .we model this by the following reaction network : eqs .[ eq : gene_exp1 ] and [ eq : gene_exp2 ] describe the competition between the binding of the repressor and the rnap molecules to the promoter ( is the operator site ) . in our simulationwe fix the binding site in the center of a container with volume , comparable to the volume of a single _ e. coli _ cell .we simulate both the operator site and the repressor molecules as spherical particles with diameter nm . the operator site is surrounded by repressor molecules that move by free 3d diffusion ( see previous section ) with an effective diffusion constant , as has been reported for proteins of a similar size .the intrinsic forward rate for the repressor particles at contact is estimated from the maxwell - boltzmann distribution .the backward rate depends on the interaction between the dna binding site of the repressor and the operator site on the dna and varies greatly between different operons , with stronger repressors having a lower . in our simulations , we vary between , as discussed in more detail below .the concentration of rnap is much higher than that of the repressor . because of this we treat the rnap as distributed homogeneously within the cell and we do not to take diffusion of rnap into account explicitly . instead ,rnap associates with the promoter with a diffusion - limited rate ] , leading to a forward rate .finally , the backward rate is determined such that .transcription initiation is described by eqs .[ eq : gene_exp3 ] and [ eq : gene_exp4 ] . beforeproductive synthesis of rna occurs , first the rnap in the rnap - promoter complex unwinds approximately one turn of the promoter dna to form the open complex .the open complex formation rate has been measured to be on the order of .we approximate open complex formation as an irreversible reaction .some experiments find this step to be weakly reversible .however , adding a backward reaction to the model did not change the dynamics of the system in a qualitative way , as long as the backward rate is smaller than , which is in agreement with experimental results .after open complex formation , rnap must first escape the promoter region before another rnap or repressor can bind . since elongation occurs at a rate of nucleotides per second and between nucleotidesmust be cleared by rnap before the promoter is accessible , a waiting time of is required before another binding can occur .since promoter clearance consists of many individual elongation events that obey poisson statistics individually , we model the step as one with a fixed time delay , not as a poisson process with rate .[ eq : gene_exp5]-[eq : gene_exp9 ] describe the dynamics of mrna and protein numbers .after clearing the promoter region , rnap starts elongation of the transcript . as for clearance , the elongation step is modeled as a process with a fixed time delay , corresponding to an elongation rate of nucleotides per second and a bp gene . when a mrna is formed , it can degrade with a rate . 
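for orientation , the following sketch evaluates two quantities of the kind that enter the parameterization just described : a smoluchowski - type diffusion - limited association rate 4 pi sigma D for a single binding site , and the fixed promoter - clearance delay obtained from an elongation speed and the number of nucleotides that must be cleared . all numerical values are illustrative placeholders , not the values used in the simulations .

```python
import numpy as np

N_AV = 6.022e23

# illustrative placeholders, not the values used in the simulations
sigma = 10e-9          # reaction radius, m
D = 1.0e-12            # relative diffusion constant, m^2 / s
V = 1.0e-18            # ~1 um^3, roughly the volume of an e. coli cell, m^3

k_D = 4.0 * np.pi * sigma * D                      # diffusion-limited rate, m^3 / s
print("k_D = %.2e m^3/s  (= %.2e per molar per second)" % (k_D, k_D * N_AV * 1e3))
for n_R in (1, 5, 50):                             # repressor copy numbers
    print("n_R = %2d -> association rate k_D * [R] = %.3f per second" % (n_R, k_D * n_R / V))

# promoter clearance modelled as a fixed delay: nucleotides to clear / elongation speed
elongation_speed = 50.0    # nt per second (illustrative)
nt_to_clear = 35           # nt (illustrative)
print("promoter clearance delay ~ %.2f s" % (nt_to_clear / elongation_speed))
```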
here, the mrna degradation rate is determined by fixing the average mrna concentration in the unrepressed state , as described below .furthermore , a mrna molecule can form a mrna - ribosome complex and start translation .we assume that proteins are produced on average from a single mrna molecule , so that the start of translation occurs at a rate . after a fixed timedelay a protein is produced .the mrna is available for ribosome binding immediately after the start of translation . due to the delay in protein production , can start to be degraded , while the mrna - ribosome complex is still present ; thus represents the mrna leader region rather than the entire mrna molecule .finally , the protein degrades at a rate , which is determined by the requirement that the average protein concentration in the unrepressed state has a desired value , as we describe now .we vary the free parameters in the reaction network described in eqs .[ eq : gene_exp1]-[eq : gene_exp9 ] , , , in the following way : first , we choose the concentration of mrna and protein in the absence of repressor molecules . in this case ,tuning of the concentrations is most straightforward by adjustment of the mrna and protein decay rates and . for the above reaction networkone can show that the average mrna number protein number is given by where , , , and are equilibrium constants , is the volume of the cell and is the total number of repressors .the unrepressed state corresponds to . in our simulations, we fix the mrna and protein numbers in the unrepressed state at and . the mrna andprotein decay rates then follow straightforwardly from eqs .[ eq : rate_eq1 ] and [ eq : rate_eq2 ] : the mrna degradation rate is and the protein degradation rate is ; the latter corresponds to protein degradation by dilution with a cell cycle time of around 1h .next , we determine by what factor these concentrations should decrease in the repressed state .this can be done by changing the number of repressors and the repressor backward rate .we define the repression level as the transcription initiation rate in the absence of repressors , divided by the initiation rate in the repressed state . for a repression level ,the concentration of mrna and proteins in the repressed state is a fraction of the concentration in the unrepressed state and it follows that thus , a fixed repression level does not specify a unique combination of and : increasing the number of repressors twofold , while also increasing the repressor backward rate by the same factor , gives the same repression level .this means that the cell can control mrna and protein levels in the repressed state either by having a large number of repressors that stay on the dna for a short time or by having a small number of repressors , possibly even one , that stay on the dna for a long time . even though it is conceivable that the latter is preferable for economic reasons , there is no difference between the two extremes in terms of the average gene expression . in our simulations , we vary and , but use a fixed repression level . 
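to make the parameter - fixing procedure concrete , the sketch below uses a deliberately simplified birth - death caricature of the steady - state relations ( the paper's full expressions also involve the operator and rnap occupancies ) : target unrepressed averages determine the decay rates , and a repression level then rescales both averages . the target copy numbers , the effective synthesis rate and the burst size are assumptions made for illustration .

```python
import numpy as np

# illustrative targets for the unrepressed state (placeholders, not the paper's values)
M_target, P_target = 10.0, 1000.0      # average mrna and protein copy numbers
k_m_eff = 0.1                          # effective mrna synthesis rate in 1/s (assumed)
b = 2.0                                # proteins made per mrna on average (assumed)

# simple birth-death balance: <M> = k_m_eff / mu_m and <P> = b * mu_m * <M> / mu_p
mu_m = k_m_eff / M_target
mu_p = b * mu_m * M_target / P_target
print("mrna decay rate    mu_m = %.3f /s  (lifetime %.0f s)" % (mu_m, 1.0 / mu_m))
print("protein decay rate mu_p = %.1e /s  (dilution half-time %.2f h)"
      % (mu_p, np.log(2) / mu_p / 3600.0))

# a repression level F divides both averages; doubling the number of repressors while
# doubling their unbinding rate leaves F, and hence these averages, unchanged
F = 100.0
print("repressed state: <M> = %.2f, <P> = %.2f" % (M_target / F, P_target / F))
```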
consequently , in the repressed state , on average and .we simulate the above reaction network using green s function reaction dynamics ( gfrd ) .as discussed above , only the operator site and the repressor particles are simulated in space .all other reactions are assumed to occur homogeneously within the cell and are simulated according to the well - stirred model or with fixed time delays for reaction steps involving elongation .a few modifications with respect to the algorithm described in are implemented to improve simulation speed .first , we neglect excluded volume interactions between repressor particles mutually , as the concentration of repressor is very low .this means that the only potential reaction pairs we consider are operator - repressor pairs .secondly , we use periodic boundary conditions instead of a reflecting boundary , which leads to a larger average time step . asthe operator site is both small compared to the volume of the cell and is far removed from the cell boundary , this has no effect on the dynamics of the system . finally ,as the repressor backward rate is rather small , the operator site can be occupied by a repressor for a time long compared to the average simulation time step . if the repressor is bound to the operator site longer than a time , where is the length of the sides of our container , the other repressor molecules diffuse on average from one side of the box to the other .consequently , when the repressor eventually dissociates from the operator site , the other repressor molecules have lost all memory of their positions at the time of repressor binding . here ,when a repressor will dissociate after a time longer than , we do not propagate the other repressors with gfrd , but we only update the master equation and fixed delay reactions .we update the positions of the free repressors at the moment that the operator site becomes accessible again , by assigning each free repressor molecule a random position in the container ; the dissociated repressor is put at contact with the operator site .we see no noticeable difference between this scheme and results obtained by the full gfrd algorithm described in refs . .to study the effect of spatial fluctuations on the repression of genes , we simulate the reaction network described in eqs .[ eq : gene_exp1]-[eq : gene_exp9 ] both by gfrd , thus explicitly taking into account the diffusive motion of the repressor particles , and according to the well - stirred model , where the repressor particles are assumed to be homogeneously distributed in space and the dynamics depends only on the concentration of repressor . in fig .[ fig : tracks ] we show the behavior of mrna and protein numbers for a system with open complex formation rate and with varying numbers of repressors .we keep the repression factor fixed at so that with increasing the repressor backward rate is also increased , _i.e. _ repressor particles are bound to the dna for a shorter time . .the number of mrna and protein molecules is shown for simulations with gfrd ( black line ) and according to the master equation ( gray line ) . in the gfrd simulation ,diffusion of repressor particles is explicitly included .( a ) and ( b ) .( c ) and ( d ) .( e ) and ( f ) .in general , there is a dramatic difference in dynamics due to the spatial fluctuations of the repressor molecules .this difference becomes more pronounced as the number of repressors decreases .however , we find that in all cases and , on average.,width=302 ] it is clear from fig . 
[fig : tracks ] that there is a dramatic difference between the behavior of mrna and protein numbers between the gfrd simulation and the well - stirred model .when spatial fluctuations of the repressor molecules are included , mrna is no longer produced in a continuous fashion , but instead in sharp , discontinuous bursts during which the mrna level can reach levels comparing to those of the unrepressed state .these bursts in mrna production consequently lead to peaks in protein number .as the protein decay rate is much lower than that of mrna , these peaks are followed by periods of exponential decay over the course of hours . due to these fluctuations ,protein numbers often reach levels of around of the protein levels in the unrepressed state . in contrast , in the absence of repressor diffusion , the fluctuations around the average protein number are much lower . for both cases ,however , the average behavior is identical : even though the dynamics is very different , we always find that on average and . also , in all cases the fluctuations in mrna number are larger than those in protein number .this means that the translation step functions as a low - pass filter to the repressor signal .when we increase the number of repressors and change in such a way that the repression level remains constant , we find that both for gfrd and the well - stirred model the fluctuations in mrna and protein number decrease . in the absence of spatial fluctuations this effect is minor , but for gfrd this decrease is sharp : for large number of repressors , the burst in mrna become both weaker and more frequent .this in turn leads to smaller peaks and shorter periods of exponential decay in protein numbers .in fact , as is increased both approaches converge to the same behavior . at around , the dynamics of the protein number is similar for the well - stirred model and the spatially resolved model .the same happens for mrna number when . and for constant repression factor .data obtained by gfrd simulation is shown for and .noise levels for the well - stirred model are shown as grey lines and those for the well - stirred model with reaction rates renormalized according to eqs .[ eq : renorm1 ] and [ eq : renorm2 ] are shown as black lines , both for ( solid lines ) , ( dashed lines ) and ( dotted lines ) .only when the reaction rates are properly renormalized does the noise in the well - stirred model agree well with the noise in the gfrd simulations , which include the effect of diffusion .( insets ) noise levels as a function of .symbols indicate results for gfrd and lines are results for the chemical master equation with renormalized reaction rates.,width=302 ] in fig .[ fig : noise ] , we quantify the noise in mrna and protein number , defined as standard deviation divided by the mean , while we change the number of repressors .as we keep the amount of repression fixed at , we simultaneously vary the backward rate according to eq .[ eq : repression_factor ] .when all parameters are the same , the noise for the gfrd simulation , including the diffusive motion of the repressors , is always larger than the noise for the well - stirred model , where the diffusive motion is ignored . in both cases ,the noise decreases when the number of repressors is increased and the repressor backward rate becomes larger .this is consistent with the mrna and protein tracks shown in fig .[ fig : tracks ] .we also investigated the effect of changing the open complex formation rate . 
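the noise measure used here is the standard deviation of a copy - number trace divided by its mean ; because event - driven simulations produce samples that are not equally spaced in time , the moments have to be weighted by the time spent in each state . a small helper implementing this , applied to a made - up trace , might look as follows .

```python
import numpy as np

def noise(times, counts):
    """std/mean of a piecewise-constant trace; counts[i] holds on [times[i], times[i+1])."""
    times, counts = np.asarray(times, float), np.asarray(counts, float)
    dt = np.diff(times)
    x = counts[:-1]                       # value held during each interval
    mean = np.sum(x * dt) / np.sum(dt)
    var = np.sum((x - mean) ** 2 * dt) / np.sum(dt)
    return np.sqrt(var) / mean

# toy example: a bursty trace and a steadier trace with the same time-averaged mean
t = [0, 10, 20, 30, 40, 50]
bursty = [0, 8, 0, 0, 12, 0]
steady = [4, 5, 3, 4, 4, 4]
print("bursty trace: noise = %.2f" % noise(t, bursty))
print("steady trace: noise = %.2f" % noise(t, steady))
```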
in nature , this rate can be tuned by changing the base pair composition of the promoter region on the dna . when we change , we change the mrna decay rate so that the average mrna and protein concentrations remain unchanged ( see section [ sec : tt ] ) .we find that when is lowered , the fluctuations in mrna and protein levels are sharply reduced .when is much larger than the rnap backward rate , almost every rnap binding to the promoter dna will result in transcription of a mrna . for than , rnap binding will lead to transcription only infrequently . as a consequence , the operator filters out part of the fluctuations in rnap binding due to the diffusive motion of the repressor particles , leading to the decrease in noise observed in fig .[ fig : noise ] .this shows that the open complex formation rate plays a considerable role in controlling noise in gene expression .to understand how the diffusive motion of repressor molecules leads to increased fluctuations in mrna and protein numbers , it is useful to look in some detail at the dynamics of repressor - dna binding . in figure[ fig : rebinding]a , we show the bias for both gfrd and the well - stirred model .the bias is a moving time average over with a time window and should be interpreted as the fraction of time the operator site was bound by repressor particles over the last seconds .the results we show here are for repressors and a repression factor . at this repression factor, is such that the repressor molecules are bound to the operator only fifty percent of the time , making it easier to visualize the operator dynamics than in the case of as used above . and .( a ) the -bias for gfrd ( black line ) and the well - stirred model ( gray line ) .the -bias is defined as the fraction of time a repressor is bound to the operator site in the last seconds . when the diffusive motion of repressor molecules is included ( black line ) ,the -bias switches between periods where repressors are continuously bound to or absent from the dna for long times .( b ) and ( c ) time trace of the occupancy of the operator site by repressor molecules .when a repressor is bound to the operator site and indicates either a free operator site or one with rnap bound . for the gfrd simulations , an initial bindingis followed by several rapid rebindings , whereas for the well - stirred model binding and rebinding is much more unstructured .note that here , for reasons of clarity , instead of as used in the text and figs .[ fig : tracks ] and [ fig : noise].,width=302 ] the bias for the well - stirred model fluctuates around the average value , indicating that on the timescale of several binding and unbinding events occur , in agreement with for . on the other hand , when including spatial fluctuations , the bias switches between periods in which repressors are bound to the dna continuously and periods in which the repressors are virtually absent , both on timescales much longer than the time window .how is it possible that repressors are bound to the operator site for times much longer than the timescale set by the dissociation rate from the dna ?the answer to that question can be found in figs . 
[fig : rebinding]b and c , where a time trace is shown of the operator occupancy by the repressor for both gfrd and the well - stirred model .the time trace for the simulation of the well - stirred model in fig .[ fig : rebinding]c shows a familiar picture : binding and dissociation of the repressor from the operator occurs irregularly , the time between events given by poisson distributions . the time trace for gfrd in fig .[ fig : rebinding]b looks rather different . here , in general a dissociation eventis followed by a rebinding very rapidly .only occasionally does a dissociation result in the operator being unbound by repressors for a longer time .when this happens , repressors stay away from the operator for a time much longer than the typical time separating binding events in fig .[ fig : rebinding]c .these series of rapid rebindings followed by periods of prolonged absence from the operator result in the aberrant bias shown in fig .[ fig : rebinding]a .the occurrence of rapid rebindings is intimately related to the nature of diffusion . when diffusion and the positions of the reactants are ignored all dynamicsis based only on the average concentration of the reactants . as a consequence ,when in this approach a repressor dissociates from the operator site , the probability of rebinding depends only on the concentration of repressor in the cell . on the level of actual positions of the reactants ,this amounts to placing the repressor at a random position in the container .the situation is very different for the gfrd approach , where the positions of the reactants are taken into account . after a dissociation from the operator site ,the repressor particle is placed at contact with the operator site .because of the close proximity of the repressor to its binding site , it has a high probability of rapidly rebinding to , and only a small probability of diffusing away from , the binding site .at the same time , when the repressor eventually diffuses away from the operator site , the probability that the same , or more likely , another repressor diffuses to and binds the operator site is much smaller than the probability of binding in the well - stirred model , as will be shown quantitatively in sec .[ sec : twostep ] .this results in the behavior observed in fig .[ fig : rebinding]b .it can now be understood that the bursts in mrna production correspond to the prolonged absence of repressor from the operator site compared to the well - stirred model . 
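the bias plotted in fig . [ fig : rebinding]a is the fraction of the trailing window of length tau during which the operator was occupied ; on a uniformly resampled occupancy trace this is just a trailing box average . the sketch below computes it for a made - up occupancy trace with bursts of rapid rebinding separated by long unoccupied stretches ( the burst and gap statistics are invented for illustration ) .

```python
import numpy as np

def theta_bias(occupancy, dt, tau):
    """moving fraction of time occupied over the trailing window of length tau."""
    w = max(1, int(round(tau / dt)))
    kernel = np.ones(w) / w
    # 'full' convolution truncated to the trace length gives the trailing average
    return np.convolve(occupancy, kernel, mode="full")[: len(occupancy)]

# made-up binary occupancy trace: bursts of rebinding separated by long gaps
rng = np.random.default_rng(0)
dt, T = 0.1, 2000.0                       # seconds
n = int(T / dt)
occ = np.zeros(n)
t = 0
while t < n:
    t += rng.geometric(0.002)             # long unoccupied stretch
    burst = rng.geometric(0.01)           # stretch of (re)bound operator
    occ[t : t + burst] = 1.0
    t += burst

bias = theta_bias(occ, dt, tau=100.0)
print("overall occupancy: %.2f" % occ.mean())
print("theta-bias range : %.2f - %.2f" % (bias.min(), bias.max()))
```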
especially for low repressor concentrations , these periods of absencecan be long enough that the concentration of mrna reaches values comparable to those in the unrepressed state for brief periods of time .when a repressor binds to the operator site , due to the rapid rebindings it will remain bound effectively for a time much longer than the mrna lifetime , leading to long periods where mrna is absent in the cell .this shows that under these conditions spatial fluctuations and not stochastic chemical kinetics are the dominant contribution to the noise in mrna and protein numbers in the repressed state .in this section we investigate to what extent the effect of diffusion on the repressor dynamics can be modeled by the two - step kinetic scheme : the first step in eq .[ eq : two_step_kinetic_scheme ] describes the diffusion of repressor to the operator site resulting in the encounter complex , with the rates and depending on the diffusion coefficient and the size of the particles .the next step describes the subsequent binding of repressor to the dna . in this case the rates are related to the microscopic rates defined in eq .[ eq : gene_exp1 ] .when the encounter complex is assumed to be in steady state , the two - step kinetic scheme can be mapped onto the reaction described in eq .[ eq : gene_exp1 ] , but with effective rate constants and .the two - step kinetic scheme should yield the same average concentrations as the scheme in eq .[ eq : gene_exp1 ] , so that the equilibrium constant , where and are the reaction rates defined in eq .[ eq : gene_exp1 ] .it is possible to express the effective rate constants and in terms of the microscopic rate constants and . for the setup used here , where a single operator o is surrounded by a homogeneous distribution of repressor r , the rate follows from the solution of the steady state diffusion equation with a reactive boundary condition with rate at contact andis given by the diffusion - limited reaction rate .the rates and depend on the exact definition of the encounter complex .it is natural to identify the rate with the intrinsic dissociation rate , thus . from these expressions for and and the requirement that the equilibrium constant should remain unchanged , one finds that . using this resultone obtains and .these renormalized rate constants have a clear interpretation . for the effective forward rate it follows , for instance , that : : that is , on average , the time required for repressor binding is given by the time needed to diffuse towards the operator plus the time for a reaction to occur when the repressor is in contact with the operator site .the effective backward rate has a similar interpretation .the probability that after dissociation the repressor diffuses away from the operator site and never returns is given by , where is the irreversible survival probability for two reacting particles . using that , the expression for can be written as : that is , the effective dissociation rate is the microscopic dissociation rate multiplied by the probability that after dissociation the repressor escapes from the operator site . for diffusion limited reactions , such as the reaction considered here , we have that .now , the renormalized rate constants reduce to : in fig . 
[fig : noise ] , we compare the noise profiles for the gfrd algorithm with those obtained by a simulation of the well - stirred model , where instead of the microscopic rates and we use the renormalized rates from eqs .[ eq : renorm1 ] and [ eq : renorm2 ] .surprisingly , we find complete agreement .one of the main reasons why this is unexpected , is that for the master equation the time between events is poisson - distributed , whereas after a dissociation the time to the next rebinding is distributed according to a power - law distribution when diffusion is taken into account .the reason that this power - law behavior of rebinding times is not of influence on the noise profile , is that the time scale of rapid rebinding is much smaller than any of the other relevant time scales in the network .specifically , rebinding times are so short that the probability that a rnap will bind before a rebinding is negligible . as a consequence ,the transcription network is not at all influenced by the brief period the operator site is accessible before rebinding : for the transcription machinery the series of consecutive rebindings , albeit distributed algebraically in time individually , is perceived as a single event .and on much longer time scales , when a repressor diffuses in from the bulk towards the operator site , the distribution of arrival times is expected to be poissonian , because on these time scales the repressors are distributed homogeneously in the bulk .it is possible to reinterpret the effective rate constants in eq .[ eq : renorm1 ] and [ eq : renorm2 ] in the language of rapid rebindings .the probability that a rebind will occur after a dissociation from the dna is given by , where .the probability that consecutive rebindings occur before the repressor diffuses away from the operator site is then given by . from thisfollows that the average number of rebindings is . using again that , we find that . combining this with eqs .[ eq : renorm1 ] and [ eq : renorm2 ] , we get : in words , after an initial binding the repressor spends times longer on the dna than expected on the basis of the microscopic backward rate , as it rebinds on average times . because the average occupancy should not change , the forward rate should be renormalized in the same way . in conclusion , in this model the effects of diffusion can be properly described by a well - stirred model when the reaction rates are renormalized by the average number of rebindings .in this section , we study how the noise due to the stochastic dynamics of the repressor molecules propagates through the different steps of gene expression for both the spatially resolved model and the well - stirred model .this analysis will also provide further insight into why the well - stirred model with renormalized rate constants for the ( un)binding of the repressor molecules works so well . in biochemical networks ,the noise in the output signal depends upon the noise in the biochemical reactions that constitute the network , the so - called intrinsic noise , and on the noise in the input signal , called extrinsic noise . in our case, the output signal is the protein concentration , while the input signal is provided by the repressor concentration .the intrinsic noise arises from the biochemical reactions that constitute the transcription and translation steps .moreover , we consider the noise in the protein concentration that is due to the ( un)binding of the rnap to ( from ) the dna to be part of the intrinsic noise . 
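the renormalization described above can be written down directly : with a diffusion - limited encounter rate k_D = 4 pi sigma D , the effective association and dissociation rates are k_on = k_a k_D / ( k_a + k_D ) and k_off = k_d k_D / ( k_a + k_D ) , the escape probability after a dissociation is k_D / ( k_a + k_D ) , and the mean number of rapid rebindings is k_a / k_D . the sketch below evaluates these identities for illustrative parameter values ( the values themselves are not taken from the text ) and checks that the equilibrium constant is preserved .

```python
import numpy as np

def renormalized_rates(k_a, k_d, sigma, D):
    """effective (well-stirred) rates for the two-step scheme: diffusion + intrinsic reaction."""
    k_D = 4.0 * np.pi * sigma * D          # diffusion-limited encounter rate, m^3 / s
    p_escape = k_D / (k_a + k_D)           # prob. of diffusing away after a dissociation
    n_rebind = k_a / k_D                   # mean number of rapid rebindings
    k_on = k_a * k_D / (k_a + k_D)
    k_off = k_d * p_escape                 # = k_d * k_D / (k_a + k_D) = k_d / (1 + n_rebind)
    return k_on, k_off, n_rebind

# illustrative placeholders (not the paper's values)
sigma, D = 10e-9, 1.0e-12                  # m, m^2 / s
k_D = 4.0 * np.pi * sigma * D
k_a = 20.0 * k_D                           # strongly diffusion-limited intrinsic binding
k_d = 0.1                                  # intrinsic dissociation rate, 1/s

k_on, k_off, n_rb = renormalized_rates(k_a, k_d, sigma, D)
print("mean number of rebindings : %.1f" % n_rb)
print("k_off reduced from %.3f to %.4f /s" % (k_d, k_off))
print("equilibrium constant preserved: %.3g vs %.3g" % (k_a / k_d, k_on / k_off))
```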
the extrinsic noise is provided by the fluctuations in the binding of the repressor to the operator , _i.e. _ in the state or . since the total repressor concentration , = [ r ] + [ or] ] .it is clear from fig .[ fig : ps2](a ) that already at the level of the elongation complex , the high - frequency noise due to the rapid rebindings is filtered .transcription can thus already be described by a well - stirred model with properly renormalized rate constants for repressor ( un)binding to ( from ) the dna . , both for the well - stirred model with renormalized rate constants ( rws ) and for gfrd .repressor power spectra show a difference between the spatially resolved model and the well - stirred model at high frequencies , due to the diffusion of the repressor molecules .the power spectra for the elongation complexes coincide for the well - stirred and the spatially resolved model .the power spectrum of the elongation complex shows a series of peaks and valleys due to the presence of fixed delays in the dynamics of the elongation complex .( inset ) power spectrum of rnap dynamics ( ) . shownare the power spectra in the presence and absence of fixed delays in the rnap dynamics . due to the competition between rnap and rerepssor for binding to the promoter, the power spectrum is described by a sum of two lorentzians .( b ) power spectra of the elongation complex and mrna .peaks due to the delays in rnap dynamics are still present in the mrna dynamics . for high frequencies ,the mrna dynamics is well described by a linear birth - and - death process .( c ) spectra of mrna and protein .the slow protein dynamics filters out all the peaks resulting from the delays in the rnap dynamics .the only difference between the full spectrum of the output signal and that of the intrinsic noise is an increased noise at low frequencies , due to the repressor dynamics.,width=302 ] the power spectrum of the elongation complex exhibits two corner frequencies , one around and another one at .these two corner frequencies arise from the competition between repressor and rnap for binding to the promoter . to elucidate this ,we have plotted in the inset the power spectrum for rnap bound to the promoter , thus the power spectrum for + [ orp^*] ] , where is the renormalized association rate ( see eq .[ eq : renorm1 ] ) . the rate constant denotes the renormalized rate for repressor unbinding , ( see eq .[ eq : renorm2 ] ) ; denotes the rate at which rnap binds to the promoter .the rate constant is the rate at which the rnap leaves the promoter . since the promoter can become accessible for the binding of another rnap or repressor by either the dissociation of rnap from the closed complex or by forming the open complex and then clearing the promoter , this rate is given by . if promoter clearance would be neglected , then , indeed , . the power spectrum of the rnap dynamics in eq .[ eq : three_state ] can be calculated analytically and is given by a sum of two lorentzians : where and are coefficients .the corner frequencies and are given by , where and .the dynamics of repressor binding and unbinding is much slower than that of rnap binding and unbinding , meaning that .this allows us to approximate the corner frequencies as and .this yields the following expressions for the corner frequencies : [ o]^{\prime}. 
here , ^\prime \equiv k_4 / ( k_3 + k_4 ) . the fourier transforms entering eq . [ eq : ps ] were computed at 10000 logarithmically spaced angular frequencies starting from , where is the total length of the signal . power spectra obtained according to eq . [ eq : ps ] were filtered with a box average over 20 neighboring points .
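the numerical recipe just described can be reproduced directly : evaluate the fourier transform of a uniformly sampled trace on a logarithmic grid of angular frequencies starting at 2 pi / T and box - average the resulting spectrum over 20 neighbouring points . the normalization S(omega) = |x(omega)|^2 / T used below is the conventional choice and is assumed here to correspond to eq . [ eq : ps ] ; the input trace is a synthetic exponentially correlated signal , for which the spectrum should be close to a lorentzian with corner frequency 1/tau .

```python
import numpy as np

def power_spectrum(x, dt, n_freq=2000, box=20):   # the text uses 10000 frequencies; fewer keeps the sketch fast
    x = np.asarray(x, float) - np.mean(x)          # fluctuations around the mean
    T = len(x) * dt
    t = np.arange(len(x)) * dt
    omega = np.logspace(np.log10(2 * np.pi / T), np.log10(np.pi / dt), n_freq)
    # direct (slow but simple) fourier transform on the logarithmic frequency grid
    xt = np.array([np.sum(x * np.exp(-1j * w * t)) * dt for w in omega])
    S = np.abs(xt) ** 2 / T
    S_smooth = np.convolve(S, np.ones(box) / box, mode="same")   # box average
    return omega, S_smooth

# synthetic test trace: exponentially correlated noise, whose spectrum is a lorentzian
rng = np.random.default_rng(2)
dt, n, tau = 0.1, 20000, 5.0
a = np.exp(-dt / tau)
x = np.empty(n); x[0] = 0.0
for i in range(1, n):
    x[i] = a * x[i - 1] + np.sqrt(1.0 - a * a) * rng.standard_normal()

omega, S = power_spectrum(x, dt)
low = omega < 0.3 / tau
print("low-frequency plateau ~ %.1f (2*tau = %.1f)" % (S[low].mean(), 2 * tau))
print("corner frequency of the underlying lorentzian: 1/tau = %.2f rad/s" % (1.0 / tau))
```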
|
we study by simulation the effect of the diffusive motion of repressor molecules on the noise in mrna and protein levels in the case of a repressed gene . we find that spatial fluctuations due to diffusion can drastically enhance the noise in gene expression . for a fixed repressor strength , the noise due to diffusion can be minimized by increasing the number of repressors or by decreasing the rate of open complex formation . we also show that the effect of spatial fluctuations can be well described by a two - step kinetic scheme in which the formation of an encounter complex by diffusion and the subsequent association reaction are treated separately . our results also emphasize that power spectra are a highly useful tool for studying the propagation of noise through the different stages of gene expression . key words : gene expression , noise , systems biology , computer simulation
|
left - censored data is a characteristic of many datasets . in physical science applications, observations can be censored due to limit of detection and quantification in the measurements .for example , if a measurement device has a value limit on the lower end , the observations is recorded with the minimum value , even though the actual result is below the measurement range . in fact, many of the hiv studies have to deal with difficulties due to the lower quantification and detection limits of viral load assays . in social science studies , censoring may be implied in the nonnegative nature or defined through human actions .economic policies such as minimum wage and minimum transaction fee result in left - censored data , as quantities below the thresholds will never be observed . with advances in modern data collection ,high - dimensional data where the number of variables , p , exceeds the number of observations , n , are becoming more and more commonplace .hiv studies are usually complemented with observations about genetic signature of each patient , making the problem of finding the association between the number of viral loads and the gene expression values extremely high dimensional .hence , it is important to develop inferential methods for left - censored and high - dimensional data .a general approach to estimation of the unknown parameter in high dimensional settings , is given by the penalized m - estimator where is a loss function ( e.g. , the negative log - likelihood ) and is a penalty function with a tuning parameter .examples include but are not limited to the lasso , scad , mcp , etc .significant progress has been made towards understanding the estimation theory of penalized m - estimators with recent breakthroughs in quantifying the uncertainty of the obtained results .however , no general theory exists for high - dimensional estimation in the setting of left - censored data , not to mention for understanding their uncertainty .a few challenges of left - censored data are particularly difficult even in low - dimensional settings .left - censored models rarely obey particular distributional forms , preventing the use of likelihood theory and demanding for estimators that are semi - parametric in nature .for the same reasons , the estimators need to be robust to the presence of outliers in the design or model error .lastly , theoretical results can not be obtained using naive taylor expansions and require the development of novel concentration of measure results . to bridge this gap, this paper proposes a new mechanism , named as _ smoothed estimating equations _ ( see ) and _ smoothed robust estimating equations _ ( sree ) , for construction of confidence intervals for low - dimensional components in high - dimensional left - censored models . for a high - dimensional parameter of interest , we aim to provide confidence intervals for any of its coordinates while adapting to the left - censored nature of the problem .no distributional assumption will be made on the model error .t the proposed estimators and confidence intervals are thus semiparametric .the main challenge in such setting is the non - differentiability of many of semiparametric loss functions , e.g , the least absolute deviation ( lad ) loss . 
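as a point of reference , the penalized m - estimation recipe can be written in one line once a loss and a penalty are chosen ; the sketch below evaluates a lasso - type objective built on the non - differentiable least absolute deviation loss , which is exactly the kind of loss the smoothing construction is designed to handle . the data , sparsity pattern and tuning parameter are synthetic .

```python
import numpy as np

def penalized_lad_objective(beta, X, y, lam):
    # L(beta) + P_lambda(beta): least absolute deviation loss plus an l1 (lasso) penalty
    return np.mean(np.abs(y - X @ beta)) + lam * np.sum(np.abs(beta))

rng = np.random.default_rng(0)
n, p, s = 100, 200, 3                       # high-dimensional: p > n, sparse truth
X = rng.standard_normal((n, p))
beta_star = np.zeros(p); beta_star[:s] = [2.0, -1.5, 1.0]
y = X @ beta_star + rng.standard_normal(n)  # the error need not be gaussian in general

lam = 0.1
print("objective at the truth :", penalized_lad_objective(beta_star, X, y, lam))
print("objective at zero      :", penalized_lad_objective(np.zeros(p), X, y, lam))
```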
to handle this challenge , we apply a smoothing operation on the high dimensional estimating equations , so that the obtained see become smooth in the underlying .moreover , see are designed to handle high - dimensional model parameters and hence differ from the classical approaches of estimating equations .although we consider left - censored models , the proposed see equations are quite general and can apply to any non - differentiable loss function even with fully observed data . for example , they can provide valid confidence sets using penalized rank estimator with both convex and non - convex penalties .we establish theoretical asymptotic coverage for confidence intervals while allowing left - censoring and .moreover , for the estimators resulting from the see and sree equations , we provide delicate bahadur representation and establish the order of the residual term . under mild conditions , we show that the effects of censoring asymptotically disappear , a result that is novel and of independent interest even in low - dimensional setting .additionally , we establish a number of new uniform concentration of measure results particularly useful for many left - censored models . to further broaden our framework we formally develop robust mallow s , schweppe s and hill - ryan s estimators that adapt to the unknown censoring .we believe these estimators to be novel even in low - dimensional setting .this generalizes the classical robust theory developed by .we point out that the see framework can be viewed as an extension of the de - biasing framework of .in particular , the confidence intervals resulting from the see estimator are asymptotically equivalent to the confidence intervals of de - biasing methods in the case of a smooth loss function and non - censored observations .however , sree confidence sets provide robust alternative to the naive de - biasing as the resulting inference procedures are robust to the distributional misspecifications , and most appropriate for applications with extremely skewed observations .given the prevalence of left - censored data , a large body of work in model estimation and inference has been dedicated to the topic .estimation in the left - censored models has been studied since the 1950 s . first proposed the model with a nonnegative constraint on the response variable , which is also known as the tobit - i model .later , proposed a maximum likelihood estimator where a data transformation model is considered , and then impose a class of distributions for the resulting model error . however , as zellner has noted , knowledge of the underlying data generating mechanism is seldom available , and thus models with parametric distributions may be subject to the distributional misspecification . , , and pioneered the development of robust inference procedures for the left - censored data , and relieved the assumption on model error distribution in prior work . introduced a lad estimator , whereas introduced robust estimators and inference based on maximum entropy principles . proposed an alternative robust two - step estimator , while and developed distribution free and rank - based tests .for these models , the common assumption is that . 
for high - dimensional models , and with lasso being the cornerstone of achieving sparse estimators ,numerous efforts have been made on establishing finite sample risk oracle inequalities of penalized estimators ; examples include , , , , , , and .regarding censored data , offered a penalized version of powell s estimator .however , substantially smaller efforts have been made toward high - dimensional inference , namely confidence interval construction and statistical testing in the uncensored high - dimensional setting , not to mention in the censored high - dimensional setting .recently , , and have corrected the bias of high - dimensional regularized estimators by projecting its residual to a direction close to that of the efficient score .such technique , named de - biasing , is parallel to the bias correction of the nonparametric estimators in the semiparametric inference literature . considered an extension of this technique to generalized linear model , while and considered extensions to graphical models . developed a three - step bias correction technique for quantile estimation .for inference in censored high - dimensional linear models , to the best of our knowledge , there has been no prior work .it is worth pointing out that the main contribution of this paper is in understanding fundamental limits of semiparametric inference for left - censored models . in section 2 ,we propose the smoothed estimating equations ( see ) for left - censored linear models . in section 3, we establish general results for confidence regions and the bahadur representation of the see estimator .we also emphasize on the new concentration of measure results , the building blocks of the main theorems . in section 4 ,we develop robust and left - censored mallow s , schweppe s and hill - ryan s estimators and present their theoretical analysis .section 5 provides numerical results on simulated and real data sets .we defer technical details to the supplementary materials .we begin by introducing a general modeling framework followed by highlighting the difficulty for directly applying existing inferential methods ( such are de - biasing , score , wald , etc . 
) to the models with left - censored observations .finally , we propose a new mechanism , named smoothed estimating equations , to construct semi - parametric confidence regions in high - dimensions .we consider the problem of confidence interval construction where we observe a vector of responses and their censoring level together with covariates .the type of statistical inference under consideration is regular in the sense that it does not require model selection consistency .a characterization of such inference is that it does not require a uniform signal strength in the model .since ultra - high dimensional data often display heterogeneity , we advocate a robust confidence interval framework .we begin with the following latent regression model : where the response , , and the censoring level , , are observed and the vector is unknown .this model is often called the semi - parametric censored regression model , whenever the distribution of the error , , is not specified .we assume that are independent across and are independent of .matrix ] with and this motivates us to consider the following as an estimator for the inverse .let and denote the estimators of and , respectively .we will show that a simple plug - in lasso type estimator is sufficiently good for construction of confidence intervals .we propose to estimate , with the following penalized plug - in least squares regression , notice that this regression does not trivially share all the nice properties of the penalized least squares , as in this case the rows of the design matrix are not independent and identically distributed .an estimate of can then be defined through the estimate of the residuals we propose the plug - in estimate for as and a bias corrected estimate of defined as observe that the naive estimate does not suffice due to the bias carried over by the penalized estimate .lastly , the matrix estimate of , much in the same spirit as is defined with the proposed scale estimate can be considered as the censoring adaptive extension of the graphical lasso estimate of . certainly , there are alternative procedures for estimating with examples parallel to the dantzig selector .however , we believe , the choice of tuning parameters for such estimates will depend on the unknown sparsity of , thus will be especially difficult to choose in practice . whenever the model considered is homoscedastic , i.e. , are identically distributed with a density function ( denoted whenever possible with ) , we propose a novel density estimator designed to be adaptive to the left - censoring in the observations . for a positive bandwidth sequence , we define the density estimator of as of course , more elaborate smoothing schemes for the estimation of could be devised for this problem , but there seems to be no a priori reason to prefer an alternate estimator . [ remark4 ] we will show that a choice of the bandwidth sequence satisfying suffices .however , we also propose an adaptive choice of the bandwidth sequence and consider such that for a constant . here , denotes the size of the estimated set of the non - zero elements of the initial estimator , i.e. , . 
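to fix ideas , the sketch below generates data from the left - censored model with censoring level zero and evaluates a powell - type penalized clad objective of the kind used as the initial estimator ; the dimensions , the error distribution and the tuning parameter are illustrative choices , not those analysed in the theory .

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 500
X = rng.standard_normal((n, p))
beta_star = np.zeros(p); beta_star[:3] = [1.0, -2.0, 1.5]
eps = rng.standard_t(df=3, size=n)              # heavy-tailed, non-gaussian error
c = np.zeros(n)                                  # censoring level (left-censoring at 0)
y = np.maximum(c, X @ beta_star + eps)           # observed, left-censored response

def penalized_clad(beta, lam):
    # powell-type censored least absolute deviation loss with an l1 penalty
    fit = np.maximum(c, X @ beta)
    return np.mean(np.abs(y - fit)) + lam * np.sum(np.abs(beta))

print("fraction of censored observations:", np.mean(y == c))
print("objective at the truth:", penalized_clad(beta_star, lam=0.1))
print("objective at zero     :", penalized_clad(np.zeros(p), lam=0.1))
```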
following the see principles ,the one - step solution is defined as an estimator , for the presentation of our coverage rates of the confidence interval and , we start with the bahadur representation .lemmas 1 - 6 ( presented below ) enable us to establish the following decomposition for the introduced one - step estimator , where the vector represents the residual component .we show that the residual vector s size is small uniformly and that the leading term is asymptotically normal .the theoretical guarantees required from an initial estimator is presented below .* condition ( i ) * : [ condition_i ] _ an initial estimate is such that for the left - censored model , irrespective of the density assumptions , the following three properties hold .there exists a sequence of positive numbers and such that when and , and . _ penalized clad estimator studied in , under suitable conditions and a choice of the tuning parameter , satisfies the condition ( i ) with .results of can be extended to guarantee that and , under the same conditions ( proof is trivial extension of and is hence not provided ) .it is worth noting that the above condition does not assume model selection consistency of the initial estimator . with the normality result of the proposed estimator ( as shown in theorem [ cor : ci ] , section [ sec : theory ] ) , we are now ready to present the confidence intervals .fix to be in the interval and let denote the standard normal percentile point .let be a fixed vector in .based on the results of section [ sec : theory ] , the standard studentized approach leads to a confidence interval for of the form where is defined in and with defined in , defined in and as defined in . in the above , for , the above confidence interval provides a coordinate - wise confidence interval for each , .notice that the above confidence interval is robust in a sense that it is asymptotically valid irrespective of the distribution of the error term .we begin theoretical analysis with the following decomposition of we can further decompose the last factor of the last term in as , ] , for all and .for some constant , and take value in interval ] and is for near zero and uniformly in ._ the above assumption is the only condition we assign to the error distribution .we require the error density function to be with bounded first derivative .this excludes densities with unbounded first moment , but includes a class of distributions much larger than the gaussian .moreover , this assumption implies that are distributed much like the error , for close to and close to the censoring level .[ lemma0 ] suppose that the conditions * ( x ) * , * ( e ) * hold .consider the class of parameter spaces modeling sparse vectors with at most non - zero elements , where is a sequence of positive numbers .then , there exists a fixed constant ( independent of and ) , such that the process satisfies with probability . 
\right| \leq c\left ( \sqrt{\frac { r_n t \sqrt{t } \log(n p /\delta)}{n } } \bigvee \frac { t \log(2n p /\delta)}{n } \right).\ ] ] the preceding lemma immediately implies strong approximation of the empirical process with its expected process , as long as , the estimation error , and , the size of the estimated set of the initial estimator , are sufficiently small .the power of the lemma [ lemma0 ] is that it holds uniformly for a class of parameter vectors enabling a wide range of choices for the initial estimator .apart from the condition on the design matrix and the error distribution , we need conditions on the censoring level of the model for further analysis . * condition ( c ) * : [ condition_c ] _ there exists some constant , such that for all satisfying , where the operation is entry - wise maximization ._ the censoring level has a direct influence on the constant . in general , higher values for increase the number of censored data .the bounds for the coverage probability ( see theorem [ cor : fixed : ci ] ) do not depend on the censoring level .the fact that the censoring level does not directly appear in the results should be understood in the sense that the percentage of the censored data is important , not the censoring level .* condition ( cc ) * : [ condition_cc ] _ for some compatibility constant and all satisfying , the following holds (\betab-\betab^ * ) s_{\betab^ * } . ] satisfies with probability and a constant defined in condition ( * e * ) .lemma [ lemma4 ] establishes a uniform tail probability bound for a growing supremum of an empirical process .it is uniform in and it is growing as supremum is taken over , possibly growing ( ) coordinates of the process .the proof of lemma [ lemma4 ] is further challenged by the non - smooth components of the process itself and the multiplicative nature of the factors within it .it proceeds in two steps .first , we show that for a fixed the term is small . in the second step ,we devise a new epsilon net argument to control the non - smooth and multiplicative terms uniformly for all simultaneously .this is established by devising new representations of the process that allow for small size of the covering numbers . in conclusion, lemma [ lemma4 ] establishes a uniform bound in .size of the remainder term in is controlled by the results of lemmas 1 - 6 and we provide details below . [ cor : fixed : ci ] let for a constant and let conditions * ( i ) * , * ( x ) * , * ( e ) * , * ( c ) * , * ( cc ) * and * ( ) * hold . with , we first notice that the expression above requires , a condition frequently imposed in high - dimensional inference ( see for example ) .then , in the case of low - dimensional problems with and , we observe that whenever the initial estimator of rate , is in the order of , for a small constant , then .in particular , for a consistent initial estimator , i.e. we obtain that .for high - dimensional problems with and growing with , for all initial estimators of the order such that and we obtain that whenever , where .further discussion is relegated to the comments following theorem [ thm : clad_temp ] .next , we present the result on the asymptotic normality of the leading term of the bahadur representation .[ normality ] let for a constant and let conditions * ( i ) * , * ( x ) * , * ( e ) * , * ( c ) * , * ( cc ) * and * ( ) * hold .define .furthermore , assume denote . 
if , the density of at 0 is known , {jj}^{-\frac{1}{2}}u_j \xrightarrow[n , p , \bar s\rightarrow\infty]{d } \mathcal{n}\left(0,\frac{1}{4f(0)^2}\right ) . \ ] ] a few remarks are in order .theorem [ normality ] implies that the effects of censoring asymptotically disappear .namely , the limiting distribution only becomes degenerate when the censoring rate asymptotically explodes , implying that no data is fully observed .however , in all other cases the limiting distribution is fixed and does not depend on the censoring level .density estimation is a necessary step in the semiparametric inference for left - censored models .below we present the result guaranteeing good qualities of density estimator proposed in .[ thm : f ] there exists a sequence such that and and .assume conditions * ( i ) * , * ( x ) * and * ( e ) * hold , then together with theorem [ normality ] we can provide the next result .[ cor1 ] with the choice of density estimator as in , under conditions of theorem [ normality ] and [ thm : f ] , the results of theorem [ normality ] continue to hold unchanged , i.e. , {jj}^{-\frac{1}{2}}u_j \cdot 2 \widehat f(0 ) \xrightarrow[n , p,\bar s\rightarrow\infty]{d } \mathcal{n}\left(0,1\right).\end{aligned}\ ] ] observe that the result above is robust in the sense that the result holds regardless of the particular distribution of the model error .condition * ( e ) * only assumes minimal regularity conditions on the existence and smoothness of the density of the model errors . in the presence of censoring ,our result is unique as it allows , and yet it successfully estimates the variance of the estimation error .combining all the results obtained in previous sections we arrive at the main conclusions .[ cor : ci ] let for a constant and let conditions * ( i ) * , * ( x ) * , * ( e ) * , * ( c ) * , * ( cc ) * and * ( ) * hold .furthermore , assume for .denote .let and be defined in and .then , for all vectors and any , when we have the statements of theorems [ cor : fixed : ci ] and [ normality ] also hold in a uniform sense , and thus the confidence intervals are honest. in particular , the confidence interval does not suffer from the problems arising from the non uniqueness of .we consider the set of parameters let be the distribution of the data under the model . then the following holds .[ cor : uniform : ci ] under the setup and assumptions of theorem [ cor : ci ] when previous results depend on a set of high - level conditions imposed on the initial estimate .moreover , rates depend on the initial estimator precisely and to better understand them we present here their summary when the initial estimator is chosen to be penalized clad estimator of .[ thm : clad_temp ] let be defined as in with a choice of the tuning parameter for a constant and independent of and .assume that , for with .\(i ) suppose that conditions * ( x ) * , * ( e ) * , * ( c ) * , * ( cc ) * and * ( ) * hold .moreover , let for a constant .then ( ii ) for and and defined in \(iii ) let defined in . then , for as in , we have moreover , \(iv ) let be defined as in with defined in , defined in and as defined in .then , for with , the size of the residual term in is \(v ) assume that , for with .let and be defined in and .then , for all vectors and any , when we have result ( i ) suggests that the rates of estimation match those of simple linear model as long as proportion of censored data is not equal 1 . 
in that sense ,our results are also efficient .moreover , result ( ii ) implies that the rates of estimation of the variance are slower by a factor of compared to the least squares method .this is also apparent in the result ( iii ) where the rate of convergence of the precision matrix is slower by a factor of , due to the non - standard dependency issues in the plug - in lasso estimator .lastly , results ( iv ) and ( v ) suggest that the confidence interval is asymptotically valid and that the coverage errors are of the order of whenever .classical results on inference for left - censored data , with , only imply that the error rates of the confidence interval is ; instead , we obtain a precise characterization of the size of the residual term .moreover , with the rates above match the optimal rates of inference for the absolute deviation loss ( see e.g. ) , indicating that our estimator is asymptotically efficient in the sense that the censoring asymptotically disappears .however , we impose slightly stronger dimensionality restrictions as for fully observed data is a sufficient condition. the additional condition can be thought of as a penalty to pay for being adaptive to left - censoring .this implies that a larger sample size needs to be employed for the results to be valid .however , this is not unexpected as censoring typically reduces the effective sample size .statistical models are seldom believed to be complete descriptions of how real data are generated ; rather , the model is an approximation that is useful , if it captures essential features of the data .good robust methods perform well even if the data deviates from the theoretical distributional assumptions .the best known example of this behavior is the outlier resistance and transformation invariance of the median .several authors have proposed one - step and k - step estimators to combine local and global stability , as well as a degree of efficiency under the target linear model .there have been considerable challenges in developing good robust methods for more general problems . to the best of our knowledge, there is no prior work that discusses robust one - step estimators for the case of left - censored models ( for either high or low dimensions ) .we propose here a family of doubly robust estimators that stabilize estimation in the presence of `` unusual '' design or model error distributions .observe that rarely follows distribution with light tail .namely , model can be reparametrized as where , and .hence will rarely follow light tailed distribution and it is in this regard very important to design estimators that are robust .we introduce mallow s , schweppe s and hill - ryan s estimators for left - censored models . in this sectionwe propose a doubly robust population system of equations =0\ ] ] with and where is an odd , nondecreasing and bounded function . throughoutwe assume that the function either has finitely many jumps or is differentiable with bounded first derivative .notice that when and , with being the sign function , we have of previous section .moreover , observe that for the weight functions and , both functions of , the true parameter vector satisfies the robust population system of equations above .appropriate weight functions and are chosen for particular efficiency considerations .points which have high leverage are considered `` dangerous '' , and should be downweighted by the appropriate choice of the weights . 
additionally , if the design has `` unusual '' points , the weights serve to downweight their effect in the final estimator .we augment the system above similarly as before and consider the system of equations + { \boldsymbol { \upsilon}}^r [ \betab -\betab^ * ] = 0,\ ] ] for a suitable choice of the robust matrix .ideally , most efficient estimation can be achieved when the matrix is close to the influence function of the robust equations . to avoid difficulties with non - smoothness of , we propose to work with a matrix that is smooth enough and is robust simultaneously . to that end , observe for a suitable function and .we consider a smoothed version of the hessian matrix and work with for \ ] ] where denotes the density of the model error . to infer the parameter , we adapt a one - step approach in solving the empirical counterpart of the population equations above .we name the empirical equations as _ smoothed robust estimating equations _ or sree in short . for a preliminary estimatewe solve an approximation of the robust system of equations above and search for the that solves the particular form of the matrix depends on the choice of the weight functions and and the function . in particular , for the left - censored model & = n^{-1 } \sum_{i=1}^{n } q_i \nabla_{\betab^ * } \ee_\varepsilon \left[\psi \left ( v_i ( y_i- \max\{0,x_i\betab^*\})\right)\right]\ \end{aligned}\ ] ] leading to the following form \ ] ] whenever the function is differentiable . here, we denote as . in case of non - smooth , should be interpreted as for ] and with for .when function does not have first derivative , we replace with ' ] takes the form of a weighted covariance matrix .hence , to estimate the inverse , we project columns one onto the space spanned by the remaining columns . for , we define the vector as follows , also , we assume the vector is sparse with . thus , we propose the following as a robust estimate of the scale with and the normalizing factor estimator is a high - dimensional extension of hampel s ideas of approximating the inverse of the hessian matrix in a robust way , by allowing data specific weights to trim down the effects of the outliers . such weights can be stabilizing estimation in the presence of high proportion of censoring . compared the efficiency of the mallow s and schweppe s estimators to several others and found that they dominate in the case of linear models in low - dimensions .lastly , we arrive at a class of doubly robust one - step estimators , we propose a one - step left - censored mallow s estimator for left - censored high - dimensional regression by setting the weights to be , and for constants and , and with and . extending the work of ,it is easy to see that mallow s one - step estimator with and quantile of chi - squared distribution with improves a breakdown point of the initial estimator to nearly , by providing local stability of the precision matrix estimate .similarly , the one - step left - censored hill - ryan estimator is defined with and the one - step left - censored schweppe s estimator with similar to the concise version of bahadur representation presented in for the standard one - step estimator with and , we also have the expression for doubly robust estimator , next , we show that the leading component has asymptotically normal distribution and that the residual term is of smaller order . 
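Before turning to the theoretical results, a compact sketch of the one-step construction itself may help fix ideas. The weight pairs below are the textbook variants of Mallow's, Hill-Ryan's and Schweppe's schemes and are stand-ins for the specific forms used in the paper; `Omega_hat` and `beta_init` are placeholders for whichever precision-matrix estimate and preliminary estimator (for instance the penalized CLAD estimator) one plugs in. The functions `mallows_weights` and `robust_score` are reused from the previous sketch.

```python
import numpy as np

def weight_scheme(X, kind="mallows"):
    """(v, w) pairs for the three classical schemes; these are textbook
    variants and should be read as placeholders, not the paper's exact forms."""
    w = mallows_weights(X)                  # from the previous sketch
    if kind == "mallows":
        v = np.ones(len(w))                 # v_i = 1, w_i = w(x_i)
    elif kind == "hill-ryan":
        v = w = np.sqrt(w)                  # v_i = w_i
    elif kind == "schweppe":
        v = 1.0 / w                         # residual rescaled by the inverse weight
    else:
        raise ValueError(kind)
    return v, w

def one_step_estimator(beta_init, Omega_hat, X, y, kind="mallows"):
    """Single correction (k = 1) step of the smoothed robust estimating
    equations: beta_1 = beta_init + Omega_hat @ score(beta_init)."""
    v, w = weight_scheme(X, kind)
    return beta_init + Omega_hat @ robust_score(beta_init, X, y, v=v, w=w)
```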
for simplicity of presentation we present results below with an initial estimator being penalized clad estimator with the choice of tuning parameter as presented in theorem [ thm : clad_temp ] .we introduce the following condition .* condition ( r)*:[condition_rpc ] _ parameters for all are bounded , and such that for some .moreover , are sub - exponential random vectors .let and be functions such that and for positive constants and and =0 ] and . by condition * ( cc ) * , we have is bounded away from zero .then , is also bounded away from zero by , and so is , since we have {jj } - \left [ \hat \omegab(\hat\betab ) \sigmab ( \hat \betab ) \hat \omegab(\hat\betab ) \right]_{jj } \leq \left\|\hat \omegab(\hat\betab ) \sigmab ( \hat \betab ) \hat \omegab(\hat\betab ) - \sigmab^{-1 } ( \betab^*)\right\|_{\max } = { \mbox{\scriptsize}}_p \left ( 1 \right).\end{aligned}\ ] ] the rate above follows from in the proof of theorem [ cor : ci ] .notice the rate is of order smaller than the rate assumption in theorem [ cor : fixed : ci ] .thus , we can deduce that {jj}^{-\frac{1}{2 } } - \left [ \sigmab^{-1}(\betab^*)_{jj } \right]^{-\frac{1}{2 } } \leq c \left\|\hat \omegab(\hat\betab ) \sigmab ( \hat \betab ) \hat \omegab(\hat\betab ) - \sigmab^{-1 } ( \betab^*)\right\|_{\max}.\end{aligned}\ ] ] for some finite constant .applying slutsky theorem on with the inequality above , the desired result is obtained .we can rewrite the expression in as since \right| = { \mbox{\scriptsize}}_p(1) ] .we consider the same covering sequence as in lemma 6 .then , we observe furthermore , ^ 2 ] be the centers of the balls of radius that cover the set .such a cover can be constructed with ( see , for example * ? ? ?furthermore , let \right] ] note that = 0 ] .hence , } \abr { x_i\tilde\deltab_k } & \leq c ^2 \sqrt{n}\sqrt{\tilde\deltab_k^\top w^\top ( \betab^*+\tilde\deltab_k ) w ( \betab^*+\tilde\deltab_k ) \tilde\deltab_k } \leq 2 c ^2 r_n \sqrt{n}\left(\lambda_{\max}^{1/2}(\sigmab ( \betab^ * ) ) \vee1\right),\end{aligned}\ ] ] where the line follows using the cauchy - schwartz inequality and inequality ( 58a ) of and lemma [ lemma2 ] . hence , with probability we have for all }z_{ik } } \leq c\rbr { \sqrt { \frac { r_n\log(2/\delta)}{n } } \bigvee \frac { \log(2/\delta)}{n } } .\end{aligned}\ ] ] using the union bound over ] .we can choose , which gives us .with these choices , we obtain which completes the proof .we begin by rewriting the term , and aim to represent it through indicator functions .observe that .\end{aligned}\ ] ] using the fundamental theorem of calculus , we notice that if , , where is the univariate distribution of .therefore , with expectation on , we can obtain an expression without the .\\ & = \left[n^{-1}\sum_{i=1}^n x_i^\top \ind(x_i\betab>0)\cdot2f(u^*)x_i(\betab^*-\betab ) \right ] : = \lambda_n(\betab)(\betab^*-\betab),\end{aligned}\ ] ] for some between 0 and , and where we have defined .\ ] ] we then show a bound for {jk}\right| ] .moreover , the original expresion is also smaller than or equal to .the term can be bounded by condition * ( x ) * and * ( e ) * , with the help of hlder s inequality , by triangular inequality and condition * ( e ) * we can further upper bound the right hand side with then we are ready to put terms together and obtain a bound for . 
additionally , by condition * ( x ) * we have for and a constant .essentially , this proves that is not greater than a constant multiple of the difference between and .thus , we have as for the simplicity in notation we fix and denote with . the proof is composed of two steps : the first establishes a cone set and an event set of interest whereas the second proves the rate of the estimation error by certain approximation results .* step 1*. here we show that the estimation error belongs to the appropriate cone set with high probability .we introduce the loss function . the loss function above is convex in hence \geq 0.\ ] ] let .let .kkt conditions provide for all with . moreover , observe that for all , \\ & = \sum_{j \in s_1^c } \deltab_j ( \nabla_{\gammab } l(\hat \betab , \gammab)|_{\gammab=\gammab^ * + \deltab } ) _ j + \sum_{j \in s_1 } \deltab_j ( \nabla_{\gammab } l(\hat \betab , \gammab)|_{\gammab=\gammab^ * + \deltab } ) _ j + \deltab^\top ( - \nabla_{\gammab } l(\hat\betab,\gammab ) |_{\gammab= \gammab^ * } ) \\& \leq \sum_{j \in s_1^c } \deltab_j ( - \lambda_1 \mbox{sgn}(\gammab^*_j + \deltab_j ) ) + \lambda_1 \sum_{j \in s_1 } |\deltab_j| + h^ * \|\deltab \|_1 \\ & = \sum_{j \in s_1^c } - \lambda_1 |\deltab_j| + \sum_{j \in s_1 } \lambda_1 |\deltab_j| + h^ * \|\deltab_{s_1 } \|_1 + h^ * \|\deltab_{s_1^c } \|_1 \\ & = ( h^ * - \lambda_1 ) \|\deltab_{s_1^c } \|_1 + ( \lambda_1 + h^ * ) \|\deltab_{s_1 } \|_1.\end{aligned}\ ] ] next , by lemma [ lemma0 ] we observe and similarly .recall that .let be defined as .then , by hlder s inequality \right | \\ & \qquad = \ocal_p \left ( k^3 k_\gamma r_n^{1/2 } t^{3/4 } s_1 ( \log p / n)^{1/2 } \bigvee k^3 k_\gamma t s_1 \log p / n \right)\end{aligned}\ ] ] and similarly . putting all the terms together we obtain . next , we focus on the term .simple computation shows that for all , we have for .observe that the sequence across , is a sequence of independent random variables .as and are independent we have by the tower property = \ee_x \left[x_{ik } \ind\ { x_i \betab^ * > 0\ } \ee_\varepsilon [ \zetab_{1,i}^ * ] \right ] = 0 ] .let ] be centers of the balls of radius that cover the set .such a cover can be constructed with ( see , for example * ? ? ?furthermore , let be a ball of radius centered at with elements that have the same support as .in what follows , we will bound using an -net argument . in particular, using the above introduced notation , we have the following decomposition } \sup_{\deltab \in \mathcal{b } ( \tilde \deltab_k , r_n\xi_n ) } \norm { \mathbb{v}_n ( \deltab)}_\infty \\ & \leq \underbrace { \max_{k \in [ n_\delta ] } \norm{\mathbb{v}_n ( \tilde \deltab_k)}_\infty } _ { t_1 } + \underbrace { \max_{k \in [ n_\delta ] } \sup_{\deltab \in \mathcal{b } ( \tilde \deltab_k , r_n\xi_n ) } \norm { \mathbb{v}_n ( \deltab ) - \mathbb{v}_n ( \tilde \deltab_k)}_\infty } _ { t_2}. \end{aligned}\ ] ] observe that the term arises from discretization of the sets . to control it, we will apply the tail bounds for each fixed and .the term captures the deviation of the process in a small neighborhood around the fixed center . for those deviations we will provide covering number arguments . in the remainder of the proof , we provide details for bounding and .we first bound the term in .let .we are going to decouple dependence on and . 
to that end , let } - \rbr { f_i ( \zero ) g_i(\zero ) - \ee \left [ f_i ( \zero ) g_i(\zero ) | x_i \right ] } } \\ \intertext{and } \tilde z_{ijk } & = a_{ij}(\betab^ * + \tilde\deltab_k ) \rbr { \ee \left [ f_i ( \tilde\deltab_k ) g_i(\tilde\deltab_k ) | x_i \right ] - \ee \left [ f_i ( \zero ) g_i(\zero ) | x_i \right ] } \\ & - \ee\sbr { a_{ij}(\betab^ * + \tilde\deltab_k ) \rbr { f_i ( \tilde\deltab_k ) g_i(\tilde\deltab_k ) - f_i ( \zero ) g_i(\zero ) } } .\end{aligned}\ ] ] with a little abuse of notation we use to denote the density of for all .observe that = f_i ( \deltab ) \pp(\varepsilon_i\leq x_i \deltab) ] and } ] \\ & = a_{ij}^2(\betab^ * + \tilde\deltab_k ) \big ( w_i ( \tilde\deltab_k ) - w_i^2 ( \tilde\deltab_k ) + w_i ( \zero ) - w_i^2 ( \zero ) \\ & \qquad\qquad\qquad\qquad\qquad\qquad- 2 \rbr{w_i ( \zero ) \vee w_i(\tilde\deltab_k ) } + 2 w_i\rbr { \tilde\deltab_k } w_i\rbr { \zero } \big ) \\ & \stackrel{(i)}{\leq } a_{ij}^2(\betab^ * + \tilde\deltab_k ) \rbr { w_i ( \tilde\deltab_k ) + w_i ( \zero ) - 2 \rbr{w_i ( \zero ) \vee w_i ( \tilde\deltab_k ) } } \\ & \stackrel{(ii)}{\leq } a_{ij}^2(\betab^ * + \tilde\deltab_k ) f_i ( \tilde \deltab_k ) \abr { x_i \tilde\deltab_k } f\rbr { \eta_ix_i \tilde\deltab_k } \quad \rbr{\eta_i \in [ 0,1 ] } \\ & \stackrel{(iii)}{\leq } a_{ij}^2(\betab^* + \tilde\deltab_k ) f_i ( \tilde \deltab_k ) \abr { x_i \tilde\deltab_k } f_{\max}\end{aligned}\ ] ] where follows by dropping a negative term , follows by the mean value theorem , and from the assumption that the conditional density is bounded stated in condition * ( e)*. furthermore , conditional on } ] and ] , and ]. for a fixed , and we have is upper bounded with } \mathbb{q}_{ij}(\deltab ) - \mathbb{q}_{ij}(\tilde\deltab_k ) } } _ { t_{21 } } + \underbrace { \sup_{\deltab \in \mathcal{b}(\tilde \deltab_k , r_n\xi_n ) } \abr { n^{-1}\sum_{i\in[n ] } w_{ij}(\deltab ) - \ee\sbr { w_{ij}(\deltab ) } } } _ { t_{22}}.\end{aligned}\ ] ] we will deal with the two terms separately .let observe that the distribution of is the same as the distribution of due to the condition ( * e * ) .moreover , where is a constant such that .hence , }\max_{i\in[n ] } \sup_{\deltab \in \mathcal{b}(\tilde \deltab_k , r_n\xi_n ) } \abr { x_i \deltab - x_i \tilde\deltab_k } \leq r_n\xi_n\sqrt{t } \max_{i , j}|x_{ij}| \leq c r_n\xi_n \sqrt { t } = : \tilde l_n.\end{aligned}\ ] ] for , we will use the fact that and are monotone function in .therefore , }\bigg [ \abr{a_{ij}(\betab^ * ) } \big ( \ind\cbr{z_i \leq x_i \tilde\deltab_k + \tilde l_n}- \ind\cbr{-x_i\betab^ * \leq x_i \tilde\deltab_k - \tilde l_n}- \ind\cbr{z_i \leq x_i \tilde\deltab_k } \\ & + \ind\cbr{-x_i\betab^ * \leq x_i \tilde\deltab_k } - \pp\sbr{z_i \leq x_i \tilde\deltab_k - \tilde l_n}+ \pp\sbr{-x_i \betab^ * \leq x_i \tilde\deltab_k+ \tilde l_n } \\ & + \pp\sbr{z_i \leq x_i \tilde\deltab_k } - \pp\sbr{-x_i \betab^*\leq x_i \tilde\deltab_k } \big ) \bigg ] \end{aligned}\ ] ] furthermore , by adding and substracting appropriate terms we can decompose the right hand side above into two terms . 
the first , }\bigg [ \abr{a_{ij}(\betab^ * ) } \big ( \ind\cbr{z_i \leq x_i \tilde\deltab_k + \tilde l_n}- \ind\cbr { -z_i \betab^*\leq x_i \tilde\deltab_k - \tilde l_n } - \ind\cbr{z_i \leq x_i \tilde\deltab_k } \\ & + \ind\cbr{-x_i\betab^ * \leq x_i \tilde\deltab_k } - \pp\sbr{z_i \leq x_i \tilde\deltab_k + \tilde l_n}+ \pp\sbr{-x_i \betab^ * \leq x_i \tilde\deltab_k- \tilde l_n}\\ & + \pp\sbr{z_i \leq x_i \tilde\deltab_k } - \pp\sbr{-x_i \betab^*\leq x_i \tilde\deltab_k } \big ) \bigg ] \end{aligned}\ ] ] and the second }\bigg [ \abr{a_{ij}(\betab^ * ) } \big ( \pp\sbr{z_i \leq x_i \tilde\deltab_k + \tilde l_n}- \pp\sbr{-x_i \betab^ * \leq x_i \tilde\deltab_k- \tilde l_n } \\ & - \pp\sbr{z_i \leq x_i \tilde\deltab_k - \tilde l_n}+ \pp\sbr{-x_i \betab^ * \leq x_i \tilde\deltab_k+ \tilde l_n } \big ) \bigg].\end{aligned}\ ] ] the first term in the display above can be bounded in a similar way to by applying bernstein s inequality and hence the details are omitted . for the second termwe have a bound , since by the definition of and lemma [ lemma2 ] and . in the last inequality we used the fact that .therefore , with probability , a bound on is obtain similarly to that on .the only difference is that we need to bound , for and , instead of .observe that .moreover , by construction is a continuous , differentiable and convex function of and is bounded away from zero by lemma [ lemma2 ] .additionally , is a convex function of as a set of solutions of a minimization of a convex function over a convex constraint is a convex set .moreover , is a bounded random variable according to lemma [ lemma2 ] .hence , , for a large enough constant .therefore , for a large enough constant we have a bound on now follows using a union bound over ] .crawford , d. c. , zheng , n. , speelmon , e. c. , stanaway , i. , rieder , m. j. , nickerson , d. a. , mcelrath , m. j. , lingappa , j. ( 2009 ) .an excess of rare genetic variation in abce1 among yorubans and african - american individuals with hiv-1 ._ genes and immunity _ , 10(8):pp .715 - 721 .fouts , t. r. et al .balance of cellular and humoral immunity determines the level of protection by hiv vaccines in rhesus macaque models of hiv infection ._ proceedings of the national academy of sciences_,112(9):pp .992 - 999 .javanmard , a. and montanari , a. ( 2014 ) .hypothesis testing in high - dimensional regression under the gaussian random design model : asymptotic theory ._ information theory , ieee transactions on _ , 60(10 ) : pp .6522 - 6554 .maldarelli , f. , wu , x. , su , l. , simonetti , f. r. , shao , w. , hill , s. , spindler , j. , ferris , a. l. , mellors , j. w. , kearney , m. f. , coffin , j. m. , hughes , s. h. ( 2014 ) .specific hiv integration sites are linked to clonal expansion and persistence of infected cells ._ science ( new york , n.y . ) _ , 345(6193):pp .179 - 183 .negahban , s. n. , ravikumar , p. , wainwright , m. j. and yu , b. ( 2012 ) . a unified framework for high - dimensional analysis of m - estimators with decomposable regularizers ._ statistical science _ , 27(4):pp .538 - 557 .sawaya b , khalili k , amini s. j. ( 1998 ) .transcription of the human immunodeficiency virus type 1 ( hiv-1 ) promoter in central nervous system cells : effect of yb-1 on expression of the hiv-1 long terminal repeat ._ journal of general virology _ , 79(2):pp .239 - 246 .swenson , l.c . ,cobb , b. 
, geretti , a.m , et al .comparative performances of hiv-1 rna load assays at low viral load levels : results of an international collaboration ._ tang y - w , ed .journal of clinical microbiology _ , 52(2):pp .517 - 523 .wainwright , m. j. ( 2009 ) .sharp thresholds for high - dimensional and noisy sparsity recovery using -constrained quadratic programming ( lasso ) . _ieee transactions on information theory _ , 55(5):pp .2183 - 2202 .zellner , a. , ( 1996 ) .bayesian method of moments ( bmom ) analysis of mean and regression models . _lee , j.c . ,johnson , w.o . ,zellner , a .. modelling and prediction honoring seymour geisser , springer , new york _ , pp .61 - 72 .zhang , c. h. and zhang , s. s. ( 2014 ) .confidence intervals for low dimensional parameters in high - dimensional linear models ._ journal of royal statistical society .series b. statistical methodology _ , 76(1):pp .217 - 242 .zhao , y. , brown , b. m. and wang , y - g .( 2014 ) . smoothed rank - based procedure for censored data . _electronic journal of statistics _ , 8(2):pp .2953 - 2974 .
this paper develops robust confidence intervals in high - dimensional and left - censored regression . type - i censored regression models are extremely common in practice , where a competing event makes the variable of interest unobservable . however , techniques developed for entirely observed data do not directly apply to the censored observations . in this paper , we develop smoothed estimating equations that augment the de - biasing method , such that the resulting estimator is adaptive to censoring and is more robust to the misspecification of the error distribution . we propose a unified class of robust estimators , including mallow s , schweppe s and hill - ryan s one - step estimator . in the ultra - high - dimensional setting , where the dimensionality can grow exponentially with the sample size , we show that as long as the preliminary estimator converges faster than , the one - step estimator inherits asymptotic distribution of fully iterated version . moreover , we show that the size of the residuals of the bahadur representation matches those of the simple linear models , that is , the effects of censoring asymptotically disappear . simulation studies demonstrate that our method is adaptive to the censoring level and asymmetry in the error distribution , and does not lose efficiency when the errors are from symmetric distributions . finally , we apply the developed method to a real data set from the maqc - ii repository that is related to the hiv-1 study .
the spread of epidemics can be considered to occur on networks that describe contacts between individuals .the size and dynamics of epidemics heavily depend on the structure of the contact network .in particular , in networks in which the number of contacts per individual ( _ i.e. _ , the degree ) is heterogeneous , as represented by scale - free networks , epidemic spreading can occur on a large scale even at a small infection rate . given a limited dose of immunization , it is practically necessary to establish efficient immunization strategies against epidemics occurring on networks .if an appropriate ordering of immunization of nodes in a network is followed , the potential risk of large - scale epidemic spreading can be suppressed .we measure the efficiency of immunization by assessing the capability of the immunization strategy to fragment the network into small parts with a small number of sequentially removed nodes .this method of assessment has a wide applicability beyond prevention of epidemics . in ecology , it is important to identify the nodes in a food web whose removal causes the catastrophic disintegration of the food web , which would seriously damage the ecosystem .the possibility of efficient immunization of a network also implies that the network is vulnerable to intentional attacks such as those attributable to terrorism .the standard solution to an immunization problem is to immunize hubs ( _ i.e. _ , nodes having large degrees ) preferentially .the degree - based immunization strategy and its variants are very efficient for scale - free network models and some real data . however , many real networks are more structured than merely having heterogeneous degree distributions . partly because of this factor ,an immunization strategy based on a graph partition algorithm performs better than degree - based strategies .immunization strategies involving the preferential removal of nodes with large betweenness centrality ( betweenness - based strategies ; see sec .[ sec : results ] for definition ) also perform better than degree - based strategies in some networks .developing efficient immunization strategies for general complex networks is an unresolved question . in this study, we focus on networks with modular structure . by definition ,nodes in a network with modular structure are partitioned into multiple modules ( also called communities ) such that the number of links connecting the nodes in the same module is relatively large .the number of links connecting different modules is relatively small .such networks abound in various fields . 
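Since every strategy discussed below is ultimately judged by how quickly it fragments the network, a minimal evaluation loop is useful to keep in mind: given an ordering of the nodes produced by some strategy, remove them one at a time and record the relative size of the largest connected component. The sketch below (using networkx) is such a loop, shown with the simple degree-based ordering as an example; graph size and parameters are illustrative.

```python
import networkx as nx

def lcc_curve(G, removal_order):
    """Track the relative size of the largest connected component as nodes are
    removed one at a time in the order given by an immunization strategy."""
    H = G.copy()
    n = G.number_of_nodes()
    curve = []
    for k, node in enumerate(removal_order):
        H.remove_node(node)
        lcc = max((len(c) for c in nx.connected_components(H)), default=0)
        curve.append(((k + 1) / n, lcc / n))   # (fraction removed, LCC fraction)
    return curve

# Example: the degree-based (D) ordering computed once on the original network.
G = nx.barabasi_albert_graph(2000, 6, seed=1)
order_d = sorted(G, key=G.degree, reverse=True)
curve_d = lcc_curve(G, order_d)
```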
in simple casesin which modules are homogeneous and of equal size , epidemic dynamics and immunization have been mathematically analyzed in the limit of the infinite network size .however , in practical applications , relevant networks are finite , modules in a network are heterogeneous in various aspects , and nodes in a module play different roles .metapopulation modeling is a promising approach to the understanding of epidemic dynamics in such modular networks .establishing practical immunization strategies for general modular networks is an important issue .we develop an immunization strategy for modular networks by extending an analytical framework proposed recently .it is our contention that it is important to consider the role of each node in the coarse - grained network among modules rather than in the original network so as to preferentially immunize nodes that bridge important modules .some algorithms for community detection effectively solve the same problem .we believe that our method is much less computationally expensive than these methods and therefore is suitable for large modular networks .consider an undirected and unweighted contact network with nodes . even though our results can be easily extended to the case of weighted networks , this study is confined to the immunization of unweighted networks for simplicity .an immunization strategy is an ordering of all the nodes in a network according to which the nodes are removed .the fraction of the removed nodes is set equal to ( ) ; the fraction of the remaining nodes is equal to .the fraction of nodes contained in the largest connected component ( lcc ) is denoted by . in a good immunization strategy , is small with a small number of removed nodes , _i.e. _ , with a large .restrepo and colleagues proposed an immunization strategy based on the so - called dynamical importance of nodes . although the dynamical importance is defined for directed networks , the exposition of their results in this section is concerned with the undirected version .the adjacency matrix is denoted by ; when node and node are adjacent , and otherwise . because is a symmetric matrix , all the eigenvalues of are real .the largest eigenvalue of and the corresponding eigenvector , which is called the perron vector , are denoted by and , respectively . in networks with low clustering ( _ i.e. _ , small density of triangles ) ,the lcc is large ( _ i.e. _ , ) if and only if exceeds unity . the perron vector is the mode that survives after multiplying repeatedly to an almost arbitrary initial -dimensional vector . intuitively , the multiplication of implies the spread of epidemics to the nearest neighbors .a large implies the efficient expansion of the lcc . when , we generate the effective adjacency matrix from the network composed of only the remaining nodes and the links among these nodes .we apply the threshold condition to the effective adjacency matrix to determine whether the lcc is large . in this way , we can estimate the critical value of with regard to the percolation transition .the dynamical importance of node , denoted by , is defined by the decrement of owing to the removal of node .the linearized eigenequation after removing node is expressed as where and is kronecker s delta . because the element of , denoted by , necessarily becomes zero owing to the removal of node ,the appropriate perturbation is given by , where is the unit vector for the component and is an -dimensional small vector . 
by inserting these expressions and into eq ., we obtain the following equation to first order : therefore , for undirected networks is equal to the square of the eigenvector centrality . in the immunization strategy developed by restrepo et al . , which we label as the res strategy , we first remove the node with the largest . then , we recalculate the dynamical importance of each node in the updated network to determine the second node to be removed .we repeat this procedure .this method works efficiently in various networks .the threshold condition is ineffective for modular networks . to demonstrate this ,consider an ad hoc modular network composed of homogeneous modules of equal size .a node is connected to each of the nodes in the same module with probability 1 and to each of the nodes in the other modules with probability .a small value of implies modular structure of the network .we can approximate the adjacency matrix by the following block - circulant matrix composed of blocks , each of which is an matrix .let be the unit matrix , and be the matrix whose all elements are unity .the diagonal blocks of are equal to . if we approximate the probability that a link exists between two nodes in different modules by the weight of the link , which is not crucial for the following arguments , the off - diagonal blocks of are equal to . an example network in the case of and shown in fig .[ fig : community anneal ] .the leading eigenvalues of the approximated adjacency matrix are represented by ,\quad ( 1\le i\le n_{\rm m } ) , \label{eq : lambda_i anneal}\ ] ] where is an generic root of unity .although eq .more simply indicates the existence of an -fold degenerate eigenvalue and a nondegenerate eigenvalue , we use eq . for theoretical developments below . for further analysis , we fix a specific .the corresponding eigenvectors are given by where denotes the transpose .the other eigenmodes have degenerated eigenvalues and are irrelevant to the percolation transition. when is small , , , are almost the same . in the limit , we obtain . in this limit, a proper linear summation of ( ) yields a localized mode represented by where a block of ones appears from the element to the element .each represents a mode that is localized in a module . according tothe criterion explained in sec .[ sub : restrepo ] , the lcc is large when any of the values , , exceeds unity .when there are more than two nodes in each module ( _ i.e. _ , ) , the lcc is large even in the limit , because .however , when , the lcc does not extend beyond a single module , _i.e. _ , .when there are many modules ( _ i.e. _ , large ) , the result for implies that the actual is small .when is small , a similar relation holds true . in this case ,the largest eigenvalue is not degenerated .however , for a moderate , the lcc tends to contain a majority of nodes in a single module and does not extend beyond the module .such a lcc is regarded to be large by the res strategy , whereas it is actually small when is large . 
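For reference, the recalculated strategy of Restrepo and colleagues amounts, for undirected networks, to repeatedly removing the node with the largest squared Perron-vector entry and recomputing the leading eigenvector on the remaining network. A direct and deliberately unoptimized sketch is given below; it uses a dense power iteration purely for self-containedness, so it is only practical for small networks.

```python
import numpy as np
import networkx as nx

def res_order(G):
    """RES removal order: repeatedly remove the node whose squared Perron-vector
    entry (the dynamical importance for undirected networks) is largest,
    recomputing the leading eigenvector of the remaining network each time."""
    H = G.copy()
    order = []
    while H.number_of_edges() > 0:
        nodes = list(H.nodes())
        A = nx.to_numpy_array(H, nodelist=nodes)
        u = np.ones(len(nodes))
        for _ in range(200):                 # plain power iteration
            u = A @ u
            norm = np.linalg.norm(u)
            if norm == 0:
                break
            u /= norm
        target = nodes[int(np.argmax(u ** 2))]
        order.append(target)
        H.remove_node(target)
    order.extend(H.nodes())                  # leftover isolated nodes, arbitrary order
    return order
```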
in summary ,the res strategy applied to modular networks may be inefficient , because it does not distinguish between local and global epidemics .the same is the case for degree - based immunization strategies in which hubs are preferentially immunized .if a considerable number of hubs contribute to intramodular but not to intermodular connectivity , alternative strategies may work better .even though we have dealt with networks with modules of equal size , the discussions above can also be applied to modular networks in which the size of modules is heterogeneous .we develop an immunization strategy that can be applied to modular networks . by definition , intermodular links are rare compared to intramodular links in a modular network .if intermodular links are preferentially removed during immunization , the modular structure will be preserved throughout the immunization procedure .therefore , if the lcc at a certain value of contains a considerable number of modules that are connected at this value of , many nodes in each of such modules are likely to belong to the lcc . on this basis , for simplicity , we assume that all the nodes in each module belong to the lcc or none of them belongs to the lcc . to establish an efficient immunization strategy for modular networks ,we apply the res strategy to the coarse - grained network representing the connectivity among modules .given a partition of nodes into modules , we define an coarse - grained adjacency matrix as where denotes the module .the matrix is weighted , and is equal to the number of links between and . it should be noted that is set to 0 to respect the assumption that all the nodes in a module are simultaneously included in or excluded from the lcc . otherwise , a localized mode such as may become the perron vector of , owing to which epidemics restricted to a single module can not be ruled out .the perron vector of is determined by , where is the largest eigenvalue of . represents the importance of the module in terms of the eigenvector centrality .we calculate the shift in , denoted by , owing to the removal of a single node .we denote the index of the module that node belongs to by . the removal of node elicits a change in the coarse - grained adjacency matrix by where is the number of intermodular links that exist between node and module , _i.e. _ , it should be noted that . to calculate ,it is necessary to evaluate the amount of perturbation in owing to the node removal .generally , is perturbed by an amount larger than ( because only the elements of in the row or those in the column can decrease after node is removed . however , as opposed to the formulation of the res strategy ( sec .[ sub : restrepo ] ) , the removal of node does not result in , unless node is the only node contained in .although this situation occurs after some nodes have been removed , it is not very common except near the percolation threshold .therefore , we assume that the node removal changes the perron vector to where is a small vector .we determine as follows .the linear equations for the perron vector before and after the removal of node are represented by and respectively . by combining these equations and neglecting small - order terms ( ) and , we obtain if node is the last node in that is removed at a certain value of , eq. becomes .this relation is consistent with the fact that vanishes after the removal of node . 
by substituting eqs ., , and and in , we obtain the following expression as the first - order approximation : on the basis of eqs . and, we sequentially remove node that maximizes .we label this immunization strategy as the mod strategy . when there are many nodes in module , eqs . and imply .therefore , the contribution of the node removal to is attributed to two factors : the importance of the module that node belongs to ( _ i.e. _ , ) and the connectivity of node to other important modules ( _ i.e. _ , ) . as the other extreme to the case described above, we consider the situation in which node is the only node that constitutes module . by substituting ( ) in eqs ., , and , we have ; the res strategy is reproduced . in other words ,the mod strategy is equivalent to the res strategy when all the nodes form isolated modules . to apply the mod strategy to real data , we first partition the network into modulesthen , we calculate ( ) by the power method . this operation is fast unless the spectral gap of is too small and is too large .the power method produces as a byproduct ; this value is used in eq . .then , we remove the node that realizes the maximum .next , we repeat this procedure . to save computation time, we do not apply a module detection algorithm in each step . on the basis of the modular structure determined for the original network , we recalculate andremove the nodes one at a time .if all the modules are isolated , we sequentially remove the nodes in the descending order of .we recalculate of all the remaining nodes after the removal of each node .this part of the mod strategy is heuristic and can be replaced by other immunization strategies .we compare the efficiency of the mod strategy on various networks with those of other immunization strategies .to detect modules in networks , we apply either the greedy algorithm proposed by clauset and colleagues that approximately maximizes the modularity of a network , the fast heuristic algorithm to the same end proposed by blondel and colleagues , or the algorithm based on random walks proposed by rosvall and bergstrom . forall the examined data sets , blondel s and rosvall s algorithms identify the smallest and the largest number of modules among the three algorithms , respectively ( tab .[ tab : statistics of networks ] ) .we call the mod strategy combined with the community detection algorithms of clauset , blondel , and rosvall as the mod - c , mod - b , and mod - r strategies , respectively .we compare the efficiency of the mod strategy with the following immunization strategies .* _ degree - based ( d ) strategy _ : we remove the nodes in decreasing order of their degree in the original network . if there exists more than one node with the same degree , we select one of them with equal probability . *_ recalculated degree - based ( rd ) strategy _ : we sequentially remove the nodes with the largest degree .this strategy differs from the d strategy in that we recalculate the degrees of all the remaining nodes after removing each node . *_ betweenness - based ( b ) strategy _ : we remove the nodes in decreasing order of the betweenness centrality .the betweenness centrality of a node is the normalized number of shortest paths between node pairs that pass through the node . * _ recalculated betweenness - based ( rb ) strategy _ : we sequentially remove the nodes with the largest betweenness centrality .we recalculate the betweenness centralities of all the remaining nodes after removing each node . 
*_ strategy based on dynamical importance ( res ) _ : see sec . [ sub : restrepo ] for the explanation . if , in any strategy , there are multiple nodes that realize the maximum value of the relevant quantity , we select one of these nodes with equal probability .because the b strategy performs poorly compared to other strategies in all the networks described in the following sections , we do not show the numerical results of this strategy . although the d strategy performs worse than the rd strategy ( and many other strategies ) in most cases , we present the results obtained from the d strategy because it is a typical strategy . while efficiencies of the d , rd , b , and rb strategies are were compared in a previous study for some networks , we examine these strategies with regard to modular networks .our methods do not improve upon the previous methods for networks without modular structure . to verify this, we generate a scale - free network with and the degree distribution using the barabsi - albert ( ba ) model .we set by setting the parameters and of the ba model to 6 .major statistics for the generated ba model are listed in tab .[ tab : statistics of networks ] .the relative size of the lcc is plotted against the node occupation probability in fig .[ fig : adhoc](a ) .if is very small for a large value of , an immunization strategy is considered to be efficient .the mod - c , mod - b , and mod - r strategies are as efficient as the d strategy .these three strategies are superceded by the rd , res , and rb strategies , as expected .the inefficiency of the mod strategies is presumably caused by the lack of the modular structure in the ba model .in general , a large q - value indicates the presence of modular structure in a network ( but see ) .the q - values of this network determined by the three community detection algorithms are equal to 0.249 ( clauset ) , 0.258 ( blondel ) , and 0.184 ( rosvall ) and are considered to be small . for a systematic comparison , we compare these q - values with those of the networks generated by random rewiring of edges with the degree of each node preserved .the generated networks do not have particular structure except that the degrees are heterogeneous .the q - values of the rewired networks are almost the same as those of the ba model ( tab .[ tab : statistics of networks ] ) , which indicates the absence of modular structure in the ba model .next , we apply the mod strategy to ad hoc networks with modular structure. there are various algorithms that produce benchmark networks with modular structure .we generate two networks as follows .the following numerical results do not critically depend on the method of construction of the modular network .consider modules of the same size .in the first ad hoc network , a module is the erds - rnyi random graph with the connection probability , such that the mean degree within a module is equal to .then , we generate the coarse - grained network among modules in the form of the random graph with a mean degree of 6 .any pair of node in module and node in another module ( ) may be connected if and are connected in the coarse - grained network .when this is the case , we connect nodes and with probability .then , for each node , the expected number of neighbors in different modules is equal to .we set and and run the algorithm until we obtain a connected network. the mean degree of the generated network is equal to .the results for different immunization strategies are compared in fig .[ fig : adhoc](b ) . 
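The sketch below ties together two pieces of this section: it builds an ad hoc modular network of the kind just described (ER modules of equal size joined along a random coarse-grained graph) and computes, for a given partition into modules, a simplified MOD score for every node, namely the product of the Perron-vector entry of the node's own module in the zero-diagonal coarse-grained matrix and the weighted number of links from the node to other modules. This corresponds to the large-module limit of the score derived above; the normalization by the coarse-grained eigenvalue, the small-module correction, and the recalculation after each removal are omitted, and all parameter values are illustrative.

```python
import numpy as np
import networkx as nx

def adhoc_modular_network(n_modules=50, module_size=100, k_in=6.0,
                          inter_degree=6, p_link=0.01, seed=0):
    """ER modules of equal size joined along a random coarse-grained graph."""
    rng = np.random.default_rng(seed)
    G = nx.Graph()
    membership = {}
    for m in range(n_modules):
        module = nx.erdos_renyi_graph(module_size, k_in / (module_size - 1),
                                      seed=int(rng.integers(1 << 30)))
        mapping = {u: (m, u) for u in module}
        G.update(nx.relabel_nodes(module, mapping))
        membership.update({node: m for node in mapping.values()})
    coarse = nx.gnm_random_graph(n_modules, n_modules * inter_degree // 2,
                                 seed=int(rng.integers(1 << 30)))
    for a, b in coarse.edges():              # link node pairs across connected modules
        for u in range(module_size):
            for w in range(module_size):
                if rng.random() < p_link:
                    G.add_edge((a, u), (b, w))
    return G, membership

def mod_scores(G, membership):
    """Simplified MOD score: v_{I(i)} * sum_J v_J * n_i(J) over modules J != I(i)."""
    modules = sorted(set(membership.values()))
    idx = {m: k for k, m in enumerate(modules)}
    B = np.zeros((len(modules), len(modules)))
    for u, w in G.edges():
        a, b = idx[membership[u]], idx[membership[w]]
        if a != b:                           # diagonal of B is kept at zero
            B[a, b] += 1.0
            B[b, a] += 1.0
    v = np.ones(len(modules))                # Perron vector of B via the power method
    for _ in range(200):
        v = B @ v
        v /= np.linalg.norm(v)
    scores = {}
    for node in G:
        own = idx[membership[node]]
        inter = np.zeros(len(modules))
        for nb in G[node]:
            j = idx[membership[nb]]
            if j != own:
                inter[j] += 1.0              # n_i(J): links from node to module J
        scores[node] = v[own] * (v @ inter)
    return scores

G, membership = adhoc_modular_network()
target = max(mod_scores(G, membership), key=mod_scores(G, membership).get)
```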
the results labeled as mod in fig .[ fig : adhoc](b ) are based on the predefined modular structure with the number of modules , because all the three algorithms for community detection identify the correct modular structure . figure [ fig : adhoc](b ) indicates that the mod strategy substantially outperforms the res strategy .this is presumably because the res strategy detects lccs contained in a single module or a small number of modules as a signature of a global epidemic , as discussed in sec .[ sub : localized ] , whereas the mod strategy does not . figure [ fig : adhoc](b ) indicates that the rb strategy outperforms the mod strategy .this is as expected because a link version of the rb strategy is used to partition the network efficiently into modules ; if we remove links in the decreasing order of the recalculated betweenness centrality of the links , the network is partitioned into modules efficiently .the drawback of the rb strategy with respect to the mod strategy is the former s high computation time ; we can not apply the rb strategy to larger networks .we discuss this point in sec .[ sec : discussion ] .we also carry out numerical simulations on a heterogeneous ad hoc modular network .we generate each module using the ba model with ( _ i.e. _ , ) .the coarse - grained network among modules is assumed to be the ba model with a mean degree of 6 ( _ i.e. _ , ) .pairs of nodes in different modules are connected in the same way as in the previous network , such that .we set and .the mean degree of the generated network .the generated network is a connected network .the immunization results for this modular scale - free network are shown in fig .[ fig : adhoc](c ) .the results are qualitatively the same as those in fig .[ fig : adhoc](b ) .we investigate the application of the mod strategy to four real - world networks .the statistics for each network including the number of modules and the q - values are listed in tab .[ tab : statistics of networks ] .the first example is a high energy particle ( hep ) citation network .we use this network as a representative of a relatively dense network .this network is used in a previous study of immunization .because of its large mean degree , a relatively large fraction of nodes have to be removed to fragment this network .the q - value for the partition using the three algorithms are large .they are also much larger than the q - values for the networks generated by rewiring the edges without changing the degree of each node . therefore , the hep network has major modular structure .note that the rewiring sometimes makes the network disconnected .however , the q - value does not differ much between connected and disconnected rewired networks .therefore , we do not explore the effect of disconnectedness of the rewired networks .we do the same omission for the three other real - world networks examined later .the immunization results for the hep network are shown in fig .[ fig : real](a ) . 
the results for the rb strategyare not shown because is too large for us to employ the rb strategy .this limitation with regard to the rb strategy is also true for the three other networks .it can be observed from fig .[ fig : real](a ) that the mod - r strategy outperforms all the other strategies including the res strategy .the improvement obtained by employing the mod - r strategy , which is quantified by the amount of shift of the percolation threshold is approximately as large as that obtained from the recently proposed strategy using graph partitioning .this strategy divides the network into equal - sized groups ; it is distinct from the mod strategy .the lcc for the mod - c strategy is small when is large .however , below , decreases slowly with a decrease in . at ,all the modules are already separated .the lcc for the mod - c strategy occupies a significant fraction of the original network at and is represented by the largest module in the network . because we have not optimized the mod strategy after all the modules are separated , the mod - c strategy does not perform well below .the mod - b strategy yields a similar result ; below , the lcc is the largest module in the network .however , the lcc is smaller than that for the mod - c strategy because the size of the largest module detected by blondel s algorithm is smaller than that detected by clauset s algorithm. the performance of the mod strategies can be enhanced if we improve the immunization strategy after all the modules are separated .however , we do not explore this aspect in the present study .the second example is a social network called the pretty good privacy ( pgp ) network .a link is formed when two persons share confidential information using the pgp encryption algorithm on the internet .this network has a prominent community structure ( see tab . [ tab : statistics of networks ] for the q - values ) .the immunization results are shown in fig .[ fig : real](b ) . for this network , the mod - c ,mod - b , and mod - r strategies outperform the d , rd , and res strategies .the third example is the lcc of a dataset of the world wide web .we ignore the direction of the links .the numerical results for this lcc are shown in fig .[ fig : real](c ) .the mod - c , mod - b , and mod - r strategies perform better than the d , rd , and res strategies at least in terms of the percolation threshold .the fourth example is an email - based social network .the results shown in fig . [ fig : real](d ) indicate that , for this network , the mod - c , mod - b , and mod - r strategies do not outperform the other strategies .the performance of the mod - r strategy is superior to those of the other methods near the percolation threshold , but this superiority is only marginal .the performance of the mod - c strategy is inferior to those of the other methods over the entire range of .the mod - b and mod - r strategies are more inefficient than the d , rd , and res strategies when is large .the three community detection algorithms result in large q - values for the email social network. however , this network may not be as modular as indicated by the large q - values for two reasons . first , the rewired networks also have relatively large q - values , although they are significantly smaller than the q - values for the original network ( tab . [tab : statistics of networks ] ) .second , generally speaking , networks with small mean degree tend to have large q - values even if the modular structure is absent . 
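The comparison with rewired networks used above can be reproduced with a degree-preserving double-edge-swap null model; the sketch below uses networkx's built-in swap routine and a greedy modularity partition, both of which are stand-ins for whichever rewiring scheme and community detection algorithm one prefers, and it does not separate connected from disconnected rewired networks.

```python
import networkx as nx
from networkx.algorithms import community

def rewired_q_values(G, n_null=100, seed=0):
    """Modularity of the original network and of degree-preserving rewirings."""
    part = community.greedy_modularity_communities(G)
    q_orig = community.modularity(G, part)
    q_null = []
    for k in range(n_null):
        H = G.copy()
        nx.double_edge_swap(H, nswap=10 * H.number_of_edges(),
                            max_tries=100 * H.number_of_edges(), seed=seed + k)
        part_h = community.greedy_modularity_communities(H)
        q_null.append(community.modularity(H, part_h))
    return q_orig, q_null
```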
the email social network may not have sufficient modular structure , which may have caused the inefficiency of the mod strategy for this network .we have proposed an efficient algorithm called the mod strategy for immunizing networks with modular structure .this strategy combines a community detection algorithm and the identification of nodes with crucial intermodular links .we have validated the effectiveness of the mod strategy with artificial and real networks using two community detection algorithms .the mod strategy is applicable to networks in which the size of modules is heterogeneous , as is the case in real modular networks .the mod strategy can be extended to the case of networks with more than two hierarchical levels , which are often found in real data . in such a network , we first remove the nodes responsible for the formation of the most global connection . if modules at the most global level have been fragmented , we apply the community detection algorithm to each module such that the nodes responsible for connecting different submodules in a module are preferentially removed . for networks with bipartite modular structure ,the mod strategy is inefficient .this is because the mod strategy is based on the conventional concept of modular structure , _i.e. _ , there are relatively more links within a module than across different modules .this property is not satisfied by networks with bipartite modular structure . dealing with bipartite modular structure and also overlapping modular structure ( see for a review )is beyond the scope of the present paper .the mod strategy does not outperform the rb strategy .this is as expected because the rb strategy provides a useful algorithm for community detection .the heart of the algorithm lies in fragmenting a network into modules with a small number of links ( not nodes ) that are removed in the decreasing order of the betweenness centrality .however , carrying out community detection on the basis of the rb strategy is computationally formidable ; this strategy requires time for sparse networks .this fact has led to the development of faster algorithms for community detection that are independent of the recalculated betweenness centrality .the rb strategy of immunization also requires time .an immunization strategy developed by the adaptation of a faster community detection algorithm that sequentially removes links ( so - called divisive algorithms ) would outperform the mod strategies examined in the present study ( _ i.e. _ , mod - c , mod - b , and mod - r ) .however , such a community detection algorithm seems to be unknown .we state that the mod strategy outperforms the rb strategy when the network is large .we have implicitly used fast community detection algorithms so that the mod strategy performs faster than the rb strategy . for sparse networks ,rosvall s algorithm runs comfortably fast .clauset s algorithm runs faster than rosvall s algorithm on our data and requires only time .blondel s algorithm is even faster in general . in the so - called out - of - the - neighborhood ( out ) immunization strategies , one picks a neighbor of node that has largest degrees out of the neighborhood of the original node .this is an efficient immunization strategy that uses only the local information about the network .the ring vaccination stands on a similar spirit .in contrast to these strategies , the mod strategy has an important limitation that one needs global information about the connectivity among modules . 
nevertheless , the mod and out strategies are complementary with regard to the information needed for implementation .the mod strategy requires coarse but global information about the network , plus the degree of each node .the out strategies require only the local information about the network , but with the information about the degree of the neighbors included .our results are consistent with the finding that nodes in a network can be classified according to their global and local roles .this is particularly true when the betweenness centrality is not predicted from the degree , which is typical for modular networks .the deviation of the global importance of a node from the local importance of the same node in modular networks is also reported for the pagerank and other similar centrality measures .in this situation , the mod strategy preferentially immunizes globally important nodes having important intermodular links rather than locally important ones such as local hubs .the general idea of targeting globally important nodes in modular networks has potential applications in other dynamical phenomena on networks , such as epidemic dynamics , synchronization , opinion formation , and traffic .we thank toshihiro tanizawa and taro ueno for their valuable discussions .n.m . acknowledges the support through grants - in - aid for scientific research ( nos .20760258 and 20540382 ) from mext , japan .newman m e j 2003 _ siam rev . _ * 45 * 167 boccaletti s , latora v , moreno y , chavez m and hwang d - u 2006 _ phys .rep . _ * 424 * 175 barrata , barthlemy m and vespignani a 2008 _ dynamical processes on complex networks _ ( cambridge : cambridge university press ) hethcote h w and yorke j a 1984 _ lect . notes in biomath .* 1 anderson r m , medley g f , may r m and johnson a m 1986 _ i m a j. math .* 3 * 229 may r m and anderson r m 1988 _ phil .london b _ * 321 * 565 cohen r , erez k , ben - avraham d and havlin s 2000 _ phys .lett . _ * 85 * 4626 callaway d s , newman m e j , strogatz s h and watts d j 2000 _ phys .lett . _ * 85 * 5468 pastor - satorras r and vespignani a 2001 _ phys .* 86 * 3200 sol r v and montoya j m 2001 _ proc .london b _ * 268 * 2039 dunne j a , williams r j and martinez n d 2002 _ ecol .lett . _ * 5 * 558 albert r , jeong h and barabsi a - l 2000 _ nature _ * 406 * 378 cohen r , erez k , ben - avraham d and havlin s 2001 _ phys .lett . _ * 86 * 3682 holme p , kim b j , yoon c n and han s k 2002 _ physe _ * 65 * 056109 holme p 2004 _ europhys .lett . _ * 68 * 908 chen y , paul g , havlin s , liljeros f and stanley h e 2008 _ phys .* 101 * 058701 ueno t and masuda n 2008 _ j. theor .biol . _ * 254 * 655 wasserman s and faust k 1994 _ social network analysis _ ( cambridge : cambridge university press ) girvan m and newman m e j 2002 _ procusa _ * 99 * 7821 newman m e j 2004 _ eur .j. b _ * 38 * 321 fortunato s 2009 _ phys ._ in press .arxiv:0906.0612v1 becker n g and dietz k 1995 _ math .biosci . _ * 127 * 207 ball f and neal p 2002 _ math ._ * 180 * 73 schinazi r b 2002 _ theor .biol . _ * 61 * 163 ball f , britton t and lyne o 2004 _ math .biosci . _ * 191 * 19 ball f and lyne o 2006 _ stat .methods med .* 15 * 481 guimer r , mossa s , turtschi a and amaral l a n 2005 _ proc .usa _ * 102 * 7794 guimerr and amaral l a n 2005 _ nature _ * 433 * 895 guimer r and amaral l a n 2005 _ j. stat ._ p02001 rosvall m and bergstrom c t 2008 _ proc . natl .usa _ * 105 * 1118 freeman l c 1979 _ soc .netw . 
_ * 1 * 215 barabsi a - l and albert r 1999 _ science _ * 286 * 509 guimer r , sales - pardo m and amaral l a n 2004 _ phys . e _ * 70 * 025101(r ) .
[ figure : ( a ) scale - free network . ( b ) ad hoc random network with communities . ( c ) ad hoc scale - free network with communities . ]
[ table : statistics of networks . the number of nodes and links are those of the lcc of the network . is the number of modules detected by each algorithm . for the rewired networks , the average and the standard deviation of the q - values are shown for each community detection algorithm . to this end , we generate 100 rewired networks from each original network . ]
in this study , an efficient method to immunize modular networks ( _ i.e. _ , networks with community structure ) is proposed . the immunization of networks aims at fragmenting networks into small parts with a small number of removed nodes . its applications include prevention of epidemic spreading , intentional attacks on networks , and conservation of ecosystems . although preferential immunization of hubs is efficient , good immunization strategies for modular networks have not been established . on the basis of an immunization strategy based on the eigenvector centrality , we develop an analytical framework for immunizing modular networks . to this end , we quantify the contribution of each node to the connectivity in a coarse - grained network among modules . we verify the effectiveness of the proposed method by applying it to model and real networks with modular structure .
when nature is observed , it is possible to see that birds , are capable of taking advantage of wind currents not only to minimize their energy consumption , but also to maximize their endurance .one important aspect of this is , they do not hold any information and the ability to estimate the weather conditions on the path they fly ( and/or going to fly through ) .all decisions are solely based on with respect to local and instantaneous wind conditions . in this way, they optimize flight trajectory based on the local and instantaneous decisions .this is , a simple but , yet , inspiring mechanism to learn and incorporate into flight dynamics through the mechanics of flight .ideally , if the regional wind information is completely known in advance ( with the help of a pre - determined ( forecasted ) weather / wind maps over the flight region ) , optimal flight / trajectory planning can be used to determine flight paths that minimize the total power consumption over a specified time interval , subject to various constraints . however , the main challenge for such a _ pre - determined map _ approach is that ( due to the highly complex , stochastic , coupled and nonlinear nature of the atmosphere ) weather forecasting related prediction errors also propagate into the optimization routine . therefore ,instead of making decisions based on pre - determined weather maps , with this study , we propose real - time guidance strategies that will make _ local _ , _ in - situ _ decisions using _ available on - board _ instruments to benefit from the existing _ local _ wind conditions and minimize power consumption during the flight .the foundation of this concept had been briefly outlined in turkoglu s fundamental work .there are pioneering works in the area of uav flights utilizing wind energies .the developments and flight tests of practical guidance strategies for detecting and utilizing thermals have illustrated the feasibility of these concepts .patel studied the effect of wind in determining optimal flight control conditions under the influence of atmospheric turbulence .langelaan studied how to exploit energy from high frequency gusts in the vertical plane for uavs .in addition , langelaan presented a method for minimum energy path planning in complex wind fields using a predetermined energy map .sukkarieh et al . developed a framework for an energy - based path planning that utilizes local wind estimations for dynamic soaring using the measurements and predictions from the wind patterns .rysdyk studied the problem of course and heading changes in significant wind conditions .mcneely and et al . studied the tour planning problem for uavs under wind conditions .mcgee presented a study of optimal path planning using a kinematic aircraft model . in all of these studies, it is assumed that wind information is fully known over the region of flight .but in reality , wind is a stochastic process that needs to be addressed accordingly .one suitable approach is to devise real - time strategies that will benefit from the instantaneous nature of wind dynamics , on the spot , rather than depending on big forecasted weather maps .thus , it is reasonable to adopt the idea of executing _ local _ , _ instantaneous _ maneuvers based on local / on - board measurements to utilize wind energy via the information available at that specific time instant . 
compared with the dynamic optimization studies ,this study presents the use and the utilization of _ in - situ _ wind measurements alone with no regional wind information , to optimize power consumptions . in this paper, optimal adjustments are made to the airspeed , heading angle and/or flight path angle commands to minimize a projected power consumption , based on the _ instantaneous _ , _ local _ wind conditions / measurements .the onboard feedback control system then tracks these modified ( updated ) commands , and this process is repeated periodically throughout the entire flight .when the nature of trajectory optimization problems is taken into account , it is possible to see that there are six main components which are vital in determining ( and also analyzing ) the flight trajectory : airspeed ( v ) , heading angle ( ) , flight path angle ( ) and location of the aircraft , namely , and .once these values are known ( and/or provided ) , trajectory planning becomes a relatively easy task . in this study , for the purpose of developing optimal guidance strategies ,uav flights are represented using a 3d dynamic - point - mass model , and the detailed structure is provided in further detail in the following sections . in order to increase numerical efficiency and to reduce computational complexity , 3-d dynamic point mass equations of motion are normalized by specifying a characteristic air - speed and mass . in this paper , characteristic _ normalization speed- _ is selected to be the maximum speed of the aircraft ( i.e. ) . following to some algebraic manipulations , it is possible to obtain normalized equations of motion as and where the functional dependencies of the wind terms are shown in parenthesis , for convenience .since this is a constrained optimization problem , imposed constraints on states and controls are also expressed using normalized values , and are presented in the following sections .guidance strategies , in general , cold be grouped into three basic categories : * action strategy * , * velocity strategy * , and * trajectory strategy*. in this study , * action strategies * refer to the _ direct specifications _ of some ( or all ) of the control variables , namely = ( i.e. power , lift coefficient , bank angle ) over a certain period of time . here, they represent open - loop control schemes . in comparison , * velocity strategies * specify some or all of the desired velocity components ( i.e. airspeed , heading angle , flight path angle ) over a certain time interval ( ) as flight commands , where these commands are then followed via closed - loop tracking . finally as its name suggests , * trajectory strategies * specify a flight trajectory of desired positions as functions of time over a certain time interval : where ], we also have these expressions depend on , which also have a dependency on , , and through eqs.([xdotnormalized])-([hdotnormalized ] ) , and reciprocally on the wind components over the specified time interval .now , in order to complete the derivation of the projected wind rate expression for level flight strategies , expressions for will be developed in the following section .the concentration in this section will mainly be focused on developing expressions for .their dependencies , explicitly , show on the increments of airspeed and heading angle . from eqs.([xdotnormalized])-([ydotnormalized ] ) , and for , it is possible to have with the assumption that both airspeed ( ) and heading angle ( ) will have achieved their commanded values ( i.e. 
reach steady state conditions ) at the end of the specified time interval , and keeping in mind that wind speeds are obtained using eq.([eq : windspeeds ] ) ( through the trapezoidal rule ) , the numerical integration of the above equations leads to \\\delta \bar{y } & = { 1 \over q_{level } } \left [ \left({\partial{\bar{w}_{y } } \over \partial{\bar{x } } } \right ) b_1 + \left ( { 2 \over \delta \bar{t}}-{\partial{\bar{w}_{x } } \over \partial{\bar{x } } } \right ) b_2 \right ] \end{split}\ ] ] where for a sufficiently small , expression in eq.([eq : q_2d ] ) shall always be nonzero ; ensuring the existence of solutions for the position change expressions .based on the derivations above , the power consumption at terminal state ( i.e. at time ) can now be expressed as a function of the current command adjustments in airspeed and heading angle .then , the problem of reducing future power consumptions reduces to determine and from subject to and incremental constraints , and the initial state conditions required .from the given problem formulation , it is possible to see that the nature of the problem is very complex , extensively coupled and highly non - linear . at this point ,different algorithms may be used to solve the static optimization problem in hand .but most of the existing numerical methods heavily depend on iteration routines which are not desirable in real - time applications .they impose major drawbacks from the perspective of computation time and `` convergence rate in allowed computation time '' ( which can be extremely small in some cases and applications ) . to avoid this to certain extent , in this section, it is aimed to solve the static optimization problem in an _ analytical _ , and _ a - single - shot _ manner . for this purpose , here ,yet simple but powerful _ gradient method _ will be presented to aid in solving the static optimization problem in hand . with second - order gradient algorithms ,the main goal is to find _ locally _ optimal adjustments : , and using the necessary conditions for optimality . for this purpose ,second order taylor series approximation of projected power function , in the neighbourhood of initial state , is taken into account therefore , in presence of favourable wind conditions , the solution to second - order optimal adjustment strategy ( , and ) for the case of 2d flight is obtained as {nx1}^{opt } = \left [ \left ( t_2^t \right)_{nxn } \left ( t_2 \right)_{nxn } \right]^{-1 } \left ( -t_2^t \right)_{nxn } \left ( t_1 \right)_{nx1} ] , ( i.e. ] . here ,a generic case of increments are considered .the mean of basic average power consumption over different initial heading conditions is defined as the measure of the performance where each corresponds to a different initial heading angle , and is the number of different initial heading angles used . to evaluate the proposed guidance strategies , following two scenariosare considered : * * scenario-0 : * is the reference strategy that aims to follow the reference airspeed and a constant heading angle command set at the initial heading angle .this provides the reference average power consumption , , in case of no wind . ** scenario-1 : * is the case where both airspeed and heading angle commands are adjusted periodically based on the current wind measurements .the resulting average power consumption is . 
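the single - shot solution quoted above has the structure of a linear least - squares step on the quadratic model of the projected power : the vector denoted t1 plays the role of the first - order term and the matrix t2 collects the second - order sensitivities , so the optimal adjustment minimizes the squared residual of t2 * delta + t1 , clipped to the incremental rate limits . the following python sketch only illustrates this structure ; the numerical values are placeholders and are not the aircraft or wind model of this paper .

```python
# illustrative sketch of the "single-shot" second-order adjustment: the optimal
# command increment is the least-squares solution of  T2 * delta ~= -T1 ,
#   delta_opt = (T2^T T2)^(-1) (-T2^T) T1 ,
# followed by clipping to the incremental constraints of the aircraft.
# all numbers below are placeholders, not values from the paper.
import numpy as np

def single_shot_adjustment(T1, T2, delta_max):
    delta = np.linalg.solve(T2.T @ T2, -(T2.T @ T1))   # one-shot, no iteration
    return np.clip(delta, -delta_max, delta_max)        # respect rate limits

T1 = np.array([0.8, -0.3])                 # first-order term of the projected power
T2 = np.array([[2.0, 0.4], [0.4, 1.5]])    # second-order term (assumed well conditioned)
delta_max = np.array([0.05, 0.10])         # e.g. bounds on airspeed / heading increments
print(single_shot_adjustment(T1, T2, delta_max))
```

because the step is computed in closed form rather than by an iterative routine , its cost is a single small linear solve per measurement update , which is what makes the strategy attractive for real - time use .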
to assess the outcomes of proposed strategies , the following benefit criterion is introduced as a relative measure of potential fuel savings of the proposed guidance strategies over the reference strategy in simulations ,uav parameters similar to those of the scaneagle uav are used , where characteristic values , taken from here , the power available is assumed to be able to vary instantaneously ( change in 1sec for propeller - driven engines ) .in addition , available control rate bounds are taken into account as ( , ( change in 1sec),( ( ] with a simulation step size of = 50[hz] ] ) .wind magnitude ( ) and wind direction ( ) , obtained from the stochastic wind model formulation , taken from , is used .wind frequency , throughout these equations is assumed to be constant and equal to \approx 9.4[deg / m] ] , which is the same as the measurement update time ( i.e. $ ] ) . in this scenario , it is desired to fly with optimal airspeed ( determined for no wind case ) and constant heading angle , which is the initial heading angle given in the simulation settings . with this scenario ,if the mission is specified to fly with optimal airspeed and initial heading angle ( say ( with respect to true north ) ) , uav dynamics will execute these commands throughout the entire flight and will determine the power consumption during this specific flight routine .obtained power consumption value is the _ reference _ power setting , , and will be compared with other outcomes to measure the efficiency of proposed strategies in given flight conditions .it is intuitive that combined affects of both ( airspeed and heading ) adjustments will help to harvest the maximum amount of benefit from the proposed guidance strategy and local wind conditions . for this purpose ,extensive simulations have been conducted , and obtained results are given in fig.([fig : rb1 ] ) .fig-[fig : rb1 ] compares the relative benefits of the two aforementioned strategies in which periodic adjustments are made in the commands of airspeed alone ( _ dashed line _ ) , and both airspeed and heading ( _ solid line _ ) , respectively , over the reference strategy .it is possible to see from fig-[fig : rb1 ] that the maximum benefit is obtained for the case where adjustments both in airspeed and heading angle are applied . in this scenario ,overall power savings goes up to in terms of total power consumption .it is worhtwhile to note that for different wind and atmospheric conditions , flight path ( due to heading angle adjustments ) will also change .eventually , while we seek to benefit from the wind currents and aim to extract energy from the wind , this may potentially bring the uav to a completely undesired location , totally unrelated to the assigned flight mission .thus , it is important to impose boundary control and restrict the flight region to complete the guidance strategies .this concept is currently under investigation and will be reported in another study , in further details .this paper presents real - time uav guidance strategies that utilize wind energy to improve flight endurance and minimize power / fuel consumption . 
in these strategies , airspeed and/or heading angle commandsare periodically adjusted based on the _ insitu _ measurements of local wind components .it has been shown that using local , instantaneous wind measurements , without the knowledge of the wind field that the uav is flying through , it is possible to benefit from the wind energy , greatly enhance the performance and increase flight endurance of the uav . throughout this research effort, uav has been modeled using 3d point - mass equations .corresponding performance and practical constraints has been introduced to mimic a realistic flight of an uav .a stochastic wind model has been taken into account to simulate the true nature of the wind .uav flights were formulated as a non - linear optimization problem and a cost function has been introduced to model power characteristics at terminal state as a terminal state cost function , which minimizes overall power consumption .this optimization problem has been solved as a single shot optimization , in real - time .second - order gradient algorithms are used to find local , optimal solutions ( adjustments ) that will minimize the power with respect to taken local , instantaneous wind measurements .extensive simulation results show that it is possible to obtain power savings up to with respect to the flight scenario , with no wind .the proposed strategies offer improvements over the constant airspeed reference strategy in terms of average power consumptions even in a constant wind field .these benefits initially increase as the spatial frequency of the wind field gradually increases , but reaches a peak at certain frequencies and then start to decrease beyond these frequencies .the author is grateful to dr.yiyuan j. zhao for his insightful discussions and suggestions throughout this study .turkoglu , k. , zhao , y. j. and capozzi , b. , `` real - time insitu strategies for enhancing uav endurance by utilizing wind energy '' aiaa guidance , navigation , and control conference , chicago , illinois , aug .10 - 13 , 2009 .lawrance , n.r . and sukkarieh , s. , a guidance and control strategy for dynamic soaring with a gliding uav , proceedings of the 2009 ieee international conference on robotics and automation ( icra 2009 ) , 2009 turkoglu , k. , statistics based modeling of wind speed and direction in real time optimal guidance strategies via ornstein - uhlenbeck stochastic processes , _ accepted for publication _ , 4th aviation , range , and aerospace meteorology special symposium , ams 94th annual meeting , 2 - 6 feb .2014 in atlanta , ga
|
this study presents real - time guidance strategies for unmanned aerial vehicles ( uavs ) that can be used to enhance their flight endurance by utilizing _ in - situ _ measurements of wind speeds and wind gradients . in these strategies , periodic adjustments are made in the airspeed and/or heading angle command , in level flight , for the uav to minimize a projected power requirement . in this study , uav dynamics are described by a three - dimensional dynamic point - mass model . a stochastic wind field model has been used to analyze the effect of the wind in the process . onboard closed - loop trajectory tracking logic that follows airspeed vector commands is modeled using the method of feedback linearization . to evaluate the benefits of these strategies in enhancing uav flight endurance , a reference strategy is introduced in which the uav would follow the optimal airspeed command in a steady level flight under zero wind conditions . a performance measure is defined as the average power consumption relative to the no - wind case . different scenarios have been evaluated both over a specified time interval and over different initial heading angles of the uav . a relative benefit criterion is then defined as the percentage improvement in the performance measure of a proposed strategy over that of the reference strategy . extensive numerical simulations are conducted to show the efficiency and applicability of the proposed algorithms . the results demonstrate the possible power savings of the proposed real - time guidance strategies in level flight through the utilization of wind energy .
|
systems with a complex time evolution , which generate a great impact event from time to time , are ubiquitous .examples include fluctuations of prices for financial assets in economy with rare market crashes , electrical activity of human brain with rare epileptic seizures , seismic activity of the earth with rare earthquakes , changing weather conditions with rare disastrous storms , and also fluctuations of online diagnostics of technical machinery and networks with rare breakdowns or blackouts .due to the complexity of the systems mentioned , a complete modeling is usually impossible , either due to the huge number of degrees of freedom involved , or due to a lack of precise knowledge about the governing equations .this is why one applies the framework of prediction via precursory structures for such cases .the typical application for prediction with precursory structures is a prediction of an event which occurs in the very near future , i.e. , on short timescales compared to the lifetime of the system .a classical example for the search for precursory structures is the prediction of earthquakes .a more recently studied example is the short term prediction of strong turbulent wind gusts , which can destroy wind turbines . in a previous work , we studied the quality of predictions analytically via precursory structures for increments in an ar(1 ) process and numerically in a long - range correlated arma process .the long - range correlations did not alter the general findings for gaussian processes , namely , that larger events are better predictable .furthermore we found other works which report the same effect for earthquake prediction , prediction of avalances in soc - models and in multiagent games . in this contribution , we investigate the influence of the probability distribution function ( pdf ) of the noise term in detail by using not only gaussian , but also exponential and power - law distributed noise .this approach is also motivated by the book of egans which explains that receiver operator characteristics ( roc ) obtained in signal detection problems can be ordered families of functions in dependence on a parameter .we are now interested in learning how the behavior of these families of functions depends on the event magnitude and the distribution of the stochastic process , if the roc curve is used for evaluating the quality of predictions . + after defining the prediction scheme in sec .[ pre ] and the method for measuring the quality of a prediction in sec .[ roc ] , we explain in sec . [ evsize ] how to consider the influence on the event magnitude . in sec .[ constr ] we formulate a constraint , which has to be fulfilled in order to find a better predictability of larger ( smaller ) events . in the next section , we apply this constraint to compare the quality of predictions of large increments within gaussian ( sec .[ gaussian ] ) , exponential distributed ( sec .[ symexpo ] ) and power - law distributed i.i.d .random numbers ( sec . [ powl ] ) .we study the prediction of increments in free jet data in sec .[ freejet ] .conclusions appear in sec .[ conclusions ] .the considerations in this section are made for a time series , i.e. , a set of measurements at discrete times , where with a sampling interval and .the recording should contain sufficiently many extreme events so that we are able to extract statistical information about them .we also assume that the event of interest can be identified on the basis of the observations , e.g. 
by the value of the observation function exceeding some threshold , by a sudden increase , or by its variance exceeding some threshold .we express the presence ( absence ) of an event by using a binary variable . when we consider prediction via precursory structures ( _ precursors _ , or _ predictors _ ) , we are typically in a situation where we assume that the dynamics of the system under study has both , a deterministic and a stochastic part .the deterministic part allows one to assume that there is a relation between the event and its precursory structure which we can use for predictive purposes .however , if the dynamic of the system was fully deterministic there would be no need to predict via precursory structures , but we could exploit our knowledge about the dynamical system as it is done , e.g. , in weather forecasting . in this contributionwe focus on the influence of the stochastic part of the dynamics and assume therefore a very simple deterministic correlation between event and precursor.the presence of this stochastic part determines that we can not expect the precursor to preced _ every _ individual event .that is why we define a precursor in this context as a data structure which is _ typically _ preceding an event , allowing deviations from the given structure , but also allowing events without preceeding structure . for reasons of simplicitythe following considerations are made for precursors in real space , i.e. , structures in the time series .however , there is no reason not to apply the same ideas for precursory structures , which live in phase space .in order to predict an event occurring at the time we compare the last observations , to which we will refer as the _ precursory variable _ with a specific precursory structure once the precursory structure is determined , we give an alarm for an event when we find the precursory variable inside the volume there are different strategies to identify suitable precursory structures .we choose the precursor via maximizing a conditional probability which we refer to as the _ likelihood _ .. and the term _ aposterior pdf _ for the probability to find a precursor before of an already observed extreme event .note that the names might be also used vice versa , if one refers to the precursor as the previously observed information . ]the likelihood provides the probability that an event follows the precursor .it can be calculated numerically by using the joint pdf .our prediction strategy consists of determining those values of each component of for which the likelihood is maximal .this strategy to identify the optimal precursor represents a rather fundamental choice . in more applied examplesone looks for precursors which minimize or maximize more sophisticated quantities , e.g. , discriminant functions or loss matrices .these quantities are usually functions of the posterior pdf or the likelihood , but they take into account the additional demands of the specific problem , e.g. , minimizing the loss due to a false prediction .the strategy studied in this contribution is thus fundamental in the sense that it enters into many of the more sophisticated quantities which were used for predictions and decision making .a common method to verify a hypothesis or to test the quality of a prediction is the receiver operating characteristic curve ( roc ) .the idea of the roc consists simply of comparing the rate of correctly predicted events with the rate of false alarms by plotting vs. 
.the rate of correct predictions and the rate of false alarms can be obtained by integrating the _ aposterior _ pdfs and on the precursory volume . that these rates are defined with respect to the total numbers of events and nonevents .thus the relative frequency of events has no direct influence on the roc , unlike on other measures of predictability , as e.g. , the brier score or the ignorance .plotting vs for increasing values of one obtains a curve in the unit - square of the - plane ( see , e.g. , fig .[ fig : rocgauss ] ) .the curve approaches the origin for and the point in the limit , where accounts for the magnitude of the precursory volume .the shape of the curve characterizes the significance of the prediction .a curve above the diagonal reveals that the corresponding strategy of prediction is better than a random prediction which is characterized by the diagonal .furthermore we are interested in curves which converge as fast as possible to , since this scenario tells us that we reach the highest possible rate of correct prediction without having a large rate of false alarms .that is why we use the so - called _ likelihood ratio _ as a summary index , to quantify the roc . for our inference problemsthe likelihood ratio is identical to the slope of the roc - curve at the vicinity of the origin which implies .this region of the roc is in particular interesting , since it corresponds to a low rate of false alarms .the term likelihood ratio results from signal detection theory . in the context of signal detection theory, the term _ a posterior pdf _ refers to the pdf , which we call likelihood in the context of predictions and vice versa .this is due to the fact that the aim of signal detection is to identify a signal which was already observed in the past , whereas predictions are made about future events .thus the likelihood ratio " is in our notation a ratio of a posterior pdfs . however , we will use the common name likelihood ratio throughout the text . for other problemsthe name likelihood ratio is also used for the slope at every point of the roc .since we apply the likelihood ratio as a summary index for roc , we specify , that for our purposes the term likelihood ratio refers only to the slope of the roc curve at the vicinity of the origin as in eq .( [ defm ] ) .note , that one can show that the precursor , which maximizes the likelihood as explained in sec .[ pre ] also maximizes the and is in this sense the optimal precursor .we are now interested in learning how the predictability depends on the event magnitude which is measured in units of the standard deviation of the time series under study .thus the event variable becomes dependent on the event magnitude via bayes theorem the likelihood ratio can be expressed in terms of the likelihood and the total probability to find events . inserting the technical details of the calculation of the likelihood and the total probability ( see the appendix ) we can see that the likelihood ratio depends sensitively on the joint pdf of pecursory variable and event . 
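to make the construction of the roc curve concrete , the following python sketch predicts increments in a sequence of gaussian i.i.d . random numbers by giving an alarm whenever the precursory variable falls below a variable threshold , and records the resulting rates of correct predictions and false alarms ; the slope of the curve near the origin then approximates the likelihood ratio used as a summary index . the event magnitude , the sample size and the alarm scheme based on a single threshold are illustrative choices , not the exact procedure of the later sections .

```python
# minimal sketch: roc curve for precursor-based prediction of increments
# x_{n+1} - x_n >= d in gaussian i.i.d. numbers.  an alarm is raised whenever
# the precursory variable x_n lies below a threshold c; sweeping c traces the
# roc curve (rate of correct predictions vs. rate of false alarms).
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10**6)
d = 2.0                                 # event magnitude (illustrative choice)
events = (x[1:] - x[:-1]) >= d          # binary event variable
precursor = x[:-1]                      # last observation before the increment

thresholds = np.quantile(precursor, np.linspace(0.001, 0.999, 200))
hit_rate, false_rate = [], []
for c in thresholds:
    alarm = precursor <= c
    hit_rate.append(alarm[events].mean())      # rate of correct predictions
    false_rate.append(alarm[~events].mean())   # rate of false alarms
hit_rate, false_rate = np.array(hit_rate), np.array(false_rate)

# crude estimate of the likelihood ratio: slope of the roc near the origin
i = np.argmin(np.abs(false_rate - 0.01))       # point with about 1% false alarms
print("likelihood-ratio estimate near the origin:", hit_rate[i] / false_rate[i])
```

plotting hit_rate against false_rate for the swept thresholds reproduces the qualitative picture discussed below : curves above the diagonal correspond to predictions better than random , and a steep initial slope corresponds to a large likelihood ratio .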
hence once the precursor is chosen , the dependence on the event magnitude enters into the likelihood ratio , via the joint pdf of event and precursor .looking at the rather technical formula in eq .( [ mgeneral ] ) , there are two aspects , which we find remarkable : * the slope of the roc curve is fully characterized by the knowledge of the joint pdf of precursory variable and event .this implies that in the framework of statistical predictions all kinds of ( long - range ) correlations which might be present in the time series influence the quality of the predictions only through their influence on the joint pdf .* the definition of the event , e.g. , as a threshold crossing or an increment does change this dependence only insofar as it enters into the choice of the precursor and it influences also the set on which the integrals in eq .( [ mgeneral ] ) are carried out . both and the set have to be defined according to the type of events one predicts .when predicting , e.g. , increments via the precursory variable , then ] .the total probability to find increments of magnitude is given by hence the condition in eq .( [ c2b ] ) reads fig .[ fig : cgauss ] illustrates this expression and figure [ fig : cnumgauss ] compares it to the numerical results. for the ideal precursor the condition is according to eq .( [ cgauss]) zero , since in this case , the slope of the roc - curve tends to infinity and does not react to any variation in . for any finite value of the precursory variable we have to distinguish three regimes of , namely , or and finally also the case .in the first case we study the behavior of for a fixed value of the precursory variable and .this implies that and we can use the asymptotic expansion for large arguments of the complementary error function rocs for gaussian distributed i.i.d .random variables .the symbols represent roc curves which where made via predicting increments in normal i.i.d . random numbers .the predictions were made according to the prediction strategy described in sec .the lines represent the results of evaluating the integrals in eqs .( [ rcor ] ) and ( [ rf ] ) for the gaussian case .note that the quality of the prediction increases with increasing event magnitude.,width=340 ] which can be found in to obtain this expression is appropriate for since the asymptotic expansion in eq .( [ erfcapprox ] ) holds only if the argument of the complementary error function is positive . in this case is larger than zero , if is fixed and finite and . in the second case , we assume to be fixed , and .hence we can use the expansion in eq .( [ erfcapprox ] ) only to obtain the asymptotic behavior of the dependence on and not for the dependence on .an asymptotic expression of hence reads since tends to minus unity as the expression in eq .( [ gausszwei ] ) is positive if and if we can assume the squared exponential term to be sufficiently small .if the later assumption is not fullfilled one might observe some regions of intermediate values of , for which is negative .however the roc curves in fig .[ fig : rocgauss ] suggest that the influence of these regions is sufficiently small , if the alarm volume is chosen to be ] as an alarm volumen .hence we can expect that the influence of the regions , where is negative , is suppressed since we average over many different values of and the condition is positive as .( positive is meant here in the sense , that approaches the value zero for from small positive numbers . 
) in the third case , for and hence we find that is positive if . in totalwe can expect larger increments in gaussian random numbers to be easier to predict the larger they are .the rocs in fig .[ fig : rocgauss ] support these results .the pdf of the symmetrized exponential reads with , . applying the filtering mechanism according to the appendix we find the joint pdfs of precursory variable and event aposterior probabilities , the likelihood and the total probability to find events of magnitude if we are not interested in the range of the precursory variable , the total probability to find events is given by hence the condition reads figure [ fig : csymexp ] compares the results of the numerical evaluation of the condition and the analytical expression given by eq .( [ csymexp ] ) . since most precursors of large increments can be found among negative values , the numerical evaluation of becomes worse for positive values of , since in this limit the likelihood is not very well sampled from the data .this leads also to the wide spread of the bootstrap samples in this region .figure [ fig : csymexp ] shows that in the vicinity of the smallest value of the data set , the condition is zero .as we approach larger values of , approaches zero in the whole range of data values .that is why we would expect to see no influence of the event magnitude on the quality of predictions in the exponential case . the roc curves in fig .[ fig : rocsymexp ] support these results .the numerical roc curves were made via predicting increments in normal i.i.d .random numbers according to the prediction strategy described in sec .[ pre ] . the precursor for the roc - curvesis chosen as the maximum of the likelihood according to eq .( [ symexplike ] ) , i.e. , , so that the alarm interval is $ ] . in summarythere is no significant dependence on the event magnitude for the prediction of increments in a sequence of symmetrical exponential distributed random numbers .we investigate the pareto distribution as an example for power - law distributions .the pdf of the pareto distribution is defined as for with the exponent , the lower endpoint , and variance .filtering for increments of magnitude we find the following conditional pdfs of the increments : within the range the likelihood has no well defined maximum .however , since the likelihood is a monotonously decreasing function , we use the lower endpoint as a precursor . the total probability to find events of magnitude is given by where denotes the hypergeometric function with , . using and inserting the expressions ( [ paretolikeli ] ) and ( [ paretoapost ] ) for the components of we can obtain an explicit analytic expression for the condition . in fig .[ fig : powerlawcompare ] we evaluate this expression using _ mathematica _ and compare it with the results of an empirical evaluation on the data set of i.i.d . random numbers . figure ( [ fig : powerlawcompare ] ) displays that the value of depends sensitively on the choice of the precursor . for the ideal precursor all values of are negative .hence one should in this case expect smaller events to be better predictable .the corresponding roc curves in figs.[fig : powlroc3 ] , and [ fig : powlroc9 ] verify this statement of .in summary we find that larger events in pareto distributed i. i. d. 
random numbers are harder to predict the larger they are .this is an admittedly unfortunate result , since extremely large events occur much more frequently in power - law distributed processes than in gaussian distributed processes .hence , their prediction would be highly desirable .in this section , we apply the method of statistical inference to predict acceleration increments in free jet data .therefore we use a data set of samples of the local velocity measured in the turbulent region of a round free jet .the data were sampled by a hot - wire measurement in the central region of an air into air free jet .one can then calculate the pdf of velocity increments , where and are the velocities measured at time step and .the taylor hypothesis allows one to relate the time - resolution to a spatial resolution .one observes that for large values of the pdf of increments is essentially indistinguishable from a gaussian , whereas for small , the pdf develops approximately exponential wings .[ fjhisto ] illustrates this effect using the data set under study .thus the incremental data sets provides us with the opportunity to test the results for statistical predictions within gaussian and exponential distributed i.i.d .random numbers on a data set , which exhibits correlated structures .we are now interested in predicting increments of the acceleration in the incremental data sets . in the following we concentrate on the incremental data set , which has an asymptotically exponential pdf and the data set , which has an asymptotically gaussian pdf .furthermore we focus on increments between relatively large time steps , i.e. , , so that the short - range persistence of the process does not prevent large events from occuring . as in the previous sections we are hence exploiting the statistical properties of the time series to make predictions , rather than the dynamical properties .we can now use the evaluation algorithm which was tested on the previous examples to evaluate the condition for these data sets .the results are shown in fig .[ fjcondi ] .we find that at least for larger values of the main features of for the exponential and the gaussian case as described in sec .[ gaussian ] and [ symexpo ] are also present in the free jet data . for larger values of , is either larger than zero in the gaussian case ( ) or equal to zero in the exponential case ( ) in the region of interesting precursory variables , i.e. , small values of .however , the presence of the exponential and the gaussian distributions is more prominent in the corresponding roc curves . for the free jet dataset , the predictions were made with an algorithm similar to the one described in sec .[ pre ] . instead of a specific precursory structure , which corresponds to the maximum of the likelihood ,we use here a threshold of the likelihood as a precursor . in thissetting we give an alarm for an extreme event , whenever the likelihood that an extreme event follows an observation is larger than a given threshold value . in the exponential case ( ) shown in fig .[ fjrocs](a ) the roc curves for different event magnitude almost coincide , although the range of is larger ( ) than in the gaussian case shown in fig .[ fjrocs ] ( b ) . 
for roc curves are further apart , which corresponds to the results of secs .[ gaussian ] and [ symexpo ] .this example of the free jet data set shows that the specific dependence of the roc curve on the event magnitude can also in the case of correlated data sets be characterized by the pdf of the underlying process .we study the magnitude dependence of the quality of predictions for increments in a time series which consists in sequences of i.i.d .random numbers and in acceleration increments measured in a free jet flow . using the first part of the increment as a precursory variable we predict large increments via statistical considerations . in order to measure the quality of the predictions we use roc curves .furthermore we introduce a quantitative criterion which can determine whether larger or smaller events are better predictable .this criterion is tested for time series of gaussian , exponential and pareto i.i.d .random variables and for the increments of the acceleration in the free jet flow .the results obtained from the criterion comply nicely with the corresponding roc - curves .note that for both , the numerical evaluation of the condition and the roc - plots , we used only event magnitudes for which we found at least events , so that the observed effects are not due to a lack of statistics of the large events . in the sequence of gaussian i.i.d .random numbers , we find that large increments are better predictable the larger they are . in the pareto distributed time series we observe that in slowly decaying power laws larger events are harder to predict , the larger they are .we find no significant dependence on the event - magnitude for the sequence of exponentially i.i.d . random numbers .while the condition can be easily evaluated analytically , it is not that easy to compute numerically from observed data , since the calculation implies evaluating the derivatives of numerically obtained distributions . using savitzky - golay filtersimproved the results , but especially in the limit of larger events , where the distributions are difficult to sample , one can not trust the results of the numerically evaluated criterion .however , it is still possible to apply the criterion by fitting a pdf to the distribution of the underlying process and then evaluate the criterion analytically .although the magnitude dependence of the quality of predictions was observed in different contexts and for different measures of predictability , in this contribution only roc curves were used . in order to exclude the possibility that the effect is specific to the roc curve, future works should also include other measures of predictability .reviewing the results for the gaussian case and the slowly decaying power law from a philosophic point of view one can conclude that nature allows us to predict large events from the most frequently occuring distribution easily .however in gaussian distributions very large events are rare and therefore less likely to cause damage .whereas in the less frequently occurring distributions with heavy power - law tails , large events are especially hard to predict . therefore one can assume , that rare large impact events of processes with power - law distributions will remain unpredictable , although their prediction would be highly desirable .we thank j. 
peinke and his group for supplying us with the free jet data .an analytic expression for a filter which selects the pdf of our extreme increments out of the pdfs of the underlying stochastic process can be obtained through the heaviside function .( note that is not scaled by the standard deviation , i.e. , . )this filter is then applied to the joint pdf of a stochastic process or to be more precise to the likelihood that the step follows the previously obtained values .if we condition only on the last values , we neglect the dependence on the past . the likelihood that an event y(d)=1 follows in the step can then be obtained by multiplication with . if the resulting expression is nonzero , the condition of the extreme event in eq .( [ e0 ] ) is fulfilled and for and the following relation holds : hence it is possible to express the likelihood in terms of , which is a part of the precursory structure .we can use the integral representation of the heaviside function with appropriate substitutions to obtain if we are interested in the prediction of threshold crossings instead of increments , we can interpret as the magnitude of the threshold and set in order to obtain the corresponding expressions for the likelihood , the joint pdf , the aposterior pdf , and the total probability .99 david d. jackson , _ hypothesis testing and earthquake prediction _ , proc .* 93 * 3772 ( 1996 ) .h. kantz , d. holstein , m. ragwitz and n. k. vitanov , _ markov chain model for turbulent wind speed data _ , physica * a 342 * 315 ( 2004 ) .holger kantz , detlef holstein , mario ragwitz and nikolay k. vitanov , short time prediction of wind speeds from local measurements , in : _ wind energy proceedings of the euromech colloquium _ , edited by j. peinke , p. schaumann and s. barth , ( springer , new york , 2006 ) .s. hallerberg , e. g. altmann , d. holstein and h. kantz , _ precursors of extreme increments _ , phys .e * 75 * , 016706 ( 2007 ) m.reza rahimi tabar , m. sahimi , f. ghasemi , k. kaviani , m. allamehzadeh , j. peinke , m. mokhtaru and m. vesaghi , m. d. niry , a. bahraminasab , s. tabatabai , s. fayazbakhsh and m. akbari _ short - term prediction of medium- and large - size earthquakes based on markov and extended self - similarity analysis of seismic data _ arxiv : physics/0510043 v1 6 oct 2005 .a. b. shapoval and m. g. shrirman , _ how size of target avalanches influence prediction efficiency _ , international journal of modern physics c , * 17 * , 1777 ( 2006 ) .d. lamper , s.d .howison and n.f .johnson , _ predictability of large future changes in a competitive evolving population _ ,lett . * 88 * , 017902 ( 2002 ) .j. p. egan , signal detection theory and roc analysis , academic press , new york 1975 .g. e. p. box , g. m. jenkins and g. c. reinsel , time series analysis ( prentice - hall , inc ., eaglewood clif ., nj , 1994 ) .brockwell and r.a .davis , time series : theory and methods ( springer , new york , 1998 ) .j. m. bernado and a. f. m. smith , bayesian theory ( wiley , new york , 1994 ) .d. m. green and j. a. swets , _ signal detection theory and psychophysics . _( wiley , new york , 1966 ) .m. s. pepe , the statistical evaluation of medical tests for classification and prediction , ( oxford university press , new york , 2003 ) .j. broecker and l. a. smith _ scoring probabilistic forecasts : on the importance of being proper _ , weather and forecasting * 22 * , 382 ( 2007 ) .a. johansen and d. 
sornette , _ stock market crashes are outliers _ , european physical journal b * 1 * , 141 ( 1998 ) .n. vandewalle , m. ausloos , p. boveroux , et al ., _ how the financial crash of october 1997 could have been predicted _ , european physical journal b * 4 * 139 ( 1998 ) .a. savitzky and m. j. e. golay _ smoothing and differentiation of data by simplified least squares procedures _, analytical chemistry * 36 * 1627 ( 1964 ) . w. h. press , numerical recipes in c ( cambridge university press , cambridge , england 1992 ) .m. abramowitz , and i. a. stegun , handbook of mathematical functions , ( dover , new york , 1972 ) . w. feller , an introduction to probability theory and its applications ( wiley , new york ( 1970 ) , vol .ii . c. renner , j. peinke and r. friedrich , _ experimental indications for markov properties of small - scale turbulence _ , j. fluid .mech . * 433 * , 383 ( 2001 ) . c. w. van atta and j. park , statistical self - similarity and intertial subrange turbulence , in _ statistical models and turbulence _ , edited by m. rosenblatt and c. w. van atta , lect. notes in phys .12 , ( springer , berlin , 1972 ) , pp .402 - 426 .y. gagne , e. hopfinger and u. frisch a new universal scaling for fully developed turbulence : the distribution of velocity increments in _ new trends in nonlinear dynamics and pattern - forming phenomena _ edited by p. coullet and p. huerre , nato asi ( plenum press , new york 1990 ) , vol .315 - 319 .u. frisch turbulence cambridge university press , ( cambridge , england , 1995 ) .
|
we investigate the predictability of extreme events in time series . the focus of this work is to understand under which circumstances large events are better predictable than smaller events . to this end we use a simple prediction algorithm based on precursory structures which are identified using the maximum likelihood principle . using the receiver operating characteristic curve as a measure for the quality of predictions , we find that the dependence on the event magnitude is closely linked to the probability distribution function of the underlying stochastic process . we evaluate this dependence on the probability distribution function analytically and numerically . if we assume that the optimal precursory structures are used to make the predictions , we find that large increments are better predictable if the underlying stochastic process has a gaussian probability distribution function , whereas larger increments are harder to predict if the underlying probability distribution function has a power - law tail . in the case of an exponential distribution function we find no significant dependence on the event magnitude . furthermore , we compare these results with predictions of increments in correlated data , namely , velocity increments of a free jet flow . depending on the time scale , the velocity increments in the free jet flow are either asymptotically gaussian or asymptotically exponentially distributed . the numerical results for predictions within the free jet data are in good agreement with the previous analytical considerations for random numbers .
|
intelligent transport system ( its ) is a system that manages transportation from traffic management to law enforcement. one important object that widely explored by its is a vehicle and their properties , including type , color , and license plate .vehicle color is an important property for vehicle identification and provide visual cues for fast action law enforcement .recognize vehicle color is very challenging task because several factors including weather condition , quality of video / image acquisition , and strip combination of the vehicle .the first factor , weather condition , may dramatically change the color illumination of the acquisition image .for example , if the image / video taken at haze condition then there a lot of `` soft '' white noise added to the image .soft white noise means that the noise is not random but continues and blended with the foreground and background objects .the quality of video / image acquisition is affected the final decision of the vehicle color recognition system and its depends of the optical sensor in the camera .camera that can capture object at high speed is recommended for its , but not all installed camera in the road can do that .a lot of cameras installed in the road only used to monitor the traffic , pedestrians , and street conditions .the last factor is strip combination of the vehicle , which is very affected to the vehicle recognition system .region selection is very important to tackle the problem .there are some research paper published to tackle vehicle color recognition problem , like in .chen et al . use feature context and linear svm classifier to tackle the problem .feature context is a collection of histogram that build with several areas , like spatial pyramid structure but with different region configuration .in other paper , they try to tackle vehicle color recognition problem using 2d histogram with some roi configuration as features and neural network as classifier .baek et al . also use 2d histogram but without roi configuration and svm as classifier .another approach is described by son et al . which using convolution kernel to extract similarity between positive and negative images and then feed up those similarity score to svm classifier .color spaces are very important to color recognition applications , like vehicle color recognition .the selection of color space will impact the recognition performance .the most usable color space in digital photography is rgb color space , but rgb color space has problem to color recognition because channel of rgb color space contribute equal for each channel so to distinct color is more difficult .usually , researcher will not use rgb as their primary color space and convert it to other color spaces that separate illumination and color , like cie lab or hsv .another approach is to make 2d histogram of two channels , like h and s channel in hsv color space , and do classification using those 2d histogram . in this paper, we present vehicle color recognition method using convolutional neural network ( cnn ) .cnn is type of neural network but instead of using fully connected layer , cnn use layer called convolution layer to extract features from data .the training mechanism is very similar to normal neural network and use stochastic gradient descent as training algorithm .cnn is become very popular after winning the ilsvrc ( imagenet large scale visual recognition challenge ) 2012 . 
in those paper, they use more than 600,000 neuron and 7 hidden layer to provide good model of the data . to avoid overfitting krizhevsky et al . employed regularization method called dropout to the fully connected layer .the krizhevsky model is huge and as reported in the paper , the model trained in six day for 450,000 iteration in gpu hardware . before going into details , in section twowe describe detils related works in color recognition .section two describe details architecture of our cnn model .section three reports the experiments we have done and discuss the results .there are several research that try to tackle vehicle color recognition problem including in .the newest research is describe by chen et al . in 2014 and hsieh et al . in 2015 .chen et al .use feature context ( fc ) with selected configuration to divide the images into subregions , create histogram for each subregion , and learned it using linear svm .not all value in histogram is used to classify the vehicle color but the values clustered to form codebook for the problem and then choose the codebook as feature for the classifier . this mechanismknow as bag - of - word ( bow ) method .chen et al . done preprocessing using haze removal method and color contrast normalization method .the accuracy of system proposed by chen et al .is very high , over 92% .another paper by hsieh et al . proposed color correction using background image and two frame image of car . not only color correction method , hsieh et al .also proposed window removal method that remove the window part of the car images and classify vehicle color using lower part , like bumper and doors , of the car .the window removal done by taking the orientation of the car , fit the detail segmented car image by ellipse shape and cut a half of the ellipse .hsieh et al . done the experiments using three different classifier , g - classifier , dc - classifier , and dg - classifier .g - classifier responsible for classify gray and non - gray color .the method is very simple threshold method with assumption that for gray color the avarage of three channel , rgb , is very close with color value of each channel .the dc - classifier and dg - classifier trained using svm with features extracted from rgb and cie lab color space .red , green , blue , and yellow color class classified using dc - classifier and the rest of the color class classified using dg - classifier . from the experiments , hsieh et al .report that the average accuracy for the system is 93,59% with 7 color class including black , silver , white , yellow , red , green , and blue .fc also used by dule et al . to tackle vehicle color recognition problem .the different between fc used by chen et al . and dule et al .is that dule et al . only used two roi ( smooth hood peace and semi front vehicle ) .roi is selected automatically using plate detection method and otsu thresholding to search smooth hood peace and heuristic approach for semi front vehicle .the classifier used by dule et al .are k - nn , ann , and svm .the best accuracy that reported in dule et al .paper is 83,5% with configuration of 8 bin histogram , several combination of color spaces , and ann classifier .other approach for vehicle color recognition problem is classify vehicle color using 2d histogram features .baek et al . proposed the vehicle color recognition system using 2d histogram features and svm classifier .hue and saturation in hsv color space is used for creating the 2d histogram . 
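the 2d hue - saturation histogram used by the approaches summarized above can be sketched in a few lines ; the code below ( opencv and numpy ) builds such a descriptor for a single image , which could then be fed to an svm or nearest - neighbour classifier . the bin counts and the normalization are assumptions for illustration and do not reproduce the exact configurations of the cited papers .

```python
# illustrative 2d hue-saturation histogram feature, the kind of descriptor used
# by the histogram-based color recognition approaches discussed above.
# the bin counts and the normalization are assumptions, not the cited settings.
import cv2
import numpy as np

def hs_histogram(bgr_image, bins=(16, 16)):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # histogram over hue (0..180 in opencv) and saturation (0..256)
    hist = cv2.calcHist([hsv], [0, 1], None, list(bins), [0, 180, 0, 256])
    hist = hist.flatten()
    return hist / (hist.sum() + 1e-12)     # normalize so images of different size compare

# usage: feature = hs_histogram(cv2.imread("car.jpg")); stack features and train an svm
```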
from the experiments , the average accuracy of the system is 94,92% .the dataset used in the experiment has 500 outdoor vehicle images with five color class including black , white , red , yellow , and blue color class .son et al . proposed other possible approach for color recognition using similirity method .the system using grid kernel that run on hue and saturation channel of hsv color space .the same dataset as in is used in the experiments .son et al . reported only precission and recall for each color class .the percentage of precission and recall from the experiments is very high and close to 100% .high precission and high recall indicate that the model has good accuracy .the architecture of our cnn can viewed in figure [ fig : cnnarch ] .our cnn architecture consists 2 base networks and 8 layers for each base network with total 16 layers .the first two layers of our cnn architecture is a convlutional layer and it does convolution process following by normalization and pooling .convolutional layer is a layer that do convolution process that same as convolution process in image processing algorithm . for an input image and is a some convolution kernel , output image for convolution process can be written as = \sum_{j=-\infty}^\infty\sum_{i=-\infty}^\infty i_i[i , j].h[m , n]\end{aligned}\ ] ] with $ ] is pixel value at coordinate .training process of cnn will learn , may called as kernel , as parameters of convolutional layer .the choice of activation function in convolutional layer have huge impact for the networks .there a several choice of activation function including and ( rectified linear unit ) . in our cnn networkswe use activation function for all layers including the fully - connected layers . the normalization process done by following equation [ eq : lrn ] with , , and . with is normalization result and is output of layer activation function for convolution at coordinate . using those normalization , the accuracy of cnn increase about 2% according to .the last process in two first layers is pooling process .there are two type of pooling , max pooling and mean pooling .each type has different approach , max pooling will take maximum respon from the convolutional process which is shape with sharp edges and mean pooling will take the average of the convolutional process respon which is summarize the shape in neighborhood . in our cnn architecture , we use max pooling with size 3x3 and stride 2 for overlapping pooling .the second , fourth and fifth layer are grouping into two group which each group is independent each others .the third and fourth layer is also a convolutional layer but without pooling and normalization process .output of third and fourth layer is same as input because we use 3x3 kernel with pad 1 added for each border .the fifth layer is convolutional layer with only pooling process without normalization . 
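since the constants of the normalization equation are not legible in the extracted text , the sketch below implements the alexnet - style local response normalization with the constants of krizhevsky et al . as an assumption ; it normalizes each activation by a sum of squares over neighbouring feature maps at the same spatial position , which is the operation referred to above as the normalization process .

```python
# sketch of local response normalization across channels, in the alexnet style.
# the constants k, alpha, beta and the neighbourhood size n are taken from
# krizhevsky et al. as an assumption; the values actually used in this paper
# are not legible in the extracted text.
import numpy as np

def local_response_norm(a, n=5, k=2.0, alpha=1e-4, beta=0.75):
    # a has shape (channels, height, width); each activation is divided by a
    # power of the sum of squares over n neighbouring channels at the same pixel.
    channels = a.shape[0]
    b = np.empty_like(a)
    for i in range(channels):
        lo, hi = max(0, i - n // 2), min(channels, i + n // 2 + 1)
        denom = k + alpha * np.sum(a[lo:hi] ** 2, axis=0)
        b[i] = a[i] / denom ** beta
    return b
```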
before going into a fully - connected layers , the pooling output of the fifth layer from two base networks is concatenate and flattened into one long vector .the sixth and seventh layer is a fully - connected layer employed dropout regularization method to reduce overfitting .the last layer is the softmax regression layer which can describe in the following equation with is probability of being class given input with weight parameter .overall , our cnn architecture consists 2 base networks , 8 layers each with total 16 layers .first layer use 11x11 kernel with total 48 kernels , second layer use 3x3 kernel with total 128 kernels , third use 3x3 kernel with total 192 kernels , fourth layer use 3x3 kernel with total 192 kernels , and fifth layer use 3x3 with total 128 kernels .pooling process is employed in first , second , and fifth layer with same parameter , pooling size of 3x3 with 2 pixel stride .sixth , seventh , and eight layers is fully - connected layers with each 4096 - 4096 - 8 neuron with dropout regularization method employed in sixth and seventh layer .the network s input is a 3 channel image with 150,228 dimensional or 227x227 resolution .total neuron involved in the networks is 658,280 neurons .our models trained using stochastic gradient descent with 115 examples per batch , momentum of 0.9 and weight decay of 0.0005 . for the experiments , we use chen dataset and some sample images of the dataset can be viewed in figure [ fig : sample_dataset ] .the dataset contains 15601 vehicle images with 8 classes of vehicle color , which are black , blue , cyan , gray , green , red , white , and yellow . in the training process, half of class examples are used .each example is resized into 256x256 resolution with certain color spaces .we use four different color spaces , rgb , cie lab , cie xyz , and hsv . before the data processed for training, it cropped to 227x227 and subtracted by mean image of the training data . in training process the data randomly mirrored to increase the classifier accuracy .we use learning rate of 0.01 and reduced continuously by a factor of 10 at multiple iteration of 50,000 with maximum iteration of 200,000 .we use caffe framework to implement our models .the weights of the networks are initialized using a gaussian function with for connecting weights and fixed value of for bias value .the stochastic gradient descent method , sgd for short , is an optimization method that want to find minimum or maximum value of some function .sgd will work for all function that have gradient or first derivative .usually the system use sgd for minimizing the error or loss function and update the weight parameters based on following function with is current weight parameters , is learning rate , and is the gradient of loss function with respect to input examples . for faster model convergence ,the weight decay and momentum are added to the update equation .the final equation of update function in sgd method is describe following with is momentum variable and is weight decay .changing momentum and weight decay may accelerate the training process . 
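to make the layer configuration easier to follow , a hedged pytorch - style sketch of the two - branch network is given below . the kernel counts , pooling sizes , fully - connected widths , dropout and the 8 - way output follow the text ; the stride of the first 11x11 layer , the paddings and the use of pytorch s built - in local response normalization are assumptions , since these details are not stated explicitly here ( the original implementation used the caffe framework ) .

```python
# hedged pytorch sketch of the two-branch cnn described in the text.
# stated in the text: 11x11/48, 3x3/128, 3x3/192, 3x3/192, 3x3/128 conv layers,
# 3x3 max pooling with stride 2 after layers 1, 2 and 5, two 4096-unit
# fully-connected layers with dropout, and an 8-way softmax output.
# assumptions (not stated in the extracted text): stride 4 for conv1, the
# paddings, and pytorch's built-in local response normalization.
import torch
import torch.nn as nn

def base_branch():
    return nn.Sequential(
        nn.Conv2d(3, 48, kernel_size=11, stride=4), nn.ReLU(),
        nn.LocalResponseNorm(5), nn.MaxPool2d(3, stride=2),
        nn.Conv2d(48, 128, kernel_size=3, padding=1), nn.ReLU(),
        nn.LocalResponseNorm(5), nn.MaxPool2d(3, stride=2),
        nn.Conv2d(128, 192, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(192, 192, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(192, 128, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(3, stride=2),
    )

class TwoBranchColorNet(nn.Module):
    def __init__(self, num_classes=8):
        super().__init__()
        self.branch1, self.branch2 = base_branch(), base_branch()
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(4096), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(4096, num_classes),      # softmax is applied inside the loss
        )

    def forward(self, x):
        # both branches see the same 227x227 rgb input; their pooled fifth-layer
        # outputs are concatenated and flattened into one long vector.
        features = torch.cat([self.branch1(x), self.branch2(x)], dim=1)
        return self.classifier(features)

model = TwoBranchColorNet()
out = model(torch.randn(2, 3, 227, 227))
print(out.shape)   # -> torch.Size([2, 8])
```

training such a model with stochastic gradient descent , momentum 0.9 and weight decay 0.0005 as described above would use the standard momentum update ; the hyperparameters in the sketch are only those quoted in the text .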
the training process is done on gpu hardware to reduce the training time. our gpu hardware consists of 14 multiprocessors with 3 gb of memory. there are two limitations of our gpu hardware for the training process: the memory limits the size of the network and the batch size used in training, and the maximum dimension of the grid block execution (the parallel execution configuration) also limits the batch size. the training process took over 2 gb of gpu memory for the data and the network, with 4 days of execution time.

table [ tbl : tbl_acc ] reports the per-class accuracy for the four color spaces and the reference system of chen et al.; the row labels (one per color class, with the overall average evidently in the last row, whose 0.9447 matches the 94.47% rgb figure quoted below) and the column labels were lost in extraction, and starred entries marked the best value in each row in the original:

0.9794    0.9450    0.9656    *0.9828*  0.9553
*0.9666*  0.9624    0.9561    0.9649    0.9423
0.9410    *0.9576*  0.9410    0.9484    0.9535
0.9645    0.9716    0.9645    0.9716    *0.9787*
*0.9897*  0.9866    *0.9897*  0.9886    0.9878
0.8608    0.8503    *0.8668*  0.8647    0.8466
*0.9738*  0.9703    0.9703    0.9709    0.9730
*0.8257*  0.8215    0.8215    0.7676    0.7884
*0.9447*  0.9372    0.9414    0.9432    0.9282

table [ tbl : tbl_exec ] gives the average execution time per test example (row labels inferred from the surrounding text):

                      cpu (1 core)    gpu (448 cores)
initialization time   4.849 s         4.849 s
classification time   3.248 s         0.156 s

for testing purposes, we use the 50% of the dataset examples that were not used in the training process. table [ tbl : tbl_acc ] summarizes our testing results for the four color spaces and compares them with the system provided by chen et al. each class contains a different number of examples, ranging from 141 to 2371. from table [ tbl : tbl_acc ] it can be seen that the rgb color space achieves the highest testing accuracy, with an average accuracy of 94.47%. all four color spaces used in the models have high accuracy, more than 90%, with a narrow spread. the results show that our cnn model outperforms the original system provided with the dataset by chen et al. our models outperform the chen et al. system in the yellow, white, blue, red, gray, black, and green color classes. only in the cyan color class does our system have lower accuracy than the chen et al. system, and the difference is only 0.7%. figure [ fig : confmat ] shows the confusion matrix for our model using the rgb color space. the confusion matrix shows that the worst accuracy of our model is in the green and gray color classes. some examples of the green class are misclassified as gray, at a rate above 10%. as seen in the dataset, some green color class examples have a color very close to gray, more a green-gray than a green, so the classifier may wrongly classify them as gray. the same happens in the gray color class, where some gray examples are misclassified as white. this may occur because of very bright sunlight reflections on metallic paint, or because the color is so light that it is very close to another color.

another issue to tackle is the execution time needed to classify vehicle color. we implement the models on two different hardware configurations: the first runs the model on a 1-core cpu and the second on a 448-core gpu, an nvidia tesla c2050. table [ tbl : tbl_exec ] summarizes the average execution time over all testing examples. as shown in table [ tbl : tbl_exec ], the models running on the gpu are more than 20x faster than the models running on the cpu, so the execution-time issue is solved if the models run on an appropriate hardware configuration. the initialization time is the time the system needs to prepare the model, load it into memory, and load the mean image.
for a practical implementation, we recommend a client-server mechanism: send the vehicle detection result to the server, perform the vehicle color classification in the server backend using gpu hardware, and send the result back to the intelligent transportation system for further processing.

to see how our models capture color information in the data, we visualize several layers of our cnn models. the first convolutional layer is an important part of the network for extracting low-level features. figure [ fig : first_conv ] shows a visualization of all kernels in the first convolutional layer together with an example output of the pooling process in layers conv1 and conv2 of our cnn architecture. as seen in figure [ fig : first_conv ], the first convolutional layer captures rich color features in the input image. all vehicle color variations in the dataset are present in the kernels. the kernels from network 1, figure [ fig : n1_conv1 ], capture a lot of cyan-like color. the cyan-like color that appears in the kernels may contribute to the red color class or the cyan color class. other colors that appear repeatedly in the kernels are red-blue, green-gray, and orange-like colors. for further investigation, we capture the response of the convolutional layer followed by the normalization and pooling processes, shown in figure [ fig : pooloutput ]. we test our models using one of the test images and try to analyze the behaviour of our models. figure [ fig : pooloutput ] shows that for the yellow color class many of the green-like color kernel neurons are active, and it appears that our models have learned that color can be recognized from the hood color or the roof color of the car. this behaviour occurs because almost all of the images in the dataset show the front of the car, taken from some height with only a small deviation in angle, so the side of the car is not covered very much. the camera configuration used for the images in the dataset simulates cctv or other street cameras, which typically use such a configuration.

in this paper, we present a vehicle color recognition system using a convolutional neural network. our model successfully captures vehicle color with very high accuracy, 94.47%, and outperforms the original system provided by chen. from the experiments, the best accuracy is achieved using the rgb color space; this contradicts several papers that do not recommend the rgb color space for color recognition and use other color spaces such as hsv or yuv. the execution time of our models is about 3 s on cpu (1 core) and 0.156 s on gpu (448 cores); although this is slower than the system provided by chen, it can still be used in practical implementations with some adjustments.

hsieh, l.-c. chen, s.-y. chen, d.-y. chen, s. alghyaline, and h.-f. chiang, _vehicle color classification under different lighting conditions through color correction_, ieee sensors journal, 2015, vol. 15, issue 2, pp. 971-983. n. srivastava, g. e. hinton, a. krizhevsky, i. sutskever, and r. salakhutdinov, _dropout: a simple way to prevent neural networks from overfitting_, journal of machine learning research 15 (2014), pp. 1929-1958. y. jia, e. shelhamer, j. donahue, s. karayev, j. long, r. girshick, s. guadarrama, and t. darrell, _caffe: convolutional architecture for fast feature embedding_, acm international conference on multimedia 2014, pp. 675-678. e. dule, m. gokmen, m. s.
beratoglu, _a convenient feature vector construction for vehicle color recognition_, proc. 11th wseas international conference on neural networks, evolutionary computing and fuzzy systems, pp. 250-255 (2010). j.-w. son, s.-b. park, and k.-j. kim, _a convolutional kernel method for color recognition_, international conference on advanced language processing and web information technology (alpit 2007), pp. 242-247.
|
vehicle color information is one of the important elements in its (intelligent traffic systems). in this paper, we present a vehicle color recognition method using a convolutional neural network (cnn). naturally, cnns are designed to learn classification based on shape information, but we show that a cnn can also learn classification based on color distribution. in our method, we convert the input image to two different color spaces, hsv and cie lab, and run it through a cnn architecture. the training process follows the procedure introduced by krizhevsky, in which the learning rate is decreased by a factor of 10 after a number of iterations. to test our method, we use the publicly available vehicle color recognition dataset provided by chen. the results show that our model outperforms the original system provided by chen, with 2% higher overall accuracy.
|
the analogy between filtering and object detection in point distributions was recognized since more than 25 years ( e.g. , yashin 1970 , snyder 1972 , rubin 1972 ) .recent astronomical applications are given in , e.g. , dalton et al .( 1994 ) , kawasaki et al .( 1998 ) , and postman et al .( 1996 ) . in the latter project ( palomar distant cluster survey , pdcs ) clusters of galaxies are detected and their redshifts and richnesses are estimated by maximizing the excess of the galaxy number counts as a function of angular coordinate and apparent magnitude with respect to the foreground and background galaxy distribution . in the following both distributionsare referred to as ` background ' distribution .the algorithm is based on the assumption that the statistics are dominated by the background galaxy counts . in this ` high - background approximation ' the expected large numbers of galaxies ensure the validity of a gaussian approximation of the corresponding likelihood function ( central limit theorem ) which is maximized to find the clusters and their basic parameters .it can be shown that this type of maximization corresponds to the minimization of the error - weighted squared differences between observed and modeled data ( method of least squares , e.g. , barlow 1996 , p.93 ) .all counting errors are attributed to poisson noise in the background .the method outlined above is optimized in finding distant clusters where the approximations seem to be appropriate . in the followingwe describe a generalization of the pdcs filter where no _ a priori _ information about the contrast of the cluster distribution with respect to the background distribution is needed .we replace the gaussian approximation of the likelihood function by an exact expression based on local poisson distributions with position - dependent mean intensities guided by the projected cluster number density profile and by the luminosity function . a similar approach based on binned datawas developed independently by kawasaki et al .their finally used likelihood function , however , differs from the one derived in the present paper .the aim of the present investigation is to study a mathematically exact and general likelihood filter which can be further optimized and then applied to the detection and characterization of isolated galaxy clusters located in comparatively uniform background galaxy fields .we plan to apply the method to galaxy fields centered around flux - limited samples of x - ray sources , e.g. , to the rosat - eso - flux - limited x - ray ( reflex ) cluster catalogue ( bhringer et al ., in preparation ). the algorithm can be applied , however , to a much broader class of detection problems .the reflex clusters have measured redshifts up to ; a few exceptional cases have .the bulk of the data is located in the range . 
within the reflex project, the filter can be used to compute statistical significances of cluster detections , and to estimate redshifts and richnesses of the optical counterparts .the redshift estimates could further support those cluster redshifts which are obtained spectroscopically with only small numbers of cluster galaxies .the richness estimates could give more information about selection effects introduced by the complex x - ray / optical survey process of the reflex project .the approach is based on the assumption that the spatial and the magnitude distributions of the galaxies located in the direction of clusters can be modeled by a marked inhomogeneous poisson point process .the resulting likelihood function is given in a compressed analytic form using the concept of the likelihood ratio statistics ( sect.[s_gm ] ) .it is shown that the pdcs filter can be recovered in cases where background number counts dominate the statistic ( appendixb ) .some practical notes on the application of the filter are given in sec.[s_pc ] .in sect.[s_fp ] the performance of the general method is illustrated by analyzing simulated data to measure possible statistical biases of the derived numerical estimators ( very often maximum likelihood estimators are not unbiased ; see , e.g. , ripley 1991 , sect.4 , and barlow 1996 , p.84 ) and by analyzing subsamples of the cosmos galaxy catalogue to test the methods under more realistic survey conditions .the results are summarized and the deviations from the idealized model assumptions are discussed in sect.[s_cr ] .all computations assume pressureless friedmann - lematre world models with the hubble constant , , in units of , a negligible cosmological constant , and the deceleration parameter .the spatial distribution of the galaxies can be regarded as an inhomogeneous but in principle more or less random pattern of points , i.e. , as a realization of a point process in mathematical terminology ( neyman & scott 1952 , layzer 1956 , see also neyman 1961 and references therein ) .the point process can be considered either as a random set of discrete points or as a random measure , counting the number of points in a given region ( method of counts - in - cells ) .the points may be distributed in a three - dimensional volume , in a two - dimensional patch on the celestial sky , in a magnitude space , etc .all distributions with random locations of points and with local density parameters guided by a _ nonrandom _ variable fall into the category of inhomogeneous poisson point processes .this is the case for the inhomogeneous galaxy distribution seen in the direction of one galaxy cluster where the local more or less radial - symmetric variation of the projected galaxy number density profile plus the uniform background field are determined by the intensity .the inhomogeneous poisson point process is well - known in the theory of point processes ( see , e.g. , cressie 1993 , p.650 and references therein ) . if we replace the deterministic variation of the density parameter by avariable we enter the field of doubly stochastic poisson processes ( cox processes , see , e.g. , stoyan , kendall , & mecke 1995 , p.154 ) which are the general types of point processes to characterize large - scale distributions of galaxies ( e.g. 
, a gaussian random field combined with local poisson point processes ) .let be one spatial realization of a point process .the points , all with different coordinates , are distributed within the total volume .a useful quantity with a simple interpretation and a direct relation to powerful analytical tools provided by the theory of point processes is the local janossy density , : after multiplication of with the product of the volume elements it gives the probability that in the total volume there are exactly points in the process , one point in each of the distinct infinitesimal half - open regions .the janossy density may be regarded as the likelihood of the realization , ignoring as usual the principle differences between probability density functions and sample functions : assume that the number of galaxies observed per unit solid angle depends only on the spatial coordinate and gives the intensity .more complicated models where also the magnitudes of the galaxies are taken into account are discussed below in the context of marked point processes ( see sec.[ss_impp ] ) . the intensity is considered as the -dependent density parameter of an inhomogeneous poisson process .the janossy density and thus the likelihood function of the inhomogeneous poisson point process can be derived directly by using the machinery of probability generating functionals and khinchin measures ( daley & vere - jones 1988 , p.498 ) .the application of this formalism offers , however , the possibility to derive likelihood functions for more complex point processes , e.g. , multiple stochastic point processes which might be interested for many cosmological applications . a guideline through the basic equations of the formalismis given in appendixa .for inhomogeneous poisson point processes this standard formalism leads to the logarithmic likelihood ( see also snyder 1975 , karr 1986 , and appendixa ) equation ( [ l1 ] ) can also be found in a more heuristic way . as already mentioned above , the quantity may be regarded as the probability of finding exactly one point in each of the infinitesimal volume elements , ( case a ) , whereas outside of these regions no further points are present , ( case b ) .these events are independent , due to the independence properties of the poisson process .the probabilities for ( a ) are given by ( for each individual point ) , and the probability for ( b ) is given by the avoidance or void probability function , .the integration extents over the volume reduced by the sum of the infinitesimal volume elements , , which can be neglected due to their small size so that , in total , the integration extents over the complete volume ( stoyan & stoyan 1992 , p.258 ) . in order to generalize the model described above note that galaxy distributions can also be regarded as realizations of _ marked _ spatial point processes , each consisting of locations of events in a bounded study region and associated measurements ( marks ) .typical marks , often found in cosmological applications are apparent magnitude , luminosity , mass , energy , color , morphological type , etc . in this casethe realization of a marked point process is .we assume the absence of any segregation effects , i.e. 
, that the marks are independent - and - identically - distributed and are independent of the associated marginal spatial point process .define the point process by a counting measure on the product space of two intervals with , for example , , and .the moment measures of a marked spatial point process are simple extensions of the moment measures of an ordinary spatial point process .the generalization of ( [ l1 ] ) to marked inhomogeneous poisson point processes is thus straightforward and is given by usually , can be factorized as where is the conditional density of the marks , so that the log - likelihood can be maximized separately for the parameters characterizing and . in the case of finding clusters of galaxies using spatial and magnitude information not be factorized as can be seen from the model for the number of galaxies observed per unit area and per unit magnitude interval introduced by postman et al .( 1996 ) , here , gives the differential magnitude number counts of the background galaxies , is the luminosity function ( e.g. , in the schechter prescription ) of one cluster shifted along the apparent magnitude scale in accordance with the given redshift parameter and superposed onto the background distribution , is the cluster angular surface number density profile as a function of the projected ( radial ) distance from the center of the cluster with the projected characteristic radius , and is a dimensionless parameter characterizing the intrinsic cluster richness .equation ( [ mux ] ) assumes a uniform , i.e. , unclustered background galaxy distribution , and that all clusters have the same luminosity function and the same spatial number density profile . for the model ( [ mux ] ) with given , and the normalization ( if the integrals do not diverge ) equation ( [ lm1 ] ) can be written as it is convenient to normalize likelihood functions using the concept of likelihood ratio statistics .further notes on the scope of adaptability of the likelihood ratio statistics , especially when not all random variables may be independent , can be found in sen & singer ( 1993 , p.72 ) .for the present situation where is assumed to be known the results do , however , not explicitly depend on the specific normalization . as a reference process in the denominatorusually the homogeneous poisson process is chosen . in order to get a likelihood ratio where its maximization explicitly maximizes the contrast between cluster and background , an inhomogeneous poisson process with an -independent density as given in ( [ mux ] ) for ( background model ) seems to be more appropriate .note that although the point distribution of this specific model is homogeneous in the spatial domain it is still inhomogeneous in the magnitude space .the corresponding log - likelihood is with this normalization the final log - likelihood ratio is the richness parameter is obtained from the relation , resulting in the condition : equations ( [ lr1 ] ) and ( [ lam1 ] ) can be used for the detection of clusters in the following way : in the first step , values for and are selected so that , for given functional forms of and , equation ( [ lam1 ] ) gives the corresponding richness parameter using standard numerical root finding algorithms ( see , e.g. , brent 1973 , sect.3 and 4 ) . 
in the second step , this value is inserted into ( [ lr1 ] ) where serves as a significance measure for the detected cluster candidate .the relations between the observed and the distant - independent values of the filter parameters are determined by the redshift and the chosen cosmological model . the final cluster redshift and richness values are thus fixed by the value with the highest log - likelihood ratio . from it can be deduced that for a general galaxy field , is the mode of the distribution of the logarithmic values of the likelihood ratios .the distribution is asymmetric with a long tail to larger -values where the rich clusters are expected .approximations of the equations ( [ lr1 ] ) and ( [ lam1 ] ) are given in appendixb .for the application of the filter the intervals and in ( [ norm ] ) must be specified .the following computations assume the half - open - and -ranges in equations ( [ interv ] ) is the halo or cutoff radius defined by for , and for .the bright and the faint - end limit of the apparent magnitudes of the sample galaxies are and , respectively . for radial - symmetric cluster profiles , equations ( [ interv ] ) , ( [ mux ] ) , and ( [ norm ] ) give a second estimator for the cluster richness , with the specific choice ( [ interv ] ) , is the difference between the total number of galaxies , , seen in the direction of the cluster , and the number of background galaxies , , expected for the cluster area determined by .therefore , is a statistical measure of the number of cluster galaxies , , detected in the magnitude range above the background distribution .the estimator gives richnesses which depend on cluster redshift .note the difference between defined as a parameter characterizing the intrinsic cluster richness by equation ( [ mux ] ) without any constrains on the intervals and , and defined as an estimator of the cluster richness through equation ( [ estrich ] ) under the constrains ( [ interv ] ) .equation ( [ estrich ] ) not only shows that estimates of cluster richnesses obtained with ( [ lam1 ] ) or ( [ estrich ] ) are biased .the equation also offers a new way to circumvent the more complicated estimation of by solving condition ( [ lam1 ] ) .richnesses obtained with ( [ estrich ] ) are , however , not obtained with the maximum likelihood principle .therefore , the combination of with ( [ lr1 ] ) does not necessarily yield consistent maximum likelihood values .a redshift - independent and thus unbiased estimate , , of the intrinsic cluster richness may be obtained from the ratios ^{-1}\,,\end{aligned}\ ] ] where is given by ( [ lam1 ] ) , and the cutoff magnitude , , used as the upper limit of the numerical integration of the luminosity function , must be chosen in accordance with the cluster redshift .this magnitude may be defined by \,.\ ] ] here , is determined by the cluster redshift and by a predefined increment , magnitudes fainter than .the last equality in ( [ estrich1 ] ) uses the normalization ( [ norm ] ) separately for the radial and for the magnitude profile : the equations ( [ estrich1 ] ) can also be used to correct for redshift biases .finally , it should be noted that must be bright enough to include the brightest galaxy of the sample .the following computations illustrate the basic properties of the likelihood filter . the model is given by the equations ( [ lr1 ] ) , ( [ lam1 ] ) , ( [ interv ] ) , ( [ estrich1 ] ) , and ( [ incr ] ) . 
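as a rough illustration of the two-step detection procedure just described, the sketch below assumes the generic unbinned-poisson form of the likelihood ratio, $\ln lr(\lambda)=\sum_i\ln(1+\lambda\,s_i/b_i)-\lambda\,s$, where $s_i$ and $b_i$ are the cluster-model and background intensities at the position and magnitude of galaxy $i$, and $s$ is the integrated cluster model; this reproduces the structure of equations ( [ lr1 ] ) and ( [ lam1 ] ) but is not copied from them, since the inline formulas are stripped from this extract. the data and the value of $s$ are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import brentq

def log_lr(lam, s_over_b, S):
    """ln likelihood ratio for an assumed model mu = b + lam*s (unbinned poisson)."""
    return np.sum(np.log1p(lam * s_over_b)) - lam * S

def dlog_lr(lam, s_over_b, S):
    """derivative in lam; its root is the maximum-likelihood richness."""
    return np.sum(s_over_b / (1.0 + lam * s_over_b)) - S

def detect(s_over_b, S, lam_max=1e4):
    """step 1: root-find the richness; step 2: evaluate the significance measure."""
    if dlog_lr(0.0, s_over_b, S) <= 0.0:          # no overdensity above background
        return 0.0, 0.0
    lam = brentq(dlog_lr, 0.0, lam_max, args=(s_over_b, S))
    return lam, log_lr(lam, s_over_b, S)

# toy example: 200 field galaxies plus 30 galaxies with strong cluster weight
rng = np.random.default_rng(2)
s_over_b = np.concatenate([rng.uniform(0.0, 0.05, 200),
                           rng.uniform(2.0, 5.0, 30)])
S = 25.0                                          # placeholder integral of the cluster model
lam_hat, ln_lr = detect(s_over_b, S)
print(round(lam_hat, 3), round(ln_lr, 2))
```

in a full application this pair of numbers would be recomputed on a grid of trial redshifts (which shift the luminosity filter and rescale the radial filter), and the redshift and richness with the highest log-likelihood ratio would be retained, as described above.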
all numerical simulations and reductionsare performed for the photographic passband .observed data are taken from the cosmos galaxy catalogue ( e.g , heydon - dumbleton et al .1989 ) . for the magnitude filter , , a schechter - type luminosity function is used with the characteristic magnitude , [mag ] , and the faint - end slope , ( colless 1989 , lumsden et al .the cosmos magnitudes are corrected for the effect of the limited dynamic range within the microdensitometer ( ` saturation effect ' ) as in lumsden et al .( 1997 , sect.2.2 ) .the cosmic corrections for the passband are parameterized as [mag ] ( efstathiou , ellis , & peterson 1988 ; dalton et al .1997 ) . if the galaxy magnitudes have large random errors one has to replace by the luminosity function convolved with the magnitude errors .the corrections for galactic extinction are applied to observed data using the correlation between visual extinction and neutral hydrogen column density , /a_v]-valued ( test ) function . in ( [ gf ] ) the point process on represented by , and the points are located at the spatial positions . in the second step , the logarithmic generating functional obtained with ( [ gf ] ) is compared with the expansion \,&=&\,-k_0(a)\,+\,\sum_{n=1}^\infty\,(n!)^{-1}\ , \int_{a^{(n)}}\nonumber \\ & & \cdots\int\,h(x_1)\cdots h(x_n)\,k_n(dx_1\cdots dx_n|a)\,,\end{aligned}\ ] ] to get the khinchin measures , .some remarks concerning the derivations of both equation ( [ gak ] ) and ( [ jn1 ] , see below ) can be found in daley & vere - jones ( 1988 , sect.5.5 , and p.230 ) . in the last step ,the measures are inserted into the exansion of the local janossy densities , to get in combination with equation([la ] ) the likelihood function of the point process .the second sum on the right - hand side of ( [ jn1 ] ) is taken over all -partitions where gives the number of elements in each partition set .example : inhomogeneous poisson point process . + for this process , equation ( [ gf ] ) gives ( cressi 1993 , p.650 ) \,=\,-\int_a\,(1-h(x))\,\lambda(x)\,dx\,.\ ] ] with equation ( [ gak ] ) we obtain the only nonzero khinchin measures , and , resulting to the janossy density which directly leads to the likelihood function of the inhomogeneous poisson point process ( eq.[l1 ] in sect.[ss_ippp ] ) .equations ( [ lr1 ] ) and ( [ lam1 ] ) are mathematically exact and are based on no special mathematical approximation . 
in this section it will be shown that the equations of the digital filter derived by postman et al .( 1996 ) not the finally equations used can be obtained if in this limit the left side of the condition ( [ lam1 ] ) may be approximated by from which a closed analytic form for the cluster richness is obtained in the same limit ( [ lr1 ] ) reduces to inserting ( [ hba2 ] ) into ( [ hba3 ] ) gives an analog equation to ( [ lr1 ] ) in the high - background approximation it is seen that equations ( [ hba4 ] ) and ( [ hba2 ] ) correspond to equations ( 15 ) and ( 14 ) in postman et al .( 1996 ) , respectively , if and and the normalization ( [ norm ] ) holds .in the present investigation median polishing is used to identify those photographic plates where the surface number density of galaxies deviates significantly from the central plate containing the major part of the galaxy cluster , so that the galaxies on those ` pathological ' plates can be rejected from further analyses .the algorithm is , however , more powerful and thus interesting enough to be described in more detail .the method can , for example , be used for samples of sufficiently large sizes as a _ definition procedure _ of stationary point distributions .let , be the number of points counted in the cell with the indices and .the and the index may number the count cells along the right ascension and along the declination axis , respectively .we follow the terminology of cressi ( 1993 , p.186 ) and define for and define for here , is the median of .the algorithm starts with usually the method converges after three to five iterations .after median polishing the original data matrix is replaced by a residual matrix , , and by the extra elements containing the deviations along the rows , , the deviations along the columns , , and the average background level , .these quantities are related by equation([mpol4 ] ) shows that describes the overall median surface number density of galaxies in the area covered by the spatial indices and .correspondingly , the projected median surface number density profile in excess to the global median along , e.g. , the right ascension and the declination axes , is thus given by the and by the vector , respectively .the residual matrix gives the remaining deviations from ( first - order ) stationarity .the three effects are unbiased estimates if the coordinates of the individual galaxies are transformed into equal - area hammer aitoff coordinates .figures[f_nomed ] and [ f_med ] illustrate the performance of the filter . in this examplethe cosmos galaxy data of four adjacing schmidt plates ( [ f_nomed ] ) are combined .median polishing detects the artifical inhomogeneity at degrees , rejects the corresponding data from further analyses , and starts again with a new combination of the cosmos data from the remaining schmidt plates ( [ f_med ] ) .the algorithm can be regarded as a standard procedure in spatial data analyses and is thus well studied both mathematically and in many practical applications ( see the examples and references collected by cressi 1993 ) .
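the median-polishing step can also be summarized in a short sketch. the fragment below is a minimal variant of the sweep — it does not reproduce the exact indexing conventions of cressie, which are stripped from this extract — and decomposes a counts-in-cells matrix into an overall level, row effects, column effects and a residual matrix, so that a plate with an anomalous surface number density shows up in the corresponding row or column effects.

```python
import numpy as np

def median_polish(counts, n_iter=5):
    """tukey-style median polishing of a counts-in-cells matrix.
    returns (overall level m, row effects a, column effects b, residuals r),
    with counts ~= m + a[:, None] + b[None, :] + r at every step."""
    r = counts.astype(float).copy()
    m = 0.0
    a = np.zeros(r.shape[0])        # effects along one coordinate axis
    b = np.zeros(r.shape[1])        # effects along the other axis
    for _ in range(n_iter):         # usually three to five sweeps suffice
        row_med = np.median(r, axis=1)
        r -= row_med[:, None]
        a += row_med
        c = np.median(a)            # recentre the row effects
        a -= c
        m += c
        col_med = np.median(r, axis=0)
        r -= col_med[None, :]
        b += col_med
        c = np.median(b)            # recentre the column effects
        b -= c
        m += c
    return m, a, b, r

# toy field: uniform background of ~20 galaxies per cell plus one anomalous strip
rng = np.random.default_rng(3)
counts = rng.poisson(20, size=(12, 12))
counts[:, 8:] += 15                 # a strip with artificially higher surface density
m, a, b, r = median_polish(counts)
print(int(m), np.round(b, 1))       # the column effects flag the anomalous strip
```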
|
the likelihood filter for cluster detection introduced by postman et al . is generalized by using standard procedures and models originally developed in the theory of point processes . it is shown that the filter formulae of postman et al . can be recovered in cases where background fields dominate the number counts . the performance of the generalized method is illustrated by using monte carlo simulations and by analyzing galaxy distributions extracted from the cosmos galaxy catalogue . the generalized method has the advantage of being less biased at the expense of some higher computational effort .
|
motivated by a diverse array of applications in automotive industry , railway vehicles , and networked control , the recent works dealt in detail with the concept of _ maximum hands - off control_. the purpose of maximum hands - off control is to design actuator signals which are most often zero , but nonetheless achieve given control objectives .this motivates the use of instantaneous cost functions where the control effort is penalized via the -(semi)norm , thereby leading to a _ sparse _control function , cf .sparse controls are of great importance in situations where a central processor must be shared by different controllers , and sparse control is a new and emerging area of research , including applications in the theory of control of partial differential equations . due to the discontinuous and non - convex nature of the instantaneous cost function in -optimal control problems , solving such problemsis in general difficult .hence , the precursor article focused on relaxations to the problem , akin to methods used in compressed sensing applications . to be more precise , examined smooth and convex relaxations of the maximum hands - off control problem , including considering an -cost and regularizations with an -cost to obtain smooth hands - off control .( it is a well - known and classical result that under `` nonsingularity '' assumptions on the control system ( * ? ? ?* chapter 8) , -costs lead to sparse solutions in the control .however , in singular problem instances , it is unclear whether -regularizations lead to sparse solutions . )the exact -optimal control problem was not investigated in .the purpose of the present article is to complement by directly dealing with the underlying non - smooth and non - convex -optimal control problem without the aid of smooth or convex relaxations .we will focus on nonlinear controlled dynamical systems of the form with state , input and where is a continuously differentiable map describing the open - loop system dynamics .the maximum hands - off control problem aims to minimize the support of the control map , or in other words , maximize the time duration over which the control map is exactly zero . in other words , given real numbers with , vectors , a compact set containing in its interior , we consider the optimal control problem )}\\ \sbjto & \quad \begin{cases } \dot z(t ) = \phi\bigl(z(t ) , u(t)\bigr ) \text { for a.e.\ } t\in[\tinit , \tfin],\\ z(\tinit ) = a,\quad z(\tfin ) = b,\\ u:[\tinit , \tfin]\lra\admact \text { lebesgue measurable}. \end{cases } \end{aligned}\ ] ] here the -(semi)norm)} ] is defined by the lebesgue measure of the support of , i.e. , ) } \let \leb\bigl(\bigl\{s\in[\tinit , \tfin]\,\big|\ , u(s ) \neq 0\bigr\}\bigr).\ ] ] observe that if the minimum time to transfer the system states from to is larger than the given duration , then the optimal control problem has no solution .thus , a standing assumption used throughout this work is that there is a feasible solution to . 
in other words , despite the limited control authority described by the compact set , we shall assume that it is possible to steer the system states from to in finite time .observe also that , unlike minimum attention control la , the optimal control problem does not penalize the rate of change of the control .nonetheless , can be viewed through the looking glass of least attention in the sense that the control is ` active ' for the least duration of time .the current work investigates optimality in using a nonsmooth maximum principle as summarized in ( * ? ? ?* chapter 22 ) .the main contributions and outline of this article are given below : 1 .we show that can be recast in the form of an optimal control problem involving an integral cost with a discontinuous cost function .[ contrib:1 ] we apply a non - smooth pontryagin maximum principle directly to problem and obtain an exact set of necessary conditions for optimality .this result is presented in .it characterizes solutions to provided that they exist . 2 .sheds further insight into the case where the system dynamics in are linear .this section also illustrates that , perhaps contrary to intuition , in singular problem instances , -relaxations may fail to give sparse controls ; cf .* chapter 8) .the pontryagin maximum principle gives necessary conditions for an extremum .naturally , any state - action trajectory satisfying the pontryagin maximum principle is not necessarily optimal . in we provide conditions under which the necessary conditions are also sufficient for optimality .our proof of optimality follows from inductive methods in optimal control .[ [ notation ] ] notation : + + + + + + + + + the notations employed in this article are standard .the euclidean norm of a vector , belonging to the -dimensional euclidean space , is denoted by ; vectors are treated as column vectors . for a set let denote the indicator ( characteristic ) function of the set defined to be if and otherwise .[ r : comparison ] the version of the maximum hands - off control problem posed in is slightly different from the one we examine in above . indeed , studies the following problem : )}\\ \sbjto & \quad \begin{cases } \dot z(t ) = \phi\bigl(z(t ) , u(t)\bigr ) \text { for a.e.\ } t\in[\tinit , \tfin],\\ z(\tinit ) = a,\quad z(\tfin ) = b,\\ u:[\tinit , \tfin]\lra\admact\text { lebesgue measurable } , \end{cases } \end{aligned}\ ] ] where are given positive weights .this cost function features the controls of a multivariable plant as additive terms .in contrast , and by noting that ( where the on the left - hand side belongs to and the one on the right - hand side belongs to , ) the cost function features a multiplicative form in the controls .the techniques exposed for in the sequel carry over in a straightforward fashion to . 
in order not to blur the message of this article, we stick to the simpler case of .by definition , we have ) } = \tfin - \tinit - \int_{\tinit}^{\tfin } \indic{\{0\}}(u(s))\,\dd s.\ ] ] since and are fixed , the minimization of )} ] to there exist an absolutely continuous curve \ni t\mapsto p(t)\in\r^d ] : and .\ ] ] a proof of proposition [ p : exact solution ] is provided in appendix [ s : app ] .[ r : solutions ] proposition [ p : exact solution ] gives a set of necessary conditions for optimality of state - action trajectories in the same spirit as the standard first order necessary conditions for an optimum in a finite - dimensional optimization problem .we see that the ordinary differential equations ( o.d.e.s ) describing the system state and its adjoint constitute a set of -dimensional o.d.e.s with constraints .this amounts to a well - defined boundary value problem in the sense of carathodory ( * ? ? ?* chapter 1 ) .indeed , the control map is lebesgue measurable , and depends parametrically on ; therefore , the right - hand side of under satisfies the carathodory conditions ( * ? ? ?* chapter 1 ) that guarantee existence of a carathodory solution .numerical solutions to differential equations such as the ones in are typically carried out by what are known as the shooting and multiple shooting methods .this is an active area of research ; see ( * ? ? ?* chapter 3 ) for a detailed discussion .[ r : extremal lift ] the quadruple is known as the _ extremal lift _ of the optimal state - action trajectory .the scalar is known as the _abnormal multiplier_. if , then the extremal is said to be normal ; if , then the extremal is said to be abnormal .the scalar is a lagrange multiplier associated to the instantaneous cost .interestingly , the curves for which are not detected by the standard calculus of variations approach .the reason is that in calculus of variations the underlying assumption is that there are curves `` close '' to the optimal ones satisfying the same boundary conditions . butthis assumption fails whenever the optimal curves are isolated in the sense that there is only one curve satisfying the given boundary conditions . in that case, a comparison between the costs corresponding to this optimal curve and other neighbouring curves turns out to be impossible to perform .the pontryagin maximum principle , however , detects such abnormal curves and characterizes them . at the level of generality of proposition [ p : exact solution ] we can not rule out the presence of abnormal extremals in our setting .proposition [ p : exact solution ] characterizes the necessary conditions for optimality of maps \ni t\mapsto\bigl(z(t ) , u(t)\bigr) ] to there exists a number or and a vector such that : if , then if , then in the above we simply have and .observe that in the normal case of , we have sparse controls since the optimal controls are explicitly set to .we provide a proof of corollary [ c : linear case ] in appendix [ s : app ] , and note that the message of remark [ r : solutions ] applies accordingly to corollary [ c : linear case ] . 
for the particular case where the control inputs are constrained to lie in the closed unit ball ( with respect to the euclidean norm ) centered at , we have the particularly simple formula for the optimal control in the context of corollary [ c : linear case ] if : in the further special case of the control dimension being and ] .we seek a control that is feasible given the preceding conditions , and that is set to for the maximal duration of time . in the context of this simple example it is clear that any control that is equal to on a lebesgue measurable subset of ] . since } \bigl\ { p_0 v + \eta \indic{\{0\}}(v)\bigr\},\ ] ] we have the first case of ruled out because the corresponding constant control , regardless of the value of the constant , is not feasible . in other words, our probelm conforms to the normal case .we rule out the two constant controls corresponding to and since they too are not feasible . for the same reason we also eliminate all controls taking values in .the only remaining possibility corresponds to any feasible control taking values in .we described an uncountable family of such controls above , and therefore , each of these controls satisfies the assertions of corollary [ c : linear case ] .[ example : singular ] consider the following linear plant : we seek a control that drives the states from a given initial state to .the admissible action set is ] , it follows that any control that achieves this manoeuvre must spend at least units of time with non - zero control values . in other words ,the minimum cost is , indeed , .. then the optimal control satisfies } \ { v(p_1 ( 5-t ) + \hat p_2 ) \}\ ] ] according to corollary [ c : linear case ] .note that both can not be zero simultaneously .thus , i.e. , our problem corresponds to the normal case . using the result of corollary [ c : linear case ] inwe obtain the following necessary conditions for -optimal controls in this normal case : for ] we have .] of the hamiltonian function now ( * ? ? ?* theorem 24.1 , corollary 24.2 ) asserts that the state - action trajectory \ni t\mapsto \bigl(z\opt(t ) , u\opt(t)\bigr) ] . for the sake of completeness , we adapt the non - smooth pontryagin maximum principle from the monograph , to which we refer the reader for complete details including the notations . below ( which is why it is an adaptation of ( * ? ? ?* theorem 22.26 ) ) ; for the present purpose , further generality is not needed . ][ t : clarke extended theorem ] consider the optimal control problem ,\\ u:[\tinit , \tfin]\lra\admact \text { lebesgue measurable},\\ \bigl(x(\tinit ) , x(\tfin)\bigr ) \in e\subset\r^d\times\r^d , \end{cases } \end{aligned}\ ] ] where is bounded and lower semicontinuous , is lower semicontinuous if for every the set is closed .a function is said to be upper semicontinuous if is lower semicontinuous .] is continuously differentiable , compact , and is closed .let \ni t\mapsto \bigl(x\opt(t ) , u\opt(t)\bigr ) ] together with a scalar equal to or satisfying the _ nontriviality condition _ for all ] : the _ hamiltonian maximum condition _ for a.e . ] : the above non - smooth maximum principle can be used to derive the exact set of necessary conditions for maximum hands - off control as follows : we apply the non - smooth pontryagin maximum principle theorem [ t : clarke extended theorem ] to the optimal control problem . 
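the pointwise maximization behind corollary [ c : linear case ] is easy to tabulate numerically. the sketch below, written for a scalar control bounded by $u_{\max}$, evaluates the assumed normal-case law: take the saturated value when $|b^\top p(t)|\,u_{\max}$ exceeds the multiplier $\eta$, take zero when it is smaller, and break the tie on the switching surface in favour of zero. the affine costate profile is a placeholder loosely modelled on the double-integrator example above, not a solution of the adjoint equation.

```python
import numpy as np

def hands_off_control(btp, eta, umax=1.0):
    """pointwise maximizer of  v -> btp*v + eta*1{v=0}  over [-umax, umax].
    btp stands for b^T p(t); eta is the multiplier attached to the indicator term."""
    if abs(btp) * umax > eta:
        return umax * float(np.sign(btp))
    # when abs(btp)*umax == eta both 0 and the saturated value are maximizers;
    # we return 0 by convention (the 'hands-off' choice)
    return 0.0

# illustrate the bang-off-bang switching structure along a hypothetical costate profile
t = np.linspace(0.0, 5.0, 11)
btp = 1.5 - t                       # placeholder affine b^T p(t)
u = [hands_off_control(b, eta=1.0) for b in btp]
print(list(zip(np.round(t, 1), np.round(btp, 2), u)))
```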
for define the _ hamiltonian function _ ) in order to derive the _adjoint state equation _, we notice that for fixed , the function is smooth .it follows that the adjoint state differential equation , is given by .\ ] ] this o.d.e .is linear in , and due to continuous differentiability of , admits a unique solution on ] .since the function is upper semicontinuous , the supremum is attained in by weierstrass theorem . in other words , the optimal control is given by , for a.e . ] .thus , solutions \ni t\mapsto \bigl(z\opt(t ) , u\opt(t)\bigr) ] .since is a closed subset of , the map is an upper semicontinuous function .due to upper semicontinuity of and compactness of ( and in view of weierstrass theorem ) , the supremum above is attained at some point of for a.e . ] would be violated .d. chatterjee was supported in part by the grant 12ircc005sg from ircc , iit bombay , india .m. nagahara was supported in part by jsps kakenhi grant numbers 26120521 , 15k14006 , and 15h02668 . , _ a convex analysis approach to optimal controls with switching structure for partial differential equations_. esaim : control , optimization , and calculus of variations .http://dx.doi.org/10.1051/cocv/2015017 . ,_ differential equations with discontinuous righthand sides _ , vol .18 of mathematics and its applications ( soviet series ) , kluwer academic publishers group , dordrecht , 1988 . translated from the russian . ,_ elliptic optimal control problems with -control cost and applications for the placement of control devices _ , computational optimization and applications .an international journal , 44 ( 2009 ) , pp .
|
maximum hands - off control aims to maximize the length of time over which zero actuator values are applied to a system when executing specified control tasks . to tackle such problems , recent literature has investigated optimal control problems which penalize the size of the support of the control function and thereby lead to desired sparsity properties . this article gives the exact set of necessary conditions for a maximum hands - off optimal control problem using an -(semi)norm , and also provides sufficient conditions for the optimality of such controls . numerical example illustrates that adopting an cost leads to a sparse control , whereas an -relaxation in singular problems leads to a non - sparse solution . 2
|
the idea that a scale - free network has a zero percolation threshold , , has sparked a good deal of interest lately . in this paper, we offer an example of a scale - free network ( in the sense of a power law degree distribution ) that has very different properties than the original barabasi - albert ( ba ) network .we find that replacing the preferential attachment and well - mixed structure of the ba network with two - dimensional ( 2d ) clustering makes the percolation threshold nonzero .further , percolation on our network can be mapped onto the problem of sir epidemic propagation .thus , these percolation results have practical implications for the control of real world epidemics .specifically , in contrast to recent claims , a random immunization program might successfully eradicate an epidemic on a scale - free network , such as a sexually transmitted disease .furthermore , we will show that the addition of a sizable number of random links , or small worlds bonds , to the lattice does not change the result of a nonzero percolation threshold in our network . in our network model ,nodes are embedded on a 2d lattice .they asymptotically have a power law distribution of degree , , but they connect to _ the nearest neighbors on the lattice _, not randomly chosen nodes .the relevance of this to disease propagation is that for epidemics such as sexually transmitted diseases ( stds ) and many other human diseases it is known that the number of contacts of different individuals differs over a wide range .the distributions of the numbers of sexual contacts have been traced in several studies and are known to show fat tails which approximate power - law behavior : the number of sexual partners of an individual during a year is distributed according a power law with .these distributions possess a mean value , , but they lack a dispersion and thus show large , universal fluctuations . to modelsuch behavior , pastor - satorras and vespignani as well as lloyd and may have proposed a scale - free network with preferential attachment to already highly connected nodes , or hubs . in the original model due to barabasi and albert, the network grows sequentially : each newly introduced node brings proper bonds ; each of them is attached to one of the nodes already existing ; and the probability of attachment is proportional to the number of the already existing bonds of this node .this model leads naturally to a power - law probability distribution of the degree of the node ( number of its bonds ) , .infection propagation on a scale - free network was considered in pastor - satorras and vespignani .they find that a scale - free network is very robust against the random removal , or immunization , of the nodes .a giant component still persists even if almost all nodes in the system are eliminated , i.e. the critical threshold for all . asserting that the network of human sexual contacts has the scale - free construction and structure of the ba model would have important consequences for controlling the epidemics of sexually transmitted diseases : the immunization of the large part of the population would be a useless measure . 
instead , one would have to concentrate on the most active agents , which may be hard to identify .although it may be a reasonable model for growing technological networks , such as the world wide web , where physical distance is not an issue , the scale - free construction appears to us to be unnatural as a model for the disease transmission .its `` well - mixed '' spanning character and the absence of any underlying metric ( i.e. the impossibility of definition of a geographical `` neighborhood '' ) are unrealistic .( in simple terms , although some people often use airplanes , a large portion of the earth s population does not . )sexual contacts may have well - mixed properties _ locally _ within a neighborhood , town or city , but in general not globally .for nearly the entire population , physical distance at some scale matters .there is no reason that the topology ( a structure of connections ) of a network of sexual contacts is the same as that of the world wide web .eguiliz and klemm recently offered a different example of a scale - free network with nonzero sis epidemic thresholds for . in our model ,nonzero thresholds are present for any , so it addresses the region of interest .also , in their model , nodes with a particular degree , or number of links , are more likely to mix with those of a unlike degree . in theirotherwise well - mixed network , highly - connected nodes are more likely to connect with sparsely - connected nodes , and vice versa .this is a disassortative , or degree - anticorrelated , network .however , newman argues that social interaction networks are assortative(i.e .degree - correlated ) as discussed later . in our network , just like the ba network , there is _ no _ degree correlation .instead , the nonzero threshold comes from the underlying local 2d clustering structure that would seem naturally present in real epidemics , structure that is not present in eguiluz and klemm s model .rosenfeld , cohen , ben - avraham and havlin have a method of embedding a scale - free network into a 2d lattice that is similar to ours but not identical , and they explore the dimensional properties of their network .the basic version of our model on a lattice preserves the local geometrical properties of a 2d lattice . for that reason ,we call it a _ lattice - based scale - free network_. our model is a variant of an old two - dimensional circle model of continuum percolation ( a lattice - based `` inverse swiss cheese '' model with variable radius of `` holes '' ) .starting from the sites on the lattice we assign each node its number of proper bonds defining its `` radius of action '' and connect it to all the nodes within the radius .the distribution of the radii is taken so that the number of proper bonds , the bonds put down within , follows the -law for large and cuts off at the nearest neighbor distance . 
in dimensions ,if the number of proper bonds of a site is to be distributed according to , it follows that with and .nodes having a bond in common are considered connected .the bond connecting nodes and is counted only once , whether it belongs to the set of the proper bonds of node , of node , or both .note that in our model and are parameters ; they do not arise naturally as in the ba network .first , we show that the mean number of bonds per node in our model has the same asymptotic power - law behavior as the distribution of the number of the proper bonds if .we calculate the mean number of bonds connecting a node to nodes outside .this is also given by the mean number of the nodes at distance from the given one for which .the probability to find larger than is .thus , the mean number of nodes connected to from outside is where is the surface of the -dimensional unit sphere ( , , etc . ) . in general , taking , which tends to zero for large , as long as the mean number of bonds per node exists(i.e . ) . note that the number ( where is the probability that a node considered has exactly bonds outside of ) is larger than the probability to have at least one bond coming from outside , and thus for , tends to zero as grows .this means that the probability distribution of the number of the `` proper '' bonds and the actual number of the bonds of a node follow the same asymptotic pattern for large .the mean number of bonds per node is given by and converges for .now , consider bond percolation on this network .beginning with a 2d square grid of bare nodes , randomly add disks ( i.e. choose radii from and fill up the proper bonds to the nodes that do nt yet have them ) .for what will there be a nonzero percolation threshold , ?we note that percolation on our network differs from conventional lattice percolation or continuum percolation in that for any finite lattice there is a possibility of a node having a radius of action so large that it spans the entire x lattice .the probability of drawing such a giant disk scales as .the average number of disks put down before such a giant disk is encountered is approximately . in dimensions , that number corresponds to disks , where is the average threshold for adding a giant disk on an x lattice .thus , and solving for the average threshold for an x lattice , we obtain so that as for .since adding a giant disk is only one of several ways of spanning the lattice , is an upper bound for the percolation probability .thus , as for as well. the general sir ( susceptible - infected - recovered ) model of the infection propagation on a simple lattice can be mapped on to the percolation problem on that same lattice .the bonds present in the percolation problem correspond to successful propagation of the disease from an infected individual to a susceptible .thus , a subcritical cluster in the percolation problem corresponds to a subcritical epidemic , an epidemic that dies out .infection propagation is possible ( i.e. a giant component of a graph exists ) if a finite fraction of the individuals are infected .nodes without disks might be thought of as `` immune '' , but the analogy is not complete .since we are doing bond percolation , nodes are always present .`` immune '' nodes will lack the proper bonds , but they can still have outside bonds , which would not be the case for a truly immune individual .thus , our model will actually _ underestimate _ the true epidemic threshold .consider the difference in the simple example of two disks shown in fig .1 . 
in our model ,just the presence of those two disks would mean that the epidemic spans .thus , for this simple example , . in the proper epidemic model , one would need an additional susceptible node in the overlap of the two disks for the epidemic to be able to span the lattice .thus , .the correspondence of our disk model to an epidemic on a scale - free network is not perfect but adequate for our purposes .we found percolation thresholds using the newman - ziff algorithm .disks were randomly added to an x lattice at different sites until a cluster spanned the lattice .figure 2 shows the average percolation threshold as a function of .it appears from the plot that there is a finite percolation threshold for not just but as well , the region of interest for real world epidemics .thus , if these real world epidemic are not well mixed but rather dominated by local geometry , they will have a finite threshold .for , the results are consistent with and seem to scale according to eq .( [ onespan ] ) for sufficiently large .the transition is gradual . in comparing simulations, we find that is significantly larger than , even for . however , as one can see from fig .2 , the slopes of and as a function of lattice size match fairly well for .consider now the same model with small world links , bonds that connect two randomly chosen nodes , added as well . in the context of disease propagation , this is included to model infrequent , distant contacts occasional airline travel , as it were .suppose disks are present along with random links as well .how do these additional links affect the percolation threshold ? on a lattice with small world links , the simple spanning criterion of conventional lattice percolation is not appropriate .random links may possibly connect two sites on opposite boundaries at a low concentration with no infections between , but this is hardly captures the idea of a sustained epidemic . as we argued previously , the proper criterion for the percolation threshold is when the fraction of the lattice occupied by the largest cluster becomes a finite fraction of the x lattice as .we use finite size scaling on to find this value as in .3 shows simulation results for the percolation threshold for the disk lattice with small world bonds as a function of the fraction of added random links .clearly , for , the addition of even a sizable number of small world links does not result in .one approach to finding the percolation threshold on a small world lattice is to ignore the random links and consider the subcritical clusters of the lattice as nodes on a random network .the random links become the links for this random network . using this approach, the percolation threshold for a random network with nodes and bonds is .if is the average subcritical cluster size , there will be nodes on such a network and bonds .the subcritical percolation clusters will scale as , where is the threshold without random links and is a characteristic exponent . we can estimate a new threshold for the small world lattice implicitly from the relation where is a nonuniversal constant .what is ? for conventional 2d lattice percolation , .as shown in figure 3 , eq .( [ sweq ] ) with the 2d value for work quite well for .the data point was used to calculate . for smaller , specifically the measured exponent of for females in the swedish sexual network , this approximation breaks down .one possibility for this breakdown is an incorrect value of .perhaps the presence of more large disks changes . 
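the disk-addition experiment can be sketched with a small union-find routine, as shown below. the fragment is illustrative only: it uses the generic two-dimensional link between degree and radius (the number of covered sites grows as the square of the radius, so the tail exponent of the radius distribution is taken as $2(\gamma-1)$ — an assumption, since the exact exponents are stripped from this extract), and a simple left-right spanning rule on an $l\times l$ grid in place of the spanning criterion used to produce figure 2.

```python
import numpy as np

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path halving
        i = parent[i]
    return i

def union(parent, i, j):
    ri, rj = find(parent, i), find(parent, j)
    if ri != rj:
        parent[ri] = rj

def sample_radius(gamma, rng, r_min=1.0, r_max=None):
    """power-law 'radius of action' with cutoff at the nearest-neighbor distance:
    p(r > x) ~ x^{-a}, with a = 2*(gamma - 1) assumed for the 2d lattice."""
    a = 2.0 * (gamma - 1.0)
    r = r_min * rng.random() ** (-1.0 / a)
    return min(r, r_max) if r_max else r

def threshold_one_run(L, gamma, rng):
    """add disks in newman-ziff fashion until a cluster spans left to right."""
    parent = np.arange(L * L)
    xs, ys = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    order = rng.permutation(L * L)
    for n, site in enumerate(order, start=1):
        x0, y0 = divmod(int(site), L)
        r = sample_radius(gamma, rng, r_max=L)
        mask = (xs - x0) ** 2 + (ys - y0) ** 2 <= r * r
        for nbr in np.flatnonzero(mask):       # connect to every node within the radius
            union(parent, int(site), int(nbr))
        left = {find(parent, i * L) for i in range(L)}
        right = {find(parent, i * L + L - 1) for i in range(L)}
        if left & right:
            return n / (L * L)                 # fraction of sites given disks at spanning
    return 1.0

rng = np.random.default_rng(4)
L, gamma = 64, 3.5
runs = [threshold_one_run(L, gamma, rng) for _ in range(5)]
print(round(float(np.mean(runs)), 3))
```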
using the and data points, a value of was calculated, but the fit is still inadequate. some crossover seems to be occurring which we do not understand. we believe the most likely explanation is that implicit in the nodes-on-a-random-graph approximation is the assumption that the subcritical islands are equally likely to be chosen by the random links. however, if there are several large subcritical islands on the lattice, this assumption apparently breaks down. these large islands gain more attachments, but not enough to reduce the percolation threshold to zero. the primary reason for the difference between the ba model and our model is the distance and number of connections between the highly connected hubs. with the preferential attachment of the ba model, nodes are more likely to attach to hubs and, in particular, to connect two hubs, offering numerous _ very short _ network paths between hubs and thus to a sizable portion of the population. these numerous pathways between hubs make percolation more likely. in contrast, the local clustering of geography in our model lengthens the network pathways between hubs. a disease on our network would be much easier to control. similarly, as also noted by newman, in a degree-anticorrelated network highly connected nodes are more likely to be connected to sparsely connected nodes, thus lengthening the network distance between the hubs. a moderate number of random links in our model will not change these results, even though these links are effectively preferentially attached. this is because the bulk of the attachments are made through local clustering, not preferential attachment. at any rate, in order to characterize the percolation properties of a scale-free network, one needs to know more than the degree distribution and the degree correlation distribution. newman notes that social networks tend to be assortative, or degree-correlated, and he concludes that because of this they may not be conducive to immunization efforts. however, the networks that he cites are career-related collaborations such as movies and coauthorships, which may not reflect the nature of the network of physical interaction that would be relevant to disease propagation. in addition, with 2d local clustering we have provided an alternative reason that immunization efforts may indeed be fruitful, as in the case of other highly infectious diseases such as polio and smallpox.
|
we offer an example of a network model with a power-law degree distribution, , for nodes, but which nevertheless has a well-defined geography and a nonzero threshold percolation probability for , the range of real-world contact networks. this is different from the results for the original well-mixed scale-free networks. in our _ lattice-based scale-free network _ , individuals link to nearby neighbors on a lattice. even a considerable number of additional small-world links does not change our conclusion of nonzero thresholds. when applied to disease propagation, these results suggest that random immunization may be more successful in controlling human epidemics than previously suggested if there is geographical clustering.
|
an important function of all biological systems is responding to signals from the surrounding environment . these signals ( hereafter assumed to be scalars ) , ,are often probabilistic , described by some probability distribution ] , places severe constraints on admissible forms of . to see this , for quasi - stationary signals( that is , when the signal correlation time is large , ) , we use eq .( [ filter ] ) to write the steady state dose - response curve a typical monotonic , sigmoidal is characterized by only a few large - scale parameters : the range , ] . however , since environmental changes that lead to varying and , as well as mechanisms of the adaptation may be distinct , it often makes sense to consider the two adaptations as separate phenomena .adaptation to the mean , sometimes also called _ desensitization _ , has been observed and studied in a wide variety of biological sensory systems , with active work persisting to date .in contrast , while gain control has been investigated in neurobiology , we are not aware of its systematic analysis in molecular sensing . in this article, we start filling in the gap .our main contribution is the observation that a mechanism for gain control , observed in a fly motion estimation system by borst et al . , can be transferred to molecular information processing with minimal modifications .importantly , unlike adaptation to the mean , which is implemented typically using extra feedback circuitry , the gain control mechanism we analyse requires no additional regulation . it is built - in into many molecular signaling systems .the main ingredients of the gain control mechanism in ref . is a strongly nonlinear , sigmoidal response function and a realization that real - world signals are dynamic with a nontrivial temporal structure .thus one must move away from the steady state response analysis and autocorrelations within the signals will allow the response to carry more information about the signal than seems possible naively . specifically , we show that even a simple biochemical circuit in eq .( [ filter ] ) , with no extra regulatory features can be made insensitive to changes in .that is , for an arbitrary choice of , and for a wide range of other parameters , the circuit can generate an output that is informative of the input , and , in particular , carries more than a single bit of information about it . for brevity , we will not review the original work on gain control in neural systems , but will instead develop the methodology directly in the molecular context .let s assume for simplicity that the signal in eq .( [ filter ] ) has the ornstein - uhlenbeck dynamics with : we will assume that the response has been adapted to the mean value of this signal ( likely by additional feedback control circuitry , not considered here explicitly ) , so that the response to is half maximal . now we explore how insensitivity to can be achieved as well .we start with a step - function approximation to the sigmoidal response synthesis where is some constant .this is a limiting case of very high hill number dose - response curves , which have been observed in nature .figure [ examples ] shows sample signals and responses produced by this system .notice that such makes the system manifestly insensitive to .any changes in will not result in changes to the response , hence the gain is controlled _perfectly_. 
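as an illustration of the nonlinear-linear circuit just described, the sketch below integrates a step-function synthesis term driven by an ornstein-uhlenbeck signal. since the exact form of eq. (filter) is not reproduced above, the first-order kinetics dr/dt = g(s) - r/tau_r, the step synthesis g(s) = c for s above the mean and 0 otherwise, and all parameter values are assumptions made only for this example.

```python
import numpy as np

# Sketch of the assumed NL circuit: dr/dt = g(s) - r/tau_r with a step-function
# synthesis g(s) = c * Theta(s - s_mean), driven by an Ornstein-Uhlenbeck
# signal of mean s_mean, standard deviation sigma and correlation time tau_s.
# All parameter values are illustrative.

rng = np.random.default_rng(2)
dt, n_steps = 1e-3, 200_000
tau_s, tau_r, sigma, s_mean, c = 1.0, 0.3, 2.0, 0.0, 1.0

s = np.empty(n_steps)
r = np.empty(n_steps)
s[0], r[0] = s_mean, 0.0
noise_amp = np.sqrt(2.0 * sigma ** 2 * dt / tau_s)   # gives stationary std = sigma

for t in range(1, n_steps):
    s[t] = s[t - 1] - (s[t - 1] - s_mean) * dt / tau_s + noise_amp * rng.normal()
    synthesis = c if s[t] > s_mean else 0.0           # step-function synthesis
    r[t] = r[t - 1] + (synthesis - r[t - 1] / tau_r) * dt

# Because synthesis depends only on the sign of (s - s_mean), rescaling sigma
# leaves the statistics of r unchanged: the gain is compensated exactly.
print("response range:", r.min(), r.max())            # roughly [0, c * tau_r]
```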
nevertheless , this choice of is pathological , resulting in a binary steady state response ( for , and otherwise ) .that is , the response can not carry more than one bit of information about the stimulus .however , as illustrated in fig .[ examples ] , a _ dynamic _response is not binary and varies over its entire dynamic range .can this make a difference and produce a dose - response relation that is both high fidelity and insensitive to the variance of the signal ? to answer this , we first specify what we mean by the dose - response curve or the input - output relation when there is no steady state response .for the response at a single time point , we can write ) ] is obtained by solving eq .( [ filter ] ) . since the signal is probabilistic , marginalizing over all but the instantaneous value of it at time , one gets , the distribution of the response at time conditional on the value of the signal at .further , for the distribution of the signal given by eq .( [ signal ] ) , one can numerically integrate eq .( [ filter ] ) and evaluate the correlation .integration time steps , and averages were taken over time steps . to change the value of , only was adjusted . ] since eq .( [ filter ] ) is causal , has a maximum at some , illustrated in fig .[ corr_delay ] .correspondingly , in this paper we replace the familiar notion of the dose - response curve by the probabilistic input - output relation . in fig . [conditional ] , we plot the input - output relation for . to emphasize the _ independence _ of the response on and hence the gain - compensating nature of the system , we plot in units of . a smooth , probabilistic , sigmoidal response with a width of the transition region clearly visible .this is because , for a step - function , the value of depends not on , but on how long the signal has been positive prior to the current time . in its turn , this duration is correlated with , producing a probabilistic dependence between and .the latter is manifestly invariant to variance changes .these arguments make it clear that the fidelity of the response curve should depend on the ratio of characteristic times of the signal and the response , . indeed , as seen in fig .[ examples ] , for , the response integrates the signal over long times .it is little affected by the current value of the signal and does not span the full available dynamic range . at the other extreme of a very fast response , ,the system is almost quasi - stationary .then the step - nature of is evident , and the response quickly swings between two limiting values ( and 0 ) .we illustrate the dependence of the response conditional distribution on the integration time in fig .[ means ] by plotting , the conditional - averaged response for different values of .neither nor are optimal for signal transmission .one expects existence of an optimal , for which most of the dynamic range of gets used , but the response is not completely binary . to find this optimum ,we evaluate the mutual information between the signal and the response at the optimal delay , ] .consider now the fraction of time the derivative of the response is near .this requires that ( so that the degradation , , is negligible ) , but is already large , .the probability of this happening depends on the signal variance and hence on the speed with which the signal crosses over the threshold region .thus one can estimate by observing a molecular circuit for a long time and counting how often the rate of change of the response is large . 
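the delayed input-output relation and the information estimate discussed above can be probed numerically from such a trace, before turning to the derivative-based variance readout introduced in the last sentences. the fragment below reuses the s, r and dt arrays from the previous sketch; the lag grid and the histogram bin count are arbitrary illustrative choices, and simple histogram estimators of mutual information are biased for short time series.

```python
import numpy as np

# Reuses s, r and dt from the previous sketch. First locate the delay that
# maximises the signal-response correlation, then apply a crude histogram
# estimator of the mutual information I[s(t); r(t + delta*)].

max_lag = int(2.0 / dt)                               # search delays up to 2 time units
lags = np.arange(1, max_lag, 10)
corrs = [np.corrcoef(s[:len(s) - lag], r[lag:])[0, 1] for lag in lags]
delta_star = int(lags[np.argmax(corrs)])

def mutual_information(x, y, bins=32):
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)             # marginal of the signal
    p_y = p_xy.sum(axis=0, keepdims=True)             # marginal of the response
    mask = p_xy > 0
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask])))

print("optimal delay:", delta_star * dt)
print("MI at optimal delay (bits):",
      mutual_information(s[:len(s) - delta_star], r[delta_star:]))
```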
while the probability of a large derivative will depend on the exact shape of , for a signal defined by eq .( [ signal ] ) , the statistical error of any such counting estimator will scale as .hence , the system can be almost insensitive to on short time scales , but allow its determination from long observations . to verify this, we simulate the signal determined by eq .( [ signal ] ) with the , which maximizes the signal - response mutual information .we calculate the mean fraction of time when the response derivative is above 80% of its maximum value .we further calculate the standard deviation of the fraction .we repeat this for signals with various and for for experiments of different duration , obtaining a time - dependence of the -score for disambiguating two signals with different variances , where the indeces denote the signals being disambiguated .for example , for distinguishing signals with and , we estimate , consistent with the square root scaling ( the error bars indicate the 95% confidence interval ) . that is , for as little as 10 , , and the two signals are distinguishable .signals with larger variances are harder to disambiguate .for example , for and , , and crosses 2 for .this long - term variance determination can be performed molecularly in many different ways .for example , one can use a feedforward incoherent loop with as an input .the loop acts as a approximate differentiator for signals that change slowly compared to its internal relaxation times .the output species of the loop can then activate a subsequent species by a hill - like dynamics , with the activation threshold close to the maximum of the possible derivative .if this last species degrades slowly , it will integrate the fraction of time when is above the threshold , providing the readout of the signal variance .in this article , we have argued that simple molecular circuitry can respond to signals in a gain - insensitive way without a need for adaptation and feedback loops .that is , these circuits can be sensitive only to the signal value relative to its standard deviation . to make the mechanism work , the signaling system must obey the following criteria * a nonlinear - linear ( nl ) response ; that is , a strongly nonlinear , sigmoidal synthesis function integrated ( linearly ) over time ; * properly matched time scales of the signal and the response dynamics .in addition , the information about the signal variance can be recovered , for example , if * large excursions of the response derivative can be counted over long times .naively transmitted information of only one bit ( on or off ) would be possible with a step - function synthesis .however , the response in this system is a time - average of a nonlinear function of the signal .this allows to use temporal correlations in the signal to transmit more than 1 bit of information for broad classes of signals .while 1.35 bits may not seem like much more than 1 , the question of whether biological systems can achieve more than 1 bit at all is still a topic of active research .similar use of temporal correlations has been reported to increase information transmission in other circuits , such as clocks . in practice , in our case, there is a tradeoff between variance - independence and high information transmission through the circuit : a wider synthesis function would produce higher maximal information for properly tuned signals , but the information would drop down to zero if . 
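a minimal numerical version of this variance readout is sketched below. it mirrors the earlier scheme but replaces the step synthesis with a steep sigmoid of finite width, since, as noted above, the large-derivative statistics depend on the shape of the synthesis function and on how quickly the signal crosses the transition region; the width, the 80% threshold, the window lengths and all other parameter values are illustrative assumptions.

```python
import numpy as np

# Variance readout sketch: fraction of time the response derivative exceeds
# 80% of its maximal value (about c), compared between two signal variances
# with a z-score.

DT, TAU_S, TAU_R, C, W = 1e-3, 1.0, 0.3, 1.0, 0.5

def derivative_trace(sigma, n_steps, seed):
    rng = np.random.default_rng(seed)
    s, r = 0.0, 0.0
    dr = np.empty(n_steps)
    amp = np.sqrt(2.0 * sigma ** 2 * DT / TAU_S)
    for t in range(n_steps):
        s += -s * DT / TAU_S + amp * rng.normal()
        synthesis = C / (1.0 + np.exp(-s / W))        # steep, finite-width sigmoid
        drdt = synthesis - r / TAU_R
        r += drdt * DT
        dr[t] = drdt
    return dr

def excursion_fraction(sigma, t_window, n_windows=50):
    n = int(t_window / DT)
    fracs = [np.mean(derivative_trace(sigma, n, seed=k) > 0.8 * C)
             for k in range(n_windows)]
    return np.mean(fracs), np.std(fracs)

m1, s1 = excursion_fraction(sigma=1.0, t_window=10.0)
m2, s2 = excursion_fraction(sigma=2.0, t_window=10.0)
print("z-score for T = 10:", abs(m1 - m2) / np.sqrt(s1 ** 2 + s2 ** 2))
```

lengthening t_window should make the z-score grow roughly as the square root of the observation time, in line with the scaling argument above.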
it would be interesting to explore the optimal operational point for this tradeoff under various optimization hypotheses . while our analysis is applicable to any molecular system that satisfies the three conditions listed above , there are specific examples where we believe it may be especially relevant .e. coli _ chemotaxis flagellar motor has a very sharp response curve ( hill coefficient of about 10 ) .this system is possibly the best studied example of biological adaptation to the mean of the signal .however , the question of whether the system is insensitive to the signal variance changes has not been addressed .the ultrasensitivity of the motor suggests that it might be .similarly , in eukaryotic signaling , push - pull enzymatic amplifiers , including map kinase mediated signaling pathways , are also known for their ultrasensitivity . andyet ability of these circuits to respond to temporally - varying signals in a variance - independent way has not been explored .we end this article with a simple observation .while the number of biological information processing systems is astonishing , the types of computations they perform are limited .focusing on the computation would allow cross - fertilization between seemingly disparate fields of quantitative biology .the phenomenon studied here , lifted wholesale from neurobiology literature , is an example .arguably , computational neuroscience has had a head start compared to computational molecular systems biology .the latter can benefit immensely by embracing well - developed results and concepts from the former .we thank f alexander , w hlavacek , and m wall for useful discussions in the earlier stages of the work , participants of _ the fourth international _ q - bio _ conference _ for the feedback , and f family for commenting on the manuscript .we are grateful to r de ruyter van steveninck for providing the data for one of the figures .this work was supported in part by doe under contract no.de-ac52-06na25396 and by nih / nci grant no .7r01ca132629 - 04 .
|
statistical properties of the environments experienced by biological signaling systems in the real world change, which necessitates adaptive responses to achieve high-fidelity information transmission. one form of such adaptive response is gain control. here we argue that a certain simple mechanism of gain control, well understood in the context of systems neuroscience, also works for molecular signaling. the mechanism allows the system to transmit more than one bit (on or off) of information about the signal independently of the signal variance. it does not require additional molecular circuitry beyond that already present in many molecular systems, and, in particular, it does not depend on the existence of feedback loops. the mechanism provides a potential explanation for the abundance of ultrasensitive response curves in biological regulatory networks. _ keywords _ : adaptation, information transmission, biochemical networks
|
research in estimating losses for catastrophes have led to the development of a wide variety of earthquake loss models .earthquake loss models can generate loss values before an event occurs or while an event is evolving or after an event occurs .earthquake loss models can be classified as probabilistic , deterministic and real - time models .probabilistic models produce a maximum probable loss value using a stochastic event catalog which represents a sample of possible future earthquakes .models such as capra - central american probabilistic risk assessment , eqrm - earthquake risk model and riskscape are probabilistic models . in deterministic models the losses caused by a specific event that occurred are estimated .lnecloss , redars - risks from earthquake damage to roadway systems and nhematis are deterministic models .real - time models estimate losses soon after ( near real - time ) an earthquake has occurred .examples include eler - earthquake loss estimation routine , emergeo and pager - prompt assessment of global earthquakes for response .a hybrid of the former models are seen in hazus ( combines deterministic , probabilistic and real - time models ) , koeriloss and maeviz . in this paper , a loss estimator which produces loss values in near real - time and can model past earthquake eventsis presented .models that focus on generating a probable loss value use a catalog of possible future earthquakes . in such models , there is no focus on a specific event and any analysis is done before an earthquake may occur and is called pre - event analysis .examples include air , dbela - displacement - based earthquake loss assessment and mdla . for quick and imminent decision makingit is desirable that loss estimates be accurately generated as an event evolves .post - event analysis presents a timely evaluation of losses due to an earthquake in the minutes , hours , days and weeks immediately following an earthquake .examples of post - event models are inlet - internet - based loss estimation tool , pager and extremum .models combining both pre - event and post - event analysis are available in epedat - early post - earthquake damage assessment tool , hazus - and selena - seismic loss estimation using a logic tree approach .the model proposed in this paper focuses on analysing the effects of an earthquake soon after it occurs and modelling the effects of a past earthquake .pre - event models are of limited interest in the context of estimating losses in real - time . 
in this paperthe focus is on post - event analysis since it is different from pre - event analysis in a number of important ways : * the focus is on a single earthquake event which has just occurred rather than a catalog of possible future events , or on a past earthquake event which can be modelled from archived sensory data .* there is an evolving view of the event as it unfolds , and therefore the sensor data related to the event changes hours , days and weeks after the event , * there is a need for rapid estimation of losses to guide early responses , and * since post - event data is available from multiple sources , there is a need to visualise and integrate hazard , exposure and loss data from these multiple sources .the 2011 tohoku earthquake that struck off the pacific coast of japan at 05:46 utc on friday , 11 march 2011 is a recent example that illustrates the importance of post - event analysis .figure 1 presents the timeline of the earthquake .fifteen alerts were issued by pager / shakemap in time periods ranging from within an hour to six months after the earthquake .the first alert was issued twenty three minutes after the event and reported a magnitude 7.9 earthquake .additional information such as initial peak ground velocity and peak ground acceleration maps of the ground shake was also available with the alert .further , over the course of the first day alone four additional alerts were issued each updating the data available .not only did the earthquake event unfold over time but the data describing the event and our knowledge of the event evolved .the earthquake data alone was not sufficient to produce reliable loss estimates because between 06:15 utc and 07:52 utc a tsunami struck the coastal towns .additional data sources are required for complete loss estimation . estimating loss values of a future earthquakeis based on using a static catalog containing data related to historic events and is employed in pre - event analysis .for example , models such as air , dbela and eqrm employ static catalogs .a static catalog therefore is not sufficient to estimate accurate losses as an earthquake evolves over hours and days of its occurrence .there is a need for up - to - date information of an earthquake as it evolves .one possibility is to make use of seismic sensor networks which can provide earthquake information as soon as minutes after it has occurred .shakemaps , for example , are a representation of earthquake sensory information .models that employ real - time models include emergeo , inlet and pager .a few models incorporate both historic and sensor data such as in hazus , mdla and selena . in this paper, we investigate how sensor data from multiple sources can be used for timely estimation of losses .the use of regional seismic sensor networks can provide a model with only region specific data and thereby restricts loss estimation to regions .this may be due to the nature of the research where the project was undertaken and therefore only a country or a region was considered .models such as openrisk , tefer - turkish emergency flood and earthquake recovery programme earthquake model and teles - taiwan earthquake loss estimation system are examples that analyse earthquakes in a region . 
to ensure global applicability of the model it needs to rely on global sensor networks .epedat , radius and qlarm - earthquake loss assessment for response and mitigation are a few examples .further , full - fledged global applicability also implies being able to use the model to estimate losses at different geographic levels ( for example , loss estimation at cities , counties , states and countries ) .the model presented in this paper explores how global applicability can be achieved . among the earthquake loss estimation models that have been referenced ,eler , emergeo , epedat , extremum , hazus , inlet , pager , qlarm , quake - loss , selena and teles support post - event analysis . among these , models such as , eler , epedat , hazus , inlet and teles are region restricted . while these models may provide close to accurate loss estimates , yet they do not support global earthquakes .this may be due to the reliance of the models on regional seismic networks .the emergeo earthquake model produces maps of mmi and peak ground acceleration ( pga ) and can predict damages .loss estimates are not a focus in the model .both the extremum and quakeloss models rely on multiple data sources but are focused on structural and human losses .financial loss estimates are not considered in both models .pager ( prompt assessment of global earthquakes for response ) provides fatality and economic loss impact estimates . however , pager does not determine region specific loss data .global financial and economic organisations need to know the losses ( estimates ) incurred at different geographical levels .the qlarm model calculates human losses and damage in a given human settlement .however , qlarm does not focus on estimating financial losses .the selena model and the complementing rise ( risk illustrator for selena ) visualisation software computes real - time loss estimates and presents the losses visually. however , there seems to be less automation along the pipeline from obtaining real - time data to visualising the losses .the real - time data needs to be provided by the user to the selena model .research that is pursued for automated post - event estimation of financial losses globally is sparse at best , though many loss models are available in the public domain .the research reported in this paper is motivated towards the development of ( a ) a real - time , ( b ) a post - event , ( c ) a multiple sensor data relying and ( d ) a globally applicable loss model . to achieve thisthere is a need to support rapid data ingestion , rapid loss estimation , rapid visualisation and integration of data from multiple data sources and rapid visualisation at multiple geographic levels .the * * a**utomated * * p**ost-**e**vent * * e**arthquake * * l**oss * * e**stimation and * * v**isualisation ( ape - elev ) system is proposed , which comprises three primary modules , namely the earthquake loss estimator ( ele ) , the earthquake visualiser ( ev ) and the elev database ( elev - db ) . 
the ele module is built on pager and shakemap for accessing real - time earthquake data and estimating losses at different geographic levels .the ele module computes financial losses .visualisation of the losses is facilitated by the ev module .the elev - db module aids the functioning of the ele and ev modules .the remainder of this paper is organised as follows .section [ centralisedarchitecture ] proposes a centralised architecture for the automated post - event earthquake loss estimator and visualiser ( ape - elev ) .the loss estimation module is presented in section [ estimator ] and the loss visualiser module is presented in section [ visualiser ] .section [ distributedarchitecture ] presents a distributed architecture for the ape - elev and how estimation and visualisation are distributed across the server and the client respectively .section [ experiments ] presents one test case using ape - elev and a validation study of the model using ten global earthquakes .section [ conclusion ] concludes the paper .the automated post - event earthquake loss estimator and visualiser ( ape - elev ) is a system that determines expected losses due to the occurrence of an earthquake ( on building that are exposed to the earthquake , otherwise called exposure ) and graphically display these losses .decision makers in financial organisations , governmental agencies working toward disaster management and emergency response teams can benefit from interpreting the output produced by ape - elev for aiding imminent decision making .the output can also be adjusted for the benefit of the decision maker by changing the exposure data .the ape - elev system determines two types of losses .firstly , the ground up loss , referred to as gul which is the entire amount of an insurance loss , including deductibles , before applying any retention or reinsurance .secondly , the net of facultative loss , referred to as nfl which is the entire amount of an insurance loss , including deductibles , primary retention and any reinsurance .the determined losses can be visualised at four geographic levels , namely country , state , county and city , on a geo - browser .the country , state and county levels are sometimes referred to as regions , while the city level is referred to as both point and population centre .indicators are defined to facilitate visualisation at the region level ; indicators are either event - specific ( for example , losses at regions ) or geography - specific ( for example , population at cities or regions ) .ape - elev is composed of three primary modules , namely the earthquake loss estimator and visualiser database ( elev - db ) , the earthquake loss estimator ( ele ) and the earthquake visualiser ( ev ) .figure 2 shows the architecture of ape - elev . the elev - db module is a collection of tables related to an event and geographic data . the ele model ( see figure 2 ( top ) ) as the name suggests estimates the losses incurred when an earthquake occurs . the ev model ( see figure 2 ( bottom ) )again as the name suggests facilitates the visualisation of the loss estimates generated by the ele model .the elev - db module comprises seven tables which contribute to the working of the ele and the ev modules .the tables are : 1 . , which consists of industrial data for ground up exposure , 2 . , which consists of industrial data for net of facultative exposure , 3 . , which consists of event data , 4 . , which consists of a set of indicators , 5 . 
, which consists of geographic information that is used to map lower geographic levels onto higher geographic levels ( for example , mapping of cities onto counties or counties onto state ) , 6 . , which consists of data that is generated from the jaiswal and wald mean damage ratio ( mdr ) model , and 7 . , which comprises loss data populated by the ele module .the ele module , as shown in figure 2 ( top ) , comprises three sub - modules , namely the hazard , vulnerability and loss modules .the hazard module receives two inputs , firstly , the data on cities ( i.e. population centres with more than one thousand people ) affected by the earthquake , and secondly , geographic information required for mapping lower geographic levels onto higher geographic levels .the hazard module produces the measure of severity of an earthquake , otherwise referred to as the modified mercalli intensity ( mmi ) , in a city and region .the mmi values along with data from are used by the vulnerability module to produce mdr values .this data is employed by the loss module along with two types of exposure data , namely ground up exposure and net of facultative exposure to generate both the gul and nfl losses .the event data extractor receives the notification of the event and initiates the ele .the ev module , as shown in figure 2 ( bottom ) , comprises five sub - modules , namely the exposure data visualiser , loss data visualiser , hazard data visualiser , static data visualiser and the portfolio visualiser .the visualiser modules employ a geo - browser for graphical display .the exposure data visualiser presents the exposure for different geographic levels .the loss data visualiser presents the gul and nfl for different geographic levels .the hazard data visualiser presents the mmi and mdr for different geographic levels .static data visualiser is employed for presenting geography - specific indicators , and as the name implies these indicator values do not change from one event to another .the portfolio visualiser presents a comparison of losses and exposures .the earthquake visualiser mapping engine ( ev - me ) module facilitates visualisation of data on a geo - browser .having presented the architecture of ape - elev , it is also necessary to consider how the ele , ev and elev - db modules and their sub - modules glue together for coherent functioning .the data required to kick - start ape - elev is obtained before the occurrence of an earthquake or in a pre - event phase .an accumulation model is used to generate the ground up and net of facultative exposures at the region level .casualties are proportional to the number of people present in the affected area and the quantity and value of buildings , infrastructure and other property in this area .the accumulation model quantifies regional exposure based on the whether economic losses need to be determined for the assets insured by the insurance / reinsurance company . in the research reported in this paper ,the accumulation model is a black box used by the industrial partner supporting this research and the model generated gul and nfl exposures for a given region .the region level exposure is then disaggregated into cities ( i.e. , population centres that fall within the region ) based on the percentage of population . 
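a minimal sketch of this population-based disaggregation is shown below; the city names, populations and the regional exposure figure are hypothetical and do not come from the accumulation model.

```python
# Sketch of the pre-event exposure disaggregation: a region's ground up (or
# net of facultative) exposure is split across its population centres in
# proportion to their share of the region's population. Values are illustrative.

def disaggregate_exposure(region_exposure, city_populations):
    """city_populations: dict city_id -> population for cities in the region."""
    total = sum(city_populations.values())
    return {city: region_exposure * pop / total
            for city, pop in city_populations.items()}

city_exposure = disaggregate_exposure(
    region_exposure=5.0e9,                               # USD, hypothetical
    city_populations={"city_a": 900_000, "city_b": 600_000, "city_c": 300_000})
print(city_exposure)   # city_a gets half, city_b one third, city_c one sixth
```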
the city level exposure is further used by the ele module in the post-event phase. for an earthquake event that has just occurred or is unfolding, we first need to be notified of the event. an automated system for notifying earthquakes is shakecast lite. the ele module employs shakecast lite for notification alerts, which are received by the event data extractor. when the notification alert is received, the ele module is instantiated. further, we require real-time data of the earthquake. the prompt assessment of global earthquakes for response (pager) is an automated system that can provide such real-time data. the ele module employs the real-time data from pager/shakemap, which is acquired as an .xml file. the .xml file is then parsed to extract event-related information that is stored in the event data table of elev-db. information such as an affected city, its population and its mmi is provided to the hazard module. the hazard module computes the mmi at higher geographic levels using the mmi of the affected cities. if the population of the j-th affected city within the i-th affected region is denoted as p(city_{ij}) and its mmi as mmi(city_{ij}), then the mmi of the region is the population-weighted average

\[ \begin{aligned} mmi(region_i) = \frac{\sum_{j=1}^{m_i} p(city_{ij})\,mmi(city_{ij})}{\sum_{j=1}^{m_i} p(city_{ij})} \end{aligned} \]

where i = 1, ..., n (n is the total number of affected regions) and j = 1, ..., m_i (m_i is the number of affected cities in a region). the geographic data to evaluate whether an affected city lies within a given region is provided through the geographic information table of elev-db. the double subscript notation is used to capture the idea that there are population centres which are affected by the earthquake within a larger affected region. for example, consider an earthquake that affects two counties, county_1 and county_2. since there are two affected counties, i iterates two times in the equation. assume there are three affected cities in county_1, namely city_1, city_2 and city_3, with populations p(city_1), p(city_2) and p(city_3) and mmis mmi(city_1), mmi(city_2) and mmi(city_3) respectively. for this county, j iterates three times (three cities are in the affected region). the mmi at the county level for county_1 is equal to

\[ \begin{aligned} mmi(county_1) = \frac{p(city_1)\,mmi(city_1) + p(city_2)\,mmi(city_2) + p(city_3)\,mmi(city_3)}{p(city_1) + p(city_2) + p(city_3)} \end{aligned} \]

assume four affected cities in county_2, namely city_4, city_5, city_6 and city_7, with populations p(city_4), ..., p(city_7) and mmis mmi(city_4), ..., mmi(city_7) respectively. for this county, j iterates four times (four cities are in the affected region). the mmi at the county level for county_2 is

\[ \begin{aligned} mmi(county_2) = \frac{p(city_4)\,mmi(city_4) + p(city_5)\,mmi(city_5) + p(city_6)\,mmi(city_6) + p(city_7)\,mmi(city_7)}{p(city_4) + p(city_5) + p(city_6) + p(city_7)} \end{aligned} \]

consider that both counties, county_1 and county_2, are in the same state, state_1, with the county populations denoted as p(county_1) and p(county_2) and the county mmis obtained from the above equations. the mmi at the state level for state_1 is

\[ \begin{aligned} mmi(state_1) = \frac{p(county_1)\,mmi(county_1) + p(county_2)\,mmi(county_2)}{p(county_1) + p(county_2)} \end{aligned} \]

the mmi values computed in this way are then utilised by the vulnerability module to compute the corresponding mdr values. unlike the hazard module, the city level is also considered in the vulnerability module, and therefore the geographic level index ranges from 1 to 4. it is worthwhile to note that mmi values range from i to xii. the mdr table, which was originally generated from the jaiswal and wald mdr model, provides the mdr value corresponding to an integer mmi value. should a floating-point mmi value be obtained from the computations of the hazard module, the mdr values are computed by linear interpolation in the vulnerability module.
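the hazard-vulnerability-loss chain described above can be summarised in a short sketch. the mdr values below are placeholders rather than the jaiswal and wald table, and all populations, mmi values and exposures are illustrative.

```python
# Sketch of the chain: (i) region MMI is the population-weighted average of
# affected-city MMIs, (ii) MDR is linearly interpolated between integer MMI
# values, and (iii) loss = MDR x exposure at the city level, summed per region.

MDR_TABLE = {5: 0.000, 6: 0.002, 7: 0.010, 8: 0.060, 9: 0.200, 10: 0.450}  # placeholders

def mdr(mmi):
    lo = int(mmi)
    if mmi == lo:
        return MDR_TABLE[lo]
    frac = mmi - lo                                   # e.g. MMI 7.5 is halfway VII-VIII
    return (1 - frac) * MDR_TABLE[lo] + frac * MDR_TABLE[lo + 1]

def region_mmi_and_loss(cities):
    """cities: list of (population, mmi, exposure_usd) for one region."""
    weighted_mmi = sum(p * m for p, m, _ in cities) / sum(p for p, _, _ in cities)
    loss = sum(mdr(m) * e for _, m, e in cities)      # ground up loss for the region
    return weighted_mmi, loss

county_1 = [(120_000, 7.2, 2.0e8), (80_000, 6.8, 1.2e8), (50_000, 6.1, 6.0e7)]
print(region_mmi_and_loss(county_1))
```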
for example , if mmi is obtained as 7.5 from the hazard module , then the mdr values corresponding to mmi vii and mmi viii are interpolated in the vulnerability module to obtain the mdr value for mmi-7.5 .such a technique is employed in hazus .the mdr value of a city is provided to the loss module , along with the ground up and the net of facultative exposure data from and .the gul and nfl of a city are computed by multiplying the mdr values for a city with the exposure of the city .the city losses are then aggregated onto higher geographic levels using to compute the losses on the county , state and country levels .the total loss corresponding to an event is provided to , while the regional losses corresponding to an event is provided to and losses related to a specific line of business in .line of business refers to a statutory set of insurance / reinsurance policies to define coverage .the coverage may or may not affect a strategic business unit .the hierarchies structures of lines of business are property - fire insurance , business interruption and natural catastrophes ; casualty - liability , motor , non - life accident and health ; special lines - aviation , engineering , marine ; credit and surety .these lines of business are either industrial , personal or commercial coverages .the elev - db module plays an important role in providing data to and receiving data from the ele module . during the period from the notification of an event until completion of computing losses , tables , and are modified .tables , , and provide input to the ele module .the five sub - modules of ev , namely the exposure data visualiser , the loss data visualiser , the hazard data visualiser , the static data visualiser and the portfolio visualiser operate in parallel .this is unlike the ele sub - modules that operate in sequence .the functioning of the sub - modules of ev are nevertheless presented sequentially in this section for the sake of convenience .the exposure data visualiser utilises and for displaying two types of exposures , the ground up exposure and the net of facultative exposure .the latitude , longitude and geography related indicators of all regions are extracted from and provided to the earthquake visualiser mapping engine ( ev - me ) .the ev - me module generates a .kml ( keyhole markup language ) file that contains place marks which highlight the exposure of the regions .the .kml format is compatible for visualisation on geo - browsers , and in this research google earth is employed .the thematic mapping engine ( tme ) is the underlying building block of ev - me .a number of visualisation techniques such as bar , prism , choropleth , collada and push pins are made available for facilitating analysis of the data .the loss data visualiser utilises from which regional loss data is extracted for displaying the ground up and net of facultative losses .similar to the exposure data visualiser , the ev - me module generates a .kml file that is viewable on google earth .the hazard data visualiser utilises and from which regional and point hazard data are extracted respectively for displaying mmi and mdr at all geographic levels .similar to the above modules a .kml file is generated by the ev - me module .the static data visualiser again utilises and from which cities affected by the event and static - data related to the affected cities are extracted respectively .a .kml file is generated by the ev - me module and the extracted data is visualised .the portfolio visualiser that is incorporated 
within the ev module compares losses and exposure ( of areas affected by the event ) by line of business .data related to the distribution of total losses by line of business such as industrial , personal and commercial is extracted from . since visualisations are provided on pie - charts , the ev - me module is not employed .the distributed ape - elev comprises the server system and the client system , as shown in figure 3 , and are considered in the following sub - sections .the ape - elev server system consists of the elev - db database , the ele module and an ev module .the elev - db and the ele module are similar to those employed in the centralised architecture .the ev module is different from the centralised architecture as the geo - browser , the web browser and the portfolio visualiser are located on the client system . to facilitate the handling of client requests , an additional sub - module is required on the server visualiser system , and therefore the data handler is employed which acts as an interface between client requests and the data available for visualisation that is stored in the database .four handlers are available , namely the exposure data handler , the hazard data handler , the loss data handler and the static data handler .the exposure data handler retrieves the exposure for different geographic levels .the loss data handler retrieves gul and nfl for different geographic levels .the hazard data handler retrieves mmi and mdr for different geographic levels .the static data handler retrieves geography - specific indicators .the mapping engine receives data from the handlers and facilitates the visualisation of data on the client system .it is built on the thematic mapping engine ( tme ) and generates .kml files . the kml file repository stores the .kml files generated by the mapping engine .the portfolio generator is built on the google chart api and presents a comparison of losses and exposures as pie - charts .the client system in the distributed ape - elev is a client visualiser that consists a geo - browser , an event viewer and a portfolio viewer .figure 4 is the illustration of interactions between the client and server modules .the loss estimation module executes step 1 to step 5 after it receives an earthquake notification , thereby storing loss values in the database .the client system can raise two type of visualisation requests , those to the data handler and to the portfolio generator . a visualisation request to the data handleris made by the event viewer .based on the type of data that needs to be visualised , the exposure , loss , hazard or static data handlers are invoked .the handler retrieves data from elev - db and a .kml file is generated in the kml file repository .the event viewer after receiving a .kml file link requests to read the file and is accessed by the geo - browser on the client system .a visualisation request to the portfolio generator again retrieves loss and exposure data from elev - db .the google chart api is used to generate pie - charts in a repository .the portfolio viewer can then access the pie - charts on the client system .there are seven benefits of distributing the modules of ape - elev on a server and a client : 1 .the server system can facilitate archiving for multiple users .this presents the opportunity for a user to manage his workspace and archive earthquakes of his interest .the server system is accessible to the client but is concealed from the client . 
therefore the installation of third party softwares such as shakecast lite and the thematic mapping engine which are used in the development of ape - elev is not required on the client system as they are made available from the server .it needs to be however noted that the installation of a geo - browser is mandatory to view .kml files on the client system .3 . there is no data management on the client system .since multiple external data sources including real - time earthquake data , exposure data , geography data and geometry data are ingested by ape - elev , user management of these data sources would be cumbersome . in distributed ape - elev ,data management is carried out at the server .there are no repositories on the client system .should a user require to analyse a large number of earthquakes , then the kml file and pie - chart repositories can be large .the client system is granted access to the repositories that are situated on the server .the database consisting of voluminous data created by ape - elev is resident on the server system .the data is voluminous due to the integration of geometry , geographic , exposure and event data which further produces loss and hazard data at multiple geographic levels .ape - elev can be made globally accessible by hosting the server system on the world wide web .7 . the client system can be made available on multiple platforms such as tablets , smartphones and personal digital assistants ( pdas ) .the availability of ape - elev essentially requires internet access .kml data will require a geo - browser enabled platform .administrative privileges to the server will be required for decision makers to be able to use the distributed ape - elev to their benefit of not merely interpreting the output of ape - elev using the default exposure set but using a custom exposure . 
as of where the current development of distributed ape - elev stands the data management facilitated by the server limits the user ability to adjust input data and customise the output data; the centralised system lends itself more to such custom user requirements .consequently , multiplier indices considered in section [ validationstudy ] can not be set by the user and this flexibility needs to be incorporated in future research .this section in the first instance considers the experimental platform and the user interface of ape - elev , followed by feasibility and validation studies of the ape - elev model .the feasibility of ape - elev is confirmed using a test case earthquake of magnitude 9.0 that occurred on 11th march 2011 , commonly known as the tohoku earthquake or referred to as near the east coast of honshu , japan with an event i d usc0001xgp in pager .the validation study considers 10 global earthquakes and the expected losses computed by ape - elev is compared against normalised historic loss data .the validation study is also pursued to determine the probability of the expected losses falling within a pre - defined loss threshold .the data related to the earthquake was available on the pager archive and shakemap archive .the event data extractor in the ape - elev architecture fetches data related to the event from the pager archive in .xml format and instantiates the ele module .after the ele module is instantiated , the losses are estimated as considered in section [ estimator ] .the ev module is then employed to visualise the estimated losses .geometry data for the geographic levels was obtained from the global administrative areas database , as shapefiles .the shapefiles obtained were large in size containing accurate boundary specification .since the experiment reported here was a preliminary test , approximate boundary specifications were sufficient , and therefore the shapefile was simplified using the mapshaper tool .figure 5 is a screenshot of the visualiser .the inline map shown on the screenshot represents the shakemap representation of the earthquake .the earthquake related data is shown on the right - hand side of the map .the four visualisers of the ev module are listed under google earth visualisation as static data , exposure data , hazard data and loss data .the visualisation techniques ( choropleth in the screenshot ) are available in a drop - down box .the shakemap link presents the shakemap on the google earth application .the ground up and net of facultative losses computed by the ele module are displayed under global earthquake loss model .the portfolio loss link presents four pie charts that compares the losses and exposures by line of business such as industrial , personal and commercial .the test case employed in the feasibility study is magnitude 9.0 , which occurred in tohoku , japan on 11 march 2011 that struck off the pacific coast of japan at 05:46 utc on friday , 11 march 2011 .this recent earthquake was a major catastrophe and affected 28 prefectures .it is worthwhile to note that the catastrophe was due to both a tsunami and an earthquake .the ape - elev model does not incorporate any mechanism to differentiate between the tsunami and the earthquake related losses .this differentiation , however , is achieved in the model since the input data from usgs pager and shakecast differentiates the catastrophe by producing earthquake related data .therefore , the model inherently produces loss estimates for the catastrophe data provided and its 
accuracy is dependent on the input .figures 6 - 10 are a set of screenshots obtained from the visualiser .figure 6 shows the mmi of the affected prefectures using the prism visualisation technique .the gradient scale on the left hand side shows the mmi at the prefectures .the right most pop - up shows gul and nfl for the earthquake .the pop - up in the centre shows the exposure , population and hazard data of shizuoka prefecture .figure 7 shows the mdr of the affected prefectures .the choropleth visualisation technique is employed for representing the mdr .the gradient scale on the left hand side shows the mmi at the prefectures .the pop - up shown on the right side shows information relevant to the earthquake for japan and the pop - up in the centre shows regional information for the fukushima prefecture .figure 8 shows the superimposition of mdr and population of the affected prefectures .choropleth is employed for visualising mdr of the prefectures , prisms are employed for visualising nfl and push - pins are used for visualising populations .the two gradient scales on the left side show the scale of mdr and populations .the pop - up shown on the right side shows information relevant to the earthquake and the pop - up in the centre shows regional information relevant to miyagi prefecture .figure 9 shows the mmi of the affected prefectures using choropleth , the population in the prefectures using human push - pins and the estimated losses using prisms .the two gradient scales on the left side show the scale of mmi and population .the pop - up on the right side shows the estimated loss information for the entire event in the gul and nfl categories .the pie charts indicate the losses for industrial , personal , commercial and other lines of business for the exposure data used .figure 10 shows a different view of information visualised in figure 9 .the mmi of the affected prefectures using choropleth , the population in the prefectures using human push - pins and the estimated losses using prisms .mmi and population are shown on the gradient scale .while the right - most pop up showing the pie charts indicates the loss for the entire event , the pop up in the centre shows the losses specific to the saitama prefecture .the gul and nfl aggregated for the prefecture along with information relevant to the prefecture and the event are presented .figures 11 - 18 are screenshots of different alert versions , of the test - case earthquake which shows the evolving view of the earthquake and how losses can be rapidly estimated .the mmi of the affected prefectures are shown using choropleth visualisation technique and the height of the prisms are indicative of the ground up losses . were received within the first day after the event , within the same week after the event , within the same month after the event and the remaining alerts within six months after the event .figure 11 is based on the first alert , which presented data for an overall magnitude of 7.9 twenty two minutes and fifty eight seconds after the event occurred . in this alert , as shown in the figure fourteen prefectures are affected - six prefectures with mmi vii ( dark yellow ) , six prefectures with mmi vi ( light yellow ) and two prefectures with mmi v ( green ) .the ground up loss for the prefectures are estimated and presented above the prisms indicative of the magnitude of the loss .the estimated losses are highest for the chiba and kanagawa prefectures . 
figure 12 is based on the third alert, which presented data for an overall magnitude of 8.8 one hour and fifteen minutes after the event occurred. in this alert, more data was available and was used to update the first alert. while there is a difference in the data showing the magnitude of the earthquake, the mmi data and the estimates for the ground up loss remained the same. figure 13 is based on the fifth alert, which presented data for an overall magnitude of 8.9 two hours and forty-four minutes after the event. the mmi information of the prefectures was updated: six prefectures with mmi vii (dark yellow), eight prefectures with mmi vi (light yellow), five prefectures with mmi v (light green) and three prefectures with mmi iv (light blue). the loss estimates for the prefectures changed rapidly after this alert. for example, for the chiba and kanagawa prefectures the ground up loss estimates increased by approximately 8 times relative to the first and third alerts. the sensor data in this alert gathered more information about the prefectures which are land-locked. figure 14 is based on the seventh alert, which presented data for an overall magnitude of 9.0 four days and nine hours after the event. again the mmi information of the prefectures is updated with more accurate information gathered by the sensors. one prefecture has an mmi of viii and the ground up loss estimates of the prefectures around the chiba and kanagawa prefectures have increased. more prefectures to the south of the island have an mmi of iv, though the losses estimated there are zero. figure 15 is based on the ninth alert, which presented data for a magnitude similar to the previous alert and was received one week and one day after the event. the data for the subsequent alerts remain almost the same, with minor details updated. while the previous alerts provided an evolving view of the hazard, vulnerability and loss, from this alert onwards a nearly constant view is obtained. again, loss estimates in the prefectures in the vicinity of the coastal prefectures are updated. figure 16, figure 17 and figure 18 are based on alerts , and respectively. the overall data visualised in these alerts are more or less the same, with minimal updates to the mmi and the losses estimated for the prefectures.
a study that compares the predicted losses of ten global earthquakes against historic loss data was pursued in order to validate the ape-elev model. table 1 shows the list of earthquakes selected for this study, their date of occurrence (dd-mm-yyyy), magnitude, latitude and longitude, historic losses in millions of usd in the year of occurrence of the earthquake, the adjustment multipliers used to normalise the historic losses to 2012 usd, predicted losses in millions of usd and the percent error between the normalised historic and predicted losses. the earthquakes were selected such that (a) they were distributed geographically across different continents, (b) their magnitude was over 5.5, and (c) they had occurred in the last 30 years.

table 1. earthquake events used in the validation study. the four middle columns are the adjustment multipliers: inflation (ipd), inflation-corrected wealth, per-capita wealth and population.

region affected | country | date | mag | lat | long | historic losses (millions usd, year y) | infl. (ipd) | wealth (infl.-corr.) | wealth (per capita) | pop. | normalised historic losses (millions 2012 usd) | predicted losses (millions 2012 usd) | percent error (%)
libertador ohiggins | chile | 11/03/2010 | 6.9 | -34.2592 | -71.9288 | 16.0500 | 1.0558 | 0.9957 | 0.9651 | 1.0318 | 16.8732 | 238.8136 | 1315.33
wnw of ferndale | usa | 09/01/2010 | 6.5 | 40.6520 | -124.6920 | 25.0000 | 1.0352 | 0.9850 | 0.9683 | 1.0172 | 25.4904 | 16.8655 | -33.84
california | usa | 28/06/1992 | 7.3 | 34.2012 | -116.4360 | 37.8403 | 1.4926 | 1.1539 | 0.9368 | 1.2316 | 65.1718 | 601.4143 | 822.81
ne of san simeon | usa | 22/12/2003 | 6.5 | 35.7058 | -121.1010 | 120.7670 | 1.1981 | 1.0003 | 0.9268 | 1.0793 | 144.7416 | 46.4220 | -67.93
 | usa and mexico | | | | | 400.0000 | - | - | - | - | 421.4479 | 488.3228 | 15.87
 | usa | | | | | 250.0000 | 1.0302 | 0.9897 | 0.9729 | 1.0172 | 254.9038 | 370.2505 | -
 | mexico | | | | | 150.0000 | 1.0853 | 1.0230 | 0.9893 | 1.0341 | 166.5441 | 118.0722 | -
california | usa | 18/10/1989 | 6.9 | 37.0400 | -121.8800 | 2,510.0000 | 1.6103 | 1.2119 | 0.9525 | 1.2724 | 4,898.6913 | 7,316.6145 | 49.36
south island of new zealand | new zealand | 13/06/2011 | 6.0 | -43.5800 | 172.7400 | 2,816.4549 | 0.9909 | 1.0165 | 1.0099 | 1.0066 | 2,836.7903 | 3,132.1219 | 10.41
south island of new zealand | new zealand | 21/02/2011 | 6.1 | -43.6000 | 172.7100 | 13,000.0000 | 1.0025 | 1.0047 | 0.9976 | 1.0070 | 13,093.8628 | 17,660.6445 | 34.88
california | usa | 17/01/1994 | 6.7 | 34.2130 | -118.5360 | 22,920.0000 | 1.4381 | 1.1106 | 0.9204 | 1.2066 | 36,606.3931 | 4,787.6419 | -86.92
tohoku | japan | 11/03/2011 | 9.0 | 38.2970 | 142.3730 | 37,200.0000 | 0.9935 | 0.9978 | 0.9873 | 1.0106 | 36,877.4566 | 4,611.4482 | -87.49

the historic data related to all the earthquakes were collected from multiple sources, namely the national geophysical data centre (ngdc), the united states geological survey (usgs), pager, shakemap, em-dat and cat-dat. the information collected includes event data, exposure data, hazard data and loss data. the collected loss data is denoted as , and is in usd of the year in which the earthquake occurred. normalisation of loss data is reported by , , and .
in this paper, the historic loss data is normalised to 2012 usd using the normalisation method described in the loss normalisation studies cited above. three adjustment multipliers are used for the normalisation. firstly, the inflation multiplier, which uses the implicit price deflator (ipd) for gross domestic product, a metric sometimes also referred to as gdpdef. using this metric, any output obtained at current prices is converted into constant-dollar gdp by taking inflation into account; the metric captures how much of the change in gdp relative to a base year is due to changes in the price level. this metric is available from the economic research division of the federal reserve bank of st. louis and from the us bureau of economic analysis. secondly, the population multiplier, which is the ratio of the population in 2012 to the population in the year of occurrence of the earthquake. the population data is available from the census data published by governmental agencies. thirdly, the wealth multiplier, which is computed as the inflation-corrected wealth adjustment placed on a per-capita basis. the inflation-corrected wealth adjustment for a year is obtained as the ratio of national wealth in 2012 to national wealth in that year, divided by the corresponding ratio of the consumer price index. the fixed assets and consumer durable goods (facdg) metric in a year is used as indicative of the wealth in that year. the computation of fixed assets captures private and governmental assets, and the computation of consumer durable goods takes into account non-business goods consumed by households. this metric is obtained from the us bureau of economic analysis (bea). the sole use of the measure of wealth does not account for inflation, and therefore the consumer price index (cpi) is taken into account. further, the wealth multiplier is adjusted for population to a per capita basis. the per capita adjustment is made because the increase in wealth depends on population, and the rates of change of wealth and population are different. writing the historic loss in year y as l_y, the inflation multiplier as i_y, the per-capita wealth multiplier as w_y and the population multiplier as n_y, the normalisation equation is l_2012 = l_y × i_y × w_y × n_y. since w_y is the inflation-corrected wealth ratio divided by n_y, this can be restated as l_2012 = l_y × i_y × (rw_y / rcpi_y), where rw_y is the ratio of facdg wealth in 2012 to that in year y and rcpi_y is the corresponding ratio of the cpi. if the implicit price deflator (ipd) index of the gdp is taken into account for computing the inflation-corrected wealth adjustment instead of the consumer price index (cpi), then the ipd ratio cancels against the inflation multiplier and the normalisation equation reduces to l_2012 = l_y × rw_y. in the research reported in this paper, however, the wealth adjustment is computed using the cpi-based form, which uses both the ipd and the cpi. this form takes into account the effect of population through consumption (the definition of the cpi) in the normalisation; however, there is no direct dependence on population in the latter two forms of the equation. there are challenges in considering the population for earthquake losses. for example, consider an area that was affected by a major earthquake 20 years ago and was sparsely populated then, which resulted in a minimal ground up loss. for normalising the loss of that earthquake to 2012, factors such as how densely populated that area was in 2012 and the ground up loss if the earthquake had occurred in 2012 need to be considered. for such a consideration, regional population statistics would need to be incorporated into the equation. consider for example the earthquake that affected wnw of ferndale, usa on 9 january 2010 with a magnitude of 6.5. the historic loss for this earthquake in 2010 us dollars is 25 million. this value needs to be normalised to 2012 usd. the implicit price deflator index for 2010 normalised to 2012 can be obtained as the ratio of the implicit price deflator in 2012 to the implicit price deflator in 2010.
for this event the ratio is 1.0352, so the inflation multiplier is i_2010 = 1.0352. computing the wealth multiplier for 2010 normalised to 2012 requires the computation of two indices, namely the inflation-corrected wealth ratio and the population multiplier. the wealth of the usa in 2012 is 51,117.4 billion usd and the wealth in 2010 is 48,758.9 billion usd, computed from the fixed assets and consumer durable goods account. therefore, the ratio of wealth of 2012 to 2010 is 51,117.4 / 48,758.9 ≈ 1.0484. the consumer price index (cpi) for 2012 is 231.227 and for 2010 is 217.230, so the ratio of the consumer price index of 2012 to 2010 is ≈ 1.0644. the inflation-corrected wealth ratio is obtained by dividing the ratio of wealth by the ratio of cpis of 2012 to 2010, which gives 1.0484 / 1.0644 ≈ 0.9850. the population of the us in 2012 was 314,055,800 and the population in 2010 was 308,745,538. therefore, the population multiplier is n_2010 = 1.0172. the per-capita wealth multiplier can then be obtained as w_2010 = 0.9850 / 1.0172 ≈ 0.9683. therefore, for the us earthquake in 2010, the loss normalised to 2012 us dollars is obtained as l_2012 = 25 × 1.0352 × 0.9683 × 1.0172 ≈ 25.49 million usd.

pager data (mmi at the city level, and the list of cities affected by an earthquake) for global earthquakes are only available after 2007. therefore, for earthquakes prior to 2008 an in-house computer script was developed to extract data from two sources. the first source was a list of cities whose population is greater than one thousand people. this list is provided by geonames and contains all the cities in the world whose population is more than one thousand. the model assumes population as point values for cities in all its computations. however, in reality population is a gradient, and the loss estimation technique presented cannot take this continuous nature into account; it underestimates losses because centres with fewer than a thousand people are not considered. the second source was the shakemap file, which is a representation of the affected grid on a map due to an earthquake and comprises a large set of point data (latitude, longitude and the mmi at that point). the script extracts the list of cities that are affected within the grid and their mmis. the cities are mapped onto their respective regions using the latitude and longitude information. the exposure data for the geographic levels is collected from publicly available sources. the above inputs were used to calculate losses using the method in the ape-elev model. as described earlier for the model, the mmi at the city level is used to compute the mdr at the same level using the jaiswal and wald mdr model, either by direct comparison or by interpolation. the exposure data, which is available for higher geographic levels, is disaggregated onto the city level based on population. the losses for a region are then computed by summing the losses of the individual cities within that region (the loss for an individual city is the product of the exposure and the mdr at that city). a number of obstacles were encountered during the validation study, which are as follows:
* exposure data had to be collected from a number of disparate sources and was not easy to obtain.
* hazard data is not readily available for events preceding 2008. to collect data for events prior to 2008, as presented above, an in-house script had to be developed.
* as data obtained from multiple sources which do not follow a standard convention were integrated in the validation study, significant efforts had to be made towards ordering and organising the data and eliminating irrelevant information from the sources.
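the normalisation arithmetic worked through above for the ferndale event can be collected into a short function. the sketch below is illustrative only: the function and variable names are ours rather than those of any ape-elev implementation, and the input values are the ones quoted in the worked example.

```python
# minimal sketch of the loss normalisation described above; names are
# illustrative and not taken from the ape-elev code base.

def normalise_loss(loss_year_y, ipd_ratio, wealth_ratio, cpi_ratio, pop_ratio):
    """normalise a year-y loss to 2012 usd.

    loss_year_y  -- historic loss in usd of the year of occurrence
    ipd_ratio    -- implicit price deflator, 2012 / year y (inflation multiplier)
    wealth_ratio -- facdg wealth, 2012 / year y
    cpi_ratio    -- consumer price index, 2012 / year y
    pop_ratio    -- population, 2012 / year y (population multiplier)
    """
    inflation_corrected_wealth = wealth_ratio / cpi_ratio
    per_capita_wealth_multiplier = inflation_corrected_wealth / pop_ratio
    return loss_year_y * ipd_ratio * per_capita_wealth_multiplier * pop_ratio

# worked ferndale (9 january 2010) example from the text
normalised = normalise_loss(
    loss_year_y=25.0,                    # millions of 2010 usd
    ipd_ratio=1.0352,
    wealth_ratio=51117.4 / 48758.9,      # ~1.0484
    cpi_ratio=231.227 / 217.230,         # ~1.0644
    pop_ratio=314055800 / 308745538,     # ~1.0172
)
print(round(normalised, 4))              # ~25.49 million 2012 usd
```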
despite the obstacles listed above, (a) event data was easily collected, (b) population data was publicly available and (c) the mmi to mdr conversion was straightforward to calculate based on the vulnerability curves used in pager. two column charts were generated, with events ordered by increasing historic losses. in figure 19, the predicted and historic losses are shown in millions of usd for events with historic losses less than 1 billion usd, and in figure 20 for events with historic losses greater than 1 billion usd. there are multiple sources of error in the validation study, which are as follows:
1. input errors, which refer to the flaws and inaccuracies in the input data to the model. only cities with a population of over 1000 were considered. this data is constructed on the assumption that population is a discrete distribution, while in reality it is continuous (population outside cities, and settlements with fewer than 1000 inhabitants, are not considered). the population data obtained from geonames was inaccurate, since a large number of cities presented zero population. this was partially overcome by doing manual look-ups in other reliable sources. however, conflicts between the census dates of geonames and those of the sources used for the manual look-ups persisted.
2. application errors, which refer to the inaccuracies and assumptions that exist within the model. the mmi of a city was converted to an mdr value using country-based mmi-mdr curves. the assumption here is that every city follows the same curve (and the same curve parameters) as its country. the losses for a few events are calculated in the currency of their country of origin. the value of the currency is then converted to us dollars based on an average conversion rate for the year in which the event occurred.
3. benchmark errors, which refer to the assumptions that exist in setting a benchmark. a range of values are available for historic insured losses, and it is difficult to determine which value should be selected as the benchmark for comparison against the predicted loss. for certain events, historic insured losses were not available, and therefore the total economic losses were used to estimate the insured loss. this was based on a countrywide take-up rate, which may not be accurate for certain regions within a country.
it is observed that there are two events in the sample which have over 100% error. the first event affected california on 28/06/1992 with a magnitude of 7.3. this error is likely because only the most recent exposure data for california was available for the validation study, thereby leading to a significant over-prediction. the second event occurred on 11/03/2010 in chile with a magnitude of 6.9. the over-prediction is in part likely due to the fact that exposure was disaggregated based on population. in this case, the assumption that exposure is proportional to population is less accurate, since only one city with a population of over 1000 was affected. the seven events that have less than 100% error indicate that the model is feasible. further accuracy can be achieved by calibrating the model. the loss predicted by the ape-elev model is a mean value for an earthquake.
to study the probability of the loss falling within a given threshold range, the standard normal cumulative distribution function \Phi is employed as follows: p(a_1 < l \le a_2) = \Phi\left[\frac{\ln(a_2) - \mu_{\ln(l)}}{\zeta}\right] - \Phi\left[\frac{\ln(a_1) - \mu_{\ln(l)}}{\zeta}\right], where \mu_{\ln(l)} is the predicted value of the logarithm of the loss obtained from the model, the loss l is assumed to be a lognormal random variable, and \zeta is the normalised standard deviation of the logarithm of the loss. figure 21 shows the estimated probability of each of the seven loss thresholds considered (expressed in millions of usd) for the earthquakes of table 1. these loss thresholds best represent the orders of magnitude of the losses involved and are therefore chosen for validating the results in this paper. different thresholds can be used by appropriately setting the values of a_1 and a_2 in the equation above. in this section we have evaluated the performance of ape-elev both in terms of how well its data acquisition and visualisation facilities are able to capture the evolving history of earthquake alerts and in terms of the performance of its simplistic loss model. the tohoku earthquake used in evaluating the feasibility demonstrates how data can be rapidly ingested from multiple sources to visualise earthquake alerts as the data related to the event evolves over hours, days and months after its occurrence. evaluation of loss models is tricky at best, due to the inherent difficulty in collecting consistent exposure and loss data for historic events. in the case of ape-elev it is important to remember that the goal is to produce, on a global basis, a crude loss estimate rapidly, as an event evolves, based on very limited information. in this context, the distribution of expected losses is much more important than the point estimates. our validation demonstrated that the methodology pioneered in pager for economic loss can be usefully applied in the context of portfolio losses.
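the threshold probabilities just described are simple to evaluate once \mu_{\ln(l)} and \zeta are known. the sketch below assumes the lognormal form given above; the numerical values of \mu_{\ln(l)}, \zeta and the thresholds are placeholders chosen for illustration, not outputs of ape-elev.

```python
import math

def phi(z):
    """standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def prob_loss_between(a1, a2, mu_ln_loss, zeta):
    """probability that a lognormal loss falls in (a1, a2]."""
    return phi((math.log(a2) - mu_ln_loss) / zeta) - phi((math.log(a1) - mu_ln_loss) / zeta)

# illustrative placeholder values only
mu_ln_loss = math.log(250.0)   # predicted log-loss (placeholder)
zeta = 0.74                    # normalised std of the log-loss (placeholder)
thresholds = [1, 10, 100, 1000, 10000, 100000]   # millions of usd (placeholder bins)
for lo, hi in zip(thresholds[:-1], thresholds[1:]):
    p = prob_loss_between(lo, hi, mu_ln_loss, zeta)
    print(f"p({lo} < loss <= {hi} musd) = {p:.3f}")
```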
in 50% of our evaluation events the observed historical losses and the predicted losses fall into the same loss threshold. in 90% of our test events the observed historical losses and the predicted losses fall into the two highest loss thresholds. given the limited data, the loss model gives reasonable order of magnitude estimates, but it is important that users be aware of the inherent limitations of the underlying approach.

in the time line of an earthquake, the sensor data provided by sources such as pager/shakemap evolves over time. for example, sensor data was updated fifteen times for the tohoku earthquake, ranging from within an hour to six months after the earthquake. the data was first issued twenty-three minutes after the earthquake and was updated four times during the first day alone. not only did the earthquake event unfold over time, but the data describing the event and our knowledge of the event evolved. the data available initially is not, on its own, sufficient to produce reliable loss estimates. therefore, analysing an event soon after it has occurred in order to generate reliable loss estimates is both challenging and important. for an earthquake model to be useful in the days and weeks after the event, it needs to support (a) rapid data ingestion, (b) rapid loss estimation, (c) rapid visualisation and integration of hazard, exposure and loss data from multiple sources, and (d) rapid visualisation of hazard, exposure and vulnerability loss data at multiple geographic levels. this paper has presented the design and development of such a model, ape-elev (automated post-event earthquake loss estimation and visualisation). the model comprises three modules: the earthquake loss estimator (ele), the earthquake visualiser (ev) and the elev database (elev-db). the ele module relies on multiple data sources for accessing real-time earthquake data. financial losses relevant to the insurance and reinsurance industry are particularly taken into account in the model and are estimated at different geographic levels. the visualisation of the losses on a geo-browser is facilitated by the ev module. the elev-db module aids the cohesive functioning of the ele and ev modules. the recent tohoku earthquake is used as a test case to demonstrate the feasibility of the ape-elev model and how an evolving view of the event is generated using the model. two types of losses, namely ground up and net of facultative losses, are computed for the earthquake. further, a set of ten global earthquakes is chosen to validate the model by (a) computing the percentage error between the predicted loss and historic loss values and (b) estimating the probability of loss thresholds for the earthquakes. in the study, all historic loss values are normalised to 2012 us dollars. the key observation is that the model produces reasonable order of magnitude estimates. a video demonstrating a prototype of the distributed ape-elev is available at http://www.blessonv.com/software/ape-elev . future work will aim to refine the model by calibrating the pager vulnerability curves (for economic losses) for a more accurate use in portfolio insured loss models. a comparison study of estimated losses against normalised historic losses for a larger number of recent earthquake events will be pursued.
extending ape - elev for secondary hazards such as tsunamis andfloods will be pursued .efforts will also be made towards augmenting the loss model results with any available historical data points .the distributed ape - elev system will be extended for taking custom user input for exposure and catastrophe data and for adjusting the output presentation as required .a study to quantify the input , benchmark and application errors and consider their impact on the estimated loss will be pursued .amini , j. , karami , j. , sarab , a. a. , and safarrad , t. , an evaluation of the radius model in assessing the damages caused by earthquake via gis ( case study region1 tehran ) , urban - regional studies and research journal , number 11 ( 2012 ) boomer , j. , spence , r. , erdik , m. , tabuchi , s. , aydinoglu , n. , booth , e. , re , d. del , and peterken , o. , development of an earthquake loss model for turkish catastrophe insurance , journal of seismology , volume 6 , number 3 , pp .431 - 446 ( 2002 ) eguchi , r. t. , goltz , j. d. , seligson , h. a. , flores , p. j. , blais , n. c. , heaton , t. h. , and bortugno , e. , real - time loss estimation as an emergency response decision support system : the early pot - earthquake response tool ( epedat ) , earthquake spectra , volume 13 , pp .815 - 832 ( 1997 ) erdik , m. , aydinoglu , n. , fahjan , y. , sesetyan , k. , demircioglu , m. , siyahi , b. , durukal , e. , ozbey , c. , biro , y. , akman , h. , yuzugullu , o. , earthquake risk assessment for istanbul metropolitan area , earthquake engineering and engineering vibration , volume 2 , issue 1 , pp.1 - 23 ( 2003 ) frolova , n. , larionov , v. , and bonnin , j. , earthquake casualties estimation in emergency mode , human casualties in earthquakes , advances in natural and technological hazards research 29 , spence , r. , so , e and scawthorn , c. ( editors ) , springer , pp .107 - 124 ( 2011 ) huyck , c. k. , chung , h. c . , cho , s. , mio , m. z. , ghosh , s. , eguchi , r. t. and mehrotra , s. , centralized web - based loss estimation tool : inlet for disaster response , proceeding of spie 6178 , 61780b ( 2006 ) kaestli , p. , wyss , m. , bonjour , c. , wiemer , s. , and wyss , b. m. , a new tool for estimating losses due to earthquakes : quakeloss2 , american geophysical union , fall meeting 2007 , abstract # s51a-0222 ( 2007 ) kamer , y. , demircioglu , m. b. , erdik , m. , hancilar , u. , harmandar , e. , sesetyan , k. , tuzun , c. , yenidogan , c. , and zulfikar , a. c. , earthquake loss estimation routine eler v 3.0 user manual , department of earthquake engineering , bogazici university , turkey ( 2010 ) miller , s. , muir - wood , r. , and boissonnade , a. , an exploration of trends in normalized weather - related catastrophe losses , climate extremes and society , diaz , h. f. , and murnane , r. j. , ( editors ) , pp .225 - 247 ( 2008 ) molina , s. , lang , d.h . , and lindholm , c.d ., selena - an open - source tool for seismic risk and loss assessment using a logic tree computation procedure , computers & geosciences , volume 36 , issue 3 , pp .257 - 269 ( 2010 ) muto , m. krishnan , s. , beck , j. l. , and mitrani - reiser , j. , seismic loss estimation based on end - to - end simulation , proceedings of the 1st international symposium on life - cycle civil engineering , lake como , italy ( 2008 ) pielke , r. a. , jr ., rubiera , j. , landsea , c. , fernandez , m. l. , and klein , r. 
, hurricane vulnerability in latin america and the carribean : normalized damage and loss potentials , natural hazards review , vol .4 , issue 3 . pp . 101 - 114 ( 2003 )sousa , m. l. , campos costa , a. , carvalho , a. , and coelho , e. , an automatic seismic scenario loss methodology integrated on a geographic information system , proceedings of the 13th world conference on earthquake engineering , vancouver , canada , paper no .2526 ( 2004 ) spencer , b.f . , myers , j. d. , and yang , g. , maeviz / neesgrid and applications overview , proceedings of the 1st international workshop on an earthquake loss estimation program for turkey , istanbul , turkey ( 2005 ) trendafiloski , g. , wyss , m. , and rosset , ph ., loss estimation module in the second generation software qlarm , human casualties in earthquakes , advances in natural and technological hazards research 29 , spence , r. , so , e and scawthorn , c. ( editors ) , springer , pp .95 - 106 ( 2011 ) wald , d. , lin , k. -w . , porter , k. and turner , l. , shakecast : automating and improving the use of shakemap for post - earthquake decision - making and response , earthquake spectra , volume 24 , no .533 - 553 ( 2008 ) wald , d. j. , earle , p. s. , allen , t. i. , jaiswal , k. , porter , k. , and hearne , m. , development of the u.s .geological survey s pager system ( prompt assessment of global earthquakes for response ) , proceedings of the 14th world conference on earthquake engineering , beijing , china ( 2008 )
|
an automated, real-time, globally applicable earthquake loss model and visualiser that relies on multiple sensor data sources is desirable for post-event earthquake analysis. to achieve this there is a need to support rapid data ingestion, loss estimation, integration of data from multiple data sources and rapid visualisation at multiple geographic levels. in this paper, the design and development of the automated post-event earthquake loss estimation and visualisation (ape-elev) system for real-time estimation and visualisation of insured losses incurred due to earthquakes is presented. a model for estimating ground up and net of facultative losses due to earthquakes in near real-time is implemented. since post-event data is often available immediately from multiple disparate sources, a geo-browser is employed to facilitate the visualisation and integration of earthquake hazard, exposure and loss data. the feasibility of ape-elev is demonstrated using a test case earthquake that occurred in tohoku, japan (2011). the ape-elev model is further validated for ten global earthquakes using industry loss data.
|
more than half of all humans are now living in cities .cities are responsible for a great deal of global energy consumption and of greenhouse gas emissions .the global challenges of sustainable development call for a quantitative theory of urban organization .there is a well - established connection between the density of an urban environment , and the need to travel within it .good quality of itineraries is one of the necessary conditions for avoiding stagnation and collapse of a city .studies of urban transportation networks have a long history . in his famous paper on the seven bridges of knigsberg published in 1736, l. euler had proven the first theorem of graph theory . in eulers solution , each urban landuse mass is considered as a node of a planar graph , and the bridges connecting them are the edges .euler had found that a route travelling along each edge in the planar graph representation of the ancient knigsberg did not exists . in the _primary _ graph representation of urban transport networks originated from the work of euler , the relationships between the different components of urban environments are often measured along streets and routes considered as edges , while the traffic end points and street junctions are treated as nodes . in the last century, primary city graphs have been used extensively in many studies devoted to the improving of transportation routes , the optimization of power grids , and the surveys of human mobility patterns .another graph representation of urban transport networks is based on the ideas of traffic engineering and queueing theory invented by a.k .erlang ( see ) .it arises naturally when we are interested in how much time a pedestrian or a vehicle would spend while travelling through a particular place in a city .in such a _ secondary _ graph representation , any space of motion is considered as a service station of a queuing network characterized by some time of service , and the relations between streets , squares , and round - abouts are traced through their junctions .travellers arriving to a place are either moving through it immediately or queuing until the space becomes available .once the place is passed through , the traveller is routed to its next station , which is chosen according to a probability distribution among all other open spaces linked to the given one in the urban environment . in general , the secondary graph representations of urban environments are not planar .moreover , they are essentially similar to those of `` dual information representation '' of a city map introduced by and to the `` dual graphs '' extensively investigated within the concept of _ space syntax _ , a theory developed in the late 1970s , that seeks to reveal the mutual effects of complex spatial urban networks on society and vice versa , . 
in space syntax theory ,built environments are treated as systems of spaces of vision subjected to a configuration analysis .being irrelevant to the physical distances , dual graphs representing the urban environments are removed from the physical space .spatial perception shapes peoples understanding of how a place is organized and eventually determines the pattern of local movement , .the aim of the space syntax study is to estimate the relative proximity between different locations and to associate these distances to the densities of human activity along the links connecting them , .the surprising accuracy of predictions of human behaviour in cities based on the purely topological analysis of different urban street layouts within the space syntax approach attracts meticulous attention .space syntax proves its usefulness for the planning and redevelopment of certain city districts around the world , the designing of commercial centres , museums , railway stations , and airports where easy way - finding is a significant issue .the decomposition of urban spatial networks into the complete sets of intersecting open spaces can be based on a number of different principles . in , while identifying a street over a plurality of routes on a city map , the named - street approach has been used , in which two different arcs of the primary city network were assigned to the same identification number ( i d ) provided they share the same street name .the main problem of the approach is that the meaning of a street name could vary from one district or quarter to another even within the same city .for instance , some streets in manhattan do not meet the continuity principle rather playing the role of local geographical coordinates . in ,an intersection continuity principle ( icn ) has been used : two edges forming the largest convex angle in a junction on the city map are assigned the highest continuity and therefore were coupled together , acquiring the same street identification number ( i d ) .the main problem with the icn principle is that the streets crossing under convex angles would artificially exchange their identifiers , which is not crucial for the study of the probability degree statistics , but makes it difficult to interpret the results if the dynamical modularity of the city is detected .it is also important to mention that the number of street ids identified within the icn principle usually exceeds substantially the actual number of street names in a city . in ,the probability degree statistics and some centrality measures in different world cities have been investigated for a number of one square mile representative samples . however , the decision on which a square mile would provide an adequate representation of a city is always questionable . in ,the approach of has been improved in the sense that two intersecting lines of vision were aggregated into the same node of the dual graph representation if and only if the angle between the linear continuation of the first line and the second line was less than or equal to a predefined threshold . if more than one continuation was available , the line forming the smaller angle was chosen . 
in our paper , we take a `` named - streets''-oriented point of view on the decomposition of urban spatial networks into the complete sets of intersecting open spaces following our previous works .being interested in the statistics of random walks defined on spatial networks of urban patterns , we assign an individual street i d code to each continuous segment of a street .the secondary graph is then constructed by mapping all edges of the primary graph shared the same street i d into nodes and all intersections among each pair of edges of the primary graph into the edges of the secondary graph connecting the corresponding nodes .in the present paper , we explore the secondary graph representations of different compact urban patterns ( the venetian channel network , the city of paris , enclosed by the peripheral boulevard , and the almost regular street grid in manhattan ) by means or random walks . in the forthcoming section ( sec .[ sec : connectivity ] ) we discuss the connectivity statistics of secondary graphs representing urban environments . in general , compact urban patterns have been developed under the deficits of physical space and therefore bear the multiple fingerprints of the physical landscapes being scale dependent in general .however , the large urban patterns which have not been spatially restricted during their development could constitute a highly heterogeneous scalable spatial networks as we demonstrate for the spatial network of the city of paris bounded by the peripheral boulevard . in the sec .[ sec : random_walks ] , we explain how random walks can be used in order to explore complex networks . in subsec .[ subsec : why_random_walks ] , we demonstrate that the transition operator of a random walk appears naturally as the representation of the group of graph automorphisms in a class of stochastic matrices .random walks provide us with an effective tool for the detailed structural analysis of connected undirected graphs exposing their symmetries .it is well known that while being defined on an undirected graph , random walks determine a unique stationary probability distribution for every node . in sec .[ subsec : times ] , we show that each node of a connected undirected graph can be characterized with respect to random walks by the expected recurrence time and the expected first passage time .the expected recurrence time is simply the inverse of the stationary probability distribution of random walks for the given node and is therefore the local property of the node .the expected first passage time figures out a global relation between the node and other nodes of the given graph accounting for all possible random paths toward the node accordingly to their respective probabilities . in sec .[ subsec : embedding ] , we show that for any undirected graph , it is possible to define a linear self - adjoint operator and then use its nice spectral properties in order to extract the information about the graph structure . in particular , we demonstrate that the complete set of orthonormal eigenvectors of the symmetric transition operator can be used in order to introduce the structure of euclidean space on the graph . in sec .[ subsec : fpt_euclidean ] , we show that any node of the graph can be represented as a vector in the -dimensional vector space and that euclidean distances and angles between nodes have clear probabilistic interpretations . 
in particular , the square of the norm of the vector representing a node of the graph equals to the expected first passage time to it by random walkers .we can conclude that random walks embed connected undirected graphs into -dimensional euclidean space .the main result of our paper is explained in sec .[ sec : intelligibility ] , where we have shown that the expected recurrence time scale apparently linear with the expected first passage times in compact urban environments we have studied . a similar strong positive relation between the local property of a place and its global properties with respect to other places in the dual graph of a city was known for a long time in the framework of spaces syntax theory .our approach based on investigation of complex networks by means of random walks allows us to extend the notion of intelligibility far beyond space syntax onto the entire domains of complex networks and the general theory of graphs .the degree of a node representing a place in the secondary graph representation of an urban environment is the number of locations directly adjacent to the given one in the city . in space syntax theory ,the degree of a node ( i.e. connectivity ) is considered as a local characteristic quantifying how well the space is connected to others in the urban pattern , .the probability degree distribution , ,\ ] ] suggests that a node selected uniformly at random has a certain degree with the probability .the probability degree distribution is an important concept characterizing the topology of complex networks .it originates from the early studies of random graphs by erds and rnyi , al _ as a common way to classify large graphs into categories such as _ random graphs _ al _ and scale - free networks .it has been reported earlier by that the secondary graphs representing the urban environments under the street - name identification principle exhibit the small - world property , but the scale - free probability degree distributions pertinent to the scale - free graphs can hardly be recognized . in general , compact city patterns do not provide us with sufficient data to conclude on the universality of degree statistics . to give an example ,we display in fig .[ fig1_09 ] the log - log plot of the numbers of venetian channels vs. the numbers of their junctions .the solid line indicates the cumulative empirical probability degree distribution , where is the total number of venetian channels ( including those in the giudecca island ) , and is the number of channels crossing precisely other channels .it is remarkable that the empirical probability degree distributions observed for the secondary graphs are usually broad indicating that the itineraries can cross different numbers of other routes .nevertheless , the distributions usually have a clearly recognizable maximum corresponding to the most probable number of junctions an average transport route has in the city .the distributions usually have a long right tail that decays faster then any power law due to just a few routes that cross many more others than in average , .this conclusion has been recently supported by where it has been suggested that in general the probability degree distributions of secondary graphs are scale - dependent .it is important to note that in the relatively large secondary graphs which may contain many thousands of nodes a power law tail can be observed in the probability degree distributions . 
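the construction of the secondary graph from street identifiers and the degree counts discussed above are straightforward to reproduce. the sketch below uses networkx on a toy primary graph; the street ids, junction labels and the data themselves are invented for illustration and are not taken from the venice, paris or manhattan datasets.

```python
import networkx as nx
from collections import Counter

# primary graph: junctions are nodes, street segments are edges,
# and every edge carries the id of the street it belongs to.
primary = nx.MultiGraph()
segments = [
    (1, 2, "s1"), (2, 3, "s1"), (3, 4, "s1"),   # street s1 runs through junctions 1-4
    (2, 5, "s2"), (5, 6, "s2"),
    (3, 6, "s3"), (6, 7, "s3"),
    (4, 7, "s4"),
]
for u, v, sid in segments:
    primary.add_edge(u, v, street=sid)

# secondary graph: one node per street id; two streets are joined by an edge
# whenever some junction of the primary graph is incident to both of them.
secondary = nx.Graph()
for junction in primary.nodes:
    streets_here = {data["street"] for _, _, data in primary.edges(junction, data=True)}
    secondary.add_nodes_from(streets_here)
    for a in streets_here:
        for b in streets_here:
            if a < b:
                secondary.add_edge(a, b)

degrees = dict(secondary.degree())
print("street degrees:", degrees)                          # number of intersecting streets
print("empirical degree distribution:", Counter(degrees.values()))
```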
in fig .[ fig2 ] , we have sketched the log - log plot of the numbers of open spaces in the secondary graph of paris ( consisting of 5131 interconnected open spaces enclosed by the peripheral boulevard ) versus the numbers of their junctions with others . the spatial network of paris forms a highly heterogeneous apparently scalable graph . in urban studies , scaling and universality have been found with a remarkable regularity .the evolution of social and economic life in cities increases with the population size : wages , income , growth domestic product , bank deposits , as well as the rate of innovations , measured by the number of new patents and employment in creative sectors scale super - linearly , over different years and nations , with statistically consistent exponents .the probable reason for the similarity is that highly complex , self - sustaining structures , whether cells , organisms , or cities constitute of an immense number of units are organized in a form of self - similar hierarchical branching networks , which grow with the size of the organism .a generic social dynamic underlying the scaling phenomena observed in cities implies that an increase in productive social opportunities , both in number and quality , leads to quantifiable changes in individual behaviours of inhabitants integrating them into a complex dynamical network .the famous rank - size distribution of city sizes over many countries is known as zipf s law .if we calculate the natural logarithm of the city rank in some countries and of the city size ( measured in terms of its population ) and then plot the resulting data in a diagram , we obtain a remarkable linear pattern with the slope of the line equals ( or , if cities have been ranked in the ascending order ) , .the similar centrality - rank distributions for the values of a space syntax measure quantifying the centrality of nodes in the secondary graphs of compact urban patterns have been recently reported by us in .a graph naturally arises as the outcome of a categorization , when we abstract any real world system by eliminating all but one of its features and by grouping things ( or places ) sharing a common attribute by classes or categories .for instance , the common attribute of all open spaces in a city is that we can move through them .all open spaces found in a city are considered as physically identical , so that we can regard them as nodes of a secondary graph , in which is the set of all such spaces , and is the set of all interconnections between them . for each graph , there exists a unique , up to permutations of rows and columns , adjacency matrix . 
in the special case of a finite simple graph ( an undirected graph with no self - loops ) ,the adjacency matrix is a -matrix such that if , , and otherwise .the degree of a node is therefore given by for weighted undirected graphs , the adjacency matrix is replaced by a symmetric positive affinity matrix .the set of graph automorphisms , the mappings of the graph to itself which preserve all of its structure , is specified by the symmetric group that includes all admissible permutations taking the node to some other node .the representation of consists of all matrices such that , and if .a linear transformation of the adjacency matrix belongs to if for any .it is clear that the relation ( [ permut_invar ] ) is satisfied if the entries of the tensor in ( [ lin_fun ] ) meet the following symmetry property : for any .since the action of preserves the conjugate classes of index partition structures , it follows that any appropriate tensor satisfying ( [ symmetry ] ) can be expressed as a linear combination of the following tensors : by substituting the above tensors into ( [ lin_fun ] ) and taking account on the symmetries , we obtain that any arbitrary linear permutation invariant function defined on a simple undirected graph must be of the following form , where and being arbitrary constants .if we impose , in addition , that the linear function preserves the connectivity , it follows that ( since the contributions of and are indeed incompatible with ( [ conn_nodes ] ) ) and the remaining constants should satisfy the relation . by introducing the new parameter , we can reformulate ( [ lin_fun2 ] ) in the following form , it is important to note that ( [ conn_nodes ] ) can be interpreted as a probability conservation relation , and therefore the linear function can be interpreted as a stochastic process . by substituting ( [ lin_fun3 ] ) into ( [ probab_nodes ] ) , we obtain in which the operator is nothing else but the generalized random walk transition operator for ] is a real positive stochastic matrix , it follows from the perron - frobenius theorem that its maximal eigenvalue is simple and equals 1 .the left eigenvector , associated with the maximal eigenvalue 1 is positive , and satisfies the normalization condition independently of $ ] . the left eigenvector is interpreted as the unique stationary distribution of this random walk .if the graph is not bipartite , any density function ( , ) asymptotically tends to the stationary distribution under the actions of the transition operator , let us consider a random walk starting from some node chosen randomly among all nodes of the undirected graph .it is clear from ( [ stationary_pi ] ) that for long enough random walks the probability to find a random walker in a certain node equals that is proportional to the degree of node .the expected recurrence time to is given by , and therefore depends on the local property of the node ( its degree ) .the first passage time is the expected number of steps required for the random walker to reach the node for the first time starting from a node randomly chosen among all nodes of the graph .this characteristic time is calculated as an average over all random paths toward the node taken into account in accordance with their respective probabilities .being the global characteristic of the node , estimates the level of accessibility to the node from the rest of the graph .let us now calculate the first passage time to a node by using spectral analysis of a self - adjoint transition operator . 
probably , lagrange was the first scientist who investigated a simple dynamical process ( diffusion ) in order to study the properties of a graph , .he calculated the spectrum of the laplace operator defined on a chain ( a linear graph ) of nodes in order to study the discretization of the acoustic equations .the idea of using the spectral properties of self - adjoint operators in order to extract information about graphs is standard in spectral graph theory and in theory of random walks on graphs , . in the following calculations , we take in the transition operator ( [ property ] ) .this choice allows us to compare our results directly with those known from the classical surveys on random walks , .the stationary distribution of random walks defines a unique measure on the set of nodes with respect to which the transition operator ( ( [ property ] ) for ) is self - adjoint , where is the adjoint operator , and is defined as the diagonal matrix . in particular , when is a simple undirected unweighted graph .the ordered set of real eigenvectors of the symmetric transition operator forms an orthonormal basis in hilbert space . the components of the first eigenvector belonging to the largest eigenvalue , describes the connectivity of nodes .the euclidean norm in the orthogonal complement of , , gives the probability that a random walker is not in .the eigenvectors , , belonging to the ordered eigenvalues describe the connectedness of the entire graph .the orthonormal system of functions is useful for decomposing normalized functions defined on .the symmetric transition operator projects any density on the eigenvector related to the stationary distribution , , in which is the vector belonging to the orthogonal complement of characterizing the transient process toward the stationary distribution induced by . given two different densities , it is clear that with respect to random walks they differ only on their transient parts , but not on the final stationary state .therefore , we can compare any two densities defined on by means of random walks . since all components , it is convenient to rescale the densities by dividing their components by , such that for example . then we define the squared euclidean distance between any two densities with respect to random walks by the sum over all times , where we have used dirac s bra - ket notations especially convenient for working with inner products and rank - one operators in hilbert space . 
in order to perform the summation over time in ( [ distance ] ) , it is convenient to use the spectral representation of , we conclude the description of the -dimensional euclidean space structure induced by random walks defined on by mentioning that every density can be characterised by the -dimensional vector with the norm defined by moreover , given two densities we can introduce a scalar product in the -dimensional euclidean space by so that the angle between can be calculated as random walks embed connected undirected graphs into the euclidean space .this embedding can be used directly in order to calculate the first passage times to individual nodes .indeed , let us consider the vector that represents the node in the canonical basis as a density function .in accordance to ( [ sqaured_norm ] ) , the vector has the squared norm of associated to random walks is it is important to note that in the theory of random walks the r.h.s .of ( [ norm_node ] ) is known as the spectral representation of the first passage time to the node from a node randomly chosen among all nodes of the graph accordingly to the stationary distribution .the first passage time , , can be directly used in order to characterize the level of accessibility of the node . the euclidean distance between any two nodes of the graph calculated in the euclidean space associated to random walks, also gets a clear probabilistic interpretation as the spectral representation of the commute time , the expected number of steps required for a random walker starting at to visit and then to return back to , .the commute time can be represented as a sum , , in which is the first hitting time which quantifies the expected number of steps a random walker starting from the node needs to reach for the first time , .the scalar product estimates the expected overlap of random paths towards the nodes and starting from a node randomly chosen in accordance with the stationary distribution of random walks .the normalized expected overlap of random paths given by the cosine of an angle calculated in the euclidean space associated to random walks has the structure of pearson s coefficient of linear correlations that reveals it s natural statistical interpretation .if the cosine of an angle ( [ angle ] ) is close to 1 ( zero angles ) , it indicates that the expected random paths toward both nodes are mostly identical .the value of cosine is close to -1 if the walkers share the same random paths but in the opposite direction .finally , the correlation coefficient equals 0 if the expected random paths toward the nodes do not overlap .it is important to mention that as usual the correlation between nodes does not necessary imply a direct causal relationship ( an immediate connection ) between them .it is intuitive that the time of recurrence to a node , , has to be positively related to the first passage time to it , : the faster a random walker hits the node for the first time , the more often he is expected to visit it in future . this intuition is supported by ( [ norm_node ] ) from which it follows that provided the sum is uniformly independent of the connectivity for all nodes .the possible relation between the local and global properties of nodes is the most profound feature of a complex network .it is interesting to note that this nontrivial property of eigenvectors seems to be true for the secondary graphs representing complex urban networks : the first passage times to the nodes scale apparently linearly with their connectivity . 
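the spectral quantities introduced above are easy to compute for any small undirected graph. the sketch below uses the standard expression for the mean first passage time to a node from the stationary distribution of a reversible random walk, f_i = \sum_{k \ge 2} \psi_{k,i}^2 / (\pi_i (1 - \lambda_k)), written in the orthonormal eigensystem of the symmetrized transition matrix; this is our reading of the formulas sketched above (with the lazy-walk parameter set to 1), and the five-node graph is a toy example rather than one of the city networks.

```python
import numpy as np

# toy undirected graph given by its adjacency matrix
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 1, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

deg = A.sum(axis=1)
pi = deg / deg.sum()                 # stationary distribution of the walk
recurrence = 1.0 / pi                # expected recurrence times

# symmetrized transition operator d^{-1/2} a d^{-1/2}
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
T_sym = D_inv_sqrt @ A @ D_inv_sqrt
lam, psi = np.linalg.eigh(T_sym)     # eigenvalues ascending, psi orthonormal

# first passage time to node i from the stationary distribution
mask = lam < 1.0 - 1e-12             # drop the perron eigenvalue lam = 1
fpt = (psi[:, mask] ** 2 / (1.0 - lam[mask])).sum(axis=1) / pi

# print degree, recurrence and first passage time side by side to eyeball
# the degree / first-passage relation discussed in the text
for i in range(len(deg)):
    print(f"node {i}: degree {int(deg[i])}, recurrence {recurrence[i]:.2f}, "
          f"first passage {fpt[i]:.2f}")
```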
in fig .[ fig3 ] , we have sketched the 2-dimensional projection of the euclidean space of 355 locations in manhattan ( new york ) set up by random walks .nodes of the secondary graph are shown by disks with radiuses taken proportional to the connectivity of the places .broadway , a wide avenue in manhattan which also runs into the bronx and westchester county , possesses the highest connectivity and located at the centre of the graph shown in fig . [ fig3 ] .other places are located at their euclidean distances from broadway calculated accordingly ( [ commute ] ) , and ( [ angle ] ) has been used in order to compute angles between broadway and other places . a part - whole relationship between local and global properties of the spaces of motionis known in space syntax theory as an _intelligibility _ property of urban pattern ,. the adequate level of intelligibility is proven to be a key determinant of the human behaviour in urban environments encouraging peoples way - finding abilities , . in space syntax theory ,the local property of an open space is qualified by its connectivity , while its global property is estimated by a special space syntax measure called integration. in the traditional space syntax analysis , the integration of a place into urban environments is estimated by the normalized sum of all graph theoretical distances toward the place from all other places in the city . in ,the integration of a place has been estimated by means of the centrality of the node in the dual graph representation of the urban environment .the first passage time to a node which we use in the present paper in order to quantify the relation of the node with other nodes in the graph has an immediate connection to neither the traditional space syntax integration measure discussed in , nor the centrality measure investigated in .however , the first passage time indicates a strong positive relation between the local and global properties of the spaces of motion in urban environments ( see fig [ fig4 ] ) in a pretty same way as it has been demonstrated in the classical space syntax analysis .the approach based on investigation of complex networks by means of random walks allows us to extended the notion of intelligibility far beyond the urban studies where it has been originally invented onto the entire domains of complex networks and the general theory of graphs .the work has been supported by the volkswagen foundation ( germany ) in the framework of the project `` _ network formation rules , random set graphs and generalized epidemic processes _ '' ( contract no az .: i/82 418 ) .barabsi , a .-2004 _ linked : how everything is connected to everything else _ , penguin .batty , m. , 2004 _ a new theory of space syntax _ ,ucl centre for advanced spatial analysis publications , casa working paper * 75*. bettencourt , l.m.a ., lobo , j. , helbing , d. , khnert , c. , west , g.b . , 2007 growth , innovation , scaling , and the pace of life in cities , _ pnas _ * 104 * 7301 - 7306 . published on - line on april 16 , 2007 , 10.1073/pnas.0610172104 .biggs , n. , lloyd , e. , wilson , r. , 1986 _ graph theory : 1736 - 1936_. oxford university press .brockmeyer , e. , halstrm , h.l ., jensen a. , 1948 the life and works of a.k .transactions of the danish academy of technical sciences _availiable on line at http://oldwww.com.dtu.dk/teletraffic/erlang.html cardillo , a. , scellato , s. , latora , v. , porta , s. , 2006 _ phys .e _ * 73 * , 066107 .figueiredo , l. , amorim , l. 
, 2005 _ continuity lines in the axial system _ , in a. van nes ( ed . ) : _ international space syntax symposium , tu delft , faculty of architecture , section of urban renewal and management _ , delft , pp .161 - 174 .figueiredo , l. , amorim , l. 2007 decoding the urban grid : or why cities are neither trees nor perfect grids ._ 6th international space syntax symposium _, 12 - 15 jun 2007 , istanbul , turkey .hillier , b. , hanson , j. , 1984 _ the social logic of space_. cambridge university press .isbn 0 - 521 - 36784 - 0 .hillier , b. , 1999 _ space is the machine : a configurational theory of architecture_. cambridge university press .isbn 0 - 521 - 64528-x .horn , r.a . ,johnson , c.r .matrix analysis_(chapter 8) , cambridge university press .jiang , b. claramunt , c. , 2004 topological analysis of urban street networks . _ environment and planning b : planning and design _ * 31 * , pion ltd . , 151- 162 .klarqvist , b. 1993 a space syntax glossary , _ nordisk arkitekturforskning _ * 2*. newman , m.e.j . , strogatz , s.h.,watts , d.j . 2001 random graphs with arbitrary degree distributions and their applications ._ physical review e _ * 64 * , 026118 .penn , a. , 2001 space syntax and spatial cognition .or , why the axial line ? in : peponis , j. and wineman , j. and bafna , s. , ( eds ) ._ proc . of the space syntax international symposium _, georgia institute of technology , atlanta , may 7 - 11 2001 .porta , s. , crucitti , p . ,latora , v. , 2006 _ physica a _ * 369 * , 853 .rosvall , m. , trusina , a. , p. , minnhagen , sneppen , k. , 2005 networks and cities : an information perspective .lett . _ * 94 * , 028701 .kwok tong soo , 2002 zipf s law for cities : a cross - country investigation ._ regional science and urban economics _ * 35*(3 ) , 239 - 263 .volchenkov , d. blanchard , ph .2007 random walks along the streets and canals in compact cities : spectral analysis , dynamical modularity , information , and statistical mechanics ._ physical review e _ * 75*(2 ) , i d 026104 .volchenkov , d. , blanchard , ph .2008 scaling and universality in city space syntax : between zipf and matthew ._ physica a _ * 387*/10 pp .2353 - 2364 .
|
topology of urban environments can be represented by means of graphs . we explore the graph representations of several compact urban patterns by random walks . the expected time of recurrence and the expected first passage time to a node scale apparently linearly with each other in all urban patterns we have studied . in space syntax theory , a positive relation between the local property of a node ( qualified by connectivity or by the recurrence time ) and the global property of the node ( estimated in our approach by the first passage time to it ) is known as intelligibility . our approach based on random walks allows us to extend the notion of intelligibility onto the entire domain of complex networks and graph theory . * keywords : * space syntax , random walks , recurrence times , first passage times
|
techniques for inferencing statistical values on markov random fields ( mrfs ) are fundamental techniques used in various fields involving pattern recognition , machine learning , and so on .the computation of statistical values on mrfs is generally intractable because of exponentially increasing computational costs . _ mean - field methods_ , which originated in the field of statistical physics , are the major techniques for computing statistical values approximately .loopy belief propagation methods are the most widely - used techniques in the mean - field methods , and various studies on them have been conducted .many mean - field methods on mrfs with discrete random variables have been developed thus far .however , as compared with the discrete cases , mean - field methods on mrfs with continuous random variables have not been developed well .one reason for this is thought to be that the application of loopy belief propagation methods to continuous mrfs is not straightforward and has not met with no success except in gaussian graphical models .although it is possible to approximately construct a loopy belief propagation method on a continuous mrf by approximating continuous functions using histograms divided by bins , this is not practical ._ nonnegative boltzmann machines _ ( nnbms ) are recurrent probabilistic neural network models that can describe multi - modal nonnegative data . for continuous nonnegative data ,an nnbm is expressed in terms of a maximum entropy distribution that matches the first and second order statistics of the data . because random variables in nnbms are bounded , nnbms are not standard multivariate gaussian distributions , but_ rectified gaussian distributions _ that appear in biological neural network models of the visual cortex , motor cortex , and head direction system , positive matrix factorization , nonnegative matrix factorization , nonnegative independent component analysis , and so on . a learning algorithm for nnbmswas formulated by means of a variational bayes method .because nnbms are generally intractable , we require an effective approximate approach in order to provide probabilistic inference and statistical learning algorithms for the nnbm .a mean - field method for nnbms was first proposed in reference .this method corresponds to a naive mean - field approximation , which is the most basic mean - field technique .a higher - order approximation than the naive mean - field approximation for nnbms was proposed by downs and recently , his method was improved by the author .these methods correspond to _ the thouless anderson palmer _( tap ) equation in statistical physics , and they are constructed using a perturbative approximative method referred to in statistical physics as the plefka expansion .recently , a mean - field method referred to as _ the diagonal consistency method _ was proposed .the diagonal consistency method is a powerful method that can increase the performance of mean - field methods by using a simple extension for them . in this paper, we propose an effective mean - field inference method for nnbms in which the tap equation , proposed in reference , is combined with the diagonal consistency method .the remainder of this paper is organized as follows . in section [ sec : nnbm ] the nnbm is introduced , and subsequently , the tap equation for the nnbm is formulated in section [ sec : gibbs&tap ] in accordance with reference .the proposed methods are described in section [ sec : propose ] . 
the main proposed method combined with the diagonal consistency methodis formulated in section [ subsec : i - susp ] . in section [ sec: numerical ] , we present some numerical results of statistical machine learning on nnbms and verify the performance of the proposed method numerically .finally , in section [ sec : conclusion ] we present our conclusions .in probabilistic information processing , various applications involve continuous nonnegative data , e.g. , digital image filtering , automatic human face recognition , etc . in general , standard gaussian distributions , which are unimodal distributions , are not good approximations for such data .nonnegative boltzmann machines are probabilistic machine learning models that can describe multi - modal nonnegative data , and were developed for the specific purpose of modeling continuous nonnegative data .let us consider an undirected graph , where is the set of vertices in the graph and is the set of undirected links .on the undirected graph , an nnbm is defined by the gibbs - boltzmann distribution , where is the energy function .the second summation in the energy function is taken over all the links in the graph .the expression is the normalized constant called the partition function .the notations denote the bias parameters and the notations denote the coupling parameters of the nnbm . all variables take nonnegative real values , and and are assumed .the distribution in equation ( [ eq : nnbm ] ) is called the rectified gaussian distribution , and it can represent a multi - modal distribution , unlike standard gaussian distributions . in the rectified gaussian distribution ,the symmetric matrix is _ a co - positive matrix_. the set of co - positive matrices is larger than the set of positive definite matrices that can be used in a standard gaussian distribution , and is employed in co - positive programming .to lead to the tap equation for the nnbm , which is an alternative form of the tap equation proposed in reference , and a subsequent proposed approximate method for the nnbm in this paper , let us introduce a variational free energy , which is called the gibbs free energy in statistical physics , for the nnbm .the variational free energy of the nnbm in equation ( [ eq : nnbm ] ) is expressed by where a brief derivation of this variational free energy is given in appendix [ app : derivation - vfe ] . the notations and are variables that originated in the lagrange multiplier .it should be noted that the independent variables and represent the expectation value and the variance of , respectively .in fact , the values of and , which minimize the variational free energy , coincide with expectations , , and variances , , on the nnbm , respectively , where .the minimum of the variational free energy is equivalent to .however , the minimization of the variational free energy is intractable , because it is not easy to evaluate the partition function .therefore , an approximative instead of the exact minimization . 
by using the plefka expansion , the variational free energy can be expanded as where where the function is the scaled complementary error function defined by equation ( [ eq : plefkaexpansion ] ) is the perturbative expansion of the variational free energy with respect to . the notations and are defined by let us neglect the higher - order terms and approximate the true variational free energy , , by the variational free energy in equation ( [ eq : gibbs_nnbm_2nd ] ) . the minimization of the approximate variational free energy in equation ( [ eq : gibbs_nnbm_2nd ] ) is straightforward . it can be done by computationally solving the nonlinear simultaneous equations , by using an iterative method , such as the successive iteration method . the notation is the set of nearest - neighbor vertices of vertex . equations ( [ eq : l_nnbm_2nd ] ) and ( [ eq : r_nnbm_2nd ] ) come from the minimum condition of equation ( [ eq : gibbs_nnbm_2nd ] ) with respect to and , respectively , and equations ( [ eq : mi_nnbm_2nd ] ) and ( [ eq : vi_nnbm_2nd ] ) come from the maximum conditions in equation ( [ eq : def - l&r_2nd ] ) . equations ( [ eq : mi_nnbm_2nd ] ) - ( [ eq : r_nnbm_2nd ] ) are referred to as the tap equation for the nnbm . although the expression of this tap equation is different from the expression proposed in reference , they are essentially the same . since we neglect higher - order effects in the true variational free energy , the expectation values and the variances obtained by the tap equation are approximations , except in special cases . it is known that , for nnbms defined on infinitely large complete graphs whose coupling parameters are distributed in accordance with a gaussian distribution whose variance is scaled by , the tap equation gives the true expectations . in this section , we propose an effective mean - field inference method for the nnbm in which the tap equation in equations ( [ eq : mi_nnbm_2nd ] ) - ( [ eq : r_nnbm_2nd ] ) is combined with the diagonal consistency method proposed in reference . the diagonal consistency method is a powerful method that can increase the performance of mean - field methods by means of a simple extension of them . before addressing the proposed method , let us formulate _ the linear response relation _ for the nnbm . the linear response relation enables us to evaluate higher - order statistical values , such as covariances . here , we define the matrix as where are the values of that minimize the ( approximate ) variational free energy . since the relations hold exactly when are the true expectation values , we can interpret the matrix as the covariance matrix of the nnbm . the quantity is referred to as _ the susceptibility _ in physics , and it is interpreted as the response on vertex to a small change of the bias on vertex . in practice , we use obtained by a mean - field method instead of the intractable true expectations and approximately evaluate by means of the mean - field method . let us use obtained by the minimization of the approximate variational free energy in equation ( [ eq : gibbs_nnbm_2nd ] ) , that is , the solutions to the tap equation in equations ( [ eq : mi_nnbm_2nd ] ) - ( [ eq : r_nnbm_2nd ] ) , in equation ( [ eq : lrr ] ) in order to approximately compute the susceptibilities .
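the tap equation above is a set of nonlinear simultaneous equations that , as noted in the text , can be solved by a successive iteration method . the sketch below shows a generic , damped fixed - point loop of that kind ; because the explicit right - hand sides of equations ( [ eq : mi_nnbm_2nd ] ) - ( [ eq : r_nnbm_2nd ] ) are not reproduced in this extract , the update map is passed in as a user - supplied callable rather than hard - coded .

    import numpy as np

    def successive_iteration(update, x0, damping=0.5, tol=1e-9, max_iter=10000):
        # generic damped fixed-point solver for x = update(x);
        # `update` should return the new stacked vector of the tap unknowns (m, v, l, r)
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            x_new = (1.0 - damping) * x + damping * np.asarray(update(x), dtype=float)
            if np.max(np.abs(x_new - x)) < tol:
                return x_new, True
            x = x_new
        return x, False  # the flag tells the caller whether the iteration converged

    # usage: x_star, ok = successive_iteration(tap_update, x_init)
    # where tap_update implements the concrete right-hand sides of the tap equation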
by using equations ( [ eq : mi_nnbm_2nd ] ) - ( [ eq : lrr ] ) , the approximate susceptibilities are obtained by using the solutions to the nonlinear simultaneous equations , which are where , , and . the expression is the kronecker delta . the variables , , , and in equations ( [ eq : susp - chi ] ) and ( [ eq : susp - v ] ) are the solutions to the tap equation in equations ( [ eq : mi_nnbm_2nd ] ) - ( [ eq : r_nnbm_2nd ] ) . equations ( [ eq : susp - chi ] ) - ( [ eq : susp - r ] ) are obtained by differentiating equations ( [ eq : mi_nnbm_2nd ] ) - ( [ eq : r_nnbm_2nd ] ) with respect to . note that the expression in equations ( [ eq : susp - chi ] ) and ( [ eq : susp - v ] ) is simplified by using equations ( [ eq : mi_nnbm_2nd ] ) and ( [ eq : vi_nnbm_2nd ] ) . after obtaining the solutions to the tap equation in equations ( [ eq : mi_nnbm_2nd ] ) - ( [ eq : r_nnbm_2nd ] ) , by computationally solving equations ( [ eq : susp - chi ] ) - ( [ eq : susp - r ] ) with an iterative method , such as the successive iteration method , we can obtain approximate susceptibilities in terms of the tap equation . this scheme for computing susceptibilities has a decade - long history and is referred to as _ the susceptibility propagation method _ or as _ the variational linear response method _ . by using this method , we can obtain not only the expectation values of single variables but also the covariances on nnbms . in the next section , we propose a version of this susceptibility propagation method that is improved by combining it with the diagonal consistency method . in order to apply the diagonal consistency method to the present framework , we modify the approximate variational free energy in equation ( [ eq : gibbs_nnbm_2nd ] ) as where the variables are auxiliary independent parameters that play an important role in the diagonal consistency method . we formulate the tap equation and the susceptibility propagation for the modified variational free energy , , in the same manner as that described in the previous sections . by this modification , equations ( [ eq : l_nnbm_2nd ] ) , ( [ eq : r_nnbm_2nd ] ) , and ( [ eq : susp - l ] ) are changed to and , respectively . equations ( [ eq : mi_nnbm_2nd ] ) - ( [ eq : l_nnbm_2nd ] ) and ( [ eq : r_nnbm_2nd - new ] ) are the tap equation , and equations ( [ eq : susp - chi ] ) , ( [ eq : susp - v ] ) , ( [ eq : susp - r ] ) , and ( [ eq : susp - l - new ] ) are the susceptibility propagation for the modified variational free energy . the solutions to these equations obviously depend on the values of . since the values of cannot be specified by using only the tap equation and the susceptibility propagation , we need an additional constraint in order to specify them . as mentioned in sections [ sec : gibbs&tap ] and [ subsec : lrr&susp ] , the relations and hold in the scheme with no approximation . hence , these relations should always hold in the exact scheme , and therefore they can be regarded as important information possessed by the present system . the diagonal consistency method is a technique for inserting these diagonal consistency relations into the present approximation scheme .
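the defining relation of the susceptibility , the derivative of the expectation on a vertex with respect to the bias on another vertex , can also be probed numerically , which is useful as a sanity check of a susceptibility propagation implementation . the sketch below estimates the susceptibility matrix by central finite differences of the tap solution ; it is only an illustrative stand - in that assumes a routine solve_tap_means(b, W) returning the tap estimates of the expectations , whereas the method described above obtains the same derivative analytically from the propagation equations .

    import numpy as np

    def susceptibilities_by_finite_difference(solve_tap_means, b, W, eps=1e-6):
        # chi[i, j] approximates the response of m_i to a small change of the bias b_j
        n = len(b)
        chi = np.zeros((n, n))
        for j in range(n):
            b_plus, b_minus = b.copy(), b.copy()
            b_plus[j] += eps
            b_minus[j] -= eps
            m_plus = np.asarray(solve_tap_means(b_plus, W), dtype=float)
            m_minus = np.asarray(solve_tap_means(b_minus, W), dtype=float)
            chi[:, j] = (m_plus - m_minus) / (2.0 * eps)
        return chi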
in the diagonal consistency method , the values of the auxiliary parameters are determined so that the solutions to the modified tap equation in equations ( [ eq : mi_nnbm_2nd ] ) - ( [ eq : l_nnbm_2nd ] ) and ( [ eq : r_nnbm_2nd - new ] ) and the modified susceptibility propagation in equations ( [ eq : susp - chi ] ) , ( [ eq : susp - v ] ) , ( [ eq : susp - r ] ) , and ( [ eq : susp - l - new ] ) satisfy the diagonal consistency relations in equation ( [ eq : diagonal - consistency ] ) . from equations ( [ eq : susp - chi ] ) , ( [ eq : susp - l - new ] ) , and ( [ eq : diagonal - consistency ] ) , we obtain where equation ( [ eq : diagmatch ] ) is referred to as _ the diagonal matching equation _ , and we determine the values of by using this equation in the framework of the diagonal consistency method . by computationally solving the modified tap equation , the modified susceptibility propagation , and the diagonal matching equation , we can obtain expectations and susceptibilities ( covariances ) that are improved by the diagonal consistency method . it is noteworthy that the order of the computational cost of the proposed method is the same as that of the normal susceptibility propagation method , which is . in the normal scheme presented in section [ subsec : lrr&susp ] , the results obtained from the susceptibility propagation method are not fed back to the tap equation . in contrast , in the improved scheme proposed in this section , they are fed back to the tap equation through the parameters , because the tap equation and the susceptibility propagation method share the parameters in this scheme ( see figure [ fig : scheme ] , which illustrates the flow of information in the improved susceptibility propagation ) . intuitively , the role of the parameters can be interpreted as follows . as mentioned in section [ subsec : lrr&susp ] , the susceptibilities are interpreted as responses to small changes in the biases on the vertices . a self - response , which is a response on a vertex to a small change in the bias on the same vertex , is interpreted as a response coming back through the whole system . thus , it can be expected that the self - responses carry some information about the whole system . in the proposed scheme , the self - responses , which differ from the true responses owing to the approximation , are corrected by the diagonal consistency condition in equation ( [ eq : diagonal - consistency ] ) , and subsequently the information about the whole system carried by the self - responses is embedded into the parameters and is transmitted to the tap equation through those parameters . to verify the performance of the proposed method , let us consider statistical machine learning on nnbms . for a given complete data set generated from a generative nnbm , the log - likelihood function of the learning nnbm is defined as this log - likelihood is rewritten as where the notation denotes the sample average with respect to the given data set . the learning is achieved by maximizing the log - likelihood function with respect to and . the gradients of the log - likelihood function are expressed as follows : we use , , and obtained by the approximate methods presented in the previous sections in these gradients , and approximately maximize the log - likelihood function . more accurate estimates can be expected to give better learning solutions . we measured the quality of the learning by the mean absolute errors ( maes ) between the true parameters and the learned solutions , defined by where and are the true parameters of the generative nnbm .
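to make the learning procedure concrete , the following sketch performs gradient ascent on the log - likelihood using model moments supplied by an approximate inference routine ( for example , the tap equation with normal or improved susceptibility propagation ) , and evaluates the maes of the learned parameters . the moment - matching form of the gradients ( model moments minus data moments , with the sign fixed by the assumed energy convention e(x) = b^t x + (1/2) x^t w x used in the earlier sketches ) is the standard boltzmann - machine form and is stated here as an assumption , since the explicit gradient formulas are not reproduced in this extract .

    import numpy as np

    def fit_nnbm(data, model_moments, n_steps=2000, lr=0.01):
        # data: (N, n) array of nonnegative samples from the generative nnbm
        # model_moments(b, W): returns (m, C), the approximate first moments and
        #   second moments <x_i x_j> of the current model (e.g. tap + susceptibility propagation)
        n = data.shape[1]
        data_m = data.mean(axis=0)
        data_C = data.T @ data / data.shape[0]
        b = np.zeros(n)
        W = np.eye(n)                    # arbitrary co-positive starting point
        for _ in range(n_steps):
            m, C = model_moments(b, W)
            b += lr * (m - data_m)       # gradient ascent step for the biases
            W += lr * (C - data_C)       # gradient ascent step for the couplings
            W = 0.5 * (W + W.T)          # keep the coupling matrix symmetric
        return b, W

    def mean_absolute_errors(b_true, W_true, b_hat, W_hat):
        # maes between generative and learned parameters, in the spirit of equation ([eq:maes])
        return np.mean(np.abs(b_true - b_hat)), np.mean(np.abs(W_true - W_hat))

the experiments described next draw their training data from the generative nnbm by markov chain monte carlo . the specific sampler is not stated in the text , so the following block shows one standard possibility : a gibbs sampler whose full conditionals , under the same hypothetical energy parameterization and strictly positive diagonal couplings , are univariate gaussians truncated to the nonnegative half - line .

    import numpy as np
    from scipy.stats import truncnorm

    def gibbs_sample_nnbm(b, W, n_samples, burn_in=500, thin=5):
        # gibbs sampler for p(x) proportional to exp(-b.x - 0.5 x'Wx) on x >= 0;
        # requires strictly positive diagonal entries W[i, i]
        n = len(b)
        x = np.abs(np.random.standard_normal(n))        # arbitrary nonnegative initial state
        samples = []
        for t in range(burn_in + n_samples * thin):
            for i in range(n):
                c = b[i] + W[i] @ x - W[i, i] * x[i]    # linear coefficient from the neighbours
                mu, sigma = -c / W[i, i], 1.0 / np.sqrt(W[i, i])
                a = (0.0 - mu) / sigma                  # standardized lower truncation at zero
                x[i] = truncnorm.rvs(a, np.inf, loc=mu, scale=sigma)
            if t >= burn_in and (t - burn_in) % thin == 0:
                samples.append(x.copy())
        return np.array(samples)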
in the following sections , the results of the learning on two kinds of generative nnbms are shown . in both cases , we set to and , and generated the data sets by using the markov chain monte carlo method on the generative nnbms . let us consider the generative nnbm on a square grid graph whose parameters are where is the uniform distribution from to . a data set was generated from this generative nnbm and used to train the learning nnbm . table [ tab : mae-1 ] shows the maes obtained using the approximate methods . [ table [ tab : mae-1 ] : maes between the true parameters and the learned parameters in equation ( [ eq : maes ] ) . the complete data sets are generated from the square grid nnbm in equation ( [ eq : sqnnbm ] ) . each result is the average over 50 trials . ] we can see that the proposed improved method ( i - susp ) again outperformed the other methods . in this paper , an effective inference method for nnbms was proposed . this inference method was constructed by using the tap equation and the diagonal consistency method , which was recently proposed , and it performed better than the normal susceptibility propagation in the numerical experiments of statistical machine learning on nnbms described in section [ sec : numerical ] . moreover , the order of the computational cost of the proposed inference method is the same as that of the normal susceptibility propagation , which is . the proposed method was developed based on the second - order perturbative approximation of the free energy in equation ( [ eq : gibbs_nnbm_2nd ] ) . since the plefka expansion allows us to derive higher - order approximations of the free energy , we should be able to formulate improved susceptibility propagations based on the higher - order approximations in a way similar to the presented method . this should be addressed in future studies . in reference , a tap equation for more general continuous mrfs , where random variables are bounded within finite values , was proposed . we can formulate an effective inference method for such continuous mrfs by combining that tap equation with the diagonal consistency method in a way similar to that described in this paper . this topic should also be considered in future studies . to derive the variational free energy in equation ( [ eq : gibbsfreeenergy_nnbm ] ) , let us consider the following functional of a trial probability density function : \mathcal{f}[q ] := \int_{0}^{\infty } e(\bm{x};\bm{b},\bm{w } ) q(\bm{x } ) \diff\bm{x } + \int_{0}^{\infty } q(\bm{x } ) \ln q(\bm{x } ) \diff\bm{x } , and consider the minimization of this functional under constraints : by using the lagrange multipliers , this minimization yields equation ( [ eq : gibbsfreeenergy_nnbm ] ) , i.e. , \min_{q } \big\ { \mathcal{f}[q ] \mid \mrm{constraints } \big\ } . this variational free energy is referred to as the gibbs free energy in statistical physics , and the minimum of the variational free energy coincides with . this work was partly supported by grants - in - aid ( no . 24700220 ) for scientific research from the ministry of education , culture , sports , science and technology , japan . j. pearl : probabilistic reasoning in intelligent systems : networks of plausible inference ( 2nd ed . ) , san francisco , ca : morgan kaufmann , 1988 . j. s. yedidia and w. t. freeman : constructing free - energy approximations and generalized belief propagation algorithms , ieee trans . on information theory , vol . 51 , pp . 2282 - 2312 , 2005 . j. s. yedidia and w. t.
freeman : correctness of belief propagation in gaussian graphical models of arbitrary topology , neural computation , vol . 13 , pp . 2173 - 2200 , 2001 . o. b. downs , d. j. c. mackay and d. d. lee : the nonnegative boltzmann machine , advances in neural information processing systems , pp . 428 - 434 , 2000 . o. b. downs : high - temperature expansions for learning models of nonnegative data , advances in neural information processing systems , vol . 13 , pp . 465 - 471 , 2001 . n. d. socci , d. d. lee and h. s. seung : the rectified gaussian distribution , advances in neural information processing systems , vol . 10 , pp . 350 - 356 , 1998 . p. paatero and u. tapper : positive matrix factorization : a nonnegative factor model with optimal utilization of error estimates of data values , environmetrics , vol . 5 , pp . 111 - 126 , 1994 . d. d. lee and h. s. seung : learning the parts of objects by nonnegative matrix factorization , nature , vol . 401 , pp . 788 - 791 , 1999 . m. plumbley and e. oja : a `` nonnegative pca '' algorithm for independent component analysis , ieee trans . neural networks , vol . 15 , pp . 66 - 76 , 2004 . m. harva and a. kabán : variational learning for rectified factor analysis , signal processing , vol . 87 , pp . 509 - 527 , 2007 . m. yasuda and k. tanaka : tap equation for nonnegative boltzmann machine , philosophical magazine , pp . 192 - 209 , 2012 . t. plefka : convergence condition of the tap equation for the infinite - range ising spin glass model , j. phys . a : math . gen . , vol . 15 , pp . 1971 - 1978 , 1982 . m. yasuda and k. tanaka : susceptibility propagation by using diagonal consistency , physical review e , vol . 87 , pp . 012134 , 2013 . j. raymond and f. ricci - tersenghi : correcting beliefs in the mean - field and bethe approximations using linear response , proc . ieee icc13 workshop on networking across disciplines : communication networks , complex systems and statistical physics ( netstat ) , 2013 . m. yasuda : a generalization of improved susceptibility propagation , journal of physics : conference series , vol . 473 , pp . 012006 , 2013 . k. tanaka : probabilistic inference by means of cluster variation method and linear response theory , ieice trans . on information and systems , vol . e86-d , pp . 1228 - 1242 , 2003 . m. welling and y. w. teh : approximate inference in boltzmann machines , artificial intelligence , vol . 143 , pp . 19 - 50 , 2003 . m. welling and y. w. teh : linear response algorithms for approximate inference in graphical models , neural computation , vol . 16 , pp . 197 - 221 , 2004 . m. mézard and t. mora : constraint satisfaction problems and neural networks : a statistical physics perspective , journal of physiology - paris , vol . 103 , pp . 107 - 113 , 2009 . r. ben - yishai , r. lev bar - or and h. sompolinsky : theory of orientation tuning in visual cortex , proc . natl . acad . sci . usa , vol . 92 , pp . 3844 - 3848 , 1995 . m. yasuda and k. tanaka : boltzmann machines with bounded continuous random variables , interdisciplinary information sciences , pp . 25 - 31 , 2007 .
|
nonnegative boltzmann machines ( nnbms ) are recurrent probabilistic neural network models that can describe multi - modal nonnegative data . nnbms form rectified gaussian distributions , which appear in biological neural network models , positive matrix factorization , nonnegative matrix factorization , and so on . in this paper , an effective inference method for nnbms is proposed that uses the mean - field method referred to as the thouless - anderson - palmer equation together with the recently proposed diagonal consistency method .
|